From MAE/ECE 148 - Introduction to Autonomous Vehicles

Obstacle Avoidance with Lidar



Our project focuses on improving the safety of autonomous driving. A human driver must detect and avoid obstacles in many scenarios, such as a slow-moving vehicle or debris in the road, and the safest response is often to change lanes. We focused on this type of situation and developed our autonomous vehicle to mimic this behavior. Our goal is to have the vehicle detect an obstacle in its path, change lanes to avoid a collision, and continue driving autonomously.

Team Members

Jahya Burke

Nguyen Bui

Rohan Floegel-Shetty

Samuel Sunarjo


Obstacle detection

Determine if there is an obstacle based on lidar data

Lane switching

Neural Network

Train a neural network (CNN) to drive autonomously and switch between the left and right lanes of a two-lane road

Computer Vision

Detect and localize lanes in the road by processing camera images




Brushless Motor




Motor drivers (ESC):


Raspberry PI:


Battery and battery alarm:


Control circuit:




Nvidia Jetson TX2 Developer Kit


Circuit diagram:

Lidar Obstacle Detection

Set up ROS on Raspberry Pi

Ubiquityrobotics has a nice Ubuntu image with ROS and the ydlidar ROS package pre-installed. Note that this image runs a setup script that alters the RPi's IP configuration, which made it hard for us to debug communication between two RPis over Ethernet.

Feel free to use any kind of image on the RPI that has ROS installed.

Launching the Lidar via ROS

Clone this package:


Follow the instructions in the README file:

  1. Clone this project into your catkin workspace's src folder
  2. Run catkin_make to build ydlidar_node and ydlidar_client
  3. Create the device name "/dev/ydlidar" for the YDLIDAR:
$ roscd ydlidar/startup
$ sudo chmod 777 ./*
$ sudo sh initenv.sh

Run the YDLIDAR node and view the scan using the test application:

roslaunch ydlidar lidar.launch
rosrun ydlidar ydlidar_client

You should see YDLIDAR's scan results in the console.

If your Ubuntu install runs a UI, launch this visualization node to see the lidar data:

roslaunch ydlidar lidar_view.launch 

If you get an error like “... is not a package or launch file name ...”, rebuild the workspace and re-source the setup file:

cd ~/catkin_ws
catkin_make
source devel/setup.sh

Lidar data and detection algorithm

The Lidar data format can be found here; it follows the standardized ROS data type sensor_msgs/LaserScan.

Using Lidar data to detect obstacles

The detection algorithm is designed to detect obstacles in front of the vehicle on the road. It is set to flag obstacles 1.5 meters away from the vehicle. Although the Lidar scans 360 degrees with 0.5 degree resolution, the algorithm only considers the lidar data within a 15 degree cone in front of the car. If half of the distances measured within this 15 degree range fall below the distance threshold (1.5 meters), the algorithm signals an obstacle. The main drawback of the 15 degree cone is a blind spot when an obstacle is too close, but this is acceptable as long as the vehicle starts driving with no obstacle already inside the blind spot.
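The cone check described above can be sketched in Python. This is an illustrative stand-in, not our actual node: it assumes the scan array starts at the vehicle's forward direction and wraps around 360°, and the function and parameter names are our own.

```python
import math

def obstacle_ahead(ranges, angle_increment, threshold=1.5, cone_deg=15.0):
    """Return True if an obstacle is likely within `threshold` meters.

    `ranges` holds the distances from a sensor_msgs/LaserScan message,
    assumed here to start at the forward direction and wrap around 360 deg.
    """
    # Number of readings on each side of straight ahead (7.5 deg each side).
    half_cone = round((cone_deg / 2) / math.degrees(angle_increment))
    front = ranges[:half_cone] + ranges[-half_cone:]
    valid = [r for r in front if r > 0.0]   # 0.0 marks an invalid return
    if not valid:
        return False
    close = sum(1 for r in valid if r < threshold)
    # Signal a detection when at least half the valid readings are close.
    return close >= len(valid) / 2
```

In the real node this function would run on every incoming scan, and its boolean output would feed the lane-change trigger described later.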

Limitation of current implementation

  1. Since the Lidar is mounted parallel to the ground, it cannot detect objects shorter than the Lidar's mounting height.
    1. Solution: Angle the Lidar downward, but this would disable SLAM
    2. Solution: Use a 3D Lidar instead of a 2D Lidar
  2. Blind spots outside of the scanned cone
    1. Solution: Scan 180° and check a rectangular box in front of the car
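The rectangular-box fix suggested above can be sketched by converting each scan point from polar to Cartesian coordinates and testing it against a box directly ahead of the car. Again a sketch only; the function name, box dimensions, and scan parameters are our own assumptions.

```python
import math

def obstacle_in_box(ranges, angle_min, angle_increment,
                    box_width=0.4, box_length=1.5):
    """Check whether any scan point falls inside a rectangle ahead of the car.

    The box extends `box_length` meters forward (x axis) and `box_width`
    meters across (centered on the car, y axis).
    """
    for i, r in enumerate(ranges):
        if r <= 0.0:              # skip invalid returns
            continue
        theta = angle_min + i * angle_increment
        x = r * math.cos(theta)   # forward distance
        y = r * math.sin(theta)   # lateral offset
        if 0.0 < x < box_length and abs(y) < box_width / 2:
            return True
    return False
```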

Change Lanes - Neural Network

Tawn Kramer's fork of donkey car (Tawn Donkey) has the capability of training behaviors. This can be used for a variety of tasks, but in this case we use it to change lanes. The general idea is to train each behavior (left and right) separately. After training the two behaviors and demonstrating that each works properly, you combine them into one model and switch between behaviors by pressing the L1 button on the PS3 joystick controller. In our case we change the trigger to be the output of the Lidar obstacle detection algorithm instead.

Training Procedure:

  1. Ensure that the most recent version of Tawn Kramer's fork is installed on BOTH the donkey car and the GPU cluster (or whatever computer you train the model on)
    1. In our case we had some trouble with this: we did not realize that Tawn Kramer's fork was not installed on the GPU cluster. We also needed the Python library sklearn installed on the GPU cluster.
  2. Set TRAIN_BEHAVIORS = True in the config file (config.py) on the raspberry pi
    1. If you want to train behaviors other than two lanes then change the behavior labels in the config file as well
  3. Run python manage.py drive and click L1 to choose a behavior to train
    1. We chose to train one behavior at a time so each behavior ended up in a separate tub. This lets you train the behaviors separately and verify that each one works properly
  4. Collect data for training each behavior
    1. Carefully keep track of what is in each tub that you collect. Record what kind of maneuvers each tub contains; we suggest having separate tubs for each kind of maneuver. (See below for the types of maneuvers we collected)
    2. We used a straight-line track in the hallway in front of the ECE Makerspace. It took ~40k images to train each behavior. To train the lane behavior, we used roughly 25k straight driving images, 7k transition images (changing into the correct lane), 5k small noise images (little wiggle) and 3k images guiding the car back into the lane.
    3. We had to train and test the model multiple times iteratively and try to collect new data that would correct previous errors. At first the model was too noisy, so we added more straight driving. Then the model only drove straight, so we added more lane transitions. Then the model wouldn't obey the outer boundary, so we added data of the car driving back towards the center when it was facing the outside of the lane.
    4. Training can be difficult but keep testing the model and adding/removing data as needed.
  5. Train the model on the GPU cluster with the normal settings and test the behaviors
    1. Keep TRAIN_BEHAVIORS = False in the config file on the GPU cluster. Transfer the model back to the car and test to make sure each behavior works properly.
  6. Train the behaviors together
    1. Use all the tubs for both behaviors and create a model. This is the part we had trouble with: we were unable to get around a Keras error, so we could not complete the combined behavior training.
    2. The error we got was "ValueError: Error when checking target: expected angle_out to have shape (None, 15) but got array with shape (128, 1)." Hopefully you can get around this if you attempt behavior training! We suspect the error was caused by version discrepancies between the installed Tawn Kramer Donkey image and Tensorflow on the cluster.
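For reference, the behavior settings touched in steps 2 and 5 might look roughly like this in config.py. The variable names below are from Tawn Kramer's fork as we used it; double-check them against your installed version.

```python
# Behavior training settings (sketch; verify names against your fork's config.py).
TRAIN_BEHAVIORS = True                         # True on the car, False on the cluster
BEHAVIOR_LIST = ['Left_Lane', 'Right_Lane']    # one label per behavior to train
```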

Change Lanes - Computer Vision

Socket Communication

The Lidar and the obstacle detection algorithm ran on the Jetson TX2, since the ROS package we found for the Lidar could read its data there. However, we were unable to install ROS and the donkey car program on the same device. Instead, we ran the donkey car program on the Raspberry Pi for control and the Lidar on the Jetson TX2, sending the lane-change signal from the Jetson TX2 to the Raspberry Pi via a socket program written in Python.


We set up the Jetson as the server and the Raspberry Pi as the client. To integrate the lane-changing signal into the donkey car control loop, we wrote a part that reads the signal from the Lidar obstacle detection algorithm and triggers the behavior change. Since we couldn't get the behavior model to work, we used the right-lane model and triggered a user-mode change instead of a behavior change: when an obstacle was detected, the mode changed from user to local angle, and if the vehicle was in the left lane, the model took over steering and moved it into the right lane.
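The Jetson-to-Pi link can be sketched with Python's standard socket module. This is a minimal stand-in for our socket program rather than the actual code: the port number and the one-byte '0'/'1' protocol are choices made for illustration.

```python
import socket

PORT = 5005  # arbitrary port chosen for this sketch

def serve_detection(flag, host='0.0.0.0', port=PORT):
    """Runs on the Jetson: accept one client and send the detection flag."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(b'1' if flag else b'0')   # '1' = obstacle, '0' = clear
    conn.close()
    srv.close()

def read_signal(host='127.0.0.1', port=PORT):
    """Runs on the Pi: ask the Jetson whether to trigger the lane change."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))
    flag = cli.recv(1) == b'1'
    cli.close()
    return flag
```

In the real setup the server side would loop, pushing the output of the obstacle detector, and the client side would live inside a donkey car part whose run() method returns the current flag to the vehicle loop.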

Nvidia Jetson TX2


Power consumption


ROS Communication

Setup Procedure

Connect Jetson TX2 and Raspberry Pi3 via ethernet cable:

  1. Use an ethernet cable to connect the Jetson TX2 and the Raspberry Pi 3.
  2. Set up a static IP address for the Raspberry Pi's ethernet interface:
sudo vi /etc/network/interfaces

Change the eth0 configuration from:

iface eth0 inet manual

to a static address:

auto eth0
iface eth0 inet static
  3. On the Jetson TX2, set a static address for ethernet to

Test connection:

pi@ucsdrobocar011:~ $ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=1.56 ms
64 bytes from icmp_seq=2 ttl=64 time=0.603 ms
64 bytes from icmp_seq=3 ttl=64 time=0.607 ms
64 bytes from icmp_seq=4 ttl=64 time=0.589 ms
64 bytes from icmp_seq=5 ttl=64 time=0.715 ms

Set up Jetson TX2 running roscore:

Check IP addresses on the master machine (Jetson Tx2):

nvidia@tegra-ubuntu:~$ hostname -I 2605:e000:1c0d:c0ba:0:e8b3:ea64:e929 

Set this machine's IP address as the ROS master:

nvidia@tegra-ubuntu:~$ export ROS_MASTER_URI=
nvidia@tegra-ubuntu:~$ export ROS_IP=

Check IP addresses on the slave machine (Raspberry pi):

nvidia@tegra-ubuntu:~$ hostname -I 2605:e000:1c0d:c0ba:0:e8b3:ea64:e929 

Set ROS_MASTER_URI to the IP of the Jetson TX2:

nvidia@tegra-ubuntu:~$ export ROS_MASTER_URI=

Set ROS to use this machine's IP address:

nvidia@tegra-ubuntu:~$ export ROS_IP=

Test with the turtlesim simulation:

  • Create a simple publisher on the Raspberry Pi following the instructions here
  • Create a simple subscriber on the Jetson TX2 following the instructions here
  • Run ROS and confirm data is transferred correctly


  • No crossover ethernet cable is needed; the ethernet port on the Raspberry Pi is auto-sensing.
  • If using a Raspbian image with a UI, the setup is the same as setting up the Jetson TX2 with IP
  • ROS uses 3 environment variables to communicate:
    • ROS_MASTER_URI
    • ROS_HOSTNAME
    • ROS_IP
  • If the ROS_HOSTNAME variable is not set up correctly, all the topics can be seen from both machines, but the data will NOT be sent. We believe roslaunch/rosrun uses ROS_HOSTNAME to resolve hostnames into IP addresses.
    • To fix this issue, change the hosts file on the slave machine.
    • On the Raspberry Pi, run:
    • On Raspberry PI, run:
$ vi /etc/hosts

Then add the line below, so ROS knows how to resolve hostnames into IP addresses (tegra-ubuntu here is the name of the Jetson TX2):     tegra-ubuntu

Then add these commands to ~/.profile so they are run whenever a new bash session is created:

export ROS_IP=
source ~/catkin_ws/devel/setup.bash

Then go to ~/.bashrc to disable ubiquity setup script by commenting out the following line:

# source /etc/ubiquity/env.sh

Do the same on the master machine (Jetson TX2) (not required but recommended):

$ vi /etc/hosts

Then add this line, so ROS knows how to resolve hostnames into IP addresses (ucsdrobocar012 here is the name of the Raspberry Pi, which can be changed by editing /etc/hostname):     ucsdrobocar012

Then modify these commands in ~/.bashrc so the environment variables are set whenever a new bash session is created:

export ROS_IP=
  • Issue: If we set:
export ROS_HOSTNAME=tegra-ubuntu

then when running roscore, it uses ROS_MASTER_URI=http://tegra-ubuntu:11311, and in the /etc/hosts file tegra-ubuntu is resolved as

More information about ROS environment variables can be found here.

Hardware diagram

Software diagram

Set up images transportation between ROS nodes:

Install scipy for ubiquity image:

$ sudo apt-get install python-scipy

Create a simple ROS node that reads a jpeg file and publishes it.
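A minimal version of such a node might look like the following sketch. The topic name and publish rate are our own choices, and the ROS imports are kept inside the publish loop so the JPEG-reading helper can be used without a ROS install.

```python
#!/usr/bin/env python
"""Sketch of a ROS node that reads a JPEG file and publishes it."""

def load_jpeg(path):
    """Read the raw JPEG bytes that will become the message payload."""
    with open(path, 'rb') as f:
        return f.read()

def publish_loop(path, topic='/camera/image/compressed', hz=10):
    # ROS imports kept local so load_jpeg() works without ROS installed.
    import rospy
    from sensor_msgs.msg import CompressedImage

    rospy.init_node('jpeg_publisher')
    pub = rospy.Publisher(topic, CompressedImage, queue_size=1)
    rate = rospy.Rate(hz)
    while not rospy.is_shutdown():
        msg = CompressedImage()
        msg.header.stamp = rospy.Time.now()
        msg.format = 'jpeg'
        msg.data = load_jpeg(path)
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    publish_loop('test.jpg')   # hypothetical image path
```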


To update the donkey car package to the latest version:

cd ~/donkeycar
git pull
pip install -e .


Update apt-get:

sudo apt-get update
sudo apt-get install i2c-tools

If you get the following error:

E: Syntax error /etc/apt/apt.conf.d/99translations:2: Extra junk at end of file

Then open the file named in the error and append a ";" to the end of each line.

Remember to enable I2C in raspi-config:


Detect I2C:

sudo i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: 70 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

Calibrate channels:

$ donkey calibrate --channel 1
using donkey v2.2.0 ...
Enter a PWM setting to test(0-1500)500
Enter a PWM setting to test(0-1500)1000
Enter a PWM setting to test(0-1500)50
Enter a PWM setting to test(0-1500)1000
Enter a PWM setting to test(0-1500)1500
Enter a PWM setting to test(0-1500)20
Enter a PWM setting to test(0-1500)200
Enter a PWM setting to test(0-1500)500
Enter a PWM setting to test(0-1500)1500
Enter a PWM setting to test(0-1500)200

Pair Bluetooth Controller

Install Bluetooth software:

pi@ucsdrobocar01:~ $ sudo systemctl enable bluetooth.service
Synchronizing state for bluetooth.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d bluetooth defaults
Executing /usr/sbin/update-rc.d bluetooth enable
pi@ucsdrobocar01:~ $ sudo usermod -G bluetooth -a pi

Power cycle raspberry PI - reboot:

pi@ucsdrobocar01:~ $ sudo shutdown now

Connect Joystick to Raspberry PI using MINI USB cable:

Download tools:

wget http://www.pabr.org/sixlinux/sixpair.c
gcc -o sixpair sixpair.c -lusb
sudo ./sixpair

Disconnect Joystick. Turn on bluetooth and start pairing:

$ bluetoothctl
[bluetooth]# agent on
Agent registered
[bluetooth]# trust 00:16:FE:74:12:B7
[CHG] Device 00:16:FE:74:12:B7 Trusted: yes
Changing 00:16:FE:74:12:B7 trust succeeded
[bluetooth]# quit
Agent unregistered
[DEL] Controller B8:27:EB:49:2D:8C ucsdrobocar01 [default]

Verify connection:

$ ls /dev/input

Modified config.py file:


Start driving and collect data

Start driving:

$python manage.py drive

Since we modified the config file (shown below) to use the joystick by default and to auto-record, the web interface is unavailable; data is automatically recorded whenever the vehicle is moving:


To be able to use the web interface, set the joystick default option to False:


Then access the control panel at <IP address>:8887 after running the manage.py script.
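For reference, the two config.py options discussed above might look like this; the variable names are from the donkeycar config as we remember it, so verify against your version.

```python
# Joystick/recording settings (sketch; confirm the names in your config.py).
USE_JOYSTICK_AS_DEFAULT = True    # True: joystick control, web interface unavailable
AUTO_RECORD_ON_THROTTLE = True    # record automatically whenever throttle is nonzero
```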

PS3 button functions:

Up: increase throttle speed (0-100% of max speed set in config.py)

Down: decrease throttle speed (0-100% of max speed set in config.py)

Left joystick (up and down only): steering

Right joystick (up and down only): throttle


The PI sometimes hangs and corrupts the data in the tub, which causes problems during training:

Unexpected error: <class 'json.decoder.JSONDecodeError'>
Traceback (most recent call last):
 File "manage.py", line 183, in <module>
   train(cfg, tub, model)
 File "manage.py", line 148, in train
   tubgroup = TubGroup(tub_names)
   raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

To fix this issue, run donkey tubcheck to find and delete the corrupt records:

donkey tubcheck tub_1_18-05-02 --fix

Sample output:
Checking tub:tub_1_18-05-02.
Found: 20348 records.
Unexpected error: <class 'json.decoder.JSONDecodeError'>
problems with record, removing: tub_1_18-05-02 20342
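What tubcheck does can be approximated by trying to parse each record file and flagging the ones that fail. A simplified stand-in (record naming per the donkeycar tub layout; the function name is ours):

```python
import json
import glob
import os

def find_corrupt_records(tub_path):
    """Return paths of record files in a tub that fail to parse as JSON.

    Simplified stand-in for `donkey tubcheck`; the real tool also
    removes the bad records and their associated images.
    """
    bad = []
    for path in sorted(glob.glob(os.path.join(tub_path, 'record_*.json'))):
        try:
            with open(path) as f:
                json.load(f)
        except ValueError:       # JSONDecodeError is a subclass of ValueError
            bad.append(path)
    return bad
```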

Calculating car position in turning

Training the model

Set up the GPU cluster at ieng6.ucsd.edu by following the instructions here:

After transferring the data from the RPI to the cluster, train and transfer the trained model back to the PI. Then the donkey car is ready for a test run!

using donkey v2.2.0 ...

Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:8a:00.0, compute capability: 6.1)

254/255 [============================>.] - ETA: 0s - loss: 1.4248 - angle_out_loss: 1.5825 - throttle_out_loss: 0.5732Epoch 00001: val_loss improved from inf to 1.13746, saving model to ucsd0502.h5

255/255 [==============================] - 124s 487ms/step - loss: 0.7270 - angle_out_loss: 0.8077 - throttle_out_loss: 0.0297 - val_loss: 0.9560 - val_angle_out_loss: 1.0622 - val_throttle_out_loss: 0.0305

Training can also be done on a local computer after installing the donkey car package on the local computer or using a Docker Image.


The camera position was mounted too high and level. To see the track, we had to adjust the camera mount to point more downward, so the image contains less background noise and is less affected by lighting conditions.

At noon or at 5pm, the vehicle was more likely to leave the track on the straight portion, possibly because the training data taught the car to drive straight on the straightaway but never taught it to return to the track from the edge or outer side. This can be mitigated by collecting training data where the car moves to the side of the track and then drives back into it.











We would like to acknowledge software developer Tawn Kramer for his implementation of the Donkey Car, forked from the Github user Will Roscoe. We would also like to thank Professors Mauricio de Oliveira and Jack Silberman for their advice and encouragement in the completion of this project. We thank the UC San Diego Electrical and Computer Engineering (ECE) and Mechanical and Aerospace Engineering (MAE) departments for waiving our lab fees for the course. Finally, we would like to immensely thank UCSD's DSMLP instructional GPU cluster service, funded by ITS, JSOE, and the CogSci departments, for providing the GPU clusters used to train our models.