2018SpringTeam1
Obstacle Avoidance with Lidar
AUTONOMOUS VEHICLE ECE/MAE 148 Team 01
Introduction
Our project focuses on improving the safety of autonomous driving. For a human driver, there are many scenarios where obstacles must be detected and avoided, such as a slow-moving vehicle or debris in the road; often the safest response is to change lanes. We focused on this type of situation and developed our autonomous vehicle to mimic this behavior. Our goal is to have the vehicle detect an obstacle in its path, change lanes to avoid a collision, and continue driving autonomously.
Team Members
Jahya Burke
Nguyen Bui
Rohan Floegel-Shetty
Samuel Sunarjo
Objectives
Obstacle detection
- Determine if there is an obstacle based on lidar data
Lane switching
Neural Network
- Train a neural network (CNN) to drive autonomously and switch between the left and right lanes of a two-lane road
Computer Vision
- Detect and localize lanes in the road by processing camera images
Components
- Chassis
- Brushless motor
- Servo
- Motor drivers (ESC)
- Raspberry Pi
- Battery and battery alarm
- Control circuit
- Nvidia Jetson TX2 Developer Kit
Circuit diagram:
Mechanical Design
During this project, the chassis was supplemented with additional attachments in order to help with object and lane detection.
Hardware Platform
The first major attachment was an acrylic hardware platform, attached to the chassis with 3D-printed L-shaped brackets. This platform allowed all of the hardware components needed for the project's objectives to be affixed to the robocar. Slots were included in the original design of the platform to allow wires to pass through it. As the project progressed, the platform was modified with hand-drilled clearance holes wherever hardware components needed to be bolted down.
Camera and Lidar Mount
In order for the robocar to detect obstacles and lanes, the YDLidar and the Raspberry Pi (RPi) camera had to be attached to the robocar using mounts. Since the correct positioning of the mounts would only be determined during testing, the mounts had to be adjustable and able to move independently of each other. As a result, both mounts were designed from multiple 3D-printed parts. The lidar and camera were each attached to their own platform, and each platform was attached to its own set of arms, allowing the two mounts to be adjusted independently. The arms for the lidar mount could rotate fully about the x-axis, while the lidar platform itself could be tilted between 0 and -60°; this additional tilt allowed the lidar's viewing range to be altered. The camera arms also rotated fully about the x-axis and additionally allowed the camera platform to be translated along the z-axis. Finally, both sets of arms were connected to a single base attached to the hardware platform.
Lidar Obstacle Detection
Lidar
Lidar (Light Detection and Ranging) is a 3D map-making technology. The sensor spins while firing rapid pulses of laser light, about 150,000 per second, at surrounding surfaces. It then determines how far away obstacles are by measuring the time it takes for these pulses to bounce back. By repeating this process rapidly, the lidar is able to create a map of its surroundings. The lidar is incorporated in this project to detect obstacles and their distances from the robocar.
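As a rough worked example (our own illustration, not from the datasheet): range = (speed of light × round-trip time) / 2, so a pulse that returns after about 10 nanoseconds corresponds to an obstacle roughly (3×10⁸ m/s × 10⁻⁸ s) / 2 = 1.5 m away, which happens to be the detection threshold we use later.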
Our lidar is the YDLIDAR X4.
Set up ROS on Raspberry Pi
Ubiquity Robotics provides a convenient Ubuntu image with ROS and the ydlidar ROS package pre-installed. Note that this image runs a setup script that changes the RPi's IP addresses, which made debugging difficult when we were trying to communicate between two RPis over Ethernet.
Feel free to use any kind of image on the RPI that has ROS installed.
Launching the Lidar via ROS
Clone this package:
https://github.com/EAIBOT/ydlidar
Follow the instructions in the README file:
- Clone this project into your catkin workspace's src folder
- Run the catkin_make command to build ydlidar_node and ydlidar_client
- Create the name "/dev/ydlidar" for the YDLIDAR:
$ roscd ydlidar/startup
$ sudo chmod 777 ./*
$ sudo sh initenv.sh
Run the YDLIDAR node and view the output using the test application:
roslaunch ydlidar lidar.launch
rosrun ydlidar ydlidar_client
You should see the YDLIDAR's scan results in the console.
If Ubuntu is running with a UI, launch the visualization node to view the lidar data:
roslaunch ydlidar lidar_view.launch
If you get the error "... is not a package or launch file name ...", try:
- Recompile the workspace using catkin_make:
cd ~/catkin_ws
catkin_make
- Run:
source devel/setup.sh
Lidar data and detection algorithm
The lidar data format can be found here; it follows the standardized ROS message type sensor_msgs/LaserScan.
Using Lidar data to detect obstacles
- The detection algorithm is designed to detect obstacles in front of the vehicle on the road, at distances up to 1.5 meters away. Although the lidar scans 360° with 0.5° resolution, the algorithm only considers the data in a 15° cone directly in front of the car. If half of the distances measured within this 15° range fall below the 1.5 m threshold, the algorithm signals that an obstacle has been detected (a minimal sketch of this check is shown below). The main drawback of the 15° range is that a blind spot exists when an obstacle is too close, but this is not an issue as long as the vehicle starts driving with no obstacle already inside the blind spot.
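The following is a minimal sketch of this threshold check, assuming the scan arrives as a ROS sensor_msgs/LaserScan message; the topic name and the assumption that angle 0 points straight ahead are ours, not taken from the actual code.

import math
import rospy
from sensor_msgs.msg import LaserScan

DIST_THRESHOLD = 1.5                 # meters, from the description above
CONE_HALF_ANGLE = math.radians(7.5)  # 15 degree cone in front of the car

def obstacle_detected(scan):
    # Return True if at least half of the valid ranges inside the front cone
    # are closer than DIST_THRESHOLD.
    front = []
    for i, r in enumerate(scan.ranges):
        angle = scan.angle_min + i * scan.angle_increment
        if abs(angle) <= CONE_HALF_ANGLE and scan.range_min < r < scan.range_max:
            front.append(r)
    if not front:
        return False
    close = sum(1 for r in front if r < DIST_THRESHOLD)
    return close >= len(front) / 2.0

def callback(scan):
    if obstacle_detected(scan):
        rospy.loginfo("Obstacle detected within 1.5 m")

if __name__ == '__main__':
    rospy.init_node('obstacle_detector')
    rospy.Subscriber('/scan', LaserScan, callback)  # topic name assumed
    rospy.spin()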
Limitations of the current implementation
- Since the lidar is mounted parallel to the ground, it cannot detect short objects whose height is below the lidar's mounting height.
- Solution: Angle the lidar downward, but this would disable SLAM
- Solution: Use a 3D Lidar instead of 2D Lidar
- Blind spot outside of the scanned cone shape
- Solution: Scan 180° and check a rectangular box in front of car
Change Lanes - Neural Network
Tawn Kramer's fork of donkey car (Tawn Donkey) has the capability of training behaviors. This can be used for a variety of tasks, but in this case we use it to change lanes. The general idea is that you train each behavior (left lane and right lane) separately. After training the two behaviors separately and demonstrating that each works properly, you combine them into one model and switch between the behaviors by pressing the L1 button on the PS3 joystick controller. In our case, we change the trigger to be the output of the lidar obstacle detection algorithm instead.
Neural Networks
A neural network is a machine learning method in which a computer is taught to perform a task based on previous training. The network is composed of artificial neurons known as nodes. These units are arranged in layers, and nodes in different layers are connected to each other. Each connection is characterized by a weight, which represents how much a node impacts a node in a different layer; consequently, a higher weight indicates a greater influence.
In order to train these networks, information is passed into the first layer of nodes. This information then flows through the remaining layers while being multiplied by the aforementioned weights, which are initially arbitrary. If the input to a node surpasses a threshold value, that node activates and its downstream connections are activated as well. If the input does not reach that value, the node does not activate and its information is not passed along to the ensuing layers. In order for the network to output the desired result, the connections between the nodes are adjusted.
These connections are modified through a feedback process known as backpropagation. In this process, the output of the network is compared to the desired output, and the difference is used to adjust the weights of the connections between the nodes, starting with the connections in the final layers and working backward until all the connections have been altered. This process is repeated with different training examples until the network produces the desired results. At this point the network has been trained and can be introduced to new information, to which it will respond accordingly. In our project, the robocar was trained to detect and change lanes using neural networks.
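As a toy illustration only (not the CNN actually used by the donkey car), the following shows a single forward pass through a tiny two-layer network with arbitrary example weights:

import numpy as np

# Toy example: 3 inputs -> 2 hidden nodes -> 1 output, with arbitrary weights,
# to illustrate inputs being multiplied by weights as they flow through layers.
x = np.array([0.5, 0.1, 0.9])           # input layer values
W1 = np.array([[0.2, -0.4, 0.7],
               [0.6,  0.1, -0.3]])      # weights into the hidden layer
W2 = np.array([[0.5, -0.8]])            # weights into the output layer

def activate(z):
    # ReLU-style threshold: a node only "fires" for positive input
    return np.maximum(0.0, z)

hidden = activate(W1 @ x)               # hidden layer outputs
output = activate(W2 @ hidden)          # network output
print(hidden, output)
# Backpropagation would compare `output` to the desired value and nudge
# W1 and W2 to reduce the difference, repeated over many training examples.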
Training Procedure
- Ensure that the most recent version of Tawn Kramer's fork is installed on BOTH the donkey car and the GPU cluster (or whichever computer you are training the model on)
- In our case we had some trouble with this: we did not realize that Tawn Kramer's fork was not installed on the GPU cluster. We also needed the Python library sklearn installed on the GPU cluster.
- Set TRAIN_BEHAVIORS = True in the config file (config.py) on the Raspberry Pi (see the config sketch after this list)
- If you want to train behaviors other than the two lanes, change the behavior labels in the config file as well
- Run python manage.py drive and press L1 to choose a behavior to train
- We chose to train one behavior at a time, so each behavior went into a separate tub. This allows you to train the behaviors separately and verify that each one works properly
- Collect data for training each behavior
- Carefully keep track of what is in each tub that you collect. Record what kind of maneuvers each tub contains; we suggest having separate tubs for each kind of maneuver (see below for the types of maneuvers we collected)
- We used a straight-line track in the hallway in front of the ECE Makerspace. It took ~40k images to train each behavior. To train the lane behavior, we used roughly 25k straight-driving images, 7k transition images (changing into the correct lane), 5k small-noise images (little wiggles), and 3k images guiding the car back into the lane.
- We had to train and test the model multiple times iteratively and try to collect new data that would correct previous errors. At first the model was too noisy, so we added more straight driving. Then the model only drove straight, so we added more lane transitions. Then the model wouldn't obey the outer boundary, so we added data of the car driving back towards the center when it was facing the outside of the lane.
- Training can be difficult but keep testing the model and adding/removing data as needed.
- Train the model on the GPU cluster with the normal settings and test the behaviors
- Keep TRAIN_BEHAVIORS = False in the config file on the GPU cluster. Transfer the model back to the car and test to make sure each behavior works properly.
- Train the behaviors together
- Use all the tubs for both behaviors and create a single model. This is the part we had trouble with: we were unable to get around a Keras error, so we could not complete the combined behavior training.
- The error we got was "ValueError: Error when checking target: expected angle_out to have shape (None, 15) but got array with shape (128, 1)." Hopefully you can get around this if you attempt behavior training! We suspect the error was caused by version discrepancies between the installed Tawn Kramer donkey image and TensorFlow on the cluster.
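Referring back to the config steps above, this is a rough sketch of the behavior-related settings in config.py. Only TRAIN_BEHAVIORS appears in our notes; the label-list variable name and values below are assumptions and may differ between versions of the fork.

# config.py (behavior training) -- sketch; names other than TRAIN_BEHAVIORS
# are assumed and may differ in your version of the fork.
TRAIN_BEHAVIORS = True

# Hypothetical labels for a two-lane road; replace these if you are training
# behaviors other than left/right lane keeping.
BEHAVIOR_LIST = ['left_lane', 'right_lane']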
Change Lanes - Computer Vision
Socket Communication
The lidar and the obstacle detection algorithm were run on the Jetson TX2, since the ROS package we found for the lidar allowed us to read its data there. However, we did not manage to install ROS and the donkey car program on the same device. Instead, we ran the donkey car program on the Raspberry Pi for control, ran the lidar on the Jetson TX2, and sent the lane-changing signal from the Jetson TX2 to the Raspberry Pi via a socket program written in Python.
We did attempt to find an Ubuntu image or compile a package for the donkey car from Maeve Automation. However, we were not able to flash the image correctly or build the Maeve ROS package successfully.
Socket
We set up the Jetson as the server and the Raspberry Pi as the client. To integrate the lane-changing signal into the donkey car control loop, we wrote a part that reads the signal from the lidar obstacle detection algorithm and then triggers the behavior change. Since we couldn't get the behavior model to work, we used the right-lane model and triggered a user-mode change instead of a behavior change: when an obstacle was detected, the mode changed from user mode to local-angle mode. Once in local-angle mode, with the vehicle in the left lane, the model took over and the vehicle switched to the right lane.
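Below is a minimal sketch of this socket link, assuming a plain TCP connection, a made-up port (5005), and a one-byte message; our actual script differed in details, but the structure was the same.

# jetson_server.py -- runs on the Jetson TX2 (sketch; port number assumed)
import socket
import time

HOST, PORT = '192.168.2.3', 5005   # Jetson's static Ethernet address

def serve(detect_obstacle):
    # detect_obstacle() is a placeholder: it should return True whenever the
    # lidar obstacle detection algorithm fires.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        while True:
            if detect_obstacle():
                conn.sendall(b'1')   # tell the Pi to change lanes
            time.sleep(0.05)

# pi_client.py -- runs on the Raspberry Pi inside a donkey "part" (sketch)
import socket

class LaneChangeClient:
    def __init__(self, host='192.168.2.3', port=5005):
        self.sock = socket.create_connection((host, port))
        self.sock.setblocking(False)

    def run(self):
        # Polled once per donkey loop; returns True when the Jetson reports
        # an obstacle, which is used to trigger the mode change.
        try:
            return self.sock.recv(1) == b'1'
        except BlockingIOError:
            return False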
Nvidia Jetson TX2
Setup
Power consumption
SLAM
ROS Communication
Setup Procedure
Connect the Jetson TX2 and Raspberry Pi 3 via an Ethernet cable:
- Use an Ethernet cable to connect the Jetson TX2 and the Raspberry Pi 3.
- Set up a static IP address for the Raspberry Pi on Ethernet:
sudo vi /etc/network/interfaces
- Edit the eth0 interface to use a static IP address:
# replace "iface eth0 inet manual" with the static configuration below
auto eth0
iface eth0 inet static
    address 192.168.2.2
    netmask 255.255.255.0
    gateway 192.168.2.1
    dns-nameservers 192.168.2.1
- On Jetson TX2, set static address for ethernet to 192.168.2.3:
Test connection:
pi@ucsdrobocar011:~ $ ping 192.168.2.3
PING 192.168.2.3 (192.168.2.3) 56(84) bytes of data.
64 bytes from 192.168.2.3: icmp_seq=1 ttl=64 time=1.56 ms
64 bytes from 192.168.2.3: icmp_seq=2 ttl=64 time=0.603 ms
64 bytes from 192.168.2.3: icmp_seq=3 ttl=64 time=0.607 ms
64 bytes from 192.168.2.3: icmp_seq=4 ttl=64 time=0.589 ms
64 bytes from 192.168.2.3: icmp_seq=5 ttl=64 time=0.715 ms
Set up the Jetson TX2 to run roscore:
Check IP addresses on the master machine (Jetson Tx2):
nvidia@tegra-ubuntu:~$ hostname -I
192.168.2.3 192.168.1.133 192.168.55.1 2605:e000:1c0d:c0ba:0:e8b3:ea64:e929
Set this machine's IP address (192.168.2.3) as the ROS master:
nvidia@tegra-ubuntu:~$ export ROS_MASTER_URI=http://192.168.2.3:11311
nvidia@tegra-ubuntu:~$ export ROS_IP=192.168.2.3
Check IP addresses on the slave machine (Raspberry Pi):
pi@ucsdrobocar011:~ $ hostname -I
192.168.2.2 …
Set the ROS master to the IP of the Jetson TX2:
pi@ucsdrobocar011:~ $ export ROS_MASTER_URI=http://192.168.2.3:11311
Set ROS to use this machine's IP address:
pi@ucsdrobocar011:~ $ export ROS_IP=192.168.2.2
Test by moving the turtlesim simulation:
- Create a simple publisher on the Raspberry Pi following the instructions here
- Create a simple subscriber on the Jetson TX2 following the instructions here
- Run ROS and confirm that data is being transferred correctly (a minimal publisher/subscriber sketch follows this list)
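The following is a minimal publisher/subscriber pair for this connectivity test, in the spirit of the linked rospy tutorial; the node and topic names here are our own choices.

# talker.py -- run on the Raspberry Pi (publisher)
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker')
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish("hello from the pi %s" % rospy.get_time())
        rate.sleep()

if __name__ == '__main__':
    talker()

# listener.py -- run on the Jetson TX2 (subscriber)
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo("received: %s", msg.data)

if __name__ == '__main__':
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()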
Note:
- No crossover Ethernet cable is needed; the Ethernet port on the Raspberry Pi is auto-sensing.
- If using a Raspbian image with a UI, the setup is the same as for the Jetson TX2, but with IP 192.168.2.2.
- ROS uses three environment variables to communicate:
- ROS_MASTER_URI
- ROS_IP
- ROS_HOSTNAME
- If the ROS_HOSTNAME variable is not set up correctly, all of the topics can be seen from both machines, but the data will NOT be sent. We believe roslaunch/rosrun uses ROS_HOSTNAME to resolve hostnames into IP addresses.
- To fix this issue, we had to change the hosts file on the slave machine.
- On the Raspberry Pi, run:
$ vi /etc/hosts
Then add the line below so ROS will know how to resolve hostnames into IP addresses (tegra-ubuntu is the hostname of the Jetson TX2):
192.168.2.3 tegra-ubuntu
Then add these commands to ~/.profile so they are run whenever a new bash session is created:
export ROS_MASTER_URI=http://192.168.2.3:11311
export ROS_IP=192.168.2.2
source ~/catkin_ws/devel/setup.bash
Then edit ~/.bashrc to disable the Ubiquity setup script by commenting out the following line:
# source /etc/ubiquity/env.sh
Do the same thing on the master machine (Jetson TX2) (not required, but recommended):
$ vi /etc/hosts
Then add this line so ROS will know how to resolve hostnames into IP addresses (ucsdrobocar012 is the hostname of the Raspberry Pi, which can be changed by editing /etc/hostname):
192.168.2.2 ucsdrobocar012
Then modify these commands in ~/.bashrc so these environment variables are set whenever a new bash session is created:
export ROS_MASTER_URI=http://192.168.2.3:11311
export ROS_IP=192.168.2.3
- Issue: If I set:
export ROS_HOSTNAME=tegra-ubuntu
Then when running roscore, it will use ROS_MASTER_URI=http://tegra-ubuntu:11311, and in the /etc/hosts file tegra-ubuntu resolves to 127.0.1.1. That means we would be running another ROS master on the loopback address, not the one at 192.168.2.3.
More information about ROS environment variables can be found here.
Hardware diagram
Connections:
- The camera is connected to the Raspberry Pi via the Pi camera port.
- The ESC and steering servo are connected to the Raspberry Pi via GPIO using the I2C protocol.
- The Raspberry Pi is connected to the Jetson TX2 via an Ethernet cable for socket and ROS communication.
- The lidar is connected to the Jetson TX2 via a USB port.
- Both the Raspberry Pi and the Jetson are connected to WiFi for debugging, development, and monitoring of all applications.
Software diagram
Set up image transport between ROS nodes:
Install scipy on the Ubiquity image:
$ sudo apt-get install python-scipy
Create a simple ROS node that reads a JPEG file and publishes it, as sketched below.
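A minimal sketch of such a node, publishing the file as a sensor_msgs/CompressedImage; the topic name and file path are placeholders.

import rospy
from sensor_msgs.msg import CompressedImage

def publish_jpeg(path, topic='/camera/image/compressed'):
    pub = rospy.Publisher(topic, CompressedImage, queue_size=1)
    rospy.init_node('jpeg_publisher')
    with open(path, 'rb') as f:
        jpeg_bytes = f.read()      # raw JPEG bytes, sent as-is
    msg = CompressedImage()
    msg.format = 'jpeg'
    msg.data = jpeg_bytes
    rate = rospy.Rate(10)          # publish at 10 Hz
    while not rospy.is_shutdown():
        msg.header.stamp = rospy.Time.now()
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    publish_jpeg('test.jpg')       # placeholder path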
Other
To update the donkey car package to the latest version:
cd ~/donkeycar
git pull
pip install -e .
I2C:
Update apt-get and install i2c-tools:
sudo apt-get update
sudo apt-get install i2c-tools
If you get the following error:
E: Syntax error /etc/apt/apt.conf.d/99translations:2: Extra junk at end of file
Then open the file named in the error message and add a ";" at the end of each apt line.
Remember to enable I2C in the Raspberry Pi configuration:
sudo raspi-config
Detect I2C:
sudo i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: 70 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
Calibrate channels:
$ donkey calibrate --channel 1
using donkey v2.2.0 ...
Enter a PWM setting to test(0-1500)500
Enter a PWM setting to test(0-1500)1000
Enter a PWM setting to test(0-1500)50
Enter a PWM setting to test(0-1500)1000
Enter a PWM setting to test(0-1500)1500
Enter a PWM setting to test(0-1500)20
Enter a PWM setting to test(0-1500)200
Enter a PWM setting to test(0-1500)500
Enter a PWM setting to test(0-1500)1500
Enter a PWM setting to test(0-1500)200
Pair Bluetooth Controller
Install Bluetooth software:
pi@ucsdrobocar01:~ $ sudo systemctl enable bluetooth.service
Synchronizing state for bluetooth.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d bluetooth defaults
Executing /usr/sbin/update-rc.d bluetooth enable
pi@ucsdrobocar01:~ $ sudo usermod -G bluetooth -a pi
Power cycle raspberry PI - reboot:
pi@ucsdrobocar01:~ $ sudo shutdown now
Connect the joystick to the Raspberry Pi using a mini USB cable:
Download tools:
wget http://www.pabr.org/sixlinux/sixpair.c
gcc -o sixpair sixpair.c -lusb
sudo ./sixpair
Disconnect Joystick. Turn on bluetooth and start pairing:
$ bluetoothctl
[bluetooth]# agent on
Agent registered
[bluetooth]# trust 00:16:FE:74:12:B7
[CHG] Device 00:16:FE:74:12:B7 Trusted: yes
Changing 00:16:FE:74:12:B7 trust succeeded
[bluetooth]# quit
Agent unregistered
[DEL] Controller B8:27:EB:49:2D:8C ucsdrobocar01 [default]
Verify connection:
$ ls /dev/input
mice
Modified config.py file:
#JOYSTICK
USE_JOYSTICK_AS_DEFAULT = True
JOYSTICK_MAX_THROTTLE = 0.30
JOYSTICK_STEERING_SCALE = 0.85
AUTO_RECORD_ON_THROTTLE = True
Start driving and collect data
Start driving:
$ python manage.py drive
Since we modified the config file (shown below) to use the joystick by default and to auto-record on throttle, we cannot use the web interface; data is recorded automatically whenever the vehicle is moving:
USE_JOYSTICK_AS_DEFAULT = True
AUTO_RECORD_ON_THROTTLE = True
To be able to use the web interface, set the joystick default option to False:
USE_JOYSTICK_AS_DEFAULT = False
Then access the control panel at <IP address>:8887 after running the manage.py script.
PS3 button functions:
Up: increase throttle speed (0-100% of the max speed set in config.py)
Down: decrease throttle speed (0-100% of the max speed set in config.py)
Left joystick (up and down only): steering
Right joystick (up and down only): throttle
Issue:
The Pi hangs and corrupts the data in the tub, which causes problems during training:
Unexpected error: <class 'json.decoder.JSONDecodeError'>
Traceback (most recent call last):
  File "manage.py", line 183, in <module>
    train(cfg, tub, model)
  File "manage.py", line 148, in train
    tubgroup = TubGroup(tub_names)
…
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
To fix this issue, run donkey tubcheck to check the tub and delete the corrupt records:
donkey tubcheck tub_1_18-05-02 --fix
Output:
Checking tub:tub_1_18-05-02.
Found: 20348 records.
Unexpected error: <class 'json.decoder.JSONDecodeError'>
problems with record, removing: tub_1_18-05-02 20342
Calculating car position in turning
Training the model
Set up the GPU cluster at ieng6.ucsd.edu by following the instructions here:
After transferring the data from the RPi to the cluster, train the model and transfer it back to the Pi. Then the donkey car is ready for a test run!
using donkey v2.2.0 ...
…
Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:8a:00.0, compute capability: 6.1)
254/255 [============================>.] - ETA: 0s - loss: 1.4248 - angle_out_loss: 1.5825 - throttle_out_loss: 0.5732
Epoch 00001: val_loss improved from inf to 1.13746, saving model to ucsd0502.h5
…
255/255 [==============================] - 124s 487ms/step - loss: 0.7270 - angle_out_loss: 0.8077 - throttle_out_loss: 0.0297 - val_loss: 0.9560 - val_angle_out_loss: 1.0622 - val_throttle_out_loss: 0.0305
Training can also be done on a local computer after installing the donkey car package locally, or by using a Docker image.
Issue:
The camera position is too low. In order to see the track, we had to lower the camera mount so it points more downward, which makes the image contain less background noise and be less affected by lighting conditions.
At noon or at 5 pm, the vehicle is more likely to leave the track on the straight portion, possibly because the training data taught the car to drive straight along the straight line but never taught it to return to the track from the edge or the outer side. This can be mitigated by collecting training data where the car moves to the side of the track and then drives back onto it.
Reference
https://docs.google.com/document/d/1mW28cSvn9k_IMt9VH5ROb30LsvgICu1G5Mnx4OHdMGc/edit#
https://www.youtube.com/watch?v=RCw3oIlxozA
http://wiki.ros.org/rospy_tutorials/Tutorials/WritingPublisherSubscriber
http://wiki.ros.org/ROS/EnvironmentVariables#ROS_IP.2BAC8-ROS_HOSTNAME
http://wiki.ros.org/ROS/EnvironmentVariables
https://github.com/tawnkramer/donkey
https://downloads.ubiquityrobotics.com/pi.html
https://maeveautomation.org/development/2017/6/18/release-2
https://www.explainthatstuff.com/introduction-to-neural-networks.html
http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414
http://www.lidar-uk.com/how-lidar-works/
Acknowledgement
We would like to acknowledge software developer Tawn Kramer for his implementation of the donkey car, forked from GitHub user Will Roscoe. We would also like to thank Professors Mauricio de Oliveira and Jack Silberman for their advice and encouragement throughout this project. We thank the UC San Diego Electrical and Computer Engineering (ECE) and Mechanical and Aerospace Engineering (MAE) departments for waiving our lab fees for the course. Finally, we would like to immensely thank UCSD's DSMLP instructional GPU cluster service, funded by ITS, JSOE, and the CogSci department, for providing the GPU clusters used to train our models.