2020FallTeam6
Revision as of 11:54, 17 December 2020
Udai Kandah [MAE] - 4th year Mechanical Engineering
Cris Madla [MAE] - 4th year Mechanical Engineering
Ethen Lerner [ECE] - 4th year Computer Engineering
The goal of this project was to get a vehicle to drive 5 laps autonomously. We accomplished this in two different ways: using deep learning and using OpenCV. The deep learning model employed the behavior cloning built into the DonkeySim framework. This means that to train the car to drive itself, we drove it manually for 30+ laps around the track while capturing 20 images per second and recording our controller inputs. Next, we used this data to train our model on the UCSD supercomputer. Finally, we loaded the trained model back onto the vehicle and were able to reproduce our driving patterns autonomously.
We also attempted autonomous driving without any type of machine learning. For this, we used OpenCV to filter in only the track markings from the camera images in order to follow the yellow stripes throughout the track. We built this system entirely on ROS nodes, so it is extremely modular and can be applied to many different systems and hardware. The basic order of events is as follows: the camera node publishes images; the CV node subscribes to those images and publishes throttle and steering based on passing the images through a filter; and finally, the car node subscribes to the steering and throttle data and uses the PCA9685 library to output controls to our hardware.
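The node chain above can be illustrated with a pure-Python stand-in for ROS publish/subscribe. Everything here is a toy assumption for illustration: the `Bus` class mimics ROS topics rather than using the real `rospy` API, the topic names are made up, and the "filter" simply steers toward the brightest column of a fake one-row image.

```python
# Toy stand-in for the camera -> CV -> car ROS node chain (not the real rospy API).

class Bus:
    """Minimal message bus mimicking ROS topics (publish/subscribe)."""
    def __init__(self):
        self.subs = {}
    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)
    def publish(self, topic, msg):
        for callback in self.subs.get(topic, []):
            callback(msg)

bus = Bus()
outputs = []

# CV node: subscribes to camera images, publishes steering/throttle.
def cv_node(image):
    # Placeholder "filter": steer toward the brightest column of the image.
    best = max(range(len(image)), key=lambda i: image[i])
    steering = (best - len(image) // 2) / (len(image) // 2)
    bus.publish("/controls", {"steering": steering, "throttle": 0.3})

# Car node: subscribes to controls; the real version drives the PCA9685 here.
def car_node(msg):
    outputs.append(msg)

bus.subscribe("/camera", cv_node)
bus.subscribe("/controls", car_node)

# Camera node: publishes a fake 1-D "image" (one row of pixel intensities).
bus.publish("/camera", [0, 0, 9, 0, 0, 0, 0])
print(outputs)
```

Because each node only touches the bus, any one of them can be swapped out without changing the others, which is the modularity the ROS design above provides.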
Our original project idea (which was scrapped due to Covid-19 restrictions) was to train the vehicle to recognize arrow directions and eventually more complicated instructions. We were able to get fairly good results using traffic sign data to train; we achieved a 98% accuracy in differentiating left and right arrows.
== Design and Parts ==
We designed a camera mount that holds the camera at different angles. This helps in capturing different frames to track the lines on the track. The only problem with this design is that it fits only one specific camera size.
We laser-cut a plate to fit all the components on it (camera mount, Jetson, PWM board, switch, and DC-DC converter). Our plate has multiple holes of different sizes to organize the wires.
- Jetson Nano with memory card
- PCA9685 PWM board
- DC-DC converter
- Servo motor
- Speed controller
The DonkeySim platform allows for simple and easy-to-use behavior cloning. This type of deep learning works by taking snapshots while someone manually drives the vehicle. Each snapshot is an (image, controls) pair, where the controls can be any button inputs on the controller, including throttle. Once a sufficient amount of data has been accumulated, a neural network is trained with the images as training inputs and the controls as training targets. Essentially, the system attempts to predict a combination of inputs given an image. Therefore, it is essential that manual driving be as consistent as possible so that outliers in the training set do not drastically reduce training efficacy. This is also why having a lot of data is desirable; one mistake on the track can be drowned out by thousands of other images. To speed up training, we took advantage of our access to the UCSD supercomputer center.
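The data-collection step can be sketched as a loop recording (image, controls) pairs at 20 snapshots per second. Note that `get_camera_frame` and `get_controller_state` are hypothetical stand-ins for the real camera and joystick, and the fake pixel data exists only so the sketch runs.

```python
import random

def get_camera_frame(t):
    # Hypothetical stand-in for the real camera; returns fake pixel data.
    random.seed(t)
    return [random.random() for _ in range(8)]

def get_controller_state(t):
    # Hypothetical stand-in for the joystick: (steering, throttle).
    return (0.1, 0.5)

# Record (image, controls) pairs at 20 Hz for one simulated second.
dataset = []
for tick in range(20):
    t = tick / 20.0
    dataset.append((get_camera_frame(t), get_controller_state(t)))

print(len(dataset))  # 20 snapshots, each an (image, controls) training pair
```

Training then treats each image as the input and the recorded controls as the target, which is why inconsistent driving directly pollutes the targets.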
The neural network itself is a convolutional neural network (CNN) that maps images to controller inputs (throttle and steering). A CNN works by stacking several convolutional layers and pooling layers on top of one another. The depth (number of layers) increases the complexity of the network and therefore how specific the details it can learn. Each layer works by convolving a filter over the image and passing the result to a pooling filter, which reduces dimensionality and provides some translational invariance. Because the CNN built into the DonkeySim framework is so effective, there was no need for us to alter it or create our own.
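The convolve-then-pool step described above can be sketched in NumPy. The 3x3 vertical-edge kernel and the tiny 6x6 "image" are illustrative stand-ins, not DonkeySim's actual layers or learned weights.

```python
import numpy as np

# One convolution + max-pooling step of the kind a CNN layer performs.

def conv2d(img, kernel):
    # Slide the kernel over the image (no padding, stride 1).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    # Keep the max of each size x size window: fewer values, some shift tolerance.
    h, w = img.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

image = np.arange(36, dtype=float).reshape(6, 6)  # fake 6x6 grayscale image
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])                # vertical-edge filter
features = max_pool(conv2d(image, kernel))        # convolve, then pool
print(features.shape)  # (2, 2): pooling halves the 4x4 convolution output
```

Stacking many such layers, each with learned kernels instead of this hand-written one, is what lets the network pick up increasingly specific details.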
== ROS and OpenCV ==
Our OpenCV script was based on filtering for the road markings, along with polygon masking to get rid of unwanted noise. Besides a color filter, we also tested other methods such as Canny edge detection and perspective warping in order to extract the radius of curvature of the road ahead. Unfortunately, due to time constraints, we were only able to implement our color-based filter. The CV algorithm calculated the off-center error of the car relative to the yellow markings and passed this information to a simple PID controller, which takes the error and outputs a value that is mapped to the steering of the car. Even with brief tuning, we were able to get excellent performance, aside from a slight phase lag introduced by the physical system. Given more time, the next steps would have been to map the PID directly to a kinematic model of the car's steering, with additional Kalman filtering on the setpoints sent by CV. By using a ROS system and separating out our CV node, we were able to switch between different control methods relatively quickly.
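A minimal sketch of the off-center-error-plus-PID idea follows, with NumPy standing in for the OpenCV color filter. The binary mask, the centroid-based error, and the gains are all illustrative assumptions, not our tuned values.

```python
import numpy as np

class PID:
    """Textbook PID: output = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = 0.0
    def step(self, error, dt=0.05):          # 0.05 s ~ a 20 Hz camera rate
        self.integral += error * dt
        deriv = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def off_center_error(mask):
    """Pixels from image center to the centroid of the filtered markings."""
    cols = np.nonzero(mask.any(axis=0))[0]   # columns containing "yellow"
    return cols.mean() - (mask.shape[1] - 1) / 2

mask = np.zeros((4, 11))
mask[:, 8] = 1                               # fake yellow stripe right of center
error = off_center_error(mask)               # +3 pixels to the right
steering = PID(kp=0.1, ki=0.0, kd=0.01).step(error)
print(error, steering)
```

In the real pipeline the mask would come from an OpenCV color threshold on the camera image, and the PID output would be published to the car node as the steering command.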
The following GIF is a test implementation of our edge-detection-based CV algorithm. Although our final implementation used the color filter, the data from this alternative filtering method can easily be used to determine new setpoints, compute the error, and send it to the steering control node.
== Useful Knowledge ==
Knowing Python and other programming languages is very useful. Having basic knowledge of circuits and how each component works is also important.
== Results ==
Autonomous laps using Deep Learning
10 Autonomous laps
Autonomous laps using OpenCV
6 Autonomous laps 
Turning toward the yellow line using OpenCV
Wheel turning
== References ==
DonkeySim Documentation - https://docs.donkeycar.com/
Information on CNN - https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
Introductory Deep Learning - http://neuralnetworksanddeeplearning.com/chap1.html