2020FallTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team Members

Udai Kandah [MAE] - 4th year Mechanical Engineering

Cris Madla [MAE] - 4th year Mechanical Engineering

Ethen Lerner [ECE] - 4th year Computer Engineering

Project Overview

The goal of this project was to get a vehicle to autonomously drive 5 laps. We accomplished this in two different ways: using deep learning and using OpenCV. The deep learning model employed the behavior cloning built into the DonkeySim framework. To train the car to drive itself, we manually drove it for 30+ laps around the track while capturing 20 pictures per second and recording our controller inputs. Next, we used this data to train our model on the UCSD supercomputer. Finally, we loaded the trained model back onto the vehicle and were able to reproduce our driving patterns autonomously.

We also attempted autonomous driving without any machine learning. For this, we used OpenCV to filter the camera images down to just the track markings in order to follow the yellow stripes throughout the track. We built this system entirely on ROS nodes, so it is extremely modular and can be applied to many different systems and hardware. The basic order of events: the camera node publishes images; the CV node subscribes to those images and publishes throttle and steering based on passing the images through a filter; and finally, a car node subscribes to the steering and throttle data and uses the PCA9685 library to output controls to our hardware.
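To make this concrete, below is a minimal sketch of what the CV node could look like in rospy. The topic names, HSV thresholds, and the constant throttle value are illustrative placeholders rather than our exact configuration:

 import rospy
 from sensor_msgs.msg import Image
 from std_msgs.msg import Float32
 from cv_bridge import CvBridge
 import cv2
 
 bridge = CvBridge()
 steer_pub = None
 throttle_pub = None
 
 def on_image(msg):
     # Convert the ROS image message to an OpenCV BGR array.
     frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
     hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
     # Keep only yellow pixels (threshold values are placeholders).
     mask = cv2.inRange(hsv, (20, 100, 100), (30, 255, 255))
     m = cv2.moments(mask)
     if m["m00"] > 0:
         cx = m["m10"] / m["m00"]  # centroid of the yellow stripe
         # Normalized off-center error in [-1, 1].
         error = (cx - frame.shape[1] / 2.0) / (frame.shape[1] / 2.0)
         steer_pub.publish(Float32(error))
         throttle_pub.publish(Float32(0.2))  # constant crawl throttle
 
 if __name__ == "__main__":
     rospy.init_node("cv_node")
     steer_pub = rospy.Publisher("/car/steering", Float32, queue_size=1)
     throttle_pub = rospy.Publisher("/car/throttle", Float32, queue_size=1)
     rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
     rospy.spin()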

Our original project idea (which was scrapped due to Covid-19 restrictions) was to train the vehicle to recognize arrow directions and, eventually, more complicated instructions. We were able to get fairly good results training on traffic-sign data; we achieved 98% accuracy in differentiating left and right arrows.

20201110 152706-compressed.jpg

Design and Parts

3D Printed

A camera mount that holds the camera at different angles. This helps in getting different frames to track the lines on the track. The only problem with this design is that it only fits one specific camera size.

320px-Camera Assembly.png

Laser cut

We laser cut a plate to fit all the components on it (camera mount, Jetson, PWM board, switch, and DC-DC converter). Our plate has multiple holes of different sizes to organize the wires.

800px-Base Plate6.png

Circuit Design

Components

  • Jetson Nano with memory card
  • PCA9685 PWM board
  • Relay
  • DC-DC converter
  • LED
  • Switch
  • Camera
  • Battery
  • Servo motor
  • Speed controller

Schematic

Schemeit-project.png

Implementation

Software

DonkeySim

The DonkeySim platform allows for simple and easy-to-use behavior cloning. This type of deep learning works by taking snapshots while someone manually drives the vehicle. Each snapshot is a pair of (image, controls), where the controls can be any inputs on the controller, including throttle. Once a sufficient amount of data is accumulated, a neural network is trained with the images as training inputs and the controls as training targets. Essentially, the system attempts to predict a combination of inputs given an image. It is therefore essential that manual driving be as consistent as possible so that outliers in the training set do not drastically reduce training efficacy. This is also why having a lot of data is desirable; one mistake on the track can be drowned out by thousands of other images. To speed up training, we took advantage of our access to the UCSD Supercomputer Center.
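To illustrate what one of these snapshots is, the sketch below records (image, controls) pairs at 20 Hz, loosely mirroring the tub format of one JPEG plus one JSON record per frame. The get_camera_frame and get_controller_state helpers are hypothetical stand-ins, not actual DonkeyCar APIs:

 import json
 import time
 
 import cv2  # used here only to write each frame to disk
 
 RATE_HZ = 20  # snapshots per second, matching our collection rate
 
 def record(run_dir, get_camera_frame, get_controller_state):
     """Record (image, controls) pairs for behavior cloning.
 
     get_camera_frame() and get_controller_state() are hypothetical
     helpers standing in for the real camera and joystick interfaces.
     """
     frame_id = 0
     while True:
         image = get_camera_frame()  # numpy BGR array from the camera
         steering, throttle = get_controller_state()
         image_path = f"{run_dir}/{frame_id}_cam.jpg"
         cv2.imwrite(image_path, image)
         with open(f"{run_dir}/record_{frame_id}.json", "w") as f:
             json.dump({"cam/image": image_path,
                        "user/angle": steering,
                        "user/throttle": throttle}, f)
         frame_id += 1
         time.sleep(1.0 / RATE_HZ)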

The neural network itself is a convolutional neural network (CNN) that maps images to controller inputs (throttle and steering). A CNN consists of several convolutional and pooling layers stacked on top of one another; the depth (number of layers) increases the complexity, and therefore the specificity, of the features the network can learn. Each layer works by convolving a filter over the image and passing the result to a pooling filter, which reduces dimensionality and provides some translational invariance. Because the CNN is built into the DonkeySim framework, and since it is so effective, there was no need for us to alter it or create our own.
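For reference, a network of this general shape can be sketched in a few lines of Keras. This is an illustrative stand-in, not the exact architecture that ships with DonkeySim; the layer sizes and input shape are placeholders:

 from tensorflow.keras import layers, models
 
 def build_model(input_shape=(120, 160, 3)):
     model = models.Sequential([
         layers.Input(shape=input_shape),
         # Stacked convolution + pooling: each stage convolves filters
         # over the image, then pooling reduces dimensionality and adds
         # some translational invariance.
         layers.Conv2D(24, (5, 5), activation="relu"),
         layers.MaxPooling2D((2, 2)),
         layers.Conv2D(32, (5, 5), activation="relu"),
         layers.MaxPooling2D((2, 2)),
         layers.Conv2D(64, (3, 3), activation="relu"),
         layers.MaxPooling2D((2, 2)),
         layers.Flatten(),
         layers.Dense(100, activation="relu"),
         # Two outputs: steering and throttle, both in [-1, 1].
         layers.Dense(2, activation="tanh"),
     ])
     model.compile(optimizer="adam", loss="mse")
     return model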

https://docs.donkeycar.com/

ROS and OpenCV

Our OpenCV script was based on filtering for the road markings, along with polygon masking to get rid of unwanted noise. Besides a color filter, we also tested other methods such as Canny edge finding and perspective warping in order to extract the radius of curvature of the road ahead. Unfortunately, due to time constraints, we were only able to implement our color-based filter. The CV algorithm calculated the off-center error of the car relative to the yellow markings and passed that error to a simple PID controller, which output a control signal that was subsequently mapped to the steering of the car. Even with brief tuning, we were able to get excellent performance, aside from a slight phase lag introduced by the physical system. Given more time, the next steps would have been to map the PID directly to a kinematic model of the car's steering, with additional Kalman filtering on the setpoints sent by CV. By using a ROS system and separating out our CV node, we were able to switch between different control methods relatively quickly.
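A minimal version of the PID step described above looks like the following; the gains shown are illustrative, not our tuned values:

 class PID:
     def __init__(self, kp, ki, kd):
         self.kp, self.ki, self.kd = kp, ki, kd
         self.integral = 0.0
         self.prev_error = None
 
     def update(self, error, dt):
         # Accumulate the integral term and approximate the derivative.
         self.integral += error * dt
         derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
         self.prev_error = error
         return self.kp * error + self.ki * self.integral + self.kd * derivative
 
 # Map the normalized off-center error to a steering command in [-1, 1].
 pid = PID(kp=0.8, ki=0.0, kd=0.1)
 steering = max(-1.0, min(1.0, pid.update(error=0.25, dt=1.0 / 20)))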

The car steering node was based on the relevant DonkeyCar framework files found in the donkeycar/parts folder. For more precise control, it is recommended to build a simple mathematical model of the car's steering angles to determine turning radii.
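As a sketch of that node's output stage, the snippet below maps a normalized steering command to a servo pulse through the Adafruit_PCA9685 library, in the style of DonkeyCar's actuator parts. The channel number and pulse endpoints are example values that must be calibrated for each vehicle:

 import Adafruit_PCA9685
 
 STEERING_CHANNEL = 1                # PCA9685 channel wired to the servo
 LEFT_PULSE, RIGHT_PULSE = 460, 290  # calibrated endpoints (example values)
 
 pwm = Adafruit_PCA9685.PCA9685()
 pwm.set_pwm_freq(60)                # standard hobby-servo frequency
 
 def set_steering(angle):
     """Map a steering angle in [-1, 1] to a servo pulse on the PCA9685."""
     angle = max(-1.0, min(1.0, angle))
     pulse = int(LEFT_PULSE + (angle + 1.0) / 2.0 * (RIGHT_PULSE - LEFT_PULSE))
     pwm.set_pwm(STEERING_CHANNEL, 0, pulse)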

The following gif is a test implementation of our edge-finding-based CV algorithm. Although our final implementation used the color filter, the data from this alternative filtering method could easily be used to determine new setpoints, compute the error, and send it to the steering control node.

Ezgif.com-gif-maker.gif
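For completeness, the core of that edge-finding variant is sketched below; the Canny thresholds and warp corner points depend on the camera mount and are placeholders:

 import cv2
 import numpy as np
 
 frame = cv2.imread("frame.jpg")   # a single captured frame
 gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
 edges = cv2.Canny(gray, 50, 150)  # edge map; thresholds are illustrative
 
 # Warp to a top-down ("bird's eye") view so the radius of curvature of
 # the lane ahead can be estimated. The src corners trace the road region
 # in the image and depend on the camera mounting.
 h, w = edges.shape
 src = np.float32([[w * 0.2, h * 0.7], [w * 0.8, h * 0.7],
                   [w * 0.95, h], [w * 0.05, h]])
 dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
 M = cv2.getPerspectiveTransform(src, dst)
 top_down = cv2.warpPerspective(edges, M, (w, h))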

Useful Knowledge

Knowing Python and other programming languages is very useful. Having a basic knowledge of circuits and how each component works is also important.

Results

Autonomous laps using Deep Learning

10 Autonomous laps [1]

Autonomous laps using OpenCV

6 Autonomous laps [2]

Turning toward the yellow line using OpenCV

Wheel turning [3]


References

  • DonkeySim Documentation - https://docs.donkeycar.com/
  • Information on CNNs - https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
  • Introductory Deep Learning - http://neuralnetworksanddeeplearning.com/chap1.html
  • ROS Documentation - http://wiki.ros.org/
  • ROS Online Learning Platform (RobotIgnite Academy) - https://app.theconstructsim.com/#/Home