2020FallTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team Members

Udai Kandah [MAE] - 4th year Mechanical Engineering

Cris Madla [MAE] - 4th year Mechanical Engineering

Ethen Lerner [ECE] - 4th year Computer Engineering

Project Overview

The goal of this project was to get a vehicle to drive 5 laps autonomously. We accomplished this in two different ways: using deep learning and using OpenCV. The deep learning approach used the behavior cloning built into the DonkeySim framework. To train the car to drive itself, we drove it manually for 30+ laps around the track while capturing 20 images per second along with our controller inputs. Next, we used this data to train our model on the UCSD supercomputer. Finally, we loaded the trained model back onto the vehicle and were able to reproduce our driving patterns autonomously.

We also attempted autonomous driving without any type of machine learning. For this we used OpenCV to filter the camera images so that only yellow remained, allowing the car to follow the yellow stripes throughout the track. We built this system entirely on ROS nodes, so it is highly modular and can be applied to many different systems and hardware. The basic order of events: the camera node publishes images, the CV node subscribes to those images and publishes throttle and steering based on the filtered images, and finally a car node subscribes to the steering and throttle data and uses the PCA9685 library to output controls to our hardware.
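As a minimal sketch of the last node in that chain, the snippet below subscribes to steering and throttle topics and writes PWM pulses to the PCA9685. The topic names, channel numbers, and pulse range are illustrative assumptions, not our exact configuration.

 # Sketch of a "car node": listen for normalized steering/throttle commands
 # and convert them to PCA9685 PWM pulses. Channels and pulse counts are
 # assumed values for illustration only.
 import rospy
 from std_msgs.msg import Float32
 import Adafruit_PCA9685

 STEERING_CH, THROTTLE_CH = 1, 0      # assumed PCA9685 channels
 PWM_CENTER, PWM_RANGE = 380, 100     # assumed pulse counts at 60 Hz

 pwm = Adafruit_PCA9685.PCA9685()
 pwm.set_pwm_freq(60)

 def to_pulse(value):
     # Map a command in [-1, 1] to a PCA9685 pulse count.
     return int(PWM_CENTER + PWM_RANGE * max(-1.0, min(1.0, value)))

 def on_steering(msg):
     pwm.set_pwm(STEERING_CH, 0, to_pulse(msg.data))

 def on_throttle(msg):
     pwm.set_pwm(THROTTLE_CH, 0, to_pulse(msg.data))

 rospy.init_node("car_node")
 rospy.Subscriber("steering", Float32, on_steering)
 rospy.Subscriber("throttle", Float32, on_throttle)
 rospy.spin()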

Our original project idea (which was scrapped due to Covid restrictions) was to train the vehicle to recognize arrow directions and eventually more complicated instructions. We were able to get fairly good results using traffic sign data to train; we achieved a 98% accuracy in differentiating left and right arrows.

Design and Parts

3D Printed

Camera mount that holds the camera at different angles. This helps in capturing different frames to track the lines on the track. The only drawback of this design is that it only fits one specific camera size.

Camera mount11.png

Laser cut

We laser cut a plate to fit all the components on it (camera mount, Jetson, PWM board, switch, and DC converter). Our plate has multiple holes of different sizes to organize the wires.

Base.png

Circuit Design

Components

  • Jetson Nano with memory card
  • PCA9685 PWM board
  • Relay
  • DC-DC converter
  • LED
  • Switch
  • Camera
  • Battery
  • Servo motor
  • Speed controller

Schematic

Schemeit-project.png

Implementation

Software

DonkeySim

The DonkeySim platform allows for simple and easy-to-use behavior cloning. This type of deep learning works by taking snapshots while someone manually drives the vehicle. Each snapshot is a pair of (image, controls), where the controls can be any button inputs on the controller, including throttle. Once a sufficient amount of data is accumulated, a neural network is trained with the images as training inputs and the controls as training targets. Essentially, the system attempts to predict a combination of inputs given an image. Therefore, it is essential that the manual driving be as consistent as possible so that outliers in the training set do not drastically reduce training efficacy. This is also why having a lot of data is desirable; one mistake on the track can be drowned out by thousands of other images. To speed up training, we took advantage of our access to the UCSD Supercomputer Center.
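A minimal sketch of what the data collection amounts to is shown below: save a camera frame and the current controller inputs roughly 20 times per second while a person drives. The read_frame and read_controls helpers and the file layout are hypothetical placeholders, not DonkeySim's actual recorder.

 # Sketch of a behavior-cloning recorder: pair each camera frame with the
 # driver's controller inputs at roughly 20 Hz. The read_* helpers are
 # hypothetical placeholders for the camera and gamepad interfaces.
 import json
 import time
 import cv2

 def record(read_frame, read_controls, out_dir="data", hz=20, n_samples=6000):
     # read_frame() -> BGR image; read_controls() -> (steering, throttle)
     period = 1.0 / hz
     for i in range(n_samples):
         frame = read_frame()
         steering, throttle = read_controls()
         cv2.imwrite("%s/%06d.jpg" % (out_dir, i), frame)
         with open("%s/%06d.json" % (out_dir, i), "w") as f:
             json.dump({"steering": steering, "throttle": throttle}, f)
         time.sleep(period)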

The neural network itself is a convolutional neural network (CNN) that maps images to controller inputs (throttle and steering). A CNN stacks several convolutional layers and pooling layers on top of one another. The depth (number of layers) increases the network's capacity and therefore how specific the details it can learn are. Each layer works by convolving a filter over the image and passing the result to a pooling layer, which reduces dimensionality and provides some translational invariance. Because the CNN is built into the DonkeySim framework, and since it is so effective, there was no need for us to alter it or create our own.
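For illustration, a minimal Keras sketch of such a network is shown below. The layer sizes and input shape are assumptions; the actual model built into DonkeySim differs in its details.

 # Minimal sketch of a CNN mapping a camera image to [steering, throttle].
 # Layer sizes and input shape are illustrative, not the DonkeySim model.
 from tensorflow.keras import layers, models

 def build_model(input_shape=(120, 160, 3)):
     model = models.Sequential([
         layers.Input(shape=input_shape),
         layers.Conv2D(24, 5, strides=2, activation="relu"),
         layers.MaxPooling2D(),
         layers.Conv2D(32, 3, activation="relu"),
         layers.MaxPooling2D(),
         layers.Conv2D(64, 3, activation="relu"),
         layers.Flatten(),
         layers.Dense(100, activation="relu"),
         layers.Dense(2),                 # [steering, throttle]
     ])
     model.compile(optimizer="adam", loss="mse")  # regression on controls
     return model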

https://docs.donkeycar.com/

ROS and OpenCV
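The yellow-line filter described in the Project Overview can be sketched in a few lines of OpenCV: threshold the image in HSV, then steer toward the centroid of the yellow pixels. The HSV bounds and the steering gain below are illustrative assumptions, not our tuned values.

 # Sketch of the CV node's core logic: keep only yellow pixels, then steer
 # toward their centroid. HSV bounds and gain are assumed example values.
 import cv2
 import numpy as np

 def steer_from_yellow(bgr_image, gain=2.0):
     hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
     lower = np.array([20, 100, 100])     # assumed lower bound for yellow
     upper = np.array([35, 255, 255])     # assumed upper bound for yellow
     mask = cv2.inRange(hsv, lower, upper)
     m = cv2.moments(mask)
     if m["m00"] == 0:                    # no yellow visible: go straight
         return 0.0
     cx = m["m10"] / m["m00"]             # x-coordinate of yellow centroid
     half_width = bgr_image.shape[1] / 2.0
     error = (cx - half_width) / half_width   # normalized offset in [-1, 1]
     return float(np.clip(gain * error, -1.0, 1.0))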

Useful Knowledge

Knowing Python, as well as other programming languages, is very useful for this project.


Results

Autonomous laps using Deep Learning

10 Autonomous laps[1]

Autonomous laps using OpenCV

6 Autonomous laps [2]


Future Suggestions

References

DonkeySim Documentation - https://docs.donkeycar.com/

Information on CNN - https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53

Introductory Deep Learning - http://neuralnetworksanddeeplearning.com/chap1.html

ROS Documentation - http://wiki.ros.org/

ROS Online Learning Platform (RobotIgnite Academy) - https://app.theconstructsim.com/#/Home