2020FallTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles

== Team Members ==

Udai Kandah [MAE] - 4th year Mechanical Engineering

Cris Madla [MAE] - 4th year Mechanical Engineering

Ethen Lerner [ECE] - 4th year Computer Engineering

== Project Overview ==

The goal of this project was to get a vehicle to drive 5 laps autonomously. We accomplished this in two different ways: using deep learning and using OpenCV. The deep learning approach used the behavior cloning built into the DonkeySim framework. To train the car to drive itself, we manually drove the vehicle for 30+ laps around the track while capturing camera images at 20 Hz (20 pictures per second) and recording our controller inputs. Next, we used this data to train our model on the UCSD supercomputer. Finally, we loaded the trained model back onto the vehicle, which was then able to reproduce our driving patterns autonomously.
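
To make the training step concrete, here is a minimal sketch of a behavior-cloning network in Keras, similar in spirit to the models built into the DonkeySim framework. The input resolution and layer sizes are illustrative assumptions, not the exact architecture we trained.

<syntaxhighlight lang="python">
# Minimal behavior-cloning sketch (Keras). Input size and layer widths
# are illustrative, not the exact Donkey architecture we used.
from tensorflow.keras import layers, Model

def build_model(input_shape=(120, 160, 3)):
    img_in = layers.Input(shape=input_shape)
    x = layers.Conv2D(24, 5, strides=2, activation='relu')(img_in)
    x = layers.Conv2D(32, 5, strides=2, activation='relu')(x)
    x = layers.Conv2D(64, 3, strides=2, activation='relu')(x)
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation='relu')(x)
    x = layers.Dropout(0.1)(x)
    # Two regression heads, matching the (camera frame -> controller
    # inputs) pairs recorded during the manual laps.
    steering = layers.Dense(1, name='steering')(x)
    throttle = layers.Dense(1, name='throttle')(x)
    model = Model(inputs=img_in, outputs=[steering, throttle])
    model.compile(optimizer='adam', loss='mse')
    return model
</syntaxhighlight>

Each recorded frame is paired with the steering and throttle values logged at the same instant, and the network learns to regress from pixels to those two controls.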

We also implemented autonomous driving without any machine learning. For this approach we used OpenCV to filter the camera images so that only yellow remained, allowing the car to follow the yellow stripes around the track. We built this system entirely out of ROS nodes, so it is extremely modular and can be applied to many different systems and hardware. The basic order of events: the camera node publishes images; the CV node subscribes to those images and publishes throttle and steering values based on the filtered images; and finally, the car node subscribes to the steering and throttle data and uses the PCA9685 library to output controls to our hardware. A sketch of the CV node appears below.
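
To illustrate the middle stage of this pipeline, below is a sketch of what the CV node could look like, assuming ROS1 (rospy) and hypothetical topic names ('camera/image_raw', 'steering', 'throttle'); the HSV bounds and the constant throttle are placeholder values rather than our tuned parameters.

<syntaxhighlight lang="python">
# CV node sketch: subscribe to camera images, publish steering/throttle.
# Topic names, HSV bounds, and the throttle value are assumptions.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from std_msgs.msg import Float32
from cv_bridge import CvBridge

class YellowFollower:
    def __init__(self):
        self.bridge = CvBridge()
        self.steer_pub = rospy.Publisher('steering', Float32, queue_size=1)
        self.throttle_pub = rospy.Publisher('throttle', Float32, queue_size=1)
        rospy.Subscriber('camera/image_raw', Image, self.on_image)

    def on_image(self, msg):
        bgr = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Keep only yellow pixels (hue roughly 20-35 on OpenCV's
        # 0-179 hue scale).
        mask = cv2.inRange(hsv, np.array([20, 100, 100]),
                           np.array([35, 255, 255]))
        m = cv2.moments(mask)
        if m['m00'] > 0:
            cx = m['m10'] / m['m00']  # x-centroid of the yellow stripe
            w = mask.shape[1]
            # Normalized offset from image center -> steering in [-1, 1].
            steering = float(np.clip((cx - w / 2) / (w / 2), -1.0, 1.0))
        else:
            steering = 0.0            # no line in view: steer straight
        self.steer_pub.publish(Float32(steering))
        self.throttle_pub.publish(Float32(0.2))  # constant slow throttle

if __name__ == '__main__':
    rospy.init_node('yellow_follower')
    YellowFollower()
    rospy.spin()
</syntaxhighlight>

The car node would subscribe to the same 'steering' and 'throttle' topics and convert them into PWM pulses; a sketch of that output stage is given in the Circuit Design section below.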

Our original project idea (which was scrapped due to COVID-19 restrictions) was to train the vehicle to recognize arrow directions and, eventually, more complicated instructions. We got fairly good results training on traffic sign data: the classifier reached 98% accuracy in differentiating left and right arrows.
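
For reference, a binary left/right arrow classifier along these lines can be set up in a few lines of Keras. The directory layout, image size, and epoch count below are hypothetical, not our exact training configuration.

<syntaxhighlight lang="python">
# Left/right arrow classifier sketch. Assumes images sorted into
# data/arrows/left and data/arrows/right; all settings are illustrative.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    'data/arrows', label_mode='binary',
    image_size=(64, 64), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # left vs. right
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_ds, epochs=10)
</syntaxhighlight>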

== Design and Parts ==

=== 3D Printed ===

A camera mount that holds the camera at different angles. This helps capture frames from different perspectives for tracking the lines on the track. The only drawback of this design is that it fits only one specific camera size.

Camera mount11.png

=== Laser Cut ===

We laser cut a plate to mount all the components (camera mount, Jetson, PWM board, switch, and DC-DC converter). The plate has multiple holes of different sizes for organizing the wires.

=== Circuit Design ===

==== Components ====

  • Jetson Nano with memory card
  • PCA9685 PWM board
  • Relay
  • DC-DC converter
  • LED
  • Switch
  • Camera
  • Battery
  • Servo motor
  • Speed controller

Schemeit-project.png
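
To complement the schematic, here is a sketch of the car node's output stage: mapping normalized steering and throttle values onto servo/ESC pulses through the PCA9685, assuming the legacy Adafruit_PCA9685 Python library. The channel numbers and pulse ranges are placeholders that must be calibrated on the actual vehicle.

<syntaxhighlight lang="python">
# PCA9685 output sketch. Channels and pulse counts are placeholders;
# calibrate them for your own servo and speed controller.
import Adafruit_PCA9685

STEERING_CH, THROTTLE_CH = 1, 0   # assumed wiring
PULSE_MIN, PULSE_MAX = 280, 440   # example 12-bit pulse counts

pwm = Adafruit_PCA9685.PCA9685()  # default I2C address 0x40
pwm.set_pwm_freq(60)              # standard 60 Hz servo/ESC frequency

def set_channel(channel, value):
    """Map a value in [-1, 1] to a pulse between PULSE_MIN and PULSE_MAX."""
    value = max(-1.0, min(1.0, value))
    pulse = int(PULSE_MIN + (value + 1.0) / 2.0 * (PULSE_MAX - PULSE_MIN))
    pwm.set_pwm(channel, 0, pulse)

set_channel(STEERING_CH, 0.0)     # center the steering servo
set_channel(THROTTLE_CH, 0.1)     # gentle forward throttle
</syntaxhighlight>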

== Implementation ==

== Useful Knowledge ==

== Results ==

== Future Suggestions ==

== References ==