2019FallTeam8


Team Members

Fall 2019

  • Lovpreet Hansra, Computer Science BS
  • Adam Porter, Electrical Engineering BS
  • Luan Nguyen, Aerospace Engineering BS


The Idea

Our goal was to give the car a set of destination GPS coordinates, have it navigate to those coordinates using the magnetometer in the IMU for heading, and then search for a particular object within a given radius. We used computer vision to distinguish between objects and find the desired one as the car circumnavigated the area.
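As a rough sketch of the navigation idea, the car can compute the great-circle bearing from its current GPS fix to the target coordinates and compare it with the heading reported by the magnetometer. The formulas below are the standard bearing and heading-error calculations; the function names are illustrative and not code from the project.

  import math

  # Illustrative sketch of the GPS/magnetometer navigation idea (not project code):
  # compute the bearing to the target waypoint and compare it with the IMU heading.

  def bearing_to_target(lat, lon, target_lat, target_lon):
      """Initial great-circle bearing from the current fix to the target, in degrees."""
      phi1, phi2 = math.radians(lat), math.radians(target_lat)
      d_lambda = math.radians(target_lon - lon)
      x = math.sin(d_lambda) * math.cos(phi2)
      y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(d_lambda)
      return math.degrees(math.atan2(x, y)) % 360.0

  def heading_error(magnetometer_heading_deg, desired_bearing_deg):
      """Signed error wrapped to [-180, 180); positive means the car should turn right."""
      return (desired_bearing_deg - magnetometer_heading_deg + 180.0) % 360.0 - 180.0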

Must Have

  • Object Recognition
  • GPS Navigation
  • Working IMU interface

Nice to Have

  • Obstacle avoidance
  • Search within specified radius
  • IMU communicating directly to the Jetson without the Arduino

Mechanical Components

Base Plate


Camera Mount

(Images: CameraBase.png and CameraHead.png, showing the camera mount base and head.)

Ultrasound Casings

Hardware

Devices

IMU

Ultrasonic Sensors

USB GPS

Arduino Mega

Wiring

  • Insert picture and maybe some explanations

Software

For object recognition, we implemented a YOLO algorithm trained on the COCO dataset, which covers 80 different object classes.

  • Insert explanation of algorithm and pseudocode

The YOLO algorithm uses bounding boxes to locate and classify objects in a given image. To classify the objects, we used a convolutional neural network based on the Darknet framework but implemented in PyTorch. The object detection model uses leaky ReLU as its activation function and has a convolutional layer and a pooling layer at every step. The last layers are fully connected and use softmax to get probabilities for each class, a commonly used and efficient sequence for classifying images of objects. To detect multiple objects in a frame, YOLO breaks the image into a grid; each grid cell predicts a fixed number of bounding boxes, and each box can detect one object. The cell that contains the center of an object is the one responsible for classifying that object. For this implementation, we started with open-source code and extended it to work for our project and with the Jetson.
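For illustration, a minimal sketch of the grid and bounding-box decoding described above might look like the following. The tensor layout (an S x S grid of cells, each predicting B boxes of x, y, w, h, confidence plus class scores) and the function name are assumptions made for the sketch, not the exact open-source code we started from.

  import torch

  def decode_yolo_grid(output, num_boxes=2, num_classes=80, conf_thresh=0.5):
      # output: a (S, S, num_boxes * 5 + num_classes) tensor for one image.
      # Each cell predicts num_boxes boxes (x, y, w, h, confidence) and one set
      # of class scores; the cell containing an object's center is the one
      # responsible for classifying it.
      S = output.shape[0]
      detections = []
      for row in range(S):
          for col in range(S):
              cell = output[row, col]
              class_probs = torch.softmax(cell[num_boxes * 5:], dim=0)
              for b in range(num_boxes):
                  x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                  if conf < conf_thresh:
                      continue
                  cls = int(torch.argmax(class_probs))
                  score = float(conf) * float(class_probs[cls])
                  # x and y are offsets within the cell; convert to image-relative coords.
                  cx = (col + float(x)) / S
                  cy = (row + float(y)) / S
                  detections.append((cx, cy, float(w), float(h), cls, score))
      return detections

In practice, overlapping detections would also be merged with non-maximum suppression before being reported to the car.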

The algorithm must be run in conjunction with the DonkeyCar program. We integrated the object detection by having the detection program write to a file that the DonkeyCar loop continuously reads for changes. When the DonkeyCar reads that the specified object has been found, it stops moving; if the object leaves the frame, it starts searching for the object again.
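As a rough illustration of that handoff, the detector could write a flag file after each frame and a small DonkeyCar part could read it every loop. The file path, names, and wiring below are placeholders rather than our exact code.

  FLAG_FILE = "/tmp/object_found.txt"  # placeholder path for the shared flag file

  # Detector side: after each frame, record whether the target object was seen.
  def write_detection_flag(found):
      with open(FLAG_FILE, "w") as f:
          f.write("1" if found else "0")

  # DonkeyCar side: a simple part that reads the flag each loop and zeroes the
  # throttle while the object is in view, letting the search resume otherwise.
  class StopOnObject:
      def run(self, throttle):
          try:
              with open(FLAG_FILE) as f:
                  found = f.read().strip() == "1"
          except FileNotFoundError:
              found = False
          return 0.0 if found else throttle

  # Added to the vehicle loop in manage.py, e.g.:
  # V.add(StopOnObject(), inputs=['throttle'], outputs=['throttle'])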

Conclusion

Possible Improvements

Project Links

Resources