Difference between revisions of "2019FallTeam8"

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 01:36, 4 December 2019

Team Members

Fall 2018

  • Lovpreet Hansra, Computer Science BS
  • Adam Porter, Electrical Engineering BS
  • Luan Nguyen, Aerospace Engineering BS

Project Links

The Idea

Our goal was to give the car a set of destination GPS coordinates, have it navigate to those coordinates using the magnetometer within the IMU for heading, and then search for a particular object within a given radius of the destination. We planned to use computer vision to distinguish between objects and find the desired one while the car circled the area.
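The navigation step above boils down to comparing the great-circle bearing toward the destination with the heading reported by the magnetometer. A minimal sketch of that computation (not the actual project code; function names and the sign convention are our own assumptions):

```python
import math

def bearing_to_target(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2),
    in degrees clockwise from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def steering_error(target_bearing, heading):
    """Signed difference (degrees, in [-180, 180)) between the desired
    bearing and the magnetometer heading; positive means turn right."""
    return (target_bearing - heading + 180.0) % 360.0 - 180.0
```

The steering error would then feed the car's steering command each control cycle until the GPS reports the car is within the search radius.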

Hardware

Devices

IMU

Ultrasonic Sensors

USB GPS

Arduino Mega

Wiring

  • Insert picture and maybe some explanations

Software

For object recognition, we implemented a YOLO algorithm trained on the COCO dataset, which covers 80 different object classes.

  • Insert explanation of algorithm and pseudocode

The YOLO algorithm essentially uses bounding boxes to locate and classify objects in a given image. To classify the objects, I used a CNN based on the Darknet framework but implemented in PyTorch. The object detection model uses leaky ReLU as its activation function and pairs a convolutional layer with a pooling layer at every step, a commonly used and efficient sequence for classifying images of objects. To detect multiple objects in a frame, YOLO divides the image into a grid; each grid cell predicts a specified number of bounding boxes, and each box can detect one object. The cell that contains the center of an object is the cell responsible for classifying that object. For this implementation, I started with open-source code and extended it to work with our project and the video from the Jetson.
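The grid responsibility rule described above can be sketched in a few lines. This is an illustration only, not the project's detection code; the 7x7 grid size is an assumption (it matches the original YOLO formulation, but the project's model may differ):

```python
def owning_cell(cx, cy, img_w, img_h, grid=7):
    """Return the (row, col) grid cell responsible for an object whose
    bounding-box center sits at pixel (cx, cy) in an img_w x img_h image.
    In YOLO, only this cell's predicted boxes are matched to the object
    during training and scoring."""
    # Scale the center into grid coordinates and clamp to the last cell
    # so a center exactly on the right/bottom edge stays in bounds.
    col = min(int(cx / img_w * grid), grid - 1)
    row = min(int(cy / img_h * grid), grid - 1)
    return row, col
```

For example, in a 448x448 image with a 7x7 grid, an object centered at (224, 224) belongs to cell (3, 3), so only that cell's predicted boxes are responsible for detecting it.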

Conclusion

Possible Improvements

Resources