2019FallTeam8

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Revision as of 07:13, 4 December 2019

Team Members

Fall 2019

  • Lovpreet Hansra, Computer Science BS
  • Adam Porter, Electrical Engineering BS
  • Luan Nguyen, Aerospace Engineering BS


The Idea

Our goal was to provide our car with particular GPS coordinates, have the car navigate to the destination coordinates by using the magnetometer within the IMU, and then search for a particular object within a given radius. We utilized computer vision to distinguish between objects and find the desired one while the car circumnavigates the area.

Must Have

  • Object Recognition
  • GPS Navigation
  • Working IMU interface

Nice to Have

  • Obstacle avoidance
  • Search within specified radius
  • IMU communicating directly to the Jetson without the Arduino

Mechanical Components

Base Plate

Our base plate model

We went with a relatively simple base plate to accommodate all of our hardware and help a little with cable management.

Camera Mount

Camera Mount
Camera Mount Base

We wanted an adjustable camera mount to experiment with different angles so we printed 2 separate pieces to mount the camera.

Ultrasound Distance Sensor Housings

UltrasoundPt1.png
UltrasoundPt2.png

We printed housings for our ultrasonic sensors to protect them in case the car had any collisions.

Hardware

Devices

IMU

Ultrasonic Sensors

USB GPS

Arduino Mega

Schematic

Schematic.png

Brief Schematic Description: In our design, three ultrasonic distance sensors (model: HC-SR04) and a SparkFun IMU (BNO080) are connected to the Arduino Mega. Each distance sensor requires 5 V plus trigger and echo pins on the Arduino, while the IMU requires 3.3 V and the SDA/SCL pins for its I2C connection.
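Since the Arduino gathers the sensor readings, the Jetson needs to receive and parse them. The sketch below shows one way this could look on the Jetson side, assuming (hypothetically — the actual line format is ours, not from the project) that the Arduino streams one CSV line per reading over USB serial:

```python
# Parse sensor lines streamed from the Arduino over USB serial.
# Assumed line format (hypothetical): "dist_left,dist_mid,dist_right,heading"
# e.g. "120.5,48.0,200.1,87.3"  (distances in cm, heading in degrees)

def parse_sensor_line(line):
    """Parse one CSV line into (left_cm, mid_cm, right_cm, heading_deg)."""
    parts = line.strip().split(",")
    if len(parts) != 4:
        return None  # malformed or partial line, skip it
    try:
        left, mid, right, heading = (float(p) for p in parts)
    except ValueError:
        return None  # non-numeric field
    return left, mid, right, heading

# On the Jetson this would be fed from pyserial, e.g.:
#   import serial
#   with serial.Serial("/dev/ttyACM0", 115200) as port:
#       reading = parse_sensor_line(port.readline().decode("ascii", "ignore"))
```

The port name and baud rate above are placeholders; whatever the Arduino sketch actually prints determines the real format.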

Software

Navigation

For navigation, we integrated all of the sensors into the software. We obtained the position of the car from the GPS and its direction from the IMU's magnetometer. To start the navigation, we defined a list of GPS coordinates. Once the car comes within 1 m of the first location, it drives to the next location, repeating until it reaches the final one. We converted the three-dimensional magnetometer reading into a compass bearing, which defines the current direction of the car. By calculating the compass bearing needed to travel from the current GPS coordinate to the desired one, we obtained the error between the current and needed bearings. Based on this error, we calculated a steering command that turns the car toward the target.
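The bearing math above can be sketched as follows. This is a minimal standalone version (the great-circle initial-bearing formula plus angle wrapping), not the project's exact code:

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees (0 = north, clockwise)
    from (lat1, lon1) to (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def heading_error(current_deg, target_deg):
    """Signed error in degrees, wrapped to [-180, 180),
    so the sign tells the car which way to turn."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0
```

A proportional controller can then map `heading_error` directly to a steering value, which is the simplest way to close the loop.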

The three ultrasonic sensors in the front were used to determine the distances of obstacles and the safest direction to turn. If the middle sensor detected an object closer than the defined threshold, the car would turn right or left, toward whichever side's sensor detected a more distant obstacle.
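The turn decision described above amounts to a few lines of logic. A sketch (the threshold value here is illustrative, not the tuned one from the project):

```python
THRESHOLD_CM = 50.0  # assumed obstacle-distance threshold; tunable

def avoidance_steer(left_cm, mid_cm, right_cm):
    """Steering command from the three front ultrasonic readings:
    -1.0 = full left, 0.0 = straight, 1.0 = full right."""
    if mid_cm >= THRESHOLD_CM:
        return 0.0  # path ahead is clear, keep going
    # Obstacle ahead: turn toward whichever side sensor sees more room.
    return 1.0 if right_cm > left_cm else -1.0
```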

To accurately obtain the position of the car, we applied sensor fusion with the GPS and accelerometer by using a Kalman filter.
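A minimal version of that fusion, reduced to one dimension for clarity, looks like the filter below: the accelerometer drives the prediction step and each GPS fix drives the correction step. The noise values are illustrative assumptions, not the project's tuned parameters:

```python
class GpsAccelKalman1D:
    """Minimal 1-D Kalman filter fusing accelerometer (prediction)
    with GPS position fixes (correction). State: [position, velocity]."""

    def __init__(self, q=0.5, r=4.0):
        self.x = [0.0, 0.0]                      # position (m), velocity (m/s)
        self.P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
        self.q = q                               # process noise
        self.r = r                               # GPS measurement noise (m^2)

    def predict(self, accel, dt):
        """Propagate the state using the accelerometer reading."""
        p, v = self.x
        self.x = [p + v * dt + 0.5 * accel * dt * dt, v + accel * dt]
        P = self.P
        # P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]

    def update(self, gps_pos):
        """Correct the state with a GPS position fix (H = [1, 0])."""
        s = self.P[0][0] + self.r                # innovation covariance
        k0 = self.P[0][0] / s                    # Kalman gain for position
        k1 = self.P[1][0] / s                    # Kalman gain for velocity
        resid = gps_pos - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        self.P = [[(1 - k0) * self.P[0][0], (1 - k0) * self.P[0][1]],
                  [self.P[1][0] - k1 * self.P[0][0],
                   self.P[1][1] - k1 * self.P[0][1]]]
```

The real filter would run on latitude/longitude (or a local x/y projection) rather than a single axis, but the predict/update structure is the same.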


Object Recognition

For object recognition, we implemented a YOLO algorithm trained on the COCO dataset, which covers 80 classes of objects. YOLO uses bounding boxes to locate and classify objects in a given image. To classify the objects, we used a CNN from the Darknet framework, implemented in PyTorch. The detection model uses leaky ReLU as its activation function, with a convolutional layer and a pooling layer at every step; the last layers are fully connected and use softmax to produce probabilities for each class. This is a common and efficient architecture for classifying images of objects.

YOLO detects multiple objects per frame with its bounding-box method: the image is broken into a grid, each grid cell predicts a fixed number of boxes, and each box can detect an object. The box containing the center of an object is responsible for classifying it. For this implementation, we started with open-source code and extended it to work for our project and on the Jetson.

The algorithm must run in conjunction with the DonkeyCar program. We integrated the object detection by having the detector write to a file and creating a Donkey Car part that continuously reads that file for changes. When the Donkey Car reads that the specified object has been found, it stops moving; if the object leaves the frame, the car starts searching again.
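The file-based handoff between the YOLO process and the Donkey Car can be sketched as a small part like the one below. The file path, file contents, and class name here are our assumed convention for illustration, not the project's actual ones:

```python
class ObjectDetectedPart:
    """Sketch of a Donkey Car part that polls a flag file written by the
    YOLO process. Assumed convention (hypothetical): the detector writes
    the detected class name (e.g. "person") to the file, or leaves it
    empty when nothing is in frame."""

    def __init__(self, flag_path="/tmp/detected_object.txt", target="person"):
        self.flag_path = flag_path
        self.target = target

    def run(self):
        """Return True (meaning: stop the car) when the target is in frame."""
        try:
            with open(self.flag_path) as f:
                return f.read().strip() == self.target
        except OSError:
            return False  # detector hasn't written the file yet
```

In the vehicle loop this part's output would gate the throttle, so the car halts while the object is detected and resumes searching once it disappears.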

Steps to Reproduce Our Result

  • Build a Donkey Car
  • Implement GPS navigation on Donkey Car (clone github repo)
  • Install PyTorch and OpenCV on Jetson Nano
  • Implement Object Recognition (clone github repo)
  • Specify object to find (in cam.py, change the obj variable to the desired object)
  • Run Object Recognition in conjunction with Donkey Car

Videos of Final Result

Challenges

Conclusion

Possible Improvements

Our project is a solid base model for finding an object at a specified location. One improvement would be an efficient search algorithm for covering the roughly 2-meter radius within which GPS navigation can place the car.

Project Links

Resources