2019FallTeam8

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team Members

Fall 2019

  • Lovpreet Hansra, Computer Science BS
  • Adam Porter, Electrical Engineering BS
  • Luan Nguyen, Aerospace Engineering BS


The Idea

Our goal was to provide our car with particular GPS coordinates, have the car navigate to the destination coordinates by using the magnetometer within the IMU, and then search for a particular object within a given radius. We utilized computer vision to distinguish between objects and find the desired one while the car circumnavigates the area.

Must Have

  • Object Recognition
  • GPS Navigation
  • Working IMU interface

Nice to Have

  • Obstacle avoidance
  • Search within specified radius
  • IMU communicating directly to the Jetson without the Arduino

Mechanical Components

Base Plate

Our base plate model

We went with a relatively simple base plate to accommodate all of our hardware and help a little with cable management.

Camera Mount

Camera Mount
Camera Mount Base

We wanted an adjustable camera mount to experiment with different angles, so we designed and printed two separate pieces to mount the camera.

Ultrasound Distance Sensor Housings

Ultrasound sensor housings (parts 1 and 2)

We printed casings for our ultrasound sensors to protect them in case the car had any collisions, and to help mount the sensors to the car chassis.

Hardware

Devices

IMU (SparkFun Bosch BNO080)

The IMU, or inertial measurement unit, is a device containing a gyroscope, magnetometer, and accelerometer, which can be used to determine quantities such as acceleration, heading, and rotational rate. We used the magnetometer (and the accompanying SparkFun Arduino library for this chip) to determine the orientation of the vehicle. The IMU requires 3.3V and SDA/SCL pins for the I2C connection; data is then read from the Arduino by the Jetson via serial connection (USB). The magnetometer outputs the strength of the Earth's magnetic field along the three axes X, Y, Z at a given orientation. To create a compass bearing where north = 0/360 degrees, south = 180 degrees, etc., we only need the field strength in the X and Y directions: the arctangent of Y/X (in practice, atan2(y, x), which handles all four quadrants) gives an angle that can be offset so that north = 0/360 degrees. Compensating for magnetic declination is also important, as the north found with the magnetometer is magnetic north rather than true north.
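A minimal sketch of that conversion on the Jetson side, assuming the raw X/Y field strengths are already available; the declination value here is a made-up placeholder, since the real value depends on the test site:

```python
import math

# Made-up declination for illustration; look up the real value for the
# test location from a magnetic declination table.
DECLINATION_DEG = 11.0

def heading_from_mag(mag_x, mag_y):
    """Convert magnetometer X/Y field strengths to a compass bearing.

    Returns degrees in [0, 360): 0 = north, 90 = east, 180 = south.
    Assumes the IMU is mounted level (no tilt compensation).
    """
    heading = math.degrees(math.atan2(mag_y, mag_x))  # quadrant-safe arctangent
    heading += DECLINATION_DEG                        # magnetic -> true north
    return heading % 360.0
```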



Ultrasonic Sensors (HC-SR04)

Hc-sr04.png

We used three HC-SR04 ultrasonic distance sensors to detect obstacles in our vehicle's path. Mounted on the front of the car (left, center, and right), the sensors reliably report the distance to an obstacle, which then drives the appropriate avoidance steering. Each HC-SR04 sensor requires 5V from the Arduino Mega, as well as trigger (TRIG) and echo (ECHO) digital pin connections.

USB GPS (U-blox7)

Ublox7gps.png

To retrieve parsable GPS data, we used a U-blox7 GPS/GLONASS USB receiver. It streams standard NMEA sentences over USB serial to the Jetson, where they can be parsed.
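A minimal sketch of reading one fix on the Jetson, assuming the pyserial and pynmea2 packages and a typical /dev/ttyACM0 device path (both assumptions, not taken from our code; some receivers emit $GNRMC instead of $GPRMC):

```python
import serial   # pyserial
import pynmea2

GPS_PORT = "/dev/ttyACM0"  # assumed device path; check dmesg after plugging in

def read_fix(port=GPS_PORT):
    """Block until a valid RMC sentence arrives; return (lat, lon) in degrees."""
    with serial.Serial(port, 9600, timeout=1) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line.startswith("$GPRMC"):
                continue
            try:
                msg = pynmea2.parse(line)
            except pynmea2.ParseError:
                continue            # corrupted sentence; wait for the next one
            if msg.status == "A":   # 'A' = active/valid fix, 'V' = void
                return msg.latitude, msg.longitude
```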

Arduino Mega (ELEGOO MEGA 2560 R3)

Arduinomega.png

To read the IMU chip and the three HC-SR04 ultrasonic distance sensors, we used an Arduino microcontroller connected to the Jetson Nano via USB. Data printed to the Arduino's serial output can then be read from the serial input on the Jetson.
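A sketch of the Jetson side of this link; the port and line format are assumptions, taking the Arduino sketch to print one comma-separated line of magnetometer and distance readings per loop:

```python
import serial  # pyserial

# Assumed format: the Arduino prints "mx,my,mz,d_left,d_mid,d_right" each loop.
arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def read_sensors():
    """Read one line of IMU + ultrasonic data from the Arduino, or None."""
    line = arduino.readline().decode("ascii", errors="ignore").strip()
    fields = line.split(",")
    if len(fields) != 6:
        return None  # timeout or incomplete line; caller should retry
    try:
        mx, my, mz, d_left, d_mid, d_right = map(float, fields)
    except ValueError:
        return None  # garbled line
    return {"mag": (mx, my, mz), "dist_cm": (d_left, d_mid, d_right)}
```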

Schematic

Wiring schematic

Brief schematic description: in our design, we added three ultrasonic distance sensors (model: HC-SR04) and a SparkFun IMU (Bosch BNO080); both are connected to an Arduino Mega.

Software

Navigation

For navigating the car, we integrated all of the sensors into the software. We obtained the position of the car from the GPS and the heading of the car from the IMU's magnetometer. To start the navigation, we defined a list of GPS coordinates; once the car came within 1 m of the first waypoint, it drove on to the next and repeated this until it reached the final one. We converted the three-dimensional magnetometer reading into a compass bearing, which defines the current heading of the car. From the current and desired GPS coordinates we computed the required compass bearing, and the difference between it and the current compass bearing gave us a heading error. Based on that error, we calculated a steering correction to turn the car toward the waypoint.
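A sketch of the bearing and error math described above, using the standard great-circle initial-bearing formula; the proportional gain and the sign convention (+1 = full right, as in DonkeyCar's [-1, 1] steering range) are assumptions to be tuned and checked on the actual car:

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def heading_error(current, target):
    """Signed error in [-180, 180); positive means the target is to the right."""
    return (target - current + 180.0) % 360.0 - 180.0

K_P = 1.0 / 90.0  # made-up proportional gain; tune on the car

def steering(current_heading, target_bearing):
    """Map heading error to a steering command in [-1, 1]."""
    err = heading_error(current_heading, target_bearing)
    return max(-1.0, min(1.0, K_P * err))
```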

The three ultrasonic sensors in the front were used to determine the distances of obstacles and the safest direction to turn. If the middle ultrasonic sensor detected an object closer than the defined threshold, the car would turn right or left toward whichever side's sensor detected the more distant obstacle.
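A sketch of that decision rule, with a hypothetical threshold and the same assumed steering convention (+1 = right) as above:

```python
OBSTACLE_THRESHOLD_CM = 50.0  # made-up value; tune to speed and stopping distance

def avoidance_steer(d_left, d_mid, d_right):
    """Return a steering override, or None when the path ahead is clear."""
    if d_mid >= OBSTACLE_THRESHOLD_CM:
        return None                        # nothing blocking; keep GPS steering
    # Turn toward whichever side reports the more distant obstacle.
    return 1.0 if d_right > d_left else -1.0
```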

To obtain a more accurate position estimate for the car, we applied sensor fusion of the GPS and the IMU's accelerometer using a Kalman filter.
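A minimal per-axis sketch of such a filter, not our exact implementation: the state is [position, velocity], the accelerometer reading enters as the control input of a constant-acceleration motion model, and GPS fixes arrive as position measurements. One instance would run per axis (e.g. north and east, with GPS fixes converted to local meters), and the noise values here are illustrative guesses:

```python
import numpy as np

class GpsAccelKalman1D:
    """1-D Kalman filter fusing GPS position with accelerometer input."""

    def __init__(self, dt, gps_var=4.0, accel_var=0.5):
        self.x = np.zeros(2)                         # state: [pos, vel]
        self.P = np.eye(2) * 10.0                    # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.B = np.array([0.5 * dt**2, dt])         # control (accel) input
        self.H = np.array([[1.0, 0.0]])              # GPS measures position only
        self.R = np.array([[gps_var]])               # GPS measurement noise
        self.Q = np.outer(self.B, self.B) * accel_var  # process noise

    def predict(self, accel):
        """Propagate the state using the latest accelerometer reading."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, gps_pos):
        """Correct the state with a GPS position measurement."""
        y = gps_pos - self.H @ self.x                # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```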


Object Recognition

For object recognition, we implemented a YOLO algorithm trained on the COCO dataset, which covers 80 different classes of objects. YOLO essentially uses bounding boxes to locate and classify objects in a given image: the image is divided into a grid, each grid cell proposes a specified number of boxes, and each box can detect one object. The cell that contains the center of an object is responsible for classifying it, which is how multiple objects can be detected in a single frame.

To classify the objects, we used a CNN based on the Darknet framework, implemented in PyTorch. The object detection model uses leaky ReLU as its activation function, alternates convolutional and pooling layers at every step, and ends in fully connected layers with a softmax to produce per-class probabilities; this is a commonly used and efficient architecture for classifying images of objects. For this implementation, we started with open-source code and extended it to work with our project and the Jetson.

The detector must run in conjunction with the DonkeyCar program. To integrate the two, the object detection program writes its result to a file, and a DonkeyCar part continuously reads that file for changes. When the part reads that the specified object has been found, the car stops moving; if the object leaves the frame, the car starts searching for it again.
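A minimal sketch of such a file-polling DonkeyCar part; the file path and the "found" convention are assumptions for illustration, not our actual file format:

```python
class ObjectFoundPart:
    """DonkeyCar part that polls a flag file written by the YOLO process."""

    STATUS_FILE = "/tmp/object_status"  # assumed path; the detector writes here

    def run(self, throttle):
        """Pass the throttle through, but zero it while the object is in frame."""
        try:
            with open(self.STATUS_FILE) as f:
                found = f.read().strip() == "found"
        except FileNotFoundError:
            found = False  # detector not started yet; keep driving
        return 0.0 if found else throttle
```

A part like this would be wired into the vehicle loop with something like V.add(ObjectFoundPart(), inputs=['throttle'], outputs=['throttle']), letting it gate the throttle that reaches the motor.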

Steps to Reproduce Our Result

  • Build a Donkey Car
  • Implement GPS navigation on Donkey Car (clone github repo)
  • Install PyTorch and OpenCV on Jetson Nano
  • Implement Object Recognition (clone github repo)
  • Specify the object to find (in cam.py, change the obj variable to the desired object)
  • Run Object Recognition in conjunction with Donkey Car

Videos of Final Result

Challenges

Conclusion

Possible Improvements

Our project is a solid base model for finding an object at a specified location. One improvement would be developing an efficient search algorithm for covering the roughly 2-meter radius that GPS navigation can bring the car within.

Project Links

Resources