2019FallTeam8

Team Members

Fall 2019

  • Lovpreet Hansra, Computer Science BS
  • Adam Porter, Electrical Engineering BS
  • Luan Nguyen, Aerospace Engineering BS


The Idea

Our goal was to provide our car with particular GPS coordinates, have the car navigate to the destination coordinates by using the magnetometer within the IMU, and then search for a particular object within a given radius. We utilized computer vision to distinguish between objects and find the desired one while the car circumnavigates the area.

Must Have

  • Object Recognition
  • GPS Navigation
  • Working IMU interface

Nice to Have

  • Obstacle avoidance
  • Search within specified radius
  • IMU communicating directly to the Jetson without the Arduino

Mechanical Components

Base Plate

Our base plate model

We went with a relatively simple base plate to accommodate all of our hardware and help a little with cable management.

Camera Mount

Camera Mount
Camera Mount Base

We wanted an adjustable camera mount to experiment with different angles so we designed and printed two separate pieces to mount the camera.

Ultrasound Distance Sensor Housings

UltrasoundPt1.png
UltrasoundPt2.png

We printed some casings for our ultrasound sensors in case the car had any collisions, and to help with mounting the sensors to the car chassis.

Hardware

We used an NVIDIA Jetson Nano as our computer. Instead of connecting the sensors directly to the Jetson Nano, we connected multiple sensors to the Arduino which helped us manage all of the data from the sensors. The Jetson Nano pulled data from the Arduino through a serial connection and used the data to determine the controls of the car.

Hardware System Diagram

Additional Hardware/Devices

IMU (SparkFun Bosch BNO080)

Bno080.png


The IMU, or inertial measurement unit, is a device containing a gyroscope, magnetometer, and accelerometer, which together can be used to determine quantities such as acceleration, bearing, or rotational rate. We used the magnetometer (and the accompanying SparkFun Arduino library for this chip) to determine the orientation of the vehicle. The IMU requires 3.3V and SDA/SCL pins for the I2C connection. Data is then read from the Arduino by the Jetson via a serial (USB) connection. The magnetometer outputs the strength of the Earth's magnetic field along the X, Y, and Z axes at a given orientation. To create a compass bearing where north = 0/360 degrees, south = 180 degrees, etc., we only need the field strength in the X and Y directions. Taking the arctangent of Y/X gives an angle, which is then offset so that north = 0/360 degrees. Compensating for declination is also important, since the heading the magnetometer gives is relative to magnetic north, not true north.
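As a rough sketch of that heading calculation (assuming the raw X/Y field values are available; the axis sign convention depends on how the IMU is mounted on the car, and the declination constant below is an approximate value for San Diego rather than a number taken from our actual code):

import math

MAG_DECLINATION_DEG = 11.0  # approximate declination for San Diego (east positive); adjust per location

def compass_bearing(mag_x, mag_y, declination=MAG_DECLINATION_DEG):
    """Convert raw magnetometer X/Y field strengths into a 0-360 degree compass bearing."""
    # atan2 handles all four quadrants; the result is relative to magnetic north.
    # The sign/axis convention depends on how the IMU is mounted.
    heading = math.degrees(math.atan2(mag_y, mag_x))
    heading += declination          # correct magnetic north toward true north
    return heading % 360.0          # wrap into [0, 360)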


Ultrasonic Sensors (HC-SR04)

Hc-sr04.png


We utilized three HC-SR04 ultrasonic distance sensors to detect obstacles in the vehicle's path. Mounted at the front of the car (left, middle, and right), the sensors can reliably determine the location of an obstacle so that the appropriate avoidance steering can be applied. Each HC-SR04 sensor requires 5V from the Arduino Mega, as well as trigger (TRIG) and echo (ECHO) digital pin connections.




USB GPS (U-blox7)

Ublox7gps.png


To retrieve parsable GPS data, we used a U-blox7 GPS/GLONASS USB receiver. It streams standard NMEA sentences over USB serial to the Jetson, where they are parsed into latitude/longitude fixes.
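A minimal sketch of reading and parsing those fixes, assuming the pyserial and pynmea2 libraries and a typical device path (our actual parsing code may differ):

import serial
import pynmea2

def gps_fixes(port="/dev/ttyACM0", baud=9600):
    """Yield (latitude, longitude) in decimal degrees from GGA sentences."""
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line.startswith(("$GPGGA", "$GNGGA")):
                continue
            try:
                msg = pynmea2.parse(line)
            except pynmea2.ParseError:
                continue  # skip malformed sentences
            yield msg.latitude, msg.longitude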






Arduino Mega (ELEGOO MEGA 2560 R3)

Arduinomega.png


To control the IMU chip and the three HC-SR04 ultrasonic distance sensors, we decided to use an Arduino micro-controller connected to the Jetson Nano via USB. Data printed to the serial output on the Arduino can then be read via the serial input on the Jetson.
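On the Jetson side, reading that serial stream might look like the sketch below. The message format (one comma-separated line of heading plus three distances per cycle) and the device path are assumptions for illustration, not the exact protocol in our Arduino sketch.

import serial

def arduino_readings(port="/dev/ttyUSB0", baud=115200):
    """Yield (heading_deg, dist_left_cm, dist_mid_cm, dist_right_cm) tuples."""
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            raw = ser.readline().decode("ascii", errors="ignore").strip()
            fields = raw.split(",")
            if len(fields) != 4:
                continue  # skip partial or garbled lines
            try:
                heading, d_left, d_mid, d_right = map(float, fields)
            except ValueError:
                continue
            yield heading, d_left, d_mid, d_right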





Schematic

148schematic bb.png


Brief Schematic Description: In our design, we added three ultrasonic distance sensors (model: HC-SR04) and a SparkFun IMU (Bosch BNO080). Both are connected to an Arduino Mega. Serial output from the Arduino Mega is then read by the serial input on the Jetson Nano.




Software

Navigation

For navigating the car, we integrated all of the sensors into the software. Each sensor was wrapped as a Donkey Car part (a Python class) so that its readings could be used in "manage.py". We obtained the position of the car from the GPS and its heading from the IMU's magnetometer. To start the navigation, we defined a list of GPS coordinates. Once the car comes within 1 m of the first waypoint, it drives toward the next one, repeating until it reaches the final waypoint. We converted the three-dimensional magnetometer reading into a compass bearing that defines the current heading of the car. We then calculated the bearing from the current GPS coordinate to the target coordinate, took its difference from the current heading as the error, and used that error to set the steering servo angle.
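The geometry behind that waypoint logic is sketched below: a great-circle bearing from the current fix to the target, a distance check for the waypoint threshold, and a proportional mapping from heading error to a steering command. Function names and the steering gain are illustrative, not taken from our manage.py.

import math

EARTH_RADIUS_M = 6371000.0

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters between two GPS fixes (haversine)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0-360) from fix 1 toward fix 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def steering_command(current_heading, target_bearing, gain=1.0 / 90.0):
    """Map heading error in degrees to a steering value in [-1, 1]."""
    error = (target_bearing - current_heading + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return max(-1.0, min(1.0, gain * error))

A waypoint would count as reached when distance_m() drops below the 1 m threshold, at which point the target advances to the next coordinate in the list.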

The car can also maneuver around obstacles autonomously. The three ultrasonic sensors at the front were used to measure the distances of obstacles and pick the safest direction to turn: if the middle sensor detected an object closer than a defined threshold, the car turned toward whichever side sensor reported the farther reading.
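A hypothetical version of that avoidance rule is shown below; the threshold and steering magnitudes are placeholders rather than our tuned values.

OBSTACLE_THRESHOLD_CM = 50.0  # placeholder threshold

def avoidance_steering(d_left, d_mid, d_right, nav_steering):
    """Return a steering command in [-1, 1]; pass navigation steering through when the path is clear."""
    if d_mid >= OBSTACLE_THRESHOLD_CM:
        return nav_steering                     # nothing ahead, keep following waypoints
    # Obstacle ahead: turn toward whichever side sensor reports more free space.
    return 0.8 if d_right > d_left else -0.8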

To obtain a more accurate position for the car, we applied sensor fusion of the GPS and accelerometer using a Kalman filter. The Kalman filter estimates the current position of the car and produces more precise estimates by reducing noise. It works by predicting the current state variables and their uncertainties, then, once the next measurement is recorded, updating the estimates with a weighted average in which lower-uncertainty measurements receive more weight. One of the major difficulties with applying the filter is that the IMU samples at a much faster rate than the GPS. To compensate for this, the filter only runs its update step when the GPS records a position. To use this filter, we had to convert our GPS coordinates into meters, where +x is east and +y is north. The accelerometer readings were also rotated from the car's local frame into this east/north frame using the compass bearing. We did not have time to tune the filter, so we were not able to use it in real time.
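A minimal per-axis sketch of that filter is shown below, assuming a constant-velocity model with the accelerometer as the control input: the predict step runs at every IMU sample and the update step only when a new GPS position (already converted to meters east/north) arrives. The noise values are placeholders that would still need tuning.

import numpy as np

class AxisKalman:
    def __init__(self, dt, accel_var=0.5, gps_var=4.0):
        self.x = np.zeros(2)                      # state: [position (m), velocity (m/s)]
        self.P = np.eye(2) * 10.0                 # state covariance
        self.F = np.array([[1, dt], [0, 1]])      # constant-velocity transition
        self.B = np.array([0.5 * dt**2, dt])      # control (acceleration) input
        self.Q = np.eye(2) * accel_var            # process noise (placeholder)
        self.H = np.array([[1.0, 0.0]])           # GPS measures position only
        self.R = np.array([[gps_var]])            # GPS measurement noise (placeholder)

    def predict(self, accel):
        """Run every IMU sample, using the accelerometer reading for this axis."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, gps_pos):
        """Run only when a new GPS fix (in meters along this axis) arrives."""
        y = gps_pos - self.H @ self.x                         # innovation
        S = self.H @ self.P @ self.H.T + self.R               # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

One instance would run for the east axis and one for the north axis, with accelerations rotated into that frame using the compass bearing.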

Software diagram.PNG

We plotted our results using the Google Maps API. An example path is shown in the figure below; some parts of the path correspond to us carrying the car.

Gps map example.PNG

Object Recognition

Flowchart.jpg
Yolo.jpg

For object recognition, we implemented a YOLO algorithm trained on the COCO dataset, which covers 80 classes of objects. YOLO uses bounding boxes to locate and classify objects in a given image. The detection model is a convolutional neural network based on the Darknet framework but implemented in PyTorch; it uses leaky ReLU activations, and its final layers output class probabilities for each predicted box. YOLO detects multiple objects in a frame by dividing the image into a grid: each grid cell predicts a fixed number of boxes, and the cell containing the center of an object is responsible for classifying that object. For this implementation, I started with open-source code and extended it to work for our project and on the Jetson. The base implementation came from this github repo (https://github.com/ayooshkathuria/pytorch-yolo-v3), with changes made from there. The algorithm must run alongside the DonkeyCar program: the detection program writes its result to a file, and a Donkey Car part continuously reads that file for changes. When the Donkey Car reads that the specified object has been found, it stops moving, and if the object leaves the frame it starts searching for the object again.
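The file-based handoff between the detector and the vehicle loop could look roughly like the part below. The file path, flag format, and wiring in manage.py are assumptions for illustration, not our exact code.

class ObjectStop:
    """Donkey Car part that cuts the throttle while the target object is detected."""

    def __init__(self, flag_path="/tmp/detected_object.txt", target="stop sign"):
        self.flag_path = flag_path
        self.target = target

    def run(self, throttle):
        try:
            with open(self.flag_path) as f:
                detected = f.read().strip()   # latest label written by the YOLO process
        except FileNotFoundError:
            detected = ""
        # Stop when the target object is in frame; otherwise keep searching.
        return 0.0 if detected == self.target else throttle

# In manage.py the part would be added to the vehicle loop roughly like:
# V.add(ObjectStop(target="stop sign"), inputs=["throttle"], outputs=["throttle"])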

Method

  • Build a standard Donkey Car
  • Implement GPS navigation on Donkey Car (clone github repo)
  • Install PyTorch and OpenCV on Jetson Nano
  • Implement Object Recognition (clone github repo)
  • Specify the object to find (in cam.py, change the obj variable to the desired object)
  • Run Object Recognition in conjunction with Donkey Car

Challenges

  • We had a lot of trouble with our IMUs, with recurring reliability issues communicating over I2C: an IMU would work for a week or two and then stop responding. We had to switch out our second IMU for our third, which is currently working properly. We believe this may be an issue with clock synchronization between the master and slave in the I2C protocol; a possible solution is switching to SPI instead of I2C.
  • It was difficult to integrate the object detection algorithm with Donkey Car
  • The GPS sensor has a lot of noise
  • Sometimes when the object detection algorithm is run, the Jetson (when powered by the battery) stops working because the program is computationally expensive and the Jetson struggles to supply all connected devices with the proper voltage. A possible solution would be to power the other devices directly from the battery so the Jetson can run the program more smoothly.

Conclusion

Our car is capable of correcting its direction to move toward a specified GPS location within 5-10 m and avoiding larger objects. From our demo, we can see that the system is not yet robust enough to navigate to multiple locations in sequence while avoiding objects. The sensors do not have high accuracy or precision: the ultrasonic sensors only detect objects that are large and roughly perpendicular to the sensor, and if an object is slanted the sensors do not receive the reflected pulse.

Possible Improvements

Our project is a good base model for finding an object at a specified location. One improvement would be an efficient search algorithm to cover the 2-meter radius that GPS navigation gets us within. We were unable to reach all of our nice-to-have goals, so with more time we would troubleshoot the reliability issues that sometimes caused the car to stop working and figure out why. We would also change the IMU interface so that it communicates directly with the Jetson instead of going through the Arduino. To improve sensor accuracy, we could switch to higher-quality sensors or use a filter; the Kalman filter has been coded but not tested or tuned for real-time navigation.

Advice to Future Students

Project Links

https://github.com/lun002/ucsdrobocar-team8-fall2019.git

Resources

SparkFun Bosch BNO080 Arduino Library: https://github.com/sparkfun/Qwiic_IMU_BNO080

HC-SR04 Arduino Library: https://github.com/JRodrigoTech/Ultrasonic-HC-SR04

Sensor Fusion (Kalman Filter): https://github.com/sharathsrini/Kalman-Filter-for-Sensor-Fusion