2019FallTeam8

From MAE/ECE 148 - Introduction to Autonomous Vehicles

[Image: Team8car.PNG]

Team Members

Fall 2019

  • Lovpreet Hansra, Computer Science BS
  • Adam Porter, Electrical Engineering BS
  • Luan Nguyen, Aerospace Engineering BS


The Idea

Our goal was to provide our car with particular GPS coordinates, have the car navigate to the destination coordinates by using the magnetometer within the IMU, and then search for a particular object within a given radius. We utilized computer vision to distinguish between objects and find the desired one while the car circumnavigated the area.

Must Have

  • Object Recognition
  • GPS Navigation
  • Working IMU interface

Nice to Have

  • Obstacle avoidance
  • Search within specified radius
  • IMU communicating directly to the Jetson without the Arduino

Mechanical Components

Base Plate

[Image: our base plate model]

We went with a relatively simple base plate to accommodate all of our hardware and help a little with cable management.

Camera Mount

[Images: camera mount and camera mount base]

We wanted an adjustable camera mount to experiment with different angles so we designed and printed two separate pieces to mount the camera.

Ultrasound Distance Sensor Housings

[Images: UltrasoundPt1.png, UltrasoundPt2.png]

We printed some casings for our ultrasound sensors in case the car had any collisions, and to help with mounting the sensors to the car chassis.

Hardware

We used an NVIDIA Jetson Nano as our computer. Instead of connecting the sensors directly to the Jetson Nano, we connected multiple sensors to an Arduino, which helped us manage all of the sensor data. The Jetson Nano pulled data from the Arduino through a serial connection and used it to determine the controls of the car.

[Image: hardware system diagram]

Additional Hardware/Devices

IMU (SparkFun Bosch BNO080)

[Image: Bno080.png]


The IMU, or inertial measurement unit, is a device containing a gyroscope, magnetometer, and accelerometer, which together can be used to determine quantities such as acceleration, bearing, and rotational rate. We used the magnetometer (with the accompanying SparkFun Arduino library for this chip) to determine the orientation of the vehicle. The IMU requires 3.3V and SDA/SCL pins for the I2C connection. Data is then read from the Arduino by the Jetson via a serial (USB) connection. The magnetometer outputs the strength of the Earth's magnetic field in three dimensions (X, Y, Z) at a given orientation. To create a compass bearing where north = 0/360 degrees, south = 180 degrees, etc., we are only concerned with the field strength in the X and Y directions. The arctangent of Y/X provides an angle, which can be shifted so that north = 0/360 degrees. Compensating for declination is also important, as the north found with the magnetometer is magnetic north rather than true north.
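As an illustration, the bearing computation can be sketched as follows (a minimal sketch, assuming the magnetometer's X/Y values have already been parsed from the Arduino's serial stream; the declination constant is an assumed example value, not a number from our write-up):

    import math

    # Assumed local magnetic declination in degrees (illustrative value only);
    # true north = magnetic north + declination.
    DECLINATION_DEG = 11.0

    def compass_bearing(mag_x, mag_y):
        """Convert horizontal magnetic field components into a compass
        bearing in degrees, with north mapped to 0/360."""
        # atan2 handles all four quadrants and the mag_x == 0 case.
        bearing = math.degrees(math.atan2(mag_y, mag_x))
        bearing += DECLINATION_DEG   # correct magnetic north to true north
        return bearing % 360.0       # normalize to [0, 360)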

The IMU's accelerometer readings were also used for the Kalman-filter sensor fusion described in the Navigation section below.

Ultrasonic Sensors (HC-SR04)

[Image: Hc-sr04.png]


We utilized three HC-SR04 ultrasonic distance sensors to find obstacles threatening our vehicle's path. Mounted on the front of the car (left, middle, and right), the sensors can reliably determine the location of an obstacle, which then triggers appropriate avoidance steering. Each HC-SR04 sensor requires 5V from the Arduino Mega, as well as a trigger (TRIG) and echo (ECHO) digital pin connection.




USB GPS (U-blox7)

[Image: Ublox7gps.png]


To retrieve parsable GPS data, we used a U-blox7 GPS/GLONASS USB receiver. The receiver streams NMEA sentences to a serial port on the Jetson, which can then be parsed for position.
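One way to read and parse this stream on the Jetson is sketched below, assuming the pyserial and pynmea2 packages; the device path and baud rate are common defaults rather than values recorded in our setup:

    import serial   # pyserial
    import pynmea2

    # /dev/ttyACM0 and 9600 baud are typical defaults for USB GPS dongles.
    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
        while True:
            line = port.readline().decode("ascii", errors="replace").strip()
            if line.startswith("$GPGGA"):      # GGA sentences carry the fix
                msg = pynmea2.parse(line)
                if msg.gps_qual and int(msg.gps_qual) > 0:   # 0 = no fix yet
                    print(msg.latitude, msg.longitude)       # decimal degrees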






Arduino Mega (ELEGOO MEGA 2560 R3)

[Image: Arduinomega.png]


To control the IMU chip and the three HC-SR04 ultrasonic distance sensors, we used an Arduino microcontroller connected to the Jetson Nano via USB. Data printed to the serial output on the Arduino can then be read via the serial input on the Jetson.
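A sketch of the Jetson side of this link, assuming pyserial and a hypothetical comma-separated message format (our actual Arduino output format is not documented here):

    import serial  # pyserial

    # Hypothetical line format printed by the Arduino sketch:
    #   "left_cm,mid_cm,right_cm,heading_deg\n"
    arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

    def read_sensors():
        """Return (left, mid, right, heading) floats, or None on a bad line."""
        line = arduino.readline().decode("ascii", errors="replace").strip()
        parts = line.split(",")
        if len(parts) != 4:
            return None   # partial or malformed line; try again next loop
        try:
            return tuple(float(p) for p in parts)
        except ValueError:
            return None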





Schematic

[Image: 148schematic bb.png]


Brief Schematic Description: In our design, we added three ultrasonic distance sensors (model: HC-SR04) and a SparkFun IMU (Bosch BNO080). Both are connected to an Arduino Mega, whose serial output is then read by the serial input on the Jetson Nano.

Software

Object Recognition

[Image: Yolo Arch.png]

For object recognition, we implemented a YOLO algorithm trained on the COCO dataset, which classifies 80 different classes of objects. YOLO uses bounding boxes to locate and classify objects in a given image. To output probabilities for the objects within the bounding boxes, we used a CNN based on the Darknet framework but implemented in PyTorch. The object detection model uses leaky ReLU as its activation function and has a convolutional layer and pooling layer at every step. The last layers are fully connected and use softmax to get probabilities for each class. This is a commonly used and efficient sequence for classifying images of objects.

[Image: Yolo.jpg]

The YOLO algorithm uses a bounding-box method to detect multiple objects in a frame. The image is broken up into a grid; each section of the grid has a specified number of boxes, and each box can detect an object. The box that contains the center of an object is the box responsible for classifying it. For this implementation, I started with the open-source base implementation at https://github.com/ayooshkathuria/pytorch-yolo-v3 and expanded it to work for our project and on the Jetson. The algorithm must be run in conjunction with the DonkeyCar program. To integrate the object detection, we had the detection program write to a file and created a DonkeyCar part that continuously read the file for changes. When the DonkeyCar part reads that the specified object has been found, the car stops moving; if the object leaves the frame, the car starts searching for it again.
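A minimal sketch of such a part is shown below; the class name, flag-file path, and target label are illustrative assumptions rather than our actual code:

    class ObjectStopPart:
        """Illustrative DonkeyCar part: the YOLO process writes the label
        of the most recently detected object to a flag file, and this part
        zeroes the throttle while the target label is present."""

        def __init__(self, flag_path="/tmp/detected_object.txt", target="person"):
            self.flag_path = flag_path
            self.target = target

        def run(self, throttle):
            try:
                with open(self.flag_path) as f:
                    detected = f.read().strip()
            except OSError:
                detected = ""   # file not written yet
            return 0.0 if detected == self.target else throttle

    # Wired into the vehicle loop in manage.py, e.g.:
    #   V.add(ObjectStopPart(), inputs=['throttle'], outputs=['throttle'])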

[Image: Object recogniztion example.PNG]

Navigation

For navigating the car, we integrated all of the sensors into the software. To create functions in manage.py, we used object-oriented programming to add the functions as DonkeyCar parts. We obtained the position of the car from the GPS and the direction of the car from the IMU's magnetometer. To start the navigation, we defined a list of GPS coordinates. Once the car comes within 1 m of the first location, it drives to the next location, repeating until it reaches the final location. We converted the three-dimensional magnetometer reading into a compass bearing, which defines the current direction of the car. After calculating the bearing needed to go from the current GPS coordinate to the desired GPS coordinate, we obtained the bearing error, and from that error we calculated the servo angle needed to steer the car accurately.
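The two core calculations can be sketched as follows (function names and the steering gain are illustrative assumptions, not our exact code):

    import math

    def target_bearing(lat1, lon1, lat2, lon2):
        """Initial bearing in degrees from (lat1, lon1) to (lat2, lon2)."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) \
            - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360.0

    def steering(current_bearing, desired_bearing, gain=1.0 / 90.0):
        """Proportional steering command in [-1, 1] from the signed bearing
        error; the gain is an assumed tuning value."""
        error = (desired_bearing - current_bearing + 180.0) % 360.0 - 180.0
        return max(-1.0, min(1.0, gain * error))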

[Image: Software diagram.PNG]

Another feature of the car is autonomous maneuvering around obstacles. The three ultrasonic sensors in front were used to determine the distances of obstacles and the safest direction to turn. If the middle ultrasonic sensor detected an object closer than the defined threshold, the car would turn right or left toward whichever side's sensor detected the farther object. If the bearing error calls for turning left while there is an object on the left, the car drives straight until the ultrasonic sensors detect a clear path on the left side; the same logic applies when steering right. This prevents the car from steering into a wall.
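That decision rule can be summarized in a short sketch (the distance threshold shown is an assumed value; our tuned threshold is not recorded here):

    THRESHOLD_CM = 50.0   # assumed obstacle-distance threshold

    def avoid(dist_left, dist_mid, dist_right, desired_steering):
        """Override the bearing-based steering command when an obstacle is
        near. Steering convention: negative = left, positive = right."""
        if dist_mid < THRESHOLD_CM:
            # Obstacle ahead: turn toward the side with more free space.
            return 1.0 if dist_right > dist_left else -1.0
        # Don't steer into a wall: hold straight until the desired side clears.
        if desired_steering < 0 and dist_left < THRESHOLD_CM:
            return 0.0
        if desired_steering > 0 and dist_right < THRESHOLD_CM:
            return 0.0
        return desired_steering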


To obtain the position of the car more accurately, we applied sensor fusion between the GPS and the accelerometer using a Kalman filter. The Kalman filter estimates the current position of the car and produces more precise estimates by reducing noise. The filter works by producing estimates of the current state variables along with their uncertainties; once the next measurements arrive, the estimates are updated with a weighted average, with more weight given to values of lower uncertainty. One of the major difficulties in applying the filter is that the IMU samples at a much faster rate than the GPS. To compensate for this, the filter only runs its update step when the GPS records a position. To use this filter, we had to convert our GPS coordinates into meters, where +x is east and +y is north. The accelerometer readings, originally in the car's local frame, were also rotated into this east/north frame using the compass bearing. We did not have time to tune the filter, so we were not able to use it in real time.
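A minimal sketch of this fusion is shown below (illustrative and untuned, not our exact code): the accelerometer, already rotated into the east/north frame, drives the predict step at the IMU rate, while each new GPS fix drives an update.

    import math
    import numpy as np

    R_EARTH = 6371000.0  # metres

    def gps_to_metres(lat, lon, lat0, lon0):
        """Equirectangular approximation: +x east, +y north, relative to a
        fixed origin (lat0, lon0). Adequate over campus-scale distances."""
        x = math.radians(lon - lon0) * R_EARTH * math.cos(math.radians(lat0))
        y = math.radians(lat - lat0) * R_EARTH
        return x, y

    class GpsImuKalman:
        """State [x, y, vx, vy]; accelerations enter as a control input."""

        def __init__(self, q=0.5, r=3.0):
            self.x = np.zeros(4)           # state estimate
            self.P = np.eye(4) * 10.0      # state covariance
            self.Q = np.eye(4) * q         # process noise (needs tuning)
            self.R = np.eye(2) * r         # GPS noise, roughly metres^2
            self.H = np.array([[1., 0., 0., 0.],
                               [0., 1., 0., 0.]])

        def predict(self, ax, ay, dt):
            """Run at the IMU rate with east/north acceleration in m/s^2."""
            F = np.array([[1., 0., dt, 0.],
                          [0., 1., 0., dt],
                          [0., 0., 1., 0.],
                          [0., 0., 0., 1.]])
            B = np.array([[0.5 * dt**2, 0.], [0., 0.5 * dt**2],
                          [dt, 0.], [0., dt]])
            self.x = F @ self.x + B @ np.array([ax, ay])
            self.P = F @ self.P @ F.T + self.Q

        def update(self, x_m, y_m):
            """Run only when a new GPS fix arrives (GPS is slower than IMU)."""
            z = np.array([x_m, y_m])
            innovation = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
            self.x = self.x + K @ innovation
            self.P = (np.eye(4) - K @ self.H) @ self.P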

We plotted our results using the Google Maps API. An example path is shown in the figure below; some parts of the path were recorded while we were carrying the car. A demo of our car is also shown below (https://youtu.be/gzX8t4rh918): the car navigates out of EBU2 toward Geisel while avoiding the wall on its left side.

[Image: Gps map example.PNG]

Method

  • Build a standard Donkey Car
  • Implement GPS navigation on Donkey Car (clone our github repo: https://github.com/lun002/ucsdrobocar-team8-fall2019.git)
  • Install PyTorch and OpenCV on Jetson Nano
  • Implement Object Recognition (same github repo)
  • Specify the object to find (in cam.py, change the obj variable to the desired object; see the snippet after this list)
  • Run Object Recognition in conjunction with Donkey Car
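For example (the variable name obj comes from the write-up; the label value is an illustrative pick from the 80 COCO classes):

    # In cam.py (illustrative):
    obj = "sports ball"   # any COCO class label the model can detect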

Challenges

  • We had a lot of trouble with our IMUs. We had reliability issues communicating with the IMU via I2C: an IMU would work for a week or two and then refuse to work. We had to swap out our second IMU for our third, which is currently working properly. We believe this may be an issue with clock synchronization between the master and slave in the I2C protocol; a possible solution is switching to SPI instead of I2C. Also, the magnetometer was affected by the breadboard's magnetic field, so we had to move the IMU far enough away to receive reliable bearing data.
  • The ultrasonic sensors have trouble recognizing surfaces that are not perpendicular to the transmitter and receiver.
  • It was difficult to integrate the object detection algorithm with Donkey Car.
  • The GPS sensor has a lot of noise.
  • Sometimes when the object detection algorithm is run, the Jetson (when powered by the battery) stops working because the program is computationally expensive and the Jetson struggles to give all input devices proper voltage. A possible solution would be to power all other devices directly from the battery so the Jetson can run the program more smoothly.

Conclusion

Our car is capable of correcting its direction to move toward a specified GPS location to within 5–10 m while avoiding larger objects. From our demo, we can see that the system is not robust enough to fully navigate to multiple locations while avoiding objects; the sensors do not have high accuracy or precision. The ultrasonic sensors only detect objects that are large and perpendicular to the sensors: if a surface is slanted, the sensors do not receive the bounced transmission.

Possible Improvements

Our project is a good base model for finding an object at a specified location. One improvement would be an efficient search algorithm for the roughly 2-meter radius that GPS navigation allows us to reach. We were unable to reach all of our nice-to-have goals, so with more time we would troubleshoot the car's intermittent reliability issues and figure out their cause. We would also change the IMU interface so that it communicates directly with the Jetson rather than through the Arduino. To improve the accuracy of our sensors, we could switch to higher-quality sensors or use a filter; the Kalman filter has been coded but not yet tested or tuned for real-time navigation.

Advice to Future Students

A major issue we faced was hardware robustness. We would advise future students in MAE 148 to check the limitations of their hardware before starting the project. A lot of time would have been saved had we known earlier that some sensors were nearly impossible to interface with reliably enough to obtain meaningful data.

Project Links

https://github.com/lun002/ucsdrobocar-team8-fall2019.git

Dodging the wall - https://youtu.be/lRD92zSSx5I

Not detecting angled surface - https://youtu.be/xyBTUSFFaYg

(makerspace demonstration) not detecting angled surface - https://youtu.be/f8DfLloi28Y

Object detection - https://youtu.be/-6il31v1RSg

Stationary steering adjustment due to obstacle (front & left/right side) - https://youtu.be/s9XvI05Qks8

Stationary steering straightening out due to obstacle (left/right side) - https://youtu.be/YHlxfLSbFQQ

Presentation links:

week 5 - https://drive.google.com/open?id=1QchSli4EXOpSxFwy2i_l7f1Z68p6QdRXwyrJ_8QTbbE

week 6 - https://docs.google.com/presentation/d/1vGwSQzeLkYXMpmU7Kn_5x0xb3aMzNYj2NdTg4rfoCJk/edit?usp=sharing

week 7 - https://drive.google.com/open?id=1sBMRDvMANr-OVY7T5p-vZZ-2EAFNTgY5n32U7fYUrAk

week 8 - https://drive.google.com/open?id=1MVlAMO_dkNPhxMfYVtZlPRQIfeyd2jstKoNCYZFSuDs

week 9 - https://drive.google.com/open?id=1aMqr9O0h5lpU8R9dVc05HrMFVLmPHk8AKA_-Zeq1HRA

final - https://drive.google.com/open?id=1urzwBbltjm8pCd-EJxi44i40arWUigonz4iGDyiBSUw

Resources

SparkFun Bosch BNO080 Arduino Library: https://github.com/sparkfun/Qwiic_IMU_BNO080

HC-SR04 Arduino Library: https://github.com/JRodrigoTech/Ultrasonic-HC-SR04

Sensor fusion (Kalman filter): https://github.com/sharathsrini/Kalman-Filter-for-Sensor-Fusion