2019SpringTeam4

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Introduction

Our project's main objective is to make our car into an autonomous mailman.


Team 4 Members

Kelvin Liu
Wesley Cheung
Zifeng Chen

Project Objectives

Primary Goals
1. Autonomous Driving
Have the vehicle drive autonomously on a track by training it to mimic how a person drives it.
2. Color Recognition
In order to specify a designated location for the autonomous vehicle to stop and deliver its package, the team decided to use color recognition. In practice, the vehicle drives around the track until it comes to a location marked with a specific color; recognizing this color marker causes the car to stop automatically.
3. Package Delivery
Design and program a mechanical arm whose purpose is to "deliver a package". When the vehicle recognizes the color pattern, it stops, and the mechanical arm activates and drops a package at the designated location.
4. Resuming Driving After Delivery
After delivering the package, the vehicle should recognize that the delivery is complete and resume autonomous driving along the track.
Secondary Goals
1. Number Recognition
After successfully implementing color recognition into the program, the team plans to program the vehicle to recognize numbers instead of colors. Having the vehicle recognize numbers instead of colors will be more difficult, but will allow the vehicle to recognize more unique patterns.

Circuitry

Circuitrysp19t4.PNG
The above block diagram illustrates the connections between all the electronic components on our car.

Vehicle Components

Car sp19t4.PNG Image of Vehicle

Base Vehicle

Mechanical Components
Race car chassis frame
Four wheels
Motor
Servo
Drive Train

Electrical Components
Raspberry Pi B+
PWM board
Pi Camera (PiCam)
Wireless Relay
LiPo Battery


Custom Components

Baseplate
A custom acrylic baseplate that was laser-cut and used to mount other mechanical and electrical components to the vehicle.
Base Plate sp19t4.PNG

Camera Mount
A 3D-printed camera mount holds the Pi camera in a fixed position while the vehicle is operating, which allowed the camera to record data from a constant point of view. The mount is made of two separate 3D-printed parts, a stand and the camera mount itself, held together at a rotatable joint with an M3 nut and bolt. This design allows the camera's angle to be adjusted easily without having to fabricate another camera mount.
Camera Mount sp19t4.PNG

Raspberry PI Case
A 3D-printed case that protects the Raspberry Pi in the event of a crash. The design for this case was taken from a 3D-printing file-sharing site at ----. Modifications were then made to the file to fit the Pi.

Mechanical Arm
This mechanical arm was used to "deliver" packages from the vehicle. It is composed of a TGY1501 high-torque servo motor and three 3D-printed parts: a motor mount, an arm, and a basket. The motor mount attaches the servo motor to the base plate; the motor is screwed into the mount, and the bottom of the mount is layered with velcro to stick to the base plate. The arm connects to the motor's output shaft through holes that line up with the motor's output-shaft connector and are then fastened together. The basket is connected to the other end of the arm with two M3 bolts and carries the deliverable package. The full mechanical arm was designed so that when the vehicle reaches a designated marker, the arm rotates 70 degrees clockwise and empties the contents of the basket at that location.
Deliveryarm.PNG
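Because the arm is just a hobby servo, the delivery motion amounts to commanding two pulse widths. Below is a minimal sketch of how such a sweep could be driven, assuming the arm servo hangs off a spare channel of the PCA9685-style PWM board; the channel number and pulse counts are placeholders, not our measured values.
<pre>
# Sketch only: sweep a delivery-arm servo on a spare PCA9685 channel.
# Channel and pulse counts are placeholders; tune them for the real servo and linkage.
import time
import Adafruit_PCA9685

ARM_CHANNEL = 2        # assumed spare channel on the PWM board
NEUTRAL_PULSE = 380    # 12-bit tick count for the arm's resting position
DELIVER_PULSE = 270    # tick count roughly 70 degrees away from neutral

pwm = Adafruit_PCA9685.PCA9685()
pwm.set_pwm_freq(60)   # standard hobby-servo update rate

def deliver_package():
    pwm.set_pwm(ARM_CHANNEL, 0, DELIVER_PULSE)   # rotate the arm to dump the basket
    time.sleep(1.5)                              # give the package time to drop
    pwm.set_pwm(ARM_CHANNEL, 0, NEUTRAL_PULSE)   # return the arm to neutral
</pre>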

Autonomous Laps

Indoor Laps
<embedvideo service="youtube" description="5 Indoor Autonomous Laps">https://youtu.be/9WRHqHZoUWc</embedvideo>

We noticed several things that made training easier:

  • Drive at a constant throttle. We set our maximum throttle scale to about 0.25 and pushed the joystick all the way forward, so the autopilot throttle is always the same (see the configuration sketch after this list). Driving this slowly means you have time to react to turns, so the car is slower overall, but it was far easier to drive the laps consistently, which made our data better.
  • Swerve on the straights. If you never give steering inputs on the straights, the car will not learn to recognize when it is going off course there. Swerving gives the car a plan to correct itself.
  • Use a lot of data. We had many separate tubs of data from different days of driving, and models made from only one or two tubs were not as effective as the one where we simply threw all of our data at it.
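The constant-throttle trick above comes down to one Donkeycar setting. Here is a minimal sketch of how that cap might be set in myconfig.py; the parameter name can vary between Donkeycar versions, so treat it as an assumption rather than our exact configuration.
<pre>
# myconfig.py (sketch): cap the joystick throttle so full forward deflection
# always produces the same modest speed. Parameter names vary by Donkeycar version.
JOYSTICK_MAX_THROTTLE = 0.25   # indoor track; we raised this to roughly 0.4 outdoors
</pre>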

Our best model had:
63,695 entries
val_loss: 0.49858
val_angle_out_acc: 0.8334
val_throttle_out_acc: 0.9140


Outdoor Laps
<embedvideo service="youtube" description="3 Outdoor Autonomous Laps">https://youtu.be/3yKMGWvzTWo</embedvideo>
We drove the outdoor laps the same way we drove the indoor laps. However, since the outdoor track was about 2-3 times wider than the indoor track, we were able to raise our throttle scale to about 0.4 and still control the car.

One difference we found between the outdoor track and the indoor track was that sunlight made the camera image extremely bright. The most glaring consequence was that the track's tape outline was no longer distinguishable from the ground. To get around this, we drove early in the morning, when the building's shadow covered the track, or when the sky was overcast. However, we never tried to gather or train data while the sun was out, so we don't know for sure whether direct sun was actually a problem for training.

OpenCV

Cv.png
We use OpenCV to detect the color signal on the track. It is a simple detector that looks for a green color box in the camera image feed, and it processes frames fairly fast compared to other techniques such as a classifier built with a convolutional neural network.
Since a color can look different in different environments and from different sources (paper, LED, screen), we use the HSV (hue, saturation, value) color model to define the range of color to be detected (i.e., considered green).
The green color "range" (lower/upper bound) is defined as:
Green.PNG
Note that this might not be the optimal range for detection, but it seems to work quite well for us and can be fine-tuned to achieve a better result. With a wider range, the detector catches the color box more easily but can produce misdetections, for example classifying blue as green. On the other hand, with a narrower range the detector sometimes misses the green box because of poor image quality from the camera.
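As a minimal sketch of this masking step, the range check can be done with cv2.inRange; the HSV bounds below are placeholders standing in for the tuned values shown in the image above.
<pre>
import cv2
import numpy as np

# Placeholder HSV bounds for "green"; the real values come from tuning.
GREEN_LOWER = np.array([40, 70, 70], dtype=np.uint8)
GREEN_UPPER = np.array([80, 255, 255], dtype=np.uint8)

def green_mask(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)    # convert camera frame to HSV
    return cv2.inRange(hsv, GREEN_LOWER, GREEN_UPPER)   # 255 where the pixel counts as green
</pre>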
We also add an extra filter to remove noise from the image for a better result.
Filter.PNG
The OpenCV function findContours gives us the detected color region, and bitwise_and creates a masked image.
Green2.PNG
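Here is a sketch of the filtering and contour step, assuming a simple erode/dilate pair as the noise filter (the filter we actually used is shown in the image above) and taking the color mask from the previous step as input.
<pre>
import cv2
import numpy as np

KERNEL = np.ones((5, 5), np.uint8)   # assumed structuring element for the noise filter

def find_green_regions(frame_bgr, mask):
    """Clean the binary green mask, then return (contours, masked image)."""
    mask = cv2.erode(mask, KERNEL, iterations=2)     # drop small speckles of noise
    mask = cv2.dilate(mask, KERNEL, iterations=2)    # grow the surviving blob back
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[-2]                             # works for OpenCV 3 and 4 signatures
    masked = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    return contours, masked
</pre>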
Once we can detect the green color region in the image, we need a condition for when the car should stop. We calculate the area of the green region and check the position of the top-left corner of the region's bounding box. When the color marker is close to the car, the area should be large and the x-position of the top-left corner should be small. After testing several times, we set the stopping condition as: area > 350 and x < 20. When the stopping condition is met, we set the car's throttle to 0, so the car stops.
Area.PNG
Here is the code that checks the stopping condition:
Areacode.PNG
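Since our actual check lives in the screenshot above, here is a hedged reconstruction of what that condition could look like, using the bounding box of the largest detected contour; the two thresholds are the values quoted above, and everything else is an assumption.
<pre>
import cv2

AREA_THRESHOLD = 350   # minimum green area, from our tests
X_THRESHOLD = 20       # maximum x of the region box's top-left corner

def should_stop(contours):
    if not contours:
        return False
    largest = max(contours, key=cv2.contourArea)   # biggest green region in the frame
    x, y, w, h = cv2.boundingRect(largest)         # top-left corner of the region box
    return cv2.contourArea(largest) > AREA_THRESHOLD and x < X_THRESHOLD

# In the drive loop (sketch): once the condition is met, latch the throttle at zero
# so the car stays stopped while the arm delivers the package.
</pre>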

Project Results

The team was able to successfully integrate a color recognition system as well as a mechanical arm that allows the vehicle to deliver a package after stopping. The video below demonstrates the project progress: the vehicle drives around the track autonomously until it recognizes a block of the specified color from a minimum distance, then stops and lets the mechanical arm deliver the package. While the vehicle was able to recognize a specific color and stop to deliver a package, it was ultimately unable to resume autonomous driving after delivering the package. <embedvideo service="youtube" description="Project Results Video">https://www.youtube.com/watch?v=Cxs0iYWSAdQ</embedvideo>

Project Challenges

1. The vehicle was unable to consistently stop once it recognized a specific block of color. The team believes this was a result of slow processing of a large amount of image data as well as dropped frames from the camera.
2. To account for the slow processing, the vehicle's throttle must be limited so that it drives slowly enough to process the camera images and recognize the specified block of color.
3. Currently, the arm is limited to a simple swinging motion at a set speed. Attempting to change the speed of the arm motor would often cause the autonomous driving program to lag and interrupt its functionality.
4. The vehicle is currently unable to resume driving after delivering a package. This is mostly because the majority of the time was invested in developing the color recognition system and the way the stopping mechanism works with it. Currently, the color recognition program is written such that the throttle is permanently set to zero once the camera detects the specified color with a sufficient area. The code was written this way because, during tests, the vehicle would often stop for a moment after detecting the color but resume driving shortly after, since it would no longer see a large enough color block once it had driven past it. More time and further work would most likely have been sufficient to correct the program and allow the vehicle to resume autonomous driving.
5. The steering drivetrain connection to the servo output shaft would often disconnect, leaving the vehicle unable to steer until it was reconnected.
6. The vehicle's wheels had a natural offset that made the vehicle veer to the right while driving. To combat this, the team would often drive in a zig-zag pattern along straight paths in an attempt to train the model to correct itself when veering off the track.

Future Recommended Work

The first recommended future work would be to implement code that allows the vehicle to return to autonomous driving after delivering the package. This code would most likely use the mechanical arm's return to its neutral position as the condition for resuming driving.
Another recommendation would be to implement number recognition as the primary form of address recognition instead of color recognition. The original project was supposed to use a number recognition system, and we have written a house-number recognizer that can detect and classify digits. The model used is a RetinaNet with a ResNet-50 backbone, trained on the SVHN dataset. Here is the link to the retinanet repo: https://github.com/fizyr/keras-retinanet (the repo contains a good explanation of the model and how to use it). Many other models would also work well and could run faster; for example, YOLO could be a good choice for fast recognition. Here is one GitHub repo that uses ResNet, which we referred to for our attempt: https://github.com/penny4860/retinanet-digit-detector
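For reference, here is a minimal inference sketch following the keras-retinanet examples, assuming a converted ResNet-50 inference model trained on SVHN has been saved locally; the file names and score threshold are placeholders, so check the fizyr repo for the authoritative usage.
<pre>
import numpy as np
from keras_retinanet.models import load_model
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image

# Placeholder path to a converted inference model trained on SVHN digits.
model = load_model('snapshots/svhn_resnet50_inference.h5', backbone_name='resnet50')

image = read_image_bgr('house_number.jpg')   # placeholder test image
image = preprocess_image(image)              # normalize the way the backbone expects
image, scale = resize_image(image)

boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
boxes /= scale                               # map boxes back to the original image size

for box, score, label in zip(boxes[0], scores[0], labels[0]):
    if score < 0.5:                          # placeholder confidence threshold
        break                                # detections come back sorted by score
    print('digit %d with confidence %.2f at %s' % (label, score, box))
</pre>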
Below is the result of a prediction from our recognizer; it successfully detects the "1" in the image with a fairly high confidence score.
Number.PNG
However, it runs very slowly (more than 10 seconds to process a high-resolution image on a laptop CPU). It would be faster if we used the smaller images from the car's camera and ran the classifier on a GPU, but there does not seem to be a way to make it work well on the Raspberry Pi, which has very limited computing power. Using AWS or another cloud GPU computing platform could solve the problem: we could send images to the cloud computing cluster and get the results back to signal the car. Since the latency to AWS (US West EC2) is reasonably low, and internet connections keep getting faster (5G is coming), we believe this could be a good solution in the future.