Our project's main objective is to make our car into an autonomous mailman.
Team 4 Members
1. Autonomous Driving
Have the vehicle drive autonomously on a track by training it to mimic how a person drives the same track.
2. Color Recognition
In order to specify a designated location for the autonomous vehicle to stop and deliver its package, the team decided to use color recognition. In practice, the vehicle drives around the track until it reaches a location marked with a specific color; recognizing this color marker causes the car to stop automatically.
3. Package Delivery
Design and program a mechanical arm whose purpose is to "deliver a package". The mechanical arm activates when the vehicle recognizes a color marker: the vehicle stops, the arm activates, and it drops a package at the designated location.
4. Resuming Driving After Delivery
After delivering the package, the vehicle should recognize that the delivery is complete and resume autonomous driving along the track.
1. Number Recognition
After successfully implementing color recognition, the team plans to program the vehicle to recognize numbers instead of colors. Number recognition will be more difficult, but will allow the vehicle to distinguish many more unique markers.
Race car chassis frame
Raspberry Pi B+
A 3D-printed camera mount that holds the Pi camera in a fixed position while the vehicle is operating, so the camera records data from a constant point of view. The mount is made of two separate 3D-printed parts, a stand and the camera mount itself, held together at a rotatable joint with an M3 nut and bolt. This design allows the camera's angle to be adjusted easily without fabricating another mount.
Raspberry Pi Case
A 3D-printed Pi case that protects the Raspberry Pi in the event of a crash. The design for this case was taken from a 3D-printing file-sharing site at ----. The file was then modified to fit the Pi.
This mechanical arm was used to "deliver" packages from the vehicle. It was composed of a TGY1501 high-torque servo motor and three 3D-printed parts: a motor mount, an arm, and a basket. The motor mount attaches the servo motor to the base plate: the motor was screwed into the mount, and the bottom of the mount was layered with velcro to stick to the base plate. The arm connects to the motor's output shaft through holes that line up with the shaft's own connector and are constrained together. The basket, which carries the deliverable package, was connected to the other end of the arm by two M3 bolts. The full mechanical arm was designed so that when the vehicle reaches a designated marker, the arm rotates 70 degrees clockwise and empties the contents of the basket at the designated location.
<embedvideo service="youtube" description="5 Indoor Autonomous Laps">https://youtu.be/9WRHqHZoUWc</embedvideo>
We noticed several things that made training easier:
- Drive at a constant throttle. We set our max throttle scale to about 0.25 and pushed the joystick all the way forward, so the recorded throttle was always the same. You do have to keep the car slow enough to react to turns, so the car is slower overall, but driving laps consistently became far easier and our data was better for it.
- Swerve the straights. If you never make steering inputs on the straights, the car has no examples of correcting itself when it drifts off course there. Swerving gives the model a plan for correcting itself.
- Use a lot of data. We had many separate tubs of data from different days of driving, and models made from only one or two tubs were not as effective as the one trained on all of our data combined.
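The constant-throttle setup above can be pinned in DonkeyCar's configuration. This is a hypothetical `myconfig.py` fragment; `JOYSTICK_MAX_THROTTLE` is the parameter name used in recent DonkeyCar releases, and 0.25 is the value quoted above (the team's exact settings were not recorded):

```python
# myconfig.py (hypothetical fragment)
# Cap the joystick's forward range so that holding the stick fully
# forward always records the same, modest throttle value.
JOYSTICK_MAX_THROTTLE = 0.25
```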
Our best model had:
<embedvideo service="youtube" description="3 Outdoor Autonomous Laps">https://youtu.be/3yKMGWvzTWo</embedvideo>
We drove the outdoor laps the same way we drove the indoor laps. However, since the outdoor track was about 2-3 times wider than the indoor one, we were able to raise our throttle scale to about 0.4 and still control the car.
One thing we found different about the outdoor track was that sunlight made the camera image extremely bright. The most glaring consequence was that the track's tape outline was no longer distinguishable from the ground. To get around this, we drove early in the morning, when the building's shadow covered the track, or when the sky was overcast. However, we never tried to gather data or train while the sun was out, so we don't know for sure whether direct sun was a problem for training.
We use OpenCV to detect the color signal on the track. It is a simple detector that finds a green color box in the camera image feed, and it processes fairly fast (compared to other techniques such as a classifier built with a convolutional neural network).
Since a color can look different in different environments and from different sources (paper, LED, screen), we use the HSV (hue, saturation, value) color model to define the range of color that counts as green.
The green color "range" (lower/upper bound) is defined as:
Note that this may not be the optimal range for detection, but it worked well for us and can be fine-tuned for better results. With a wider range, the detector catches the color box more easily but risks misdetection, for example classifying blue as green. With a narrower range, the detector sometimes misses the green box because of poor image quality from the camera.
We also add an extra filter to remove noise from the image for a better result.
The OpenCV function findContours gives us the detected color regions, and bitwise_and creates a masked image.
Once we can detect the green region in the image, we need a condition for when the car should stop. We calculate the area of the green region and check the position of the top-left corner of its bounding box. When the color marker is close to the car, the area is large and the x-coordinate of the top-left corner is small. After testing several times, we set the stopping condition as: area > 350 and x < 20. When the stopping condition is met, we set the car's throttle to 0, so the car stops.
Here is the code we use to check the stopping condition:
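A minimal sketch of that check, assuming the detector supplies the contour area and the bounding box's top-left x-coordinate (the threshold values are the ones quoted above):

```python
def should_stop(area, x, min_area=350, max_x=20):
    """Stop when the green region is large (i.e. close to the car) and
    near the left edge of the frame: area > 350 and x < 20."""
    return area > min_area and x < max_x

def apply_stop(throttle, area, x):
    """Zero the throttle when the stopping condition is met."""
    return 0.0 if should_stop(area, x) else throttle
```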
The team was able to successfully integrate a color recognition system as well as a mechanical arm that allows the vehicle to deliver a package after stopping. The video below demonstrates the project's progress. The vehicle was able to drive around the track autonomously until it successfully recognized a block of the specified color from a minimum distance. The vehicle would then stop and allow the mechanical arm to deliver the package. While the vehicle was able to recognize a specific color and stop to deliver a package, it was ultimately unable to resume autonomous driving after the delivery. <embedvideo service="youtube" description="Project Results Video">https://www.youtube.com/watch?v=Cxs0iYWSAdQ</embedvideo>
1. The vehicle is unable to consistently stop once it recognizes the specified block of color. The team believes this problem is a result of slow processing of a large amount of data as well as dropped frames from the camera.
2. In order to account for the slow processing, the vehicle's throttle must be limited so that it drives slowly enough to process the camera images and recognize the specified block of color.
3. Currently, the arm is limited to a simple swinging motion at a fixed speed. Attempts to change the arm motor's speed often caused the autonomous driving program to lag, interrupting its functionality.
4. The vehicle is currently unable to resume driving after delivering a package. This was mostly due to investing a majority of the time in developing the color recognition system and the stopping mechanism built on it. Currently, the color recognition program is written such that the throttle is permanently set to zero once the camera detects the specified color at an appropriate area. The code was written this way because, during tests, the vehicle would stop for a moment after detecting the color but quickly resume driving, since it no longer saw a color block of the correct size after passing it. With more time and further work, the program could most likely have been corrected to let the vehicle resume autonomous driving.
5. The steering drive train's connection to the servo output shaft would often disconnect, leaving the vehicle unable to steer until it was reconnected.
6. The vehicle's wheels had a natural offset that caused the vehicle to veer to the right while driving. To combat this, the team would often drive in a zig-zag pattern along straight paths in an attempt to train the model to correct itself when veering off the track.
Future Recommended Work
The first recommended future work is to implement the code that would allow the vehicle to return to autonomous driving after delivering the package. The code would most likely use the mechanical arm's return to its neutral position as the condition for resuming driving.
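A hypothetical sketch of that fix: instead of forcing the throttle to zero forever, a small state machine could hold the car only while the arm delivers and then hand back the throttle. Here the arm's return to neutral is approximated by a fixed delivery time; a real servo-position check could replace it, and none of these names come from the team's actual code:

```python
import time

class DeliveryController:
    """Hold the car while the arm swings, then resume driving."""
    DRIVING, DELIVERING = range(2)

    def __init__(self, delivery_seconds=3.0):
        self.state = self.DRIVING
        self.delivery_seconds = delivery_seconds
        self._stopped_at = None

    def update(self, marker_detected, throttle, now=None):
        """Return the throttle to apply for this camera frame."""
        now = time.monotonic() if now is None else now
        if self.state == self.DRIVING and marker_detected:
            self.state = self.DELIVERING
            self._stopped_at = now
        if self.state == self.DELIVERING:
            if now - self._stopped_at < self.delivery_seconds:
                return 0.0             # hold still while the arm swings
            self.state = self.DRIVING  # arm back at neutral: resume
        return throttle
```

A real implementation would also need a cooldown after resuming, so the same marker does not immediately re-trigger a stop, which is the re-detection problem described in challenge 4 above.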
Another recommendation is to implement number recognition as the primary form of address recognition instead of color recognition. The original project was meant to use number recognition, and we have written a house number recognizer that can detect and classify digits. The model is a RetinaNet detector with a ResNet50 backbone, trained on the SVHN dataset. Here is the link to the retinanet repo: https://github.com/fizyr/keras-retinanet (the repo has a good explanation of the model and how to use it). Many other models would also work well, and some could run faster; for example, YOLO could be a good choice for fast recognition. Here is the GitHub repo that uses ResNet, which we referred to for our attempt: https://github.com/penny4860/retinanet-digit-detector
This is the result of a prediction from our recognizer: it successfully detects the "1" in the image with a good confidence score.
However, it runs very slowly (more than 10 seconds to process a high-resolution image on a laptop CPU). It would be faster with the smaller images from the car's camera and a GPU to run the classifier, but it does not seem feasible on the Raspberry Pi, which has very limited computing power. Using AWS or another cloud GPU computing platform could solve this problem: we could send each image to a cloud computing cluster and get the result back to signal the car. Since the latency to AWS (US West EC2) is reasonably low, and internet connections keep getting faster (5G is coming), we believe this could be a good solution in the future.