2020FallTeam1
Team Members
- Electrical Engineers
- Benjamin Crawford
- Heather Huntley
- Joshua Orozco
- Mechanical Engineers
- Peggy Tran
Project Overview
Initially, the goal of our project was to design an autonomous vehicle that would use the donkeycar AI framework, OpenCV, and sensors to detect tennis balls and gather them in one area, making them easier to collect after each set. Due to COVID-19, the scope of our project pivoted to creating a ROS package that lets the RoboCar drive autonomously around the outdoor track: images from the car's camera are passed through a color mask to isolate the yellow track lines, the centroid of the masked pixels is computed from the RGB values, and a proportional-integral-derivative (PID) controller steers the car to follow that centroid.
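As a rough sketch of this pipeline (the HSV thresholds and PID gains shown here are illustrative placeholders, not the values we actually tuned on the car):

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Rough HSV bounds for a yellow track line; these are placeholder
# values and must be tuned for the actual lighting conditions.
YELLOW_LO = np.array([20, 100, 100])
YELLOW_HI = np.array([30, 255, 255])

def line_centroid(bgr_frame):
    """Mask the yellow line and return the centroid's x-offset from image center."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, YELLOW_LO, YELLOW_HI)
    m = cv2.moments(mask)
    if m["m00"] == 0:            # no yellow pixels found in this frame
        return None
    cx = m["m10"] / m["m00"]     # centroid x-coordinate in pixels
    return cx - bgr_frame.shape[1] / 2.0   # offset from image center

class PID:
    """Minimal PID controller that turns the centroid offset into a steering command."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
</syntaxhighlight>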
Hardware
Mechanical Design
- Camera Mount
- Base Plate
This base plate was cut from 0.25-inch acrylic and holds all of the RoboCar's electrical components.
Electronic Components
- Jetson Nano Developer Kit
- Adafruit PCA9685
  - 16-channel, 12-bit PWM controller
- USB webcam
- High power LEDs
- 11.1V 3S LiPo battery
- Battery voltage sensor
- 433MHz remote relay
- 12V-5V DC-DC voltage converter
The car our team used is built around a Traxxas Ford Fiesta chassis containing a DC motor, an electronic speed controller (ESC), and a steering servo. The following diagram shows how all of the electronics are wired.
Electronic Wiring Schematic
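For illustration, a minimal Python sketch of commanding the steering servo and ESC through the PCA9685 might look like the following. The use of the Adafruit ServoKit library and the channel assignments are assumptions for this sketch; see the schematic above for the actual wiring.

<syntaxhighlight lang="python">
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)   # the PCA9685 exposes 16 PWM channels

STEER_CH = 0      # hypothetical: steering servo on channel 0
THROTTLE_CH = 1   # hypothetical: ESC on channel 1

def set_steering(angle_deg):
    """Command the steering servo; 90 degrees is center on a standard hobby servo."""
    kit.servo[STEER_CH].angle = max(0, min(180, angle_deg))

def set_throttle(value):
    """Command the ESC as a continuous servo; value is clamped to [-1, 1]."""
    kit.continuous_servo[THROTTLE_CH].throttle = max(-1.0, min(1.0, value))
</syntaxhighlight>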
Software
We set up a GitHub repository where the code used by the vehicle is stored. The repository contains a ROS package designed to be used with our car. The following diagram displays the overall structure of our ROS package, Autonomous_ROS_Racer.
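As a hedged sketch of what the core line-following node in such a package might look like (the topic names, message types, and gains below are illustrative assumptions, not necessarily what Autonomous_ROS_Racer uses):

<syntaxhighlight lang="python">
#!/usr/bin/env python
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

class LineFollower:
    def __init__(self):
        self.bridge = CvBridge()
        # Assumed topic names for this sketch.
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/camera/image_raw", Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        # Mask the yellow line and locate its centroid.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([30, 255, 255]))
        m = cv2.moments(mask)
        cmd = Twist()
        cmd.linear.x = 0.3               # constant forward speed (placeholder)
        if m["m00"] > 0:
            err = m["m10"] / m["m00"] - frame.shape[1] / 2.0
            cmd.angular.z = -0.005 * err  # simple proportional steering
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("line_follower")
    LineFollower()
    rospy.spin()
</syntaxhighlight>

Publishing a Twist on /cmd_vel is a common ROS convention; the actual package may instead command the PWM controller directly.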
Milestones
Donkey AI training and AI driving
Using the donkeycar framework, with the UCSD GPU cluster to speed up training of the behavioral model, we were able to complete five autonomous laps on an outdoor track, as shown in the video below:
[Video: five autonomous laps on the outdoor track]
Driving with ROS architecture
Calibration and line following using ROS package
With the ROS package that we developed, we completed five laps (with a little help) on the track in the Warren tents, shown in the video below:
[Video: five laps on the Warren tents track using the ROS package]
Future Suggestions
General Suggestions
Software-focused members should brush up on their Python skills during the first half of the quarter to prepare for the second half, when they will be coding their own RoboCar. Hardware-focused members should brush up on relevant tolerance standards in order to minimize the amount of material used when designing the base plate and camera mount. All members should retain as much as they can from the ROS Ignite coursework and collaborate on its topics and concepts in order to consolidate their knowledge.
Troubleshooting with lighting and colors detected
Our advice for future teams is to account for the changes in lighting that the car will experience; the time of day and shadows can both change the apparent shade of yellow of the line. We solved this issue by tightening the range of colors that the car will detect and follow. By the end of troubleshooting we had also turned the car's color sensitivity to our advantage: we wrote functions that change how the car responds at different points on the track. For example, each of the four corners of the track is lined with orange tape, and when the car detects orange it allows itself a wider steering range. This let the car stay straight down the long sides of the track while still turning tightly enough at the corners to stay on the track. A rough sketch of this idea appears below.
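A minimal sketch of this corner handling, assuming hypothetical HSV bounds and a hypothetical pixel-count threshold:

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Placeholder HSV bounds for the orange corner tape; tune for real lighting.
ORANGE_LO = np.array([5, 120, 120])
ORANGE_HI = np.array([15, 255, 255])

def steering_limit(bgr_frame, straight_limit=0.2, corner_limit=0.6,
                   min_orange_pixels=500):
    """Return a wider steering clamp whenever orange corner tape is visible."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    orange = cv2.inRange(hsv, ORANGE_LO, ORANGE_HI)
    in_corner = cv2.countNonZero(orange) > min_orange_pixels
    return corner_limit if in_corner else straight_limit

# Usage: clamp the controller output with the current limit, e.g.
#   limit = steering_limit(frame)
#   steer = max(-limit, min(limit, pid_output))
</syntaxhighlight>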
Troubleshooting with stationary car
Another piece of advice for future teams is to troubleshoot the car's reaction to the yellow lines by holding it above the track. Holding the car directly above the track takes the tension off the wheels and gives a more accurate picture of how the car reacts to what it sees. When we first troubleshot the steering this way, holding the car to the left, right, and center of the track, we discovered that we had flipped the steering direction: when the car was held to the left side of the track, the wheels veered left, taking the car away from the yellow line.
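Once a flip like this is identified, the fix is a single sign change on the steering error; for example (STEER_SIGN and the gain here are hypothetical names for illustration):

<syntaxhighlight lang="python">
STEER_SIGN = -1   # flip to +1 if the wheels veer away from the line

def steering_command(centroid_offset, kp=0.005):
    # Positive offset means the line is to the right of image center.
    return STEER_SIGN * kp * centroid_offset
</syntaxhighlight>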
Acknowledgements
We would like to thank Professors Jack Silberman and Mauricio de Oliveira for their help throughout the quarter and their consistent work in keeping the course engaging and relevant throughout the difficulties imposed by the COVID pandemic. We would also like to recognize the hard work put in by the TAs Haoru and Leeor.