2020FallTeam1
Revision as of 06:10, 17 December 2020

== Team Members ==
- Electrical Engineers
  - Benjamin Crawford
  - Heather Huntley
  - Joshua Orozco
- Mechanical Engineers
  - Peggy Tran
Initially, the goal of our project was to design an autonomous vehicle that would use the DonkeyAI framework, OpenCV, and sensors to detect and gather tennis balls in one area, making it easier to collect them after each set. Due to COVID-19, the scope of our project pivoted: we created a ROS package that enables the RoboCar to take images through its external camera, process them through a color mask, and apply proportional-integral-derivative (PID) tuning so that it can read the yellow lines on the outdoor track and drive autonomously. The RoboCar detects the yellow lines, computes a centroid value from them, and steers toward that centroid.
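The PID step described above can be sketched as follows. This is a minimal illustration, not our actual tuning: the gains, the `PID` class name, and the error definition (pixel offset of the line centroid from the image center, normalized to the steering range) are all assumptions for the sake of the example.

```python
class PID:
    """Minimal PID controller: line-position error in, steering correction out.
    Gains are illustrative, not our calibrated values."""
    def __init__(self, kp=1.0, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt=1.0):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(-1.0, min(1.0, out))  # clamp to the steering range of -1 to 1

# Example: yellow-line centroid at x=400 in a 640-pixel-wide frame
frame_width = 640
centroid_x = 400
error = (centroid_x - frame_width / 2) / (frame_width / 2)  # -> 0.25
pid = PID()
steering = pid.step(error)  # -> 0.25 with these gains
```

The clamp at the end matches the -1 to 1 range used by the steering and throttle topics described below.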
== Hardware ==
- Camera Mount
- Base Plate
This base plate was fabricated from 0.25-inch acrylic and holds all of the electrical components of the RoboCar.
- Jetson Nano Developer Kit
- Adafruit PCA9685 16-channel, 12-bit PWM controller
- USB webcam
- High power LEDs
- 11.1V 3S LiPo battery
- Battery voltage sensor
- 433MHz remote relay
- 12V-5V DC-DC voltage converter
The car our team used is built around a Traxxas Ford Fiesta chassis containing a DC motor, an electronic speed controller (ESC), and a servo motor. The following diagram shows how all of the electronics are wired.
Electronic Wiring Schematic
We set up a GitHub repository where the code used by the vehicle is stored. The repository contains a ROS package designed to be used with our car. The following diagram displays the overall structure of the ROS package:
The gist of the package is that two commands accomplish the goal of the program. Running

 roslaunch autocar autocar_pub.launch

creates the publisher node, pub, and starts publisher.py, which feeds frames from the USB webcam into an OpenCV program that detects the lanes and publishes steering and throttle values to the corresponding topics. Then, to start the subscriber, run

 roslaunch autocar autocar_sub.launch

after which the car should begin driving autonomously.
{| class="wikitable"
! Topic Name !! Topic Type !! Contents
|-
| Steering || Float64 || Steering value in the range -1 to 1
|-
| Throttle || Float64 || Throttle value in the range -1 to 1
|}
The publisher is a modified and upgraded version of the simple_cv_racer that was provided to us. It takes an image from the USB webcam, uses OpenCV to apply several color masks to a thin region of the frame, and converts the result into steering and throttle values using a PID controller, publishing those values to their respective topics.

We created three masks: one for the yellow of the center line, one for the orange of the turn lanes, and one for the white of the outer lanes. Each mask is denoised using a kernel of ones so that small patches of stray color are eliminated and do not affect the steering of the car. For each mask, we find the largest area of pixels and take the x coordinate of the center of that area; we also compute a histogram of the amplitudes of the mask and take the position of the highest peak as a second x coordinate. These two x coordinates are then averaged.

If no orange line is detected, which indicates that the car is on a straightaway, the car steers toward the yellow position and locks its wheels so there is less deviation from the straight line. If an orange line is detected, indicating a turn, the car drives toward a weighted average of the yellow and orange coordinates and unlocks its wheels so it has its full range of motion. Unfortunately, we are not doing anything with the white data, although some ideas we had for it are discussed in the future work section.
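The centroid-plus-histogram logic can be sketched in NumPy. This is a simplified illustration, not our exact OpenCV pipeline: the masks are assumed to be already-denoised binary arrays, the centroid is taken over all mask pixels rather than only the largest blob, and mask_x_estimate, target_x, and orange_weight are hypothetical names and parameters.

```python
import numpy as np

def mask_x_estimate(mask):
    """Average of (a) the centroid x of the mask pixels and (b) the peak of a
    per-column histogram of the mask. Returns None if the mask is empty.
    (Simplification: the real pipeline uses the largest area, not all pixels.)"""
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    centroid_x = xs.mean()
    hist_x = int(np.argmax(mask.sum(axis=0)))  # column with the most mask pixels
    return (centroid_x + hist_x) / 2.0

def target_x(yellow_mask, orange_mask, orange_weight=0.5):
    """No orange detected -> straightaway: steer toward the yellow line.
    Orange detected -> turn: steer toward a weighted average of the two."""
    yellow_x = mask_x_estimate(yellow_mask)
    orange_x = mask_x_estimate(orange_mask)
    if orange_x is None:
        return yellow_x
    if yellow_x is None:
        return orange_x
    return (1 - orange_weight) * yellow_x + orange_weight * orange_x
```

The x value returned by target_x would then be converted to an error relative to the image center and fed to the PID controller.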
The subscriber simply subscribes to the two topics and then leverages the Adafruit ServoKit library to communicate with the Adafruit PCA9685, writing the received values to the channels specified in the config object.
== Donkey AI training and AI driving ==
Using the donkeycar framework, with the UCSD GPU cluster to speed up the behavioral modeling process, we were able to complete five autonomous laps on an outdoor track, as you can see in the video below:
5 autonomous laps using DonkeyAI framework
== Driving with ROS architecture ==
Calibration and line following using ROS package
With the ROS package that we developed, we completed five laps (with a little help) on the track in the Warren tents. Here is a video of that:
Template:Video goes here
== Potential Future Work / Unaccomplished Goals ==
We initially planned to use the white lines of the track as guides, or as barriers that the car absolutely would not cross. Unfortunately, we did not have time to implement this in a meaningful way. Given more time, we would have liked to detect both the left and right lines and distinguish between them in order to inform our decision-making process.
This is something we saw other groups implementing, and while we thought it was very cool, we opted to stick with the centroid method in order to cut down on complexity and time spent debugging. Given enough time, however, it is something we would have been very excited to try out.
== Suggestions For Future Teams ==
For software-focused members, it is important to brush up on Python skills during the first half of the quarter to prepare for the second half, when you will be coding your own RoboCar. Hardware-focused members should brush up on relevant tolerance standards in order to minimize the amount of material used when designing the base plate and camera mount. All members should retain as much as they can from the ROS Ignite courses and collaborate on the topics and concepts in order to consolidate their knowledge.
=== Troubleshooting with lighting and colors detected ===
Our advice for future teams is to account for the changes in lighting that the car will experience. This includes accounting for the way the time of day or shadows can change the shade of yellow of the line. We solved this issue by tightening the range of color that the car will detect and follow. By the end of troubleshooting, we also used the color sensitivity of the robot to our advantage: we wrote functions that change the way the car responds at different points on the track. For example, each of the four turning corners of the track is lined with orange tape. When our car detects orange tape, it responds by allowing a higher range of mobility when turning. Detecting orange helped the car stay straight when going down the long sides of the track while also allowing it to turn tightly enough to stay on the track.
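Tightening the detected color range amounts to narrowing the lower and upper bounds of the mask. This is a sketch only: the HSV numbers below are illustrative assumptions (using OpenCV-style hue in 0-179), not our calibrated thresholds, and in_range just mimics a per-pixel cv2.inRange-style check.

```python
import numpy as np

# Hypothetical HSV bounds for the yellow center line (values are assumptions).
LOOSE_LOWER, LOOSE_UPPER = np.array([15, 60, 60]), np.array([40, 255, 255])
TIGHT_LOWER, TIGHT_UPPER = np.array([22, 120, 120]), np.array([32, 255, 255])

def in_range(hsv_pixel, lower, upper):
    """True if every HSV channel falls inside [lower, upper]."""
    p = np.asarray(hsv_pixel)
    return bool(np.all((p >= lower) & (p <= upper)))

# A washed-out, shadowed "yellow" passes the loose bounds but not the tight ones,
# so tightening the range keeps lighting artifacts out of the mask:
shadowed = [18, 80, 90]
bright = [26, 200, 220]
```

The trade-off is that bounds tightened too far can also reject the real line under unusual lighting, so the range has to be tuned on the actual track.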
=== Troubleshooting with stationary car ===
Another piece of advice for future teams is to troubleshoot the car's reaction to the yellow lines by holding it above the track. Holding the car directly above the track takes the tension off the wheels and gives a more accurate representation of how the car reacts to the track. When we first troubleshot the steering this way, holding the car to the left, right, and center of the track, we were able to see that we had flipped the steering direction: when the car was held to the left side of the track, the wheels veered left, taking the car away from the yellow line.
We would like to thank Professors Jack Silberman and Mauricio de Oliveira for their help throughout the quarter and their consistent work in keeping the course engaging and relevant through the difficulties imposed by the COVID pandemic. We would also like to recognize the hard work put in by the TAs, Haoru and Leeor.