- 1 Project Team Members
- 2 Project Overview
- 3 Hardware
- 4 Software
- 5 Access Over Internet
- 6 Milestones
- 7 Future Improvements
- 8 Advice for Future Teams
- 9 Acknowledgements
Project Team Members
Nicolas Cridlig - Electrical Engineering
Theo Duffaut - Mechanical Engineering
Marine Sanosyan - Mathematics-Applied Science
Nathan Zager - Mechanical Engineering
Project Overview
Our project sought to deliver an autonomous driving vehicle using two approaches. First, we used the DonkeyAI framework together with OpenCV to achieve autonomous navigation around the outdoor track. We then pivoted to building a ROS package from scratch, using several nodes to import camera images, process them into a color mask, and feed the resulting position signal through a proportional-integral-derivative (PID) controller. The PID output is then sent to the PWM board through a PWM driver ROS package. This course taught us how to build and troubleshoot ROS programs as well as how to work with OpenCV and image processing.
Hardware
The camera sits between the mount and its case, which holds it firmly in place once the casing is screwed on.
Camera Mount Piece
We designed the base plate to firmly hold all of the electronic parts and allow efficient wire management. The plate was laser cut from a 0.25-inch sheet of acrylic.
- Jetson Nano and microSD - Computer
- PWM Board - Servo motor and ESC Control
- USB Webcam - Image view
- Red and Blue LED -
- Voltage Checker - Low voltage alert
- Emergency Relay Switch - Emergency stop
- 5V Regulator
- Electronic Speed Controller (ESC)
Software
OpenCV Donkey AI script
The OpenCV script provided by Tawn Kramer on GitHub can be found here: https://gist.github.com/tawnkramer/bce35b848692501a77c94092d019f97e
Written in Python, it is able to follow lines of a specific color. To do this, the program declares several functions, the most important being get_i_color and drive, which are executed sequentially in the run loop. To detect the lines, get_i_color defines a target pixel color with minimum and maximum HSV thresholds and masks out every pixel outside that range. The NumPy library is then used to build a histogram of the surviving (yellow) pixels across the horizontal axis of the frame, and whichever histogram bin has the most pixels is set as the target. The drive function uses the target location and a PID controller to steer and throttle the robot so as to keep the target in the center of its view.
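The masking-and-histogram step can be sketched as follows. This is a minimal NumPy-only illustration of the idea (the real script uses OpenCV's cv2.inRange for the mask); the function name mirrors get_i_color, but the bin count and thresholds here are illustrative:

```python
import numpy as np

def get_i_color(img_hsv, lo, hi, n_bins=10):
    """Keep pixels inside the HSV window [lo, hi], histogram the
    surviving pixels across the horizontal axis, and return the
    index of the densest bin as the steering target."""
    lo, hi = np.array(lo), np.array(hi)
    # Boolean mask: True where all three HSV channels are in range
    mask = np.all((img_hsv >= lo) & (img_hsv <= hi), axis=-1)
    col_counts = mask.sum(axis=0)  # matching pixels per image column
    # Sum columns into n_bins equal-width horizontal bins
    edges = np.linspace(0, col_counts.size, n_bins, endpoint=False).astype(int)
    hist = np.add.reduceat(col_counts, edges)
    return int(np.argmax(hist))
```

For example, a frame whose in-range pixels all sit about three quarters of the way across the image returns a bin index near the right edge, which drive then steers toward.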
Our ROS package uses OpenCV in a similar way to the Donkey script, but with a two-script system to move our car autonomously. We found that for this application, this simple structure was the most fitting and the easiest to implement. Our ROS package, named CVAI, is documented here: https://github.com/nzager14/robocar. The first script, line_tracker.py, extracts images from the camera, filters them, and feeds the measured line position to a PID controller, which in turn publishes steering and throttle values to two dedicated topics (/steering_value and /throttle_value). In the second script, pub_PWM.py, a subscriber continuously reads the values published on the steering and throttle topics, remaps them to the appropriate PWM pulse values, and finally sends those values to the motor/servo drivers using the Adafruit_PCA9685 package (the same drivers used by the Donkey package).
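The remapping step in pub_PWM.py boils down to a linear map from the normalized steering range to the PCA9685's 12-bit tick counts. A minimal sketch of that logic, with illustrative center/span tick values (the real numbers depend on the servo and ESC calibration, and the function names here are our own):

```python
def remap(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp value to [in_lo, in_hi], then map it linearly
    onto [out_lo, out_hi]."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def steering_to_ticks(steering, center=380, span=100):
    """Map a steering command in [-1, 1] to a PCA9685 tick count.
    center/span are placeholder calibration values."""
    return int(round(remap(steering, -1.0, 1.0, center - span, center + span)))
```

The resulting tick count would then be written to the servo channel with the PCA9685 driver's set_pwm call; clamping first keeps a misbehaving PID output from commanding an out-of-range pulse.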
A schematic outlining the entire signal from the camera to the motors as it is processed by our ROS package.
Image processing was primarily conducted using OpenCV and the __________ tool on robotigniteacademy, which conveniently already provided a GUI we could use to fine-tune our HSV color values. The mask produced by our HSV tuning values is reduced to a centroid, and the simple_pid package is used to drive the error between this measured centroid and the center of the screen toward zero.
Image showing our masking (i.e. what our robot sees) and the centroid (where it is steering to)
- Image_processor -> PID_publisher
- PID_subscriber -> PWM_publisher
| Topic | Message type | Description |
|---|---|---|
| /steering_value | std_msgs.msg.Float64 | Steering value sent to PWM script for processing |
| /throttle_value | std_msgs.msg.Float64 | Throttle value sent to PWM script for processing |
Access Over Internet
We set up the Jetson to be accessible by any member of our team from anywhere in the world. This required:
1) Set up an RSA key and passphrase, then turn off password login: https://www.redhat.com/sysadmin/configure-ssh-keygen
2) Create a static IP for your router (paid) or use Dynamic DNS (free for mynetgear)
3) Adjust your router to port forward SSH port 22
4) Share the private RSA key with teammates
Good luck, and feel free to contact me on insta @nic.icee. In the end your remote SSH command will look similar to: ssh -p 22 jetson@NAME.mynetgear.com
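For the "turn off password login" part of step 1, a minimal sshd_config fragment on the Jetson might look like the following (assuming the default OpenSSH server; restart the ssh service after editing):

```
# /etc/ssh/sshd_config — accept key-based logins only
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```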
Milestones
OpenCV Donkey Laps
Pre-PID tuning laps on the DonkeyAI framework. We came a long way!
Robodog's First Rodeo
OpenCV Donkey AI 5 Autonomous Laps
Our car was able to perform 5 autonomous laps at the outdoor track at EBUII using OpenCV and Donkey AI.
ROS 5 Autonomous Laps
Our car was able to perform 5 autonomous laps at the outdoor track in the Warren Mall tents using our ROS package. https://www.youtube.com/watch?v=bSq0yf4cCXE&feature=youtu.be
Future Improvements
Given more time to work on the project, there are a few changes we would like to make:
- More scripts and nodes to handle a potentially higher workload and provide more structure should we decide to beef up our robot's AI capabilities
- Focus on reliability rather than speed around the track
- White line lane detection and avoidance
Advice for Future Teams
Plan to give yourself extra time for everything! When actually implementing hardware, and especially software, Murphy's law applies.
Acknowledgements
We would like to thank our professors Jack Silberman and Mauricio de Oliveira, as well as our teaching assistants Leeor Nehadea and Haoru Xue. We are very grateful for your hard work and dedication to our learning experience!