2022SummerTeam1

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team Members

2022 Summer Team 1
  • Najib Ocon - Biomedical Engineering (Middle)
  • Glenn Sivila - Electrical Engineering (Left)
  • Alex Gasca Rosas - Electrical Engineering (Right)

Final Project Overview

Autonomous Lead Car Following w/ Adjustable Following Distance (From: https://www.researchgate.net/figure/A-typical-car-following-scenario_fig11_241189046)

For our final project, we wanted to incorporate the LD06 LiDAR sensor with OpenCV and, ideally, the new (at the time) OAK-D stereo camera in order to track and follow a secondary (non-autonomous) lead car while maintaining an adjustable following distance, much like modern cars with adaptive cruise control. We also aimed to have our robocar avoid obstacles, specifically traffic cones.

Robot & Mechanical Design

For the first two weeks of class, we deliberated on a simple, robust design with no moving/hinging parts that would still give us enough clearance to access the various electrical components of the robocar. Due to some miscommunication and time constraints, we borrowed an existing, pre-cut piece of acrylic to use as our baseplate, which holds all the hardware. Using Fusion360, we drew up a simple "L-shaped" arch design to affix the baseplate to, which we 3D printed in PLA. The other hardware pieces we 3D modeled included the camera mount riser, the angled camera mount base, and the camera shroud (a 3D-printed case for the Jetson NANO was generously provided for us). The camera mount riser slots into the angled base, providing an 11.8-degree tilt so that the camera can easily see the ground in front of it. The 3D model for the LiDAR housing was generously provided by one of our TAs, Dominic.

We attached the power button, Jetson NANO, camera, and LiDAR to the top of the baseplate. Using a combination of zip ties and velcro, we attached the anti-spark switch, DC-DC power distributor, and VESC to the underside of the acrylic baseplate. To monitor the battery life of the 3-cell 11.1V LiPo battery we would be using, we affixed the provided voltage meter to the side of the robocar with velcro. Also due to time constraints, the robocar chassis provided to all the teams were already equipped with upgraded clutches and brushless motors.


Mounting Arch (Fusion360)
Camera Riser Mount (Fusion360)
11.8 Degree Camera Mount Dock (Fusion360)
Camera Sun Shroud (Fusion360)
LD06 LiDAR Mount (Fusion360 - Provided by Dominic Nightingale)
Mounts Attached To Robocar Chassis
Acrylic Baseplate Attached To Mounts
Robocar Hardware Wiring Schematic
Fully Assembled Robocar (Pre-Final Project ver.)

For our final project, we were required to make a slight modification to our LiDAR-sensor placement so that it could properly detect the RC car it was meant to follow. We used double-sided mounting adhesive to stick it to the front bumper of the robocar.

Modified LiDAR Placement For Final Project

Donkey Car autonomous laps

We reimaged the Jetson NANO using the recovery image provided in the class documentation in order to have compatible versions of the Donkeycar AI framework and OpenCV properly installed. After setting up the Jetson to connect to Wi-Fi and to interface with a Logitech F710 (and Xbox) controller, we used the VESC tool to calibrate our VESC so it could properly drive our brushless in-runner motor as well as our servo. When we began to run Donkeycar on the Jetson, we ran into issues where steering would randomly crash the Donkeycar process. Thanks to Ivan Ferrier, we figured out that our servo was sending incompatible data back to the VESC, so we tested other servos until we found one that would run Donkeycar properly. After replacing the servo, we were able to use behavioral cloning to train our robocar to drive on the track based on user inputs and images from the camera feed. We recorded approximately 25 laps around the track and used the GPU cluster supported by UCSD's supercomputer to train our model in a timely fashion. After training, our car was able to successfully navigate and complete 3 laps autonomously.

View Our Robocar Performing 3 Autonomous Laps Using Donkeycar Here: "https://www.youtube.com/watch?v=DwnRokEUDCw"

OpenCV and ROS2 autonomous laps

To incorporate OpenCV and ROS2 on our Jetson, we used our TA's Docker container, which let the robocar interface with ROS2 and provided packages for calibrating the camera's color values, PID steering, and throttle values. Many hours were spent meticulously calibrating these values so that the camera would properly detect and track the yellow center lane lines and steer smoothly enough to keep the lane lines within the camera's view. The issues we ran into (besides the Jetson/Docker containers crashing intermittently and SSH/Wi-Fi not working) were the glare from the outdoor lights reflecting off the track and the max RPM/throttle values not saving consistently in the Docker container. To get around this, we calibrated the camera so that the noise from the glare would be ignored, and we edited the ros_racer_calibration.yaml file directly for many of the PID/steering adjustments so their values wouldn't be overwritten every time we opened the calibration program.

Calibrating Yellow Lane Line Detection w/ OpenCV
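To give a concrete picture of what that calibration amounts to, below is a minimal sketch (independent of the actual ucsd_robocar calibration nodes) of isolating the yellow lane lines by thresholding a frame in HSV space. The file name, HSV bounds, and kernel size are illustrative assumptions and would need to be tuned for the actual track lighting and glare.

import cv2
import numpy as np

# Load one camera frame (hypothetical file name) and convert to HSV, which separates
# color (hue) from brightness and makes glare easier to reject
frame = cv2.imread("track_frame.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Placeholder lower/upper HSV bounds for yellow; the real values must be tuned on the track
lower_yellow = np.array([20, 100, 100])
upper_yellow = np.array([35, 255, 255])
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

# Morphological opening removes small specks of glare that slip inside the yellow range
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# The centroid of the remaining yellow pixels gives a lateral error relative to the
# image center, which is what a PID steering loop acts on
M = cv2.moments(mask)
if M["m00"] > 0:
    cx = int(M["m10"] / M["m00"])
    error = cx - frame.shape[1] // 2
    print("lane centroid offset (px):", error)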


We believe we might have had even better results if we had changed the placement of our camera. Since it is mounted relatively far forward, it was difficult for the camera to actually see the yellow lane lines whenever the front of the robocar deviated too far from them. Mounting the camera further toward the rear of the robocar would have given it a wider view of the road, allowing it to keep the lines it is supposed to track in sight.

View Our Robocar Performing 3 Autonomous Laps Using OpenCV and ROS2 Here: "https://www.youtube.com/watch?v=H_uLfU2TcL4"

Final Project

Our initial proposal, suggested by our TA, was to develop a ROS2 navigation package that would allow the robocar to navigate using only the 2D planar LiDAR. Because the LiDAR's planar-scanning nature makes it difficult to differentiate objects, we decided to incorporate the camera as well: the camera would handle object detection using models we trained, while the LiDAR would serve primarily as a distance sensor. After revising our proposal, we settled on having our robocar follow a non-autonomous lead RC car with an adjustable following distance, akin to adaptive cruise control in modern vehicles, while also avoiding cones.

We first attempted to use Roboflow (https://roboflow.com/), an online object-recognition training service, to train the camera to recognize the lead RC car as well as cones as obstacles. We were able to train a model to recognize and track a cone; however, we were not as successful with a model to recognize the lead car. We trained that model using images of the rear and sides of the lead car at various distances and angles (in the interest of time, we did not deem it necessary to feed the algorithm images of the front of the vehicle, since the robocar would be following the lead car and only the rear and sides would be visible most of the time; if we were to get to the "No-Pass" functionality of our proposal, a new model would be required to recognize the lead car from all angles). Despite providing around 64 images and carefully tracing each one to isolate the car from the background, the model tracked the lead car inconsistently when tested with a webcam and would often track a human instead, which we found very strange since we only fed the algorithm images of the lead car.

Using Roboflow To Track A Cone After Training A Model
Using Roboflow To Track A Lead RC Car After Training A Model (as you can see, it wants to track a person's face as well as the car, which is not ideal)


Manually Isolating Each RC Car Image From Background
Using Roboflow To Train A Lead RC Car-Recognition Model
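For reference, a model trained on Roboflow can be queried from Python through Roboflow's hosted-inference package. The snippet below is only a sketch of that workflow; the API key, project name, and version number are placeholders rather than our actual project.

from roboflow import Roboflow

# API key, project slug, and version number below are placeholders
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace().project("lead-rc-car")
model = project.version(1).model

# Run inference on a saved webcam frame, then print and save the predictions
prediction = model.predict("webcam_frame.jpg", confidence=40, overlap=30)
print(prediction.json())
prediction.save("annotated_frame.jpg")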

Since this ate up a lot of our time, we decided to scrap Roboflow and go with YOLOv3 (a different real-time object detection algorithm with pre-trained models), as per Professor Silberman's recommendation. We would also like to thank Team 5 for helping us get started with YOLO via the tutorial at https://techvidvan.com/tutorials/opencv-vehicle-detection-classification-counting/. Using YOLO's pre-existing model, we were able to detect, track, and recognize the RC car as a car in a pre-recorded video. Unfortunately, we were unable to successfully connect a webcam and establish a live video-feed connection to the algorithm. From speaking with Team 5, they also had issues getting the algorithm working on the Jetson.

View The YOLOv3 Algorithm Detecting The Lead RC Car Here: "https://youtu.be/kp8kICF0nkQ"

Distorted & Laggy Video Feed Using YOLOv3 & Webcam
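The file-based detection itself is fairly compact. The sketch below is an illustrative outline of the OpenCV DNN loop that tutorials like the one above build on, assuming the standard yolov3.cfg, yolov3.weights, and coco.names files from the Darknet project and a hypothetical pre-recorded video file; it is not our exact code and omits non-maximum suppression for brevity.

import cv2
import numpy as np

# Load the pre-trained YOLOv3 network (weights/cfg from the Darknet project)
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getLayerNames()
output_layers = [layer_names[i - 1] for i in np.array(net.getUnconnectedOutLayers()).flatten()]

# COCO class labels; the lead RC car shows up as the "car" class
with open("coco.names") as f:
    classes = [line.strip() for line in f]

cap = cv2.VideoCapture("lead_car_test.mp4")  # hypothetical pre-recorded video (0 would be a webcam)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(output_layers):
        for detection in output:
            scores = detection[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] > 0.5 and classes[class_id] == "car":
                # Box center/size come back normalized, so scale them to pixel coordinates
                cx, cy, bw, bh = (detection[:4] * np.array([w, h, w, h])).astype(int)
                x, y = int(cx - bw / 2), int(cy - bh / 2)
                cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.imshow("YOLOv3 detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()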


On the ROS2 side of things, we are grateful to Dominic for letting us recycle his Docker image for our Jetson NANO: "https://hub.docker.com/r/djnighti/ucsd_robocar". We implemented our changes on top of Dominic's basics package through Visual Studio Code and GitLab, since it already had a file ready to communicate between our LiDAR and actuator. Here is the branch where we added our code: "https://gitlab.com/ucsd_robocar2/ucsd_robocar_basics2_pkg/-/tree/team_1_ss_22". To avoid obstacles, we wanted to steer the robot based on the angle at which it detects an object once that object comes within a certain threshold distance. We used the '/scan' topic to read distances from our LiDAR, but we also needed the angle at which each reading was taken. To get it, we added a node, "lidarangle_node", that counts the messages received from '/scan' to determine the angle the LiDAR is currently reading and publishes those angle readings onto our "/lidarangles" topic (a simplified sketch of this idea follows the screenshots below). From there, we sent the information from "/lidarangles" to the node that communicates between the LiDAR and the actuator so it could move the robot around the cone. If we had more time, we would have added YOLO detection to our package through server- and client-based nodes that override the LiDAR-based steering to follow the RC car whenever there are no obstacles in the way.

Here we are running our lidarangle_node along with Dominic's subpub_lidar_actuator node
As a proof of concept, the error output from our latest run shows that both nodes are communicating with each other through our custom topic, publisher, and subscription, even though we did not get the car running autonomously the way we would have liked
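A simplified sketch of the lidarangle_node idea is shown below. It is not the exact code in our branch (which counts messages rather than indexing a single LaserScan); it assumes the angle is published as a std_msgs Float32 in degrees and uses a placeholder 1 m threshold, but it illustrates the /scan-to-/lidarangles flow feeding the steering node.

import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Float32

THRESHOLD_M = 1.0  # placeholder reaction distance, not our tuned value

class LidarAngleNode(Node):
    def __init__(self):
        super().__init__('lidarangle_node')
        self.create_subscription(LaserScan, '/scan', self.scan_callback, 10)
        self.angle_pub = self.create_publisher(Float32, '/lidarangles', 10)

    def scan_callback(self, msg):
        # Track the closest valid return under the threshold distance
        closest_range = float('inf')
        closest_index = None
        for i, r in enumerate(msg.ranges):
            if msg.range_min < r < min(THRESHOLD_M, closest_range):
                closest_range = r
                closest_index = i
        if closest_index is not None:
            # Convert the ray index into an angle using the scan's angular resolution
            angle_rad = msg.angle_min + closest_index * msg.angle_increment
            out = Float32()
            out.data = math.degrees(angle_rad)
            self.angle_pub.publish(out)

def main(args=None):
    rclpy.init(args=args)
    node = LidarAngleNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()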

In the video below, we were able to read data from the LiDAR in RViz. The moving line/points represent the lead RC car moving back and forth in physical space.

View The LiDAR Outputs In RViz Here: "https://youtu.be/57dkWF2iZRk"

Thank You ECE/MAE 148!!!

ECE/MAE 148 Summer 2022 Class