2022SpringTeam1

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team Members

Kristen Bauer (ECE)

Sunny Su (MAE)

Naomi Chin (MAE)

Nathaniel Teng (MAE)

Our Car

Full Assembly



Electronic Schematic

Circuit Diagram


Fast Prototyping

Acrylic Base Plate

Mae148 baseplate.PNG


LiDAR Mount

Mae148 lidarmount.PNG


Camera Mount

Mae148 cameramount.PNG


Jetson Case

Mae148 jetsoncase.PNG


Donkey

ROS2

Final Project

Proposal

ECE 148 Spring 2022 Team 1 Project Summary: Our project is to have the robot car navigate itself along the track using ROS2 while dodging obstacles in its path. The car uses computer vision to detect and follow the track while the lidar detects obstacles in the way. When nothing is in the way, the car detects the yellow lines in the middle of the track and follows them. When the car comes across an obstacle, it looks at the white line that borders the lanes and determines whether it is solid or dotted. If the white border line is solid, the car maneuvers around the obstacle on the left side so that it never crosses the solid white line. If the white border line is dotted, the car maneuvers around the obstacle on the right side, crossing over the white dotted line. The car also stays on the right side of the lane.

Implementation

Software: This project was implemented entirely in Python using ROS2. A single package controls the robot and consists of three nodes: a Calibration Node, a Lane Detection Node, and a Lane Guidance Node.

The calibration node applies two separate color masks that enable the car to detect two colors, white and yellow. The car needs to recognize yellow so it can follow the yellow line on the track, and white so it can decide whether to turn left or right when it encounters an obstacle. Once the color values are set, they are saved to a file that the other nodes can access; the lane detection node in particular needs those values for the next step.
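The calibration step above can be sketched roughly as follows. The HSV ranges, the file name, and the function names here are placeholders for illustration, not the team's actual values (their real implementation is in the linked GitHub repo and uses OpenCV):

```python
import json
import numpy as np

# Hypothetical HSV ranges chosen during calibration (placeholder values,
# not the team's tuned numbers).
CALIBRATION = {
    "yellow": {"lower": [20, 100, 100], "upper": [35, 255, 255]},
    "white":  {"lower": [0, 0, 200],   "upper": [180, 40, 255]},
}

def save_calibration(path="calibration.json"):
    """Persist the tuned HSV ranges so the lane detection node can load them."""
    with open(path, "w") as f:
        json.dump(CALIBRATION, f)

def color_mask(hsv_image, color, calib=CALIBRATION):
    """Return a boolean mask of pixels inside the calibrated HSV range
    for the given color ("yellow" or "white")."""
    lo = np.array(calib[color]["lower"])
    hi = np.array(calib[color]["upper"])
    return np.all((hsv_image >= lo) & (hsv_image <= hi), axis=-1)
```

The other nodes would then load the saved JSON instead of hard-coding ranges, which is what lets the calibration be re-run on a new day without touching the detection code.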

The lane detection node, as its name suggests, is responsible for detecting the lane. At its core it works like a PID controller: it calculates the error and publishes the resulting PID values that determine the motion of the robot as it follows the lane. The node also performs a second function: it counts the number of contours it sees for the white color and uses that number to identify whether the white border line is solid or dotted. To do this, it takes the raw image from the camera, applies the color values saved to a file by the calibration node to detect white and yellow, and counts the white contours in the processed image. It then publishes that count for the lane guidance node to subscribe to.
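The two computations this node performs can be illustrated with a minimal sketch. Here a 1-D scan line stands in for full contour detection (the real code presumably uses OpenCV contours on the whole mask), and both function names are hypothetical:

```python
import numpy as np

def count_white_segments(mask_row):
    """Count contiguous runs of white pixels along one image row.
    A dotted border line breaks into more, shorter runs than a solid line."""
    row = np.asarray(mask_row, dtype=bool).astype(int)
    # A run starts wherever the value steps from 0 to 1.
    starts = np.diff(np.concatenate(([0], row))) == 1
    return int(starts.sum())

def steering_error(yellow_centroid_x, image_width):
    """Signed offset of the yellow line from the image center,
    normalized to [-1, 1]; this is the error term fed to the PID controller."""
    return (yellow_centroid_x - image_width / 2) / (image_width / 2)
```

A solid line across the row gives a segment count of 1, while a dashed line gives one count per dash, which is the signal the lane guidance node thresholds on.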

The lane guidance node is responsible for directly controlling the movement of the car. It subscribes to the lidar, which detects obstacles in the car's path, and to the lane detection node for the number of white contours detected. When no obstacle is detected, the car follows the yellow lines in the middle of the lane to stay on the track. Once the lidar detects an obstacle, the node checks the white contour count: if more contours than usual are detected, the white line must be dotted, since a dotted line is not continuous and therefore breaks into more contours. If the line is dotted, the car crosses it and dodges the obstacle to the right; if it is solid, the car goes to the left.
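The decision logic described above boils down to a small branch, sketched here outside of ROS2 for clarity. The threshold value and function name are placeholders (the writeup below notes the real threshold varied with lighting):

```python
# Hypothetical contour-count threshold separating "solid" from "dotted".
DOTTED_CONTOUR_THRESHOLD = 5

def avoidance_direction(obstacle_detected, white_contour_count,
                        threshold=DOTTED_CONTOUR_THRESHOLD):
    """Decide the car's next action from the lidar flag and the white
    contour count published by the lane detection node."""
    if not obstacle_detected:
        # No obstacle: keep tracking the yellow center line.
        return "follow_yellow"
    # More contours than the threshold means the border line is dotted,
    # so it is legal to cross it and pass on the right.
    return "dodge_right" if white_contour_count > threshold else "dodge_left"
```

In the actual node this decision would run inside the lidar subscription callback, with the contour count cached from the most recent lane detection message.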

Overview

What We Promised: We promised that our car would follow the track, stay on the right side of the lane, dodge obstacles, and detect whether the white border line is solid or dotted, using that information to decide which direction to go around the obstacle.

What We Delivered: The car can follow the track, stay on the right side of the road, detect yellow and white, detect whether the white line is solid or dotted, and decide to go left or right accordingly. The car's detection of solid versus dotted lines is not perfect because the outdoor lighting differs every day, which changes the number of white contours each run. If we had an extra week to fix this, we would have the car sample the road before driving and average the number of white contours it sees, then increase that average by 10% and use it as the threshold that determines whether the white line is solid.
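The proposed fix for the lighting problem, an adaptive threshold from pre-drive samples, would be only a few lines. This is a sketch of the idea as described, not implemented code:

```python
def adaptive_threshold(contour_samples, margin=0.10):
    """Compute a contour-count threshold from samples taken before driving:
    the average count seen on the road, raised by a safety margin (10% as
    proposed). Counts above the result are classified as a dotted line."""
    if not contour_samples:
        raise ValueError("need at least one sample")
    avg = sum(contour_samples) / len(contour_samples)
    return avg * (1 + margin)
```

Recomputing this at the start of each run would absorb day-to-day lighting changes instead of relying on a fixed constant.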

Potential Improvement: Tune the PID control so the car drives more smoothly and can run at higher throttle. Another potential improvement is to extend obstacle detection to label different obstacles: in addition to using the camera to detect the white and yellow lane dividers, obstacles of different colors and contours could be recognized and labeled as the car drives around the track.

Video Demonstration

Presentation

https://docs.google.com/presentation/d/1Tmz2EjylRb32J4ZYZ7HULlsweb6tPtrV-XAzQL8HxOU/edit?usp=sharing

GitHub

https://github.com/kpatriciabauer/ROS2-ECE-148-TEAM-1-SPRING-2022-UCSD

Acknowledgements

Professor Jack Silberman

TA Ivan Ferrier

TA Dominic Nightingale