2021WinterTeam2

From MAE/ECE 148 - Introduction to Autonomous Vehicles

About Our Team Members

Team 2 is composed of UCSD undergraduate students.

  • Jared Benge - Mechanical Engineering Senior
  • Nathan Burdick - Mechanical Engineering Senior
  • Blake Dixon - Computer Engineering Senior

Our team is composed of members who are new to ROS development. Because of this, we decided to tackle a project with both entry-level and more challenging goals.

Final Project Description: Line Following and Sign Detection

File:RonarocketT2.jpg
Rona Rocket sporting the Intel RealSense L515 RGB/Solid State LIDAR camera

Develop a line-following autonomous RC car that can detect stop signs. The RC car will track the yellow dashed line in the center of the race track; once it detects a stop sign, it will stop at the designated line. ROS and OpenCV will be used to achieve these tasks. Our project uses a new method of tracking stop signs: an Intel RealSense L515 camera that collects both RGB and depth images. The line-following node will use the RGB images, and the sign-detection node will use the depth images. Depth perception is made possible by the solid state LIDAR on the Intel RealSense L515.

"Must-Have" Project Requirements

- Develop a ROS/OpenCV package

- Line following capabilities

- Stop sign detection using depth imagery

- Stop the RC car on a line once the sign is detected

Hardware

Provided Components

Jetson Nano Developer Kit

Intel RealSense L515 Camera

Adafruit PCA9685 16-channel, 12-bit PWM controller

QuicRUN WP1625 Electronic Speed Controller

Generic Components

Steering Servo Motor

Throttle DC motor

1/10 scale RC car chassis with Ackermann steering

12V-5V DC-DC voltage converter

433MHz remote relay

11.1V 3S LiPo Battery

LED lights

Battery Voltage Sensor

Designed Components

Base Plate

We wanted our base plate to be lightweight while providing maximum support for the other electronic components. The square holes allow the plate to be attached to the chassis of the RC car, the slots on the sides allow the base plate supports to be attached, and the linear pattern of holes allows for flexible placement of equipment. In the end, we placed strips of Velcro along the base plate so equipment can be repositioned quickly.

Jetson Case

Making detailed measurements of the Jetson would have been time-consuming, so we decided to find a Jetson case on Thingiverse. The case below was printed over a few days and fit perfectly; we only had to cut holes in the side to feed wires through. The case looks great and protects the Jetson from collisions and the environment.

Intel RealSense L515 Camera Mount

The Intel RealSense L515 camera came with a tripod mount. We removed the tripod via a screw and used the ball mount. The design focused on vibration reduction and rigidity: the camera ball mount fits snugly and does not allow any rotation unless the thumb screw is loosened. This design lets us try different camera angles, then fix the mount in place once calibrated.

Adding Some Style: Car Spoiler

This was just to have some fun. We made sure to let everyone know our RC car was named "Rona Rocket."

Wiring Schematic

We followed along with team 4's diagram from Fall 2020, which gave us a quick guide to get started. At first, we didn't know much about each component. Over time, we became familiar with each electronic component and were able to troubleshoot the PWM problems explained later in this report.

WiringT2.JPG

Software

ROS Melodic

ROS (Robot Operating System) is a framework that provides tools and libraries for writing robot software. Team 2 used ROS to manage programmatic operation of the robot. Tasks that relied on ROS included fetching the video feed from the RGB-D camera and making imagery available for computer vision processing, determining the correct throttle and steering decisions, and relaying digital instructions to the car's control surfaces (steering and throttle).

OpenCV 2

OpenCV is a computer vision library that features many functions capable of managing object detection, edge finding, color filtering, and more. We used OpenCV to detect the yellow center lines of the track and stop signs.

Starting Point: UCSD_robo_car_simple_ros

Dominic Nightingale, one of the tutors, was generous enough to give the class access to a simple ROS package on his GitLab. This package guides the RC car along the yellow lines on the track using an RGB image from a camera. It depends on ROS Melodic, OpenCV2, the adafruit_servokit library, and cv_bridge. ROS and OpenCV have already been explained, but what are adafruit_servokit and cv_bridge? adafruit_servokit is a library that allows the operating system to communicate with the PWM board, and cv_bridge converts between ROS image messages and OpenCV images, as seen below.

How cv_bridge communicates
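cv_bridge itself requires a ROS installation, but conceptually its job for a bgr8 image is to reshape the message's flat byte buffer into an OpenCV-compatible H x W x 3 array. A minimal sketch of that idea (ignoring row padding and endianness; the function name is ours, not cv_bridge's API):

```python
import numpy as np

def imgmsg_to_array(data_bytes, height, width):
    """Mimic the core of cv_bridge's imgmsg_to_cv2 for a bgr8 image:
    interpret the message's raw byte buffer as an H x W x 3 uint8 array."""
    return np.frombuffer(data_bytes, dtype=np.uint8).reshape(height, width, 3)
```

On the car, the real conversion is done with `CvBridge().imgmsg_to_cv2(msg)` and the reverse with `cv2_to_imgmsg`, which also handle encodings and headers.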

Problems With Installation

We ran into a few errors with this package. There was a conflict between Python 3 and the ROS package, which caused some of the nodes to die instantly upon launch. Luckily, Dominic was able to update his package so we could continue with our project.

Understanding Dominic's ROS package

This ROS package uses a network of nodes that publish and subscribe to topics. Each node performs a specific task toward the goal of following a line. Descriptions of each node are given below.

Camera_server node

This node reads from the camera with OpenCV's interface and publishes the image to the camera_rgb topic. Some reformatting is done from the OpenCV image format so it can be understood by ROS.

Lane_detection node

This node reads from the camera_rgb topic and uses OpenCV to identify the yellow centerline using filtering techniques. Once the centerline is identified, a dot is placed at its centroid, and the x coordinate of the centroid is published to the centroid topic.

Lane_guidance node

This node reads from the centroid topic. It then calculates the throttle and steering based on the x coordinate position of the centroid. The throttle value is based on whether or not a centroid exists: the car goes faster when the centroid is present and slows down when there isn't one. The steering is based on a proportional gain that is tuned to the specific car. Throttle and steering topics are then published.
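The proportional steering rule can be sketched as follows; the image width, gain, and clamping bounds are hypothetical stand-ins for the package's tuned values:

```python
IMAGE_WIDTH = 1280  # L515 RGB stream width
KP = 0.002          # hypothetical proportional gain, tuned per car

def steering_from_centroid(centroid_x, image_width=IMAGE_WIDTH, kp=KP):
    """Map a centroid x position to a normalized steering value in [-1, 1]."""
    error = centroid_x - image_width / 2   # positive error: centroid right of center
    return max(-1.0, min(1.0, kp * error)) # clamp to the servo's normalized range
```

With this rule the car steers straight when the centroid sits at the image midpoint and turns harder the further the line drifts off center.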

Throttle_client node

This node subscribes to the throttle topic. A subscriber callback function is then used to validate the normalized throttle value. Once this value is determined, the adafruit_servokit module on channel 2 is then given this value to control the ESC and the DC motor.
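A minimal sketch of the validation step in the callback; the clamping bounds are assumptions, and the hardware write is shown as a comment since adafruit_servokit needs the physical PWM board:

```python
def validate_throttle(value, lo=-1.0, hi=1.0):
    """Clamp an incoming normalized throttle command to the ESC's safe range."""
    return max(lo, min(hi, float(value)))

# On the car, the validated value would then be written to the PWM board, e.g.:
#   from adafruit_servokit import ServoKit
#   kit = ServoKit(channels=16)
#   kit.continuous_servo[2].throttle = validate_throttle(msg.data)  # channel 2 = ESC
```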

Steering_client node

This node subscribes to the steering topic. The same process as in the throttle_client is used to control the steering servo motor on channel 1.

Table of Topics and Package Map

This table, provided on Dominic's GitLab page, shows the topics mentioned previously. A package map is also available on the GitLab page.

Result of ROS Package

Here you can see the original image, filtered image, and centroid plot.

Output of Lane Detection Node

Tuning Throttle and Steering

We started by adjusting the PWM ports used in the ROS package for throttle and steering to match the ports used in DonkeyAI, which removed the need to change physical ports. After setting this up, we attempted to run the line-following ROS package but got no response in steering or throttle. We determined that the values from the throttle and steering clients were not calibrated. To fix this, we used ROS commands to find the lowest possible throttle value that gave us a motor response, then updated the throttle values in "lane_guidance.py". Next, the steering needed to be adjusted. We had an issue where the steering would only turn right, never left. We found that the "lane_guidance.py" script was using a value of 0 for the reference centroid; this value should reflect the center of the image, not one of its sides. After calibrating the value to the midpoint of our image, the steering worked correctly.
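The search for the lowest responsive throttle can be sketched as a simple sweep of candidate values; the range, step size, and topic name in the comment are assumptions, not our exact calibration:

```python
def throttle_sweep(start=0.0, stop=0.3, step=0.02):
    """Yield candidate normalized throttle values, low to high, for manual calibration."""
    value = start
    while value <= stop + 1e-9:   # tolerance guards against float accumulation
        yield round(value, 3)
        value += step
```

On the car, each candidate would be published by hand (e.g. `rostopic pub /throttle std_msgs/Float32 "data: 0.14"`), and the first value that spins the motor becomes the new minimum throttle in "lane_guidance.py".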

Line Detection and Sign Detection Package

Since we are using an Intel RealSense camera, we needed to install a ROS wrapper in order to publish topics for the RGB and depth images. The wrapper can be found here, and some information about it can be found here. A useful page for learning the relevant ROS commands is Intel's developer page.

Implementing the Intel RealSense2 Node and Topics

With the ROS wrapper installed, we now have access to the realsense2_camera node. This node publishes a number of topics that relay data from the camera; they can be listed with the "rostopic list" command. We are most interested in the "/camera/color/image_raw" and "/camera/depth/image_rect_raw" topics, which publish the RGB camera feed and the solid state LIDAR depth feed.

Implementing Intel RealSense L515 Camera

To implement the new camera, we just needed to change the "camera_server.py" script to subscribe to the "/camera/color/image_raw" topic. However, when we did this, the image was tinted blue!

After some troubleshooting, we found that the "cv2.cvtColor" call in "camera_server.py" needed to convert from BGR --> RGB --> HSV using "COLOR_BGR2RGB" and then "COLOR_RGB2HSV". This fixed the problem, and the colors were back to normal.

The camera feed was not cropped correctly due to the change in camera resolution. The L515 has a resolution of 1280 x 720, so we updated all values corresponding to the camera width to get the cropping right. When we tested on the track, we also adjusted the height of the centroid image so that multiple strips of yellow tape are visible, reducing the chance of not finding a centroid. This should stabilize our model even further.

With the new camera showing the correct colors and cropped correctly, we noticed that the centroid of the track was still not being found. We believed this was because the mask filter was not using the right HSV values: the camera we are using has a different color profile than the previous one. Following this guide, we were able to find the HSV values that work best for tracking yellow for the centerline and red for stop signs.

Adjustments to "lane_guidance.py"

We found while testing that the car would speed up when losing track of the centroid. We adjusted our code to use the last recorded centroid location when the centroid is not detected. This allows the car to continue on the straightaways if the camera only sees the green between the lines, and to start a circling maneuver if it goes off the track. We also gave a range of values for the straightaway condition instead of a single integer, which lets the car speed up consistently since the centroid values hardly stay constant. We calibrated this range until we consistently had a faster speed on the straightaways; when the centroid is outside this range, the car uses a slower speed to help with guidance around corners. The steering has a proportional gain that we adjusted as needed to match the throttle values. This tuning process was trial and error after the code was adjusted.
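The last-known-centroid fallback and the straightaway speed band can be sketched as follows; every constant here is a hypothetical stand-in for the values we tuned on the track:

```python
IMAGE_CENTER = 640      # midpoint of the 1280-wide image
STRAIGHT_BAND = 100     # centroid within +/-100 px of center counts as straight
FAST, SLOW = 0.18, 0.14 # hypothetical normalized throttle values

class LaneGuidance:
    def __init__(self):
        self.last_centroid = IMAGE_CENTER  # fall back to center until first detection

    def update(self, centroid_x):
        """Return (throttle, centroid_used) for a detection, or None if lost."""
        if centroid_x is not None:
            self.last_centroid = centroid_x  # remember the latest good detection
        used = self.last_centroid            # reuse the last value when lost
        throttle = FAST if abs(used - IMAGE_CENTER) <= STRAIGHT_BAND else SLOW
        return throttle, used
```

Because `update(None)` reuses the stored centroid, a momentary dropout on a straightaway no longer makes the car surge; it keeps the same throttle and heading until the line reappears.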

Basic Line following With L515 Camera