• Jacob Springman (ME)
• Seth Farrell (EE)
• Warren Otoshi (EE)
• Ji Lee? (NE)
The goal of this project was to create a car that could respond to different color traffic lights using only computer vision (OpenCV). This included stopping behind a limit line for a red light, going at normal speed for a green light, and either slowing down or continuing at normal speed for a yellow light depending on the car's distance from the limit line.
We first had to build our car and train it to drive autonomously before moving on to our traffic light project. This was done using an acrylic laser-cut baseplate on which to mount all the electronics, as well as a 3D printed adjustable camera mount. We also 3D printed a protective case for our Jetson Nano, the file for which can be found here. Once these were made, it was just a matter of downloading all the software, wiring the necessary components together, and then training the car by driving laps. These were all done by following the provided instructions.
These are the mechanical designs that were required for this project. The base plate on the left was used to mount all electronics on the car using preexisting threaded holes and supports on the car frame. The middle is the base for our camera mount, and the right shows the camera mount to which the camera was attached. We designed the camera mount to be adjustable so that different angles could be tested when training in case certain angles were found to be more effective than others.
Two of the most crucial aspects of this project were the abilities to recognize colors and distances with OpenCV in the images being taken by our USB camera and processed by the Jetson Nano. Learning and getting used to OpenCV was slow-going for us at first, but once we understood what was going on, it was not as complicated or confusing as we initially thought. We'll go over our color recognition code in a fair amount of detail so that anyone trying similar projects in the future can hopefully get a better understanding of OpenCV than we had when we started. Keep in mind the code presented in this section had to be modified in order to integrate it into the Donkey framework, which we'll go over in a later section.
Seeing as responding to different color lights was the purpose of our project, the color recognition aspect was critically important.
Converting Color Space
One problem we had early on was trying to recognize color in the RGB color space, which yielded very poor and inconsistent results. DO NOT USE RGB COLOR SPACE FOR COLOR RECOGNITION. Instead, after using OpenCV to read each image and store it as img, we converted from RGB to HSL (Hue, Saturation, Lightness), which we found to work far better than the other color spaces we tried for our particular application (HSV also works well for many color recognition algorithms).
hsl = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)  # convert image stored as variable img from RGB to HSL
Once the image was in an appropriate color space, we set lower and upper bounds that represented the color we were looking for (red in this case). There are two sets of bounds for red because the red hue spans the lower and upper regions of the hue spectrum (see more about HSL color spectrums here). Note: in OpenCV, hue ranges from 0-180 (not the usual 0-360), and saturation and lightness range from 0-255.
# ranges for color values in HSL
upper1 = (20, 255, 255)
lower1 = (0, 0, 120)
upper2 = (180, 255, 255)
lower2 = (160, 0, 120)
Once the boundaries are specified, the cv2.inRange function checks each pixel of the image and returns 255 if the pixel is in the specified range, or 0 otherwise. The variables mask1 and mask2 store arrays of these values, indicating which pixels contain the desired color. cv2.bitwise_or combines the two masks into one, and cv2.bitwise_and lays the combined mask over the original image, producing a new image array called output.
mask1 = cv2.inRange(hsl, lower1, upper1)
mask2 = cv2.inRange(hsl, lower2, upper2)
mask = cv2.bitwise_or(mask1, mask2)
output = cv2.bitwise_and(img, img, mask=mask)
That's it! This code should be enough to determine whether the color which defines the mask is present in an image frame.
Distance and Shape Detection
To determine distance and shape, we based our code on the tutorials written by Adrian Rosebrock for distance and shape. We highly recommend going through and understanding these tutorials, as well as the others Rosebrock makes available for free, to get a good initial understanding of OpenCV.
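Rosebrock's distance method boils down to triangle similarity: calibrate a focal length F = (P * D) / W from a reference image where the object's pixel width P, real width W, and distance D are known, then invert it as D' = (W * F) / P' for new frames. The arithmetic can be sketched on its own (the numbers here are made up for illustration, not our actual calibration values):

```python
def focal_length(pixel_width, known_distance, known_width):
    # F = (P * D) / W, computed once from a calibration image
    return (pixel_width * known_distance) / known_width

def distance_to_camera(known_width, focal, pixel_width):
    # D' = (W * F) / P', applied to each new frame
    return (known_width * focal) / pixel_width

# calibration: a 3-inch-wide light appears 60 px wide at 12 inches
F = focal_length(60, 12, 3)           # -> 240.0
# later: the same light appears 20 px wide, so it is farther away
print(distance_to_camera(3, F, 20))   # -> 36.0 inches
```

In the real pipeline, the pixel width comes from a marker-finding function (find_marker in Rosebrock's tutorial) run on each frame.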
Testing with Computer Webcam
To test programs with a computer webcam, we can use the following code to access the webcam and use each video frame from the webcam as the input images to the color recognition sequence:
# initialize webcam
cv2.namedWindow("image")
vc = cv2.VideoCapture(0)

# check if webcam opened successfully
if vc.isOpened():
    rval, img = vc.read()
else:
    rval = False

while rval:
    rval, img = vc.read()
    '''
    insert image processing code here in while loop
    '''
    cv2.imshow("Masked Image", output)  # show masked image in window named "Masked Image"
    key = cv2.waitKey(20)
    # close window on esc press
    if key == 27:
        break

cv2.destroyAllWindows()  # close all open windows
vc.release()             # release webcam
Working with Donkey
With the necessary code written, we needed to implement it in the Donkey framework so that we could change the throttle based on the light and distance detected. Understanding the Donkey framework was one of the more difficult parts of this project; like OpenCV, it seems more complicated at first than it really is. Essentially, Donkey works by adding parts (objects created from classes contained in various .py files) to the vehicle one by one in the manage.py file. These parts can have inputs and outputs (or neither), which are specified when they are added. Once added, the parts run one after another in a loop for as long as the drive function is active (unless they are threaded parts, in which case they run at a frequency determined by other conditions). Therefore, the behavior of the vehicle can be modified by adding or modifying parts.
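Conceptually, the vehicle loop can be modeled as a shared dictionary of named channels that each part reads from and writes to in order. The sketch below is our simplified mental model of that loop, not the actual donkeycar source; the class and channel names are illustrative:

```python
class Vehicle:
    """Simplified model of the Donkey vehicle loop."""
    def __init__(self):
        self.mem = {}      # channel name -> latest value
        self.parts = []    # (part, input channels, output channels)

    def add(self, part, inputs=(), outputs=()):
        self.parts.append((part, inputs, outputs))

    def step(self):
        # one pass of the drive loop: run each part in the order it was added
        for part, inputs, outputs in self.parts:
            args = [self.mem.get(ch) for ch in inputs]
            result = part.run(*args)
            if len(outputs) == 1:
                result = (result,)
            for ch, val in zip(outputs, result):
                self.mem[ch] = val

class ConstantCamera:
    def run(self):
        return "frame"

class Echo:
    def run(self, frame):
        return frame

V = Vehicle()
V.add(ConstantCamera(), outputs=['cam/image_array'])
V.add(Echo(), inputs=['cam/image_array'], outputs=['copy/image'])
V.step()
print(V.mem['copy/image'])  # -> frame
```

This is why the order in which parts are added matters: a part can only consume channel values that an earlier part has already written in the same pass.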
Initially, we attempted to implement our code in a similar fashion to previous teams by modifying preexisting Donkey parts (mainly the actuator.py and controller.py files). However, our initial attempts were unsuccessful because the files we were modifying (stored in the donkeycar/donkeycar/parts directory) were not actually the files being imported by manage.py. Instead, manage.py imports its parts by default from another location on the computer, determined when Donkey is installed. These files are essentially impossible to find, so if you do modify preexisting parts (not recommended), copy the part's .py file into a known location and import that file instead of the default one. What we decided to do instead was create our own parts, one for each color we wanted to recognize, each of which would output whether or not that color was detected. We then used those outputs to control the car's throttle in the manage.py file.
Creating the Donkey Parts
As mentioned, parts in Donkey are made from Python classes, so we created three classes in a separate .py file that would then be imported into manage.py and added to the vehicle. Each class has an __init__ method, which runs when the part is created in manage.py, and a run method, which is the method that is called continuously while the vehicle is operating. All three classes are essentially identical, but each recognizes a different color and outputs whether or not that color is present in the current video frame. All documentation here will be done for the color red. The __init__ method initializes the color boundaries and the Boolean value for the color. It also reads a sample image to calibrate the distance detection function (see Rosebrock's tutorial for more information on this) only for red and yellow, since distance does not matter when the light is green.
class RedLightDetection:
    def __init__(self):
        # ranges for color values in HSL
        self.upper_lim1 = (20, 255, 255)
        self.lower_lim1 = (0, 0, 120)
        self.upper_lim2 = (180, 255, 255)
        self.lower_lim2 = (160, 0, 120)

        # initialize light detection variables
        self.red = False

        # initial values for distance calculations
        img = cv2.imread('sample_distance.png')  # read sample image
        self.knownWidth = 3      # width of object being detected, in inches
        self.knownDistance = 12  # distance of object from camera in sample image
        marker = find_marker(img)  # finds desired object to be detected
        self.focalLength = (marker * self.knownDistance) / self.knownWidth  # calibrates camera
        self.dist = 0  # initialize distance variable for use in run()
Next we had to define the run method, which is where we take in the current image, run the color recognition and distance calculation code, and output the Boolean values. Although we were also capable of detecting shape for further robustness, the low resolution of the camera made the shape detection too unreliable, so we decided not to use it.
def run(self, image_arr, greenLight):
    if image_arr is None:
        return None
    img = image_arr
The run method takes in an image in image_arr as well as the Boolean value for the green light, the reason for which will be explained shortly. The if statement is necessary here because the camera part of the car that takes the image is a threaded part, meaning it does not necessarily run at the same rate as this part. Therefore, there will not always be a new image to analyze, in which case we want to skip this part until there is one. If there is a new image available, we rename it img and feed it into the same color recognition code presented earlier.
Once we have the masked image output, we analyze it for contours by converting to grayscale, blurring it slightly, and thresholding it to create a binary image (all very typical operations for finding contours in an image). We then use the function cv2.findContours to actually detect the contours in the image.
gray = cv2.cvtColor(output, cv2.COLOR_BGR2GRAY)  # convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # blur to get rid of noise
thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)[1]  # threshold to get binary image
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)  # conveniently extracts contours regardless of OpenCV version
If cnts is not empty, then red was detected in the image, in which case we can set the red Boolean value to True. Also, for the purposes of distance calculation, if multiple contours were detected, we assume the largest represents the light and keep only that one.
if np.size(cnts) != 0:
    maxContour = max(cnts, key=cv2.contourArea)  # assume traffic light is largest contour in image
else:
    maxContour = []

if np.size(maxContour) != 0:  # i.e. if red was detected
    self.red = True
    marker = find_marker(output)
    self.dist = (self.knownWidth * self.focalLength) / marker  # distance to light
elif greenLight:  # keep red as True until a green light is seen
    self.red = False

return self.red, self.dist
The elif statement above was created to avoid "flip flopping" between True and False, which would decrease the reliability of our code. Since there should always be a green light after a red light, we keep red True until a green light is detected. The yellow light part had a similar condition; the green light part did not need one. Finally, this part returns the Boolean value for red along with the distance to the light.
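The latching behavior described above amounts to a small state machine: red turns on when detected and only turns off when green is seen. Isolated as a pure function (the names here are ours for illustration), it is easy to test against a noisy frame sequence:

```python
def update_red(prev_red, red_detected, green_detected):
    """Latch: red switches on when detected, and only clears on green."""
    if red_detected:
        return True
    if green_detected:
        return False
    return prev_red  # no new information: hold the previous state

# a noisy sequence: red seen, then missed for a frame, then green appears
states = []
red = False
for detected_red, detected_green in [(True, False), (False, False), (False, True)]:
    red = update_red(red, detected_red, detected_green)
    states.append(red)
print(states)  # -> [True, True, False]
```

Note that the middle frame (red missed) does not flip the state back to False, which is exactly the flip-flopping the elif prevents.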
Adding Parts to Car
Understanding how Donkey Car "parts" work is essential if you want to add anything new to the framework. When creating a part, there are two main things you need to consider:
1. The "part" file
2. The Donkey Car vehicle loop
When the donkeycar is started, it sets up all of the parts it has been given. It then runs a main loop at a set interval, calling each part's run method to update it.
Here is an example of our code, which adds three parts to the vehicle. Our parts are defined in a file called trafficLightDetection.py. Take a look at the format in which parts are added:
V.add(objectCreatedFromPartClass, inputs=['specify/anyString'], outputs=['boolean/red', 'distance/red'])
When you create another part, you can tell it to take in the outputs of earlier parts as its inputs, for use within your part's code:
V.add(objectCreatedFromPartClass2, inputs=['boolean/red'], outputs=['distance/red'])
##### Add our parts #####
from trafficLightDetection import RedLightDetection
redLightDetector = RedLightDetection()
V.add(redLightDetector,
      inputs=['cam/image_array', 'boolean/green'],
      outputs=['boolean/red', 'distance/red'])

from trafficLightDetection import GreenLightDetection
greenLightDetector = GreenLightDetection()
V.add(greenLightDetector,
      inputs=['cam/image_array'],
      outputs=['boolean/green'])

from trafficLightDetection import YellowLightDetection
yellowLightDetector = YellowLightDetection()
V.add(yellowLightDetector,
      inputs=['cam/image_array', 'boolean/green', 'boolean/red'],
      outputs=['boolean/yellow', 'distance/yellow'])
##### End our parts #####

# Other OpenCV code
from OpenCVPreProcess import OpenCvPreProcessor
preProcess = OpenCvPreProcessor(cfg)
V.add(preProcess,
      inputs=['cam/image_array'],
      outputs=['cam/image_array'])
Keep in mind that it DOES matter where you add your part in manage.py: it has to be placed within the drive method. Also, our code had to be added before any filtering of the camera images, because it expects a normal RGB image. So make sure you know what is affecting the inputs/outputs your part uses, and place your part appropriately.
In our part file we had some code like this. This class is the actual "part" that we want to add:
class GreenLightDetection:
    # This is the class constructor
    def __init__(self):
        # Green values (lower, upper); earlier we tried [41, 0, 120], [80, 255, 255]
        # ranges for color values in HSL
        self.upper_lim = (70, 255, 255)
        self.lower_lim = (50, 0, 120)

        # initialize light detection variables
        self.green = False

    def shutdown(self):
        return

    # This is the method called from the vehicle drive loop,
    # and it contains the bulk of your code.
    # Note the two parameters (self, image_arr): when we created the part
    # in manage.py, we listed 'cam/image_array' as an input. donkeycar takes
    # care of providing that input to this method, so since we know what the
    # input is, we can work with it here.
    def run(self, image_arr):
        if image_arr is None:
            return None
        ...
To summarize, creating a part in donkeycar is not too hard:
1. Create a part file, e.g. testPart.py
2. Fill out the part file to do a job
3. Add your part somewhere appropriate in the manage.py drive loop
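Following those three steps, about the smallest possible part file looks like the sketch below (testPart.py is the hypothetical filename from the list above; the part just counts drive-loop passes):

```python
# testPart.py -- a minimal Donkey-style part
class LoopCounter:
    def __init__(self):
        # runs once, when the part is created in manage.py
        self.count = 0

    def run(self):
        # called once per pass of the vehicle drive loop
        self.count += 1
        return self.count

    def shutdown(self):
        # called when the vehicle stops
        pass

# in manage.py this would be added with something like:
#   from testPart import LoopCounter
#   V.add(LoopCounter(), outputs=['counter/value'])
counter = LoopCounter()
print(counter.run(), counter.run(), counter.run())  # -> 1 2 3
```

Any state the part needs between loop passes lives on self, exactly as our color-detection parts store their Boolean values and calibration data.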
Controlling the Throttle
To modify the throttle, our team took the easy way: in the manage.py drive loop, we noticed there was already a part with access to the throttle, the DriveMode part.

So we modified it by adding the Boolean values returned from our parts (which are created and added to the loop BEFORE this point) to the inputs of this class. This allowed us to add conditional statements within the part to stop, start, or slow the throttle as needed.
class DriveMode:
    def run(self, mode, user_angle, user_throttle, pilot_angle, pilot_throttle,
            redLight, yellowLight, greenLight, distance_red, distance_yellow):
        # throttle is a value between 0.0 and 1.0
        if mode == 'user':
            #if redLight and redLightDetector.first:
            #    user_throttle = -1.0
            #    time.sleep(.5)
            #    user_throttle = 0
            #    redLightDetector.first = False
            if redLight:
                if distance_red >= 140.0:
                    user_throttle = 0.28
                else:
                    if not greenLight:
                        user_throttle = 0.0
            ...
Note that we also had to list our new inputs when adding the part:
V.add(DriveMode(),
      inputs=['user/mode', 'user/angle', 'user/throttle',
              'pilot/angle', 'pilot/throttle',
              'boolean/red', 'boolean/yellow', 'boolean/green',
              'distance/red', 'distance/yellow'],
      outputs=['angle', 'throttle'])
This is what the part looked like before we modified it.
V.add(DriveMode(),
      inputs=['user/mode', 'user/angle', 'user/throttle',
              'pilot/angle', 'pilot/throttle'],
      outputs=['angle', 'throttle'])
We were able to achieve pretty good results considering the limitations of our setup. Despite somewhat poor camera resolution and some unreliable measurements, we managed to achieve our goals and get a variety of successful responses as shown in the videos below.
This video shows our concept and the car's ability to exhibit different responses for each different color.
The red light response shows that our car automatically goes to a certain distance away from the light before stopping, allowing the car to come to rest at a limit line.
The car slows slightly when it sees a yellow light from far away, allowing it to prepare for a red light and stop a safe distance away from the light.
The car ignores a late yellow light since it cannot safely come to a stop before going through the light.