From MAE/ECE 148 - Introduction to Autonomous Vehicles


The goal of this project is to use visual stimuli to alter the autonomous behavior of the car. While driving autonomously, the car should use vision to detect the presence of traffic signals and a stop sign, and adjust the driving model accordingly. When the car sees a red light, it should stop until the light is green. A stop sign should cause the car to switch lanes.

Group Members

Nicholas Blischak – Mechanical and Aerospace Engineering Department

Inseong Hwang – Mechanical and Aerospace Engineering Department

Tongji Luo – Electrical and Computer Engineering Department

Mechanical Parts

Camera Mount

The camera mount is composed of two 3D-printed pieces. It is designed to hold the camera in a fixed position, yet be easily adjustable. Two slots allow the mount to move forward and backward, as well as rotate from left to right.

The biggest flaw in this design is its monolithic construction. If any part of the mount breaks, an entirely new piece must be printed, which takes several hours. A more modular design using laser-cut parts would make repairs easier.

Camera mount19f3.jpg

Traffic Light


 // Pin assignments (pin numbers here are placeholders; match your wiring)
 const int btnsRGB[2] = {2, 3};  // red and green push buttons
 const int led_r = 9;            // red LED
 const int led_g = 10;           // green LED

 void setup() {
   for(int i = 0; i < 2; i++){
     pinMode(btnsRGB[i], INPUT);
     digitalWrite(btnsRGB[i], LOW);
   }

   pinMode(led_r, OUTPUT);
   pinMode(led_g, OUTPUT);
   digitalWrite(led_g, LOW);
   digitalWrite(led_r, LOW);
 }


digitalWrite drives the corresponding pin HIGH or LOW, supplying or cutting power to the attached LED.

 void loop() {

   // If red button was pressed
   if(digitalRead(btnsRGB[0]) == HIGH){
     digitalWrite(led_g, LOW);
     digitalWrite(led_r, HIGH);
     delay(5000);  // hold the red light for 5 seconds
   }
The code above is the red-light indicator. When the red button is pressed, the Arduino Uno powers the pin connected to the red light, cuts power to the pin connected to the green light, and holds that state for 5 seconds.

   // If green button was pressed
   if(digitalRead(btnsRGB[1]) == HIGH){
     digitalWrite(led_g, HIGH);
     digitalWrite(led_r, LOW);
     delay(5000);  // hold the green light for 5 seconds
   }
 }
The code above is the green-light indicator. When the green button is pressed, the Arduino Uno powers the pin connected to the green light, cuts power to the pin connected to the red light, and holds that state for 5 seconds.

Wiring Diagram

Schematic 3.jpg

Autonomous Driving

Indoor laps

Outdoor laps

Behavioral Model

To achieve a lane-switching driving model we used the behavioral model feature built into Donkey. The model takes an additional input to the neural network so that steering and throttle are evaluated based on an input condition. In this case, the two behavioral states correspond to left- and right-lane driving.


To train the model, it is necessary to record data under three conditions: driving in the left lane, driving in the right lane, and switching lanes. The model needs examples of lane switches so it knows what to do when the car is supposed to be in the right lane but is currently in the left lane.
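Donkey's behavior part tracks which lane state is active and feeds it to the network as a one-hot vector. Below is a minimal Python sketch of that state tracking; the class and method names are simplified stand-ins for the donkeycar implementation, not its actual code.

```python
class BehaviorState:
    """Simplified stand-in for Donkey's behavior part."""

    def __init__(self, states):
        self.states = states      # e.g. ["left_lane", "right_lane"]
        self.active = 0           # index of the current behavior

    def increment_state(self):
        # Cycle to the next behavior, wrapping around at the end
        self.active = (self.active + 1) % len(self.states)

    def one_hot(self):
        # Extra network input: 1.0 for the active state, 0.0 elsewhere
        vec = [0.0] * len(self.states)
        vec[self.active] = 1.0
        return vec


bh = BehaviorState(["left_lane", "right_lane"])
bh.increment_state()              # switch from left to right lane
```

During training, this one-hot vector is recorded alongside each image, which is why the network can learn a different steering policy for each lane.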


The biggest challenge was the narrowness of the lanes on the indoor track. We used the indoor track because the lights were more easily detected there, but the narrow lanes made lane keeping harder. This model needed far more data than our other models to maintain and switch lanes reliably, and even with all the data collected the car would still occasionally make errors.

Light Detection


Detecting the traffic light is an important capability for the autonomous vehicle. For our project, we want the car to tell whether it sees a traffic light and, if so, which signal is lit, then behave accordingly. Here are pictures of our traffic light setup:

Red light on the track
Green light on the track

Here we set the traffic light in front of a cabinet to eliminate the influence of the background on traffic light detection.

The ideal behavior of the car is as follows: the car drives autonomously on the track using the model we trained. When it sees the red light, the car stops and stays there until we switch the light to green. While the car sees the red light, the Jetson prints "Red: stop"; when it sees the green light, it prints "Green: moving" and starts to move forward. We use a neural network to train a deep learning model to classify the light.

Image Preprocessing

Before we feed the image to the neural network, we preprocess the image captured by the camera (240 × 320 pixels). The preprocessing has three purposes:

(1) Select the region of interest (ROI) in the image that contains the traffic light.

(2) Suppress features of image regions that are not the traffic light.

(3) Downsize the image to reduce the model size and speed up prediction.

Original Images

Red light
Green light
No light

Preprocessing Code


   import cv2

   image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)      # Convert the BGR image to RGB
   image = image[0:100, 100:240]                       # Select the ROI; the light in front of the cabinet normally falls in this area
   image = cv2.resize(image, (40, 30))                 # Downsample to suppress background features and shrink the model
   normalized_image = image / 255.0 - 0.5              # Normalize the image into the range (-0.5, 0.5)
   a = 5                                               # Contrast factor
   b = -0.5                                            # Lighting factor
   image = normalized_image * a + b                    # Reduce the lighting and increase the contrast
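Wrapped as a single function, the pipeline looks like the sketch below. For illustration it uses plain NumPy, with nearest-neighbour index sampling standing in for cv2.resize and the BGR→RGB swap omitted (it only reorders channels), so it follows the same crop/resize/normalize/contrast steps without requiring OpenCV.

```python
import numpy as np

def preprocess(image):
    """Crop, downsample, normalize, and contrast-stretch a 240x320 frame.

    NumPy-only sketch of the cv2 pipeline above; cv2.resize is
    approximated here by nearest-neighbour index sampling.
    """
    roi = image[0:100, 100:240]                      # region containing the light
    rows = np.linspace(0, roi.shape[0] - 1, 30).astype(int)
    cols = np.linspace(0, roi.shape[1] - 1, 40).astype(int)
    small = roi[np.ix_(rows, cols)]                  # ~ cv2.resize to 40x30
    normalized = small / 255.0 - 0.5                 # values in (-0.5, 0.5)
    return normalized * 5 - 0.5                      # boost contrast, darken

frame = np.zeros((240, 320, 3), dtype=np.uint8)      # an all-black test frame
out = preprocess(frame)                              # shape (30, 40, 3)
```

On an all-black frame every pixel maps to (0 − 0.5) × 5 − 0.5 = −3.0, which is why a no-light image comes out nearly uniform and dark.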


Processing results

Red light after processing

Green light after processing

No light after processing

After preprocessing, if no light is present the image becomes nearly all black. If a light is present, a red or green area shows up against the dark background.


Deep learning structure

For light detection we designed a seven-layer neural network: three convolution layers followed by four fully connected layers, as follows:

(1) Relu(conv(hidden = 32, in_features = 3))

(2) Relu(conv(hidden = 48, in_features = 32))

(3) Relu(conv(hidden = 64, in_features = 48))

(4) FC(out_features = 100)

(5) FC(out_features = 50, in_features = 100)

(6) FC(out_features = 10, in_features = 50)

(7) FC(out_features = classes, in_features = 10)
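As a compact sanity check, the layer sizes above can be written down as data and verified to chain correctly. Kernel sizes and strides are not specified above, so they are not represented here either, and the FC(100) input size is left as None because it depends on the flattened conv output.

```python
# (out_features, in_features); None where the size is not specified above
conv_layers = [(32, 3), (48, 32), (64, 48)]
fc_layers = [(100, None), (50, 100), (10, 50), (3, 10)]  # 3 classes: red / none / green

def chains(layers):
    """Check that each layer's in_features equals the previous out_features."""
    for (prev_out, _), (_, cur_in) in zip(layers, layers[1:]):
        if cur_in is not None and cur_in != prev_out:
            return False
    return True
```

This catches the most common wiring mistake when editing the architecture: changing one layer's width without updating its neighbour.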

Integration into Donkey

Here we integrate the light detection function as a part of the vehicle in the donkeycar framework. The light detection part takes images from the camera as input and outputs a number in {0, 1, 2}, where:

0 represents that a red light is detected

1 represents that no light is detected

2 represents that a green light is detected

The camera takes 20 pictures per second, but loading the model and running a prediction allows light detection to process only about 2 images per second. We therefore run the light detection function in its own thread, simultaneously with the main loop.
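A sketch of that threaded part is shown below, in the update/run_threaded shape donkeycar expects: update runs in the background thread at the slow inference rate, while run_threaded returns the latest cached result every loop iteration. The predict method here is a stub standing in for the real model call, and the class name is ours, not donkeycar's.

```python
import threading
import time

class LightDetectorPart:
    """Threaded light-detection part: slow inference, fast cached reads."""

    def __init__(self):
        self.code = 1          # 0 = red, 1 = no light, 2 = green
        self.img = None
        self.running = True

    def predict(self, img):
        # Stub for the neural-network prediction (always "no light" here)
        return 1

    def update(self):
        # Background thread: classify at ~2 Hz while the drive loop runs at 20 Hz
        while self.running:
            if self.img is not None:
                self.code = self.predict(self.img)
            time.sleep(0.5)

    def run_threaded(self, img):
        # Called every loop iteration: store the frame, return the cached code
        self.img = img
        return self.code

part = LightDetectorPart()
t = threading.Thread(target=part.update, daemon=True)
t.start()
code = part.run_threaded("frame")   # returns immediately with the cached result
part.running = False                # signal the worker thread to exit
```

The drive loop never blocks on inference; at worst it acts on a light code that is half a second stale.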

We also have another function that takes the number from light detection, the steering, and the throttle as inputs, and outputs new steering and throttle values. Its basic logic is to stop the car when red is seen and keep going when green is seen.
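That control step can be sketched as a pure function. The 0/1/2 encoding and the printed messages follow the description above; implementing "stop" as zero throttle is our assumption about how the stop is applied, and the function name is hypothetical.

```python
RED, NO_LIGHT, GREEN = 0, 1, 2

def control_from_light(light_code, steering, throttle):
    """Override the pilot's throttle when a red light is detected."""
    if light_code == RED:
        print("Red: stop")
        return steering, 0.0          # hold position until the light changes
    if light_code == GREEN:
        print("Green: moving")
    return steering, throttle         # no light or green: drive normally
```

Because the function is stateless, the car resumes automatically as soon as the detector starts reporting green again.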

Sign Detection

The car detects stop signs using an object classifier in OpenCV. If the car detects a stop sign, the behavioral state is changed to make the car switch lanes.

Integration into Donkey

Classifier Creation

Classification occurs inside the drive loop in vehicle.start(). To allow the vehicle object to access the classifier object, the classifier was added inside the Vehicle class and instantiated in the __init__ method. Classification was added inside the drive loop rather than as a Donkey part because it was simpler to implement and had no serious effect on run time.

Init.jpg

Drive Loop

First, the most recent image from the camera is retrieved and saved as a .jpeg so it can be read into OpenCV. This was the most efficient way we found to get the image into OpenCV: the image is held in memory as a surface, and we could not determine how to convert that data directly into a form OpenCV can read. Although this is not the most elegant solution, it is both time and memory efficient.

The image is converted into gray scale using OpenCV to make the classification more accurate.

Applying the classifier to the image yields an array of results. If the number of stop signs detected is non-zero, then the car switches lanes.

Stopsigns.jpg
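The detection step reduces to checking whether the cascade returned any bounding boxes. Here is a sketch of that decision; detections stands for the array of (x, y, w, h) boxes that detectMultiScale returns (shown only in a comment so the sketch stays self-contained), and both function names are ours.

```python
def stop_sign_seen(detections):
    """True if the OpenCV cascade returned at least one bounding box."""
    # detections = classifier.detectMultiScale(gray_image)
    return len(detections) > 0

def on_new_frame(detections, increment_state):
    # Switch lanes by advancing the behavioral state on a detection
    if stop_sign_seen(detections):
        increment_state()
        return True
    return False
```

Passing increment_state in as a callable keeps the detection logic testable without a real behavior object.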

Behavioral State Manipulation

Donkey changes behavioral states using the command bh.increment_state().

Unfortunately, this requires the classification code to have access to the attributes of the object bh. To solve this problem, we instantiated the object bh inside the Vehicle class in the __init__ method along with the classifier object. If the car detects a stop sign, it changes the state with self.bh.increment_state().

It is also necessary to change any references in the manage.py drive method from bh to V.bh. Fortunately, there is only one reference.

We do not need to create another bh object because one already exists. It was created when the Vehicle object V was created at the beginning of the method.
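Put together, the access pattern looks like the sketch below. BehaviorStub stands in for Donkey's real behavior part, and handle_stop_sign is a hypothetical name; the point is only where bh lives and how drive-loop code reaches it as self.bh.

```python
class BehaviorStub:
    """Stand-in for Donkey's behavior part (the real one lives in donkeycar)."""
    def __init__(self):
        self.state = 0
    def increment_state(self):
        self.state += 1

class Vehicle:
    def __init__(self):
        # bh is created once here, alongside the stop-sign classifier,
        # so drive-loop code can reach it as self.bh
        self.bh = BehaviorStub()

    def handle_stop_sign(self, num_detections):
        if num_detections > 0:
            self.bh.increment_state()   # switch lanes

V = Vehicle()
V.handle_stop_sign(1)                   # one detection: state advances
```

Since manage.py's drive method already holds the Vehicle object V, its references only need to change from bh to V.bh.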


Lane Changing

Stop Light Action


Biggest Challenges

The biggest problem we faced was reliability. The driving model was sometimes inconsistent due to the small lanes on the indoor track. This, in turn, sometimes made the light detection code malfunction. If we had more time, we would train the behavioral model on more data to make it consistent.