2019FallTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Revision as of 18:03, 8 December 2019

Project Overview

Our project was to integrate OpenCV into donkeycar to try to improve the model's training time and robustness. We used OpenCV to pre-process images, extracting edges and highlighting the white and yellow lanes. We wanted to see whether training on those images combined into a three-channel image (edges, white lane, yellow lane) would improve training time and robustness compared to the original donkeycar model, which trains on RGB images.

Team Members

  • Cyrus Shen - ECE
  • Chenfeng Wu - ECE
  • Maximilian Stadler - CSE (UCSD Extension)
  • Isabella Franco - MAE


Mechanical Design


Camera Mount

3D model of camera mount used

Car Circuitry

circuit layout of car

Aspect Ratios and Cropping

The original configuration records images at 160x120 pixels, a 4 by 3 aspect ratio, and suggests doubling the resolution to 320x240 for the outdoor setting. The camera itself is a 2 MP camera capable of recording Full HD images, so its native aspect ratio is 16 by 9. To allow for an easy comparison, we recorded indoor and outdoor images at the same resolution. We also tried different aspect ratios to find out how much information, especially in the corners, is lost to cropping. We modified the framework so that only the height and the aspect ratio are specified in the configuration files; the image width is derived from them. We tested 1 by 1, 4 by 3, and 16 by 9, and concluded that the native wide-angle ratio preserves the most information in the corners, which had been prone to errors from the start.

code snippet for aspect ratio
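The width derivation described above can be sketched as follows (a minimal sketch; the function name and the rounding to even pixel counts are assumptions for illustration, not the team's actual code):

```python
# Derive the image width from a configured height and aspect ratio,
# so only those two values need to appear in the config file.
def derive_image_size(height, aspect_ratio):
    """aspect_ratio is width/height, e.g. 1.0, 4/3, or 16/9."""
    width = int(round(height * aspect_ratio))
    # Camera drivers often require even dimensions, so round up to even.
    width += width % 2
    return width, height

# The three tested aspect ratios at a 120 px height:
for ratio in (1.0, 4 / 3, 16 / 9):
    print(derive_image_size(120, ratio))
```

At 120 px height this reproduces the familiar 160x120 frame for 4 by 3 and widens the frame for 16 by 9, which is where the extra corner information comes from.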

1x1 aspect ratio straight
1x1 aspect ratio curve
4x3 aspect ratio straight
4x3 aspect ratio curve
16x9 aspect ratio straight
16x9 aspect ratio curve

Integrating OpenCV

The Donkeycar framework is written in a relatively rigid fashion: many hard-coded sections and in-line definitions make customization difficult. Our goal was to compare the original models with models using visual primitives as priors, namely edges and lane segmentations. While these primitives are easy to compute with OpenCV, the integration had to be done carefully to keep the code modular and to allow a direct comparison between the unmodified original version and the OpenCV-based models. To keep modifications flexible, our code was kept parametric, which resulted in several additions to config.py that make the preprocessing customizable.

TODO add a general overview of code-changes

TODO add config allowing for customization
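As a placeholder until the overview above is filled in, the parametric additions to config.py might look like the following. Every name and value here is a hypothetical illustration of the kind of knobs described, not the team's actual configuration:

```python
# --- OpenCV preprocessing (all names hypothetical, for illustration) ---
USE_OPENCV_PREPROCESSING = True   # False reproduces the original RGB pipeline

# Image geometry: width is derived from height and aspect ratio.
IMAGE_H = 120
ASPECT_RATIO = 16 / 9

# HLS thresholds for the white and yellow lane masks.
WHITE_LOWER = (0, 200, 0)
WHITE_UPPER = (255, 255, 255)
YELLOW_LOWER = (10, 0, 100)
YELLOW_UPPER = (40, 255, 255)

# Canny edge detection thresholds.
CANNY_LOW = 50
CANNY_HIGH = 150
```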

Lane Finding with OpenCV

HSL Colors for Lane Detection

To single out the white and yellow lanes from the rest of the image, we thresholded in the HSL color space (called HLS in OpenCV). The RGB image taken by the camera was converted with OpenCV's COLOR_RGB2HLS and then thresholded so that only the white and yellow regions of the image were kept.

  • insert white lane image
  • insert yellow lane image
HSL description

Canny Edge Detection

To outline the lanes, we used OpenCV's Canny function, which detects edges in an image.

  • insert canny edge image

Combined 3-D image

We combined the white-lane, yellow-lane, and Canny edge images into a single three-channel image and trained on that instead of the original RGB image.

  • insert new 3D image
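Since each of the three preprocessed images is single-channel, stacking them gives an array with the same shape as an RGB frame, so it drops into the existing training pipeline unchanged. A sketch (function name assumed; inputs assumed to be same-size uint8 masks):

```python
import numpy as np

def combine_channels(white, yellow, edges):
    """Stack three single-channel images into one 3-channel training image."""
    return np.dstack((white, yellow, edges))

# Example with empty 160x120 masks:
white = np.zeros((120, 160), dtype=np.uint8)
yellow = np.zeros((120, 160), dtype=np.uint8)
edges = np.zeros((120, 160), dtype=np.uint8)
print(combine_channels(white, yellow, edges).shape)  # → (120, 160, 3)
```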



Testing

To compare our OpenCV model to the stock donkeycar model, we trained both on the same images. Ten separate models were trained: five for an indoor track and five for an outdoor track. The five indoor models were trained with 1000, 5000, 10000, 20000, and 30000 images of the indoor track with the car going clockwise; the same was done for the outdoor models, except the images were taken from the outdoor track.

None of the clockwise models worked when driving counterclockwise.

Interesting Note

The 1000, 5000, and 10000 indoor image sets were recorded with the Makerspace doors closed, so the part of the track in front of the doors appeared darker. When we tested those three models with the doors open, which brightened that part of the track, none of the three donkeycar models could pass it. The OpenCV 10000 model, however, passed that section most of the time, while the donkeycar 10000 model never did. This suggests that our OpenCV model handles brighter track conditions better than stock donkeycar.

At 20000 and 30000 training images, both the donkeycar and OpenCV models could pass that part of the track, which could be due to overfitting.

An example of the model failing with the doors open is below:

Video: https://youtu.be/qoBcDreH5PE
  • donkeycar 5000 images model with doors open[1]
  • donkeycar 5000 images model with doors closed[2]

We tested each model based on the following criteria:

  1. Training time
  2. Local Angle completion of 3 clockwise laps on the indoor track
  3. Autonomous completion of 3 clockwise laps on the indoor track
  4. Completion of X clockwise laps on the outdoor track
  5. Completion of X counterclockwise laps on the outdoor track
  6. Able to drive when environment is brighter or darker than the trained images?

Indoor Track Model Results


Criteria \ Images Trained 1000 5000 10000 20000 30000
Training Time filler filler filler filler filler
Local Angle Completion Χ Χ
Autonomous Completion Χ Χ
Brighter Environment Χ Χ Χ


Criteria \ Images Trained 1000 5000 10000 20000 30000
Training Time filler filler filler filler filler
Local Angle Completion Χ
Autonomous Completion Χ
Brighter Environment Χ Χ

Outdoor Track Model Results


  • filler


  • filler