2019FallTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Introduction

Our project was to integrate OpenCV into donkeycar to try to improve the training time and robustness of the model. We used OpenCV to pre-process the camera images, extracting edges and highlighting the white and yellow lanes. We wanted to see whether training on those images, combined into a single three-channel image (edges, white lane, yellow lane), would improve training time and robustness compared to the original donkeycar model, which trains on RGB images.

Team Members

  • Cyrus Shen - ECE
  • Chenfeng Wu - ECE
  • Maximilian Stadler - CSE (UCSD Extension)
  • Isabella Franco - MAE

Mechanical Design

Camera Mount

Car Circuitry

[Image: ECE148 circuit.JPG, the wiring of the car's circuitry]

Integrating OpenCV

The donkeycar framework is written in a relatively rigid fashion: many hard-coded sections and in-line definitions make customization difficult. Our goal was to compare the original models with models that use visual primitives as priors, namely edges and lane segmentations. While these primitives are easy to compute with OpenCV, the integration had to be done carefully to keep the code modular, so that the original version could be compared against the OpenCV-based models without any code changes. To keep modifications flexible, our code was also kept parametric, which resulted in several additions to config.py that allow the preprocessing to be customized.

TODO add general code-changes overview

TODO add config allowing for customization
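
Until the overview above is filled in, the sketch below only illustrates the kind of switches such config.py additions could expose; every name and value here is hypothetical and is meant to show the parametric style described above, not the team's actual configuration.

  # Hypothetical config.py additions (illustrative names/values only).
  # A single switch keeps the original RGB pipeline available without code changes.
  USE_OPENCV_PREPROCESSING = True

  # HLS thresholds for the white and yellow lane masks
  WHITE_HLS_LOW = (0, 200, 0)
  WHITE_HLS_HIGH = (255, 255, 255)
  YELLOW_HLS_LOW = (10, 0, 100)
  YELLOW_HLS_HIGH = (40, 255, 255)

  # Canny edge detection thresholds
  CANNY_LOW_THRESHOLD = 50
  CANNY_HIGH_THRESHOLD = 150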

Lane Finding with OpenCV

HSL Colors for Lane Detection

To single out the white and yellow lanes from the rest of the image, we thresholded the image for white and yellow in the HSL color space. The RGB image taken by the camera was converted using OpenCV's COLOR_RGB2HLS and then thresholded so that only the white and yellow regions of the image were kept.
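
A minimal sketch of this step, assuming typical HLS threshold ranges for white and yellow (the team's exact values are not listed here):

  import cv2
  import numpy as np

  def lane_masks(rgb_img):
      # Convert the RGB camera frame to OpenCV's HLS representation of the HSL color space.
      hls = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2HLS)
      # White: high lightness, any hue or saturation (threshold values are illustrative).
      white = cv2.inRange(hls, np.array([0, 200, 0]), np.array([255, 255, 255]))
      # Yellow: hue roughly 10-40 on OpenCV's 0-179 scale, with moderate saturation (illustrative).
      yellow = cv2.inRange(hls, np.array([10, 0, 100]), np.array([40, 255, 255]))
      return white, yellow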

  • insert white lane image
  • insert yellow lane image
  • insert HSL info images
HSL description

Canny Edge Detection

To outline the lanes, we used OpenCV's Canny function, which detects the edges in an image.
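
A short sketch of this step; the blur kernel and thresholds are illustrative defaults, not the team's tuned values:

  import cv2

  def edge_image(rgb_img, low=50, high=150):
      # Grayscale and lightly blur the frame, then run Canny edge detection.
      gray = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2GRAY)
      blurred = cv2.GaussianBlur(gray, (5, 5), 0)
      return cv2.Canny(blurred, low, high)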

  • insert canny edge image

Combined 3-D image

We combined the white mask, yellow mask, and Canny edge images into a single three-channel (3-D) image and trained on that instead of the original RGB image.
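
Stacking the three single-channel images can be done with a single NumPy call; this is only a sketch, with the channel order taken from the introduction's description (edges, white lane, yellow lane):

  import numpy as np

  def combined_image(edges, white_mask, yellow_mask):
      # Stack the three single-channel (H x W) arrays into one H x W x 3 image,
      # matching the shape of the RGB frames the donkeycar model normally trains on.
      return np.dstack((edges, white_mask, yellow_mask))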

  • insert new 3D image

Methodology

To compare our OpenCV model with the current donkeycar model, we trained both models on the same images. Six separate models were trained: three for an indoor track and three for an outdoor track. The three indoor-track models were trained on 1,000, 5,000, and 10,000 images of the indoor track with the car driving clockwise. The same was done for the outdoor-track models, except that the images were taken from the outdoor track.

We tested each model based on the following criteria:

  1. Training time
  2. Completion of X clockwise laps on the indoor track
  3. Completion of X counterclockwise laps on the indoor track
  4. Completion of X clockwise laps on the outdoor track
  5. Completion of X counterclockwise laps on the outdoor track
  6. Ability to drive in both bright and dark outdoor conditions

Indoor Track Model Results

Donkeycar

  • filler

OpenCV

  • filler

Outdoor Track Model Results

Donkeycar

  • filler

OpenCV

  • filler

Old Project Wiki Layout

Donkeycar Indoor Track Model Results

insert video

OpenCV Indoor Track Model Results

insert video

Donkeycar Outdoor Track Model Results

insert video

OpenCV Outdoor Track Model Results

insert video