2018FallTeam1

From MAE/ECE 148 - Introduction to Autonomous Vehicles

The 2018 Fall Team 1 focused on enhancing the driving mechanics of their autonomous vehicle. While the car was able to drive around a known track using reinforcement learning to create a model, the team went a step further and developed line-recognition capabilities by implementing OpenCV in the existing DonkeyCar system.

RoboCar.jpg

Team Members

  • YanQing Cheng - Computer Engineering
  • Zachary Pierson - Mechanical Engineering
  • Ryan Yung - Computer Engineering

Reinforcement Training

The initial build of the car relied on training it on set tracks to build a model that it could later use to drive itself.

Hardware Design

The construction of the car consisted of assembling the car chassis, designing a mounting plate and camera mount, and wiring the electrical components.

Mounting Plate

Plate.jpg
The car required a plate to mount all components to, other than the battery, motor, and ESC. This meant the plate needed enough room on top to hold all the parts while still allowing access to the components below it. The design was made in SolidWorks, cut on a Universal Laser Systems laser cutter, and mounted to the car.


Camera Mount

Stand Green.jpg
For the training to work, it requires some type of input. For this project, that input was a camera attached to the Raspberry Pi. The data the camera reads is the lanes, so the camera needed to be angled downward to read that data efficiently. The camera also needed to sit high enough to see over the front of the car, hence the tall mounting stand.




Training

Once the car was assembled, it was manually driven around the track by the driver using a DualShock 3 controller. After a sufficient number of frames was acquired, the data could be compiled to create a model for the car to replicate.
While it is important to stay between the lanes, having the car stay perfectly between the lanes at all times and never diverge from a single path does not result in the best model. A good model is one built from diverse input with many different data points. A wider range of data can be gathered by diverging from a straight path while training between the lanes, and by recording data at different times and in different environments, so that the running model has more data to pull from.
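
For reference, data collection and training in DonkeyCar are driven from the command line. The following is a sketch assuming the donkeycar 2.x interface of the time; exact flags can vary between versions, and the tub and model paths are placeholders:

    python manage.py drive --js                                    # drive manually with the joystick, recording frames
    python manage.py train --tub <tub_path> --model <model_path>   # compile the recorded frames into a model
    python manage.py drive --model <model_path>                    # drive autonomously with the trained model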
Fully autonomous driving on Track 1.
Fully autonomous driving on Track 2.

Track 1

The first track is a simple track in a controlled environment with relatively few variables. This is an optimal environment to test the car and the DonkeyCar code before moving on to the larger track.

Track 2

The second track is more complicated, with multiple turns and a long straightaway. Since it is located outside, the data is also subject to the weather, the time of day, changing surroundings, and other variables. With these variables, more data and more training are required, as discussed previously.

Between the Lines Navigation

LineDiagram.jpg

While the car is able to maneuver around a known track, this only works as long as the track stays the same and the model has enough data to handle small variations. To improve upon this, a system can be implemented that recognizes the track and navigates accordingly, without needing to train the car and create a model. In order to stay between the lines, two things must be known: the environment surrounding the car, and the location of the car. The environment is found by analyzing the image taken from the camera to find the lanes. Once the lanes are found, the location of the car can be determined in reference to them. The slope of each lane is found and extended to the bottom of the frame to create a left point and a right point. The midpoint between the two points is calculated, and, taking the middle of the frame as the location of the car, the car turns toward the midpoint, thus navigating to the center of the lanes.
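
The geometry described above can be condensed into a short sketch (a hypothetical helper for illustration; left_x and right_x stand for the lane intercepts at the bottom of the frame and are not names from the project code):

    def steering_target(left_x, right_x, frame_width):
        # midpoint between the two lane intercepts at the bottom of the frame
        midpoint = (left_x + right_x) / 2.0
        # the car is assumed to sit at the horizontal center of the frame
        car_x = frame_width / 2.0
        # positive means the center of the lane is to the right of the car
        return midpoint - car_x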


OpenCV

Color Recognition

In order to analyze the environment, the image must be broken down to isolate the desired parts and then recombined into one image, from which the values needed to find the individual slopes can be calculated.
Image recorded by the mounted camera.

Import all the libraries we need to use.

   import cv2
   import numpy as np
   import math

We created this function inside Controller.py.

   def img_process(img):
       image = img

This range is subject to change depending on the intensity of the sunlight.

   lower_red = np.uint8([140, 40, 50]) 
   upper_red = np.uint8([150, 75, 75])
   red_mask = cv2.inRange(image, lower_red, upper_red)
An output of the red detected in the picture.
   lower_white = np.uint8([160, 150, 170])
   upper_white = np.uint8([220, 220, 220])
   white_mask = cv2.inRange(image, lower_white, upper_white)
An output of the white detected in the picture.
   mask = cv2.bitwise_or(white_mask, red_mask)
A combination of the red and white masks.

This gives us an image containing just the white and red lines.

   result = img.copy()
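
For debugging, the combined mask can be applied back onto the original frame to see exactly which pixels passed the color test (a small sketch using standard OpenCV calls):

    # black out everything except the pixels kept by the combined mask
    masked_view = cv2.bitwise_and(image, image, mask=mask)

Note that cv2.inRange is applied to the raw camera frame here, which is why the bounds are sensitive to sunlight; converting the frame to HSV first with cv2.cvtColor is a common way to make the thresholds more robust to lighting changes.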

Line Calculation

Once the desired lines are isolated, the mask can be analyzed to detect lines. By finding lines, the environment surrounding the car can be determined, as well as the location of the car relative to the lanes.

Here we get the red lines from the red mask.

   height,width = mask.shape
   skel = np.zeros([height,width],dtype=np.uint8)
   kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3,3))
   # morphological skeleton: erode the red mask until it is empty,
   # collecting the thin residue removed at each step
   while(np.count_nonzero(red_mask) != 0):
       eroded = cv2.erode(red_mask,kernel)
       temp = cv2.dilate(eroded,kernel)
       temp = cv2.subtract(red_mask,temp)
       skel = cv2.bitwise_or(skel,temp)
       red_mask = eroded.copy()
   red_lines = cv2.HoughLinesP(skel,rho = 1,theta = np.pi/180,threshold = 5,minLineLength=2,maxLineGap=20)

And white lines from the white mask.

   skel = np.zeros([height,width],dtype=np.uint8)
   kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3,3))
   # same skeletonization, applied to the white mask
   while(np.count_nonzero(white_mask) != 0):
       eroded = cv2.erode(white_mask,kernel)
       temp = cv2.dilate(eroded,kernel)
       temp = cv2.subtract(white_mask,temp)
       skel = cv2.bitwise_or(skel,temp)
       white_mask = eroded.copy()
   white_lines = cv2.HoughLinesP(skel,rho = 1,theta = np.pi/180,threshold = 5,minLineLength=2,maxLineGap=20)
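
Since the red and white skeletons are computed with identical code, the loop can be factored into a small helper (a refactoring sketch, not the project's original code):

    def skeletonize(mask):
        # morphological skeleton: erode the mask until it is empty,
        # collecting the one-pixel-wide residue removed at each step
        skel = np.zeros(mask.shape, dtype=np.uint8)
        kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
        work = mask.copy()
        while np.count_nonzero(work) != 0:
            eroded = cv2.erode(work, kernel)
            opened = cv2.dilate(eroded, kernel)
            skel = cv2.bitwise_or(skel, cv2.subtract(work, opened))
            work = eroded
        return skel

With this helper, each color reduces to a single call, e.g. red_lines = cv2.HoughLinesP(skeletonize(red_mask), ...).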

We create four arrays to store x,y values for the red lines and white lines.

   posX = []
   posY = []
   negX = []
   negY = []

This height determines where we cut off the top part of the image, since there is a lot of noise in the upper part of the image.

   useHeight = height / 2.5
   if red_lines is None:    # cv2.HoughLinesP returns None when no lines are found
       red_lines = []
   for xy in red_lines:
       x1 = xy[0][0]
       y1 = xy[0][1]
       x2 = xy[0][2]
       y2 = xy[0][3]
       if x1 != x2 and y1 != y2:
           gradient = (float(y1) - y2) / (x1 - x2)
           if y1 < useHeight:
               x1 = x1 - (y1 - useHeight) / gradient
               y1 = useHeight
           if y2 < useHeight:
               x2 = x2 - (y2 - useHeight) / gradient
               y2 = useHeight
           if y1 != y2:
               cv2.line(result,(int(x1),int(y1)),(int(x2),int(y2)),(255,0,0),1)
               posX.append(x1)
               posY.append(-1 * y1)
               posX.append(x2)
               posY.append(-1 * y2)
   if white_lines is None:  # same guard for the white mask
       white_lines = []
   for xy in white_lines:
       x1 = xy[0][0]
       y1 = xy[0][1]
       x2 = xy[0][2]
       y2 = xy[0][3]
       if x1 != x2 and y1 != y2:
           gradient = (float(y1) - y2) / (x1 - x2)
           # print('gradient', gradient)
           if y1 < useHeight:
               x1 = x1 - (y1 - useHeight) / gradient
               y1 = useHeight
           if y2 < useHeight:
               x2 = x2 - (y2 - useHeight) / gradient
               y2 = useHeight
           if y1 != y2:
               cv2.line(result,(int(x1),int(y1)),(int(x2),int(y2)),(255,0,0),1)
               negX.append(x1)
               negY.append(-1 * y1)
               negX.append(x2)
               negY.append(-1 * y2)
The resulting lines detected and calculated using OpenCV.
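
The two loops above differ only in which arrays they fill, so they could likewise be collapsed into one helper (a sketch; the function name is made up, and the None guard mirrors the check added above):

    def collect_points(lines, use_height, xs, ys, img):
        # clip each Hough segment to the region below use_height and
        # record its endpoints, negating y so that up is positive
        if lines is None:
            return
        for xy in lines:
            x1, y1, x2, y2 = xy[0]
            if x1 == x2 or y1 == y2:
                continue
            gradient = (float(y1) - y2) / (x1 - x2)
            if y1 < use_height:
                x1 = x1 - (y1 - use_height) / gradient
                y1 = use_height
            if y2 < use_height:
                x2 = x2 - (y2 - use_height) / gradient
                y2 = use_height
            if y1 != y2:
                cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 1)
                xs.extend([x1, x2])
                ys.extend([-y1, -y2])

This would be called as collect_points(red_lines, useHeight, posX, posY, result) and collect_points(white_lines, useHeight, negX, negY, result).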

The expectedY is the Y coordinate at which we want to calculate the X value (the y values were negated above, so -120 corresponds to image row 120).

   expectedY = -120
   flag1 = 0
   flag2 = 0

A check here to handle the case where no lines are detected.

   if (len(posX) != 0 and len(posY) != 0):
       pPos = np.polyfit(posX, posY, 1)
       xPos = (expectedY - pPos[1]) / pPos[0]
       flag1 = 1
   else:
       print("posX or posY was empty")
   if (len(negX) != 0 and len(negY) != 0):
       pNeg = np.polyfit(negX, negY, 1)
       xNeg = (expectedY - pNeg[1]) / pNeg[0]
       flag2 = 1
   else:
       print("negX or negY was empty")

If both lines are detected, we take the midpoint of the two x values; otherwise we just let the car go straight.

   if (flag1 and flag2):
       xResult = (xPos + xNeg) / 2
       return xResult
   else:
       return 0
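
Downstream, the returned x position still has to be mapped to a steering command. A minimal sketch of one such mapping (the function name, the 160-pixel frame width, and the gain are illustrative assumptions, not the project's tuned values):

    def to_steering(x_result, frame_width=160, gain=2.0):
        # map the target x position to a steering value in [-1, 1]
        if x_result == 0:      # sentinel from img_process: no lanes found
            return 0.0
        offset = (x_result - frame_width / 2.0) / (frame_width / 2.0)
        return max(-1.0, min(1.0, gain * offset))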

Testing & Running

Handheld testing of the car's wheels reacting to the track.

The first testing done was to make sure that the car was reacting to the image seen by the camera. While the car is held in place and rotated, the wheels can be seen moving. Although the wheels move erratically, this is due to the inconsistent image being picked up and the odd angle caused by the car being held above the track. The OpenCV line-detection algorithm draws a slightly different set of lines each frame, which changes the net calculation and produces the jitter.

The script was modified to update the steering angle at a lower frequency than the rate at which the car was sending images. The car captures images at 20 Hz; after testing updates at 5 and 10 Hz, it was found that 10 Hz was a bit more stable and less jittery going down the track.
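
One way such decimation can be implemented (a sketch of the idea with made-up names, not the project's exact code):

    class SteeringThrottler:
        # hold the last steering angle and only accept a new one every
        # `interval` frames; interval=2 turns a 20 Hz camera into 10 Hz steering
        def __init__(self, interval=2):
            self.interval = interval
            self.count = 0
            self.last_angle = 0.0

        def update(self, new_angle):
            self.count += 1
            if self.count % self.interval == 0:
                self.last_angle = new_angle
            return self.last_angle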

Testing the line recognition

When driving, the car oscillates rapidly from left to right; however, it stays between the lines.

Despite the progress made with the line recognition, it could be improved by masking between sequential images. We expect that future groups will have a good jumping-off point for taking over the project and refining it for more consistent and smoother behavior.
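
As a rough illustration of that idea (untested, with made-up names): the combined color mask could be ANDed with the mask from the previous frame, so that pixels appearing in only a single frame drop out.

    prev_mask = None

    def stabilize_mask(mask):
        # keep only pixels that were also detected in the previous frame,
        # suppressing single-frame noise at the cost of a one-frame lag
        global prev_mask
        if prev_mask is None:
            prev_mask = mask.copy()
            return mask
        stable = cv2.bitwise_and(mask, prev_mask)
        prev_mask = mask.copy()
        return stable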