2018SpringTeam7

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Team7car.jpg

The objective of this project is to add an additional camera and an array of ultrasonic sensors to the base model car. These additional sensors interface with the DonkeyCar software and allow the car to identify and react to common city driving events. The following steps were taken to accomplish this task:

  1. Brushless to Brushed DC Motor drive train conversion
  2. Interface two Raspberry Pi 3 Model Bs
  3. Use OpenCV to identify and classify common road signs
  4. Implement all tasks into the DonkeyCar framework


Team Members

William Liu - william.liu@jacobs.ucsd.edu

Jingpei Lu - jil360@ucsd.edu

Gates Zeng - gjzeng@ucsd.edu

Drivetrain Conversion

The brushless DC motor (BLDC) provided on the original chassis has one fundamental problem: it cannot be controlled effectively at low speeds. This makes it difficult to accurately emulate congested city driving environments. While the BLDC performed well for high-speed driving, its lack of accuracy and precision was not ideal for this project. We found two solutions: one, use a sensored BLDC for improved control while maintaining top-end performance; two, use a brushed DC motor for improved control at the cost of top-end performance.

The brushed DC motor was chosen because its top-end performance was adequate for the scope of the project and, more importantly, it costs an order of magnitude less than a sensored BLDC.

Hardware

1/10 Scale Chassis with Brushed DC Motor

Initially, we had planned to port all of our electronics onto a smaller chassis and convert its brushless DC motor drivetrain into a brushed DC motor drivetrain, as a brushed DC motor was readily available. The drivetrain conversion on the small chassis proved to be a success: we were able to achieve accurate and precise control of the throttle. However, the smaller chassis had two drawbacks. First, although the car is smaller, the turning radius of the 1:10 scale car was much worse than that of the 1:8 scale car. Second, if we were to use the smaller car, we would need to retrain all of our self-driving models to achieve autonomous driving again. Therefore, our next step was to look at a brushed DC motor conversion for the 1:8 scale car.

Design Constraints

We had three main considerations when looking for a brushed DC motor suitable for the car:

  1. The motor needs to fit and mount onto the current motor mount.
  2. The motor needs to provide enough power to drive the car.
  3. The purchased motor needs to arrive in a timely manner.

Each of these constraints proved to be a challenge; we could easily find a motor that met two, but not all three, of them.

For the physical dimensions, we needed a motor with a diameter of 35.8 ± 4.0 mm, a length of 73.7 ± 4.0 mm, and a shaft diameter of exactly 5.0 mm.

The original BLDC had a max current of 90 A, a max voltage of 27 V, a max power of 2400 W, a KV rating of 2200, and a max speed of 50,000 RPM. Since we were looking for a brushed motor, we looked for something with at least 10% of the original motor's power, i.e. roughly 240 W.

Unfortunately, most of the motors we found that fit the first two criteria did not meet the last: shipping time. Therefore, we looked to make things work with what we had on hand by converting the 1/10 scale motor to work on the 1/8 scale car.

Designing the Shaft Adapter

The 1/10 scale motor bolted directly into the 1/8 scale car's motor mount. The only dimension that did not match was the diameter of the shaft: the 1/10 scale motor had a 3 mm shaft, while the 1/8 scale car required a 5 mm shaft.


Because the 1/10 scale motor used a round shaft, we were not sure whether we could design an adapter that would not slip while driving. However, we wanted to try it anyway.

Proof of Concept

Paper adapter.jpg

We first constructed an adapter out of paper; this provided a cheap and easy way to see whether an adapter would work at all. The paper adapter did not slip. However, since it was made of paper, the fit was not perfect and there was no way to provide a proper hole for a set screw, so we observed a lot of eccentric rotation of the pinion while driving.

Production Release

Next, we designed an adapter for 3D printing. This allowed us to make a perfectly fitting adapter that also includes a hole for a set screw to mate with the shaft. This eliminated the eccentric rotation of the pinion and produced great performance.

3to5.PNG

Exploded 3to5.PNG Together 3to5.PNG

Interfacing the Pis

While a Raspberry Pi is a widely available and affordable platform for an autonomous vehicle, it does have hardware limitations that required our team to use a second Pi. One Pi runs the DonkeyCar software to autonomously drive the car. The second Pi processes traffic signs and reads data from the ultrasonic sensors. We needed a way to interface the two Pis so that they could communicate with each other effectively. There were many solutions to this problem; we considered communication over WiFi, Ethernet, I2C, and UART. We chose UART since the implementation seemed the most straightforward.
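
The sign-detection side of this setup is covered in the sections below. For the ultrasonic side, reading a sensor looks roughly like the following sketch, which assumes an HC-SR04-style trigger/echo module; the GPIO pin numbers are placeholders, not our exact wiring.

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                      # placeholder BCM pin numbers
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # A 10 us pulse on TRIG starts one measurement
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    # ECHO is then held high for a time proportional to the distance
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    # Round trip at roughly 343 m/s, converted to centimeters
    return (end - start) * 34300 / 2.0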

Minicom

First, three female-to-female jumper cables were connected between the Pis in the following configuration:

TX <-> RX

RX <-> TX

GND <-> GND

To test that serial communication between the Pis was working, Minicom was installed, UART was enabled, and the serial console service was disabled on both Raspberry Pis.

Installing Minicom

sudo apt-get install minicom

Enabling UART

To enable UART, open "/boot/config.txt":

sudo nano /boot/config.txt

Add the lines below at the end of the file

# Enable UART
enable_uart=1

Reboot

sudo reboot

Disabling Console Service

Disable the serial-getty service:

sudo systemctl disable serial-getty@ttyS0.service

Open "/boot/cmdline.txt"

sudo nano /boot/cmdline.txt

Remove "console=serial0,115200", then save the file.

Reboot

sudo reboot

Testing Serial Communication

Launch minicom on both Pis by running:

sudo minicom -b 115200 -o -D /dev/ttyS0

To see what you're typing on minicom, turn on Local Echo through the menu by typing Ctrl-A, then E.

You should now be able to type on one Pi, see the same text in the other Pi, and vice versa.

Pyserial

Installing Pyserial

To install Pyserial, run:

sudo apt-get install python-serial

Test Pyserial

To test that Pyserial works, we made a simple program that sent strings from one Pi to another.

This program takes an input and sends it to the other Pi:

import serial
import time

# Open the UART with a non-blocking read timeout
ser = serial.Serial(
    port = '/dev/serial0',
    baudrate = 115200,
    bytesize = serial.EIGHTBITS,
    timeout = 0)

while True:
    line = raw_input("Input: ")
    # raw_input() strips the newline, so append one as the message
    # delimiter that the receiver watches for
    ser.write(line + '\n')
    time.sleep(0.1)

This program listens for incoming bytes, buffers them, and prints each complete line to the console:

import serial

# Open the UART with the same settings as the sender
ser = serial.Serial(
    port = '/dev/serial0',
    baudrate = 115200,
    bytesize = serial.EIGHTBITS,
    timeout = 0)

print("connected to: " + ser.port)

temp = ''

while True:
    # Read one byte at a time and buffer until a newline arrives
    for c in ser.read(1).decode('utf-8'):
        temp = temp + c
        if c == '\n':
            print(temp)
            temp = ''

Detecting Road Signs

To detect the traffic signs, we are using the Haar cascade classifier in OpenCV. Haar feature-based cascade classification is a machine learning approach trained with AdaBoost. OpenCV comes with both a trainer and a detector for cascade classifiers, so it is easy to train and use. Before working on the cascade classifier, you have to set up OpenCV in your Python environment: install OpenCV and Numpy on your computer, then clone the latest version of OpenCV into your working directory.

   git clone https://github.com/Itseez/opencv.git

Training Haar Cascade Classifier

To train a Haar cascade classifier, we first have to prepare the training data, which includes positive samples and negative samples. The positive samples are images containing the actual objects we want to detect; the easiest way to create them is with the opencv_createsamples application, which is documented in the OpenCV manual. The negative samples are arbitrary images that do not contain the objects we want to detect. You can create these images manually or extract them from videos.

Once we have those samples, we have to create description files for both the positive and negative samples; their format is described in the OpenCV documentation. With the positive samples, negative samples, and description files in hand, we can train our model using the opencv_traincascade application, as sketched below.
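
As an illustration, a typical training run looks like the commands below; the image names, sample counts, and 20x20 detection window are placeholders to adapt to your own data.

   opencv_createsamples -img stopsign.jpg -bg negatives.txt -vec samples.vec -num 1000 -w 20 -h 20
   opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt -numPos 900 -numNeg 500 -numStages 10 -w 20 -h 20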

For details and questions about how to train a Haar cascade classifier, check the tutorials below:

https://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html

https://pythonprogramming.net/haar-cascade-object-detection-python-opencv-tutorial/

Detecting Traffic Signs

After training, we will have .xml files, which are the trained classifier models. To use them, copy the .xml files into the project folder. Then we can write a Python script to detect the traffic signs.

First, we have to import the OpenCV library and Numpy library.

   import numpy as np
   import cv2

Then, we can load our classifier. For example, if we want to load the classifier of the stop sign which is named stopsign_classifier.xml, we can do this:

   stop_detect = cv2.CascadeClassifier('./stopsign_classifier.xml')

We want to detect stop signs in video. In the code below, we capture video from the camera and read each frame as img.

   cam = cv2.VideoCapture(0)
   ret, img =cam.read()

In this case, we are using the built-in webcam of the laptop, so the ID for VideoCapture is 0. Before using the classifier, we need to convert the image to grayscale.

   gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

Now we can apply our classifier to detect the stop signs in the image. The second argument of detectMultiScale is the scale factor (1.2) and the third is the minimum number of neighboring detections required to keep a result (5).

   stops = stop_detect.detectMultiScale(gray,1.2,5)

Next, to visualize the detections, we can draw a rectangle around each detected stop sign and label it "stop".

   font = cv2.FONT_HERSHEY_SIMPLEX
   for (x, y, w, h) in stops:
       cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)
       cv2.putText(img, "stop", (x, y+h), font, 1, (255, 0, 0), 2)

The detection step is now essentially done. To make the classifier work in real time, we put the code in a loop; the complete code is shown below.

import cv2
import numpy as np

stop_detect = cv2.CascadeClassifier('./stopsign_classifier.xml')
font = cv2.FONT_HERSHEY_SIMPLEX
cam = cv2.VideoCapture(0)

while True:
    ret, img = cam.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    stops = stop_detect.detectMultiScale(gray, 1.2, 5)
    for (x, y, w, h) in stops:
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)
        cv2.putText(img, "stop", (x, y+h), font, 1, (255, 0, 0), 2)

    cv2.imshow('im', img)
    # Press 'q' to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

Running the Classifier on Raspberry Pi

To run the classifier on the Raspberry Pi, we can use the picamera library to capture images:

import time
import numpy as np
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.framerate = 24
    time.sleep(2)  # give the camera time to warm up
    image = np.empty((240 * 320 * 3,), dtype=np.uint8)
    camera.capture(image, 'bgr')
    image = image.reshape((240, 320, 3))

However, video processing is very slow on the Raspberry Pi due to its constrained computing resources. Therefore, we use a multi-threaded approach to increase the frame rate of the video stream. This speeds up the pipeline by reducing I/O latency and ensuring the main thread is never blocked.

First, we need a script that handles reading threaded frames from the Raspberry Pi camera module. The imutils package provides one, named PiVideoStream. So, instead of using the picamera library directly, we can import it:

   from imutils.video.pivideostream import PiVideoStream

Then we can capture the image using the code below.

   vs = PiVideoStream().start()
   img = vs.read()

Moreover, since the traffic signs always appear on the right side of the vehicle, we can crop the image in half and run the classifier only on the right half of the image.

   img = img[:, (len(img[0])//2):]

By doing this, we increased the FPS of our video processing by roughly a factor of three.
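
A quick way to sanity-check the gain is to time the processing loop. The generic sketch below reuses vs and stop_detect from the snippets above; it is an illustration, not our exact benchmark.

import time

start, frames = time.time(), 0
while time.time() - start < 10:          # sample for 10 seconds
    img = vs.read()
    img = img[:, (len(img[0])//2):]      # right half only
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    stop_detect.detectMultiScale(gray, 1.2, 5)
    frames += 1
print("approx FPS: %.1f" % (frames / (time.time() - start)))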

Modifying the DonkeyCar Framework

Brief Code Overview

Donkeydiagram.png


We start the DonkeyCar framework by calling:

   python manage.py drive --model=<modelname>

Inside of manage.py, we create a Vehicle instance, which then initializes each part, one of which is the controller.

Within controller.py, there is a base class named JoystickController. This provides a layer of abstraction over the different devices compatible with the DonkeyCar framework. The child classes simply map inputs on the controller to functions and variables in the JoystickController class. In our case, PS3JoystickController is instantiated.

Inside of the JoystickController class, there is also an update() function that is polled by the Vehicle instance. This function checks whether a button or trigger has been activated and calls the corresponding function to set the throttle, steering angle, recording setting, etc. For example, when we press 'X' on the controller, update() detects that the 'X' button was activated and calls the emergency_stop() function, which stops recording, switches the car to user mode, and stops the throttle.
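
In outline, that relationship looks like the sketch below. This is a paraphrase of the framework's structure, not the actual DonkeyCar source, and poll_events() stands in for the real input-handling code.

class JoystickController(object):
    def __init__(self):
        self.angle = 0.0
        self.throttle = 0.0

    def update(self):
        # Runs forever in its own thread: poll the device (and, in our
        # case, the serial port) and update the stored outputs
        while True:
            self.poll_events()           # placeholder for real polling

    def run_threaded(self):
        # Called by the Vehicle loop once per cycle
        return self.angle, self.throttle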

Our Modifications

Since we want to control the speed of the car, we had to find a place to plug into the code. We chose the update() function inside the JoystickController class: within update(), we read the serial input and call the corresponding function.

We defined several new functions for the car.

To slow the car when it sees a yield sign:

   def slow_on_yield(self):
       print("Slowing on yield sign for 3 seconds")
       if not self.isAdjusted:
           # Save the current throttle so it can be restored later
           self.prev_throttle = self.throttle
           self.throttle = self.THROTTLE_YIELD
           self.startTime = time.time()
           self.isAdjusted = True

To adjust the speed on a speed limit sign:

   def adjust_on_speed(self, speed):
       print("Adjusting speed on speed limit sign")
       self.throttle = speed 

To slow down the car before a stop sign:

   def slow_on_stop(self):
       if not self.isAdjusted:
           self.prev_throttle = self.throttle
           self.throttle = self.THROTTLE_SLOW_STOP
           self.isAdjusted = True

To stop a car at the stop sign:

   def stop_on_sign(self):
       print("Stopping on stop sign for 3 seconds")
       if not self.isAdjusted:
           self.prev_throttle = self.throttle
       # Always stop and restart the 3-second timer, even if the
       # throttle was already adjusted by slow_on_stop()
       self.throttle = 0.0
       self.startTime = time.time()
       self.isAdjusted = True

To maintain the adjusted throttle for 3 seconds and then restore the previous speed:

   def maintain_speed(self):
       if time.time() - self.startTime > 3:
           self.isAdjusted = False
           self.throttle = self.prev_throttle
           print("Restoring speed")


In order to call these functions, we extended update() to read the serial input and call the corresponding function:

       # Read whatever has arrived on the serial port and dispatch on
       # each newline-terminated signal name (this assumes each signal
       # arrives within a single read)
       for msg in self.ser.read(self.ser.in_waiting).decode().split('\n'):
           msg = msg.strip()
           if msg == 'SIGNAL_STOP':        # Stop sign
               self.stop_on_sign()
           elif msg == 'SIGNAL_SPEED':     # Speed limit sign
               self.adjust_on_speed(self.THROTTLE_SPEED_SIGN)
           elif msg == 'SIGNAL_YIELD':     # Yield sign
               self.slow_on_yield()
           elif msg == 'SIGNAL_PRESTOP':   # Slow before stop sign
               self.slow_on_stop()
       if self.isAdjusted:
           self.maintain_speed()
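
On the vision Pi, the matching sender is a loop that runs the classifier on each frame and writes a newline-terminated signal name over the UART. The sketch below combines the detection and serial snippets from the earlier sections, showing only the stop sign case.

import cv2
import serial
from imutils.video.pivideostream import PiVideoStream

ser = serial.Serial(port='/dev/serial0', baudrate=115200)
stop_detect = cv2.CascadeClassifier('./stopsign_classifier.xml')
vs = PiVideoStream().start()

while True:
    img = vs.read()
    img = img[:, (len(img[0])//2):]      # right half only
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    if len(stop_detect.detectMultiScale(gray, 1.2, 5)) > 0:
        ser.write('SIGNAL_STOP\n')       # dispatched by update() above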