2019WinterTeam8

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team Members

• Brian Wong - Computer Engineering

• Xiaotao Guo - Electrical Engineering

• Saakib Akbany - Mechanical Engineering

• Joseph Chang - Electrical Engineering

Project Overview

This project develops an autonomous car that reacts to objects in its surroundings. Real-time object detection is implemented to recognize common road objects and react as a human driver would.

In the real world, autonomous cars need to be at least as safe as human drivers to be viable. They must emulate a human driver's response time (0.7-3 seconds to react, 2.3 seconds to hit brakes) to react quickly enough to avoid danger. Like human drivers, they also must react differently to every object.

The system is implemented on an autonomously-driven car using one Raspberry Pi and one camera in order to minimize footprint and cost.

Part of our code can be found in: [1]

Car Layout.png

Objectives

The car must react to a video stream in real-time. It will recognize different objects and react accordingly.

Primary Objective

If a person is detected, the car comes to a stop. When the person leaves the field of view, the car resumes driving normally.

Secondary Objectives

If a stop sign is detected, the car stops for 3 seconds and then continues.

If a water bottle is detected, the car drives directly through it.

If a pothole is detected, the car drives around it.
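The objectives above amount to a small reaction table mapping each detected class to an action. A minimal sketch (the class names and the `Reaction` structure are our illustration, not the team's actual code):

```python
# Sketch of the per-object reaction logic described above.
# Class names and the Reaction type are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reaction:
    stop: bool                        # bring the car to a halt
    resume_after_s: Optional[float]   # seconds before resuming (None = wait until object leaves view)
    swerve: bool = False              # steer around the object instead of stopping

REACTIONS = {
    "person":       Reaction(stop=True,  resume_after_s=None),   # stop until out of view
    "stop_sign":    Reaction(stop=True,  resume_after_s=3.0),    # stop 3 s, then continue
    "water_bottle": Reaction(stop=False, resume_after_s=None),   # drive straight through
    "pothole":      Reaction(stop=False, resume_after_s=None, swerve=True),  # drive around
}

def react(detected: str) -> Reaction:
    """Return the reaction for a detected object; drive normally if unknown."""
    return REACTIONS.get(detected, Reaction(stop=False, resume_after_s=None))
```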

Objects.png

Mechanical

The car was built to 1/10 scale RC car specification.

• Acrylic base plate

Base Plate.png

• 3D-printed Camera Mount

Camera Mount.png

Electrical Components

• Raspberry Pi

• PCA9685 PWM

• RGB LED

• Servo

• PICAM

• ESC

• Motor

• Relay

• RB LED

• LiPo Battery

• Switches

• Step-down module

Schematic.png

Software

1. Setup Raspberry Pi

2. Download DonkeyCar Framework

3. Install TensorFlow

4. Calibrate throttle and steering

5. Setup connection between Pi and controller

6. Collect Indoor and Outdoor data (80 laps/model)

7. Train Indoor and Outdoor model

8. Test indoor and outdoor model

9. Train object recognition model with OpenCV Haar Cascade Classifiers

At first we wanted to use a pretrained model from online sources, but most of them are trained on regular webcam images and performed poorly with our fisheye PiCam, so we decided to train our own model.

The training process consists of three steps: gathering raw pictures, creating samples, and cascade training.

9.1 Getting raw picture data

Before creating samples for model training, we need to get two types of pictures:

positive images -- the target object, taken at different scales and orientations

negative images -- background scenes where the object will appear

The positive images and negative images are shown below:


9.2 Creating samples

We took around 40 positive images, but more positive images were needed to increase the accuracy of our model. OpenCV provides a tool called opencv_createsamples, which generates additional positive samples at random scales and orientations. Here we created 1500 positive samples with a width/height ratio of 15/20. Note that the width/height ratio should be close to that of the real object.

9.3 Train the cascade model (.xml file)

Here we trained the classifier using the opencv_traincascade tool with the following parameters:

-numStages 20 -- number of training stages; here we trained the model with 20 stages

-numPos 1000 -numNeg 600 -- number of positive and negative samples used in training; these must be equal to or less than the number of samples created above

-w 15 -h 20 -- the width and height of the target object; it should be proportional to the positive samples and not too large. We initially used 60/80 and training took several hours, even on a GPU cluster.

-featureType LBP -- to speed up training, we used the LBP feature type
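Putting steps 9.2 and 9.3 together, the two OpenCV commands might look like the following. This is a sketch: the file and directory names are illustrative, while the flags match the parameters listed above.

```shell
# Generate 1500 positive samples (15x20) from the raw positives and negative backgrounds
opencv_createsamples -img positive.jpg -bg negatives.txt \
    -vec samples.vec -num 1500 -w 15 -h 20

# Train a 20-stage LBP cascade; -numPos/-numNeg must not exceed the samples available
opencv_traincascade -data cascade/ -vec samples.vec -bg negatives.txt \
    -numStages 20 -numPos 1000 -numNeg 600 -w 15 -h 20 -featureType LBP
```

The trained classifier is written to cascade/cascade.xml, which is the .xml file loaded by the camera class in the next step.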

9.4 Test the model

After training the model, we ran a short script to add it into our camera class.


10. Modify Donkeycar Framework to detect objects and react

Camera: modify class to send images through OpenCV model

Output: RGB image, boolean whether object is detected


Controller: modify class to adjust throttle and steering based on image

Input: throttle, steering values generated by autopilot model, boolean whether object is detected

Output: throttle and steering values

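The camera modification can be sketched as a thin wrapper that runs each frame through the classifier and returns the frame plus a detection flag. This is a sketch, not the team's actual code: the DonkeyCar part interface is simplified, and the classifier and grayscale converter are injected so that any object with an OpenCV-style detectMultiScale method (e.g. cv2.CascadeClassifier) works.

```python
class DetectingCamera:
    """Wraps a camera part: each loop returns (frame, detected).

    `classifier` is any object with an OpenCV-style detectMultiScale(gray)
    method; `to_gray` converts an RGB frame to grayscale (e.g. a thin
    cv2.cvtColor wrapper). Both are injected so this sketch stays
    independent of OpenCV itself.
    """

    def __init__(self, camera, classifier, to_gray):
        self.camera = camera
        self.classifier = classifier
        self.to_gray = to_gray

    def run(self):
        frame = self.camera.run()                                  # RGB image from the PiCam
        rects = self.classifier.detectMultiScale(self.to_gray(frame))
        return frame, len(rects) > 0                               # image + detection flag
```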

Here we tried two strategies:

(1) On each loop, stop the car only while the camera currently detects a person; otherwise keep going.

Pros: this strategy avoids most needless stops caused by false positives (detecting a person when there is none), because the car fully stops only when the camera keeps seeing a person.

Cons: due to the limited accuracy of our model and the reaction speed of the car, the camera can miss the person at certain positions and return a false negative (failing to detect a person who is actually there), as shown in the Results section.

(2) Stop the car whenever the camera detects a person, then skip several following loops to make sure the car stops fully.

Pros: with this strategy, the car stops in most circumstances when a person shows up.

Cons: sometimes the car stops needlessly because of a false positive.

We chose the second strategy for the following reasons:

(1) Our cascade model is more likely to produce false negatives than false positives: when there is no person it almost always correctly returns False, but it only reliably detects a person who is near the center of its focus.

(2) In the real world, it is much safer to stop on a false positive than to keep going on a false negative.
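Strategy (2) can be sketched as a small hold-down counter in the controller: a single detection forces zero throttle for the next N loops, regardless of what the camera returns in between. The class name and the value of N are our illustration, not the team's actual code.

```python
class StopHoldController:
    """Sketch of strategy (2): one detection stops the car for `hold_loops` loops."""

    def __init__(self, hold_loops=10):
        self.hold_loops = hold_loops
        self.remaining = 0  # loops left in the forced stop

    def run(self, throttle, steering, detected):
        if detected:
            self.remaining = self.hold_loops   # (re)start the hold window
        if self.remaining > 0:
            self.remaining -= 1
            return 0.0, steering               # force a full stop
        return throttle, steering              # pass autopilot values through
```

Because the hold window restarts on every detection, a person who stays in view keeps the car stopped, while a single false positive costs only a brief pause.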

11. Integrate object detection with autonomous driving

Results

5 Autonomous Indoor Laps

3 Autonomous Outdoor Laps

Real-time object detection with the Raspberry Pi

Stops when person detected

Stops until person out of frame

Stops too late, person out of frame, runs over person

Autonomous Driving with Object Detection

Autonomous Driving with Stop Sign Detection

Challenges Encountered

• Reverse throttle stopped working

 The Pi was turned off after collecting training data. When the car was turned on again to collect more data, reverse throttle 
 stopped working for no apparent reason. After much debugging, no problem could be found; two hours later, it suddenly fixed itself.

• Neutral became very fast without reason

 While training our indoor model, the car began running extremely fast at neutral throttle. We re-ran calibration and changed the 
 neutral throttle value from 370 to 340.

• Selecting object recognition software

 We wanted to use tinyYOLO (an open-source object detection network) to train our model, but it detected objects at only 1-2 fps, 
 which was too slow for an autonomous car. We opted instead to train our model with OpenCV's Haar Cascade Classifier, which ran 
 much faster.

• Integrating with the autonomous model

 Object detection with autonomous driving was tested on a new indoor track with a new, less-optimized driving model. The car veered 
 a lot and avoided objects we wanted it to detect. It also reacted very slowly. The cause is unknown, but it is not detection 
 speed: the car detects objects as early as it does without autonomous driving.

Future Improvements

There are many obstacles and objects that drivers must detect and adapt to on the open road, and autonomous cars must do the same. One way our project could be improved is by adding more objects, such as the water bottle and pothole we did not get to.

Another is to develop more complex algorithms that allow for special maneuvers. For example, one of our secondary objectives was for the car to circumnavigate a pothole when it is detected, which would require depth or size measurements and a more complex algorithm.

Our project would also benefit from a dedicated camera for image recognition. The current camera is angled downwards and has a fisheye lens that distorts the frame edges and cannot see far; an object on the side of the street goes undetected. Greater accuracy could be achieved with a flat-lens camera facing directly forward.

When integrated with autonomous driving, our car avoids objects we want it to detect, or reacts too slowly and runs into the object. To reduce the unwanted avoidance, different positive/negative image ratios can be tried when training the Haar Cascade models (positive being the object, negative the background); we used 1000/600. The number of training stages can also be tuned. The cause of the delayed reaction is uncertain, but it is a possible topic of future work.

References

• Real-Time Face Recognition: An End-to-End Project
https://www.hackster.io/mjrobot/real-time-face-recognition-an-end-to-end-project-a10826

• Train your own OpenCV Haar classifier
https://github.com/mrnugget/opencv-haar-classifier-training

• DonkeyCar
https://github.com/tawnkramer/donkey