2019SpringTeam7

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Project Team Members

From Left: Anne, Mayuki, Matias, David
  • David Zhenan Yin - Electrical Engineering
  • Anne Kimberly Edquilang - Mechanical Engineering
  • Mayuki Adam Sasagawa - Mechanical Engineering
  • Matias Ricardo Lee Choi - Mechanical Engineering
From Left: Anne, Mayuki, David, Matias
Photo taken by Raspberry Pi 5MP 1080P Camera using Realtime Object Detection

Project Overview

This project develops an autonomous car that reacts to objects in its surroundings. Real-time object detection is implemented to recognize common road objects and react as a human driver would. In the real world, autonomous cars need to be at least as safe as human drivers to be viable. They must match a human driver's response time (roughly 0.7-3 seconds to react, 2.3 seconds to hit the brakes) to avoid danger in time, and, like human drivers, they must react differently to each kind of object. The system is implemented on our autonomously driven car using one Raspberry Pi and one camera in order to minimize footprint and cost.

Main Components

Schematic

Team7circuit.png

Hardware

  • Laser-cut parts

Acrylic top plate

  • 3D printed parts

Dual Pi Camera Mount



  • Mascot

Drive Train

DriveTrain.jpg

Challenges

Challenge | Solution
Car not driving straight | Shaft realignment
Steering and throttle not working; burned ESC | Replaced ESC
Steering stopped working (throttle still worked); overheating servo motor | Replaced servo motor
Raspberry Pi does not boot | Replaced Pi

What we learned
If the servo is forced to steer too far and the Pi/car is then turned off, the servo motor can be left stuck running continuously; the DC motor inside the servo may have burned out that way. It is crucial to make sure all of the car's hardware is solid before starting to program or train the car. For example, all the images and training could be wasted if the camera mount is not secure and consistent.

Software

Using the classroom Wi-Fi and SSH to access the Raspberry Pi, we installed Donkey and TensorFlow and paired the PS3 controller with the Raspberry Pi.

Calibration Values

  • Throttle:
    • forward: 440
    • neutral: 410
    • reverse: 380
  • Steering:
    • left: 200
    • neutral: 280
    • right: 360
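
These values plug into the Donkey framework's config.py as PWM constants. A sketch, assuming the standard donkeycar template names and PCA9685 channel assignments (the channel numbers are assumptions; the values are our measured calibration numbers):

  # config.py excerpt -- constant names from the stock donkeycar template
  STEERING_CHANNEL = 1         # PCA9685 channel wired to the steering servo (assumed)
  STEERING_LEFT_PWM = 200      # full left
  STEERING_RIGHT_PWM = 360     # full right (neutral 280 falls midway)

  THROTTLE_CHANNEL = 0         # PCA9685 channel wired to the ESC (assumed)
  THROTTLE_FORWARD_PWM = 440   # full forward
  THROTTLE_STOPPED_PWM = 410   # neutral
  THROTTLE_REVERSE_PWM = 380   # full reverse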
Autonomous Driving Training

DataTub.png

Roughly 27,000 images were recorded from driving 40 laps around the track. Those images were used to develop the autonomous driving model.

After training on all of the collected data using the GPU cluster, we obtained a relatively high-accuracy model.
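
In the donkeycar framework, training is started from the car directory after copying the tub data to the GPU machine; a sketch (the tub path here is hypothetical):

  • python manage.py train --tub data/tub_40_laps --model ./models/mypilot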


Challenges

Challenge | Solution
Could not access directory | Directory name contained a space; renamed it with an underscore
Protobuf installation was time-consuming | Patience
RaspbianLite not compatible with screen sharing on a Mac | See the steps below (detailed instructions in the Screenshare section)

  • Installed tightvncserver, netatalk, and avahi-daemon
  • Modified some code in the afpd.service file
  • Modified the tightvncserver file and changed the permissions of the file we modified
  • Killed the server and restarted avahi-daemon
  • We were then able to screen share from the Mac over VNC
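
A condensed sketch of the commands involved (the afpd.service and permission edits are covered in the guide linked in the Screenshare section; details omitted here):

  • sudo apt-get install tightvncserver netatalk avahi-daemon
  • tightvncserver (the first run prompts for a VNC password)
  • sudo systemctl restart avahi-daemon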

Milestones

Indoor Autonomous Driving

6 Autonomous Indoor Laps

Outdoor Autonomous Driving

3 Autonomous Outdoor Laps

Object Recognition

Software and Hardware List

The following are needed to complete this project.

  • Raspberry Pi 3B or 3B+
  • 32GB MicroSD card
  • Any high-resolution Pi Camera
  • Raspbian Stretch with desktop and recommended software (Link to download)
  • TensorFlow
  • OpenCV

Steps Overview

  • Raspberry Pi setup
  • Install TensorFlow, OpenCV, and Protobuf
  • Set up the TensorFlow directory structure and PYTHONPATH variable
  • Download model from model zoo
  • Start object detection


For this project, we were able to follow the guide provided by EdjeElectronics. We used TensorFlow's object detection API on a Raspberry Pi 3B+, which relies on Protobuf, a package implementing Google's Protocol Buffer data format. Currently there is no easy way to install Protobuf on the Raspberry Pi because it must be compiled from source and then installed, but fortunately, we found an installation guide by OSDevLab. (Link to Installation Guide)

First, we start by downloading TensorFlow and installing the necessary packages.
The packages used are: libatlas-base-dev, pillow, lxml, jupyter, matplotlib, cython, and python-tk.
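
The commands roughly follow the EdjeElectronics guide (a sketch; exact TensorFlow version pinning may differ):

  • pip3 install tensorflow
  • sudo apt-get install libatlas-base-dev
  • sudo pip3 install pillow lxml jupyter matplotlib cython
  • sudo apt-get install python-tk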

Then we install some more packages and OpenCV.
The packages used are: libjpeg-dev, libtiff5-dev, libjasper-dev, libpng12-dev, libavcodec-dev, libavformat-dev, libswscale-dev, libv4l-dev, libxvidcore-dev, libx264-dev, and qt4-dev-tools. If an error occurs during installation, run sudo apt-get update and retry. Then install OpenCV:

  • pip3 install opencv-python
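
Before moving on to Protobuf, it is worth a quick sanity check that OpenCV imports and can grab a frame. A minimal sketch, assuming the Pi camera is exposed as a V4L2 device (e.g. via the bcm2835-v4l2 module):

  # check_opencv.py -- verify the OpenCV install and camera access
  import cv2

  print(cv2.__version__)             # confirm the import works
  cap = cv2.VideoCapture(0)          # /dev/video0
  ok, frame = cap.read()             # grab a single frame
  print('Frame captured:', ok, frame.shape if ok else None)
  cap.release()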

Then follow the installation instructions written by OSDevLab to compile and install Protobuf. (Link to Installation Guide)
We downloaded the Protobuf release from GitHub; double-check whether a newer version is available and, if so, download that one. Configure the build by following the installation guide, then start building the package. The build generally takes about an hour.
After the build finishes, run the check process. This takes even longer (around an hour and a half).
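
Concretely, the build boils down to the usual autotools sequence (run from the unpacked Protobuf source directory; make check is the long step):

  • ./configure
  • make
  • make check
  • sudo make install
  • sudo ldconfig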

We now have to set up the TensorFlow directory structure and modify the PYTHONPATH environment variable.
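
The TensorFlow models repository needs to live where the PYTHONPATH below expects it. A sketch, assuming the /home/pi/tensorflow1 layout used in the next step:

  • mkdir /home/pi/tensorflow1 && cd /home/pi/tensorflow1
  • git clone https://github.com/tensorflow/models.git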

Then, modify the PYTHONPATH environment variable to point at some directories inside the TensorFlow repository we just downloaded by issuing:

  • sudo nano ~/.bashrc

On the last line (at the end of the file), add:

  • export PYTHONPATH=$PYTHONPATH:/home/pi/tensorflow1/models/research:/home/pi/tensorflow1/models/research/slim
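
The change only applies to new terminal sessions; to pick it up in the current shell:

  • source ~/.bashrc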

Next, we used the TensorFlow detection model zoo (Link to model zoo), Google's collection of pre-trained object detection models at different levels of speed and accuracy. Because the Raspberry Pi has limited computing power, we needed a model that demands less processing so that detection runs with less lag; the tradeoff is lower accuracy. The model we chose is SSDLite (ssdlite_mobilenet_v2_coco). Once you have the model and the Pi is configured, you can start object detection.
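
Once the model archive is unpacked, the detection loop itself is short. A minimal sketch in TensorFlow 1.x style (the graph path assumes the ssdlite_mobilenet_v2_coco release from the model zoo; the tensor names are the standard object detection API outputs):

  # detect_sketch.py -- minimal TF1 object detection loop on camera frames
  import cv2
  import numpy as np
  import tensorflow as tf

  PATH_TO_GRAPH = 'ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb'

  # Load the frozen inference graph once at startup.
  graph = tf.Graph()
  with graph.as_default():
      graph_def = tf.GraphDef()
      with tf.gfile.GFile(PATH_TO_GRAPH, 'rb') as f:
          graph_def.ParseFromString(f.read())
      tf.import_graph_def(graph_def, name='')

  sess = tf.Session(graph=graph)
  image_tensor = graph.get_tensor_by_name('image_tensor:0')
  det_boxes = graph.get_tensor_by_name('detection_boxes:0')
  det_scores = graph.get_tensor_by_name('detection_scores:0')

  cap = cv2.VideoCapture(0)
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      # The API expects a batched RGB uint8 image.
      rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
      boxes, scores = sess.run([det_boxes, det_scores],
                               feed_dict={image_tensor: np.expand_dims(rgb, 0)})
      h, w = frame.shape[:2]
      for box, score in zip(boxes[0], scores[0]):
          if score < 0.5:        # confidence threshold
              continue
          y1, x1, y2, x2 = box   # normalized coordinates
          cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                        (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
      cv2.imshow('Object detection', frame)
      if cv2.waitKey(1) == ord('q'):
          break
  cap.release()
  cv2.destroyAllWindows()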

Screenshot2.png

Screenshare Between Mac and Raspberry Pi Desktop

Atainter has written very detailed instructions on how to set up screen sharing between a Mac and a Raspberry Pi.
The instructions for completing the task are here: (Link to the Guide)

Results

Realtime Object Detection

With a second camera and Raspberry Pi running TensorFlow and OpenCV, the robot can detect everyday household objects at 0.54-0.70 FPS while the car continues to drive autonomously. The program did an especially good job recognizing humans and could recognize a team member up to 10.67 meters from the camera.
Every object, people included, was more easily recognized when in slight motion, since the program could differentiate it from the background; this was less of an issue when the car itself was moving slowly.
Sometimes objects were not recognized at all, either because the object was not among the model's recognizable classes or because the objects we used did not fit the standard appearance of their class. The green Hydro Flask bottle, however, was recognized very easily.

Potential Improvement

Because of screen sharing, the video feed and object-recognition overlay from the Pi camera lag slightly on the Mac screen. A better solution would be to connect a monitor directly to the Raspberry Pi with an HDMI cable instead of screen sharing.
Given the Raspberry Pi's relatively low computing power, we could also have chosen an even lighter model; however, the tradeoff would be lower accuracy in recognizing objects.

Future Developments

Our current car carries two Pis:
1 - the first Pi is responsible for autonomous driving
2 - the second Pi is responsible for real-time object detection

In the future, we would like to implement communication between the two Pis, which are currently controlled separately. By having the two Raspberry Pis and their cameras communicate, we can apply machine learning to a larger data set: rather than training only on driving images from the bottom Pi camera, we can also train on the images recorded by the second camera used for object detection. Using this information, we could train the model to accomplish a greater variety of things, such as:

  • Roadside sign recognition
  • Training our own object detection model
  • Pedestrian recognition
  • Automated maneuvers based on incoming obstacles


Link to Final Presentation

Reference

We would like to give special thanks to: