2018FallTeam3

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Introduction

Our project concentrates on the Donkey car's ability to avoid objects in its path. The car is the most common vehicle in the world, and people drive every single day. Inevitably, there are days when drivers are exhausted after long hours at work, and accidents happen. With the goal of reducing the chance of accidents and improving safety, we set out to give the car the ability to avoid obstacles using a Structure depth sensor and a camera.

Team Members

Tejas Bhakta

Walter Zimmer

Alex Graff

Kevin Nguyen

Mechanical

Plate and Camera Mount Design

The camera mount was designed to be as simple as possible. This meant keeping the angle static (a mount with a variable angle would have taken longer to design well) and minimizing assembly requirements. A single part with a few bolts and screws was the solution. The resulting camera angle turned out to be reasonable, which was verified in the base sketch of the part in SolidWorks: a line was drawn from the camera's location on the mount down to the ground (measured in real life from the top of the plate). This line represented the Raspberry Pi camera's line of sight and showed that the camera would be looking at a point about 11 inches in front of the car's bumper. A rough trigonometric check of this geometry follows the figure captions below.
A similar method was utilized for the design of the new camera mount that held both the Pi camera and the Structure depth sensor used in our team's project.
CAD file of acrylic car plate. Slots were used for mounting posts due to unknown tolerance in manufacturing methods.
CAD file of 3D printed Raspberry Pi Camera mount. Mount was made with a static angle as minimal assembly (screws, bolts, number of parts) was desired.
Camera geometry shows Raspberry Pi camera line of sight (diagonal dashed line) and bumper plane (selected line). Pi camera was pointing at a point ~11" in front of car.
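As a quick sanity check on that geometry, the look-at distance can be computed with basic trigonometry. The sketch below uses hypothetical values for the camera height, tilt, and bumper offset, not the actual dimensions from the SolidWorks sketch:

import math

# Hypothetical values; the real numbers come from the SolidWorks base sketch.
camera_height_in = 6.0   # camera lens height above the ground (inches)
tilt_deg = 30.0          # downward tilt of the camera from horizontal (degrees)
bumper_offset_in = 4.0   # horizontal distance from camera to bumper plane (inches)

# Where the line of sight meets the ground, measured from the camera.
ground_hit_in = camera_height_in / math.tan(math.radians(tilt_deg))

# Distance in front of the bumper (the CAD sketch gave ~11" for the real mount).
print(f'Looking {ground_hit_in - bumper_offset_in:.1f} inches past the bumper')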

Assembly

Components

PWM Controller
Raspberry Pi
Motor
Servo Motor
Battery


Wiring Process

Wiring Schematic

Following the class instructions, a wiring schematic was drawn first to ensure that the wiring was done carefully, without mistakes that could lead to a short circuit.
Team 3 Wiring Schematic

Soldering

Good soldering was essential for the whole circuit to work: a bad solder joint could cause a short circuit and burn valuable parts. Thus, every connection was made carefully throughout the wiring process. Unfortunately, one bad connection did short-circuit and burned the relay at the end of week 5. However, the car was reassembled and rewired in time.

Soldered Connection
Soldered Connection
Short circuit on relay


Autonomous Driving using Neural Network and GPU Cluster

Training

See the autonomous driving here:

Indoor

Project

Obstacle Avoidance using Structure Depth Sensor

The Structure depth sensor is used as a 3D scanner in this project. Able to capture dense geometry in real time, it allows the Raspberry Pi to recognize objects in the car's path. Specifically, the Raspberry Pi extracts frames from the depth sensor's output and the camera video, and because the neural network sees the same objects repeatedly during training, the Donkey car learns to recognize them. The car can then recognize obstacles on its path and steer away from them.

Scan of Alex's friend using Structure sensor with Skanect application

The original idea was to combine the images from the Pi camera (RGB) with the depth information from the Structure sensor and use this data to train a model on the neural network. In theory, this could be as simple as increasing the depth of the image array sent to the neural network from 3 channels (RGB) to 4 (RGB & depth). In practice, combining and aligning the two images, both in position and in resolution, was enough of a hassle to justify considering a different method.
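In code, stacking a depth channel onto an RGB frame is straightforward with NumPy once the two images are aligned and share a resolution (the hard part in practice). A minimal sketch, assuming placeholder frames of matching size:

import numpy as np

# Placeholder frames; real ones come from the Pi camera and Structure sensor,
# already aligned and resized to the same resolution.
rgb = np.zeros((120, 160, 3), dtype=np.uint8)     # 3-channel color frame
depth = np.zeros((120, 160), dtype=np.uint16)     # 16-bit depth frame

# Scale the 16-bit depth down to 8 bits so all four channels share one dtype.
depth8 = (depth >> 8).astype(np.uint8)

# Stack depth as a fourth channel: (120, 160, 3) + (120, 160) -> (120, 160, 4)
rgbd = np.dstack((rgb, depth8))
print(rgbd.shape)  # (120, 160, 4)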

The Structure sensor has a built-in IR camera, and the SDK has a built-in function to align the images from the IR camera and the depth sensor. Since color information would likely be redundant once depth information was introduced, the IR camera was used in place of the Pi camera.

Structure Sensor


Neural Network

As mentioned above, the neural network plays an essential part in the Donkey car's ability to recognize objects in its path, so it is important to understand the fundamental concept behind it. A neural network is a form of artificial intelligence. It acts somewhat like a brain, although it operates differently from a biological one: its structure contains many nodes connected together like neurons, and each node can transmit a signal to the next. Its main ability here is visual pattern recognition, which simply means learning from examples. The neural network learns from training drives, and over many training examples it develops an internal rule that recognizes the pattern of correct driving, allowing it to drive the car on its own without hitting the cones on the same path it was trained on.
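For a concrete picture, the kind of convolutional network Donkey trains looks roughly like the Keras sketch below. The layer sizes are illustrative only, not the exact architecture we trained; the two-channel input reflects the IR + depth images used in this project:

from tensorflow.keras.layers import Conv2D, Dense, Flatten, Input
from tensorflow.keras.models import Model

# Illustrative architecture; actual Donkey models differ in detail.
img_in = Input(shape=(120, 160, 2))                          # IR + depth channels
x = Conv2D(24, (5, 5), strides=2, activation='relu')(img_in)
x = Conv2D(32, (5, 5), strides=2, activation='relu')(x)
x = Conv2D(64, (3, 3), strides=2, activation='relu')(x)
x = Flatten()(x)
x = Dense(100, activation='relu')(x)
angle = Dense(1, name='angle')(x)                            # steering prediction
throttle = Dense(1, name='throttle')(x)                      # throttle prediction

model = Model(inputs=img_in, outputs=[angle, throttle])
model.compile(optimizer='adam', loss='mse')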

New Camera Mount

Occipital provides CAD files (here, under "Download Starter CAD") for developers who would like to 3D print or machine their own mounts for the Structure sensor. They also provide positioning and tolerance guidelines for combined use with visible-light cameras.

Structure sensor base bracket provided by Occipital
New camera mount designed to hold both Raspberry Pi camera and Structure sensor.
Progression of camera mounts: (Left) Pi camera only; (Middle) 1st iteration of the Structure mount; (Right) Final iteration, after deciding not to use the Pi camera at all for depth training

Issues with Dark IR images

The picture below shows a side-by-side of the IR image captured by the Structure sensor's IR camera (left) and the depth image created from it (right). The images are clearly fairly dark, and potentially not all depth data is being captured. To improve autonomous driving performance, it was suggested that we build a small circuit to power IR LEDs that flood the environment in front of the car with more IR-wavelength light. 940nm IR LEDs were used. A rough sizing calculation for the LED resistor follows the schematic below.

(Left) IR image; (Right) depth point cloud
IR LED circuit schematic
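Sizing the current-limiting resistor for the LEDs is a quick Ohm's-law calculation. The forward voltage and current below are typical datasheet values for 940nm IR LEDs, not measurements from our circuit:

# Typical datasheet values for a 940nm IR LED; check your part's datasheet.
v_supply = 5.0    # supply voltage (V)
v_forward = 1.4   # LED forward voltage drop (V)
i_led = 0.060     # target forward current (A)

# Ohm's law across the resistor: R = (Vsupply - Vf) / I
r_ohms = (v_supply - v_forward) / i_led
p_watts = (v_supply - v_forward) * i_led   # power dissipated in the resistor

print(f'R = {r_ohms:.0f} ohms, dissipating {p_watts:.2f} W')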

Issues with minimum depth distance readings

The Structure sensor has a minimum reading distance of about 1 foot, so every obstacle or lane line must be at least 1 foot from the sensor to be read properly. The final iteration of the camera mount solves this by placing the focus point 2 feet away.

Training

The goal of training is to collect as much data as possible. With the light levels and camera angle optimized so that the sensor could detect the lane lines correctly, we drove training laps indoors and outdoors multiple times, with different cone setups.

OpenNI2

OpenNI2 was a difficult SDK to get started with. To begin, we needed to install the OpenNI2 SDK binary for our Raspberry Pi. Since the Pi is a Linux device with an ARM processor, we needed the OpenNI 2.2.0.33 Beta (ARM) package, which is usually as straightforward as running the setup script and Makefile. However, this release from Occipital was bugged and gave errors claiming a required library was missing even though it was present.

To resolve this, we simply cloned the uncompiled GitHub repo and compiled it locally on our Pi. Luckily, the GitHub version did not have the same bug, and we were able to proceed. This mishap used up a lot of our time, so if you are setting this sensor up on a Pi, be sure to build from the GitHub repo and not the ARM SDK binary Occipital provides!


Integration with Donkey

To use our Structure sensor with the Donkey framework, which is written in Python, we needed to install the proper Python bindings for OpenNI2. These let us call the functions needed to operate the camera from within a Donkey framework part, which we created.
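A minimal sketch of opening the depth and IR streams through the primesense Python bindings looks roughly like this. The library path is an assumption and depends on where the local OpenNI2 build landed:

import numpy as np
from primesense import openni2

# Hypothetical path: point this at the directory containing libOpenNI2.so
# from the locally compiled GitHub repo.
openni2.initialize('/home/pi/OpenNI2/Bin/Arm-Release')

dev = openni2.Device.open_any()
depth_stream = dev.create_depth_stream()
ir_stream = dev.create_ir_stream()
depth_stream.start()
ir_stream.start()

# Read one frame from each stream as a 16-bit array shaped (height, width).
d = depth_stream.read_frame()
depth = np.frombuffer(d.get_buffer_as_uint16(), dtype=np.uint16)
depth = depth.reshape((d.height, d.width))

i = ir_stream.read_frame()
ir = np.frombuffer(i.get_buffer_as_uint16(), dtype=np.uint16)
ir = ir.reshape((i.height, i.width))

depth_stream.stop()
ir_stream.stop()
openni2.unload()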


Creating the Part

We use these bindings to get an infrared array and a depth array, concatenate them, and feed the result to the neural network as a multidimensional array, mimicking how a normal camera inputs its array (as a 3-channel RGB array). We also need to add a part so Donkey can collect data from the Structure sensor every time we drive, as sketched below.
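The part itself boils down to a class with a run() method that Donkey's vehicle loop calls once per cycle. The sketch below is a simplified version, assuming the streams were opened as in the previous snippet; the class and attribute names are ours, not standard Donkey parts:

import numpy as np

class StructureSensor:
    '''Simplified Donkey part that returns a stacked IR + depth frame.'''

    def __init__(self, depth_stream, ir_stream):
        self.depth_stream = depth_stream
        self.ir_stream = ir_stream

    def run(self):
        # Grab one frame from each stream and reshape to (height, width).
        d = self.depth_stream.read_frame()
        depth = np.frombuffer(d.get_buffer_as_uint16(), dtype=np.uint16)
        depth = depth.reshape((d.height, d.width))

        i = self.ir_stream.read_frame()
        ir = np.frombuffer(i.get_buffer_as_uint16(), dtype=np.uint16)
        ir = ir.reshape((i.height, i.width))

        # Stack into (H, W, 2), mimicking the (H, W, 3) array a camera part returns.
        return np.dstack((ir, depth))

    def shutdown(self):
        self.depth_stream.stop()
        self.ir_stream.stop()

In manage.py the part would then be wired into the vehicle loop with something like V.add(StructureSensor(depth_stream, ir_stream), outputs=['cam/image_array']), so its frames land where the trainer expects camera images.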

You can view the part we made for the Donkey Framework here:


Driving

Watch footage of the driving here:

While to the human eye this footage may not look like much, the car sees the lane lines with the infrared camera and the cones with the depth camera. With enough training, the model learns to follow the lane lines and, thanks to the depth data, to avoid the cones at the proper time.

Obstacle avoidance can be done with a normal RGB camera, but the avoidance turns can be mistimed and cause collisions. A model trained without depth information can be fooled by changing cone sizes: taller cones look closer than the cones used in training, and shorter cones look farther away. With the depth sensor this cannot happen, since the model sees actual distance data and can turn at the right moment every time.

This footage is from the indoor track. Because of rain, lighting, and scheduling, we were unable to train a model outdoors before the next quarter's students needed the Structure sensor.
