From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 10:19, 11 June 2019 by Spring2019Team3 (talk | contribs) (Project)


The goal of our project is to follow a person using IR sensors. Under real-world circumstances, the concept of our project could be applied to an indoor self-following shopping-cart system for supermarkets. The car can follow a person wearing an IR LED patch within a certain range.

Team Members

Jose Manuel Rodriguez - ME

Yuan Chen - ME

Jason Nan - BENG: BSYS

Po Hsiang Huang - EE


Plate and Camera Mount
The camera mount is designed to be adjustable and can be easily locked in place with a screw and a nut. During training we needed to find the optimal camera angle, so making the camera stand adjustable saved us a lot of time. The stand for the Structure Sensor is set to 30 degrees of inclination: the camera has a wide 160-degree field of view, and it needs to capture the IR patch attached to the back of a person. The acrylic platform was cut on a laser cutter, with multiple mounting holes for a variety of mounting hardware choices if applicable.

3D printed camera mount for the pi-camera
3D printed camera mount for Structure Sensor
3D printed housing for Raspberry Pi
Circuit Diagram
Below is the circuit diagram for the robot car setup. An emergency stop circuit is implemented with a relay that can be remotely controlled, which adds extra safety during testing and training. Two LEDs indicate the connection state: blue means the circuit is connected and the car can be controlled by the controller; red means the circuit is cut off.
Circuit diagram for autonomous driving. Drawn using Fritzing
The LED patch consists of 25 IR LEDs

Autonomous Driving

Indoor Circuit
Here is a video of 7 laps of indoor driving.
Outdoor Circuit
Here is a video of 3 laps of outdoor driving.


Target Following using Structure Sensor
We use the IR sensor from the Structure Sensor to detect the IR LED patch attached to the back of the target being followed. Using a modified Donkey training program, we can train the car to follow the LED patch within a distance of 1 m.
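The detection idea can be sketched as thresholding an IR frame and taking the centroid of the bright pixels, which gives the patch's position in the image. The threshold value and the toy frame below are illustrative assumptions, not actual Structure Sensor output.

```python
def patch_centroid(frame, threshold=200):
    """Return the (row, col) centroid of pixels brighter than `threshold`,
    or None if no pixel exceeds it. `frame` is a 2D list of 0-255 values."""
    row_sum = col_sum = count = 0
    for r, line in enumerate(frame):
        for c, v in enumerate(line):
            if v >= threshold:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return row_sum / count, col_sum / count

# Toy 4x4 IR frame: the bright LED patch sits in the lower-right corner.
frame = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 250, 255],
    [10, 10, 240, 250],
]
print(patch_centroid(frame))  # -> (2.5, 2.5)
```

A steering controller could then drive the horizontal centroid toward the image center to keep the patch in front of the car.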
Neural Network
Despite what the name suggests, the neural network in our autonomous driving training functions very differently from our brains. The core idea is visual pattern recognition: by learning from many examples that we present, the program is taught what actions to take under what circumstances. With a large amount of training, the program builds a dataset large enough to somewhat 'understand' what to do in a given situation. In this class, the Donkey framework uses the camera to recognize the track lines and generates throttle and steering outputs in autonomous driving based on our inputs during training.
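As a minimal illustration of this learn-from-examples idea (not Donkey's actual convolutional network), the sketch below fits a linear model mapping the patch's horizontal position in the image to a steering command by gradient descent. All the numbers are made up for illustration.

```python
# Training pairs: normalized patch x-position in the image -> recorded steering.
# (Illustrative data: patch on the left -> steer left, on the right -> steer right.)
examples = [(-1.0, -0.9), (-0.5, -0.45), (0.0, 0.0), (0.5, 0.45), (1.0, 0.9)]

w, b = 0.0, 0.0          # linear model: steering = w * x + b
lr = 0.1                 # learning rate
for _ in range(500):     # gradient descent on mean squared error
    gw = gb = 0.0
    for x, target in examples:
        err = (w * x + b) - target
        gw += 2 * err * x / len(examples)
        gb += 2 * err / len(examples)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # learned mapping: roughly w = 0.9, b = 0.0
```

The real network does the same thing at a much larger scale: instead of one hand-picked feature, it learns its own features from raw camera pixels.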
IR Patch
The IR LED patch is wired according to the following circuit diagram, driving the LEDs at their maximum allowed current to get the maximum brightness.
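Sizing the series resistor for each LED string follows Ohm's law, R = (V_supply - n * V_f) / I_f. The supply voltage, forward voltage, and forward current below are assumed values typical of IR LEDs, not measurements from our patch.

```python
def series_resistor(v_supply, v_forward, n_leds, i_forward):
    """Series resistance (ohms) needed to drive n_leds LEDs in series
    at i_forward amps from a v_supply-volt source."""
    v_drop = v_supply - n_leds * v_forward
    if v_drop <= 0:
        raise ValueError("supply voltage too low for this many LEDs in series")
    return v_drop / i_forward

# Assumed example: 5 V supply, 1.5 V forward drop per LED,
# 2 LEDs in series, driven at 100 mA.
print(series_resistor(5.0, 1.5, 2, 0.100))  # -> 20.0 ohms
```

Picking the smallest resistor that still keeps the current at or under the LED's rated maximum gives the brightest patch without shortening LED life.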
Circuit diagram for LED patch
Issues with dark IR images
Although we drove the LEDs at maximum brightness, the Structure Sensor still struggles to pick up a clear, distinctive image of the patch beyond a physical range of about 1 foot, even indoors. A much stronger IR patch is needed for better image capture on the Structure Sensor side.
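The short range is consistent with inverse-square falloff: doubling the distance to the patch cuts the irradiance at the sensor to a quarter. The sketch below uses an arbitrary reference distance to show how quickly the signal drops; the numbers are illustrative, not measurements.

```python
def relative_irradiance(distance_m, ref_distance_m=0.3):
    """Irradiance relative to its value at ref_distance_m (inverse-square law)."""
    return (ref_distance_m / distance_m) ** 2

# Relative signal strength at ~1 ft (0.3 m) vs. the 1 m following distance:
print(relative_irradiance(0.3))             # -> 1.0
print(round(relative_irradiance(1.0), 3))   # -> 0.09, i.e. ~9% of the 1-ft signal
```

This suggests the patch would need roughly an order of magnitude more radiant power to look as bright at 1 m as it does at 1 foot.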
Issues with OpenNI2
Integrate to Donkey Framework