2019SpringTeam3

From MAE/ECE 148 - Introduction to Autonomous Vehicles
== Introduction ==
 
The goal of our project is to follow a person using IR sensors. In a real-world setting, this concept could be applied to a self-following shopping cart system for indoor supermarkets. The car can follow a person wearing an IR LED patch, as long as they stay within a certain range.
[[File:car.jpg|thumb|center|Fully installed robocar]]
  
 
== Team Members ==
 
* Jose Manuel Rodriguez - ME
* Yuan Chen - ME
* Jason Nan - BENG: BSYS
* Po Hsiang Huang - EE

== Mechanical ==
 
;Plate and Camera Mount
:The camera mount is designed to be adjustable and can easily be locked in place with a screw and a nut. During training we needed to find the optimal camera angle, so making the camera stand adjustable saved us a lot of time. The stand for the Structure Sensor is set to 30 degrees of inclination, since the camera has a wide 160-degree field of view and needs to capture the IR patch attached to the back of a person. The acrylic platform is cut on a laser cutter, with multiple mounting holes for a variety of mounting hardware options.

[[File:camera_mount1.png|thumb|center|300px|3D printed camera mount for the pi-camera]]
[[File:camera_mount2.jpg|thumb|center|300px|3D printed camera mount for Structure Sensor]]

[[File:pi_housing.jpg|thumb|center|500px|3D printed housing for Raspberry Pi]]
;Circuit Diagram
:Below is the circuit diagram for the robot car setup. An emergency stop circuit is implemented with a relay that can be remotely controlled, which adds extra safety during testing and training. Two LEDs indicate the connection state: blue means the circuit is connected and the car can be controlled by the controller; red means the circuit is cut off.
[[File:Circuit.jpg|thumb|center|800px|Circuit diagram for autonomous driving. Drawn using Fritzing]]
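Our relay is actually switched by a separate remote, but to make the e-stop idea concrete, here is a minimal sketch of driving a relay and the two indicator LEDs from Raspberry Pi GPIO instead; all pin numbers are hypothetical:

 # Minimal sketch, not our actual e-stop: toggle a relay plus blue/red
 # status LEDs from Raspberry Pi GPIO. Pin numbers are placeholders.
 import RPi.GPIO as GPIO
 
 RELAY_PIN, BLUE_LED, RED_LED = 17, 27, 22   # hypothetical BCM pins
 
 GPIO.setmode(GPIO.BCM)
 GPIO.setup([RELAY_PIN, BLUE_LED, RED_LED], GPIO.OUT)
 
 def set_drive_enabled(enabled):
     GPIO.output(RELAY_PIN, enabled)    # relay closed = motors powered
     GPIO.output(BLUE_LED, enabled)     # blue: car responds to controller
     GPIO.output(RED_LED, not enabled)  # red: circuit cut off
 
 set_drive_enabled(True)    # normal driving
 set_drive_enabled(False)   # emergency stop
 GPIO.cleanup()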
;Components
[[File:LED_patch.jpg|thumb|center|500px|LED patch consisting of 25 IR LEDs]]
[[File:Structure Sensor.jpg|thumb|center|500px|Structure Sensor]]

== Autonomous Driving ==
;Indoor Circuit
:[https://youtu.be/_WHiRTZI7i8/ Here] is a video of 7 laps of indoor driving.

[[File:Indoor.jpg|thumb|center|500px|Indoor]]

;Outdoor Circuit
:[https://youtu.be/Q4VjUSXi748/ Here] is a video of 3 laps of outdoor driving.

[[File:Outdoor.jpg|thumb|center|500px|Outdoor]]

== Project ==
;Target Following using Structure Sensor
:We use the IR sensor from the Structure Sensor to detect the IR LED patch attached to the back of the person being followed. Using the modified Donkey training program, we can train the car to follow the LED patch within a distance of 1 m; a sketch of the detection idea follows the images below.
[[File:Following.jpg|thumb|center|500px|IR patch setup]]
[[File:Despth.png|thumb|center|500px|Left: Depth view; Right: Pure IR Image]]
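To make the detection idea concrete, here is a minimal sketch of picking the bright IR patch out of a grayscale IR frame with OpenCV; the threshold value and the pixel-offset-to-steering mapping are assumptions for illustration, not our tuned code:

 # Minimal sketch, assuming an 8-bit grayscale IR frame: threshold out the
 # bright patch, take the largest blob's centroid, and map its horizontal
 # offset to a steering value in [-1, 1].
 import cv2
 import numpy as np
 
 def patch_steering(ir_frame, thresh=200):
     _, mask = cv2.threshold(ir_frame, thresh, 255, cv2.THRESH_BINARY)
     res = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
     contours = res[-2]                  # OpenCV 3 and 4 order results differently
     if not contours:
         return None                     # patch not visible
     m = cv2.moments(max(contours, key=cv2.contourArea))
     if m["m00"] == 0:
         return None
     cx = m["m10"] / m["m00"]            # blob centroid, x in pixels
     half_w = ir_frame.shape[1] / 2.0
     return (cx - half_w) / half_w       # -1 = far left, +1 = far right
 
 ir = np.zeros((240, 320), dtype=np.uint8)
 cv2.circle(ir, (250, 120), 10, 255, -1) # fake IR patch for a quick test
 print(patch_steering(ir))               # ~0.56, i.e. steer right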
;Neural Network
:Despite what the name suggests, the neural network in our autonomous driving pipeline functions very differently from our brains. The core idea is visual pattern recognition: we present the program with many examples of which actions to take under which circumstances, and with enough training it builds up a base of examples large enough to somewhat 'understand' what to do in a given situation. In this class, the Donkey framework uses the camera to recognize the track lines and generates throttle and steering outputs during autonomous driving, based on our inputs during training. A sketch of such a network follows.
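Here is a minimal sketch of the kind of convolutional network the Donkey framework trains: a camera image goes in, steering and throttle come out. The layer sizes are illustrative, not the exact Donkey architecture:

 # Minimal sketch of a Donkey-style CNN: image in, steering/throttle out.
 # Layer sizes are illustrative, not the exact Donkey architecture.
 from keras.layers import Input, Conv2D, Flatten, Dense
 from keras.models import Model
 
 img_in = Input(shape=(120, 160, 3))              # camera frame
 x = Conv2D(24, (5, 5), strides=2, activation='relu')(img_in)
 x = Conv2D(32, (5, 5), strides=2, activation='relu')(x)
 x = Conv2D(64, (3, 3), strides=2, activation='relu')(x)
 x = Flatten()(x)
 x = Dense(100, activation='relu')(x)
 x = Dense(50, activation='relu')(x)
 steering = Dense(1, name='steering')(x)          # -1 (left) .. +1 (right)
 throttle = Dense(1, name='throttle')(x)          # -1 (reverse) .. +1 (forward)
 
 model = Model(inputs=img_in, outputs=[steering, throttle])
 model.compile(optimizer='adam', loss='mse')      # trained on recorded laps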
 
;IR Patch
:The IR LED patch is wired according to the following circuit diagram, allowing the maximum permissible current to flow through the LEDs for maximum brightness.
[[File:LED_circuit.jpg|thumb|center|500px|Circuit diagram for LED patch]]
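As a worked example of the sizing involved, here is a quick calculation of the current-limiting resistor for one series string of LEDs; the supply voltage, forward voltage, string length, and target current below are assumptions for illustration, not measured values from our patch:

 # Hedged example: size the current-limiting resistor for one LED string.
 # All numbers below are assumptions, not measurements from our patch.
 V_supply = 5.0    # V, assumed supply voltage
 V_f      = 1.5    # V, typical IR LED forward drop
 n_series = 3      # LEDs per series string (assumed)
 I_target = 0.10   # A, near a typical IR LED's continuous limit
 
 R = (V_supply - n_series * V_f) / I_target
 print("R = %.1f ohm" % R)   # -> R = 5.0 ohm for these numbers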
;Issues with dark IR images
:Although we drove the LEDs at their maximum brightness, the Structure Sensor still struggles to pick up any clear, distinctive image of the patch, even indoors, beyond a physical range of about 1 foot. A much stronger IR patch is needed for better image capture on the Structure Sensor side.
[[File:Dark.png|thumb|center|500px|Very faint and weak recognition for the IR patch]]
;Issues with OpenNI2
:OpenNI2 is pretty finicky. DO NOT USE THE PACKAGE FROM THE STRUCTURE SENSOR PAGE! Go to the OpenNI2 GitHub repository and compile it locally on your Pi: git clone the repository, cd into the resulting OpenNI2 folder, and type "make" on the command line.
 
If you run into this error:

 /usr/include/arm-linux-gnueabihf/gnu/stubs.h:7:29: fatal error: gnu/stubs-soft.h: No such file or directory
  #include <gnu/stubs-soft.h>

then type "make -mfloat-abi=hard" and it should fix it.
Here is the weird part of OpenNI2: you have to manually change the config files "PSLINK.ini" and "PS1080.ini". Find them in the OpenNI2 tree under .../OpenNI2/Config/OpenNI2/Drivers.

You want to change the line ";UsbInterface=2" to "UsbInterface=0". Note that you are removing the semicolon; this is on purpose.
 
Before you try working on the RPi, you should download the driver onto your personal laptop to see the camera in action. You have to change the same two config files there as well.
 
;OpenCV and OpenNI in Python
:These are libraries that you have to install to use the Structure Sensor. The openni Python bindings can be installed with "pip install openni" and should not cause many problems.

OpenCV is a bit trickier: you have to download the source code and compile it locally. Follow this [[link]] for the procedure.
 
'''One way to test that you have the correct libraries is to manually import them in Python.'''
In your d2t environment, bring up the Python interpreter by typing "python" on the command line. The session should look like this:

 Python 2.7.15rc1 (default, Nov 12 2018, 14:31:15)
 [GCC 7.3.0] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import openni
 >>> from cv2 import *

If you do not get any errors, then you have installed all the libraries successfully.
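Going one step beyond bare imports, a short script can try to pull a frame from the sensor. This is a hedged smoke test: the calls follow the openni bindings' usual API, and we only had the sensor working on the PC, never on the Pi:

 # Hedged smoke test: grab one depth frame from the Structure Sensor via
 # the openni Python bindings (we only got this far on the PC).
 import numpy as np
 from openni import openni2
 
 openni2.initialize()                  # finds the OpenNI2 redistributable
 dev = openni2.Device.open_any()
 depth = dev.create_depth_stream()
 depth.start()
 frame = depth.read_frame()
 img = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
 img = img.reshape((frame.height, frame.width))
 print("depth frame: %dx%d, max %d" % (frame.width, frame.height, img.max()))
 depth.stop()
 openni2.unload()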
 
;Integrate into the Donkey Framework
:2018 Fall Team 3 did this integration. You can get the part and the altered manage.py file from their team page. Once you have all their code imported, you need to change the camera type in your config file by adding these lines:

 #CAMERA
 #CAMERA_TYPE = "PICAM"   # (PICAM|WEBCAM|CVCAM|MOCK)
 CAMERA_TYPE = "STRUCTURE_CAM"

To choose the Structure Sensor, you simply comment out the PiCam line and uncomment the STRUCTURE_CAM line.
 
;Other helpful hints
:Do not burn out your Pi; if you have to reflash it, we have a [https://docs.google.com/document/d/1a8gRD94A997kgQ1KPVzIqIG01a-TTWfC9h7RqIZMhRE/edit?usp=sharing quickstart guide] for after you etch the SD card.
 
;Result
:We were able to acquire pictures and data from the PC successfully; however, we were unable to integrate the Structure Sensor with the Raspberry Pi due to an OpenNI2 installation error. Another issue is that our IR LED patch is not strong enough for the Structure Sensor to pick up clearly. We encountered many setbacks and unexpected difficulties during this course. For future groups who intend to work on a similar project: do not drive in sunny, hot weather, as your micro SD card might melt under such conditions, and if possible, try to find a different IR sensor. Although the Structure Sensor looks cool and is quite expensive, the software it relies on is quite outdated: OpenNI2 was in general use from roughly 2012 to 2016, which can cost teams a lot of time researching and troubleshooting without finding good solutions.
