2019FallTeam4
Introduction
Our team (Team 3-Ds on the transcript) chose to create an experimental comparison between an Intel RealSense 3D depth camera and a simple webcam for both static and dynamic obstacle avoidance. We chose this project for two reasons. First, obstacle avoidance always has to be considered when building autonomous vehicles, to help prevent damage to both the vehicle and its surroundings. Second, no one had yet integrated the RealSense into the Donkey framework, and we hope that our Donkey part, along with our trial and error with different 3D cameras, will benefit the open source DonkeyCar community.
Team Members
Ysabelle Lam [MAE]
Jordan Prazak [ECE]
Hongjian Cui [UCSD Extension]
Design and Assembly of Donkey
Plate and Camera Mount Design
Webcam: The camera mount was designed to be adjustable because we did not know the optimal camera angle for either the indoor or outdoor track. The mount and camera case connect using an M3 screw, and the screw/nut combination allows the camera angle to be adjusted easily for optimal training.
Camera mount:
Camera case:
Intel RealSense: We started with the RealSense case at a 30 degree angle, which we had found to be the ideal angle for the webcam. With further use, however, we found it better to mount the RealSense at a flatter angle so it could see further ahead and we could make better use of the point cloud.
Mount Plate: We designed the mount plate with some tolerance. The car chassis was designed for all-terrain use, so we wanted a mount plate that could keep up with the chassis.
Mount plate CAD
Wiring Schematic
Autonomous Driving using Donkey Car Neural Network Training
Indoor
Indoor video with autonomous lap.
Outdoor
Outdoor video with autonomous lap.
Sample loss diagram after training:
Project: 3D Camera v. Webcam Obstacle Avoidance
Choosing a 3D Camera
Structure Sensor
-Software development kit only available in C++
-No viable way to transfer the depth data from the camera into a Python file in real time
-Could not be implemented into the Donkey framework
Structure Core
-SDK in Python
-Able to collect depth and IR data in Python
-Wrote/found programs to implement it in Donkey
-Only provides depth and infrared data, meaning there are no RGB values
-This is a problem outdoors (sunlight contains IR) and there is no other data to fall back on
Intel RealSense
-SDK in Python
-Marketed for both indoor and outdoor use
-Features a calibrated imaging subsystem with active IR or passive stereo depth technology, RGB images, rolling or global shutter image sensors, and wide or standard FOV
-Meaning the point cloud and images are not dependent on IR
Mounting Camera
As described in the mount design section above, we started with the RealSense case at a 30 degree angle (the ideal angle for the webcam), but with further use we found it better to mount the RealSense at a flatter angle so it could see further ahead and we could make better use of the point cloud.
You can see the final iteration here:
Donkey Part for RealSense
Removing an RGB Channel to fit Depth Data
This part removes the green channel from the RGB image and replaces it with depth data, allowing the combined image to be used for training within the Donkey Car framework.
Image that the car sees with the green channel replaced by depth:
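A minimal sketch of how a Donkey part could perform this channel swap, assuming the RGB and depth frames arrive as NumPy arrays of the same resolution (the function name and the 4 m depth cutoff are illustrative, not our exact part code):

```python
import numpy as np

def merge_depth_into_green(rgb, depth, max_depth_mm=4000):
    """Replace the green channel of an RGB image with scaled depth data.

    rgb:   HxWx3 uint8 color image
    depth: HxW   uint16 depth image in millimeters
    """
    # Clip depth to a usable range and rescale to 0-255
    depth_scaled = np.clip(depth, 0, max_depth_mm) / max_depth_mm * 255
    merged = rgb.copy()
    merged[:, :, 1] = depth_scaled.astype(np.uint8)  # overwrite green channel
    return merged
```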
Integrating both RGB and Depth Data
With this part, depth and RGB data are combined by vertically concatenating the colorized depth image with the RGB image. It also adds post-processing of the data provided by the RealSense (spatial/temporal filtering and hole filling). Because the depth data is only one layer, using a three-channel color image to represent depth produces redundant pixel values. To solve this, we split the depth image into thirds and spread it over the three layers (R, G, B), reducing image size and improving training time.
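The sketch below illustrates this flow with the pyrealsense2 SDK: the spatial, temporal, and hole-filling filters are applied to the depth frame, and the colorized depth is stacked under the RGB image. The second helper shows one possible reading of "splitting the depth image into thirds" across the R, G, B channels; the helper names and details here are assumptions for illustration, not the exact part code.

```python
import numpy as np
import pyrealsense2 as rs

# Post-processing filters provided by the RealSense SDK
spatial = rs.spatial_filter()        # edge-preserving spatial smoothing
temporal = rs.temporal_filter()      # smoothing across successive frames
holes = rs.hole_filling_filter()     # fill small gaps in the depth map
colorizer = rs.colorizer()           # map depth values to an RGB colormap

def stack_color_and_depth(frames):
    """Return the RGB frame with the colorized depth frame stacked below it."""
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()

    # Apply the SDK post-processing chain to the raw depth frame
    depth = spatial.process(depth)
    depth = temporal.process(depth)
    depth = holes.process(depth)

    color_img = np.asanyarray(color.get_data())
    depth_img = np.asanyarray(colorizer.colorize(depth).get_data())

    # Vertically concatenate color (top) and colorized depth (bottom);
    # assumes both streams were configured at the same resolution.
    return np.vstack((color_img, depth_img))

def depth_to_three_channels(depth_img):
    """One possible interpretation of splitting the single-layer depth map
    into thirds: slice it into three horizontal bands and stack them as
    R, G, B channels, reducing the image height by a factor of three."""
    h = depth_img.shape[0] // 3
    return np.dstack((depth_img[:h], depth_img[h:2*h], depth_img[2*h:3*h]))
```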
Static Obstacle Comparison
Indoor Track
Outdoor Track
Dynamic Obstacle Comparison
Indoor Track
Outdoor Track
Conclusions
Camera Recommendation
Problems Encountered and Lessons Learned
This project had its ups and downs. We ultimately accomplished what we set out to do, but we hit some hiccups along the way. The main lesson was to read the documentation and check product support before attempting to implement a product. We also learned that code wrappers are a blessing.