Project stereo
The goal of this project was to implement stereo vision in the Donkeycar framework using two USB cameras, train the neural network to drive our robot car around a track, and compare its performance to that of a Donkeycar vehicle trained on the same track with only one camera.
Page still in progress.
Mounting USB Cameras to the Car
A 3D-printed mount that holds both USB cameras was created for the front of the car. The mount was designed so that the seats for both cameras move as a single unit, keeping the cameras pointed at the same angle and minimizing the possibility of them pointing in different directions.
The mount also contains screw holes in the center, intended to attach the original Picamera mount that we used in the first half of the quarter. See the section "Possible Improvements" for more details on how this was meant to be used.
Training
The tawn_donkey.py template file provided by Tawn Kramer (here) already has code to implement stereo vision with the USB cameras. This code uses the pygame library, which does not come preinstalled in the Donkeycar framework. To install pygame, log onto the RPI and type in the following:
> pip install pygame
After this, create a vehicle with the tawn_donkey.py template file, then change into the new directory:
> donkey createcar --path ~/stereoCar --template tawn_donkey
> cd stereoCar
Note, stereoCar is the name of the new vehicle. Replace it in the commands above with a name of your choosing. Next, modify config.py to calibrate the steering and throttle as shown in the class tutorial, with the additional step of setting the following camera parameters:
#CAMERA
CAMERA_TYPE = "WEBCAM"
IMAGE_W = 160
IMAGE_H = 120
IMAGE_DEPTH = 3
CAMERA_FRAMERATE = DRIVE_LOOP_HZ
Now Donkeycar is set to use stereo vision. Make sure the cameras are plugged into the USB ports of the RPI, and start driving the car to collect data by typing the following:
> python manage.py drive --camera=stereo
To train the neural network with your data, follow the instructions given in the class tutorial. To run the model on your RPI, remember to add the "--camera=stereo" flag.
> python manage.py drive --model=models/*your model goes here*.h5 --camera=stereo
tawn_donkey.py
Tawn Kramer's stereo code works by taking the images from the two stereo cameras (Left, Right), making them monochrome, producing an image equal to the difference of intensities (Left minus Right), then layering the three images as the channels of a single RGB image and saving it. This allows Donkeycar to utilize two cameras without having to change any other code in the architecture. A sketch of this combination step is shown below.
[Images: Left | Right | Difference | Combined Image]
The combined images are saved into tubs in the data directory, just as images taken by the single Picamera would be.
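A minimal sketch of that combination step, assuming numpy and PIL are available; the function name is ours, and the exact channel ordering in tawn_donkey.py may differ:

import numpy as np
from PIL import Image

def stereo_to_rgb(left_img, right_img):
    # Make each camera frame monochrome (a single intensity channel).
    left = np.asarray(left_img.convert("L"), dtype=np.int16)
    right = np.asarray(right_img.convert("L"), dtype=np.int16)
    # Difference of intensities (Left minus Right), clipped to 0-255
    # so negative values do not wrap around.
    diff = np.clip(left - right, 0, 255)
    # Layer the three single-channel images as the R, G, B planes
    # of one combined image.
    combined = np.dstack((left, right, diff)).astype(np.uint8)
    return Image.fromarray(combined, "RGB")

Because the output is an ordinary RGB image, it drops straight into the existing tub format and the rest of the Donkeycar pipeline is unaffected.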
Results
filler
Depth Perception
filler
Code
filler
Possible Improvements
For a more accurate comparison of the performance between the stereo and mono regimes, we considered using the Picamera concurrently with the two USB cameras to record data through all three cameras simultaneously. This would ensure that both the stereo and mono models were trained in the same conditions with similar data, giving us a mono model that would act as a proper control group. We didn't have the time to program this functionality, but it might be something worth implementing in future projects; a rough sketch of how it could look follows.
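The sketch below uses OpenCV for the two USB cameras and the picamera library for the Picamera. These library choices, the function name, and the camera indices are our illustration, not part of the Donkeycar templates:

import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

def capture_three(left_index=0, right_index=1):
    # Two USB cameras through OpenCV.
    left_cap = cv2.VideoCapture(left_index)
    right_cap = cv2.VideoCapture(right_index)
    # The Picamera through its own library.
    pi_cam = PiCamera(resolution=(160, 120))
    pi_buf = PiRGBArray(pi_cam)
    try:
        ok_left, left = left_cap.read()
        ok_right, right = right_cap.read()
        pi_cam.capture(pi_buf, format="rgb")
        if not (ok_left and ok_right):
            raise RuntimeError("USB camera read failed")
        # OpenCV frames are BGR; convert so all three frames match.
        left = cv2.cvtColor(left, cv2.COLOR_BGR2RGB)
        right = cv2.cvtColor(right, cv2.COLOR_BGR2RGB)
        return left, right, pi_buf.array
    finally:
        left_cap.release()
        right_cap.release()
        pi_cam.close()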
The code we wrote for depth perception was functional, but it ran too slowly on the Raspberry Pi hardware. Future teams could work to improve our existing code, or implement depth perception on faster hardware instead.
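For reference, disparity (and therefore depth) can be estimated from a stereo pair with OpenCV's block matcher. This is not our actual code; the file names and parameters are illustrative:

import cv2

# Load a stereo pair as grayscale; block matching works on
# single-channel images.
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# Larger disparity values correspond to closer objects.
disparity = matcher.compute(left, right)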