2019FallTeam2
=== Acrylic Mounting Plate ===
[[File:Base_Plate_Render.jpg|400px|thumb|A render of the base plate design.]]
[[File:Base_Plate_Drawing.jpg|400px|thumb|The dimensions (in mm) used in the base plate design above]]
=== Camera Mount ===
Revision as of 22:37, 3 December 2019
Modify the Donkey Car framework and physical car to autonomously deliver personalized mail to multiple destinations based on mailbox number. Ideally, the Donkey Car model will be trained to do autonomous laps while hugging the right side of the road. Upon recognizing a mailbox number using OpenCV computer vision, the car will stop, queue the correct package, push it off the side of the vehicle, and continue on to the next delivery.
- Must have:
  - Deliver personalized mail to or directly in front of all destinations
  - Train the car to do laps in autonomous mode and then deliver mail when near a driveway using its on-board mechanical systems
- Nice to have:
  - Have control over where mail gets delivered at each driveway
  - Train the car to hug the right side of the road in autonomous mode
- Andrew Ma, MAE
- Bijan Ardalan, MAE
- Lucas Hwang, ECE
We originally angled our camera down 60 degrees from the vertical in order to capture a large image of the ground and track lines. However, in order to capture the numbers for our package delivery on the side of the road, we decreased the angle to 30 degrees to capture more of the horizon.
To house our Jetson Nano, we 3D printed a case from Thingiverse. The design can be found here: https://www.thingiverse.com/thing:3518410
=== Mail Delivery Mechanism ===
=== Choosing The Motors ===
We ended up using neon pink construction paper with numbers drawn in black marker for our signs. This worked best because the construction paper is matte and produced no glare in photos taken with the webcam. We had tried other materials, like folders, but the glare off their surfaces made it hard for OpenCV to properly recognize contours. We also tried a few different colors of construction paper before settling on pink: the camera tends to add a blue tint to everything in the picture, so pink stood out the most of all the colors. Additionally, we 3D printed stands for the signs and used cardboard backings so we could freely swap out the colors as well as rotate them. (Insert picture of sign here)
The software can be divided into two parts: motor control and OpenCV number recognition. For motor control we created our own class file called myMotor.py, which was added to the parts folder. We based this file on actuator.py, which already exists as a part in Donkey. myMotor.py contains two classes, myMotor and myMotorThrottle. The myMotor class is in charge of initializing default values for the motor as well as interfacing with the PWM board (the PCA9685). The myMotorThrottle class contains methods which set the pulse for the motors, thereby controlling how fast they spin.
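As a rough illustration of the structure described above, here is a minimal, hardware-free sketch of the two classes. The PCA9685 driver is stubbed out so the sketch runs without hardware; the channel number and pulse values are illustrative assumptions, not the actual code.

```python
class FakePCA9685:
    """Stand-in for the Adafruit PCA9685 driver; records the last pulse set."""
    def __init__(self):
        self.last = {}

    def set_pwm(self, channel, on, off):
        self.last[channel] = (on, off)


class myMotor:
    """Holds default pulse values for the motor and the PWM board interface."""
    def __init__(self, pwm, channel, stop_pulse=1500, cw_pulse=1700, ccw_pulse=1300):
        self.pwm = pwm              # the PCA9685 (stubbed here)
        self.channel = channel      # PWM channel the motor is wired to (assumed)
        self.stop_pulse = stop_pulse
        self.cw_pulse = cw_pulse
        self.ccw_pulse = ccw_pulse

    def set_pulse(self, pulse):
        self.pwm.set_pwm(self.channel, 0, pulse)


class myMotorThrottle:
    """Maps a throttle command onto a pulse, controlling motor speed/direction."""
    def __init__(self, motor):
        self.motor = motor

    def run(self, throttle):
        if throttle > 0:
            self.motor.set_pulse(self.motor.cw_pulse)    # spin clockwise
        elif throttle < 0:
            self.motor.set_pulse(self.motor.ccw_pulse)   # spin counterclockwise
        else:
            self.motor.set_pulse(self.motor.stop_pulse)  # stop


pwm = FakePCA9685()
motor = myMotor(pwm, channel=5)
myMotorThrottle(motor).run(1)
print(pwm.last[5])  # (0, 1700)
```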
To control the servo motors on top of the car, we used a modified version of actuator.py, which can be found in the default DonkeyCar parts file, since we used the same PWM board to control both the steering and throttle servos as well as the mail delivery servos. Once we created the class, it was simply a matter of adding it as a part to manage.py so the DonkeyCar framework recognizes the part.
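Under the Donkeycar part convention, a part is a class whose run() method the vehicle loop calls once per cycle, registered with V.add() in manage.py. The sketch below shows that shape; MailDeliveryPart and the channel names are hypothetical stand-ins, not the team's actual part.

```python
class MailDeliveryPart:
    """Minimal Donkeycar-style part: the framework calls run() once per loop."""
    def __init__(self):
        self.deliveries = 0

    def run(self, detected_number):
        # Deliver a package whenever a mailbox number has been recognized.
        if detected_number is not None:
            self.deliveries += 1
            return True   # delivered on this cycle
        return False

# In manage.py the part would be registered roughly like (channel names assumed):
#   V.add(MailDeliveryPart(), inputs=['cv/number'], outputs=['mail/delivered'])

part = MailDeliveryPart()
print(part.run(3), part.run(None), part.deliveries)  # True False 1
```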
myMotor.py takes advantage of the imported Adafruit PCA9685 library for the corresponding PWM board to set the PWM signal. We then adjust the pulse time constant, which represents how much time it takes to move the slider one box over. For example, moving from box 3 to box 5 means spinning the motor for two times the pulse constant, and so on. To change the default motor speeds, we added some constants to the myconfig.py file found in the d3 directory. These constants represent the pulses required to stop the motor, spin it clockwise, and spin it counterclockwise.
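The pulse-constant arithmetic can be sketched as follows; the PULSE_TIME value and the function name are illustrative assumptions, not the constants actually set in myconfig.py.

```python
PULSE_TIME = 0.4  # assumed: seconds of motor spin to move the slider one box


def spin_time(current_box, target_box):
    """Time to run the motor to travel between two slider box positions."""
    return abs(target_box - current_box) * PULSE_TIME


print(spin_time(3, 5))  # box 3 -> box 5 is two boxes: 0.8
```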
=== Neural Network Training ===
For our neural network training we did not make any changes to the original Donkeycar framework. We used about 12,000 records to train a model indoors. One of the main reasons we trained indoors was the rainy weather outside. The rain not only damaged the signs we created but also hurt our number recognition: brightness was much lower when it rained than when we had trained the model, making it hard for our camera to recognize the contours needed for number recognition.
Although the number recognition software operates separately from the Donkeycar framework, it was important to train the model with the signs on the course. If we added the signs after the car had been trained, the car might not know how to respond when seeing one. The model associates steering and throttle with a given array of RGB values, so we concluded that a sudden change in the RGB values the model was accustomed to seeing (i.e., adding a neon pink sign to the course) would make our model perform worse.
=== Number Recognition Methodology ===
Although number recognition can be done in a variety of ways, for our project we decided to use seven-segment number recognition. This process can be broken down into several steps:
1. Color Filtering
To create a region of interest within the photos taken by the car's camera, we made the sign neon pink. The first job of the number recognition software is therefore recognizing and cropping the photo down to the pink construction paper where the number is written. To do this, we created a mask using RGB filtering to filter out all colors except pink. Since we also know the sign will be on the right side of the screen, we automatically crop the photo to its right half. Here is a picture of the original camera input and the black-and-white mask created after color filtering: white represents everything pink in the photo and black represents every other color.
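The filtering step can be sketched in numpy (the actual code uses OpenCV, where cv2.inRange plays the same role); the pink thresholds below are assumed for illustration, not the tuned values used on the car.

```python
import numpy as np


def pink_mask(rgb_image):
    """Return a binary mask that is 255 where a pixel looks neon pink."""
    # Crop to the right half first, since the signs sit on the right side.
    h, w, _ = rgb_image.shape
    right = rgb_image[:, w // 2:]
    r, g, b = right[..., 0], right[..., 1], right[..., 2]
    # Neon pink: strong red and blue channels, weaker green (assumed bounds).
    mask = (r > 180) & (g < 120) & (b > 120)
    return mask.astype(np.uint8) * 255


# Tiny 2x4 test image: all black except one pink-ish pixel in the right half.
img = np.zeros((2, 4, 3), dtype=np.uint8)
img[0, 3] = [255, 20, 147]
m = pink_mask(img)
print(m.shape, int(m[0, 1]))  # (2, 2) 255
```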
2. Contour Recognition
OpenCV defines a contour as any continuous line of the same color. For our software, two contours need to be recognized. The first, which we call the display contour, is the outline of the construction paper; it is easily recognized in OpenCV once the color filtering has been applied. After cropping the photo to only the display contour, we need to recognize the digit contour, which is formed by the digit drawn on the piece of construction paper. The photo is cropped a final time after the digit contour is found. Below are photos of both contours:
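In OpenCV these crops would use cv2.findContours; on a clean binary mask, cropping to the display contour can be approximated by the bounding box of the white pixels. A simplified numpy sketch of that idea (not the actual pipeline):

```python
import numpy as np


def crop_to_white(mask):
    """Crop a binary mask to the bounding box of its nonzero (white) pixels."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return mask  # nothing detected; leave the image alone
    return mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]


mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 1:5] = 255              # a white rectangle standing in for the sign
print(crop_to_white(mask).shape)  # (2, 4)
```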
=== Number Recognition Code ===
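A hedged sketch of the seven-segment lookup idea: once the digit is isolated, each of the seven segments is checked for being lit, and the on/off pattern is looked up in a table. The segment ordering, table, and helper below are illustrative, not the project's actual code.

```python
# Segment order assumed: (top, top-left, top-right, center,
#                         bottom-left, bottom-right, bottom)
DIGITS = {
    (1, 1, 1, 0, 1, 1, 1): 0,
    (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 0, 1): 2,
    (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4,
    (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6,
    (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}


def classify(segments):
    """Map a tuple of on/off segment flags to a digit, or None if unknown."""
    return DIGITS.get(tuple(segments))


print(classify((1, 1, 0, 1, 0, 1, 1)))  # 5
```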
=== Breaking From Autonomous Mode ===
=== Final Implementation In Donkey ===
=== Challenges and Solutions ===
=== The Final Prototype ===
=== Mail Delivery In Action ===