2019FallTeam2

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Project Objective

Modify the DonkeyCar framework and physical car to autonomously deliver personalized mail to multiple destinations identified by number. Ideally, the DonkeyCar model is trained to drive autonomous laps while hugging the right side of the road. Upon recognizing a mailbox number using OpenCV, the car stops, queues the correct package to be delivered, pushes the package off the side of the vehicle, and continues on to the next delivery.

Must have:
  • Deliver personalized mail to or directly in front of all destinations
  • Train the car to do laps in autonomous mode and then deliver mail when near a driveway using its onboard mechanical systems
Nice to have:
  • Have control over where mail gets delivered to each driveway
  • Train the car to hug the right side of the road in autonomous mode

The Team

  • Andrew Ma, MAE
  • Bijan Ardalan, MAE
  • Lucas Hwang, ECE

TestYoda.jpg

Mechanical Design

Acrylic Mounting Plate

Our mounting plate was designed very early in the design process, when our specific requirements were still largely unknown. The acrylic piece was designed to allow standard-size motors to be mounted in the center and fed down through the middle gap of the baseplate. It was also designed with holes for a couple of different screw sizes for mounting different components onto it. Thin slots were designed in the front and back for zip ties, and larger slots were put on the sides for cable management. It was laser-cut from 1/4 inch acrylic.

A render of the base plate design.
The dimensions used (in mm) in the base plate design above.

Camera Mount

We originally angled our camera down 60 degrees from the vertical in order to capture a large image of the ground and track lines. However, in order to capture the numbers for our package delivery on the side of the road, we decreased the angle to 30 degrees to capture more of the horizon.

CameraMount.png

Jetson Case

To house our Jetson Nano, we 3D printed a case from Thingiverse: https://www.thingiverse.com/thing:3518410

Jetson Case.jpg

Mail Delivery Mechanism


Choosing The Motors

Our design requirements were driven by the need to precisely choose which package to deliver. We chose servo motors over DC motors because their internal control systems saved us from having to design and implement external sensors and a control system. Originally, we used the servo motors already in the lockers; however, we realized that they were "position" servos with a limited range of rotation. This would not work for our rack and pinion design, so instead of redesigning everything we ordered continuous rotation servos. The servos we used can be found here [1]. Our design uses two servo motors, one for each rack and pinion.

Sign Design

We ended up using neon pink construction paper with numbers drawn in black marker for our signs. This design worked well because the construction paper is matte and did not produce glare in photos taken with the webcam. We had tried other materials, like folders, and found that the glare off their surfaces made it hard for OpenCV to properly recognize contours. We also tried a few different colors of construction paper before settling on pink: the camera has a tendency to add a blue tint to everything in the picture, so pink stood out the most of all the colors. Additionally, we 3D printed stands for our signs and used cardboard as a backing so we could freely switch out the colors as well as rotate the signs. (Insert picture of sign here)

Software Design

Software Overview

The software can be divided into two parts: motor control and OpenCV number recognition. For the motor control we created our own class file, myMotor.py, which was added to the parts folder. We based this file on the actuator.py file that already exists as a part in the DonkeyCar framework. myMotor.py contains two classes, myMotor and myMotorThrottle. The myMotor class handles initialization of default values for the motor as well as interfacing with the PWM board (the PCA9685). The myMotorThrottle class contains methods that set the pulse for the motors and thereby control how fast they spin.
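
A minimal sketch of that two-class structure, assuming the standard Adafruit_PCA9685 Python library (the channel number, pulse values, and method bodies here are illustrative, not the exact contents of myMotor.py):

# Illustrative sketch only -- channel numbers and pulse values are placeholders.
import Adafruit_PCA9685

class myMotor:
    """Wraps one channel of the PCA9685 PWM board for a delivery servo."""

    def __init__(self, channel, address=0x40, frequency=60):
        self.pwm = Adafruit_PCA9685.PCA9685(address=address)
        self.pwm.set_pwm_freq(frequency)
        self.channel = channel

    def set_pulse(self, pulse):
        # Write a raw 12-bit pulse value to this servo's channel.
        self.pwm.set_pwm(self.channel, 0, pulse)

class myMotorThrottle:
    """Turns a spin command into PWM pulses for a continuous-rotation servo."""

    def __init__(self, motor, stop_pulse, cw_pulse, ccw_pulse):
        self.motor = motor
        self.stop_pulse = stop_pulse
        self.cw_pulse = cw_pulse
        self.ccw_pulse = ccw_pulse

    def run(self, direction):
        # direction: 1 = clockwise, -1 = counterclockwise, 0 = stop
        if direction > 0:
            self.motor.set_pulse(self.cw_pulse)
        elif direction < 0:
            self.motor.set_pulse(self.ccw_pulse)
        else:
            self.motor.set_pulse(self.stop_pulse)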

Servo Motors

To control the servo motors on top of the car, we used a modified version of actuator.py, which can be found in the default DonkeyCar parts folder, since we used the same PWM board to control the steering and throttle servos as well as the mail delivery servos. Once we created the class, it was simply a matter of adding it as a part in manage.py so the DonkeyCar framework recognized it.
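
In the DonkeyCar framework a part is registered with the vehicle loop in manage.py. A hedged sketch of what that registration could look like (the import path, input channel name, and config variable names are assumptions, not the exact code):

# Sketch of registering the part inside manage.py's drive() -- names are illustrative.
from donkeycar.parts.myMotor import myMotor, myMotorThrottle

left_servo = myMotor(channel=5)
left_throttle = myMotorThrottle(left_servo,
                                stop_pulse=cfg.MAIL_STOP_PULSE,
                                cw_pulse=cfg.MAIL_CW_PULSE,
                                ccw_pulse=cfg.MAIL_CCW_PULSE)

# The part runs every loop iteration and reads its command from vehicle memory.
V.add(left_throttle, inputs=['mail/left_direction'])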

myMotor.py takes advantage of the imported Adafruit PCA9685 library for the corresponding PWM board to set the PWM signal. We then manually determined the pulse time constant, which represents how much time it takes to move the slider one box over. For example, moving from box 3 to box 5 means spinning the motor for two times the pulse constant, and so on. To change the default motor speeds, we added config variables to the myconfig.py file found in the d3 directory for the PWM pulses required to stop the motor, spin it clockwise, and spin it counterclockwise.
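
For illustration, the myconfig.py additions described above could look something like this (variable names and values are placeholders; the real pulse widths and timing were tuned by hand):

# myconfig.py additions -- values are placeholders, not the tuned values.
MAIL_STOP_PULSE = 370    # PWM value that holds the continuous-rotation servo still
MAIL_CW_PULSE = 420      # spins the rack clockwise
MAIL_CCW_PULSE = 320     # spins the rack counterclockwise
MAIL_PULSE_TIME = 0.6    # seconds of spinning needed to move the slider one box over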

Neural Network Training

For our neural network training we did not make any changes to the original DonkeyCar framework. We used about 12,000 records to train a model indoors. One of the main reasons we decided to train our car indoors was the rainy weather outside. The rain not only damaged the signs that we created but also hurt our number recognition: the low light on rainy days, compared to the conditions we had trained the model in, made it hard for our camera to recognize the contours needed for number recognition.

Although the number recognition software operates separately from the DonkeyCar framework, it was important to train the model with the signs on the course. If we added the signs after the car had been trained, there was the possibility that the car would not know how to respond when seeing a sign. This is because the model associates steering and throttle with a given array of RGB values. We concluded that if we introduced a sudden change in the RGB values the model was accustomed to seeing (i.e., adding a neon pink sign to the course), our model would not perform as well.

Number Recognition Methodology

Although number recognition can be done in a variety of ways, for our project we decided to use seven-segment number recognition. This process can be broken down into several steps:


1. Color Filtering

To create a region of interest within the photos taken by the camera on the car, we decided to make the sign neon pink. Therefore, the first thing the number recognition software does is crop the photo down to the pink construction paper where the number is written. To do this, we created a mask using RGB filtering to filter out all colors except pink. We also know that the sign will be on the right side of the screen, so we automatically crop the photo to its right half. Below is a picture of the original input to our camera and the black and white mask created after color filtering; white represents everything pink in the photo and black represents every other color.

Team2 2019 NumberRecongition ColorFiltering 1.PNG Team2 2019 NumberRecongition ColorFiltering 2.PNG


2. Contour Recognition

OpenCV treats a contour as a continuous curve of points along a boundary that share the same color or intensity. For our software, there are two contours that need to be recognized. The first we call the display contour: the outline of the construction paper, which is easily recognized in OpenCV after the color filtering has been applied. After cropping the photo to only the display contour, we need to recognize the digit contour, which is formed by the digit drawn on the piece of construction paper. The photo is cropped a final time after the digit contour is found. Two photos showing both contours:

Team2 2019 NumberRecongition Contour 1.PNG Team2 2019 NumberRecongition Contour 2.PNG


3. Seven Segment Recognition

After isolating the digit contour, the program splits the photo into seven different segments. It then checks how much of each segment is filled with white pixels; if over 50% of a segment is filled with white pixels, the segment is considered to be present. Once all seven segments have been checked, a lookup is performed on a dictionary that maps each unique seven-segment key to a digit value.

Team2 2019 NumberRecongition SevenSegment 1.PNG Team2 2019 NumberRecongition SevenSegment 2.PNG
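
The segment-comparison code itself is not shown in the code screenshots below, so here is a minimal sketch of the test described above. It assumes the cropped digit is already a binary image (white digit on black); the segment geometry is illustrative, while the 50% threshold follows the description:

import cv2

def classify_digit(digit, lookup):
    # `digit` is a binary (white-on-black) image cropped to the digit contour.
    h, w = digit.shape
    seg_w, seg_h = int(w * 0.25), int(h * 0.15)
    half_h = h // 2
    # (x, y, width, height) regions for the seven segments: top, top-left,
    # top-right, center, bottom-left, bottom-right, bottom.
    segments = [
        (0, 0, w, seg_h),                     # top
        (0, 0, seg_w, half_h),                # top-left
        (w - seg_w, 0, seg_w, half_h),        # top-right
        (0, half_h - seg_h // 2, w, seg_h),   # center
        (0, half_h, seg_w, half_h),           # bottom-left
        (w - seg_w, half_h, seg_w, half_h),   # bottom-right
        (0, h - seg_h, w, seg_h),             # bottom
    ]
    on = []
    for (x, y, sw, sh) in segments:
        roi = digit[y:y + sh, x:x + sw]
        # A segment counts as lit if more than 50% of its pixels are white.
        filled = cv2.countNonZero(roi) / float(sw * sh)
        on.append(1 if filled > 0.5 else 0)
    return lookup.get(tuple(on))  # None if the pattern is not a known digit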

Number Recognition Code

The code first defines a dictionary for all 10 digits and the corresponding segments that light up on a seven-segment display. Each 1 within DIGITS_LOOKUP represents a lit segment on a seven-segment display.

Team2 2019 CodeSnip 1.png
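
The screenshot is not reproduced here, but a standard seven-segment lookup table consistent with the description (segment order: top, top-left, top-right, center, bottom-left, bottom-right, bottom) looks like this:

# Each key lists the seven segments in the order top, top-left, top-right,
# center, bottom-left, bottom-right, bottom; a 1 means the segment is lit.
DIGITS_LOOKUP = {
    (1, 1, 1, 0, 1, 1, 1): 0,
    (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 0, 1): 2,
    (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4,
    (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6,
    (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}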


Next, the code defines the upper and lower bounds for the RGB mask, applies the mask to the photo, and then crops the photo to only the right half.

Team2 2019 CodeSnip 2.png
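
A rough equivalent of that step is sketched below; the pink bounds are placeholders, since the real bounds were tuned for our camera and lighting:

import cv2
import numpy as np

# Placeholder pink bounds in BGR order -- the tuned values are in the screenshot above.
LOWER_PINK = np.array([130, 0, 130], dtype=np.uint8)
UPPER_PINK = np.array([255, 120, 255], dtype=np.uint8)

image = cv2.imread('frame.jpg')                   # one frame from the webcam
mask = cv2.inRange(image, LOWER_PINK, UPPER_PINK) # white where the pixel is "pink"

# The sign is always on the right side of the road, so keep only the right half.
h, w = mask.shape
mask = mask[:, w // 2:]
image = image[:, w // 2:]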

The image is resized and an edge map is created. The program finds all contours within the edge map and then sorts them.

Team2 2019 CodeSnip 3.png
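
Continuing the sketch from the previous block, an approximate version of the resize, edge-map, and contour-sorting step (the blur kernel and Canny thresholds are illustrative):

# Enlarge the mask, build an edge map, then find and sort the contours.
mask = cv2.resize(mask, (0, 0), fx=2.0, fy=2.0)
image = cv2.resize(image, (0, 0), fx=2.0, fy=2.0)
blurred = cv2.GaussianBlur(mask, (5, 5), 0)
edged = cv2.Canny(blurred, 50, 200)

# cv2.findContours returns 2 or 3 values depending on the OpenCV version.
found = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = found[0] if len(found) == 2 else found[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)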

The program iterates through each contour and stops once it has found a contour of the right size. This contour becomes the display contour which represents the piece of pink construction paper.

Team2 2019 CodeSnip 4.png
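
An approximate version of that loop, with a placeholder area threshold:

# Walk the contours from largest to smallest and keep the first one big enough
# to be the pink construction paper -- this becomes the display contour.
display_cnt = None
for c in cnts:
    if cv2.contourArea(c) > 1000:   # area threshold is a placeholder
        display_cnt = c
        break

if display_cnt is not None:
    # Crop the color image to the display contour's bounding box.
    x, y, w, h = cv2.boundingRect(display_cnt)
    display = image[y:y + h, x:x + w]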

The image is then cropped to just the display contour and a series of operations are applied to clean up the photo. This reduces noise and makes it easier to recognize the digit contour. The same process of looking for contours is then repeated to find the digit contour.

Team2 2019 CodeSnip 5.png
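
A sketch of the cleanup and second contour search; the exact clean-up operations in the screenshot may differ, so the ones below are illustrative:

# Clean up the cropped display before looking for the digit contour:
# grayscale, Otsu threshold (digit becomes white), and a morphological opening
# to remove speckle noise.
gray = cv2.cvtColor(display, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)

# Repeat the contour search on the cleaned-up image to find the digit contour.
found = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
digit_cnts = found[0] if len(found) == 2 else found[1]
digit_cnts = sorted(digit_cnts, key=cv2.contourArea, reverse=True)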

This is the second for loop which looks for the digit contour. Once a contour of the appropriate size is found, the image is again cropped to only the digit contour.

Team2 2019 CodeSnip 6.png
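
A sketch of that second loop, which crops to the digit contour and hands the result to the seven-segment classifier sketched earlier (the size limits are placeholders):

# Take the first contour that is plausibly digit-sized, crop to it, and classify.
digit_roi = None
for c in digit_cnts:
    x, y, w, h = cv2.boundingRect(c)
    if w > 15 and h > 30:
        digit_roi = thresh[y:y + h, x:x + w]
        break

if digit_roi is not None:
    number = classify_digit(digit_roi, DIGITS_LOOKUP)
    print('Recognized mailbox number:', number)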

Breaking From Autonomous Mode


Final Implementation In Donkey


Lessons Learned


Useful Knowledge


Challenges and Solutions


The Final Prototype


Mail Delivery In Action

File:MailDelivery

Results


Future Improvements


References
