2019SpringTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 21:03, 10 June 2019 by Spring2019Team6 (talk | contribs) (Vehicle Design)
Dramatic trackangle.jpg

Introduction

The initial goal of our project was to use a time-of-flight sensor to detect possible parking spaces and begin a parallel parking maneuver whenever a space satisfies the minimum length requirement. We drew our inspiration from 2019 Winter Team 6 and Team 7. Unlike previous years, however, we measured the length of the parking space directly as a distance using an encoder, rather than inferring it from a constant velocity and elapsed time. We also integrated this method into the Donkey framework so that we could train our model on velocity instead of throttle.

Team Members

• Andrew Sanchez

• Connor Roberts

• Genggeng Zhou

• Ian Delaney

• Ivan Ferrier


Vehicle Design

The car was already fully assembled when we received it. To start with, we had to design a baseplate to hold all electronic components and a camera mount to hold the camera at a desired angle.

Baseplate6.png Camera mount prototype.png

Our original camera mount was quite flexible: the height and camera angle could be adjusted to the desired position via its joints. However, we soon found that this design was not stable; vibrations from the car could easily shift the camera position while driving, which made it unsuitable for training. We switched to a more rigid design built with fixed, measured dimensions.

File:Camera mount.jpg

To increase downforce, we added a GT wing!

GT wing.png

Wiring Diagram

Wiring diagram.jpg

Objectives

Indoor Training


Outdoor Training

When we started training on the outdoor track, we noticed that the camera could not effectively capture the track, which was much wider than the indoor one. We solved this by 3-D printing an extension mount that allowed easy switching between our indoor and outdoor camera setups. With our experience from indoor training, outdoor training was relatively easy. We were lucky to train on an overcast day, which provided consistent lighting throughout. While collecting training data, we tried our best to keep the car at the center of the track and apply a constant throttle. Eventually, we trained our first model on 25k data points, which was approximately 25 laps, and achieved 3 fully autonomous laps with it.

Robofest Competition

During the Robofest competition, we noticed two things we had been doing wrong the whole time. Throughout our training, we believed we should always keep the car at the center of the track so that our model would do exactly the same. It turns out, however, that a "bad driver" can sometimes build a better model, because the model needs to learn what to do when something goes wrong. In other words, with a "good driver", the model will not necessarily know how to respond when the car is heading off the track, because it may never have encountered such a situation during training. With a "bad driver", the model learns that it should make a left or right turn when it is going off the track. The second mistake was that we had been scaling down the throttle value when running the model, out of worry that the car would not make the really sharp turns. During the competition, with the throttle scaled to about 0.7, the car moved very slowly and barely completed three autonomous laps. When we scaled the throttle up to 1.1, the car completed more than 5 autonomous laps without any problem. We concluded that the throttle value used when running the model needs to be consistent with the value used during training.

Project

Github

To get started quickly on a parallel parking project or speed-based training, clone our Github repository outside the d2t directory of your project with the following commands:

cd ~/ # ensures you don't overwrite your current working d2t folder

git clone https://github.com/tonikhd/AutonomousRC.git

It is important to note that our code breaks the current manage.py file in the directory and instead uses the file WorkingSpeedTraining.py to run models, train, and drive the car, because we train with speed as an input rather than with a throttle value.
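When a model is trained on speed, at drive time it predicts a target speed rather than a throttle value, so something must convert that target back into a throttle command using the measured speed from the encoder. The following is a minimal sketch of one way to do that, not the team's actual code: the class name, gains, and throttle range are all illustrative assumptions.

```python
class SpeedController:
    """Hypothetical proportional controller converting a model's
    target speed (m/s) into a throttle command in [0, 1].
    Gains are placeholders, not the team's calibrated values."""

    def __init__(self, kp=0.1, base_throttle=0.2):
        self.kp = kp              # proportional gain on speed error
        self.base = base_throttle # throttle that roughly holds cruise speed

    def run(self, target_speed, measured_speed):
        # Steer throttle up or down in proportion to the speed error,
        # then clamp the result into the valid throttle range.
        error = target_speed - measured_speed
        throttle = self.base + self.kp * error
        return max(0.0, min(1.0, throttle))
```

A Donkey-style part like this would be called once per loop with the model's predicted speed and the encoder's latest measurement.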

Encoder

Magnetometer
Magnet Modification

Our original plan was to connect a rotary encoder directly to the drivetrain to measure the distance traveled by the car. With the help of Professor Silberman and some friends, however, we came up with a much simpler design. In our final design, three magnets were evenly spaced on the gear, and a magnetometer produced one tick each time a magnet passed it. The total distance traveled over a given period can then be determined by multiplying the number of ticks by the distance traveled per tick, which we calibrated manually. This design may not be as accurate as a rotary encoder, but it was sufficient for our application and saved space on the chassis, as the original encoder was significantly larger.
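The tick-to-distance arithmetic described above can be sketched as follows. The three-magnet count comes from the text, but the distance-per-tick value here is a made-up placeholder that would need to be calibrated by hand for a real drivetrain:

```python
# Hypothetical calibration constants -- measure these for your own car.
MAGNETS_PER_REV = 3     # magnets evenly spaced on the gear
DIST_PER_TICK = 0.02    # metres travelled per magnet pass (placeholder)

def distance_travelled(tick_count):
    """Total distance (m) from the number of magnetometer ticks."""
    return tick_count * DIST_PER_TICK

def speed(tick_count, elapsed_s):
    """Average speed (m/s) over a polling interval."""
    if elapsed_s <= 0:
        return 0.0
    return distance_travelled(tick_count) / elapsed_s
```

With these two functions, speed and distance can be returned on demand whenever a caller asks, which matches the on-call (rather than continuously polling) style described below.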

The code for the encoder that we used in our parallel parking script is the file labeled t6_encoder.py[1] in our team's Github repository. It removes the running thread from Tawn's version and instead returns speed and distance on demand when called, rather than constantly polling for them in the background.

The code for the encoder used in the speed based training uses Tawn's encoder.py[2] file.

Parallel Parking

The parallel parking script polls for edges using the time of flight sensor and measures the distance between spots using the encoder. The parameter for spot length was determined by the length of the car and the car's turning radius. Once the car detects an adequate length spot, it performs the parallel parking maneuver.
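The spot-detection loop described above can be sketched as follows. This is an illustration, not the actual parallelpark.py code: the function names, thresholds, and sensor interfaces are all assumptions, with the ToF reading and encoder distance supplied as caller-provided functions.

```python
def find_spot(read_tof_mm, read_distance_m, min_spot_m=0.8, gap_mm=400):
    """Return True once the car has driven past an open gap at least
    min_spot_m long.  read_tof_mm() gives the sideways time-of-flight
    range in mm; read_distance_m() gives cumulative encoder distance
    in metres.  Thresholds are illustrative placeholders."""
    gap_start = None
    while True:
        side = read_tof_mm()
        pos = read_distance_m()
        if side > gap_mm:               # sensor sees open space beside the car
            if gap_start is None:
                gap_start = pos         # mark the leading edge of the gap
            elif pos - gap_start >= min_spot_m:
                return True             # gap long enough: begin the maneuver
        else:
            gap_start = None            # obstacle alongside: reset the search
```

Once this returns, the script would hand off to the fixed parallel-parking maneuver, whose turn geometry depends on the car's length and turning radius as noted above.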

The code for the parallel parking script is labeled parallelpark.py[3] in our Github repo.

Speed Based Training

Obstacles

  • Installing the libraries for the VL53L1X time of flight sensor with pip gave us an error about "site packages" not being found. The issue turned out to involve the virtual environment and pip, and the solution was to install the libraries manually. To do so, clone the Github[4] library for the time of flight sensor onto the Pi using the following command.

    git clone https://github.com/pimoroni/vl53l1x-python

    Then run the command:

    sudo python vl53l1x-python/setup.py install

    This will install the site packages required by the time of flight sensor in the appropriate place.

  • The indoor track had poor lighting, so to combat this we installed a series of LEDs under the bumper using popsicle sticks.

  • Dramatic frontview.jpeg


Improvements

Some potential improvements include:

  • Determining the minimum parking space length from the vehicle's dimensions
  • Autonomously adjusting the position of the vehicle using multiple ToF sensors before parallel parking
    • Additionally, using these extra time of flight sensors to keep the car parallel to the side of the "street" and at the optimal distance for parallel parking
  • Implementing the PS3 controller to drive the car in our parallel parking application
  • Running the parallel parking script alongside one of our autonomous driving models