From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 22:57, 14 June 2019 by Spring2019Team6 (talk | contribs) (Parallel Parking)
Dramatic trackangle.jpg


The initial goal of our project was to use a time-of-flight (ToF) sensor to detect possible parking spaces and begin a parallel parking maneuver whenever a space satisfies the minimum length requirement. We drew our inspiration from 2019 Winter Team 6 and Team 7. Unlike those teams, however, we measured the length of the parking space directly with an encoder rather than inferring it from a constant velocity and elapsed time. We also integrated this method into the donkey framework so that we could train our model on velocity instead of throttle.

Team Members

• Andrew Sanchez

• Connor Roberts

• Genggeng Zhou

• Ian Delaney

• Ivan Ferrier

Group picture.jpg

Left to right: Andrew, Ivan, Ian, Genggeng, Connor

Vehicle Design

The car was already fully assembled when we received it. To start with, we had to design a baseplate to hold all electronic components and a camera mount to hold the camera at a desired angle.

Baseplate6.png Camera mount prototype.png

Our original camera mount was quite flexible: the height and camera angle could be adjusted by repositioning the joints. However, we soon found that this design was not stable; vibrations from the car could easily shift the camera position while driving, which made it unsuitable for training. We switched to a more rigid design with fixed, measured dimensions.

New camera mount.jpg

To increase downforce, we added a GT wing!

GT wing.png

Wiring Diagram

The wiring diagram includes the basic setup as well as the ToF sensor and the Hall effect sensor, which are used only in the team project to measure distance.

Wiring diagram.jpg


Indoor Training

When training for the indoor track, we decided the best strategy was to drive along the center of the track at a constant throttle. At the time we had a problem with the steering calibration, and the car kept drifting to the left, so we had to collect a large amount of data to build a model that could complete the indoor track.

Outdoor Training

When we started training for the outdoor track, we noticed that the camera could not effectively capture the track, as it was much wider than the indoor track. We solved this by 3-D printing an extension mount that allowed easy switching between our indoor and outdoor camera setups. With experience from indoor training, outdoor training was relatively easy. We were lucky to do our training on an overcast day, which provided consistent lighting throughout. While training the model, we tried our best to keep the car at the center of the track and apply a constant throttle. Eventually, we trained our first model on 25k data points, which was approximately 25 laps. We were able to achieve 3 fully autonomous laps using this model.

Robofest Competition

During the Robofest competition, we noticed two things we had been doing wrong the whole time. Throughout our training, we believed we should always keep the car at the center of the track so that our model would do exactly the same. It turns out that a "bad driver" can sometimes build a better model, because the model needs examples of how to recover when something goes wrong. With a "good driver", the model may never see the car leaving the track during training and will not know how to respond; with a "bad driver", the model learns to steer left or right when it starts to leave the track. The second mistake was scaling down the throttle value when running the model, because we worried the car would not make the really sharp turns. During the competition, with the throttle scaled to about 0.7, the car moved very slowly and barely completed three autonomous laps. When we scaled the throttle up to 1.1, the car completed more than five autonomous laps without any problem. We concluded that the throttle value used when running the model needs to be consistent with the value used during training.



To get started quickly on a parallel parking project or speed-based training, clone our GitHub repository outside the d2t directory of your project with the following commands:

cd ~/ # ensures you don't overwrite your current working d2t folder

git clone https://github.com/tonikhd/AutonomousRC.git

It is important to note that our code replaces the current manage.py file in the directory: models are run, trained, and driven using WorkingSpeedTraining.py instead, because we train with speed as an input rather than throttle value.


Magnet Modification

Our original plan was to connect a rotary encoder directly to the drivetrain to measure the distance traveled by the car. With the help of Professor Silberman and some friends, we arrived at a much simpler design: three magnets evenly spaced on the gear, with a Hall effect sensor that registers a tick each time a magnet passes it. The total distance traveled over a given period is then the total number of ticks multiplied by the distance traveled per tick, which we calibrated manually. This design may not be as accurate as a rotary encoder, but it was sufficient for our application and saved space on the chassis, as the original encoder was significantly larger.
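The tick-to-distance math above is simple enough to sketch. This is a minimal illustration, not our actual encoder code; `DIST_PER_TICK_M` stands in for the manually calibrated distance-per-tick constant, and the value shown is an assumed placeholder.

```python
# Hypothetical sketch of the tick-to-distance computation described above.
# DIST_PER_TICK_M is an assumed placeholder for the real calibration
# constant, which was measured manually on the car.

MAGNETS_ON_GEAR = 3          # three magnets evenly spaced on the drive gear
DIST_PER_TICK_M = 0.02       # meters traveled per tick (assumed value)

def distance_travelled(tick_count: int) -> float:
    """Total distance = number of ticks * calibrated distance per tick."""
    return tick_count * DIST_PER_TICK_M
```

With three magnets per revolution, the resolution is coarser than a rotary encoder's, but each tick maps to a fixed fraction of a wheel-gear revolution, so the multiplication above is all the odometry needed.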

The code for the encoder that we used in our parallel parking script is the file labeled t6_encoder.py[1] in our team's Github repository. It removes a running thread from Tawn's version and instead returns speed and distance when called on from a function, rather than constantly polling for speed and distance.

The code for the encoder used in the speed based training uses Tawn's encoder.py[2] file.

Parallel Parking

We borrowed our inspiration from 2019 Winter Team 6 and Team 7. Unlike those teams, whose parallel parking was based on time at a constant throttle, our algorithm uses distance readings from the encoder, so the car can find a suitable parking spot no matter what speed it is driving at. The parallel parking script polls for edges using the time-of-flight sensor and measures the length of a spot using the encoder. The minimum spot length was determined by the length of the car and its turning radius. Once the car detects a spot of adequate length, it performs the parallel parking maneuver.


Essentially, when the time-of-flight sensor detects a sudden increase above a certain threshold in its reading, it has found the first edge of the parking spot, and the distance traveled so far is recorded from the encoder. When it later detects a sudden decrease of similar magnitude, it has found the second edge, and a new distance value is recorded. The size of the parking spot is the difference between the two encoder readings. If the spot is smaller than about 1.5 times the length of the car, it is too small to park in, and "the parking spot is too small" is printed to the terminal. Otherwise, the car brakes and starts parking. The distance traveled during braking is also recorded to maintain accuracy.
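The edge-detection logic above can be sketched as follows. This is an illustrative sketch, not our parallelpark.py: the threshold, car length, and the paired-sample interface are all assumed stand-ins for the real sensor polling loop.

```python
# Illustrative sketch of the edge-detection logic described above.
# EDGE_THRESHOLD_MM, CAR_LENGTH_M, and the list-based interface are
# assumptions; the real script polls the ToF sensor and encoder live.

EDGE_THRESHOLD_MM = 200      # jump in ToF reading that counts as an edge
CAR_LENGTH_M = 0.4           # assumed car length
MIN_SPOT_FACTOR = 1.5        # spot must be ~1.5x the car length

def spot_is_big_enough(spot_length_m: float) -> bool:
    return spot_length_m >= MIN_SPOT_FACTOR * CAR_LENGTH_M

def measure_spot(tof_readings, distances):
    """Walk paired (ToF reading, odometer distance) samples.

    A sudden increase in the ToF reading marks the first edge of a gap,
    a sudden decrease marks the second; the spot length is the odometer
    difference between the two edges. Returns None if no full gap is seen.
    """
    first_edge = None
    prev = tof_readings[0]
    for tof, dist in zip(tof_readings[1:], distances[1:]):
        if first_edge is None and tof - prev > EDGE_THRESHOLD_MM:
            first_edge = dist                 # opening edge: record odometer
        elif first_edge is not None and prev - tof > EDGE_THRESHOLD_MM:
            return dist - first_edge          # closing edge: gap length
        prev = tof
    return None
```

Because both edges are located by odometer distance rather than elapsed time, the measured gap length is independent of driving speed, which is what lets the script work at any throttle.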

The code for the parallel parking script is labeled parallelpark.py[3] in our Github repo.

Speed Based Training

The original donkey framework trains models on steering and throttle: it extracts the steering and throttle values recorded with every image taken during training and builds a model from them. Our training algorithm instead extracts steering and speed values (recorded through the Hall effect sensor) and builds the model from those. To apply this to the physical car, we also had to incorporate a PID controller into the donkey framework; otherwise the car would not know what throttle to apply to reach the desired speed. The code for speed training and the PID controller is in WorkingSpeedTraining.py[4] and PID_speed_controller.py[5], respectively.
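A speed-to-throttle PID controller of the kind described above can be sketched like this. This is a generic sketch, not our PID_speed_controller.py; the class name, gains, and `run` interface are illustrative assumptions.

```python
# Hedged sketch of a PID throttle controller like the one added to the
# donkey framework. The gains and the SpeedPID/run interface are
# illustrative assumptions, not the team's actual code.

class SpeedPID:
    def __init__(self, kp=0.5, ki=0.1, kd=0.05, throttle_limits=(0.0, 1.0)):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lo, self.hi = throttle_limits
        self.integral = 0.0
        self.prev_error = 0.0

    def run(self, target_speed, measured_speed, dt=0.05):
        """Return a throttle value that drives measured speed toward target."""
        error = target_speed - measured_speed
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        throttle = (self.kp * error
                    + self.ki * self.integral
                    + self.kd * derivative)
        return max(self.lo, min(self.hi, throttle))   # clamp to valid range
```

At each loop iteration the model outputs a target speed, the Hall effect sensor provides the measured speed, and the controller converts the difference into a throttle command, closing the loop that raw throttle-based training never needed.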



  • Installing the libraries for the VL53L1X time-of-flight sensor with pip gave us an error about "site packages" not being found. The issue turned out to involve the virtual environment and pip, and the solution was to install the libraries manually. Clone the GitHub[6] library for the time-of-flight sensor onto the Pi with the following command:

    git clone https://github.com/pimoroni/vl53l1x-python

    Then run the command:

    sudo python vl53l1x-python/setup.py install

    This installs the required time-of-flight site packages to the appropriate place.

  • The indoor track had poor lighting, so to combat this we installed a series of LEDs under the bumper using popsicle sticks.

  • Dramatic frontview.jpeg

  • We accidentally updated the Pi's system when installing the time-of-flight sensor library, and the controller stopped working. We had to re-flash the system, reinstall donkey, and then reinstall TensorFlow, which took a lot of time and was incredibly slow.


Some potential improvements include:

  • Determining the minimum parking space based on vehicle dimensions
  • Autonomously adjusting the position of the vehicle using multiple ToF sensors before parallel parking
    • Additionally, using these extra time-of-flight sensors to keep the car parallel to the side of the "street" and at the optimal distance for parallel parking
  • Implementing the PS3 controller to drive the car in our parallel parking application
  • Running the parallel parking script alongside one of our autonomous driving models