2019SpringTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Revision as of 20:15, 10 June 2019

Dramatic trackangle.jpg


The goal of our project is to use a Time of Flight (ToF) sensor to detect possible parking spaces and have the car begin parallel parking whenever a space satisfies the minimum length requirement. We drew our inspiration from the 2019 Winter Team 6 and Team 7. However, unlike previous teams, we measure the length of the parking space based on speed instead of time. In addition, we integrated this method into the donkey framework so that we can train our model based on velocity instead of throttle.
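The idea of measuring a parking space by speed rather than time can be sketched as follows. This is an illustrative example, not our actual code: the function names and sample values are made up. Integrating sampled wheel speeds over the sample interval stays accurate even when the car's speed varies, whereas a time-based estimate assumes a constant speed.

```python
def gap_length_from_speed(speeds_m_s, dt_s):
    """Estimate distance traveled while passing a gap by
    integrating sampled speeds (simple Riemann sum)."""
    return sum(v * dt_s for v in speeds_m_s)

def gap_length_from_time(elapsed_s, assumed_speed_m_s):
    """Time-based estimate: assumes a constant speed, so it is
    wrong whenever the car speeds up or slows down."""
    return elapsed_s * assumed_speed_m_s

# Car slows from 1.0 m/s to 0.6 m/s while passing the gap,
# sampled every 0.1 s over 0.5 s.
speeds = [1.0, 0.9, 0.8, 0.7, 0.6]
print(gap_length_from_speed(speeds, 0.1))  # ~0.4 m (actual gap)
print(gap_length_from_time(0.5, 1.0))      # 0.5 m (overestimates)
```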

Team Members

• Andrew Sanchez

• Connor Roberts

• Genggeng Zhou

• Ian Delaney

• Ivan Ferrier

Group picture.jpg

Vehicle Design

The car was already fully assembled when we received it. To start, we had to design a baseplate to hold all the electronic components and a camera mount to hold the camera at a fixed angle.

Baseplate6.png Camera mount prototype.png

Our original camera mount was quite flexible: the height and camera angle could be adjusted to the desired position by adjusting the joints. However, we soon found that this design was not stable, and the camera position could easily shift during driving, which is not suitable for training. We switched to a more stable design using measured parameters.

File:Camera mount.jpg

To increase downforce, we added a GT wing!

GT wing.png

Wiring Diagram

Wiring diagram.jpg



Indoor Training

Outdoor Training

When we started training on the outdoor track, we noticed that the camera could not fully capture the track because the outdoor track is much wider than the indoor one, so we 3D printed a small block to add to the mount. With experience from indoor training, outdoor training was relatively easy. We were lucky to train on an overcast day, which provided consistent light throughout the day. While training the model, we tried our best to keep the car at the center of the track and apply a constant throttle. Eventually, we trained our first model on 25k data points, which is about 25 laps, and were able to achieve three fully autonomous laps with it.

Robofest Competition

During the Robofest competition, we noticed two things we had been doing wrong the whole time. Throughout our training, we believed we should always keep the car at the center of the track so that our model would do exactly the same. It turns out, however, that a "bad driver" can sometimes build a better model, because the model needs to learn what to do when something goes wrong. In other words, with a "good driver", the model will not necessarily know what to do when it is heading off the track, because it may never have encountered that situation during training. With a "bad driver", the model learns to make a left or right correction when it is drifting off the track. The second mistake was that we had been scaling down the throttle value when running the model, out of worry that the car would not make the really sharp turns. During the competition, with the throttle scaled to about 0.7, the car moved very slowly and barely completed three autonomous laps. When we scaled the throttle up to 1.1, the car completed more than five autonomous laps without any problem. We concluded that when running the model, the throttle value needs to be consistent with the value used for training.



To get a quick start on a parallel parking or speed training project, clone our GitHub repository outside the d2t directory of your project with the following commands:

cd ~/ # ensures you don't overwrite your current working d2t folder

git clone https://github.com/tonikhd/AutonomousRC.git

It is important to note that our code breaks the current manage.py file in the directory and instead uses the file WorkingSpeedTraining.py to run models, train, and drive the car, because we train with speed as an input rather than throttle value.


Magnet Modification

Our original plan was to connect a rotary encoder directly to the drivetrain to measure the distance traveled by the car. With the help of Jack and his friends, however, we came up with a much simpler design. In the final design, three magnets are evenly spaced on the gear, and a magnetometer ticks once whenever a magnet passes it. The total distance traveled over a period can then be determined by multiplying the total number of ticks by the distance the car travels per tick, which we calibrated manually. This design may not be as accurate as a rotary encoder, but it is sufficient for our application and saved us a lot of space, because the rotary encoder is huge.
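The tick-to-distance arithmetic above can be sketched as follows. This is an illustrative stand-in, not our actual encoder code: the class name, method names, and calibration constant are assumptions. Distance per tick is calibrated by pushing the car a known distance and counting ticks.

```python
import time

class MagnetEncoder:
    """Counts ticks from magnets passing a fixed sensor and
    converts them to distance and speed on demand."""

    def __init__(self, meters_per_tick):
        self.meters_per_tick = meters_per_tick
        self.ticks = 0
        self.last_tick_time = None
        self.last_interval = None  # seconds between the last two ticks

    def on_tick(self, now=None):
        """Called once each time a magnet passes the sensor."""
        now = time.monotonic() if now is None else now
        if self.last_tick_time is not None:
            self.last_interval = now - self.last_tick_time
        self.last_tick_time = now
        self.ticks += 1

    def distance(self):
        """Total distance traveled, in meters."""
        return self.ticks * self.meters_per_tick

    def speed(self):
        """Speed in m/s from the most recent tick interval,
        computed when called rather than in a polling thread."""
        if not self.last_interval:
            return 0.0
        return self.meters_per_tick / self.last_interval

# Example: calibrated to 0.02 m per tick, ticks arriving every 0.1 s.
enc = MagnetEncoder(meters_per_tick=0.02)
for t in (0.0, 0.1, 0.2, 0.3):
    enc.on_tick(now=t)
print(enc.distance())  # 0.08 m after 4 ticks
print(enc.speed())     # ~0.2 m/s
```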

The code for the encoder that we use in our parallel parking script is the file labeled t6_encoder.py[1] in our team's GitHub repository. It removes the running thread from Tawn's version and instead returns speed and distance when called from a function, rather than constantly polling for them.

The code for the encoder used in the speed-based training uses Tawn's encoder.py[2] file.

Parallel Parking

The parallel parking script polls for edges using the ToF sensor and measures the distance between spots using the encoder. The minimum spot length was determined by the length of the car and its turning radius. Once the car detects a spot of adequate length, it performs the parallel park.

The code for the parallel parking script is labeled parallelpark.py[3] in our Github repo.
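The edge-detection and gap-measurement logic described above can be sketched as follows. This is a simplified illustration, not our parallelpark.py: the function name, the threshold, and the (side distance, odometer) sample format are assumptions. A jump in the side-facing ToF reading marks the leading edge of a gap, a drop back marks the trailing edge, and the encoder's odometer gives the gap length.

```python
def find_parking_spot(samples, edge_threshold_mm, min_spot_len_m):
    """Scan (side_distance_mm, odometer_m) samples taken while
    driving past parked obstacles. Returns the gap length in
    meters if a long-enough spot is found, else None."""
    gap_start = None
    for side_mm, odo_m in samples:
        if gap_start is None and side_mm > edge_threshold_mm:
            gap_start = odo_m            # leading edge of the gap
        elif gap_start is not None and side_mm <= edge_threshold_mm:
            gap_len = odo_m - gap_start  # trailing edge: measure gap
            if gap_len >= min_spot_len_m:
                return gap_len           # spot is long enough: park
            gap_start = None             # too short, keep scanning
    return None

# Side readings while driving past a car, a ~0.8 m gap, then a car.
samples = [(150, 0.0), (160, 0.2), (900, 0.4), (950, 0.8),
           (920, 1.0), (170, 1.2), (155, 1.4)]
print(find_parking_spot(samples, edge_threshold_mm=500,
                        min_spot_len_m=0.6))  # ~0.8
```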

Speed Based Training


  • Installing libraries for the ToF sensor (model VL53L1X) with pip gave us an error about "site packages" not being found. The issue turned out to involve the virtual environment and pip, and the solution was to install the libraries manually. To do so, clone the library's GitHub repository[4] onto the Pi using the following command:

    git clone https://github.com/pimoroni/vl53l1x-python

    Then run the command:

    sudo python vl53l1x-python/setup.py install

    This will install the ToF sensor's required site packages to the appropriate place.

    • The indoor track had poor lighting, so to combat this we installed a series of LEDs under the bumper, mounted on popsicle sticks.

    Dramatic frontview.jpeg


Improvements

Some potential improvements include:

• Determine the minimum parking space length from the vehicle's dimensions.

• Autonomously adjust the position of the vehicle using multiple ToF sensors before parallel parking. Right now we have to manually move the car to the starting position for a good parallel park. With multiple ToF sensors, the car could move itself to the desired position (an optimized gap distance between the car and the obstacle) and make sure it is parallel to the side.