2019SpringTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Dramatic trackangle.jpg

Introduction

The initial goal of our project was to use a time of flight sensor to detect possible parking spaces and begin a parallel parking maneuver whenever a space satisfies the minimum length requirement. We drew our inspiration from 2019 Winter Team 6 and Team 7. However, unlike previous years, we measured the length of the parking space purely by distance, using an encoder, instead of assuming a constant velocity and timing the pass. Additionally, we integrated this method into the donkey framework so that we could train our model on velocity instead of throttle.

Team Members

• Andrew Sanchez (ECE)

• Connor Roberts (ECE)

• Genggeng Zhou (MAE)

• Ian Delaney (MAE)

• Ivan Ferrier (ECE)

Group picture.jpg

Left to right: Andrew, Ivan, Ian, Genggeng, Connor

Vehicle Design

The car was already fully assembled when we received it. To start with, we had to design a baseplate to hold all electronic components and a camera mount to hold the camera at a desired angle.

Baseplate6.png Camera mount prototype.png

Our original camera mount was quite flexible: the height and camera angle could be adjusted to the desired position by adjusting the joints. However, we soon found that this design was not stable under vibrations from the car, and the camera position could easily shift during driving, which was not suitable for training. We switched to a more stable design with fixed, measured dimensions.

New camera mount.jpg

To increase downforce, we added a GT wing!

GT wing.png

Wiring Diagram

The wiring diagram includes the basic setup, as well as the ToF sensor and the Hall effect sensor, which are used only in the team project to measure distance.

Wiring diagram.jpg

Objectives

Indoor Training

When we were training for the indoor track, we decided the best strategy was to drive along the center of the track and train the model at a constant throttle. At the time, we had a problem with the steering calibration and the car kept drifting to the left, so we had to train the model on a large amount of data to complete the indoor track.


Outdoor Training

When we started training for the outdoor track, we noticed that the camera could not effectively capture the track, as it was much wider than the indoor track. We solved this by 3-D printing an extension mount that allowed easy switching between our indoor and outdoor camera setups. With experience from indoor training, outdoor training was relatively easy. We were lucky to train on an overcast day, which provided consistent light throughout the day. While training the model, we tried our best to keep the car at the center of the track and apply a constant throttle. Eventually, we trained our first model on 25k data points, approximately 25 laps, and were able to achieve 3 fully autonomous laps with it.

Robofest Competition

During the Robofest competition, we noticed two things we had been doing wrong the whole time. Throughout our training, we believed that we should always keep the car at the center of the track so that our model would do exactly the same. However, it turns out that a bad driver can sometimes build a better model, because the model needs to know what to do when something goes wrong. In other words, with a "good driver" the model will not necessarily know what to do when it is going off the track, because it may never have encountered such a situation during training. With a "bad driver", the model will learn that it should make a left or right turn when it is going off the track. The second thing was that we had been scaling down the throttle value when running the model, because we were worried that the car wouldn't make the really sharp turns. During the competition, when we scaled the throttle value to about 0.7, the car moved really slowly and barely completed three autonomous laps. When we scaled the throttle value up to 1.1, the car was able to complete more than 5 autonomous laps without any problem. We concluded that when running the model, the throttle value needs to be consistent with the value used during training.

Project

Github

To get a quick start on a parallel parking or speed-training project, clone our Github outside the d2t directory of your project with the following commands:

cd ~/ # ensures you don't overwrite your current working d2t folder

git clone https://github.com/tonikhd/AutonomousRC.git

It is important to note that our code breaks the current manage.py file in the directory; instead, it uses the file WorkingSpeedTraining.py to run models, train, and drive the car, because we train with speed as an input rather than a throttle value.

Encoder

Magnetometer
Magnet Modification

Our original plan was to connect a rotary encoder directly to the drive train to measure the distance traveled by the car. But with the help of Professor Silberman and some friends, we came up with a much simpler design. In our final design, three magnets were evenly spaced on the gear, and a magnetometer ticked once whenever a magnet passed it. The total distance traveled during a given period can then be determined by multiplying the total number of ticks by the distance the car travels per tick, which was calibrated manually. This design may not be as accurate as a rotary encoder, but it was sufficient for our application and saved space on the chassis, as the original encoder was significantly larger.
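In code, the conversion is a single multiplication. The sketch below is just an illustration; the per-tick constant is a made-up placeholder for the value we calibrated manually:

    # Illustration only: METERS_PER_TICK is a hypothetical placeholder for the
    # manually calibrated distance the car travels per magnet pass.
    METERS_PER_TICK = 0.02

    def distance_traveled(ticks):
        """Total distance (m) from the number of magnetometer ticks."""
        return ticks * METERS_PER_TICK

    def average_speed(ticks, elapsed_seconds):
        """Average speed (m/s) over the interval in which the ticks were counted."""
        return distance_traveled(ticks) / elapsed_seconds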

The code for the encoder that we used in our parallel parking script is the file labeled t6_encoder.py[1] in our team's Github repository. It removes the running thread from Tawn's version and instead returns speed and distance when called from a function, rather than constantly polling for them.

The code for the encoder used in the speed based training uses Tawn's encoder.py[2] file.

Parallel Parking

We borrowed our inspiration from 2019 Winter Team 6 and Team 7, but unlike previous teams, which performed parallel parking based on time at a certain constant throttle, our algorithm uses distance readings from the encoder. The car is now able to find a suitable parking spot no matter what speed it is driving at. The parallel parking script polls for edges using the time of flight sensor and measures the distance between spots using the encoder. The parameter for spot length was determined by the length of the car and the car's turning radius. Once the car detects a spot of adequate length, it performs the parallel parking maneuver; here is a schematic.

Parallelparking.png

Essentially, when the time of flight sensor detects a sudden increase above a certain value in its reading, it has found the first edge of the parking spot, and the distance traveled so far is recorded from the encoder. When it later detects a sudden decrease of a certain value in its reading, it has found the second edge, and a new distance value is recorded. The size of the parking spot is calculated by subtracting the old distance reading from the new one. If the spot is smaller than about 1.5 times the length of the car, it is too small to park in, and "the parking spot is too small" is displayed in the terminal. If the spot is larger than that, the car brakes and starts parking. The distance traveled during braking is also recorded to ensure accuracy.
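Here is a minimal sketch of that edge-detection logic. It is not the actual parallelpark.py; the tof and encoder interfaces, the edge threshold, and the car length are hypothetical stand-ins:

    # Sketch of the spot-detection loop (hypothetical interfaces and numbers).
    CAR_LENGTH = 0.5            # meters (placeholder)
    MIN_SPOT   = 1.5 * CAR_LENGTH
    EDGE_JUMP  = 300            # ToF reading change that counts as an edge (placeholder)

    def find_spot(tof, encoder):
        prev = tof.get_distance()
        spot_start = None
        while True:
            curr = tof.get_distance()
            if spot_start is None and curr - prev > EDGE_JUMP:
                spot_start = encoder.distance()              # first edge: spot opens
            elif spot_start is not None and prev - curr > EDGE_JUMP:
                spot_len = encoder.distance() - spot_start   # second edge: spot closes
                if spot_len < MIN_SPOT:
                    print("the parking spot is too small")
                    spot_start = None                        # keep looking
                else:
                    return spot_len                          # big enough: brake and park
            prev = curr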

The code for the parallel parking script is labeled parallelpark.py (https://github.com/tonikhd/AutonomousRC/blob/master/parallelpark.py) in our Github repo.

Speed Based Training

The original Donkey framework uses the relationship between the joystick position and the image at any given point along the track to create a model. The framework essentially uses an image classification algorithm to find a relationship between the image and the corresponding position of the joystick. This is broken into two separate parts: the angle is handled in one part and the throttle in another. For the throttle, the Donkey framework takes the joystick position, from -1 to +1, and translates it to a pwm value using the config file. These pwm values are saved and used later by the Donkey framework. After a model has been created and the car is put on the track, the model looks at the image it sees and tries to figure out the pwm value that corresponds to that image. This method works quite well, but it has a fundamental problem: pwm does not always map to the same speed. Essentially, pwm gives the car a percentage of the battery power. The higher the pwm, the faster the car will go, but the same pwm value will not always result in the same speed. If a battery is full (let's say 12V) and we give the car a pwm of 780, it might get the car going about 1 meter per second. Later, if the battery is almost dead (let's say 9V), that same pwm value of 780 will get the car moving closer to .75 meters per second. To put that in perspective, that is the difference between going down the highway at 60mph and 45mph. Clearly that is a problem, as ideally the car would travel at the same speed regardless of the battery charge.

In order to fix this problem, we wanted to train the model with speed rather than joystick position or pwm. To get the speed of the car, we used the Hall effect sensor that we had attached to the driveshaft. The Donkey part we used for this section had already been written before we started our project. Essentially, it keeps track of the number of ticks in a given amount of time and converts that into a speed. Here is how manage.py was edited to keep track of the speed:

KEEPING TRACK OF SPEED.png

(if debug is set to True, the speed and distance will be printed to the terminal while driving)
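As a rough, self-contained sketch of what that part does (the real part, Tawn's encoder.py, was written before our project; the names here are illustrative):

    import time

    # Illustrative sketch of the tick-to-speed part; not Tawn's actual encoder.py.
    class SpeedTracker:
        def __init__(self, meters_per_tick, debug=False):
            self.meters_per_tick = meters_per_tick
            self.debug = debug
            self.ticks = 0
            self.distance = 0.0
            self.meters_second = 0.0
            self.last_time = time.time()

        def tick(self):
            # Called whenever the Hall effect sensor fires.
            self.ticks += 1

        def run(self):
            # Called once per drive-loop iteration by the Donkey vehicle.
            now = time.time()
            dt = now - self.last_time
            new_meters = self.ticks * self.meters_per_tick
            self.meters_second = new_meters / dt if dt > 0 else 0.0
            self.distance += new_meters
            self.ticks = 0
            self.last_time = now
            if self.debug:
                print("speed: %.2f m/s, distance: %.2f m" % (self.meters_second, self.distance))
            return self.meters_second, self.distance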

Once we had the speed of the car, we had to change the system so that it could save this speed while driving. To do this, we edited manage.py so that the speed would be recorded for later use by train.py. Although it turned out to be really simple, it actually took a while to figure out: to save any variable, you just add its name to the inputs of the tub.
INPUTS TUB.png
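A sketch of what that change looks like. The exact tub-creation call varies by Donkey version; the point is the extra meters_second entry in inputs and its matching entry in types:

    # Sketch only; tub-creation details differ across Donkey versions.
    inputs = ['cam/image_array', 'user/angle', 'user/throttle', 'user/mode',
              'meters_second']      # <-- added: the speed from the encoder part
    types  = ['image_array', 'float', 'float', 'str',
              'float']              # <-- matching type for the new input

    tub = TubHandler(path=cfg.DATA_PATH).new_tub_writer(inputs=inputs, types=types)
    V.add(tub, inputs=inputs, run_condition='recording')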

Even though this saves the speed to the tub, it will not automatically be incorporated into the model when training. To get the system to train with the speed, train.py has to be changed so that the model looks for meters_second instead of user/throttle.

TRAIN WITH USER THROTTLE.png
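A representative sketch of the swap (not an exact copy of the stock file):

    # Wherever train.py unpacks each record for training targets:
    angle = json_data['user/angle']
    # throttle = json_data['user/throttle']   # stock behavior: joystick/pwm value
    throttle = json_data['meters_second']     # ours: speed from the Hall effect sensor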

To keep things simple (and avoid changing everything), we left the variable named throttle even though it is actually speed now. At this point, we have a system that records the speed and, after training, a model that outputs the desired speed (still called throttle, but now ranging from around 0 to 4 meters per second rather than 376pwm to 395pwm).

Even though it may seem like we have accomplished a lot, if we stopped here the model would be useless. For the car to use this alternatively acquired throttle, it has to figure out which pwm will get the car moving at the set speed. To accomplish this, we incorporated a PID controller that adjusts the pwm until the car is moving at the desired speed. The first step in incorporating the PID controller with the new model was to make it possible to inject our own code whenever the car autonomously controls the speed. Here is a diagram of how the system works with the PID controller:

Throttle Controller.jpg
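Conceptually, the controller maps a target speed to a pwm estimate and corrects it proportionally. The sketch below is our illustration of that structure, using the scalar, offset, and P values described later on this page; the exact control law in actuator.py may differ:

    # Conceptual sketch, not the actual actuator.py.
    def speed_to_pwm(target_speed, measured_speed, scalar, offset, P):
        feedforward = scalar * target_speed + offset        # rough speed -> pwm map
        correction  = P * (target_speed - measured_speed)   # proportional correction
        return int(feedforward + correction)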
We started by changing DriveMode to pass another variable into the output, so that we could work with it inside actuator.py.

DRIVE MODE CHANGE.png

(The hashtags mark everywhere we changed the code.) We now have a boolean value that we can use within actuator.py to tell the system when the model is being run autonomously, and we can use it to interject our PID controller only when the car is fully autonomous.
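For reference, here is a sketch of that DriveMode change, modeled on the stock donkey2 part; the extra boolean output is the part we added, and the stock code may differ slightly in your version:

    # Sketch of the modified DriveMode part (based on the stock donkey2 version).
    class DriveMode:
        def run(self, mode, user_angle, user_throttle, pilot_angle, pilot_throttle):
            if mode == 'user':
                return user_angle, user_throttle, False
            elif mode == 'local_angle':
                return pilot_angle, user_throttle, False
            else:
                # Fully autonomous: the PID controller should take over the throttle.
                return pilot_angle, pilot_throttle, True

    # The extra output gets wired through the vehicle to actuator.py:
    V.add(DriveMode(),
          inputs=['user/mode', 'user/angle', 'user/throttle',
                  'pilot/angle', 'pilot/throttle'],
          outputs=['angle', 'throttle', 'PIDBoolean'])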

PWMThrottle.png PWMThrottle IF ELSE.png

As you can see, we changed the run function to use an if/else block on the PIDBoolean value we incorporated, so that our PID controller only outputs the pwm when the car is driving autonomously (when PIDBoolean is True). From here, we made it possible to dynamically edit the scalar, offset, and P values by adding the highlighted code below.

DYNAMIC LOADING.png

It is important to have the try/except block if scalar_offset_P.txt is going to be edited and saved while manage.py is running; this prevents an error from being thrown when scalar_offset_P.txt is saved at the same moment manage.py tries to read from it.
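A minimal sketch of that pattern, assuming scalar_offset_P.txt holds three numbers, one per line:

    # Re-read the gains every loop; fall back to the current values if the file
    # is mid-save (partially written) when we try to read it.
    def load_gains(path, current):
        try:
            with open(path) as f:
                scalar, offset, P = (float(line) for line in f)
            return scalar, offset, P
        except (IOError, ValueError):
            return current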

Altogether, our system keeps track of the speed while driving and saves it to the tubs. We can then train a model from these tubs that outputs the speed of the car, and our PID controller takes that output and runs the car at the set speed. As an added bonus, the scalar, offset, and P values can be edited dynamically while manage.py is running.

Here are the files mentioned on this page (note that our manage.py is the file WorkingSpeedTraining.py):

manage.py[4]

train.py[5]

actuator.py[6]

scalar_offset_P.txt[7]

Obstacles

  • Installing libraries for the time of flight sensor (model VL53L1X) with pip gave us an error about "site packages" not being found. It turned out the issue was with the virtual environment and pip, and the solution was to install the libraries manually. To do so, clone the Github[8] library for the time of flight sensor onto the Pi using the following command.

    git clone https://github.com/pimoroni/vl53l1x-python

    Then run the command:

    sudo python vl53l1x-python/setup.py install

    This will install the required time of flight site packages to the appropriate place.

  • The indoor track had poor lighting, so to combat this we installed a series of LEDs under the bumper using popsicle sticks.

  • Dramatic frontview.jpeg

  • We accidentally updated the Pi system when installing the time of flight sensor library, and the controller ended up not working. We had to re-flash the system, reinstall donkey, and then reinstall Tensorflow. Redoing all of this took a lot of time and was incredibly slow.


Improvements

Some potential improvements include:

  • Determining the minimum parking space based on vehicle dimensions
  • Autonomously adjusting the position of the vehicle using multiple ToF sensors before parallel parking
    • Additionally, using these extra time of flight sensors to keep the car parallel to the side of the "street" and at the optimal distance for parallel parking
  • Implementing the PS3 controller to drive the car in our parallel parking application
  • Running the parallel parking script alongside one of our autonomous driving models
  • Parallel parking with a trailer