Project encoders
Abstract
In this class, we built an autonomous car during the first six weeks and worked on a project on encoders for odometry during the remaining four weeks. The car was able to drive autonomously on the track at Geisel for three laps. Photosensors were used to count how many times the wheels turned so that we could compute the distance traveled by the car, which could be used to improve its performance in future work.
Team Members
- Carlos Sandoval
- David Bielsil
- Lumei Xue
- Jason Joshua
Introduction
Autonomous cars have become a hot topic with the rapid development of the underlying technologies. Although they are not yet in widespread use, many engineers are working to make them more reliable and more efficient. Because of the importance of this topic, we built a 1/10-scale autonomous car this quarter that is able to drive on a simulated city track. In addition, encoders are useful for measuring the distance traveled by the car, which could be used to improve its performance.
Procedures
Part 1: Getting Donkey Car to Work
We started by cutting a platform and printing mounts for it. To save time, we borrowed the design from a friend and made some adjustments. Referring to the photos posted on Basecamp, we assembled the cables to make the Emergency Machine Off (EMO) switch work.
After that, we assembled our Donkey Car by closely following the detailed instructions (https://docs.google.com/document/d/1mW28cSvn9k_IMt9VH5ROb30LsvgICu1G5Mnx4OHdMGc/edit). Several things should be noted. Connect the laptop and the Raspberry Pi (RPI) to the same network so that the laptop can access the RPI. Also, make sure to change the hostname of the RPI to avoid operating on another group's RPI. Moreover, the calibration of the throttle and steering should be as precise as possible; otherwise the car may keep pulling to the right or to the left. After the PS3 joystick was successfully paired with our RPI, we were able to drive our car.
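In practice, the calibration step comes down to recording pulse-width modulation (PWM) values in the car's Python configuration file. The snippet below is only an illustrative sketch: the variable names follow the donkeycar convention, but the numeric values are placeholders, since every car needs its own values from calibration.

# Excerpt from config.py -- illustrative values only, found during calibration
STEERING_CHANNEL = 1
STEERING_LEFT_PWM = 460      # pulse for full left lock
STEERING_RIGHT_PWM = 290     # pulse for full right lock
THROTTLE_CHANNEL = 0
THROTTLE_FORWARD_PWM = 400   # pulse for full forward
THROTTLE_STOPPED_PWM = 370   # neutral; tune until the wheels stop completely
THROTTLE_REVERSE_PWM = 330   # pulse for full reverse

If the neutral throttle or the two steering endpoints are off, the car creeps or pulls to one side, which is exactly the drift described above.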
While we were working on the code, we also designed and printed our camera mount. The first version was very rough due to limited time, as shown in Figure 1; we clearly forgot to take the screws on the camera and the cable into account. The second version of the camera mount consists of two parts, so we could easily change the angle and find the best one (Figure 2). A piece of Velcro and a fishing line attach the camera to the mount, because screws may damage the camera. This is discussed further in the section "Problems encountered".
Figure 1. Camera Mount version 1
Figure 2. Camera Mount version 2
Part 2: Collecting Data and Training
The second step was to make the car autonomous by collecting data at the track and training models on the GPU cluster. It took us a long time to figure out how to connect to the field router. We found that the model worked best when we collected data for 40 to 60 laps at different times of day. Our first model did not work well because we drove the car too fast and only made sure that the car did not go off the track while collecting data. By following the centerline of the track most of the time at a lower speed, we obtained a better model than the previous ones. Note that the camera angle should stay the same the entire time.
After we collected enough data, we followed the instructions online (https://docs.google.com/document/d/e/2PACX-1vTe9sehl7izNJJNypsDNABD4wg-F-AClAi0cYV3pIIRGpCknD7SEWQPEGqy_5DBRmFQtkulLkHkLxEm/pub) to train the Donkey Car via the campus GPU cluster. The model file was initially in the wrong folder in the cluster account, so we moved it to the "models" folder and also checked the location of the file on the RPI. After this was done, we were able to test the model and decide whether or not to use it.
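As a concrete illustration of that workflow (the tub name below is hypothetical), training and testing with the donkeycar tools of that era came down to two invocations: one run on the cluster, one on the RPI after copying the model into the "models" folder.

# On the GPU cluster: train a model from the collected tub data
python manage.py train --tub data/tub_1_18-03-01 --model models/mypilot
# On the RPI, with the model copied into "models": drive with it
python manage.py drive --model models/mypilot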
Part 3: Project (encoders for odometry)
Our group worked on encoders, and our first step was online research. There were three choices: using a belt around the axle to measure the number of turns, using photosensors in the wheel well to track distance, and mounting an inertial measurement unit to sense the turning of the wheel. Comparing the pros and cons of the three options, we decided to use photosensors because of their low price and good resolution. We found a blogger who used photosensors and decided to closely follow his approach (http://www.blargh.co/2013/12/rc-car-quadrature-wheel-encoders.html). We therefore ordered the components he used, including three-pin photosensors, triggers, and two kinds of printed circuit boards (PCBs). We ordered the PCBs from https://www.oshpark.com/profiles/jleichty (Figure 3).
Figure 3. Two kinds of PCBs we bought.
It took more than a week for the components to arrive, and we also came to appreciate the difficulty of soldering. While we were waiting, Mauricio gave us several photosensors, a motor with an encoder, and an Arduino so that we could work on the code. First, we followed the schematic online to build a circuit for testing a photosensor, as shown in Figures 4 and 5.
Figure 4. Schematic for testing the photosensor
Figure 5. Circuit we built for testing
The voltage of the photosensor changed according to the color it detected: it rose when the sensor saw black and dropped when it saw white (a software threshold turns this analog signal into a clean binary one; see the sketch after Figure 7). Second, we tested the motor with the encoder on it (Figures 6 and 7).
Figure 6. Circuit to test the encoder
Figure 7. The motor with encoder on it
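Since the photosensor output is an analog voltage, turning it into a clean black/white signal requires thresholding. The Python sketch below illustrates the idea with hysteresis, described here only as a general technique: the two threshold voltages are assumptions, not values measured on our circuit.

def to_binary(voltage, last_state, high=2.5, low=1.5):
    """Convert an analog photosensor reading into a black/white state
    with hysteresis: report black only above `high` volts and white
    only below `low` volts (both thresholds are assumed values)."""
    if voltage > high:
        return 1  # black stripe under the sensor
    if voltage < low:
        return 0  # white stripe under the sensor
    return last_state  # in the dead band: keep the previous state

Using two thresholds instead of one prevents electrical noise near a single cutoff from generating spurious ticks.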
As we turned the black disk on the motor, we got square waves which could be used to count how many times the disk turned (Figure 8).
Figure 8. Square waves we got
Figure 9. Arduino
Third, we connected the motor to the Arduino and revised code that was available online (https://github.com/wroscoe/donkey/blob/master/donkeycar/parts/encoder.py); a simplified sketch of the RPI-side idea appears after Figure 11. Fourth, we printed black and white stripes as the blogger did and attached the alternating black-and-white tracks to the inside surface of the wheel (Figure 10).
Figure 10. Alternating black/white tracks
Figure 11. A breadboard
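We do not reproduce our revised code here, but the structure of the donkeycar encoder part we started from is that the microcontroller counts ticks and streams the running count over USB serial, while a small Python part on the RPI converts ticks into distance. The following is a simplified sketch under assumed values: the serial port name, the 20 ticks per revolution, and the 0.25 m wheel circumference are all placeholders.

import serial  # pyserial

TICKS_PER_REV = 20            # assumed: stripe transitions per wheel revolution
WHEEL_CIRCUMFERENCE_M = 0.25  # assumed wheel circumference in meters

def read_distance(port="/dev/ttyACM0"):
    """Read cumulative tick counts sent by the Arduino as text lines
    and print the distance traveled so far."""
    with serial.Serial(port, 115200, timeout=1) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line.isdigit():
                continue  # skip partial or malformed lines
            ticks = int(line)
            distance_m = ticks / TICKS_PER_REV * WHEEL_CIRCUMFERENCE_M
            print(f"{ticks} ticks -> {distance_m:.2f} m")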
Note that the two tracks are 90 degrees out of phase, so the pair of sensor signals forms a quadrature encoder from which the direction of rotation can also be determined.
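A minimal software decoder makes it clear why the phase offset matters. In the Python sketch below (illustrative only; sampling of the two sensors is left to a hypothetical caller), each sensor contributes one bit, and the order in which the two-bit state steps through the Gray-code cycle 00, 01, 11, 10 gives the direction.

# Map (previous state, current state) to a step of +1, -1, or 0.
# A state is the two-bit value (A << 1) | B from the two photosensors;
# valid quadrature transitions move one Gray-code step at a time.
TRANSITION = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def update(count, prev_state, a, b):
    """Fold one sample of channels A and B into the running tick count."""
    state = (a << 1) | b
    count += TRANSITION.get((prev_state, state), 0)  # 0: no change or glitch
    return count, state

Because an invalid jump (for example 00 directly to 11) maps to 0, a single noisy sample is ignored rather than miscounted.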
After we got two breadboards from Mauricio, we mounted the photosensors and built the circuits. Taking into account the limited room, the size of a pin, the motion of the wheel, and the width of the track, we designed a mount for the encoder (Figures 12 and 13).
Figure 12. The encoder mount
Figure 13. Side view of the wheel
Results
Our car was able to drive three laps autonomously on the track at Geisel. The model that worked best was trained and tested right after we had collected enough data. In addition, we used four photosensors, two on each front wheel, to measure the distance traveled by the car. The front view of our car is shown in Figure 14.
Figure 14. The front wheel of the car
The car read the number of wheel turns accurately at slow speeds but dropped some counts at higher speeds. A possible explanation is that each black/white stripe is not long enough for the photosensors to distinguish two adjacent stripes at high speed.
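A back-of-the-envelope calculation supports this explanation. The numbers below are assumptions chosen for illustration rather than measurements from our car, but they show the scale of the problem: at speed, each stripe passes the sensor in a few milliseconds, so a polling loop sampling any slower than that must drop counts.

STRIPES_PER_REV = 20    # assumed black/white stripes around the wheel
WHEEL_REV_PER_S = 10.0  # assumed wheel speed when the car is "fast"

transitions_per_s = STRIPES_PER_REV * WHEEL_REV_PER_S  # edges to be caught
dwell_ms = 1000.0 / transitions_per_s                  # visibility per stripe

print(f"{transitions_per_s:.0f} transitions/s, ~{dwell_ms:.1f} ms per stripe")
# -> 200 transitions/s, ~5.0 ms per stripe

Longer stripes (fewer per revolution) lower this frequency at the cost of resolution, which is the trade-off behind the explanation above.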
Problems encountered
Of course we encountered many problems; we list only a few that took us a long time to solve.
Bent structures
The group found that the car we received had been damaged by the previous group. We planned to fix the bent rod; however, this was too risky given the time constraint. Moreover, the problem was not a big deal, since the car drives autonomously and can try to straighten itself. One suspected cause of the damage is testing without the EMO switch in hand: at least one group member should hold the EMO switch to keep this problem from happening again. Worse consequences could include a bent chassis or broken cameras.
Broken cameras
For our group, cameras became the biggest obstacle: we went through three of them in total. The first two cameras were not detected by the RPI. The group used multiple methods to confirm that a camera was not working: the first was to let another group test our camera with their RPI; the second was to install a short camera application on the RPI and try to run it with the camera. If the camera is not detected, the cable from the camera to the RPI might also be the issue. The third camera had a connection problem, and we found that a chip had fallen off the camera because of the tight space between the screw hole and the chip. Given the fragile design of the camera, future groups should take extra precautions when installing the camera and designing a frame for it.
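As an example of such a short test (this is a generic sketch, not the exact application we used), a few lines with the standard picamera library are enough to tell whether the RPI detects the camera: the script either saves a photo or raises an error at the constructor. The output path is an example.

from picamera import PiCamera  # standard camera library on the RPI
import time

camera = PiCamera()  # raises an error here if no camera is detected
camera.start_preview()
time.sleep(2)        # give the sensor time to adjust its exposure
camera.capture('/home/pi/camera_test.jpg')  # example output path
camera.stop_preview()
print("Camera OK: saved /home/pi/camera_test.jpg")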
Connection to the field router
It took us a long time to figure out how to connect to the field router. An error message kept showing up when we attempted to connect the laptop to the RPI. To solve this problem, we used the projector in the lab to find the RPI's IP address and then used that address at the track. Although the IP address changed, we found that only the last octet could change, so we simply tried addresses at the track until one worked.
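Trying addresses by hand can also be automated. The Python sketch below is illustrative: the 192.168.1.x prefix is an assumption standing in for the field router's actual subnet. It probes the SSH port for every value of the last octet and prints the hosts that answer, which is enough to rediscover the RPI.

import socket

def find_hosts(prefix="192.168.1.", port=22, timeout=0.2):
    """Return the addresses on a /24 subnet that accept TCP connections
    on `port` (22 is SSH, which the RPI listens on)."""
    hosts = []
    for last in range(1, 255):
        addr = prefix + str(last)
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((addr, port)) == 0:  # 0 means the port is open
                hosts.append(addr)
    return hosts

print(find_hosts())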
Difficulty to design PCBs
Since we wanted to use four-pin photosensors, we tried to revise the PCBs, which were designed for three-pin photosensors. We downloaded Autodesk Eagle but found the software very difficult to use, because none of us had any background in PCB design. In addition, the PCBs have two layers, which made our task even more challenging. Although we watched an online tutorial and got some idea of how the software works, we could not find the components we needed. Besides, there are many options for any given component, resistors for example, so we had no idea which one to choose. We therefore gave up on designing the PCBs.
Conclusion
Future work is needed to improve the accuracy of the tick counts, to add photosensors on the back wheels, to design a better-fitting mount for easier assembly, and to demonstrate a proof of concept, such as navigating a one-square-foot area by odometry. Moreover, we did not have a chance to use the parts we ordered due to time limitations; perhaps they can be used in the future.