We are Team2
Our project's main objective is to detect obstacles with an ultrasonic sensor and remove them with a sweeper arm. Some environments are littered with garbage or debris, and it can be important to clear them before other vehicles or people arrive to complete a task. As a first step toward a robot that detects debris on the road and disposes of it properly, we implemented an ultrasonic sensor and a sweeper arm. In future iterations of the project, additional sensors or arm actuation could be implemented to extend the robot's capabilities.
Brian Chan 
Noe Saavedra Melchor
The base circuitry for the car before adding any additional components.
While initially wiring the car, the DC-DC converter was soldered in backwards, so its positive terminal was connected to ground. All of the other components had been tested with the Raspberry Pi powered from a computer's USB port, so when the Pi was connected to the DC-DC converter for the first time, the circuit shorted. This fried multiple pins on the Pi, melted several connecting wires, and caused the PWM board to malfunction. After troubleshooting, we had to replace the Raspberry Pi, the PWM board, and the DC-DC converter.
The replacement Raspberry Pi was also faulty. We wired it up, calibrated the steering and throttle, and started to collect data. About one minute into collecting data, the Pi shut off and would not boot. The TAs we consulted determined that the Pi itself was defective.
The circuit including the additional components. For the ultrasonic sensor, a 1 kΩ and a 2 kΩ resistor were used in series as a voltage divider, stepping the sensor's 5 V echo signal down to a Pi-safe 3.3 V.
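The 1 kΩ and 2 kΩ resistors form a voltage divider on the sensor's echo line. A quick check (a sketch, assuming the HC-SR04's standard 5 V echo output and the Pi's 3.3 V GPIO tolerance) shows why these values work:

```python
# Voltage divider on the HC-SR04 echo line. The sensor's echo pin
# outputs 5 V, but the Pi's GPIO pins are only rated for 3.3 V.
r1 = 1000  # ohms, between the echo pin and the GPIO pin
r2 = 2000  # ohms, between the GPIO pin and ground
v_echo = 5.0
v_gpio = v_echo * r2 / (r1 + r2)  # voltage seen by the GPIO pin
print(f"{v_gpio:.2f} V")  # 3.33 V, within the Pi's tolerance
```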
Raspberry Pi Case
To mount the Raspberry Pi to the top plate of the car, we needed a case that would hold it securely while still allowing quick changes to the wiring. To speed up the process, we used a pre-made file from All3DP: we modified the following 3D model and 3D printed it out of PLA.
We designed the base plate in SolidWorks and cut it from 1/4" acrylic. For cable management, we left a center slot that allows wires to pass from the chassis to the electrical components. To help align the base plate with the stand-offs, we used a slotted hole at the back of the plate, which allowed for vertical adjustment. We could not easily or accurately measure the distance from the back stand-offs to the front ones with the limited range of the calipers in the lab, so the slotted hole prevented misalignment issues. Apart from the camera mount, we secured the electrical components to the base plate with Velcro.
The camera mount was a combination of 3D printed parts and laser-cut acrylic. It was designed to allow quick adjustment of the camera's height and viewing angle, so the position could be optimized before starting to train. For the through bolts connecting the acrylic arm to the two 3D printed parts, two nuts were used as a substitute for a nylock nut, which was needed to keep the bolts from loosening under vibration. While this mount made it easy to adjust the camera position initially, it also undermined consistency when collecting data. The first time we ran the car in autonomous mode, it crashed into one of the lockers and the camera angled down slightly. This change in viewing angle invalidated the previously trained models--therefore requiring us to restart data collection. To mitigate this risk for the rest of the project, we superglued the acrylic arm at a fixed angle.
In the last week of the project, our camera mount snapped while the car was in the storage locker. The circumstances are unknown; however, the failure could have been avoided with a more robust mounting arm. While the 1/4" acrylic was adequate for normal operation of the car, it was too weak to withstand the abuse of multiple groups sharing a locker. In the original design, the sharp corner at the transition from 3D printed material to acrylic creates a stress riser; reducing this stress riser in a redesigned mount would result in a more robust design.
Ultrasonic Sensor and Sweeper Actuator
These components needed to be mounted at the front of the car without entering the camera's line of sight. To achieve this, we removed the front foam bumper and used the existing tapped holes to mount a bracket for the servo. For the ultrasonic sensor, we used Velcro to attach its 3D printed mount to the top plate so the sensor sat at the front of the car.
Team2 was able to collect data and train a model that successfully completed 5 indoor autonomous laps.
Indoor Lap Difficulties
We had a few issues collecting data and training a model that could complete autonomous indoor laps. Initially, we had to recalibrate the steering because the car could not make the sharpest corner even with the servo maxed out; this meant adjusting the steering swing arm. The throttle also proved problematic for training. We reduced the throttle sensitivity and ran at a consistent speed, which vastly improved the car's accuracy within the lines.
On the straightaways, the car would veer to one side due to the alignment of the wheels. For our first few models, the car often veered outside the lines because we had not trained it to correct its path on straight sections. To fix this, we added slight steering corrections even on straight sections. This improved the model greatly; at that point, the car was able to do a couple laps autonomously.
We collected data for a few more hours and trained a new model on it. We believe the dataset contained corrupted data, because the new models would not drive or steer except under unusual circumstances--for example, when we pushed the car by hand, it would accelerate at max throttle until we used the kill switch. We tried to troubleshoot the issue; however, our only remedy was to scrap all the data and start from scratch. After restarting data collection, we did not have any further issues.
We drove the car outdoors for 20 laps of data collection before training a new outdoor model. While waiting, we used our best indoor model to attempt a few laps outside. With that same model, the robocar completed 3 fully autonomous outdoor laps. Despite the lighting differences from the indoor training data, the model was robust enough to detect the outer edges of the track and steer correctively to remain within the outdoor test track.
We did have a few minor issues on one particular corner, where multiple tape lines overlapped and were not laid down neatly. However, the car successfully corrected through the corner before crossing the inner line.
To add obstacle detection, the team decided to implement an ultrasonic sensor, which we believed would be simpler than running a second camera with computer vision on a second Raspberry Pi. We mounted an inexpensive HC-SR04 Ultrasonic Sensor Module on the front of the vehicle and wired it directly to the Raspberry Pi already running the DonkeyCar framework. Connecting the sensor to the Pi per the circuit diagram in the corresponding section was simple, as was writing the code to start collecting distance data from it.
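The measurement code can be sketched roughly as follows. This is a sketch rather than our exact script: the trigger/echo pin numbers are placeholders, and the hardware function requires the RPi.GPIO library on a Pi.

```python
import time

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound at room temperature

def pulse_to_cm(pulse_s):
    """Convert the echo pulse width (seconds) to a one-way distance in cm."""
    return pulse_s * SPEED_OF_SOUND_CM_S / 2  # halved: sound travels out and back

def read_distance_cm(trig=23, echo=24):
    """Trigger one HC-SR04 measurement. Pin numbers here are placeholders,
    not the pins our car used. Requires the RPi.GPIO library."""
    import RPi.GPIO as GPIO
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(trig, GPIO.OUT)
    GPIO.setup(echo, GPIO.IN)
    GPIO.output(trig, False)
    time.sleep(0.05)             # let the sensor settle
    GPIO.output(trig, True)      # 10 microsecond trigger pulse
    time.sleep(0.00001)
    GPIO.output(trig, False)
    start = end = time.time()
    while GPIO.input(echo) == 0:  # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(echo) == 1:  # ...and end
        end = time.time()
    return pulse_to_cm(end - start)
```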
However, when we tested with a moving obstacle, the data was noisy and had high variance. This produced erroneous readings that triggered the additional servo motor at incorrect moments. To fix this, we implemented a moving-average filter over the last three sampled data points. We did not want to use more data points because we worried that the filter's inherent delay would become so large that the car would crash into the obstacle before actuating the sweeper arm.
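A minimal version of the three-point moving-average filter looks like this (the class and method names are illustrative, not our exact code):

```python
from collections import deque

class MovingAverage:
    """Running average over the last n samples, to smooth noisy sensor data."""
    def __init__(self, n=3):
        self.window = deque(maxlen=n)  # old samples fall off automatically

    def update(self, sample):
        """Add one raw reading and return the current smoothed value."""
        self.window.append(sample)
        return sum(self.window) / len(self.window)
```

A larger window smooths more but delays the filter's response by more samples, which is why we stopped at three.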
This short python script controls the servo to perform a quick 180-degree sweep. A full 180-degree sweep was enough to keep the arm out of the self-driving camera's view while in its resting position and to move an object completely out of the car's way.
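The sweep can be sketched with RPi.GPIO's software PWM. This is a sketch under assumptions: the GPIO pin, the 50 Hz frame rate, and the 2.5-12.5% duty-cycle endpoints are typical hobby-servo values, not our exact calibration.

```python
import time

def angle_to_duty(angle_deg):
    """Map 0-180 degrees to a typical hobby-servo duty cycle (2.5-12.5% at 50 Hz)."""
    return 2.5 + (angle_deg / 180.0) * 10.0

def sweep(pin=18):
    """One quick 180-degree sweep, then return to rest.
    The pin number is a placeholder; requires the RPi.GPIO library."""
    import RPi.GPIO as GPIO
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    pwm = GPIO.PWM(pin, 50)                  # 50 Hz servo frame rate
    pwm.start(angle_to_duty(0))              # rest, out of the camera's view
    time.sleep(0.3)
    pwm.ChangeDutyCycle(angle_to_duty(180))  # sweep the object aside
    time.sleep(0.5)
    pwm.ChangeDutyCycle(angle_to_duty(0))    # back to rest
    time.sleep(0.3)
    pwm.stop()
    GPIO.cleanup(pin)
```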
We then combined the ultrasonic distance measurement script mentioned earlier with the sweeping script. This next script continuously takes averaged measurements and sweeps once if an object is less than 5 cm from the sensor.
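Put together, the detect-and-sweep loop looks roughly like this. It is a sketch: `read_distance_cm()` and `sweep()` stand in for the measurement and sweeping scripts described earlier, and the sampling rate is illustrative.

```python
import time
from collections import deque

THRESHOLD_CM = 5.0  # sweep when the averaged distance drops below this
WINDOW = 3          # moving-average window size

def should_sweep(samples, threshold=THRESHOLD_CM):
    """Decide from the averaged window whether an object is in range."""
    return len(samples) > 0 and sum(samples) / len(samples) < threshold

def run():
    # read_distance_cm() and sweep() are placeholders for the measurement
    # and sweeping routines described earlier.
    window = deque(maxlen=WINDOW)
    while True:
        window.append(read_distance_cm())
        if should_sweep(window):
            sweep()           # knock the object out of the car's path
            window.clear()    # stale readings no longer reflect the scene
        time.sleep(0.05)      # sample at roughly 20 Hz
```

Clearing the window after a sweep prevents the pre-sweep readings from immediately re-triggering the arm.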
Our original plan was to have the sweeping mechanism work while the car was driving autonomously: the car would stop when an object was detected in its path, sweep the object away, and then continue driving. However, a number of issues prevented this. Our camera mount snapped in the storage locker, which would have required retraining the model. Then our PS3 controller stopped connecting to the car, halting development on this feature; a Raspberry Pi update had affected the Bluetooth driver, and the controller would repeatedly connect and then disconnect from the Pi. Reflashing the Pi at that point risked breaking the sweeping script, because the existing libraries we used for the ultrasonic sensor required the Pi's software to be up to date. We decided to continue developing the sweeping feature itself instead of the autonomous-driving integration.
When we first got the servo, we ran into issues controlling it, which we attributed to problematic code. The servo would intermittently move forward but would not spin in the opposite direction. After troubleshooting the code using the steering servo as a stand-in, we realized that the additional servo was defective. Replacing the servo solved our issues with controlling it.