2022WinterTeam3
Team Three Project
Team Project Final Goal
Create an autonomous vehicle that can detect images of arrows pointing right or left and steer in the indicated direction
Team Members
Zeyu Chen, Alexandra Fernandez, Sarp User, Thet Zaw
Zeyu: Responsible for training the DonkeyCar system and getting the vehicle to run autonomous laps
Alexandra: Responsible for the major CAD modeling, documenting progress, and portions of the code
Sarp: Responsible for writing the Python code and implementing OpenCV to detect images
Thet: Responsible for implementing the DonkeyCar software on the NVIDIA Jetson Nano and training the autonomous vehicle
Each member is tasked with a different portion of the project to keep progress steady, but it is the overall teamwork that brings the entirety of the project together
Project Proposal
Original Project Proposal:
Delivery Bot:
- Have the robot detect an object in the middle of the track; a bright object, such as a red box, is preferable
- Pick the object up with claws
- Bring that object to a separate point on the track after detecting another color, such as blue
The parts needed for this would have been 3D-printed claws driven by a servo motor
What went wrong? Coding the claw together with the autonomous system proved too complicated: running the two systems at once would have been difficult to code. Training the vehicle to handle both was only possible with more time, and time was too limited. The only other way to operate the claws was manually, which would have been difficult to control alongside an autonomous vehicle.
Project Gantt Chart
New Project Proposal
Create an autonomous vehicle that can detect images of arrows pointing right or left and steer in the indicated direction
Using a camera, the vehicle can be trained to detect arrows pointing in certain directions autonomously. When it detects a right arrow it should turn right; when it detects a left arrow it should turn left.
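The direction test can be sketched with a simple heuristic: once a camera frame has been thresholded into a binary mask of the arrow (e.g. with OpenCV's color thresholding), the triangular arrowhead carries more pixel mass than the shaft, so the centroid of the foreground pixels sits on the arrowhead side of the bounding-box center. This is a minimal illustration under that assumption, not necessarily the method used in the team's repository; the synthetic test arrow below is purely for demonstration.

```python
import numpy as np

def arrow_direction(mask):
    """Classify a binary arrow mask as 'left' or 'right'.

    Heuristic: the triangular arrowhead holds more pixel mass than the
    shaft, so the centroid of the foreground pixels is shifted toward
    the arrowhead relative to the bounding-box center.
    """
    xs = np.where(mask > 0)[1]          # column index of every foreground pixel
    if xs.size == 0:
        return None                     # no arrow in the frame
    centroid_x = xs.mean()
    box_center_x = (xs.min() + xs.max()) / 2.0
    return "right" if centroid_x > box_center_x else "left"

def make_arrow(direction, w=100, h=100):
    """Draw a synthetic arrow mask for testing (shaft plus triangular head)."""
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[45:55, 10:60] = 255            # horizontal shaft
    for i in range(30):                 # head tapers from x=60 to x=89
        half = 30 - i
        mask[50 - half:50 + half, 60 + i] = 255
    return mask[:, ::-1] if direction == "left" else mask

print(arrow_direction(make_arrow("right")))  # right
print(arrow_direction(make_arrow("left")))   # left
```

In a live pipeline the mask would come from thresholding each camera frame; the same function then runs per frame and its result is forwarded to the steering logic.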
Robot Training
The robot was trained to complete three autonomous laps on the campus track. As seen in the video, the vehicle completes three laps autonomously.
https://drive.google.com/file/d/1VImK-z_aUpBb2s0zrt1zZHTIMa3UWFoi/view?usp=sharing
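The training loop follows the standard DonkeyCar cycle: record "tub" data by driving manually, train a pilot model from that data, then replay the model autonomously. The commands below are a sketch of that workflow; the paths and model name are placeholders, and the exact invocations depend on the DonkeyCar version installed on the Jetson Nano.

```shell
# One-time setup: create a car application directory
donkey createcar --path ~/mycar

# Drive manually on the track to record training data ("tubs")
cd ~/mycar
python manage.py drive

# Train a pilot model from the recorded tub data
donkey train --tub ./data --model ./models/mypilot.h5

# Drive autonomously using the trained model
python manage.py drive --model ./models/mypilot.h5
```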
Final Project Sub-Systems and Robot Schematics
The robot's wiring schematic is shown in the image to the right. For the robot to run and work with the NVIDIA Jetson Nano, the correct wiring must be implemented.
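On the wiring side, the steering servo and ESC are driven through a PCA9685 PWM board (per the project's GitHub repository name), which expresses each pulse as a 12-bit duty count within the PWM frame. A minimal sketch of that mapping, assuming a standard 50 Hz frame and a 1000–2000 µs hobby-servo pulse range; the actual channel assignments and calibrated endpoints depend on the car's wiring:

```python
def angle_to_duty(angle_deg, freq_hz=50, min_us=1000.0, max_us=2000.0):
    """Map a servo angle (0-180 degrees) to a 12-bit PCA9685 duty count.

    The PCA9685 divides each PWM period into 4096 ticks; a hobby servo
    expects roughly a 1.0-2.0 ms pulse within a 20 ms (50 Hz) frame.
    """
    period_us = 1_000_000.0 / freq_hz                      # 20000 us at 50 Hz
    pulse_us = min_us + (max_us - min_us) * angle_deg / 180.0
    return round(pulse_us / period_us * 4096)

print(angle_to_duty(90))   # center: 1500 us -> 307 ticks
```

Writing the resulting tick count to the steering channel (e.g. via a PCA9685 driver library) centers or turns the wheels; the 0 and 180 degree endpoints would be trimmed during calibration.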
The baseplate, which mounts the NVIDIA Jetson Nano and secures the battery within the vehicle, can also be seen to the right. It is fitted to the shape of the chassis to reduce wasted space and to keep the vehicle from potentially getting hooked on the side of the chassis.
The camera mount has one degree of freedom; it is mounted directly on the front of the vehicle and is used for object detection.
The NVIDIA case is based on a public CAD model, modified to fit the extra wiring inside our Jetson Nano.
Original Project Modeling
These parts were not used in the final project but were essential to the original design: the claw system, which integrated a servo motor to pick up and deliver objects.
Weekly Presentations
Week 8: https://docs.google.com/presentation/d/16M8Hq5yRoftYKJABau9eC1vU7WTqeqfQXJeQ--NnbEw/edit?usp=sharing
Week 9: https://docs.google.com/presentation/d/1-tC24QWp5WTPpvG3gQXGgHRO-plDYi-zXIiXsqfGJls/edit?usp=sharing
Final Project
https://docs.google.com/presentation/d/1mmKAM_2s6SHAbPibGR57oIO-OlFowQgntp7GYXhB8tU/edit?usp=sharing
Resources
Github with car detection code: https://github.com/sarpuser/OpenCV-Arrow-Detection-with-PCA9685
Acknowledgements:
Instructor:
Prof. Jack Silberman
TA:
Dominic Nightingale
Ivan Ferrier
Other resources:
ECE Makerspace
Envision Studios