2022WinterTeam3

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 22:19, 20 March 2022

Team Three project

Team Project Final Goal

Create an autonomous vehicle that can detect images of arrows facing right or left and turn the vehicle in that direction.

Team Members

Zeyu Chen, Alexandra Fernandez, Sarp User, Thet Zaw

Zeyu: Responsible for training donkey system and allowing vehicle to run autonomous laps

Alexandra: Responsible for modeling parts and documenting progress

Sarp: Responsible for coding and training donkey system to detect images

Thet: Responsible for implementing the donkey system software on the NVIDIA Jetson Nano

Each member is tasked with a different portion of the project to keep progress on schedule, but it is the overall teamwork that brings the project together.

Project Proposal

Original Project Proposal:

Delivery Bot:

- Have the robot detect an object in the middle of the track; a bright object, such as a red box, is preferable

- Pick the object up with claws

- Bring that object to a separate point on the track after it detects another color, such as blue

The parts needed for this would have been 3D-printed claws driven by a servo motor.
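The servo-driven claw was never built, but its control would have come down to mapping a claw angle to a PWM duty cycle. The sketch below illustrates that mapping under the common hobby-servo convention (50 Hz frame, 1.0-2.0 ms pulse for 0-180 degrees); the pin number and the Jetson.GPIO calls in the comments are illustrative assumptions, not the team's actual wiring.

```python
SERVO_FREQ_HZ = 50  # standard hobby-servo frame rate (20 ms period)

def angle_to_duty(angle_deg):
    """Map a servo angle (0-180 deg) to a PWM duty cycle in percent."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("servo angle out of range")
    pulse_ms = 1.0 + (angle_deg / 180.0)  # 1.0 ms .. 2.0 ms pulse width
    period_ms = 1000.0 / SERVO_FREQ_HZ    # 20 ms frame
    return 100.0 * pulse_ms / period_ms

# On the Jetson Nano, the duty cycle could then drive the claw servo
# through a PWM-capable pin (pin 33 here is only an example):
#   import Jetson.GPIO as GPIO
#   GPIO.setmode(GPIO.BOARD)
#   GPIO.setup(33, GPIO.OUT)
#   pwm = GPIO.PWM(33, SERVO_FREQ_HZ)
#   pwm.start(angle_to_duty(0))             # claw open
#   pwm.ChangeDutyCycle(angle_to_duty(90))  # claw closed
```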


What went wrong? Coding the claw alongside the autonomous driving system became too complicated; running the two systems together would have been difficult to code. Training the vehicle for both tasks would have required more time than was available, and the only alternative, operating the claws manually, would have been hard to coordinate with an autonomous vehicle.

Project Gantt Chart

[[File:Screen_Shot_2022-03-03_at_11.30.51_AM|thumb|Group Gantt Chart]]


New Project Proposal

Create an autonomous vehicle that can detect images of arrows facing right or left and turn the vehicle in that direction.

Using a camera, the vehicle can be trained to detect images of arrows facing certain directions autonomously. When it detects a right arrow it should turn right, and when it detects a left arrow it should turn left.
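One simple way to classify an arrow's direction, before or alongside a trained model, is a shape heuristic on a thresholded camera image: the arrow tip tapers to a point, so the extreme column on the tip side contains fewer foreground pixels than the extreme column on the tail side. The sketch below is a minimal pure-Python illustration of that idea (the team's actual detector is not documented here); `steering_for` assumes a DonkeyCar-style steering range of -1.0 (full left) to +1.0 (full right).

```python
def arrow_direction(mask):
    """Guess whether a binary arrow mask points 'left' or 'right'.

    mask is a list of rows of 0/1 values (e.g. a thresholded camera frame).
    Heuristic: the tapered tip column holds fewer foreground pixels than
    the wide tail column, so compare the two extreme occupied columns.
    """
    col_counts = [sum(col) for col in zip(*mask)]       # pixels per column
    occupied = [i for i, c in enumerate(col_counts) if c > 0]
    if not occupied:
        raise ValueError("no arrow pixels found")
    left, right = occupied[0], occupied[-1]
    return "right" if col_counts[right] < col_counts[left] else "left"

def steering_for(direction, magnitude=0.5):
    """Map a detected direction to a steering value (assumed -1..1 range)."""
    return magnitude if direction == "right" else -magnitude
```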

Robot Training

The robot was trained to complete three autonomous laps on the campus track. As seen in the video, the vehicle completes the three laps autonomously.

Final Project Sub-Systems and Robot Schematics

The robot's wiring schematic is shown in the image to the right. For the robot to run and work with the NVIDIA Jetson Nano, the wiring must be implemented correctly.

The baseplate model, which mounts the NVIDIA Jetson Nano and secures the battery within the vehicle, can also be seen to the right. It is fitted to the shape of the chassis to save space and keep the vehicle from getting hooked on the side of the track.

The camera mount has one degree of freedom; it is mounted directly on the front of the vehicle and is used for object detection.

The NVIDIA Jetson Nano case is based on a public CAD model, modified to fit the extra wiring of our Jetson Nano.

Original Project Modeling

These parts were not used in the final project but were essential to the original design: a claw system that integrated a servo motor to pick up and deliver objects.

Weekly Presentations

Week 8:

Week 9:

Final Project