Team Members
- Raymond Constantine - MAE
- Martin Heir - UPS
- Sam Liimatainen - ECE
- Zhenghao “Jack” Weng - MAE
Team 2 Car
Top View
Front View
Project Overview
The purpose of this project is to replicate autonomous interaction with traffic signals using an RC race car chassis, a webcam, and an NVIDIA Jetson Nano as hardware, with ROS2 (Robot Operating System 2) and Docker containers for software development. Taking cues from industry leaders, the team used camera-based computer vision for navigation; a summary of the project follows below.
Mechanical Design
Baseplate
The baseplate had three slots running down its middle section that were 34.29 cm long and three slots on one side that were 12.70 cm long. All of the slots were 0.625 cm apart and 0.32 cm wide, which allowed M3 screws to pass through them. The shorter slots were used to attach the Jetson Nano case to the baseplate, while the longer slots were used to mount the baseplate onto the chassis and to attach the camera mount. A wider slot on the side opposite the three short slots provided extra space for wiring and hardware.
Camera Mount
The camera mount consisted of a simple joint with a pass-through for the camera wiring and spaced M3 holes for fastening it to both the camera and the baseplate. The component was printed in PETG filament on a standard Fused Deposition Modeling (FDM) 3D printer.
Jetson Nano Protective Case
The Jetson Nano case was sourced from an open-source repository and serves to protect and secure the Jetson Nano from damage should a crash occur (which it did). The component was printed in PETG filament on a standard Fused Deposition Modeling (FDM) 3D printer.
Electrical Design
Wiring Schematic
Programming Design
Color Filter Flowchart (OpenCV):
The computer vision script works by converting each frame into HSV space, forming a mask for each target color (red, yellow, and green), and applying the Hough circle transform to each masked image. If a circle of the proper size and color range is detected, the script outputs the corresponding traffic-signal logic, which ROS2 uses to direct the car.
The GIF above is a visualization of the computer vision script's detections.
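The general shape of this pipeline can be sketched in a few lines of Python with OpenCV. The HSV thresholds and Hough parameters below are illustrative placeholders, not the team's tuned values (those are in the repositories linked at the bottom of the page):

import cv2
import numpy as np

# Illustrative HSV bounds only; real red also wraps around hue 180 in OpenCV,
# so a production filter would add a second band near [170, 180].
COLOR_RANGES = {
    "red":    (np.array([0, 120, 120]),  np.array([10, 255, 255])),
    "yellow": (np.array([20, 120, 120]), np.array([35, 255, 255])),
    "green":  (np.array([45, 120, 120]), np.array([75, 255, 255])),
}

def detect_signal(frame):
    """Return the first traffic-light color whose mask contains a circle, else None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    for color, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, lo, hi)           # isolate one target color
        mask = cv2.GaussianBlur(mask, (9, 9), 2)  # smooth before the Hough transform
        circles = cv2.HoughCircles(
            mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
            param1=100, param2=30, minRadius=5, maxRadius=60)
        if circles is not None:
            return color  # hand the signal logic off to ROS2
    return None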
ROS2 Flow Chart:
Above is a flow chart depicting the structure of the ROS2 nodes that guide the robot. The Lane Detection Node subscribes to the camera feed topic (which carries raw camera frames) and publishes the centroid location to the centroid topic. Team 2 modified the Lane Guidance Node to subscribe to both the centroid topic and the camera feed topic: the node steers the car based on the centroid's position relative to the center of the camera frame (allowing the car to follow the lane lines), and when a traffic signal is detected, the commands to obey the signal override the lane guidance commands. The Lane Guidance Node publishes actuator values to the cmd_vel topic, which the Adafruit Twist Node interprets to control the PWM signals sent to the car's throttle and steering.
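A stripped-down sketch of how such a node might look with ROS2's Python client library (rclpy) is shown below. The topic names, the Point message type for the centroid, and the gains are assumptions made for illustration; the team's actual node, including the signal-override logic and the camera feed subscription, lives in the GitLab repository linked below.

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Point, Twist

class LaneGuidance(Node):
    """Steers toward the lane centroid unless a detected signal overrides it."""

    def __init__(self):
        super().__init__('lane_guidance')
        # Topic names and the centroid message type are assumptions here.
        self.create_subscription(Point, 'centroid', self.on_centroid, 10)
        self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.stopped = False  # would be set by a signal-detection callback (omitted)

    def on_centroid(self, msg):
        cmd = Twist()
        if not self.stopped:
            cmd.linear.x = 0.5             # constant throttle (illustrative value)
            cmd.angular.z = -0.01 * msg.x  # assumes x is the offset from frame center
        self.cmd_pub.publish(cmd)          # an all-zero Twist stops the car at a red light

def main():
    rclpy.init()
    rclpy.spin(LaneGuidance())
    rclpy.shutdown()

if __name__ == '__main__':
    main()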
Project Demo:
Repositories
- ECE/MAE 148 WI22 Team2 GitHub
- ECE/MAE 148 WI22 Team2 GitLab (ROS2 Integration)