
Team Members


  • Raymond Constantine - MAE
  • Martin Heir - UPS
  • Sam Liimatainen - ECE
  • Zhenghao “Jack” Weng - MAE

Team 2 Car

Top View

Sideview.jpg

Front View

Frontview.jpg

Project Overview

We designed and built an autonomous human-following program using a YOLO-driven object detection algorithm. The algorithm implements a bounding-box target system that tracks human-shaped objects within the camera's field of view. We used custom ROS nodes to publish the incoming camera data to a steering controller node, which sends commands to the VESC via Twist messages.
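
Below is a minimal sketch of the box-target steering logic, assuming YOLO person detections arrive as (x, y, w, h) pixel boxes; the function name, box format, and gain are illustrative assumptions, not the team's exact code.

 # Sketch: map the largest detected person box to a normalized steering value.
 # The (x, y, w, h) box format and the gain value are assumptions.
 def steering_from_detection(frame_width, boxes, gain=1.0):
     """Return a steering value in [-1, 1] from YOLO person boxes."""
     if not boxes:
         return 0.0  # no person in view: keep the wheels straight
     # Follow the largest box, i.e. the person closest to the camera.
     x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
     box_center = x + w / 2.0
     # Error is the horizontal offset from image center, normalized to [-1, 1];
     # a positive error steers the car toward the person.
     error = (box_center - frame_width / 2.0) / (frame_width / 2.0)
     return max(-1.0, min(1.0, gain * error))

A value of 0 keeps the car straight; the steering controller node converts this value into a Twist command for the VESC.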

Mechanical Design

Baseplate

Cad1.png
Cad2.png

The baseplate had three slots running down its middle section that were 34.29 in long and three slots on one side that were 12.70 in long. All of these slots were spaced 0.625 in apart and were 0.32 in wide, which allowed M3 screws to be inserted into them. The shorter slots were used to attach the Jetson Nano case to the baseplate, while the longer slots were used to mount the baseplate onto the chassis and attach the camera mount. A wider slot on the opposite side of the three short slots allowed more space for wiring and hardware.

Electrical Design

Wiring Schematic

WI22 Team2 Schematic.png


Programming Design

Color Filter Flowchart (OpenCV):

WI22 Team2 OpenCV Flowchart.png

The computer vision script works by converting each frame to HSV space, forming a mask for each target color (red, yellow, and green), and applying the Hough circle transform to each masked image. If a circle of the proper size and color range is detected, the script outputs the corresponding traffic-signal logic, which ROS2 uses to direct the car.
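
The sketch below illustrates this mask-then-Hough-circle approach; the HSV bounds, blur kernel, and circle radii are assumed values for illustration, not the team's tuned parameters.

 # Minimal sketch of the color-filter + Hough circle detection (assumed tuning).
 import cv2
 import numpy as np
 
 # Assumed HSV ranges for each traffic-light color (tune per camera/lighting).
 COLOR_RANGES = {
     "red":    (np.array([0, 120, 120]),  np.array([10, 255, 255])),
     "yellow": (np.array([20, 120, 120]), np.array([35, 255, 255])),
     "green":  (np.array([45, 100, 100]), np.array([90, 255, 255])),
 }
 
 def detect_signal(frame):
     """Return 'red', 'yellow', 'green', or None for a single BGR frame."""
     hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
     for color, (lower, upper) in COLOR_RANGES.items():
         mask = cv2.inRange(hsv, lower, upper)        # isolate the target color
         mask = cv2.GaussianBlur(mask, (9, 9), 2)     # smooth before the Hough transform
         circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                    param1=100, param2=30, minRadius=5, maxRadius=60)
         if circles is not None:                      # a circle of this color was found
             return color
     return None

The returned color string is what would be handed to the ROS2 side as the traffic-signal state.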

StopLightGif.gif

The GIF above is a visualization of the computer vision script's detections.


ROS2 Flow Chart:

ROS2 Flow Chart.png

Above is a flow chart depicting the structure of the ROS2 nodes that guide the robot. The Lane Detection Node subscribes to the camera feed topic (which contains raw camera frame data) and publishes the centroid location data to the centroid topic. Team 2 modified the Lane Guidance Node to subscribe to both the centroid topic and the camera feed topic: the node steers the car based on the centroid's position relative to the center of the camera frame (allowing the car to follow the lane lines), and whenever a traffic signal is detected, the commands to obey the signal override the lane guidance commands. The Lane Guidance Node publishes actuator values to the cmd_vel topic, which the Adafruit Twist Node interprets to generate the PWM signals sent to the car's throttle and steering.
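
The following is a simplified sketch of the Lane Guidance Node's publish/subscribe wiring, assuming the centroid arrives as a std_msgs/Float32 offset from image center; the topic name 'centroid', the message type, and the control gains are illustrative assumptions rather than the team's exact interface.

 # Simplified ROS2 node sketch (rclpy); topic names and gains are assumptions.
 import rclpy
 from rclpy.node import Node
 from std_msgs.msg import Float32
 from geometry_msgs.msg import Twist
 
 class LaneGuidanceNode(Node):
     def __init__(self):
         super().__init__('lane_guidance_node')
         # Subscribe to the centroid produced by the Lane Detection Node.
         self.create_subscription(Float32, 'centroid', self.centroid_callback, 10)
         # Publish actuator commands for the Adafruit Twist Node to turn into PWM.
         self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)
 
     def centroid_callback(self, msg):
         twist = Twist()
         twist.linear.x = 0.5                  # constant forward throttle (assumed)
         twist.angular.z = -0.01 * msg.data    # steer proportionally to centroid offset
         self.cmd_pub.publish(twist)
 
 def main():
     rclpy.init()
     rclpy.spin(LaneGuidanceNode())
     rclpy.shutdown()
 
 if __name__ == '__main__':
     main()

In the full system, a traffic-signal detection would override the Twist values computed here before they are published to cmd_vel.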


Project Demo:

CarInAction.gif

Repositories
ECE/MAE 148 WI22 Team2 GitHub
ECE/MAE 148 WI22 Team2 GitLab (ROS2 Integration)