2022SpringTeam2


Team Members

Team

  • Raymond Constantine - MAE
  • Martin Heir - UPS
  • Sam Liimatainen - ECE
  • Zhenghao “Jack” Weng - MAE

Team 2 Car

Side View

Sideview.jpg

Front View

Frontview.jpg

Project Overview

We designed and built an autonomous human-following program using a YOLO-driven object detection algorithm. The algorithm implements a bounding-box target system that tracks human-shaped objects within the camera's field of view. We used custom ROS nodes so that the incoming camera data is processed and published to a steering controller node, which sends commands to the VESC via Twist messages.

Mechanical Design

Baseplate

Cad1.png
Cad2.png

Electrical Design

Wiring Schematic

Wire.png

Programming Design

Color Filter Flowchart (OpenCV):

Nodechart.png

The computer vision script works by converting each frame to HSV space, forming a mask for each target color (red, yellow, and green), and applying the Hough circle transform to each masked image. If a circle of the proper size and color range is detected, the script outputs the corresponding traffic-signal logic, which ROS2 uses to direct the car.

StopLightGif.gif

The GIF above is a visualization of the computer vision script's detection.
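As a rough sketch of the color-filter approach described above (not the team's exact script), a single frame could be processed as follows; the HSV ranges and Hough parameters are placeholder values rather than the tuned ones:

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Placeholder HSV ranges for each traffic-light color; the team's tuned values
# are not reproduced here. Note that red hue also wraps around near 180.
COLOR_RANGES = {
    "red":    (np.array([0, 120, 120]),  np.array([10, 255, 255])),
    "yellow": (np.array([20, 120, 120]), np.array([35, 255, 255])),
    "green":  (np.array([45, 100, 100]), np.array([75, 255, 255])),
}

def detect_signal(frame_bgr):
    """Return the first traffic-light color whose mask contains a circle, else None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    for color, (lower, upper) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, lower, upper)        # keep only pixels inside the color range
        mask = cv2.GaussianBlur(mask, (9, 9), 2)     # smooth the mask before the Hough transform
        circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                   param1=100, param2=20, minRadius=5, maxRadius=60)
        if circles is not None:
            return color  # e.g. "red" -> the guidance node applies the corresponding stop/go logic
    return None
</syntaxhighlight>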

Human detection Node

"Responsible for detecting and locating humans"

  • Subscribes to camera
  • Publishes human pixel coordinates
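A minimal rclpy sketch of such a node is shown below. The topic names (/camera/image_raw, /human_coords) are assumptions, and OpenCV's stock HOG person detector stands in for the team's YOLO-based detector so the example is self-contained:

<syntaxhighlight lang="python">
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Point
from cv_bridge import CvBridge

class HumanDetectionNode(Node):
    """Subscribes to camera frames and publishes the detected person's pixel coordinates."""

    def __init__(self):
        super().__init__('human_detection_node')
        self.bridge = CvBridge()
        # Stand-in detector; the team's implementation used a YOLO-driven detector instead.
        self.hog = cv2.HOGDescriptor()
        self.hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
        # Topic names here are placeholders, not necessarily the team's.
        self.sub = self.create_subscription(Image, '/camera/image_raw', self.on_frame, 10)
        self.pub = self.create_publisher(Point, '/human_coords', 10)

    def on_frame(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        boxes, _ = self.hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) == 0:
            return
        x, y, w, h = boxes[0]                 # take the first detection for simplicity
        point = Point()
        point.x = float(x + w / 2.0)          # bounding-box center, in pixels
        point.y = float(y + h / 2.0)
        self.pub.publish(point)

def main():
    rclpy.init()
    rclpy.spin(HumanDetectionNode())
    rclpy.shutdown()
</syntaxhighlight>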

PID Node

"Responsible for steering control"

  • Subscribes to human pixel coordinates
  • Publishes car steering commands
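A stripped-down version of the steering controller could look like the sketch below; only the proportional term is shown, and the gain, frame width, and topic names are placeholder assumptions rather than Team 2's values:

<syntaxhighlight lang="python">
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Point, Twist

class SteeringControllerNode(Node):
    """Converts the person's pixel x-coordinate into steering commands on cmd_vel."""

    def __init__(self):
        super().__init__('steering_controller_node')
        self.kp = 0.005                 # placeholder proportional gain
        self.image_center_x = 320.0     # assumes a 640-pixel-wide camera frame
        self.forward_speed = 0.5        # placeholder constant throttle
        self.sub = self.create_subscription(Point, '/human_coords', self.on_coords, 10)
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_coords(self, point):
        error = point.x - self.image_center_x    # positive when the person is right of center
        twist = Twist()
        twist.linear.x = self.forward_speed
        twist.angular.z = -self.kp * error       # steer back toward the person
        self.pub.publish(twist)

def main():
    rclpy.init()
    rclpy.spin(SteeringControllerNode())
    rclpy.shutdown()
</syntaxhighlight>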


ROS2 Flow Chart:

ROS2 Flow Chart.png

Above is a flow chart that depicts the structure of the ROS2 nodes that guide the robot. The Lane Detection Node subscribes to the camera feed topic (which contains raw camera frame data) and publishes the centroid location data to the centroid topic. Team 2 modified the Lane Guidance Node to subscribe to both the centroid topic and the camera feed topic; the node guides the car based on the centroid's position relative to the center of the camera frame (allowing the car to follow the lane lines), and when a traffic signal is detected, the commands to obey the signal override the lane guidance commands. The Lane Guidance Node publishes actuator values to the cmd_vel topic, which the Adafruit Twist Node interprets to control the PWM signals sent to the car's throttle and steering.
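As a sketch of the override behavior described above (the function name, gain, and signal encoding are illustrative assumptions, not Team 2's implementation), the Lane Guidance Node's command construction might look like this:

<syntaxhighlight lang="python">
from geometry_msgs.msg import Twist

def build_cmd(centroid_x, frame_center_x, signal, throttle=0.4, kp=0.004):
    """Combine lane-following steering with a traffic-signal override.

    `signal` is the latest color-filter result ('red', 'yellow', 'green', or None);
    the gain and throttle values here are placeholders.
    """
    cmd = Twist()
    if signal == 'red':
        # A detected red light overrides lane guidance: stop the car.
        return cmd  # Twist defaults to zero velocity and zero steering
    cmd.linear.x = throttle * (0.5 if signal == 'yellow' else 1.0)  # slow down for yellow
    cmd.angular.z = -kp * (centroid_x - frame_center_x)             # steer toward the lane centroid
    return cmd
</syntaxhighlight>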

Project Demo:

CarInAction.gif

Repositories
ECE/MAE 148 WI22 Team2 GitHub
ECE/MAE 148 WI22 Team2 GitLab (ROS2 Integration)