From MAE/ECE 148 - Introduction to Autonomous Vehicles


Team Members


  • Roger Kim


  • Bishwajit Roy

2000 (2).jpg


  • Tara Len

Tara Len.jpg

  • Cameron Yenche


Team 2 Car

Isometric View


Side View

1000 (3).jpg

Project Overview

The purpose of this project is to replicate autonomous interaction with traffic signals using an RC race car chassis, a webcam, and an NVIDIA Jetson Nano as hardware, with ROS2 (Robot Operating System 2) and Docker containers for software development. Taking cues from industry leaders, navigation was implemented with camera-based computer vision. Below is a summary of the project.

Mechanical Design



The baseplate had three slots running down its middle section that were 34.29 in long and three slots on one side that were 12.70 in long. All of these slots were 0.625 in apart and 0.32 in wide, which allowed M3 screws to be inserted through them. The shorter slots were used to attach the Jetson Nano case to the baseplate, while the longer slots were used to mount the baseplate onto the chassis and to attach the camera mount to the baseplate. A wider slot on the side opposite the three short slots provided additional space for wiring and hardware.

Camera Mount


The camera mount consisted of a simple joint with spacing for the camera wiring to run through the mount as well as spaced M3 holes for mounting to both the camera and the baseplate. The component was printed from PETG filament using a standard Fused Deposition Modeling (FDM) 3D printer.

Jetson Nano Protective Case


The Jetson Nano case was sourced from an open-source repository and serves the purpose of protecting and securing the Jetson Nano from damage, should a crash occur (which it did). The component was printed from PETG filament using a standard Fused Deposition Modeling (FDM) 3D printer.

Electrical Design

Wiring Schematic

WI22 Team2 Schematic.png

Programming Design

Color Filter Flowchart (OpenCV):

WI22 Team2 OpenCV Flowchart.png

The computer vision script works by converting each frame into HSV space, forming a mask for each target color (red, yellow, and green), and applying the Hough circle transform to each masked image. If a circle of the proper size is detected in one of the color masks, the script outputs the corresponding traffic signal logic, which ROS2 uses to direct the car.


The GIF above is a visualization of the computer vision script detection.

ROS2 Flow Chart:

ROS2 Flow Chart.png

Above is a flow chart depicting the structure of the ROS2 nodes that guide the robot. The Lane Detection Node subscribes to the camera feed topic (which contains raw camera frame data) and publishes centroid location data to the centroid topic. The Lane Guidance Node was modified by Team 2 to subscribe to both the centroid topic and the camera feed topic. It steers the car based on the centroid's position relative to the center of the camera frame, allowing the car to follow the lane lines; when a traffic signal is detected, the commands to obey the signal override the lane guidance commands. The Lane Guidance Node publishes actuator values to the cmd_vel topic, which is interpreted by the Adafruit Twist Node that controls the PWM signals sent to the car's throttle and steering.
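The override behavior in the Lane Guidance Node can be illustrated with a minimal sketch. The function names, throttle values, and steering mapping below are hypothetical stand-ins; the team's actual node runs this kind of logic inside rclpy subscription callbacks before publishing to cmd_vel:

```python
# Sketch of the Lane Guidance priority logic: traffic-signal commands
# override centroid-based lane following. Values are illustrative only.

def steering_from_centroid(centroid_x, frame_width):
    """Map the centroid's offset from frame center to a steering value in [-1, 1]."""
    offset = centroid_x - frame_width / 2
    return max(-1.0, min(1.0, offset / (frame_width / 2)))

def cmd_vel(centroid_x, frame_width, signal=None):
    """Return a (throttle, steering) pair as would be published to cmd_vel."""
    if signal == "red":
        return 0.0, 0.0                  # stop: the signal overrides lane guidance
    steering = steering_from_centroid(centroid_x, frame_width)
    if signal == "yellow":
        return 0.2, steering             # slow down but keep following the lane
    return 0.5, steering                 # green or no signal: normal lane following
```

Keeping the override check first in the callback guarantees that a detected red light stops the car regardless of where the lane centroid is.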

Project Demo:


ECE/MAE 148 WI22 Team2 GitHub
ECE/MAE 148 WI22 Team2 GitLab (ROS2 Integration)