2022WinterTeam4

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Revision as of 23:48, 21 March 2022

Team 4: Adaptive Cruise Control

Hussein Batteikh (ECE)

Jon Carlo Bergado Bibat (CE)

Haojin Chen (MAE)

David Qiao (MAE)


Physical Robot

Construction Design

Base148.png

The base plate is specified with M3 mounting holes spaced 1.5 cm apart. Four curved edges around the sides of the plate connect to four 3D-printed legs, which provide more room for the wiring and raise the camera, Jetson, and LiDAR for the best view and data collection. Extended slots along the edges of the base plate allow easier wiring adjustments and let the Jetson Nano be attached at the rear of the plate, leaving the front free for the LiDAR and camera mounts. The camera mount was designed as a single curved leg connected to two adjustable arms that move the camera up and down to calibrate the viewing angle. The LiDAR mount is attached at the front of the plate with two M3 screws placed diagonally to support the weight of the LiDAR.


Electrical Schematic

Wiring148.png


Completed Car

148car.png


Donkey 3 Autonomous Laps

Media:8mb.video-vQW-nDmf1NJR.mp4

ROS2 3 Autonomous Laps

Media:8mb.video-Bfc-Y3EXQ5rl.mp4

Final Project

Gantt Chart

FinalGantt148.jpg


Objective

The goal is to program the robocar in ROS2 to detect, via LiDAR, the speed of a moving or stationary object in front of it and adjust its own speed accordingly to avoid a collision while driving autonomously. In addition, the forward portion of the camera image is filtered out so that the object in front does not confuse the lane-detection training.

Approach

* The vehicle will drive autonomously through lane tracing.
** The vehicle's central processing unit will detect the center of the lane from an image provided by an RGB camera.
** The image will be processed with OpenCV, which calculates the center of the lane.
** Once the center of the lane is calculated, steering will be adjusted based on the error between the lane center and the vehicle's center.

* The vehicle will adjust its throttle based on the distance to an object in front of it.
** The LiDAR will provide distance data for any object within the vehicle's frontal view range.
** As a frontal object gets closer, the throttle value will be reduced, down to a complete stop.
** A first-order filter was applied to the LiDAR readings to reduce noise for better stability.
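The first-order (exponential low-pass) filter mentioned above can be sketched as follows; the smoothing factor `alpha` and the class name are assumptions, not the tuned values from the project:

```python
class FirstOrderFilter:
    """First-order low-pass filter for noisy LiDAR range readings.

    `alpha` (0 < alpha <= 1) is a placeholder smoothing factor: smaller
    values smooth more aggressively but respond more slowly.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def update(self, measurement):
        if self.state is None:
            self.state = measurement  # initialize on the first reading
        else:
            # Blend the new reading with the previous filtered value.
            self.state = (self.alpha * measurement
                          + (1 - self.alpha) * self.state)
        return self.state
```

Calling `update()` on each incoming LiDAR range yields a smoothed distance that the throttle logic can act on without reacting to single-scan noise.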


Throttle Schedule Algorithm

Throttlesched.jpg
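The actual throttle schedule is given in the figure above; purely as an illustration, a piecewise-linear mapping from (filtered) LiDAR distance to throttle might look like the sketch below. All distances, the maximum throttle, and the function name are hypothetical:

```python
def throttle_from_distance(distance_m,
                           stop_dist=0.5,
                           slow_dist=2.0,
                           max_throttle=0.3):
    """Map a frontal-object distance (meters) to a throttle command.

    Hypothetical schedule: at or below `stop_dist` the car stops; at or
    above `slow_dist` it drives at `max_throttle`; in between, throttle
    ramps linearly with distance.
    """
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= slow_dist:
        return max_throttle
    # Linear ramp between the stop distance and the full-speed distance.
    frac = (distance_m - stop_dist) / (slow_dist - stop_dist)
    return max_throttle * frac
```

As the leading object approaches, the command decreases smoothly to zero, which matches the behavior described in the Approach section.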


Demonstration Video

Media:IMG_3689.mp4