2022WinterTeam6

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team 6: DKar

Team Members

  • Aksharan Saravanan, ECE
  • Hieu Luu, DSC
  • Katada Siraj, MAE

~~team pic~~


Project Overview

A robot that acts as an aimbot: it follows a target and dynamically adjusts a laser pointed at that target. The robot uses OpenCV to detect the target and ROS to guide steering and throttle based on the target's position and distance.

Goals:

  • Have the car be able to autonomously follow a target (e.g. a poster board)
  • Modify the car to fit a laser pointer that aims at the target
  • (Nice-to-have) Recognize targets of different colors and adjust the following distance accordingly.
    • Alternative: Use depth data from the Intel RealSense camera to implement throttle control


Gantt Chart

Team6 gantt.png

Robot setup

Hardware Setup

Team6 robotside.jpeg Team6 robotfront.png

Notes

  • Velcroed the PWM board and DC-DC converter to the bottom of the base, so there would be sufficient space for the camera/servo/laser mechanism
  • Thingiverse original CAD design for the Jetson Nano case: https://www.thingiverse.com/thing:3532828
    • Modified this design to route wires through the back and to give the Jetson a more secure fit
  • Webcam mounted on top provided a good FOV without taking up any additional space
  • Relay is behind the Jetson

CAD/Mechanical Designs

~~images (CAD models and laser-cut board)~~

Wiring

Team6 wireschema.png

Special Components

  • Intel RealSense RGBD Camera
  • Mini Laser
  • Micro Servo

Team6 specialcomponents.png

Notes

Configured the Jetson Nano and compiled OpenCV from source, using the class guide instructions.
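
As a quick sanity check after the build, a short Python snippet like the one below can confirm the compiled version and optional modules (which modules to look for depends on the class guide's build flags, so treat this as a rough sketch):

  # Quick sanity check after building OpenCV from source on the Jetson Nano.
  import cv2

  print(cv2.__version__)                    # version that was just compiled
  info = cv2.getBuildInformation()
  print("CUDA:", "CUDA" in info)            # optional modules we expected from the build flags
  print("GStreamer:", "GStreamer" in info)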

DonkeyCar

3 Laps Video

Tips/Notes

OpenCV/ROS2

3 Laps Video

Tips/Notes

Final Project

The final project combines the OpenCV color detection, RealSense depth data, and a custom ROS2 node so the car can follow a colored target while keeping the laser aimed at it.

How We Did It

  1. Image Processing using OpenCV to Detect a Colored Target
    1. Used a virtual machine to prototype a simple Python script that masks for blue, finds the largest blue rectangle, draws its bounding box, and computes the line from the center of the detected object to the center of the frame (see the detection sketch after this list)
  2. Gathered depth data using the Intel RealSense RGBD camera
  3. Created a custom target detection node that subscribes to the camera topics and publishes to the Twist and servo topics.
    1. The node is a Python script built with ROS2 that publishes and subscribes through Dominic's existing custom ROS2 metapackage.
  4. Wrote a program that publishes commands to the appropriate topics based on the gathered data.
    1. Implemented a P controller for throttle that maintains a set following distance from the target (see the PID Algorithm section)
    2. Implemented a P controller for the servo movement
  5. Integrated the code into a ROS package (aimbot_pkg) within the ucsd_robocar_hub2 Docker metapackage.
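
Below is a minimal sketch of the color detection from step 1. The HSV thresholds, camera index, and drawing calls are illustrative assumptions rather than the exact values from our script:

  # Minimal sketch of the blue-target detection in step 1 (HSV thresholds and
  # frame source are illustrative, not the exact values from our script).
  import cv2
  import numpy as np

  cap = cv2.VideoCapture(0)                   # color stream (assumed camera index)
  ret, frame = cap.read()
  if ret:
      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      # Mask everything that is not roughly "blue" (tune the range for lighting).
      mask = cv2.inRange(hsv, np.array([100, 150, 50]), np.array([130, 255, 255]))
      # OpenCV 4.x return signature: (contours, hierarchy)
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      if contours:
          # The largest blue region is assumed to be the target poster board.
          target = max(contours, key=cv2.contourArea)
          x, y, w, h = cv2.boundingRect(target)
          cx, cy = x + w // 2, y + h // 2                    # center of the target
          fx, fy = frame.shape[1] // 2, frame.shape[0] // 2  # center of the frame
          error_x = cx - fx                                  # horizontal offset for steering/aiming
          cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          cv2.line(frame, (cx, cy), (fx, fy), (0, 0, 255), 2)
  cap.release()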


~~image~~

ROS Software Design

Team6 packagetree.png Team6 rosnodes.png
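
As a rough illustration of the pub/sub structure in the node graph above, a skeleton of the target detection node could look like the following. The topic names (/camera/color/image_raw, /cmd_vel, /servo_angle) are assumptions for illustration; the actual names come from the ucsd_robocar_hub2 metapackage:

  # Skeleton of the target detection node's pub/sub wiring. Topic names are
  # assumptions; the real ones come from the ucsd_robocar_hub2 metapackage.
  import rclpy
  from rclpy.node import Node
  from sensor_msgs.msg import Image
  from geometry_msgs.msg import Twist
  from std_msgs.msg import Float32

  class TargetDetectionNode(Node):
      def __init__(self):
          super().__init__('target_detection_node')
          # Subscribe to the camera frames published by the camera node.
          self.create_subscription(Image, '/camera/color/image_raw', self.image_callback, 10)
          # Publish throttle/steering as a Twist and the servo angle separately.
          self.twist_pub = self.create_publisher(Twist, '/cmd_vel', 10)
          self.servo_pub = self.create_publisher(Float32, '/servo_angle', 10)

      def image_callback(self, msg):
          # The OpenCV detection and P controllers would run here, then publish.
          self.twist_pub.publish(Twist())

  def main(args=None):
      rclpy.init(args=args)
      node = TargetDetectionNode()
      rclpy.spin(node)
      node.destroy_node()
      rclpy.shutdown()

  if __name__ == '__main__':
      main()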

Source Code/Github

Github Link: Project Code

Custom ROS Node Link: Target Detection ROS2 Node

PID Algorithm
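
We used proportional-only (P) control for both throttle and servo movement. A minimal sketch is below; the gains, setpoint distance, and output clamps are illustrative placeholders, not our tuned values:

  # Minimal sketch of the two P controllers (gains, setpoint, and clamps are
  # placeholders, not our tuned values).
  KP_THROTTLE = 0.5      # proportional gain on the distance error
  KP_SERVO = 0.003       # proportional gain on the horizontal pixel offset
  TARGET_DISTANCE = 1.0  # desired following distance in meters

  def throttle_command(measured_distance):
      """Drive forward faster the further the target is beyond the setpoint."""
      error = measured_distance - TARGET_DISTANCE
      return max(0.0, min(KP_THROTTLE * error, 1.0))   # no reverse (see Future Developments)

  def servo_command(pixel_error_x):
      """Turn toward the target based on its horizontal offset from the frame center."""
      return max(-1.0, min(KP_SERVO * pixel_error_x, 1.0))  # normalized output range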

Demo Videos

Tips/Notes

Presentations

Final Project Presentation
Weekly Update Presentations

Challenges

  • Determining how to integrate custom ROS code within the Docker Container and getting the Nodes to communicate
  • Integrating the Intel RealSense Camera
  • Issue of possible “voltage spike” to the ESC causing full throttle in some cases
  • Connecting the OpenCV image detection to the data processing that publishes to ROS topics
  • Components including the Jetson, PWM board, and switch burned out, so we had to get new parts

Future Developments

Send reverse throttle controls.

  • This would enable our car to be more robust in maintaining a following distance, because it would be able to correct itself after overshooting. We weren't able to implement this because we didn't address it soon enough, and with the little time we had left we didn't want to risk messing with the ESC.

Recognize different colors and dynamically adjust following distance.

  • This would allow the car to be more responsive to the environment and more robust when following a target. We weren't able to implement this because we ran out of time and never got to start it (a hypothetical sketch of the idea follows below).
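
One possible shape for this feature is sketched below; the colors, HSV ranges, and following distances are made up for illustration, and none of this was implemented:

  # Hypothetical sketch of color-dependent following distances (HSV ranges and
  # distances are made up; this was never implemented).
  import cv2
  import numpy as np

  # Each candidate target color maps to its own desired following distance (meters).
  COLOR_PROFILES = {
      'blue':  {'lo': (100, 150, 50), 'hi': (130, 255, 255), 'follow_dist': 1.0},
      'red':   {'lo': (0, 150, 50),   'hi': (10, 255, 255),  'follow_dist': 2.0},
      'green': {'lo': (40, 150, 50),  'hi': (80, 255, 255),  'follow_dist': 0.5},
  }

  def pick_target(hsv_frame):
      """Return (color name, contour, follow distance) for the largest colored blob found."""
      best = None
      for name, profile in COLOR_PROFILES.items():
          mask = cv2.inRange(hsv_frame, np.array(profile['lo']), np.array(profile['hi']))
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          if contours:
              largest = max(contours, key=cv2.contourArea)
              if best is None or cv2.contourArea(largest) > cv2.contourArea(best[1]):
                  best = (name, largest, profile['follow_dist'])
      return best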

Resources

https://gitlab.com/ucsd_robocar2/ucsd_robocar_hub2

Acknowledgements

Thanks to Professor Silberman, Dominic, Ivan, and Professor De Oliveira, as well as the other teams.