2021FallTeam7

From MAE/ECE 148 - Introduction to Autonomous Vehicles



Revision as of 05:33, 10 December 2021

Team Members

  • Andrew Liang (ECE)
  • Jiansong Wang (MAE)
  • Shane Benetz (ECE)
  • Kevin Kruse (Extension/BIS)

Project Overview/Proposal

The goal of this project is to enhance the existing framework of the ROS navigation system by integrating a LiDAR, so that the car can avoid obstacles while navigating and then return to the correct route. In short, our upgraded ROS navigation package, running on the Jetson Nano powered autonomous car, allows it to detect objects in its path and correct its steering while still following a lane detection/guidance model based on onboard camera imaging.

Mechanics (Robot schematics and pictures)

Lap videos

Project Schedule / Gantt Chart

Software Development

Calibrating the existing ROS framework

A properly calibrated framework is especially important for our project.

To achieve this, one option is to adjust the control parameters until the behavior fits. A more precise alternative, however, is to apply a specific color filter. To do this, we first started a live stream of the current camera image.

Bild1.png

Once we had the image, we opened a color picker (eyedropper) in Microsoft Word and sampled the color of the yellow center line of the track.

Bild2.png

The picker gives us the color in RGB color space. Using a converter, e.g. https://www.peko-step.com/en/tool/hsvrgb_en.html, we can convert this color code into the HSV color space used by our OpenCV module. Finally, we just have to configure the filter correctly and add some tolerances, because the colors vary, especially at twilight. Once this process is complete, we get a well-calibrated system, as shown in the image below. Note that in this picture even the red line on the left is not detected at all (which is exactly what we aimed for). Lastly, the scales of the HSV color space vary by application: the website uses scales up to 100 or 360, while our robot uses a hue scale up to 180. We therefore used the 360 scale and divided the resulting numbers by 2 to get the desired configuration.


Bild3.png
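The conversion and scaling steps above can be sketched in Python. This is an illustrative sketch, not our exact calibration: the helper names and the tolerance values are made up for the example, and only the scale conversion (hue halved from 360 to OpenCV's 0-180 range, saturation/value stretched to 0-255) follows what we describe above.

```python
import colorsys

def rgb_to_opencv_hsv(r, g, b):
    """Convert a picked 0-255 RGB color to OpenCV's HSV scale
    (H: 0-180, S: 0-255, V: 0-255)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # colorsys returns h in [0, 1); multiplying by 180 is the same as
    # taking the 360-degree hue and dividing by 2, as described above.
    return (round(h * 180), round(s * 255), round(v * 255))

def tolerance_bounds(hsv, h_tol=10, s_tol=60, v_tol=60):
    """Build lower/upper filter bounds around the picked color,
    clamped to the valid OpenCV ranges. Tolerances are illustrative."""
    h, s, v = hsv
    lower = (max(h - h_tol, 0), max(s - s_tol, 0), max(v - v_tol, 0))
    upper = (min(h + h_tol, 180), min(s + s_tol, 255), min(v + v_tol, 255))
    return lower, upper

# Example: pure yellow picked in the color picker, RGB (255, 255, 0)
hsv = rgb_to_opencv_hsv(255, 255, 0)
print(hsv)  # (30, 255, 255) -- half of the 60-degree hue on a 360 wheel
lower, upper = tolerance_bounds(hsv)
```

The resulting `lower`/`upper` tuples are what a filter such as OpenCV's `cv2.inRange` expects for masking out everything except the yellow center line.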

Software Subsystems

Link to source code found here: https://github.com/BunnyFuFuu/ucsd_robo_car_simple_ros (Team7_Final_Project)

The majority of this code originates from Dominic Nightingale's project ucsd_robo_car_simple_ros (https://gitlab.com/djnighti/ucsd_robo_car_simple_ros). We borrowed and modified the existing code base environment, added a pre-existing ROS package to communicate with the onboard LiDAR, and added our own steering logic corresponding to both the lane guidance model and the LiDAR model.
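The kind of steering arbitration described here can be illustrated with a simplified sketch. This is not our actual ROS node: the `steer` helper, the forward-cone angle, and the clearance threshold are all hypothetical, and real code would subscribe to the LiDAR scan and lane-error topics instead of taking plain arguments.

```python
import math

def steer(lane_steering, lidar_ranges, min_clearance=0.5):
    """Combine a lane-guidance steering command with a LiDAR override.

    lane_steering: normalized steering from the lane model (-1 .. 1)
    lidar_ranges:  list of (angle_rad, distance_m) pairs from the scan
    """
    # Consider only returns roughly in front of the car (+/- 45 degrees).
    front = [(a, d) for a, d in lidar_ranges if abs(a) < math.pi / 4]
    blocking = [(a, d) for a, d in front if d < min_clearance]
    if not blocking:
        return lane_steering  # path is clear: follow the lane model
    # Obstacle detected: steer away from the side of the nearest return;
    # once the scan clears, the lane model pulls the car back on route.
    nearest_angle, _ = min(blocking, key=lambda ad: ad[1])
    return -1.0 if nearest_angle >= 0 else 1.0
```

For example, with an obstacle 0.3 m ahead and slightly to the left (negative angle), the sketch commands a hard right turn; with nothing within the clearance distance, it simply passes the lane model's command through.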

Schematics

Links to additional resources (presentations/source code/GitHub/videos)

Acknowledgements

  • Jack Silberman (Professor)
  • Dominic Nightingale (TA)
  • Haoru Xue (TA)