2021FallTeam7

From MAE/ECE 148 - Introduction to Autonomous Vehicles
 
= Team Members =
* Andrew Liang (ECE)
* Jiansong Wang (MAE)
* Shane Benetz (ECE)
* Kevin Kruse (Extension/BIS)


= Project Overview & Proposal =
The goal of this project is to enhance the existing ROS navigation framework with a lidar so that the car can avoid obstacles while navigating and then return to its route. In short, our upgraded ROS navigation package, running on the Jetson Nano powered autonomous car, allows it to detect objects in its path and correct its steering while still following a lane detection/guidance model based on onboard camera imaging. The cursed "DNF" should rarely ever happen: our objective was to build a car that keeps going despite the circumstances, so that unless something else runs into it, it finds its way around obstacles instead of blindly following a straight line.


 
 
'''More in Depth:'''
 
While the car navigates the track using OpenCV lane detection, the lidar runs in the background, scanning for obstacles in front of the car within a certain range.
Out-of-range detections and long distances are filtered out. Once an obstacle is detected in that range, the car steers in a direction based on the
angle and distance feedback from the lidar. The steering magnitude is determined by a proportional controller. Once the car has passed the obstacle and the
obstacle is out of range of the scan, the car keeps executing lane detection, following the yellow stripes on the track.
 
 
 
[[File:NavImage.jpg|400px|right]]
 
[[File:Car3_team7_2021.jpg|300px|left]]
 
 
[[File:Car2_team7_2021.jpg|500px|center]]


= Lap videos =
ROS Laps [https://youtu.be/CBhw0hiJWwo Here]
DonkeyCar Laps at two locations [https://youtu.be/T4n4dW6wzLA Here] and [https://youtu.be/NdxYFkHFzxE Here]


= Project Schedule / Gantt Chart =
[[File:Newgantt.jpg]]


= Software Development =
Two parts are presented here: the first is the calibration and setup of the existing ROS framework; the second is the development of the new software subsystem integrated into that framework.
== Calibrating the existing ROS framework ==
One especially important thing for our project is a properly calibrated framework.
To achieve this, you could simply adjust the control parameters until the result fits. A more precise solution, however, is to apply a specific color filter. To do this, we first started a live transfer of the current camera image.

[[File:Bild1.png]]
Once we had the image, we used the color picker (eyedropper) in Microsoft Word to sample the color of the yellow center line of the track.


[[File:Bild2.png|150px]]
 
The program now gives us the color in the RGB color space. Using a converter, e.g. https://www.peko-step.com/en/tool/hsvrgb_en.html, we can convert this color code into the HSV color code used by our OpenCV module. Finally, we just have to configure this filter correctly and add some tolerances, because the colors vary, especially at twilight. Once this process is complete, we get a perfectly calibrated system, as shown in the image below. Note that in this picture even the red line on the left is not detected at all (which is exactly what we aimed for).
Lastly, we have to mention that the scales of the HSV color space vary by application: the website uses scales up to 100 or 360, while our robot uses a scale up to 180. Thus, we used the 0-360 scale for hue and divided the resulting value by 2 to get the desired configuration.
 


[[File:Bild3.png|350px]]
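To make this concrete, below is a minimal sketch of how such a filter can be applied with OpenCV. The sampled hue, the tolerance bounds, and the file name are illustrative placeholders, not our actual calibrated values.

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Illustrative only: suppose the color picker reported a yellow of roughly
# H = 52 on the 0-360 scale. OpenCV stores hue in 0-179, so divide by 2;
# saturation and value are scaled to 0-255.
hue = 52 // 2

# Add tolerances around the sampled color so the filter still matches at twilight.
lower_yellow = np.array([hue - 10, 80, 80])
upper_yellow = np.array([hue + 10, 255, 255])

frame = cv2.imread("camera_frame.jpg")               # one frame from the live image transfer
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)         # OpenCV loads images as BGR
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)  # white where the yellow line is
</syntaxhighlight>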


== Software Subsystems ==
Link to the source code: [https://github.com/BunnyFuFuu/ucsd_robo_car_simple_ros Team7_Final_Project]
The majority of this code originates from [https://gitlab.com/djnighti Dominic Nightingale's] project [https://gitlab.com/djnighti/ucsd_robo_car_simple_ros '''ucsd_robo_car_simple_ros''']. We borrowed and modified the existing code base, added a pre-existing ROS package to communicate with the onboard LiDAR, and added our own steering logic combining the lane guidance model and the LiDAR model.
[[File:Softwarediagram_team7_2021.jpg|600px|right]]
'''LiDAR:''' detects objects at 5-degree increments in a cone in front of the robot, covering -65° to +65° so that obstacles to the sides of the robot are not picked up. This cone has a range of 0.15 to 1.2 meters.

Output: a 3-element Float32 array of [distance to object in meters, angle of strongest detection in degrees, object-detected flag].
[[File:Lidardata.jpg|1050px|left]]
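The snippet below is a rough sketch of how such a node can be written against the standard sensor_msgs/LaserScan message; it is not a copy of our actual node. The topic names, the choice of the nearest return as the "strongest" detection, and the omission of the 5-degree binning are simplifying assumptions.

<syntaxhighlight lang="python">
#!/usr/bin/env python
import math
import rospy
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Float32MultiArray

MIN_RANGE, MAX_RANGE = 0.15, 1.2   # meters, depth of the detection cone
MAX_ANGLE = 65.0                   # degrees, half-width of the cone

def scan_callback(scan):
    best_dist, best_angle, detected = 0.0, 0.0, 0.0
    for i, r in enumerate(scan.ranges):
        angle = math.degrees(scan.angle_min + i * scan.angle_increment)
        angle = (angle + 180.0) % 360.0 - 180.0      # normalize to [-180, 180)
        # keep only returns inside the forward cone and distance window
        if abs(angle) > MAX_ANGLE or not (MIN_RANGE <= r <= MAX_RANGE):
            continue
        if detected == 0.0 or r < best_dist:         # nearest in-cone return wins
            best_dist, best_angle, detected = r, angle, 1.0
    pub.publish(Float32MultiArray(data=[best_dist, best_angle, detected]))

if __name__ == "__main__":
    rospy.init_node("lidar_obstacle_detector")
    pub = rospy.Publisher("obstacle_info", Float32MultiArray, queue_size=1)
    rospy.Subscriber("scan", LaserScan, scan_callback)
    rospy.spin()
</syntaxhighlight>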
Potential Improvements:
* LiDAR currently works like a directional range sensor
* With a better LiDAR or a more detailed point cloud, one could train an actual object detection algorithm based on an existing backbone
Given the three object-detection outputs, we can create a steering equation that steers around obstacles while still tracking waypoints.
Baseline steering equation (standard P controller):
'''Θ = -(k<sub>p</sub> * err<sub>x</sub>)'''
New steering equation (two-part P controller for the second sensor input): the added term has the opposite sign because the baseline steers toward the waypoint, while we want to steer away from the detected obstacle.
[[File:Equation_team7.jpg|1000px]]
where k<sub>p</sub> is the steering sensitivity from 0 to 1, err<sub>x</sub> is the distance to the center waypoint in meters, obj<sub>detected</sub> is 0 or 1 depending on whether there was a detection,
Θ<sub>detection</sub> is the angle of detection, Θ<sub>max</sub> is 65° for our angular detection range, and obj<sub>dist</sub> is the distance from the ego vehicle to the detected object.
This equation does not differ from the old one when there is no detection. When there is a detection, it adds a scaled term whose sign depends on which side the object
was detected on (if it is slightly to the left of the vehicle, steer right, and vice versa). The term is scaled by the inverse normalized angular distance from 0, so the turn is proportionally weaker the farther the object is from center, and by the inverse normalized distance to the detected object, so the turn is proportionally weaker the farther away the object is. Because all of these factors are normalized, the added portion is always less than 1: it is designed to provide a correction to the camera navigation, not to overshadow it completely.
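The exact formula is shown in the image above; the function below is only a hypothetical reconstruction from the written description, with the sign convention for the detection angle, the clamping behavior, and the default gain chosen arbitrarily.

<syntaxhighlight lang="python">
def steering_output(err_x, obj_detected, theta_detection, obj_dist,
                    k_p=0.5, theta_max=65.0, max_dist=1.2):
    """Hypothetical reconstruction of the two-part P controller described above."""
    # Baseline: steer toward the lane waypoint.
    theta = -(k_p * err_x)
    if obj_detected:
        # Side term: if the obstacle is slightly to the left, steer right,
        # and vice versa (sign convention assumed here).
        side = 1.0 if theta_detection < 0 else -1.0
        # Inverse normalized angular offset: weaker correction when the
        # obstacle is far from the center of the cone.
        angle_factor = 1.0 - abs(theta_detection) / theta_max
        # Inverse normalized distance: weaker correction when the obstacle is far away.
        dist_factor = 1.0 - obj_dist / max_dist
        theta += side * angle_factor * dist_factor   # added portion stays below 1
    return theta
</syntaxhighlight>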
Potential Improvements:
* Adjust the algorithm to smooth out the harshness of the driving and steering
* Design it to look ahead a few waypoints and profile a path of motion rather than relying on the camera always having the next waypoint in frame (localization and odometry with motor encoders, an IMU, or stereo camera depth estimation would be a huge plus)
* Implement I and D control logic so the car is better at staying on the track in the first place
== Final Product Example ==
Our car can pass obstacles on the way to its destination:
[[File:Object_avoided.mp4|2000px|center]]
... and it can even pass other cars!
(our DC/DC converter was broken, so we had to use ground power)
[https://www.youtube.com/watch?v=HeHp9kNEgbo High speed passing]
[https://www.youtube.com/watch?v=jAuHQc4hMnU Low speed passing]
== Schematics ==
[[File:Circuitry.jpg|1000px]]
[[File:Pieces_together_team7_2021.jpg|600px|right]]
[[File:Pieces_team7_2021.png|800px]]
= Links to additional resources (presentations/source code/GitHub/videos) =
[https://github.com/BunnyFuFuu/ucsd_robo_car_simple_ros Github Source Code for Project Here]

[https://3.basecamp.com/3838715/buckets/24178487/vaults/4244929417 Basecamp folder containing all videos/weekly presentations]


= Acknowledgement =
* Jack Silberman (Professor)
* Dominic Nightingale (TA)
* Haoru Xue (TA)
