2022WinterTeam5

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 07:22, 22 March 2022 by 2022WinterTeam5 (talk | contribs) (Filled in the Updated steering and throttle commands section)

Team 5: Charging On The Go

TheVehicles.jpg

Team Members

  • Andrew Hallett, MAE
  • Juan SanJuan, ECE

Project Overview

Concept Inspiration:

  • We were inspired by the in-flight refueling of aircraft. Applying this concept to electric autonomous vehicles, we could potentially recharge vehicles out on the road without them needing to slow down or stop their current task. In this project we explore methods of autonomous driving and tracking, while the recharging concept is visualized only through an augmented reality overlay on the back of the low-power-state car. If we were to explore the charging method itself in another project, we considered approaches such as cable connections, inductive charging, or regenerative braking.

Goals:

  • Build 2 autonomous vehicles
  • Implement autonomous driving using ROS2
  • Implement a tracking solution to identify the low power state car
  • Use the tracking solution to update the steering and throttle controls
  • Simulate recharging through a visual representation

Gantt Chart

Gantt-wi-22-team5.jpg

Autonomous Vehicles

CAD/Mechanical Designs

There was no need for two entirely different designs here. The black car was the original design, which featured a hinged mechanism to flip open the electronics bay. The blue car was updated based on features that didn't work on the first vehicle. The goal for the second design was an open, sleeker concept so that parts were easily accessible if troubleshooting needed to be performed. This vehicle also included a new feature that clipped onto the rear bumper to hold the purple tracking-solution card.

Chargingcar-wi-22-team5.jpg Leadcar-wi-22-team5.jpg

Wiring

Schematic-wi-22-team5.jpg

DonkeyCar

3 Laps Video

DonkeyCar Autonomous Laps

OpenCV/ROS2

3 Laps Video

ROS2 Autonomous Laps

Final Project

Autonomous Driving

The first goal of the project was to get both vehicles driving on the class track using the provided ROS2 code. This process involved calibrating the camera to track the contours of the yellow lines on the road and calibrating the steering and throttle controls.

Issues:

  • The camera calibration depends on the lighting of the environment.
  • We tried to calibrate the motors so the vehicles would drive as slowly as possible on the track. This became an issue whenever we swapped batteries, which would sometimes result in the car not moving at all when testing our code.

Solutions:

  • It is recommended to do the calibration at night, when the lighting is more consistent, and to always test on the track in those same conditions.
  • To fix the battery issue, we would run "ros2 launch ucsd_robocar_nav2_pkg all_components.launch.py" to activate all components and then send velocity commands to the motor using "ros2 topic pub /cmd_vel geometry_msgs/msg/Twist '{linear: {x: 0.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'", adjusting linear.x to find the minimum throttle threshold every time we swapped a battery. In the future it would be nice to have an encoder on the wheels to control speed based on the rotation of the tires.
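The threshold-finding procedure above can be sketched as a simple bisection. This is a hypothetical illustration, not our actual tooling: `car_moves` stands in for the manual check of whether the car moved after publishing a Twist with that linear.x value.

```python
# Hypothetical sketch: bisect for the minimum throttle that moves the car.
# `car_moves(x)` stands in for publishing a Twist with linear.x = x via
# `ros2 topic pub /cmd_vel ...` and observing whether the car moves.

def find_min_throttle(car_moves, low=0.0, high=1.0, tol=0.01):
    """Binary-search the smallest linear.x value for which the car moves."""
    while high - low > tol:
        mid = (low + high) / 2
        if car_moves(mid):
            high = mid      # car moved: the threshold is at or below mid
        else:
            low = mid       # car did not move: the threshold is above mid
    return high

# Example with a fake battery whose stall threshold happens to be 0.32:
threshold = find_min_throttle(lambda x: x >= 0.32)
```

In practice each probe is a manual test on the track, so a coarse sweep of linear.x values serves the same purpose.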

Vehicle Tracking

The next goal of the project was for the charging car to be able to track the low-power-state car once it caught up to it on the track. We explored complete tracking solutions such as AprilTag and ArUco Markers to accomplish this goal. Both of these methods are able to compute the precise 3D position, orientation, and identity of their respective tags relative to the camera. We wanted to use this information to update the steering based on how far the centroid of the tag was from the center of the camera frame. We also wanted to use the distance to update the throttle commands to either speed up or slow down in order to maintain a pacing distance of 2-3 ft behind the low-power-state vehicle.

Issues:

  • Lack of coding experience prior to the class.
  • We had trouble finding examples that implemented these libraries with ROS.
  • The examples we did find were built using a different ROS distribution.

Solutions:

  • The code we needed to track the low-power-state vehicle was largely already solved by the provided ROS2 code. We spent time learning how it tracked the lines of the road and extended it to also track a purple tag on the back of the low-power-state vehicle. To accomplish this, we created a new camera color calibration to track the purple tag and added logic to update the steering and throttle commands.
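The new color calibration amounts to an HSV threshold for purple. The sketch below reimplements the thresholding in plain NumPy (equivalent to OpenCV's cv2.inRange); the bounds are illustrative, not our actual calibrated values.

```python
import numpy as np

# Illustrative HSV bounds for the purple tag -- the real calibrated values
# depend on the camera and lighting and are not shown here.
PURPLE_LO = np.array([125, 80, 80])    # lower (hue, saturation, value)
PURPLE_HI = np.array([155, 255, 255])  # upper (hue, saturation, value)

def purple_mask(hsv):
    """NumPy equivalent of cv2.inRange: 255 where every channel is in range."""
    in_range = np.all((hsv >= PURPLE_LO) & (hsv <= PURPLE_HI), axis=-1)
    return (in_range * 255).astype(np.uint8)

# Tiny 1x2 HSV "image": one purple-ish pixel, one yellow-ish pixel.
hsv = np.array([[[140, 200, 200], [30, 200, 200]]], dtype=np.uint8)
mask = purple_mask(hsv)
```

The resulting binary mask is what gets fed to the contour finder, exactly as the yellow-line mask is in the provided code.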

Trackingpurple.jpg

Updating Steering and Throttle Commands

Once we began our project, we decided to build on the source code of the lane detection package in order to move our charging car. For the steering, we implemented a new mask to track the color purple alongside the original mask that tracks the yellow lines on the road. After this, our logic was simple:

In the Lane_detection node:

  1. Using the masks created for each color, find both the yellow and purple contours (if any) using OpenCV's findContours function.
  2. Check whether any purple contours were found.
  3. If so, calculate the area of the biggest contour and apply the homography matrix of the battery to the purple contour.
  4. If the area is big enough (meaning the car is close enough to start tracking), calculate the error for the purple contour.
  5. If the area is not big enough, calculate the error for the yellow contour(s).
  6. Publish the error and the area of the purple contour (Area = 0 if not found).
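The selection logic in steps 2-6 can be sketched as a single function. This is an illustrative reconstruction, not the node's actual source: the contour lists, threshold, and frame width are assumed names and values.

```python
# Sketch of the contour-selection logic in the Lane_detection node.
# yellow_contours / purple_contours are lists of (centroid_x, area) pairs,
# standing in for processed cv2.findContours results. Constants are
# illustrative assumptions.

MIN_PURPLE_AREA = 500   # "close enough to start tracking" threshold
FRAME_CENTER_X = 320    # half of an assumed 640-pixel-wide frame

def compute_error(yellow_contours, purple_contours):
    """Return (steering_error, purple_area) as published by the node."""
    if purple_contours:
        cx, area = max(purple_contours, key=lambda c: c[1])  # biggest contour
        if area >= MIN_PURPLE_AREA:
            # Close enough: steer toward the purple tag on the lead car.
            return cx - FRAME_CENTER_X, area
    if yellow_contours:
        # Otherwise fall back to the yellow lane lines.
        cx = sum(c[0] for c in yellow_contours) / len(yellow_contours)
        return cx - FRAME_CENTER_X, 0
    return 0, 0  # nothing found: publish no correction

# Purple tag big enough -> track purple; too small -> track yellow.
error_purple, area_purple = compute_error([(300, 800)], [(400, 900)])
error_yellow, area_yellow = compute_error([(300, 800)], [(400, 100)])
```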

In the Lane_Guidance node:

  1. Get the data from the Lane_detection node.
  2. For the steering, we use the same error-based steering method used in the original lane guidance source code.
  3. For the throttle: if no purple was found (Area = 0), behave the same as the original code; otherwise, calculate where the area falls between the minimum area (how far away we want the car to start following the purple tag) and the maximum area (how close we want to be to the lead car).
  4. Publish the steering and throttle commands.
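Step 3's throttle calculation can be sketched as a linear interpolation over the area range. This is an illustrative reconstruction under assumed constants, not the node's actual values.

```python
# Sketch of the throttle logic in the Lane_Guidance node: as the purple
# area grows from MIN_AREA (start following) toward MAX_AREA (close enough
# to the lead car), throttle is scaled down. All constants are assumptions.

MIN_AREA = 500.0         # area at which we start following the purple tag
MAX_AREA = 5000.0        # area at which we are as close as we want to be
CRUISE_THROTTLE = 0.35   # throttle used by the original lane-following code

def throttle_from_area(area):
    if area <= 0:
        return CRUISE_THROTTLE           # no purple found: original behavior
    frac = (area - MIN_AREA) / (MAX_AREA - MIN_AREA)
    frac = min(max(frac, 0.0), 1.0)      # clamp percentage to [0, 1]
    return CRUISE_THROTTLE * (1.0 - frac)  # slow down as we get closer
```

At MAX_AREA the throttle reaches zero, which is what holds the pacing distance behind the lead car.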

- In Progress -

Pose Estimation

Although we had trouble implementing AprilTags and ArUco Markers, we were still able to use the homography matrix idea to overlay the battery state on the purple tag. To accomplish this goal, we used the provided contour-finding code to locate the purple box, which gave us the coordinates of its corner points. We then used OpenCV's getPerspectiveTransform() to find the matrix required to rotate, translate, and scale our battery image from its default shape in the file onto the purple tag on screen. warpPerspective() then applies that matrix to our image to make the transformation. After that, we separated the battery image from its black background and stitched the two images together. Below is our static demo getting the code to work on a still image. In the final demonstration, you can see this concept working with live video.
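What getPerspectiveTransform() computes can be shown directly in NumPy: solving for the 3x3 homography H that maps the battery image's four corners onto the tag's four corners. The corner coordinates below are illustrative, not from our actual frames.

```python
import numpy as np

# NumPy sketch of what cv2.getPerspectiveTransform computes: the 3x3
# homography H mapping 4 source corners onto 4 destination corners.
# cv2.warpPerspective then resamples the battery image using this H.

def perspective_transform(src, dst):
    """Solve for H (with h33 fixed to 1) such that dst ~ H @ src."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), cleared of the
        # denominator; likewise for v. Two linear equations per point pair.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Corners of an assumed 100x60 battery image file, mapped onto the
# detected (skewed) purple-tag corners on screen -- illustrative values.
src = [(0, 0), (100, 0), (100, 60), (0, 60)]
dst = [(210, 120), (330, 130), (325, 205), (215, 195)]
H = perspective_transform(src, dst)

# Applying H to a source corner (homogeneous coordinates, then divide by w)
# lands it on the matching tag corner.
p = H @ np.array([100.0, 0.0, 1.0])
corner = (p[0] / p[2], p[1] / p[2])
```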

Full.png WarpPerspective.jpg WarpPerspectiveOverlay.jpg

Demonstrations

Final Demonstration

Presentation

Final Presentation

Challenges

- In Progress -

Future Developments

- In Progress -

Acknowledgements

- In Progress -