2021SpringTeam5

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team Members

James Long - ECE

Austin Chung - MAE

Xuebin Zhu - MAE

Sydney Larson - ECE

Project Overview: Autonomous Terrestrial Drone Carrier

Mission

  • Increase the range and reliability of remote drone operations

Objective

  • Drone automatically detects Robocar
  • Drone automatically lands on Robocar

Demo Procedure

  • Drone takes off and moves away from the car
  • The car is driven into the drone's proximity
  • Trigger automatic landing procedure
  • Drone automatically lands on the car

Hardware

Jetson Case

To add the Jetson Nano to the RC car, we needed a case to house the computer. The housing protects the device and ensures a secure attachment to the RC car. Instead of designing a new case, we printed an existing design from MakerBot Thingiverse.

Connector Plate Design

Jetson Case Base

Jetson Case Cover


Connector Plate

Because the acrylic base plate was made before the Jetson Nano case, we needed an intermediate piece to connect the two. The resulting design is a square plate with eight screw holes: four to attach the Jetson case and four to attach the acrylic plate.

Camera Mount


Camera Mount

A housing was needed to hold the camera, protect it from impacts, and angle its view to see the track. To meet these goals, a three-part camera mount was designed.

The Camera Mount Tower is the piece that connects the camera box to the acrylic baseplate. It has four screw holes at its base for the acrylic and multiple screw holes at the top to set the camera box at different angles. The top portion was designed so that the camera box mounts stably to the tower at two screw contact points.

The Camera Box is composed of two pieces, the Camera Box Front and the Camera Box Back. The Camera Box is designed to fit the camera snugly so that it remains protected in the event of an impact.


Camera Box Back, Camera Box Front, Camera Mount Tower


Base Plate

Labeled Hardware Components

Previous versions of the team's RC car had a 340 mm x 100 mm rectangular acrylic baseplate. This baseplate was used to mount the electronic components so that they remained protected during use of the car. For this project, the baseplate had to be enlarged so that a larger AprilTag could be attached and to increase the landing area for the drone. To increase the surface area of the baseplate, two trapezoids were added to its longer sides. These trapezoids have dimensions of base1 = 300 mm, base2 = 200 mm, and height = 50 mm.


Old Baseplate Design vs New Baseplate Design


DJI Tello Drone

The DJI Tello drone used in our project is a very small, lightweight robot. While its range and battery life are very limited, it has decent agility and camera performance.

DJI Tello Drone

Tello User Manual

Tello Specs

Mirror Mount

A very important part for detecting the AprilTag on the car is the mirror and mirror mount, which let the camera on the Tello drone look vertically down. The drone's camera is originally oriented toward the ground at an angle of arcsin(1/9). When designing the mirror mount, we therefore adjusted this angle so that the view points 90 degrees down. Another important parameter of the mirror mount is the distance from the mirror to the camera: the mirror should be close enough to cover the camera's entire field of view, but with this design the piece cannot be made too thin or it loses structural strength. The original design of this mirror mount is from https://www.thingiverse.com/thing:4697733, and we revised it to obtain the mount shown below.

Mirror Mount Side View and Front View

Wiring Schematic

We used the schematic that Team 5 from Winter 2021 created to wire all of the Robocar components together. Since the only other hardware we added was the drone, which is wireless, there was nothing to add to this schematic.

Software

DonkeyCar Framework

In the first half of the quarter we used DonkeyCar, a deep learning approach to autonomous driving. First, we trained and tested an AI driving model on the simulator. Then we repeated the process with the real vehicle and track. The end goal was to run 3 autonomous laps on the Warren track, as fast as possible.

DonkeyCar Autonomous Laps

Manual Control

  • Allows remote control using joystick

Autonomous Driving

  • Computer Vision based AI model
  • Collect lap images via manual driving

Virtual Simulation

  • Data collection and model testing on simulator
  • Different tracks available

Real Vehicle

  • Customized car design
  • Performance affected by various factors
    • Lighting, camera position, track environment

ROS

After getting our 3 autonomous laps using DonkeyCar, the next step was getting 3 autonomous laps using ROS.

ROS is short for Robot Operating System. ROS includes many built-in libraries for robot control. We are using a ROS package developed by our TA Dominic that offers line detection and lane detection as options for autonomous driving.

ROS Autonomous Laps

Throttle and Steering Calibration

This is the very first step when it comes to using this package.

When calibrating the steering, you want to see that when you pass messages of -1 and 1, the wheels turn to the left and right respectively. Once you have done this, you shouldn't have to calibrate the steering again.

When calibrating the throttle, start with very small values because of how sensitive the throttle is. If values of 0.01, 0.02, etc. aren't working, your battery is likely low; keep increasing the values until the wheels start turning. If your battery is really low, the values could get as high as 0.15. With the forward values that high, the reverse values will also be higher, and you may find that passing in a value of 0.01 makes the car go backwards. You will likely need to re-calibrate the throttle every so often during testing as your battery drains.

Once you have figured out what throttle values work for you, then you'll place them in the lane_guidance_node.py script.
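
For reference, the -1/1 steering messages and small throttle values described above can be published from a short rospy script. This is a minimal sketch only; the steering and throttle topic names and the std_msgs/Float32 message type are assumptions based on our setup, so check the package's nodes for the actual interface.

  #!/usr/bin/env python
  # Minimal calibration helper (sketch). The 'steering' and 'throttle' topic
  # names and the Float32 message type are assumptions -- verify them against
  # the package's node definitions before running this on your car.
  import rospy
  from std_msgs.msg import Float32

  rospy.init_node('calibration_test')
  steering_pub = rospy.Publisher('steering', Float32, queue_size=1)
  throttle_pub = rospy.Publisher('throttle', Float32, queue_size=1)
  rospy.sleep(1.0)                      # give the publishers time to connect

  steering_pub.publish(Float32(-1.0))   # wheels should turn fully to one side
  rospy.sleep(2.0)
  steering_pub.publish(Float32(1.0))    # wheels should turn fully to the other side
  rospy.sleep(2.0)
  steering_pub.publish(Float32(0.0))    # back to center

  throttle_pub.publish(Float32(0.02))   # start small and increase slowly
  rospy.sleep(1.0)
  throttle_pub.publish(Float32(0.0))    # stop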

Tuning Camera Parameters

Dominic provides a launch file, ucsd_robo_car_calibration_launch.launch, to launch the two scripts needed to find these values. However, you may repeatedly get an error that this file does not exist when you launch it, even though it does; in that case you can simply run the two scripts in separate terminals. Many windows will then pop up showing the different image-processing steps that happen when you eventually run the lane or line detection, along with a window of sliders that change how the images are processed. You can see the effect these sliders have by looking at the other windows. The slider values are automatically saved even after Ctrl+C, but I suggest writing them down or taking a screenshot as you change them so you can figure out what did and didn't work.

Line Detection Node

This node subscribes to the camera_rgb topic and uses the values found in the camera-tuning step above. From these values, it publishes whether or not a line is found to the centroid topic, and whether or not a centroid is found determines how fast the car moves.
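
The overall structure of such a node can be sketched as below. This is only an outline under our assumptions: the sensor_msgs/Image input, the std_msgs/Float32 centroid output, and the HSV threshold values are placeholders, so consult the actual package for the real message types and logic.

  #!/usr/bin/env python
  # Rough sketch of a line-detection node. Topic names, message types, and the
  # HSV bounds are placeholders; see the actual package for the real interface.
  import rospy
  import cv2
  from sensor_msgs.msg import Image
  from std_msgs.msg import Float32
  from cv_bridge import CvBridge

  bridge = CvBridge()
  centroid_pub = None

  def image_callback(msg):
      frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      # Threshold with the bounds found while tuning the camera parameters
      mask = cv2.inRange(hsv, (20, 80, 80), (40, 255, 255))
      m = cv2.moments(mask)
      if m['m00'] > 0:                                  # a line was found
          cx = m['m10'] / m['m00']                      # centroid x in pixels
          error = (cx - mask.shape[1] / 2.0) / (mask.shape[1] / 2.0)
          centroid_pub.publish(Float32(error))          # normalized offset from center
      # If nothing is found, the guidance node decides how to slow down or stop

  if __name__ == '__main__':
      rospy.init_node('line_detection_sketch')
      centroid_pub = rospy.Publisher('centroid', Float32, queue_size=1)
      rospy.Subscriber('camera_rgb', Image, image_callback)
      rospy.spin()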

Lane Detection Node

This node is very similar to line detection, but if a green mask is applied in the camera-tuning step it can detect multiple lines in the frame instead of just one. For the most part we found it easier to only look at the dashed center line on the track rather than trying to see both the dashed line and the outside line.

Issues and Problems

For the most part this is a plug-and-play package, however, we ran into a few issues when running it.

  • The Robocar's front axle is used for steering, and on our particular car the axle was not straight. So when doing the calibration steps it is very important to find where center is for your car and adjust accordingly in the lane_guidance_node.py file.
  • It can take a long time to figure out what camera values to use in order to properly see the lanes, whether you are trying to see the dashed center line or the outside white line. A great starting point is to adjust what the camera is seeing so you aren't taking in too much extra information and are only seeing the lanes in the camera.
  • If you find that the car is not turning enough, increasing the kp value in the lane_guidance_node will help with this (see the sketch after this list).
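
As a rough illustration of where kp fits in, the guidance step is essentially a proportional controller on the centroid error. The sketch below shows the idea only; the function name, gain, and throttle values are placeholders, not the actual lane_guidance_node.py code.

  # Sketch of the proportional steering idea behind the guidance node.
  # compute_commands, kp, and the throttle values are illustrative placeholders.
  def compute_commands(centroid_error, kp=1.0,
                       throttle_straight=0.02, throttle_turn=0.015):
      """centroid_error: normalized line offset from image center, in [-1, 1]."""
      steering = max(-1.0, min(1.0, kp * centroid_error))  # raise kp if turns are too shallow
      # Slow down in turns, use the calibrated straight-line throttle otherwise
      throttle = throttle_turn if abs(steering) > 0.5 else throttle_straight
      return steering, throttle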

Drone Programming

Software Requirements

TelloPy

  • The Python library we use for controlling the DJI Tello. It supports all motion commands and video streaming.

Pygame

  • The library for implementing keyboard control

PyAV

  • The library used in TelloPy for video streaming

pupil_apriltags

  • The Python wrapper for AprilTag 3, the newest version of the tag detector

OpenCV

  • The library for displaying and processing images

Manual Control

We implemented the drone's keyboard manual control based on the example in TelloPy's repo (a minimal sketch follows the list below). It includes:

  • Moving drone with 4 degrees of freedom
  • Image capture and video recording
  • Trigger for automatic landing algorithm
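
As a rough illustration, a TelloPy + Pygame keyboard loop can look like the sketch below. The key bindings and speed value are our own placeholders here, and the sketch leaves out the image capture, recording, and landing-trigger features that our full controller includes.

  # Minimal keyboard-control sketch using TelloPy and Pygame. Key bindings and
  # SPEED are placeholder choices, not the bindings from our full controller.
  import pygame
  import tellopy

  drone = tellopy.Tello()
  drone.connect()
  drone.wait_for_connection(60.0)

  pygame.init()
  pygame.display.set_mode((320, 240))    # a window is needed to receive key events

  SPEED = 30
  running = True
  while running:
      for event in pygame.event.get():
          if event.type == pygame.KEYDOWN:
              if event.key == pygame.K_t:
                  drone.takeoff()
              elif event.key == pygame.K_l:
                  drone.land()
              elif event.key == pygame.K_w:
                  drone.forward(SPEED)
              elif event.key == pygame.K_s:
                  drone.backward(SPEED)
              elif event.key == pygame.K_a:
                  drone.left(SPEED)
              elif event.key == pygame.K_d:
                  drone.right(SPEED)
              elif event.key == pygame.K_ESCAPE:
                  running = False
          elif event.type == pygame.KEYUP:
              # Simplified: stop pitch and roll whenever any key is released
              drone.forward(0)
              drone.left(0)

  drone.quit()
  pygame.quit()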


Apriltag Localization

Procedure

  1. Calibrate camera, obtain intrinsic matrix and distortion parameters
  2. Use the camera to capture an image containing the AprilTag
  3. Start localization algorithm

Localization Algorithm

  • Input
    • Apriltag parameters: tag family, tag id, position and euler angles in global coordinate system
    • Camera parameters: intrinsic matrix and distortion parameters
    • Camera image
  • Output
    • Camera pose: camera position and euler angles in global coordinate system
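
A rough sketch of this input/output spec with pupil_apriltags and OpenCV is shown below. The intrinsics, distortion coefficients, tag size, and image path are placeholders to be replaced with the values from step 1 of the procedure, and the inversion of the tag pose to obtain the camera pose is our interpretation of the output step rather than the only possible formulation.

  # Sketch of AprilTag-based camera localization with pupil_apriltags + OpenCV.
  # fx, fy, cx, cy, dist, TAG_SIZE and 'frame.png' are placeholders -- use the
  # values and frames from your own calibrated camera.
  import cv2
  import numpy as np
  from pupil_apriltags import Detector

  fx, fy, cx, cy = 920.0, 920.0, 480.0, 360.0    # intrinsics (placeholders)
  dist = np.array([0.1, -0.2, 0.0, 0.0, 0.0])    # distortion (placeholder)
  TAG_SIZE = 0.15                                # tag edge length in meters (placeholder)

  detector = Detector(families='tag36h11')

  frame = cv2.imread('frame.png')                # placeholder: a frame from the stream
  assert frame is not None, 'replace frame.png with a real captured image'
  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
  K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])
  gray = cv2.undistort(gray, K, dist)

  detections = detector.detect(gray, estimate_tag_pose=True,
                               camera_params=(fx, fy, cx, cy), tag_size=TAG_SIZE)

  for det in detections:
      # pose_R, pose_t give the tag pose in the camera frame; inverting the
      # transform gives the camera pose in the tag frame (origin at tag center).
      R_tc = det.pose_R.T
      cam_pos = -R_tc @ det.pose_t               # camera position in tag coordinates
      yaw = np.degrees(np.arctan2(R_tc[1, 0], R_tc[0, 0]))
      print(det.tag_id, cam_pos.ravel(), yaw)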


Drone Landing

The landing algorithm is mostly hard-coded. It has many tunable hyperparameters that control the drone's speed and sensitivity; a simplified sketch of the control loop is given after the list below.

Automatic Landing Algorithm

  • Input
    • Drone position and euler angles: in global coordinate system with origin at tag center
  • Output
    • Drone motion: update speed in all 4 degrees of freedom
    • Land signal: land when predefined conditions are met
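
The sketch below illustrates the idea of one control step under our assumptions: a proportional mapping from the drone's position and yaw relative to the tag to TelloPy stick commands, with a land signal once the drone is centered and low enough. The gains, tolerances, and axis sign conventions are placeholders, not the hyperparameters we actually tuned.

  # Simplified landing-step sketch (proportional control). Gains, tolerances,
  # and axis signs are placeholders, not our tuned hyperparameters.
  def clamp(v, lo=-1.0, hi=1.0):
      return max(lo, min(hi, v))

  def landing_step(drone, pos, yaw_deg, kp_xy=0.4, kp_z=0.3, kp_yaw=0.01,
                   xy_tol=0.10, z_land=0.40):
      """pos = (x, y, z): drone position in the tag frame, origin at tag center.
      Returns True once the land signal has been issued."""
      x, y, z = pos
      drone.set_roll(clamp(-kp_xy * x))             # drift left/right toward tag center
      drone.set_pitch(clamp(-kp_xy * y))            # drift forward/back toward tag center
      drone.set_yaw(clamp(-kp_yaw * yaw_deg))       # keep the drone aligned with the tag
      if abs(x) < xy_tol and abs(y) < xy_tol:
          drone.set_throttle(clamp(-kp_z * (z - z_land)))   # descend while centered
          if z < z_land:
              drone.land()                          # land signal: conditions are met
              return True
      else:
          drone.set_throttle(0.0)                   # hold altitude until centered
      return False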


Multiprocessing

Multiprocessing is applied to separate video streaming from drone localization, so that the computationally heavy localization algorithm does not delay updates to the camera feed.
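
A minimal sketch of this split with Python's multiprocessing module is shown below. The queue-based hand-off and the stand-in functions are our own illustration of the idea, not our actual module layout.

  # Sketch: run the heavy localization step in a worker process so the main
  # process keeps displaying frames without delay. detect_and_localize() and
  # the frame loop are stand-ins for the real AprilTag and video-stream code.
  import multiprocessing as mp
  import time

  def detect_and_localize(frame):
      time.sleep(0.05)                       # stand-in for the slow AprilTag computation
      return ('pose_for_frame', frame)

  def localization_worker(frame_queue, pose_queue):
      while True:
          frame = frame_queue.get()
          if frame is None:                  # sentinel: shut the worker down
              break
          pose_queue.put(detect_and_localize(frame))

  if __name__ == '__main__':
      frame_queue, pose_queue = mp.Queue(maxsize=1), mp.Queue()
      worker = mp.Process(target=localization_worker, args=(frame_queue, pose_queue))
      worker.start()

      for frame in range(100):               # stand-in for the Tello video stream
          # display the frame here immediately; detection never blocks this loop
          if frame_queue.empty():
              frame_queue.put(frame)         # hand only the latest frame to the worker
          while not pose_queue.empty():
              pose = pose_queue.get()        # newest pose feeds the landing controller
          time.sleep(0.03)                   # ~30 fps pacing

      frame_queue.put(None)
      worker.join()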


Performance

Automatic Landing

Indoor Performance

  • Automatic landing usually takes 30 to 70 seconds
    • Depending on the drone's starting position

Outdoor Performance

  • Very susceptible to wind conditions due to the drone's light weight
  • Lighting conditions (e.g. too bright) can also affect AprilTag detection


Issues and Problems

Package Installing

  • PyAV and Pygame can be hard to install on Linux
  • We used PyCharm on a Windows PC

Video Streaming

  • The video streaming functionality from TelloPy requires MPlayer and does not work on Windows (we wrote our own)
  • There is a time delay at the beginning of the stream, but it disappears very shortly
  • Glitches occur occasionally and will affect AprilTag detection in those frames

Multiprocessing

  • Drone control and drone video streaming cannot be handled in separate processes


Code and Resources

Our Github Repo

TelloPy Library

DJITelloPy Library (another Tello library, but its video streaming does not work on our Windows PC)

Drone Programming Tutorial

Tello SDK

ROS1 Tello Package

Thoughts for Future

Here we'd like to share some thoughts on potential improvements to the current work and ideas for extending this project into a larger one.

Improving Current Work

  • Improve drone video stream quality and develop a new method to stream video
  • Automatic landing algorithm: improve it by sharing more relevant data between processes

Extension of this Project

  • Autonomous vehicle: self-navigate to the drone and retrieve it
  • ROS Interface of drone: implement all functionality with ROS
    • Notice: a ROS package for TelloPy already exists, but only for ROS1 Kinetic
  • Multiple Autonomous Robots & Computers
    • Coordinating drone and car together
    • Multiple computing device handling different tasks: image processing, communication, etc.

Our Message

We sincerely hope that this project will be continued by future teams. Please do not hesitate to reach out to us if you have any questions.

Imagine how COOL it will be when we build a system of autonomously interacting robots! ;)