2021WinterTeam5

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Team 5 Robocar
Car build in progress.


Team Members

• Matthew Henry (MAE)
• Aldo Yew Siswanto (MAE)
• Timothée Doumard (UPS) - (UCSD Extension)
• Bret Gustafson (ECE)

Project Overview

The goal of this project was to create a fully autonomous car capable of quick obstacle detection, avoidance, and navigation through depth image processing using an Intel RealSense D435 camera. The camera captures both stereo depth and RGB images of the track ahead in real time. Depending on the detected obstacle's trajectory (angle and distance relative to the camera center), the car adjusts its steering and throttle profiles to avoid the obstacle.

GitHub repo: https://github.com/bfgustaf/ROS_ECE148/tree/master

Project Design

We started our project by building the car to train and drive autonomously on a preset race track. We designed an acrylic laser-cut baseplate to mount all the electronics, along with a modular 3D-printed camera mount. We also 3D printed a protective case for the Jetson Nano; the link to that design can be found here. After producing each of our components, we assembled the car and trained it by driving numerous laps on the provided track.

After successfully running the car autonomously around the provided track, we moved on to the second part of our project. We replaced the initial camera with the depth-sensing Intel RealSense camera and made the necessary changes to the camera mount design to fit it. We then wrote the ROS and OpenCV code needed for our car to avoid obstacles.

Mechanical Design

Jetson Nano case

Courtesy of ecoiras, https://www.thingiverse.com/thing:3649612

Car frame parts

A standard RC hobby car chassis is used as the base. A custom camera mount and baseplate were necessary to accommodate both the standard components needed for the car to function and the non-standard RealSense depth camera.

Intel RealSense D435 Camera Mount

The camera mount design for the Intel RealSense camera is modular. We wanted a mount with flexibility and adaptability in both the camera's height and its angle: the connector and cap components allow for height and angle adjustments, respectively. The depth camera we were initially issued was a different model (Intel RealSense D455), so only the camera's cap needed to be redesigned and reprinted when the camera model changed.

Baseplate Design

For the baseplate, we wanted a wide base with multiple mounting holes to provide flexibility across different designs and prototypes while minimizing the weight added to the car.

Platform screen.png

Electronics

The car's controller is an NVIDIA Jetson Nano, which feeds an Adafruit PCA9685 PWM board. The drivetrain consists of a 12 V DC motor, a Hobbywing QuicRun WP 1060 electronic speed controller (ESC), and a 6 V DC servo motor for steering control. The whole system is powered by a 12 V, 3-cell LiPo battery.
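
For reference, below is a minimal Python sketch of how the Jetson can command the steering servo and ESC through the PCA9685, using the legacy Adafruit_PCA9685 library. The channel numbers and pulse values here are assumptions and must match the actual wiring and ESC calibration.

<syntaxhighlight lang="python">
# Minimal sketch: driving the steering servo and ESC through a PCA9685.
# Channel numbers and pulse ticks are ASSUMPTIONS, not the team's values.
import Adafruit_PCA9685

STEERING_CHANNEL = 1   # assumed: servo signal wire on channel 1
THROTTLE_CHANNEL = 0   # assumed: ESC signal wire on channel 0

pwm = Adafruit_PCA9685.PCA9685(busnum=1)  # Jetson Nano I2C bus 1
pwm.set_pwm_freq(60)                      # standard ~60 Hz servo frame

def set_pulse(channel, ticks):
    """Set the on-time for one channel, in 12-bit ticks out of 4096."""
    pwm.set_pwm(channel, 0, ticks)

# At 60 Hz, 1-2 ms servo pulses correspond to roughly 245-490 ticks,
# so ~370 ticks (~1.5 ms) is a typical neutral position.
set_pulse(STEERING_CHANNEL, 370)  # assumed straight-ahead steering
set_pulse(THROTTLE_CHANNEL, 370)  # assumed ESC neutral (stopped)
</syntaxhighlight>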

Electrical Wiring

Wiring.png

Software Design

The Jetson Nano is loaded with the NVIDIA Developer Kit image of Ubuntu 18.04. As part of the core software, we compiled OpenCV 4.0 and built ROS Melodic Morenia. The external packages used were the ROS Wrapper for Intel® RealSense™ Devices and the depthimage_to_laserscan package.

ROS Framework Design

Layout for the ROS car control framework

ROS Map.png

OpenCV

The goal of the OpenCV stage is to take in images from the RealSense camera and output meaningful data that the ROS system can use to set the steering and throttle. The workflow is simple: take a raw color image from the RGB sensor and process it through OpenCV (via cv_bridge) using the following steps (a code sketch follows below):

  • Input the image and convert its color space from RGB to HSV
  • Apply a noise reduction algorithm to filter out unwanted glare
  • Apply a filter mask to only allow the yellow range of colors to be seen
  • Draw contours around the unmasked objects
  • Calculate the area of the contours and select the largest contour by area
  • Draw a bounding box around the largest contour and calculate its centroid

The coordinates of the calculated centroid are then passed to the lane guidance node for position processing and steering value calculation.
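
Below is a minimal OpenCV sketch of the pipeline above, assuming a BGR frame already extracted from the ROS image topic via cv_bridge. The HSV bounds for yellow are placeholder assumptions that must be tuned for the track lighting.

<syntaxhighlight lang="python">
# Sketch of the HSV-mask / largest-contour / centroid pipeline.
import cv2
import numpy as np

YELLOW_LOW = np.array([20, 100, 100])   # assumed lower HSV bound for yellow
YELLOW_HIGH = np.array([35, 255, 255])  # assumed upper HSV bound for yellow

def find_line_centroid(frame_bgr):
    """Return the (cx, cy) centroid of the largest yellow blob, or None."""
    # 1. Convert color space (OpenCV frames arrive as BGR, not RGB)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # 2. Noise reduction to suppress glare speckle
    hsv = cv2.GaussianBlur(hsv, (5, 5), 0)
    # 3. Mask: keep only the yellow range of colors
    mask = cv2.inRange(hsv, YELLOW_LOW, YELLOW_HIGH)
    # 4. Contours around the unmasked regions (OpenCV 4.x API)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # 5. Select the largest contour by area
    largest = max(contours, key=cv2.contourArea)
    # 6. Bounding box around it; its center serves as the centroid
    x, y, w, h = cv2.boundingRect(largest)
    return (x + w // 2, y + h // 2)
</syntaxhighlight>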


Obstacle Avoidance

Angle parameters for declaring direction

For obstacle avoidance, we wanted the speed to adapt to the distance to the object. If the object is far away, the car's speed does not change; if it is at medium range, the car slows down as a function of how close the object is; and if the object is very close, the car stops and waits for the object to be removed before moving again.

Steering works the same way. If the object is far off to the right or left, changing direction or speed is not warranted, so when the detected object is not in the car's path, the car notices it but does not change its driving parameters. If the object is in the path, the car steers right or left, based on the object's position, to avoid it.
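
The following is a hedged Python sketch of this adaptive speed and steering logic. The distance thresholds, steering cone half-angle, and scaling function are illustrative assumptions, not the team's tuned values.

<syntaxhighlight lang="python">
# Sketch of distance-adaptive throttle and path-based steering correction.
FAR = 2.0    # meters: beyond this, ignore the obstacle entirely (assumed)
NEAR = 0.4   # meters: inside this, stop and wait (assumed)
CONE = 30.0  # degrees: half-angle of the "in path" region (assumed)

def adjust(throttle, steering, obstacle_range, obstacle_angle):
    """Blend lane-guidance commands with obstacle avoidance."""
    if (obstacle_range is None or obstacle_range > FAR
            or abs(obstacle_angle) > CONE):
        return throttle, steering      # obstacle noted, but path is clear
    if obstacle_range < NEAR:
        return 0.0, steering           # too close: stop and wait
    # Medium range: slow down proportionally to how close the obstacle is
    scale = (obstacle_range - NEAR) / (FAR - NEAR)
    # Steer away from the obstacle's side of the path (sign convention assumed)
    avoid = -1.0 if obstacle_angle > 0 else 1.0
    return throttle * scale, steering + avoid * (1.0 - scale)
</syntaxhighlight>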


Example depth image photo

These conditions are implemented in Python ROS code. A depth image taken with the RealSense camera is first translated into a LaserScan message. Using this LaserScan message, incoming obstacles are detected along with their corresponding range (the distance from the car to the obstacle) and direction (the angle along the horizon of the obstacle with respect to the car).

Combining these range and direction values into a custom direction message type, the data is processed through two nodes (a sketch of the first node follows the list):

  • laserdistance.py: Takes in the LaserScan message and publishes a corresponding direction message to a topic consumed by the next node.
  • obstacleavoidance.py: Takes in the published direction message and adjusts the original throttle and steering values received from the lane guidance node to produce the obstacle avoidance behavior explained above.
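
Below is a minimal sketch of a laserdistance-style node, assuming depthimage_to_laserscan publishes on the /scan topic. Since the team's custom direction message definition is not shown here, a std_msgs Float32MultiArray carrying [range, angle] stands in for it; the topic names are assumptions.

<syntaxhighlight lang="python">
#!/usr/bin/env python
# Sketch: find the closest return in the LaserScan and publish its
# range and angle for the obstacle avoidance node to consume.
import math
import rospy
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Float32MultiArray

def scan_callback(scan):
    # Scan all beams for the closest valid return
    best_range, best_angle = float('inf'), 0.0
    for i, r in enumerate(scan.ranges):
        if scan.range_min < r < scan.range_max and r < best_range:
            best_range = r
            best_angle = scan.angle_min + i * scan.angle_increment
    if math.isinf(best_range):
        return  # no obstacle detected in this scan
    # Stand-in for the custom direction message: [range_m, angle_deg]
    pub.publish(Float32MultiArray(data=[best_range,
                                        math.degrees(best_angle)]))

rospy.init_node('laserdistance')
pub = rospy.Publisher('/obstacle_direction', Float32MultiArray, queue_size=1)
rospy.Subscriber('/scan', LaserScan, scan_callback)
rospy.spin()
</syntaxhighlight>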

The rest of our code is available in the GitHub repository linked above.



Results

{{#ev:youtube|https://youtu.be/BnCI9XpQ9N4|300x560|left|OpenCV line following|frame}}

Here is a video showing our car model following the center line using OpenCV.