From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 23:08, 10 December 2021 by 2021FallTeam8 (talk | contribs)

Team Members

  • Colin Riccio - Mechanical Engineering
  • Brian Pack - Mechanical Engineering
  • Carolin Lehmacher - Electrical Engineering


Robocar - Project

The picture on the right shows our robocar and part of the wiring used for our project.

Robocar for our Project

Mechanical Design

Camera Mount

The mount system was 3D printed and consists of three parts: a camera backplate, a stand, and a lidar mount. The lidar mount ties into the front standoffs, and the stand attaches to the electronics plate.

The camera mount has a pivot joint that allows the camera angle to be repositioned. This was initially helpful for finding an optimal angle, but if the mount was bumped, the shifted view caused failures in our DonkeyCar models. It might be a good idea to first use a variable-angle mount to find the best camera setting, and then create a static mount to use for training.

Camera assembly 8.PNG

Electronics Plate

The plate was designed with a full array of mounting holes (1 cm spacing) to allow parts to be attached anywhere on the plate. It was laser cut from 1/8-inch acrylic.

Assembly 8.PNG

Electrical Design


  • Jetson Nano 2GB
  • Battery
  • USB Camera
  • Servo
  • Xerun 360
  • Anti-spark switch
  • 12V to 5V Converter
  • Flipsky mini


The following picture shows the wiring of the robocar used for the project.


Autonomous Lab using ROS

The following video shows a short sequence of the autonomous labs of Team 8 using ROS.

File:ROS2 Lab.mov

Final Project

Project Overview

Our goal for the final project was to build an image recognition node into the ucsd_robocar2 package that can detect a speed sign placed on the track and dynamically change the speed of the car.

Electronic Changes

To control the speed of our car, we had to switch out the majority of our electronics and incorporate a VESC motor controller. The VESC reads the sensor in our brushless motor and uses that information to set the motor's RPM; instead of controlling the motor with PWM, we can now send RPM values directly, which directly sets the speed of the car. Because of the switch to the VESC, we had to move from the ucsd_robo_car_simple_ros package to the ucsd_robocar2 Docker package, since ucsd_robo_car_simple_ros cannot make use of the VESC.
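To illustrate the difference between the two control schemes, the sketch below maps a normalized throttle command to an electrical-RPM setpoint instead of a PWM duty cycle. The MAX_ERPM constant and the function itself are illustrative, not values or code from our car.

```python
# Hypothetical sketch: with a VESC, a normalized throttle in [-1.0, 1.0]
# can be scaled to an ERPM setpoint rather than a PWM duty cycle.
# MAX_ERPM is a made-up example constant, not our car's limit.
MAX_ERPM = 20000

def throttle_to_erpm(throttle: float, max_erpm: int = MAX_ERPM) -> int:
    """Clamp the normalized throttle and scale it to an ERPM command."""
    throttle = max(-1.0, min(1.0, throttle))
    return int(throttle * max_erpm)
```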

Image Recognition

To recognize different speed signs, we chose to use an image classification neural network. We used image processing with OpenCV to filter the camera images delivered by the ROS camera_node so that only the speed sign is visible, in the hope that this would make the classification model stronger. We then used the ROS calibration_node to find the best values for our filter, and collected data for our three speed-sign classes (~100 training and ~30 validation images per sign) and one null class (~350 training and ~90 validation images). To collect the data, we wrote a Python script that captures an image from the camera, filters it, and saves it in the data folder structure used by our training pipeline.
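The kind of color mask we applied can be sketched as an inRange-style threshold; our actual filter used OpenCV on the camera_node image, and the HSV bounds below are placeholders rather than the values we found with the calibration_node.

```python
import numpy as np

# Sketch of an inRange-style color mask, as used to isolate the speed sign.
# The HSV bounds below are hypothetical placeholders, not our calibrated values.
LOWER = np.array([100, 120, 120])   # lower HSV bound (hypothetical)
UPPER = np.array([130, 255, 255])   # upper HSV bound (hypothetical)

def mask_sign(hsv_image: np.ndarray) -> np.ndarray:
    """Return a binary mask: 255 where all three channels fall inside the bounds."""
    in_range = np.all((hsv_image >= LOWER) & (hsv_image <= UPPER), axis=-1)
    return (in_range * 255).astype(np.uint8)
```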

After collecting the data, we used a training pipeline built by Team 1 for their final project, which creates a model via transfer learning from ResNet18.

We created a Python script to test the model: it takes the webcam image, filters it, passes it to the model, and prints out which sign it sees.

Creating a ROS Node

We then created a new ROS speed_reader_node, which subscribes to the camera_node to receive images from the webcam and publishes a speed value to the lane_guidence_node. First we built the node to get a speed value from the user and publish it, which would then change the speed of the car. After confirming that publishing worked, we added a subscription to the camera_node and applied OpenCV filters to the incoming image. The filtered image is then passed to the image recognition model, which classifies the sign the camera currently sees. The speed associated with that sign is published to the lane_guidence_node, which in turn changes the speed of the car.
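The core mapping from classifier output to published speed can be sketched as below; the class labels and speed multipliers here are illustrative stand-ins, not the values from our node.

```python
# Hypothetical sketch of the speed_reader_node's core logic: map the class
# predicted by the image model to the speed value that gets published to the
# lane_guidence_node. Class labels and multipliers are illustrative.
SIGN_SPEEDS = {
    'sign_slow': 0.5,
    'sign_medium': 1.0,
    'sign_fast': 1.5,
    'null': None,   # no sign seen: keep the current speed
}

def speed_for_class(predicted_class: str, current_speed: float) -> float:
    """Return the speed value to publish for one classified frame."""
    speed = SIGN_SPEEDS.get(predicted_class)
    return current_speed if speed is None else speed
```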

Obstacles

There were several major roadblocks that we dealt with in our project:

  • Changing our electronics took a significant amount of time, as we had to solder many components, remove the old parts, redo the wiring, and install the new parts
  • The ROS2 Docker package had several major bugs, as it is still in beta
  • Lighting conditions in the tents made it difficult to use the line-following package any time before sunset
  • We were unable to get PyTorch (the package that runs the image classification model) loaded into the Docker container, which meant we could not actually run the final speed_reader_node

Future Improvements

If given more time to work on the project, we would first want to get PyTorch loaded into the Docker container so that we can run the lane_guidence_node together with the speed_reader_node, recognizing signs and changing the speed. We would then like to improve the model by adding more speed signs and collecting more data for further training. We would also like to experiment with our filter to try to make it less noisy.

Source Code

Modified Lane_Guidence_node:

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32
from geometry_msgs.msg import Twist

NODE_NAME = 'lane_guidance_node'
ACTUATOR_TOPIC_NAME = '/cmd_vel'   # assumed topic name; not defined in the original snippet
CENTROID_TOPIC_NAME = '/centroid'  # assumed topic name; not defined in the original snippet
SIGN_TOPIC_NAME = '/speed'

class PathPlanner(Node):
    def __init__(self):
        super().__init__(NODE_NAME)
        self.twist_publisher = self.create_publisher(Twist, ACTUATOR_TOPIC_NAME, 10)
        self.twist_cmd = Twist()
        self.centroid_subscriber = self.create_subscription(Float32, CENTROID_TOPIC_NAME, self.controller, 10)
        self.sign_subscriber = self.create_subscription(Float32, SIGN_TOPIC_NAME, self.sign_check, 10)

        # Default actuator values
        self.declare_parameters(
            namespace='',
            parameters=[
                ('steering_sensitivity', 1),
                ('no_error_throttle', 1),
                ('error_throttle', 1),
                ('error_threshold', 1),
                ('zero_throttle', 0)
            ])
        self.steering_sensitivity = self.get_parameter('steering_sensitivity').value
        self.no_error_throttle = self.get_parameter('no_error_throttle').value
        self.error_throttle = self.get_parameter('error_throttle').value
        self.error_threshold = self.get_parameter('error_threshold').value
        self.zero_throttle = self.get_parameter('zero_throttle').value
        self.get_logger().info(
            f'\nsteering_sensitivity: {self.steering_sensitivity}'
            f'\nno_error_throttle: {self.no_error_throttle}'
            f'\nerror_throttle: {self.error_throttle}'
            f'\nerror_threshold: {self.error_threshold}'
            f'\nzero_throttle: {self.zero_throttle}')

        # Default speed multiplier
        self.speed_mult = 1.0

    # Get speed multiplier data from the speed topic
    def sign_check(self, data):
        self.speed_mult = data.data
        self.get_logger().info(f'speed multiplier set to {self.speed_mult}')

    def controller(self, data):
        try:
            kp = self.steering_sensitivity
            sm = self.speed_mult
            error_x = data.data
            if error_x <= self.error_threshold:
                throttle_float = float(self.no_error_throttle)
            else:
                throttle_float = float(self.error_throttle)
            steering_float = float(kp * error_x)  # apply steering gain
            if steering_float < -1.0:
                steering_float = -1.0
            elif steering_float > 1.0:
                steering_float = 1.0
            self.twist_cmd.angular.z = steering_float
            self.twist_cmd.linear.x = throttle_float * sm  # apply speed multiplier from the sign
            self.twist_publisher.publish(self.twist_cmd)
        except KeyboardInterrupt:
            self.twist_cmd.linear.x = self.zero_throttle
            self.twist_publisher.publish(self.twist_cmd)

def main(args=None):
    rclpy.init(args=args)
    path_planner_publisher = PathPlanner()
    try:
        rclpy.spin(path_planner_publisher)
    except KeyboardInterrupt:
        path_planner_publisher.get_logger().info(f'Shutting down {NODE_NAME}...')
        path_planner_publisher.twist_cmd.linear.x = path_planner_publisher.zero_throttle
        path_planner_publisher.twist_publisher.publish(path_planner_publisher.twist_cmd)
        path_planner_publisher.get_logger().info(f'{NODE_NAME} shut down successfully.')
    finally:
        path_planner_publisher.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()