Difference between revisions of "2021FallTeam8"

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 23:02, 10 December 2021

Team Members

  • Colin Riccio - Mechanical Engineering
  • Brian Pack - Mechanical Engineering
  • Carolin Lehmacher - Electrical Engineering

Robot

Robocar - Project

The picture on the right shows our robocar and part of the wiring used for our project.

Robocar for our Project


Mechanical Design

Camera Mount

The mount system was 3D printed and consists of three parts: a camera backplate, a stand, and a lidar mount. The lidar mount ties into the front standoffs, and the stand attaches to the electronics plate.

The camera mount has a pivot joint to allow for repositioning the camera angle. This initially proved helpful for setting an optimal angle, but we ran into issues where bumping the mount would change the angle and lead to failures with our DonkeyCar models. It might be a good idea to first use a variable-angle mount to find the best camera position, and then create a static mount to use for training.

Camera assembly 8.PNG

Electronic Plate

The plate was designed with a full array of mounting holes to facilitate the attachment of parts anywhere on the plate. It was laser cut from 1/8-inch acrylic.

Assembly 8.PNG

Electrical Design

Components

  • Jetson Nano 2GB
  • Battery
  • USB Camera
  • Servo
  • Xerun 360
  • Anti spark switch
  • 12V to 5V Converter
  • Flipsky mini


Wiring

The following picture shows the wiring of the robocar used for the project.

wiring_8.jpg

Autonomous Lab using ROS

The following video shows a short sequence of the autonomous labs of Team 8 using ROS.

File:ROS2 Lab.mov

Final Project

Project Overview

Our goal for the final project was to build an image recognition node into the ucsd_robocar2 stack that can detect a speed sign placed on the track and dynamically change the speed of the car.


Electronic Changes

To control the speed of our car, we had to switch out the majority of our electronics and incorporate a VESC motor controller. The VESC can read the sensor in our brushless motor and use that information to set the RPM of the motor, meaning that instead of using PWM to control the motor we can now send RPM values directly, changing the speed of the car. Because we switched to the VESC, we had to change from the ucsd_robo_car_simple_ros docker package to the ucsd_robocar2 package, as the ucsd_robo_car_simple_ros system cannot make use of the VESC.


Image Recognition

To recognize different speed signs, we chose to use an image classification neural network. We decided to use image processing with OpenCV to filter the camera images delivered by the ROS camera_node so that only the speed sign is visible, in the hope that this would make the classification model more robust. We then used the ROS calibration_node to find the best values for our filter, and collected data for our three speed-sign classes (~100 images of each sign for training and ~30 for validation) and one null class (~350 training images and ~90 for validation). To actually collect the data, we wrote a Python script that captures an image from the camera, filters it, and saves it in the data folder structure used by our training pipeline.
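As a rough illustration of the filtering step, the color-threshold mask that OpenCV's cv2.inRange produces can be sketched in plain NumPy. The HSV bounds below are made-up placeholders, not the values we found with the calibration_node:

```python
import numpy as np

def threshold_mask(hsv_image, lower, upper):
    """Return a binary mask (0 or 255) keeping pixels whose HSV values
    fall inside [lower, upper] on every channel, mimicking cv2.inRange."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    in_range = np.all((hsv_image >= lower) & (hsv_image <= upper), axis=-1)
    return in_range.astype(np.uint8) * 255

# Example: a tiny 2x2 "HSV image" with two pixels inside the placeholder bounds
img = np.array([[[30, 200, 200], [0, 0, 0]],
                [[90, 50, 50], [30, 180, 210]]], dtype=np.uint8)
mask = threshold_mask(img, lower=(20, 100, 100), upper=(40, 255, 255))
```

In the real pipeline the masked image, rather than this tiny example, is what gets written into the training data folders.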

After collecting the data, we used a training pipeline built by Team 1 for their final project, which creates a model via transfer learning from ResNet18.

We created a Python script to test the model: it takes the webcam image, filters it, passes the filtered image to the model, and prints out which sign it sees.
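The decision step of that test script, turning the model's output scores into a sign label, amounts to an argmax over the classes. A minimal plain-Python sketch (the class names and scores here are hypothetical; the real ordering comes from the training data folders):

```python
# Hypothetical class ordering; in practice it matches the data folder layout.
CLASSES = ["null", "speed_25", "speed_50", "speed_75"]

def classify(scores):
    """Pick the class with the highest model output score (argmax)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return CLASSES[best]

# The model is most confident in the third class here, so "speed_50" is printed.
print(classify([0.1, 0.2, 3.4, 0.5]))
```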


Creating a ROS Node

We then created a new ROS node, speed_reader_node, which subscribes to the camera_node to receive images from the webcam and publishes a speed value to the lane_guidence_node. First we built the node to get a speed value from the user and publish that value, which changes the speed of the car. After confirming that publishing worked, we added a subscription to the camera_node to get the image and applied our OpenCV filters to it. The filtered image is then passed to the image recognition model, which classifies which sign the camera currently sees. The speed associated with that sign is then published to the lane_guidence_node, which in turn changes the speed of the car.
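Stripped of the ROS plumbing, the core of the speed_reader_node callback is a lookup from the classified sign to the speed value to publish. A plain-Python sketch of that mapping (the sign labels and speed values are illustrative, not our calibrated numbers):

```python
# Illustrative sign-to-speed table; unknown signs and the null class
# leave the current speed unchanged.
SIGN_SPEEDS = {"speed_25": 0.25, "speed_50": 0.50, "speed_75": 0.75}

def next_speed(sign, current_speed):
    """Return the speed value the node should publish for a detected sign."""
    return SIGN_SPEEDS.get(sign, current_speed)
```

In the node itself, this lookup runs inside the camera_node subscription callback, and the result is published on the topic the lane_guidence_node listens to.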


Obstacles

There were several major roadblocks that we dealt with in our project:

  • Changing our electronics took a significant amount of time, as we had to solder several components, remove the old parts, redo the wiring, and install the new parts.
  • The ROS2 docker package had several major bugs, as it is still in beta.
  • Lighting conditions in the tents made it difficult to use the line-following package any time before the sun set.
  • We were unable to get PyTorch (the package that runs the image classification model) loaded into the Docker container, which meant we could not get the final speed_reader_node to run.

Future Improvements

If given more time to work on the project, we would first want to get PyTorch loaded into the Docker container so that we can run the lane_guidence_node with the speed_reader_node, recognizing signs and changing the speed. Then, we would like to improve the model by adding more speed signs and collecting more data for further training. We would also like to experiment with the filter we used, to try to make it less noisy.