Kennan Gonzales (MAE)
McKinley Souder (ECE)
Cade Wohltman (MAE)
UCSD has access to plenty of robots with autonomous capabilities; however, many of them sit in a lab waiting to be used. Our team aims to implement a ROS package that would allow those robots to be useful even when no one is in the immediate vicinity. A room would be set up with a camera so that an individual could log into one of UCSD's robots, deploy their own programming package, and, when finished, launch our ROS package. Our package then autonomously navigates the car to a designated location marked with an AprilTag.
- Make use of an Ackermann Steering RC car
- A camera with at least 30 FPS
- An RPLidar for collision prevention
- A SBC for data processing
- A PWM driver for both the servo and stepper motor
The baseplate was designed so that the Nvidia Jetson Nano could be mounted either vertically or horizontally along each edge of the baseplate. It also has a large cutout in the middle so that all cords and wires can pass through and hide underneath the frame. At each end are two slots where M4 screws connect the baseplate to the RC car. The baseplate is supported by boosters printed in PLA plastic.
Donkey/ROSRacer Camera Mount Assembly
The camera mounting assembly was designed so that the height and angle of the camera could be adjusted to get the best possible view of the track. This design not only allowed for different camera position configurations but also prioritized protecting the camera in the event of a collision. In fact, the mount survived a full-speed collision with a wall, and the only part that suffered any damage was the acrylic base plate. The only disadvantage of this design was the time it took to print the components (~10 hrs).
The original camera mount design had to be altered to give the RPLidar 360-degree clearance and prevent any blind spots. The camera height was carried over from the previous design, and a hinge was added to allow the camera's angle to be adjusted. The RPLidar was placed on top of a printed mount above the rest of the car, completely unobstructed. The lidar mount also served to funnel the relay signal, allowing the key fob to be used from a greater distance.
This mount connected the Lidar to the Lidar adapter.
- LiPo Battery
- Battery Monitor
- Stepper Motor (Throttle)
- Servo Motor (Steering)
- Electronic Speed Controller
- USB Camera
- 12V - 5V Buck Converter
- Jetson Nano
- Relay Module & Key Fob
- PWM (16 Channels)
Our files are available in this GitHub repository: garageNav
We used our TA Dominic's package [ucsd_robo_car_simple_ros](https://gitlab.com/djnighti/ucsd_robo_car_simple_ros) for his convenient launch file for throttle and steering, and to house our script and launch file.
Our project also depends on the [apriltag_ros](http://wiki.ros.org/apriltag_ros) package as well as the [rplidar](http://wiki.ros.org/rplidar) package to support both identifying AprilTags through the webcam and getting the sensor data from the lidar.
Our files integrate with ucsd_robo_car_simple_ros and use the topics published by the apriltag_ros and rplidar packages to decide steering and throttle values so the car can autonomously return to the garage.
- This package uses the topics published by the two scripts found in our repo to get the camera image and camera calibration matrix. It subscribes to them to continuously detect tags present in the webcam's video stream, and publishes each tag's position relative to the camera frame to the /tf topic.
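As a minimal sketch of how our script interprets those /tf transforms, the helper below converts a tag's translation in the camera frame into a horizontal bearing. The function name and the assumption that ranges use the standard camera convention (x right, y down, z forward) are ours, not part of apriltag_ros:

```python
import math

def tag_bearing(translation):
    """Horizontal bearing (radians) of a detected tag relative to the
    camera's optical axis.

    `translation` is the (x, y, z) translation of the tag in the camera
    frame, as read from the transform published to /tf by apriltag_ros
    (assuming x: right, y: down, z: forward).
    """
    x, _, z = translation
    return math.atan2(x, z)

# A tag 0.5 m to the right and 2 m ahead sits about 14 degrees off-axis.
angle_deg = math.degrees(tag_bearing((0.5, 0.0, 2.0)))
```

A positive bearing means the tag is to the right of center, so the car should steer right to center it in frame.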
- This package enables us to read in data from the RPLidar A1 and publishes it to the /scan topic. This topic contains an array of distances spanning the 360° around the lidar, which we use for collision prevention in the room.
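A sketch of the kind of collision check that can be built on the /scan array is shown below. The function names, the 0.4 m threshold, and the assumption of one reading per degree with index 0 pointing forward are illustrative; a real node should derive indices from the scan's angle_min and angle_increment fields:

```python
def min_front_distance(ranges, fov_deg=60):
    """Smallest valid range within +/- fov_deg/2 of straight ahead.

    Assumes a 360-degree scan with ranges[0] pointing forward and one
    reading per degree; adjust the indexing to your scan's actual
    angle_min / angle_increment.
    """
    half = fov_deg // 2
    window = list(ranges[:half + 1]) + list(ranges[-half:])
    valid = [r for r in window if r > 0.0]  # 0.0 marks invalid returns
    return min(valid) if valid else float("inf")

def should_stop(ranges, threshold=0.4):
    """Stop when anything in the forward window is within `threshold` m."""
    return min_front_distance(ranges) < threshold
```

A throttle callback could zero the /throttle command whenever `should_stop` returns True, regardless of what the tag logic requests.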
- Our files reside in this package, and it also enables us to publish data to the /throttle and /steering topics, telling our car where to turn and how fast to move.
Our script subscribes to /tf, which gives us the tags currently in frame and their positions relative to the camera. It also subscribes to /scan to keep the car from crashing into walls and to guide it in the proper direction to turn. When the car sees the garage tag, it centers the tag in frame using proportional control on the steering value and drives toward it until it is close enough to stop. When the back-wall tag is in frame, the car uses the lidar to check which side wall it is closer to, and turns in the other direction until it sees another tag. The side walls each have a tag that directs the car left or right toward the garage tag. The corresponding values for each tag are published to /throttle and /steering.
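The proportional steering step above can be sketched as a single clamped gain on the tag's lateral offset. The function name, the gain value, and the output range are placeholders, not the values tuned on the car:

```python
def steering_from_offset(x_offset, kp=2.0, limit=1.0):
    """Proportional steering toward a tag.

    x_offset is the tag's lateral offset (m) in the camera frame
    (positive = tag to the right). Returns a steering command clamped
    to [-limit, limit]. kp is an illustrative gain, not the tuned one.
    """
    cmd = kp * x_offset
    return max(-limit, min(limit, cmd))
```

Publishing `steering_from_offset(x)` on every /tf update steers the car so the garage tag drifts toward the center of the frame, with the clamp keeping commands within the servo's range.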
Clone the apriltag_ros, rplidar, and ucsd_robo_car_simple_ros packages, along with any dependencies listed on their repositories. Then add the file in this repo's scripts directory to ucsd_robo_car_simple_ros's scripts directory, and this repo's launch file to ucsd_robo_car_simple_ros's launch directory. Additionally, the cv_ros and camera_info Python files must be added to the apriltag_ros scripts directory to publish the camera topics that apriltag_ros uses. Our launch file finds the packages we depend on and launches their respective launch files. Within apriltag_ros, add the tags you intend to use to tags.yaml so that their transforms will be published.
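The steps above might look like the following in a catkin workspace. The workspace path, the garageNav clone location, and the exact file names copied are assumptions based on this README's description, so adapt them to your layout:

```shell
# Assumed workspace layout; adjust paths to your setup.
cd ~/catkin_ws/src
git clone https://gitlab.com/djnighti/ucsd_robo_car_simple_ros
git clone https://github.com/AprilRobotics/apriltag_ros
git clone https://github.com/Slamtec/rplidar_ros
# Copy this repo's script and launch file into ucsd_robo_car_simple_ros.
cp garageNav/scripts/*.py ucsd_robo_car_simple_ros/scripts/
cp garageNav/launch/*.launch ucsd_robo_car_simple_ros/launch/
# Add the camera-topic publishers that apriltag_ros will use.
cp garageNav/cv_ros.py garageNav/camera_info.py apriltag_ros/apriltag_ros/scripts/
# Remember to list your tags in apriltag_ros's tags.yaml, then build.
cd ~/catkin_ws && catkin_make
```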
The team was able to accomplish our goal and autonomously navigate the RC car to a designated location. The car's camera feeds images to the AprilTag ROS package, which in turn yields steering commands for the car. A combination of tags can give steering directions such as turn left, turn right, center, and turn around. The turn-around behavior also uses lidar data to determine the proper steering command: if the car sees the turn-around tag and the wall on its left side is closer, the GarageNav package instructs it to turn right. An implementation of our autonomous robot can be found here.
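The turn-around decision described above reduces to comparing two side distances from the lidar. This is an illustrative helper, with names of our choosing; the side distances would come from the /scan readings near +/-90 degrees:

```python
def turn_around_direction(left_dist, right_dist):
    """Direction to turn when the turn-around tag is seen.

    left_dist / right_dist are lidar distances (m) to the side walls,
    e.g. the /scan readings near +/-90 degrees. The car steers away
    from the nearer wall.
    """
    return "right" if left_dist < right_dist else "left"
```

For example, with a wall 0.5 m to the left and 2 m of clearance on the right, the helper returns "right", matching the behavior described above.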
Future Plans & Improvements
The team originally planned on using the hector_slam ROS package to map out the environment with the RPLidar. The car could then instantly localize itself in the room using a combination of the AprilTag and ScanMatcher packages. This would have allowed the navigation stack in rviz to be used for setting goal positions and path planning. However, there were some issues integrating the AprilTag and RPLidar transforms together in the rviz environment. Due to time constraints, the team decided to use the tag-based navigation approach described above instead.