2021WinterTeam1

From MAE/ECE 148 - Introduction to Autonomous Vehicles
==Team Members==
 
Brandon Bao (MAE)
 
Evon Yip (ECE)
 
Sarah Imai (ECE)
 
==Objective==


Implement SLAM on the RC car using the RP Lidar and gmapping

Using lane guidance from the ucsd_robo_car_simple_ros package (provided by Dominic Nightingale), integrate particle filter SLAM outputs to guide the steering based on what obstacles are present

Use RVIZ to debug ROS algorithms / functionality


==Sensors/Hardware==


RP Lidar: The laser scans measure the distance to the surroundings, which the SLAM (gmapping) package uses to build a map.
IMU (Inertial Measurement Unit): We are using the SparkFun OpenLog Artemis IMU, which outputs 6DOF acceleration and gyro data. https://www.adafruit.com/product/4754

Camera: The ELP Mini USB camera's images feed into a lane tracking program, which tracks the center of a lane through centroid computations.

Jetson Nano: The Jetson Nano is a single board computer (SBC) that acts as the car's central computer.
PWM/Servo Driver PCA9685: I2C-controlled with a built-in clock. Controls the throttle and steering of the car using two channels.
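As a rough illustration of how the PCA9685 is commanded, the sketch below uses the legacy Adafruit_PCA9685 Python library to send steering and throttle pulses on two channels. The channel numbers and pulse counts here are assumptions for illustration only, not our calibrated values.

 # Minimal sketch: driving steering and throttle through the PCA9685.
 # Channel numbers and pulse counts are illustrative assumptions,
 # not the values calibrated for our car.
 import Adafruit_PCA9685
 
 STEERING_CHANNEL = 1   # assumed wiring
 THROTTLE_CHANNEL = 0   # assumed wiring
 
 pwm = Adafruit_PCA9685.PCA9685()  # default I2C address 0x40
 pwm.set_pwm_freq(60)              # a typical frequency for RC servos/ESCs
 
 def set_pulse(channel, pulse):
     """Set the PWM on-count (out of 4096) for one channel."""
     pwm.set_pwm(channel, 0, pulse)
 
 # Example: roughly center the steering servo and apply a gentle throttle.
 set_pulse(STEERING_CHANNEL, 380)  # depends on servo calibration
 set_pulse(THROTTLE_CHANNEL, 400)  # depends on ESC calibration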


===Chassis===


[[File:Team1.1.png|500px]]
Brushless motor: provides the throttle

Servo: receives input from the PWM driver for steering control

Motor driver (ESC): connected to the PWM driver; stops the motor (emergency stop)
LiPo battery and battery charger
   
   
==Schematic==


[[File:Team1.2.png|1000px]]




==Final Setup==


[[File:Team1.3.png|1000px]]
[[File:Team1.4.png|1000px]]


===3D Models and Laser Cutting===
 
We laser cut a simple base from acrylic and 3D printed a camera mount, Jetson Nano mount, RP Lidar mount, and an IMU mount. We found that a relatively tall but forward-facing camera was sufficient for line following and the earlier Donkey Car deep learning programs.
 
[[File:Team1.7.png|1000px]]
==Software==
 
===Packages and Drivers===


Slam_Gmapping: http://wiki.ros.org/gmapping  
Dependencies: https://github.com/ros-perception/openslam_gmapping

RPLidar: http://wiki.ros.org/rplidar

IMU driver: https://learn.sparkfun.com/tutorials/how-to-install-ch340-drivers?_ga=2.150182644.393793206.1614466573-1572058972.1607974083 (follow the instructions under "Other Linux Distributions").
Courtesy of Layne Clemen. More info: https://learn.sparkfun.com/tutorials/openlog-artemis-hookup-guide/configuration


Odom publisher: https://gist.github.com/atotto/f2754f75bedb6ea56e3e0264ec405dcf
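The gist above follows the common rospy pattern of publishing a nav_msgs/Odometry message together with the matching odom → base_link transform. The sketch below shows that general pattern with placeholder pose values; it is not the gist or our node verbatim.

 #!/usr/bin/env python
 # Sketch of the usual odometry + tf publishing pattern (illustrative only).
 import rospy
 import tf
 from nav_msgs.msg import Odometry
 from geometry_msgs.msg import Quaternion
 
 rospy.init_node('odom_publisher_sketch')
 odom_pub = rospy.Publisher('odom', Odometry, queue_size=10)
 broadcaster = tf.TransformBroadcaster()
 rate = rospy.Rate(20)
 
 x, y, th = 0.0, 0.0, 0.0   # in practice, integrated from IMU/odometry data
 
 while not rospy.is_shutdown():
     now = rospy.Time.now()
     quat = tf.transformations.quaternion_from_euler(0, 0, th)
 
     # Broadcast where base_link sits in the odom frame.
     broadcaster.sendTransform((x, y, 0.0), quat, now, 'base_link', 'odom')
 
     # Publish the matching Odometry message on /odom.
     odom = Odometry()
     odom.header.stamp = now
     odom.header.frame_id = 'odom'
     odom.child_frame_id = 'base_link'
     odom.pose.pose.position.x = x
     odom.pose.pose.position.y = y
     odom.pose.pose.orientation = Quaternion(*quat)
     odom_pub.publish(odom)
     rate.sleep()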


ROS lane guidance: https://gitlab.com/djnighti/ucsd_robo_car_simple_ros


===Algorithms===


We used the gmapping ROS package combined with the RP Lidar and IMU for mapping and motion estimation.
Particle filter SLAM (simultaneous localization and mapping) keeps track of a motion model of the car and its surroundings to simultaneously create a map and locate the car in relation to that map. Each 'particle' in a particle filter represents one hypothesis of the car's location: many particles are generated to account for noise in the sensor readings, and the particle trajectories that agree best with the generated map are weighted as more reliable. Particle resampling happens when particle weights fall below a given threshold, concentrating new particles in the locations with higher weights. Luckily, there was no need to implement this algorithm ourselves, thanks to the gmapping package created by Brian Gerkey. gmapping requires a node to publish the relationship between the lidar and car reference frames, as well as a populated /tf topic, which was created using data from the IMU.

Lane Tracking: used Dominic's repository (https://gitlab.com/djnighti/ucsd_robo_car_simple_ros/-/tree/master). The lane tracking identifies where the dotted white line is on the ground and maintains the steering to follow it. It computes the centroid of the line, which acts as the center line and is fed to the steering and throttle.
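The centroid step described above amounts to an image-moments calculation on a thresholded camera image. The snippet below is our own minimal illustration of that idea, not the ucsd_robo_car_simple_ros code, and the HSV threshold values are placeholders.

 # Minimal illustration of centroid-based line detection (placeholder thresholds).
 import cv2
 import numpy as np
 
 def line_centroid(bgr_image):
     """Return the (cx, cy) centroid of white-ish pixels, or None if none found."""
     hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
     # Placeholder range for "white"; the real node tunes these values.
     mask = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 40, 255]))
     m = cv2.moments(mask)
     if m['m00'] == 0:
         return None                      # no line pixels detected
     return int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])
 
 # The horizontal offset of cx from the image center is what gets mapped
 # to a steering command by the lane guidance node.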
IMU publish odometry and conversion to tf: https://gist.github.com/atotto/f2754f75bedb6ea56e3e0264ec405dcf

Obstacle Avoidance: The obstacle avoidance node subscribes to the /scan topic created by the rplidar package and adjusts the steering according to a few rules. The current trajectory is determined by the lane guidance script, which follows the dotted line. When the lidar scan indicates there is an obstacle ahead, the published steering angle is adjusted to a new trajectory that avoids the obstacle until it is no longer detected. The trajectory then guides back to undo the path change and returns to the steering provided by the lane guidance.
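A minimal sketch of that node's structure is shown below. The steering topic name, the distance threshold, and the fixed swerve command are assumptions for illustration, not the parameters we actually used.

 #!/usr/bin/env python
 # Sketch of the obstacle avoidance idea: watch /scan and publish a steering
 # override when something is close ahead. Names and thresholds are illustrative.
 import rospy
 from sensor_msgs.msg import LaserScan
 from std_msgs.msg import Float32
 
 OBSTACLE_RANGE = 0.5    # metres; assumed threshold
 AVOID_STEERING = 0.6    # assumed swerve command
 
 steer_pub = None
 
 def scan_callback(scan):
     # Take a narrow window of rays from the middle of the sweep, which is
     # roughly straight ahead (depends on how the lidar is mounted).
     n = len(scan.ranges)
     ahead = scan.ranges[n // 2 - 10 : n // 2 + 10]
     ahead = [r for r in ahead if scan.range_min < r < scan.range_max]
     if ahead and min(ahead) < OBSTACLE_RANGE:
         # Obstacle detected: publish an avoidance steering command.
         steer_pub.publish(Float32(AVOID_STEERING))
     # Otherwise publish nothing, so the lane guidance steering stays in effect.
 
 if __name__ == '__main__':
     rospy.init_node('obstacle_avoidance_sketch')
     steer_pub = rospy.Publisher('steering_override', Float32, queue_size=1)
     rospy.Subscriber('scan', LaserScan, scan_callback)
     rospy.spin()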


===ROS Schematic===


Initially we wanted to use AMCL, which uses particle filtering for localization, for better pose estimation, but due to time constraints the package was not used, resulting in the ROS schematic shown below.
Line 75: Line 90:
[[File:Team1.6.png|1000px]]


===RVIZ===
 
 


RVIZ is a program useful for visualizing data streams from ROS nodes. For example, once set up, the map produced while SLAM is running can be visualized in the program.


When running the car, the Jetson Nano is not connected to a display, so to visualize the data streams a Virtual Network Computing (VNC) viewer was tried, which acts similarly to a remote desktop system; however, NoMachine was used instead because it was simple to set up and get working. Once installed, to get RVIZ running on the Jetson Nano to display the LaserScan message, the following steps were used:


Install NoMachine on the virtual machine (VM).


Launch NoMachine and configure it to connect to the Jetson using its IP address.

SSH into the Jetson Nano with display forwarding from the VM, e.g.: ssh -Y jetson@ip…


To re-enable the GUI, issue the command:
    sudo systemctl set-default graphical.target
Or, to start a GUI session on a system without a current GUI, execute:
    sudo systemctl start gdm3.service
To launch the lidar visualization:
    roslaunch rplidar_ros view_rplidar.launch
To stop the GUI:
    sudo systemctl stop gdm3.service


More info on the NoMachine virtual desktop for the Jetson at:

[https://www.nomachine.com/download/linux&id=1 Linux Download]

[https://www.nomachine.com/download/linux&id=30&s=ARM Jetson ARM Download]




===Results===


[https://drive.google.com/file/d/1ZP7vPJd373EbI9J623BM4qSxPOwUbmrD/view?usp=sharing Steering]
 
[https://drive.google.com/file/d/1GD2Fq7vDQ0Drg3jB9vIpkDgTQxTd9Ydn/view?usp=sharing RVIZ]
 
===Future plans/improvements===


Include AMCL (http://wiki.ros.org/amcl) to improve localization accuracy through sensor fusion in the state estimation

Better mounting for a more efficient weight distribution


==Other documentation==
 
[https://docs.google.com/presentation/d/1wiU7seEQiAnqI37EVfKImQQE7c46Ok3sfwfljbI782M/edit?usp=sharing Slides]
 
[https://docs.google.com/document/d/1dTqOGP3MPpcebGvdv3HaGn2R0pZt9o2rEeFydWlKlMU/edit?usp=sharing Simple Git Guide]


[https://github.com/sarahimai/ROS_148 ROS Dependencies (Github)]
