Projects

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Revision as of 16:11, 13 November 2018

These are some ideas for possible projects to be developed in the class.

Code is collected in the class GitHub repository.

Playing Fetch

Using image recognition to localize, track and fetch a tennis ball.

See project fetch for details.
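
A minimal sketch of the ball-localization step, assuming OpenCV 4 and a color-thresholding approach; the HSV bounds and minimum radius are assumptions that would need tuning on real footage, not the method documented on the project page.

import cv2
import numpy as np

def find_ball(frame_bgr):
    """Return (x, y, radius) of the largest ball-colored blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough HSV range for the yellow-green felt of a tennis ball (assumed).
    mask = cv2.inRange(hsv, np.array([25, 80, 80]), np.array([45, 255, 255]))
    mask = cv2.erode(mask, None, iterations=2)   # suppress speckle noise
    mask = cv2.dilate(mask, None, iterations=2)
    # OpenCV 4 return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y), int(radius)) if radius > 5 else None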

GPS Navigation

Equip the car with GPS and use it to perform navigation tasks.

See project gps and 2018SpringTeam3 for details.
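
For the navigation math, a common starting point is the great-circle distance and initial bearing from the current GPS fix to a waypoint. This is a generic sketch of the standard haversine formulas, not the team's code.

import math

EARTH_RADIUS_M = 6371000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Haversine distance (m) and initial bearing (deg) between two fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    y = math.sin(dlam) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlam))
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

Steering toward the waypoint then amounts to turning until the car's heading matches the computed bearing.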

Stereo Vision

Equip the car with two cameras, design a new controller that takes advantage of the stereo vision, and evaluate its performance.

See project stereo for details.
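
As a sketch of what the second camera enables, OpenCV's block matcher can turn a rectified grayscale image pair into a disparity map; the parameter values below are illustrative assumptions.

import cv2

# numDisparities must be a multiple of 16; blockSize must be odd (assumed values).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def disparity_map(left_gray, right_gray):
    """Return a float disparity image; nearer objects have larger disparity."""
    disp = stereo.compute(left_gray, right_gray)  # fixed point, scaled by 16
    return disp.astype('float32') / 16.0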

Encoders for Odometry

Incorporate encoders on the wheels of the car and use the new measurements to improve the performance of the controller.

See project encoders for details.
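
A minimal odometry sketch, assuming a slotted encoder disc wired to a GPIO interrupt; the tick count per revolution and the wheel diameter are placeholder values, not the class hardware.

import math
import time

TICKS_PER_REV = 20        # slots in the encoder disc (assumed value)
WHEEL_DIAMETER_M = 0.12   # wheel diameter in meters (assumed value)
METERS_PER_TICK = math.pi * WHEEL_DIAMETER_M / TICKS_PER_REV

class Odometer:
    def __init__(self):
        self.ticks = 0
        self.last_ticks = 0
        self.last_time = time.time()

    def on_tick(self, channel=None):
        """Call from the encoder's GPIO interrupt callback."""
        self.ticks += 1

    def update(self):
        """Return (distance_m, speed_m_s) accumulated since the last call."""
        now = time.time()
        dt = max(now - self.last_time, 1e-6)
        dist = (self.ticks - self.last_ticks) * METERS_PER_TICK
        self.last_ticks, self.last_time = self.ticks, now
        return dist, dist / dt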

ROS

Develop ROS nodes for the current setup of the car and demonstrate their capabilities.

See project ROS for details.
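
A minimal ROS 1 node sketch that publishes drive commands as a geometry_msgs/Twist; the /car/cmd_vel topic name and the normalized throttle convention are assumptions, not the class setup.

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node('drive_publisher')
    pub = rospy.Publisher('/car/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(20)  # 20 Hz control loop
    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = 0.3   # normalized throttle (assumed convention)
        cmd.angular.z = 0.0  # steering
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    main()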

City Driving Suite

Equip the car with an additional camera and a suite of ultrasonic sensors so the car can evaluate and respond to common stimuli found in city driving.

See project City and 2018SpringTeam7 for details.
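
For the ultrasonic part of the suite, a typical HC-SR04 read on the Raspberry Pi looks like the sketch below; the pin numbers are assumed wiring, and production code would add a timeout around the echo loops.

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # BCM pin numbers (assumed wiring)
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    """Fire one ping and convert the echo pulse width to centimeters."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)           # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound, round trip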

3D Camera Intel RealSense

Equip the car with a camera that can sense depth, design a new controller that takes advantage of the depth information, and evaluate its performance.
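
A minimal pyrealsense2 sketch for reading the depth straight ahead of the camera; the 640x480 at 30 FPS stream profile is a typical choice, not a requirement of the project.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if depth:
            # Range in meters at the image center, i.e. straight ahead.
            print('center depth: %.2f m' % depth.get_distance(320, 240))
finally:
    pipeline.stop()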

Parallel Parking

Equip the car with ultrasonic sensors, then design and train a controller that can park the car autonomously.
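
One plausible way to structure the maneuver is a small state machine driven by a side-facing ultrasonic reading; the states, thresholds, and steering values below are illustrative assumptions, not a tested routine.

def parking_step(state, side_distance_cm):
    """Return (next_state, steering, throttle) for one control tick."""
    if state == 'SEARCH':             # drive forward, watch for a gap
        if side_distance_cm > 60:     # gap detected (assumed threshold)
            return 'REVERSE_IN', 1.0, -0.3
        return 'SEARCH', 0.0, 0.3
    if state == 'REVERSE_IN':         # back in at full steering lock
        if side_distance_cm < 20:     # close to the curb (assumed)
            return 'STRAIGHTEN', -1.0, 0.1
        return 'REVERSE_IN', 1.0, -0.3
    if state == 'STRAIGHTEN':         # counter-steer forward to align
        if side_distance_cm > 25:     # roughly parallel again (assumed)
            return 'DONE', 0.0, 0.0
        return 'STRAIGHTEN', -1.0, 0.1
    return 'DONE', 0.0, 0.0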

Convoy

Follow the car in front of you while keeping a set distance. Control the first car by remote control at first, then make all cars autonomous.
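
The distance keeping could start as simple as a proportional controller on the gap measured by a forward range sensor; the target gap and gain below are assumptions that would need tuning.

TARGET_GAP_CM = 50.0
KP = 0.01  # throttle change per cm of gap error (assumed; needs tuning)

def follow_throttle(gap_cm, cruise=0.3):
    """Speed up as the gap grows, slow down or stop as it shrinks."""
    throttle = cruise + KP * (gap_cm - TARGET_GAP_CM)
    return max(0.0, min(1.0, throttle))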

Enhanced Image Processing

Incorporate image processing filters that can enhance the performance of the car. Ideas include: split field of view, line detection and following.
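
As a sketch of the line-detection idea, Canny edges followed by a probabilistic Hough transform is a standard OpenCV pipeline; all thresholds here are assumptions to be tuned on track footage.

import cv2
import numpy as np

def detect_lines(frame_bgr):
    """Return an array of line segments (x1, y1, x2, y2), or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return None if lines is None else lines.reshape(-1, 4)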

Obstacle Avoidance with Lidar

Our project focuses on improving the safety of autonomous driving. For a human driver, there are many scenarios where obstacles must be detected and avoided, such as a slow-moving vehicle or debris in the road; the safest response in such situations may be to change lanes. We focused on this type of situation and attempted to make our autonomous vehicle mimic this behavior. Our goal is to have the vehicle detect an obstacle in its path, change lanes to avoid a collision, and continue driving autonomously.

See 2018SpringTeam1 for details.
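
The detection step might reduce to checking the forward cone of each scan; the (angle, range) scan format and thresholds below are assumptions for the sketch, not the team's implementation.

def obstacle_ahead(scan, cone_deg=20.0, max_range_m=1.0):
    """True if any return within +/- cone_deg of straight ahead is close."""
    for angle_deg, range_m in scan:
        offset = (angle_deg + 180.0) % 360.0 - 180.0  # 0 deg = straight ahead
        if abs(offset) <= cone_deg and 0.0 < range_m < max_range_m:
            return True
    return False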

Follow Me

Identify and follow a specific person when two people are in the field of view. A Tiny YOLO model was run through OpenCV to identify targets in a live video feed.

See 2018SpringTeam2 for details.
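
A sketch of running Tiny YOLO through OpenCV's DNN module to find people in a frame; the yolov3-tiny file names are placeholders for weights you supply yourself, and non-maximum suppression is omitted for brevity.

import cv2
import numpy as np

# Placeholder file names; supply your own Tiny YOLO config and weights.
net = cv2.dnn.readNetFromDarknet('yolov3-tiny.cfg', 'yolov3-tiny.weights')
out_names = net.getUnconnectedOutLayersNames()

def detect_people(frame_bgr, conf_threshold=0.5):
    """Return [(cx, cy, w, h), ...] pixel boxes for detected persons."""
    h, w = frame_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(frame_bgr, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    people = []
    for output in net.forward(out_names):
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            # Class 0 is 'person' in COCO.
            if class_id == 0 and scores[class_id] > conf_threshold:
                people.append((int(det[0] * w), int(det[1] * h),
                               int(det[2] * w), int(det[3] * h)))
    return people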

Traffic

The goal of our project was to train the robocar to recognize common traffic signs, such as stop signs and speed limit signs.

See 2018SpringTeam4 for details.
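
A minimal Keras classifier sketch for cropped sign images; the three classes, the 64x64 input size, and the architecture are assumptions for illustration, and the team's actual network may differ.

from tensorflow.keras import layers, models

def build_sign_classifier(num_classes=3):  # e.g. stop, 25 mph, 50 mph (assumed)
    model = models.Sequential([
        layers.Conv2D(16, 3, activation='relu', input_shape=(64, 64, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model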

Vehicle-to-Vehicle Communication

The aim of this project is to improve upon the existing DonkeyCar autonomous control framework by adding situational awareness through inter-car communication of data collected by on-board sensors. To this end, two cars were equipped with infrared and ultrasonic sensors that gather data from the track environment. The track was demarcated into two zones, and each zone entry point was fitted with an infrared signal emitter, so a car knows which zone it is in when its infrared sensor picks up the emitter's signal. Zone and sensor information were then broadcast to both cars so that each car could act on the current situation. For instance, if a car enters a zone that the other car already occupies, the car that entered last slows down until the lead car exits the zone.

See 2018SpringTeam5 for details.
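
The broadcast side of such a protocol could be as simple as JSON over UDP broadcast; the port number and message fields below are assumptions for illustration, not the team's wire format.

import json
import socket

PORT = 5005  # assumed port shared by both cars

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

def broadcast_state(car_id, zone, range_cm):
    """Send this car's zone and sensor reading to every car on the subnet."""
    msg = json.dumps({'car': car_id, 'zone': zone, 'range_cm': range_cm})
    sock.sendto(msg.encode('utf-8'), ('<broadcast>', PORT))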

Voice Control

The objective of this project is to add voice control to the original car model. This feature allows users to talk to the car to perform actions such as an emergency stop, and to switch back and forth between autonomous mode and joystick mode.

See 2018SpringTeam6 for details.
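
A hedged sketch using the SpeechRecognition package to map spoken phrases to modes; the phrase set and the Google recognizer backend are assumptions, not necessarily what the team used.

import speech_recognition as sr

COMMANDS = {'stop': 'EMERGENCY_STOP',      # assumed phrase-to-mode mapping
            'autonomous': 'AUTO_MODE',
            'joystick': 'JOYSTICK_MODE'}

def listen_for_command():
    """Block on the microphone and return a mode string, or None."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        text = recognizer.recognize_google(audio).lower()
    except (sr.UnknownValueError, sr.RequestError):
        return None
    for phrase, mode in COMMANDS.items():
        if phrase in text:
            return mode
    return None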