Inspired by [https://guitar.ucsd.edu/maeece148/index.php/2018SpringTeam6 Spring 2018 Team 6's SoPaRe project], Winter 2019 Team 1 decided to also build a semi-autonomous vehicle that could receive commands via voice recognition, but with the added functionality of Amazon's [https://en.wikipedia.org/wiki/Amazon_Alexa#Alexa_Voice_Service Alexa] Voice Service.
 
Our GitHub repository can be found [https://github.com/mae148-teamone/WI19-AlexaCar here].


== The Team ==
=== Team Members ===
* Michael Cornejo, 5th year Physics student
* Steven Cotton, 4th year Electrical Engineering student
* Jared Pham, Graduate student in Computer Engineering
* Laura Morejón Ramírez, 3rd year Aerospace Engineering student
=== Objectives ===
Team 1 set two main goals:
#Train a 1/10 scale vehicle that could reliably drive itself around two different pre-designed tracks:
## Indoor Track at SME
## Outdoor Track at EBU-II
#Create a process by which the vehicle could receive practical commands through a voice recognition service with a complex and comprehensive library, such as Alexa Voice Service. Our main objective was to have the car listen for and follow commands to stop and go, to toggle between autonomous and manual modes, and to turn a small LED light on and off. As this page shows, we were able not only to meet these goals but to exceed them significantly.


== The Project: Voice Recognition Using Alexa Developer Console ==
Although our project idea was inspired by [https://guitar.ucsd.edu/maeece148/index.php/2018SpringTeam6 Spring 2018 Team 6], we did some research and found an easier way to make our [http://www.donkeycar.com/ Donkey Car] successfully recognize commands: using Amazon's existing voice recognition software, we were able to organize our commands into categories, called ''intents'', and give the program different potential phrasings, or ''utterances'', that a user may say. Additionally, we had to modify the vehicle's code to make it both receive and respond to these commands in a timely manner. We explored and learned about Flask-Ask, an Alexa Skills Kit development library for Python, to write files that would convert voice commands into actions the Raspberry Pi could identify.
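As a concrete illustration, here is a minimal Flask-Ask sketch of the kind of server we ran. The intent names, slot names, and file path below are hypothetical stand-ins rather than our exact code (see the GitHub repository above for that); each handler receives the slot value Alexa fills in and hands a command to the vehicle loop through a shared text file, the channel discussed under lessons learned below.
<syntaxhighlight lang="python">
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/')

COMMAND_FILE = '/home/pi/command.txt'  # hypothetical path for the shared text file


def write_command(command):
    # Hand the parsed command to the vehicle loop through the shared text file.
    with open(COMMAND_FILE, 'w') as f:
        f.write(command)


@ask.intent('DriveIntent', mapping={'status': 'DriveStatus'})
def drive(status):
    # 'status' receives the spoken slot value, e.g. "go" or "stop".
    write_command('drive:{}'.format(status))
    return statement('Okay, {} it is.'.format(status))


@ask.intent('GpioIntent', mapping={'status': 'Status'})
def gpio(status):
    # Turn the LED on the car on or off.
    write_command('led:{}'.format(status))
    return statement('Turning the light {}.'.format(status))


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
</syntaxhighlight>
For Alexa to reach a server like this, the endpoint has to be publicly accessible; a common approach in Flask-Ask tutorials is to expose the local port with a tunneling service such as ngrok.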
=== Structure ===
Our skill was divided into several intents, each of which had at least one slot specifying the details of the command:
* GPIO (General Purpose Input/Output) Intent: Turns the car's lights on or off
** Status slot: on/off
* Drive Intent: Starts and stops the car
** Drive status slot: go/stop
** Time slot: Length of action
* Throttle Intent: Controls the vehicle's velocity while it drives autonomously
** Modifier slot: Sets the throttle higher/lower
** Number slot: Sets the throttle to a specific number
* Orientation Intent: Turns the car around
** Orientation slot: left/right
* Mode Intent: Switches between driving modes
** Mode slot: Switches modes between local (completely autonomous), local angle (user-controlled throttle, autonomous steering), and user (completely user-controlled)
* Erase records Intent: Erases a number of records directly from the open tub on the Raspberry Pi
** Number slot: Any integer chosen by user
** "All" records could also be chosen, resulting in the deletion of all records in the current tub
* Recording Intent: Starts and stops taking records
** Status slot: Turns recording on/off
* Model Intent: Switches between trained models
** Model slot: Right now, we only have the indoor and outdoor models that the car has been trained for, but ideally one could upload many more models and be able to drive the vehicle on any track
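For reference, the Alexa Developer Console stores this structure as a JSON ''interaction model''. The fragment below is a hypothetical reconstruction of how the Drive intent above might look in that format; the invocation name, slot type, and sample utterances are illustrative guesses, not copies of our actual model.
<syntaxhighlight lang="json">
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "donkey car",
      "intents": [
        {
          "name": "DriveIntent",
          "slots": [
            { "name": "DriveStatus", "type": "DriveStatusType" },
            { "name": "Time", "type": "AMAZON.NUMBER" }
          ],
          "samples": [
            "make the car {DriveStatus}",
            "{DriveStatus} for {Time} seconds"
          ]
        }
      ],
      "types": [
        {
          "name": "DriveStatusType",
          "values": [
            { "name": { "value": "go" } },
            { "name": { "value": "stop" } }
          ]
        }
      ]
    }
  }
}
</syntaxhighlight>
Each additional intent from the list above would get its own entry in the same file, and synonyms for slot values can be added under each type's <code>values</code>.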
=== Donkey in Action ===
The following video shows our vehicle responding to a series of voice commands issued by a user. The terminal window on the left shows the return messages from the program, while the window in the middle shows the Alexa interface in action. Finally, the car is on the right, and the LED light turns on to show when each command is received.
[https://youtu.be/MYV9T8UpKjA Please click here to watch the video]


=== Problems Encountered and Lessons Learned ===
* "Out of resources" issue when attempting to run the server from within Donkey itself. This might have been due to extra processing time since we added another threaded part to the code.
* Lag
** Mainly due to the slow connection between the Alexa Skills web interface and the Raspberry Pi server. This might be solved by using a better WiFi adapter on the RPi.
** A small amount also came from writing to and reading from a text file.
** To measure the time from command reception to command execution, we installed an LED light on our vehicle that would light up for a second after each command was read (see the sketch after this list). We observed that, for observable commands such as turning around or stopping, the action would usually start a short time after the light turned on. From this, we learned that the main cause of lag was indeed the connection between Alexa and the Raspberry Pi.
* For future teams attempting to use the Alexa Skills Kit, we highly recommend becoming familiar with the service and taking as much advantage of it as possible! In retrospect, we wish we had thought of more commands to give our DonkeyCar, because implementing them with Alexa is incredibly easy once one understands the interface. The good news is that it is very simple to add synonyms and new utterances, so broadening this project is definitely doable!
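To make the LED timing experiment above concrete, here is a hypothetical sketch of the vehicle-side half of the text file channel: a Donkey-style part whose <code>run()</code> method is polled once per drive-loop iteration, reading the shared file and flashing the LED when a new command arrives. The pin number, file path, and class name are all illustrative, not our exact code.
<syntaxhighlight lang="python">
import os
import time

import RPi.GPIO as GPIO

COMMAND_FILE = '/home/pi/command.txt'  # hypothetical path; must match the server
LED_PIN = 18                           # illustrative BCM pin for the status LED

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)


class CommandReader:
    """Donkey-style part: run() is called once per iteration of the drive loop."""

    def __init__(self):
        self.last_command = ''
        self.led_off_time = 0.0

    def run(self):
        now = time.time()
        command = self.last_command
        if os.path.exists(COMMAND_FILE):
            with open(COMMAND_FILE) as f:
                command = f.read().strip()
        if command and command != self.last_command:
            # New command: remember it and keep the LED on for one second,
            # marking the moment of command reception for the timing test.
            self.last_command = command
            self.led_off_time = now + 1.0
        GPIO.output(LED_PIN, GPIO.HIGH if now < self.led_off_time else GPIO.LOW)
        return self.last_command
</syntaxhighlight>
In the real vehicle loop, the returned command string would then be parsed to set the throttle, driving mode, recording state, and so on.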
