2019WinterTeam1
Inspired by Spring 2018 Team 6's SoPaRe project, Winter 2019 Team 1 decided to also build a semi-autonomous vehicle that could receive commands via voice recognition, but with the added functionality of Amazon's Alexa Voice Service.
Our GitHub repository can be found here.
The Team
Team Members
- Michael Cornejo, 5th year Physics student
- Steven Cotton, 4th year Electrical Engineering student
- Jared Pham, Graduate student in Computer Engineering
- Laura Morejón Ramírez, 3rd year Aerospace Engineering student
Objectives
Team 1 set two main goals:
- Train a 1/10 scale vehicle that could reliably drive itself around two different pre-designed tracks:
- Indoor Track at SME
- Outdoor Track at EBU-II
- Create a process by which the vehicle could receive practical commands through a voice recognition service with a complex and comprehensive library, such as Alexa Voice Service. Our main objective was to have the car listen for and follow commands such as stopping/going, toggling between autonomous and manual modes, and turning a small LED light on and off. As this page shows, we were able not only to meet these goals but to exceed them significantly.
The Project: Voice Recognition Using Alexa Developer Console
Although our project idea was inspired by Spring 2018 Team 6, we did some research and found an easier way to make our Donkey Car successfully recognize commands: using Amazon's existing voice recognition software, we organized our commands into categories, called intents, and gave the program the different potential commands, or utterances, that a user might say. Additionally, we modified the vehicle's code so that it could both receive and respond to these commands in a timely manner. We explored and learned Flask-Ask, the Alexa Skills Kit development library for Python, to program files that would convert voice commands into actions the Raspberry Pi could identify.
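As a rough illustration of this approach, a minimal Flask-Ask skill can be set up as shown below. This is only a sketch: the intent name, slot name, and port are placeholders, not necessarily what our code uses.

```python
# Minimal Flask-Ask sketch of the approach described above.
# Intent/slot names and the port are illustrative placeholders.
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/')

@ask.intent('GPIOIntent', mapping={'status': 'Status'})
def gpio_intent(status):
    # 'status' arrives as the spoken slot value, e.g. "on" or "off"
    return statement("Turning the light {}".format(status))

if __name__ == '__main__':
    # The skill endpoint must be reachable by the Alexa service
    # (we ran the server on the Raspberry Pi)
    app.run(host='0.0.0.0', port=5000)
```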
Structure
Our skill was divided into several intents, each of which has at least one slot that captures the parameters of the command (a sketch of how an intent and its slots might map to a handler appears after this list):
- GPIO (General Purpose Input/Output) Intent: Turns the lights on the car on or off
- Status slot: on/off
- Drive Intent: Starts and stops the car
- Drive status slot: go/stop
- Time slot: Length of action
- Throttle Intent: Controls vehicle velocity while it drives autonomously
- Modifier slot: Sets the throttle higher/lower
- Number slot: Sets the throttle to a specific number
- Orientation Intent: Turns the car around
- Orientation slot: left/right
- Mode Intent: Switches between driving modes
- Mode slot: local (completely autonomous), local angle (user-controlled throttle, autonomous steering), or user (completely user-controlled)
- Erase records Intent: Erases a number of records directly from the open tub in the Raspberry Pi
- Number slot: Any integer chosen by user
- "All" records could also be chosen, resulting in the deletion of all records in the current tub
- Recording Intent: Starts and stops taking records
- Status slot: Turns recording on/off
- Model Intent: Switches between trained models
- Model slot: Right now we only have the indoor and outdoor models the car has been trained on, but ideally one could upload many more models and drive the vehicle on any track
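As a sketch of how one of these intents might be wired up, the Drive intent could map its slots to a handler that records the parsed command for the car's driving loop to pick up (our code hands information off through a text file, as noted in the lessons-learned section below). The slot names, file path, and spoken responses here are assumptions for illustration, not our exact code.

```python
# Illustrative Drive intent handler; slot names, the hand-off file,
# and the responses are assumptions for this sketch.
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/')

COMMAND_FILE = '/home/pi/command.txt'  # hypothetical hand-off file read by the car's loop

@ask.intent('DriveIntent', mapping={'drive_status': 'DriveStatus', 'duration': 'Time'})
def drive_intent(drive_status, duration):
    # Record the parsed command so the (threaded) Donkey part can act on it
    with open(COMMAND_FILE, 'w') as f:
        f.write('{} {}'.format(drive_status, duration or ''))
    return statement("Okay, I will make the car {}".format(drive_status))
```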
Donkey in Action
The following video shows our vehicle responding to a series of voice commands from a user. The terminal window on the left shows the return messages from the program, the window in the middle shows the Alexa interface in action, and the car is on the right, with the LED light turning on as each command is received.
Please click here to watch the video.
Problems encountered and lessons learned
- "Out of resources" issue when attempting to run the server from within Donkey itself. This might have been due to extra processing time since we added another threaded part to the code.
- Lag
- Mainly due to the slow connection between the Alexa Skills web interface and the Raspberry Pi server; this might be solved by using a better Wi-Fi adapter on the RPi
- Also a small amount from writing to and reading from a text file
- To measure the time from command reception to command execution, we installed an LED light on our vehicle that would light up for a second after a command was read (a rough sketch of this toggle follows this list). For observable commands, such as turning around or stopping, the action would usually start a short time after the light turned on. From this, we learned that the main cause of lag was indeed the connection between the Alexa service and the Raspberry Pi.
- For future teams attempting to use the Alexa Skills Kit, we strongly recommend becoming familiar with the service and taking as much advantage of it as possible! In retrospect, we wish we had thought of more commands to give our DonkeyCar, because implementing them with Alexa is incredibly easy once one understands the interface. The good news is that adding synonyms and new utterances is very simple, so broadening this project is definitely doable!
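For reference, the LED timing check mentioned above amounts to a one-second GPIO toggle on the Raspberry Pi. The sketch below shows the general idea; the pin number is a placeholder, not necessarily the pin we wired.

```python
# Rough sketch of flashing an LED for one second when a command is read;
# BCM pin 18 is a placeholder, not necessarily the pin we used.
import time
import RPi.GPIO as GPIO

LED_PIN = 18

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

def flash_led(duration=1.0):
    # Turn the LED on when the command is received, then off after 'duration'
    GPIO.output(LED_PIN, GPIO.HIGH)
    time.sleep(duration)
    GPIO.output(LED_PIN, GPIO.LOW)
```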