Inspired by Spring 2018 Team 6's SoPaRe project, Winter 2019 Team 1 decided to also build a semi-autonomous vehicle that could receive commands via voice recognition, but with the added functionality of Amazon's Alexa Voice Service.
- Michael Cornejo, 5th year Physics student
- Steven Cotton, 4th year Electrical Engineering student
- Jared Pham, Graduate student in Computer Engineering
- Laura Morejón Ramírez, 3rd year Aerospace Engineering student
Team 1 set two main goals:
- Train a 1/10 scale vehicle that could reliably drive itself around two different pre-designed tracks:
- Indoor Track at SME
- Outdoor Track at EBU-II
- Create a process by which the car could receive practical commands through a voice recognition service with a complex and comprehensive library, such as Alexa Voice Service
The Project: Voice Recognition Using Alexa Developer Console
Although our project idea was inspired by Spring 2018 Team 6, our research turned up an easier way to make our Donkey Car recognize commands: Amazon's existing voice recognition software. It let us organize our commands into categories, called intents, and supply each intent with the different phrasings, or utterances, a user might speak. We also had to modify the vehicle's code so it could both receive and respond to these commands in a timely manner. Finally, we explored and learned Flask-Ask, an Alexa Skills Kit library for Python, to write programs that convert voice commands into actions the Raspberry Pi can carry out.
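The routing idea behind Flask-Ask can be sketched without the library itself. The real library registers handlers with an `@ask.intent(...)` decorator on a running Flask app; the stand-in below (all names hypothetical, stdlib only) only illustrates how a recognized intent and its slot values get routed to a Python function:

```python
# Stand-in for Flask-Ask's decorator-based intent routing.
# In the real skill, @ask.intent('DriveIntent') would route incoming
# Alexa requests to the decorated function; here we dispatch manually.

handlers = {}

def intent(name):
    """Register a function as the handler for a named Alexa intent."""
    def register(func):
        handlers[name] = func
        return func
    return register

@intent('DriveIntent')
def drive(drive_status):
    # On the car, this is where the Raspberry Pi would start or stop driving.
    if drive_status == 'go':
        return 'Starting the car.'
    return 'Stopping the car.'

def dispatch(intent_name, slots):
    """Route a recognized intent and its slot values to its handler."""
    return handlers[intent_name](**slots)
```

For example, `dispatch('DriveIntent', {'drive_status': 'go'})` returns the spoken response `'Starting the car.'`.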
Our skill was divided into several intents, each of which had at least one slot with two possible values:
- GPIO (General Purpose Input/Output) Intent: Turns the lights on the car on or off
- Status slot: on/off
- Drive Intent: Starts and stops the car
- Drive status slot: go/stop
- Time Intent: Length of action
- Orientation Intent: Turns the car around
- Orientation slot: left/right
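The intent and slot layout above can be sketched as a table of accepted slot values, which is roughly the information the skill's interaction model encodes (the intent and slot names below follow the list above; the actual JSON schema in the Alexa Developer Console differs):

```python
# Accepted slot values per intent, mirroring the skill layout above.
INTENTS = {
    'GPIOIntent':        {'status': {'on', 'off'}},           # lights on/off
    'DriveIntent':       {'drive_status': {'go', 'stop'}},    # start/stop the car
    'OrientationIntent': {'orientation': {'left', 'right'}},  # turn direction
}

def slot_is_valid(intent_name, slot, value):
    """Check whether a spoken slot value is one the skill accepts."""
    return value in INTENTS.get(intent_name, {}).get(slot, set())
```

A handler can use a check like this to reject a slot value Alexa resolved incorrectly before sending a command to the car.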
Along the way, we learned a number of lessons.