Inspired by Spring 2018 Team 6's SoPaRe project, Team 1 decided to also build a semi-autonomous car that could receive commands via voice recognition, but with a twist.
- Michael Cornejo, 5th year Physics student
- Steven Cotton, 4th year Electrical Engineering student
- Jared Pham, Graduate student in Computer Engineering
- Laura Morejón Ramírez, 3rd year Aerospace Engineering student
Team 1's main goals were to build a 1/10 scale vehicle that could reliably drive on its own on two different pre-designed tracks, and to have it respond to a collection of commands that it would identify using voice recognition.
The Car: Donkey
Using the provided DonkeyCar framework, we assembled our vehicle and trained it to drive autonomously on the pre-designed tracks.
The Project: Voice Recognition Using Alexa Skills Developer Console
Although our project idea was inspired by 2018SpringTeam6, our research turned up an easier way to make our Donkey Car recognize commands: Amazon's existing voice recognition software. It let us organize our commands into categories, called intents, and give the program the different potential phrasings, or utterances, that a user may say. We also had to modify the vehicle's code so that it could both receive and respond to these commands in a timely manner. To do so, we explored and learned about Flask-Ask, the Alexa Skills development library for Python, and used it to write the files that convert voice commands into actions the Raspberry Pi can act on.
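Flask-Ask hides the HTTP plumbing, but each Alexa request ultimately arrives as JSON carrying an intent name and its slot values, and Flask-Ask routes it to the matching handler. As a rough, dependency-free sketch of the extraction step (the payload below is a trimmed, illustrative example shaped like an Alexa IntentRequest, not one captured from our skill):

```python
# Minimal sketch of pulling an intent name and slot values out of an
# Alexa-style request payload. Flask-Ask performs this parsing for you
# and dispatches to a handler registered for the intent name.
def parse_intent(request_json):
    intent = request_json["request"]["intent"]
    name = intent["name"]
    # Slots map slot names to {"name": ..., "value": ...} objects.
    slots = {k: v.get("value") for k, v in intent.get("slots", {}).items()}
    return name, slots

# Trimmed example payload, shaped like an Alexa IntentRequest.
example = {
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "DriveIntent",
            "slots": {"DriveStatus": {"name": "DriveStatus", "value": "go"}},
        },
    }
}

name, slots = parse_intent(example)
print(name, slots)  # DriveIntent {'DriveStatus': 'go'}
```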
Our skill was divided into several intents, each containing at least one slot with two possible values:
- GPIO (General Purpose Input/Output) Intent: Turns the car's lights on or off
- Status slot: on/off
- Drive Intent: Starts and stops the car
- Drive status slot: go/stop
- Time Intent: Length of action
- Orientation Intent: Turns the car around
- Orientation slot: left/right
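One way to connect these intents to vehicle behavior is a small dispatch table keyed by intent name and slot value. The sketch below is illustrative only: the action functions and intent names (set_lights, set_throttle, set_steering, GPIOIntent, and so on) are hypothetical stand-ins for the actual DonkeyCar and Raspberry Pi GPIO calls, which are not shown in this write-up.

```python
# Hypothetical car-side actions; the real project would drive the
# DonkeyCar parts loop and Raspberry Pi GPIO pins instead of
# returning strings.
def set_lights(on):
    return "lights on" if on else "lights off"

def set_throttle(moving):
    return "driving" if moving else "stopped"

def set_steering(direction):
    return "turning " + direction

# Dispatch table: (intent name, slot value) -> action.
HANDLERS = {
    ("GPIOIntent", "on"): lambda: set_lights(True),
    ("GPIOIntent", "off"): lambda: set_lights(False),
    ("DriveIntent", "go"): lambda: set_throttle(True),
    ("DriveIntent", "stop"): lambda: set_throttle(False),
    ("OrientationIntent", "left"): lambda: set_steering("left"),
    ("OrientationIntent", "right"): lambda: set_steering("right"),
}

def handle(intent, slot_value):
    action = HANDLERS.get((intent, slot_value))
    return action() if action else "unrecognized command"

print(handle("DriveIntent", "go"))          # driving
print(handle("OrientationIntent", "left"))  # turning left
```

Keeping the mapping in one table made it easy for us to describe what adding a new voice command involves: define the intent and its utterances in the Alexa Skills console, then register one more entry on the car side.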
Along the way, we learned a number of lessons.