2019WinterTeam1

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Revision as of 03:28, 12 March 2019

Inspired by Spring 2018 Team 6's SoPaRe project, Winter 2019 Team 1 decided to also build a semi-autonomous vehicle that could receive commands via voice recognition, but with the added functionality of Amazon's Alexa Voice Service.

The Team

Team Members

  • Michael Cornejo, 5th year Physics student
  • Steven Cotton, 4th year Electrical Engineering student
  • Jared Pham, Graduate student in Computer Engineering
  • Laura Morejón Ramírez, 3rd year Aerospace Engineering student

Objectives

Team 1 set two main goals:

  1. Train a 1/10-scale vehicle to reliably drive itself around two different pre-designed tracks:
    1. Indoor Track at SME
    2. Outdoor Track at EBU-II
  2. Create a process by which the car could receive practical commands through a voice recognition service with a comprehensive command library, such as the Alexa Voice Service

The Project: Voice Recognition Using Alexa Skills Developer Console

Although our project idea was inspired by 2018SpringTeam6, our research turned up an easier way to make our Donkey Car recognize commands: Amazon's existing voice recognition software. It let us organize our commands into categories, called intents, and supply the program with the different phrasings, or utterances, that a user might say for each command. We also had to modify the vehicle's code so that it could both receive and respond to these commands in a timely manner. Finally, we explored and learned Flask-Ask, an Alexa Skills Kit development library for Python, to write the code that converts voice commands into actions the Raspberry Pi can carry out.
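As a rough sketch of the kind of dispatching Flask-Ask performs, the plain-Python snippet below shows how an Alexa IntentRequest payload could be mapped to a car action. The intent and slot names follow the structure described in this write-up, but the handler bodies and command strings (`throttle:forward`, etc.) are hypothetical placeholders, not our actual vehicle code.

```python
# Hypothetical sketch: route an Alexa IntentRequest to a car command string.
# The JSON layout (request.intent.name, request.intent.slots.<Slot>.value)
# follows the Alexa custom-skill request format.

def handle_request(request):
    """Map an Alexa IntentRequest payload to a placeholder car command."""
    intent = request["request"]["intent"]
    name = intent["name"]
    # Collect slot values, e.g. {"DriveStatus": "go"}
    slots = {k: v.get("value") for k, v in intent.get("slots", {}).items()}
    if name == "DriveIntent":
        return "throttle:forward" if slots.get("DriveStatus") == "go" else "throttle:stop"
    if name == "GPIOIntent":
        return "lights:" + slots.get("Status", "off")
    if name == "OrientationIntent":
        return "steer:" + slots.get("Orientation", "left")
    return "noop"

example = {
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "DriveIntent",
            "slots": {"DriveStatus": {"name": "DriveStatus", "value": "go"}},
        },
    }
}
print(handle_request(example))  # throttle:forward
```

In the real skill, Flask-Ask hides this plumbing: a decorated handler function receives the slot values directly and returns a spoken response.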

Structure

Our skill was divided into several intents, each of which had at least one slot with two possible values:

  • GPIO (General Purpose Input/Output) Intent: Turns the car's lights on or off
    • Status slot: on/off
  • Drive Intent: Starts and stops the car
    • Drive status slot: go/stop
    • Time slot: length of the action
  • Orientation Intent: Turns the car around
    • Orientation slot: left/right
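On the Alexa Skills Developer Console side, intents, slots, and sample utterances are defined in a JSON interaction model. The fragment below sketches what a model for the Drive intent above could look like; the invocation name, custom type names, and sample utterances are illustrative guesses, not our exact skill definition.

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "donkey car",
      "intents": [
        {
          "name": "DriveIntent",
          "slots": [{ "name": "DriveStatus", "type": "DriveStatusType" }],
          "samples": ["make the car {DriveStatus}", "tell the car to {DriveStatus}"]
        }
      ],
      "types": [
        {
          "name": "DriveStatusType",
          "values": [
            { "name": { "value": "go" } },
            { "name": { "value": "stop" } }
          ]
        }
      ]
    }
  }
}
```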

Lessons Learned

Here are a number of lessons that we learned.