Starting Development with Amazon Echo

Here’s a simple guide on how to create a Node.js app hosted in Azure that will handle your Amazon Echo’s API calls.


  1. You will want to download and install Node.js if you haven’t already.
  2. Download the code from the repository here.
  3. Create an Azure account if you haven’t already and create a new web app.
  4. Using FTP, Git, or whichever method you prefer, get the code into the location for your new Azure web app.
  5. Join the Amazon Developer program for the Echo and create a new Echo app. (Note: In order to use this while in development on your Echo, the account needs to be the same one that the Echo is linked to)
  6. In your App information tab:
    1. Fill out your “App Name”. This will act as your official app name.
    2. Fill out your “Spoken Name”. Keep it short and simple to say so that the Echo has the easiest time recognizing it.
    3. Give your “App Version”, which will need to match the version info you hand back through the API.
    4. Give your “App Endpoint”, which will be your Azure web app’s URL + the API endpoint. (Example: “https://echotest.azurewebsites.net/api/echo”)
  7. In your Interaction Model:
    1. Fill out your “Intent Schema”. The intent is the name of the function, slots are parameters, and the type when “literal” will give you back the speech-to-text recognized word. More info on this here.
    2. Fill out your “Spoken Utterances”. Each line should be tab-separated between the intent name and the sample phrase. Something interesting to note is that Amazon suggests you provide a sample for every number of words a literal phrase might contain, from min to max. (In my case from 1-3 words, thus the repetitions.) It also does not like it when you have multiple of the same literals anywhere in the file. More info on this here.
  8. After this, set your app to be ready for testing and you are on your way!
  9. Call Alexa with your Spoken Name by saying “Alexa, open {YourSpokenAppNameHere}”
  10. Now you can say the commands that you’ve designated in both your Node.js web app and your Amazon app declarations!
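As a concrete illustration of steps 6-7, an intent schema and matching utterances might look like the following (a hedged sketch: “HelloIntent” and the “Phrase” slot are invented example names, not taken from the sample repository):

```json
{
  "intents": [
    {
      "intent": "HelloIntent",
      "slots": [
        { "name": "Phrase", "type": "LITERAL" }
      ]
    }
  ]
}
```

And the corresponding tab-separated utterances, with one sample per phrase length as noted above:

```
HelloIntent	say {hello|Phrase}
HelloIntent	say {hello there|Phrase}
HelloIntent	say {hello there echo|Phrase}
```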

If you want to make it your own, you will need to modify the Node.js back-end to respond according to the requests that you allow while also altering your intent schema and spoken utterances.

 

Amazon Echo: Should I buy?

The Amazon Echo is a home automation maker’s dream. It provides an easy way to use voice recognition to interact with your devices, but there are a couple of things you should know about developing for it before you buy one and start.

  1. Despite it being called an Echo “App”, your development will take place in a web service hosted in the cloud that can answer its calls. What the Echo does is translate what it hears into text and then hand it off to your service by calling your API with a package that contains the information.
  2. Creating an “app” with Amazon for the Echo requires you to fill out an “Interaction Model” which consists of an “intent schema” and “sample utterances” as well as program your web-service.
    • The “intent schema” is pretty straightforward: you create a JSON array of “intents”, each of which contains a name and “slots”, which are used like parameters and whose types you must define.
    • The “sample utterances” are a list of the “intent name” and potential sample phrases.

Making it talk to a web service hosted in Azure using Node.js turned out to be fairly trivial, and I was able to get a basic implementation hooked into the OpenSmartHub that I have been developing in less than a couple of hours. I even created a sample in a GitHub repository for those who want simple instructions and an easy place to start.

It really is amazing to see it come together and interact with your voice commands in a custom scenario that you have developed, but it still has a long way to go in order to improve its voice recognition. It works really well with the pre-programmed functions, but there aren’t many that I find particularly useful in an everyday scenario, and it doesn’t do well with brands or non-dictionary words. For example, it recognizes “Pandora” because it’s a vital part of the pre-programmed functionality, but it doesn’t recognize “Yamaha” or “Wemo” well.

Another thing that I’ve noticed is that it can sometimes mix up the singular and plural versions of words when converting speech to text. (For example, mine would sometimes hear “lights” when I say “light”.)
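Since that mix-up happens before the request ever reaches you, one defensive trick is to normalize recognized slot values in your own service before matching them. A minimal sketch (the alias table entries and function name are hypothetical, not from my actual code):

```javascript
// Map common misrecognitions, such as plural vs. singular device
// words, onto the canonical form your app expects. These entries
// are example values only.
const ALIASES = {
  lights: 'light',
  switches: 'switch'
};

// Lowercase, trim, and apply the alias table to a recognized word.
function normalizeSlotValue(value) {
  const word = String(value).trim().toLowerCase();
  return ALIASES[word] || word;
}
```

With this in place, both “light” and “lights” resolve to the same command handler.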

Overall, I think it’s only going to improve from here, and I think it’s worth investing in if you want to integrate voice recognition and voice commands into your homemade projects!

Enabling Node.js WebSockets on Azure Web App

Recently I found myself confused by a WebSocket issue on an Azure deployment of a Node.js Socket.io app. In order to run WebSockets on an Azure Web App or Website, you need to turn off the IIS WebSockets module, which conflicts with the Node.js WebSockets.

To do this, you have to turn on WebSockets in the app’s Configure tab and then do one of two things:

Option 1:

Create the following web.config file in your root folder prior to pushing to the Azure website:
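The web.config content did not survive here; based on the Azure article cited as the source below, a minimal version that disables the IIS WebSockets module looks roughly like this (a sketch: your file may also need the usual iisnode handler entries pointing at your app’s entry script):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <!-- Turn off the IIS WebSockets module so it does not conflict
         with the Node.js (Socket.io) WebSockets -->
    <webSocket enabled="false" />
  </system.webServer>
</configuration>
```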

Option 2:

FTP into your website’s location using the URL found in the Azure Management Portal and replace the web.config file in \site\wwwroot location with the content above.

Source:

http://azure.microsoft.com/en-us/documentation/articles/web-sites-nodejs-chat-app-socketio/

Hackster Hardware Weekend (Hackathon)

I had the chance to participate this past weekend in a Hackster.io Hardware Weekend in Seattle and was blown away by the setup. Like most hackathons, they had big sponsors like Intel, Microsoft, Spark, AT&T, etc. However, unlike most of the software hackathons I have been to, they provided some hardware for people to use including Intel Edison boards with Seeed Studio Starter Kits and more. They also provided some cool and useful swag like a portable charger (which I used to power a Spark Core for my demo) and a small portable Leatherman pocket tool that is perfect for my recent maker lifestyle. The energy they provided was just spectacular and even though it was their first time hosting a hackathon, I think it went smoothly.

The food wasn’t just your normal pizza-and-salad hackathon fare. It also included a legitimate breakfast with bacon, eggs, bagels, cheese, etc. Lunches and dinners consisted of delicious sandwiches (kind of like Banh Mi), Mexican food, and one meal of pizza. On the side they had a whole bunch of candy, popcorn, and more snacks, along with a steady supply of coffee, soda, juice, and water.

Unfortunately, like with most hackathons, the crowd of participants thinned out by day 2 with most of the remaining people being interested in learning more or deeply involved in the hacking process.

I came to the event without a clue as to what I was going to build and not really sure if I wanted to join a team, make one, or run it solo. After hearing about some of the prizes for using certain APIs (Weather Underground and WebRTC) I decided to focus my time on the Weather Underground APIs. Even after deciding what I wanted to use, I didn’t really have a clear understanding of what my final product would be and how it could change the world. I just decided to start hacking something together that I thought would be cool to own and ended up going down the path alone.

Stages of my Creation:

  1. Started with the idea to read the forecast for the day and display it through a small LCD screen so that I wouldn’t need to pull up an app to view the forecast. Decided to use the Intel Edison, Cylon.js, and the Weather Underground APIs to do this.
  2. Added functionality that would open your windows using a servo if your indoor temperature was past your comfortable zone and the outdoor temperature was colder. I also added functionality to change these settings through buttons and rotary angle sensors on the board.
  3. Added functionality to push the data to the cloud.
  4. Realized that I could also connect to a Spark Core and communicate with it via WiFi and the Cloud from the Intel Edison, so I integrated a lighting scenario with a wireless connection.
  5. Created a prototype case for the newly dubbed “Hub” in Autodesk Fusion 360.
  6. Created a webpage using AngularJS on Azure that would showcase the data my back-end was receiving so that I could view information on the go.

The prototype Open Source Home Automation Hub was born.

Some things I’ve come to realize for my next hackathon:

  • Work in teams! I worked solo this weekend, and although I did a lot of work to combine all the components, I definitely could have gone further with the idea and taken it to the next level, ending up with a professional product rather than a hacked-together demo.
  • Set up a team beforehand and know the expertise of everyone on the team and how best to leverage it. (This also might mean vetting out the people who might have less to contribute if you are going hardcore.)
  • Sometimes it’s more about the presentation, the story, and the idea than the execution during the hackathon (although that might be due to the hardware nature of this hackathon). After all, you only have so much time both to hack and to present your creation.
  • Network! This is really just a great opportunity to meet other people in a related field and find out their skills and platforms of choice. Who knows? You might find a couple new tools that might be useful for your future endeavors.
  • Roll with it. A vision is great, but be able to adapt if things don’t work out quite like you expected. Sometimes code breaks and it can be stressful, but learn from it and debug better.
  • Have fun!