Make It Dead Simple: Documentation

Lately I’ve been writing a lot of documentation, instructions, and guides for work, Hackster.io, and my side project (Open Smart Hub). Most of the time the instructions needed to be compiled from multiple sources but kept simple and digestible. As a longtime consumer of documentation, I’ve come to a couple of realizations about instructions and guides. The most important is:

Make it dead simple. It should be fool-proof and easy to follow.

Don’t make assumptions about a reader’s skill level: more knowledgeable readers will skim over the instructions they already know, but newcomers will treasure those details.

How?

  1. Draw up a storyboard of the steps. Just like primary school, where they told you to brainstorm on a sheet of paper before writing an essay, this will help you spot missing parts before you get into the nitty-gritty details, check whether the ordering makes sense, and show where you need more research.
  2. Start with a schematic of the parts (if the documentation covers hardware or multiple components).
  3. Add pictures (these always help clarify things for unsure readers).
  4. Be concise. Make sure that each word added to your documentation adds value.
  5. Don’t overwhelm your reader. (Avoid acronyms unless you have spelled them out earlier in the documentation.)

Making this kind of robust yet straightforward documentation takes time, but it will also reduce the number of support requests and questions you receive about it in the future.

Buy vs. Build a 3D Printer

I have been spending part of my spare time slowly building my Prusa i3, and I am just now finishing the build. I ran into multiple problems along the way and thought I would share some of the frustrations. Here are some issues to keep in mind:

  • Sourcing metric vs. imperial components. If you pick one, stick with it for all the parts, or be prepared to figure out which parts will require updates to the .scad files, since they will need to be altered to fit your custom components. (Hint: metric is easier for following instructions but harder to source in the US.)
  • While your dimensions may be right in the .scad file, the printed part may not match your specifications exactly and may need to be reprinted.
  • There are many different models for each 3D printed part based on individual scenarios. If you are following instructions for a build, try to use their parts/print designs.
  • There are plenty of options for every component, from the hot-end to the extruder, bolts, rails, etc., and this makes sourcing the right 3D printed parts with the right bolts, nuts, etc. a lot more cumbersome than I initially expected.

Final Thoughts:

If you want the experience of building a 3D printer on your own, or simply a cheaper 3D printer, the best solution is to buy a kit and then build it. You can alter most designs later to suit your needs. Since most kits for a Prusa i3 use the same Arduino Mega and RAMPS board setup, the software to control add-ons is pretty simple to change.

Now that I have gone through the process of sourcing and building my own, I wish I had just bought a kit and assembled all of those parts myself in order to save myself time, money, and frustration.

Node-OpenZWave on Raspberry Pi 2

As a continuation of my Open Smart Hub project, I have been interested in adding Z-Wave and Zigbee devices to my supported devices, and I recently decided to start with Z-Wave. I bought a Z-Wave Z-Stick Series 2 USB dongle from Aeon Labs and a simple Z-Wave door sensor in order to create a basic mesh network with just two devices.

Since the Open Smart Hub is based on NodeJS, it only made sense to search for a Node port of the OpenZWave library. I stumbled upon Jonathan Perkin’s port (https://github.com/jperkin/node-openzwave).

Unfortunately, it does not work on Windows, and it seems to have issues with the latest version of NodeJS. Luckily (or coincidentally), the Open Smart Hub runs on a Raspberry Pi 2 with Raspbian and NodeJS v0.10.28.

After the initial setup of my RPi2 with NodeJS, I got to work installing the node-openzwave module. I was seeing build errors when npm tried to compile the module, but found a couple of blog posts suggesting that I first needed to install a few more tools:

sudo apt-get install build-essential make subversion libudev-dev git-core python2.7 pkg-config libssl-dev

After that, “npm install openzwave” worked and the module installed properly.
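With the module installed, a minimal sketch looks something like the following. This is based on the node-openzwave README of the time, not on my hub code, and the device path is an assumption: on my RPi2 the Z-Stick enumerates as /dev/ttyUSB0, but check “ls /dev/” on your own system.

```javascript
// Hedged sketch: listen for Z-Wave events via node-openzwave.
// ASSUMPTION: the Z-Stick shows up as /dev/ttyUSB0 (verify with `ls /dev/`).
var OpenZWave = require('openzwave');

var zwave = new OpenZWave('/dev/ttyUSB0');

zwave.on('driver ready', function (homeid) {
  console.log('Scanning home network 0x' + homeid.toString(16) + '...');
});

zwave.on('node added', function (nodeid) {
  console.log('Found node ' + nodeid);
});

zwave.on('value changed', function (nodeid, comclass, value) {
  // For example, the door sensor reporting open/closed.
  console.log('Node ' + nodeid + ': ' + value.label + ' = ' + value.value);
});

zwave.on('scan complete', function () {
  console.log('Initial network scan complete.');
});

zwave.connect();
```

Pair the dongle with the door sensor per Aeon Labs’ inclusion-button instructions, and the “node added” and “value changed” events should start firing as the network scan runs.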

Note: If you are interested in using it on Mac OSX, you will need to install the drivers for it. Read more about that process in a previous blog post.

Aeon Labs Z-Stick Series 2 on Mac OSX


Aeon Labs, the maker of the Z-Stick, a USB dongle that acts as a Z-Wave controller, doesn’t seem to provide information on how to set it up on various operating systems. Using it with the OpenZWave library on a Mac means you’ll need to access it through the /dev/ directory. To do this on Mavericks, install the latest drivers for the USB stick: http://www.silabs.com/products/mcu/Pages/USBtoUARTBridgeVCPDrivers.aspx

After finishing the install, the stick should be visible as /dev/tty.SLAB_USBtoUART if you open a console on OSX and type “ls /dev/”.

From here you can begin to use the Z-Stick by calling on the /dev/tty.SLAB_USBtoUART endpoint.

NVM – Your Node.js Friend

I’ve been doing a lot of development on an old Node.js version lately (v0.10.28). After realizing this, I tried to update my Node version, only to find that a bunch of my previous code no longer worked due to breaking changes in Node.js since my initial download.

After some digging into how to keep multiple Node.js versions installed while still being able to switch between them whenever I wanted, I came across NVM (Node Version Manager). The NVM created by Creationix supports Mac and Ubuntu users, but for Windows users there are alternatives (nvm-windows and nvmw).

The basic gist: install NVM and call “nvm install 0.10.28” (or whichever version you want). Then call “nvm use 0.10.28” from your shell window and you are using that version! It makes it super simple to switch between versions of Node, check your code’s compatibility across versions, and better inform your users.
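Put together, the whole workflow is only a few commands. A sketch of what this looks like in a terminal (the install URL follows the Creationix README; it may point to a newer release script by the time you read this):

```shell
# One-time install of NVM (URL per the Creationix README -- an assumption,
# check the repo for the current release script).
curl -o- https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash

# Install and activate a specific Node.js version in the current shell.
nvm install 0.10.28
nvm use 0.10.28
node --version   # v0.10.28

# List installed versions; switch between them whenever you like.
nvm ls
```

Each shell session picks its own active version, so you can keep one terminal on v0.10.28 for legacy work and another on a newer Node.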

Amazon Echo Has Promise for the Future

I got an invite to buy an Amazon Echo a while ago, but didn’t want to purchase one because it didn’t seem particularly useful. Its only advantage to me was the speech SDK that might be of use in the future.

After seeing the initial intro video and how scripted the commands had to be, I couldn’t justify the purchase.

If I wanted an answer to a question, I would just pull out my phone and type it rather than deal with speech-to-text inadequacies when asking a long question. If I wanted to add something to a list, I would write it down in my notes or use my phone, for the same reason.

I could play music using a voice command, but to be limited to my Amazon Music Library, Prime Music, or Pandora? No thanks, my audio receiver will do a better job with the audio quality in my home anyways.

On top of that, waiting for a delivery date a couple months later? No thanks.

The Turning Point

That was it; I forgot all about the product until recently, when I saw a couple of YouTube videos showcasing hacks that configure voice commands for other things! This is where it really gains some useful functionality.

Imagine using the mic array and speaker in the Echo to pick up your voice commands and give you audio feedback on commands you create yourself. As a developer, this would have unlimited possibilities in the home! My heart skipped a beat when I saw someone using it this way despite the lack of an official SDK. I immediately started imagining the improvements I could make to my current home automation projects and quickly came up with a couple of scenarios that I “need” it for.

There are the typical scenarios like turning on or off appliances and lights in your home, but then there are bigger home automation scenarios where you would communicate with the Echo like you would a personal assistant.

Imagine waking up in the morning and talking to Echo and having it relay specific things you care about like the weather, news, calendar updates, family updates, etc. while also having it turn on the shower so it’s running at your preferred temperature by the time you jump in. Have it make your coffee so that when you get out of the shower, it’s ready. No need to preset things the day before, or stick to a generic schedule. It’s all voice activated.

Now imagine coming home from a day at work and asking it to turn on a specific “mood” for your home, like “summer breeze” that would open your blinds, open your windows, put on some light music, turn on just the right amount of lighting, etc. Have your home work for you!

It looks like Amazon is starting to see the value of this use case with the Echo too, because they recently announced an update that would allow their default voice commands to work with WeMo switches and Hue lights, but those are just basic scenarios.

After all my excitement about being able to create custom commands, I decided to purchase one (despite the couple months I’ll have to wait to finally receive it).

Still a Couple Faults

  • Works well for one room or an open-concept home, but you’ll need separate ones for each room if you want it to work everywhere. (Or maybe an extension of it in other rooms?)
  • From the demos online, it looks like its speech recognition isn’t up to par with most other speech recognition systems, yet.

Autonomous Driving Coming Soon

Recently Tesla announced that in the summer of 2015, Tesla cars would receive a software update to enable an autopilot feature (on highways) as well as a valet-like feature that could park the car for you and be summoned via a smartphone (on private property). This is going to change the world as we know it; the next step is fully autonomous cars.

“Tesla had been testing its autopilot on a route from San Francisco to Seattle, with company drivers letting the car navigate the West Coast largely unassisted.” – Elon Musk

As of right now everyone has a couple of clear concerns, the main fears being “what if something were to go wrong with the autopilot feature” and “what if someone were to get into an accident while in the vehicle”. While these are valid concerns, and regulations need to be in place to decide who would be responsible, there will obviously be rigorous testing and scrutiny of the new Tesla systems. Let’s not let our fear of the new and slightly unknown cloud our judgement.

These concerns will likely be similar to those expressed by the major news outlets in 2013 when a couple of Tesla vehicles caught fire (all due to accidents). The statistics tell the real story. According to the National Fire Protection Association:

One vehicle fire per every 20 million miles driven by a conventional car vs. one vehicle fire per every 100 million miles in a Tesla.

If there is an option we already have experience with, we are more likely to choose it simply because of its familiarity. Sure, there could be a robot uprising and our cars could drive us all off a cliff... (whoops, I just added another scenario), but that would be unlikely.

“For consumers concerned about fire risk, there should be absolutely zero doubt that it is safer to power a car with a battery than a large tank of highly flammable liquid.” – Elon Musk

The main unsafe factor in normal driving is the person behind the wheel. In 2010, there were an estimated 5.4 million crashes! Imagine taking the human element out of the equation. Autonomous cars would not speed, get distracted by phones, cut each other off, drive recklessly, or park like a**holes and leave you two inches to squeeze your whole body out of your door. They would have faster reaction times, be better able to assess a situation, and could even go a step further and provide mobility to people who cannot drive themselves! Some of these scenarios are not ready yet because the systems still need to be tested rigorously under different conditions. But the truth of the matter is that, done correctly, these cars would be safer than putting a human behind the wheel.

Back to current reality

How Tesla’s update could benefit us:

  • Autopilot during the commute home in traffic jams! Let the system auto-brake, speed up, stay within the lines and get you most of the way home safely without you ever having to lift a finger or foot.
  • Autopilot on road trips to relieve some of the stress when driving long distances
  • Reduce the number of car crashes and the traffic jams they cause (tell the horrible drivers to get one)
  • Have a valet in your own home! You can get out of your car and let it park itself in the garage.
  • Make more people interested in an electric vehicle!
  • Pave the way for a new generation of automobiles that can drive for us.

Issues with the update:

  • What if regulations don’t allow the feature to be used in certain areas? The disparity could cause problems.
  • What if the software doesn’t live up to the expectation and actually causes accidents? (low chance of happening but is a possibility)
  • What if people who are driving their cars hit a Tesla and claim that it was the Tesla or Tesla driver’s fault? (easily fixed with cameras and sensor information from the Tesla)
  • Who is responsible if the accident is because of the Tesla?