To be honest, I thought the Snapchat Spectacles were a waste of money when I first heard about them. “Oh great, another company trying to make glasses with a gimmick.” So I put them out of my mind and moved on, despite hearing about all the craze in Los Angeles and New York around their pop-up Spectacles vending machines.
Fast forward to this past weekend. I was on a trip with a group of friends, and a couple of them brought their Spectacles along. Every now and then they would press the button on the side of the glasses to start recording a 10-second video. Although they used them quite regularly, I almost never noticed them filming, except in darker areas where the ring of light around the camera would animate while it was recording. At that point I revised my opinion of them to “they’re great ‘spy-like’ glasses for recording video.”
The true “aha” moment came at the end of each day, when my friends would view the footage and transfer it to their phones. As we watched each clip, I felt that I was seeing each moment through their eyes, not just some video they had recorded. It turns out that a 115-degree lens mounted on a pair of glasses right next to your eyes makes for a great POV (point of view) recording setup. This led to the next realization: why bother taking out a camera or phone, or strapping a GoPro to your head, to record? Just throw on a pair of Spectacles and press the button while still living in the moment. There’s no need to check a screen to make sure you’ve got the shot, or to fumble with opening an app and sending it right then and there. The key to the future of the Spectacles (in my opinion) is that they let you live in the moment, record the moment, and relive the moment.
However, there are a few improvements that I think would greatly increase their adoption for “capturing the moment”:
Waterproof it (allows you to take it to more areas)
Hide the camera better or make it smaller (allows you to put it into other styles)
Make the lenses swappable so that you can have clear ones for night-time or indoors but shades for the day (allows you to wear it in more areas)
Make the battery last longer (always on the list for any electronic)
Allow longer video clips (having options is always better, and being able to select the default clip length would be nice; pressing the button again mid-recording should stop it early)
Will I buy one now? Maybe. But I’ll be looking out for the V2 for sure.
It’s been a while since I last made a smart home device, not because my home is fully automated or because there wasn’t a need for another device, but because I still live in a rented unit and didn’t want to spend the time making and setting up custom devices that would need to be torn down in the future.
Well, the other day I realized that I could build another home automation device without a long-term, stationary placement requirement! Not too long ago I built voice integration into my smart home system using the Amazon Echo (check out the articles here). While this worked well in moments without ambient noise, it failed during parties, while watching movies, or while listening to music on my sound system. Obviously I needed another way to interact with these smart home devices, and the current method of pulling out a phone or tablet, unlocking it, then switching between apps just didn’t appeal to me. What I really wanted was a universal remote that could also talk to my smart home devices.
So I started designing and planning out the features I would want in my smart home controller, and it had to be wirelessly charged (because replacing batteries or being tethered to a wall is archaic). Here are the requirements I came up with:
LED screen to provide visual feedback (battery life, device selected, value selected, etc.)
Neopixel Ring (because who doesn’t love feedback through colors?)
Lately I’ve focused more of my efforts on YouTube videos for the side projects that I build. While I enjoy the process of writing blog entries, I’ve found that I also enjoy the visual appeal and documentation capabilities that YouTube gives me for the build process of my side projects. Through the provided analytics I’ve been able to see that the videos I make generate higher engagement, but the value of written posts for answering text- and programming-related questions is undeniable.
The latest video I’ve posted to my YouTube channel is about the making of an LED mask. I had thought about the project in the past, and this October I decided to build one using a Particle Photon as the controller. Due to the spacing of the Neopixel LEDs on the strip (60 LEDs per meter), I decided to interlace the strips with an offset.
For those that don’t know, Particle was previously called Spark, and I had used their first product, the Core, before the Photon was released. The Photon is the evolution of the Core. While much of the platform has stayed the same (both the good and the “could improve”), it remains one of my favorite hardware development boards due to its size and capabilities. I’m looking forward to the upcoming release of the Electron, which I backed on Kickstarter.
I originally saw the video above in May 2015, when The Void released it and virtual reality enthusiasts gave it the spotlight. The ideas behind it reflect the future of virtual reality entertainment. Their goal is to make you feel like you are inside the virtual world. To accomplish this, they’ve integrated small elements into the landscape, like heat, wind, water, texture, and movement, to make it feel like you are experiencing what you see in the virtual world.
Virtual reality and augmented reality technologies are becoming more accessible and gaining popularity, with products like the Oculus Rift, Microsoft HoloLens, HTC Vive, and Google Cardboard drawing heavy investment in the space. While most companies are targeting home usage, it’s hard to feel truly immersed in a virtual world while sitting on your sofa or standing in your living room. An interactive environment with haptic feedback helps you truly forget where you are.
I just hope that the experiences of all Virtual Reality headsets live up to our imaginations.
One of the issues I’ve always had with mobile development is needing to scale images to support the multiple icon and splash screen sizes. While customized icons and splash screens for each device are a valuable asset for any application, I’ve always viewed creating them as a tiring and monotonous task that needs to get done for mobile development.
Now that I’ve been doing more mobile development with the Ionic framework and Cordova, I’ve realized that they have a plethora of tools to make mobile development easier, like the ionic resources command.
In an Ionic project there is a “resources” folder that contains icon.png and splash.png along with android and ios folders. If you run ionic resources from the command line within the Ionic project, it will automatically take icon.png and splash.png and scale or crop them to fit the appropriate sizes.
If you just want to regenerate the icons or splash pages only, use these commands:
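To the best of my recollection, the Ionic v1 CLI exposed these as flags on the same command (verify against your CLI version, as the flags changed in later releases):

```shell
# Regenerate only the icons from resources/icon.png
ionic resources --icon

# Regenerate only the splash screens from resources/splash.png
ionic resources --splash
```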
I’ve always been interested in mobile application development, and I learned Objective-C for iOS app development. However, as my web application development experience increased, I started to question the scalability of code across the different mobile platforms (iOS, Android, and Windows). Developing separate applications for each platform with similar functionality, but using each platform’s separate SDKs and languages, didn’t seem like the best approach.
Because of this, I began to lose interest in native mobile development (especially with the regular OS updates and slight changes that meant I would have to keep updating the code for each new version), and I set aside mobile development in favor of web app development, which could be scaled to work in mobile browsers using a blend of frameworks like Bootstrap, Foundation, AngularJS, and Node.js. Of course, making a mobile web app had some drawbacks: it loaded slower and always needed an internet connection to fetch the pages.
That changed recently when I was re-introduced to Cordova and PhoneGap as well as the Ionic Framework. I had heard about Cordova when I was working with Objective-C but after a quick initial investigation, decided that it wasn’t for me due to the limited functionality in the earlier days. After a deeper dive this time, I’ve begun to see it as a platform that I can develop for.
The biggest issue I ran into early on was the need to talk to APIs and the resulting CORS (Cross-Origin Resource Sharing) errors. Because some APIs don’t expect to be called client-side (which is what you code for in a hybrid app), this causes some development pains. Rendering in the browser allows for fast iterative development and better debugging without having to rebuild each time, but it will regularly throw CORS errors in your face, while deploying to the phone works perfectly fine but limits your debugging capabilities. To circumvent this, you need to open a new browser window with web security turned off (which is dangerous, so only use it for your application). More info on how to do this can be found here: http://superuser.com/questions/593726/is-it-possible-to-run-chrome-with-and-without-web-security-at-the-same-time
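As an illustration, on Linux this looks roughly like the following (the Chrome binary name and path vary by platform, and the profile directory is a throwaway of your choosing; http://localhost:8100 is the default address that ionic serve uses):

```shell
# Launch a separate Chrome instance with web security (and thus CORS checks) disabled.
# --user-data-dir points at a throwaway profile so your normal browser profile
# keeps its security settings. Never browse the open web with this instance.
google-chrome --disable-web-security --user-data-dir=/tmp/chrome-dev-profile http://localhost:8100
```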
After that, I was in the free and clear and I’m starting more mobile development again.
Ever since February 22 when I entered the Hackster Hardware Weekend in Seattle, I’ve had a growing passion for the open source side of home automation. What started as a simple idea to automate the closing and opening of windows became something bigger than I ever imagined.
The Hackster.io Hardware Weekend was how the Open Smart Hub was born. I started with a hacked together hub that could run on the Intel Edison and automate a servo to act as the window opening mechanism based on WeatherUnderground API information or light/motion from a Spark.io Core (now named Particle.io). Once the event finished I realized that my implementation couldn’t scale and was horribly confusing to recreate.
I began to research the implementations that were available to the public. What were the open source options? What were the professional products? How did they succeed or fail to solve the problem? My conclusion was that the home automation space was cluttered with different companies, organizations, products, and applications. What we as consumers needed was a simple platform to expand, integrate, and customize a personalized home automation experience. IFTTT is a great alternative, but it is impossible to add your own devices, actions, functions, etc. There is no communal collaboration! If you added a device and someone else wanted to use the same sort of device, they would have to recreate it themselves.
That is when I began to reimplement the Open Smart Hub with a modular design. I chose Node.js as my platform because of its low barrier to entry for programmers, abundant tutorials, and vast library of open source modules. The core of the new implementation is the configuration file that declares the available device types (think WeMo switches, Hue light bulbs, Nest, Weather Underground data, etc.) as well as a user’s stored scenarios and devices. I chose an implementation where you fully own and have the ability to control everything. After all, it’s your home!
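To make the idea concrete, here is a minimal sketch of what such a configuration could look like, assuming a Node.js hub. All names and the schema here are illustrative, not the actual Open Smart Hub format: device types are declared once, and a user’s devices and scenarios reference them by name.

```javascript
// Hypothetical hub configuration (illustrative schema, not the real one):
// device *types* are declared once; devices and scenarios reference them.
const config = {
  deviceTypes: {
    "wemo-switch": { actions: ["on", "off"] },
    "hue-bulb":    { actions: ["on", "off", "setColor"] }
  },
  devices: [
    { id: "porch-light", type: "hue-bulb" },
    { id: "fan",         type: "wemo-switch" }
  ],
  scenarios: [
    // "When the porch light turns on, turn the fan off"
    { trigger: { device: "porch-light", event: "on" },
      action:  { device: "fan", command: "off" } }
  ]
};

// Minimal lookup: which actions does a configured device support?
function actionsFor(deviceId) {
  const device = config.devices.find(d => d.id === deviceId);
  return device ? config.deviceTypes[device.type].actions : [];
}

console.log(actionsFor("porch-light")); // [ 'on', 'off', 'setColor' ]
```

The point of splitting types from devices is the communal collaboration mentioned above: once someone contributes a device type, anyone can add a device of that type to their own configuration without recreating it.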
The implementation is split into two parts: a local hub running on a Raspberry Pi 2 within your home network, which handles all the interaction between your devices, and an online hub that gives you an accessible UI from anywhere.