Generating Icons and Splash Screens with Ionic

One of the issues I’ve always had with mobile development is needing to scale images to support the multiple icon and splash screen sizes. While customized icons and splash screens for each device are a valuable asset for any application, I’ve always viewed creating them as a tiring and monotonous task that needs to get done for mobile development.

Enter the ionic resources command. Now that I’ve been doing more mobile development with the Ionic framework and Cordova, I’ve realized that they have a plethora of tools to make mobile development easier.

In an Ionic project there is a “resources” folder that contains icon.png and splash.png along with android and ios folders. If you run ionic resources from the command line within the Ionic project, it will automatically take icon.png and splash.png and scale or crop them to fit the appropriate sizes.

If you want to regenerate only the icons or only the splash screens, use these commands:

ionic resources --icon

ionic resources --splash
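As a quick sketch of the whole workflow (the minimum source-image sizes below are from my recollection of the Ionic documentation of that era, so treat them as assumptions):

```shell
# From the root of an Ionic project:
#   resources/icon.png   - square source icon (reportedly 192x192 px minimum)
#   resources/splash.png - source splash image (reportedly 2208x2208 px minimum,
#                          with the artwork centered so cropping is safe)

# Generate all icon and splash sizes for the platforms added to the project
ionic resources

# Or regenerate only one kind of asset
ionic resources --icon
ionic resources --splash
```

The generated files land in the platform-specific folders (resources/android, resources/ios), and the tool only crops or scales down, so starting from a large source image gives the best results.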

For more info check out the Ionic Blog post: http://blog.ionic.io/automating-icons-and-splash-screens/


Developing Mobile Apps with HTML/CSS/JS

I’ve always been interested in mobile application development, and I learned Objective-C for iOS app development. However, as my web application development experience grew, I started to question how well code could scale across the different mobile platforms (iOS, Android, and Windows). Developing a separate application for each platform with similar functionality, but using each platform’s own SDK and language, didn’t seem like the best approach.

Because of this, I began to lose interest in native mobile development (especially with the regular OS updates and slight changes that meant I would have to keep updating the code for each new version), and I set mobile development aside in favor of web app development, which could be scaled to work in mobile browsers using a blend of frameworks like Bootstrap, Foundation, AngularJS, and Node.js. Of course, making a mobile web app had some drawbacks: it loaded more slowly and always needed an internet connection to fetch its pages.


That changed recently when I was re-introduced to Cordova and PhoneGap, as well as the Ionic Framework. I had heard about Cordova when I was working with Objective-C, but after a quick initial investigation I decided it wasn’t for me because of its limited functionality in those early days. After a deeper dive this time, I’ve begun to see it as a platform I can develop for.

It still uses HTML, CSS, and JavaScript, but it’s a specialized project developed solely for mobile applications: it allows for native app deployment rather than mobile website creation and can call into the actual mobile platform’s SDK. It’s different from a web app port, but it lets a web developer use the same languages (and re-use big portions of code) to create hybrid applications for each platform that look like native applications and render from cached files.

Here is a great article about the history of Cordova & PhoneGap and the UI framework for them known as Ionic and here is an article about what the Ionic Framework is.

The biggest issue I hit early on was talking to APIs and running into CORS (Cross-Origin Resource Sharing) errors. Because some APIs don’t expect to be called client-side (which is exactly what a hybrid app does), this causes some development pain. Rendering in the browser allows for fast iterative development and better debugging without having to rebuild each time, but it will regularly throw CORS errors in your face; deploying to the phone works perfectly fine but limits your debugging capabilities. To get around this during development, you can open a separate browser instance with web security turned off (which is dangerous, so use it only for your own application). More info on how to do this can be found here: http://superuser.com/questions/593726/is-it-possible-to-run-chrome-with-and-without-web-security-at-the-same-time
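For example, with Chrome the workaround looks like this (the flags are real Chrome switches; the profile path is just an illustrative choice):

```shell
# Launch a throwaway Chrome instance with CORS checks disabled.
# --user-data-dir points at a fresh profile, so your normal browser
# session keeps its security settings. Use this ONLY for local dev.
google-chrome --disable-web-security --user-data-dir=/tmp/chrome-dev-profile

# On macOS, the equivalent is:
# open -na "Google Chrome" --args --disable-web-security --user-data-dir=/tmp/chrome-dev-profile
```

The separate --user-data-dir is what lets this insecure instance run alongside your regular, secure Chrome windows at the same time.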

After that, I was in the clear, and I’m starting to do more mobile development again.

Google Cardboard and Unity

I have a friend who is really into virtual reality development, and he introduced me to the Google Cardboard kit. Google Cardboard makes VR accessible by using a smartphone as the basis for the platform. Since most people have smartphones but very few have VR headsets, a simple and cheap conversion add-on makes a lot of sense. I figured that with its low barrier to entry I could at least test it out, and as soon as my kit arrived I started doing a bit of development for my iPhone 5 in Unity.


I got this kit for $26, and unlike the actual cardboard ones, it is made of plastic and has better construction, with adjustable focus and lens separation.

Here are some of the thoughts I had while developing for and trying out the Google Cardboard SDK for Unity:

  • It’s awesome for game developers because it provides an easy way to take a 3D game and turn it into a VR experience (as long as it’s been developed in Unity)!
  • BUT there’s a very limited way to interact with applications right now (aside from moving your head). The only real interaction is the “trigger,” a magnet on the side of the headset. I’ve seen videos of people hooking up controllers to their Android phones, but from a developer standpoint it would be amazing if there were better guides on how to set up a wireless controller, Leap Motion, or Myo armband for interactions.
  • The Unity SDK really limits the functionality of applications right now. For example, it’s almost impossible to figure out how to stream video to it from a server, or to create a simple socket connection to a server in order to provide input/data. (I am trying to use the headset to move a servo that is connected to a webcam for remote viewing.)

Overall, I learned a lot from a quick dive into developing a Unity application with the Google Cardboard SDK, but I’m not sure it’s ready yet for me to go much further. The biggest issues for me are the limited interaction and the lack of a usable cross-platform networking stack. I’m sure it will get better, but for now I’ll just have to be happy playing a couple of simple games and watching YouTube 360 videos.

If you want to see what I did as an entry point:

I used this “Roll-a-Ball” tutorial to get accustomed to developing in the Unity environment before I built this VRCameraDemo, which takes my phone’s back-facing camera feed and places it on a plane in front of the viewer to recreate what the camera sees. (Not very “virtual” reality.)