So, you want to write an application for Google Glass? You want to implement your big idea as an app for Glass. You want to create some Glassware.
With the Explorer Edition, there are two ways to approach Glass development. You can use the Google Mirror API, a power-efficient development platform designed for wearable technology. Alternatively, you can develop native Android-based applications for the device. It’s important to understand the differences between these two approaches.
For Android application developers, native development has the appeal of familiarity. You can target Glass using the same tools that you use for Android development. If it’s in SDK Level 15, it’s available to you on Glass. You can plug a micro USB cable into Glass and deploy over ADB just as you would with a smartphone.
When you sideload an Android application onto a Glass device, you immediately notice some shortcomings. As of this writing, Glass does not include a launcher. Disconnected from ADB, there’s no clean way to start an application without installing a separate launcher or engaging in hackery. The lack of this integration immediately makes a native application feel like a second-class experience.
As you develop screen layouts for your application, you will notice that Glass uses a non-standard Android pixel density. Elements on the 640x360 pixel display must be positioned using raw 1:1 pixels instead of device-independent pixels. While input fields will render, Glass has no built-in keyboard mechanism for text input, and the built-in speech recognition is unavailable to sideloaded Android applications.
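As a concrete illustration, the hypothetical Activity below lays out a view using absolute pixel values for the 640x360 display rather than converting from density-independent units; the class name, dimensions, and margins are only illustrative.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.Gravity;
import android.widget.FrameLayout;
import android.widget.TextView;

// A minimal sketch: size and position a view in raw pixels, since the
// Glass display reports a non-standard pixel density.
public class PixelLayoutActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        FrameLayout root = new FrameLayout(this);

        TextView label = new TextView(this);
        label.setText("Hello, Glass");

        // Absolute pixels on the 640x360 display, not dp converted
        // through the density scale.
        FrameLayout.LayoutParams params =
                new FrameLayout.LayoutParams(640, 60, Gravity.TOP);
        params.topMargin = 150;  // pixels, not dp
        label.setLayoutParams(params);

        root.addView(label);
        setContentView(root);
    }
}
```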
When running an Android application, the touchpad on the side of the device acts as a two-way left/right digital control, with the swipe-down gesture behaving like a back button. Tapping Glass in an Android application is the equivalent of pressing the center button on a four-way control or clicking the trackball on a classic Android device. While you can move focus, activate buttons, and navigate list views with this mechanism, the user experience is not optimal; Glass ends up feeling like a crude emulation of an Android device.
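Assuming the touchpad gestures reach a sideloaded Activity as the key events described above (tap as a d-pad center press, swipes as left/right, swipe down as back), handling them might look like the following sketch; the helper methods are placeholders.

```java
import android.app.Activity;
import android.view.KeyEvent;

// A sketch of handling the Glass touchpad in a sideloaded Activity.
public class TouchpadActivity extends Activity {

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        switch (keyCode) {
            case KeyEvent.KEYCODE_DPAD_CENTER:
                // Tap on the touchpad: treat it as a "select" action.
                onSelect();
                return true;
            case KeyEvent.KEYCODE_DPAD_LEFT:
            case KeyEvent.KEYCODE_DPAD_RIGHT:
                // Left/right swipes: move focus or change the selection.
                onNavigate(keyCode == KeyEvent.KEYCODE_DPAD_RIGHT);
                return true;
            default:
                // Swipe down arrives as the standard back key and is
                // handled by the framework (finishing the Activity).
                return super.onKeyDown(keyCode, event);
        }
    }

    private void onSelect() { /* activate the focused element */ }

    private void onNavigate(boolean forward) { /* move the selection */ }
}
```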
It is possible to pair a keyboard and mouse/trackpad with an Explorer Edition device. To do this, you must sideload a modified Android 4.0.4 Settings.apk, as Glass currently provides no built-in mechanism to pair with a Bluetooth HID device. Pairing these devices on Glass is a clumsy process, but it does provide a way to improve traditional user input.
Recognizing these significant limitations of sideloaded Android applications, Google announced that a Glass Development Kit (GDK) would be forthcoming. Presumably, the GDK would address the limitations around user input and provide a mechanism to launch native applications. As of this writing, the GDK has yet to make an appearance.
Given these limitations, why might someone create a native application for Google Glass? Native applications have nearly complete access to the sensors and the Glass hardware, including Bluetooth devices. These features are currently unavailable through the Mirror API. A native application also permits the developer to create a completely unique user experience outside the standard Glass user interface paradigm.
Unfortunately, the limitations of native applications are not merely functional. With its small 500 mAh battery, Glass can be drained by a native application at an incredible rate. If you have used the video recording capability of Glass or the built-in web browser, you have experienced this limitation firsthand. The Explorer Edition’s “all day” battery can be exhausted in a matter of a few hours running native applications.
Battery life is a severe limitation and challenge for development on Glass. The Glass projector is relatively efficient compared to most smartphone screens, but any code that makes heavy demands of the processor or video hardware will quickly run down the battery. If you treat Glass development like smartphone development, Glass will act like an Android phone with a tiny battery.
The Mirror API
There had long been speculation about how developers might build applications for Google Glass. When the Explorer Edition became publicly available, Google also announced the Mirror API as the official platform for Glass development.
Whereas native application development favors developers with Android experience, the Mirror API was designed from the ground up to leverage the skills of web developers. Yes, if you’ve built web applications, you have the basic skills necessary to create Mirror API applications.
The standard Glass Explorer Edition experience is driven entirely through the Mirror API. Timeline items, navigation, tap menus, and voice recognition are all facilitated through the API. Even location-based functionality can be supported through Mirror. By deferring the “heavy lifting” to web server-based applications, Mirror applications conserve power: under the Mirror API, Glass acts like a thin client.
As a thin client, Mirror API applications do require an always-on connection to the Internet. The Google Glass Explorer Edition is designed around having a reliable Internet connection.
When you develop applications using the Mirror API, you realize the benefit of all-day battery life. Your applications alert, notify, and interact with the user in the ways the normal Glass experience has taught them to expect. Mirror API applications inherit a set of user experience conventions that are consistent throughout the Glass experience.
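To make this concrete, here is a minimal sketch of inserting a timeline card from a web application, assuming the Google APIs Client Library for Java and a Mirror service object that has already been authorized with OAuth 2.0 for the wearer; the surrounding class and method names are illustrative.

```java
import java.io.IOException;

import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.NotificationConfig;
import com.google.api.services.mirror.model.TimelineItem;

// A minimal sketch of pushing a card to the wearer's timeline.
public class TimelineCardExample {

    public static TimelineItem insertCard(Mirror mirror, String message)
            throws IOException {
        TimelineItem card = new TimelineItem();
        card.setText(message);
        // Ask Glass to chime and notify the wearer when the card arrives.
        card.setNotification(new NotificationConfig().setLevel("DEFAULT"));

        // The insert runs on your web server; Glass simply syncs the card
        // over its Internet connection rather than running app code locally.
        return mirror.timeline().insert(card).execute();
    }
}
```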
For all of the “Now, this is Glass” power and capability, there are some limitations to using the Mirror API. Most of the sensors within the device are unavailable to you directly. While the Mirror API does provide location-based information, direct access to the GPS, accelerometer, gyroscope, and compass is unavailable. The API provides a great abstraction and experience, but if you want to hack the hardware to do something Google did not intend, you are in the wrong place. Applications built on Mirror are also constrained in that they cannot directly interface with Bluetooth devices.
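For example, location reaches your application as a Mirror resource rather than as raw GPS readings. A minimal sketch, again assuming an authorized Mirror service instance from the Java client library, might look like this:

```java
import java.io.IOException;

import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.Location;

// A sketch of reading the wearer's most recent location through the
// Mirror API instead of touching the GPS hardware directly.
public class LatestLocationExample {

    public static Location latestLocation(Mirror mirror) throws IOException {
        // "latest" identifies the most recently known location of the device.
        return mirror.locations().get("latest").execute();
    }
}
```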
While these limitations may be showstoppers for the hardware hacker, the Mirror API provides a developer with an outstanding set of tools to quickly create applications. Once you learn the Mirror API, you can create experiences for Google Glass in a matter of hours. Google has created an efficient platform that exploits the capabilities of the technology while remaining widely accessible to developers.
Yet, despite this technology foundation, why are developers having such a difficult time embracing the Mirror API? Part of the problem is that while Glass uses familiar technologies, it makes use of them in novel ways. It is also significant that developing for the Explorer Edition is dramatically different from the smartphone and tablet development that has dominated developer mindshare for the past five years.
Developers tend to stick with the tools that they are comfortable with and Glass is a dramatic departure from the Android smartphone. If you want to exploit the Glass hardware, you can write a native app. If you want to exploit the Glass experience, it’s time to take the plunge with the Mirror API.