Designing and Developing for Google Glass (2015)

Part III. Develop

Chapter 13. The GDK

The Google Glass ecosystem includes the ability to build client applications to be installed on Glass that interface directly with the system, doing things that largely aren’t available through services dependent on the cloud-resident Mirror API, including offline capabilities, sensor access, and real-time interactivity. The Glass Development Kit (GDK) is a library that extends the larger Android SDK by letting developers write full Android applications in Java and use associated tools for debugging, crash reporting, and analysis.

This chapter gives you an overview of the GDK, its capabilities, the distinct UI elements it provides for Glass, and design patterns for working with each type of UI element—the right way.

Mike DiGiovanni, an insatiably curious coder from New Jersey whose early add-ons for Glass included Winky, which later became the system wink-to-take-a-picture gesture, enthusiastically proclaimed about the GDK, “Native Glass development is, by far, the most exciting development that I’ve done in years.” Many throughout the Glass community happily echoed this sentiment.

Installed Apps Running on Glass

It’s important to note here that native development on Glass doesn’t change the core goals of the product—you’re still catering to microinteractions in a head-mounted display. This is key to being able to Think for Glass. The fundamentals of the Think for Glass philosophy don’t change at all; the GDK just provides a new set of tools at your disposal for building the perfect stage for your idea and enhancing interactivity in your own custom ways.

Here’s the billion-dollar secret about native Glassware that many in the media (including many covering the technology beat) got wrong: it didn’t just magically appear when the GDK was unveiled at Google’s San Francisco office at a hackathon before a group of anxious coders and reporters. If you use Glass freshly unboxed, without installing anything additional, you’re still using a lot of native Glassware all the time, presented in a couple of different ways. Several components of the core Glass firmware are applications providing an experience that completely honors the timeline model, in addition to living outside of it. In a twist of irony, GDK apps enhance the default wearable experience—and at the same time go rogue against it.

CAUTION

Here’s some terminology for those of you who are sticklers for detail, before you go running us out of town on Amazon. Technically speaking, “native” Android programming refers to the Android Native Development Kit, an SDK allowing C/C++ libraries to be used within Java-based projects. The moniker has become muddied somewhat in mainstream use in recent years to the point that “native” now implies “anything mobile” and “not on the Web.”

We’ll do our best to not let this get out of hand in this chapter. And since we won’t be covering the NDK, just know that in the context of our discussion on the GDK we’re using “native” to refer to the writing of Java code for Glass and apps that run locally on the wearable, not Glassware using the Mirror API.

This type of Glassware doesn’t necessarily force you to mess with OAuth or go to the cloud to do anything—everything’s running on the device as a compiled program. Let this inspire you as a developer! There’s lots of room to create here.

Rather than pit the two frameworks against each other in a programmer’s holy war, the GDK and Mirror API productively share space, with both using the timeline as a staging environment. They tightly integrate as partners, not rivals, in helping deliver a very convenient user experience. However, your application code and the timeline run in completely separate processes.

This is what we fully respect as the sheer brilliance of the Glass ecosystem—there’s great blurring between what’s running RESTfully and what’s a locally executing process. This is a good thing! The trained eye (which by this point in the book you certainly have) can quickly pick out an installed app separate from a cloud-based service, but the timeline is what stitches them together into a coherent, unified interface.

This produces great choice and great opportunity for you as a Glassware designer/developer. You may have an idea germinating in your brain that hinges on being able to read the user’s rate of acceleration and bearing at any precise moment in time. Or you might be thinking about presenting data as some cool animation or complex 3D diagram that adapts to changes in real time. Perhaps you’ve got a can’t-miss concept involving an innovative use of the Glass camera, or you demand extremely low-latency communications and system responsiveness. Maybe you just require some custom functionality that the Mirror API just doesn’t provide. There’s the possibility your development team may have a vast legacy library written for an existing app that does cool things you’d like to apply over the wearable paradigm. Or, you might want to make an entire universe unto itself and give people a new way of experiencing the world.

Whereas the Mirror API is an agile platform that abstracts a lot of the gory details of system programming so that you can rapidly iterate and turn your ideas into usable products, the GDK gives you much more control over the exact implementation of what you’re trying to do. The trade-off, of course, is that the craft of building applications that run on the device isn’t exactly a job most of us mere mortals can do during our lunch hours. It’s a very complex and involved process requiring unit testing, debugging, refactoring, and deployment; and having total control means sacrificing speedy development. The benefit to you if you’ve done Android development previously is that you’ll be using the same skills and APIs to build Glass apps.

What Is the GDK?

The Glass Development Kit is an additional library to the Android SDK that lets you code up features specific to Glass. It’s an additional Java archive (JAR) you bundle with your Android projects, giving you access to classes for programmatic control of Glass UX elements like voice recognition, gestures, and location data.

The GDK extends the core Android stack, which lets you work with UI widgets and layouts, core hardware components like sensors and GPS, and the management of activities, services, and broadcast receivers. Native programming also gives you more granular dominion over user interface elements, since your code is immediately responsive to user input.

The basic platform features several components from the Android SDK, which work, more or less, as they do on stock smartphones and tablets (for more on Android development, check out Learning Android, 2nd Edition, by Marko Gargenta and Masumi Nakamura):

Location provider

The user’s position on Earth.

Camera intent

Control the camera capturing for photos and video.

Recognizer intent

The system’s speech-to-text feature.

Options menu

Additional system controls.

The GDK add-on includes several components to let you programmatically work with Glass UI elements:

CardBuilder

Manipulate, arrange, and style atomic representations of data in timeline items.

CardScrollView

Manage navigation over a collection of static timeline items.

Live cards

Render and update content rapidly within cards that sit left of the home screen.

Voice triggers

Insert new voice commands into the “OK Glass…” menu to launch apps.

GestureDetector

Capture user movement and trackpad swipes/taps.

The Android SDK and the GDK make a powerful team for an integrated development environment (IDE). The add-on has tight integration with Eclipse as well as the newer Android Studio. Work is also ongoing by the community to turn App Inventor, the WYSIWYG application creation tool built by Google and now maintained by MIT, into a utility to build Glassware.

While you’re free to build practically whatever your imagination conjures up with the GDK, this also means respecting some limitations. Unlike the Mirror API, you won’t be able to pick any programming language you like—you’re doing Android development, which means you’ll be using Java. (More specifically, you’ll be working with the subset of Java for the Android SDK.)

For the most current instructions on setting up a development environment to build Glassware with your preferred development environment, review the GDK Quick Start section of the Glass Developers documentation.

DRAWING AND ANIMATION

The drawing interface for the GDK still includes OpenGL, meaning you can fully draw 2D and 3D graphics directly onto cards, or use an instance of the Android Canvas as a drawing surface. The Canvas drawing classes are typically only used these days for doing simple shapes and polygons. RemoteViews can only render layouts, but rendering directly gives you the full range of Android’s drawing capabilities.

The Mini Games pack of Glassware uses several physics and graphics libraries to achieve fluid movement in both 2D and 3D environments.

Because the Mirror API at the time of this writing doesn’t support JavaScript, you’re unable to do animation loops with that framework. So if you’re looking at doing a high-impact game (which may still be low-intrusion, depending on your implementation), you’re going to need to work natively.
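To make the distinction concrete, here is a minimal sketch (our own hypothetical example, not from the Glass samples) of drawing a single frame onto a surface with the standard Android Canvas APIs, the kind of call an animation loop running natively on Glass would make dozens of times per second:

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.SurfaceHolder;

public class FrameRenderer {

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    // Draws one frame onto a surface (for example, a surface obtained from a
    // live card that has been set up for direct rendering).
    public void drawFrame(SurfaceHolder holder, float angle) {
        Canvas canvas = holder.lockCanvas();
        if (canvas == null) {
            return; // the surface isn't ready yet
        }
        try {
            canvas.drawColor(Color.BLACK); // clear the 640 x 360 surface
            paint.setColor(Color.WHITE);
            canvas.save();
            canvas.rotate(angle, canvas.getWidth() / 2f, canvas.getHeight() / 2f);
            canvas.drawRect(270, 130, 370, 230, paint); // a spinning square
            canvas.restore();
        } finally {
            holder.unlockCanvasAndPost(canvas);
        }
    }
}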

The Glass team has a fantastic code repo on GitHub that shows how to work with OpenGL for things like shaders and textures and other fun topics like that. It’s definitely worth perusing.

And what’s perhaps most encouraging if you’re coming over to Glassware development from old-school Android coding—you don’t just have to use the GDK namespaces. The Glass team shared this gem with the community on Stack Overflow:

The API surface of GDK Glassware is not limited to the classes contained in the GDK Add-on. The GDK Add-on merely closes the gaps between the Android SDK and features that are unique to Glass. This means, in general, given a problem that isn’t covered by the GDK library directly, just attempt the Android solution.

How the GDK Differs from the Mirror API

The GDK empowers you to make distinct functionality part of your mobile applications. While it shares the timeline metaphor as a user interface with the Mirror API, writing native code for Glass gives you the ability to do so at a more granular level. Since everything happens locally on the device, you won’t necessarily need to ping the cloud each and every time you need to do something.

You’re able to reuse custom components within your project with the Android NDK, relieving you of the burden of having to port entire libraries over to Java. This can’t be easily done with the Mirror API. On the other hand, locally running programs don’t get to enjoy Glass sync, the intuitive, self-managed push backend (using Google Cloud Messaging) that Mirror API Glassware gets for free.

The GDK lets you explore three specific areas that aren’t available with the Mirror API:

Offline access

Since native apps don’t have to rely on constant connectivity, you’re fully capable of doing all processing and data storage locally. Your apps could be exclusively offline and never talk to the outside world; they could provide an option to sync on demand or periodically (like Android does for your contacts, calendar, Chrome, Gmail, and other app data); or they could fall back gracefully when connectivity drops, persisting data to a local store or cache and syncing when a network connection returns (like Google Drive does with its files).

Real-time interactivity

Since a network connection isn’t required, you’ll be able to capture and handle user events in true real time without latency or the need for server roundtrips. You’ll be able to respond to changes in microseconds.

Sensor access

Directly interacting with hardware is at the heart of native mobile programming. The GDK lets you process readings as users move or their environment changes. Google’s documentation cites the following sensors as programmatically available on Glass:

§ Sensor.TYPE_ACCELEROMETER: Rate of movement, including gravity

§ Sensor.TYPE_GRAVITY: Influence of gravity on the device

§ Sensor.TYPE_GYROSCOPE: Rate of rotation

§ Sensor.TYPE_LIGHT: Amount of ambient light around the device

§ Sensor.TYPE_LINEAR_ACCELERATION: Three-dimensional vector for acceleration along each axis, excluding gravity

§ Sensor.TYPE_MAGNETIC_FIELD: Ambient geomagnetic field around the device (the basis for compass bearing)

§ Sensor.TYPE_ROTATION_VECTOR: Orientation of the device, useful for measuring tilt and heading

You can find out more about Android sensors and the APIs for interacting with them in the Android Developers documentation. The Android Open Source Project also includes several software sensors, so keep current with the latest information in the docs.
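If you’ve written sensor code for an Android phone, the same SensorManager pattern applies on Glass. The following is a minimal, hypothetical sketch (the class name is ours, not from the Glass samples) that registers for accelerometer readings while the activity is in the foreground and unregisters when it pauses:

import android.app.Activity;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class SensorAwareActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;
    private Sensor accelerometer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // SENSOR_DELAY_UI is a reasonable rate for updating a head-mounted display
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Always unregister so an idle app doesn't quietly drain the battery
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0]; // acceleration along the x axis, in m/s^2
        // ...update your model or UI here
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // no-op for this sketch
    }
}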

Aside from the exclusive features you can build using the GDK, a few advantages available to you include:

Performance of installed apps on Glass hardware

As of the time of this writing, GDK Glassware is built for Android 4.4.2 as the target version (API Level 19 or higher), so be mindful that while the specs for Glass are comparable to a mid-range smartphone (see Chapter 3 for details), high-end and processing-intensive operations should be done with care. The Glass firmware manages multiple operations incredibly efficiently and does a good job of self-healing from stalls and crashes. Glass is a very capable computer.

Mirroring

Think about how Glassware may leverage modern releases of Android’s support for Miracast, which is built on top of WiFi Direct and bypasses the need for devices to share the same router for use in mirroring or second-screen experience apps to other displays. Glass might be used as a remote control much in the same way that Android phones and tablets can fling YouTube content to a monitor running Android TV.

Frontend presentation

In addition to being able to generate cards with text and images and insert them into the timeline like the Mirror API, the GDK provides you with a range of unique presentation elements that work with the Glass experience to give data new depth and create rich UIs using standard Android widgets as well as drawing custom graphics.

NOTE

Chromecast, Google’s amazing diminutive streaming dongle, wasn’t even out for a full day before we—and tons of other people on Google+—began speculating about and ultimately demanding how we might be able to use it in combination with Glass as a live screensharing medium. The screencast feature in the MyGlass mobile app is fantastic but limited to the device running it, and using debugging tools like Android Screen Monitor over a USB connection to get the Glass UI onto larger displays is a hacky setup. We’d like to see something that works out of the box and lets us send our timelines directly to an HDTV via Chromecast. Or even to huge theater and convention center displays or digital billboards. It’s too obvious not to do.

However, GDK development doesn’t have the full features of the Android application ecosystem just yet. As we write this, Glass development with the GDK doesn’t include Google Play Services. If you try to force the issue and bundle it or try to use the various features it provides, your app will likely break. We hope to see this change soon, but if you manage an app and are translating it for a wearable audience, it’ll behoove you to know what components aren’t fully supported.

See Porting Existing Apps to Glass: DON’T for more on this topic.

User Interface Elements of GDK Apps

When dealing with installed applications on Glass, you’ve got two UI stages on which your apps can perform: live cards and immersions.

Live Cards

Live cards involve content that’s updated frequently. While static cards from the Mirror API can be modified if need be, the content in live cards is expected to be altered more rapidly, even at a rate of several times per second. They live to the left of the home card (with the clock), and represent those events that are currently happening or will happen in the future.

This information may be from the Internet, like a ballot tally during an election; or from your device, like a sensor reading. Because they’re ongoing tasks, they run in a process separate from other cards, and in that respect are very much like the Android widgets that run on the home card of tablets and smartphones. The Timer, Stopwatch, and Compass apps are examples of live cards. So is the Settings card telling you how much charge your battery has, updating its content in response to events (in this case, the change in how much charge you have left). You can even have more than one live card running—as exhibited through all of your Google Now cards, all independently updating their contained information.

Another way live cards differ from static cards is that, while they’re still bound to the rules of the timeline, if the user swipes in either horizontal direction and moves away from a live card, it continues to run even when it’s not visible. When the user swipes back to the left of the home card or Glass wakes from standby mode, the live card is still right there, doing its thing. It doesn’t have to have focus to continue working. There are lots of neat ways you can use this type of card, since it doesn’t have to stay in view and doesn’t need the display to be actively on to work.

Live cards exist as Android services rather than statically generated cards or their own activities, and generally are used in situations where there’s not much input involved from the user. Examples like the Timer app allow a user to set a timer and then leave it alone as it animates a countdown. A major difference between static cards and live cards is that should Glass reboot, lose power, or run low on resources, live cards will be removed. The Ongoing Task pattern documentation that Google published shows how to use Android services running in the background with handlers to update live card content.

DON’T SKIMP ON “STOP”

Google’s review process requires that all live cards include a Stop command to properly dismiss them and kill their underlying service. This isn’t an idiom or pattern the Glass team takes lightly—if you submit Glassware for review and it doesn’t include Stop, your Glassware won’t be approved.

However, do make sure to kill processes if they run long or the wearer forgets about them. Being “live,” they should optimally have a shelf life and expire at some point. While modal dialogs and Android toasts don’t fit the Glass UX, informing the user that a live card has been running for an exceptionally long time and should be dismissed if it’s not needed (fittingly, with a static card and accompanying Glass alert) might be a good way to ensure your app doesn’t develop a bad reputation as a battery hog, and possibly receive negative ratings.
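As a reference point, here is a hypothetical sketch of the menu activity pattern for a live card’s Stop command; the service name and menu label are placeholders, and a menu XML resource works just as well as building the menu in code:

import android.app.Activity;
import android.content.Intent;
import android.view.Menu;
import android.view.MenuItem;

public class ScoreMenuActivity extends Activity {

    private static final int MENU_STOP = 1;

    @Override
    public void onAttachedToWindow() {
        super.onAttachedToWindow();
        // This translucent activity exists only to show the menu, so open it immediately
        openOptionsMenu();
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        menu.add(0, MENU_STOP, 0, "Stop");
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        if (item.getItemId() == MENU_STOP) {
            // Stopping the service gives it a chance to unpublish its live card
            stopService(new Intent(this, ScoreService.class));
            return true;
        }
        return super.onOptionsItemSelected(item);
    }

    @Override
    public void onOptionsMenuClosed(Menu menu) {
        // Nothing left to do once the menu is dismissed
        finish();
    }
}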

Two Flavors of Live Cards

How fast is fast? As a Glassware developer, you normally make the design decision to work with live cards over static cards based on the need to update a timeline item’s content within a few seconds. But it goes deeper than that. You’ve then got to determine how rapidly you’ll be replacing what the user sees or gets notified about, and the GDK gives you some options in determining how you control the on-screen content for live cards. Live card content can be generated either with low-frequency rendering or high-frequency rendering.

Low-frequency rendering uses a RemoteViews object to inflate and set values in View-inherited UI elements every few seconds or a couple of times an hour, like a change in a sports score, a weather update, or a stock price tracker. This method is much easier to configure and requires only a few lines of code. In contrast, high-frequency rendering comes into play when you’re drawing directly to the card’s surface dozens of times per second, like an animation loop or displaying real-time accelerometer readings as the user moves her head. This method requires more setup work to handle the rendering logic.

Live cards only support the following layouts and views within the Android SDK:

Layouts

§ FrameLayout

§ LinearLayout

§ RelativeLayout

§ GridLayout

Views

§ AdapterViewFlipper

§ AnalogClock

§ Button

§ Chronometer

§ GridView

§ ImageButton

§ ImageView

§ ListView

§ ProgressBar

§ StackView

§ TextView

§ ViewFlipper

See Google’s documentation for full examples of each type of rendering, and make sure you choose the one most appropriate for the content you’re working with and the experience you’re aiming to deliver.

Android services typically drive the lifecycle of live cards, given that they still need to run if the user navigates away from them or Glass falls asleep after nonuse. GDK Glassware can be run as a service that starts when Glass first boots up—if you’re familiar with Android development already, you’ll enjoy the ability to create background processes.
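To tie the pieces together, here is a minimal, hypothetical sketch of a live card published by a service using low-frequency rendering; the layout, view IDs, and the ScoreMenuActivity shown earlier are placeholders rather than names from Google’s samples:

import android.app.PendingIntent;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.widget.RemoteViews;
import com.google.android.glass.timeline.LiveCard;
import com.google.android.glass.timeline.LiveCard.PublishMode;

public class ScoreService extends Service {

    private static final String CARD_TAG = "score_card";
    private LiveCard liveCard;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (liveCard == null) {
            liveCard = new LiveCard(this, CARD_TAG);

            // Low-frequency rendering: inflate a simple layout and set its values
            RemoteViews views = new RemoteViews(getPackageName(), R.layout.card_score);
            views.setTextViewText(R.id.score, "0 - 0");
            liveCard.setViews(views);

            // Every live card needs an action, typically a menu activity with a Stop item
            Intent menuIntent = new Intent(this, ScoreMenuActivity.class);
            liveCard.setAction(PendingIntent.getActivity(this, 0, menuIntent, 0));

            liveCard.publish(PublishMode.REVEAL);
        }
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        // Unpublishing removes the card from the timeline when the service stops
        if (liveCard != null && liveCard.isPublished()) {
            liveCard.unpublish();
            liveCard = null;
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}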

STARTING SERVICES AT BOOTUP?

Because Android services that run constantly can be battery killers, the practice is discouraged for third-party Glassware publishers. But developers will no doubt still want to start a service for a live card when Glass is first turned on and have it run as long as the device is on. Consider, for example, the system software that controls the various Google Now cards—each item is a live card and the service driving them spins up when you first boot Glass, running for the duration of your session, and even while the device is suspended when Glass is idle or off your head.

Unfortunately, this level of control isn’t available to external developers at the moment, as the rules for approval of your Glassware for listing in MyGlass require you to have a Stop command. Still, the best part of Glassware review (which we detail in this book’s final chapter) is that you get to work with Google directly on implementing your idea in the best way possible.

If you have a genuine use case that demands such an experience, you’ll get help for the best architecture to use to make it come to life—even if that isn’t a perpetual background service.

Good luck!

Two of the most widely used purposes for services are to schedule time-specific or periodic publishing of timeline cards, and to handle long-running operations asynchronously in a dedicated thread, like broadcasting system-wide alerts under certain conditions based on sensor input, or downloading assets from the Internet. A funny example of this based on the user’s context might be generating a warning card when the user looks straight up during daylight hours, warning him, “Hey, stupid! Don’t look directly into the sun!” which is the type of contextual computing experience that being able to Think for Glass is all about.

Lastly, it’s important to note that live cards aren’t just for dynamic content—they also work fantastically as dynamic containers for data. A perfect example of this is ViewTube for Glass, which lets you enjoy content from YouTube without forcing the timeline to stay in a fixed position while the video is in playback. You can search for videos, then stream the clip and swipe away from the ViewTube card (essentially an embedded media player), and do other things or let Glass go to sleep while the content continues to play.

It’s a tremendous user experience to not force the user to remain on a certain location on the timeline or be within the Glass browser. If you wanted to adjust the volume, you could swipe to the Settings bundle and make your tweaks there. Pandora’s Glassware achieves the same effect while including a menu item within the live card to control the volume level.

In both cases, the live card gives the user freedom while still updating its content—ViewTube with matching search results, Pandora with the next available track.

The docs on managing ongoing tasks with live cards give you a great springboard for getting productive quickly, so do read up.

Immersions

This is the type of UX that most people will immediately think of when they first hear the term “Glass apps.” Immersions are dedicated Android components, and as such are the most complex type of element for GDK apps because they give you the most autonomy. You’ve got carte blanche as far as the experience you want to create, which can be separate from the user’s timeline altogether or directly integrated with it for a seamless transition. You also have fewer constraints on user input controls than you do with other UI types—if you want to require a Bluetooth keyboard, you can do that, too.

Immersions are also the most challenging type of interface element to build. And true to their name, they demand the most prolonged attention from their audience, which means the display could stay active for the duration of their use—meaning they can potentially devour CPU cycles and be battery killers if you’re not careful. You don’t want Glass to run hot. Consider the system application “Show the viewfinder.” It activates the camera on a persistent viewable surface with a framing tool…which if left running for more than 20 seconds starts to notably heat up Glass. It’s a really good utility but needs to be used responsibly.

Immersions and their role in the Glass ecosystem are also misunderstood. Developers often think that an immersion has to be stared at for long periods of time, like with games or apps using the camera, when there are instances where this isn’t the case at all. The navigation system application displays a map contextually accurate to the user by using location provider features, animates the position on the map as the user moves about, and reads turn-by-turn driving directions aloud. But, the app doesn’t force the user to stare at it—it auto-dims the display if there’s no active use after a few seconds, like the timeline does.

“OY, MY ACHING BATTERY…”

We’re tragically projecting that the majority of native apps built for Glass will unnecessarily be built as immersions, and that this will largely lead to poor performance because of the processing power, memory, and battery charge they demand. Remember, having the Glass projection unit active consumes the charge on your battery more dramatically than static and live cards do, so unless you’re developing an app where the display needs to stay on with information constantly resident, let Glass naturally go dormant.

An app like GolfSight allows the user to sleep Glass without dismissing the app running in memory. This is good program design to emulate.

So as far as our projection, we hope we’re wrong. Someone please prove us wrong.

In contrast to live cards, immersions are built to harvest as much input as they can. You’ll want to use them as the surface for UIs like games where there is a high degree of interaction with the app. This is the main reason they live separately from the timeline—when interacting with items on the timeline, input you give Glass typically directs the navigation, sending you forward or backward through your items, or selecting menu items or launching new Glassware. Immersions are independent so that they can capture all of those gestures and use them as input for a completely different interface experience.

An immersive app exists as its own activity, and can capture trackpad and gesture events completely differently than Glass does for other services. Still, while applications like games may create custom input control elements, it’s best practice to follow the idiomatic uses for swipes and taps (i.e., swipe forward to advance, swipe backward to go back, tap to select, etc.) so users don’t get confused with their wider use. Since they exist off the timeline, immersions also don’t necessarily leave the breadcrumb trail of cards that other app elements might. This can work for or against you—you may want to create an archive of a user’s search activity against your recipe database for historical purposes and quick reference, for example.

However, because they live outside of the friendly confines of the timeline, immersions also lack some of the conventions that other UI elements have. Namely, immersions don’t naturally dim the display after a certain period of time. They stay on until dismissed or their containing application is exited, hence their reputation for draining the battery. They also don’t have any concept of bundles, so you’re on your own when transitioning between screens and launching activities.

And just like live cards have to have a Stop action, Google enforces a design rule for immersions: they all have to support the familiar downward swipe gesture on the trackpad to dismiss the activity, which kills it in memory and exits back to the timeline.

One word of caution: while immersions are very good and very powerful, they don’t give you license to violate the Noble Truths for Glassware design as described in Chapter 5. The Glass “Get directions” application is a good example of this: the display promotes a glanceable, quick reference interface for navigation, but it takes over the system the same way an immersive screen does.

You can learn more about proper immersion development patterns from Google’s documentation. Table 13-1 summarizes the characteristics of each type of native UI element.

Table 13-1. Components of GDK Glassware

Live cards

Features: Content constantly drawn on cards, low-frequency updates, high-frequency updates

Example: Animation, timers, location

Immersions

Features: Environments separate from the timeline, exist as their own activities, handle touchpad taps/swipes and head gesture events

Example: Games, camera invocation

One final note about live cards and immersions: neither allows programmatic control of wake locks to dictate how the device goes to sleep. Live cards are managed by the Glass system to suspend naturally, and immersions stay on. It’s that simple.

More Tools for Rapid Design

When designing your GDK Glassware, a free tool you’ll find indispensable is the Glassware Flow Designer as seen in Figure 13-1, a web-based flowchart creator that Google built to help you rapidly lay out and visualize how your app will look and behave. Like the Mirror API Playground, it lets you apply Glass-formatted templates to get a snapshot of how content will appear in cards, get a bird’s-eye view of how intuitive your usability process is, and possibly identify areas where things can be streamlined. You’ll be able to quickly spot redundant menu items or rearrange their order based on which ones will be used most frequently.

Glassware Flow Designer, shown in Figure 13-1, can be used with any Glassware project, but we find it works especially well for charting out live card projects. Because it runs on Google Drive, you can share designs with other people and collaborate on putting flows together, editing them, and presenting them prior to beginning the fun task of writing code.

Figure 13-1. Glassware Flow Designer, showing the Stopwatch Glassware’s flow

However, Glassware Flow Designer isn’t intended to be an all-in-one solution. It’s effective for assembling an overall UX flow, but not necessarily for denoting things at the component level. It doesn’t currently differentiate between cards and other Android components like services or broadcast receivers, databases, or Java objects. Justinmind Prototyper Pro is a fantastic tool that lets you visualize gestures, swipes, and taps, and even canvas objects. GlassWireframe denotes actions, multimedia, and sensor use.

So if you prefer more detailed sketching when putting specs together for complex applications, these tools are great to have handy.

Whichever you go with, they’ll help speed up your production process. When teaching Glassware design sprints, Allen has noticed that teams using Glassware Flow Designer greatly reduce their turnaround time for prototyping and can quickly turn ideas into working products.

WHAT’S THE OPTIMAL IMAGE DENSITY?

There’s a valid concern that astute Android developers will have about the optimal density bucket to use for images in GDK apps. To date, the Android documentation notes the linear relationship between screen size and pixel density—the larger the display, the larger the density. But Glass, once again, proves different—the resolution for the tiny prism display is 640 x 360, with the effect being viewing a 25-inch high-definition display from 8 feet away. What density bucket should we use in our Glassware?

Google recommends that drawable resources for Glass be stored in at least the /res/drawable-hdpi folder in your IDE. That’s a good baseline to follow to ensure clarity and performance.

It Was Native All Along!

As a means of illustrating what native apps can do, Table 13-2 presents some recognizable examples of the GDK elements at work using the Glassware that Glass ships with.

Table 13-2. Examples of Glass UI elements

Static cards: Nearly everything with the Mirror API, Google Search results

Live cards: Voice Call, “Set wake angle” Settings card, Google Now cards

Immersions: Hangouts, Record video, web browser, navigation, “Bluetooth” card in Settings, camera

Now ask yourself—how many of these surprised you at being native components and not generated by the Mirror API? As a bonus, what element do you think the home card uses? Gold star for you if you guessed live cards! The card updates on its own with the current time, synced to the network and displayed in your local time zone. It also dynamically displays certain system status messages like “In a phone call” or “Glass must cool down to run smoothly,” which appear in response to system events. Neat, huh?

GLASS GETS A FAILSAFE AGAINST OVERHEATING

Even though the Glass hardware is designed to direct the heat generated by extended use of the processor and/or display away from the wearer’s head (the heat can be felt directly on the touchpad, opposite the user’s skull), a defensive mechanism exists that prevents apps from running if Glass was already running hot, as shown in Figure 13-2. While you can (and should) code your app in ways that minimize intense processing or displays that stay active for long durations, Glass will mercifully override any attempt to launch your app and block it until it cools down.

Installed apps using immersions and the onboard sensors are prime candidates that can trigger such a condition, as they impose a certain load on the projector and the processor, respectively. So code smart, but also know that Glass is looking out for you.

Glass proactively suppresses any apps from running if it detects that the device has been working too hard and is running hot. This normally takes about 30-45 to clear.

Figure 13-2. Glass plays it cool

Also keep in mind that these apps demonstrate an advantage of hybrid Glassware, using elements of both native components and static Mirror API cards. (We’ll get to this shortly!) By the same token, when using the “Get directions” voice command everything is presented in terms of Mirror API elements, but once you tell Glass to give you turn-by-turn directions, the UI shifts to a live card that updates its visual appearance AND speaks updates aloud, sitting to the left of the home card. Navigation thus mixes hybrid elements, pairing static search-result cards with immersive environments. The web browser handles any URLs within static cards. And the “Factory reset” option in the Settings bundle invokes a low-level program that restores the system to its out-of-box default state.

When sending or receiving a voice call, the GDK creates a live card with the contact you’re talking to left of the home card, with a call counter to keep the elapsed time. Upon termination of the call, a static card is inserted as a call log to the right of the home card with the number you spoke with and the time spent on the call. This is emblematic of the design strategy we’ve been teaching you that neatly puts what’s happening apart from what’s already happened. The right tool for the right job.

Some of the native components that control the camera also map their commands not only to menu items and voice commands, but also to a special hardware control—the shutter button. Override the onKeyDown() method in your activity to process shutter button presses.
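A minimal sketch of that override might look like the following; the takePicture() helper is hypothetical and stands in for whatever capture logic your app needs:

import android.app.Activity;
import android.view.KeyEvent;

public class CaptureActivity extends Activity {

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        if (keyCode == KeyEvent.KEYCODE_CAMERA) {
            // The wearer pressed the physical shutter button; handle the capture
            // ourselves and consume the event so the stock camera doesn't also fire
            takePicture();
            return true;
        }
        return super.onKeyDown(keyCode, event);
    }

    private void takePicture() {
        // ...launch the camera intent or use the Camera API here
    }
}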

The GDK Object Model

The classes you’ll be working with when using the GDK are organized under the com.google.android.glass namespace. You can review the classes and their supported methods, properties, and events from your IDE’s autocomplete feature, or by reviewing the class reference documentation online.

The classes cover operations that let you work with cards (static and live), hardware, menus, the touchpad and accompanying user gestures, and timeline dynamics. The touchpad API includes interfaces for you to implement for the various types of tapping and scrolling you’re able to capture. It also has an enumeration of values to identify the type of user input captured, such as directional swiping, two- and three-finger long pressing, and double-tapping.

Packages

com.google.android.glass.app

Model for structuring cards; voice trigger operations to invoke the app from the main menu.

com.google.android.glass.content

Defines the explicit intent actions and extras specific to Glass.

com.google.android.glass.media

Extends the Android Camera API for capturing still images and video.

com.google.android.glass.timeline

Model for live cards and to interact with the timeline.

com.google.android.glass.touchpad

Recognize touch gestures.

com.google.android.glass.view

Extensions for Menu and WindowManager classes.

com.google.android.glass.widget

Special views that let you implement horizontal scrolling for navigating through collections of cards (the Glass version of a ListView).

This list may change with subsequent platform releases, so bookmark the changelog to see what’s new.
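As an illustration of the touchpad package, here is a minimal, hypothetical activity that routes raw touchpad events into the GDK’s GestureDetector; the gestures handled and what they do are placeholders for your own logic:

import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.view.MotionEvent;
import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

public class TouchpadActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        gestureDetector = createGestureDetector(this);
    }

    private GestureDetector createGestureDetector(Context context) {
        GestureDetector detector = new GestureDetector(context);
        detector.setBaseListener(new GestureDetector.BaseListener() {
            @Override
            public boolean onGesture(Gesture gesture) {
                if (gesture == Gesture.TAP) {
                    // select the current item
                    return true;
                } else if (gesture == Gesture.SWIPE_RIGHT) {
                    // advance to the next item
                    return true;
                } else if (gesture == Gesture.TWO_TAP) {
                    // perform a secondary action
                    return true;
                }
                return false;
            }
        });
        return detector;
    }

    // Glass delivers touchpad input as generic motion events; hand them to the detector
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return gestureDetector != null && gestureDetector.onMotionEvent(event);
    }
}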

System Intents

The GDK also lets you make use of certain system applications provided by Android through implicit intents. If you have the need to let the user go out onto the Web, you can launch the Glass web browser. If your application involves geolocation and directions, you can call up the Navigation app for Glass and pass it a data URI with a series of parameters that assemble the turn-by-turn sequence as the users move toward their destination. You can also detect when a user is wearing Glass or has taken the headset off, in addition to getting paths for photos and videos the user has captured.

You could combine these ideas for a wearable pizza delivery application—creating a live card for a delivery driver that contained the customer’s name with a timer counting down the time taken to fulfill an order. That card might include custom menu items for providing driving directions to the customer, as well as letting the driver look up specials and promotions on the Web.

In case you’re pondering using system software in your own interfaces, here’s a rule of thumb: a live card can launch an immersion, but an immersion as we’ve noted has to be canceled via downswipe with the user returned to the timeline before other actions can be taken. So if you’re planning out an app and sending the user to the browser or navigation is key, use a live card as your launching pad. Your app can even register an intent that can be launched via a menu item in a static card. So, you’ve got options.
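For example, launching the Glass web browser from a menu item is just a standard implicit intent; a hedged sketch (the URL and menu wiring are placeholders) might look like this:

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

public class SpecialsMenuActivity extends Activity {

    // Called when the wearer picks a hypothetical "View specials" menu item
    private void openSpecialsPage() {
        // A plain ACTION_VIEW intent with an http URL resolves to the Glass web browser
        Intent browse = new Intent(Intent.ACTION_VIEW, Uri.parse("http://example.com/specials"));
        startActivity(browse);
    }
}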

See the documentation on using system intents for implementation details.

THE MAGIC OF BRIDGED NOTIFICATIONS

One of the ways the Glass ecosystem continues to expand outward and integrates with the larger Android family is how Android Wear uses notifications, and how they’re synced between handhelds and their connected wearable devices. The magic is bridged notifications, which takes the default notification APIs for Android handheld programming and magically makes them work on wearables.

Bridged notifications represent an evolution of Android intents and give you great leverage as a programmer—messages can now not only be passed explicitly between components within the same application and implicitly between components within different apps resident on the same device, but also between devices connected through the same Google account, with data synchronization across devices being automatic and nearly instant. This is incredibly powerful.

Users are constantly connected to their personal network of two or more nodes they’re signed into through their Google accounts. We’ve got similar notifications on Glass through Notification Sync, so this is very exciting.

On-Head Detection Halts Running Apps, Too

When enabled, the On-Head Detection feature from the Settings bundle acts as an automatic traffic cop for running apps. If a user removes Glass while a native app is running, Glass goes to sleep, and certain actions happen depending on the GDK element.

Live cards are paused but continue to run in the background and resume once Glass is returned to the user’s head, retaining their state. Things are a little more unforgiving with immersions, though—any executing immersion is killed outright and must be restarted from scratch when Glass is put back on. This could result in a loss of data, so if you are planning an immersive app, you may want to add a warning to users to not remove Glass, or use the Android lifecycle callbacks that are fired when running activities are halted to save any progress (typically in the onPause() callback). Or better yet, bake in a periodic autosave feature.
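A minimal sketch of that safeguard, assuming a hypothetical puzzle immersion whose only state is a level and a score, could persist progress to SharedPreferences every time the activity loses the foreground:

import android.app.Activity;
import android.content.SharedPreferences;

public class PuzzleActivity extends Activity {

    private int currentLevel;
    private int score;

    @Override
    protected void onPause() {
        super.onPause();
        // On-Head Detection (or simply taking Glass off) can kill an immersion
        // outright, so persist progress every time we lose the foreground
        SharedPreferences.Editor editor = getSharedPreferences("puzzle_state", MODE_PRIVATE).edit();
        editor.putInt("level", currentLevel);
        editor.putInt("score", score);
        editor.apply();
    }

    @Override
    protected void onResume() {
        super.onResume();
        SharedPreferences prefs = getSharedPreferences("puzzle_state", MODE_PRIVATE);
        currentLevel = prefs.getInt("level", 1);
        score = prefs.getInt("score", 0);
    }
}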

What happens to an application when the user takes Glass off is an oft-overlooked condition that needs to be accounted for as part of proper defensive programming. Don’t let your users find out the hard way.

Hybrids: The Ultimate Glassware Challenge (and Experience!)

It’s important to note that the two frameworks for Glass development aren’t mutually exclusive. While the Mirror API and GDK stand on their own in order to create cloud-dependent services and installed applications, respectively, you can also combine the two schools of thought to make a really engaging wearable experience. Hybrid Glass applications (designated in Table 13-3 as “Mirror API + GDK”) use the best of both worlds—the quick and lightweight push nature of timeline cards with a full-blown custom native UX (Figure 13-3).

Hybrid Glassware uses static cards on the timeline generated by the Mirror API that launch Android activities built with the GDK.

Figure 13-3. Flow for hybrid Glassware (image courtesy of Google)

The communications bridge between the two frameworks is a menu item on the timeline—more specifically, the OPEN_URI built-in menu item value from the Mirror API with a corresponding URI as a value for the payload property. You’d normally use this for redirecting the users to a link on the Web after they select the menu item, but it can also be used to launch a specific activity within a native app. In addition to the system-level intents that can be called, specifying a URI within the scope of your own application like content://com.book.thinkforglass.totallyawesomesocialapp/status/8675309 jumps right to that screen, which would fire an activity registered to receive that URI type in your application’s AndroidManifest.xml. But, you could also specify a Java class that makes use of the camera or sensors.
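On the GDK side, the receiving activity simply reads the data URI off the launching intent; a hypothetical sketch (assuming the activity’s manifest entry declares a matching intent filter for that content:// scheme) might look like this:

import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;

public class StatusActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The OPEN_URI menu item delivers its payload as the intent's data URI
        Uri data = getIntent().getData();
        if (data != null) {
            String statusId = data.getLastPathSegment(); // e.g., "8675309"
            // ...load and render that status natively
        }
    }
}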

So you could spend the time and build out a very robust installed application with the GDK to handle long-running jobs and local storage, but use the Mirror API as a flexible frontend that doesn’t have to be (re)compiled and (re)distributed whenever you make changes to it.

Table 13-3. Classification and requirements for GDK apps, Mirror API services, and hybrid applications

Mirror API

Category: Cloud service

Language/framework: Any server-side framework

Authorization: OAuth 2.0 (required)

Features: Requires connectivity, auto-synced

GDK

Category: Installed application

Language/framework: Java, Android SDK

Authorization: AccountManager

Features: Offline, real-time interactivity, sensor access

Mirror API + GDK

Category: Hybrid

Language/framework: Combination

Authorization: Combination

Features: Menu items in static cards launch native activities

Let’s look at an example…one that you’ve probably been using a lot and may not have realized uses both frameworks. Hangouts is a shining example of hybrid Glassware, using Mirror cards and menu items as its UI and a handle that invokes the Call activity natively. In this case, we have an Android application that doesn’t have a UI of its own in the traditional sense but exists as a series of components that wait to be launched by Mirror actions.

The actual chatting feature is also a bonus lesson in Glassware design, showing how bundles of static cards, the standard menu items, and the Timeline object can be used for group messaging. Replying is done with the stock Reply menu item, and the mosaic of conversation participants is populated with the Timeline.recipients property—which is all Mirror API. What’s interesting to note is that the telephony actions for voice calling go through the smartphone to which Glass is connected via Bluetooth, but the chat element runs locally on the device and talks to the cloud directly—so consider how Android is handling the networking for a multifaceted communications application like Hangouts.

The way Hangouts is built should give you some good ideas about how to combine the two frameworks to really do some cool things and not be limited by using just one.

Also, currently if you want to programmatically manipulate static cards you’ll have to use the Mirror API from a GDK app—but that’s OK! You’ll need to use the network and authentication techniques, which is a beautiful segue into our next section…

Authentication

GDK apps can run without requiring authentication and can exist just fine without doing the OAuth dance that Mirror API Glassware needs to when communicating with a remote resource. However, there will certainly be cases where native Glass apps need to talk to RESTful APIs, so there is a facility for using standard Android programming techniques, although it’s probably a different method than you’re accustomed to.

This attacks the single most frustrating problem of authenticating on Glass versus authenticating on mobile or desktop: the lack of a keyboard. We mentioned earlier how entering passwords could be a daunting task, given that most systems these days require combinations of characters with at least one numeral, possibly one uppercase character, and likely one special character like an ampersand. As rich as the Glass speech-to-text engine is, it’d never be able to decipher complex credentials. So instead of forcing users to authenticate the first time they use an application, they authenticate when they install the application. And since that installation is done from a mobile or desktop device, we can take advantage of the keyboard those devices naturally provide for us. By the same merit, users are able to review and agree to the permissions an app requires, which wouldn’t be so easy on a wearable.

The solution highlights that one of the principles behind effective Glass design is—again—using the right tool for the right job.

In a nutshell, GDK authentication dictates that when a Glass user installs native Glassware requiring authentication with a third-party service, the user is prompted to sign in, and upon completion the user’s account is pushed to Glass. From that point on, the Glassware talks to the API on behalf of the user. And everything happens through MyGlass via its mobile application or on the Web. It’s that simple. This is a much more fluid system than having to build, maintain, and integrate a custom authentication platform of your own. At a macro level, it also ensures consistency in the process wearable users experience across their Glassware.

And here’s the trick to how it all works: even though this is a GDK app, it’s the Mirror API that pushes a user’s account onto his device. Cool, huh?

GDK apps use an Android AccountManager object to handle access to their identity with the service. The authentication process uses the special Accounts resource to grant applications access. Unlike the OAuth flow used with Mirror API Glassware, which uses a project registered as a web application in the Google Developers Console (as we detailed in Chapter 8), GDK authentication flows rely on a service account. The fundamental difference between the two is that a service account handles server-to-server communications by authenticating a service rather than a user. See Google’s documentation for a step-by-step walkthrough of how to create and configure a Google API service object.

Authenticating GDK Glassware works as follows (Figure 13-4):

1. A user enables your GDK application via MyGlass, which redirects the browser to your login page. This request is made to your server with a userAuth parameter.

2. On your login page, the user submits her credentials. (A special OAuth scope, https://www.googleapis.com/auth/glass.thirdpartyauth, is required when using this method.) In this step, any Android permissions your app requires are also declared to the user.

3. Upon successful validation of the user’s credentials, your backend calls the Mirror API’s mirror.accounts.insert endpoint with a JSON-formatted request body describing the account and its capabilities.

4. The Mirror API sends the user’s account to his Glass device, which is then available via an Android AccountManager object. See Google’s documentation for required permissions and associated APIs for account retrieval.

Figure 13-4. The GDK authentication flow (image courtesy of Google)

IF YOU THOUGHT OAUTH WAS DIFFICULT…

Talking to service accounts server to server is a fairly complicated process, decidedly more than the user-to-server model that OAuth enforces. So it’s highly recommended in the interest of preserving your sanity and social life to use the Google API client libraries, which handle the complex plumbing for you.

At the moment GDK authentication is only available to you after your APK has been uploaded by Google to MyGlass, so you’re unable to work with the API locally on your development machine until then. (Just like when authorizing Mirror API Glassware for testing, you’ll see your app appear in MyGlass when signed in with your account, but it won’t be available for all Glass users until it’s approved and officially goes live.) Google’s iterative Glassware review process, which is one of many benefits of having your Glassware approved and cataloged, doesn’t mandate that your project be a ready-to-go production app—you can submit a rough prototype and indicate where you’ll be implementing web APIs. You’ll be able to build and test features while your project is under review.

Authentication continues to evolve, so keep an eye out for new developments in this space as it reaches maturity.

ACCOUNTMANAGER FOR OTHER TYPES OF CONFIGURATION

One of the neat tricks about the way Glass reads AccountManager data is that you can use it to set and change other configuration settings. The AccountManager object model takes a number of key/value pairs for the userData property. Give it a spin.
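A hedged sketch of reading that configuration back on Glass, assuming your backend declared the account type shown here (the type string and key names are placeholders) and your manifest declares the permissions Google’s documentation calls for:

import android.accounts.Account;
import android.accounts.AccountManager;
import android.content.Context;

public class AccountHelper {

    // Whatever account type your backend supplied to mirror.accounts.insert
    private static final String ACCOUNT_TYPE = "com.book.thinkforglass";

    public static String getConfigValue(Context context, String key) {
        AccountManager manager = AccountManager.get(context);
        Account[] accounts = manager.getAccountsByType(ACCOUNT_TYPE);
        if (accounts.length == 0) {
            return null; // the user hasn't enabled this Glassware via MyGlass yet
        }
        // userData key/value pairs set during account insertion travel with the
        // account, so they can double as lightweight configuration
        return manager.getUserData(accounts[0], key);
    }
}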

There’s a good question to be asked about what happens to the workflow and the installation process if you later add sign-in to access some sexy new RESTful API down the road. The good news is that the authentication prompt will kick in automatically, forcing the flow.

Just like applying a new permission after the fact in a subsequent release of your app, the new enhancements take effect the next time around.

Writing Native Code for Glass

Whether you’re using Eclipse or Android Studio, you’ll need to physically connect Glass to your development machine via the micro-USB cable and then enable Debug Mode in the Settings bundle on Glass, which turns on the ADB and registers your headset as an available device. This lets you run your app live with all of the UX functionality and controls—sensors, taps, swipes, voice commands, and gestures. You’ll be running and debugging your code (hopefully more of the former and less of the latter) live on Glass.

When you test builds of your application, your IDE will compile your code into an .APK file and then install it on Glass, just like running an app with your smartphone connected. To make sure Glass is being recognized as a connected device, check the Devices panel in Dalvik Debug Monitor Server (DDMS; Figure 13-5). It should be registered and listening for events, with the output being logged in real time to LogCat.

Both IDEs let you specify that you’ll be compiling with the GDK, which bundles the necessary JAR library into your project. If you’re using Eclipse, creating a new project gives you a boilerplate Android application that will render “Hello world!” in a fairly pedestrian static card that launches immediately when you run the app. You can investigate the project’s structure and its code on your own to see how it sets the various values.

Figure 13-5. Glass recognized in DDMS view in Eclipse

GETTING GLASS DRIVERS FOR WINDOWS

A glitch that many developers using Windows machines noted early on was incompatible USB drivers for Glass, which blocked the use of ADB on that platform. Eclipse and Android Studio have both made the correct driver available through SDK Manager, but if you continue to run into problems, our buddy Andrew Pritykin produced a helpful tutorial video showing how to update and enable the drivers so that you can install your apps properly and use all the helpful features of ADB.

Of course, Mac OS and Linux users don’t need to worry about this minor inconvenience. Carry on.

If you’re using Android Studio, selecting the Glass form factor when creating a project generates either a live card application, an immersion, or a blank project that you fill in manually (Figure 13-6). The live card and immersion apps are great learning resources and easily expandable as you become comfortable with each API. They also demonstrate the recommended coding patterns to optimize performance and usability for both a live card (by using a service) and an immersion (flow, UI, and program control).

Once you’ve got the hang of setting up simple bare-bones GDK apps, you can tackle some of the more advanced sample projects Google’s published for complete end-to-end native apps.

Figure 13-6. Setting up a Glassware project in Android Studio

PRODUCING NATIVE APPS THROUGH OTHER FRAMEWORKS

There’s lots of work being done to produce native apps on Glass by using the less-complex and more rapid web stack—HTML, CSS, and JavaScript. This architecture is being used within the Glassware development community by ambitious projects like WearScript and PhoneGap, which access sensor readings via web interfaces.

Testing Native Glass Applications

We’re not going to outline coding up an entire GDK application, due to the fact that (1) the GDK is still evolving and methods, properties, and events are subject to be added, renamed, or outright removed at any time; (2) many of the features are still undocumented as the GDK is still in the Developer Preview stage; and (3) quite frankly, Android apps require a lot of files and documenting them takes up a lot of space. If you know how to create Android apps, the sample projects and poking around GitHub for cool repositories people are working on will get you started. If you’re new to Android programming, check out the GDK documentation, which walks you through a primer on mobile coding.

Visually, the GDK relies on a Glass theme that sets an application to full screen without system elements like the status bar, action bar, clock, or battery life indicator that you’re used to seeing in other Android form factors. To achieve the transparent effect of menu items that float above their associated app like you’re used to on the timeline, it’s helpful to define a custom style in your app’s /res/values/style.xml resource file and set it as a theme for activities containing menus. This ensures your layouts, fonts, and UI elements match the Glass UX:

<resources>
    <style name="MenuTheme" parent="@android:style/Theme.DeviceDefault">
        <item name="android:windowBackground">@android:color/transparent</item>
        <item name="android:colorBackgroundCacheHint">@null</item>
        <item name="android:windowIsTranslucent">true</item>
        <item name="android:windowAnimationStyle">@null</item>
    </style>
</resources>

And as noted earlier, you’re still working exclusively in landscape mode, with the viewable surface wider than it is tall at 640 x 360.

There are some notable differences between static cards in the GDK and those created with the Mirror API in regard to the amount of formatting control you get. The GDK does include several of the layout templates that the Google Mirror API Playground provides. You can specify full-bleed background images, icons, footers, and timestamps and let Glass handle the formatting. Plus, you’ve still got the ability to use Android’s powerful graphics libraries and drawing classes to create dynamic 2D/3D graphics and animation, which you can’t do with the Mirror API.

Also, when testing your app, do make use of the helpful Glass developer settings, available from the Settings bundle. These utilities are like the developer tools Android programmers have relied on for years, letting you preview the layout of elements and visualize their boundaries in relation to each other, control the speed of animation playback, show GPU overdraw, and more. Personally, we can’t live without the “Keep the screen on while charging” option, which prevents you from losing your back stack state while you test an app.

A View to a Card

For you Android developers who suffered through the Mirror API chapters wondering when you would find out how to display things with Glass, this might be the moment you’ve been waiting for. Hopefully you’ll see that many of the best practices we talked about have some parallels on the GDK side of the world, but that you also have a lot of power to go your own way if you really need to.

If you remember the Playground tool we talked about back in our chapter on the Mirror API, you’ll remember how it offers a number of templates, and lets you style them with snazzy HTML. Similarly, the Glassware Flow Designer offers templates and specific fields you can set. Unsurprisingly, Google also offers a Java class to help you build cards with these styles in your code as well.

The CardBuilder class, under the com.google.android.glass.widget package, lets you create a static card based on these templates. The general pattern is that in the onCreate() method of an activity you’ll create a CardBuilder object, specifying the template type you’re creating. You’ll then set various properties on this builder, which we’ll discuss later, and conclude by having the builder create a standard View.

Each method that sets a property returns the builder, so you can chain them together.

REALLY, A CARD WITH A VIEW

You’ll notice that the CardBuilder’s ultimate job is to return a View object, similar to every other View object you’ll encounter in Android. What goes into this object? If you inspect it (and all of its children), you’ll see some classes you should be familiar with like FrameLayout, as well as some that are clearly made for Glass, such as MosaicView.

Unfortunately these are still opaque to us, but hopefully a future version will make them more available.
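If you’re curious, one way to peek under the hood is to walk the View hierarchy the builder returns and log each class name. This is just a quick inspection sketch for your own debugging, not something to ship:

// A debugging sketch: recursively log the class of each view inside the card
private void dumpViewTree( View view, int depth ) {
    StringBuilder indent = new StringBuilder();
    for( int i = 0; i < depth; i++ ) indent.append( "  " );
    Log.d( "CardInspector", indent + view.getClass().getSimpleName() );
    if( view instanceof ViewGroup ) {
        ViewGroup group = (ViewGroup) view;
        for( int i = 0; i < group.getChildCount(); i++ ) {
            dumpViewTree( group.getChildAt( i ), depth + 1 );
        }
    }
}

Calling dumpViewTree(someCardBuilder.getView(), 0) from an activity prints the tree to logcat so you can see what Glass actually assembles.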

Let’s look at a few simple examples of using CardBuilder. The object’s constructor expects two arguments—a context and a CardBuilder.Layout enumeration that specifies the exact layout being applied.

Basic Text Formatting

For simple cards, similar to the article/section with a text-auto-size class from the base CSS styles for Glass, we can use the CardBuilder.Layout.TEXT template. This lets us set text that is auto-sized to fit where possible:

CardBuilder christine = new CardBuilder(this, CardBuilder.Layout.TEXT)

.setText("I felt as conspicuous as a baby whale in a goldfish pond.");

// display the CardBuilder object in the Activity

setContentView(christine.getView());

Alternatively, you can skip the intermediate CardBuilder variable and hold on to only the View it produces:

View christine = new CardBuilder(this, CardBuilder.Layout.TEXT)

.setText("I felt as conspicuous as a baby whale in a goldfish pond.")

.getView();

setContentView(christine);

We’ll stick with the former convention for the next few examples. These examples assume that they are being called inside Activity.onCreate(), so the “this” variable is the context of the current activity. They also assume that we’ll do something with the View that is built, such as display it or add it to a CardScrollView, which we’ll talk about later:

CardBuilder christine = new CardBuilder(this, CardBuilder.Layout.TEXT)

// a resource located in /res/values/strings.xml

.setText(R.string.stephen_king_quote);

The three previous code blocks produce the static card in Figure 13-7.

Figure 13-7. The TEXT layout

Another template, CardBuilder.Layout.TEXT_FIXED, is similar, except it applies formatting equivalent to the text-small class from the base styles CSS (Figure 13-8):

CardBuilder heartOfDarkness = new CardBuilder(this, CardBuilder.Layout.TEXT_FIXED)

.setText("I should be loyal to the nightmare of my choice.");

Figure 13-8. The TEXT_FIXED layout

Both of these layouts (as well as most of the layouts we’ll be discussing) also let us set the card’s footer, just like the HTML we can provide when using the Mirror API. Unlike the Mirror API, however, we can also set the timestamp field that appears at the right of the card. Android’s DateUtils class provides us with some methods that format the timestamp correctly, or we can still put any CharSequence we want here (Figure 13-9):

long time_then = System.currentTimeMillis()-(24*60*1000); // 24 minutes ago

CardBuilder heartOfDarkness = new CardBuilder(this, CardBuilder.Layout.TEXT)

.setText(R.string.joseph_conrad_quote)

.setTimestamp(DateUtils.getRelativeTimeSpanString(time_then))

.setFootnote("Heart of Darkness");

Figure 13-9. A TEXT card with a timestamp

Creating Rich Text

What other formatting can we do? You’ll note that there’s no setHtml() method to go with setText(), and we can’t add a View directly as a child. We might be able to meddle with the View that gets created, but that seems like a bad idea. Fortunately, we do have a solution.

The setText() method takes a class that inherits from CharSequence. Fortunately, the android.text.SpannableString class is a CharSequence, and it can contain spans of additional formatting markup. While there are a few ways we can generate this marked-up CharSequence, the easiest is to use the android.text.Html.fromHtml() method (Figure 13-10):

String html = "<b>bold</b><br><font color='red'>red</font>";

Spanned htmlSpan = Html.fromHtml(html);

CardBuilder htmlCard = new CardBuilder(this, CardBuilder.Layout.TEXT)

.setText(htmlSpan)

.setTimestamp("around 2005");

Figure 13-10. Rich text formatting

But the Html.fromHtml() method isn’t perfect—most notably, the HTML it understands is pretty ancient. If you’re used to CSS and HTML class or style attributes, it will feel clunky having to go back to a <font> tag. If this bothers you enough, feel free to build your SpannableString another way (Figure 13-11):

SpannableString htmlSpan =

new SpannableString( "green\nbold\nrelatively normal" );

htmlSpan.setSpan( new

ForegroundColorSpan( getResources().getColor( R.color.green ) ),

0, 5, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE );

htmlSpan.setSpan( new StyleSpan( Typeface.BOLD ), 6, 10,

Spanned.SPAN_EXCLUSIVE_EXCLUSIVE );

CardBuilder htmlCard = new CardBuilder(this, CardBuilder.Layout.TEXT)

.setText(htmlSpan)

.setTimestamp("sometime this year");

What about other layouts such as tables, left and right justification, and images? Other HTML-based formatting that the Mirror API provides, such as tables and lists, aren’t available—at least not yet. Images will be covered in a little bit, as will other template types, including some that aren’t available through Mirror.

Figure 13-11. Rich text formatting with a SpannableString

Ellipses and Excess Content

You may remember that we explored a way to make it clear to our users when there is more information than what we are displaying on the card. Although there is no way to set the dog-ear icon for the card at this time, it seems reasonable that a future version of CardBuilder will provide this. Other aspects, such as an ellipsis at the end of the card, are more feasible.

How do we provide this ellipsis? Fairly easily—we just need to provide more text than will fit on the card (Figure 13-12):

CardBuilder iOnlySeeSixLines = new CardBuilder(this, CardBuilder.Layout.TEXT)

.setText("one\n two\n three\n four\n five\n six\n seven\n eight\n");

What about the scenario where we want the header line to show an ellipsis? That is significantly more complicated, but it probably isn’t necessary when using CardBuilder. With the Mirror API, we needed to restrict the title to a single line to make sure we counted lines correctly to apply the ellipsis; that isn’t necessary with the GDK—we can just let it take care of the ellipsis at the end. If you really want a specific header layout, there is also the CardBuilder.Layout.AUTHOR template (a quick sketch follows Figure 13-12), or you can roll your own (see the next section for more on this).

Figure 13-12. You get an ellipsis for free with excess text
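Here’s that quick sketch of the AUTHOR template. The drawable and the strings are hypothetical placeholders; the heading and subheading render as the card’s header lines, with the icon beside them, and the body text sits below:

CardBuilder author = new CardBuilder(this, CardBuilder.Layout.AUTHOR)

    .setIcon( R.drawable.avatar )              // hypothetical avatar resource

    .setHeading( "Charles Marlow" )

    .setSubheading( "somewhere up the river" )

    .setText( "The reaches opened before us and closed behind." );

setContentView( author.getView() );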

Columnar Layouts and Mosaics

One of the very common layouts we’ve explored puts an icon, a set of images, or other quickly glanceable information in the leftmost third of the screen. The right two-thirds contains more detail, such as a message.

It should come as no surprise that we have two versions of this template, for dynamically sized and for fixed text: CardBuilder.Layout.COLUMNS and CardBuilder.Layout.COLUMNS_FIXED, respectively. The most common use for these is with a mosaic of images in the left column. We’ll use the addImage() method to add tiled images—one image for each call, in the order we add them (Figure 13-13):

CardBuilder message = new CardBuilder(this, CardBuilder.Layout.COLUMNS)

.setText("one\n two\n three\n four\n five\n six\n seven\n eight\n")

.addImage( R.drawable.messageFrom ) // Allen

.addImage( R.drawable.messageTo1 ) // Jason

.addImage( R.drawable.messageTo2 ); // Pegman

Figure 13-13. Calling addImage() repeatedly creates automatic mosaics

As we shared in Chapter 7, the mosaic images in column-based layouts aren’t randomly assembled and display a distinct hierarchy. The first image added is displayed as the most dominant: it is positioned at the top and occupies the most screen real estate. This is often used to denote a sender-recipient(s) relationship in messaging Glassware, but it can be used for other purposes, too. The user instantly gets the gist of what the card represents, whether a chat conversation or an email, as opposed to a news article, a tweet, a storm alert, or an announcement about a sale on gaudy holiday season sweaters.

Consider the effect you create when calling addImage() in a certain order, not just doing so arbitrarily. Conversely, if you haven’t got a need for such structure in your data, also think about what impact such a mosaic presentation might have on the users viewing it. They might see importance that isn’t actually there.

Using Icons

Another excellent use of the COLUMNS template is to place a single icon in the center of the leftmost column. Examples of this include the settings cards that Glass uses for its configuration. We can only set a single icon, however, so we would use the setIcon() method (Figure 13-14):

CardBuilder message = new CardBuilder(this, CardBuilder.Layout.COLUMNS)

.setText("one\n two\n three\n four\n five\n six\n seven\n eight\n")

.setIcon( R.drawable.wifi );

Figure 13-14. Icons are simpler than imagery

MIXING IT UP

The documentation for the column-oriented templates says you can use an icon or the image mosaic, but not both. Nothing seems to enforce this right now, though, and the effect can be interesting and useful. Still, we suggest you avoid combining them in case Google changes something in the future.

Both images and icons can be Drawables, Bitmaps, or resource references. And if you remember our text-centric layouts from earlier, we’ll let you in on a little secret: addImage() is good for more than just the columnar layouts—you can use it to add one or more mosaic images to the background of a text layout, too. Glass will darken the mosaic images using something similar to the overlay-full base style class to make the text more legible (Figure 13-15):

CardBuilder participants = new CardBuilder(this, CardBuilder.Layout.TEXT)

.setText("one\n two\n three\n four\n five\n six\n seven\n eight\n")

.addImage( R.drawable.member1 )

.addImage( R.drawable.member2 )

.addImage( R.drawable.member3 );

Figure 13-15. Applying images to a TEXT layout

We also have a derivative template that places text at the bottom of the card, with the overlay-gradient-short treatment to make it more legible. The layout is intended mainly for a single line, but it will display up to two lines, and it can take an optional icon. This is the CardBuilder.Layout.CAPTION template (Figure 13-16):

CardBuilder members = new CardBuilder(this, CardBuilder.Layout.CAPTION)

.setText("one\n two\n")

.addImage( R.drawable.member1 )

.addImage( R.drawable.member2 )

.addImage( R.drawable.member3 );

Figure 13-16. Applying images to a CAPTION layout

Other Neat Templates

Although we’ve seen the most basic types and uses, be aware that there are more templates available. If we are duplicating the contact cards that we get through the Mirror API, for example, we will want to use the CardBuilder.Layout.TITLE layout, which takes a background image (or images), a single line of text, and an optional icon next to the contact text (Figure 13-17):

CardBuilder members = new CardBuilder(this, CardBuilder.Layout.TITLE)

.setText("My Clique")

.addImage( R.drawable.member1 )

.addImage( R.drawable.member2 )

.addImage( R.drawable.member3 )

.setIcon( R.drawable.logo );

Figure 13-17. Combining an icon, images, and text in a TITLE layout

This kind of card, however, isn’t very useful by itself. Most of the time, we will be combining it with other contacts as part of a swipeable list that you’ll want to tap to select. This can be done by using an instance of a CardScrollAdapter to manage the list of Views (which you’ve created through CardBuilder or elsewhere) and a CardScrollView to actually control, render, and manage interactions with the cards. To handle the user selecting a card, you’ll set up a click handler using CardScrollView.setOnItemClickListener().

To do this, we might create our CardScrollAdapter to be flexible and take CardBuilders, Views, and resources that translate to Views. It might look something like this:

public class MyAdapter extends CardScrollAdapter {

private Activity activity;

private List<Object> items;

public MyAdapter( Activity activity ) {

this.activity = activity;

this.items = new ArrayList<Object>( );

}

public MyAdapter( Activity activity, List<Object> items ) {

this.activity = activity;

this.items = items;

}

@Override

public int getCount() {

return items.size();

}

@Override

public Object getItem( int i ) {

return items.get( i );

}

@Override

public View getView( int i, View view, ViewGroup viewGroup ) {

Object item = items.get(i);

if( item instanceof CardBuilder ) {

return ( (CardBuilder) item ).getView( view, viewGroup );

} else if( item instanceof View ) {

return (View)item;

} else if( item instanceof Integer ) {

// inflate without attaching to the parent; the adapter's caller adds the view

return activity.getLayoutInflater().inflate( (Integer)item, viewGroup, false );

} else {

throw new ClassCastException( "Unable to create View from "

+item.getClass() );

}

}

@Override

public int getPosition( Object o ) {

int index = items.indexOf( o );

return index < 0 ? AdapterView.INVALID_POSITION : index;

}

public MyAdapter add( Object item ) {

if( item == null ) {

throw new NullPointerException( "Unable to add null card" );

} else if( item instanceof View || item instanceof CardBuilder

|| item instanceof Integer ) {

items.add( item );

} else {

throw new ClassCastException( "Unable to add item of type "

+item.getClass() );

}

return this;

}

}

The Activity that creates the CardScrollView might, at a minimum, look something like this (Figure 13-18):

public class MyActivity extends Activity {

protected MyAdapter adapter;

protected CardScrollView scrollView;

@Override

protected void onCreate( Bundle savedInstanceState ) {

super.onCreate( savedInstanceState );

// Create an adapter to store the cards and add them

adapter = new MyAdapter( this );

// Create some cards we will scroll through

adapter

.add( new CardBuilder( this, CardBuilder.Layout.TEXT )

.setText( "Card 1" ) )

.add( new CardBuilder( this, CardBuilder.Layout.TEXT )

.setText( "Card 2" ).getView() )

.add( R.layout.card_3 );

// Create the view and set which cards it will use

scrollView = new CardScrollView( this );

scrollView.setAdapter( adapter );

// Setup click listeners

scrollView.setOnItemClickListener(new AdapterView.OnItemClickListener() {

@Override

public void onItemClick(AdapterView<?> parent, View view, int position,

long id) {

// TODO: do something when a card is clicked

}

});

// Show this view

setContentView( scrollView );

}

@Override

protected void onResume() {

super.onResume();

scrollView.activate();

}

@Override

protected void onPause() {

scrollView.deactivate();

super.onPause();

}

}

Figure 13-18. Combining cards with a CardScrollAdapter

As our sample code illustrated, you’re not limited to using a title layout in the CardScrollAdapter. You can include any View created by CardBuilder or even a View that you create yourself. With these tools, you can duplicate most of the features of bundled cards or multiple contacts, and even go further by providing rich multilevel and dynamic menus.
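For instance, inside the onItemClick() callback from the previous example, a common courtesy is to play the standard Glass tap sound so the user gets immediate feedback before you act on the selection. A minimal sketch using the GDK’s Sounds constants (from com.google.android.glass.media):

// Inside onItemClick(): acknowledge the tap with the system sound,
// then branch on the selected position
AudioManager audio = (AudioManager) getSystemService( Context.AUDIO_SERVICE );
audio.playSoundEffect( Sounds.TAP );
// ...then do something with the card at "position"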

We caution you, however, to not get too carried away. Remember that just because you can do all this doesn’t mean that it creates a great UX on Glass.

Table 13-4 contains the currently defined list of templates that are available through CardBuilder and the attributes that can be set for each. As Google adds new templates, they’ll be listed and documented as part of CardBuilder.Layout and on the GDK documentation about building cards.

Table 13-4. CardBuilder.Layout templates[a]

Layout                  setText  setFootnote  setTimestamp  addImage  setIcon  setHeading  setSubheading

TEXT/TEXT_FIXED         X        X            X             X         -        -           -

COLUMNS/COLUMNS_FIXED   X        X            X             1         1        -           -

CAPTION                 X        X            X             X         X        -           -

TITLE                   X        X            X             X         X        -           -

MENU                    X        X            -             -         X        -           -

AUTHOR                  X        X            X             -         X        X           X

ALERT                   X        X            -             -         X        -           -

[a] X means the attribute is usable for this template; 1 means that only one of these two attributes should be used for this template.

When You Have No Choice—Doing It Yourself

But if none of these templates meet your needs, you always have the ability to create your own layouts using most of the standard Views that Android offers. Custom layouts can be created declaratively and then inflated through a RemoteViews object. (This is essentially the same pattern used for handling live cards with low-frequency rendering.)

The Glass team provides a couple of custom layouts, built from ViewGroups and Views in XML, as guides that correspond to the prefab Text and Columns templates to help you match the standard Glass UI. However, while this approach gives you the freedom to create your own formatting, it also imposes on you the responsibility to manually implement the padding, margins, and various styles that conform to the static card layout. If you need specialized formatting for content that’s outside the scope of what Google already provides, that’s fine, but don’t deviate from the core motif.
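To give you a feel for rolling your own, here’s a hypothetical /res/layout/my_card.xml that mimics the columnar motif: image on the left third, detail text on the right two-thirds. The dimensions and text sizes are rough placeholders, not official Glass metrics, so check them against the layout guides Google provides:

<?xml version="1.0" encoding="utf-8"?>
<!-- my_card.xml: a hypothetical custom card layout -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">

    <ImageView
        android:id="@+id/left_column"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="1"
        android:scaleType="centerCrop" />

    <TextView
        android:id="@+id/detail_text"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="2"
        android:padding="40px"
        android:textSize="40px" />

</LinearLayout>

You could inflate this yourself with getLayoutInflater(), or simply hand the resource ID to an adapter like the one we built earlier.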

Configuring Voice Commands

Sanctioned voice commands launch their associated apps right off the “OK Glass” home screen. These triggers are defined in a special XML file in your project and registered in your application’s manifest, so setting up a voice trigger takes two steps: create an XML resource containing the value that defines your voice command, and register the command in AndroidManifest.xml. For the manifest half, add an intent filter whose <action> is the voice trigger action to either an <activity> or a <service> element, depending on your implementation; then add a sibling <meta-data> element whose android:name identifies the voice trigger and whose android:resource points to the XML resource holding your trigger values:

<!-- applied as children of an <activity> or <service>

element in AndroidManifest.xml -->

<intent-filter>

<action android:name="com.google.android.glass.action.VOICE_TRIGGER" />

</intent-filter>

<meta-data

android:name="com.google.android.glass.VoiceTrigger"

android:resource="@xml/voice_trigger" />

In this example, the resource value corresponds to /res/xml/voice_trigger.xml. Create the folder and file in your IDE and then enter the following for the file:

<?xml version="1.0" encoding="utf-8"?>

<trigger command="TUNE_AN_INSTRUMENT" />

This sets your trigger phrase to “tune an instrument” (which is cool to say out loud) as the launcher from the "OK Glass" menu for your app. The result allows your app to be launched via voice as seen in Figure 13-19.

Figure 13-19. Launching an app via hotword

If your app requires additional vocal input, such as a search query, a message to be sent, or an address, you can configure that secondary step by adding a child element to <trigger>, which invokes the voice input prompt:

<?xml version="1.0" encoding="utf-8"?>

<trigger command="TUNE_AN_INSTRUMENT">

<input prompt="@string/glass_voice_prompt" />

</trigger>
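The prompt points at a string resource. A minimal sketch of the corresponding /res/values/strings.xml entry might look like this; the wording is just an example:

<resources>
    <string name="glass_voice_prompt">which instrument?</string>
</resources>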

You can define and use your own custom commands during development, provided you include an Android permission in your manifest. But again, deployed apps need to use the approved commands:

<uses-permission

android:name="com.google.android.glass.permission.DEVELOPMENT" />

If it’s imperative that you use a voice command not already on the approved list, you can submit it for review, which may take several weeks (and then new approved voice commands are only released when Google pushes an update of the Glass OS). The selection of your trigger phrase is an important step in your design/development. It should be logical, short, and easy to say, using action verbs with an active voice. The input is open ended, so don’t use voice input to gather enumerated values like choosing one of the primary colors. That’s what menus are for.

After you’ve picked a few options, do your homework and see what else is out there and what other approved applications are using to launch themselves, and how your command might fit in when listed beside them.

You might be wondering about possible trigger phrase clashes. How does Glass resolve conflicts when two or more Glassware register the same trigger? The system appends “with” to the voice prompt in the event more than one Glassware (enabled with the Mirror API or installed with the GDK) has the same phrase registered. A second-level menu prompt, shown in Figure 13-20, then lets the user choose which Glassware to use.

Figure 13-20. The voice prompt screen to launch Glassware

Make sure to review the VoiceTriggers.Command enumeration for the official list of approved voice commands. If you absolutely must have your own voice trigger phrase, review the checklist of actions to take and then submit your own for review.

You can also use contextual commands, which allow users to interact with menus within your app. The menus, just as with Android programming for handheld devices, are defined in XML menu resources and inflated into activities; when invoked they are attached to their parent window and displayed immediately on top of it.

You can find out best practices for using contextual commands and other types of vocal input on Google’s developer site.
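To give you a taste of the pattern, here’s a minimal sketch of an activity that enables voice menus through WindowUtils.FEATURE_VOICE_COMMANDS (from com.google.android.glass.view). The class name and the R.menu.voice_menu resource are hypothetical placeholders:

public class ContextualCommandsActivity extends Activity {

    @Override
    protected void onCreate( Bundle savedInstanceState ) {
        super.onCreate( savedInstanceState );
        // request the voice-command window feature before setting content
        getWindow().requestFeature( WindowUtils.FEATURE_VOICE_COMMANDS );
        setContentView( new CardBuilder( this, CardBuilder.Layout.TEXT )
                .setText( "Say \"ok glass\" to see the menu" )
                .getView() );
    }

    @Override
    public boolean onCreatePanelMenu( int featureId, Menu menu ) {
        if( featureId == WindowUtils.FEATURE_VOICE_COMMANDS ) {
            // R.menu.voice_menu is a hypothetical menu resource
            getMenuInflater().inflate( R.menu.voice_menu, menu );
            return true;
        }
        return super.onCreatePanelMenu( featureId, menu );
    }

    @Override
    public boolean onMenuItemSelected( int featureId, MenuItem item ) {
        if( featureId == WindowUtils.FEATURE_VOICE_COMMANDS ) {
            // branch on item.getItemId() to perform the chosen action
            return true;
        }
        return super.onMenuItemSelected( featureId, item );
    }
}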

MATCH YOUR VOICE COMMANDS ACROSS OTHER PLATFORMS

If the Glassware you’re building is part of a broader application ecosystem, you probably want to ensure for purposes of consistency that the voice commands available to users are uniform across other wearable platforms. Review the list of system-provided voice actions for Android Wear, as well as how to include custom voice actions for Wear devices so that your spoken commands are the same for other types of devices, like an Xbox One Kinect.

With Voice Commands, Google Has the Final Word

We can’t underscore enough that voice commands are ultimately under Google’s sovereign control. It has final say on what apps are greenlit for placement in MyGlass, and voice commands are a big part of this approval process—don’t find out the hard way. We mentioned in the Design section of this book how choosing your voice commands for Mirror API Glassware shouldn’t ever be treated as an arbitrary exercise, noting the categorical hierarchy of commands as the library of approved Glassware continues to grow.

The same applies for the GDK, as trigger phrases like “Tune an instrument,” “Start a bike ride,” or “Prepare a meal” will be very popular for music, exercise, and cooking apps, respectively. All Glassware will be available under these high-level categories. Note that these categories won’t know the difference between Glassware that’s native or exclusively cloud-based, so you may wind up sharing space next to a major software publisher as well as a garage operation looking for a break. Remember—actions not apps still rule here, even if you’re thinking about how to write an app.

Updating Releases, Versioning, and Crash Reports

A major technical advantage of having your app approved by Google and listed in MyGlass is that any updates you push to those who’ve installed it are applied silently, just as many apps are updated through Google Play. Users aren’t notified that they have to download a new version; it just gets pushed to their Glass headset automatically through the MyGlass mobile app. But it’s kind of a black box at the moment: we’re currently unable to review modifications in Glassware between versions, and there aren’t any changelogs we can consult. New features and bug fixes show up magically, and we’re unaware of patches or feature additions until we stumble across them.

Some vendors list ad hoc changelogs on their own websites outside MyGlass, but that’s not universally implemented. As the GDK platform matures, we hope to see a central facility in MyGlass, Google Play, or somewhere else that lets us track changes, if for no other reason than people like us are obsessed with knowing “What’s New.”

Additionally, MyGlass doesn’t provide automatic crash reporting back to Glassware publishers like Google Play does. However, if Google detects that your app is crashing with some regularity, it will reach out to you and work with you to resolve any issues your app might be having. The bright side is that the review process for apps looking to make it into the MyGlass catalog is very thorough and many crashes should be detected in that step. You also have the benefit of having to build for essentially only one device, as opposed to multiple OEM devices.

You’re still able to make use of third-party tools for crash analysis.

Porting Existing Apps to Glass: DON’T

Developers maintaining existing software platforms are foaming at the mouth waiting to start building for Glass. Undoubtedly, a ton of cool ideas are being cobbled together as you read this by innovative thinkers wanting to fully use Glass as a platform, and notable third-party applications will be rushing to use Glass as a frontend for their services, in addition to every other conceivable digital platform around. That said, if you’re thinking you can just do a straight port of your codebase, you’re doing it wrong.

Glass is a unique device that requires a unique experience. To support this notion, Google’s documentation stresses that when creating immersions, “Design interactions that make sense on Glass instead of porting over activities from other Android devices.” That’s pretty much the motivation for Glassware design and the Think for Glass philosophy in general.

Aside from the partners Google’s worked with already to offer apps at launch, it’s a safe bet that household brands like Instagram, Flipboard, Mint, Dropbox, and Spotify will build Glass clients, even if the UX doesn’t exactly equate to a seamless translation for their stuff. And this is where the big creative opportunity lies. Don’t just port your code—extend your platform and really explore the interface that Glass provides. Make something that becomes a part of your experience in a completely unique way. Remember, Glass complements your smartphone, not replaces it, so don’t just build yet another client—if you support a game, do a hands-free controller or heads-up display. Make using it cool!

Just like you had to relearn design for mobile apps, porting to Glass doesn’t necessarily mean a full and seamless translation of every feature. And this pragmatism is key in learning how to Think for Glass. It makes for a fun challenge.

You might want to take a pause from this chapter for a moment and re-review the material in our earlier chapter in the Discover section about what Glass isn’t, and see how all the knowledge you’ve gained and ideas you’ve developed interplay with those constraints. After you’ve given yourself a refresher, think about how the Glass experience would handle your current UX. The onus is upon developers to repurpose their ideas within a new environment—the right way, not crudely forcing a smartphone app onto a wearable. People by and large still won’t want to stare at Glass for extended periods of time, so keep things glanceable and continue to think in terms of actions, not apps. Many of the widgets and UI paradigms just don’t translate to this new venue.

The bottom line is that presence is critical and a proper design plan is the major step in achieving that, not just retrofitting current services. If your translation and/or extension of your platform is truly good, you’ll have a real winner on your hands. And happy users.

So Which Framework Is for Me?

We’ve come at last to the moment of truth. You’re well read on the Mirror API and how to build cloud-dependent services against Google’s RESTful backend. And now you know how things work natively, too. When two paths diverge, how do you know which is the right one for your project? The simple $0.05 answer is to use the right tool for the right job. Don’t let everything you learned in the Design part of this book go out the window. It’s actually not that hard to do—just figure out what usability components are critical to applying your idea (connectivity, sensors, etc.), what interface elements could best be used to display data and gather input, and how the usability of the application fits into the model of a microinteraction.

Don’t simply retrofit an idea with the wrong framework, because that flaw will show and your usability will suffer. Experienced Glass users wise up to what models work best for the platform, and even newbies will quickly figure out that if they can’t use your Glassware in their daily lives, they’ll drop it like a bad cold. Better yet, figure out how to make your idea exist on both as a hybrid application.

You could also frame the argument in terms of the impacts on deployment—the traditional model of rigorously testing native programs to stamp out as many bugs as possible still holds true, but it can be time consuming and creates delays between version releases, which then have to be pushed out through a repository or app store. Traditionally this has meant alerting the user that there’s a new version of the app available and then having her download it, or, as Chrome has achieved in recent years, applying silent updates. This, after all, is the idea that makes updates to the Glass OS work so fluidly. We’d ultimately like to see this be the case for native apps on Glass, which would solve a lot of the disconnect issues between software vendors and users when new versions are ready.

Even within the GDK itself, you might find yourself lost at first in deciding whether to commit to a live card or an immersion. Generally speaking, live cards tend to have a longer shelf life than immersions, while immersions require the user’s complete attention while they’re running and don’t permit integration with the timeline. In this light, we see immersions as the exception instead of the rule. Many people default to immersions, but they generally aren’t the way to go.

Real-world examples of apps that fit the live card category are implementations of media players like Pandora and ViewTube for Glass, time-sensitive actions like AllTheCooks Recipes, scheduling services like to-do applications, telephony applications, and others. Great examples of immersion-based apps are games, search services, applications that make various uses of the camera, location-oriented apps like driving utilities, and video tutorials. There’s room for innovation here though, as we’re seeing some very clever uses of both UI methods to deliver really effective experiences like Zombies, Run! (an immersion) and Battery Checker (a live card).

Learn the best practices and patterns, see what others have done, play with some code samples, and then let your own creativity drive your app.

So how do you use the Glassware development model to solve problems? Do you start with the frameworks and then figure out how to use the unique features of each to write an app…or do you begin with an existing domain space and then pick the most appropriate framework? There really isn’t a cut-and-dried answer. The best thing to do is to write Glassware with both the Mirror API and the GDK and get a solid feel for the development process and turnaround cycle.

You may find yourself needing one…or both.