Advanced User Experience - Professional Android 4 Application Development (2012)

Chapter 11. Advanced User Experience

What's in this Chapter?

Resolution independence and designing for every screen

Creating image assets in XML

Making applications accessible

Using the Text-to-Speech and speech recognition libraries

Using animations

Controlling hardware acceleration

Using Surface Views

Copy, paste, and the clipboard

In Chapter 4, “Building User Interfaces,” you learned the basics of creating user interfaces (UIs) in Android with an introduction to Activities, Fragments, layouts, and Views. In Chapter 10, “Expanding the User Experience,” you expanded the user experience through using the Action Bar, menu system, Dialogs, and Notifications.

It's important to think beyond the bounds of necessity and create applications that combine purpose with beauty and simplicity, even when they provide complex functionality.

As you design apps to work with Android, consider these goals: Enchant me. Simplify my life. Make me amazing.

—Android Design Creative Vision

This chapter introduces you to some best practices and techniques to create user experiences that are compelling and aesthetically pleasing on a diverse range of devices and for an equally diverse range of users.

You start with an introduction to the best practices for creating resolution- and density-independent UIs, and how to use Drawables to create scalable image assets, before learning how to ensure your applications are accessible and use the text-to-speech and speech recognition APIs.

You also discover how to use animations to make your UIs more dynamic, and how to enhance the custom Views you created in Chapter 4 using advanced canvas-drawing techniques.

When designing and implementing your application's UX design, be sure to refer to the guidelines on the Android Design site at

Designing for Every Screen Size and Density

The first four Android handsets all featured 3.2” HVGA screens. By the start of 2010, the number of devices running Android had exploded, and the increasing diversity of handsets heralded variations in screen sizes and pixel densities. In 2011, tablets and Google TV introduced further variation with significantly larger screens and even greater variation in resolution and pixel density.

To provide a great user experience on all Android devices, it's important to create your UIs knowing that your applications can run on a broad variety of screen resolutions and physical screen sizes. In practice, this means that just as with websites and desktop applications, you must design and build your applications with the expectation that they can run on an infinitely varied set of devices. That means supplying scalable image assets for a variety of pixel densities, creating layouts that scale to fit the available display, and designing layouts optimized for different device categories based on the screen size and interaction model.

The following sections begin by describing the range of screens you need to consider, and how to support them, before summarizing some of the best practices to ensure your applications are resolution- and density-independent, and optimized for different screen sizes and layouts.


The Android Developer site includes some excellent tips for supporting multiple screen types. You can find this documentation at\_support.html.

Resolution Independence

A display's pixel density is a function of its resolution and physical size: the number of physical pixels on the display relative to the physical dimensions of that display. It's typically measured in dots per inch (dpi).
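As a back-of-the-envelope illustration (not an Android API), density can be approximated by dividing the diagonal resolution in pixels by the diagonal screen size in inches. An 800 × 480 display measuring 3.7” works out to roughly 252dpi, which Android would classify as high density:

```java
public class DensityCalc {
    // Approximate pixel density: diagonal resolution in pixels
    // divided by the diagonal screen size in inches.
    public static double dpi(int widthPx, int heightPx, double diagonalInches) {
        double diagonalPx = Math.sqrt((double) widthPx * widthPx
                                    + (double) heightPx * heightPx);
        return diagonalPx / diagonalInches;
    }

    public static void main(String[] args) {
        System.out.printf("%.0f dpi%n", dpi(800, 480, 3.7)); // ~252 dpi
    }
}
```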

Using Density-Independent Pixels

As a result of the variations in screen size and resolution for Android devices, the same number of pixels can correspond to different physical sizes on different devices based on the screen's DPI. This makes it impossible to create consistent layouts by specifying pixels. Instead, Android uses density-independent pixels (dp) to specify screen dimensions that scale to appear the same on screens of the same size but which use different pixel densities.

In practical terms, one density-independent pixel (dp) is equivalent to one pixel on a 160dpi screen. For example, a line specified as 2dp wide appears as 3 pixels on a display with 240dpi.
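The conversion the platform performs can be sketched as a one-line helper (illustrative only; at run time Android applies this scaling for you via the display metrics):

```java
public class DpConverter {
    // One dp equals one physical pixel at the 160dpi baseline;
    // on denser screens the same dp value maps to more pixels.
    public static int dpToPx(float dp, int screenDpi) {
        return Math.round(dp * screenDpi / 160f);
    }

    public static void main(String[] args) {
        // Matches the example above: 2dp is 3 pixels at 240dpi.
        System.out.println(dpToPx(2, 240)); // 3
    }
}
```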

Within your application you should always use density-independent pixels, avoiding specifying any layout dimensions, View sizes, or Drawable dimensions using pixel values.

In addition to dp units, Android also uses a scale-independent pixel (sp) for the special case of font sizes. Scale-independent pixels use the same base unit as density-independent pixels but are additionally scaled according to the user's preferred text size.

Resource Qualifiers for Pixel Density

Scaling bitmap images can result in either lost detail (when scaling downward) or pixelation (when scaling upward). To ensure that your UI is crisp, clear, and devoid of artifacts, it's good practice to include multiple image assets for different pixel densities.

Chapter 3, “Creating Applications and Activities,” introduced you to the Android resource framework, which enables you to create a parallel directory structure to store external resources for different host hardware configurations.

When using Drawable resources that cannot be dynamically scaled well, you should create and include image assets optimized for each pixel density category.

· res/drawable-ldpi—Low-density resources for screens approximately 120dpi

· res/drawable-mdpi—Medium-density resources for screens approximately 160dpi

· res/drawable-tvdpi—Medium- to high-density resources for screens approximately 213dpi; introduced in API level 13 as a specific optimization for applications targeting televisions

· res/drawable-hdpi—High-density resources for screens approximately 240dpi

· res/drawable-xhdpi—Extra-high density resources for screens approximately 320dpi

· res/drawable-nodpi—Used for resources that must not be scaled regardless of the host screen's density
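These buckets scale in fixed ratios relative to the mdpi baseline (0.75x for ldpi, 1.5x for hdpi, 2x for xhdpi). A quick sketch of sizing density-specific assets from a baseline mdpi dimension:

```java
public class AssetScaler {
    // Scaling factor for a density bucket relative to the
    // mdpi (160dpi) baseline.
    public static float scaleFor(int bucketDpi) {
        return bucketDpi / 160f;
    }

    // Size an asset for a given bucket from its mdpi dimension.
    public static int scaledSize(int mdpiPx, int bucketDpi) {
        return Math.round(mdpiPx * scaleFor(bucketDpi));
    }

    public static void main(String[] args) {
        // A 48px mdpi icon becomes 36px (ldpi), 72px (hdpi), 96px (xhdpi).
        System.out.println(scaledSize(48, 120)); // 36
        System.out.println(scaledSize(48, 240)); // 72
        System.out.println(scaledSize(48, 320)); // 96
    }
}
```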

Supporting and Optimizing for Different Screen Sizes

Android devices can come in all shapes and sizes, so when designing your UI it's important to ensure that your layouts not only support different screen sizes, orientations, and aspect ratios, but also that they are optimized for each.

It's neither possible nor desirable to create a different absolute layout for each specific screen configuration; instead, it's best practice to take a two-phased approach:

· Ensure that all your layouts are capable of scaling within a reasonable set of bounds.

· Create a set of alternative layouts whose bounds overlap such that all possible screen configurations are considered.

In practice this approach is similar to that taken by most websites and desktop applications. After a fling with fixed-width pages in the ‘90s, websites now scale to fit the available space on desktop browsers and offer an alternative CSS definition to provide an optimized layout on tablets or mobile devices.

Using the same approach, you can create optimized layouts for certain categories of screen configurations, which are capable of scaling to account for variation within that category.

Creating Scalable Layouts

The layout managers provided by the framework are designed to support the implementation of UIs that scale to fit the available space. In all cases, you should avoid defining the location of your layout elements in absolute terms.

Using the Linear Layout you can create layouts represented by simple columns or rows that fill the available width or height of the screen, respectively.

The Relative Layout is a flexible alternative that enables you to define the position of each UI element relative to the parent Activity and the other elements used within the layout.

When defining the height or width of your scalable UI elements (such as Buttons and Text Views) it's good practice to avoid providing specific dimensions. Instead, you can define the height and width of Views using wrap_content or match_parent attributes, as appropriate.


The wrap_content flag sizes the View to enclose its content, whereas the match_parent flag (formerly fill_parent) enables the element to expand as necessary to fill the available space.
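As a brief illustration (a sketch, not a listing from this chapter; the string resources are assumptions), a vertical Linear Layout can combine both flags so that a label takes only the space its text requires while a button stretches to the full available width:

```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">
    <!-- The label is only as large as its text requires. -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/welcome_label" />
    <!-- The button stretches to fill whatever width is available. -->
    <Button
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="@string/continue_label" />
</LinearLayout>
```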

Deciding which screen element should expand (or contract) when the screen size changes is one of the most important factors in optimizing your layouts for variable screen dimensions.

Android 4.0 (API level 14) introduced the Grid Layout, a highly flexible layout designed to reduce nesting and simplify the creation of adaptive and dynamic layouts.

Optimizing Layouts for Different Screen Types

In addition to providing layouts that scale, you should consider creating alternative layout definitions optimized for different screen sizes.

There is a significant difference in the screen space available on a 3” QVGA smartphone display compared to a high-resolution 10.1” tablet. Similarly, and particularly for devices with pronounced aspect ratios, a layout that works well when viewed in landscape mode might be unsuitable when the device is rotated into portrait.

Creating a layout that scales to accommodate the space available is a good first step; it's good practice to consider ways that you can take advantage of the extra space (or consider the effect of reduced space) to create a better user experience.

This is a similar approach to websites that provide a specialized layout for users on smartphones, tablets, or desktop browsers. For Android users, the lines between each device category are blurred, so it's best practice to optimize your layouts based on the available space rather than the type of device.

The Android resource framework provides several options to supply different layouts based on the screen size and properties.

Use the long and notlong decorators to supply layouts optimized for normal versus widescreen displays, and use the port and land decorators to indicate layouts to be used when the screen is viewed in portrait or landscape modes, respectively.

res/layout-long-land/     // Layouts for long screens in landscape mode.
res/layout-notlong-port/  // Layouts for not-long screens in portrait mode.

In terms of screen size, two options are available. Android 3.2 (API level 13) introduced the capability to provide layouts based on the current screen width/height, or the smallest available screen width:

res/layout-w600dp/   // e.g., layouts for screens at least 600dp wide.
res/layout-h720dp/   // e.g., layouts for screens at least 720dp tall.
res/layout-sw600dp/  // e.g., layouts for screens whose smallest
                     // dimension is at least 600dp.

These decorators enable you to specify the lowest number of density-independent pixels your layout requires in terms of height and width, and supply an alternative layout for devices that fall outside those bounds.

If you plan to make your application available to earlier versions of Android, it's good practice to use these modifiers in conjunction with the small, normal, large, and xlarge decorators.


These buckets, although less specific, enable you to supply a different layout based on the size of the host device relative to a “normal” HVGA smartphone display.

Typically, you can use these various decorators together to create layouts optimized for various sizes and orientations. This can lead to situations in which two or more screen configurations should use the same layout. To avoid duplication, you can define aliases.

An alias enables you to create an empty layout definition that can be configured to return a specific resource when another one is requested. For example, within your resources hierarchy, you could include a res/layout/main_multipanel.xml layout that contains a multipanel layout and a res/layout/main_singlepanel.xml resource that contains a single-panel layout.

Create a res/values/layout.xml file that uses an alias to select the single panel layout:

<?xml version="1.0" encoding="utf-8"?>
<resources>
  <item name="main" type="layout">@layout/main_singlepanel</item>
</resources>

For each specific configuration that should use the multi-panel resource, create a corresponding values folder:

res/values-sw600dp/      // e.g., devices at least 600dp wide.
res/values-xlarge-land/  // e.g., extra-large screens in landscape.
And create and add a new layout.xml resource to them:

<?xml version="1.0" encoding="utf-8"?>
<resources>
  <item name="main" type="layout">@layout/main_multipanel</item>
</resources>

Within your code, simply refer to the R.layout.main resource to let the system decide which underlying layout resource to use. Note that you cannot use the alias name you specify as a resource identifier for any layouts stored within the res/layout folder; if you do, there will be a naming collision.

Specifying Supported Screen Sizes

For some applications it may not be possible to optimize your UI to support all possible screen sizes. You can use the supports-screens manifest element to specify on which screens your application can be run:

<supports-screens android:smallScreens="false"
                  android:normalScreens="true"
                  android:largeScreens="true"
                  android:xlargeScreens="true"/>

In this context a small screen is any display with a resolution smaller than HVGA; a large screen is larger than a smartphone; an extra large screen is significantly larger (such as a tablet); and normal screens encompass the majority of smartphone handsets.

A false value forces Android to use compatibility scaling to attempt to scale your application UI correctly. This generally results in a UI with degraded image assets that show scaling artifacts.

Mirroring the new resource decorators described in the previous section, Android 3.2 (API level 13) introduced the requiresSmallestWidthDp, compatibleWidthLimitDp, and largestWidthLimitDp attributes to the supports-screens node:

<supports-screens android:requiresSmallestWidthDp="480"/>

Although neither the Android run time nor the Google Play Store currently use these parameters to enforce compatibility, they will eventually be used on the Google Play Store in preference to the small, normal, large, and extra large parameters on supported devices.

Creating Scalable Graphics Assets

Android includes a number of simple Drawable resource types that can be defined entirely in XML. These include the ColorDrawable, ShapeDrawable, and GradientDrawable classes. These resources are stored in the res/drawable folder and can be identified in code by their lowercase XML filenames.

When these Drawables are defined in XML, and you specify their attributes using density-independent pixels, the run time smoothly scales them. Like vector graphics, these Drawables can be scaled dynamically to display correctly and without scaling artifacts regardless of screen size, resolution, or pixel density. The notable exceptions to this rule are Gradient Drawables, which require a gradient radius defined in pixels.

As you see later in this chapter, you can use these Drawables in combination with transformative Drawables and composite Drawables. Together, they can result in dynamic, scalable UI elements that require fewer resources and appear crisp on any screen. They are ideal to use as backgrounds for Views, layouts, Activities, and the Action Bar.

Android also supports NinePatch PNG images that enable you to mark the parts of an image that can be stretched.

Color Drawables

A ColorDrawable, the simplest of the XML-defined Drawables, enables you to specify an image asset based on a single solid color. Color Drawables, such as this solid red Drawable, are defined as XML files using the color tag in the res/drawable folder:

<color xmlns:android="http://schemas.android.com/apk/res/android"
       android:color="#FF0000" />

Shape Drawables

Shape Drawable resources let you define simple primitive shapes by defining their dimensions, background, and stroke/outline using the shape tag.

Each shape consists of a type (specified via the shape attribute), attributes that define the dimensions of that shape, and subnodes to specify padding, stroke (outline), and background color values.

Android currently supports the following shape types as values for the shape attribute:

· line—A horizontal line spanning the width of the parent View. The line's width and style are described by the shape's stroke.

· oval—A simple oval shape.

· rectangle—A simple rectangle shape. Also supports a corners subnode that uses a radius attribute to create a rounded rectangle.

· ring—Supports the innerRadius and thickness attributes to let you specify the inner radius of the ring shape and its thickness, respectively. Alternatively, you can use innerRadiusRatio and thicknessRatio to define the ring's inner radius and thickness, respectively, as a proportion of its width (where an inner radius of a quarter of the width would use the value 4).

Use the stroke subnode to specify an outline for your shapes using width and color attributes.

You can also include a padding node to offset the positioning of your shape on the canvas.

More usefully, you can include a subnode to specify the background color. The simplest case involves using the solid node, including the color attribute, to define a solid background color.

The following snippet shows a rectangular Shape Drawable with a solid fill, rounded edges, 10dp outline, and 10dp of padding around each edge. Figure 11.1 shows the result.

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
       android:shape="rectangle">
  <!-- Example fill and outline colors. -->
  <solid
    android:color="#FF0000"/>
  <stroke
    android:width="10dp"
    android:color="#00FF00"/>
  <corners
    android:radius="15dp" />
  <padding
    android:left="10dp"
    android:top="10dp"
    android:right="10dp"
    android:bottom="10dp"/>
</shape>

Figure 11.1


The following section describes the GradientDrawable class and how to specify a gradient fill for your Shape Drawables.

Gradient Drawables

A GradientDrawable lets you design complex gradient fills. Each gradient defines a smooth transition between two or three colors in a linear, radial, or sweep pattern.

Gradient Drawables are defined using the gradient tag as a subnode within a Shape Drawable definition (such as those defined in the preceding section).

Each Gradient Drawable requires at least a startColor and endColor attribute and supports an optional middleColor. Using the type attribute you can define your gradient as one of the following:

· linear—The default gradient type, it draws a straight color transition from startColor to endColor at an angle defined by the angle attribute.

· radial—Draws a circular gradient from startColor to endColor from the outer edge of the shape to the center. It requires a gradientRadius attribute that specifies the radius of the gradient transition in pixels. It also optionally supports centerX and centerY attributes to offset the location of the center of the gradient.

· Because the gradient radius is defined in pixels, it does not dynamically scale for different pixel densities. To minimize banding, you may need to specify different gradient radius values for different screen resolutions and pixel densities.

· sweep—Draws a sweep gradient that transitions from startColor to endColor along the outer edge of the parent shape (typically a ring).

The following snippets show the XML for a linear gradient within a rectangle, a radial gradient within an oval, and a sweep gradient within a ring, as shown in Figure 11.2. Note that each would need to be created in a separate file within the res/drawable folder.

<!-- Rectangle with linear gradient -->
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
       android:shape="rectangle">
  <!-- Example colors; the angle must be a multiple of 45. -->
  <gradient
    android:type="linear"
    android:startColor="#FFFFFF"
    android:endColor="#000000"
    android:angle="45"/>
</shape>

<!-- Oval with radial gradient -->
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
       android:shape="oval">
  <!-- The gradient radius is specified in pixels. -->
  <gradient
    android:type="radial"
    android:startColor="#FFFFFF"
    android:endColor="#000000"
    android:gradientRadius="300"/>
</shape>

<!-- Ring with sweep gradient -->
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
       android:shape="ring"
       android:innerRadiusRatio="3"
       android:thicknessRatio="8"
       android:useLevel="false">
  <gradient
    android:type="sweep"
    android:startColor="#FFFFFF"
    android:endColor="#000000"/>
</shape>

Figure 11.2


NinePatch Drawables

NinePatch (or stretchable) images are PNG files that mark the parts of an image that can be stretched. They're stored in your res/drawable folders with filenames ending in the .9.png extension.


NinePatches use a one-pixel border to define the area of the image that can be stretched if the image is enlarged. This makes them particularly useful for creating backgrounds for Views or Activities that may have a variable size.

To create a NinePatch, draw single-pixel black lines that represent stretchable areas along the left and top borders of your image, as shown in Figure 11.3.

Figure 11.3


The unmarked sections won't be resized, and the relative size of each of the marked sections remains the same as the image size changes, as shown in Figure 11.4.

Figure 11.4


To simplify the process of creating NinePatch images for your application, the Android SDK includes a WYSIWYG draw9patch tool in the /tools folder.

Creating Optimized, Adaptive, and Dynamic Designs

When designing your UI, it's important to ensure that not only are your assets and layouts scalable, but also that they are optimized for a variety of different device types and screen sizes. A layout that looks great on a smartphone may suffer from excessive whitespace or line lengths on a tablet. Conversely, a layout optimized for a tablet device may appear cramped on a smartphone.

It's good practice to build optimized layouts for several different screen sizes that take advantage of their relative size and aspect ratio. The specific techniques used to design such UIs are beyond the scope of this book, but they are covered in detail at the Android Training site:

Testing, Testing, Testing

With hundreds of Android devices of varying screen sizes and pixel densities now available, it's impractical (and in some cases impossible) to physically test your application on every device.

Android Virtual Devices (AVDs) are ideal platforms for testing your application with a number of different screen configurations. AVDs also have the advantage of letting you configure alternative platform releases and hardware configurations.

You learned how to create and use AVDs in Chapter 2, “Getting Started,” so this section focuses on how to create AVDs representative of different screens.

Using Emulator Skins

The simplest way to test your application's UI is to use the built-in skins. Each skin emulates a known device configuration with a resolution, pixel density, and physical screen size.

As of Android 4.0.3, the following built-in skins are available for testing:

· QVGA—320 × 240, 120dpi, 3.3”

· WQVGA43—432 × 240, 120dpi, 3.9”

· WQVGA400—240 × 400, 120dpi, 3.9”

· WSVGA—1024 × 600, 160dpi, 7”

· WXGA720—720 × 1280, 320dpi, 4.8” (Galaxy Nexus)

· WXGA800—1280 × 800, 160dpi, 10.1” (Motorola Xoom)

· HVGA—480 × 320, 160dpi, 3.6”

· WVGA800—800 × 480, 240dpi, 3.9” (Nexus One)

· WVGA854—854 × 480, 240dpi, 4.1”

Testing for Custom Resolutions and Screen Sizes

One of the advantages of using an AVD to evaluate devices is the ability to define arbitrary screen resolutions and pixel densities.

When you start a new AVD, you see the Launch Options dialog, as shown in Figure 11.5. If you check the Scale Display to Real Size check box and specify a screen size for your virtual device, as well as the dpi of your development monitor, the emulator scales to approximately the physical size you specified.

Figure 11.5


This enables you to evaluate your UI against a variety of screen sizes and pixel densities as well as resolutions and skins—an ideal way to see how your application appears on a small, high-resolution phone or a large, low-resolution tablet.

Ensuring Accessibility

An important part of creating an inclusive and compelling UI is to ensure that it can be used by people with disabilities that require them to interact with their devices in different ways.

Accessibility APIs were introduced in Android 1.6 (API level 4) to provide alternative interaction methods for users with visual, physical, or age-related disabilities that make it difficult to interact fully with a touch screen.

In Chapter 4 you learned how to make your custom Views accessible and navigable. This section summarizes some of the best practices to ensure your entire user experience is accessible.

Supporting Navigation Without a Touch Screen

Directional controllers, such as trackballs, D-pads, and arrow keys, are the primary means of navigation for many users. To ensure that your UI is navigable without requiring a touch screen, it's important that your application supports each of these input mechanisms.

The first step is to ensure that each input View is focusable and clickable. Pressing the center or OK button should then affect the focused control in the same way as touching it using the touch screen.

It's good practice to visually indicate when a control has the input focus, allowing users to know which control they are interacting with. All the Views included in the Android SDK are focusable.

The Android run time determines the focus order for each control in your layout based on an algorithm that finds the nearest neighbor in a given direction. You can manually override that order using the android:nextFocusDown, android:nextFocusLeft, android:nextFocusRight, and android:nextFocusUp attributes for any View within your layout definition. It's good practice to ensure that consecutive navigation movements in opposite directions return the user to the original location.
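For example, a pair of Views can route focus explicitly to one another rather than relying on the nearest-neighbor algorithm (the IDs here are hypothetical):

```xml
<Button
    android:id="@+id/button_search"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:nextFocusDown="@+id/edit_query" />
<EditText
    android:id="@+id/edit_query"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:nextFocusUp="@id/button_search" />
```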

Providing a Textual Description of Each View

Context is of critical importance when designing your UI. Button images, text labels, or even the relative location of each control can be used to indicate the purpose of each input View.

To ensure your application is accessible, consider how a user without visual context can navigate and use your UI. To assist, each View can include an android:contentDescription attribute that can be read aloud to users who have enabled the accessibility speech tools:
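For example (a sketch; the View and string resource names are assumptions):

```xml
<ImageButton
    android:id="@+id/button_call"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/ic_call"
    android:contentDescription="@string/call_contact_description" />
```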


Every View within your layout that can hold focus should have a content description that provides the entire context necessary for a user to act on it.

Introducing Android Text-to-Speech

The text-to-speech (TTS) libraries, also known as speech synthesis, enable you to output synthesized speech from within your applications, allowing them to “talk” to your users.


Android 4.0 (API level 14) introduced the ability for application developers to implement their own text-to-speech engines and make them available to other applications. Creating a speech synthesis engine is beyond the scope of this book and won't be covered here. You can find further resources on the Android Developer site, at

Due to storage space constraints on some Android devices, the language packs are not always preinstalled on each device. Before using the TTS engine, it's good practice to confirm the language packs are installed.

To check for the TTS libraries, start a new Activity for a result using the ACTION_CHECK_TTS_DATA action from the TextToSpeech.Engine class:

Intent intent = new Intent(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(intent, TTS_DATA_CHECK);

The onActivityResult handler receives CHECK_VOICE_DATA_PASS if the voice data has been installed successfully. If the voice data is not currently available, start a new Activity using the ACTION_INSTALL_TTS_DATA action from the TTS Engine class to initiate its installation.

Intent installVoice = new Intent(Engine.ACTION_INSTALL_TTS_DATA);
startActivity(installVoice);

After confirming the voice data is available, you need to create and initialize a new TextToSpeech instance. Note that you cannot use the new Text To Speech object until initialization is complete. Pass an OnInitListener into the constructor that will be fired when the TTS engine has been initialized.

boolean ttsIsInit = false;
TextToSpeech tts = null;

protected void onActivityResult(int requestCode,
                                int resultCode, Intent data) {
  if (requestCode == TTS_DATA_CHECK) {
    if (resultCode == Engine.CHECK_VOICE_DATA_PASS) {
      tts = new TextToSpeech(this, new OnInitListener() {
        public void onInit(int status) {
          if (status == TextToSpeech.SUCCESS) {
            ttsIsInit = true;
            // TODO Speak!
          }
        }
      });
    }
  }
  super.onActivityResult(requestCode, resultCode, data);
}

After initializing Text To Speech, you can use the speak method to synthesize voice data using the default device audio output:

HashMap<String, String> parameters = null;
tts.speak("Hello, Android", TextToSpeech.QUEUE_ADD, parameters);

The speak method enables you to specify a parameter either to add the new voice output to the existing queue or to flush the queue and start speaking immediately.

You can affect the way the voice output sounds using the setPitch and setSpeechRate methods. Each method accepts a float parameter that modifies the pitch and speed, respectively, of the voice output.

You can also change the pronunciation of your voice output using the setLanguage method. This method takes a Locale parameter to specify the country and language of the text to speak. This affects the way the text is spoken to ensure the correct language and pronunciation models are used.

When you have finished speaking, use stop to halt voice output and shutdown to free the TTS resources:

tts.stop();
tts.shutdown();
Listing 11.1 determines whether the TTS voice library is installed, initializes a new TTS engine, and uses it to speak in UK English.


Listing 11.1: Using Text-to-Speech

private static int TTS_DATA_CHECK = 1;
private TextToSpeech tts = null;
private boolean ttsIsInit = false;

private void initTextToSpeech() {
  Intent intent = new Intent(Engine.ACTION_CHECK_TTS_DATA);
  startActivityForResult(intent, TTS_DATA_CHECK);
}

protected void onActivityResult(int requestCode,
                                int resultCode, Intent data) {
  if (requestCode == TTS_DATA_CHECK) {
    if (resultCode == Engine.CHECK_VOICE_DATA_PASS) {
      tts = new TextToSpeech(this, new OnInitListener() {
        public void onInit(int status) {
          if (status == TextToSpeech.SUCCESS) {
            ttsIsInit = true;
            if (tts.isLanguageAvailable(Locale.UK) >= 0)
              tts.setLanguage(Locale.UK);
            speak();
          }
        }
      });
    } else {
      Intent installVoice = new Intent(Engine.ACTION_INSTALL_TTS_DATA);
      startActivity(installVoice);
    }
  }
  super.onActivityResult(requestCode, resultCode, data);
}

private void speak() {
  if (tts != null && ttsIsInit) {
    tts.speak("Hello, Android", TextToSpeech.QUEUE_ADD, null);
  }
}

@Override
public void onDestroy() {
  if (tts != null) {
    tts.stop();
    tts.shutdown();
  }
  super.onDestroy();
}

code snippet PA4AD_Ch11_TextToSpeach/src/

Using Speech Recognition

Android supports voice input and speech recognition using the RecognizerIntent class. This API enables you to accept voice input into your application using the standard voice input dialog, as shown in Figure 11.6.

Figure 11.6


To initialize voice recognition, call startActivityForResult, passing in an Intent that specifies the RecognizerIntent.ACTION_RECOGNIZE_SPEECH or RecognizerIntent.ACTION_WEB_SEARCH action. The former action enables you to receive the input speech within your application, whereas the latter action enables you to trigger a web search or voice action using the native providers.

The launch Intent must include the RecognizerIntent.EXTRA_LANGUAGE_MODEL extra to specify the language model used to parse the input audio. This can be either LANGUAGE_MODEL_FREE_FORM or LANGUAGE_MODEL_WEB_SEARCH; both are available as static constants from the RecognizerIntent class.

You can also specify a number of optional extras to control the language, potential result count, and display prompt using the following Recognizer Intent constants:

· EXTRA_LANGUAGE—Specifies a language constant from the Locale class to use an input language other than the device default. You can find the current default by calling the static getDefault method on the Locale class.

· EXTRA_MAX_RESULTS—Uses an integer value to limit the number of potential recognition results returned.

· EXTRA_PROMPT—Specifies a string that displays in the voice input dialog (shown in Figure 11.6) to prompt the user to speak.


The engine that handles the speech recognition may not be capable of understanding spoken input from all the languages available from the Locale class.

Not all devices include support for speech recognition. In such cases it is generally possible to download the voice recognition library from the Google Play Store.

Using Speech Recognition for Voice Input

When using voice recognition to receive the spoken words, call startActivityForResult using the RecognizerIntent.ACTION_RECOGNIZE_SPEECH action, as shown in Listing 11.2.


Listing 11.2: Initiating a speech recognition request

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
// Specify free form input
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_PROMPT,
                "Speak now " +
                "or forever hold your peace");
intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 1);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.ENGLISH);
startActivityForResult(intent, VOICE_RECOGNITION);

code snippet PA4AD_Ch11_Speech/src/

When the user finishes speaking, the speech recognition engine analyzes and processes the resulting audio and then returns the results through the onActivityResult handler as an ArrayList of strings in the EXTRA_RESULTS extra, as shown in Listing 11.3.

Listing 11.3: Finding the results of a speech recognition request

protected void onActivityResult(int requestCode,
                                int resultCode,
                                Intent data) {
  if (requestCode == VOICE_RECOGNITION && resultCode == RESULT_OK) {
    ArrayList<String> results;
    results =
      data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
    float[] confidence;
    String confidenceExtra = RecognizerIntent.EXTRA_CONFIDENCE_SCORES;
    confidence =
      data.getFloatArrayExtra(confidenceExtra);
    // TODO Do something with the recognized voice strings
  }
  super.onActivityResult(requestCode, resultCode, data);
}

code snippet PA4AD_Ch11_Speech/src/

Each string returned in the ArrayList represents a potential match for the spoken input. You can find the recognition engine's confidence in each result using the float array returned in the EXTRA_CONFIDENCE_SCORES extra. Each value in the array is the confidence score between 0 (no confidence) and 1 (high confidence) that the speech has been correctly recognized.
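As a sketch of how the two extras can be combined (plain Java, with a hypothetical bestMatch helper — the results list and confidence array are those retrieved from the Intent extras), you might simply pick the candidate the engine is most confident about:

```java
import java.util.Arrays;
import java.util.List;

public class SpeechResults {
    // Return the recognition candidate with the highest confidence score.
    // results and confidence are parallel collections, as returned in the
    // EXTRA_RESULTS and EXTRA_CONFIDENCE_SCORES extras.
    public static String bestMatch(List<String> results, float[] confidence) {
        int best = 0;
        for (int i = 1; i < confidence.length; i++)
            if (confidence[i] > confidence[best])
                best = i;
        return results.get(best);
    }

    public static void main(String[] args) {
        List<String> results = Arrays.asList("wreck a nice beach", "recognize speech");
        float[] confidence = { 0.3f, 0.9f };
        System.out.println(bestMatch(results, confidence)); // recognize speech
    }
}
```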

Using Speech Recognition for Search

Rather than handling the received speech yourself, you can use the RecognizerIntent.ACTION_WEB_SEARCH action to display a web search result or to trigger another type of voice action based on the user's speech, as shown in Listing 11.4.


Listing 11.4: Triggering a speech recognition web search

Intent intent = new Intent(RecognizerIntent.ACTION_WEB_SEARCH);
startActivityForResult(intent, 0);

code snippet PA4AD_Ch11_Speech/src/

Controlling Device Vibration

In Chapter 10 you learned how to create Notifications that can use vibration to enrich event feedback. In some circumstances, you may want to vibrate the device independently of Notifications. For example, vibrating the device is an excellent way to provide haptic user feedback and is particularly popular as a feedback mechanism for games.

To control device vibration, your application needs the VIBRATE permission:

<uses-permission android:name="android.permission.VIBRATE"/>

Device vibration is controlled through the Vibrator Service, accessible via the getSystemService method:

String vibratorService = Context.VIBRATOR_SERVICE;
Vibrator vibrator = (Vibrator)getSystemService(vibratorService);

Call vibrate to start device vibration; you can pass in either a vibration duration or a pattern of alternating pause/vibration durations, along with an index parameter that loops the pattern from the position specified (pass -1 to play the pattern only once):

long[] pattern = {1000, 2000, 4000, 8000, 16000 };
vibrator.vibrate(pattern, 0); // Loop the vibration pattern from index 0.
vibrator.vibrate(1000);       // Vibrate for 1 second.

To cancel vibration, call cancel; exiting your application automatically cancels any vibration it has initiated.
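Because the pattern array alternates pause and vibration durations, beginning with an initial pause, it always holds an even number of entries for evenly spaced pulses. As an illustrative sketch (plain Java; pulsePattern is a hypothetical helper, not an Android API), a pattern of identical pulses could be built like this:

```java
import java.util.Arrays;

public class VibrationPatterns {
    // Build a vibrate() pattern of `count` pulses. Entries alternate
    // pause/vibrate durations, starting with a pause, so the array
    // holds count * 2 elements.
    public static long[] pulsePattern(int count, long pauseMs, long vibrateMs) {
        long[] pattern = new long[count * 2];
        for (int i = 0; i < count; i++) {
            pattern[2 * i] = pauseMs;
            pattern[2 * i + 1] = vibrateMs;
        }
        return pattern;
    }

    public static void main(String[] args) {
        // Three short pulses; vibrator.vibrate(pattern, -1) would play this once.
        System.out.println(Arrays.toString(pulsePattern(3, 100, 250)));
    }
}
```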


Working with Animations

In Chapter 3, you learned how to define animations as external resources. Now, you get the opportunity to put them to use.

Android offers three kinds of animation:

· Tweened View Animations—Tweened animations are applied to Views, letting you define a series of changes in position, size, rotation, and opacity that animate the View contents.

· Frame Animations—Traditional cel-based animations in which a different Drawable is displayed in each frame. Frame-by-frame animations are displayed within a View, using its Canvas as a projection screen.

· Interpolated Property Animations—The property animation system enables you to animate almost anything within your application. It's a framework designed to affect any object property over a period of time using the specified interpolation technique.

Tweened View Animations

Tweened animations offer a simple way to provide depth, movement, or feedback to your users at a minimal resource cost.

Using animations to apply a set of orientation, scale, position, and opacity changes is much less resource-intensive than manually redrawing the Canvas to achieve similar effects, not to mention far simpler to implement.

Tweened animations are commonly used to:

· Transition between Activities

· Transition between layouts within an Activity

· Transition between different content displayed within the same View

· Provide user feedback, such as indicating progress or “shaking” an input box to indicate an incorrect or invalid data entry

Creating Tweened View Animations

Tweened animations are created using the Animation class. The following list explains the animation types available:

· AlphaAnimation—Lets you animate a change in the View's transparency (opacity or alpha blending)

· RotateAnimation—Lets you spin the selected View canvas in the XY plane

· ScaleAnimation—Lets you zoom in to or out from the selected View

· TranslateAnimation—Lets you move the selected View around the screen (although it will only be drawn within its original bounds)

Android offers the AnimationSet class, shown in Listing 11.5, to group and configure animations to be run as a set. You can define the start time and duration of each animation used within a set to control the timing and order of the animation sequence.


Listing 11.5: Defining an interpolated View animation

<set xmlns:android="http://schemas.android.com/apk/res/android">
  <scale
    android:fromXScale="0.0" android:toXScale="1.0"
    android:fromYScale="0.0" android:toYScale="1.0"
    android:pivotX="50%" android:pivotY="50%"
    android:duration="500"
  />
</set>
code snippet PA4AD_Ch11_Animation/res/anim/popin.xml


It's important to set the start offset and duration for each child animation; otherwise, they will all start and complete at the same time.

Applying Tweened Animations

You can apply animations to any View by calling its startAnimation method and passing in the Animation or Animation Set to apply.

Animation sequences run once and then stop, unless you modify this behavior using the setRepeatMode and setRepeatCount methods on the Animation or Animation Set. You can force an animation to loop or repeat in reverse by setting the repeat mode of RESTART or REVERSE, respectively. Setting the repeat count controls the number of times the animation repeats.


Using Animation Listeners

The AnimationListener lets you create an event handler that's fired when an animation begins or ends. This lets you perform actions before or after an animation has completed, such as changing the View contents or chaining multiple animations.

Call setAnimationListener on an Animation object, and pass in a new implementation of AnimationListener, overriding onAnimationEnd, onAnimationStart, and onAnimationRepeat, as required:

myAnimation.setAnimationListener(new AnimationListener() {
  public void onAnimationEnd(Animation animation) {
    // TODO Do something after animation is complete.
  }
  public void onAnimationStart(Animation animation) {
    // TODO Do something when the animation starts.
  }
  public void onAnimationRepeat(Animation animation) {
    // TODO Do something when the animation repeats.
  }
});

Animating Layouts and View Groups

A LayoutAnimation is used to animate View Groups, applying a single Animation (or Animation Set) to each child View in a predetermined sequence.

Use a LayoutAnimationController to specify an Animation (or Animation Set) that's applied to each child View in a View Group. Each View it contains will have the same animation applied, but you can use the Layout Animation Controller to specify the order and start time for each View.

Android includes two LayoutAnimationController classes:

· LayoutAnimationController—Lets you select the start offset of each View (in milliseconds) and the order (forward, reverse, and random) to apply the animation to each child View.

· GridLayoutAnimationController—A derived class that lets you assign the animation sequence of the child Views using grid row and column references.

Creating Layout Animations

To create a new Layout Animation, start by defining the Animation to apply to each child View. Then create a new LayoutAnimation, either in code or as an external animation resource, that references the animation to apply and defines the order and timing in which to apply it.

Listing 11.6 shows a Layout Animation definition stored as popinlayout.xml. The Layout Animation applies a simple “pop-in” animation randomly to each child View of any View Group it's assigned to.


Listing 11.6: Defining a layout animation

<layoutAnimation
  xmlns:android="http://schemas.android.com/apk/res/android"
  android:delay="0.5"
  android:animationOrder="random"
  android:animation="@anim/popin"
/>
code snippet PA4AD_Ch11_Animation/res/anim/popinlayout.xml

Using Layout Animations

After defining a Layout Animation, you can apply it to a View Group either in code or in the layout XML resource. In XML this is done using the android:layoutAnimation attribute in the layout definition:

<ListView
  android:layout_width="match_parent"
  android:layout_height="match_parent"
  android:layoutAnimation="@anim/popinlayout"
/>
To set a Layout Animation in code, call setLayoutAnimation on the View Group, passing in a reference to the LayoutAnimation object you want to apply. In each case, the Layout Animation will execute once, when the View Group is first laid out. You can force it to execute again by calling scheduleLayoutAnimation on the ViewGroup object. The animation will then be executed the next time the View Group is laid out. Layout Animations also support Animation Listeners.

aViewGroup.setLayoutAnimationListener(new AnimationListener() {
  public void onAnimationEnd(Animation _animation) {
    // TODO: Actions on animation complete.
  }
  public void onAnimationRepeat(Animation _animation) {}
  public void onAnimationStart(Animation _animation) {}
});

Creating and Using Frame-by-Frame Animations

Frame-by-frame animations are akin to traditional cel-based cartoons in which an image is chosen for each frame. Whereas tweened animations use the target View to supply the content of the animation, frame-by-frame animations enable you to specify a series of Drawable objects that are used as the background to a View.

The AnimationDrawable class is used to create a new frame-by-frame animation presented as a Drawable resource. You can define your Animation Drawable resource as an external resource in your project's res/drawable folder using XML.

Use the animation-list tag to group a collection of item nodes, each of which uses a drawable attribute to define an image to display and a duration attribute to specify the time (in milliseconds) to display it.

Listing 11.7 shows how to create a simple animation that displays a rocket taking off. (Rocket images are not included.)


Listing 11.7: Defining a frame-by-frame animation

<animation-list
  xmlns:android="http://schemas.android.com/apk/res/android"
  android:oneshot="false">
  <item android:drawable="@drawable/rocket1" android:duration="500" />
  <item android:drawable="@drawable/rocket2" android:duration="500" />
  <item android:drawable="@drawable/rocket3" android:duration="500" />
</animation-list>

code snippet PA4AD_Ch11_Animation/res/drawable/animated_rocket.xml

To display your animation, set it as the background to a View using the setBackgroundResource method:

ImageView image = (ImageView)findViewById(R.id.my_animation_view); // Hypothetical view ID
image.setBackgroundResource(R.drawable.animated_rocket);

Alternatively, use setBackgroundDrawable to supply a Drawable instance instead of a resource reference. Run the animation by calling its start method:

AnimationDrawable animation = (AnimationDrawable)image.getBackground();
animation.start();

Interpolated Property Animations

Android 3.0 (API level 11) introduced a new animation technique that animates object properties. Whereas the tweened View animations described in the earlier section modify the appearance of the affected View without modifying the object itself, property animations modify the properties of the underlying object directly.

As a result, you can modify any property of any object—visual or otherwise—using a property animator to transition it from one value to another, over a given period of time, using the interpolation algorithm of your choice, and setting the repeat behavior as required. The value can be any object, from a regular integer to a complex Class instance.

As a result, you can use property animators to create a smooth transition for anything within your code; the target property doesn't even need to represent something visual. Property animations are effectively iterators implemented using a background timer to increment or decrement a value according to a given interpolation path over a given period of time.

This is an incredibly powerful tool that can be used for anything from a simple View effect, such as moving, scaling, or fading a View, to complex animations, including runtime layout changes and curved transitions.

Creating Property Animations

The simplest technique for creating property animations is using an ObjectAnimator. The Object Animator class includes the ofFloat, ofInt, and ofObject static methods to easily create an animation that transitions the specified property of the target object between the values provided:

String propertyName = "alpha";
float from = 1f;
float to = 0f;
ObjectAnimator anim = ObjectAnimator.ofFloat(targetObject, propertyName, from, to);

Alternatively, you can provide a single value to animate the property from its current value to its final value:

ObjectAnimator anim = ObjectAnimator.ofFloat(targetObject, propertyName, to);


To animate a given property, there must be associated getter/setter functions on the underlying object. In the preceding example, the targetObject must include getAlpha and setAlpha methods that return and accept a float value, respectively.

To target a property of a type other than integer or float, use the ofObject method. This method requires that you supply an implementation of the TypeEvaluator class. Implement the evaluate method to return an object that should be returned when the animation is a given fraction of the way through animating between the start and end objects:

TypeEvaluator<MyClass> evaluator = new TypeEvaluator<MyClass>() {
  public MyClass evaluate(float fraction,
                          MyClass startValue,
                          MyClass endValue) {
    MyClass result = new MyClass();
    // TODO Modify the new object to represent the given
    // fraction between the start and end values.
    return result;
  }
};

// Animate between two instances
ValueAnimator oa
  = ObjectAnimator.ofObject(evaluator, myClassFromInstance, myClassToInstance);
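To make the evaluate contract concrete, here is a minimal sketch in plain Java (Point is a hypothetical two-field class, not an Android type) that blends each field linearly — the value a TypeEvaluator implementation would typically return at each animation fraction:

```java
public class PointEvaluatorSketch {
    public static class Point {
        public final float x, y;
        public Point(float x, float y) { this.x = x; this.y = y; }
    }

    // Mirrors TypeEvaluator.evaluate: return the value the given
    // fraction of the way between startValue and endValue.
    public static Point evaluate(float fraction, Point start, Point end) {
        return new Point(start.x + fraction * (end.x - start.x),
                         start.y + fraction * (end.y - start.y));
    }

    public static void main(String[] args) {
        Point mid = evaluate(0.5f, new Point(0, 0), new Point(10, 20));
        System.out.println(mid.x + "," + mid.y); // 5.0,10.0
    }
}
```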

By default, each animation will run once with a 300ms duration. Use the setDuration method to alter the amount of time the interpolator should use to complete the transition:

anim.setDuration(1500);
You can use the setRepeatMode and setRepeatCount methods to cause the animation to be applied either a set number of times or infinitely:

anim.setRepeatCount(ValueAnimator.INFINITE);
You can set the repeat mode either to restart from the beginning or to apply the animation in reverse:

anim.setRepeatMode(ValueAnimator.REVERSE);
To create the same Object Animator as an XML resource, create a new XML file in the res/animator folder:

<objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
  android:propertyName="alpha"
  android:valueFrom="1.0"
  android:valueTo="0.0"
  android:valueType="floatType"
  android:duration="1500"
/>

The filename can then be used as the resource identifier. To affect a particular object with an XML animator resource, use the AnimatorInflater.loadAnimator method, passing in the current context and the resource ID of the animation to apply to obtain a copy of the Object Animator, and then use the setTarget method to apply it to an object:

Animator anim = AnimatorInflater.loadAnimator(context, resID);
anim.setTarget(targetObject);

By default, the interpolator used to transition between the start and end values of each animation uses a nonlinear AccelerateDecelerateInterpolator, which provides the effect of accelerating at the beginning of the transition and decelerating when approaching the end.

You can use the setInterpolator method to apply one of the following SDK interpolators:

· AccelerateDecelerateInterpolator—The rate of change starts and ends slowly but accelerates through the middle.

· AccelerateInterpolator—The rate of change starts slowly but accelerates through the middle.

· AnticipateInterpolator—The change starts backward and then flings forward.

· AnticipateOvershootInterpolator—The change starts backward, flings forward, overshoots the target value, and finally goes back to the final value.

· BounceInterpolator—The change bounces at the end.

· DecelerateInterpolator—The rate of change starts out quickly and then decelerates.

· LinearInterpolator—The rate of change is constant.

· OvershootInterpolator—The change flings forward, overshoots the last value, and then comes back.

anim.setInterpolator(new AnticipateOvershootInterpolator());

You can also extend your own TimeInterpolator class to specify a custom interpolation algorithm.
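The getInterpolation method of a custom interpolator maps an elapsed-time fraction in [0, 1] to an interpolated fraction. As a plain-Java sketch of such a curve — a quadratic ease-in, which reproduces AccelerateInterpolator's default behavior:

```java
public class EaseInCurve {
    // The mapping a custom TimeInterpolator.getInterpolation would
    // provide: input and output fractions are both in [0, 1].
    // t * t gives a quadratic ease-in (slow start, fast finish).
    public static float getInterpolation(float t) {
        return t * t;
    }

    public static void main(String[] args) {
        for (float t = 0f; t <= 1f; t += 0.25f)
            System.out.println(t + " -> " + getInterpolation(t));
    }
}
```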

To execute an animation, call its start method:

anim.start();
Creating Property Animation Sets

Android includes the AnimatorSet class to make it easier to create complex, interrelated animations.

AnimatorSet bouncer = new AnimatorSet();

To add a new animation to an Animator Set, use the play method. This returns an AnimatorSet.Builder object that lets you specify when to play the specified animation in relation to another:

AnimatorSet mySet = new AnimatorSet();
// firstAnim, secondAnim, and thirdAnim are hypothetical Animator instances.
mySet.play(secondAnim).after(firstAnim);
mySet.play(secondAnim).with(thirdAnim);

Use the start method to execute the sequence of animations:

mySet.start();

Using Animation Listeners

The Animator.AnimatorListener interface lets you create event handlers that are fired when an animation begins, ends, repeats, or is canceled:

Animator.AnimatorListener l = new AnimatorListener() {
  public void onAnimationStart(Animator animation) {
    // TODO Auto-generated method stub
  }
  public void onAnimationRepeat(Animator animation) {
    // TODO Auto-generated method stub
  }
  public void onAnimationEnd(Animator animation) {
    // TODO Auto-generated method stub
  }
  public void onAnimationCancel(Animator animation) {
    // TODO Auto-generated method stub
  }
};

To apply an Animation Listener to your property animation, use the addListener method:

anim.addListener(l);
Enhancing Your Views

The explosive growth in the smartphone and tablet market has led to equally dramatic changes and improvements to mobile UIs.

This section describes how to use more advanced UI visual effects such as Shaders, translucency, touch screens with multiple touch, OpenGL, and hardware acceleration to improve the performance and aesthetics of your Activities and Views.

Advanced Canvas Drawing

You were introduced to the Canvas class in Chapter 4, where you learned how to create your own Views. The Canvas is also used in Chapter 13, “Maps, Geocoding, and Location-Based Services,” to annotate Overlays for MapViews.

The concept of the canvas is a common metaphor used in graphics programming and generally consists of three basic drawing components:

· Canvas—Supplies the draw methods that paint drawing primitives onto the underlying bitmap.

· Paint—Also referred to as a “brush,” Paint lets you specify how a primitive is drawn on the bitmap.

· Bitmap—The surface being drawn on.

Most of the advanced techniques described in this chapter involve variations and modifications to the Paint object that enable you to add depth and texture to otherwise flat raster drawings.

The Android drawing API supports translucency, gradient fills, rounded rectangles, and anti-aliasing.

Owing to resource limitations, Android does not support vector graphics; instead, it uses traditional raster-style repaints. The result of this raster approach is improved efficiency, but changing a Paint object does not affect primitives that have already been drawn; it affects only new elements.


For those of you with a Windows development background, the two-dimensional (2D) drawing capabilities of Android are roughly equivalent to those available in GDI+.

What Can You Draw?

The Canvas class encapsulates the bitmap used as a surface for your artistic endeavors; it also exposes the draw* methods used to implement your designs.

Without going into detail about each draw method, the following list provides a taste of the primitives available:

· drawARGB/drawRGB/drawColor—Fills the canvas with a single color.

· drawArc—Draws an arc between two angles within an area bounded by a rectangle.

· drawBitmap—Draws a bitmap on the Canvas. You can alter the appearance of the target bitmap by specifying a target size or using a matrix to transform it.

· drawBitmapMesh—Draws a bitmap using a mesh that lets you manipulate the appearance of the target by moving points within it.

· drawCircle—Draws a circle of a specified radius centered on a given point.

· drawLine(s)—Draws a line (or series of lines) between two points.

· drawOval—Draws an oval bounded by the rectangle specified.

· drawPaint—Fills the entire Canvas with the specified Paint.

· drawPath—Draws the specified Path. A Path object is often used to hold a collection of drawing primitives within a single object.

· drawPicture—Draws a Picture object within the specified rectangle (not supported when using hardware acceleration).

· drawPosText—Draws a text string specifying the offset of each character (not supported when using hardware acceleration).

· drawRect—Draws a rectangle.

· drawRoundRect—Draws a rectangle with rounded edges.

· drawText—Draws a text string on the Canvas. The text font, size, color, and rendering properties are set in the Paint object used to render the text.

· drawTextOnPath—Draws text that follows along a specified path (not supported when using hardware acceleration).

· drawVertices—Draws a series of tri-patches specified as a series of vertex points (not supported when using hardware acceleration).

Each drawing method lets you specify a Paint object to render it. In the following sections, you learn how to create and modify Paint objects to get the most out of your drawings.

Getting the Most from Your Paint

The Paint class represents a paintbrush and palette. It lets you choose how to render the primitives you draw onto the Canvas using the draw methods described in the previous section. By modifying the Paint object, you can control the color, style, font, and special effects used when drawing.


Not all the Paint options described here are available if you're using hardware acceleration to improve 2D drawing performance. As a result, it's important to check how hardware acceleration affects your 2D drawing.

Most simply, setColor enables you to select the color of a Paint, whereas the style of a Paint object (controlled using setStyle) enables you to decide if you want to draw only the outline of a drawing object (STROKE), just the filled portion (FILL), or both (STROKE_AND_FILL).

Beyond these simple controls, the Paint class also supports transparency and can be modified with a variety of Shaders, filters, and effects to provide a rich palette of complex paints and brushes.

The Android SDK includes several excellent projects that demonstrate most of the features available in the Paint class. They are available in the graphics subfolder of the API demos at:

[sdk root folder]\samples\android-15\ApiDemos\src\com\example\android\apis\graphics

In the following sections, you learn what some of these features are and how to use them. These sections outline what can be achieved (such as gradients and edge embossing) without exhaustively listing all possible alternatives.

Using Translucency

All colors in Android include an opacity component (alpha channel). You define an alpha value for a color when you create it using the argb or parseColor methods:

// Make color red and 50% transparent
int opacity = 127;
int intColor = Color.argb(opacity, 255, 0, 0);
int parsedColor = Color.parseColor("#7FFF0000");
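The two forms are equivalent: argb simply packs the four 8-bit channels into one int, with alpha in the top byte. A plain-Java sketch of that packing (mirroring what Color.argb does) shows why the opacity value 127 and the #7F prefix describe the same 50 percent transparency:

```java
public class ColorPacking {
    // Pack ARGB channels (each 0-255) into a single 32-bit color int,
    // as android.graphics.Color.argb does: alpha occupies the top byte.
    public static int argb(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int halfRed = argb(127, 255, 0, 0);
        System.out.println(Integer.toHexString(halfRed)); // 7fff0000
    }
}
```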

Alternatively, you can set the opacity of an existing Paint object using the setAlpha method:

// Make color 50% transparent
int opacity = 127;

Creating a paint color that's not 100 percent opaque means that any primitive drawn with it will be partially transparent—making whatever is drawn beneath it partially visible.

You can use transparency effects in any class or method that uses colors including Paint colors, Shaders, and Mask Filters.

Introducing Shaders

Extensions of the Shader class let you create Paints that fill drawn objects with more than a single solid color.

The most common use of Shaders is to define gradient fills; gradients are an excellent way to add depth and texture to 2D drawings. Android includes three gradient Shaders as well as a Bitmap Shader and a Compose Shader.

Trying to describe painting techniques seems inherently futile, so Figure 11.7 shows how each Shader works. Represented from left to right are LinearGradient, RadialGradient, and SweepGradient.


Not included in the image in Figure 11.7 is the ComposeShader, which lets you create a composite of multiple Shaders, nor the BitmapShader, which lets you create a brush based on a bitmap image.

Figure 11.7


Creating Gradient Shaders

Gradient Shaders let you fill drawings with an interpolated color range. You can define the gradient in two ways. The first is a simple transition between two colors:

int colorFrom = Color.BLACK;
int colorTo = Color.WHITE;
LinearGradient myLinearGradient = 
  new LinearGradient(x1, y1, x2, y2, 
                     colorFrom, colorTo, TileMode.CLAMP);

The second alternative is to specify a more complex series of colors distributed at set proportions:

int[] gradientColors = new int[3];
gradientColors[0] = Color.GREEN;
gradientColors[1] = Color.YELLOW;
gradientColors[2] = Color.RED;
float[] gradientPositions = new float[3];
gradientPositions[0] = 0.0f;
gradientPositions[1] = 0.5f;
gradientPositions[2] = 1.0f;
RadialGradient radialGradientShader
  = new RadialGradient(centerX, centerY, radius,
                       gradientColors, gradientPositions,
                       TileMode.CLAMP);

Each gradient Shader (linear, radial, and sweep) lets you define the gradient fill using either of these techniques.
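Between adjacent color stops, a gradient interpolates each color channel linearly. The following plain-Java sketch shows that per-pixel arithmetic for two packed ARGB colors (illustrative only — not an Android API):

```java
public class GradientBlend {
    // Linearly blend two packed ARGB colors channel by channel;
    // t is the fraction (0-1) of the way between the two gradient stops.
    public static int blend(int from, int to, float t) {
        int a = channel(from >>> 24, to >>> 24, t);
        int r = channel((from >> 16) & 0xFF, (to >> 16) & 0xFF, t);
        int g = channel((from >> 8) & 0xFF, (to >> 8) & 0xFF, t);
        int b = channel(from & 0xFF, to & 0xFF, t);
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    private static int channel(int from, int to, float t) {
        return (int) (from + t * (to - from));
    }

    public static void main(String[] args) {
        // Halfway between opaque black and opaque white: a mid gray.
        System.out.println(Integer.toHexString(blend(0xFF000000, 0xFFFFFFFF, 0.5f)));
    }
}
```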

Applying Shaders to Paint

To use a Shader when drawing, apply it to a Paint using the setShader method:

myPaint.setShader(myLinearGradient);
Anything you draw with this Paint will be filled with the Shader you specified rather than the paint color.

Using Shader Tile Modes

The brush sizes of the gradient Shaders are defined using explicit bounding rectangles or center points and radius lengths; the Bitmap Shader implies a brush size through its bitmap size.

If the area defined by your Shader brush is smaller than the area being filled, the TileMode determines how the remaining area will be covered. You can define which tile mode to use with the following static constants:

· CLAMP—Uses the edge colors of the Shader to fill the extra space

· MIRROR—Flips the Shader image horizontally and vertically so that each image seams with the last

· REPEAT—Repeats the Shader image horizontally and vertically, but doesn't flip it
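Conceptually, the tile modes differ in how a coordinate outside the brush area is mapped back into it. A plain-Java sketch of the REPEAT and MIRROR mappings for a brush of width w (illustrative arithmetic only, not an Android API):

```java
public class TileModes {
    // REPEAT: wrap the coordinate back into [0, w).
    public static int repeat(int x, int w) {
        return ((x % w) + w) % w;
    }

    // MIRROR: wrap into [0, 2w) and reflect the second half, so
    // each tile seams with its neighbor instead of jumping back.
    public static int mirror(int x, int w) {
        int p = ((x % (2 * w)) + 2 * w) % (2 * w);
        return p < w ? p : 2 * w - 1 - p;
    }

    public static void main(String[] args) {
        System.out.println(repeat(12, 10)); // 2
        System.out.println(mirror(12, 10)); // 7
    }
}
```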

Using Mask Filters

The MaskFilter classes let you assign edge effects to your Paint. Mask Filters are not supported when the Canvas is hardware-accelerated.

Extensions to MaskFilter apply transformations to the alpha-channel of a Paint along its outer edge. Android includes the following Mask Filters:

· BlurMaskFilter—Specifies a blur style and radius to feather the edges of your Paint

· EmbossMaskFilter—Specifies the direction of the light source and ambient light level to add an embossing effect

To apply a Mask Filter, use the setMaskFilter method, passing in a MaskFilter object:

// Set the direction of the light source
float[] direction = new float[]{ 1, 1, 1 };
// Set the ambient light level
float light = 0.4f;
// Choose a level of specularity to apply
float specular = 6;
// Apply a level of blur to apply to the mask
float blur = 3.5f;
EmbossMaskFilter emboss = new EmbossMaskFilter(direction, light,
                                               specular, blur);
// Apply the mask
if (!canvas.isHardwareAccelerated())
  myPaint.setMaskFilter(emboss);
The FingerPaint API demo included in the SDK is an excellent example of how to use MaskFilters. It demonstrates the effect of both the emboss and blur filters.

Using Color Filters

Whereas Mask Filters are transformations of a Paint's alpha-channel, a ColorFilter applies a transformation to each of the RGB channels. All ColorFilter-derived classes ignore the alpha-channel when performing their transformations.

Android includes three Color Filters:

· ColorMatrixColorFilter—Lets you specify a 4 x 5 ColorMatrix to apply to a Paint. Color Matrixes are commonly used to perform image processing programmatically and are useful because they support chaining transformations using matrix multiplication.

· LightingColorFilter—Multiplies the RGB channels by the first color before adding the second. The result of each transformation will be clamped between 0 and 255.

· PorterDuffColorFilter—Lets you use any one of the 16 Porter-Duff rules for digital image compositing to apply a specified color to the Paint.
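As a rough plain-Java sketch of the per-channel arithmetic a LightingColorFilter performs (treating each channel and each filter color component as a 0-255 value; this is an approximation of the internal math, but the scale-add-clamp shape matches the behavior described above):

```java
public class LightingMath {
    // Approximate per-channel arithmetic of LightingColorFilter(mul, add):
    // scale the channel by the multiplier color component, add the
    // additive color component, then clamp to the valid 0-255 range.
    public static int apply(int channel, int mul, int add) {
        int out = (channel * mul) / 255 + add;
        return Math.max(0, Math.min(255, out));
    }

    public static void main(String[] args) {
        System.out.println(apply(200, 255, 100)); // clamped to 255
        System.out.println(apply(128, 255, 0));   // unchanged: 128
    }
}
```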

Apply ColorFilters using the setColorFilter method:

myPaint.setColorFilter(new LightingColorFilter(Color.BLUE, Color.RED));

An excellent example of using a Color Filter and Color Matrixes is the ColorMatrixSample API demo.


Using Path Effects

The effects described so far affect the way the Paint fills a drawing; Path Effects are used to control how its outline (or stroke) is drawn.

Path Effects are particularly useful for drawing Path primitives, but they can be applied to any Paint to affect the way the stroke is drawn.

Using Path Effects, you can change the appearance of a shape's corners and control the appearance of the outline. Android includes several Path Effects, including the following:

· CornerPathEffect—Lets you smooth sharp corners in the shape of a primitive by replacing them with rounded corners.

· DashPathEffect—Rather than drawing a solid outline, you can use the DashPathEffect to create an outline of broken lines (dashes/dots). You can specify any repeating pattern of solid/empty line segments.

· DiscretePathEffect—Similar to the DashPathEffect, but with added randomness. Specifies the length of each segment and a degree of deviation from the original path to use when drawing it.

· PathDashPathEffect—Enables you to define a new shape (path) to use as a stamp to outline the original path.

The following effects let you combine multiple Path Effects to a single Paint:

· SumPathEffect—Adds two effects to a path in sequence, such that each effect is applied to the original path and the two results are combined.

· ComposePathEffect—Applies first one effect and then applies the second effect to the result of the first.

Path Effects that modify the shape of the object being drawn change the area of the affected shape. This ensures that any fill effects applied to the same shape are drawn within the new bounds.

Path Effects are applied to Paint objects using the setPathEffect method:

borderPaint.setPathEffect(new CornerPathEffect(5));

The Path Effects API sample gives an excellent guide to how to apply each of these effects.


Changing the Transfer Mode

Change a Paint's Xfermode to affect the way it paints new colors on top of what's already on the Canvas. Under normal circumstances, painting on top of an existing drawing layers the new shape on top. If the new Paint is fully opaque, it totally obscures the paint underneath; if it's partially transparent, it tints the colors underneath.

The following Xfermode subclasses let you change this behavior:

· AvoidXfermode—Specifies a color and tolerance to force your Paint to avoid drawing over (or only draw over) it.

· PixelXorXfermode—Applies a simple pixel XOR operation when covering existing colors.

· PorterDuffXfermode—This is a very powerful transfer mode with which you can use any of the 16 Porter-Duff rules for image composition to control how the paint interacts with the existing canvas image.
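To give a flavor of what the Porter-Duff rules compute, here is a plain-Java sketch of the most common rule, SRC_OVER, for a single premultiplied color channel (values normalized to [0, 1]; illustrative arithmetic, not an Android API):

```java
public class PorterDuffSketch {
    // SRC_OVER for one premultiplied channel: the source is layered on
    // top, and the destination shows through by (1 - source alpha).
    public static float srcOver(float src, float dst, float srcAlpha) {
        return src + dst * (1f - srcAlpha);
    }

    public static void main(String[] args) {
        // A 50%-opaque source over an opaque destination channel.
        System.out.println(srcOver(0.5f, 0.8f, 0.5f));
    }
}
```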

To apply transfer modes, use the setXfermode method:

AvoidXfermode avoid = new AvoidXfermode(Color.BLUE, 10,
                                        AvoidXfermode.Mode.AVOID);
myPaint.setXfermode(avoid);

Improving Paint Quality with Anti-Aliasing

When you create a new Paint object, you can pass in several flags that affect the way the Paint will be rendered. One of the most interesting is the ANTI_ALIAS_FLAG, which ensures that diagonal lines drawn with this paint are anti-aliased to give a smooth appearance (at the cost of performance).

Anti-aliasing is particularly important when drawing text, as anti-aliased text can be significantly easier to read. To create even smoother text effects, you can apply the SUBPIXEL_TEXT_FLAG, which applies subpixel anti-aliasing.

Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG|Paint.SUBPIXEL_TEXT_FLAG);

You can also set both of these flags manually using the setSubpixelText and setAntiAlias methods:

paint.setSubpixelText(true);
paint.setAntiAlias(true);


Canvas Drawing Best Practice

2D owner-draw operations tend to be expensive in terms of processor use; inefficient drawing routines can block the GUI thread and have a detrimental effect on application responsiveness. This is particularly true for resource-constrained mobile devices.

In Chapter 4 you learned how to create your own Views by overriding the onDraw method of new View-derived classes. You need to be aware of the resource drain and CPU-cycle cost of your onDraw method to ensure you don't end up with an attractive application that's unresponsive, laggy, or “janky.”

A lot of techniques exist to help minimize the resource drain associated with owner-drawn controls. Rather than focus on general principles, I'll describe some Android-specific considerations for ensuring that you can create activities that look good and remain interactive. (Note that this list is not exhaustive.)

· Consider size and orientation—When you design your Views and Overlays, be sure to consider (and test!) how they look at different resolutions, pixel densities, and sizes.

· Create static objects once—Object creation and garbage collection are particularly expensive operations. Where possible, create drawing objects such as Paint objects, Paths, and Shaders once, rather than re-creating them each time the View is invalidated.

· Remember that onDraw is expensive—Performing the onDraw method is an expensive process that forces Android to perform several image composition and bitmap construction operations. Many of the following points suggest ways to modify the appearance of your Canvas without having to redraw it:

· Use Canvas transforms—Use Canvas transforms, such as rotate and translate, to simplify complex relational positioning of elements on your canvas. For example, rather than positioning and rotating each text element around a clock face, simply rotate the canvas 30 degrees for each hour marker, and draw the text in the same place each time.

· Use Animations—Consider using Animations to perform preset transformations of your View rather than manually redrawing it. Scale, rotation, and translation Animations can be performed on any View within an Activity and provide a resource-efficient way to provide zoom, rotate, or shake effects.

· Consider using bitmaps, NinePatches, and Drawable resources—If your Views feature static backgrounds, you should consider using a Drawable such as a bitmap, scalable NinePatch, or static XML Drawable rather than dynamically creating it.

· Avoid overdrawing—A combination of raster painting and layered Views can result in many layers being drawn on top of each other. Before drawing a layer or object, check to confirm if it will be completely obscured by a layer above it. It's good practice to avoid drawing more than 2.5 times the number of pixels on screen per frame. Transparent pixels still count—and are more expensive to draw than opaque colors.
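The Canvas-transform advice above boils down to trigonometry the Canvas performs for you. This framework-free sketch (plain Java, hypothetical coordinate values) shows that rotating the drawing surface by an angle and drawing at a fixed point lands in the same place as manually rotating the point yourself:

```java
public class RotateDemo {
    // Rotate a point around the origin, mirroring what Canvas.rotate
    // does to subsequent drawing coordinates (y axis points down,
    // so positive angles are visually clockwise).
    static double[] rotate(double x, double y, double degrees) {
        double r = Math.toRadians(degrees);
        return new double[] {
            x * Math.cos(r) - y * Math.sin(r),
            x * Math.sin(r) + y * Math.cos(r)
        };
    }

    public static void main(String[] args) {
        // Drawing "12 o'clock" at (0, -100) after rotating 90 degrees
        // lands where "3 o'clock" would be drawn manually: (100, 0).
        double[] p = rotate(0, -100, 90);
        System.out.printf("%.1f, %.1f%n", p[0], p[1]);
    }
}
```

Letting the Canvas apply this transform once per frame is cheaper and less error-prone than repeating the trigonometry for every drawn element.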

Advanced Compass Face Example

In Chapter 4, you created a simple compass UI. In the following example, you make some significant changes to the Compass View's onDraw method to change it from a simple, flat compass to a dynamic artificial horizon, as shown in Figure 11.8. Because the image in Figure 11.8 is limited to black and white, you need to create the control to see the full effect.

Figure 11.8


1. Start by adding properties to store the pitch and roll values:

private float pitch;
public void setPitch(float _pitch) {
  pitch = _pitch;
}
public float getPitch() {
  return pitch;
}

private float roll;
public void setRoll(float _roll) {
  roll = _roll;
}
public float getRoll() {
  return roll;
}

2. Modify the colors.xml resource file to include color values for the border gradient, the glass compass shading, the sky, and the ground. Also update the colors used for the border and the face markings:

<?xml version="1.0" encoding="utf-8"?>
<resources>
  <color name="background_color">#F000</color>
  <color name="marker_color">#FFFF</color>
  <color name="text_color">#FFFF</color>
  <color name="shadow_color">#7AAA</color>
  <color name="outer_border">#FF444444</color>
  <color name="inner_border_one">#FF323232</color>
  <color name="inner_border_two">#FF414141</color>
  <color name="inner_border">#FFFFFFFF</color>
  <color name="horizon_sky_from">#FFA52A2A</color>
  <color name="horizon_sky_to">#FFFFC125</color>
  <color name="horizon_ground_from">#FF5F9EA0</color>
  <color name="horizon_ground_to">#FF00008B</color>
</resources>

3. The Paint and Shader objects used for the sky and ground in the artificial horizon are created based on the size of the current View, so they can't be static like the Paint objects you created in Chapter 4. Instead of creating Paint objects, update the initCompassView method in the CompassView class to construct the gradient arrays and colors they use. The existing method code can be left largely intact, with some changes to the textPaint, circlePaint, and markerPaint variables, as highlighted in the following code:

int[] borderGradientColors;
float[] borderGradientPositions;
int[] glassGradientColors;
float[] glassGradientPositions;
int skyHorizonColorFrom;
int skyHorizonColorTo;
int groundHorizonColorFrom;
int groundHorizonColorTo;
protected void initCompassView() {
  // Get external resources
  Resources r = this.getResources();
  circlePaint = new Paint(Paint.ANTI_ALIAS_FLAG);
  circlePaint.setColor(r.getColor(R.color.background_color));
  circlePaint.setStrokeWidth(1);
  circlePaint.setStyle(Paint.Style.STROKE);
  northString = r.getString(R.string.cardinal_north);
  eastString = r.getString(R.string.cardinal_east);
  southString = r.getString(R.string.cardinal_south);
  westString = r.getString(R.string.cardinal_west);
  textPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
  textPaint.setColor(r.getColor(R.color.text_color));
  textPaint.setSubpixelText(true);
  textHeight = (int)textPaint.measureText("yY");
  markerPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
  markerPaint.setColor(r.getColor(R.color.marker_color));
  markerPaint.setShadowLayer(2, 1, 1, r.getColor(R.color.shadow_color));

3.1 Still within the initCompassView method, create the color and position arrays that will be used by a radial Shader to paint the outer border:

protected void initCompassView() {
  [ ... Existing code ... ]
  borderGradientColors = new int[4];
  borderGradientPositions = new float[4];
  borderGradientColors[3] = r.getColor(R.color.outer_border);
  borderGradientColors[2] = r.getColor(R.color.inner_border_one);
  borderGradientColors[1] = r.getColor(R.color.inner_border_two);
  borderGradientColors[0] = r.getColor(R.color.inner_border);
  borderGradientPositions[0] = 0.0f;
  borderGradientPositions[1] = 1-0.06f;
  borderGradientPositions[2] = 1-0.03f;
  borderGradientPositions[3] = 1.0f;

3.2 Then create the radial gradient color and position arrays that will be used to create the semitransparent “glass dome” that sits on top of the View to give it the illusion of depth:

protected void initCompassView() {
  [ ... Existing code ... ]
  glassGradientColors = new int[5];
  glassGradientPositions = new float[5];
  int glassColor = 245;
  glassGradientColors[4] = Color.argb(65, glassColor,
                                      glassColor, glassColor);
  glassGradientColors[3] = Color.argb(100, glassColor,
                                      glassColor, glassColor);
  glassGradientColors[2] = Color.argb(50, glassColor,
                                      glassColor, glassColor);
  glassGradientColors[1] = Color.argb(0, glassColor,
                                      glassColor, glassColor);
  glassGradientColors[0] = Color.argb(0, glassColor,
                                      glassColor, glassColor);
  glassGradientPositions[4] = 1-0.0f;
  glassGradientPositions[3] = 1-0.06f;
  glassGradientPositions[2] = 1-0.10f;
  glassGradientPositions[1] = 1-0.20f;
  glassGradientPositions[0] = 1-1.0f;

3.3 Finally, get the colors you'll use to create the linear gradients that will represent the sky and the ground in the artificial horizon:

protected void initCompassView() {
  [ ... Existing code ... ]
  skyHorizonColorFrom = r.getColor(R.color.horizon_sky_from);
  skyHorizonColorTo = r.getColor(R.color.horizon_sky_to);
  groundHorizonColorFrom = r.getColor(R.color.horizon_ground_from);
  groundHorizonColorTo = r.getColor(R.color.horizon_ground_to);

4. Before you start drawing the face, create a new enum that stores each of the cardinal directions:

private enum CompassDirection { N, NNE, NE, ENE,
                                E, ESE, SE, SSE,
                                S, SSW, SW, WSW,
                                W, WNW, NW, NNW }

5. Now you need to completely replace the existing onDraw method. You start by figuring out some size-based values, including the center of the View, the radius of the circular control, and the rectangles that will enclose the outer (heading) and inner (tilt and roll) face elements. To start, replace the existing onDraw method:

protected void onDraw(Canvas canvas) {

6. Calculate the width of the outer (heading) ring based on the size of the font used to draw the heading values:

  float ringWidth = textHeight + 4;

7. Calculate the height and width of the View, and use those values to establish the radius of the inner and outer face dials, as well as to create the bounding boxes for each face:

  int height = getMeasuredHeight();
  int width = getMeasuredWidth();
  int px = width/2;
  int py = height/2;
  Point center = new Point(px, py);
  int radius = Math.min(px, py)-2;
  RectF boundingBox = new RectF(center.x - radius,
                                center.y - radius,
                                center.x + radius,
                                center.y + radius);
  RectF innerBoundingBox = new RectF(center.x - radius + ringWidth,
                                     center.y - radius + ringWidth,
                                     center.x + radius - ringWidth,
                                     center.y + radius - ringWidth);
  float innerRadius = innerBoundingBox.height()/2;
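The measurements in steps 6 and 7 are plain arithmetic, so they can be sanity-checked off-device. This standalone sketch (plain Java, using a hypothetical 200×320 view and a hypothetical 16-pixel text height) mirrors those calculations:

```java
public class CompassGeometry {
    // Step 7: the face fits the shorter edge, with a 2-pixel margin.
    static int faceRadius(int width, int height) {
        int px = width / 2;
        int py = height / 2;
        return Math.min(px, py) - 2;
    }

    // Step 6: the heading ring is as wide as the text plus padding;
    // the inner face is inset by that ring width.
    static float innerRadius(int faceRadius, int textHeight) {
        float ringWidth = textHeight + 4;
        return faceRadius - ringWidth;
    }

    public static void main(String[] args) {
        // Hypothetical 200x320 portrait view with a 16px heading font.
        int radius = faceRadius(200, 320);
        System.out.println(radius);                       // 98
        System.out.println(innerRadius(radius, 16));      // 78.0
    }
}
```

Working through the numbers like this makes it clear why the bounding boxes stay square and centered regardless of the View's aspect ratio.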

8. With the dimensions of the View established, it's time to start drawing the faces.

Start from the bottom layer at the outside, and work your way in and up, starting with the outer face (heading). Create a new RadialGradient Shader using the colors and positions you defined in step 3.1, and assign that Shader to a new Paint before using it to draw a circle:

RadialGradient borderGradient = new RadialGradient(px, py, radius,
  borderGradientColors, borderGradientPositions, TileMode.CLAMP);
Paint pgb = new Paint();
pgb.setShader(borderGradient);
Path outerRingPath = new Path();
outerRingPath.addOval(boundingBox, Direction.CW);
canvas.drawPath(outerRingPath, pgb);

9. Now you need to draw the artificial horizon. You do this by dividing the circular face into two sections, one representing the sky and the other the ground. The proportion of each section depends on the current pitch.

Start by creating the Shader and Paint objects that will be used to draw the sky and earth:

LinearGradient skyShader = new LinearGradient(
  center.x, innerBoundingBox.top, center.x, innerBoundingBox.bottom,
  skyHorizonColorFrom, skyHorizonColorTo, TileMode.CLAMP);
Paint skyPaint = new Paint();
skyPaint.setShader(skyShader);

LinearGradient groundShader = new LinearGradient(
  center.x, innerBoundingBox.top, center.x, innerBoundingBox.bottom,
  groundHorizonColorFrom, groundHorizonColorTo, TileMode.CLAMP);
Paint groundPaint = new Paint();
groundPaint.setShader(groundShader);

10. Normalize the pitch and roll values to clamp them within ±90 degrees and ±180 degrees, respectively:

float tiltDegree = pitch;
while (tiltDegree > 90 || tiltDegree < -90) {
  if (tiltDegree > 90) tiltDegree = -90 + (tiltDegree - 90);
  if (tiltDegree < -90) tiltDegree = 90 - (tiltDegree + 90);
}

float rollDegree = roll;
while (rollDegree > 180 || rollDegree < -180) {
  if (rollDegree > 180) rollDegree = -180 + (rollDegree - 180);
  if (rollDegree < -180) rollDegree = 180 - (rollDegree + 180);
}
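Because this normalization is pure arithmetic, you can verify its behavior without a device or any sensors. A standalone sketch (plain Java) extracting the pitch-folding loop:

```java
public class TiltFolding {
    // Fold an arbitrary pitch value back into the +/-90 degree range,
    // using the same loop structure as the onDraw method above.
    static float foldPitch(float pitch) {
        float tiltDegree = pitch;
        while (tiltDegree > 90 || tiltDegree < -90) {
            if (tiltDegree > 90) tiltDegree = -90 + (tiltDegree - 90);
            if (tiltDegree < -90) tiltDegree = 90 - (tiltDegree + 90);
        }
        return tiltDegree;
    }

    public static void main(String[] args) {
        System.out.println(foldPitch(45));    // 45.0  (already in range)
        System.out.println(foldPitch(120));   // -60.0 (folds past +90)
        System.out.println(foldPitch(-100));  // -80.0 (folds past -90, two passes)
    }
}
```

The roll version is identical with 180 substituted for 90; testing the folding in isolation like this is much faster than chasing rendering glitches on a device.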

11. Create paths that will fill each segment of the circle (ground and sky). The proportion of each segment should be related to the clamped pitch:

Path skyPath = new Path();
skyPath.addArc(innerBoundingBox,
               -tiltDegree,
               (180 + (2 * tiltDegree)));

12. Spin the canvas around the center in the opposite direction to the current roll, and draw the sky and ground paths using the Paints you created in step 9:

canvas.save();
canvas.rotate(-rollDegree, px, py);
canvas.drawOval(innerBoundingBox, groundPaint);
canvas.drawPath(skyPath, skyPaint);
canvas.drawPath(skyPath, markerPaint);

13. Next is the face marking. Start by calculating the start and endpoints for the horizontal horizon markings:

int markWidth = radius / 3;
int startX = center.x - markWidth;
int endX = center.x + markWidth;

14. To make the horizon values easier to read, you should ensure that the pitch scale always starts at the current value. The following code calculates the position of the interface between the ground and sky on the horizon face:

double h = innerRadius*Math.cos(Math.toRadians(90-tiltDegree));
double justTiltY = center.y - h;

15. Find the number of pixels representing each degree of tilt:

float pxPerDegree = (innerBoundingBox.height()/2)/45f;

16. Iterate over 180 degrees, centered on the current tilt value, to give a sliding scale of possible pitch:

for (int i = 90; i >= -90; i -= 10) {
  double ypos = justTiltY + i*pxPerDegree;
  // Only display the scale within the inner face.
  if ((ypos < (innerBoundingBox.top + textHeight)) ||
      (ypos > innerBoundingBox.bottom - textHeight))
    continue;
  // Draw a line and the tilt angle for each scale increment.
  canvas.drawLine(startX, (float)ypos,
                  endX, (float)ypos,
                  markerPaint);
  int displayPos = (int)(tiltDegree - i);
  String displayString = String.valueOf(displayPos);
  float stringSizeWidth = textPaint.measureText(displayString);
  canvas.drawText(displayString,
                  (int)(center.x - stringSizeWidth / 2),
                  (int)ypos + 1,
                  textPaint);
}

17. Draw a thicker line at the earth/sky interface. Change the stroke thickness of the markerPaint object before drawing the line (and then set it back to the previous value):

markerPaint.setStrokeWidth(2);
canvas.drawLine(center.x - radius / 2,
                (float)justTiltY,
                center.x + radius / 2,
                (float)justTiltY,
                markerPaint);
markerPaint.setStrokeWidth(1);

18. To make it easier to read the exact roll, you should draw an arrow and display a text string that shows the value.

Create a new Path, and use the moveTo/lineTo methods to construct an open arrow that points straight up. Draw the path and a text string that shows the current roll:

// Draw the arrow
Path rollArrow = new Path();
rollArrow.moveTo(center.x - 3, (int)innerBoundingBox.top + 14);
rollArrow.lineTo(center.x, (int)innerBoundingBox.top + 10);
rollArrow.moveTo(center.x + 3, innerBoundingBox.top + 14);
rollArrow.lineTo(center.x, innerBoundingBox.top + 10);
canvas.drawPath(rollArrow, markerPaint);

// Draw the string
String rollText = String.valueOf(rollDegree);
double rollTextWidth = textPaint.measureText(rollText);
canvas.drawText(rollText,
                (float)(center.x - rollTextWidth / 2),
                innerBoundingBox.top + textHeight + 2,
                textPaint);

19. Spin the canvas back to upright so that you can draw the rest of the face markings:

canvas.restore();
20. Draw the roll dial markings by rotating the canvas 10 degrees at a time, drawing a value every 30 degrees and a marker line otherwise. When you've completed the face, restore the canvas to its upright position:

canvas.save();
canvas.rotate(180, center.x, center.y);
for (int i = -180; i < 180; i += 10) {
  // Show a numeric value every 30 degrees
  if (i % 30 == 0) {
    String rollString = String.valueOf(i*-1);
    float rollStringWidth = textPaint.measureText(rollString);
    PointF rollStringCenter =
      new PointF(center.x - rollStringWidth / 2,
                 innerBoundingBox.top + 1 + textHeight);
    canvas.drawText(rollString,
                    rollStringCenter.x, rollStringCenter.y,
                    textPaint);
  }
  // Otherwise draw a marker line
  else {
    canvas.drawLine(center.x, (int)innerBoundingBox.top,
                    center.x, (int)innerBoundingBox.top + 5,
                    markerPaint);
  }
  canvas.rotate(10, center.x, center.y);
}
canvas.restore();

21. The final step in creating the face is drawing the heading markers around the outside edge:

canvas.save();
canvas.rotate(-1*(bearing), px, py);
double increment = 22.5;
for (double i = 0; i < 360; i += increment) {
  CompassDirection cd = CompassDirection.values()
                        [(int)(i / 22.5)];
  String headString = cd.toString();
  float headStringWidth = textPaint.measureText(headString);
  PointF headStringCenter =
    new PointF(center.x - headStringWidth / 2,
               boundingBox.top + 1 + textHeight);
  if (i % increment == 0)
    canvas.drawText(headString,
                    headStringCenter.x, headStringCenter.y,
                    textPaint);
  else
    canvas.drawLine(center.x, (int)boundingBox.top,
                    center.x, (int)boundingBox.top + 3,
                    markerPaint);
  canvas.rotate((int)increment, center.x, center.y);
}
canvas.restore();
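The values()[(int)(i / 22.5)] lookup above maps each 22.5-degree increment of the compass rose to a cardinal or intercardinal point. Since the mapping is just enum indexing, it can be verified in plain Java, independent of the View:

```java
public class DirectionDemo {
    // Same sixteen-point enum as the CompassView example.
    enum CompassDirection { N, NNE, NE, ENE,
                            E, ESE, SE, SSE,
                            S, SSW, SW, WSW,
                            W, WNW, NW, NNW }

    // Same lookup as the onDraw loop: one direction per 22.5 degrees.
    static String headingLabel(double degrees) {
        return CompassDirection.values()[(int)(degrees / 22.5)].toString();
    }

    public static void main(String[] args) {
        System.out.println(headingLabel(0));      // N
        System.out.println(headingLabel(90));     // E
        System.out.println(headingLabel(202.5));  // SSW
        System.out.println(headingLabel(337.5));  // NNW
    }
}
```

Because enum ordinals follow declaration order, the declaration sequence of the CompassDirection values is load-bearing: reordering them would silently rotate every label on the face.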

22. With the face complete, you can add some finishing touches.

Start by adding a “glass dome” over the top to give the illusion of a watch face. Using the radial gradient array you constructed earlier, create a new Shader and Paint object. Use them to draw a circle over the inner face that makes it look like it's covered in glass:

RadialGradient glassShader =
  new RadialGradient(px, py, (int)innerRadius,
                     glassGradientColors,
                     glassGradientPositions,
                     TileMode.CLAMP);
Paint glassPaint = new Paint();
glassPaint.setShader(glassShader);
canvas.drawOval(innerBoundingBox, glassPaint);

23. All that's left is to draw two more circles as clean borders for the inner and outer face boundaries. Then restore the canvas to upright, and finish the onDraw method:

  // Draw the outer ring
  canvas.drawOval(boundingBox, circlePaint);
  // Draw the inner ring
  canvas.drawOval(innerBoundingBox, circlePaint);
}

If you run the parent activity, you will see an artificial horizon, as shown at the beginning of this example in Figure 11.8.


All code snippets in this example are part of the Chapter 11 Compass project, available for download at

Hardware Acceleration

Android 3.0 (API level 11) introduced a new rendering pipeline to allow applications to benefit from hardware-accelerated 2D graphics.

The hardware-accelerated rendering pipeline supports most of the existing Canvas and Paint drawing options, with several exceptions, as described in the preceding sections. All the SDK Views, layouts, and effects support hardware acceleration, so in most cases it is safe to enable for your entire application—the primary exception being Views that you create yourself.


For a complete list of the unsupported drawing operations see the Android Developer Guide:

Managing Hardware Acceleration Use in Your Applications

You can explicitly enable or disable hardware acceleration for your application by adding an android:hardwareAccelerated attribute to the application node in your manifest:

<application android:hardwareAccelerated="true">

To enable or disable hardware acceleration for a specific Activity, use the same attribute on that Activity's manifest node:

<activity android:name=".MyActivity"
          android:hardwareAccelerated="false" /> 

It's also possible to disable hardware acceleration for a particular View within an Activity. To do so, set the View's layer type to software rendering using the setLayerType method:

view.setLayerType(View.LAYER_TYPE_SOFTWARE, null);

Checking If Hardware Acceleration Is Enabled

Not all devices support hardware acceleration, and not all 2D graphics features are supported on a hardware-accelerated Canvas. As a result, you might choose to alter the UI presented by a View based on whether hardware acceleration is currently enabled.

You can confirm hardware acceleration is active by using the isHardwareAccelerated method on either a View object or its underlying Canvas. If you are checking for hardware acceleration within your onDraw code, it's best practice to use the Canvas.isHardwareAccelerated method:

@Override
public void onDraw(Canvas canvas) {
  if (canvas.isHardwareAccelerated()) {
    // TODO Hardware accelerated drawing routine.
  }
  else {
    // TODO Unaccelerated drawing routine.
  }
}
Introducing the Surface View

Under normal circumstances, all your application's Views are drawn on the same GUI thread. This main application thread is also used for all user interaction (such as button clicks or text entry).

In Chapter 9, “Working in the Background,” you learned how to move blocking processes onto background threads. Unfortunately, you can't do this with the onDraw method of a View; modifying a GUI element from a background thread is explicitly disallowed.

When you need to update the View's UI rapidly, or the rendering code blocks the GUI thread for too long, the SurfaceView class is the answer. A Surface View wraps a Surface object rather than a Canvas object. This is important because Surfaces can be drawn on from background threads. This is particularly useful for resource-intensive operations, or where rapid updates or high frame rates are required, such as when using 3D graphics, creating games, or previewing the camera in real time.

The ability to draw independently of the GUI thread comes at the price of additional memory consumption, so although it's a useful—sometimes necessary—way to create custom Views, you should use Surface Views with caution.

When to Use a Surface View

You can use a Surface View in exactly the same way you use any View-derived class. You can apply animations and place them in layouts as you would any other View.

The Surface encapsulated by the Surface View supports drawing, using most of the standard Canvas methods described previously in this chapter, and also supports the full OpenGL ES library.

Surface Views are particularly useful for displaying dynamic 3D images, such as those featured in interactive games that provide immersive experiences. They're also the best choice for displaying real-time camera previews.

Creating Surface Views

To create a new Surface View, create a new class that extends SurfaceView and implements SurfaceHolder.Callback. The SurfaceHolder callback notifies the View when the underlying Surface is created, destroyed, or modified. It passes a reference to the SurfaceHolder object that contains a valid Surface. A typical Surface View design pattern includes a Thread-derived class that accepts a reference to the current SurfaceHolder and independently updates it.

Listing 11.8 shows a Surface View implementation for drawing using a Canvas. A new Thread-derived class is created within the Surface View control, and all UI updates are handled within this new class.


Listing 11.8: Surface View skeleton implementation

import android.content.Context;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class MySurfaceView extends SurfaceView implements
  SurfaceHolder.Callback {

  private SurfaceHolder holder;
  private MySurfaceViewThread mySurfaceViewThread;
  private boolean hasSurface;

  MySurfaceView(Context context) {
    super(context);
    init();
  }

  private void init() {
    // Create a new SurfaceHolder and assign this
    // class as its callback.
    holder = getHolder();
    holder.addCallback(this);
    hasSurface = false;
  }

  public void resume() {
    // Create and start the graphics update thread.
    if (mySurfaceViewThread == null) {
      mySurfaceViewThread = new MySurfaceViewThread();
      if (hasSurface == true)
        mySurfaceViewThread.start();
    }
  }

  public void pause() {
    // Kill the graphics update thread
    if (mySurfaceViewThread != null) {
      mySurfaceViewThread.requestExitAndWait();
      mySurfaceViewThread = null;
    }
  }

  public void surfaceCreated(SurfaceHolder holder) {
    hasSurface = true;
    if (mySurfaceViewThread != null)
      mySurfaceViewThread.start();
  }

  public void surfaceDestroyed(SurfaceHolder holder) {
    hasSurface = false;
    pause();
  }

  public void surfaceChanged(SurfaceHolder holder, int format,
                             int w, int h) {
    if (mySurfaceViewThread != null)
      mySurfaceViewThread.onWindowResize(w, h);
  }

  class MySurfaceViewThread extends Thread {
    private boolean done;

    MySurfaceViewThread() {
      super();
      done = false;
    }

    @Override
    public void run() {
      SurfaceHolder surfaceHolder = holder;
      // Repeat the drawing loop until the thread is stopped.
      while (!done) {
        // Lock the surface and return the canvas to draw onto.
        Canvas canvas = surfaceHolder.lockCanvas();
        // TODO: Draw on the canvas!
        // Unlock the canvas and render the current image.
        surfaceHolder.unlockCanvasAndPost(canvas);
      }
    }

    public void requestExitAndWait() {
      // Mark this thread as complete and combine into
      // the main application thread.
      done = true;
      try {
        join();
      } catch (InterruptedException ex) { }
    }

    public void onWindowResize(int w, int h) {
      // Deal with a change in the available surface size.
    }
  }
}

code snippet PA4AD_Ch11_SurfaceView/src/
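The thread-termination pattern used by requestExitAndWait in Listing 11.8 (set a done flag, then join the render thread) is framework-independent, so it can be exercised without a Surface. A minimal sketch, with the lock/draw/unlock step replaced by a counter and the flag made volatile so the render loop reliably observes it:

```java
public class RenderLoopDemo {
    static class LoopThread extends Thread {
        // volatile so the loop sees the flag set from another thread.
        private volatile boolean done = false;
        long frames = 0;

        @Override
        public void run() {
            while (!done) {
                frames++;  // stand-in for lockCanvas / draw / unlockCanvasAndPost
            }
        }

        void requestExitAndWait() {
            done = true;   // ask the loop to finish...
            try {
                join();    // ...and block until it actually has.
            } catch (InterruptedException ignored) { }
        }
    }

    public static void main(String[] args) {
        LoopThread t = new LoopThread();
        t.start();
        t.requestExitAndWait();
        System.out.println("stopped: " + !t.isAlive());
    }
}
```

Joining before discarding the thread reference guarantees the Surface is no longer being drawn on, which is exactly why pause() in Listing 11.8 must run before the Surface is destroyed.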

Creating 3D Views with a Surface View

Android includes full support for the OpenGL ES 3D rendering framework, including support for hardware acceleration on devices that offer it. The SurfaceView provides a Surface onto which you can render your OpenGL scenes.

OpenGL is commonly used in desktop applications to provide dynamic 3D UIs and animations. Resource-constrained devices don't have the capacity for polygon handling that's available on desktop PCs and gaming devices that feature dedicated 3D graphics processors. Within your applications, consider the load your 3D Surface View will be placing on your processor, and attempt to keep the total number of polygons being displayed, and the rate at which they're updated, as low as possible.


Creating a Doom clone for Android is well out of the scope of this book, so I'll leave it to you to test the limits of what's possible in a mobile 3D UI. Check out the GLSurfaceView API demo example included in the SDK distribution to see an example of the OpenGL ES framework in action.

Creating Interactive Controls

Anyone who has used a mobile phone is painfully aware of the challenges associated with designing intuitive UIs for mobile devices. Touch screens have been available on mobiles for many years, but it's only recently that touch-enabled UIs have been designed to be used by fingers rather than styluses. Full physical keyboards have also become common, with the compact size of the slide-out or flip-out keyboard introducing its own challenges.

As an open framework, Android is available on a wide variety of devices featuring many different permutations of input technologies, including touch screens, D-pads, trackballs, and keyboards.

The challenge for you as a developer is to create intuitive UIs that make the most of whatever input hardware is available, while introducing as few hardware dependencies as possible.

The techniques described in this section show how to listen for (and react to) user input from touch-screen taps, key presses, and trackball events using the following event handlers in Views and Activities:

· onTouchEvent—The touch-screen event handler, triggered when the touch screen is touched, released, or dragged

· onKeyDown—Called when any hardware key is pressed

· onKeyUp—Called when any hardware key is released

· onTrackballEvent—Triggered by movement on the trackball

Using the Touch Screen

Mobile touch screens have existed since the days of the Apple Newton and the Palm Pilot, although their usability has had mixed reviews. Modern mobile devices are now all about finger input—a design principle that assumes users will use their fingers rather than a specialized stylus to touch the screen and navigate your UI.

Finger-based touch makes interaction less precise and is often based more on movement than simple contact. Android's native applications make extensive use of finger-based, touch-screen UIs, including the use of dragging motions to scroll through lists, swipe between screens, or perform actions.

To create a View or Activity that uses touch-screen interaction, override the onTouchEvent handler:

@Override
public boolean onTouchEvent(MotionEvent event) {
  return super.onTouchEvent(event);
}

Return true if you have handled the screen press; otherwise, return false to pass events down through the View stack until the touch has been successfully handled.
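The consumed/not-consumed convention can be modeled without any Views: each handler in a stack gets a chance at the event until one returns true. A framework-free sketch (plain Java, with hypothetical handler objects standing in for Views):

```java
import java.util.List;

public class DispatchDemo {
    interface TouchHandler {
        boolean onTouch(String event);  // true means "consumed"
    }

    // Offer the event to each handler in order, stopping as soon
    // as one of them claims it.
    static String dispatch(List<TouchHandler> stack, String event) {
        for (TouchHandler h : stack) {
            if (h.onTouch(event)) return "consumed";
        }
        return "unhandled";
    }

    public static void main(String[] args) {
        TouchHandler ignores = e -> false;
        TouchHandler handles = e -> true;
        System.out.println(dispatch(List.of(ignores, handles), "DOWN"));
        System.out.println(dispatch(List.of(ignores, ignores), "DOWN"));
    }
}
```

This is why returning true from onTouchEvent when you haven't actually handled the press is a bug: handlers deeper in the stack never see the event.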

Processing Single and Multiple Touch Events

For each gesture, the onTouchEvent handler is fired several times: once when the user first touches the screen, multiple times while the system tracks the current finger position, and once more when the contact ends.

Android 2.0 (API level 5) introduced platform support for processing an arbitrary number of simultaneous touch events. Each touch event is allocated a separate pointer identifier that is referenced in the Motion Event parameter of the onTouchEvent handler.


Not all touch-screen hardware reports multiple simultaneous screen presses. In cases in which the hardware does not support multiple touches, the platform returns a single touch event.

Call getAction on the MotionEvent parameter to find the event type that triggered the handler. For either a single touch device, or the first touch event on a multitouch device, you can use the ACTION_UP/DOWN/MOVE/CANCEL/OUTSIDE constants to find the event type:

public boolean onTouchEvent(MotionEvent event) {
  int action = event.getAction();
  switch (action) {
    case (MotionEvent.ACTION_DOWN): 
      // Touch screen pressed
      return true;
    case (MotionEvent.ACTION_MOVE):
      // Contact has moved across screen
      return true;
    case (MotionEvent.ACTION_UP):
      // Touch screen touch ended
      return true;
    case (MotionEvent.ACTION_CANCEL):
      // Touch event cancelled
      return true;
    case (MotionEvent.ACTION_OUTSIDE):
      // Movement has occurred outside the
      // bounds of the current screen element
      return true;
    default: return super.onTouchEvent(event);
  }
}

To track touch events from multiple pointers, you need to apply the MotionEvent.ACTION_MASK and MotionEvent.ACTION_POINTER_ID_MASK constants to find the touch event (either ACTION_POINTER_DOWN or ACTION_POINTER_UP) and the pointer ID that triggered it, respectively. Call getPointerCount to find out whether this is a multiple-touch event.

public boolean onTouchEvent(MotionEvent event) {
  int action = event.getAction();
  if (event.getPointerCount() > 1) {
    int actionPointerId = action & MotionEvent.ACTION_POINTER_ID_MASK;
    int actionEvent = action & MotionEvent.ACTION_MASK;
    // Do something with the pointer ID and event.
  }
  return super.onTouchEvent(event);
}
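The masking shown above is plain bit manipulation: the low byte of the action value encodes the event type, and the next byte encodes the pointer. This standalone sketch uses local constants with the same values as MotionEvent's ACTION_MASK, ACTION_POINTER_ID_MASK, ACTION_POINTER_ID_SHIFT, and ACTION_POINTER_DOWN to show how a packed action value splits apart (note the right shift needed to turn the masked bits into a usable index):

```java
public class ActionMaskDemo {
    // Local constants mirroring android.view.MotionEvent values.
    static final int ACTION_MASK             = 0x00ff;
    static final int ACTION_POINTER_ID_MASK  = 0xff00;
    static final int ACTION_POINTER_ID_SHIFT = 8;
    static final int ACTION_POINTER_DOWN     = 5;

    public static void main(String[] args) {
        // A second finger (pointer index 1) touching down arrives
        // packed as 0x0105: event type in the low byte, pointer above it.
        int action = 0x0105;

        int event = action & ACTION_MASK;
        int pointer = (action & ACTION_POINTER_ID_MASK) >> ACTION_POINTER_ID_SHIFT;

        System.out.println(event == ACTION_POINTER_DOWN);  // true
        System.out.println(pointer);                       // 1
    }
}
```

Seeing the layout makes it obvious why a plain equality test against ACTION_POINTER_DOWN fails for secondary pointers: the pointer bits must be masked off first.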

The Motion Event also includes the coordinates of the current screen contact. You can access these coordinates using the getX and getY methods. These methods return the coordinate relative to the responding View or Activity.

In the case of multiple-touch events, each Motion Event includes the current position of each pointer. To find the position of a given pointer, pass its index into the getX or getY methods. Note that its index is not equivalent to the pointer ID. To find the index for a given pointer, use the findPointerIndex method, passing in the pointer ID whose index you need:

int xPos = -1;
int yPos = -1;
if (event.getPointerCount() > 1) {
  int actionPointerId = action & MotionEvent.ACTION_POINTER_ID_MASK;
  int actionEvent = action & MotionEvent.ACTION_MASK;
  int pointerIndex = event.findPointerIndex(actionPointerId);
  xPos = (int)event.getX(pointerIndex);
  yPos = (int)event.getY(pointerIndex);
}
else {
  // Single touch event.
  xPos = (int)event.getX();
  yPos = (int)event.getY();
}

The Motion Event parameter also includes the pressure being applied to the screen using getPressure, a method that returns a value usually between 0 (no pressure) and 1 (normal pressure).

Finally, you can also determine the normalized size of the current contact area by using the getSize method. This method returns a value between 0 and 1, where 0 suggests a precise measurement and 1 indicates a possible “fat touch” event in which the user may not have intended to press anything.


Depending on the calibration of the hardware, these methods may return values greater than 1.

Tracking Movement

Whenever the current touch contact position, pressure, or size changes, a new onTouchEvent is triggered with an ACTION_MOVE action.

The Motion Event parameter can include historical values, in addition to the fields described previously. This history represents all the movement events that have occurred between the previously handled onTouchEvent and this one, allowing Android to buffer rapid movement changes to provide fine-grained capture of movement data.

You can find the size of the history by calling getHistorySize, which returns the number of movement positions available for the current event. You can then obtain the times, pressures, sizes, and positions of each of the historical events by using a series of getHistorical* methods and passing in the position index. Note that as with the getX and getY methods described earlier, you can pass in a pointer index value to track historical touch events for multiple cursors.

int historySize = event.getHistorySize();
if (event.getPointerCount() > 1) {
  int actionPointerId = action & MotionEvent.ACTION_POINTER_ID_MASK;
  int pointerIndex = event.findPointerIndex(actionPointerId);
  for (int i = 0; i < historySize; i++) {
    long time = event.getHistoricalEventTime(i);
    float pressure = event.getHistoricalPressure(pointerIndex, i);
    float x = event.getHistoricalX(pointerIndex, i);
    float y = event.getHistoricalY(pointerIndex, i);
    float size = event.getHistoricalSize(pointerIndex, i);
    // TODO: Do something with each point
  }
} else {
  for (int i = 0; i < historySize; i++) {
    long time = event.getHistoricalEventTime(i);
    float pressure = event.getHistoricalPressure(i);
    float x = event.getHistoricalX(i);
    float y = event.getHistoricalY(i);
    float size = event.getHistoricalSize(i);
    // TODO: Do something with each point
  }
}

The normal pattern for handling movement events is to process each of the historical events first, followed by the current Motion Event values, as shown in Listing 11.9.


Listing 11.9: Handling touch screen movement events

public boolean onTouchEvent(MotionEvent event) {
  int action = event.getAction();
  switch (action) {
    case (MotionEvent.ACTION_MOVE):
      int historySize = event.getHistorySize();
      for (int i = 0; i < historySize; i++) {
        float x = event.getHistoricalX(i);
        float y = event.getHistoricalY(i);
        processMovement(x, y);
      }
      float x = event.getX();
      float y = event.getY();
      processMovement(x, y);
      return true;
  }
  return super.onTouchEvent(event);
}

private void processMovement(float _x, float _y) {
  // TODO: Do something on movement.
}

code snippet PA4AD_Ch11_Touch/src/

Using an On Touch Listener

You can listen for touch events without subclassing an existing View by attaching an OnTouchListener to any View object, using the setOnTouchListener method:

myView.setOnTouchListener(new OnTouchListener() {
  public boolean onTouch(View _view, MotionEvent _event) {
    // TODO Respond to motion events
    return false;
  }
});

Using the Device Keys, Buttons, and D-Pad

Button and key-press events for all hardware keys are handled by the onKeyDown and onKeyUp handlers of the active Activity or the focused View. This includes keyboard keys, the D-pad, and the volume, back, dial, and hang-up buttons. The only exception is the home key, which is reserved to ensure that users can never get locked within an application.

To have your View or Activity react to button presses, override the onKeyUp and onKeyDown event handlers:

public boolean onKeyDown(int _keyCode, KeyEvent _event) {
  // Perform on key pressed handling, return true if handled
  return false;
}

public boolean onKeyUp(int _keyCode, KeyEvent _event) {
  // Perform on key released handling, return true if handled
  return false;
}

The keyCode parameter contains the value of the key being pressed; compare it to the static key code values available from the KeyEvent class to perform key-specific processing.

The KeyEvent parameter also includes the isAltPressed, isShiftPressed, and isSymPressed methods to determine if the alt, shift, or symbols keys are also being held. Android 3.0 (API level 11) introduced the isCtrlPressed and isFunctionPressed methods to determine if the control or function keys are pressed. The static isModifierKey method accepts the keyCode and determines whether this key event was triggered by the user pressing one of these modifier keys.

Using the On Key Listener

To respond to key presses within existing Views in your Activities, implement an OnKeyListener, and assign it to a View using the setOnKeyListener method. Rather than implementing a separate method for key-press and key-release events, the OnKeyListener uses a single onKey event.

myView.setOnKeyListener(new OnKeyListener() {
  public boolean onKey(View v, int keyCode, KeyEvent event) {
    // TODO Process key press event, return true if handled
    return false;
  }
});

Use the keyCode parameter to find the key pressed. The KeyEvent parameter is used to determine if the key has been pressed or released, where ACTION_DOWN represents a key press and ACTION_UP signals its release.

Using the Trackball

Many mobile devices offer a trackball as a useful alternative (or addition) to the touch screen and D-pad. Trackball events are handled by overriding the onTrackballEvent method in your View or Activity.

Like touch events, trackball movement is included in a MotionEvent parameter. In this case, the MotionEvent contains the relative movement of the trackball since the last trackball event, normalized so that 1 represents the equivalent movement caused by the user pressing the D-pad key.

You can find the vertical change by using the getY method, and find the horizontal scrolling through the getX method:

public boolean onTrackballEvent(MotionEvent _event) {
  float vertical = _event.getY();
  float horizontal = _event.getX();
  // TODO Process trackball movement.
  return false;
}

Advanced Drawable Resources

Earlier in this chapter you examined a number of scalable Drawable resources, including shapes, gradients, and colors. This section introduces a number of additional XML-defined Drawables.

Composite Drawables

Use composite Drawables to combine and manipulate other Drawable resources. You can use any Drawable resource within the following composite resource definitions, including bitmaps, shapes, and colors. Similarly, you can use these new Drawables within each other and assign them to Views in the same way as all other Drawable assets.

Transformative Drawables

You can scale and rotate existing Drawable resources using the aptly named ScaleDrawable and RotateDrawable classes. These transformative Drawables are particularly useful for creating progress bars or animating Views.

· ScaleDrawable—Within the scale tag, use the scaleHeight and scaleWidth attributes to define the target height and width relative to the bounding box of the original Drawable, respectively. Use the scaleGravity attribute to control the anchor point for the scaled image.

<?xml version="1.0" encoding="utf-8"?>
<!-- Attribute values are illustrative. -->
<scale xmlns:android="http://schemas.android.com/apk/res/android"
  android:drawable="@drawable/icon"
  android:scaleHeight="50%"
  android:scaleWidth="50%"
  android:scaleGravity="center"/>

· RotateDrawable—Within the rotate tag, use fromDegrees and toDegrees to define the start and end rotation angle around a pivot point, respectively. Define the pivot using the pivotX and pivotY attributes, specifying a percentage of the Drawable's width and height, respectively, using nn% notation.

<?xml version="1.0" encoding="utf-8"?>
<!-- Attribute values are illustrative. -->
<rotate xmlns:android="http://schemas.android.com/apk/res/android"
  android:drawable="@drawable/icon"
  android:fromDegrees="0"
  android:toDegrees="90"
  android:pivotX="50%"
  android:pivotY="50%"/>

To apply the scaling and rotation at run time, use the setImageLevel method on the View object hosting the Drawable to move between the start and finish values on a scale of 0 to 10,000. This allows you to define a single Drawable that can be modified to suit particular circumstances—such as an arrow that can point in multiple directions.

When moving through levels, level 0 represents the start angle (or smallest scale result). Level 10,000 represents the end of the transformation (the finish angle or highest scale). If you do not specify the image level, it will default to 0.

// Resource IDs are illustrative.
ImageView rotatingImage 
  = (ImageView)findViewById(R.id.rotating_image);
ImageView scalingImage 
  = (ImageView)findViewById(R.id.scaling_image);
// Rotate the image 50% of the way to its final orientation.
rotatingImage.setImageLevel(5000);
// Scale the image to 50% of its final size.
scalingImage.setImageLevel(5000);
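Because levels always run from 0 to 10,000, mapping a fraction of the transformation (or an angle within the fromDegrees–toDegrees range) to a level is simple arithmetic. A small sketch, with helper names of my own:

```java
public class DrawableLevels {
    static final int MAX_LEVEL = 10000;

    // Map a 0.0-1.0 fraction of the transformation to a level, clamped to range.
    static int levelForFraction(double fraction) {
        double clamped = Math.max(0.0, Math.min(1.0, fraction));
        return (int) Math.round(clamped * MAX_LEVEL);
    }

    // Map an angle within [fromDegrees, toDegrees] to a level.
    static int levelForAngle(double angle, double fromDegrees, double toDegrees) {
        return levelForFraction((angle - fromDegrees) / (toDegrees - fromDegrees));
    }

    public static void main(String[] args) {
        System.out.println(levelForFraction(0.5));    // 5000
        System.out.println(levelForAngle(45, 0, 90)); // 5000
        System.out.println(levelForFraction(1.5));    // clamped to 10000
    }
}
```

A value such as levelForFraction(0.5) is what you would pass to setImageLevel to show the transformation halfway through.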

Layer Drawables

A LayerDrawable lets you composite several Drawable resources on top of one another. If you define an array of partially transparent Drawables, you can stack them on top of one another to create complex combinations of dynamic shapes and transformations.

Similarly, you can use Layer Drawables as the source for the transformative Drawable resources described in the preceding section, or the State List and Level List Drawables that follow.

Layer Drawables are defined via the layer-list node tag. Within that tag, create a new item subnode for each Drawable to add, using the android:drawable attribute to specify it. Each Drawable is stacked in index order, with the first item in the array at the bottom of the stack.

<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
  <item android:drawable="@drawable/bottomimage"/>
  <item android:drawable="@drawable/image2"/>
  <item android:drawable="@drawable/image3"/>
  <item android:drawable="@drawable/topimage"/>
</layer-list>

State List Drawables

A State List Drawable is a composite resource that enables you to specify a different Drawable to display based on the state of the View to which it has been assigned.

Most native Android Views use State List Drawables, including the image used on Buttons and the background used for standard List View items.

To define a State List Drawable, create an XML file containing a root selector tag. Add a series of item subnodes, each of which uses an android:state_* attribute and android:drawable attribute to assign a specific Drawable to a particular state:

<selector xmlns:android="http://schemas.android.com/apk/res/android">
  <!-- Drawable names are illustrative. -->
  <item android:state_pressed="true"
    android:drawable="@drawable/widget_bg_pressed"/>
  <item android:state_focused="true"
    android:drawable="@drawable/widget_bg_focused"/>
  <item android:state_window_focused="false"
    android:drawable="@drawable/widget_bg_normal"/>
  <item android:drawable="@drawable/widget_bg_normal"/>
</selector>

Each state attribute can be set to true or false, allowing you to specify a different Drawable for each combination of the following View states:

· android:state_pressed—Pressed or not pressed.

· android:state_focused—Has focus or does not have focus.

· android:state_hovered—Introduced in API level 11, the cursor is hovering over the view or is not hovering.

· android:state_selected—Selected or not selected.

· android:state_checkable—Can or can't be checked.

· android:state_checked—Is or isn't checked.

· android:state_enabled—Enabled or disabled.

· android:state_activated—Activated or not activated.

· android:state_window_focused—The parent window has focus or does not have focus.

When deciding which Drawable to display for a given View, Android will apply the first item in the state list that matches the current state of the object. As a result, your default value should be the last in the list.
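Because the first matching item wins, item order determines which Drawable appears. A plain-Java model of that selection rule (my own sketch, not the framework code) makes the behavior concrete:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class StateListModel {
    // Models one <item>: the states it requires, plus its drawable name.
    static class Item {
        final Map<String, Boolean> requiredStates;
        final String drawable;
        Item(Map<String, Boolean> requiredStates, String drawable) {
            this.requiredStates = requiredStates;
            this.drawable = drawable;
        }
    }

    // Return the drawable of the first item whose declared states all match
    // the view's current states -- mirroring the first-match rule.
    static String select(List<Item> items, Map<String, Boolean> viewStates) {
        for (Item item : items) {
            boolean matches = true;
            for (Map.Entry<String, Boolean> required : item.requiredStates.entrySet()) {
                if (!required.getValue().equals(
                        viewStates.getOrDefault(required.getKey(), false))) {
                    matches = false;
                    break;
                }
            }
            if (matches) return item.drawable;
        }
        return null;
    }

    public static void main(String[] args) {
        List<Item> items = Arrays.asList(
            new Item(Map.of("pressed", true), "bg_pressed"),
            new Item(Map.of("focused", true), "bg_focused"),
            new Item(Map.of(), "bg_normal")); // default: last, matches anything
        // A pressed *and* focused view gets bg_pressed: first match wins.
        System.out.println(select(items, Map.of("pressed", true, "focused", true)));
    }
}
```

Putting the unconstrained default item anywhere but last would shadow every item after it, which is exactly why the default belongs at the end of the selector.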

Level List Drawables

Using a Level List Drawable you can create an array of Drawable resources, assigning an integer index value for each layer. Use the level-list node to create a new Level List Drawable, using item subnodes to define each layer, with android:drawable / android:maxLevel attributes defining the Drawable for each layer and its corresponding index.

<level-list xmlns:android="http://schemas.android.com/apk/res/android">
  <item android:maxLevel="0"  android:drawable="@drawable/earthquake_0"/>
  <item android:maxLevel="1"  android:drawable="@drawable/earthquake_1"/>
  <item android:maxLevel="2"  android:drawable="@drawable/earthquake_2"/>
  <item android:maxLevel="4"  android:drawable="@drawable/earthquake_4"/>
  <item android:maxLevel="6"  android:drawable="@drawable/earthquake_6"/>
  <item android:maxLevel="8"  android:drawable="@drawable/earthquake_8"/>
  <item android:maxLevel="10" android:drawable="@drawable/earthquake_10"/>
</level-list>

To select which image to display in code, call setImageLevel on the View displaying the Level List Drawable resource, passing in the index of the Drawable you want to display:

// The view reference and level value are illustrative.
imageView.setImageLevel(4);


The View will display the image corresponding to the first item whose maxLevel is equal to or greater than the value specified.
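That "equal or greater" matching can be modeled in a few lines of plain Java (a sketch of the rule, not the framework implementation), assuming items are listed in ascending maxLevel order as in the XML above:

```java
public class LevelListModel {
    // maxLevel thresholds in ascending order, as declared in the level-list XML.
    // Returns the index of the first item with maxLevel >= level, or -1 if none.
    static int selectIndex(int[] maxLevels, int level) {
        for (int i = 0; i < maxLevels.length; i++) {
            if (maxLevels[i] >= level) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] maxLevels = {0, 1, 2, 4, 6, 8, 10}; // from the earthquake example
        System.out.println(selectIndex(maxLevels, 3)); // 3 (item with maxLevel 4)
        System.out.println(selectIndex(maxLevels, 2)); // 2 (item with maxLevel 2)
    }
}
```

So setting the level to 3 on the earthquake example would display earthquake_4, the first item whose maxLevel covers that value.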

Copy, Paste, and the Clipboard

Android 3.0 (API level 11) introduced support for full copy and paste operations within (and between) Android applications using the Clipboard Manager:

ClipboardManager clipboard = (ClipboardManager)getSystemService(CLIPBOARD_SERVICE);

The clipboard supports text strings, URIs (typically directed at a Content Provider item), and Intents (for copying application shortcuts). To copy an object to the clipboard, create a new ClipData object that contains a ClipDescription describing the metadata related to the copied object, and any number of ClipData.Item objects, as described in the following section. Add it to the clipboard using the setPrimaryClip method:

clipboard.setPrimaryClip(newClip);


The clipboard can contain only one Clip Data object at any time. Copying a new object replaces the previously held clipboard item. As a result, you can assume neither that your application will be the last to have copied something to the clipboard nor that it will be the only application that pastes it.

Copying Data to the Clipboard

The ClipData class includes a number of static convenience methods to simplify the creation of typical Clip Data objects. Use the newPlainText method to create a new Clip Data that includes the specified string, sets the description to the label provided, and sets the MIME type to MIMETYPE_TEXT_PLAIN:

ClipData newClip = ClipData.newPlainText("copied text","Hello, Android!");

For Content Provider-based items, use the newUri method, specifying a Content Resolver, label, and URI from which the data is to be pasted:

ClipData newClip = ClipData.newUri(getContentResolver(),"URI", myUri);

Pasting Clipboard Data

To provide a good user experience, you should enable and disable the paste option from your UI based on whether there is data copied to the clipboard. You can do this by querying the clipboard service using the hasPrimaryClip method:

if (!(clipboard.hasPrimaryClip())) {
  // TODO Disable paste UI option.
}

It's also possible to query the data type of the Clip Data object currently in the clipboard. Use the getPrimaryClipDescription method to extract the metadata for the clipboard data, and its hasMimeType method to check whether the clipboard content matches a MIME type your application supports pasting:

if (!(clipboard.getPrimaryClipDescription().hasMimeType(MIMETYPE_TEXT_PLAIN))) {
  // TODO Disable the paste UI option if the content in 
  // the clipboard is not of a supported type.
} else {
  // TODO Enable the paste UI option if the clipboard contains data 
  // of a supported type.
}

To access the data itself, use the getItemAt method, passing in the index of the item you want to retrieve:

ClipData.Item item = clipboard.getPrimaryClip().getItemAt(0);

You can extract the text, URI, or Intent using the getText, getUri, and getIntent methods, respectively:

CharSequence pasteData = item.getText();
Intent pasteIntent = item.getIntent();
Uri pasteUri = item.getUri();

It's also possible to paste the content of any clipboard item, even if your application supports only text. Using the coerceToText method, you can transform the contents of a ClipData.Item object into a string:

CharSequence pasteText = item.coerceToText(this);