Animation Performance - Web Animation using JavaScript: Develop and Design (2015)

Chapter 7. Animation Performance

Performance affects everything. Increased performance—apparent or real—drastically improves UX, which in turn boosts your company’s bottom line. Several major studies have demonstrated that latency increases on search engines result in significant decreases in revenue per user. To put it bluntly, people hate waiting.

As explained in Chapter 1, JavaScript animation performance is comparable to that of CSS animation. So, if you’re using a modern animation library such as Velocity.js, the performance of your animation engine is not your app’s bottleneck—it’s your own code. That’s what this chapter explores: techniques for coding high-performance animations across all browsers and devices.

The reality of web performance

If you’ve ever wondered why running concurrent animations slows down your UI, or why your site performs slowly on mobile devices, this chapter is for you.

Animations are a very resource-intensive process for browsers to perform, but there are many techniques that can help the browser work as efficiently as possible. We’re about to learn them.

From the perspective of UI design, there’s no shortage of articles extolling the virtues of building mobile-first, responsive websites. Conversely, from the perspective of UI performance, most of us, as developers, are unaware of what best practices are or how to follow them. Staying abreast of the web performance landscape is overwhelming and oftentimes futile; we’re held captive by browser and device quirks, byproducts of the volume of devices (desktops, smartphones, and tablets) and browsers (Chrome, Android, Firefox, Safari, Internet Explorer) that crowd the ecosystem. Considering that these platforms are continuously updated, it’s no surprise that we often throw in the towel and sideline performance concerns as much as we can. Sometimes we may even be tempted to do away with animations altogether if we’re unsure how to implement them without sacrificing performance.

We tell ourselves:

Since devices are getting faster, as users continue upgrading their hardware, my site will become progressively more performant.

Unfortunately, the global reality is the exact opposite: the smartphones that the developing world is adopting fall short of the performance of the latest iPhones in our pockets. Do you really want to forsake building products for the next few billion people coming online? The upcoming Firefox OS initiative is poised to bring capable smartphones to hundreds of millions of people, so we’re not simply waxing poetic about hypotheticals. The mobile revolution is here now.


Note

Ericsson has reported that the global smartphone subscriber count will rise from 1.9 billion to 5.9 billion in the next five years—fueled almost exclusively by the developing world.


If your gut reaction is, “It’s not my problem—my app is just for the tech-savvy middle-class in the developed world,” rest assured that your evil web developer twin is sitting two thousand miles away cackling at the thought of getting to a nascent market before you do by actually putting in the effort necessary to deliver great experiences on low-powered devices. (There’s actually an enormous conglomerate dedicated to this—search Google for “Rocket Internet.”)

There’s another nasty reality to sidelining performance concerns: we systematically make the mistake of testing our sites on devices operating under ideal loads. In reality, of course, users have multiple apps and browser tabs running concurrently. Their devices are working overtime to process a dozen tasks at any given time. Accordingly, the performance baseline established for your app probably doesn’t reflect its performance in the real world. Yikes!

But, fear not, keen developer. It’s time to explore the performance techniques at your disposal and level up your animation game.

Technique: Remove layout thrashing

Layout thrashing—the lack of synchronization in DOM manipulation—is the 800-pound gorilla in animation performance. There’s no painless solution, but there are best practices. Let’s explore.

Problem

Consider how webpage manipulation consists of setting and getting: you can set (update) or get (query) an element’s CSS properties. Likewise, you can insert new elements onto a page (a set) or you can query for a set of existing elements (a get). Gets and sets are the core browser processes that incur performance overhead (another is graphical rendering). Think of it this way: after setting new properties on an element, the browser has to calculate the resulting impacts of your changes. For example, changing the width of one element can trigger a chain reaction in which the width of the element’s parent, siblings, and children elements must also change depending on their respective CSS properties.

The UI performance reduction that occurs from alternating sets with gets is called layout thrashing. While browsers are highly optimized for page-layout recalculations, the extent of their optimizations is greatly diminished by layout thrashing. Performing a series of gets at once, for example, can easily be optimized by the browser into a single, streamlined operation because the browser can cache the page’s state after the first get, then reference that state for each subsequent get. However, repeatedly performing one get followed by one set forces the browser to do a lot of heavy lifting since its cache is continuously invalidated by the changes made by set.

This performance impact is exacerbated when layout thrashing occurs within an animation loop. Consider how an animation loop aims to achieve 60 frames per second, the threshold at which the human eye perceives buttery-smooth motion. What this means is that every tick in an animation loop must complete within 16.7ms (1 second/60 ticks ~= 16.67ms). Layout thrashing is a very easy way to cause each tick to exceed this limit. The end result, of course, is that your animation will stutter (or jank, in web animation parlance).

While some animation engines, such as Velocity.js, contain optimizations to reduce the occurrence of layout thrashing inside their own animation loops, be careful to avoid layout thrashing in your own loops, such as the code inside a setInterval() or a self-invoking setTimeout().

Solution

Avoiding layout thrashing consists of simply batching together DOM sets and DOM gets. The following code causes layout thrashing:

// Bad practice: alternating gets and sets
var currentTop = parseFloat($("#element").css("top")); // Get
$("#element").css("top", currentTop + 1); // Set
var currentLeft = parseFloat($("#element").css("left")); // Get
$("#element").css("left", currentLeft + 1); // Set

If you rewrite the code so that all queries and updates are aligned, the browser can batch the respective actions and reduce the extent to which this code causes layout thrashing:

var currentTop = parseFloat($("#element").css("top")); // Get
var currentLeft = parseFloat($("#element").css("left")); // Get
$("#element").css("top", currentTop + 1); // Set
$("#element").css("left", currentLeft + 1); // Set

The illustrated problem is commonly found in production code, particularly in situations where UI operations are performed depending on the current value of an element’s CSS property.

Say your goal is to toggle the visibility of a side menu when a button is clicked. To accomplish this, you might first check to see if the side menu has its display property set to either "none" or "block", then you’d proceed to alternate the value as appropriate. The process of checking for the display property constitutes a get, and whichever action is subsequently taken to show or hide the side menu will constitute a set.

The optimized implementation of this code would entail maintaining a variable in memory that’s updated whenever the button is clicked, and checking that variable for the side menu’s current status before toggling its visibility. In this way, the get can be skipped altogether, which helps reduce the likelihood of the code producing a set alternated with a get. Further, beyond reducing the likelihood of layout thrashing, the UI now also benefits from having one less page query. Keep in mind that each set and get is a relatively expensive browser operation; the fewer there are, the faster your UI will perform.
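To make the pattern concrete, here is a minimal sketch of the cached-state approach. The `toggleMenu()` helper and module-level flag are hypothetical names; the point is that the toggle never reads the display property back from the DOM:

```javascript
// Cached state: the menu's visibility is tracked in JavaScript, so no
// DOM get is ever needed to determine it. (Illustrative sketch; a
// jQuery-like element object is passed in.)
var menuVisible = false;

function toggleMenu($menu) {
    // Flip the cached flag instead of querying the DOM for "display"
    menuVisible = !menuVisible;
    // A single set, with no get preceding it
    $menu.css("display", menuVisible ? "block" : "none");
    return menuVisible;
}
```

In practice, you would wire this up to the button's click handler, e.g. `$("#button").on("click", function() { toggleMenu($menu); });`.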

Many tiny improvements ultimately add up to a substantial benefit, which is the underlying theme of this chapter: Follow as many performance best practices as you can, and you’ll wind up with a UI that rarely sacrifices your desired motion design goals for the sake of performance.

jQuery Element Objects

Instantiating jQuery element objects (JEOs) is the most common culprit of DOM gets. You may be wondering what a JEO is; rest assured, you’ve certainly seen this code snippet before:

$("#element").css("opacity", 1);

...or its raw JavaScript equivalent:

document.getElementById("element").style.opacity = 1;

In the case of the jQuery implementation, the value returned by $("#element") is a JEO, which is an object that wraps the raw DOM element that was queried. JEOs provide you with access to all of your beloved jQuery functions, including .css(), .animate(), and so on.

In the case of the raw JavaScript implementation, the value returned by getElementById("element") is the raw (unwrapped) DOM element. In both implementations, the browser is instructed to search through the DOM tree to find the desired element. This is an operation that, when repeated in bulk, impacts page performance.

This performance concern is exacerbated when uncached elements are used in code snippets that are repeated, such as the code contained by a loop. Consider the following example:

$elements.each(function(i, element) {
    $("body").append(element);
});

You can see how $("body") is a JEO instantiation that’s repeated for every iteration of the $.each() loop: In addition to appending the loop’s current element to the DOM (which has its own performance implications), you’re now also repeatedly forcing a DOM query. Seemingly harmless one-line operations like these add up very quickly.

The solution here is to cache the results—or, save the returned JEO’s into variables—to avoid a repeated DOM operation every time you want to call a jQuery function on an element. Hence, the code goes from looking like this:

// Bad practice: We haven't cached our JEO
$("#element").css("opacity", 1);
// ... some intermediary code...
// We instantiate the JEO again
$("#element").css("opacity", 0);

to looking like this after it’s properly optimized:

// Cache the jQuery element object, prefixing the variable with $ to indicate a JEO
var $element = $("#element");
$element.css("opacity", 1);
// ... some intermediary code...
// We re-use the cached JEO and avoid a DOM query
$element.css("opacity", 0);

Now you can reuse $element throughout your code without ever incurring a repeated DOM lookup on its behalf.
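Returning to the earlier loop, the same principle applies: hoist the $("body") lookup out of the loop so the DOM is searched once rather than once per iteration. A sketch follows, with `query` standing in for jQuery's `$` so the snippet is self-contained:

```javascript
// `query` stands in for jQuery's $; in real code you'd simply write
// var $body = $("body"); above the loop.
function appendAll(query, elements) {
    var $body = query("body"); // One DOM get, total
    elements.forEach(function(element) {
        $body.append(element); // Reuses the cached JEO on every iteration
    });
}
```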

Force-feeding

Traditionally, animation engines query the DOM at the start of an animation to determine the initial value of each CSS property being animated. Velocity offers a workaround to this page-querying event through a feature called force-feeding. It’s an alternative technique for avoiding layout thrashing. With force-feeding, you explicitly define your animations’ start values so that these upfront gets are eliminated.

Force-fed start values are passed in as the second item in an array that takes the place of a property’s value in an animation properties map. The first item in the array is the standard end value that you’re animating toward.

Consider the following two animation examples, both of which are triggered upon page load:

// Animate translateX to 500px from a start value of 0
$element.velocity({ translateX: [ 500, 0 ] });
// Animate opacity to 0 from a start value of 1
$element.velocity({ opacity: [ 0, 1 ] });

In the first example, you’re passing translateX a force-fed start value of 0 since you know that the element has yet to be translated (since the page has just loaded). You’re force-feeding in what you know (or want) the original property value to be. Further, in the second example, the element’s current opacity is 1 because that’s the default value for opacity and you haven’t yet modified the element in any way. In short, with force-feeding, you can reduce the browser’s workload in situations where you have an understanding of how elements are already styled.


Note

Force-feed animation properties only when they’re first used in an animation chain, not when they occur subsequently in the chain, since Velocity already does internal caching there:

$element
    // Optionally force-feed here
    .velocity({ translateX: [ 500, 0 ] })
    // Do not force-feed here; 500 is internally cached
    .velocity({ translateX: 1000 });


Force-feeding is an invaluable feature for high-stress situations such as animating a large number of elements at once on a desktop browser or when dealing with low-powered mobile devices for which every page interaction incurs a noticeable delay.

However, for most real-world UI animation situations, force-feeding is an unnecessary optimization that makes your code less maintainable due to having to update the force-fed start values whenever you change the elements’ values within CSS stylesheets.


Note

Refer to Chapter 8, “Animation Demo,” to walk through an application of force-feeding.


Technique: Batch DOM additions

Like reducing layout thrashing, batching DOM additions is another performance technique to help avoid unoptimized interaction with the browser.

Problem

You’re not done with gets and sets just yet! A common page set is the insertion of new DOM elements at run-time. While there are many uses for adding new elements to a page, perhaps the most popular is infinite scrolling, which consists of elements continuously animating into view at the bottom of a page while the user scrolls downward.

As you learned in the previous section, browsers have to compute the composition of all affected elements whenever a new element is added. This is a relatively slow process. Hence, when DOM insertion is performed many times per second, the page is hit with a significant performance impact. Fortunately, when processing multiple elements, browsers can optimize page set performance if all elements are inserted at the same time. Unfortunately, we as developers often unintentionally forgo this optimization by separating our DOM insertions. Consider the following example of unoptimized DOM insertion code:

// Bad practice
var $body = $("body");
var $newElements = [ "<div>Div 1</div>", "<div>Div 2</div>", "<div>Div 3</div>" ];
$.each($newElements, function(i, element) {
    $(element).appendTo($body);
    // Other arbitrary code
});

This iterates through a set of element strings that are instantiated into jQuery element objects (without a performance drawback since you’re not querying the DOM for each JEO). Each element is then inserted into the page using jQuery’s appendTo().

Here’s the problem: even if additional code exists after the appendTo() statement, the browser won’t compress these DOM sets into a single insertion operation because it can’t be certain that asynchronous code operating outside the loop won’t alter the DOM’s state between insertions. For example, imagine if you queried the DOM to find out how many elements exist on the page after each insertion:

// Bad practice
$.each($newElements, function(i, element) {
    $(element).appendTo($body);
    // Output how many children the body element has
    console.log($body.children().size());
});

The browser couldn’t possibly optimize the DOM insertions into a single operation because the code explicitly asks the browser to tell us the accurate number of elements that exist before the next loop begins. For the browser to return the correct count each time, it can’t have batched all insertions upfront.

In conclusion, when you perform DOM element insertion inside a loop, each insertion happens independently of any others, resulting in a notable performance sacrifice.

Solution

Instead of individually inserting new elements into the DOM, construct the full DOM element set in memory, then insert it via a single call to appendTo(). The optimized version of the code shown in the section above now looks like this:

// Optimized
var $body = $("body");
var $newElements = [ "<div>Div 1</div>", "<div>Div 2</div>", "<div>Div 3</div>" ];
var html = "";
$.each($newElements, function(i, element) {
    html += element;
});
$(html).appendTo($body);

This concatenates the string representation of each HTML element onto a master string that is then turned into a JEO and appended into the DOM in a single shot. In this way, the browser is given explicit instruction to insert everything at once, and it optimizes for performance accordingly.

Simple, right? As you’ll see in the remainder of this chapter, performance best practices are usually as easy as this. You simply have to train your eye to know when to use them.

Technique: Avoid affecting neighboring elements

It’s important to consider the impact of an element’s animation on neighboring elements.

Problem

When an element’s dimensions are animated, the changes often affect the positioning of nearby elements. For example, if an element between two sibling elements shrinks in width, the siblings’ absolute positions will dynamically change so they remain next to the animating element. Another example might be animating a child element nested inside a parent element that doesn’t have explicitly defined width and height properties. Accordingly, when the child is being animated, the parent will also resize itself so that it continues to fully wrap itself around the child. In effect, the child element is no longer the only element being animated—the parent’s dimensions are also being animated, and that’s even more work for the browser to perform upon each tick in an animation loop!

There are many CSS properties whose modification can result in dimensional and positional adjustments to neighboring elements, including top, right, bottom, and left; all margin and padding properties; border thickness; and the width and height dimensions.

As a performance-minded developer, you need to appreciate the impact that animating these properties can have on your page. Always ask yourself how each property you’re attempting to animate affects nearby elements. If there’s a way to rewrite your code such that you can isolate elements’ changes from one another, then consider doing so. In fact, there is an easy way to do just this—on to the solution!

Solution

The simple solution to avoid affecting neighboring elements is to animate the CSS transform properties (translateX, translateY, scaleX, scaleY, rotateZ, rotateX, and rotateY) whenever possible. The transform properties are unique in that they elevate targeted elements to isolated layers that are rendered separately from the rest of the page (with a performance boost courtesy of your GPU), so that neighboring elements aren’t affected. For example, when animating an element’s translateX to a value of "500px", the element will move 500px rightward while superimposing itself on top of whatever elements exist along its animation path. If there are no elements along its path (that is, if there are no nearby elements for it to affect), then using translateX will have the same net effect on the look of your page as if you had animated using the much slower left property.

Hence, whenever possible, an animation that once looked like this:

// Move the element 500px from the left
$element.velocity({ left: "500px" });

should be refactored into this:

// Faster: Use translateX
$element.velocity({ translateX: "500px" });

Similarly, if you can substitute translateY for top, do so:

// Slower: Use top
$element.velocity({ top: "100px" });
// Faster: Use translateY
$element.velocity({ translateY: "100px" });


Note

Sometimes you actually intend to use left or top so that neighboring elements’ positions are changed. In all other cases, get into the habit of using the transform properties. The performance impact is significant.



Consider Opacity Over Color

opacity is another CSS property that receives a GPU rendering boost since it doesn’t affect the positioning of elements. So, if there are elements on your page for which you’re currently animating, say, color when the user hovers over them, consider animating opacity instead. If the net effect looks almost as good as the color animation, then consider sticking with it—you’ve just boosted the UI’s performance without compromising its look.


As a performance-minded developer, you’re no longer allowed to arbitrarily select animation properties. You must now consider the impact of each of your property choices.


Note

Refer to CSSTriggers.com for a breakdown of how CSS properties affect browser performance.


Technique: Reduce concurrent load

Browsers have bottlenecks. Find out what they are and stay below them.

Problem

When a page first loads, the browser processes HTML, CSS, JavaScript, and images as quickly as possible. It should come as no surprise that animations occurring during this time tend to be laggy—they’re fighting for the browser’s limited resources. So, despite the fact that a page’s loading sequence is often a great time to flaunt all your motion design skills, it’s best to restrain yourself if you want to avoid giving users the first impression that your site is laggy.

A similar concurrency bottleneck arises when many animations occur at once on a page—regardless of where they take place in the page’s lifecycle. In these situations, browsers can choke under the stress of processing many styling changes at once, and stuttering can occur.

Fortunately, there are some clever techniques for reducing concurrent animation load.

Solution

There are two approaches for addressing the concurrency issue: staggering and breaking up animations into sequences.

Stagger

One way to reduce concurrent animation load is to make use of Velocity’s UI pack’s stagger feature, which delays the start times of successive animations in a set of elements by a specified duration. For example, to animate every element in a set toward an opacity value of 1 with successive 300ms delays between start times, your code might look like this:

$elements.velocity({ opacity: 1 }, { stagger: 300 });

The elements are no longer animating in perfect synchronization; instead, at the very start of the entire animation sequence, only the first element is animating. Later, at the very end of the entire sequence, only the last element is animating. You’re effectively spreading out the animation sequence’s total workload so that the browser is always performing less work at one time than it would have had it been animating every element simultaneously. What’s more, implementing staggering into your motion design is often a good aesthetic choice. (Chapter 3, “Motion Design Theory,” further explores the merits of staggering.)
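If you aren't using the UI pack, the same effect can be approximated with Velocity's standard delay option by giving each element a delay equal to its index multiplied by the stagger interval. This is an illustrative equivalent, not the UI pack's actual internals:

```javascript
// Each element waits (index * 300)ms before starting, approximating
// { stagger: 300 }. With jQuery and Velocity loaded, this would be:
// $elements.each(function(i, element) {
//     $(element).velocity({ opacity: 1 }, { delay: i * 300 });
// });

// The delay schedule itself is just an arithmetic progression:
function staggerDelays(count, interval) {
    var delays = [];
    for (var i = 0; i < count; i++) {
        delays.push(i * interval);
    }
    return delays;
}
```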

Multi-animation sequences

There’s one more clever way to reduce concurrent load: break up property animations into multi-animation sequences. Take, for example, the case of animating an element’s opacity value. This is typically a relatively low-stress operation. But, if you were to simultaneously animate the element’s width and box-shadow properties, you’d be giving the browser appreciably more work to perform: more pixels will be affected, and more computation would be required.

Hence, an animation that looks like this:

$images.velocity({ opacity: 1, boxShadowBlur: "50px" });

might be refactored into this:

$images
.velocity({ opacity: 1 })
.velocity({ boxShadowBlur: "50px" });

The browser has less concurrent work to do since these individual property animations occur one after another. Note that the creative tradeoff being made here is that we’ve opted to prolong the total animation sequence duration, which may or may not be desirable for your particular use case.

Since an optimization such as this entails changing the intention of your motion design, this is not a technique that should always be employed. Consider it a last resort. If you need to squeeze additional performance out of low-powered devices, then this technique may be suitable. Otherwise, don’t pre-optimize the code on your site using techniques like this, or you’ll end up with unnecessarily bloated and inexpressive code.

Technique: Don’t continuously react to scroll and resize events

Be mindful of how often your code is being run. A fast snippet of code being run 1,000 times per second may—in aggregate—no longer be very fast.

Problem

Browsers’ scroll and resize events are two event types that are triggered at very high rates: when a user resizes or scrolls the browser window, the browser fires the callback functions associated with these events many times per second. Hence, if you’ve registered callbacks that interact with the DOM—or worse, contain layout thrashing—they can cause tremendously high browser load during times of scrolling and resizing. Consider the following code:

// Perform an action when the browser window is scrolled
$(window).scroll(function() {
    // Anything in here is fired multiple times per second while the user scrolls
});
// Perform an action when the browser window is resized
$(window).resize(function() {
    // Anything in here is fired multiple times per second while the user resizes
});

Recognize that the functions above aren’t simply called once when their respective events start; instead, they are called throughout the duration of the user’s respective interaction with the page.

Solution

The solution to this problem is to debounce event handlers. Debouncing is the process of defining an interval during which an event handler callback will be called only once. For example, say you defined a debounce interval of 250ms and the user scrolled the page for a total duration of 1000ms. The debounced event handler code would accordingly fire only four times (1000ms/250ms).

The code for a debounce implementation is beyond the scope of this book. Fortunately, many libraries exist exclusively to solve this problem. Visit davidwalsh.name/javascript-debounce-function for one example. Further, the tremendously popular Underscore.js (UnderscoreJS.org), a JavaScript library akin to jQuery that provides helper functions for making coding easier, includes a debounce function that you can easily reuse across your event handlers.
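That said, a trailing-edge debounce is short enough to sketch here. This is an illustrative implementation in the spirit of Underscore's, not its actual source:

```javascript
// Returns a wrapped version of fn that only runs after calls have
// stopped arriving for `wait` milliseconds.
function debounce(fn, wait) {
    var timeout;
    return function() {
        var context = this;
        var args = arguments;
        clearTimeout(timeout); // Each new call resets the countdown
        timeout = setTimeout(function() {
            fn.apply(context, args);
        }, wait);
    };
}

// Usage (assumes jQuery):
// $(window).scroll(debounce(function() {
//     // Heavy DOM work here now runs once per pause in scrolling
// }, 250));
```

Note that this trailing-edge variant fires once after activity pauses; a throttle variant, which guarantees execution every `wait` milliseconds during continuous activity (matching the four-calls-per-second arithmetic above), is a common alternative that Underscore.js also provides.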


Note

As of this book’s writing, the latest version of Chrome automatically debounces scroll events.


Technique: Reduce image rendering

Not all elements are rendered equally. Browsers have to work overtime when displaying certain elements. Let’s look at which those are.

Problem

Videos and images are multimedia element types that browsers have to work extra hard to render. Whereas the dimensional properties of non-multimedia HTML elements can be computed with ease, multimedia elements contain thousands of pixel-by-pixel data points that are computationally expensive for browsers to resize, reposition, and recomposite. Animating these elements will always be less than optimal compared to animating standard HTML elements such as div, p, and table.

Further, given that scrolling a page is nearly equivalent to animating a page (think of scrolling as animating the page’s top property), multimedia elements can also drastically reduce scrolling performance on CPU-constrained mobile devices.

Solution

Unfortunately, there’s no way to “refactor” multimedia content into faster element types, other than turning simple, shape-based images into SVG elements wherever possible. Accordingly, the only available performance optimization is reducing the total number of multimedia elements that are displayed on the page at once and animated at once. Note that the words at once stress a reality of browser rendering: browsers only render what’s visible. The portions of your page (including the portions that contain additional images) that aren’t visible do not get rendered, and do not impose additional stress on browser processes.

So, there are two best practices to follow. First, if you’re ever on the fence about adding an additional image to your page, opt to not include it. The fewer images there are to render, the better UI performance will be. (Not to mention the positive impact fewer images will have on your page’s network load time.)

Second, if your UI is loading many images into view at once (say, eight or more, depending on your device’s hardware capabilities), consider not animating the images at all, and instead crudely toggling the visibility of each image from invisible to visible. To help counteract how inelegant this can look, consider staggering visibility toggling so that the images load into view one after another instead of simultaneously. This will help guide the user’s eye across the loading sequence, and will generally deliver more refined motion design.
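A sketch of this staggered reveal follows, with the reveal action passed in as a callback so the timing logic stays generic. The 150ms interval is an arbitrary choice:

```javascript
// Reveals each image after (index * interval)ms, guiding the user's
// eye across the sequence instead of toggling everything at once.
function staggerReveal(images, interval, reveal) {
    images.forEach(function(image, i) {
        setTimeout(function() {
            reveal(image);
        }, i * interval);
    });
}

// Usage (assumes jQuery):
// staggerReveal($images.toArray(), 150, function(image) {
//     $(image).css("visibility", "visible");
// });
```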


Note

Refer to Chapter 3, “Motion Design Theory,” to learn more about animation design best practices.


Sneaky images

You’re not done yet. There’s more to this section than meets the eye, as we haven’t fully explored the ways in which images can materialize on a page. The obvious culprit is the img element, but there are two other ways that images can sneak onto your pages.

CSS gradients

These are actually a type of image. Instead of being pre-produced by a photo editor, they are produced at run-time according to CSS styling definitions, for example, using a linear-gradient() as the background-image value on an element. The solution here is to opt for solid-color backgrounds instead of gradients whenever possible. Browsers can easily optimize the rendering of solid chunks of color, but, as with images, they have to work overtime to render gradients, which differ in color from pixel to pixel.

Shadow properties

The evil twin siblings of gradients are the box-shadow and text-shadow CSS properties. These are rendered similarly to gradients, but instead of stylizing background-color, they effectively stylize border-color. What’s worse, they have opacity falloffs that require browsers to perform extra compositing work because the semitransparent portions of the gradients must be rendered against the elements underneath the animating element. The solution here is similar to the previous one: if your UI looks almost as good when you remove these CSS properties from your stylesheet, pat yourself on the back and never look back. Your website’s performance will thank you.

These recommendations are simply that: recommendations. They are not performance best practices since they sacrifice your design intentions for increased performance. Consider them only as last resorts when your site’s performance is poor and you’ve exhausted all other options.

Technique: Degrade animations on older browsers

You don’t have to neglect supporting underperforming browsers and devices. If you embrace a performance-minded workflow from day one, you can simply provide them with a degraded—but completely functional—experience.

Problem

Internet Explorer 8—a slow, outdated browser—is dying in popularity. But Internet Explorer 9, its successor, is still widely used outside of the Americas. Further, older Android smartphones running Android 2.3.x and below, which are slow relative to the latest-generation Android and iOS devices, also remain tremendously popular. Out of every ten visitors to your site, expect up to three of them to fall into one of these two groups (depending on the type of users your app attracts). Accordingly, if your site is rich in animation and other UI interactions, assume it will perform especially poorly for up to a third of your users.

Solution

There are two approaches to addressing the performance issue raised by weaker devices: either broadly reduce the occurrence of animations across your entire site, or reduce them exclusively for the weaker devices. The former is ultimately a product decision, but the latter is a simple technical decision that is easily implemented if you’re using the global animation multiplier technique (or Velocity’s equivalent mock feature) explained in Chapter 4, “Animation Workflow.” The global multiplier technique lets you dynamically alter the timing of animations across your entire site by setting a single variable. The trick then—whenever a weak browser is detected—is to set the multiplier to 0 (or set $.Velocity.mock to true) so that all of a page’s animations complete within a single animation tick (less than 16ms):

// Cause all animations to complete immediately
$.Velocity.mock = true;

The result of this technique is that weaker devices experience UI animations that degrade so that instant style changes replace your animated transition. The benefits are significant: your UI will perform noticeably more smoothly without resource-intensive animations occurring on your page. While this technique is undoubtedly destructive (it compromises your motion design intentions), an improvement in usability is always worth a reduction in elegance. After all, users visit your app to accomplish specific goals, not to admire how clever your UI work is. Never let animations get in the way of user intentions.
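The weak-browser detection itself can be sketched as a small helper. Note that this is only an illustration, not part of Velocity’s API: `shouldMockAnimations` is a hypothetical function, and the `documentMode` and user-agent heuristics shown here are examples you’d swap for whatever detection your app already relies on.

```javascript
// Hypothetical helper: decide whether to degrade animations. IE exposes
// document.documentMode (a value of 9 or lower indicates IE9 or older),
// and Android 2.x user agents contain the substring "Android 2.".
// Both heuristics are illustrative, not exhaustive.
function shouldMockAnimations(userAgent, documentMode) {
    var isOldIE = typeof documentMode === "number" && documentMode <= 9;
    var isOldAndroid = /Android 2\./.test(userAgent);
    return isOldIE || isOldAndroid;
}

// In the browser, flip Velocity's mock switch when a weak browser is found:
// if (shouldMockAnimations(navigator.userAgent, document.documentMode)) {
//     $.Velocity.mock = true;
// }
```

Factoring the check into a plain function keeps it easy to test outside the browser, and keeps the degradation decision in one place.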

If you’re still irked by the notion of stripping animations from your UI, keep in mind that users on weaker devices are accustomed to websites behaving slowly for them. So, if your site bucks the trend in a constructive way, they’ll be especially delighted by it and will be more likely to continue using it.

Find your performance threshold early on

Continuing from the previous technique’s theme, it’s worth stressing that the advice in this chapter is especially relevant for mobile devices, many of which are slow relative to desktop computers. Unfortunately, we, as developers, often fail to consider this in our workflows: we routinely create websites within the pristine operating environments of our high-end desktops, which are likely running the latest-generation hardware and software available. This type of environment is divorced from the real-world environments of users, who are often not only using outdated hardware and software, but tend to have many apps and browser tabs running simultaneously. In other words, most of us work in development environments that are unrepresentatively high-performance! The side effect of this oversight is that your app may actually be noticeably laggy for a significant portion of your users. By the time you ask them what frustrates them, they may have already lost interest in using your app.

The correct approach for a performance-minded developer is to determine the performance threshold early on in the development cycle. While developing your app, check its performance frequently on reference devices, which might include a last-generation mobile device plus a virtual machine running Internet Explorer 9. If you set a performance goal early on of being performant on your reference devices, then you can sleep soundly knowing that all newer devices will deliver even better performance for your users.
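One quick way to check performance on a reference device is to eyeball a rough frames-per-second readout while your animations run. Below is a minimal sketch: the sampling logic lives in a plain helper (`makeFpsSampler` is a hypothetical name, not a library function), and the commented-out wiring shows how it would hook into requestAnimationFrame in the browser.

```javascript
// Hypothetical FPS sampler: counts frames over a time window (in ms) and
// returns the frames-per-second figure each time a window completes, or
// null while a window is still accumulating.
function makeFpsSampler(windowMs) {
    var frames = 0;
    var windowStart = null;
    return function sample(now) {
        if (windowStart === null) {
            windowStart = now;
        }
        frames++;
        if (now - windowStart >= windowMs) {
            var fps = Math.round(frames * 1000 / (now - windowStart));
            frames = 0;
            windowStart = now;
            return fps; // a completed sample
        }
        return null; // still accumulating
    };
}

// Browser wiring (log a reading roughly once per second):
// var sample = makeFpsSampler(1000);
// (function tick(now) {
//     var fps = sample(now);
//     if (fps !== null) console.log("~" + fps + " fps");
//     requestAnimationFrame(tick);
// })(performance.now());
```

A sustained reading near 60 fps on your weakest reference device suggests you’re within budget; readings that dip well below it during animations are your cue to degrade or simplify.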


Tip

If you’re a Mac user, visit Microsoft’s Modern.ie website for information on how to run free virtual copies of old Internet Explorer versions.


If you find that a reference device is too weak to power the motion design you insist on having in your app, follow the advice from the previous technique: gracefully degrade animations on that reference device, and choose a faster device as your new (non-degraded) reference.

For each testing device, remember to open several apps and tabs at once so you simulate users’ operating environments. Never test in a vacuum in which the only app running is your own.

Keep in mind that remote browser testing (through services such as BrowserStack.com and SauceLabs.com) is not the same as live reference device testing. Remote testing services are appropriate for testing for bugs and UI responsiveness—not for animation performance. After all, the test devices running in the cloud aren’t using real hardware—they’re emulated versions of devices. Consequently, their performance is typically different from that of their real-world counterparts. Further, the lag time between what occurs on the virtual machine and what’s displayed in your browser window is too significant to get a meaningful gauge of UI animation performance.

In short, you’ll need to go out and buy real devices for performance testing. Even if you’re a cash-strapped developer, don’t skimp on this. The few hundred dollars you spend on test devices will be offset by the increased recurring revenue you’ll generate from happier users engaging more frequently with your buttery-smooth app.

If you wind up with a handful of reference devices, also consider purchasing Device Lab, a versatile stand that props up all of your mobile devices on a single surface so you can easily eyeball the screens during testing. As a bonus, it includes a nifty app that lets you control all the browsers across your devices at once so you don’t have to manually refresh each browser tab.


Note

Visit Vanamco.com to purchase and download Device Lab.




Visit eBay to Buy Old Devices for Cheap

Purchasing the most popular Android and iOS devices from each of these products’ major release cycles will give you a broad cross-section of the hardware and software environments that your users have. Here’s my recommended setup (as of early 2015):

- iPhone 4 or iPad 2 running iOS 7

- iPhone 5s (or newer) running the latest version of iOS

- Motorola Droid X running Android 2.3.x

- Samsung Galaxy SII running Android 4.1.x

- Samsung Galaxy S5 (or newer) running the latest version of Android

You’re welcome to substitute any of the Android devices with others of similar performance. What’s important is that you use one device from each major Android release cycle (2.3.x, 4.1.x, and so on) so that you have a representative sample of web browser performance from each one. Refer to http://developer.android.com/about/dashboards for a distribution of the most popular Android versions.


Wrapping up

Performance affects everything. From how many devices can run your app, to the quality of the user experience, to the perception of your app’s technical competency, performance is a major tenet of professional web design. It’s not a “nice-to-have”; it’s a fundamental building block. Don’t relegate performance to a simple optimization to be made in hindsight.