HTML5, Usability, and User-Centered Design - Pro HTML5 Accessibility: Building an Inclusive Web (2012)


CHAPTER 9

HTML5, Usability, and User-Centered Design

While a large part of this book has focused on the technical aspects of the HTML5 language, there are other aspects you should consider to ensure that your project is usable by the widest possible audience. This chapter covers areas you'll find useful, such as iterative development methods, participatory design, using focus groups and surveys for research, expert evaluation, and the use of personas.

First, if accessibility is mostly about people with disabilities, what is usability?

What Is Usability?

Usability is a subset of human/computer interaction (HCI) that looks at the quality of the user experience and attempts to understand how to improve it. Usability as a discipline attempts to determine how successfully a user can complete a task and how satisfying a device or interface might be to use. This can be determined for any diverse group that you can think of, such as vision-impaired or blind people and older people, but it also can be determined for users without disabilities.

Note The area of UX, or user experience, is expanding these days. This is largely due to the power of consumer choice: potential customers can easily access alternatives to your service if they aren't 100 percent happy with yours.

There are different definitions of usability, and we'll look at several because they are nuanced and have individual implications. One popular definition of usability is the following:

“A measure of how easy it is for a user to complete a task. In the context of Web pages this concerns how easy it is for a user to find the information they require from a given Web site.”1


This definition is focused on the user being able to complete a specific task, which is obviously very important. However, is it the full picture?

Another interesting definition is this one:

“The ease with which a system can be learnt or used. A figure of merit or qualitative judgment of ease of use or learning. Some methods of assessing usability may also express usability as a quantitative index.” 2

I like this one, and I mention it here because it talks about how easily the system can be learned by the user. For me, a good rule of thumb in user-interface design is this: if you have to provide instructions on how to use it, it's already too complicated!

The user of your web sites and applications ideally should intuitively get it. This is, of course, not possible in many domains. The user won't just get how to fly a plane, for example. In fact, there are stories of disasters that befell pilots and surgeons, for example (or more explicitly the passengers and the patients) because of complexity in a UI. No one wants to have to hunt for and decipher a manual when the plane is going down in order to figure out what the obscure error message flashing on the dashboard means.

Tip For a really interesting read that documents these kinds of techno horror stories, as well as the wider need for more “human” technologies, have a look at the book Leonardo's Laptop: Human Needs and the New Computing Technologies by Ben Shneiderman. It shows that user friendly is a far from wooly notion.

This third definition is one of the most interesting because it goes beyond dryly looking at the tasks the user needs to do and mentions the level of satisfaction the user will feel when using a web interface.

“The effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in a particular environment. Synonymous with ‘ease of use'.” 3

This final definition takes the usability ideal to a higher level by looking at the quality of the user experience and not merely taking a mechanical, task-based approach. This is where user testing is very useful because it is a fantastic way of assessing the quality of the user experience.




Note I'll say more on user testing later. User testing is a great tool for meeting end users and getting really useful feedback from them on the quality of your user interface. Nothing says “success” like hearing one of your users happily say, “I found using that web site really easy, and I could do what I wanted. I'll be back.” Also, nothing says “fail” quite as loudly as the user threatening to throw the monitor out the window in frustration. User testing is the key to this kind of feedback.

Donald Norman, one of the fathers of usability, has this to say on his web site about the best ways to get user feedback about the usability of a system:

“I caution that logical analysis is not a good way to predict people's behavior (nor are focus groups or surveys): observation is the key […] For both products and services I'm a champion of beauty, pleasure and fun, coupled with behavioral and functional effectiveness.”4

Usability is about looking at how usable, intuitive, user friendly, and simply satisfying an interface is to use. As a discipline, it also examines the psychology of user interaction, or cognitive ergonomics. It is an attempt to understand how users perceive the instructions they receive from looking at or interacting with a user interface or device.

Note Although accessibility and usability are two different fields, there is a very strong relationship between the two. The following techniques are often used in the preparatory phase of a project and, if sufficient care is taken to use them well, these requirements-gathering and prototyping phases can help authors avoid very serious UI design mistakes later in the project.

I also highly recommend the books of Donald Norman, in particular the excellent (fun, short, and easy to read) The Design of Everyday Things, with its distinctive teapot on the cover. He talks about the psychology of design and lays down some ground rules for good interaction design. For example, a couple of simple but profound ideas he puts forth are called the “Gulf of Evaluation” and the “Gulf of Execution.” I'm sure you have come across both online without knowing it, and becoming aware of them will help you design better HTML5 interfaces.

According to his book, the “Gulf of Evaluation” is small “when the system provides information about its state in a form that is easy to get, is easy to interpret, and matches the way the person thinks of the system.” This gulf describes situations where a web site doesn't tell you, in a way that is intuitive and easy to understand, either what it is doing or what state it is in—or it doesn't even give you any feedback at all!
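
In an HTML5 interface, one way to narrow the Gulf of Evaluation is to tell the user what the system is doing, in a form that assistive technology can also perceive. Here is a minimal sketch; the element IDs, wording, and save flow are invented for illustration, not taken from the book:

```html
<form id="signup">
  <label for="email">Email address</label>
  <input type="email" id="email" name="email" required>
  <button type="submit">Subscribe</button>
  <!-- role="status" / aria-live="polite" asks screen readers to announce
       content changes here without interrupting the user. -->
  <p id="status" role="status" aria-live="polite"></p>
</form>
<script>
  document.getElementById("signup").addEventListener("submit", function (e) {
    e.preventDefault();
    var status = document.getElementById("status");
    // Tell the user what state the system is in, as it happens.
    status.textContent = "Saving your details…";
    // ...and again when the (hypothetical) request completes:
    status.textContent = "Done. Check your inbox to confirm.";
  });
</script>
```

The point is simply that the page reports its state in plain language, visibly and programmatically, rather than leaving the user to guess whether anything happened.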



The “Gulf of Execution” outlines the difference between what your users think they need to do, to perform some task such as booking tickets or finding information, and what the system requires them to do.

I come across them both all the time! I often spend time shouting at my computer, “Why did they do it that way? It doesn't make sense! It's so obvious that all they needed to do was [insert better design by Josh here],” and then I try to calm down. See if you can spot either gulf online, and be aware that when you design a UI you need to translate the idea of how you think the system should work that's in your head to a mental model your users will understand. When your users get it easily, that's good usability and good interaction design.

Note While we are on the subject of good books, you should also read Don't Make Me Think by Steve Krug and About Face 3: The Essentials of Interaction Design by Alan Cooper, Robert Reimann, and Dave Cronin.

Universal Design

One of the most exciting developments in designing for inclusion in recent times has been universal design.

Universal design can be defined as follows:

“The design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.”5

The “7 Principles of Universal Design,” which are illustrated in Figures 9-1 through 9-7, were developed in 1997 by a working group of architects, product designers, engineers, and environmental design researchers, led by the late Ronald Mace at North Carolina State University. 6 The purpose of the principles is to guide the design of environments, products, and communications. According to the Center for Universal Design at NCSU, the principles “might be applied to evaluate existing designs, guide the design process and educate both designers and consumers about the characteristics of more usable products and environments.”



6 Copyright © 1997 NC State University, The Center for Universal Design, College of Design

Note You might see some similarities between these principles and WCAG 2.0, and there are some. See which ones you can spot! The universal design (UD) guidelines are designed to span several domains, from product design to information and communication technologies (ICT) and the built environment. They are useful food for thought, even for your HTML5 projects.


Figure 9-1. Principle 1: Equitable use

Here are the guidelines for this principle:

· 1a. Provide the same means of use for all users: make it identical whenever possible and equivalent when not.

· 1b. Avoid segregating or stigmatizing any users.

· 1c. Provisions for privacy, security, and safety should be equally available to all users.

· 1d. Make the design appealing to all users.


Figure 9-2. Principle 2: Flexibility in use

Here are the guidelines for this principle:

· 2a. Provide choice in methods of use.

· 2b. Accommodate right-handed and left-handed access and use.

· 2c. Facilitate the user's accuracy and precision.

· 2d. Provide adaptability to the user's pace.


Figure 9-3. Principle 3: Simple and intuitive to use

Here are the guidelines for this principle:

· 3a. Eliminate unnecessary complexity.

· 3b. Be consistent with user expectations and intuition.

· 3c. Accommodate a wide range of literacy and language skills.

· 3d. Arrange information consistent with its importance.

· 3e. Provide effective prompting and feedback during the task and after task completion.


Figure 9-4. Principle 4: Perceptible information

Here are the guidelines for this principle:

· 4a. Use different modes (pictorial, verbal, tactile) for redundant presentation of essential information.

· 4b. Provide adequate contrast between essential information and its surroundings.

· 4c. Maximize legibility of essential information.

· 4d. Differentiate elements in ways that can be described (that is, make it easy to give instructions or directions).

· 4e. Provide compatibility with a variety of techniques or devices used by people with sensory limitations.


Figure 9-5. Principle 5: Tolerance for error

Here are the guidelines for this principle:

· 5a. Arrange elements to minimize hazards and errors: put the most used elements in the most accessible positions, and eliminate, isolate, or shield hazardous elements.

· 5b. Provide warnings of hazards and errors.

· 5c. Provide fail-safe features.

· 5d. Discourage unconscious action in tasks that require vigilance.
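
HTML5 gives you some of this tolerance for error for free: the built-in constraint validation attributes stop a form from being submitted with bad data and prompt the user to correct it. A small sketch follows; the field names and ranges are invented for illustration:

```html
<form>
  <label for="age">Age (18–120)</label>
  <!-- The browser refuses to submit and explains the problem
       if the value is missing or out of range. -->
  <input type="number" id="age" name="age" min="18" max="120" required>

  <label for="mail">Email</label>
  <!-- type="email" gives a format check with no scripting at all. -->
  <input type="email" id="mail" name="mail" required>

  <button type="submit">Book tickets</button>
</form>
```

Declarative checks like these catch slips before they become errors, which is exactly what guideline 5b (provide warnings of hazards and errors) is asking for.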


Figure 9-6. Principle 6: Low physical effort

Here are the guidelines for this principle:

· 6a. Allow user to maintain a neutral body position.

· 6b. Use reasonable operating forces.

· 6c. Minimize repetitive actions.

· 6d. Minimize sustained physical effort.


Figure 9-7. Principle 7: Size and space for approach and use

Here are the guidelines for this principle:

· 7a. Provide a clear line of sight to important elements for any seated or standing user.

· 7b. Make reach to all components comfortable for any seated or standing user.

· 7c. Accommodate variations in hand and grip size.

· 7d. Provide adequate space for the use of assistive devices or personal assistance.

Tip The HTML5 spec does make some claim to support “universal design.” How would you do this in your projects? Think about it when you are building an HTML5 interface, and when you have finished a prototype you are happy with, go back to the UD principles and see if your web site adheres to them. It can be fun, as well as very instructive, to do this.

Participatory Design

Jeffrey Rubin (an influential usability expert) outlines participatory design as a technique where one or more end users sit on the design team itself. The user is put at the heart of the process by having his or her knowledge, skill set, and emotional responses tapped by the designers.

In practice, you might not always have the resources (both financially and in terms of time) to do this, but if you are fortunate enough to work in a company with deep pockets, it could be worthwhile to explore this option. Rubin warns, however, that the user can get “consumed” by the process because teams that work closely together can end up in a sort of bubble—erroneously thinking that once everyone in the team can happily use or understand the system, it's ready for prime time!

Focus Group Research

Focus group research aims to evaluate the project's basic concepts at an early stage in the development process. It can be used to identify and confirm the characteristics of the user, and also to validate the projected effectiveness of the product. It usually involves multiple participants.

The concepts to be explored can be presented to the group in many forms, such as paper-and-pencil drawings, storyboards, PowerPoint presentations, 3D prototypes and models, and so on. The idea is to identify how acceptable the concepts are to the group participants and in what ways they can be improved.

Focus groups are certainly a very useful and worthwhile way of figuring out what users want from the project. They can also be an effective way of doing quick prototype testing of UI components or getting feedback on wireframes or other aspects of your HTML5 design.


Surveys

Surveys are often used to understand users' preferences about an existing or potential product. A survey is in some ways a more superficial way of collecting data than a focus group, but it is particularly useful for drawing a picture of the views of a larger population. Surveys can be used at any time but are often used at the beginning of a product development cycle.

Thorough survey design is very important: a great deal of thought must go into it to ensure that questions are clear and unambiguous, so that you get the best use from the returned data. It might not be practical for you to do a survey, but it is worth mentioning as a method of gaining information from end users. If you work on a university campus or in a large organization with an intranet, a survey is something you could do internally.

The Cognitive Walkthrough

The cognitive walkthrough is a common technique for evaluating the design of a user interface, with special attention paid to how well the interface supports exploratory learning—that is, successful first-time use without formal training. This evaluation can be performed by the system designer in the early stages of design—for example, before user testing is possible.

Early versions of the walkthrough method relied on a detailed series of questions to be answered on paper or electronic forms. These could take the form of “paper and pencil evaluations,” which are a useful way of learning user preferences for certain attributes of a user interface, such as the organization and layout of menus or other controls.

Paper and pencil evaluations, which are literally drawings of your interface, are useful because designers can find out critical information very quickly. They are also inexpensive and allow you to get real feedback about how intuitive a user interface might be before any development work has taken place. This technique can be used as often as necessary, in conjunction with, or instead of, prototyping software such as Serena Prototype Composer or Axure.

Expert Evaluations

Expert evaluation refers to bringing in a usability specialist who has had little to do with the project to assess its usability. The expert applies usability principles to assess the quality of the system and identify any potential problems it might have. This type of evaluation can be performed in conjunction with an accessibility audit of the system to see how usable it would be for people with disabilities using assistive technology (AT).

Expert Accessibility Audits

An accessibility audit tests a web site or application for technical accessibility problems against guidelines such as WCAG 2.0. This is a really powerful way of learning, from an independent party, what is right or wrong with the system. It generally takes the form of a list of recommendations framed against each of the WCAG 2.0 checkpoints.
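
Parts of such an audit can be automated. As a rough illustration of the kind of check an auditor runs, the sketch below flags images that lack a text alternative (WCAG 2.0 success criterion 1.1.1). The plain-object element representation is a simplified stand-in for illustration, not a real audit tool's API:

```javascript
// Flag <img> elements that have no alt attribute at all.
// Note: alt="" is valid for purely decorative images, so only a
// completely missing attribute is reported here.
function findAltTextIssues(elements) {
  return elements
    .filter(el => el.tag === "img" && !("alt" in el.attrs))
    .map(el => "img src=\"" + (el.attrs.src || "") + "\" is missing an alt attribute");
}

const issues = findAltTextIssues([
  { tag: "img", attrs: { src: "logo.png", alt: "ACME logo" } }, // fine
  { tag: "img", attrs: { src: "chart.png" } },                  // flagged
  { tag: "a",   attrs: { href: "#" } }                          // not an image
]);
console.log(issues); // one finding, for chart.png
```

In a real audit, a script would walk the live DOM (for example, via `document.querySelectorAll('img:not([alt])')`), and a human reviewer would still judge whether each alt text is actually meaningful.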

However, the feedback from an audit must be actively incorporated in the project. This really happens only when it is done as a part of an iterative development process; it is not effective if the recommendations are tagged on at the end. The accessibility audit is one of the most useful tools if it's used properly. However, there is often a culture of minimum compliance—a “We'll get to it later, we have the audit” attitude that makes the exercise hollow and pointless.

The ideal path is to first create a well-designed site that incorporates best practice and, if possible, uses feedback from users—ideally incorporating that feedback in the development process at various stages, or iterations, of the project. When a stage of the project is in a steady state, it should then be accessibility audited by a third party (which is important for objectivity). After the results of the test have been incorporated, a user test should be done.

The user test is not just the icing on the cake; it's the acid test of success or failure.

Using Personas

In some instances, it might not be feasible to do a user test at all. This is where using personas can be useful. A persona is a distilled archetype of a certain user group's qualities and attributes. These attributes model the qualities that a user-experience professional thinks epitomize a certain user group, such as blind people, and a persona is then built around them.

Persona use aims to simulate what the experience of using a web site might be like for this group of users. If various personas are accurate, the simulation of their experience will likely be accurate also. Personas can be used to justify the modification of an application design, based on the perceived needs of the persona.

Building Personas

Personas are created from research gathered about a target group, such as surveys and interviews. It is possible to build personas that represent the average user of various groups, such as older people, young people, blind users, and so on. A good persona comes from real-world feedback that has been gathered from real users.

Does Using Personas Work?

Although personas are widely used, there is little empirical evidence to support the claim that using personas actually improves the quality of user-interface design. However, in a very interesting field study conducted by Frank Long of Frontend (an excellent Irish user experience and interface design company), the effectiveness of using personas was investigated. This took the form of an experiment conducted over a period of five weeks using students from the National College of Art and Design in Dublin. The results showed that using personas produced designs with superior usability characteristics. The results also indicated that using personas provides a significant advantage during the research and conceptualization stages of the design process (lending support to previously unsubstantiated claims).

The study also investigated the effects of using different presentation methods to present personas and concluded that photographs worked better than illustrations, and that visual storyboards were more effective in presenting task scenarios than text-only versions.

Note You can read the full paper “Real or Imaginary: The effectiveness of using personas in product design” by Frank Long at

Measuring the Effectiveness of Using Personas

Long's study produced objective evidence to support the key claims made by Cooper et al. (who invented them) for using personas in the product-design process. Using personas seemed to strengthen the focus by designers on the end user and the user's tasks, goals, and motivation. Personas make the needs of the end user more explicit and thereby can direct decision-making within design teams more toward those needs. The study also suggests that using personas can improve communication between teams and facilitate more constructive and user-focused design discussion.

The use of personas doesn't always get the thumbs-up, however. Research by Chapman and Milham, aiming to critically evaluate the use of personas, expressed concern about the claim that using personas is effective. They suggested that personas could actually be harmful, leading to skewed and incorrect conclusions, and were therefore unreliable. They asked, “How many users are represented by this persona?”, “Is this persona relevant for a group?”, and “Are personas a valid method at all (and how can this be verified)?”7

These are valid questions. Long also found that using illustrations instead of photographs of the persona seemed to reduce effectiveness and empathy toward the illustrated persona. Also, the storyboard task scenario was more effective than the text version and facilitated more detailed design solutions.

Long concluded that using personas offers several benefits for user-centered design in product development, such as enhancing the possibility of incorporating user-centered features at the product-specification stage. And he provided some objective evidence that using personas does work.

Field Studies

Field studies is a term that refers to testing a product or interface in its natural setting, which is the ideal. This setting can be an office, home, or any other environment that realistically reflects how the product will be used. A field study might or might not be possible to do, and when undertaken late in the product cycle it should be viewed not as an indicator of significant issues with the product or interface but as a way of refining it.

Traditional Usability Testing

Traditional usability testing involves testing with a random sample of the public or a sample of representative users who will, in practice, be using a web application or web site. This type of testing attempts to assess the quality of the user experience. The outcomes of the test, such as whether a user can successfully complete a certain task or set of tasks, the ease with which the user completed the tasks, and other user feedback and observations made during the test, are all noted by the test facilitator. This recorded information is valuable because it allows an experienced usability analyst to gain a detailed picture of what is or is not working for the end user in a particular user-interface design.
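
Task-level outcomes like these are often boiled down to a couple of simple numbers, such as the completion rate and the mean time on task for successful attempts. Here is a sketch of that bookkeeping; the session data shape is invented for illustration:

```javascript
// Summarize user-test sessions into two common usability metrics.
function summarizeSessions(sessions) {
  const completed = sessions.filter(s => s.completed);
  return {
    // Share of participants who finished the task (0..1).
    completionRate: completed.length / sessions.length,
    // Mean time on task, measured over successful attempts only.
    meanTimeSeconds:
      completed.reduce((sum, s) => sum + s.seconds, 0) / (completed.length || 1)
  };
}

const result = summarizeSessions([
  { user: "P1", completed: true,  seconds: 40 },
  { user: "P2", completed: true,  seconds: 60 },
  { user: "P3", completed: false, seconds: 120 }
]);
// result.completionRate is 2/3; result.meanTimeSeconds is 50
```

Numbers like these never replace the facilitator's qualitative observations; they just make trends across test iterations easier to compare.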

Although traditional usability testing is very useful, it usually captures only a small sample of issues. It is not exhaustive, but any difficulties become immediately apparent during the test. An experienced usability professional understands exactly how the design or implementation contributes to these problems and what can be done to fix them.

User Testing with People with Disabilities

Although user testing with people who do not have disabilities often yields many positive results that can certainly improve the user experience, these users generally have more standard requirements and might not need or use AT.

Therefore, involving people with disabilities in user testing is often the best way to get a detailed picture of how usable an interface, application, or web site is for people who use AT. There are also many people with mild to moderate disabilities who don't use AT and rely on the accessibility settings of their operating system, their browser, and good keyboard accessibility. By closely studying the experience of a user with a disability, it is possible to gain deep insight into how your design choices and decisions impact the user experience.


7 Chapman, C. N., and Milham, R. P. (2006). “The persona's new clothes: methodological and practical arguments against a popular method.” Proceedings of the Human Factors and Ergonomics Society 50th Annual Meeting, pp. 634–636. Available online at:

If you do user testing iteratively—that is, by involving people with disabilities in the full development and design cycle of a project—you can form a much more rounded picture of the user experience and make effective design decisions early on.

Formal vs. Informal User Testing

Formal usability analysis and user testing conjures images of a stern scientist in a white coat taking notes behind a one-way glass observation screen, while the test participant is relayed commands and tasks via a talkback system or feedback relay. These instructions, of course, must be given in a voice drained of any hint of emotion or semblance of humanity to avoid the sin of bias within the test. These images, while obviously caricatures, are what many people think of when they hear the words “observation,” “testing,” and “analysis.” However, it's a rather outdated view that's at odds with current trends and habits in the field of user testing.

Formal user testing is very much associated with the scientific method. Although this approach is certainly valid in certain domains, it is not what we are concerned with here. The formal method is associated with statistical analysis and control experiments. In terms of road testing your HTML5 projects, what we are concerned with here is the more real-world approach of informal user testing. This is where testing often has to be done quickly, as a part of an iterative development cycle (in the best of cases) and as an add-on at the end of a project as some attempt at validation, at worst.

In more informal user testing, there is still a test script and a series of tasks that have been outlined beforehand. The facilitator might also have a relationship with the participant, built up over years of doing many tests together.

Measuring User-Testing Outputs

User tests have certain outputs. These are varied and can be the accumulated notes of the test facilitator, the video footage collected during the test for later analysis, the collective impression of uninvolved observers of the test, and so on. Some outputs are more tangible, such as video that can be archived and viewed later. Some are less tangible but are still very valuable, such as the lasting impressions a user test can leave on the observers when they have watched someone use their web site.

These less tangible impressions and subjective experiences, however, can result in very real outputs. A product can be dropped, a software iteration cycle abandoned, and so on if a project manager sees a live, real-time negative user response to one of a product's interfaces. Conversely, a designer can be vindicated because the results of her design efforts and attention to detail come to fruition when a user says, “Yes, that web site is great. I can find the information I need really easily. I like the way the page is designed.” This experience can be more profound when the person being observed has a disability.

It is not by observing the average user experience that a usability expert gets the interesting information. It is in the extreme cases, both positive and negative, that the really interesting aspects of user-testing analysis take place. In these cases, experiences are amplified and made quite explicit, so there is often no ambiguity. The language is often less than neutral, so there can be little doubt about the user's impression of, and feelings about, a particular user interface or application.

How Does User Testing Work?

Figure 9-8 is an example of a user-testing facility (the one where I work at the NCBI Centre for Inclusive Technology). The figure shows the layout of the rooms and equipment.


Figure 9-8. The author's lab

The User Environment

The user test participant (1) sits in a typical office environment within the testing room, which is controlled for sound. The test facilitator (2) sits with the user, explaining the tasks, taking notes, and critically observing the user's interactions. The test is conducted using a standard PC (3) with assistive hardware and software. Dedicated user test recording software such as Morae, together with discreet cameras and microphones, capture and record every aspect of the user-testing session for later analysis.

The Observation Environment

Observers can watch the test in real time from the comfort of observation room couches. The video from the user's monitor (6) is displayed on a flat screen TV (4), while a second signal from the camera and microphone (7) shows the user's gestures, facial expressions, body language, and vocalizations on a television monitor (5).

Through these links, observers can see everything the user does and says, as well as the interaction between the user and the facilitator.

Test Details

A typical user test consists of eight separate user sessions of one to one and a half hours each. The users tested cover a broad range of disabilities and assistive technologies. We also include younger and older users, as well as people with different levels of experience. This results in a more representative sample of attitudes and approaches. Figure 9-9 shows one participant at the center.


Figure 9-9. A user test participant in the NCBI Centre for Inclusive Technology usability lab

Each user carries out a set of realistic tasks that have been agreed on beforehand with the client.

These usually include the most common tasks for which the product is used, as well as the most critical tasks and any tasks that the test facilitator anticipates might cause problems for users. Tests are carefully designed and run to yield the most realistic user behavior possible. Figure 9-10 shows a facilitator and user together in the testing lab.


Figure 9-10. User test facilitator and participant in the NCBI Centre for Inclusive Technology lab

Basic Elements of User Testing

Rubin outlines a basic methodology and standard elements for more informal testing:8

· Develop problem statements or test objectives rather than hypotheses.

· Use a representative sample of end users, which might or might not be randomly chosen.

· Use a representation of the work environment.

· Observe end users (with a representative product).


8 Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests (Wiley Technical Communications Library)

· Use controlled interrogation and probing of the participants by a test monitor (facilitator).

· Collect both quantitative and qualitative performance and preference measures.

· Recommend improvements to the design of the product.

The preceding points form an outline of the core aspects of user testing. When I undertake a test, I talk to the client first and ask which core areas of the site he or she wants me to look at. Remember that your client will be far more familiar with the system than you are (unless you built it, of course!). Then I draw up a test script based on some of this feedback and my own testing of the site, which I do beforehand to identify any possible problem areas. I then recruit users with different types of disabilities, such as low vision, blindness, limited mobility, or cognitive/sensory disabilities. I also try to mix up the age range of the test participants, because this gives better results.

Image Note Although it is great to have as diverse a group as possible, don't try to do too much, either. It is better to have a smaller group that gives good feedback and to run several iterations of tests. Practical matters like availability are often a big issue, and the logistics can be awkward because people promise to take part and then drop out, and so on. You've just got to roll with it!

When performing tests, I was initially very conscious of not leading the user and tried to be very scientific about the whole thing. I soon realized, however, that this wasn't always necessary or wise. I had to face the fact that the whole test is artificial and contrived; trying to make it otherwise, or to ignore the reality of the situation in the name of science, seemed to me rather shallow and pointless. When you acknowledge that the test is artificial, you put both yourself and the test participant at ease, and you might find that you enjoy the process. In my opinion, this is healthy and can result in better data.

“Why?” you might ask. For a start, test participants are more inclined to be honest when they are relaxed. They'll also be more inclined to take chances and try new things that they might not try if they are uptight. The reason for this is interesting, and it took me a long time to realize it: test participants often want to please the test facilitator and don't want to be seen as unable to do something. This is even more true when testing with people with disabilities, who might be very self-conscious; you need to be aware of this and clearly explain that they are not the ones being tested. If a person can't perform some task, they might feel there is something wrong with them, when the real culprit could well be an accessibility or usability problem that even Stephen Hawking couldn't work around! So try to relax and even have fun. It's worth it. And if you do more testing with the same people, you'll build a rapport with them and come to understand their interaction patterns. If you are at ease, that will be a great help to them.

Observing a User Test

Observing user tests, as shown in Figure 9-11, is one of the best ways to gain a firsthand understanding of what accessibility and usability really mean. Designers and developers in particular can get huge benefits from the insights they gain from observing users.

Some facilities have a dedicated observation room, like ours. Using a wide-screen TV and a small video monitor, clients can watch and listen to the user tests via a remote link without disturbing the users in their tasks.


Figure 9-11. Observing a user test in the NCBI Centre for Inclusive Technology observation room

Digital video recordings of user sessions are then often used to review the sessions after the test and to illustrate key issues. The emphasis is on building an understanding of how the design of the site contributes to users' difficulties and what practical steps can be taken to alleviate these problems.

Goals and Benefits of User Testing

There are several areas where a user test will help your HTML5 projects. One of the main goals is that it will help you identify problems your site users might have and give you the opportunity to fix them before you go live.

Some questions to bear in mind are the following:

· Is your site both easy to learn and to use?

· Is it satisfying to use?

· Will users value the site because it provides the functionality they need?

Here are some of the benefits of user testing:

· User testing your projects and documenting the results will give you a point of reference for later versions of your site.

· The amount of time spent on supporting/maintaining your site will be reduced if it is a quality project from the start. So the effort you put into gaining feedback from your users will pay for itself in the long run.

· A better site will result in more sales or greater use of your site, as well as fewer complaints.

· A better site gives you an advantage over your competitors.

Image Note It is interesting that there has been a large increase in demand for the services of usability professionals over the last few years, as the Internet has become more pervasive and the consumer has gained far more choices. Usability comes into its own when the quality of the user experience is a determinant of what your site users will buy and the services that they will use.

Limitations of Testing

Although user testing is undoubtedly useful and an important part of the user-centered design development cycle, there are some drawbacks to testing that you should be aware of:

· User tests are contrived environments: No matter how you slice it, observing people using a product or service in a test setting is artificial. What you see is only a facsimile of real usage, and users often will not behave the way they would in their own environment. The act of observing something changes it.

· User test results might not actually prove that your HTML5 web site really works: To do statistically significant testing, you have to do a lot of it, and most of us don't have the resources for that. Also, end users often miss very obvious errors in your site that are apparent to you (if you built it or are an experienced test facilitator), even in larger test samples.

· Your user-test participants often don't truly represent your target audience: Even when you are very careful to choose the right people, the test results might be skewed from the start because you have too many advanced users or too many users with only a basic level of digital literacy. Finding the right balance is often very hard. Again, to run tests that are more representative of the wider audience, you need to recruit far more participants than is practical for most developers.

· Is user testing always the best technique to use? This is an important question. For example, an expert evaluation might be the best approach for your project. Some systems require expertise simply to use them in the first place, so testing with novice users isn't ideal in that situation.

Digital Literacy

One recurring issue is the level of fluency a person has both with the AT they use and with the Web and technology in general. Early in your testing process, if you are inexperienced, you might see a user having problems and think it is the fault of the user interface or the site design. But is it? You need to be sure. This is where an experienced test facilitator can pay off, in particular someone with a deep understanding of how AT works. There aren't many around, but if you can find them, they are worth working with, because the knowledge such an expert brings to your project will be invaluable when it comes to understanding and analyzing your test results. Remember that you will need to make design decisions based on these outcomes.

So What's the Best Method for Me to Assess My HTML5 Project?

As you can see from what we have covered, a wide range of user evaluation methods is available. It can be very difficult, even for usability professionals, to get a true picture of which user evaluation method (user testing, focus groups, prototyping, or another) is best suited to a given project. Here are some of the main issues:

· The evaluator effect: Different test facilitators or evaluators come up with varying results for the same data set.

· Lack of scientific rigor when applying usability evaluation techniques: This has the net effect of greatly diluting the reliability of much of the user data collected during a user test or other usability evaluation.

· A general lack of appropriate standards or metrics that can be used to compare evaluation methods.

In short, there is no “one size fits all” method, so you have to approach your projects in the context of what you need from testing and the resources you have. It does get easier in time, and you'll learn what to ignore and what to pay attention to as you gain experience.

Iterative Design Processes

Much is also made in usability circles of the importance of a responsive, or iterative, design process. (You might have noticed the references to it in this chapter.) In practice, however, there is often little agreement on exactly how to achieve it.

Image Note The essence of the iterative design process is to involve users as early as possible at each stage of the build. If you break the project into three stages (say, an Initial Stage, a Prototype Stage, and a Final Stage), ideally the outcome of usability testing from each stage is fed into the next. This results in a site that has a core of real-world user feedback at its heart.

Is Usability the New Economics?

There is an old joke that if you ask four economists the same question, you get five answers. To some degree, it is similar in the area of usability. Even the most seasoned professionals looking at the same data often come up with very different analyses of what the critical issues are, and there are often cases where they identify very few of the same issues.

However, this isn't to say that the user-centered design processes discussed in this chapter aren't useful. They are, but it isn't an exact science. If you want to take the plunge and start using some of these methods, I strongly encourage you to go for it. You will make mistakes, but trust your instincts and learn from the mistakes. It is very worthwhile and satisfying to build humane technologies that help people do what they want to or, indeed, need to online.

Image Note Much of the advice I give in this chapter is based on a combination of years of experience doing user testing myself, and being influenced by the work of people like Donald Norman and, in particular, Jeffrey Rubin. I highly recommend Rubin's book Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests for further reading.


Summary
This chapter was an introduction to many user-centered design processes and methodologies. You should feel free to pick and choose whatever works for your particular needs and circumstances. There is no “one size fits all” method. However, I strongly recommend that you start to try incorporating them into your HTML5 projects as best you can.

One of the most ironic things for me, as someone who has spent years user testing web sites and applications with people with disabilities, is that the simple act of letting a developer meet and observe people with disabilities using their system is often very powerful. There is often an epiphany for the developer: a realization dawns that all the talk of accessibility and usability, and the application of various guidelines, is not an empty or esoteric exercise. The good and (indeed) bad coding and design decisions you and I make do have an impact on people's lives, in ways that we might not initially have the capacity to perceive.