
Institutionalization of UX: A Step-by-Step Guide to a User Experience Practice, Second Edition (2014)

Part II. Setup

Chapter 7. Tools, Templates, and Testing Facilities

• If you throw usability staff into the organization without the right equipment, they will be slow, inefficient, and impractical in their approach.

• Get appropriate tools (e.g., lab equipment and prototyping software), templates (e.g., reusable questionnaires), and testing facilities. These items form an essential toolkit—the core infrastructure for routine usability work.

• Review your methodology to determine the toolkit you need; the right toolkit enables usability designers to execute the user experience design methodology efficiently.

A well-trained staff in a room with nothing but paper can out-design a poorly trained staff equipped with a state-of-the-art facility. While in China, I was once given a tour of a half-million-dollar usability testing facility. It was impressive—very advanced looking. Later, in quiet conversations with the staff, it became clear that the team had no idea what to do with the facility. They had equipment, but no software to support the testing, and no methods or skills.

The main value of facilities, tools, and templates derives from time savings. Instead of creating a testing form from scratch every time a test is needed, a user experience design engineer can take an existing form and modify it for a client’s specific test in approximately 20 minutes. By comparison, creating the concept and forms for a test from scratch may take days or even weeks. Clearly, it is advantageous to hire good staff members and supply them with the tools that can make a difference.

This chapter outlines the tools you need, the templates that are helpful, and the usability testing facilities that will help your staff be most efficient and effective. Of course, by the time this book is published, some of the tools and templates described here may be outdated. New developments are taking place all the time. For example, you might hear that usability testing labs have recently moved from “marginally useful in special circumstances” to being “a practical part of almost every test.” Or you might learn that remote testing, which isn’t used often today, is becoming far more practical and, therefore, much more widely used. Remote testing is usability testing performed at a distance; the participant and the facilitator are not in the room together (and may not even be on the same continent), yet the facilitator can monitor what the participant is doing and saying. Unmoderated remote tests are tests that are run automatically. The participant interacts with a scripted program that collects performance data and responses. Because the contents of toolkits will likely change, a skeptical attitude about these tools is useful—if a tool does not really make a difference in the design, spend your usability testing budget elsewhere.

Introduction to Your Toolkit

Your methodology points to the facilities, tools, and templates you need. For example, if the methodology specifies that a test of branding occurs at a certain point, you will want to have templates for reusable questionnaires and a standard template for the final report.

If you update your methodology, you may need to update the corresponding tools, templates, and facilities as well. Conversely, the introduction of new facilities, tools, and templates might lead you to change your methodology. Online prototyping has become easier, for example, so you might move it to an earlier point in the design cycle. Likewise, as remote testing becomes more feasible and useful, you might add it to your methodology and develop new tools and templates to fit it. Be careful about implementing these kinds of changes, however: some “amazing” breakthroughs are actually not particularly useful. Consider the new crowdsourcing solutions that let many people review your design. While crowdsourcing has some known advantages for supporting innovation, having a crowd of amateurs review a design out of context may not be a valid methodology.

The following sections cover the infrastructure you should consider implementing at your company. They also explore scenarios and priorities for each facility.

Testing Facilities

Depending on the circumstances, testing facilities can range from a simple office setting or hotel room to a full-blown usability testing lab. Of course, a full usability testing lab is not necessary to conduct usability testing. If office space is at a premium, the office of one of the usability team members can be used for testing. There may not be a one-way mirror, special equipment, or videotaping in such a site; instead, you may have only a few chairs, a desk, and a computer at your disposal. Skilled staff members, however, can still successfully create and run the tests. Similarly, it is quite acceptable to use a conference room to run tests, but it is critical that the room be reasonably quiet and free of visual and auditory interruptions. For this reason, it is best practice not to use participants’ workspaces for testing. You can observe them there, but workspaces are not good places to run tests.

There are a number of reasons for establishing a formal, dedicated usability testing facility. Notably, designating a space for testing shows a commitment to testing within the organization. While it is nice to have a room or perhaps a suite with that label, the practical value of a usability testing facility is less significant than the political statement it makes. Of course, the facility becomes an albatross if it is not regularly used—or evolves into a storage space.

While running tests in storage closets can still yield quite good results, it is best to have a testing environment that makes the participants and the facilitator feel comfortable and important. If you can make the test a relaxed experience, you will get more accurate and complete results. At the same time, you do not want the evaluation environment to feel too formal. Facilities that feel imposing and clinical should be avoided. Usability engineers call the people who engage in testing participants instead of subjects for a reason; no one likes to feel like a lab rat!

Facilitating a test is a very demanding activity. It takes focus, and it’s difficult, if not impossible, for one person to simultaneously keep the test process running, observe the nuances of the results, and record data. There is no additional energy or time left to greet participants, provide the initial forms, and hand out the compensation once the testing is complete. Consequently, it is very useful to have additional staff available to handle these functions. Professional testing facilities, for example, have support staff.

In some cases, you will need a facility geographically separated from your offices. You might decide to do testing in a number of cities intermittently, or you might even need to complete testing in these different cities quite often. In this scenario, it makes sense to establish a relationship with a testing facility in each location. These testing facilities are generally set up for marketing studies, but they work well for usability testing. It is also possible to use a conference room in a hotel, but testing facilities come with valuable amenities such as a greeter, a one-way mirror, built-in sound and video, and usually a more comfortable atmosphere.

Whether you sign a contract with a professional testing facility or build your own testing space, a dedicated facility offers a few advantages over a simple conference room for usability testing. Figures 7-1 and 7-2 show the appearance of a typical professional testing facility. Your facility may have a one-way mirror. Most people can tell when a one-way mirror is present, so if your facility has one in place, you should be straightforward about it. With proper briefing, the mirror works very well. Developers, business owners, and marketing and usability staff can come to the site and observe the participants without disturbing the test. They can discuss what they see and send in their questions to the test facilitator. Alternatively, instead of a one-way mirror, you can use video feeds to adjacent rooms to allow others to observe without disturbing the test.


Figure 7-1: Observer’s side of a professional testing facility using a one-way mirror1

1. Photo courtesy of the Bureau of Labor Statistics.


Figure 7-2: User’s side of a professional testing facility using a one-way mirror2

2. Photo courtesy of the Bureau of Labor Statistics.

Recording of Testing Sessions

There is value in recording data-gathering and testing sessions. Professional facilities have video capability, and the new portable labs support video as part of their software.

Recordings are typically used in two ways. One common practice is to produce a full video recording of the session for the record. A continuous tape is made of the test, and you end up with many hours of tape. If someone later says, “I don’t believe the user actually did that,” you can offer to let him or her see the appropriate portion of the tape. In other cases, a much shorter highlights tape is culled from the full videotaping sessions. This edited video, 5–10 minutes long, shows key findings of the usability testing through the voices and actions of the participants themselves. Carefully selected examples on well-edited highlights tapes often can depoliticize the usability test findings: the finding is no longer the “opinion” of the tester, but rather the voice of the participant. Highlights tapes effectively grip the audience’s attention when used as part of the final presentation (which, from a practical standpoint, cannot be done with the full, unedited video of the session). This is a very effective practice—there is nothing like showing video of the users in action.

In the past, recording sessions were prohibitively expensive, but the “shoebox” usability equipment available today (see Figure 7-3) has made the cost much more reasonable. Such equipment includes a TV camera, microphone, monitor, and a remote marker to make it easy to find interesting tape segments. There is actually no tape—just a high-capacity hard drive to save the data—so it is also far easier to edit and present the results. This ease of use, combined with its reasonable cost, makes the shoebox lab a practical alternative to traditional equipment.


Figure 7-3: Shoebox usability equipment

Most labs are using applications and hardware that make video editing increasingly efficient. With these new facilities, you can easily place parts of the video record in the report (see the sample of a test presentation video in Figure 7-4). The lab software lets you record the user’s facial expressions and the activity on the screen. You can then add detail to reports with clear instances of user actions and reactions.


Figure 7-4: Example of a video record from a usability test

A few labs use an eye-tracking device, which identifies where the user’s eye is fixating. You can gain a lot of information from this device. For example, you can see users scanning around the page because they are lost or scanning an image because they cannot tell if it is selectable.

Eye-tracking devices are very useful for research purposes. For example, studies have shown that people start scanning in the main area of a webpage and initially ignore the logo, tabs, and left-hand navigation [Schroeder 1998], and that people’s eyes are drawn first to areas that have saturated colors (pure bright colors), darker areas, and areas of visual complexity [Najjar 1990].

You do not need an eye-tracking device to run an excellent usability test. A good facilitator can see where the user is looking and can supply you with very similar data. An eye-tracking device is expensive and requires setup time, so you probably won’t use it for routine usability tests. It may come in handy in a remote usability test, however, because the facilitator will not be physically present with the participant.

Modeling Tools and Software

Most of the important usability work can be completed with a simple Microsoft Office suite and software for graphics work. It may help to have a flowcharting package, but that’s about all the software you need. You also need to be able to use a word processor to document meetings and descriptions, and to have some kind of tool to mock up screens and pages. Which tool you use is not as critical as making sure that the usability staff members are comfortable with that tool and do not get distracted or waste time writing “code” to make the screen mock-ups work. Some people prefer a graphics program such as Adobe Photoshop, but Microsoft PowerPoint or other presentation tools work just as well. Some usability staff already have facility with tools such as Microsoft Visio. Whatever tool your staff members already know how to use that allows them to quickly mock up screens and pages is the best tool to use.


Selecting Design and Research Tools: Why the Rules Are Changing

Michael Rawlins, Sr., User Experience Design Architect—ESPN and President Emeritus, CT UXPA

The options for UX tools for creating wireframes and conducting research seem limitless. Expect this trend to continue for the foreseeable future. UX teams will play a role in shaping the future proactively by revolutionizing the UX domain while opening the field up to more diverse thinkers and product visionaries.

Innovating the Trigger

Typically, UX teams start the innovation process in response to feedback provided by a new team member or some excitement generated out of a training class or conference demonstration. To innovate more effectively, we need to proactively experiment with software on our own, using all trial periods offered by software vendors (put the tools through their paces).

Forget about the Traditional Software Purchase Matrix

Strategies are evolving for influencing funding stakeholders and corporate procurement/purchasing offices (getting them to “say yes”) to acquire design and research tools. There are too many tools to build traditional comparison matrices—rather, we should perform a SWOT analysis (a diagram that maps strengths, weaknesses, opportunities, and threats) to influence funding stakeholder decisions. This will enable you to establish criteria that are more salient:

• Who are the leaders?

• Who are the laggards?

• Who are the obsolete players, and why?

This evolution will continue for the foreseeable future—primarily because some smart product designers noticed opportunities to surpass the industry incumbents. Design and research tools have become vulnerable. Some are “too big to respond”—and some “simply out of touch.” It’s important to establish “direct” relationships with the software vendors—and join them as product creation collaborators. Tool choices are somewhat overwhelming. Here are a few recommendations to help UX grow and expand.

What Are the New Rules and Expectations?

New expectations have influenced the table stakes in software acquisition. As consumers, we are accustomed to the “no strings attached,” full-feature, free-trial period. Here are a few expectations:

• Installation should be as easy as uninstallation.

• No invasive sales interaction will occur during the trial period.

• The software will work as marketed.

UX teams should blend traditional software acquisition strategies with an aggressive move toward a stronger role in influencing software product developers (the vendor community).

Three Key Acquisition Strategies

1. Perform SWOT analysis for research.

• Why this software?

• Are demos available?

• What are the critical path features of the software—are they “must-have features”?

• Which optional features could be useful?

• Can we cost-effectively and efficiently build this in-house?

• What is the cost of this software?

Questions that lead to innovative thinking:

• Will the software disrupt the current process?

Example: Why do we need to sketch on paper, when there is prototyping software that looks like a sketch?

• Is the software easy to adopt—extending the type of resources that can use the tool?

Example: Can the UX team delegate specific user testing rounds to non-UX resources?

• Are the software makers/vendors accessible?

• Will they openly share their product development roadmaps?

• Are they including the team in ideation sessions?

• Do they feel like partners?

Nailing this preliminary research is the first step in buying software successfully. Not only does it save you money in the long run, but it also allows you to fully grasp your company’s future needs.

Take your time and analyze your company’s needs. Vet your SWOT analysis with other UX teams in your company (and create scorecards for how features and capabilities align with the leading software solution).

2. Costs: Low cost does not equal low value.

There have been many exciting innovations in software as a service (SaaS) targeted toward enhancing our ability to develop and share interactive prototypes and perform product research. Many of the best tools are subscription based. I see this trend becoming more pervasive going forward, with the leaders disrupting the current processes (becoming more “lean” and “time-to-market” focused). UX teams need to be pragmatic about asking cost-structure questions carefully—making sure that the vendor can sustain long-term viability:

• How did the company form?

• What is the year-over-year performance?

• Are there any rebates available?

• Are you paying for other features that aren’t necessary?

• How flexible is the individual per-seat licensing structure?

In an ideal world, cost wouldn’t be a determining factor when buying software. Unfortunately, we live in a world where we need and want the price to be right. Making sure you pay only for the features you need is the number one priority when it comes to price.

3. Learning curve.

• Who needs to use this software?

• How difficult is it to use and grasp all the important features?

• How much will it cost to train employees?

• Will you have to outsource training?

• Are the time and resources available?

What is the time frame to learn the software? Minimally, there should be a demo version of the software available or a full-feature trial to download. If not, that omission suggests the vendor is out of touch.

Conclusions

Most UX professionals are facing the challenges associated with demonstrating our considerable value proposition to product design—especially in the face of shorter time frames and development life cycles. Tools enable us to disrupt the current mindset and contribute to the success of the product design and adoption. Here are a few implied takeaway messages:

• Select tools for prototyping that can be shared with non-UX resources (increase our effectiveness to focus on more valuable project activities).

Example: Balsamiq Mockups software allows non-UX resources, using drag-and-drop widgets, to create low-fidelity visual examples. This enables non-UX resources to disrupt the current work streams through collaboration and sharing visions.

• Seek research tools that extend the reach of traditional tools (e.g., remote unmoderated usability testing tools) that can be shared in social media channels.

Example: ZURBapps such as Verifyapp and Solidifyapp enable stakeholders to participate in constructing and seeing real-time responses to how people work with their product visions. This disruption to the typical business relationship builds more trust and credibility for the UX community—and leads to a greater understanding of why our methods work.

The rules for selecting software have changed. UX resources need to try as many tools as they can. Don’t wait for the new team member, or the inspiration to come from a class or seminar. Download, subscribe, try, blog, complain, and influence!


Sophisticated modeling tools may or may not be necessary. Available software can assist in the development of very large taskflows and the modeling of taskflow behavior. An example of this type of software is Micro Saint (from Micro Analysis and Design, Inc.), which supports task modeling. This software has been used to good effect in very complex, critical applications, especially in the military design arena (although it has yet to be employed in producing commercial websites or applications).

Limited modeling tools are available for usability work. You may wish to create your own. At HFI, we built an application called the Task Modeler—essentially a specialized spreadsheet that adds up the number of clicks, mouse movements, and keystrokes used to complete a task. Using this application on a group of tasks that are representative of the work to be completed on a given interface provides a good indication of the time it will take an expert user to use the software. Such data are important because when measuring how fast users complete tasks during a usability test, you’re measuring their speed during their initial usage, not their speed after extended experience with the interface. During the test, users spend only minutes with the software, so they will not be experts on using the interface.

In many cases, however, you may be designing for expert users. Also, you don’t want to make the classic blunder of designing your software or application for first usage only. For example, while a menu design that can be used easily and quickly by novices is initially a much better choice than using keyboard commands, commands are usually faster once they are learned. If the software is to be used on a full-time basis, going with the “easy” menu can be a million-dollar mistake.

Many companies have purchased tools to track websites and provide feedback and statistics on usage. Some of these tools claim to provide usability information and are good choices for performing quick checks and validation. For example, some tools let you know if your alt text tags are missing (accessibility tools) or if your line lengths are long. Be wary, however, of tools that track download times for a page, the number of users who clicked on a page, or the amount of time people spent on a webpage, independent of other information. While this information can be useful to know, it can also be misleading. Why did a user spend only 3 seconds on a page? Was it because (1) the page is poorly designed, (2) the page is well designed and the user got what he or she wanted right away, or (3) the page before the current one was poorly designed, so that the user clicked on the wrong link? You cannot determine this underlying rationale just by reading a report on where people went and how long they stayed. Nothing can replace a trained usability professional for evaluating a screen or page or watching and interpreting users performing a task.

If you are involved in websites, you really should have an A:B testing engine. It is a psychologist’s dream. When I was in graduate school, we would run perhaps 30 people through an experiment and we were happy to get that amount of testing done within a single semester. With A:B testing, you can run a comparison study and (for a large site) have 10,000 participants in an hour. The A:B test facility randomly sorts customers onto different versions of the site and then tracks their behavior, such as how much they purchase. It is just a great tool.

Like all tools, however, A:B testing has its limits. The “recipe” you are testing is critical. If you do not test a good idea, there is no way that A:B testing can magically recommend what to do. Companies that rely too heavily on this type of testing may think they are doing great work, but they may be uncompetitive in that their designs fail to innovate and may just have fragmented good ideas that don’t hold together. Also, it is important to remember that with the large number of participants in these studies, it is easy to show statistically significant differences between designs that are in reality so small as to be trivial.
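The significance-versus-triviality point can be made concrete with a two-proportion z-test, a standard way to compare conversion rates between two versions of a page. The traffic and conversion numbers below are invented: with 100,000 visitors per arm, a 0.3-percentage-point difference comes out statistically significant even though it may be too small to justify a design decision.

```python
# With very large A:B samples, tiny differences become "significant."
# Standard two-proportion z-test using only the Python standard library.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return rates, z statistic, and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Made-up numbers: A converts 10.0% of 100,000; B converts 10.3%.
p_a, p_b, z, p = two_proportion_z(10_000, 100_000, 10_300, 100_000)
print(f"A={p_a:.1%}  B={p_b:.1%}  z={z:.2f}  p={p:.4f}")
# The difference is only 0.3 percentage points; whether it matters
# is a design and business judgment, not a statistical one.
```

This is why effect size, not just the p-value, should drive decisions made from A:B results.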

Data Gathering and Testing Techniques

Data gathering and testing are some of the most valuable tasks your user experience design team can perform. But the phrase “run a usability test” is a general one. There is not a single type of testing, but rather many different types. There are, for example, tests of branding, early paper prototyping tests for conceptual design, and later tests on robust working prototypes. You must select the type of test needed for where you are in your development process and then create the correct type of test questionnaires to support it.

You can save a lot of money and time by developing an initial set of questions and then customizing them as needed for each test. Defining and creating predesigned templates can save countless hours. While no template for a given type of test works for all situations, there is certainly value in having a template as part of your infrastructure. Some example template forms include those listed here:

• The screener is an essential questionnaire used to select participants for a study. It can help eliminate participants who are too sophisticated or too inexperienced. In some cases, a template can be developed and used repeatedly for each study that will recruit those types of users, although typically the template must be modified for each test.

• Routine usability testing forms are needed when running usability tests. While not exciting, they are quite necessary. For example, you must have an informed consent form to get each participant’s agreement to participate. Without this form in place, you are in violation of ethics in human research and can be sued. You may also need demographics forms and forms to acknowledge compensation.

As mentioned, there is more than one type of usability test. Following are descriptions of several tests. Which one you use depends on what questions you are trying to answer.

• Brand perception tests let you see how the user perceives the current site or application. One version of this test is for a single design; a variant of the test can compare your design with competitors’ designs, or to select the best among suggested designs. Such testing can be conducted with designs from different graphic artists or even different agencies. Regardless of the scenario, the questionnaire for this test must be customized to reflect the company’s target brand values. Which ones are you looking for, and which do you want to make sure to avoid: trendy, warm, friendly, sophisticated, “tech-y”? You need to tailor the questionnaire accordingly.

• If you ask users if they want a given function, they almost always say yes. If you give them a list of potential functions and ask them to rate how important those choices are, they rate most functions as very important. But if you give users a list of possible functions and say they can have only three, you get interesting results. This test, called a functional salience test, is a great way to identify the relative importance of functions.

• A test of affordance determines whether users can tell what they can select on a page. In this test, you simply give users a printed copy of a page and tell them, “Circle the items you think you can select and click on.” You will see if there are selectable items that users cannot tell are selectable. You will also see if there are nonselectable items that make users think they can select them.

• “Think aloud” tests consist of a whole family of tests in which the user is asked to perform a series of tasks while being observed. Users are asked to talk out loud as they work and tell the facilitator what they are thinking. This is a great way to find problems in a design. You can also estimate how long it will take users to complete tasks.

• The card sort test is a useful method if you are trying to find how users categorize the topics in a website or application. With this type of testing, you create stacks of cards with one item on each card, and then the participants group the cards in a way that makes sense to them. Software can help collect and analyze the groupings used by different participants. The software uses cluster analysis and gives results that can guide the information structure of the design.3

3. IBM’s EZSort programs are an example of cluster analysis software. For more information, visit www3.ibm.com/ibm/easy/eou_ext.nsf/Publish/410.

• While the card sort test can help guide the design, you can use the reverse card sort method to check whether the design worked. With this type of testing, you give the participants a list of items and see if they can figure out where to go to find them. If they can find the items, the navigational structure is self-evident.

• Subjective ratings are a large family of tests that allow users to describe how they feel about your site or application. They decompose or break down users’ perceptions to help you more easily track the cause of problems. For example, you might find that people love the colors but feel that the site is very slow. These findings need to be carefully considered. Many users might say that they want a search facility, but this may actually indicate a problem with the structure of the site. The stated desire for a search facility is often just a symptom of being lost in a poor navigational structure.

Note: While standard tests help you quickly prepare testing, each test needs its own set of forms, such as video consent forms, facilitator scripts, task instructions, and so on.
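The first analytical step behind card sort tools like those described above is simply tallying how often each pair of cards is grouped together across participants; cluster-analysis software builds its similarity matrix from that tally. The following sketch shows that step in plain Python, with invented participant data (this is not the EZSort algorithm itself).

```python
# Tally pairwise co-occurrence across participants' card sorts.
# Each participant sorts the same cards into piles of his or her own
# choosing; pairs that land in the same pile often are good candidates
# to group together in the navigation structure.
from collections import Counter
from itertools import combinations

sorts = [  # one list of piles per participant (invented data)
    [{"checking", "savings"}, {"loans", "mortgages"}, {"contact"}],
    [{"checking", "savings", "loans"}, {"mortgages"}, {"contact"}],
    [{"checking", "savings"}, {"loans", "mortgages", "contact"}],
]

co_occurrence = Counter()
for piles in sorts:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            co_occurrence[(a, b)] += 1

for (a, b), count in co_occurrence.most_common():
    print(f"{a:>10} + {b:<10} grouped together {count}/{len(sorts)} times")
```

A real tool would feed this matrix into hierarchical cluster analysis; even the raw tally, though, makes strong groupings (here, “checking” with “savings”) visible at a glance.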

Advanced Methods

In addition to the classic usability analyses, a range of analytical methods look at the big picture, meaning the ecosystem of the customer. Some of these methods were inspired by ethnographic studies, such as shadowing. In this approach, researchers quietly follow and observe participants. In designing a mobile phone for use in an emerging economy, for example, shadowing lets us notice the need for much louder, dust-resistant devices to handle the conditions on trains (Figure 7-5).


Figure 7-5: A PET analysis (persuasion, emotion, and trust) created after in-depth gestalt-style interviews about buying a mobile phone

Other methods allow us to probe into the user’s emotional schema. When designing for psychological influence, we want to be able to model the customer’s drivers, blocks, beliefs, and feelings. The more deeply we can understand the user, the more leverage we have on influencing conversion.

The Special Needs of International Testing

International testing is far more difficult to coordinate and involves much more than just arranging for facilities and participants in many countries and racking up lots of air miles. Test procedures do not transfer directly across cultures; therefore, international testing takes special capabilities and infrastructure. You need to deal with translation issues and adjust the testing methodology based on cultural differences. For example, in some cultures it is not polite to criticize, so the usual methods of asking users to think aloud, expecting that they will say what they think is wrong with the product, may not work. If you are testing internationally, make sure you leave enough time to deal with these differing circumstances.


The Bollywood Method4

4. Based on Chavan [2002].

Apala Lahiri, Global Chief of Technical Staff and CEO, Institute of Customer Experience

The main challenge with usability testing in Asia is that it is impolite to tell someone they have a bad design. It is embarrassing within this culture to admit that you cannot find something, so it is very hard to get feedback.

I conducted a test on a site that offered airline tickets for sale. I used a conventional simulation testing method and got little feedback. I could see that users were not succeeding, but they would not willingly discuss the problems they were experiencing.

I then tried a new method I had developed, called the Bollywood Method. Bollywood is the Hollywood of India and makes far more movies each year than Hollywood does. Bollywood movies are famous for having long and emotionally involved plots. The movies have great pathos and excitement. In applying the Bollywood Method to this testing scenario, I described a dire fantasy situation. I asked each participant to imagine that his or her beautiful, young, and innocent niece is about to be married. But suddenly the family receives news that the prospective groom is a member of the underground. He is a hit man! His whole life story is a sham, and he is already married! The participant has sole possession of this evidence and must book airline tickets to Bangalore for himself or herself and the groom’s current wife. Time is of the essence!

The test participants willingly entered this fantasy, and with great excitement they began the ticket booking process. Even minor difficulties they encountered resulted in immediate and incisive commentary. The participants complained about the button naming and placement. They pointed out the number of extra steps in booking. The fantasy situation gave them license to communicate in a way they never would have under normal evaluation methods.

This method worked well in India and may generalize to special situations in North America and other places where participants hesitate to communicate freely.


Recruiting Interview and Testing Participants

Usability tests typically require fewer participants than marketing research studies because the findings in usability tests are usually qualitative, rather than statistically descriptive. In usability testing, you are not trying to generalize your results or estimate the numbers or percentage of people who feel or would react to a product in a certain way. Instead, you are exploring. You are trying to determine whether there are usability issues, what they are if they do exist, and how you might solve them. In doing so, you are trying to delve into the psychology of your users. This requires that the participants you test be representative of the target population of actual users. In other words, you need to find representative users for data gathering and usability testing.

In-house users, while easy to find, aren’t usually acceptable participants because they probably care more about the company than real users do. They see the application as being worthy of additional effort and might exaggerate its value, or they might not flag aspects of the design that make it impractical. They are also familiar with the company’s in-house language, concepts, attitude, and mindset, and they might even have different aesthetic values and perceptions than typical end users.

There is one case in which performing tests with in-house users is appropriate: if you are actually building an application for internal staff members. Sampling them is usually a very easy and informal process; staff members just need to be screened and scheduled.

Many market research and usability testing companies have entire staffs of screeners—clerical-level staff who call lists of potential participants and ask the questions included in a special questionnaire (also called a screener, as noted earlier in the chapter). The staff members use the questionnaire to select participants who fit the criteria for the study. Participants are typically offered a fee of $100 to $200 each, depending on how stringent the required match criteria are. Some of these testing facilities maintain databases of potential participants. This can be convenient, but the lists may be overused. (Some people seem to make a part-time job out of participating in studies!) You may want a fresher list.

To accomplish this, you may need to ask the recruiting firm about the people in their databases. You can shop around for databases and recruiters, and specify that the participants must not have been in any studies during the last 12 months. It may be more challenging to find these kinds of participants, which may make your recruiting more expensive. If you need general participants—for example, people between the ages of 20 and 60 who purchase goods from the Web at least once every 3 months—it may be relatively easy to find “fresh” participants. If you need people who work in a copy center who have never used a particular type of software, you will pay more for this type of recruiting. It is useful to have relationships established with companies that can help you recruit participants.
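The screening step described above can be sketched as a simple filter: given a list of candidate records and the example criteria from this section (aged 20 to 60, shops online at least quarterly, no study participation within the last 12 months), keep only the matches. The field names, dates, and exact thresholds here are illustrative assumptions, not part of any real recruiting database or screener form.

```python
from datetime import date

def is_eligible(candidate, today=date(2014, 1, 1)):
    """Return True if the candidate matches the example study criteria:
    aged 20-60, shops online at least quarterly, and has not taken
    part in any study during the last 12 months ("fresh")."""
    fresh = (candidate["last_study"] is None
             or (today - candidate["last_study"]).days > 365)
    return (20 <= candidate["age"] <= 60
            and candidate["shops_online_quarterly"]
            and fresh)

# Illustrative candidate records, as a screener's database might hold them.
candidates = [
    {"age": 34, "shops_online_quarterly": True, "last_study": None},
    {"age": 34, "shops_online_quarterly": True, "last_study": date(2013, 10, 5)},
    {"age": 71, "shops_online_quarterly": True, "last_study": None},
]

recruits = [c for c in candidates if is_eligible(c)]
print(len(recruits))  # only the first candidate passes all three criteria
```

In practice the criteria live in a screener questionnaire rather than code, but the point is the same: the stricter and "fresher" the criteria, the smaller the pool of matches, and the more the recruiting costs.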

If your user group consists of current customers, it may be possible to develop a list of customers and have the staff screeners work from that list. This may be easier and more cost-effective than using a recruiting firm. In some cases, you can have internal staff work temporarily as screeners. This costs very little unless you need to hire in-house staff to work as screeners on a full-time basis. Using in-house screeners saves money over hiring a screener consulting firm, but the in-house screeners will need to be trained. Usability consultants, in contrast, are already trained and just charge you per project. In any event, having a smoothly running mechanism for obtaining study participants helps keep usability work progressing; difficulty in obtaining participants is the single most common cause of delay in usability projects.

A whole series of deliverable documents results from proper usability work. Some people approach usability without much of a concept of deliverables, thinking that by simply studying the user, good things will happen. That may be true—good things may happen. But making usability work efficient and repeatable requires an organized set of deliverable documents. The deliverables provide a clear focus and a set of milestones for usability work. As an example, Table 7-1 lists the major deliverables in the Schaffer Method.

Image

Table 7-1: The Major Deliverables in the HFI Framework Methodology (Version 5)

It takes time to create good deliverables, but they offer several benefits:

• They document that steps in the methodology are actually completed.

• They allow work to be communicated to others—for instance, key stakeholders and development staff.

• They allow work and processes to be repeated.

Most deliverables require several smaller deliverables to create the end product. In the end, there may be hundreds of deliverables. Imagine that you needed to create these deliverables from scratch for each project, determining the appropriate document structure and inventing the style of presentation. The level of investment required to achieve this goal would make usability engineering programs prohibitive in terms of both cost and time. If each of the 23 deliverables listed in Table 7-1 took just ½ day to create structurally, for example, then this task would add 11½ days to the project.
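The arithmetic above can be checked, and extended, with a back-of-the-envelope comparison: half a day per deliverable from scratch versus modifying an existing template, which the start of this chapter estimates at roughly 20 minutes. The 20-minute figure and the 8-hour workday are assumptions for illustration.

```python
# Overhead of structuring deliverables from scratch vs. from templates.
DELIVERABLES = 23          # major deliverables listed in Table 7-1
FROM_SCRATCH_DAYS = 0.5    # half a day each to invent structure and style
TEMPLATE_MINUTES = 20      # rough estimate for adapting an existing template
WORKDAY_MINUTES = 8 * 60   # assumed 8-hour workday

scratch_days = DELIVERABLES * FROM_SCRATCH_DAYS
template_days = DELIVERABLES * TEMPLATE_MINUTES / WORKDAY_MINUTES

print(scratch_days)                             # 11.5 days, as in the text
print(round(scratch_days - template_days, 1))   # roughly 10.5 days saved
```

Even under these rough assumptions, templates recover more than two working weeks per project, which is the core economic argument for standard deliverables.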

If usability is to be routine, standard reusable deliverables are indispensable. They help organize the project and save valuable time. Standard deliverables also make it easier for managers to check a project’s progress because they know the full set of deliverables to expect. Finally, standard deliverables make it easier to get oriented to and review an unfamiliar project.

Summary

The value of the tools, templates, and facilities outlined in this chapter lies in their ability to save you valuable time. However, it remains critical to pick the items most appropriate for your efforts. It is not sensible to invest in something just because it is a new technology. Refer back to your strategy often, and remember to let your methodology determine your toolkit.