Developing Products That Customers Want - Analytics for Product Development - Customer Analytics For Dummies (2015)


Part IV

Analytics for Product Development

webextras Visit for great Dummies content online.

In this part …

· Find out what customers expect in your company’s products.

· Test product features to find out how usable they are.

· Improve the navigation and findability of your website and products.


Chapter 13

Developing Products That Customers Want

In This Chapter

arrow Ranking customers’ top tasks

arrow Matching company needs to customers’ needs

arrow Discovering opportunities to delight customers

If you build it, they will come. At least you hope so! Understanding what customers want and what they will purchase is one of the holy grails of product development and is rife with analytics to help improve the process. Your product must meet customer needs (or perceived needs) better than your competition’s and at a comparable or better price.

One of the most effective ways to produce products and experiences that customers want is to first understand what customers want to accomplish. For software products, that can be tasks such as accurately filing taxes, creating professional looking movies from raw clips, or organizing and sharing photos. For physical products, this can be tasks such as recording a show to a DVR, drilling a hole in concrete, or painting a room faster. For websites, customers are usually looking to answer questions or to accomplish tasks like purchasing a product or paying a bill online.

remember Some of the most innovative products don’t come from what customers are asking for, but from fixing problems with existing products and services.

In this chapter, I cover several methods to help you define and prioritize what features to include in your products.

Gathering Input on Product Features

Even if you already have a list of tasks and features ready to prioritize, here are four additional sources of data (also see Chapter 7).

· Stakeholder interviews: Start with the people who know the product and customers the best. Sales and support teams hear complaints and probably have a lot of ideas about what customers want. In fact, there are probably already long lists of feature requests, bug fixes, enhancements, and wish lists. Combine and mine these to generate a list of requirements.

· Follow me home: A technique I learned while working at the software company Intuit was “follow me home.” You literally follow a customer home or to his workplace and then spend the day watching him do his job. Look for pain points and problems in how he does his job to look for opportunities of improvement.

During one such follow-me-home, a team of researchers noticed that retail customers were exporting their transactions from their point-of-sale cash registers into QuickBooks to manage their books. This extra step took time and could cause problems if done incorrectly. The developers came up with the idea of integrating QuickBooks with a cash register that eliminated a step for customers.

· Customer interviews: Ask customers directly about the types of problems they have and the features they think they need. Customers can be notoriously bad at articulating what they want. A skilled facilitator can ask questions in a way that gets at the underlying root of the problem. Have customers describe a typical day as they encounter your product (or any type of product).

Use the 5 Whys technique when interviewing customers. It’s a question-asking technique that explores the cause-and-effect relationships underlying a problem.

· Voice of customer survey: A voice of customer survey allows customers to provide direct feedback about existing products by answering a few survey questions. These surveys are one of the most cost-effective ways of gathering feedback about customers’ experiences with your products and services.

Finding Customers’ Top Tasks

While there are hundreds to thousands of things users can accomplish on websites and in software interfaces, a critical few tasks drive users to visit a website or use a software product.

Think of all the features Microsoft Word provides. It supports document editing, mail merging, desktop publishing, and a full range of HTML editing. By one estimate, it has around 1,200 features. Now think of the most important features that you need when you create or edit a document — the features you couldn’t live without. These features likely support your top tasks.

Prioritizing tasks is not a new concept. Having users rank what’s important is a technique that’s been used extensively in marketing and conjoint analysis (I cover this topic later in this chapter). But having users force-rank hundreds or thousands of features individually would be too tedious for even the most diligent of users (or even the most sophisticated, choice-based conjoint software).

In the following sections, I outline the steps on how to prioritize your top tasks.

Listing the tasks

Identify the features, content, and functionality you want people to consider. The tasks can be specific to a website or to a class of websites. For example, here are a few things you can do on a healthcare insurance company website:

· Look up the address of a doctor.

· Find the office hours of a doctor.

· See if a healthcare provider accepts your insurance.

tip Avoid internal jargon as much as possible and make sure tasks are phrased as something a person can relate to and that are actionable.

The total number of tasks you need depends on how focused and extensive the functionality you are testing is. A broader experience (an entire e-commerce website) will have more tasks compared to a more focused experience (for example, on-demand videos for a cable provider). You might have as few as 20 or as many as 150.

Finding customers

Recruit a representative set of customers to determine the top tasks. As with all sampling methods, be sure the participants who take your study share the same characteristics as the larger customer base you are making inferences about.

The following table shows the sample size you’d need to have a desired margin of error (plus or minus around each percentage). For example, to achieve a margin of error of approximately plus or minus 7%, you should plan on a sample size of around 136. This percentage is based on the assumption that the percentage will fall at around 50%, at a 90% level of confidence (see Chapter 2 for details about confidence levels).

In top-tasks analysis, most items will actually only be selected by a small percentage of users, making the sample sizes shown on the high side for most of the items. In other words, this is a worst-case scenario sample size and means that sample sizes as low as ten customers still provide insights.
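The relationship between sample size and margin of error can be sketched with the standard formula for a proportion, using the assumptions stated above (a worst-case proportion of 50% and a 90% confidence level):

```python
import math

def margin_of_error(n, z=1.645, p=0.5):
    """Margin of error for a proportion with sample size n.
    z = 1.645 corresponds to 90% confidence; p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample size of around 136 gives roughly a +/-7% margin of error
print(round(margin_of_error(136), 3))  # -> 0.071
```

Larger samples shrink the margin of error, but with diminishing returns: quadrupling the sample only halves the margin.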

Table: Sample Size needed for a desired Margin of Error (+/-), at a 90% level of confidence.

Selecting five tasks

Present the enumerated list of tasks in random order to representative customers using software. You want to randomize because customers don’t consider tasks individually, but scan your list for recognizable key words. With a randomized list, tasks have an equal chance of being near the top or bottom of the list.

tip You can use software to randomize the order of your tasks. UserZoom and SurveyMonkey are two that allow you to randomize.

Have users pick their top five tasks from the list.

tip It’s hard to know every task a customer would like to accomplish. Therefore, include an “Other” option so customers can provide their own top task.

There’s nothing sacred about picking 5; it’s just a good number that works when you have 50 or more tasks. With fewer tasks, say 20 to 30, you can adjust the number down to 3. This forces users to separate the really important tasks from the nice-to-have tasks.

Graphing and analyzing

After you collect the results of your survey, follow these steps to compute the customers’ top tasks:

1. Count the votes each task received.

2. Divide the vote total by the number of participants who voted.

This gives you the percent of times a task was selected.

3. Sort the tasks by percentage in descending order.
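The three steps above are easy to script; here’s a minimal sketch using hypothetical ballots:

```python
from collections import Counter

# Hypothetical ballots: each participant picked his or her top five tasks
ballots = [
    ["fuel efficiency", "price", "reviews", "photos", "safety"],
    ["fuel efficiency", "price", "warranty", "photos", "dealers"],
    ["price", "safety", "fuel efficiency", "reviews", "warranty"],
]

# Step 1: count the votes each task received
counts = Counter(task for ballot in ballots for task in ballot)
n = len(ballots)

# Steps 2 and 3: divide by the number of participants, sort descending
top_tasks = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
for task, votes in top_tasks:
    print(f"{task}: {votes / n:.0%}")
```

Plotting the sorted percentages produces the characteristic "long neck" graph described next.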

The characteristic shape of the top-task graph is the “long neck” of the vital few tasks your customers care about. Figure 13-1 shows four examples. Notice the handful of tasks that really stand out near the left side of each graph? They are the top tasks.


Figure 13-1: The long neck of the popular and least popular tasks.

You also see the long tail of the trivial tasks that are less important. Of course, you can’t just stop supporting your less important tasks, but you should be sure that customers can complete top tasks effectively and efficiently. These top tasks should become the core tasks you conduct benchmark tests around and the basis for design efforts.

You can also add confidence intervals around top-tasks graphs to help stakeholders understand the margin of error around each item. This way, if there are strong opinions about how important or trivial a task is, one or two votes in general won’t matter statistically. Figure 13-2 shows a larger version of the graph from Figure 13-1, with 90% confidence intervals. The gap after the fifth task (and the non-overlap in the confidence intervals) provides a good breaking point for prioritization. This particular top-tasks study was around what features consumers look for when researching consumer electronic devices on their smartphone.

Taking an internal view

Have the internal team working on the product also take the top-tasks analysis. This can include product managers, executives, designers, and developers. Compare these results with the customers’ top tasks and identify areas of agreement and disagreement. The areas where the two groups agree are where you should focus your attention.


Figure 13-2: Top-tasks results with confidence intervals.

But can’t I just look at Google Analytics?

For websites and even software products, log files record the pages and screens where customers visit and spend the most time. It would seem that this would eliminate the need for conducting a top-tasks analysis. And although resources like Google Analytics (see Chapter 10) will tell you where customers are going, they don’t do a good job of telling you why. An example from an automotive information company I worked with illustrates this best.

When we collected the tasks to prioritize in a top-tasks analysis for the website, a product manager said that he knew what customers wanted and were doing. In looking at the log files, most page views and time were spent on the Car Details page. The problem was that while customers did go to the Car Details page, we didn’t know why they were going there, or which, if any, of the dozens of details on that page they were after.

Using the top-tasks analysis, we found that the top task that customers wanted was information about each car’s fuel efficiency. The only place that information existed was on the Car Details page! This top-tasks analysis led the design team to place the Miles per Gallon information on the Car Summary page.

Conducting a Gap Analysis

A complementary approach to the top-tasks analysis is a gap analysis. With a gap analysis, you understand how important customers find a feature and how satisfied they are with it in completing their tasks. If you have an existing customer base of people who have experience with a product (or competitive product), a gap analysis is what you need.

A gap analysis is a survey using two sets of items to be rated: importance and satisfaction.

remember Keep it to a smaller set of features or tasks than the top-tasks analysis, because you’re asking participants to rate each item twice.

To conduct a gap analysis, follow these steps.

1. Have customers rate how important a particular function or task is, using a numbered scale anchored with Not at all important to Very important.

A 7-point scale is popular, but almost any number from 5 to 11 will work.

2. Ask customers to rate how satisfied they are with the same features using the current product.

Use the same numbered scale from Not at all satisfied (1) to Very satisfied (7).

3. Use the following formula on each participant’s task/feature scores: Importance + (Importance - Satisfaction).

This will reveal the “gap” or opportunity for improvement. This formula provides both a prioritization of features and also gives a higher score to features that are rated higher in importance.

An example of tasks and functionality for a web-based email system and some sample data are shown in the following table. Notice how the feature “Add contacts to an address book easily” and “Search for messages older than 6 months” both have the same “gap” in the Importance - Satisfaction column. But because adding contacts was rated as more important than searching for older messages (a 7 versus a 4), it gets a higher priority score (a 10 compared to a 7).
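The scoring in Steps 1 through 3 can be sketched as follows, using the two features and the ratings cited above:

```python
def gap_score(importance, satisfaction):
    """Priority = Importance + (Importance - Satisfaction)."""
    return importance + (importance - satisfaction)

# (importance, satisfaction) ratings on a 7-point scale
features = {
    "Add contacts to an address book easily": (7, 4),
    "Search for messages older than 6 months": (4, 1),
}
for name, (imp, sat) in features.items():
    print(f"{name}: gap={imp - sat}, priority={gap_score(imp, sat)}")
# Both features have a gap of 3, but adding contacts
# scores a priority of 10 versus 7 for searching old messages
```

The formula rewards features that are both important and underperforming, rather than treating every equal-sized gap the same.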


Mapping Business Needs to Customer Requirements

Techniques like the gap analysis help prioritize the features from the customers’ perspective. But it’s also important to consider the organization’s priorities. One such technique is Quality Function Deployment (QFD). QFD is a structured way to prioritize features, functions, or even website content by taking into account both business priorities (the voice of the company) and the customer/user priorities (the voice of the customer).

QFD separates what you do from how you do it. In the rush to complete long lists of feature requests, it’s easy to lose sight of what customers need.

The QFD’s main output is a decision matrix, which looks like a house (see Figure 13-3). You might hear “house of quality” used interchangeably with QFD.

The following sections show you how to build a QFD, using an example of a website redesign for a multinational electronics company that sells through channel partners.

This is a simple QFD that balances the internal need to build and improve against a clearly prioritized list of customers’ top tasks. It helps narrow the scope and improve the quality of the redesign.

tip You can build a more advanced QFD by incorporating competitive information, specifications, and positive and negative relationships between the “How’s.” Adding each of these parts to the house takes extra time, but it can be worth it depending on the complexity and consequences of the project.


Figure 13-3: House of quality components from a QFD process.

Identifying customers’ wants and needs

Brainstorm as many features, functions, and task-based activities as you can that customers would want to do. Look at the competition, survey and interview existing and former customers, review call center data, and interview stakeholders. These are your “What’s.”

For an informational-based website, most customer requirements are the tasks that users attempt, so a good task analysis can help identify these.

Here’s a selection of some tasks from the B2B electronics website:

· Find detailed technical specifications (like CPU and power consumption).

· Find and download drivers.

· Contact a salesperson.

· Have a personalized profile to save products I’m interested in.

· View products for an industry (for example, entertainment or medical).

· Share my purchase with others.

Identifying the voice of the customer

Ask customers to tell you what they think is important, not what you want them to think is important. An essential part of this step is realizing that everything can’t be important. If you ask customers and users what’s important, they will often tell you everything is important. You need a way to force users to choose which tasks are really important and which features are nice to have.

tip You can get sophisticated with conjoint analysis, but the fastest and easiest way to get a forced rank is to run a top-tasks analysis or a gap analysis, as explained in the previous sections.

Use the average importance rating or the percent of users who pick each task as the weights in the QFD. Use higher numbers to indicate higher importance, so if you use ranks, reverse them so a “1” represents the least important. The following table shows the relative rank and the percent of users who picked the task as one of their top five (out of 35 possible tasks) in a top-tasks survey.


Tasks (the What), ranked by % Who Picked:

· Find detailed technical specifications (like CPU and power consumption).

· Find and download drivers.

· Contact a salesperson.

· Save products I’m interested in to an account page.

· View products for an industry (for example, entertainment or medical).

· Share my purchases with others.

Identifying the How’s (the voice of the company)

Either use existing lists of enhancements and fixes or generate a new list based on the customer rankings.

The following are some example “How’s” for the electronics website:

· Integrate Facebook and Twitter into the product pages to promote sharing.

· Improve the search function and search results.

· Integrate product videos.

· Provide 3D rotation of products.

· Add a customization feature.

· Add a Filter by Location.

· Integrate detailed product specs and drivers from the main product page.

Building the relationship between the customer and company voices

Next determine how each “How” impacts each customer “What,” using the following scale:

· 9 = Direct and strong relationship

· 3 = Moderate relationship

· 1 = Weak/Indirect relationship

· Blank = no relationship

The 1's, 3's, and 9's are mostly a convention and help accentuate the differences when you prioritize. Place each value in the cell where the “What” and the “How” meet, and leave the cell blank if there’s no relationship.

In the example, “Integrate Facebook and Twitter buttons into the product pages” has a strong association with “Share my purchases with others.” Something like product videos will probably have a modest impact on detailed specifications because users could view the products to get a better sense of size and even see what external ports are supported.

Figure 13-4 shows the “What’s” and the “How’s” and the relationships between the two in the cells, with higher numbers indicating a stronger relationship, and blank cells meaning no relationship.

Generating priorities

Multiply each importance weight by its relationship value and sum the products down each “How” column. The sum of each column represents the overall magnitude of impact that each feature/function will have on the customer. Higher numbers represent more impact.


Figure 13-4: A QFD example with the Customer Requirements ( the “What’s”), the importance of each (Weights), and the voice of the company (the How’s) across the top.

The actual value of each of the totaled columns is less important than the relative ranking, which the tool also calculates. For example, “Find detailed technical specifications” (a “what”) has a weight of 80 and “Improve the search functionality and results” (a “how”) has a strong relationship of a 9. This cell generates a score of 9 × 80 = 720. “View products for an industry” (a “what”) has a weight of 13 and has a weak relationship with “Improve the search function and results.” This cell has a score of 13 × 1 = 13. The search feature raw score is found by adding these two cell scores, 720 + 13 = 733, which is the second-highest score relative to the other features considered.
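The column-score arithmetic can be sketched as follows, using the two “What’s,” weights, and relationship values from the example:

```python
# Importance weights for two customer "What's" (from the top-tasks survey)
weights = {
    "Find detailed technical specifications": 80,
    "View products for an industry": 13,
}
# Relationship of each "What" to the "Improve search" How (9/3/1; 0 = blank)
search_relationship = {
    "Find detailed technical specifications": 9,
    "View products for an industry": 1,
}

# Multiply importance by relationship and sum down the column
search_score = sum(
    weights[what] * search_relationship.get(what, 0) for what in weights
)
print(search_score)  # -> 733 (720 + 13)
```

Repeating this sum for every “How” column and ranking the totals produces the feature priorities.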

Examining priorities

You should see gaps between features. The two highest ranked features (integrating the product specs and improving search) are well above the next five. A blank row means that a customer want isn’t supported. That isn’t necessarily a bad thing if it’s a low priority task. In this example, none of the new features supported “Contact a salesperson.” Assuming adding the contact information doesn’t introduce new problems, it would also be a good addition to the redesign.

Measuring Customer Delight with the Kano Model

Differentiating a product from the competition often isn’t about just meeting customer needs — it’s about exceeding those needs and expectations.

Beyond a set of expected features, some features so exceed expectations that they “delight” customers into purchasing, repurchasing, and recommending a product. The Kano model helps differentiate between features customers expect and features that delight them. For example, customers expect cars to be reliable and to include features like power steering, radios, and air-conditioning. Up until the early 1980s, cup holders in vehicles were aftermarket add-ons. Cup holders, and the total number of them, became a feature that delighted customers, so they were integrated first into minivans and then into every type of car.

Named after Japanese professor Noriaki Kano, the Kano model is based on asking customers two questions about features. Features are categorized into one of six groupings.

· Delighting: The feature provides extra satisfaction when present but does not harm when absent.

· One-dimensional: The more of the feature the better.

· Must-Haves: Lack of the feature would lead to dissatisfaction.

· Indifferent Attribute: Customers don’t care about the feature and it neither increases nor decreases customer satisfaction.

· Reverse: Including this feature leads to dissatisfaction.

· Questionable: Conflicting customer responses make it unclear whether this feature adds or detracts from satisfaction.

Here are the steps to take to understand what customers expect and what delights them:

1. Aggregate the features in a similar way as you do for a gap analysis and top-tasks analysis.

The features should be articulated in a way that is meaningful to a customer, so avoid using internal company jargon and acronyms unless customers also use them.

2. For each feature, ask participants two questions.

The first is called the functional question: What are your feelings when the feature is included?

The second is called the dysfunctional question: What are your feelings when this feature is NOT included?

For both questions use the following 5-point response scale:

o 1 = I like it that way.

o 2 = It must be that way.

o 3 = I am neutral.

o 4 = I can live with it that way.

o 5 = I dislike it that way.

3. Tabulate the responses.

Table 13-1 shows how to score the responses to assign each one of the six categories. For example, if a customer responds to the functional question for a feature with “I like it” and to the dysfunctional question with “I can live with it that way,” the feature is a delighter. In comparison, if a customer responds “Neutral” to the functional question and “I like it” to the dysfunctional question, the feature is a reverse (customers would prefer less of it).
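Table 13-1 isn’t reproduced here, but a commonly used version of the Kano evaluation table can be sketched as a lookup keyed by the (functional, dysfunctional) response pair on the 5-point scale above; exact cell assignments vary slightly between practitioners:

```python
# One common Kano evaluation table, keyed by (functional, dysfunctional).
# Answers: 1 = Like, 2 = Must be, 3 = Neutral, 4 = Live with, 5 = Dislike
KANO_TABLE = {
    (1, 1): "Questionable", (1, 2): "Delighting", (1, 3): "Delighting",
    (1, 4): "Delighting",   (1, 5): "One-dimensional",
    (2, 1): "Reverse", (2, 2): "Indifferent", (2, 3): "Indifferent",
    (2, 4): "Indifferent", (2, 5): "Must-have",
    (3, 1): "Reverse", (3, 2): "Indifferent", (3, 3): "Indifferent",
    (3, 4): "Indifferent", (3, 5): "Must-have",
    (4, 1): "Reverse", (4, 2): "Indifferent", (4, 3): "Indifferent",
    (4, 4): "Indifferent", (4, 5): "Must-have",
    (5, 1): "Reverse", (5, 2): "Reverse", (5, 3): "Reverse",
    (5, 4): "Reverse", (5, 5): "Questionable",
}

print(KANO_TABLE[(1, 4)])  # "I like it" / "I can live with it" -> Delighting
print(KANO_TABLE[(3, 1)])  # "Neutral" / "I like it" -> Reverse
```

Tally each feature’s category across all participants to see whether it skews toward delighting, must-have, or indifferent.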


Assessing the Value of Each Combination of Features

A more advanced method that identifies both the value (called the utility) and the optimal combination of individual features is called conjoint analysis. Where the Kano analysis identifies which features delight or dissatisfy customers individually, conjoint analysis identifies how the addition and reduction of features presented in more realistic combinations will lead to more or less interest in purchasing a product.

tip A conjoint analysis uses more advanced statistics and is usually done with software from companies like Survey Analytics, QuestionPro, or Sawtooth Software.

To conduct a choice-based conjoint, you enter all the features, including price, into the software. These features are presented to customers in a survey format and the pairing of features helps identify how much each feature contributes to likelihood to purchase.

Kano airline analysis

My company recently ran a Kano analysis on how customers feel about the airline travel experience and features. One hundred participants who flew in the last year completed the survey. They rated the following items, using the two Kano questions:

Wi-Fi onboard

Full meal service

Free drinks (Non-alcoholic)

Purchase snacks on board

Extra legroom

Duty-free items for sale

Free carry-ons

Pillows and blankets

Free checked bag

Reclining seats

Early boarding

No change fees on tickets

In-seat TVs

Pick seat when booking online

We tabulated the responses by feature; two examples are shown in the following table. For example, for having Wi-Fi Internet access onboard the airplane, 22% found that feature delighting and 35% were indifferent toward it. In contrast, 33% of participants rated the Free Checked Bag as one-dimensional, meaning the more free checked bags, the better. Eighteen percent rated a free checked bag as a must-have and 14% would be delighted by that feature.

Table: Percent of participants assigning each Kano category (including Must-Have) to Wi-Fi Onboard versus Free Checked Bag.

Here are two examples of possible conjoint screens:

· How likely would you be to purchase a laptop with the following features (0 = not at all likely and 10= extremely likely)?

· Brand: Dell


· 15-Inch Display

· $999

· How likely would you be to purchase a laptop with the following features (0 = not at all likely and 10= extremely likely)?

· Brand: Lenovo


· 13-Inch Display

· $1,199

Through a mathematical combination of features presented to the customer, the software identifies which features have the biggest impact on likelihood-to-purchase scores. This uses a similar statistical approach as the key drivers analysis described in Chapters 9 and 12.
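Conceptually, the software estimates each feature’s contribution (its utility) by regressing the likelihood-to-purchase ratings on the presence or absence of each feature. Here’s a toy least-squares sketch with entirely hypothetical laptop profiles and ratings:

```python
import numpy as np

# Dummy-coded profiles: [intercept, brand=Dell (vs Lenovo),
#                        15-inch display (vs 13-inch), price=$999 (vs $1,199)]
X = np.array([
    [1, 1, 1, 1],  # Dell, 15-inch, $999
    [1, 0, 0, 0],  # Lenovo, 13-inch, $1,199
    [1, 1, 0, 1],  # Dell, 13-inch, $999
    [1, 0, 1, 1],  # Lenovo, 15-inch, $999
])
y = np.array([8, 4, 7, 6])  # hypothetical 0-10 purchase-likelihood ratings

# Least-squares estimate of each feature's part-worth utility
utilities, *_ = np.linalg.lstsq(X, y, rcond=None)
print(utilities.round(1))  # -> [4. 2. 1. 1.]
```

In this made-up data, the Dell brand adds 2 points to the rating while the larger display and lower price each add 1; real conjoint software handles many more profiles and choice-based (rather than rating-based) designs.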

Finding Out Why Problems Occur

Sometimes the best improvement you can make to a product is not adding more features but fixing features that don’t work well, or other problems with the experience that lead to undesirable outcomes.

A technique that works well for understanding the negative impacts of undesirable actions is the Failure Mode Effects Analysis (FMEA).

Things go wrong quite frequently in the customer experience, both online and elsewhere. Customers forget their passwords, install printers incorrectly, and experience long wait times on customer support. They are unable to find products, they take too long to make a purchase, and their information can be stolen. While you ultimately want a great customer experience, measuring poor experiences helps you understand how to prevent things from going wrong and generate better experiences. The following case study walks through an example of using an FMEA to identify problems with an online experience. It can be applied to offline experiences, too.

Imagine you’re interested in buying a new car. Should you trade yours in or try and sell it yourself? Either way, you want the most money for your car, and knowing its true value is essential information. Many websites (for example, Edmunds, Kelley Blue Book, and Autotrader) offer a tool that tells you how much your car is worth.

The task seems simple: Enter your make and model, condition, and mileage, and like the Magic 8 ball, you get a value. Of course, a number of details complicate the process and things can go wrong. You can pick the wrong make, be unable to accurately describe the condition or understand automotive jargon, or leave off features — or the entire process can take so long that you finally give up.

Applying the FMEA in finding the value of your car can help the designers of these websites prevent nuisances and failures (such as taking longer to find the value of your car, or getting the incorrect value altogether).

To conduct an FMEA, follow these steps:

1. Identify the steps in the process.

Break down a typical scenario into small steps to identify failure points or bottlenecks. Ideally, you can do this by watching the path users go down from a usability test (see Chapter 14) or from existing data.

2. Identify what can go wrong.

Here’s where recording poor experiences comes in handy. Are users taking too long, getting the wrong car value, making errors, or not recommending the website or product because of pricing or policies? Usability problem lists with severity ratings can be especially helpful in identifying the sort of failures you want to avoid. Are users entering the wrong make, picking the wrong condition, getting slowed down by too many pages or intrusive ads, or are they stopping at a preliminary estimate before getting the right value? Use any and all data sources to get a good picture about problems. These sources include call center data, surveys, third-party reports, former customers, and competitive data.

3. Determine how common the problems are.

Not all problems affect all users. Watch just a handful of users in a usability test and you’ll see that some users struggle with one step while others breeze through all of them. For example, in a test of the Enterprise Rental Car website, most users had a problem finding the total price for their rental car, while only a few users had difficulty picking the correct car type. For the FMEA, rank problems from least common (assigned a 1) to most common (assigned a 10). Even if a problem isn’t terribly harmful, if most users encounter it, it can be worth addressing.

4. Determine how severe the problems are.

Just because a problem affects a small percentage of users doesn’t mean that it isn’t a big deal. If a few users are getting the wrong car value and losing money on their trade-in, the negative experience can snowball as the dissatisfied customers tell their friends. In the FMEA, rank the severity of each problem from cosmetic/minor (assigned a 1) to causing task failure, loss of money, or loss of life (assigned a 10). The rankings are largely context dependent, so don’t get bent out of shape arguing over whether a problem is a 4 or a 5.

5. Identify how hard the problems are to detect.

Some of the most pernicious problems are the ones that are hard to detect. If a problem is harder to detect, it’s harder to diagnose and prevent. It’s like when you take your car to the mechanic because of the noise it’s making, but when the mechanic goes to listen, it’s fine.

For example, the wording of a condition of a car might confuse users trading in a certain model and year as well as those who have never traded a car in before. Or, the layout of an interface might only affect users on old versions of Internet Explorer with low resolution screens. Like the previous two steps, assign problems that are easiest to detect as 1’s and the hardest to detect as 10’s.

tip When in doubt, use a 5, and then change the rating when you feel more confident about how easy or difficult it is to detect. Using a 5 has the effect of neutralizing the detection aspect of the FMEA, and this gives more weight to frequency and severity.

6. Calculate the Risk Priority Number (RPN).

The RPN is found by multiplying the rankings of frequency of occurrence, severity, and detection (RPN = F × S × D). The higher the RPN, the higher the priority for addressing the problem.

Figure 13-5 shows three examples of possible problems in the car appraisal and purchasing process. Like the QFD discussed earlier in this chapter, the raw value is less important than the relative rank or the magnitude of the difference. In the example data, the “Estimated Car Value” function poses a higher risk than the “Finding if a car is available in a zip code” function.


Figure 13-5: Three example problems identified in the FMEA.

7. Identify the root causes of the failures.

Ranking what problems to address is one thing; finding ways to avoid them is another. For each of the undesirable outcomes, think of the root causes of the problem.

Sometimes the causes are obvious, while other times it can be helpful to use the 5 Why’s technique. Ask “Why?” until you uncover root causes instead of symptoms:

· Why are users getting the wrong car value?

They don’t scroll down to see the additional features section.

· Why don’t they scroll?

The options take up more than a single page.

· Why do they take up more than a page?

The screens are smaller to allow for more ads.

· Why so many ads?

These pages are among the highest visited on the site and ads generate a significant portion of revenue.
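Pulling the FMEA together, the RPN ranking from Step 6 can be sketched as follows, with hypothetical frequency, severity, and detection rankings for two of the car-appraisal problems:

```python
def rpn(frequency, severity, detection):
    """Risk Priority Number; each factor is ranked 1 (low) to 10 (high)."""
    return frequency * severity * detection

# Hypothetical rankings for two problems from the example
problems = {
    "Wrong estimated car value": rpn(6, 8, 7),               # -> 336
    "Car availability in zip code not found": rpn(4, 5, 3),  # -> 60
}
for name, score in sorted(problems.items(), key=lambda p: p[1], reverse=True):
    print(name, score)
```

Sorting by RPN gives the team a defensible order in which to attack the root causes uncovered with the 5 Whys.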