
Part III

Analytics for the Customer Journey

Chapter 9

Measuring Customer Attitudes

In This Chapter

· Recognizing the influence of customer attitudes on behavior

· Identifying the customer attitudes that should be quantified

· Measuring brand lift after a marketing campaign

Customers hold a range of attitudes, but when measuring a product or service experience, you can realistically quantify only a handful of them, including satisfaction, usability, trust, loyalty, luxury, and delight.

In Chapter 8, I cover the hierarchy of effects, which includes awareness, attitudes, and usage from a broad brand perspective. In this chapter, I focus in more detail on customer attitudes: customers’ ideas, sentiments, feelings, and beliefs toward a brand and product. I first cover the most common ways of measuring customer attitudes and then move on to constructing the right questions and scales.

Gauging Customer Satisfaction

By far the most common and fundamental measure of customer attitudes is customer satisfaction. Customer satisfaction is a measure of how well a product or service experience meets customer expectations. It’s considered a staple of customer analytics scorecards as a barometer of how well a product or company is performing. You can measure satisfaction on everything from a brand, a product, a feature, or a website to a service experience.

Satisfaction is judged relative to each customer's expectations of a product or service. If you're selling a low-priced car, a budget-conscious consumer may be more satisfied with it than with a luxury, high-end car, even though the less expensive car has fewer features. That customer's satisfaction comes mainly from value: the price relative to the features and quality offered.

remember Customer satisfaction is the first step of the customer journey. It leads to customer loyalty and recommendation. It's therefore a good idea to have some measure of customer satisfaction at each point in your customer journey. This includes everything from the product and its features to the buying process, service, and even how responsible you are toward your employees, shareholders, and the environment.

There are two levels of measuring customer satisfaction:

· General (or relational) satisfaction

· Attribute (or transactional) satisfaction

General satisfaction

Asking customers about their satisfaction toward a brand or organization is the broadest measure of customer satisfaction. It is often referred to as a relational measure because it speaks to customers’ overall relationship with a brand. It encompasses repeated exposure, experiences, and often repeat purchases.

To measure general satisfaction, ask customers to rate how satisfied they are with your brand or company using a rating scale. Figure 9-1 shows an example where participants were asked to rate their level of satisfaction with their bank, US Bank.


Figure 9-1: Ask your customers about their level of satisfaction.

Because customer satisfaction is such a fundamental measure for gauging your company’s performance with your customers, a number of firms offer a standardized set of satisfaction questionnaires and reports to allow you to compare your satisfaction scores with those of your competitors and industry. (This chapter outlines the benefits of using standardized questionnaires.)

One of the most common industry surveys of company satisfaction is the American Customer Satisfaction Index (ACSI; theacsi.org). ACSI uses a standard set of questions and surveys thousands of U.S. customers each year on products and services they’ve used. ACSI provides a series of benchmark reports across dozens of industries, including those for computer hardware, hotels, manufacturing, pet food, and life insurance, to name a few.

The ACSI reports enable you to see how satisfied U.S. customers are with your company. In some cases, satisfaction benchmarks are also provided at a more specific product level.

tip While you should always collect your own customer satisfaction data, data from third parties provides a more objective view of your brand and provides insights into former and prospective customers as well.

Attitude versus satisfaction

Although there is a slight difference between customer attitude and customer satisfaction, they are highly related and tend to predict customer loyalty (see Chapter 12).

tip Here’s how to remember the difference:

· Potential customers have an attitude toward a brand or product they’ve never used.

· Actual customers rate their satisfaction after having experienced a brand or product.

remember Customer attitude and satisfaction are often used interchangeably in practice.

For example, customers can rate their opinions toward Apple before ever being a customer (attitude), their level of satisfaction with Apple after making a purchase (general satisfaction), their satisfaction with iTunes (product satisfaction), and with syncing iTunes with their iPhone (attribute satisfaction).

Attitude

If you’re interested in the beliefs, ideas, and opinions of prospective customers, you have to measure attitudes.

For example, prior to evaluating customers on two rental car websites, participants were asked about their attitudes toward the most common U.S. rental car companies, as shown in Figure 9-2.


Figure 9-2: Have potential customers rate their opinions about your company.

tip One benefit of checking customer attitudes at the beginning of a survey is that you can screen out participants who have a very strong negative attitude toward your brand. While you don’t want to ignore these customers — in fact, you’ll want to follow up with them in the future — for this survey, you want to hear from prospective customers who are willing to use your product or service.

Attribute and product satisfaction

While customer satisfaction provides a broad view of a customer’s attitude, you’ll also want to find out whether or not your product or service is exceeding expectations.

To generate more specific and diagnostic measures of customer attitudes, ask about satisfaction with features or with more specific parts of an experience. This is often referred to as attribute or transactional satisfaction because customers are rating attributes (features, quality, ease of use, price) of a product or of the most recent transaction. Examples of attribute satisfaction include

· Check-in experience

· Registering

· Download speed

· Price

· Product (for brands with multiple products)

· Website

· In-store experience

· Online purchase process

· Product usability

To measure attribute satisfaction, use the same type of scale and questions as used to measure general satisfaction, but direct respondents to reflect on the specific attribute you’re interested in (the check-in experience, the search results page, the download speed).

tip In addition to collecting closed-ended rating scale data from participants, offer a space for customers to add a comment about their attitude. You can use these comments to help understand what’s driving high or low ratings. You can even turn these comments into quantifiable data (see Chapter 2 for how to do so).

Rating Usability with the SUS and SUPR-Q

Ease of use is usually one of the biggest differentiators in the customer experience: If people don’t find your product easy to use, they aren’t going to be very satisfied or loyal.

The process of measuring usability involves both observing customers using a product and measuring their attitudes toward it. You can get half the equation (customer attitude) through a survey. For the other half of the equation, turn to Chapter 14, where I cover usability testing in detail.

To measure attitudes toward usability, ask customers to rate how satisfied they are with the ease of use of a product or a more specific feature or experience (attribute satisfaction), or use one of two common industry questionnaires. These two questionnaires provide a reliable measure along with industry benchmarks (see “Writing Effective Customer Attitude Questions,” later in this chapter, for tips about developing survey questions).

System Usability Scale (SUS)

The most common measure of customer attitude toward usability is the System Usability Scale (SUS), developed by John Brooke in 1986. It consists of ten items that customers rate from Strongly Disagree (1) to Strongly Agree (5):

· I think that I would like to use this system frequently.

· I found the system unnecessarily complex.

· I thought the system was easy to use.

· I think that I would need the support of a technical person to be able to use this system.

· I found the various functions in this system well integrated.

· I thought there was too much inconsistency in this system.

· I would imagine that most people would learn to use this system very quickly.

· I found the system very cumbersome to use.

· I felt very confident using the system.

· I needed to learn a lot of things before I could get going with this system.

The word “system” in the SUS is replaced with the name of the product or website you are testing. Figure 9-3 shows the response option format of the SUS.


Figure 9-3: The response option format for the System Usability Scale (SUS) questionnaire.

To score the SUS, follow these steps (a code sketch follows the steps):

1. For odd-numbered items, subtract 1 from the user's response.

2. For even-numbered items, subtract the user's response from 5.

This scales all values from 0 to 4 (with 4 being the most positive response).

3. Add up the converted responses for each user and multiply that total by 2.5.

This converts the range of possible values to 0 to 100 instead of 0 to 40.

4. Average together the scores for all participants.
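
If you collect SUS responses in a spreadsheet export or a script, the scoring steps above are easy to automate. Here's a minimal Python sketch; the function name and input format are my own choices, not part of the SUS itself.

def sus_score(responses):
    """Convert one participant's ten raw SUS responses (1-5, in
    questionnaire order) into a 0-100 SUS score."""
    if len(responses) != 10:
        raise ValueError("The SUS has exactly ten items.")
    total = 0
    for i, response in enumerate(responses):
        if i % 2 == 0:   # odd-numbered items (1st, 3rd, ...): response minus 1
            total += response - 1
        else:            # even-numbered items: 5 minus the response
            total += 5 - response
    return total * 2.5   # rescale the 0-40 total to the 0-100 range

# The participant in the worked example later in this section scores 72.5:
print(sus_score([4, 1, 4, 2, 3, 2, 3, 1, 4, 3]))  # 72.5

For step 4, average sus_score across all participants in the study.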


What is a good SUS score?

A major benefit of using existing scales and questionnaires is that you can compare your score to a meaningful benchmark (as discussed in Chapter 2). The average SUS score from over 500 studies is 68, so a SUS score above 68 is considered above average and anything below 68 is below average.

The best way to interpret your score is to convert it to a percentile rank through a process called normalizing. The following figure shows how percentile ranks correspond to SUS scores and letter grades.

Figure: SUS scores mapped to percentile ranks and letter grades.

This process is similar to “grading on a curve” based on the distribution of all scores. For example, a raw SUS score of 74 converts to a percentile rank of 70%, meaning it has higher perceived usability than 70% of all products tested. It can be interpreted as a B-.

You’d need to score above 80.3 to get an A (the top 10% of scores). This is also the point where users are more likely to recommend the product to a friend. Scoring at the mean score of 68 gets you a C and anything below a 51 is an F (putting you in the bottom 15%).

The SUS is considered a technology agnostic questionnaire, meaning the items aren’t specific to any technology or platform. You can ask the SUS about physical products, software, mobile phones, websites, and even interactive voice response systems. For example, Intuit QuickBooks has a SUS score of 75 (above average usability), whereas Adobe Photoshop has a SUS score of 64 (below average usability).


For example, if a participant in a survey gives the following responses to the ten SUS items, his SUS score is 72.5.

Item        Raw Response    Scaled Response
1           4               3
2           1               4
3           4               3
4           2               3
5           3               2
6           2               3
7           3               2
8           1               4
9           4               3
10          3               2
Total                       29
SUS Score                   72.5

Standardized User Experience Percentile Rank Questionnaire (SUPR-Q)

For measuring customers' attitudes toward your website quality, including usability, use an eight-item questionnaire called the Standardized User Experience Percentile Rank Questionnaire (SUPR-Q; www.suprq.com). The SUPR-Q provides a reliable and valid measure of customers' attitudes toward the quality of a website experience. While a number of variables impact the quality of a website experience, previous research has identified four of the most common dimensions: usability, trust, loyalty, and appearance.

To administer the SUPR-Q, ask users to respond to seven of the eight items using a five-point Disagree to Agree scale (1 = Strongly Disagree and 5 = Strongly Agree). For one item (“How likely are you to recommend this website to a friend or colleague?”), users respond to an 11-point scale (0 = Not at All Likely and 10 = Extremely Likely). The following are the eight items in the SUPR-Q and what they measure:

· Usability

· The website is easy to use.

· It is easy to navigate within the website.

· Trust

· I feel comfortable purchasing from the website.

· I feel confident conducting business on the website.

· Loyalty

· How likely are you to recommend this website to a friend or colleague?

· I will likely return to the website in the future.

· Appearance

· I find the website to be attractive.

· The website has a clean and simple presentation.

A SUPR-Q score is generated by dividing the score on the 11-point likelihood-to-recommend item in half and averaging it with the scores on the other seven 5-point items.
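
In code, that scoring rule might look like this minimal Python sketch (the function name and argument order are my own choices):

def suprq_score(five_point_items, recommend):
    """Average seven 5-point SUPR-Q responses with half of the
    0-10 likelihood-to-recommend response."""
    if len(five_point_items) != 7:
        raise ValueError("The SUPR-Q has seven 5-point items.")
    return (sum(five_point_items) + recommend / 2) / 8

# Example: a participant who answers 4 on every 5-point item
# and 8 on likelihood to recommend
print(suprq_score([4, 4, 4, 4, 4, 4, 4], 8))  # 4.0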

Like the SUS, the SUPR-Q has a reference dataset; you can purchase access to this industry data to understand how well a website scores. A score of 50% means your website ranks as average, a score of 25% ranks below average, and a score of 75% ranks above average.

For example, Figure 9-4 shows the overall SUPR-Q scores for airline and air travel aggregator websites. The average score for this industry is 83%. United Airlines had a below-average score of 65% compared to Southwest’s above-average score of 91%.


Figure 9-4: SUPR-Q scores for airline and travel websites.

tip You can also find similar questionnaires that include industry data for products from ForeSee (ForeSee.com) and WAMMI (wammi.com).

Measuring task difficulty with SEQ

Sometimes you need more specific data than the SUS and SUPR-Q provide, such as how difficult customers find a particular task.

Ask your customers how difficult they found a task using the Single Ease Question (SEQ). It's a seven-point scale from Very Difficult (1) to Very Easy (7); scores above 5.1 indicate an easier-than-average task, and scores below 5.1 indicate a harder-than-average one. Figure 9-5 shows a sample SEQ.


Figure 9-5: The SEQ measures customers’ attitudes about the difficulty of an experience or transaction.

If you're interested in using the SEQ, check out Chapters 14 and 15, where I discuss it in more depth.
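
Scoring the SEQ comes down to averaging the responses for a task and comparing the mean to the 5.1 benchmark mentioned above. Here's a minimal Python sketch; the sample responses are made up for illustration:

# SEQ responses for one task (1 = Very Difficult, 7 = Very Easy)
responses = [6, 5, 7, 4, 6, 5, 6]

mean_seq = sum(responses) / len(responses)
print(f"Mean SEQ: {mean_seq:.1f}")  # 5.6

# Compare against the average SEQ score cited above
print("Easier than average" if mean_seq > 5.1 else "Harder than average")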

tip You can ask more than one question after customers attempt a task. However, similar questions about the perception of time, satisfaction, and confidence tend to correlate highly with the Single Ease Question, so typically only a bit more information is gained by asking more than one question. You need to decide if more is better — or just more.

Figure 9-6 shows the SEQ scores for making a reservation on three airline websites. Participants in the study found the reservation process on the American Airlines website more difficult than the one on the United Airlines website.


Figure 9-6: Perceived ease of making reservations on three airline websites.

Scoring Brand Affection

Emotions play an important role in how attached a customer feels toward a brand. While emotions are harder to quantify than other attitudes about experiences, there are some effective ways to measure these softer attitudes.

remember Emotions impact customer loyalty. See Chapter 12 for more about loyalty.

One way to gauge the emotional connection customers have with your brand or product is to ask questions about emotional attachment.

In the Journal of Consumer Psychology, Matthew Thomson and colleagues suggest measures of connection, affection, and passion.

Have participants rate, on a seven-point scale (1 = Strongly Disagree, 7 = Strongly Agree), how well each of the following ten adjectives, including “attached,” “delighted,” and “affectionate,” describes their feelings toward the brand:

· Connection

· Connected

· Bonded

· Attached

· Passion

· Passionate

· Delighted

· Captivated

· Affection

· Affectionate

· Friendly

· Loved

· Peaceful

You can get two measurements from these scales (a short scoring sketch follows this list):

· A specific measure for each emotional attachment: Average together only the items in each category. For example, if you’re interested in the passion category, average together the scores to Passionate, Delighted, and Captivated.

· Overall connection measure: Average together the scores in all categories.
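
Here's a minimal Python sketch of both calculations for a single participant; the ratings shown are made up for illustration:

# One participant's 1-7 agreement ratings for the ten adjectives
ratings = {
    "Connected": 5, "Bonded": 4, "Attached": 4,                   # Connection
    "Passionate": 3, "Delighted": 6, "Captivated": 4,             # Passion
    "Affectionate": 5, "Friendly": 6, "Loved": 4, "Peaceful": 5,  # Affection
}

categories = {
    "Connection": ["Connected", "Bonded", "Attached"],
    "Passion": ["Passionate", "Delighted", "Captivated"],
    "Affection": ["Affectionate", "Friendly", "Loved", "Peaceful"],
}

# Specific measure for each emotional attachment category
for name, adjectives in categories.items():
    average = sum(ratings[a] for a in adjectives) / len(adjectives)
    print(f"{name}: {average:.2f}")

# Overall connection measure: average across all ten adjectives
print(f"Overall: {sum(ratings.values()) / len(ratings):.2f}")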

Figure 9-7 shows the brand attachment toward airline and online travel websites. United Airlines customers had the lowest emotional connection (scoring around 4) compared with high scores for American Airlines (scoring around 5). The average for this group was 4.7 out of 7.


Figure 9-7: Brand connection to airlines and online travel websites.

Finding Expectations: Desirability and Luxury

Customer satisfaction in many respects is a measure of how well customers’ expectations are met.

tip If a product does what it's supposed to, for a reasonable price, and is easy to use, customers are generally satisfied. When you exceed customers' expectations, they go from being satisfied to delighted, and delight is what makes a product desirable. Customers who are delighted with your products or services keep coming back.

Desirability

Researchers at Microsoft developed a way to test a product's desirability. They identified a set of 118 positive and negative words that customers can select when describing their attitude toward a product, such as advanced, annoying, appealing, difficult, innovative, predictable, simplistic, useful, and valuable. Ask participants to select three to five adjectives they associate with a product or experience. Then calculate the ratio of positive words (for example, appealing, clean) to negative words (for example, busy, dull, frustrating). There are no published benchmarks on the percentage of positive words or the ratio of negative to positive words, so compare performance over time or against comparable products. At the very least, you want more positive than negative words.
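
The ratio calculation itself is simple. Here's a minimal Python sketch; the word sets shown are a tiny illustrative subset of the 118 words, and the selections are made up:

# Small illustrative subsets of the 118 reaction words
positive_words = {"appealing", "clean", "innovative", "useful", "valuable"}
negative_words = {"annoying", "busy", "difficult", "dull", "frustrating"}

# Adjectives selected by participants, pooled across the study
selections = ["appealing", "useful", "busy", "clean", "frustrating",
              "valuable", "innovative", "useful", "dull", "appealing"]

positives = sum(1 for word in selections if word in positive_words)
negatives = sum(1 for word in selections if word in negative_words)

print(f"Positive: {positives} ({positives / len(selections):.0%})")     # 7 (70%)
print(f"Negative: {negatives} ({negatives / len(selections):.0%})")     # 3 (30%)
print(f"Positive-to-negative ratio: {positives / negatives:.1f} to 1")  # 2.3 to 1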

tip You can find all the adjectives online at http://en.wikipedia.org/wiki/Microsoft_Reaction_Card_Method_%28Desirability_Testing%29

tip Chapter 13 covers the Kano model, which is another technique to help differentiate between features customers expect and those which delight them.

Luxury

Luxury products set themselves apart in the market through their ability to go above and beyond satisfaction to delight consumers. Delight is an emotional response, often described as a combination of surprise and joy.

This means that, especially in the luxury brand market, measurement tools need to distinguish between how consumers describe an experience cognitively and how they feel about it emotionally. One way of measuring customer attitudes toward perceived luxury is to have a set of representative customers rate a brand or product using a series of adjective pairs. This is part of a questionnaire called the Brand Luxury Index (BLI). Figure 9-8 shows a selection of the items in the BLI. Participants mark an “x” on the line between the adjectives when rating the luxury of an item or experience. You can find more information about the BLI in the Journal of Brand Management.


Figure 9-8: A selection from the Brand Luxury Index (BLI).

Measuring Attitude Lift

One way to understand how a product or service experience impacts customers’ attitudes is to measure lift — the difference between attitudes before and after the experience.

Earlier in the chapter, I introduce the idea of customer attitude as a measure that’s appropriate for both existing and new customers. To understand how an experience impacts attitude, you measure your customers’ attitudes before they use your product and again after they use your product. The difference is the lift in attitude. The lift can be positive or negative, where a negative lift means the attitude declined after exposure.

For example, in a study of the Enterprise and Budget rental car websites, customers were asked to rate their attitude toward each brand prior to and after renting a car through each website.

Figure 9-9 shows that the average brand favorability of Budget increased 12% after participants rented a car through its website (from 4.7 to 5.3). In contrast, brand favorability declined 15% after participants rented through Enterprise.com (from 5.3 to 4.5). I discuss more details about this study in Chapter 14.


Figure 9-9: Measure attitudes before and after a user’s experience.

Calculate your lift measures with these steps (a code sketch follows):

1. Subtract the pre measure from the post measure.

For Enterprise, this was

4.5 - 5.3 = -.8

2. Divide the difference by the pre measure.

-.8 / 5.3 = -.15

3. Multiply by 100 to get a percentage.

-.15 * 100 = -15%

The result is a 15 percent decline in brand lift (a negative lift) for Enterprise.
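
Here's the same arithmetic as a small Python function, using the numbers from the rental car example:

def attitude_lift(pre, post):
    """Percentage lift from a pre-experience rating to a post-experience
    rating. Negative values mean the attitude declined after exposure."""
    return (post - pre) / pre * 100

print(f"{attitude_lift(5.3, 4.5):.1f}%")  # Enterprise: -15.1% (a 15% decline)
print(f"{attitude_lift(4.7, 5.3):.1f}%")  # Budget: 12.8% (roughly the 12% increase)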

Asking for Preferences

Customers inevitably make choices between competing brands. To understand how your products stack up against each of your competitors’ products, have participants select which brand they prefer given a set of likely alternatives.

When measuring preference data, collect data on both choice and intensity. That is, you want to know which brand customers would pick if they had to, and how strongly they feel. You only need one question to get this data because the direction of the intensity (for or against one brand) includes the preference.

tip Collect preferences both before and after a product or service experience.

For example, in the rental car website study, 62 participants were asked to rate which rental car website they preferred (see Figure 9-10) after having a chance to rent a car through both websites.


Figure 9-10: Which site did customers prefer?

The distribution of the 62 responses is shown in the following table. For example, 20 participants selected seven (Strongly Preferred Budget). These participants showed both a preference for Budget and strong intensity toward it. Note that 7 other participants selected 5, meaning they preferred Budget, but only slightly.

Preference Rating    # of Participants Selecting    Preference
1                    7                              Enterprise
2                    5                              Enterprise
3                    1                              Enterprise
4                    4                              Neither
5                    7                              Budget
6                    18                             Budget
7                    20                             Budget

To compute the percentage of customers who preferred each brand, follow these steps (a code sketch follows):

1. Add all the customers who selected a 5, 6, or 7 as customers who preferred Budget (45 people).

Participants who selected 1, 2, or 3 preferred Enterprise (13 people).

2. Exclude the four participants who preferred neither Enterprise nor Budget.

This step leaves a total of 58 participants.

3. Compute the percentage preferring each brand.

45/58 = 78% prefer Budget and 13/58 = 22% prefer Enterprise.

4. Test for statistical significance.

Don't forget to check whether the difference could be due to chance. You can use the online calculator at www.measuringu.com/onep.php. There's less than a .01% chance this difference in preference is due to chance. See Figure 9-11.


Figure 9-11: Test if the proportion of participants preferring Budget is statistically significant.
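
If you'd rather run the significance test in code than through the online calculator, a two-sided binomial test against an expected 50/50 split does the same job. This sketch assumes you have SciPy installed (scipy.stats.binomtest; older SciPy versions name it binom_test):

from scipy.stats import binomtest

budget, enterprise = 45, 13      # participants preferring each brand
n = budget + enterprise          # 58, after excluding the 4 "neither" responses

print(f"Prefer Budget: {budget / n:.0%}")          # 78%
print(f"Prefer Enterprise: {enterprise / n:.0%}")  # 22%

# Probability of a split at least this lopsided if preference were really 50/50
result = binomtest(budget, n, p=0.5, alternative="two-sided")
print(f"p-value: {result.pvalue:.6f}")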

Repeat the same process for customers who expressed the most intensity toward each brand. Twenty out of 58 (34%) strongly preferred Budget compared to only seven (12%) who strongly preferred Enterprise. This difference is also statistically significant.

Finding Your Key Drivers of Customer Attitudes

Having all the data on customer attitudes is one thing; interpreting it so you know what to do to improve those attitudes is another. What you need to look at are your key drivers: the features and attributes of the product or experience that contribute the most to your brand reputation.

You can find your key drivers with two methods:

· Open-ended questions: Analyze the responses to your open-ended questions. These can give you a good idea of what customers think is most important.

You can group these comments into categories and rank each category's importance by the number of comments it contains.

remember With fuzzy concepts like satisfaction, usability, and luxury, always give participants space to write their own comments.

· Multiple regression analysis: Multiple regression analysis examines the correlation between the independent variables (attribute ratings) and the dependent variable (overall attitude) to determine which attributes contribute most to consumers' overall brand attitude. I cover multiple regression analysis in the appendix, but plan on enlisting the help of a statistician when you run one. A minimal setup sketch follows this list.
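
As a rough illustration of the setup (not the exact analysis behind Figure 9-13), here's how you might run a key driver regression in Python with pandas and statsmodels. The file name and column names are hypothetical:

import pandas as pd
import statsmodels.api as sm

# Hypothetical survey export: one row per respondent, 7-point ratings
df = pd.read_csv("survey_responses.csv")

attributes = ["does_job_quicker", "product_quality", "more_productive",
              "ease_of_use", "reliability", "trust", "value"]

X = sm.add_constant(df[attributes])  # independent variables: attribute ratings
y = df["overall_satisfaction"]       # dependent variable

model = sm.OLS(y, X).fit()
print(model.summary())                     # coefficients show each driver's impact
print(f"R-squared: {model.rsquared:.2f}")  # share of satisfaction explained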

For example, customers’ satisfaction with a web-based software product was measured by averaging customers’ responses to a product satisfaction question and a loyalty question, as shown in Figure 9-12. Customer loyalty is covered in detail in Chapter 12.


Figure 9-12: Two questions to determine customer satisfaction.

Participants also rated their level of satisfaction on 14 product attributes, including trust, reliability, specific technical features of the product, and usability.

The results of the key driver analysis are shown in Figure 9-13. It shows that only seven variables drive most of the customer satisfaction with the product. The vertical axis shows how important each variable is in predicting customer satisfaction; the numbers indicate how much each variable would increase customer satisfaction. For example, improving customer attitude toward being “more productive” by 1 point would improve customer satisfaction by .14 point.

The horizontal axis shows how satisfied customers are on the seven-point scale. The two biggest drivers of customer satisfaction are customers' attitude toward the software helping them do their job quicker and their perceived quality of the product. Both drivers have relatively low ratings (below 5 on a seven-point scale), which suggests that improving these two aspects of the product would have the biggest impact on customer satisfaction.

The R-Square value describes how well the combination of independent variables predicts the dependent variable (customer satisfaction). See the appendix for more information on the R-Square value. In this case, just seven variables predict 55% of the variation in customer satisfaction.


Figure 9-13: A key driver chart from multiple regression analysis.

Writing Effective Customer Attitude Questions

The process of measuring customers’ attitudes involves asking the right questions in the right way with the right scales. It can seem daunting, but here are tips to make the process easier.

· Look for existing questionnaires. While you may have to create some questions that are specific to your product, look first for questions that already exist in the published literature. Many existing questionnaires have gone through a process of standardization, which includes testing their reliability (how consistently customers respond) and validity (whether the questions actually measure what they're supposed to). If possible, look for existing questionnaires (like the SUPR-Q and SUS) that have benchmark data you can compare your score to.

· Don’t obsess over scale steps. Just pick a scale and stick with it. What matters, what makes a measure meaningful, is comparing the same responses to other products or over time.

tip Having more scale points is generally better (for example, seven versus five points); this matters most when a questionnaire has only one or a few items. The more items it has (for example, the SUS has ten 5-point items), the less the number of scale points matters.

· Avoid double-barreled questions. When writing questions, avoid combining two concepts in one question (for example, “How satisfied are you with the price and quality of the product?”). Is the customer rating quality or price? Even if the two concepts might elicit the same response, separate them into two questions.

· Be concrete and specific. While general satisfaction is high level, more specific attribute satisfaction requires clearly stating what you want customers to rate. Be sure customers know what feature you’re referring to, and avoid using jargon.

· Stay positive. Phrase questions as positives rather than negatives. Participants (usually in a hurry) often make mistakes when responding to negatively worded items, and analysts sometimes forget to reverse-score them. For example, here is a positively worded item:

· This website was easy to use.

And a negatively worded item:

· It was difficult to find what I needed on this website.

tip Pretest your questions with a small sample of participants. Look for inconsistent responses or ask participants if any questions confused them. If you plan to use a set of questions repeatedly, hire a statistician to help validate the phrasing and scales.