
Part III

Analytics for the Customer Journey

Chapter 10

Quantifying the Consideration and Purchase Phases

In This Chapter

· Identifying the consideration touchpoints

· Measuring the influence of customer and company touchpoints

· Tracking conversions and purchases

· Conducting A/B tests

Many times before customers make a purchase, they come across your company: they see your website or an advertisement, or they read a review.

This is the awareness step. When you get those potential customers interacting with your company — maybe they download a brochure from your website or sign up for your newsletter — those customers have moved into the consideration phase.

The places where potential customers interact with your company in the awareness and consideration steps are touchpoints (see Chapter 7). After you identify the touchpoints, you need to prioritize which ones have the largest influence on your customers’ decisions. Understand how each touchpoint affects those decisions, measure the strength of its impact, and identify what can be improved.

Identifying the Consideration Touchpoints

Touchpoints are the places where customers find out about your company and products. (See Chapter 7.) Touchpoints can be media driven by your company — ads on TV, radio, or newspapers, for instance. But touchpoints can also be customer driven — reviews on websites and social media.

Before you can quantify the consideration phase, you have to know where customers are being influenced. To do so, you need to understand all the touchpoints that influence customers as they decide.

Company-driven touchpoints

Company-driven touchpoints are what you probably think of as advertising. They’re generally paid for by the company, and they include more than TV ads:

· Broadcast and print media: Television, radio, newspapers, and magazines are still popular touchpoints for consumer brands with big marketing budgets.

· Direct mail: Postcards, catalogs, coupons, or anything that gets sent in traditional snail mail.

· Email newsletters: Your inbox is probably full of email communication from websites and companies that you’ve purchased from or signed up with, or that somehow just keeps showing up.

· In person: The in-store experience or interactions at places like a convention.

· Company and product websites: The company website is often one of the few touchpoints customers see. Product web pages are often linked directly from ads or from Google search results.

Customer-driven touchpoints

Customer-driven touchpoints are typically areas you don’t have control over:

· Friends and colleagues: Few things have a bigger influence on customer behavior than the opinion of a trusted friend or colleague. It’s this word-of-mouth marketing power that lies at the heart of the Net Promoter Score system (see Chapter 12).

· Social media: Sites like Twitter, Pinterest, Houzz, Facebook, and LinkedIn are electronic extensions of word-of-mouth from friends and colleagues. They have a way of taking one person’s experience and making it viral — affecting an enormous number of potential customers while leaving the company little control over the message or its reach.

· Bloggers and influencers: The bigger the product or service, the more experts are writing about it, from airlines, technology, and restaurants to fashion.

· Consumer reviews: Potential customers rely heavily on customer reviews and will often use Amazon’s reviews when considering a purchase (even on websites other than Amazon).

Measuring the Customer-Driven Touchpoints

The positive or negative experiences customers share about your brand and product can reach thousands of potential and existing customers on social media sites like Twitter and Facebook. Online services such as Brandwatch, Netvibes, Radian6, Sysomos, and Social Mention measure and report on these positive and negative sentiments. You can track the sentiments over time and associate changes with advertising campaigns, new product launches, or other events.

Even without these services, Twitter provides basic analytics on how many mentions, retweets, or replies a company, brand, or product receives. More advanced tools like Hootsuite will also tell you which links people click and how Twitter mentions grow over time.

While these companies will aggregate the data for you, you can also compute it yourself by monitoring your own social media streams:

1. Count all the times your product or brand was mentioned.

In Twitter, for example, you can search by hashtag (#), handle (@), or the name of the company or product.

2. Categorize the comments into positive, neutral, and negative.

Negative comments can include everything from a bad purchase experience to a high price or failed product.

For example, a positive tweet posted to Twitter for an airline might be “I can’t wait for @ajaxair to launch nonstop service to @curacaotravel from JFK. Just in time to save the summer tan #ILuvAjaxAir.”

And a negative tweet might be “Very sad @ajaxair Shifted our Xmas travel flight 10 hours today. Forcing $ plan changes that will cut into family time.”

3. Compute the percentage of positive and negative sentiments by dividing the number of positive (or negative) comments by the total number of mentions. (A small calculation sketch follows these steps.)

For example, if you count 10 positive, 85 neutral, and 5 negative comments (100 mentions total), 10% of the mentions are positive. Repeat this for the negative comments and graph the difference by week or month.
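Here’s a minimal sketch of that calculation in Python, using the example counts above (the category labels and list of mentions are just illustrative):

```python
# Minimal sketch: sentiment percentages from manually categorized mentions.
# The mention list below simply reproduces the example counts in the text.
mentions = ["positive"] * 10 + ["neutral"] * 85 + ["negative"] * 5

total = len(mentions)
positive_pct = mentions.count("positive") / total * 100
negative_pct = mentions.count("negative") / total * 100

print(f"Positive: {positive_pct:.0f}%  Negative: {negative_pct:.0f}%")
# Positive: 10%  Negative: 5%
```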

Figure 10-1 shows the proportion of positive and negative mentions of a hardware manufacturer over a six-month period. In January, the ratio was even at 14% positive to 14% negative, then it fluctuated a bit until June, when the negative percentage was more than 1.5 times the positive percentage (15% versus 9%).


Figure 10-1: The comparison of positive and negative comments on Twitter.

You can expect the changes to fluctuate due to random chance variation, so include confidence intervals (or run statistical comparisons) when making decisions. Figure 10-2 shows the graph updated to include 90% confidence intervals (see Chapter 2 for more about confidence intervals). When the intervals overlap, you can’t be sure the difference isn’t due to chance alone (for example, in Jan, Feb, and March). However, in June, the confidence interval error bars don’t overlap (notice the small gap), meaning the percentage of negative comments exceeds the percentage of positive comments.
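If you want to compute these confidence intervals yourself, here’s a minimal sketch using a standard normal-approximation interval for a proportion. The mention counts are hypothetical (the monthly totals behind Figure 10-2 aren’t shown), and the book’s figures may use a slightly different interval method:

```python
import math

def proportion_ci(successes, total, z=1.645):
    """90% confidence interval for a proportion (normal approximation)."""
    p = successes / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical June numbers: 500 mentions, 15% negative and 9% positive
print(proportion_ci(75, 500))   # interval around the 15% negative rate
print(proportion_ci(45, 500))   # interval around the 9% positive rate
# If the two intervals don't overlap, the difference is unlikely to be chance alone.
```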

tip It’s just as important to measure positive and negative comments via word of mouth. Turn to Chapter 12 to find out how to do that.


Figure 10-2: Positive and negative comments on Twitter with a confidence interval added.

Measuring the Three R’s of Company-Driven Touchpoints

To measure the effectiveness of company-driven touchpoints, including direct mail (postcards and coupons), email newsletters, TV and radio ads, and company websites, measure the three R’s: reach, resonance, and reaction.

Reach

Understand the total number of customers your advertising is reaching:

· For TV and radio, this can be estimated by market share and is usually provided by the media outlet or companies like Nielsen.

· For newspapers, magazines, and trade journals, this can be estimated by circulation rates.

· For online advertisements, use page views recorded in web server log files or Google Analytics (see the section on website analytics, later in this chapter).

Resonance

It could be that prospective customers saw your ad, email, or postcard and even recall some of its details. It’s important to understand how well the messages are resonating. Company websites are often the most-viewed form of communication and branding, and they play a major role in shaping customer decisions. Most websites convey important branding messages and the benefits and features of purchasing a product or service.

Ask a series of open- and closed-ended questions to gauge customers’ attitudes toward the message. See if customers can correctly list the benefits of what’s being offered and understand the value proposition the company is offering.

For example, I worked with the website team at PayPal after a redesign of its home page. The new home page introduced a major design departure by highlighting new offerings, as shown in Figure 10-3.


Attribution modeling

To improve brand awareness, companies need to get the word out using marketing methods such as advertising and word of mouth. Advertising can be online, on television, on the radio, or a number of other forms.

You can better understand how effective advertising campaigns are by asking participants in the branding survey where they heard of the product or brand (on their mobile phones, on a website, or on TV, for instance).

You can then attribute higher or lower brand awareness to various campaigns. For example, if participants report higher awareness and also report having seen a recent Facebook ad, then there’s at least a tenuous link to this campaign.



Figure 10-3: Test how design changes resonate with customers.

To measure how the new PayPal home page design resonated with prospective and current customers, we had several participants answer questions about both the old and new website designs.

Reaction

After understanding how the message works, measure whether customers’ attitudes and behaviors changed. Use the measures of customer attitude, satisfaction, and future intent to assess the reaction, as described in Chapter 9.

To measure behavior, measure customer purchases (if possible) by looking at revenue or the number of transactions at the stores or online (see the A/B testing discussion later in this chapter).

Measuring resonance and reaction

You can test resonance and reaction before or in conjunction with a company-driven touchpoint. For example, I worked with computer manufacturer Lenovo to understand the resonance and reaction for its new convertible laptop advertisement. It was a new concept and the advertisement would cover TV and the web, so we wanted to see how well it resonated with customers, and how it impacted their attitudes and their potential reaction (to purchase). We used this process to measure resonance and reaction, and you can too:

1. Measure existing attitudes, such as brand favorability or purchase intent, before showing a stimulus such as an advertisement or newsletter.

2. Show the stimulus (the advertisement or newsletter).

3. Measure the attitudes, comprehension, and intent to purchase after the exposure.

Look for changes in attitudes (see Chapter 9 for measuring lift) and how well participants’ attitudes and sentiments match the intended attitude.
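As a rough illustration, here’s a minimal sketch of Step 3: comparing purchase intent measured before and after exposure, using top-2-box scores and a simple relative-lift calculation. The ratings and 5-point scale are hypothetical, and Chapter 9 covers lift in more detail:

```python
# Hypothetical 5-point purchase-intent ratings (1 = definitely won't buy,
# 5 = definitely will buy) from the same participants before and after exposure.
pre_ratings  = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]
post_ratings = [4, 3, 4, 4, 3, 3, 5, 4, 3, 4]

def top2box(ratings):
    """Share of participants choosing the top two scale points (4 or 5)."""
    return sum(r >= 4 for r in ratings) / len(ratings)

pre, post = top2box(pre_ratings), top2box(post_ratings)
lift = (post - pre) / pre  # relative change in purchase intent
print(f"Pre: {pre:.0%}  Post: {post:.0%}  Lift: {lift:.0%}")
# Pre: 20%  Post: 60%  Lift: 200%
```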

Tracking Conversions and Purchases

A purchase is a conversion, but a conversion is not always a purchase. Potential customers often engage with a company in multiple ways during the consideration phase. These actions, such as signing up for the company’s email list or downloading a product brochure from the company website, can also be considered a type of conversion, or a micro conversion.

Like purchases, micro conversions that happen online can be tracked and quantified, giving you a clearer picture of the customer journey to purchase (see Chapter 7). These conversions also can be considered leads that can turn into customers. Giving potential customers in the consideration phase a way to engage with your business through micro conversions is one form of lead generation: the process of creating sales leads that might convert into sales. You can help nurture a lead that engages with your company through a micro conversion to make a purchase through additional communication, or touchpoints, along the customer’s journey to purchase.

In order to turn a potential customer who engages with your company website via a micro conversion into a paying customer, you first have to create micro conversions and a way to record them.

Tracking micro conversions

To start measuring micro conversions and ultimately conversions on a website, you need to set up an analytics tool. A number of analytics tools allow you to track website interaction. The most popular (and free) solution is Google Analytics (http://www.google.com/analytics).

Web analytics tools allow you to collect the following metrics on online interactions (a small calculation sketch follows this list):

· Page views: One of the most fundamental measures of engagement on a website is which pages customers are viewing and how many times those pages are viewed. This will become a key metric for tracking conversion opportunities.

· Average session duration: Average session duration is simply the total duration of all sessions on your site divided by the number of sessions. In other words, it’s how long, on average, visitors spend on your site. Sometimes also referred to as time on site, this metric can give you insight into whether visitors are spending a significant amount of time browsing your site or leaving quickly after arriving.

This metric can be measured over time to compare how visitor behavior changes. For example, if you add additional page content, you can compare average session duration before and after the change was made to determine if the new content had an impact on how long visitors were interacting with the website.

· Bounce rate: Bounce rate is the percentage of single-page sessions. This means that site visitors immediately left the website from the page they first entered without interacting with the page.

A high bounce rate, generally upwards of 60 percent, can be an indication that website visitors aren’t finding what they anticipated when they clicked through to your site. On the flip side, it can also be an indication that website visitors found the information that they wanted quickly and had no need to view additional content. A low bounce rate, generally below 40 percent, means that most website visitors are engaging with multiple pages on your website.

· Pages per session: Related to session duration is the metric pages per session, which measures, on average, how many pages on your website a visitor looks at during a visit. This can give you an indication of how visitors are spending their time on your website, not just for how long.

Like average session duration, pages per session can be compared over time to help determine if changes made to your website or marketing campaigns affect how much content visitors are interacting with.

· New versus returning users: This metric tells you what percentage of visitors to your site are visiting for the first time versus visiting for a second or subsequent time.

Looking at the new versus returning visitor metric can be helpful in analyzing user behavior and on-site engagement (as discussed in Chapter 4 on customer segmentation). You may find, for example, that visitors who have previously visited your website spend more time on-site and view more content than those who visit it for the first time. You may also find that returning visitors are more likely to move from the consideration phase to the purchase phase as they become more aware of your products and services and gain confidence in buying from you versus a competitor.
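Here’s a minimal sketch of how these metrics are calculated from raw session records. The session data structure is hypothetical; in practice, tools such as Google Analytics compute and report these numbers for you:

```python
# Hypothetical session records: pages viewed, duration, and new/returning status.
sessions = [
    {"visitor": "a", "pages": 5, "duration_sec": 320, "new_visitor": True},
    {"visitor": "b", "pages": 1, "duration_sec": 15,  "new_visitor": True},
    {"visitor": "a", "pages": 3, "duration_sec": 210, "new_visitor": False},
    {"visitor": "c", "pages": 1, "duration_sec": 40,  "new_visitor": True},
]

n = len(sessions)
avg_session_duration = sum(s["duration_sec"] for s in sessions) / n
bounce_rate = sum(s["pages"] == 1 for s in sessions) / n   # single-page sessions
pages_per_session = sum(s["pages"] for s in sessions) / n
new_visitor_share = sum(s["new_visitor"] for s in sessions) / n

print(f"Avg session duration: {avg_session_duration:.0f} sec")
print(f"Bounce rate: {bounce_rate:.0%}, Pages/session: {pages_per_session:.1f}")
print(f"New visitors: {new_visitor_share:.0%}")
```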

Creating micro-conversion opportunities

Even if your ultimate goal is to drive sales of your product, online or offline, think about what other actions your potential customers might take as they work their way through the consideration phase. Use your segments and personas to determine offers that resonate with customers’ pain points and needs. (See Chapter 5 for the scoop on setting up personas.)

You may already have some micro conversions on your website, perhaps a form that allows visitors to sign up for an email newsletter, or a phone number at the top of the page that will allow them to contact customer service or the sales team.

Other examples you might consider adding include:

· Web form for the user to request prices or general information

· Web form that allows the visitor to send an email to the company

· White papers, case studies, e-books, or other downloadable assets

· Digital product catalogs or brochures that can be downloaded

· A form that allows a user to request a brochure or catalog by mail

· Video tutorials or product demonstration presentations

· Calculators and other useful utilities

· Capability to create a user account on your website

Setting up conversion tracking

After you have created micro conversions on your website, be sure that you can track when visitors complete them. Google Analytics allows you to track micro conversions and sales through the setup of Goals.

Google Analytics Goals can currently track four types of actions a visitor takes on your website:

· Destination: Visitor reaches a specific page

· Duration: Sessions that last for equal to or longer than a specified amount of time

· Pages/screens per session: Visitor views a specified number of pages or screens

· Event: Action specified as an event is triggered (for example, a video is played)

Say, for example, that you have created an email sign-up form and a brochure download as micro conversions on your site. In each case, the site is set up so that the visitor reaches a confirmation page, thanking her for taking the action. You can set up two separate destination goals in Google Analytics: one that records each email sign-up and another that records a brochure download as a type of goal completion.

In addition to tracking micro conversions, you need to track online purchases. Google Analytics also offers e-commerce tracking that passes information about the purchased product, including quantity purchased, associated revenue, and tax and shipping costs, as well as how many times a user visited your site before completing a transaction.

Google Analytics provides extensive documentation on how to set up a profile for your website and how to create goals on its support site (https://support.google.com/analytics).

Measuring conversion rates

Conversion rate is the number of website visitors taking a desired action divided by the total number of visitors. Conversion rates can be measured for your entire website, a single page, or even for individual marketing channels that drive traffic to your website, like search engine listings or social media platforms. You can also track conversion rates for individual conversion or micro-conversion types.

Say you want to determine what the overall conversion rate for your website is over the past year. Here’s how you find your conversion rate:

1. Look at the total number of users who visited your site during the last 12 months.

2. Add up the completed online conversions.

3. Divide the number of completed online conversions by the total number of users.

For example, if you had 350,000 visits to your site last year and 3,500 of those turned into a purchase, you would divide 3,500 by 350,000 for a resulting conversion rate of 1%.

If those 350,000 visits had led to 7,000 conversions, your conversion rate would be 2%.
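As a quick sketch, here is the same calculation in Python (the visit and conversion counts are the examples from above):

```python
def conversion_rate(conversions, visits):
    """Share of visits that completed the desired action."""
    return conversions / visits

print(f"{conversion_rate(3_500, 350_000):.1%}")  # 1.0%
print(f"{conversion_rate(7_000, 350_000):.1%}")  # 2.0%
```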

Determining what a “good” conversion rate is depends on the type of website, the industry, and what customers are being asked to do:

· For signing up for an online newsletter, the conversion rate may be between 1 percent and 20 percent.

· If you’re asking people to make a purchase or donate money, expect the conversion rate to be small, usually less than 1 percent.

When I helped Wikipedia measure and improve its online donations, the conversion rate was below 0.1 percent.

What makes a strong conversion rate for your site is best measured against your previous conversion rate.

Measuring Changes through A/B Testing

The true test of company-directed messages is how changes to your message affect recipients’ behavior. Here are two common methods for measuring those changes:

· Usability testing is done ahead of time. It’s ideal for testing major new concepts with qualified participants in a study before you launch to your full customer base. This provides an insight into how designs will affect conversion rates and is covered in Chapter 14.

· Real-time testing involves running a live experiment with your prospective and current customers on a specific touchpoint. Savvy direct marketers have used this technique for decades. It’s used extensively on websites, online ads, email newsletters, and even email subject lines. It’s best used when you aren’t introducing major new concepts, but rather, want to tweak or optimize existing designs to improve conversions.

The approach is often referred to as A/B testing because there are usually two alternatives being tested: option A and option B.

tip You can test more than two alternatives (A/B/C testing). It’s called split testing (because different alternatives are split between your customers).

The idea behind A/B testing is that customers are randomly presented with one of two alternatives and in real time, you measure which alternative generates a better outcome. Outcomes in this case are one of a number of metrics that measure engagement, consideration, and purchasing. Typically only one variable is changed for each design (for example, a color, picture, or price). That enables you to properly attribute responses to the respective variable. (See Chapter 2 for a reminder on variables.)
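For a website test, one simple way to implement this random presentation is to hash a visitor identifier, which splits traffic roughly 50/50 while showing each returning visitor the same design. This is a minimal sketch with a made-up visitor ID scheme; commercial A/B testing tools handle the assignment for you:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to design A or B (about a 50/50 split)."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

for vid in ["visitor-1001", "visitor-1002", "visitor-1003"]:
    print(vid, "->", assign_variant(vid))
```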

Offline A/B testing

One example of A/B testing is direct mailing two alternative promotions in two different color envelopes (white versus manila) to prospective customers. If the offers ask customers to respond (by redeeming a coupon, for example), you can compare the response rates for the A and B envelopes to see which promotion is more effective.

For example, I worked with a national advertising agency that placed ads in weekly newspapers. The goal was to understand the increase (or decrease) in revenue from a coupon for Pier 1 Imports, a U.S. retailer. Newspaper readers in a handful of U.S. cities received the coupon, and the sales that weekend in those specific cities were statistically compared to results for cities that did not receive the coupon. We also controlled for differences in the market by accounting for same-store sales the prior month and year. We were able to see that the cost of the coupon discount was offset by the increase in sales.

Online A/B testing

While offline A/B testing gives you a lot of insight, there is a hard cost in printing and mailing different design alternatives, and it can take a long time to discover that people throw out one color envelope more often than the other. If a significant portion of customers toss the envelopes in the trash, you’ve lost money. Not a good thing!

Online A/B testing has no printing or postage costs, and the feedback is immediate. For those reasons, it’s a very popular way to optimize websites and online advertising.

By far, the most common type of A/B testing metric is the conversion rate described earlier in the chapter. As a reminder, a conversion can be anything from clicking a button, subscribing to email, or adding an item to a shopping cart, to making a purchase.

Here are the five steps to take to go from data to insights:

1. Determine your metric.

The two most common metrics are conversion (or micro conversion) and average order value for a purchase.

2. Find your variable.

Identify one salient element on the website: a button, headline, image, or layout. Choose two alternatives.

For example, PayPal tested two alternative designs of its Check-out button. Small changes resulted in major differences in conversions and average order value.

3. Randomize and test.

The secret to a successful A/B test is randomization: Randomly assign each customer who arrives at the web page to see either design A or design B.

warning It isn’t always possible to randomize designs; some companies run them sequentially (for example, Design A goes from Monday to Wednesday and Design B from Thursday to Sunday). Watch out for daily and weekly seasonality. You may have a different type of customer during the evening or weekends and running tests sequentially means you can’t tell if different conversion rates are the result of the designs or just the different customers.

4. Determine your sample size.

Differences in A/B test results are often very small, usually less than 5 percent or even 1 percent. For high-traffic websites, though, increasing conversion rates by even .1 percent can result in a substantial increase in sales.

For example, I worked with Wikipedia to help measure the best banner ads to increase donations. The differences in conversion rates were often less than .05 percent. But because millions of people viewed each website monthly, differences this small translated into thousands of dollars.

The sample size you need depends on the size of the difference you hope to detect (if one exists). The smaller the difference, the larger the sample size you need. Table 10-1 shows the number of customers you need for each design alternative.


tip For example, to detect a 1% difference in conversion rates (5% compared to 6%), you should plan on randomly assigning 12,856 participants to Design A or Design B. One approach to sample size planning is to take the approximate “traffic” you expect on a website and split it so half receives treatment A and half receives treatment B. If you expect approximately 1,000 page views a day, then you need to test for about 13 days. At that sample size, if there was a difference of 1 percentage point or larger (for instance, 5% versus 6%), then that difference would be statistically significant.

If you want to determine if your new application has at least a 20 percent higher completion rate than the older application, plan to test 80 people (40 in each group).

5. Compare the difference.

Don’t be fooled by randomness: Eyeballing the results isn’t enough, because a higher conversion rate could come from chance fluctuations alone. To determine whether two conversion rates are statistically different, use an A/B test calculator, like the one at http://www.measuringu.com/ab-calc.php (see Figure 10-4).

For example, if 100 out of 5,000 users click through on an email (2% conversion rate) and 70 out of 4,800 click through on a different version (1.46% conversion rate), the probability of obtaining a difference this large or larger, if there really was no difference, is 4% (p-value = .04). That is to say, it’s statistically significant — you just don’t see differences this large very often from chance alone. The input and output of the A/B test are shown in Figure 10-4. See the appendix for how to compute the statistical difference between percentages.
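If you’d rather compute the comparison yourself, here’s a minimal sketch of a two-proportion z-test, which reproduces the p-value in the email example above (dedicated calculators may apply small-sample corrections, so results can differ slightly):

```python
from math import sqrt
from statistics import NormalDist

def ab_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-tailed p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example from the text: 100/5,000 (2%) versus 70/4,800 (1.46%)
print(round(ab_test_pvalue(100, 5_000, 70, 4_800), 2))    # 0.04
```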


Figure 10-4: Use online calculators to test for statistical significance.

warning Just because you get results that are statistically significant doesn’t mean they are practically significant. Practical significance means the increase will have some practical meaning to the company or customers. For instance, a new design may increase the number of customers (statistically significant), but the increase might be so small (say, .01%) that it will have no noticeable impact on sales.

warning While your goal may be to optimize conversions (more click-throughs and purchases), don’t forget to consider the total revenue. In some cases, more customers will click through a design, but if it results in a smaller average order value, then you’re ultimately losing money even though more customers are converting!


The changing visitors of Wikipedia

I helped the Wikipedia team better understand the conversion rate data that they were monitoring during their donation period. Wikipedia is the online encyclopedia that is ad free and free to all. It raises money by having a donation drive a couple weeks each year. It’s therefore important to have as many customers as possible donate during that short window of time.

A lot of A/B testing of banners is used to see which messages and images elicit the highest response. The test banner that performs better is kept in place.

Wikipedia traffic is enormous: Hundreds of millions view it every day. That means statistical significance is reached quickly, usually within hours instead of days, as on many other sites.

One perplexing result the Wikipedia analytics team noticed was that a banner test would reach statistical significance, but the difference would no longer be significant as the day progressed. After double-checking their numbers, it turned out that as it got later in the day in the U.S., more international readers started coming online, and they reacted differently to the banners than U.S. visitors did. The conclusion was to split out banners by region. This illustrates the importance of understanding how your customers may change, even throughout the day!



Multivariate testing leads to surprising results

Testing multiple variables at the same time saves time, and more importantly, reveals the optimal combination of variables. For example, I was helping a credit card company understand which combination of elements on a registration form led to higher conversions (applying for a card was the conversion outcome). During early tests, the two variables were how much of the applicant’s Social Security Number (SSN) to ask for and whether to offer a 0% introductory interest rate. In separate A/B tests, we found that prospective customers, not surprisingly, were more likely to apply with a 0% introductory rate.

Also, we found that asking just the last four digits of customers’ SSN also increased their likelihood to apply. Both options cost the company money (in lower interest and higher costs to validate someone’s identity). However, we found that customers were actually slightly more likely to apply even when there wasn’t a 0% introductory offer when they were asked just the last four digits of the SSN. In other words, the interaction of these two variables led to a different optimal result than testing them separately.


Testing multiple variables

A/B testing typically involves measuring only one variable at a time, like a website button size, placement, or color to increase conversion rates. But design elements have a way of interacting with each other in unpredictable ways. It can be that one type of image works well with only a certain color button. Or perhaps a headline, image, and location increase the conversion rate, while another combination decreases the conversion rate. Software from companies like Adobe can be used to test and implement multivariate tests.
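Here’s a minimal sketch of what examining a multivariate test’s results might involve: comparing conversion rates across every combination of two variables to spot interactions. The counts are hypothetical, loosely modeled on the credit card sidebar:

```python
# Hypothetical application counts per combination of the two test variables.
results = {
    # (intro offer, SSN question): (applications, visitors)
    ("0% intro", "full SSN"):   (180, 10_000),
    ("0% intro", "last 4 SSN"): (240, 10_000),
    ("no intro", "full SSN"):   (110, 10_000),
    ("no intro", "last 4 SSN"): (250, 10_000),
}

for (offer, ssn), (conversions, visitors) in results.items():
    print(f"{offer:9s} + {ssn:10s}: {conversions / visitors:.2%}")

# If the "last 4 SSN" advantage is bigger without the intro offer than with it,
# the variables interact -- a result separate A/B tests would have missed.
```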

tip The Which Test Won website, https://whichtestwon.com/, provides plenty of examples of online A/B and multivariate testing. It’s surprising how people’s intuition can be a poor judge of what design combinations will actually increase conversions.

Making the Most of Website Analytics

Website analytics tools contain a wealth of data and metrics that can help you quantify your customers’ behavior and improve user experience. As you dive into the wealth of information that tools like Google Analytics provide about your users, here are some guidelines for making the most of them:

· Start with a goal. With so much information at your fingertips, it can be easy to get lost in the data. You may find that you’re spending a lot of time looking at information without taking action on it. The best way to combat this and make the data useful is to use the tool with specific goals in mind.

Write down one to three goals and use them to guide your time spent in Google Analytics. If you find yourself wandering off in another direction, remind yourself of your purpose for the current session and refocus.

· Use historical data to put current data in context. You can find out whether your current metrics are in line with, or significantly different from, previous time periods, which is incredibly helpful when deciding whether you need to make changes. Choose the date range you’re interested in — usually the previous period, previous year, or a custom date range.

· Use annotations. Annotations are an incredibly helpful tool for flagging unusual spikes or drops in website traffic and other metrics. You can note a higher number of conversions due to a limited time offer or a drop in conversion following a website change. When you review historical data, you can remember what may have caused the change in your data.

Set your annotations to allow others to see them. Shared annotations can save your team a lot of work while looking for the cause behind data irregularities.

tip Use annotations for any data irregularities that you are aware of, as well as for noting when marketing campaigns launch or your company receives earned media mentions, so that you and your team can easily tie performance changes back to your initiatives and other external forces.

· Use advanced segments. Advanced segments are a powerful way to isolate subsets of your website traffic for analysis. Google Analytics comes with predefined segments, as well as the option to build your own, in order to compare and contrast how different segments of visitors behave on your website.

For example, you may find that users who visit the site via organic search results stay longer on-site and view more pages per session than those who come through another channel, such as referral from a social media page.

· Create dashboards and shortcuts for views you use often. Dashboards are a way to quickly view data that you use frequently, such as bounce rate, user locations, and more. You can create up to 20 dashboards in your Analytics profile, with up to 12 widgets that display different metrics on each of them.

· Use alerts to stay on top of performance changes. If there is a sudden spike or drop in conversion rates or bounce rate, you need to know about it so you can address it.

· Examine your search data. Search data can be incredibly valuable: It lends some insight into the minds of users and can sometimes even tell a bit about their intent as they interact with your content. If you’re using pay-per-click (PPC), or paid search, to drive traffic to your website, you have a wealth of information about the keywords and related search queries that drove people to your site.