Freemium Economics: Leveraging Analytics and User Segmentation to Drive Revenue (2014)

Chapter 8. Growth

This chapter, “Growth,” explores all means by which a freemium product can expand its user base. The chapter begins with an introduction to strategic growth, which is a data-driven approach to optimizing a product’s reach through market research, and the degree to which users are retained in the earliest stages of product interaction. This retention focus is often accomplished with the onboarding funnel (sometimes called the tutorial funnel); onboarding is a highly analytical, highly granular process to ensure that the first few minutes of a user’s introduction to a product are optimized for retention. The chapter then leads to a discussion on paid user acquisition, which is the means by which a product is advertised. This section outlines the various components of an advertising system: the advertising exchange, the supply-side platform, and the demand-side platform. Paid search is also described in detail. The chapter concludes with a description of alternative user acquisition techniques and the means by which they are employed in freemium products. Alternative user acquisition encompasses traditional media advertising, search engine optimization, and the three forms of non-paid user acquisition: discovery, cross-promotion, and virality.

Keywords

user onboarding; tutorial funnel; demographic selection; paid user acquisition; supply-side platforms; demand-side platforms; paid search; advertising exchanges; mobile user acquisition; cross-promotion; virality; discovery

Facilitating a large user base

User acquisition and growth cannot be neglected in the freemium model. While the freemium business model is itself a component of a user acquisition strategy—a free product will have broader price appeal than a paid product—it is not the only tactic required in building a user base.

Product growth is difficult and necessitates much consideration, both before and after the launch. Strategic growth is the process by which a large user base is facilitated and recruited during the development phase and throughout the life cycle of the product. Just as fundamental use case decisions must be made early in the viral product development process to support future virality, so decisions about broad appeal must be made early in order to support future user base growth.

Strategic growth

Decisions in the earliest stages of development about a freemium product’s appeal may set the tone for the product’s eventual success. Scale is the fundamental pillar around which freemium products are built: the size of the user base must be large enough so that the small percentage of users who contribute to revenue constitute a meaningful enough number to justify not charging for the product. This has to be accounted for in the planning and design stages of the product.

After launch, strategic growth as a conceptual procedure constitutes a metrics-based, analytical approach to iterating upon the product’s marketing initiatives and early user experience with a singular focus on growing the user base. Just as product development is undertaken in iterations with a focus on engagement, virality, and revenue metrics, marketing is executed on the basis of rapid iteration of both the product and the marketing initiatives in response to growth metrics.

Strategic growth is essentially a quantitative framework for measuring and improving upon the product’s ability to acquire and retain users. Long-term retention is not strictly a growth concern; it is rather a function of the match between the user’s needs and the product’s fundamental use case.

Early stage retention, however, can generally be improved upon in iterations as mechanics present in the first session are enhanced and perfected. In the context of strategic growth, early stage retention is defined as the percentage of people returning to the product for a second session. In essence, strategic growth as a concept seeks to optimize the first session to produce a higher percentage of people returning for a second one.

Demographic targeting and saturation

The strategic decisions of greatest significance, made in the earliest stages of product design, likely revolve around demographic targeting. Targeting in this sense does not specifically relate to user acquisition, or which demographics will be advertised to. Rather, it refers to tastes—selecting the demographic groups to whom the product will eventually offer the most appeal.

Demographic targeting in the conception and design stage does not mold the end product’s ultimate use case; rather, it influences the product’s form, function, and general aesthetic to best fit the demographics likely to contribute revenue to that use case. And while universal appeal puts a product in the best possible position for reaching scale, the intersection of a use case and a competitive marketplace rendering a product appealing to every demographic is unrealistic. The demographics most likely to appreciate a product, and to contribute revenue to it, should be identified.

Demographic targeting will inevitably take place in a product’s life cycle; the question is what that targeting affects. If targeting is done early in the design and conception phase of product development, then the product can be crafted to appeal most strongly to the demographics likely to pay for it. If targeting is saved until after the product has been developed, it can be used only to optimize marketing campaigns.

Given a defined and fairly concrete use case, a product will have an intrinsic addressable market size whether or not the product is designed with the demographics comprising that market in mind. But developing an understanding of the market size, which is essentially the product’s saturation metric, during the design phase results in a product that is better suited to the demographics most attracted to it.

Waiting to consider the saturation metric until after the product is developed requires a marketing plan tailored to demographic groups yet adapted to a product designed for a broad market. The opposite approach—conceptualizing a product around a specific target demographic and fitting the marketing narrative to the product—produces a less convoluted marketing message and a product experience that better serves its core constituents.

Deciding upon the product’s target market from the inception of the design process and molding the product to meet that market’s tastes results in an optimized experience for the people who would have enjoyed the product most even had those considerations not been made. Waiting until the marketing process to engage in such deliberation merely delivers a substandard experience to everyone: the key demographic receives a product that has been deliberately made vague, thus not serving its target’s specific tastes and preferences, and the ancillary demographics receive a product with a fundamental use case that only obliquely meets their needs.

The success of the freemium model is contingent on massive scale, but no product use case has all-inclusive, worldwide appeal. When a firm operates under the assumption that a product cannot be found useful by every person on the planet, it can consider which demographic groups might find a product most useful early in the development process and deliver a product that has been more thoughtfully designed with those characteristics in mind. If that demographic consideration is only made with respect to marketing, then the degree to which the product satisfies the core demographics may be reduced.

The tradeoff, of course, is that other potential users may find a product more useful if it has been designed for universal appeal. But indulging that tradeoff—deferring demographic targeting to the marketing strategy rather than undertaking it in product development—means the design of the product caters, in part, to those least likely to contribute revenue, which runs decidedly counter to freemium principles.

A free product does not discriminate on the basis of price against any particular demographic group; it represents the utmost extent of accessibility. By designing a freemium product to accommodate the users who are most likely to capture delight from it, the developer merely increases the product’s potential to generate revenue.

Optimizing the onboarding process

The onboarding process is the point in freemium product use where the greatest number of users churn. This is due to any number of factors: a fundamental mismatch between the users’ needs and the product’s use case, a mismatch between the marketing message used to acquire the user and the user’s initial experience, early, aggressive tactics to monetize users or force them to surrender personal information, and too many others to articulate.

But the onboarding process also represents the point in the product life cycle through which the most users pass; it is the one part of the product that every user experiences. It therefore logically follows that improvements to the onboarding process yield the largest relative results in terms of the percentage of the user base retained. A 1 percent improvement to retention at the earliest stage of the product experience yields a larger absolute gain than a 1 percent improvement at some point further into the life cycle, simply because more users are exposed to that stage.

Optimizing the onboarding experience, then, involves defining each point in the first session, or any period starting with the very beginning of a user’s tenure, and tracking the degree to which users from the initial total user base have churned by that point. This is represented by and accomplished through the onboarding funnel graph. Each point in the onboarding process is represented by a bar, and the difference in height from one bar to the next represents churn out of the original cohort, which starts at 100 percent.

The onboarding process is optimized by a fluid progression of iterations focusing on reducing the vertical distance between bars on the onboarding funnel graph until the descent from start to finish is as small as possible. Generally, optimization begins by defining a starting point and an ending point to the onboarding process: if the onboarding process is defined as the entire first session, or some meaningful number of n sessions from the beginning of the user’s tenure, then the end point is represented by the beginning of session n+1.
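
A minimal sketch of how such a funnel might be computed, using hypothetical event names and cohort counts (real figures would come from the product’s own event data):

    # Hypothetical onboarding funnel: ordered events with the number of users
    # from the original cohort who reached each one.
    funnel = [
        ("session_start",    100_000),
        ("tutorial_step_1",   82_000),
        ("tutorial_step_2",   71_000),
        ("first_core_action", 55_000),
        ("session_2_start",   38_000),  # end point: beginning of session n+1
    ]

    cohort_size = funnel[0][1]
    for (prev_event, prev_count), (event, count) in zip(funnel, funnel[1:]):
        step_churn = (prev_count - count) / prev_count
        retained = count / cohort_size
        print(f"{prev_event} -> {event}: {step_churn:.1%} step churn, "
              f"{retained:.1%} of cohort remaining")

The bars of the funnel graph correspond to the counts above; each iteration aims to shrink a step’s churn percentage.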

The starting point of the onboarding process is usually defined as the beginning of the first session; if it isn’t (such as when a point exists in the first session before which users can’t reasonably be considered acquired), then loading times in the first session should be taken into account, as they generally have a large impact on early churn.

The start of the first session does not necessarily coincide with the initialization of the product, but what happens before it should not be taken for granted; even in products where acquisition is considered contingent on some event taking place, such as registration (without which the product is useless), introducing a segue between initialization and the acquisition threshold, one that offers some insight into the product’s value proposition, can have a meaningful impact on total churn. Users may need only a minor amount of convincing to decide to surrender personal information or otherwise take some required action before they can begin using a product.

Each iteration of the onboarding optimization process should focus on reducing the drop between events in the onboarding funnel. This is usually and most capably accomplished with A/B testing: one or more variants of an event that inspires a large drop in the funnel are introduced, and users are separated into different onboarding tracks, with each track containing a different version of the event. Once data is available, the variant that experienced the smallest drop is selected as the replacement (or, if none of the variants offer an improvement, the original is retained), and the process is repeated on another event.

It is important to note that A/B tests for the onboarding process should isolate only one event, not combinations of events at multiple points in the onboarding funnel. The results of two tests, each containing different combinations of multiple event test variants, can’t be compared easily, and the effect of an early event variant on a later event might not be predictable. Tests should run end to end and evaluate only one set of variants for a single event.
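
A minimal sketch of how a single-event test might be evaluated, with invented track names and counts; the only comparison made is the pass-through rate of the one event under test:

    # Hypothetical A/B test on one onboarding event: each track shows a
    # different variant; the variant with the smallest drop replaces the
    # original, and the original is kept if nothing beats it.
    tracks = {
        "original":  {"reached": 20_000, "passed": 13_400},
        "variant_a": {"reached": 20_100, "passed": 14_300},
        "variant_b": {"reached": 19_900, "passed": 13_100},
    }

    pass_rates = {name: t["passed"] / t["reached"] for name, t in tracks.items()}
    best = max(pass_rates, key=pass_rates.get)
    replacement = best if pass_rates[best] > pass_rates["original"] else "original"
    print(pass_rates, "->", replacement)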

Especially large drops between successive events in the onboarding funnel should be investigated through intermediary events; when investigating the reasons for a large drop in user progression, isolating its precise source before beginning the testing process generally saves time. Reducing the amount of product progression between tracked events helps to isolate the points of contention and eliminate ambiguity about user progress. And tracking more data is usually less labor- and time-intensive than implementing multiple rounds of tests, each of which requires a minimum amount of time to accumulate sufficient results.

Optimizing the onboarding funnel may be done continually from product launch through shuttering; it is not a task exclusive to the period immediately following a launch. Market and user base dynamics can, over time, change the way users experience the product upon initial adoption, and while the bulk of improvements to the onboarding process are usually made soon after product launch, awareness of user progression throughout the onboarding process is always worthwhile.

Optimizing product copy

The amount of written text that accompanies a product launch is often underestimated: advertising copy, the product’s description in platform stores and on the web, the text in launch and loading screens, and the content of emails sent to journalists, potential new users, and users within the developer’s existing product catalogue add up to a substantial amount of written material that can affect how potential and new users interpret the product. This text, often written in disjointed sessions, sometimes without an eye for overall thematic and stylistic cohesion, can be as significant in terms of user sentiment as the product’s graphical assets, which generally receive more scrutiny.

Product copy is any written text directed at potential users or new users that is used to describe the product. Product copy is most often associated with advertising, but it is used in a number of other materials, most of which fall under the umbrella description of marketing assets. And while ads may be tested to optimize performance, the rest of the materials comprising this group are often not. The fact is that these materials are not disparate, independent components of the product; they form a communicative whole and should be composed with that in mind. Contradictory or highly disjointed product copy distributed across a number of marketing assets can confuse and mislead potential users.

The first step in optimizing product copy for performance is compiling it all in a single location, ideally in a place the product can dynamically access, such as a database table, without requiring development resources to propagate changes. Centralizing product copy makes glaring contradictions or differences in tone across the component parts easier to spot; a general theme and common verbiage should be determined and applied across the entire portfolio of product copy assets. Stark differences in the quality of texts or the specific terms used to describe the product, which the user can see throughout the acquisition process and early onboarding process, can engender a perception of product ambiguity or simply confusion.

Once the product copy assets have been compiled, their individual levels of efficacy should be tested as contributors to an overall process in the same way the onboarding process is tested. Unlike the onboarding process, however, the chronology of the user’s exposure to product copy doesn’t follow a strict timeline. That is, a potential user may or may not have seen an advertisement for a product before arriving at a website for it. Likewise, a potential user may or may not have seen the product’s website before arriving at its entry in a platform store. As a result of this uncertainty, various elements of product copy are not A/B tested as a result of any level of drop-off in a funnel but rather of their own accord. Since chronology is ignored in the testing process, the pieces of copy can be A/B tested simultaneously, assuming the variants for each piece of copy adhere to the common verbiage determined in the writing process.

Because the purpose of unifying the theme and language of product copy is cohesion and not necessarily how well the theme or vocabulary perform, a separate A/B testing regimen might explore how well alternative elements perform. In such a test, it is best to compare only varying themes in the copy seen by everyone—usually a product description at the point of adoption, either in the product itself or on its listing in a platform store—and extrapolate the best-performing variant out to the remaining copy items.

Product copy is important to unify and test because, in most cases, it is the first aspect of the product a potential user is exposed to and because the distribution of product copy is neither predictable nor controlled by the developer. Spikes in product exposure, such as from platform featuring or coverage in a high-profile trade magazine or newspaper, can quickly bring a product to the attention of a massive number of potential new users. If product copy has been tested and optimized up to that point, the small conversion improvements achieved (in the 1 to 2 percent range) can result in tens of thousands of additional new users following an exposure event.
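
The arithmetic behind that claim is straightforward; with assumed, purely illustrative figures for an exposure event and a copy-driven conversion lift:

    # Illustrative only: a featuring event exposes the product page to
    # 2,000,000 potential users, and tested copy has lifted page-to-install
    # conversion from an assumed 4.0% to 5.5%.
    exposed_users = 2_000_000
    baseline_conversion = 0.040
    optimized_conversion = 0.055
    additional_installs = exposed_users * (optimized_conversion - baseline_conversion)
    print(f"{additional_installs:,.0f} additional new users")  # 30,000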

That said, there is a diminishing rate of return past a certain point for product copy testing, as there is with any testing regimen. Generally, testing a theme and one to three variants of each piece of copy is sufficient to optimize the copy portfolio to an acceptable standard. As the user base grows, the number of potential users exposed to product copy without any other contextual background on the product, such as exposure to a viral invitation, shrinks relative to the total number of people who have seen the product, and efforts at optimization are more effectively directed at the onboarding process.

Paid user acquisition

Paid user acquisition is a fundamental freemium development topic; growing a user base in a competitive marketplace more often than not requires performance marketing initiatives to seed the initial cohort and facilitate continued growth. These performance marketing initiatives require the heavy analytics structure described in Chapter 2 and an organization’s focus on lifetime customer value management, as outlined in Chapter 5.

User acquisition, then, is somewhat of a barrier to entry in certain market segments dominated by freemium products, where products compete not only on quality but also on the robustness of their support infrastructures. Large players can price small players out of a user acquisition market, thus making it harder for small players to initiate user base growth.

User acquisition and the various marketplaces in which users can be purchased—advertised to in an attempt to entice them to adopt a product—can essentially be thought of as commodity purchasing strategies. And indeed, users of freemium apps share much in common with the commodities traded on spot markets, with one fundamental difference: quality. Regulated commodity markets standardize the quality of commodities traded on them, whereas each user purchased through paid acquisition possesses unknowable marginal value (or LTV) to the buyer.

This presents a massive scale advantage in paid user acquisition: while the marginal value of users is unknowable, the value of the entire population per acquisition channel converges around a universal mean, and the closer to the population size a buyer can set purchasing limits, the more information the buyer has about the LTVs of the purchased users. And implicit in the ability to make large purchase orders on individual acquisition channels is possession of the expensive infrastructure already mentioned, which endows the large buyers with information about the purchased commodities (users) and the markets themselves, such as seasonality and intra-day price swings.

Competing in a product marketplace on the basis of paid user acquisition should be preceded by deep introspective recognition of the developer’s business limitations. If the developer cannot predict LTV, cannot afford to purchase a large volume of users on a daily basis, and does not possess the infrastructure to assess the efficacy of various networks, then participating in paid user acquisition beyond an absolute minimum level cannot reliably be done at a profit. A requisite exercise in a developer’s pursuit of strategic growth is recognizing its own disadvantages in a competitive market.

Misconceptions about paid user acquisition

The data-driven nature of the freemium model dictates the way certain functional groups within an organization interact with each other during the product design and development process. Because analytics is a revenue driver and not a cost center in the freemium model, it isn’t implemented after a product launch as a means to reduce losses. Rather, initiating analytics should coincide with the launch of the minimum viable product and run in parallel with product iterations as a means of optimizing the user experience and increasing revenue.

The develop-release-measure-iterate feedback loop can potentially be seen as an intrusion into the creative process of designing software by measurement and analysis. But enmity for the freemium design process is misplaced when applied to paid user acquisition. Paid user acquisition has nothing to do with the creative versus wholly data-driven design debate; it occurs outside the bounds of the freemium model and should always be determined by economic limits when pursuing an optimal revenue outcome. In any given situation, paid user acquisition is either profitable or it isn’t. There is no nuance.

That said, there are two common misconceptions about paid user acquisition that presume a relationship between the design process of a product and the necessity of performance marketing for growing a user base. The first misconception is that exceptional products don’t require paid user acquisition because they are viral by nature. While the most viral software products in the world probably require very little ongoing paid user acquisition, even highly viral products still require an initial seed of users with which to achieve virality, and the larger that initial seed is, the faster and more widespread the viral effects will propagate.

Every virality model is predicated on the same basic set of inputs: global k-factor, the virality timeline, and saturation. The total number of virally acquired users per period is a function of the global k-factor and the virality timeline; total user base growth is a function of saturation. If a product is truly viral—its global k-factor is greater than 1—then virality compounds user base growth by facilitating higher-order cohort viral conversions, as illustrated in Figure 7.17.
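
A simplified sketch of that compounding dynamic, assuming a paid seed cohort, a constant global k-factor applied once per viral period, and a hard saturation ceiling (the parameter values are invented for illustration and are not a model taken from the text):

    # Each newly acquired cohort virally converts k_factor further users per
    # period, discounted by how close the user base is to saturation.
    def viral_growth(seed_users, k_factor, periods, saturation):
        total = float(seed_users)
        newly_acquired = float(seed_users)
        history = [total]
        for _ in range(periods):
            room_left = max(0.0, 1.0 - total / saturation)
            viral_cohort = newly_acquired * k_factor * room_left
            total += viral_cohort
            newly_acquired = viral_cohort
            history.append(total)
        return history

    print(viral_growth(seed_users=10_000, k_factor=1.2, periods=8, saturation=1_000_000))

With a k-factor above 1, each period’s viral cohort is larger than the last until saturation begins to bite, which is why a larger paid seed accelerates the effect.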

Thus, truly viral products are the best candidates for paid user acquisition: not only does virality serve as a supplement to an acquisition budget (because it reduces the effective cost per acquisition of a user), but it increases the size of the viral invitation pool and creates a compounding dynamic. When the LTV–CPA spread is positive, virality doesn’t negate the revenue benefit of paid user acquisition or somehow render it unnecessary; it amplifies its positive impact on revenues.

The second misconception about performance marketing is that the money spent on paid user acquisition would always be put to better use funding additional product development. A less absolute version of this statement wouldn’t be contestable; but, as it is written, this belief contrasts with the core tenet of iterative, freemium product development, which posits that resources should always be allocated to the initiative of highest return. In some cases, that could be further product development. But to say that product development is always the most profitable recipient of resources is to ignore that these decisions should be justified by quantifiable support, made on the basis of projected revenue contribution, and not simply a bias for product development.

Measuring the revenue effects of product development versus paid acquisition (given an equivalent budget for both) requires two things: a quantitative framework for predicting revenue from a freemium product, and a reasonable assessment of how the marketplace for a given product will evolve over the proposed development timeline.

A quantitative framework for revenue prediction is something most businesses put together before undertaking any projects and therefore shouldn’t be difficult to acquire. Without a framework, ROI estimates can’t be made, which makes it impossible to finance projects (or financing is done haphazardly).

An understanding of the evolution of the marketplace in the coming weeks (or months, in some cases) is harder to come by and represents a significant risk. What if a competitor releases a new product in that time? What if cost of user acquisition rises precipitously? While these factors are not unique to the decision at hand nor to freemium developers, they are made more acute when a development team is deciding upon revenues now versus revenues in the future based on incomplete information. Most developers have a reasonably concrete grasp of their product’s current metrics; they can compile a sensibly accurate lifetime customer value for users who would be acquired today through paid acquisition. But it is impossible to do the same for users acquired organically (which is the assumption) in, for example, two weeks’ time.

So the decision to pursue product development over paid acquisition represents an admission that the organization believes it can extract more revenues from an improved product at some point in the future than it can from the current product, given the equal cost of product development and user acquisition and an understanding of the evolution of the product’s marketplace over the course of product development.

Put another way, determining whether a budget should be allocated to product development or user acquisition on the basis of revenue benefit requires an evaluation process. Product development may produce a more desirable outcome in some cases, or even in most, but to say it is always the most ROI-effective course of action is to ignore the necessity of sober, objective analysis in making decisions.

Advertising exchanges

Online display advertising, which is essentially any advertisement seen on a website, is the primary means by which publishers monetize their products on the web. A publisher, in this sense, is anyone who produces content and makes that content available on the Internet; in cases where that content is in high demand, advertising might be the publisher’s exclusive source of revenue. When that content serves a niche, advertising may merely supplement the revenues that come from charging for access to the content (such as subscription fees or per-item prices).

During the earliest stages of online advertising, publishers packaged and sold their own advertising inventory (the physical space on their websites that can be filled with advertisements) directly to advertisers on a bulk basis. Publishers used forward contracts, or agreements for the buyers to buy specific numbers of impressions (almost always on a CPM basis) at a predetermined price at a certain point in the future. This system is known as direct buying, and it still persists to some extent. It is a viable strategy when an advertising campaign’s demographic is less important than the advertisement’s context (for instance, on a website that can only display ads related to sports and must purchase those ads specifically and exclusively).

The direct buying model is labor-intensive for the publisher; it must employ a sales team to interact with purchasers, and negotiations over pricing and purchasing agreements must be undertaken manually. Likewise, the process is inefficient: large blocks of ad inventory may go unsold when there is a mismatch between the bulk amounts of impressions buyers are willing to purchase and the bulk amounts of impressions the publisher has available to sell. Given the overhead of making a sale, publishers under the direct buying model have no incentive to sell impressions in small amounts, as the transaction and operational costs could eclipse sales revenues.

For these reasons, direct buying has mostly given way to the ad exchange model. An ad exchange is a technology platform that seeks to match buyers (advertisers) and sellers (publishers) of advertising impressions in a centralized marketplace through real-time bidding, where ad impressions are sold individually in near-real time. The ad impressions are priced on the basis of various characteristics of the user, the content in question, and the time of day. That is, instead of purchasing impressions in bulk through a forward contract based on a predetermined projection of value, advertisers bid on each impression based on the market dynamics at that moment.

The ad exchange model orients the purchasing of impressions away from context and toward audience. The advantages of this model relate to optimization: because bidding takes place in real time, advertisers can more accurately control their spending, and because inventory is sold on a per-unit basis, publishers can prevent inventory from going unsold by decreasing their prices. The ad exchange model is not the exclusive domain of online advertising; rather, it has extended to nearly all forms of digital advertising, including ads on mobile platforms.

An ad exchange charges a fee to handle the transaction between an advertiser and a publisher. The ad exchange adds value to the process with the technology it provides to both parties and the liquidity it provides to the publisher (via access to a number of advertisers and the sales volume of individual impressions). When an advertiser elects to work directly with the ad exchange, that is, to directly manage the bidding process on ad impressions, the advertiser is said to have a seat at that exchange. Without a seat at an ad exchange, an advertiser must purchase ads through an intermediary, which might not offer full or direct access to the available impressions on the exchange.

Ad exchanges offer targeting capabilities to advertisers that are not necessarily restricted to the content on which the impression is being served. The most valuable targeting capabilities are historical user characteristics, or information about the user’s history or state that may assist the advertiser in determining whether or not that user is a good candidate for a specific ad. Some advertisers use these characteristics to engage in what is known as re-targeting, or advertising a product or service to a user whom an advertiser knows has accessed it in the past. User characteristics essentially provide a means of serving the best possible ad to a user on a per-impression basis.

The basic transactional structure underpinning the ad exchange model begins with a user engaging with a specific publisher’s content (which could be a website, a mobile application, etc.) where ad impressions are being sold. When the user opens the content, the publisher immediately sends the ad exchange three pieces of information: the specific piece of content the user is viewing, any information about the user the publisher is allowed to share, and the minimum price the publisher is willing to accept for the impression.

The exchange presents this information to advertisers, who then return two pieces of information to the exchange (unless they don’t wish to participate in the auction): their bid on the impression and the ad they would like to serve in that placement.

In a real-time bidding environment, the highest bidder wins the ad placement. After accepting the bid, the ad exchange passes the ad content to the publisher and handles the transaction logistics (payments, reporting, etc.). The entire process, from the user first viewing the content to the publisher serving the ad content in the placement, must be executed in a fraction of a second. An ad exchange of reasonable volume must therefore be capable of processing tens or hundreds of thousands of such transactions per second, which poses substantial infrastructure requirements. As such, only a few major ad exchanges exist.
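
A schematic sketch of that per-impression flow; every name and data structure here is illustrative rather than any real exchange’s API:

    # The publisher supplies the content identifier, shareable user data, and
    # a floor price; advertisers return a bid and a creative, or decline.
    def run_auction(impression, bidders):
        bids = [bid(impression) for bid in bidders]
        valid = [b for b in bids if b and b["bid"] >= impression["floor"]]
        if not valid:
            return None                        # impression goes unfilled
        winner = max(valid, key=lambda b: b["bid"])
        # The exchange passes the winning creative to the publisher and
        # settles payment and reporting out of band.
        return winner

    impression = {"content": "news/article-123", "user": {"geo": "DE"}, "floor": 1.20}
    bidders = [
        lambda imp: {"bid": 1.10, "creative": "ad_a"},   # below the floor
        lambda imp: {"bid": 1.85, "creative": "ad_b"},
        lambda imp: None,                                # declines to bid
    ]
    print(run_auction(impression, bidders))  # ad_b wins at its 1.85 bid

The sketch assumes the winner pays its own bid; exchanges may instead apply second-price rules, which are omitted here.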

No matter what the platform is, ads form the foundation of paid user acquisition in freemium marketing (except for paid search, which is discussed separately). On mobile devices, advertising for applications can take one of two forms, web-based ads and in-application ads, that can direct users to either platform application stores or mobile websites. On the desktop web, ads can also direct to websites or platform stores for desktop software.

The acquisition cost is the average amount of money paid for an ad that resulted in a user adopting the product. On mobile devices, this cost is sometimes quoted and charged on a cost-per-install basis; on the web, prices are often quoted and charged to advertisers on a cost-per-mille basis, meaning the advertiser calculates its per-user acquisition cost (cost per install) by dividing the total amount of money spent on an advertising campaign by the number of users acquired through that campaign.
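
As a purely illustrative example of that arithmetic, with assumed campaign figures:

    # Assumed figures: $25,000 spent on 5,000,000 impressions (a $5.00 CPM)
    # yielding 12,500 attributed installs.
    campaign_spend = 25_000.0
    impressions = 5_000_000
    installs = 12_500

    cpm = campaign_spend / impressions * 1_000
    cpi = campaign_spend / installs              # per-user acquisition cost
    print(f"CPM: ${cpm:.2f}, CPI: ${cpi:.2f}")   # CPM: $5.00, CPI: $2.00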

Demand-side platforms

The ad exchange model is presented as a system of four parties: the user, the publisher, the ad exchange, and the advertiser. But in reality, the system is often made of six parties, with intermediaries representing the publishers and advertisers to the ad exchanges. The intermediaries that represent advertisers are known as demand-side platforms, or DSPs.

A DSP automates the process of purchasing ad impressions while also making the process transparent. Without a DSP, an advertiser must sell ads through an ad network, or an entity that purchases advertising inventory in bulk directly from publishers and sells it to advertisers. Ad networks exist in two forms: blind networks, which do not provide contextual information about ad impressions purchased to advertisers, and vertical networks, which often represent publications they own and specialize in premium inventory. Ad networks require advertisers to purchase bulk impressions on a forward basis (meaning simply that prices are set now for quantities to be delivered in the future).

The advantages that DSPs offer are economies of scale in terms of overhead management and logistics, and the ability to optimize advertising strategy in real time. DSPs also offer tools that can help advertisers track their ads’ performance (through metrics like click-through rate), predict targeting efficacy, track historical data about the exchange, and unify external data sets with advertising market meta data. These all assist the advertiser with decisions about ad placement, optimal bid pricing, topical relevance, and other highly technical endeavors that small advertisers may not be equipped to undertake.

An additional advantage is that DSPs can offer advertisers access to multiple ad exchanges and manage performance across those exchanges. This increase in access to inventory gives the advertiser real-time bidding capabilities and allows the advertiser to set bids based on the goal of the advertising campaign, rather than on the expected value of the purchased ad impression.

Advertisers set these bids by establishing targets for effective prices, or the prices paid, on a per-mille basis, for individually purchased impressions. For example, the effective cost per mille (eCPM) is the average per-mille price of individually placed advertisements (since each placement is made individually in real time, not in bulk). Advertisers would have a difficult time achieving such effective-price goals without access to a large pool of ad impressions.
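
A small, assumption-laden illustration of the eCPM calculation, using invented per-impression clearing prices:

    # Prices paid for individually won impressions, in dollars.
    winning_prices = [0.0042, 0.0051, 0.0038, 0.0060, 0.0047]
    ecpm = sum(winning_prices) / len(winning_prices) * 1_000
    print(f"eCPM: ${ecpm:.2f}")  # eCPM: $4.76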

DSPs generally offer more transparency and sophistication with respect to targeting content and demographics, which is a key issue with ad impressions. DSPs can often semantically analyze and classify content before placing ads into it; this helps to match ads to the content they’re most appropriate for (and most likely to convert in).

Given that the principal value a DSP offers to an advertiser is its technology stack and analytics infrastructure, some high-volume advertisers opt to build technology platforms internally that exist essentially as proprietary DSPs. The advantage of an internal platform for managing advertising purchasing is the access to insight into the algorithms used to best match advertisements with impressions, which DSPs closely guard.

But operating an internal DSP, or advertising optimization framework, requires a seat on each exchange from which the advertiser purchases, and seats are often granted based not just on advertising volume (which generally must be on the order of millions of dollars per month to qualify), but also on the quality of advertising the advertiser brings to the exchange. Thus, the development of internal DSP functionality may not be feasible for all but the largest advertisers, who are capable of dedicating massive resources to infrastructure deployment and managing a sufficient volume of regular impression purchases to qualify for seats on multiple exchanges.

Supply-side platforms

The intermediaries that represent content publishers, or the parties selling ad impressions, are known as supply-side platforms, or SSPs. An SSP helps a publisher maximize its advertising yield, that is, maximize revenue in the process of selling a fixed number of commodities with fixed expiration dates.

Yield management is an interesting and fairly new focus within economics that attempts to model optimization scenarios for perishable goods that can be priced with flexibility. Yield management strategies have been employed to great success in the airline and hotel industries, which seek to maximize the revenue they generate by predicting demand, segmenting users, and implementing fluid price models. Industries and product groups that can benefit from yield management tactics exhibit five basic characteristics:

• Excess or unsold inventory cannot be saved or sold in the future. An ad impression that is not served to a user is not accessible again once that user leaves the content.

• The customer pool is stratified across multiple demand curves and price elasticity constraints. Some customers are willing to pay a price premium for what they deem to be a higher quality experience, and some customers make their decisions to purchase based solely on price.

• Inventory sales orders can be taken on a forward contract basis when future demand is uncertain. Content publishers can either sell their ad impressions ahead of time or wait to sell them in real-time on an exchange.

• Sellers are free to turn down purchase offers on the basis of price in expectation of higher future offers. Content publishers can accept forward contract purchase orders on their future ad inventory or reject them in anticipation of increased future demand.

SSPs assist publishers with the mechanics of optimizing their yield by analyzing patterns in demand and the behavior of advertisers who frequently make offers on their inventory. Since a publisher always has the option to sell fixed-term inventory to advertisers on a forward contract basis, SSPs assist publishers in determining how they can best structure their inventory sales between bulk forward contracts and real-time, per-impression exchange bids.

Although publishers commit to forward contracts in advance (and may incur significant penalties if they do not meet the volume terms of those contracts), at the instance of each impression, the publisher is presented with the choice of whether to place the impression on an ad exchange or to use it toward the fulfillment of an outstanding bulk contract. Thus, when contracts are open, the share of a contract that an impression represents serves as an opportunity cost against which the revenue gained from selling that impression on an exchange must be weighed, reduced by the demand risk of being unable to fulfill the entirety of the contract as a result of withholding the impression from the contracted party. Balseiro et al. (2011) present this as a decision tree undertaken by a publisher at the point of each impression, which is illustrated in Figure 8.1.

FIGURE 8.1 A per-impression advertising decision tree, as described by Balseiro et al.

Figure 8.2 depicts a similar model, adapted to the parameters of the freemium model and grounded in the logic of cross-promotion as described in Chapter 5. This decision tree extends the tree depicted in Figure 8.1 and takes into account the concept of user churn as a result of advertising; that is, when one product is advertised in another product, the possibility of churn from the source product must be compensated for by either a sufficiently high sale price or a sufficiently low predicted LTV.

FIGURE 8.2 An extended decision tree model compensating for user churn.
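
A rough sketch of the kind of per-impression choice Figures 8.1 and 8.2 describe, with the churn adjustment applied as an expected cost; the prices, probability, and LTV figure are assumed inputs rather than values given in the text:

    # Compare the net value of selling the impression on an exchange against
    # applying it to an open forward contract, each reduced by the expected
    # LTV lost if serving an ad churns the user out of the source product.
    def allocate_impression(exchange_bid, contract_value_per_impression,
                            churn_probability, predicted_ltv):
        expected_churn_cost = churn_probability * predicted_ltv
        exchange_net = exchange_bid - expected_churn_cost
        contract_net = contract_value_per_impression - expected_churn_cost
        if max(exchange_net, contract_net) <= 0.0:
            return "withhold the impression"
        return "sell on exchange" if exchange_net >= contract_net else "apply to forward contract"

    print(allocate_impression(exchange_bid=0.006,
                              contract_value_per_impression=0.004,
                              churn_probability=0.01,
                              predicted_ltv=0.20))  # sell on exchange

This omits the demand risk of under-delivering on the contract, which in practice would be folded into the contract side of the comparison.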

While they are similar to DSPs, SSPs also provide publishers with an analytics infrastructure and reporting services that they can use to analyze their own inventory and optimize their operations. This infrastructure allows publishers to gather data about the demand for their inventory, in real time, and set price floors accordingly. This infrastructure may also allow publishers to block certain types of ads from appearing alongside their content and help them predict demand in both the near- and mid-term.

Paid search

Paid search advertising is an auction-based advertising model used in displaying contextual ads on search engine results. Instead of bidding on target user demographics or specific placements, advertisers bid on keywords; when those keywords are searched for, the search engine operates an auction to determine which ads are displayed alongside results. Search engine results pages generally feature multiple ad placements, with the most valuable being “above the fold,” or above the point at which a user must scroll to see additional content on the page. Search engines may also syndicate their ad placements across third-party websites and feature ad placements in their freely available tools.

Paid search ads are almost always text-based and the search engine contextually analyzes them for relevance. Ads are represented as links to websites; clicking on an ad takes the user to the advertised website. Paid search ads are most often undertaken on a cost-per-click (CPC) basis, where advertisers set bid prices on ads based on the value of a click, but cost-per-action (CPA) and cost-per-mille (CPM) pricing options are also typically offered.

The paid search dynamic merges the DSP, ad exchange, SSP, and publisher roles into one, creating a three-party model: the search engine, the user, and the advertiser. Search engines typically offer a rich suite of tools to advertisers in order to target and optimize the advertisers’ campaigns; when an advertiser creates an ad, the advertiser provides ad copy, targeting guidelines (such as geography, language, and other restrictions), and a maximum bid per identified keyword. The market depth of a specific keyword at a specific point in time represents the number of advertisers bidding for access to ad impressions served alongside the keyword’s search results. When market depth is greater than 1, the search engine must apply ranking logic to decide how to allocate ad placements.

Jansen and Mullen (2008) identify GoTo.com, which was later renamed Overture and acquired by Yahoo!, as having developed the first paid search auction marketplace in 1998. The initial incarnation of this auction model took the form of a generalized first price (GFP) auction; the advertiser with the highest bid took the best placement and paid exactly its bid, the advertiser with the second-highest bid took the second-best placement and paid exactly its bid, and so forth. This led to inefficiencies in the marketplace, wherein advertisers would engage in time- and labor-intensive bidding wars that ultimately decreased the revenue generated by the advertising marketplace.

In February 2002, both Google and Overture launched paid search marketplaces structured around a generalized second price (GSP) auction model. Under GSP conditions, the highest bidder wins the auction but pays the bid put forth by the second highest bidder, the second-highest bidder pays the bid put forth by the third-highest bidder, and so on; if the number of bidders exceeds or exactly matches the number of ad placements, then the lowest bidder pays the minimum bid price (the reserve price). Usually some margin amount is added to the price charged to each bidder.

The purpose of the GSP, which is a form of what is known as a Vickrey auction, is to give participants the incentive to place bids based on what the object’s value is to them. Since no participant knows the bids of the other participants, they cannot game the auction system by strategically outbidding the others. An example of a GSP auction is presented in Figure 8.3. In the figure, the ad placements on the page are arranged in order of priority: placement 1 is worth the most, placement 2 the second most, and placement 3 the least.

FIGURE 8.3 A GSP auction model.
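
A minimal sketch of the GSP allocation just described, using invented bids; each winner pays the bid immediately below its own, and the lowest-placed winner pays the reserve when no lower bid exists:

    def gsp_allocate(bids, placements, reserve_price):
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        results = []
        for slot in range(min(placements, len(ranked))):
            bidder, _ = ranked[slot]
            price = ranked[slot + 1][1] if slot + 1 < len(ranked) else reserve_price
            results.append((slot + 1, bidder, price))
        return results

    bids = {"advertiser_a": 3.00, "advertiser_b": 2.25, "advertiser_c": 1.50}
    print(gsp_allocate(bids, placements=3, reserve_price=0.50))
    # [(1, 'advertiser_a', 2.25), (2, 'advertiser_b', 1.5), (3, 'advertiser_c', 0.5)]

Any margin added by the search engine, and the quality-score weighting described next, are omitted here for clarity.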

In May 2002, Google added an additional layer of complexity to the GSP model by introducing a quality ranking component to its ad placement ranking algorithm. This component is known as the “quality score,” and it takes into account four factors: the click-through rate of the ad in question, a measure of the perceived quality of the destination website to which the ad leads, a measure of the destination website’s relevance to the keyword being bid on, and the value of the bid itself. The quality score’s purpose is to improve the ad experience for the user.

Because the search engine in the paid search model assumes the responsibilities of the DSP, the SSP, and the ad exchange from the ad exchange model, some operational inefficiencies that emerge as frictions between moving parts are eliminated. For instance, the search engine is incentivized to offer advertisers robust analytics and targeting mechanisms in its suite of advertising tools to optimize the advertiser’s yield and encourage continued use. And because the search engine has full agency over ad impression supply data as well as user information, no tensions between the publisher and advertiser can arise over information asymmetries.

That said, because of the pervasiveness and reach of search engines and the competitive landscape in which they operate, the quality of the user experience plays a larger role in impression allocation in paid search than it does in other advertising models. Search engines are incentivized to favor the user experience over serving sub-optimal ad placements (in terms of ad quality, site quality, or relevance). This is because such a strategy preserves not only the search engine’s long-term advertising business, which is contingent on ad quality (given the nature of the advertising format), but also its positioning relative to other search engines. Thus advertisers with niche appeal may have trouble meeting aggressive campaign goals through paid search, as the number of placements available to niche topics is a function of the number of keyword searches executed within that niche.

Virality and user acquisition

Since the principal benefit of virality is non-paid user base growth, virality necessarily changes an organization’s paid user acquisition calculus with respect to growth goals and acquisition budgeting. As defined in Chapter 4, the k-factor describes the number of additional users an existing user virally introduces to a product; this number augments the aggregate effect of user acquisition and can therefore be used to derive a more precise projection of how many total users will be acquired in a campaign and at what individual price.

When reduced by the k-factor, the cost paid per user introduced into the product is known as eCPA, or effective cost per acquisition. The calculation to derive eCPA is fundamentally the same as the one used to derive CPA: it is the total amount of money spent on a campaign divided by the total number of users acquired by that campaign. The difference in the calculation of eCPA is the degree of separation considered: CPA is calculated on a direct acquisition basis, whereas eCPA takes into account the users introduced to the product indirectly through virality.

Because the calculations use the same set of inputs, eCPA can be calculated simply as a reduction of CPA by a product’s k-factor; that is, it is the degree to which the acquisition cost decreases as a result of the product’s ability to spread through viral mechanics. And since the k-factor cannot be negative, eCPA can never exceed CPA. The calculation for eCPA is illustrated in Figure 8.4.

FIGURE 8.4 The formula to calculate eCPA.

Adding the k-factor to 1 in the denominator of the equation incorporates the additional users acquired virally into the per-user price paid in the campaign. For instance, when the k-factor is 1 (i.e., each user introduces an additional user to the product virally), the CPA is effectively halved.
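
Put in formula terms, as a reconstruction of the relationship Figure 8.4 illustrates based on the description above:

\[
\mathrm{eCPA} \;=\; \frac{\mathrm{CPA}}{1 + k} \;=\; \frac{S}{D\,(1 + k)},
\]

where S is the total campaign spend, D is the number of users acquired directly by the campaign, and k is the product’s global k-factor. With k = 1, the denominator doubles and eCPA is half of CPA, as noted above.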

Another approach to adjusting the user acquisition dynamic to accommodate the k-factor is to use it to increase the calculated LTV of a product (as an effective lifetime customer value, or eLTV). This line of thought dictates that the revenue generated by users initiated into a product through viral mechanics by another user should be attributed to the original user. But three compelling arguments against this approach render the eCPA model more appropriate.

The first argument is that LTV should be segmented by various factors—geography, behavioral history, etc.—and the eLTV model works under the assumption that the profiles of users acquired virally match the profiles of the users who introduced them. This assumption is difficult to justify when considered against the potential means through which virality can take root: social media networks, word of mouth, and even chance encounters in public (such as on public transport). Given the variability in how viral mechanics produce user base growth, attributing a portion of one user’s expected revenue to another user as a result of viral introduction is impossible before the virally acquired users have actually been adopted.

The second argument is that, by attributing the expected revenues from one user to another, the organization engenders a dangerous opportunity to overestimate a product’s performance. Costs are concrete, as are user counts; even if the k-factor is overestimated (thereby producing an unrealistically low eCPA), because virality is generally calculated over a fairly limited timeline, the eCPA calculation can be audited and retroactively updated in short order. But LTV is a longer-term metric, usually realized over months; adjusting an unrealistically optimistic eLTV is impractical and could lead to imprudent budgeting and a false sense of revenue security.

And the last argument is that LTV is very specifically and narrowly defined as the estimated revenues a user directly contributes to a product. Augmenting LTV by a growth factor changes this definition and renders it more difficult to calculate, communicate, and use as a decision-making tool.

Mobile user acquisition

User acquisition on mobile platforms is undertaken under a different set of parameters than those on the web and through other mediums. While the mechanics and intermediaries of user acquisition on mobile platforms remain the same as those already identified, the dynamics of the system are much less precise. For one, far less information is knowable about a user on a mobile platform prior to a purchase than is known on the web.

Second, a user purchased from within a mobile application, rather than through advertising served alongside content, is purchased under certain quality implications; specifically, users are generally sold from an application only when they have not converted. These two fundamental differences between mobile user acquisition and acquisition through other channels are effectively represented by the law of large numbers and adverse selection.

In the context of mobile user acquisition, the term “selling users” refers to advertising mechanics within one application that prompt the user to install another. These advertising mechanics are operated by ad exchanges, and when a user clicks on such a prompt, they are removed from the application they are using and brought to the second application’s profile in the platform’s application store.

Whether or not the user installs the second application, the session in the first application has ended. Depending on the format of the agreement between the second application and the ad exchange, the second application may then pay the ad exchange for the user it received, and the ad exchange will subsequently pay the first application.

Mobile user acquisition and the law of large numbers

The law of large numbers states that the larger the number of experiments conducted under a static set of parameters and in an unmodified environment, the closer the average of the results converges toward the expected value of the experiment. In the case of mobile user acquisition, the expected value of the experiment, which is the purchase of a user, is unknown. Thus, the more users purchased on a specific network, the greater confidence the developer can have that the average LTV of those purchased users represents the true LTV average for that channel.
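
A toy simulation makes the point; the LTV distribution below is invented purely for illustration (most purchased users never pay, a few pay a lot), and the observed mean only settles near the channel’s true mean at large purchase volumes:

    import random

    random.seed(7)

    def simulated_user_ltv():
        # 95% of purchased users contribute nothing; 5% contribute varying amounts.
        return random.choice([0.0] * 95 + [5.0, 10.0, 20.0, 50.0, 120.0])

    for sample_size in (10, 100, 1_000, 10_000, 100_000):
        sample = [simulated_user_ltv() for _ in range(sample_size)]
        print(sample_size, round(sum(sample) / sample_size, 3))

At small sample sizes the observed average swings wildly; only the largest samples approach the distribution’s true mean (2.05 in this invented example).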

The opposite is also true: when a developer buys few users from a specific channel, the developer knows very little about the expected LTV of users on the network. When a user is purchased from a mobile ad network, the developer knows only three things about the purchase:

• The purchased user has a mobile device and has installed at least one application;

• The user’s mobile device model and geographic location are known (other information such as gender and age may be known on some networks); and

• The developer of the application where that user came from was willing to sell that user for a specific amount of money.

These three data points imply nothing about a user’s predilection for making in-application purchases. Considering the role subjective taste plays in whether or not a person enjoys a mobile application, mobile acquisition purchases are essentially made “blind.”

When a mobile developer acquires a user, it cannot predict how much money that user will spend in its application. The determinants of a user’s lifetime in-product spend are essentially random variables:

• The user’s current financial status (and expected near-term future financial status);

• The user’s preferences in terms of application category;

• The amount of free time the user has (and expects to have in the future) to engage with the mobile device; and

• The extent to which the user may find in-application, virtual purchases socially acceptable and financially responsible.

No mobile user acquisition networks filter for these variables because these variables are not measurable. (The last variable can be measured by proxy of past spend, but this data isn’t available when a user is purchased from a network.)

Thus the notion that any developer can predict, with any reliability or accuracy, the LTV of a single user purchased from a single channel is invalid, given the lack of insight into the characteristics of the user that dictate a predilection to making in-application purchases.

The layer of haze obfuscating the contours of the mobile user acquisition market is only penetrable by huge sample sizes, as dictated by the law of large numbers. And those huge sample sizes are expensive to acquire and maintain; only the very largest firms command the economy of scale necessary to build a system that can approach the full utilization of acquisition data in predicting LTV by network.

Mobile user acquisition and adverse selection

Adverse selection further complicates the economics of mobile user acquisition. As mentioned earlier, one of the few data points known about a user, immediately after the user is acquired by a mobile application, is that the user was put up for sale. Unlike on the web and in other media formats, where content is often monetized exclusively by advertising, in-application purchase is the dominant form of monetization for mobile applications. And an in-application purchase is zero-sum at the level of a single user: the purchases a user makes in one application are purchases not made in another.

Two circumstances exist under which an app developer would be incentivized to sell a user: (1) the application monetizes exclusively by selling users and does not offer in-application purchases, or (2) the application does offer in-application purchases, but the user in question is not considered likely to make purchases totaling more than the application could receive for selling the user (as discussed earlier in the chapter with respect to yield management).

The first case represents more applications, but the second case represents more users; that is, the majority of users sold on mobile acquisition networks are sold through the applications with the largest user bases, by the developers that have the most resources to analyze user behavior and predilection for purchasing. In other words, users bought from mobile acquisition networks are sold to those networks precisely because they are considered to have a low probability of generating revenue. This is adverse selection: the information asymmetry about users between buyers and sellers produces a situation where the users being sold are the ones least likely to be of value.

An open market should correct for such an information asymmetry; that is, demand for users on these markets should be dictated by the value those users provide, setting the average cost per acquisition at a market-clearing price in equilibrium. But this doesn’t happen in mobile user acquisition, because buyers do not act rationally.

A developer dependent on acquisition networks for user base growth (i.e., a developer whose application isn’t experiencing organic or viral growth) has no choice but to continue to buy users once it has started, whether or not the quality of those users is high. The developer also may not be able to distinguish between bad users and a bad application that doesn’t monetize, but in either case, the developer cannot do anything to allay its dependency on paid acquisition until it has improved its application through iteration cycles.

When a developer does not possess the analytics infrastructure to constantly iterate and test improvements to application mechanics and instead believes the quality of the acquired users is to blame for poor performance, it likewise has no option but to continue to acquire paid users in the hopes that the quality will improve. And even when a developer does have the infrastructure and business capacity for continual improvement, it cannot stop acquiring users unless the application reaches appreciable organic discovery potential.

Users available to be acquired on mobile networks are a blend of non-converting users from applications published by savvy developers with sophisticated analytics systems and users randomly sold from applications by developers that possess no analytics infrastructure; this blend is not revealed by acquisition networks. Lack of insight into this blend of users, and the adverse selection present when the seller can evaluate the revenue capacity of its users, creates a situation in which user acquisition on mobile platforms is essentially conducted, at low volumes of users, without any means of predicting a result.

Alternative user acquisition

As user acquisition describes a firm’s initiatives in recruiting users into a product, it is often considered strictly as a paid endeavor, as other pursuits of user adoption fall under the umbrella term of “marketing.” But in the freemium model, the development of a user base, from adoption to optimized initial experience, is simply the broad notion of growth: the procedures and strategies employed to acquire and retain users at the earliest point in the adoption funnel.

In this sense, and because the concept of growth is supported by a unified analytics infrastructure in the freemium model, paid and alternative user acquisitions are interrelated: they represent the same set of challenges and opportunities to compound growth, and both can be tracked, instrumented, and optimized. In applying the same analytical and return-oriented approach to alternative acquisition as is used in paid acquisition, a freemium developer can ensure that its resources are being allocated to deliver to the product what matters most: growth of the user base.

Above all, a freemium developer should orient its operations to achieve profitable, consistent, and reliable growth. One of the challenges presented by alternative user acquisition efforts is that they rarely provide the reliability and consistency required in making forward-looking budgeting forecasts, and their profitability can be muddied by the impression that alternative efforts do not incur costs. But the dynamics of freemium product development, and especially of the aggressive pursuit of product growth, assign a cost to every organizational action and inaction; for alternative user acquisition, that cost often takes the form of the opportunity cost of unrealized projects.

The goal of an organization’s growth team, then, is to implement analytics throughout its operations to such an extent that the best possible allocation of resources, with the goal of profitable, continuous, and reliable growth, can be achieved through thoughtful, well-informed analysis, regardless of where those resources go with respect to paid or alternative user acquisition.

Cross-promotion, virality, and discovery

Cross-promotion and virality have been covered at length in this text, and, in addition to discovery, represent what is generally considered to be organic growth, or growth that occurs without being explicitly paid for. But while these forms of growth can be employed without direct budgetary outlays (beyond development and testing time), they all bear fairly quantifiable opportunity costs that must be recognized and appreciated.

Discovery is the means by which users, of their own accord, find and adopt a product; it usually takes place through some sort of search mechanism, such as a search engine or search function on a platform application store. Users may discover a product by searching for it by name or simply by seeking out products with certain characteristics and then browsing the results.

Of the three sources of organic growth, discovery is the most authentically organic in the sense that it occurs with almost no direct influence from the product developer beyond the thought that goes into naming conventions and product descriptions. As a result, discovery is very hard to instrument and improve on; when a user proactively seeks out a product, the only aspects of the product that impact discovery are the product’s name and whatever information is available about the product through the search mechanism used. These pieces of information are so fundamental that the effects of changing them are hard to gauge and measure post-launch.

Testing the appeal of product names is possible by proxy—say, through measuring the click-through rates of advertisements, each one bearing a different potential product name—but the results can’t be meaningfully extrapolated onto live, post-launch performance. Product descriptions can be more easily tested and iterated upon, but given that product descriptions are usually most valuable when succinct and authentically descriptive of what the product does, the degree to which optimized product descriptions can drive additional discovery is fairly minimal.
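Where such a proxy test is run, a simple two-proportion z-test can indicate whether the difference in click-through rate between two candidate names is larger than chance would explain. The sketch below is a generic illustration with hypothetical impression and click counts, not a prescription for any particular ad platform.

```python
import math

def ctr_z_statistic(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int) -> float:
    """Two-proportion z-statistic for the difference in click-through rate
    between two ad creatives (e.g., two candidate product names)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / std_err

# Hypothetical campaign: name A drew 240 clicks on 20,000 impressions,
# name B drew 190 clicks on 20,000 impressions.
z = ctr_z_statistic(240, 20_000, 190, 20_000)
print(f"z = {z:.2f}")  # |z| above ~1.96 suggests a real CTR difference at ~95% confidence
```

Even a statistically clear winner in such a test says only which name draws more ad clicks; as noted above, the result cannot be meaningfully extrapolated onto live, post-launch discovery performance.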

Cross-promotion, as discussed earlier, is the process by which a developer entices users of one of its products to adopt another of its products. And while cross-promotion can be a powerful tool in growing the user base of a new product quickly, it often results in a zero-sum transfer of a user within the broader product system, not a net new user. Cross-promotion must thus be undertaken with an eye toward the revenue profiles of both products involved in the transaction; users should be cross-promoted to products in which they are considered likely to spend additional money, or to products that are complementary enough that continued use of the departure product can be predicted with some certainty.

Cross-promotion, then, while eminently capable of being analytically considered, presents a direct risk to the developer in terms of potential revenue loss: if a user transfers to another product and churns out due to a preference mismatch, the opportunity to capture additional revenues from the first product may also be lost. Defining a user’s tastes within a product the user has already interacted with is difficult enough; attempting to define them relative to a product the user has never used adds another dimension of complexity to the process.

Finally, virality presents opportunity costs to growth in the form of alienated potential users and premature exposure. As discussed earlier, aggressive virality mechanics can have a negative impact on non-users’ perceptions of the product, as the need for intrusive and flagrant measures to grow a user base is seen as the exclusive domain of products without obvious and unmistakable quality. In other words, conspicuous virality tactics can be interpreted as negative quality signals (which can potentially impede growth) by users who have yet to adopt a product.

Likewise, users may bristle and feel exploited when overly relied upon as a viral marketing channel, especially when they are used to propagate viral invitations without their explicit approval. A heavy-handed virality mechanic’s success in growing the user base must be measured as a function of its gross effect, which is difficult to quantify to begin with, but becomes even more so when users churn out of a product over issues of trust abuse. That said, the most viral products reap rewards from virality that far exceed the associated costs, but those products are viral by nature and not reliant on specific mechanics to drive growth.

Organic growth must be viewed as a force that is not unequivocally beneficial but rather one that introduces frictions to overall growth, frictions whose costs are simply not as apparent as the explicit costs of paid user acquisition. Even discovery, which is the most genuinely organic of the three approaches, introduces attendant opportunity costs to the product, especially when significant effort is expended on managing the degree to which a product’s name and description are optimized for appeal.

Because organic growth is difficult to predict, it can lead to models of growth that drastically diverge from reality and produce significant negative impact on budget projections. Overestimated organic growth is the most common misapprehension as it relates to these three channels—but especially virality—and it can lull a developer into a false sense of security, resulting in insufficient planning for paid acquisition.

Freemium growth forecasts are therefore most sensibly modeled using paid acquisition as the primary anchor of growth, with organic growth channels calculated as a certain percentage of users acquired through paid channels. This approach orients the model toward the factors that can be best predicted and offers fuller flexibility in revising models when market conditions (such as acquisition prices) change. Building forecasts primarily oriented around organic growth drivers requires making broad and fundamentally untestable assumptions about the performance of the product, rather than using historically auditable market conditions, and introduces an unfortunate element of uncertainty into budgeting decisions.
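A minimal sketch of such a forecast follows, assuming a fixed monthly acquisition budget, an assumed blended cost per install, organic installs modeled as a fixed fraction of paid installs, and a crude one-month retention factor; every parameter value is hypothetical and would in practice be drawn from historical data.

```python
# Hypothetical six-month forecast anchored on paid acquisition, with organic
# installs modeled as a fixed fraction of paid installs. All values are assumptions.
monthly_budget = 50_000.0     # acquisition spend per month ($)
cost_per_install = 2.50       # assumed blended cost per install ($)
organic_multiplier = 0.35     # organic installs per paid install (assumed)
carryover = 0.40              # share of the prior month's active users retained (assumed)

active_users = 0.0
for month in range(1, 7):
    paid_installs = monthly_budget / cost_per_install
    organic_installs = paid_installs * organic_multiplier
    active_users = active_users * carryover + paid_installs + organic_installs
    print(f"month {month}: +{paid_installs:,.0f} paid, "
          f"+{organic_installs:,.0f} organic, ~{active_users:,.0f} active users")
```

Because paid volume is the anchor, a change in market conditions can be reflected simply by revising the cost-per-install input, which is exactly the flexibility described above.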

Search engine optimization

Search engine optimization is the process by which publishers adapt and refine their content to increase its exposure to search engines. Search engine optimization has been a phenomenon on the web since the earliest days of natural search, as content publishers sought to increase the extent to which keyword-relevant searches delivered traffic to their websites. The concept has become important on mobile systems, as the traffic delivered to products in platform stores increases in significance.

No distinction is made between the types of searches on the web and those on mobile platforms; rather, distinctions are made between search engines and platform stores. Mobile web search engine optimization is essentially the same as desktop web search engine optimization. And as platform stores are generally unified across all devices on which they operate, optimizing for keyword search in a platform store usually reaps the same benefits on every device the store serves.

Search engine optimization is accomplished by tailoring content to appeal best to the algorithms powering keyword search. And while the specifics of keyword-matching algorithms are highly guarded by search engines as proprietary secrets, they all reward the same basic characteristics: relevance to the keyword being searched for, and perceived quality of the destination website.

Keyword relevance is fairly straightforward to measure and improve on; if the searched keyword appears frequently in a text, the text will likely be considered relevant to that keyword. Added emphasis may be placed on title keywords and subheading keywords; the text that is most prominent will likely be the most heavily associated with the content. Keyword density should be measured for the keywords being targeted; the denser a specific keyword’s presence in the content, the more likely that keyword is to be associated with it.
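Keyword density itself is trivial to compute. The sketch below measures the share of words in a product description that match a target keyword; it is a crude proxy for the relevance signal described above, not a model of any particular search engine's ranking algorithm.

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Share of words in `text` matching `keyword` (case-insensitive).
    A crude proxy for the relevance signal described above, not a model
    of any specific search engine's ranking algorithm."""
    words = re.findall(r"[a-z']+", text.lower())
    return words.count(keyword.lower()) / len(words) if words else 0.0

description = ("A free puzzle game with daily puzzle challenges, "
               "puzzle tournaments, and friendly leaderboards.")
print(f"{keyword_density(description, 'puzzle'):.1%}")  # roughly 23% of words are "puzzle"
```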

Perceived quality is a far more ambiguous determination, and it is also harder to curate for, especially on platform stores. Many search engines use the extent to which other websites link to the considered website as a proxy for quality, assuming that links reflect how much others find a website’s content interesting and informative. Likewise, various characteristics are taken into account to assess a website’s intent and reliability, such as how often the website is updated and the amount of text on each page, although video may be assessed as more valuable than text in some cases.

On platform stores, perceived quality is largely measured through the frequency and recency of updates and through product ratings by users. User ratings are eminently important on platform stores, not only because they affect search placement but because they have a profound impact on a potential user’s likelihood to download a product. And while user ratings are perhaps the most transparent and meritocratic means of evaluating a product, they also allow users to exert influence on the direction of product development and in-product pricing schemes.

Once a product is released, maintaining the status quo in terms of feature development becomes a real concern, as no product change will be universally accepted as good, yet even one negative rating can adversely affect a product’s discoverability (platform stores penalize it by listing it lower in search results). This phenomenon should play a role in freemium product development for platform stores: for the purposes of preserving user feedback ratings, incremental releases, even when introducing new functionality, are preferable to dramatic changes in the product’s feature set.

The number of downloads a product has achieved in platform stores is an element of perceived quality that can have an impact on its position in keyword search results. Popularity, for better or worse, is generally considered a measure of universal relevance; thus, as products gain prominence, they receive additional exposure in platform stores, creating a virtuous cycle of adoption momentum.

This is part of the logic behind “burst” product launches, or product launches seeded with a large paid acquisition campaign. Increased downloads are complemented by increased exposure in keyword search as well as increased prominence in league tables tracking platform store downloads, which bring additional attention to the product.

Specific pages of content on the web that are indexed and returned as search results by search engines are called landing pages. Landing pages can be updated, tested, and measured rather painlessly; often, a change involves nothing more technical than a file transfer or an update in a content management system. This ease of production and curation allows for evaluating optimization strategies against each other; multiple strategies (in terms of the structure of content and density of certain keywords) can be implemented across individual landing pages, and those that perform the best can be considered superior and implemented elsewhere.

Updating product descriptions on platform stores is often more tedious than it is on the web. But insight into how effectively content generates traffic from keyword searches on the web is broadly applicable to other platforms; a specific set of keywords or content that works exceptionally well on the web is more likely to perform well on other platforms than keywords or content that perform poorly on the web.

Even when a product is being developed for a platform store, search engine optimization on the web can be undertaken as a multi-platform agenda: specific strategies can be tested on the web, where the cost of experimentation is merely the time invested into crafting the websites, and the strategy that performs the best can be applied to platform stores, where a burst campaign budget will be disbursed.

Traditional media

Traditional media channels—radio, television, newspapers, and magazines—have served as capable means by which software-based freemium products have acquired new users. The fundamental disadvantage of traditional media relative to Internet-based media is, of course, the transmission disconnect: traditional media serves only to inform a person about the product, not to allow the person to immediately adopt the product. After viewing a traditional media advertisement, potential new users must still seek out the product on their own; conversion, therefore, tends to perform more poorly in traditional media than on Internet-based channels.

Related to the transmission disconnect, tracking traditional media is also difficult. Since users must go through discovery channels or visit the product’s website to adopt the product, it is difficult to distinguish between users who were influenced by an advertisement to seek the product out and those who found the product organically. Some traditional media advertisements supply product location information that has been modified in order to track the ad’s performance (such as a unique URL embedded in a QR code in print media), but these tactics can create confusion among potential users and are only marginally effective, at best.

Since traditional media advertisements can’t be accurately attributed, the most common technique for measuring their effects is simply comparing user adoption over two similar periods, one of which follows a traditional media campaign. The periods compared should be similar in length and in where they fall within the weekly and monthly cycles. For instance, if a traditional media campaign is run on the last Monday of a month, the analysis period following the campaign should be compared to the same period a month earlier (i.e., following the last Monday of the previous month), provided that monthly growth has been relatively stable.

If monthly growth was rapid, the comparison won’t be valid; in that case, comparing the period to the same period a week earlier, rather than a month earlier, would be more sensible but still prone to error. Likewise, if some event took place the preceding week that disrupted the normal level of user adoption, a week-to-week comparison would be rendered useless. The problem with comparing periods is that external influences on user behavior are difficult to understand, much less isolate. Using this method to gauge the effectiveness of traditional media campaigns may provide some basic insight in the case of an overwhelmingly successful campaign, but it is by no means an authoritative means of doing so.

Another method of attribution is the registration poll, in which users are asked during the registration process to select from a predefined set of options how they were made aware of the product. Running the registration poll ahead of the traditional media campaign will provide baseline values for each particular response (e.g., 20 percent of users discover a product through Internet ads) as well as a baseline value for the percentage of new users who respond to such surveys at all (assuming a response is optional).

Assuming the campaign’s target audience is as likely to respond to the registration poll as the product’s existing user base (which is not always a fair expectation), then upon the campaign’s launch, the baseline values should shift enough to provide clues into how many users came from the traditional media advertising.
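A worked example makes the arithmetic concrete. All figures below are hypothetical: a baseline share of respondents citing traditional media, a post-campaign share, an assumed poll response rate, and a count of new users registered during the post-campaign window.

```python
# Hypothetical registration-poll attribution for a traditional media campaign.
baseline_share = 0.05   # pre-campaign: 5% of poll respondents cite TV/radio/print
campaign_share = 0.12   # post-campaign: 12% of poll respondents cite TV/radio/print
response_rate = 0.60    # share of new users answering the optional poll (assumed stable)
new_users = 40_000      # new users registered in the post-campaign window

lift = campaign_share - baseline_share
attributed_respondents = new_users * response_rate * lift
attributed_total = new_users * lift  # extrapolating the lift to non-respondents as well

print(f"~{attributed_respondents:,.0f} attributed users among poll respondents")
print(f"~{attributed_total:,.0f} attributed users if non-respondents behave identically")
```

Only the lift over the baseline is attributed to the campaign, and the estimate is only as sound as the assumption that campaign-acquired users answer the poll at the same rate as everyone else.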

The lack of a clear attribution channel in traditional media is problematic because this lack prevents the developer from measuring the return on investment into that channel. If a user acquired from traditional media campaigns can’t be unequivocally identified, the efficacy of the money allocated to a traditional media campaign can’t be evaluated. The pursuit of scale in the freemium model, which is of paramount importance, is predicated on the forward momentum provided by the undercurrent of profitable marketing.

Executing marketing campaigns without insight into or regard for performance runs counter to the conceptual core of the freemium business model, which holds that heightened instrumentation and analytical capacity can deliver more revenue from a larger user base than can a premium price barrier from a smaller user base. Traditional media has long been the bastion of brand advertising, where large corporations established and frequently reiterated the connection between their brand and some wide category of good or service: soap or toothpaste or insurance, and so on. This is an effective means of advertising when that intellectual connection between a brand and its category of product is valuable as well as important.

But freemium products are not categories of goods that users are confronted with every day, multiple times per day, as in the grocery store or at the mall. Freemium goods are platforms and services, and since a freemium product’s price point of $0 reduces a user’s barrier to moving between products, users can easily make informed decisions around quality, rendering subconscious brand sentimentality impotent.

While traditional media is certainly not any worse an advertising medium than online channels, it is simply not suited to the freemium model’s data demands unless more transparent channels have been completely exhausted to the point of inefficiency. A freemium product exhibiting a high enough level of virality can reach the threshold of its viral networks without actually reaching its saturation point. In such a case, and given a high monetization profile, traditional media may be the only recourse by which to reach users in the product’s demographic targets who are not in the population reached by online paid acquisition channels.

A product with very broad appeal to demographic groups segmented across varying degrees of technology savvy could be a good candidate for traditional media. And in such a case, the profitability of continued reach, while difficult to measure precisely, would probably be ensured through the viral nature of the product.
