Freemium Economics: Leveraging Analytics and User Segmentation to Drive Revenue (2014)
Chapter 6. Freemium Monetization
This chapter, “Freemium Monetization,” provides a framework for integrating monetization mechanics—means by which users can contribute money to a product—into freemium products, given the parameters under which the freemium model operates: massive scale and low inherent conversion. The chapter begins with an overview of choice theory and vertical product differentiation, which posits that providing consumers with a large degree of choice increases the extent to which they feel their personal tastes are accommodated, given that a broad product catalogue is capable of serving preferences more precisely. The chapter then outlines a concept called the continuous monetization curve: the idea that the broader and more diverse a product’s purchase catalogue is, the better suited that product is to serving the needs of each individual user through purchases. The chapter then progresses into a discussion of data products, or products that are either composed of data, such as user-centric analytics, or marketed with the use of data-driven mechanics, such as recommendation engines. The chapter ends with an overview of downstream marketing, which is the process of keeping existing users engaged with the product using data and marketing techniques.
choice theory; reengagement; downstream marketing; recommendation engines; continuous monetization; analytics products; nonpaying users; user segmentation; vertical product differentiation
The continuous monetization curve
Successful monetization in a freemium product is contingent on a small percentage of users monetizing; therefore, the design of a freemium product’s catalogue of purchasable items must facilitate a workable LTV metric from the perspective of performance marketing. The LTV metric is a function of customer lifetime and typical spending behavior. But average monetization metrics, as with all average metrics in the freemium model, don’t provide much insight into the realities of product use.
It is highly unlikely (if not, in most cases, impossible) that users would make consistent purchases every day of their tenures within a product. Rather, users make purchases according to their needs at specific points in time and based on the appeal of the product. These two aspects of user monetization—user need and product appeal—form the foundation of a concept in the freemium model called the continuous monetization curve.
Choice, preference, and spending
The theoretical basis of the continuous monetization curve is that a product catalogue should be so complete that, at any given point in their tenures with a product, users are presented with a diverse and relevant set of potential purchasable items from which to choose. This catalogue should be composed of not only static, predefined purchasable items but also of dynamic purchasable items created specifically for the needs of a particular user.
The size of the range the LTV metric can take is a function of the size of the product catalogue: a small product catalogue necessarily limits the breadth of values the LTV metric can assume, given that a small catalogue of purchasable items doesn’t allow for a large number of combinations of purchases. Large product catalogues offer users choice; the larger the degree of choice afforded to a user, the more the user is given the opportunity to monetize. Keeping in mind that the vast majority of users in a freemium product will never spend money, the best way to create a dynamic in which the 5 percent can monetize to a considerable degree is to match those users’ every potential whim with a potential purchase.
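The relationship between catalogue size and the range of possible LTVs can be illustrated with a quick sketch. The prices below are invented, and `possible_ltvs` is a hypothetical helper that makes the simplifying assumption that each item is bought at most once:

```python
def possible_ltvs(prices):
    """Distinct lifetime values reachable by purchasing any subset of a
    catalogue (simplifying assumption: each item is bought at most once)."""
    totals = {0}
    for price in prices:
        # Each new item either is or isn't added to every existing total.
        totals |= {t + price for t in totals}
    return sorted(totals)

# A three-item catalogue yields only eight distinct lifetime values...
small = possible_ltvs([1, 2, 5])   # [0, 1, 2, 3, 5, 6, 7, 8]
# ...while each item added can as much as double the number of combinations.
larger = possible_ltvs([1, 2, 5, 10, 25])
```

With these particular prices every subset produces a distinct total, so the count of possible LTVs doubles with each added item; real catalogues with overlapping price points grow more slowly but follow the same combinatorial logic.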
This phenomenon unfolded in the supermarket sector in the 1980s. In their book, Discrete Choice Theory of Product Differentiation (Anderson et al. 1992), Anderson, de Palma, and Thisse describe two catalysts for the rise of the supermarket format and of the diversified product catalogue around which it operates: the preference consumers place on variety over time in consumption of goods on different occasions, and the specific tastes of individual consumers with respect to brands, flavors, size, etc.
The core principle supporting the large product catalogue adopted by supermarkets—which is also significant within the realm of freemium economics—is that customers are willing to pay a price premium for the products that best suit their personal, idiosyncratic tastes. For this reason, the supermarket model is better equipped to capture consumer revenues and, combined with the effects of economies of scale, has all but pushed the specialty food store model into extinction.
The dominance of the supermarket model has created an entire subset of academic research around vertical product differentiation (VPD) and competition, the general thesis of which states that the increased fixed costs associated with chain supermarket development—land acquisition, logistics infrastructure development, and inventory supply—create natural oligopolies by making it harder for competition to enter the market.
But freemium software product development isn’t subject to these same constraints; the marginal cost of selling and distributing a freemium product is $0. And while the marginal cost of developing an item in a freemium product’s purchase catalogue is not zero, it relates mostly to salary and overhead expenses.
This chapter examines the monetization curve in a freemium product, focusing on three elements: its construction and conceptual basis, its expansion through “data products” and purchase offers relevant to specific situations, and the concept of downstream marketing, or user-focused retention and conversion techniques. Each of these concepts contributes to how much a freemium product is able to monetize its users and deliver revenue.
What is the continuous monetization curve?
The continuous monetization curve describes the possible extents to which users can monetize; it is a reflection of the size and depth of the product’s catalogue of purchasable items. Each point along the curve is a measurement of the percentage of the total user base that will ever reach some specific lifetime customer value. When graphed, the y-axis represents percentages of the user base and the x-axis represents lifetime customer values, with the curve generally taking the form of a Pareto distribution, as seen in Figure 6.1.
FIGURE 6.1 The continuous monetization curve.
The y-intercept, or the percentage of users expected to hold the most common lifetime customer value above $0, sits at a value below the conversion rate when more than one lifetime customer value in the product is possible; this is because the area under the curve represents 100 percent of paying users, and it is unlikely that all converted users (i.e., those users who directly contribute revenue) would have the same lifetime customer value. In other words, the curve intercepts the y-axis at the most common LTV, which is generally the minimum level of monetization. But unless the most common nonzero LTV is the only possible nonzero LTV, the percentage of users holding that value will not match the conversion rate; it will be lower, given further possible levels of monetization.
The curve decays from the y-intercept for increasing values on the x-axis until it forms a very long, very thin tail for extremely high lifetime customer values. The curve intercepts the x-axis at some point (usually theoretical), beyond which a level of total monetization for any user cannot be predicted.
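As a rough sketch, the empirical curve can be computed from paying users' lifetime values by bucketing them and taking each bucket's share of all payers. The LTV figures below are invented for illustration:

```python
from collections import Counter

def monetization_curve(ltvs, bucket=1.0):
    """Share of paying users falling into each LTV bucket; non-payers
    (LTV of $0) are excluded, since the curve describes payers only."""
    payers = [v for v in ltvs if v > 0]
    counts = Counter(int(v // bucket) * bucket for v in payers)
    return {b: counts[b] / len(payers) for b in sorted(counts)}

# Most payers cluster at the lowest tier; a thin tail spends far more,
# producing the Pareto-like decay described above.
curve = monetization_curve([0, 0, 0, 1, 1, 1, 1, 2, 2, 5, 20])
```

Plotting the resulting dictionary (LTV buckets on the x-axis, shares on the y-axis) reproduces the decaying, long-tailed shape of Figure 6.1.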
The curve as described is actually discrete, not continuous (and, in fact, no curve of monetary values can be continuous, as monetary values are limited to two decimal points and are thus countable). Describing the curve as continuous instills a sense, within the product catalogue design, that every user who is willing to contribute monetarily to a product can be uniquely and individually served at a specific price point. This design principle is core to the freemium ethos and represents the strategic distinction between freemium and paid purchase strategy: when users believe that a product is accommodating their personal, idiosyncratic tastes, they are willing to pay a premium for that accommodation. It is this structural reality that allows freemium products to generate higher revenues than they would under other commercial models.
The continuity of the monetization curve could only be accomplished in a product that allowed purchases to be made at theoretical levels of fractions of a unit of currency, which would imply some form of time-based interest accumulation, perhaps in an in-product pseudo-economy. The point is that the freemium monetization curve (as described, with LTV values plotted against percentages of the user base) should approach continuous form, with the product catalogue being designed with the aspiration of continuity in mind.
Engineering continuity in a product catalogue generally requires the presence of dynamic products that are not strictly designed but materialize as collections of customized features existing across a large (or infinite) number of combinations. Thus, the product catalogue is not analogous to the physical catalogues that were mailed to recipients in times gone by, such as clothing retailers; in these catalogues, customers were exposed to a discrete, limited number of clothing choices and purchases were made from within that spectrum. In the freemium model, the spectrum of purchasable items approaches continuity as the customer’s ability to customize those products increases.
The most obvious customization option to fit the clothing retailer analogy is the option to choose a color (and, in fact, some clothing retailers allowed for this customization, offering a variety of color options for the clothing they had in stock). But if the retailers offered customers the ability to choose any color combination from the color spectrum, the size of the potential product catalogue would increase.
Likewise, if the retailer allowed customers to choose clothing size not based on predetermined configurations (such as small, medium, and large) but based on personal measurements in millimeters, the size of the potential catalogue would increase even more drastically. And if the retailer allowed the combination of these nearly open-ended factors (color and measurements), the number of items in this potential product catalogue would climb into the millions (given that possible dimensions were limited to reasonable forms), even with just a limited number of basic products.
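The arithmetic behind this combinatorial explosion is simple; the counts below are invented for illustration:

```python
# Hypothetical customization options for the clothing retailer analogy.
base_items = 20                  # core garments in the catalogue
colors = 256 ** 3                # any 24-bit RGB color combination
waist_sizes = 1400 - 600 + 1     # waist measured in millimeters, 600-1400 mm
lengths = 1200 - 700 + 1         # length measured in millimeters, 700-1200 mm

# Every combination of garment, color, and measurements is a distinct item.
catalogue_size = base_items * colors * waist_sizes * lengths
```

Even with only 20 base garments, these open-ended options yield a potential catalogue of more than a hundred trillion configurations, far past the "millions" threshold the text describes.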
This process—approaching continuity in the monetization curve by providing nearly endless customization and personalization options to a discrete set of core purchasable items—is what transforms the freemium product catalogue into a dynamic mechanism capable of serving users’ personal tastes to such a degree that they are willing to pay a premium to participate in the product economy.
Engineering a freemium product catalogue
Product differentiation has long been a key component of retail success—the wider the spectrum of products a store offers for sale, the more likely any given customer is to find the product that best matches his or her tastes. But a second competitive component to this strategy provides value; by increasing the depth of its product catalogue, a retailer takes on additional infrastructure, logistics, and real estate costs that increase the price of competing with the retailer on the basis of product diversity. Thus, the wide product catalogue, for the so-called “mega-retailers,” serves as a means of pricing competitors out of the market.
In their paper Dynamic Retail Price and Investment Competition (Bagwell et al. 1997), Bagwell, Ramey, and Spulber describe a three-part game in which mega-retailers enter a market, compete on the basis of price while investing in core infrastructure, and then settle into a permanent share of the market, with one firm usually dominating the rest. In this game, the mega-retailers are of equivalent quality (or magnitude of product diversity) at the outset, and customers make their decisions based solely on price. This is because price, in the case of the mega-retailer market, signals firm maturity and stability; the firms that are able to offer the best prices on goods are considered to be the best at managing their operations and are thus considered the most likely to operate for the long term (meaning the consumer won’t need to switch companies in the future, which could incur a cost).
In the freemium model, this dynamic changes. Infrastructure does not determine the size of the product catalogue beyond the minimal threshold to support any product catalogue, and the marginal cost of selling one unit of a digital product is $0. Similarly, the real estate costs incurred by mega-retailers are not barriers in the freemium model: the freemium “store” is distributed electronically, and thus it, too, incurs no marginal cost with an increase in the size of the catalogue of purchasable items. In the freemium model, the size of the product catalogue is not a competitive mechanism in the sense that it prices other firms out of the market; instead, it is purely a measure of quality—a measure of how capable a given firm is to meet a customer’s most personalized needs.
Engineering the freemium product catalogue, then, should be undertaken with the most individualized needs of the user in mind. Projecting quality, or a high degree of customization and personalization, allows users to evaluate the product as both enduring and relevant to their needs. Product longevity is perhaps even more important in the digital marketplace than in the physical one, as the cost incurred in switching from one digital product to another may be higher: digital purchases and accumulated progress are difficult to transfer, and when a product or service is shut down, they may be impossible to recover. Users are reluctant to use a product that appears immature and not market-tested because the time invested in such products will go to waste if they are shuttered. Thus the quality (diversity) component of the freemium product catalogue strategy is paramount; it gives users confidence that the product is worth investing their time in, which in turn increases early-stage retention.
Another advantage of the diverse freemium product catalogue is the increased ability to monetarily engage with users. A large, broad, nuanced product catalogue presents more opportunities for the user to extract value from the product with a transaction than does a smaller catalogue with fewer provisions for a personalized experience. To leverage the freemium model’s commercial advantages, users must be given as many opportunities as possible to have their needs and tastes fulfilled. A large product catalogue is the keystone of the broad monetization curve, the breadth of which allows the freemium model to monetize. It also facilitates and capitalizes on the extended degree to which users can engage with the product.
Engineering the continuous monetization curve, then, is a matter of building a diverse assortment of opportunities with which the user can extract value from the product. These opportunities can materialize in discrete, preconfigured products or in customization options that augment and differentiate the user experience at the individual level. They can also be represented in a combination of the two, with a core set of products that can be endlessly and infinitely customized, as in the clothing catalogue example cited earlier.
These opportunities to enhance the experience are instruments of delight: they allow the users to whom the product holds the most appeal to engage with it to whatever extent they want. And the more opportunities that exist, the longer and higher the curve can reach: more possibilities manifest in higher levels of monetization and more varied levels of monetization, which produces the monetization curve’s long tail.
Freemium and non-paying users
The 5% rule, as it is discussed in this book, may read as if it were absolute; it is not. Examples exist of freemium products that monetize far more than 5 percent of their user bases, especially products serving smaller niche markets in which they operate alone. The 5% rule is meant to describe a limit as the size of the user base grows very large; in other words, the products with the broadest possible appeal will experience the lowest levels of conversion.
At first glance, a scenario in which only 5 percent of users monetize might seem non-optimal; the scenario is often described as subsidization, whereby monetary contributions from paying users make up for the lack of contributions by non-paying users (NPUs). In this perspective, NPUs are a cost of doing business, an unfortunate reality of the freemium model that should be reduced to as insignificant a level as possible.
But the success of a freemium product is predicated on the existence of NPUs; indeed, when the product catalogue is optimized and the product is functionally sound, a low level of conversion indicates broad product appeal. NPUs do provide value to freemium products, but that value is indirect and thus requires a thorough understanding of the model in order to quantify it. But NPUs are neither unfortunate nor a component of a freemium product that should be actively minimized; rather, they are a natural and essential core component of any freemium product’s user base.
The value provided by non-paying users emerges in three ways. The first is in data, on which successful freemium implementation is dependent. The data generated by NPUs holds intrinsic value; when sophisticated statistical methods can be employed, the data can yield important insights into user behavior when the tendency to purchase is controlled for.
But even when these methods can’t be employed—when NPUs and paying users must be analyzed separately because the tools to isolate their shared behaviors aren’t available—the data generated by NPUs can be used to build segmentation models. Early behavioral segmentation allows the product team, at the very least, to group users into paying/non-paying segments that may dynamically alter the user experience. In other words, identifying a non-paying user is, for practical purposes, the equivalent of identifying a paying user.
The data generated by NPUs isn’t limited to the scope of product use. Many platforms allow users to review and rate products, with the best-received products receiving preferential treatment from the platform in terms of discoverability (e.g., the products with the highest ratings are displayed more prominently in the platform’s store). Such sentiment-related metadata, when it is flattering, can contribute significantly to a product’s user base growth, and it is not wholly dependent on monetization.
The second benefit non-paying users accord to freemium products is viral scale. Non-paying users increase the product’s viral “surface area,” or potential for generating viral growth: NPUs, while not contributing to the product monetarily themselves, may introduce friends and social contacts to the product who will.
Virality is an amorphous concept because it is difficult to measure and difficult to engineer; given its importance in scaling a user base, it should be accommodated with every conceivable mechanic. Enthusiastic product users need not uniquely and exclusively be paying users; a user may be highly engaged with a product but not contribute to it monetarily for any number of reasons. Discouraging NPUs from using the product reduces the extent to which the product’s user base can grow through virality.
The third benefit yielded by non-paying users is critical mass. Many freemium products require a minimum number of participants before they are useful; examples include platforms built around reviews, social sharing products, and location-based networks. For a product to become sustainable—to develop past the point that a lack of users threatens the utility of the product—every user who joins before critical mass has been reached, regardless of level of monetization, is not only valuable to the product but is essential to the product’s viability.
Even when network effects are not fundamental to a product’s success, that is, when any individual’s experience is not affected by the size of the user base, a large user base serves as a signal of product quality, regardless of what percentage of that user base monetizes.
This can be deemed the nightclub effect: nightclub patrons who haven’t paid anything for drinks still hold value to a nightclub because they serve as proof that the nightclub is worth visiting. When people enter a nearly empty nightclub, they tend to believe it isn’t very good and don’t generally stay for long; this doesn’t change if the few people on the dance floor are holding drinks.
While the three benefits accorded by non-paying users are undeniable—especially the viral benefit—they are quantified with a great degree of uncertainty. Luckily, the presence of NPUs does not detract from a product’s ability to generate revenue from paying users; in fact, NPU presence enhances it. When determining if NPUs should be presented with friction in an effort to dissuade their use of the product because of operating costs, the question is, does the presence of non-paying users provide more value (through the benefits identified) than the cost of accommodating them? For most software products, where the marginal cost of serving an additional user is almost nothing, the answer is yes, even though the value of the benefits of NPUs is difficult to precisely price. For products with a significant marginal per-user cost of operation, the question is rendered moot, as the freemium model is a bad fit for any product that experiences such a barrier to massive scale.
Ultimately, a freemium product should strive for as many non-paying users as possible to participate in its user base (given a steady rate of conversion) simply as a means of achieving scale. Identifying the benefits of NPUs is merely an intellectual exercise; because of the unique characteristics of the freemium model, it is understood that the vast majority of users will never contribute revenues to the product.
Revenue-based user segments
The monetization curve is more useful in driving design decisions in the abstract than it is as a reporting tool. Generating a monetization curve for a product based on real data is possible but difficult to graph and interpret, especially as a time series. A better means of achieving the same effect—monitoring the blend of the user base on the basis of LTV—is to group the user base into revenue-based user segments and track those over time.
Revenue-based user segments are often created using a whale (and lesser aquatic creatures) designation hierarchy. In a freemium product, a whale represents an active user who has surpassed a high (relative to the monetization curve), specific LTV threshold. Descending from this threshold are various other designations to indicate smaller thresholds; some commonly used examples are those of the dolphin and the minnow. These classifications are employed by the product team to describe users who have monetized to a specific level.
Sometimes the whale segment is further subdivided into whale species, which can be helpful when the monetization curve has a very long tail, making the threshold definition for a single whale segment seem arbitrary. Some common examples are the blue whale (the biggest whale species, corresponding to the highest-monetizing users), the orca whale (a medium-sized whale), and the beluga whale (a small whale).
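A minimal sketch of such a hierarchy, with invented LTV thresholds (the text prescribes no specific dollar values):

```python
# (segment name, LTV floor in dollars), highest first; values are illustrative.
SEGMENTS = [
    ("blue whale", 500.0),
    ("orca", 250.0),
    ("beluga", 100.0),
    ("dolphin", 20.0),
    ("minnow", 0.0),
]

def segment(ltv):
    """Return the first (i.e., highest) segment whose floor the LTV exceeds."""
    for name, floor in SEGMENTS:
        if ltv > floor:
            return name
    return "non-paying"
```

A user with an LTV of $150, for example, falls past the beluga floor but short of the orca floor, and is classified accordingly.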
Creating revenue-based user segments allows user monetization to be tracked over time in meaningful, intuitive depictions. User segments can be easily graphed by day in a stacked percentage bar chart (see Figure 6.2), which draws attention to the growth or deterioration of various segments over the lifetime of the product. The segments should be represented by the percentage of DAU existing at those monetization thresholds on a daily basis; for a product with a large, stable user base, these segments should settle into fairly stable values over time.
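The daily series behind such a chart reduces to computing, for each day, each segment's share of DAU. A sketch, with classification thresholds that are again invented:

```python
from collections import Counter

def classify(ltv):
    # Illustrative thresholds, not values prescribed by the text.
    for name, floor in (("whale", 100.0), ("dolphin", 20.0), ("minnow", 0.0)):
        if ltv > floor:
            return name
    return "non-paying"

def daily_segment_shares(daily_ltvs):
    """Map each day to the percentage of that day's DAU in each segment;
    each day's percentages form one bar of the stacked chart."""
    shares = {}
    for day, ltvs in sorted(daily_ltvs.items()):
        counts = Counter(classify(v) for v in ltvs)
        shares[day] = {seg: 100.0 * n / len(ltvs) for seg, n in counts.items()}
    return shares
```

Feeding each day's dictionary to a charting library as one stacked bar yields the kind of depiction shown in Figure 6.2.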
FIGURE 6.2 A chart tracking active users by user segment per day.
Wild fluctuations in the values of the segments are a sign that the monetization curve has not reached an optimal level of continuity (or that the user segment thresholds are too far apart, or too numerous at the higher end of the scale). A continuous monetization curve provides many opportunities to monetize between segment thresholds, but one that is not fluid will exhibit sharp, staggered declines; these declines manifest as volatility in the segmentation bar graph.
This volatility is a hindrance to product design because it doesn’t produce actionable insight; a steady, continuous change in the size of active segments speaks to holes in the product catalogue or declining interest in the product, but erratic changes can’t be easily interpreted in order to make decisions. Ideally, each segment should settle into a consistent range of values, changes in which can be taken as alerts that the product catalogue requires updating (e.g., a decrease in a segment at the lower end of the monetization curve means less-expensive products are losing appeal).
Some criticism has been leveled against the practice of grouping users into segments named from the animal kingdom. The practice originated in the gambling industry, wherein spendthrift gamblers were classified as “whales” and treated to an improved experience by the casino to induce further spending.
While taking cues from the gambling industry may be viewed as adopting conventions from a commercial enterprise that is largely seen as exploitative, the specific labels applied to user segments are irrelevant: the term whale could be replaced with highly engaged user and the purpose of segmentation, which is deriving data that assists in optimizing the user experience, wouldn’t change. To that end, grouping users into segments named after animals is no different from the myriad other techniques employed by product teams with the goal of creating a personalized, enjoyable experience for each user.
Data products in the freemium model
The rise of Big Data and ubiquitous instrumentation in consumer products has led to the emergence of a layer of meta-products that capture information about behavior and trends and allow the information to be leveraged to enhance the user experience. These meta-products are called data products, and they have become a staple of large-scale software platforms that generate huge volumes of data artifacts. The insight produced by such volumes of data is not only of consequence to the developers of products for feature design purposes, it is also generally valuable to users, who can utilize the insight to self-direct their own behavioral optimizations.
The best examples of this type of insight being valuable to the end user come from behavioral audits or records mining on platforms that offer only data products. Some personal finance products, for instance, connect directly to users’ bank accounts and look for expenditure patterns over long time periods that the user may not have been cognizant of; others connect to users’ social networks and relay information about the size and scope of the sub-networks found within. The value of such products is derived wholly from preexisting data.
Freemium data products can serve to monetize users without presenting friction. They are generally valuable to the most engaged users; they enhance the core experience and allow for it to be improved through product awareness that is available only with direct access to user data and a robust analytics infrastructure, neither of which users generally have at their disposal. In this sense, data products can be seen as more of a service—the ability to access and use an analytics platform—the scale and expense of which most individuals wouldn’t be able to justify but nonetheless are still able to extract value from. Data products, then, are a means of leveraging existing, necessary analytics infrastructure to generate additional streams of revenue.
Data products essentially take two forms: tools that allow users to optimize their own experiences, such as user-centric analytics or insight products, and opt-in suggestive tools that optimize the user’s experience based on the user’s behavioral history. Data products don’t necessarily take a form that a user can recognize as a product; they may simply be implemented into a user interface as elements of the navigation or product console. As such, they might be considered product features, except that data products are generally developed with heavy input from the analytics group.
This distinction is important to recognize because the develop-release-measure-iterate feedback loop for data products is fundamentally different as a result of the analytics group’s involvement. So, while the development of data products adheres to the same general guidelines as do other product features, consideration must be made for exactly how they are implemented, measured, and iterated upon.
A recommendation engine is a behavioral tool that helps guide users into the best possible product experience based on their personal demographic characteristics and the tendencies they have exhibited in the past. Unlike feature-level optimization, which is usually undertaken without any fanfare because it is based solely on user behavior, recommendation engines are opt-in, meaning users are presented with a choice and must either accept or decline it.
The opt-in nature of recommendation engines stems from the fact that opt-ins generally affect the way the product is used. One of the most popular examples of a recommendation engine is a social tool that suggests appropriate new connections in a network-based product: these often appear as panes on the periphery of the user interface and propose potential new connections to the user, based on a number of factors. If the product’s network is presumed to be based on real-world connections, the proposed new contact should likewise be someone the user is likely to know in a personal or professional context, which can be estimated through an analysis of shared contacts.
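A shared-contacts heuristic of this kind can be sketched in a few lines; the graph structure and scoring below are simplified assumptions, not a description of any real product's algorithm:

```python
def suggest_contacts(user, graph, top_n=3):
    """Rank users the given user is not yet connected to by the number of
    contacts they share, a rough proxy for real-world acquaintance.
    `graph` maps each user to the set of users they are connected to."""
    own = graph.get(user, set())
    shared = {}
    for friend in own:
        for candidate in graph.get(friend, set()):
            if candidate != user and candidate not in own:
                shared[candidate] = shared.get(candidate, 0) + 1
    # Most shared contacts first; ties broken alphabetically for stability.
    return sorted(shared, key=lambda c: (-shared[c], c))[:top_n]
```

If two of a user's contacts both know the same third person, that person outranks someone known through only one contact, matching the intuition that more mutual connections imply a likelier real-world relationship.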
A tool of this nature materializes as a recommendation engine because a social network exists to allow users to connect of their own volition to the contacts of their choosing. A product that automatically assigns connections to a user is not a network; it’s an algorithmic data dissemination tool. Given that the inherent value of a social network is the network itself, automating the curation process would not be a user experience optimization but a total presupposition of user intent, the misuse of which would negate the product’s purpose and render it unusable.
Far from being the core of a product, a recommendation typically takes the form of an ancillary enhancement to the product’s user interface or a call to action in the product flow. For example, the “people you may know” tool described earlier might be introduced to the user during the initial onboarding period, immediately after registration; this allows users to populate their networks immediately and thus extract value from the service from the outset, reducing the probability of churn. After this initial introduction, the tool might sit at the border of the user interface—always accessible but not overwhelmingly conspicuous.
Recommendations can be made individually or in larger numbers, depending on the projected efficacy of the recommendation (i.e., whether or not the user is expected to accept it) and the nature of the recommended object. They can likewise take a less obvious form than explicit recommendations; they might instead be posed as interface options or account settings. Whatever the case, the user should be made to feel that a nonpermanent choice is being presented.
The efficacy of recommendations is easily measured: it is the ratio of accepted to proposed recommendations. This ratio isn’t merely a characteristic of the algorithm that determines the recommendations; it should be used to actively improve the algorithm beyond some minimum acceptable threshold of appropriateness. No universal effectiveness benchmark for recommendations exists; a benchmark should be defined for the product, taking into account the nature of the recommendations. But recommendation engines that are not performing to an appropriate standard should be shuttered: inappropriate recommendations are not necessarily benign; they can undermine the user’s trust in the product and distract the user from the core experience.
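As a minimal sketch of this measurement in Python (the class name, `record` interface, and review rule are illustrative assumptions, not drawn from the text), the acceptance rate can be tracked with two counters and compared against a product-defined benchmark:

```python
from dataclasses import dataclass


@dataclass
class RecommendationStats:
    """Tracks proposed versus accepted recommendations for one engine."""
    proposed: int = 0
    accepted: int = 0

    def record(self, was_accepted: bool) -> None:
        # Count every recommendation shown; count acceptances separately.
        self.proposed += 1
        if was_accepted:
            self.accepted += 1

    def acceptance_rate(self) -> float:
        return self.accepted / self.proposed if self.proposed else 0.0

    def should_review(self, benchmark: float) -> bool:
        # Below the product's minimum acceptable threshold, the engine
        # warrants investigation (and potentially shuttering).
        return self.proposed > 0 and self.acceptance_rate() < benchmark
```

A benchmark breach here should trigger investigation rather than automatic removal; the threshold itself, as noted above, must be set per product.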
Consolidating the disparate components of the recommendation engine into a user interface implement requires cooperation between the analytics team and the product team over design and logistical considerations. The analytics team will own the data delivered by the tool, but the product team should determine where it is placed, how it contributes to the product experience, and when it will be visible to users.
The product development schedule therefore must be coordinated with care across the two groups, especially when updates are required. The two groups should also come to a consensus over the definition of success before the tool is launched, and that standard should be used to determine whether the tool remains active or is removed from the product. A concrete, agreed-upon standard to be reached in a predetermined timeframe after launch can help relieve conflicting interpretations of the tool’s effectiveness.
The dynamic product catalogue
The continuous product catalogue in a freemium setting isn’t necessarily composed of explicitly designed items; rather, to approach continuity, the product catalogue can be rendered dynamically as a function of customizable options available for a smaller, discrete set of items. In this sense, the product catalogue can approach an infinite size, which presents problems for navigation as well as appropriateness: some product configurations will be more valid than others to certain users, based on preference. Data products can be used to ameliorate both of these issues by preselecting the best product customization options for the user based on behavior and sorting those products based on taste.
While an infinitely large and complex catalogue of purchasable items meets the quality requirements cited in the discussion of discrete choice theory and vertical product differentiation, it comes at the cost of usability and straightforward navigation. A product experience is enriched by diversity, but too much choice can be cumbersome, especially when the majority of choices are inappropriate for any given user. The dynamic product catalogue should capitalize on the data artifacts users create in order to tailor the specific options and the variety in the product catalogue to users’ tastes and behavioral preferences.
Some product catalogues will invariably be harder than others to classify with respect to a user’s past behavior, especially when the user has not spent much time with the product. This can be accommodated by profiling users early and establishing catalogue preferences that can be matched to a minimal set of data. Once a user is profiled, the user’s personal catalogue can evolve over time as the user produces data artifacts capable of informing the progression of the catalogue.
In the “people you may know” product example, this evolution might take the form of a set of contact recommendations for a user that is determined from a very broad demographic characteristic, say, the user having registered with a university email address. The product catalogue for this unique product, which is the contact recommendation engine, would change over time as the user exhibits a preference for certain types of connections, such as professional contacts or contacts from the same university. As this user’s tastes become clearer, the catalogue itself changes, offering more precisely tailored products (recommendations) for which the user is likely to have a desire.
In addition to selecting dynamic product options based on appropriateness, the dynamic catalogue of purchasable items should also be capable of ranking those options based on the likelihood of the user finding them appealing. Even when finely tuned to a user’s tastes, an unwieldy catalogue of purchasable items can reduce the user’s propensity to buy by inviting so much analytical comparison between options that, ultimately, no choice is ever made (a form of “analysis paralysis”). Tailoring a product catalogue to a user’s tastes requires pruning the catalogue to an appropriate size to maximize the likelihood that the user finds exactly what the user wants.
The benefit of limited choice derives from the notion that the evaluation of a purchasable item carries an opportunity cost; the alternative is forgoing the evaluation period, and the purchase, altogether. The evaluation period for most purchasable items is short and undertaken with only a few features, such as price, in mind. But when the options for a particular purchasable item increase, the process of comparing those options becomes more complex and requires considering more features to produce an efficacious result, given decreased differentiation. When the set of options, and thus the set of comparison features, grows very large, the opportunity cost of this evaluation process can exceed the utility of the eventual choice.
Such a situation is extreme, but even at lower levels of comparison, users can be led into inaction by the fear of making an incorrect decision. Thus, while purchasable items should be customizable to the extent that users feel they can buy exactly what they want, the purchasable items in the catalogue should be sufficiently differentiated so that they can make a quick comparison. The optimal size of the catalogue depends on the level of differentiation that can be achieved in variants, although enough variants must be presented to communicate a sense of diversity to the user (and satisfy the quality requirement).
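The pruning described above can be sketched as a simple ranking-and-filtering pass. In this hypothetical Python sketch, predicted appeal scores stand in for whatever preference model the product maintains, and a minimum score gap serves as a crude proxy for differentiation between variants; the function name and parameters are illustrative assumptions:

```python
def tailor_catalogue(scores, k=5, min_gap=0.05):
    """Rank items by predicted appeal and keep a small, differentiated set.

    scores: dict mapping item id -> predicted appeal score in [0, 1].
    k: maximum number of catalogue items shown to the user.
    min_gap: minimum score separation between successive picks, a crude
             proxy for differentiation between variants.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    shown, last_score = [], None
    for item, score in ranked:
        # Skip items too similar (in score) to the previous pick.
        if last_score is None or last_score - score >= min_gap:
            shown.append(item)
            last_score = score
        if len(shown) == k:
            break
    return shown
```

The small `k` enforces quick, painless browsing, while the gap filter keeps the surviving variants visibly differentiated from one another.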
At the macro level (the level of design that considers the entire user base), product catalogue continuity is a necessity because it allows every user’s tastes to be accommodated. But at the individual level, observable continuity—when a user can deduce, given the massive size of a product catalogue, that at least one purchasable item suits the user’s tastes—is less beneficial than ease of navigation. Therefore, the scope of the continuous product catalogue presented to the user should err on the side of quick, painless browsing. While quality is important to communicate, and quality is communicated through a large product catalogue, the return on conspicuous quality rapidly diminishes as the product catalogue becomes cumbersome to navigate.
Predicting all possible use cases for a product during the development phase undermines the purpose of the minimum viable product model and is often futile. Use cases develop unpredictably, and, as a result, the extent to which the most engaged users will treasure a product is indeterminable until those users have been exposed to the product. The most engaged users may adopt a product as a staple of daily life, as occurs with some lightweight mobile apps.
These dedicated users can represent a powerful community of supporters and advocates whom overly aggressive monetization might alienate. As a result, charging for their product use might be of little value; if a product space is commoditized to the point that the cost of switching between competing products is low, then even the most engaged users may not be incentivized to pay for direct use. In such a case, enthusiasm for the product is better monetized than actual usage, and productized analytics may offer an opportunity to strike a balance between adoption and monetization.
Productized analytics describes a meta-product that provides insight into a user’s interaction with a product. Many users, especially the most engaged, may expect the functionality—even the advanced functionality—of a product to be provided for free in a market segment that is dominated by the freemium model. Whether or not these expectations are rational is irrelevant; market participants must adapt to them. By providing product users with in-depth analytics to describe their usage of the product, the developer sidesteps a potential scale-limiting monetization strategy by allowing the most engaged users to become more deeply attached to the product.
Users are generally interested in the same scope of analytics as are product teams, with the exception that users may appreciate metrics describing their interactions with other users (e.g., profile views on a social network) and real-time metrics that aren’t useful for development purposes. Productized analytics can generally be extracted from the same infrastructure that produces product development analytics; however, the increased strain of making these analytics available to users (in terms of infrastructure load) may necessitate additional resource allocation to the analytics stack to ensure fidelity and availability.
As in product development, analytics tools made available to the user can be optimally delivered in a configurable dashboard. This dashboard may extend the user profile or exist as a separate functional entity, but it should be intuitively accessible and designed with simplicity of use in mind: product teams and users may engage with analytics for the same purposes, but product teams do so with informed intent. Users must be guided to analytics, and any hurdles to understanding the value in analyzing their behaviors might serve as an impediment to using the dashboard.
Pricing analytics products is often undertaken within the same freemium strictures as the source product itself, with a free tier available to entice users to become acquainted with the product and a paid tier available to those extracting the most value out of such information. Analytics products should be designed to underscore monetary value: if a user understands that streamlined product use (via analytics) can create direct, monetary value, then the user can more likely make an educated decision about purchasing analytics products. Thus, the overarching theme of the analytics tools should indicate a possibility of decreasing expenses (especially through streamlined usage) or increasing revenue.
The dynamics of the freemium model—large scale, low conversion, and a nearly continuous catalogue of purchasable items—pose discovery challenges. Specifically, items in a nearly infinite product catalogue are not readily accessible, and even highly engaged users may not be capable of staying abreast of the most recent product developments. For this reason, marketing in freemium products applies not just to the process of introducing users to the product but also to users already in the product’s system, as a means of targeting them for appropriate promotions and reengaging them with new and existing features. In this sense, marketing can be divided into two conceptual formats: upstream marketing, which describes the marketing activities undertaken to grow the user base, and downstream marketing, which describes activities targeted at the existing user base.
Downstream marketing is driven almost entirely by data, especially in freemium marketing; the ability to reengage a user or present a user with the most appropriate product promotion is contingent on an understanding of that user that can be realized only through behavioral data.
In this sense, downstream marketing is divorced from traditional marketing skillsets and is realized by applying data science to marketing principles. In essence, downstream marketing represents commercial analysis when a product’s system cannot be managed as a monolith. When the product catalogue is continuous, the user base is segmented, and the levels of potential lifetime value fall across a very broad range, then the process of encouraging the user base to make purchases never ends.
The most popular form of downstream marketing is probably reengagement marketing, or marketing efforts undertaken to bring a user back to a product that has ostensibly been abandoned. While it is not always effective, reengagement marketing is popular because it is cheap, and user acquisition is not. When a single user has apparently churned (or is considered likely to churn), the product team must weigh the cost of attempting to reengage that user against the cost of acquiring a new user.
With web-based products, reengagement campaigns are almost always delivered by email, as it is an inexpensive medium that is capable of converting with little friction by using embedded links. Emails are also incredibly easy to track, test, and iterate upon; the content of emails can be constructed in any number of combinations over which send rates, open rates, and click rates are readily and easily quantifiable.
Email campaigns are usually constructed with a template that is populated by specific user details when the campaign is launched, as shown in Figure 6.3. For a reengagement campaign, an email will typically focus on a specific call to action involving a discount or an invitation to explore an aspect of the service that the user hasn’t yet accessed. Templates addressing specific situations are constructed according to the relevant situational dynamics, and the analytics system automates the delivery of emails to appropriate recipients on a regular schedule (usually nightly).
FIGURE 6.3 A sample reengagement email template, in which values surrounded by % are replaced with user data from the product database.
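The template mechanic shown in Figure 6.3 amounts to simple placeholder substitution. In this minimal Python sketch, the field names, template text, and error-handling behavior are illustrative assumptions rather than details from the figure:

```python
import re


def fill_template(template: str, user: dict) -> str:
    """Replace %FIELD% placeholders with values from a user record."""
    def substitute(match):
        key = match.group(1).lower()
        if key not in user:
            raise KeyError(f"template field {key!r} missing for user")
        return str(user[key])

    # Placeholders are uppercase field names surrounded by % signs.
    return re.sub(r"%([A-Z_]+)%", substitute, template)


email = fill_template(
    "Hi %FIRST_NAME%, we miss you! Here is %DISCOUNT% off your next purchase.",
    {"first_name": "Ada", "discount": "20%"},
)
```

In production, the analytics system would run this substitution over each night’s batch of targeted recipients, pulling the field values from the product database.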
The transparency of email campaign conversion allows for continual optimization but may also create a sense of progress that isn’t wholly warranted; email campaigns are notoriously ineffective, generally producing conversion rates of far less than 10 percent. The return on invested time should be tracked carefully when engaging in email reengagement campaigns—slight conversion gains may not be worth the time invested to produce them, and email content can be adjusted endlessly.
On mobile platforms, a more powerful alternative to email reengagement campaigns is the push notification, a system message delivered (“pushed”) by an application. Push notifications are received by user opt-in (at first launch, most applications request permission to send them); users who have opted in convert extraordinarily well. And while the space allotted for push notifications is small—usually not more than a few words—and thus offers little latitude for testing, tracking behavior that results from push notifications is straightforward, since the entire process takes place on a user’s mobile device.
Determining when reengagement should be pursued is mostly a function of the definition of churn for that product. Users who have churned out of a product are not only largely resistant to reengagement overtures, they’re also likely to consider such attempts spam, which incites negative feelings for the product and potentially damages its reputation. A company should take care, especially in email reengagement, to not communicate with users who have obviously churned completely; rather, it should connect with users when they are considered likely to churn, which is the analytics system’s responsibility.
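A crude version of that responsibility can be expressed as a windowing rule: contact users whose inactivity suggests they are likely to churn, but who have not yet passed the product’s churn threshold. In this Python sketch, the day thresholds are purely illustrative; each product must derive its own churn definition from its analytics:

```python
def reengagement_candidates(users, likely_churn_days=7, churned_days=30):
    """Select users considered likely to churn but not yet fully churned.

    users: iterable of (user_id, days_since_last_session) pairs.
    Users below the lower bound are still active; users at or beyond the
    upper bound are treated as churned and left alone to avoid spamming.
    """
    return [
        user_id for user_id, idle_days in users
        if likely_churn_days <= idle_days < churned_days
    ]
```

The upper bound is as important as the lower one: excluding obviously churned users protects the product’s reputation, per the caution above.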
User segments are not useful merely for predicting user behavior and grouping monetization profiles; they can also be useful in categorizing users based on the stimuli the users respond to, especially as those stimuli relate to monetization. The manner in which users interact with a product’s catalogue of purchasable items, and the goals users seek to achieve with a product, can be useful tools in curating an experience that best suits users’ needs.
Product promotions—discounts, time-limited special offers, combination products, volume prices, and so on—are universally appreciated; that people welcome the notion of paying less for a product under special circumstances is not controversial. And in the freemium model, the opportunity exists to dynamically undertake promotional pricing strategies in perpetuity, whereby the entire product catalogue is subject to an optimal pricing framework at all times. This framework need not be a monolith; instead, it can apply to each user (or, more often, each user segment) in the manner that best optimizes product revenue.
Promotional targeting is a strategy for increasing revenue that involves setting prices so as to intersect with demand within user segments, not within the entire population of users. In practice, this intersection isn’t accomplished through different prices for different user segments but rather through ongoing promotional efforts that vary by user segment. Since the marginal cost of producing products in a freemium environment is $0, products should be priced (discounted) to whatever extent results in the highest number of sales.
Promotional targeting is driven by analytics: much like in reengagement marketing, promotional targeting matches a predetermined template (in this case, a discounting template) to a user profile and presents that template where it is relevant. The determinants of the segment profiles in promotional targeting are product preference, promotional preference, and timelines. These preferences and characteristics can be derived by measuring how users react to various items in the catalogue of purchasable items and grouping them accordingly.
The unit economics of freemium products supports promotions that can be used to gauge sentiment to some of the factors listed above. For instance, a catalogue of purchasable items might include a perennial discount on a specific product to test for users’ enthusiasm for sale prices. Users who respond favorably to such promotions might be grouped into a “value buyer” segment that is targeted for additional discount prices in the future. Likewise, a “combination buyer” user segment may be presented with promotions that provide combinations of purchasable items that don’t normally exist in the catalogue, in order to encourage monetization. These user segments can be identified early in the product experience by permanently applying promotional techniques to various items in the purchasable catalogue.
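Such grouping can be sketched as a simple majority-vote classifier over purchase flags. The segment names follow the text; the field names, the voting rule, and the fallback label are illustrative assumptions:

```python
from collections import Counter


def promo_segment(purchases):
    """Assign a coarse promotional segment from a user's purchase history.

    purchases: list of dicts with optional boolean flags 'discounted'
    (bought at a sale price) and 'bundle' (bought a combination offer).
    """
    tally = Counter()
    for purchase in purchases:
        if purchase.get("bundle"):
            tally["combination buyer"] += 1
        elif purchase.get("discounted"):
            tally["value buyer"] += 1
        else:
            tally["full-price buyer"] += 1
    if not tally:
        return "unclassified"
    # The user's dominant purchasing behavior defines the segment.
    return tally.most_common(1)[0][0]
```

A real system would weight recency and spend rather than counting purchases equally, but the principle—segment by responsiveness to promotional stimuli—is the same.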
Targeting users for discounts can also be done as a function of product preference. Users who have exhibited predilections for a specific type of product can be offered promotional pricing opportunities in order to motivate increased engagement. The opportunities presented should be grounded in the user’s historical purchasing tendencies as well as the user’s expected utility from the purchases. A user’s purchase not only augments a sense of commitment to the product, but it also improves the user’s future experience with the product. Encouraging purchases can therefore be seen as a means of boosting retention and engagement; it’s not simply a strategy for increasing monetization.
Measuring downstream marketing
Downstream marketing is largely an automated endeavor; the processes designed to keep users engaged in the product and aware of the evolving catalogue of purchasable items should, for the most part, be handled by systems. But, like any system designed to operate autonomously, the systems driving downstream marketing require measurement and oversight. Complex systems can break at large scale for any number of reasons; to ensure the effectiveness of downstream marketing, it must be monitored and shortcomings must be addressed as they surface.
Monitoring downstream marketing requires the derivation of metrics that can be used to measure its success. For reengagement marketing, these metrics are fairly straightforward: number of reengagement messages sent, number of messages received, and number of messages converted into further use of the product (usually defined as another session). Metrics relating to revenue generated from reengaged users might also be tracked within a conservative time period after the correspondence is sent. For example, all revenue generated during the week after sending a reengagement campaign is attributed to that campaign. These revenue metrics are useful for measuring return on investment and making decisions about increased investment in reengagement.
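These metrics can be rolled up per campaign with a straightforward aggregation. In this Python sketch, the event field names and the seven-day attribution window are illustrative assumptions, not prescriptions from the text:

```python
def campaign_report(events, attribution_days=7):
    """Summarize a reengagement campaign.

    events: one dict per message sent, with keys 'delivered' (bool),
    'converted' (bool, e.g., another session occurred), and
    'revenue_by_day' (daily revenue amounts following the send).
    """
    sent = len(events)
    delivered = sum(1 for e in events if e["delivered"])
    converted = sum(1 for e in events if e["converted"])
    # Attribute only revenue earned within the conservative window.
    attributed_revenue = sum(
        sum(e["revenue_by_day"][:attribution_days]) for e in events
    )
    return {
        "sent": sent,
        "delivered": delivered,
        "converted": converted,
        "attributed_revenue": attributed_revenue,
    }
```

Truncating the revenue series to the window keeps the attribution conservative, as the text recommends, and makes return-on-investment comparisons across campaigns consistent.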
Measuring dynamic promotions is more difficult. Dynamic promotions leverage the low marginal cost of product sales to create revenue where there otherwise would be none. Given this conceptual abstraction, counting all revenue generated by a promotional sale may be overly simplistic. Whether a user would have made a purchase in the absence of a promotion is unknowable; in fact, overly aggressive promotional pricing may cannibalize baseline revenues.
Because of this inherent ambiguity, tracking dynamic promotion conversion as a proxy of success may be more valuable than trying to attribute revenues to promotions. The user segments that best capture promotional revenue are fluid; as the product and user base mature, so too will preferences for products and receptiveness to promotions.
Aspirations for consistent conversion on promotional offers should be pursued by monitoring any dynamic mechanisms used in producing them; decreasing conversion rates (i.e., fewer people over time purchase the promotions they are exposed to) should serve as a signal that either user tastes have changed, the level of discounts offered is no longer attractive, or the user segments around which promotions are configured have evolved.
The effectiveness of dynamic promotional pricing on revenues can be measured only through A/B testing against a control group (i.e., a group that is not exposed to promotions). While reaffirming the efficacy and need of any automated system should be undertaken at regular intervals, it is not necessary on an ongoing basis. The testing sample should be derived using the same targeting logic that produces user segments for the automated promotions, and testing the technique’s impact on revenue can be done using the statistical methods introduced in chapter 3. If a promotional technique isn’t working—it is not producing more revenues than the baseline pricing scheme of the catalogue of purchasable items—it should be deactivated and re-conceptualized.
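One standard way to evaluate such a test—offered here as a stand-in for, not a restatement of, the statistical methods of chapter 3—is a two-proportion z-test on conversion rates between the promotion group and the control group:

```python
import math


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference in conversion rates.

    conv_a, n_a: conversions and group size for the promotion group.
    conv_b, n_b: conversions and group size for the control group.
    |z| > 1.96 suggests a significant difference at the 5 percent level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the two groups to estimate the standard error under the
    # null hypothesis of no difference.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

The same comparison can be run on per-user revenue with a t-test; whichever test is used, the control group must be drawn with the same targeting logic as the treated segment, per the caveat above.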
Downstream marketing metrics are necessarily limited in relevance; while the product team should be interested in the performance of downstream marketing techniques, very few others in an organization likely need to know much about them. Including such metrics on a high-level dashboard is therefore not necessary, but tracking these metrics on a regular basis is. Automated systems should not be left unmonitored; automation can lead to unintended consequences when implemented and forgotten.
Besides being tracked properly, automated systems should integrate a means of being shut off or paused when various metrics breach predetermined thresholds; for instance, if the conversion rate on a promotional offer exceeds a certain number—50 percent, for example—that promotion should automatically stop until it has been investigated. The logic driving automated systems operates on a limited set of a priori assumptions that may fall victim to unpredictable factors. A stop-loss shutdown mechanism should prevent any automated system from operating outside the bounds of predicted behavior for a meaningful amount of time.
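A stop-loss of this kind reduces to a threshold check run against live metrics. In this sketch, the promotion record’s shape is an illustrative assumption, and the 50 percent ceiling is taken from the example above:

```python
def check_stop_loss(promotion, conversions, exposures, max_rate=0.50):
    """Pause a promotion whose conversion rate breaches a preset ceiling.

    An anomalously high rate can signal mispricing or a broken assumption
    in the targeting logic. Returns True if the promotion was paused.
    """
    if exposures == 0:
        return False
    rate = conversions / exposures
    if rate > max_rate:
        promotion["active"] = False  # paused pending investigation
        return True
    return False
```

In practice the check would run on the same schedule as the metrics pipeline (e.g., nightly or hourly), bounding how long an automated system can operate outside its predicted behavior.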