
Freemium Economics: Leveraging Analytics and User Segmentation to Drive Revenue (2014)

Chapter 2. Analytics and Freemium Products

This chapter, “Analytics and Freemium Products,” defines the concept of analytics introduced in the previous chapter and presents a model for capturing, storing, and visualizing data for the purposes of driving the design of freemium products with insight, not intuition. Freemium products require constant adaptation to user preferences and tastes in order to generate revenue from the small minority of users who are willing to pay for them; as a result, an “analytics stack” (which consists of both technical infrastructure and the elements of the analytics technology stack that depict and report on data) is needed to properly instrument data from the product and deliver it to product teams. In this chapter, the concept of analytics is thoroughly surveyed and the actual technology architecture used to collect data and generate insight from it is described. The chapter begins with an introduction to analytics and proceeds into an overview of data storage, processing, and reporting systems. The chapter ends with a discussion of the tenets of data-driven design—that is, the processes by which data is used to inform product decisions and improve the user experience. In this context, the minimum viable product, or the most basic incarnation of a product capable of providing valuable data about how development can best meet users’ needs, is introduced and considered.

Keywords

Big Data; data-driven design; analytics; dashboards; analytics infrastructure; reporting; database; relational databases; Hadoop; minimum viable product

Insight as the foundation of freemium product development

Since only 5 percent or less of a freemium product’s users can be expected to generate revenue through direct purchases, a freemium product requires the potential for massive scale to produce meaningful returns.

But the freemium model doesn’t operate under “low margin, high volume” conditions; the scale of a freemium product doesn’t necessarily produce massive revenues with a linear relationship to the user base. A large user base merely increases the odds that users matching the profile of those most likely to engage with the product to the greatest extent are exposed to it.

In this sense, the freemium model is the opposite of “low margin, high volume”; in fact, the freemium model is “high margin, low volume” because the users who will eventually pay (the members of the 5 percent) must be given the opportunity to spend to the greatest extent possible in exchange for an exceptionally personal, exceptionally enjoyable experience. The massive scale of a freemium product’s user base is only a prerequisite for the model’s success because it is very difficult to identify the 5 percent of people who will make direct purchases through the product before they begin using it. The more users recruited into the product, the more data the user base produces, and very large data sets facilitate insight.

Analytics

Building a freemium product is like hunting in a large river for a very rare fish that is physically indistinguishable from other, less valuable fish, with the expectation that the rare fish will reveal itself once it is caught. An extremely wide net is cast to attempt to catch everything; once the caught fish have been brought on board the boat, the rare target reveals itself.

But how would the rare fish reveal itself to the fisherman if it looks the same as the other fish? Presumably, by its behavior. The same is true of highly engaged users in freemium products: they are indistinguishable from other users at the time of product initiation, but they use the product differently than do other users and exhibit patterns that can sometimes be identified before they even spend money.

The dynamic between scale and the 5% rule raises an interesting question about how broad a product’s appeal must be for the freemium model to be an appropriate business model choice. Specifically, how large must the potential user base be in order to justify building a product with the freemium model? There is no definitive answer to this question. The problem with a small potential user base is, of course, that 5 percent of it won’t represent a meaningful number of users and thus cannot produce a meaningful amount of revenue. But if that small group of users is more passionate about the niche product than a larger group would be about, say, a more generic product, users in the small group might spend more money and close the revenue gap.

Market size is a crucial consideration when making the decision to build a product using the freemium model, which is why holding the 5% rule as a given assists in making a realistic determination about which business model best suits a given product. A common pitfall when evaluating the freemium model’s appropriateness for a product is to assume that the product in question will collect revenues from an unprecedented number of users.

Product teams are inherently optimistic when launching new products; after all, they wouldn’t be attempting to build something if they didn’t think it was going to introduce an innovation to an existing paradigm. But very few products disprove the 5% rule, and those that do usually accomplish that feat only after many product iterations. Assuming that any freemium product will convert users to payers at a level consistent with the freemium model on aggregate is sound business model design; it ensures that lofty expectations and overly optimistic revenue estimates don’t prevail over historical norms.

With the 5% rule taken as fact, the product team is forced to design the freemium product around either one or both of two requirements: extracting large amounts of revenue from the users who do pay and increasing the product’s scale by expanding its universal appeal. Both of these initiatives are accomplished by measuring user behaviors and identifying patterns within them that can be used to iterate on the product. This measurement method is known as analytics.

What is analytics?

Analytics is a fairly broad term that can mean different things in different industries. In the abstract, analytics describes a measurement system that produces auditable records of events across a period of time that can then be used to develop insight about a product. For freemium products, analytics is used to record the actions of users, store those records, and communicate the information in them, mostly through visual, graphical depictions, to product executives and statisticians in order to facilitate analysis.

Analytics is the heart, the foundation, of the freemium model because analytics is the only means through which highly engaged users can be identified early and accommodated. And analytics is the only means of determining how a freemium product can best be improved in order to enhance the experience for highly engaged users.

Analytics has been a cornerstone of web development for years—the largest web applications record every click, every page view, and every followed link to better understand how their products are being used. Many desktop software products instrument use to a high degree, recording interactions between the user and the software in an attempt to improve the user’s experience.

Analytics for a freemium product is necessarily more complex than analytics implemented for most other web products because improving a freemium product user’s experience, as a general concept, doesn’t necessarily increase revenues. While averages can certainly be measured in the freemium model, they don’t convey meaningful information when aggregated over the entirety of a user base; users are segmented into non-paying and paying groups, and those groups have different needs, desires, and proclivities. Exposing these fundamentally different user types to the same product experience would only serve to alienate one group or the other.

Analytics is a somewhat abstract concept because it doesn’t exist as a singular function point; it can be implemented in any number of ways, and what most people associate with analytics is merely its output. Analytics should be thought of as a system, an entire chain of features and independent products that collects, stores, processes, and outputs data.

The highest purpose of an analytics system is to provide insight on product performance to product teams. In freemium development, analytics is a paramount concern because, without insight, a product team is left with aggregated information that doesn’t help drive product improvement. Global averages from freemium data are very rarely useful and can, in some cases, be misleading; the important metrics in freemium products are those that identify and quantify the behavior of specific user groups. When a maximum of 5 percent of total users will ever contribute revenue through direct purchases, of what use is a global average in determining how a product feature should be improved?
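To illustrate the point, the sketch below uses invented figures to show how a global average can obscure the behavior of the small paying segment; the 5 percent split, the per-user revenue values, and the variable names are hypothetical.

```python
# Hypothetical figures: 1,000 users, 5 percent of whom pay, with one
# highly engaged spender dominating the paying segment.
users = [{"user_id": i, "revenue": 0.0} for i in range(1000)]

for user in users[:50]:          # the 5 percent who pay
    user["revenue"] = 20.0
users[0]["revenue"] = 500.0      # one exceptionally engaged payer

global_average = sum(u["revenue"] for u in users) / len(users)
payers = [u for u in users if u["revenue"] > 0]
paying_average = sum(u["revenue"] for u in payers) / len(payers)

print(f"Global average revenue per user:  {global_average:.2f}")   # 1.48
print(f"Average revenue per paying user: {paying_average:.2f}")    # 29.60
```

The global figure says almost nothing about how to improve the product for the users who actually generate revenue; only the segmented figure does.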

To that end, analytics must be flexible and granular. The ability for a product team to group users by specific behaviors or demographic characteristics, or to limit a view of data by date, is an absolute necessity, and this functionality must be provided for throughout the entire analytics stack, the collection of software products that comprises the freemium product’s analysis capability.

The analytics stack can be broken down into three component pieces: the back-end, the events library, and the front-end. An analytics back-end is a storage mechanism that collects data; it can be implemented in any number of ways using any number of technologies, some of which will be outlined later in the chapter. Although the market dynamic in analytics is shifting toward what is known as Big Data, most freemium products still rely on a relational back-end (such as MySQL or PostgreSQL) that stores data in relational database tables.

An events library is a protocol for recording and transmitting data to the back-end. It usually takes the form of a discrete list of events—important data points that, when collected, can be used to glean insight about a product—that are meant to be tracked. Events libraries are generally integrated into software clients, meaning that an update to the library will be adopted only through an update to the user’s client. As Big Data systems move further into the mainstream, events libraries are being jettisoned in favor of logs that keep track of literally everything that happens in a product. But for the foreseeable future, events libraries will likely remain relevant, as developers must, for storage reasons, choose which data points they wish to collect.
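As a rough illustration of how such a library operates, the following Python sketch buffers events on the client and serializes them for transmission to an analytics back-end; the class name, endpoint URL, and attribute names are hypothetical rather than drawn from any particular product.

```python
import json
import time

class EventsLibrary:
    """Minimal client-side events library sketch (names are illustrative)."""

    def __init__(self, endpoint):
        self.endpoint = endpoint  # hypothetical back-end collection URL
        self.buffer = []          # events queued for transmission

    def track(self, event_name, user_id, **attributes):
        # Record the event with a timestamp and any caller-supplied attributes.
        self.buffer.append({
            "event": event_name,
            "user_id": user_id,
            "timestamp": time.time(),
            "attributes": attributes,
        })

    def flush(self):
        # A real client would POST this payload to the back-end; here it is
        # simply serialized to show the structure that would be transmitted.
        payload = json.dumps(self.buffer)
        self.buffer = []
        return payload

events = EventsLibrary(endpoint="https://analytics.example.com/collect")
events.track("session_start", user_id=42, device_type="tablet")
print(events.flush())
```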

An analytics front-end is any software that retrieves and processes the data stored in the back-end. Third-party solutions are generally used to fulfill this function; products such as Tableau, QlikView, and Greenplum allow product teams to connect to a data set and almost instantly use it to build graphs and charts, segment data, and spot patterns. Some analytics stacks rely on ad-hoc querying and analysis in lieu of a front-end; an analyst will query data directly from the database and manipulate it using a statistical interface or desktop spreadsheet software. In other cases, a bespoke front-end is developed from scratch to fit the specific needs of product teams.

Taken together, the back-end, events library, and front-end form the complete and fully functional analytics stack (illustrated in Figure 2.1), although most product teams only ever interact with the front-end. How these component pieces are developed and maintained can have a significant impact on how well they function; distributing each component across three functional teams within an organization might disrupt the extent to which each piece communicates with the others. Additionally, changes to the events library must be communicated clearly to the team maintaining the back-end, and front-end requirements must similarly be made known to the team maintaining the events library.


FIGURE 2.1 The analytics stack.

The best-functioning analytics stacks are those that are developed by a cohesive analytics group dedicated exclusively to analytics. Systems distributed across functional groups within an organization have a tendency to break or, worse yet, function poorly and produce inaccurate metrics. Keeping an analytics stack entirely within the purview of one organizational group ameliorates the risk of this happening.

What is big data?

The term “Big Data” is generally over-used and poorly understood, especially by non-technical professionals. And Big Data is often mistakenly viewed as a panacea to fix product problems that would be better addressed not with more data but with more insight. In fact, Big Data describes both a storage trend and an analysis trend. The storage trend is straightforward: the price of storing data has declined as storage technologies have improved. Combined with innovative storage solutions from companies like Rackspace and Amazon, storing huge volumes (terabytes and petabytes) of data is cost-effective and manageable for companies without dedicated hardware infrastructure teams. The term “Big Data” reflects the contemporary capability of storing and accessing massive amounts of data; when a product team so chooses, it can store literally every interaction users have with the product.

The analysis trend behind Big Data is more nuanced. The massive data sets that resulted from decreasing storage prices presented a problem to analysts: large volumes of data cannot easily be parsed and processed, and most statistical software products are not capable of working with data that is too large to store in memory. A process was needed to build manageable data sets out of the cumbersome, distributed, unwieldy caches of unstructured data (usually in the form of server logs) created by Big Data frameworks. The most prominent solution to this dilemma was developed by Google and is called MapReduce; it is a routine for parsing massive, distributed datasets (the map) and building a set of aggregate statistics from them (the reduce).

MapReduce allows data distributed across a cluster of cheap commodity servers to be parsed and analyzed as a monolith; in other words, MapReduce glues together disparate, independent data sets and allows them to be analyzed as a cohesive whole. Although many implementations of MapReduce exist, the most popular is a free framework from Apache called Hadoop.
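The following Python sketch simulates the MapReduce pattern in a single process rather than on a Hadoop cluster: the map phase parses raw log lines into key-value pairs, a shuffle groups them by key, and the reduce phase aggregates each group. The log format and event names are invented for illustration.

```python
from collections import defaultdict

# Illustrative raw log lines, standing in for unstructured server logs.
log_lines = [
    "2014-01-01 user_1 session_start",
    "2014-01-01 user_2 session_start",
    "2014-01-01 user_1 purchase",
]

def map_phase(line):
    # Map: parse a raw line and emit (user_id, 1) for every event.
    _, user_id, _ = line.split()
    yield user_id, 1

def reduce_phase(key, values):
    # Reduce: aggregate the mapped values for each key into one statistic.
    return key, sum(values)

# "Shuffle": group mapped pairs by key, as the framework would do across
# the servers holding the distributed data.
grouped = defaultdict(list)
for line in log_lines:
    for key, value in map_phase(line):
        grouped[key].append(value)

results = dict(reduce_phase(k, v) for k, v in grouped.items())
print(results)  # {'user_1': 2, 'user_2': 1}
```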

The innovation of MapReduce over traditional data storage techniques, namely, relational databases, is programmatic. Relational database systems, such as those mentioned earlier, suffer performance burdens as they increase in size. A traditional solution to this problem is database sharding, or separating large data sets into smaller, independent segments based on some characteristic, usually time. These database shards are not easily unified and can lead to analysis blind spots.

MapReduce routines, on the other hand, can quickly and easily be “spun up” (executed) across a large number of commodity servers. This means that, as long as it isn’t deleted, data that is stored can be processed and analyzed persistently into the future; it’s always available. And the execution time of MapReduce routines can be reduced by distributing the routines across more commodity servers; using services like Amazon’s Elastic MapReduce (EMR), a large number of commodity servers can be quickly appropriated for a MapReduce routine.

Big Data is important because it accommodates complete instrumentation; literally every interaction a user has with a product can be stored, analyzed, and used to influence future product iterations. This information is priceless to quantitatively oriented product teams that, armed with an adequate understanding of statistics and the tools to process and render data, can make highly informed design decisions when implementing new features or adjusting old ones.

Big Data also alleviates the need for product teams to attempt to conceive of all possible relevant metrics and data points before the product is released; because all product usage data is stored under Big Data conditions, metrics that were not considered upon launch can be retroactively aggregated. This is generally not possible in relational database systems, where data is collected only as warranted by existing groups of metrics.

But Big Data can also be a burden: data overload can muddy a product team’s understanding of a feature’s use, and without proper statistical training, product teams can misinterpret trends or jump to spurious conclusions about what exactly is happening within the very complex system of their product. Big Data requires real expertise in statistical analysis in order to provide value; without the ability to properly and accurately interpret data, more information simply produces more noise.

That said, Big Data is an important topic for any product team to understand and is a precursor to full comprehension of the freemium model. While this book is not a technical guide, it would be remiss in not addressing Big Data as a freemium concern; the massive scale required by the freemium model is a product asset precisely because it produces massive volumes of data. Those massive volumes of data are at their most helpful when analyzed in their entirety, which can only be done with modern Big Data solutions.

A Big Data approach to product development allows the freemium product team to iterate faster, execute deeper analysis of feature performance, and draw sounder conclusions about how users are interacting with the product than through traditional, relational database-oriented analytics. When properly resourced and utilized, Big Data analytics systems provide the capacity for the kinds of incremental performance improvements needed in a freemium product: small, consistent, and predictable metrics enhancement based on behavioral user data.

Designing an analytics platform for freemium product development

The first step in building an analytics stack is determining what data should and can be tracked. Collecting data isn’t necessarily as straightforward as it may seem at first, and the format of tracked data must be carefully considered; once an event takes place, recording it from a different perspective, or with more attributes, is impossible. Excess data can always be disposed of in the future if its storage is deemed unnecessary, but unrecorded data is surrendered to the ether.

The best way to build an events library is to work backward from a set of questions that the data should be able to answer. Use cases can be hard to predict (and it should be understood that data will reveal patterns that hadn’t been considered during the product’s design phase, thus spawning the need for further metrics), but a set of operating questions such as, “For how long do highly engaged users remain with the product?” can generate a reasonable set of baseline metrics ahead of a product’s launch.

Once a set of baseline events has been catalogued, some sort of reporting structure should be considered around which events can be organized. The organization applies a hierarchy to the events so that reporting systems can easily aggregate metrics under different filters. An example of a hierarchical structure is Group–Category–Label, where group represents a high-level description of the type of event (e.g., Social), category represents a class of this group (e.g., Friend List Change), and label describes the event in fine detail (e.g., Add Friend), as illustrated in Figure 2.2. Group may have several categories, and category may have several labels; all events given a specific label can be easily aggregated (or disaggregated) by their group and category designations.


FIGURE 2.2 An events hierarchy structure for the “Social” event group.

Applying hierarchical structure to events data makes reporting much easier, as it installs fault lines throughout the stored data around which aggregations can be made. This exercise also forces order onto a process that can very easily become chaotic; when events fit into predefined groups, they’re less likely to record the same thematic data from differing angles, thereby avoiding confusion.
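A minimal sketch of such a hierarchy, using the “Social” group from Figure 2.2, might look like the following; the “Remove Friend” label and the validation helper are hypothetical additions for illustration.

```python
# Hypothetical sketch of a Group-Category-Label events hierarchy.
event_hierarchy = {
    "Social": {                          # group: high-level event type
        "Friend List Change": [          # category: class within the group
            "Add Friend",                # label: fine-grained event name
            "Remove Friend",             # illustrative sibling label
        ],
    },
}

def validate_event(group, category, label):
    """Reject events that do not fit a predefined place in the hierarchy."""
    return label in event_hierarchy.get(group, {}).get(category, [])

print(validate_event("Social", "Friend List Change", "Add Friend"))  # True
print(validate_event("Social", "Friend List Change", "Send Gift"))   # False
```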

Once event hierarchies have been catalogued, a structure for event storage must be devised. This involves considering which attributes of the data will be required for reporting purposes: in essence, what filters must be applied to the data when analysis is conducted. Date of occurrence is the first attribute that should be associated with a given event; without date information, almost no insight can be gleaned. Other attributes that are almost universally necessary are:

• User ID. The identification of the user executing the event.

• User state. This isn’t necessarily one attribute; user state could represent a group of attributes describing the user’s state at the time the event was executed. Some examples of state attributes are session ID, device type (if the product is accessible on multiple platforms), social connections within the product, etc. These attributes should reflect the user’s state at the precise moment the event was recorded.

• Event state. Specific attributes of the event that may not apply universally to all events, or attributes that could change based on other factors for different instances of this event.

Attaching attributes to events allows those events to be audited in the future in case values associated with those events change as the product evolves. Figure 2.3 illustrates a sample events table column list with user state and event state attributes.


FIGURE 2.3 A sample events table column list.

In the figure, the row labeled CUMULATIVE_TOTAL_SESSIONS relates the number of sessions a user has completed up to the recorded event, and the row labeled FRIENDS_IN_APP relates the number of friends the user has in the app at the time of the event. These attributes describe the user state because they are unique to that particular user at a specific point in time and provide context around the ways the user’s previous interactions with the product may have influenced the event in question.

The row labeled USERS_ONLINE relates the number of users online in the app at the time of the event, and the row labeled PROMOTIONS_LIVE relates the promotions (discounts, purchase bundles, special offers, etc.) live in the app at the time of the event. These attributes describe the event state because these product-level factors may have influenced the user’s behavior (and thus the fact that the event took place) independent of the user’s usage of the product.
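Assembled as a single record, an event following the column list in Figure 2.3 might resemble the sketch below; the EVENT_DATE field name and the specific values are illustrative assumptions.

```python
from datetime import datetime, timezone

# Sketch of one event record following the column list in Figure 2.3.
# Field names mirror the figure; the values are invented for illustration.
event_record = {
    "EVENT_DATE": datetime.now(timezone.utc).isoformat(),  # date of occurrence
    "USER_ID": 42,                      # identification of the executing user
    # User state: the user's situation at the moment the event was recorded.
    "CUMULATIVE_TOTAL_SESSIONS": 17,    # sessions completed up to this event
    "FRIENDS_IN_APP": 12,               # friends the user has in the app
    # Event state: product-level factors that may have influenced the event.
    "USERS_ONLINE": 3150,               # users online at the time of the event
    "PROMOTIONS_LIVE": ["holiday_bundle"],  # promotions live at the time
}
print(event_record)
```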

Events are not the only data that should be collected; information about users and product feature values (such as pricing information, access, etc.) must be stored, not only for the functionality of the product but for future reference and analysis. This data is generally less dynamic than events data, and its structure is less important to get right prior to product launch. A user’s location, for instance, can always be gleaned based on use, but events data is lost forever if it is not tracked at the time the event is executed. A users table holds data about individual users such as email address, location, first and last name, date of registration, and source of acquisition (e.g., this user was introduced to the product through paid search, from a link on a partner website, from an organic search, etc.).

The events library holds auditable behavioral records—trails of data artifacts—that are used to improve the product. The time spent constructing a thorough, instructive events library prior to launch is a fraction of the time needed to implement the same library post-launch. Data that isn’t stored can’t be recovered, and neither can the insight that said data might produce.

Storing data for a freemium product

Once an events library has been constructed, organizing the product’s data artifacts must be considered. While the implementation of a database schema is a technical task, the product team should provide input into how the tables that will be used for analysis—especially derived tables, which store aggregates based on nightly batch processing of metrics calculations—are designed. (See Figure 2.4.) Analysis is as much an element of product development as is user experience design, and therefore the product team should play a role in determining how the data used for analysis is organized.


FIGURE 2.4 A derived dashboard table formed from normalized events and users tables.

Whether or not an analytics stack is built as a Big Data system, some layer of relational data should exist for analysis purposes, for two reasons:

• Aggregated data stored in a relational database is easily accessed. Unstructured, distributed data can be queried through Big Data tools, but these techniques still require running MapReduce routines and, in most cases, are much slower than SQL queries.

• SQL is much older than the tools that allow for querying Big Data. As a result, hiring analysts familiar with SQL is much easier than hiring analysts who can query from Big Data systems. Likewise, the tools that cater to SQL-based systems are far more numerous than those that provide for Big Data querying.

The relational database tables that hold event data should be normalized, that is, optimized to reduce storage redundancy. Database normalization simply means that no column in a table has a partial dependency on another column and that the items stored in any given table are only stored once. Normalization is the optimal way to store raw data, but it isn’t ideal for tables that are used primarily for analysis, as normalization segments data in a way that almost always requires tables to be joined, which is the process of unifying data across tables based on a common characteristic.

To that end, tables holding data inserted into the database by the events library for the explicit purposes of storage should be normalized; tables derived from that data should not be normalized. At a minimum, the product, via the events library, should maintain raw data storage in an events table in a database. For the purposes of normalization, that events table should be able to reconcile user information with a users table via the user ID. Likewise, the events table should reconcile with tables that hold product information relevant to events. The number and character of tables holding raw data is entirely dependent on the technical (and not design) requirements of the product, however, and these decisions are not likely to be made by a product team. Derived tables, on the other hand, are driven entirely by product design requirements. Derived tables hold aggregated metrics about product performance and are used exclusively for reporting and analysis; these tables will power design decisions by storing actionable metrics specifically defined by the product team and thus require active participation from product stakeholders when being designed.
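The sketch below, using SQLite purely for illustration, shows normalized raw-data tables in which the events table carries only a user ID and must be joined back to the users table for analysis; the column names beyond those discussed above are hypothetical.

```python
import sqlite3

# Normalized raw-data tables: events reconcile with users via the user ID.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        user_id INTEGER PRIMARY KEY,
        registration_date TEXT,
        acquisition_source TEXT
    );
    CREATE TABLE events (
        event_id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(user_id),
        event_label TEXT,
        event_date TEXT
    );
""")
conn.execute("INSERT INTO users VALUES (42, '2014-01-01', 'organic search')")
conn.execute("INSERT INTO events VALUES (1, 42, 'Add Friend', '2014-01-05')")

# Analysis almost always requires joining the normalized tables back together.
rows = conn.execute("""
    SELECT e.event_label, e.event_date, u.acquisition_source
    FROM events e JOIN users u ON u.user_id = e.user_id
""").fetchall()
print(rows)  # [('Add Friend', '2014-01-05', 'organic search')]
```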

Two very useful derived table templates are the user state table and the dashboard table. A user state table stores a snapshot of every user’s state for each day users engage with the product; the table is composed of running totals for many metrics up to a given point. This table creates an easily auditable record of activity on a per-user basis and alleviates the need to calculate the data-intensive metrics that are the precursors to most analysis initiatives. In other words, the user state table eases the analysis process by making the most common metrics readily available.

The dashboard table is composed of aggregate metrics for the product’s entire user base, broken down by day, that are directly used in populating a product dashboard: a succinct collection of graphs, charts, and data points that provides broad insight into a product’s use and growth. The dashboard table contains aggregated information about product usage and revenue that can be used to assess the general health of a product. Product dashboards don’t facilitate analysis per se; they’re designed to give product teams and corporate management an all-encompassing view of a product’s health over time.

As the name implies, a derived table is calculated based on data in other tables. This is accomplished through a process known as extract, transform, load (ETL). This process produces aggregated metrics from raw data, usually in a nightly, automated routine, by applying business logic to unprocessed records of activity. An example of a function of the ETL process is storing a count of daily active users in the dashboard table. The process queries the number of session start events from the events table (extract); counts them, potentially removing duplicates and accounting for dropped sessions (transform); and stores that aggregation in the row of the dashboard table containing metrics for that day (load).
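A minimal sketch of that ETL step might look like the following, with the extract represented by a small list of raw session start events; the de-duplication rule and the dashboard table layout are illustrative assumptions.

```python
from datetime import date

# Extract: raw session_start events pulled from the events table.
raw_events = [
    {"user_id": 1, "label": "session_start", "date": date(2014, 1, 1)},
    {"user_id": 1, "label": "session_start", "date": date(2014, 1, 1)},  # duplicate
    {"user_id": 2, "label": "session_start", "date": date(2014, 1, 1)},
]

target_day = date(2014, 1, 1)

# Transform: de-duplicate users so each active user is counted only once.
daily_active_users = len({
    e["user_id"] for e in raw_events
    if e["label"] == "session_start" and e["date"] == target_day
})

# Load: write the aggregate into the dashboard table's row for that day.
dashboard_table = {target_day: {"daily_active_users": daily_active_users}}
print(dashboard_table)  # {datetime.date(2014, 1, 1): {'daily_active_users': 2}}
```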

While a product team shouldn’t be actively involved in deploying an ETL process, understanding the fundamentals of data warehousing—the process through which raw data is stored, processed, and aggregated—is a critical component of data-driven development. The data available for analysis directly affects the pace and quality of product iterations; if a product team isn’t abreast of how and when a product’s data artifacts will be converted into actionable metrics, the reliability of insight is called into question. Proper, consistent data administration is an essential component of the product management life cycle in a freemium product.

Reporting data for a freemium product

Reports comprise the portion of the analytics stack visible to the organization and thus receive the most scrutiny. Reporting is essentially the collection of tools that product teams and other employees interact with to retrieve, manipulate, and depict data. Depending on the organization’s size and priorities, reporting might be centralized internally or it might be distributed across all product groups; that is, product groups may be charged with handling their own reporting tools. Each approach has its unique benefits.

With centralized reporting, one group, usually called the analytics or business intelligence group, maintains the organization’s data warehouse and provides a set of unified reporting tools and interfaces to all product teams. The benefit of this is consistency and transparency: the calculation of all metrics is standardized because the reporting group defines metrics, and the tools (and thus the output of those tools) used for reporting are the same across all product groups. This consistency provides convenience: metrics from one product can easily be compared with those from another product because both sets of metrics are calculated and communicated similarly.

Embedded reporting, where an analyst sits in a product team and handles reporting exclusively for that team’s products, provides the benefit of flexibility: tools can be calibrated to fit the specific needs of the products being covered, and calculations can be tailored to the intricacies of the product’s user base behavior. Embedded, dedicated analysts can generally turn analysis requests around more quickly than can centralized reporting teams, as they have the benefit of familiarity with the product’s data set and peculiarities. Analysis from internal, team-specific reporting analysts may not be consistent with that from other groups, however, and the inconsistency could hinder management’s ability to compare product performance or to set a baseline standard for certain metrics.

A hybrid model between the two options exists wherein an organization-wide reporting team maintains the organization’s data and standards for calculating metrics, and embedded product analysts conduct analysis independently under those parameters. This model works well when a company has a large yet fairly homogeneous product catalogue that lends itself to metrics comparison.

The tools used in reporting are myriad and serve a number of purposes and tastes. The most popular class of reporting tool, and by far the easiest to become proficient in, is desktop spreadsheet software. Spreadsheets can usually store a fairly large amount of data in memory, are capable of performing reasonably sophisticated calculations, and quickly create visuals such as charts and graphs.

Other desktop-based analysis tools (i.e., tools that are not hosted on a server but reside on a user’s computer) range in sophistication from command-line statistical packages such as R to extremely powerful yet user-friendly solutions such as Tableau. The philosophical current unifying all desktop solutions is that the end user can define metrics and can save analysis output as local files capable of being physically or electronically distributed. The converse of this philosophy is the hosted analysis tool, an authenticated (i.e., password-requiring) Internet-based tool that predefines all metrics and allows analysis output to be shared only via an auditable link.

Desktop analysis software suffers a few critical drawbacks; for example, persistent database connections are not easily accommodated, meaning data must be actively pulled from the data warehouse each time it needs to be refreshed. This poses data consistency risks: a user could base an analysis on an outdated data set. Likewise, placing the responsibility of defining metrics calculation in the hands of report stakeholders could result in a situation where consumers of the report have defined the same metrics differently in their own analyses, which can lead to decisions being made with conflicting information. But the largest drawback is access; once a file has been saved, the organization that owns the data the analysis was based on no longer controls access to that data. By preventing data from being copied from the source (the data warehouse), an organization ensures that only approved employees will ever have access to it. This is only possible when an analysis tool is hosted.

Hosted analysis solutions have increased in popularity alongside the data-driven product development movement as product teams demand higher accessibility from their reporting software. Hosted analysis solutions boast the benefit of being generally platform-neutral: when an analysis is hosted on the web, the only requirement for viewing it is a web browser, which almost all smartphones and tablets have by default. Additionally, hosted analysis solutions maintain a persistent connection to the data warehouse (when an Internet connection is present), meaning that the report consumer can always assume the data being presented is current.

The downside of hosted analysis solutions is that they may not provide for data to be accessed offline. They’re also generally not very flexible; working with the data in a hosted solution is limited to whatever filtering options are provided by the software. Many product teams require the ability to manipulate data sets based on assumptions about changes to certain metrics—for instance, that one metric will shift in a certain way if the value of another increases by a known amount—which isn’t always possible with a hosted solution. Instead, hosted solutions may be incapable of doing more than communicating information through fixed-format dashboards.

Data-driven design

The benefit of the massive volumes of data provided by the freemium model’s scale is that products can be designed around user behavior. Users emit innumerable preference signals when they use products; by collecting and measuring those signals, the product team can ensure that the 5 percent receive the best possible experience from the product.

This feedback loop—develop-release-measure-iterate—is known as data-driven design. (See Figure 2.5.) The ubiquity of instrumentation in products means that a rich bank of data about the performance of product features is available very quickly after launch. Those features can thus be improved on as quickly as that data can be parsed and analyzed; what drives product development, then, is not only the intuition of the product team but the use patterns of users. And since those patterns can be quickly synthesized and acted upon, product development can be done continuously, in small increments, as opposed to being done in large batches.


FIGURE 2.5 The develop-release-measure-iterate feedback loop.

This concept is called continuous improvement: small, incremental product improvements are continuously implemented based on behavioral user data. Continuous improvement is the only effective means of implementing the freemium model: since only 5 percent of the user base contributes all product revenues, and that specific 5 percent of users is unrecognizable before it begins generating behavioral data, the product must be flexible upon launch and capable of being rapidly improved on to optimize the experience for that user segment. Long product improvement cycles reduce the number of hypotheses that can be tested in a given period of time and thus delay the eventual arrival of revenue-producing product enhancements.

Continuous improvement is facilitated by a product development methodology dominated by an emphasis on return on investment: product features and changes are prioritized based on the revenue they are expected to produce. Those revenue assumptions can be made only with a robust analytics infrastructure capable of processing the behavioral data of the very small proportion of the total user base that contributes revenue, which is likewise identified using that robust analytics infrastructure. In the freemium model, data drives development—it sets the standard by which features are prioritized and implemented.

The length of a product iteration cycle is generally left to the discretion of the product team; cycles can range from as short as one day, especially for web products, to as long as two weeks. The purpose of a post-launch product iteration cycle is threefold: to implement features that were conceived of prior to launch but deprioritized; to fix bugs that were discovered post-launch; and to implement new features at users’ requests. A list of features intended to be added to the product is known as a product backlog, elements of which should be prioritized in descending order by a mixture of projected return on investment and estimated time required to implement.

Methodologies differ on how the product backlog should be constructed, but the team should estimate the development length of each backlog item and take only those that can feasibly be accomplished within the predetermined iteration cycle, with the understanding that some very large features will be implemented across multiple cycles through multiple component backlog items.
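One possible way to weigh projected return on investment against estimated implementation time is sketched below; the feature names, ROI figures, and ranking rule (return per day of effort) are hypothetical and would vary by team.

```python
# Hypothetical backlog prioritization: rank items by projected ROI relative
# to estimated effort, then take only what fits within the iteration cycle.
backlog = [
    {"feature": "streamlined registration flow", "projected_roi": 8.0, "days": 4},
    {"feature": "new purchase bundle",            "projected_roi": 5.0, "days": 1},
    {"feature": "social sharing",                 "projected_roi": 3.0, "days": 6},
]

cycle_length_days = 10

# Descending order by return per day of development effort.
backlog.sort(key=lambda item: item["projected_roi"] / item["days"], reverse=True)

planned, remaining = [], cycle_length_days
for item in backlog:
    if item["days"] <= remaining:
        planned.append(item["feature"])
        remaining -= item["days"]

print(planned)  # ['new purchase bundle', 'streamlined registration flow']
```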

Product teams are incentivized to keep iteration cycles short because the develop-release-measure-iterate feedback loop is more difficult to interpret when multiple features are launched at once. If two features produce conflicting results—say, one produces an increase in revenue and the other produces a simultaneous decrease in revenue—the effects of each feature may be difficult to measure. The fewer changes implemented at once, the clearer the effects of those changes on revenue and the faster those effects can be extrapolated to existing assumptions about which features best serve the interests of the 5 percent (potentially changing the priority of the product backlog).

Data-driven design is an operational pillar of the freemium model, a fundamental component of its proper and successful implementation. The dynamics of the freemium model don’t support assumptions about user behavior, and they don’t support non-optimized experiences for the 5 percent. Data-driven design allows the most engaged users to be given the best possible experience by appraising their tastes and preferences through behavioral signaling. Building products on assumptions is antithetical to the freemium mindset, which accepts a very small percentage of revenue-generating users on the premise that those users will generate more revenue, through personalization and optimization, than all users in total would have in a paid access scenario. Data-driven design is the means through which this mentality manifests, but development before product launch is driven by a different concept: the minimum viable product.

The minimum viable product

The concept of the minimum viable product (MVP) was popularized by author and entrepreneur Eric Ries in his book, The Lean Startup (2011). Ries argues that startup companies, owing to a lack of funding with which to build products solely around assumptions, must directly address customer needs very early in their product life cycles. Reconciling a product’s features with the real needs of end users—as opposed to assuming what their needs are during prelaunch—is known as achieving product-market fit.

Ries contends that a startup should endeavor to launch a product as soon as it can capably address its core use case (i.e., as soon as the product is viable) and thus enter into the develop-release-measure-iterate feedback loop as quickly as possible. His point is that any development beyond what is needed for a minimum release is, by definition, based on assumptions and thus is not optimized for user taste. By releasing the MVP and orienting future development around customer behavior, the product team reduces the amount of development that isn’t driven directly by use patterns.

Given the freemium model’s fundamental dependence on data-driven design, the MVP methodology holds heightened significance for it. For most products, the needs of the user can be approximated fairly accurately through market surveys, focus groups, and user testing; because monetization is oriented toward average behavior in non-freemium products, a substantial amount of data, while helpful, isn’t necessary before understanding product–market fit. But monetization in the freemium model is driven by anomalies—by the 5 percent. As a result, a small sample of user data for a freemium product is useless: any valuable sample must be large enough to include, with confidence, a meaningful number of the 5 percent. And since the 5 percent isn’t identifiable within a freemium product before behavioral data has been collected, the only way to engender such a large sample is to release the freemium product.

The minimum viable product methodology is the cornerstone of the freemium model, not only because it prevents assumptions that may alienate the 5 percent at launch, but also because it represents a philosophical adherence to data as a significant (but not exclusive) influence on product design decisions. Releasing an MVP as a freemium product sets the tone for a product’s development pace and method of prioritization and is an excellent means of unifying the product team’s development ethos.

The economic justification of using the MVP methodology in the freemium model relates to the time value of money and the degree to which user behavior can be estimated in the specific product vertical. A complete product, built around perfect assumptions of how users will engage with it (especially the 5 percent), will monetize to its theoretical maximum degree upon launch; it’s the perfect product. But that complete product will have taken longer to develop than the MVP. For example, assume the MVP is ready six months before the launch of the final product. In the six months between MVP launch and launch of the completed product, the MVP is monetizing to some lower extent than the complete product will. And all the while during that intervening period, development is being driven by data (not assumptions), although no changes are made to feature prioritization in this case because the product team’s assumptions from the start were perfect.

So, even under circumstances when the product team’s assumptions about user behavior are perfect, the MVP methodology still produces more aggregate revenue than a model for which the product is not released until it is complete. And no product team’s assumptions can ever reliably be considered perfect; markets are dynamic, and user tastes change as a reaction to new products launching. Assumptions should be kept to an absolute minimum, especially with freemium products, where the users whose behaviors relate to revenue are a small fraction of the total user base. The only way to minimize assumptions is to release the MVP and drive development with the develop-release-measure-iterate feedback loop as early as possible.
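The arithmetic behind this argument can be made concrete with invented figures: assuming the MVP launches six months earlier and is improved into the complete product by the later launch date, the MVP path earns revenue during months in which the wait-for-completion path earns nothing.

```python
# Illustrative arithmetic only; all revenue figures are hypothetical.
mvp_monthly_revenue = 60_000        # assumed lower than the complete product
complete_monthly_revenue = 100_000  # assumed "perfect product" figure
head_start_months = 6               # MVP launches six months earlier
horizon_months = 12                 # comparison window from the MVP launch

mvp_path = (mvp_monthly_revenue * head_start_months
            + complete_monthly_revenue * (horizon_months - head_start_months))
wait_path = complete_monthly_revenue * (horizon_months - head_start_months)

print(mvp_path, wait_path)  # 960000 vs. 600000 over the same twelve months
```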

Because the feedback loop is necessarily driven by data, the MVP includes not only a minimum product feature set but also a minimum level of analytics capability. Launching a freemium product without an analytics infrastructure renders irrelevant the freemium model’s principal benefit, which is behavioral data. A freemium product must, at MVP launch, provide granular data about user behavior that can be used to pump the feedback loop. Without that data, development decisions will be assumptions-based; the complete product will be no more tailored to user tastes than it would have been had the MVP not been launched.

Methodologies like MVP are designed to help product teams produce better products. Releasing an MVP shouldn’t be a product team’s goal in itself—rather, it should be a means to an end that culminates in a higher-quality, better-performing, and more satisfying product.

Data-driven design versus data-prejudiced design

The importance of tracking multiple metrics under the minimum viable product model relates to gaining insight from product iterations: any iteration containing more than one significant change might introduce conflicts of effect, and those conflicts are impossible to identify without tracking multiple metrics. For instance, a change to a product’s registration flow might increase retention, but a change to its product catalogue might decrease retention to an even greater extent. If these changes are implemented in one iteration, with the net effect being negative, the entire iteration might be withdrawn without the registration flow change being retained. A broad portfolio of metrics could potentially catch this conflict; a single-metric focus (presumably on retention) would not.

The MVP model is an implementation of the data-driven design paradigm: data should be descriptive of the product, not of a single feature or outcome. Tracking only one metric doesn’t assist with product development; it merely describes the state of a product as it exists at that moment. Focusing on a single metric when developing a product is data-prejudiced design: it utilizes things that are already known to justify decisions that have already been made.

Prejudice in this sense refers to product decisions made based on intuition rather than behavioral data, with assumptions stemming from past observations that may or may not apply directly to the product being developed. The problem with resurrecting design decisions from past products is that user tastes and expectations change as markets evolve; an innovation to product design may produce a broadly cascading change in overall market behavior. In other words, innovations can cause extensive market disruption in a shorter amount of time than the average freemium product’s development cycle, so decisions made from even the immediately preceding product cycle may already be dated.

The point of using data to inform product decisions is to replace preconceived notions about development priorities with information about user behavior, so resources can be allocated to develop the highest-return features. Improving retention, when possible, should always be a goal of a product team; but in all circumstances, is any incremental improvement to retention the highest-return allocation of time? Most likely not, but if retention is the only metric being tracked, it is impossible to make an informed decision about prioritization.

Tracking only one metric introduces prejudices into the design process: prejudices about which outcomes are most efficacious in terms of ROI and prejudices about how those outcomes should be achieved. Very few experienced product teams would be unable to achieve some progress toward a discrete outcome (say, improving conversion) in a development sprint. And, in most cases, strong, surefire precedents exist for improving any given metric. But the purpose of data-driven product development is not to achieve just any outcome; it is to achieve the most beneficial possible outcome under existing resource constraints.

Data-prejudiced development is facilitated through a myopic focus on an immutable set of metrics that has been subjectively selected based on past product experience. Product development experience is obviously not a bad thing, but it can introduce baggage that is difficult to jettison, especially when that baggage was accumulated on a different platform than what the current product is being developed for. Data-driven product development requires three things:

1. A quantitative understanding of how a product produces revenue in order to facilitate prioritizing feature development. Efficient resource allocation requires knowing where resources do the most good, which is impossible to determine without a model of revenues that takes into account the dynamics of a product’s use.

2. A high level of instrumentation and robust analytics for identifying the exact causes of changes in product metrics. The net effects of product iterations on multiple metrics can be assessed only when those mechanics are thoroughly measured. Broad, nebulous metrics are not enough: a full tracking library, capable of recording the individual actions of each user, needs to produce an auditable historical record of behavior.

3. The capacity to conduct analysis. Large volumes of data are impossible to parse with dashboards: an analysis group must have the analytical expertise to perform statistically authoritative investigations into product usage.

Data-driven product development is easiest in an environment that places a premium on data-driven decision-making; the hardest part of avoiding data-prejudiced design may be escaping an organization’s over-reliance on intuition. Product teams may likewise bristle at the notion that their experience and insight should be cast aside during the product development cycle in favor of usage metrics. But when considering the vast extent to which intuition influences product development—in deciding which products to develop, which platforms to develop for, and which people to staff on a project—data-driven design can be seen as, among other things, a hedge against poor judgment. And data-driven design is not implemented through broad, extensive feature deployment but through incremental development and metrics-based testing.