Establishing Estimates - Scrum Shortcuts without Cutting Corners: Agile Tactics, Tools, & Tips (2014)


Chapter 5. Establishing Estimates

Like it or not, the need to provide estimates for software projects isn’t going to disappear. Thankfully, the typical estimation burden that afflicts many teams can become significantly reduced should they choose to adopt the de facto standard for estimating Scrum projects: relative estimation.

The following three shortcuts introduce you to the concept of relative estimation and provide guidance on how to transition from traditional time-based estimation.

Shortcut 13: Relating to Estimating introduces the elegance of the relative estimation approach. Shortcut 14: Planning Poker at Pace provides a range of tips and tricks to ensure efficient Planning Poker sessions. Finally, Shortcut 15: Transitioning Relatively offers advice to assist teams in transitioning from time-based estimation to relative estimation.

Shortcut 13: Relating to Estimating

Like many of you reading this shortcut, I have spent an inordinate number of hours watching time whittle away during long-winded estimation sessions in the quest to meticulously break down nebulous requirements into detailed tasks (on very long and very stripy Gantt charts). But worse than the time wasted actually creating these Gantt charts was the significant time spent reworking them on almost a daily basis as the inevitable changes to scope—not to mention adjustments to estimates—flooded in.

It wasn’t long before I realized that the only good to come out of this situation was that we now had some interesting-looking stripy wallpaper to decorate the office with!

And thus began my epic quest to find a more effective approach to help conquer the dark art of estimation. After much searching, I stumbled across what I consider to be the most effective technique for estimating emergent requirements: relative estimation. The elegant simplicity of this new approach finally convinced me that there was in fact some light at the end of the long, dark estimation tunnel.

Estimation Pain

Before jumping into the ins and outs of relative estimation, let’s go right back to basics and consider why estimation is so hard and painful (especially in our software world).

First, we humans are not naturally great estimators. We tend to be either optimists or pessimists, and very rarely realists. I don’t even need to back this assertion up with statistics because I am confident that anyone reading this paragraph will agree!

In addition, especially in the world of software, there are numerous unknowns: technology constantly changes and requirements are emergent. There are many moving parts as well as intricate dependencies between tasks (and between people), and that’s not even throwing in external environmental factors!

Why Bother Estimating?

If our estimates carry such a significant chance of being inaccurate, then why bother estimating at all? Well, I believe that even if our estimates aren’t always correct, there are still very important reasons to estimate, and I’m going to talk about two of them.

The first reason is to help us make trade-off decisions. For example, let’s say that I were to ask a couple living in San Francisco whether they would prefer a vacation to Australia or a vacation to Mexico—which one would they choose? Sure, they might have a preference for one or the other, but two other big factors come into play—time and budget. While they might prefer a trip to Australia, let’s say (yes, I’m a little biased), they might not have enough accrued leave (time) to justify the long trip or enough budget (as the Aussie dollar is pretty strong at time of writing!). So how do they calculate whether they can afford to take this particular trip? Well, they simply estimate how long the trip might take and how much the trip might cost. The same principle applies to requirements that make up the wish-lists for our software products.

The second reason is to help set goals. If you’re anything like me, when you set a deadline for yourself, you do everything in your power to make sure you hit it. Sure, there will be times when your estimates are way off—and it shouldn’t necessitate unsustainable heroics—but the act of estimating and setting targets can certainly help you to maintain focus and maximize results.

Now that you’re convinced that estimation is a worthwhile exercise, we can dive right into the details of relative estimation.

Explaining Relative Estimation

Relative estimation is applied at a product backlog level rather than at a sprint backlog level. Sprint backlog items can be estimated in traditional time units (such as hours) primarily because the period of time being estimated for is a single sprint (typically a matter of days rather than months) and the requirements will be defined in enough detail. On the other hand, product backlog items (PBIs) are more loosely defined and may collectively represent many months of work, making time-based estimation very difficult, if not impossible.

Relative estimation applies the principle that comparing is much quicker and more accurate than deconstructing. That is, instead of trying to break down a requirement into constituent tasks (and estimating these tasks), teams compare the relative effort of completing a new requirement to the relative effort of a previously estimated requirement. I’ll use an analogy to demonstrate what I mean.

The Stair Climber

Let’s say we have four buildings. Three of them are modern, while the other is older and somewhat decrepit. They are all different sizes. We are asked to estimate how long it will take us in total to walk to the top floor of all the buildings using the stairs (see Figure 5.1).


Figure 5.1. How do we estimate how long it will take us to walk up all of these buildings using the stairs?

Having never completed an exercise like this, we have some unknowns to consider. For example, we are not sure how physically fit we are or what types of obstacles we might need to negotiate in the stairwells.

So, what do we do? Well, we could take the time to count every floor of every building and then estimate how long it might take us to go up the counted flights of stairs despite not knowing our fitness or the state of the stairwells. This estimate not only will take considerable time but also will be grossly inaccurate if our assumptions are way off the mark.

Let’s explore another option. First, let’s classify the buildings into what we’ll call “effort classes,” with the smallest building considered a 10-point class. The choice of 10 is arbitrary—it could have been 100, 1,000, or any number for that matter (you’ll soon see why it makes no difference). We take a look at building 2, and we think it looks about three times the size of our 10-point building; therefore, we classify it as a 30-point class. Our third building (the older one) is somewhere in the middle, so we would typically call it a 20-pointer, but because of its aging state, there may be more risks and impediments getting up the stairwell, so we take these factors into account and give it a point value of 25. Our final massive building is about twice the size of our second building (the 30-pointer), so it becomes a 60-point class building (see Figure 5.2).


Figure 5.2. We can classify our buildings in relative terms by making some quick comparisons.

Note that these points are simply relative markers to help us compare. The numbers do not relate to a specific unit of size or time—they are just classification markers.

This little exercise allows us to quickly estimate the effort of our four climbs—not in absolute terms but in relative terms. This information forms the first piece of the puzzle. We might now have an idea of the relative effort of climbing one building compared to another, but we still need to work out an estimate for the duration of the overall exercise.

What next? Well, how about we first invest a little time to actually test our fitness and check the state of an indicative stairwell? Let’s time-box this experiment to 10 minutes (our nominated sprint duration) and see how far we manage to get (see Figure 5.3).


Figure 5.3. After 10 minutes of actual stair climbing, we get halfway up the 10-point building.

To the stairwell we go and, after 10 minutes, we find ourselves halfway up building 1 (the 10-point building). With this information, we can work out what our velocity is, or, in other words, the amount of work (in points) that we are able to achieve within our 10-minute sprint. Based on the fact that we climbed halfway up the 10-point building, we can say that our velocity is 5 points per sprint, or more succinctly, 5 points.

“But we need to know how long it will take us to reach the top of all four buildings,” I hear you say? Well, how about some simple extrapolation. Let’s start by totaling the amount of work to do by adding up the relative sizes of the buildings: 10 + 30 + 25 + 60 = 125 points.

We then take our velocity (remember, it was 5 points) and, using some simple math, we divide the total 125 points by our 5-point velocity to give us 25 sprints. We know that each sprint is worth 10 minutes, so we have 250 minutes so far. We can then add another 50 minutes (20 percent of our estimated time) for some extra buffer (for catching our breath and for elevator rides back down), and voilà, we can give a rough estimate of 300 minutes, or 5 hours, to complete our exercise!
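The extrapolation can be sketched in a few lines of Python, using the numbers from the stair-climbing example (a sketch of the same arithmetic, nothing more):

```python
# Relative sizes (in points) assigned to the four buildings.
building_points = [10, 30, 25, 60]

# Observed velocity: halfway up the 10-point building in one 10-minute sprint.
velocity = 5          # points per sprint
sprint_minutes = 10
buffer_ratio = 0.20   # extra time for catching our breath and elevator rides down

total_points = sum(building_points)            # 125 points of work
sprints_needed = total_points / velocity       # 25 sprints
raw_minutes = sprints_needed * sprint_minutes  # 250 minutes
estimate = raw_minutes * (1 + buffer_ratio)    # 300 minutes

print(f"{estimate:.0f} minutes ({estimate / 60:.0f} hours)")  # 300 minutes (5 hours)
```

The same shape of calculation applies to a product backlog: total the story points, divide by velocity, multiply by the sprint length.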

Software Relative Estimation

Let’s apply this new concept to our software projects. Instead of estimating our stair-climbing prowess, we need to estimate the effort required to complete PBIs.

First, we should determine the effort required to complete a PBI using three factors: complexity, repetition, and risk (see Figure 5.4).


Figure 5.4. The three factors that determine how much effort is required to complete a PBI.

Let me explain these factors with a few examples: we may have a PBI that requires the design of a complex optimization algorithm. It may not require many lines of code but instead a lot of thinking and analysis time.

Next, we may have a PBI that is user-interface focused, requiring significant HTML tweaking across multiple browser types and versions. Although this work is not complex per se, it is very repetitious, requiring a lot of trial and error.

Another PBI might require interfacing with a new third-party product that we haven’t dealt with before. This is a requirement with risk and may require significant time to overcome teething issues.

When sizing up a PBI, it is necessary to take all of these factors into account.

Another point to note is that we don’t require detailed specifications to make effort estimates. If the product owner wants a login screen, we don’t need to know what the exact mechanics, workflow, screen layouts, and so on, are going to be. Those can come later when we actually implement the requirement within a sprint. All we need to know at this early stage is roughly how much effort the login function is going to require relative to, let’s say, a search requirement that we had already estimated. We could say that if the search function was allocated 20 points, then the login function should be allocated 5 points on the assumption that it will require approximately a quarter the effort.

Velocity

We discussed the core purpose of the velocity metric, but there are a few other important factors to be aware of:

• Velocity is calculated by summing up the points of all PBIs completed in a sprint.

• The most common approach for handling partially completed PBIs is to award points only to the sprint in which the PBI actually met its definition of done (see Shortcut 11).

• Although a velocity can certainly be generated with only one sprint, the reality is that it won’t necessarily reflect the longer-term average because velocity tends to fluctuate from sprint to sprint. This fluctuation can happen for a number of reasons, including the impact of partially completed stories, the impact of impediments, and team member availability (or lack thereof), to name just a few. Using an average velocity or rolling average of the last three sprints is a simple option for calculating a more indicative velocity. For an even more comprehensive and accurate calculation of velocity, I recommend you use Cohn’s free velocity range calculator,1 but note that to use this tool, you need to have data from at least five sprints.

1. Mike Cohn’s free velocity calculator can be found at www.mountaingoatsoftware.com/tools/velocity-range-calculator.

• Velocity is reliant on maintaining the same team makeup and the same sprint length—otherwise, calculating velocity is much harder.
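The three-sprint rolling average takes only a couple of lines; the sprint history below is hypothetical, purely for illustration:

```python
def rolling_velocity(completed_points, window=3):
    """Average the points completed in the most recent `window` sprints.

    Only PBIs that met their definition of done count toward a sprint,
    so each entry is the sum of points for fully completed PBIs.
    """
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# Hypothetical sprint history (points completed per sprint).
history = [18, 22, 17, 25, 21]
print(rolling_velocity(history))  # averages the last three sprints: 21.0
```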

Relative Estimation in Practice

To put relative estimation into practice, many teams play a nifty game called Planning Poker. Shortcut 14 explains the mechanics of this effective technique and offers a selection of tips and tricks to make it as effective and efficient as possible.

Speaking of practice, it is important to understand and accept that estimation is hard. Very hard. Software development is burdened with high levels of complexity (and many unknowns), yet it requires perfection for the software to compile and work. Because of these factors, no estimation approach is going to be foolproof. However, I truly believe that relative story point estimation is, at the very least, just as accurate as any alternative while offering the advantage of being far simpler and more elegant.

Shortcut 14: Planning Poker at Pace

With relative story point estimation now in your arsenal, you finally have a weapon to wield in the battle against the forces that make software estimation so painful. Relative estimation is simple, makes sense, and is way more fun than other long-winded and misleading estimation techniques!

The technique we use to conduct relative estimation is a game invented by James Grenning and popularized by Mike Cohn called Planning Poker. It is based on a method developed in the 1970s called Wideband Delphi that evolved from an earlier version (Delphi) invented in the 1950s by the RAND Corporation. This approach utilizes broad insight from a group of cross-functional experts to arrive at an estimate that is typically more accurate than one derived from a single person.

Setting Up the Game

The technique is called Planning Poker because teams literally play with cards. But, instead of spades, hearts, clubs, and diamonds, they use cards representing story point values. A common point system that is utilized for these values is Mike Cohn’s modified Fibonacci sequence:

½, 1, 2, 3, 5, 8, 13, 20, 40, 100, ∞ (infinity)

In case you were wondering, I believe a fair translation for the infinity card would be something along the lines of, “Whoa! That is way too big to estimate—it definitely needs splitting before any meaningful estimation can occur.”

I’m a fan of using the modified Fibonacci sequence because it helps to reflect the greater amount of uncertainty that exists as requirements get larger (see Figure 5.5) while also avoiding the perception of precision (hence the change from 21 to 20, 34 to 40, and so on). That being said, it does come with a potential problem, especially with new teams. If you recall from Shortcut 13, the point values should not correlate to a specific time or distance unit. The issue when using Fibonacci numbers is that people can get into the bad habit of equating 13 points to 13 hours, for example.


Figure 5.5. The modified Fibonacci sequence is an approximation of the logarithmic “golden spiral” where greater uncertainty exists as requirements get larger.

To combat this situation, some teams adopt more abstract classifiers, such as T-shirt sizes:

XS, S, M, L, XL, XXL

I personally don’t use this extra layer of abstraction because it requires the extra step of mapping to a numeric value to enable forecasting during release planning—remember how in Shortcut 13 we calculated the time it would take to climb the buildings by dividing by the numeric velocity value?

Planning Poker Mechanics

Before we proceed with some Planning Poker hints and tips to ensure that your sessions don’t become late-night marathons, let’s run through a brief overview of the mechanics of the game.

The session proceeds as follows:

1. The product owner describes the top PBI before the team is invited to ask questions to clarify the scope and desired benefits. Any changes to the PBI description or acceptance criteria are captured progressively.

2. Once ready to estimate, each team member places, facedown on the table, the card he or she feels best represents the effort required to complete the PBI.

3. Once they have chosen their cards, all team members simultaneously flip their cards face up.

4. When there is a lack of agreement, the holder of the high card has a short debate (a few minutes at most) with the low-card representative under close observation from the rest of the team.

5. With the new evidence uncovered during the debate, the team returns to step 2.

6. Steps 2 through 5 are repeated until a general consensus is reached for the PBI.

7. Once the PBI has been assigned a value, the process starts again from step 1 for the next PBI in the product backlog (see Figure 5.6).


Figure 5.6. The flow of a typical Planning Poker cycle.
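The cycle in steps 1 through 7 can be sketched as a simple loop (an illustrative sketch only; the pre-recorded rounds stand in for the team’s live estimates and debates):

```python
DECK = [1, 2, 3, 5, 8, 13]  # the trimmed deck suggested later in this shortcut

def play_round(cards):
    """One round: cards are revealed simultaneously; consensus means
    everyone played the same card."""
    assert all(card in DECK for card in cards)
    return len(set(cards)) == 1

def planning_poker(rounds):
    """Replay pre-recorded rounds until one reaches consensus.

    `rounds` is a list of rounds, each a list of the cards the team
    members played; between rounds, the high- and low-card holders
    would debate (step 4) before everyone re-estimates.
    """
    for cards in rounds:
        if play_round(cards):
            return cards[0]  # the agreed story point value
    return None  # still no consensus

# Two rounds: the first splits 5 vs. 8; after a short debate, all agree on 8.
print(planning_poker([[5, 8, 8, 5], [8, 8, 8, 8]]))  # prints 8
```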

Note that the ScrumMaster acts as the facilitator throughout and is not involved in the actual estimation.

A key requirement is that everyone must estimate the effort for the entire PBI as opposed to just estimating the bit that pertains to his or her specialty. For example, a programmer needs to estimate the effort of not just the coding work but also the testing and deployment work—an interesting concept, right?

So, how does everyone participate in estimates of work that is not in their primary area of expertise? Well, they need to base the combined estimate on experience. Even though they might not have done much of the testing, they will remember what was involved when a similar PBI was implemented in the past. They will recall, for example, that even though the programming wasn’t too tricky, the testing was a nightmare because of the various integration points with the third-party payment system they were using. Individuals often assume that they are estimating only for their specific function, which is a common reason that new teams start with very disparate estimates (so beware of this pitfall).

When to Play Planning Poker

The first Planning Poker session should take place after the initial product backlog is compiled, and subsequent games can be played whenever a new PBI is added to the backlog or in the rare situation that reestimation is called for. Reestimation should be required only when a whole class of PBIs suddenly becomes smaller or larger (relatively speaking). When does this situation occur? Let’s say that a set of your PBIs rely on integration with a third party. Their API has been flimsy, at best, and you know that workarounds, not to mention a whole heap of extra testing, will need to be applied. Let’s say they finally release a brand-new, super-improved interface that removes the need for workarounds and additional testing—all of a sudden, any PBIs reliant on it become relatively smaller.

Get the Team Warmed Up

To get the entire team ready for a Planning Poker session, it’s important to circulate a small number of reference PBIs that correspond to the point levels in the card deck (see Shortcut 15).

This process calibrates everybody’s yardsticks nice and early so that the team will be able to immediately recall what a 13-point PBI is and what a 1-point PBI is (as well as everything in between). Send these references out a few days before the session, followed by a quick reminder on the morning of the session.

Big Cards for Big Occasions

I typically advocate removing the big cards (20, 40, 100, infinity) as well as the ½ card from the Planning Poker deck. The rationale behind this decision is that you effectively cut the total choice of cards down to six, and fewer options equals less analysis paralysis. Further, it discourages product owners from bringing to the table any stories that are too large and nebulous.

That being said, Mike Cohn (2011) raises a valid scenario in which the big cards will come in handy:

Suppose your boss wants to know the general size of a new project being considered. The boss doesn’t need a perfect, very precise estimate. Something like “around a year” or “three to six months” is enough in this case. To answer this question you’ll want to write a product backlog. You want to put no more effort into this than you need to. Since the boss wants a high-level estimate, you can write a high-level product backlog. Big user story “epics” that describe large swaths of functionality will suffice. . . . And these epic user stories can be estimated with the large story point values.

If nothing else, using these big numbers will indicate to all involved that there is a high level of uncertainty and that the best estimate that can be offered at this early stage is one of general magnitude rather than specific duration.

Don’t Double Up

As often happens, a group of PBIs will inevitably rely on some of the same important research or technical plumbing. If this is the case, ensure that the same work is not estimated multiple times. This underlying work should be incorporated into the estimation of only one of the PBIs, not into all of them. Which PBI you decide to incorporate these extra tasks into is up to you, but try to select the one you anticipate will be implemented first. Although this advice might sound obvious, you may find that your team assumes they are estimating PBIs in isolation unless you make this point explicitly clear.

Reaching a Consensus

After some discussion, the team will play its first round of Planning Poker. If it doesn’t result in consensus, I recommend asking the following questions:

• Have you considered all the necessary functions and not just individual specializations?

• Do you have a hard or soft opinion about that score? If it is soft, are you comfortable switching your value?

• Is anyone on the borderline between two values? If so, are you comfortable moving to the more popular adjacent score?

If there is no consensus after asking these questions, it is time to facilitate a quick debate between a representative of the high card and a representative of the low card. The trick here is to make sure the debate doesn’t become a drawn-out, granular technical discussion. The simple message to the debaters is to base their arguments on the relative comparison of the PBI being estimated to the reference PBIs (rather than on the complexities of the potential technical implementation). Remember from Shortcut 13 that relative estimation focuses on comparing rather than deconstructing.

Finally, if the team simply cannot reach an agreement between two adjacent values, you should err on the side of caution and use the higher value.

Phones Can Help

Unless you’re a hard-core disciplinarian, you will find people occasionally checking their phones for messages. If they’re playing Angry Birds,2 then you’ve got bigger problems! Instead of being the scolding teacher, make their devices part of the session. Get the team to download one of the legitimate, licensed Planning Poker apps and use their devices instead of the traditional cards. Not only does playing Planning Poker on a phone make it more difficult to play around with Angry Birds, it also saves you from that laborious task of sorting through the cards at the end of the session!

2. To learn more about Angry Birds, go to http://en.wikipedia.org/wiki/Angry_Birds.

It’s All about Benefits

After you’ve played some Planning Poker and used the advice from this shortcut, the estimation benefits should be obvious. But what happens when your boss calls you into his office to discuss concerns about the team playing card games on the company’s dollar? Well, here are some extra benefits you can explain to transform him into a Planning Poker advocate:

• The ability to rapidly estimate long-term product backlogs without requiring detailed specifications and complicated dependency mapping.

• The ability to provide broader insight from a diverse set of functional experts to ensure that estimates aren’t being padded or underbaked.

• The ability to leverage the knowledge obtained from completing legacy work.

• The ability for the team to actually have some fun while conducting a traditionally mundane and frustrating task. Planning Poker sessions are interactive, lively, and much faster than traditional estimation marathons!

Remember Parkinson’s Law

Speaking from experience, if the ScrumMaster doesn’t control the pace and focus of Planning Poker sessions, they will become endless talkfests or could even turn into a pitched battle (personally, I don’t know which one is worse!).

Time-box your discussions and always remember Parkinson’s law: “Work expands so as to fill the time available for its completion” (Parkinson 1993).

I assure you that if you apply the suggestions given in this shortcut, your Planning Poker sessions will not only become punchier (as in faster, not more violent) but everyone will enjoy them a great deal more!

Shortcut 15: Transitioning Relatively

Hopefully, if you are reading this shortcut, it means that you’re now convinced that relative estimation is a great way to move forward. The lights are dimmed, the sunglasses are on, and the cards are ready to be dealt for your inaugural Planning Poker session (see Shortcut 14). But hang on—where do you start? What does a 1-point user story actually mean? How about a 13-pointer? What is the best way to initially calibrate so that the team has a foundation to work from? If these are the questions that are running through your mind, then please read on.

An Approach

One calibration approach that some teams like to use is to identify the smallest user story in the product backlog and designate it to be the initial ½-point story (assuming they are using the Fibonacci sequence). Once this initial baseline has been confirmed, the team works its way down the list of user stories and allocates 1 point for any story that is roughly double the ½-pointer, 2 points for any story that is roughly double a 1-pointer, and so on.

This approach can certainly work, and it seems straightforward on the surface, but the reality is that it can end up taking considerably more time than you might expect. First, the team has to actually traverse through the entire product backlog to identify the starting contenders, and second, the team needs to reach a consensus as to which user story should become the actual initial baseline.

Bear in mind that your team is new to this process, so it helps to reduce as much ambiguity as possible. It is for this reason that I like to calibrate story points by utilizing work completed in the past.

Using Historical Work

The idea behind leveraging historical work is to help create mappings between known quantities (old completed work) and the new Fibonacci story point values (or whatever other scale you choose to use).

Using historical work offers a team two significant advantages: familiarity and consistency.

Familiarity

It is obvious that any team will be more familiar with work that they completed previously than with work they are going to do in the future. This familiarity proves to be particularly helpful when playing Planning Poker (see Shortcut 14) because instead of comparing future unknown work to other future unknown work (similar to the first approach described earlier), teams can compare future unknown work with past known work. Not only does this approach remove an element of ambiguity, but also, the speed at which these comparisons take place will be much quicker because the team can more readily recollect the historical work.

Consistency

When historical work forms the set of benchmarks (for the various point values in the Planning Poker deck), these same benchmarks can be used across any and all projects that the same team works on down the track. This early work will naturally speed up future proceedings because the initial benchmarking process is required only once (as opposed to whenever a new product backlog is formulated and presented).

Creating the Mappings

Five steps are required when creating the mappings between the historical work and the new point scale. Steps 2 and 3 in the following process are inspired by James Grenning’s article “Planning Poker Party” (2009), which describes a similar approach (using a new product backlog rather than historical work).

Step 1: Identify

Identify a recent project that the same team (or at least most of the team) was involved in. List the discrete pieces of work, and write them on index cards (if they are in digital form). If they are not already in the user story format (see Shortcut 10), they should be converted to ensure comparative consistency moving forward (see Figure 5.7).


Figure 5.7. Transcribe any digital requirements onto index cards.

Step 2: Sort and Stack

For this next step, you need a nice, big table and the development team. Starting with the first index card, read the user story out loud and place it on the table (see Figure 5.8).


Figure 5.8. Read the first story out loud and place it on the table.

Next, take the second card and ask the team whether they recall it taking more, less, or the same amount of effort as the first card (see Figure 5.9). If it took less effort, place it to the left of the original; if it took more effort, place it to the right; and if it took roughly the same amount of effort, stack it on top of the first. If there is any contention or confusion, “burn” the card (not literally, please).


Figure 5.9. Read the second story and, as a group, decide whether it took more, less, or the same amount of effort as the first story.

Then take the next card and place it either to the left of both cards (if it took less effort than both), to the right of both cards (if it took more effort than both), between the cards (if its effort was somewhere in the middle), or on top of one of the cards (if it was roughly the same effort). Repeat this process for all of the index cards (see Figure 5.10).


Figure 5.10. Read the next story and, as a group, decide where it fits in relation to all of the stories that came before it.
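Mechanically, the sort-and-stack procedure amounts to ordering cards by remembered effort, with equal-effort cards sharing a stack. Here is a sketch, where the `effort` callable and the story names stand in for the team’s group recollection (all hypothetical):

```python
def sort_and_stack(cards, effort):
    """Place each card left, right, or on an existing stack based on the
    team's recollection of relative effort (the `effort` callable stands
    in for that group judgment)."""
    stacks = []  # ordered left (least effort) to right (most effort)
    for card in cards:
        e = effort(card)
        for stack in stacks:
            if effort(stack[0]) == e:  # same remembered effort: stack it
                stack.append(card)
                break
        else:
            stacks.append([card])
            stacks.sort(key=lambda s: effort(s[0]))  # keep left-to-right order
    return stacks

# Hypothetical stories, with the team's remembered effort as a plain number.
remembered = {"login": 2, "search": 5, "report": 5, "audit": 8}
stacks = sort_and_stack(list(remembered), remembered.get)
print(stacks)  # [['login'], ['search', 'report'], ['audit']]
```

In the real exercise, of course, there is no numeric `effort` up front; the team makes each more/less/same judgment by discussion, and any contentious card gets “burned.”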

Step 3: Sizing Up

At this stage in the process, there should be a number of sequential card stacks (of varying sizes) on the table. Please note that I use the word stack loosely, as you can certainly have just one card in a stack in this exercise. The stack at the very left of the table will therefore contain the cards representing the smallest user stories, and the stack representing the largest stories will be located at the very right end of the table.

Now, it’s time to play some Planning Poker (see Shortcut 14). Automatically assign all cards in the leftmost stack a 1-point value (see Figure 5.11). As an aside, I like to reserve the smallest ½-point value for trivial changes, such as label adjustments or textbox alignments, so unless your smallest stack is made up of these tiny requirements, consider starting with 1 point rather than ½ point.


Figure 5.11. With all the sorting done, the smallest stories will be on one side of the table and the largest stories will be on the other.

Starting with a representative from the second-smallest stack (directly to the right of your new 1-pointers), determine the relative effort that was required to complete it compared to a representative from the smallest stack (for example, it may be three times as much effort).

As each stack gets classified, place a card representing its relative point value above it for quick recollection, so using our example, the second stack would be tagged with a 3-pointer card.
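One way to think about tagging each stack is to multiply out its relative effort and round to the nearest card in the deck. This is a sketch under my own assumptions; the nearest-card rounding rule is mine, not something the process prescribes:

```python
DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def tag_stack(multiplier, base_points=1):
    """Tag a stack with the deck card closest to its relative effort,
    expressed as a multiple of the baseline (leftmost) stack."""
    target = multiplier * base_points
    return min(DECK, key=lambda card: abs(card - target))

print(tag_stack(3))  # a stack three times the 1-point baseline gets the 3 card
print(tag_stack(6))  # six times the baseline rounds to the nearest card: 5
```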

Step 4: Subclassify

With any luck, your Planning Poker session ran smoothly (thanks to the tips that you picked up in Shortcut 14), leaving you with several stacks of user stories with corresponding point values.

In a perfect world, there will be a stack that corresponds to each value in the point system that you’re using (see Figure 5.12), but do not worry if this isn’t the case. At the end of the day, so long as you have a couple of benchmark stories, you can at least get started.


Figure 5.12. After playing Planning Poker and assigning values, you might have stacks that look like this.

If you happen to be spoiled for choice by having stacks containing a number of stories, then you can further classify them into subcategories that relate to different areas of focus (see Figure 5.13). For example, you could end up with three different 5-point stories. Even though they are grouped together (based on similar effort), they could all have very different focal points. Story 1 could have data optimization complexities, story 2 could have more of a user-interface focus, and story 3 could require integration with a third-party product. By subclassifying in this manner, the ability to compare apples to apples (when estimating new product backlogs) becomes a reality.


Figure 5.13. You can consider subclassifying stories in the same stack by their different focal points.

Step 5: Final Filter

The final step in this calibration exercise is to select one representative from each stack (or substack if you subclassified as explained in step 4). These final champions will become the reference stories that are used to help start off future Planning Poker sessions (on new product backlogs). Considering that the stories have already been classified, the selection of the reference stories can be based on choosing a random story from the stack, or if you wish to be more discerning, the team can select stories that carry the most familiarity.

Keep Up Your Recycling

Although the initial calibration exercise may be complete, I recommend that you embrace and continue your new recycling practices. At the end of every subsequent project, add any completed stories to the benchmark collection to continuously build up a rich library of stories that are not only familiar but also easily relatable to a variety of different requirements.

There you have it. You are now equipped with a process to leverage historical work to calibrate some relative benchmarks. By utilizing work completed in the past, the team gains the added benefits of familiarity and consistency, making the transition to relative estimation smoother and less ambiguous.

Wrap Up

The three shortcuts included in this chapter focused on a selection of tactics, tools, and tips to help your team understand and transition to the world of relative estimation. Let’s recap what was covered:

Shortcut 13: Relating to Estimating

• Fundamental reasons for estimating requirements

• An explanation of relative estimation using some easy-to-understand metaphors

• Factors that influence a meaningful velocity

Shortcut 14: Planning Poker at Pace

• The mechanics of a Planning Poker session

• Tips to ensure that the team is warmed up and ready to launch into a game of Planning Poker

• Additional spinoff benefits that Planning Poker offers

Shortcut 15: Transitioning Relatively

• The benefits of using historical work to calibrate some initial reference points

• How to create mappings between legacy requirements and story points

• The process of generating a broad selection of meaningful reference points for future estimation sessions