
Chapter 4. Requirement Refinement

Thankfully, we no longer rely on the old practice of crystal-ball gazing to define an upfront “spec.” That being said, the communication and validation of requirements can still be a challenging matter.

This chapter’s three shortcuts guide your team in forming and monitoring both its requirements and the associated definition of done (DoD).

Shortcut 10: Structuring Stories provides recommendations on how teams may split their sprint-ready requirements into actionable tasks. Shortcut 11: Developing the Definition of Done offers food for thought to help the Scrum team establish and evolve the DoD. Finally, Shortcut 12: Progressive Revelations focuses on how your team can eliminate waste by conducting walkthroughs.

Shortcut 10: Structuring Stories

Although Scrum does not prescribe the use of any specific format to articulate the user requirements that appear as product backlog items (PBIs), I think it is safe to say that the user story format popularized by Mike Cohn’s seminal book, User Stories Applied (2004), has become the de facto standard.

I would hazard a guess that anyone reading this book has read User Stories Applied and/or worked with the now ubiquitous format:

“As a . . . I want to . . . so that I can. . . .”

This shortcut isn’t going to retread old ground and explore topics such as the benefits of user stories or how to split big stories into smaller stories—these topics are covered perfectly well by Cohn in User Stories Applied, so go check it out if you haven’t already. Instead, I want to delve deeper into some user story aspects with less commentary surrounding them, such as the relationship between tasks and stories as well as how to handle “technical stories.”

Breaking It Down

Apart from the typical, sprint-ready user story (see Shortcut 11), there are two other commonly used constructs to help us piece together requirements: epic stories and tasks.

Starting with the largest, Cohn (2009) describes an epic story as “A user story that will take more than one or two sprints to develop and test.” When you are initially formulating your product backlog, the reality is that most of the requirements may well be more epic in nature. Remember that in Scrum, requirements are emergent, so it is not necessary to have detailed user stories for every requirement right from the beginning—only the top-priority items that are going to be worked on in the next one or two sprints need detailed user stories.

To split epics into sprint-ready user stories (our next level), I again refer you to Cohn’s User Stories Applied (2004), where you will read about a range of options, including splitting by subactivity, subrole, data boundaries, and operational boundaries (among several others).

Tasks make up the third and most granular level, and once they are identified and defined (hopefully by using some of the suggestions we discuss in the next section), the team is ready to start building some great software!
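
If it helps to visualize this three-level hierarchy as data, here is a minimal sketch in Python; the structure and field names are purely illustrative (Scrum prescribes nothing of the sort), but they make the epic-to-story-to-task relationship concrete.

```python
# Illustrative only: the epic -> story -> task hierarchy as plain data.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    description: str
    estimate_hours: float   # sprint tasks ideally land between 2 and 8 hours
    remaining_hours: float

@dataclass
class UserStory:
    role: str               # "As a <role> ..."
    goal: str               # "... I want to <goal> ..."
    benefit: str            # "... so that I can <benefit>."
    acceptance_criteria: List[str] = field(default_factory=list)
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Epic:
    summary: str
    stories: List[UserStory] = field(default_factory=list)

# The sign-up story used later in this shortcut, expressed as data:
signup = UserStory(
    role="new user",
    goal="sign up to XYZ website",
    benefit="start using its awesome services",
)
```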

Task Slicing and Dicing

Formulating feasible, sprint-ready user stories is only part of the challenge. What comes next for these top priority stories is a trip to the sprint planning session (see Shortcut 8) where they will become sprint backlog items.

The sprint backlog is typically made up of not only the nominated sprint-ready stories but also their offspring (generated during the sprint planning process), typically referred to as tasks. These are the specific, granular pieces of the story that are worked on by members of the development team during the sprint and are often tracked with colored sticky-notes on the task board (see Shortcut 21). I prefer that these tasks take between 2 and 8 hours. Any longer and they start to become unwieldy.

While splitting epic stories into sprint-ready stories can be considered an art form, the slicing and dicing of stories into specific tasks requires the talents of a well-trained sashimi chef!

A common task breakdown that I’ve seen (and have used in the past) goes something like this:

User Story

“As a new user, I would like to sign up to XYZ website so that I can start using its awesome services.”

Task 1: Design end-to-end functional tests.

Task 2: Generate test data.

Task 3: Develop database layer.

Task 4: Develop business logic layer.

Task 5: Develop user interface layer.

Task 6: Develop end-to-end functional automation test.

I know this breakdown seems logical and straightforward, and it can work out just fine, but do you know what it looks like to me? Yep, you guessed it—a mini-waterfall! Although not nearly as dangerous as the scarier product-level waterfall approach, this mini version can still give us the same headaches, albeit on a more micro level. Focusing specifically on tasks 3, 4, and 5, it is evident that the product owner will find it difficult to verify and validate the requirement until all three of these tasks have been completed. Asking the product owner to verify the database schema changes and related stored procedures (task 3) won’t necessarily provide reassurance that the development is heading in the right direction (as far as the user functionality is concerned).

Instead, why not also apply, down at a task level, the “vertical slice” mindset that is commonly utilized at a story level? By doing so, it becomes possible to start validating work in a matter of hours—how cool is that?

Now let’s look at a vertical slice option that could be employed when breaking up the story into constituent tasks:

User Story

“As a new user, I would like to sign up to XYZ website so that I can start using its awesome services.”

Task 1: Develop username/password functionality (including test design and automation).

Task 2: Develop email authentication functionality (including test design and automation).

Task 3: Develop landing page functionality (including test design and automation).

What I’ve done here is break up the story into some relatively encapsulated end-user functions, each incorporating a small amount of database work, business logic, and user interface implementation (see Figure 4.1). The best thing is that the mini-waterfall has now become a safe trickle, and the feedback loops can be measured in hours rather than days!

Figure 4.1. Instead of discrete layers, each task is vertically sliced to shorten feedback loops.
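
To make the contrast concrete, here is a minimal sketch of what the first vertically sliced task (username/password sign-up) might produce, assuming Python and an in-memory SQLite store. All names are illustrative, and a production system would use a salted key-derivation function rather than a bare hash.

```python
import hashlib
import sqlite3

def init_db(conn):
    # Storage layer: just enough schema for this one slice.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (username TEXT PRIMARY KEY, pw_hash TEXT)"
    )

def sign_up(conn, username, password):
    # Business logic layer: validate, hash, store.
    if len(password) < 8:
        return "Password must be at least 8 characters."
    # Illustrative only; real systems should use a salted KDF (e.g., bcrypt).
    pw_hash = hashlib.sha256(password.encode()).hexdigest()
    try:
        conn.execute("INSERT INTO users VALUES (?, ?)", (username, pw_hash))
    except sqlite3.IntegrityError:
        return f"Username '{username}' is already taken."
    return f"Welcome, {username}!"  # presentation layer: the message the UI shows

# Something the product owner can see working within hours:
conn = sqlite3.connect(":memory:")
init_db(conn)
print(sign_up(conn, "amy", "s3cretpass"))  # Welcome, amy!
print(sign_up(conn, "amy", "s3cretpass"))  # Username 'amy' is already taken.
```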

I’m sure one or two of you must be thinking as you read this, “Hang on a second—if you have managed to thin-slice this story into smaller functions, then why not make these functions into separate stories rather than tasks?” Well, that is a good question, and luckily I have a couple of answers. First, remember Cohn’s advice: “The key is that stories should be written so that the customer can value them” (2004). Referring to the example above, although signing up for the website is certainly something that the customer can value, using a standalone email authentication function is not necessarily of value.

The second answer comes down to story independence. If you can split a story into smaller ones and it is possible to independently prioritize them, it makes sense to treat them as actual stories rather than tasks. Referring again to the example, I would argue that none of the tasks can be separately prioritized because the functionality that the customer values would be incomplete and incoherent.

Instead of eating our cake in layers, let’s eat it in tasty slices so that we can enjoy everything the cake has to offer in one go!

Technical Stories and Bugs

I like Henrik Kniberg’s straightforward definition of technical stories from his book Lean from the Trenches (2011):

Technical stories are things that need to get done but that are uninteresting to the customer, such as upgrading a database, cleaning out unused code, refactoring a messy design, or catching up on test automation for old features.

I hear two common questions when it comes to technical stories:

• How can we write technical stories when user stories are supposed to be user-centric?

• How can we represent technical stories using the typical user story format?

My answer to the first question is that, wherever possible, instead of writing independent technical stories, you should try to represent the technical requirement as tasks within one of the functional user stories that relies on this technical work. If more than one functional story relies on the technical work, remember to factor in the tasks only once, in the story that has the highest priority.

I like the option of including technical work within functional stories (rather than as separate technical stories), primarily to ensure that they don’t get ignored by the product owner simply because they might be “uninteresting to the customer” (Kniberg 2011). By factoring them into a functional story, they certainly can’t be ignored, and the product owner will start to gain an appreciation for some of the technical complexity inherent within stories.

Addressing the second question, how to represent technical stories using the typical user story format, my answer is that you don’t have to. I like the user story format for functional stories, but for technical stories, I find that the format isn’t necessarily fit for purpose. Instead, simply use any format that makes sense and is easily communicated. Ensure that you set a consistent format that is as workable as possible for these technical types of stories. This same advice applies to capturing bugs in the product backlog. Although it is possible to document bugs utilizing the user story format, again, I find that taking this approach can become a bit contrived and confusing (like knocking a square peg into a round hole).

Consistency Is King

At the end of the day, the format you choose and how you break up your stories into tasks is totally up to you (incidentally, some teams don’t even go down to this level). Scrum makes no mention of any of these details, so there are no prescribed rules to follow.

However, if you are to implement just one de facto rule, it should be that you are consistent with whatever approach you choose to take. That doesn’t mean you are stuck with the same approach forever. As good Scrum practitioners, you no doubt will be constantly inspecting and adapting your processes. But once you have decided to give a particular approach a try, ensure that your team understands it and that it is maintained across all user requirements. This consistency creates a sense of discipline and is your insurance policy for those (hopefully rare) occasions when the product owner is not available to continue the conversation.

Shortcut 11: Developing the Definition of Done

I recently conquered what I consider to be one of my toughest Scrum challenges, and it’s got nothing to do with my career. I finally convinced my completely non-techie wife, Carmen, to adopt “household Scrum”! We were finding that household tasks were simply not getting done at a sufficient rate ever since our daughter, Amy, made her entrance into the world. So I took my opportunity when it presented itself, and now our home is beautifully decorated with some eclectic sticky-note artwork!

Admittedly, I am not a great handyman, but I nonetheless completed one of my household Scrum tasks (fixing up a desk), and very proud of myself, I slapped the corresponding sticky-note into the Done column right in front of Carmen—oh yeah! Without skipping a beat, Carmen took the same sticky-note and placed it just as fervently back into the In Progress column with the accompanying commentary: “Umm, nice work fixing the desk, but that task has certainly not met my definition of done—your tools are still lying on the desk.” Not only was I super-proud (because my wife had obviously been listening to my constant Scrum babbling), but also it reinforced an important lesson. The most important thing, when you have two or more humans involved in any type of transaction, is the setting and aligning of expectations. Scrum understands the criticality of this maxim and offers a vital construct to help us do so: the DoD (definition of done).

Ambiguous Arguments

Although debates centered on subjective topics can be fun, I usually find they are a big fat waste of time, especially in the workplace. You often go round and round in circles with no conclusive outcome and with all parties storming away frustrated or even resentful. I can’t tell you the number of times I witnessed this type of argument in the “old days” between a programmer and a tester discussing quality. The programmers would vehemently argue that their code was perfectly fine for production, whereas the testers, tearing their hair out, would argue exactly the opposite. So who was right? Well, both and neither. The problem was that the rules around what constituted a sufficient level of quality had not been clearly defined and/or had not been well communicated.

The DoD, if developed collaboratively, prevents such arguments from happening with any regularity (see Figure 4.2). The DoD becomes the governing agreement that guides all developmental activities, clearly stating what is required for a piece of work to be categorically classified as “done.”

Figure 4.2. Aligning expectations with a clear definition of done minimizes ambiguous arguments.

Where to Start

The first thing to realize when formulating your first DoD is that it isn’t cast in stone. You don’t need to spend an eternity deliberating over what it should be, because it can evolve over time. Like everything else, we should be regularly inspecting and adapting the DoD. As Clinton Keith writes in Agile Game Development with Scrum (2010), “Teams expand the standard DoD by improving their practices. This enables the team to continually improve their effectiveness.”

To compile an initial DoD, I recommend that you start extremely realistically or perhaps even conservatively. There is no definitive DoD that you can simply look up online. Your definition should be completely customized to take into account your product’s specific requirements and your team’s abilities and expectations (and remember that these will certainly change over time). Also, ensure that the entire Scrum team, including the developers, product owner, and ScrumMaster, is involved in shaping the DoD.

Multiple Levels

The DoD can apply at several different levels. Tasks, user stories (see Shortcut 10), and releases can all have a corresponding DoD (see Figure 4.3).

Figure 4.3. Your definition of done may have different levels.

Let’s look at some examples. Remember that there is no universal DoD; however, here are some indicative definitions that I have worked with that may prove useful in prompting discussion within your teams.

Task Level (Programming Task Example)

Task-level elements of the DoD include the following:

• Code has been unit tested.

• Code has been peer reviewed (if continual pair programming isn’t being conducted) to ensure coding standards are met.

• Code has been checked into source control with clear check-in comments for traceability.

• Checked-in code doesn’t break the build (see Shortcut 18); a sketch of automating checks like these follows this list.

• The task board has been updated and remaining time for the task = 0 (see Shortcut 21).
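
Several of these task-level items are machine-checkable. Here is a hedged sketch of a pre-merge gate that automates the unit test and build checks, assuming pytest as the team's test runner; substitute whatever tooling your team actually uses.

```python
import subprocess
import sys

# Each entry pairs a DoD item with the command that verifies it.
# The commands are assumptions; swap in your own test runner and build.
CHECKS = [
    ("code has been unit tested", ["pytest", "-q"]),
    ("checked-in code doesn't break the build",
     [sys.executable, "-m", "compileall", "-q", "."]),
]

def run_dod_gate():
    for item, cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"DoD gate failed: {item}")
            sys.exit(1)
    print("Automated DoD checks passed. Peer review and the task-board "
          "update remain human steps.")

if __name__ == "__main__":
    run_dod_gate()
```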

User Story Level

There are two parts to this particular story (pardon the pun). The first part involves getting the user story requirements themselves to a state of done so that the story is sprint-ready (sometimes called the definition of ready). The second (and more obvious) part is the DoD that determines when the story is ready for production release.

• Definition of ready (see Figure 4.4)

Figure 4.4. When a user story is sprint-ready, it can be moved into sprint planning.

– The user story has been estimated (see Shortcut 14).

– There is a clearly defined set of acceptance criteria.

– The user story has been uniquely ordered within the product backlog.

– An appropriate and applicable level of extended documentation exists (such as mock-ups and wireframes if they are necessary).

– Based on the initial estimate, it should appear that the user story will comfortably fit within a single sprint.

• Definition of done

– All automated functional acceptance tests confirm that the new feature works as expected from end to end (a test sketch follows this list).

– All regression tests verify successful integration with other functions.

– Any relevant build/deploy scripts have been modified and tested.

– The final working functionality has been reviewed and accepted by the product owner.

– All relevant end-user documentation has been written and reviewed.

– If applicable, any translations and other localization elements have been integrated and reviewed.

– The user story has been demonstrated by the team to all relevant stakeholders at a sprint review meeting.
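
As an illustration of the first item in this definition of done, here is a minimal automated acceptance test for the sign-up story. It assumes that the functions from the earlier vertical-slice sketch live in a hypothetical module named signup.

```python
import sqlite3
import unittest

# Hypothetical module holding the earlier vertical-slice sketch.
from signup import init_db, sign_up

class SignUpAcceptance(unittest.TestCase):
    def setUp(self):
        # Fresh in-memory database per test, standing in for a test environment.
        self.conn = sqlite3.connect(":memory:")
        init_db(self.conn)

    def test_new_user_can_sign_up(self):
        self.assertEqual(sign_up(self.conn, "amy", "s3cretpass"), "Welcome, amy!")

    def test_duplicate_username_is_rejected(self):
        sign_up(self.conn, "amy", "s3cretpass")
        self.assertIn("already taken", sign_up(self.conn, "amy", "anotherpass"))

if __name__ == "__main__":
    unittest.main()
```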

Release Level

Release-level elements of the DoD include the following:

• All code related to the release has been successfully deployed to the production servers.

• The release passes all production smoke tests (both automated and manual) with the actual production data rather than just test data (a minimal smoke-test sketch follows this list).

• Customer service and marketing teams have been trained on the new features.

• The final release has been reviewed and accepted by the product owner.
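
Here is a minimal sketch of what an automated smoke test might look like; the URLs are placeholders, and a real suite would cover far more of the production surface.

```python
import sys
import urllib.request

# Placeholder URLs; point these at the actual production host.
SMOKE_URLS = [
    "https://example.com/",
    "https://example.com/signup",
]

def run_smoke_tests():
    for url in SMOKE_URLS:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                status = resp.status
        except Exception as exc:
            print(f"SMOKE FAIL {url}: {exc}")
            sys.exit(1)
        if status != 200:
            print(f"SMOKE FAIL {url}: HTTP {status}")
            sys.exit(1)
    print("Smoke tests passed.")

if __name__ == "__main__":
    run_smoke_tests()
```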

Constraints

Along with the examples just listed, compliance with any required system constraints (also called non-functional requirements) is often reflected in the DoD. Often referred to as “-ilities,” these constraints typically include areas such as scalability, portability, maintainability, security, extensibility, and interoperability. These requirements need to be baked into all layers of the product, from the task level to the release level.

Here are some examples of these constraints:

• Scalability: Must be able to scale to 20,000 concurrent users.

• Portability: Any third-party technology used must be cross-platform.

• Maintainability: Clear modular design should be maintained across all components.

• Security: Must be able to hold up against specific security penetration tests.

• Extensibility: Must ensure that the data access layer can connect to all major commercial relational databases.

• Interoperability: Must be capable of data synchronization with all products in the same suite.

Acceptance Criteria or DoD?

Once your team becomes familiar with the DoD and the user story format, you will likely encounter an interesting question from time to time: should XYZ requirement form part of the acceptance criteria or part of the DoD? The answer comes down to whether the requirement is applicable to every user story or to a smaller subset of stories. For example, let’s look at backwards compatibility. If every feature of the product being developed must be completely backwards compatible with the previous version, then this non-functional requirement should form a part of the DoD. On the other hand, if it has been determined that backwards compatibility is necessary for only a handful of the features under development, then this requirement should be added to the list of acceptance criteria for just those features.

It’s Just Like Cooking!

Similar to our discussion on structuring stories (see Shortcut 10), the most important thing to remember is to remain consistent. Your DoD will evolve over time as requirements and abilities change. Starting with a detailed, overly ambitious list may look impressive at the outset, but as soon as it becomes unrealistic to adhere to, the team’s credibility and morale will quickly dissipate. So, be realistic and get the ball rolling with a minimally acceptable DoD that everyone can live with—remember that it can evolve as the team matures.

It’s like adding salt to your cooking—you can always add more, but it is much more difficult to remove once you’ve put in too much.

Shortcut 12: Progressive Revelations

As we age, certain parts of our body slowly start to change: the odd wrinkle here, an extra love handle there, and so on, and so forth. Luckily for those of us who aren’t yet willing to age gracefully, there is a powerful tool that we can use every day to combat this transformation—it is that marvelous piece of reflective glass called the mirror! By looking at our reflection every day, we can inspect and quickly adapt to any slight deviation (if we choose to, of course). A new wrinkle—no problem, on goes some extra face cream; an extra bulge beginning to form—all good, an extra hard gym session should sort that one out!

Now imagine if you didn’t look in the mirror for a year; there would be two likely outcomes. First, no doubt, you would be somewhat surprised by the (relatively) unfamiliar image looking back at you. Second, any perceived “degradation” would have compounded over the year, causing the “fix-up” work to be extensive, difficult to apply, and perhaps beyond remedy at this stage (contrary to what the cosmetics industry will tell you!).

Agile Coach Mike Dwyer (2010) poignantly blogs that “Scrum is not a silver bullet, Scrum is a silver mirror!” In other words, Scrum won’t resolve a project’s woes by itself, but it will help a team identify improvement areas earlier rather than later. In the past, waterfallers wouldn’t look closely at the product or process under development until right at the end. Scrum gives teams the opportunity to frequently look in the mirror to discover the early-stage wrinkles, allowing them to take action before the problems grow worse.

You may be thinking that this is a shortcut dedicated to the important sprint retrospective or sprint review sessions, but it actually isn’t. Instead, it focuses on a more informal intra-sprint activity that many teams call a walkthrough. The purpose of a walkthrough is to inspect and adapt the user stories under development on a day-to-day basis throughout the sprint. So what about the sprint review? I hear you ask. While sprint reviews are indeed a regular opportunity to look in the mirror, conducting even more frequent checks can help eliminate additional waste by closing the feedback loop faster while also providing the opportunity to engage in continuous deployment/delivery (see Shortcut 18). I still believe the sprint review has considerable value, but I see it more as an opportunity to present and discuss the output of the sprint with a broader section of the stakeholder community (see Shortcut 22).

Verification and Validation

The purpose of a walkthrough is to reassure the team that the work undertaken (or about to be undertaken) is going to deliver what everyone is expecting. A walkthrough may be requested by a developer wishing to verify with the product owner that they haven’t misinterpreted the objective of the work they are about to launch into. Alternatively, the product owner may wish to call for a walkthrough with a developer to verify design decisions before too much time and energy is invested.

How many times have you been involved in a waterfall project where the product manager (using the old vernacular), with wide, startled eyes, exclaims at the end of development, “This isn’t what I meant! I thought it would function like blah, blah!” This would elicit a gruff response from the developers along the lines of “Then why on page 47 of version 3.6.4 of the specification document, which has been signed off by half the company, does it clearly state otherwise?!”

I’m not saying that breakdowns in communication will cease to exist with Scrum, and in fact, with the emphasis on increasing the frequency of face-to-face discussions, you may well find that there are more disagreements. However, the key here is that the more regular the interaction, the easier it will be to smooth out any contention and get everyone pulling in the same direction again.

When, Where, Who

A walkthrough should occur whenever it is needed. Some teams that I have worked with prefer to allocate a couple of periods in the day (typically an hour following the daily scrum and another hour midafternoon). This practice doesn’t necessarily require that all team members attend walkthroughs for 2 hours a day, but it sets an expectation that they may be interrupted during these times, so they should not get frustrated if they get a tap on the shoulder from someone requesting a walkthrough.

Because the walkthrough is typically a hands-on demonstration of the work requiring verification/validation, it usually occurs at the desk of the applicable developer. There is no need to get tied down with red tape by sending out meeting requests and booking rooms for a walkthrough. However, if you don’t have the luxury of a collocated development team, you obviously need to consider some additional logistics.

In relation to who should attend a walkthrough, I recommend that you try to always include the product owner, the relevant programmer(s), and the relevant tester(s) at the same time. With more reliance on discussion than on specification, it is important to ensure that everyone is on the same page (ironic choice of phrase, I know) at the same time.

Issues and Adjustments

A walkthrough should be purely focused on the current sprint backlog items rather than providing a forum to discuss future stories—leave that for your sprint planning session (see Shortcut 8). Valid outputs from a walkthrough typically fall into three categories:

• Issues: These are problems during development that manifest in broken functionality that needs fixing (see Shortcut 9).

• Adjustments: Consider adjustments to be minor changes to the design, not to be confused with wholesale scope creep to the user story.

• Thumbs up and smiling faces: That’s what you get if the walkthrough verifies that the work in progress is on the money!

Be Aware of Scope Creep

What happens if the product owner, after having a “taste test” during a walkthrough, decides that he or she wants to adjust the “flavor” (and I’m not just talking about a pinch of salt)? This can be a common and potentially frustrating situation, but it is not a problem if handled correctly.

Remember from Shortcut 1 that once the scope of the sprint is set, it should be left alone and protected to provide the developers with some reassurance that their focus can remain intact for the duration of the sprint. Although minor adjustments should be accepted and expected, significantly changing the requirements for a user story mid-sprint must be avoided (see Figure 4.5). So what should happen if this situation arises? As an example, let’s say the product owner decides that the shopping cart requires more than the previously agreed upon number of payment options. No problem. Simply create an additional user story that focuses purely on the new payment options, and add it to the product backlog. If this is considered to be of high priority, then present it in the next sprint planning session and tackle it in the subsequent sprint rather than changing the scope of the current one.

Figure 4.5. Minor adjustments can be accommodated, but save the big changes for the product backlog and future sprints.

Capturing the Output

Although teams should be encouraged to resolve issues and adjustments in real time during the walkthrough (if the change is relatively trivial), there are times when it isn’t possible to do so. In such situations, it is important that the issues are not forgotten when the time comes to tidy things up. I recommend the following simple approach to ensure that you’re not wasting too much time in documentation mode.

1. Add a dotted line under the current list of acceptance criteria for the user story to distinguish the new notes from the original requirements.

2. Add the initials of the walkthrough attendees as well as a date and time stamp to help the team more easily recall when the modifications were discussed should memories become hazy.

3. Define the requirements with short, sharp bullet points (see Figure 4.6).

Figure 4.6. An example of how minor adjustments can be captured on the back of the user story card.

4. New changes are likely to spawn some new tasks, extend the length of existing tasks, or both. That’s fine, but make sure the time remaining for existing tasks is adjusted accordingly. I also recommend adding new tasks to the task board using sticky-notes in a different color from the one used for the original tasks (see Shortcut 21).

Don’t Overdo It

In Brisbane, where I grew up, there is a set of outdoor climbing cliffs in the heart of the city where city-dwellers can escape to enjoy some face time with nature. Although climbing isn’t necessarily my sport of choice, I would occasionally venture down to admire the nerve and flexibility of these human spiders. I recall the particularly courageous (aka crazy) climbers who would free-climb—that is, climb without using support ropes from the top but instead incrementally anchor themselves in as they progressed. The further apart these anchors are, the higher the risk and the longer the recovery time should a slip occur. However, climbers who obsessively and too frequently create anchors inevitably end up wasting energy and time as well as losing the valuable rhythm and focus required to reach the top.

The intra-sprint walkthrough is analogous to free-climb anchoring in that walkthroughs should occur as frequently as necessary to ensure “safety,” but they should not be conducted just for the sake of it—you don’t want to unnecessarily spend the team’s precious time and energy.

Wrap Up

The three shortcuts in this chapter focused on a selection of tactics, tools, and tips to help your team define and evolve their requirements and definition of done. Let’s recap what was covered:

Shortcut 10: Structuring Stories

• An overview of the user story hierarchy

• Approaches for breaking down sprint-ready user stories into tasks

• Options for incorporating technical requirements into the sprint backlog

Shortcut 11: Developing the Definition of Done

• Starting points for defining what done means

• Options for generating multiple levels of done

• Differentiating between the definition of done and acceptance criteria

Shortcut 12: Progressive Revelations

• The benefits of conducting progressive intra-sprint walkthroughs

• Walkthrough logistics—when, where, and who

• Differentiating between scope creep and acceptable mid-sprint adjustments