Testing in an Agile Context - Testing with F# (2015)

Chapter 8. Testing in an Agile Context

Agile denotes a set of principles supporting methodologies to sustain quality in a software project, even when you have forces pushing the team into taking shortcuts and making sacrifices in order to deliver in a faster and cheaper manner or squeezing in another feature. In the previous chapter, we talked about tools that will help us deliver high-quality products. Now, we will focus on the process that will enable a high-quality project.

Building a bridge or tending to a garden

What is a software project?

It so often happens that building software is compared to construction work. This is a rather strange comparison, as construction projects have huge budgets with huge margins for risk. They don't change requirements halfway through. Once the blueprints for a building are complete, there will be no changes during construction, because everything is set in stone.

Errors made in software are also expensive to fix, but instead of having huge margins, software projects are often slimmed down so tightly that there is hardly room for bathroom breaks.

Software projects are nothing at all like construction work. There is this notion that software is more like gardening. The code is the garden that needs to be tended to and grown. If it's not properly taken care of, parts of the garden will fall into decay. This is not really a good observation, as code does not fall into decay if you leave it; it is the rest of the world that changes. The code itself represents the requirements from when it was written, but since then, the business priorities have changed and better languages and tools have emerged.

Software development is more like research and development, where you look at a problem and try to figure out a solution. Seeing it this way, we can derive the following facts:

· What you do not know from the beginning you will learn along the way

· It's not possible to know what is required to solve the problem

· The input requirements for the problem might change along the way

It is hard for businesses to understand this. They don't see the process, the learning, and the research; they see a product that they want and developers as only being there to supply that product. They want to buy the car, but what they're really buying is the process of developing that car. Software development is not a product and it does not have a fixed price.

The broken iron triangle

Businesses would like to know what they are getting, for what price, and when they are getting it. This is the most common misconception that ruins quality in all our software projects.

The following diagram is the broken iron triangle:

The broken iron triangle

The corners of the triangle represent the three targets you would like to lock down in order to have predictability. However, these are impossible to lock down if you want quality.

Locking down features means deciding on a set of features that should be delivered. This often comes from the client who wants to know what they are getting. The problem, as often shown in agile projects, is that they don't really know what they want before seeing results. Once there is a first demo version of an application, new ideas start popping up and the original scope is not valid anymore.

Locking down cost is done when the client requires a price tag on the system. This most often requires estimating the work, a practice that is imprecise. The estimation is turned into a budget, which is then turned into a price tag. Cost is locked down at the expense of quality: once the money is gone, no one wants to spend more time on the project, and everyone knows that corners will be cut everywhere.

Time represents the deadline and the client asking to have the system ready by a specific date. Once the deadline approaches and the system is not coming along according to plan, it is easy to cut some corners to have it ready in time.

When we talk about fixed price contracts, we always talk about fixed price, fixed deadline, and fixed scope; I want these specific features, for this price, on that date. The only thing you have left to deal with in order to meet these requirements is quality. Cutting on quality will, however, build technical debt, which will cause everything to become slower and more expensive later on.

The proposal Scott Ambler gives us is to lock down two corners of the triangle and keep the third as a variable in order to stay agile.

I was once estimating a development project where the project manager wanted to know how much time it would take in order to provide a client with a budget. The project manager thought my estimates were too high and challenged me on the numbers. I could agree that I wouldn't pay the amount if I were the client, but the estimates were reasonable.

Knowing that this was a fixed price, fixed time, and fixed scope project, I added an extra 20 percent to the total estimate to mitigate the risk of such a project. I told the project manager this and explained why it was needed.

The reply was, "Oh, we usually deduct 20 percent from your estimates in order to sell the proposals to the client." No wonder the projects always go over budget and are never delivered on time.
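The arithmetic behind this anecdote is worth spelling out. A quick sketch, using a hypothetical base estimate of 100 hours:

```python
# Hypothetical base estimate for a fixed-price project.
base_estimate = 100.0

# The developer adds 20 percent to cover the risk of fixed scope, price, and time.
padded = base_estimate * 1.20   # 120 hours

# The project manager then deducts 20 percent to sell the proposal.
sold = padded * 0.80

print(sold)  # about 96 -- below even the original, unpadded estimate
```

The two adjustments don't cancel out: 20 percent off a padded figure removes more hours than the padding added, so the sold budget ends up below the honest estimate before any risk margin at all.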

This book is not about running software projects, but it is about quality, and quality is at the center of these three pillars of software projects. If we don't have a working software development process that will sustain quality in the process, then all other recommendations in this book will be for naught.

Visualizing metrics

The key to software projects is early feedback and predictability. Early feedback means we can see problems long before they become critical; predictability gives the team a sustainable pace.

One way of creating predictability is by using the sprint concept from Scrum. Instead of having one long waterfall of specification, implementation, test, and release, you define a three-week time box in which you take the highest prioritized features from the backlog and finalize those. This is called a sprint, and each sprint contains the set of backlog features given the highest priority by the product owner.

The following graph displays a Sprint burndown chart from a project I once ran as the Scrum Master:

Visualizing metrics

On the Y axis is the amount of work left to do in the sprint, and on the X axis is the number of days until the sprint is finished. On day 7, there was a sprint refinement meeting where the decision was made to bring another story from the backlog into the sprint, which is why the hours left in the sprint increased.
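The mechanics behind such a chart are simple to compute. A minimal sketch (the daily numbers here are made up, not the ones from the actual project):

```python
# Hours of work remaining at the start of each sprint day (hypothetical data).
remaining = [120, 110, 104, 96, 88, 80, 72, 90, 78, 64, 50, 34, 18, 0]

sprint_days = len(remaining) - 1

# The ideal line burns the initial amount down evenly over the sprint.
ideal = [remaining[0] * (1 - day / sprint_days) for day in range(sprint_days + 1)]

# Days where the remaining work *increased* indicate scope added mid-sprint,
# such as the refinement meeting on day 7 described above.
scope_increases = [day for day in range(1, len(remaining))
                   if remaining[day] > remaining[day - 1]]

print(scope_increases)  # [7]
```

Plotting `remaining` against `ideal` gives exactly the burndown picture: tracking at or below the ideal line means the sprint is on plan, and an upward bump flags a scope change.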

This chart demonstrates that the sprint is progressing according to plan. It is possible to do the same with the product.

The following graph displays a product burndown chart:

Visualizing metrics

In this chart, the fifth sprint has been committed, and it is apparent from the prediction that the whole backlog will not be completed before the eighth sprint, which is the last planned sprint. We have a prediction of how much work can be finished by then; which features are included is a question of prioritization.
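The prediction in a product burndown is usually a simple extrapolation of the team's velocity. A sketch of that calculation, with hypothetical numbers:

```python
import math

# Story points completed in the sprints finished so far (hypothetical).
completed_per_sprint = [21, 18, 24, 19, 23]

# Average velocity over the committed sprints.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 21.0

# Story points still left on the backlog (hypothetical).
backlog_remaining = 130

# Predicted number of additional sprints needed to empty the backlog.
sprints_needed = math.ceil(backlog_remaining / velocity)

print(sprints_needed)  # 7
```

With only three planned sprints remaining, a prediction of seven makes the conversation with the product owner concrete: either the deadline moves or the backlog is reprioritized.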

These metrics are important to visualize because they give early feedback on the progress of the project, and make it easier to make decisions that do not reflect poorly on quality.

At the end of the project, when the budget is burned, it is too late to ask the client for more money. At the start of the project, we are still able to revise expectations and make it clear that not all features might make the decided deadline.

The Kanban board

If you have worked with any kind of agile methodology before, you have likely run across the Kanban board.

The following figure shows a basic example of a Kanban board:

The Kanban board

How do we deal with testing and bug reports on the board?

To the left are the user stories that the team has committed to this sprint. The Post-it notes are the user stories broken down into tasks. The columns are the statuses that these tasks are in. When the sprint starts, all the Post-it notes are in the left column, and when the sprint ends, they have all moved to the right.

Testing should be treated just as any other development task, by having a Post-it note for it. When all the other tasks are in the Ready for test column, the tester can move his or her task into the In progress column and start verification. Any bug found will be added into the To do column with a different colored Post-it. It is not necessary to fix all the bugs in order to deliver the sprint, but it is preferable.

When all yellow tasks are in the Ready for test column, they can be moved to the Done column. The user story is then officially complete. This is very much controlled by the tester.
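The board's flow can be modeled as data, which is in essence what digital Kanban tools do. A minimal sketch, with hypothetical column and task names:

```python
# Columns in left-to-right order, each holding task names.
board = {
    "To do": ["implement login", "test login"],
    "In progress": [],
    "Ready for test": [],
    "Done": [],
}

def move(board, task, source, target):
    """Move a task from one column to another."""
    board[source].remove(task)
    board[target].append(task)

# Development tasks flow left to right as the sprint progresses.
move(board, "implement login", "To do", "In progress")
move(board, "implement login", "In progress", "Ready for test")

# A bug found during verification becomes a new (differently colored) note
# in the To do column; the original task is not moved backwards.
board["To do"].append("BUG: login fails with empty password")

print(board["Ready for test"])  # ['implement login']
```

Note how the bug enters the board as new work in To do rather than by dragging a finished task back, which keeps the sprint's history on the board honest.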

Predictability

The way to drive a software project through the needle's eye we call the budget is to work toward as much predictability as possible. Rule out and isolate what is unknown. Divide the project into what's predictable and what's uncertain. Remove unknowns with spikes (time boxes where you try out new technology) and prestudies outside the project scope.

There are many tools a project manager could use. The following is what we developers can do to create predictability.

Testing

This is how you build a bookcase:

· Specification: You start by making a design where you decide how high, wide, and deep the bookcase should be

· Tests: You continue by measuring some wood carefully to get the correct length of the parts to be assembled

· Code: You hammer it all together, following the design and using the carefully measured parts

Writing a system without specification or tests is like building a bookcase by taking some random parts lying around and hammering them together. It may become something that will hold books, but when the client comes looking, the change requests will start coming. This will continue until the bookcase is good enough to fulfill the original requirements on paper.

The problem here is that you don't know how many iterations it will take to get the feature good enough. The feature will be implemented, tested, bug fixed, tested, and bug fixed again until both developer and tester agree on the result. This is a very unpredictable and expensive way to work.

The following figure shows the workflow of a feature:

Testing

The client, sometimes in conjunction with the development team, usually creates the requirements. These act as input to the specification, which is created by the team and verified by the client. The specification acts as input to the tests. When all the tests are green, the feature is completely implemented.

This is then done for each and every feature on the backlog, as long as the software project is active. It is this practice that creates a sustainable pace and predictability.
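As a concrete illustration of specifications acting as inputs to tests, a single specification line such as "orders over 100 get a 10 percent discount" translates directly into automated checks. Everything here, the feature and its numbers, is hypothetical:

```python
def order_total(amount):
    """Hypothetical feature: orders over 100 get a 10 percent discount."""
    return amount * 0.9 if amount > 100 else amount

# The specification, expressed as executable checks. When these are green,
# this part of the feature is implemented.
assert order_total(50) == 50
assert order_total(200) == 180
assert order_total(100) == 100  # boundary: "over 100", not "100 or more"

print("all specification checks green")
```

The boundary check is the interesting one: writing the test forces the team to ask the client whether "over 100" includes 100 itself, long before a bug report would have raised the question.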

Tip

It is never a good idea to try creating the full specification of a system before the software project is started. This neglects the fact that software development is a learning process and that the specifications will change once development has started.

What it means to be done

What is very important for predictability in a software project is knowing when a feature is done. This can be harder than it seems. Before starting a project, the team should make a definition of what it means to be done with a feature:

· The feature should be function-tested

· The feature should have automated tests

· The feature should be reviewed

· The feature should be documented

Before a feature can be moved to the Done column on the Kanban board, every item of this definition of Done should be fulfilled. Even when the definition is clear, it is easy to skip these checks when a project is getting stressed. This is the same as cutting corners on quality and building technical debt.

In order to make sure the definition of Done is not taken lightly, it is important to add it as tasks to each story of the sprint. It becomes much harder to cheat the definition of Done when developers have to confirm it is done by moving the task on the Kanban board.
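Encoding the definition of Done as an explicit checklist makes it harder to cheat. A sketch of such a gate (the check names follow the list above):

```python
# The team's definition of Done, from the list above.
DEFINITION_OF_DONE = ["function-tested", "automated tests", "reviewed", "documented"]

def can_move_to_done(completed_checks):
    """A feature may only move to Done when every check is fulfilled."""
    return all(check in completed_checks for check in DEFINITION_OF_DONE)

# A stressed team that skipped documentation is stopped by the gate.
print(can_move_to_done(["function-tested", "automated tests", "reviewed"]))  # False
print(can_move_to_done(DEFINITION_OF_DONE))                                  # True
```

This is exactly what adding the definition's items as tasks on the board achieves manually: no feature reaches Done while any check remains open.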

Mitigating risks

Predictability means we know what is going on. We take what we have and create a plan. We measure and visualize in order to get constant feedback for our process. The way to assess the greatest risks to our projects lies in knowing:

· What we know

· What we know that we don't know

· What we don't know that we don't know

We call them known knowns, known unknowns, and unknown unknowns.

What we know is what we use when estimating a project, creating a project plan, and building a backlog. This is what we make our assumptions on and what we present to the client. It is everything else that is our dark mass in the project universe.

Known unknowns

The known unknowns are what we use to manage risk within our project. In our estimates, we add a little time because we have a known unknown. We add a few days to the delivery date because of the uncertainties.

An example of a known unknown is integration with a third party, which we have never done before. It could be a framework or language that the developers have no previous experience with, or a dependency on a design bureau. Ultimately, it could be anything we know beforehand that might cause us a delay.

How to deal with known unknowns is often quite straightforward. We can provide more time in the schedule for hard integration. We could have the developers make a spike on that new framework in order to learn, or send them on a course to learn the new language. We could provide the design bureau one set of deadlines and use more generous deadlines toward the client.

The known unknowns represent risk we can mitigate and expenses that go beyond what a project would normally cost if there were no known unknowns.

Unknown unknowns

The unknown unknowns are things we couldn't anticipate in the software project. The framework the application is developed with won't work on the production environment. Far into the project, legalities will restrict you from using a core component because it is open source under a license you can't support. When drafting out specifications, you will find that two major requirements are contradicting one another.

The only way to mitigate the unknown unknowns is to leave room in the project for unforeseeable things. We don't know beforehand how much room is needed, as we don't know what we don't know. We just have to come up with a buffer that is decent enough to let the project continue, even if the unknown happens.

An unknown factor might change the rules of the project completely, and one should not be afraid to cancel a project if it's not viable to carry it out. It's better to lose the initial investment than carry on creating something that will be obsolete before hitting the market. You will always leave a project with experiences and knowledge that you didn't have before.

Automation

Continuous integration is the practice of having a build server running a series of operations on the code as soon as it is committed to the source code repository.

The tasks most often performed in the build server are:

· Compiling the code, catching syntax and type errors

· Executing any automated tests (unit tests, integration tests)

· Running static code analysis on the code

· Running linting tools to verify code style

The point of this is to verify that the committed code is acceptable to the quality standards initially set up by the project.
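The build server's job is essentially to run these steps in order and fail fast. A minimal sketch of that loop (the step names mirror the list above; a real server shells out to actual build and test commands):

```python
def run_pipeline(steps):
    """Run named CI steps in order; stop and report on the first failure."""
    for name, step in steps:
        if not step():
            return f"RED: {name} failed"
    return "GREEN: all steps passed"

# Stand-ins for the real operations; each returns True on success.
steps = [
    ("compile", lambda: True),
    ("unit and integration tests", lambda: True),
    ("static code analysis", lambda: True),
    ("lint", lambda: False),  # a style violation breaks the build
]

print(run_pipeline(steps))  # RED: lint failed
```

Failing fast matters: a red compile step makes running the tests pointless, and reporting the first broken step tells the team exactly where to look.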

The following figure shows the workflow of continuous integration:

Automation

The reason for this practice is that integrating your code with that of other developers is painful, and the way to mitigate that pain is to do it more often. In the same spirit, a practice called continuous deployment has emerged, mitigating the pain of deploying code to production.

The following workflow shows how continuous integration hooks into continuous deployment:

Automation

The point here is that once the code has been tested and verified as deployable by continuous integration, the process carries over to continuous deployment, which makes sure the code is delivered to the target environments.

The point of both continuous integration and continuous deployment is to get instant feedback on submitted work and create a predictable working environment where the path from code to live feature is shorter and easier traveled.

When the CI/CD workflow fails because a test does not turn green, getting it back to green should be the team's top priority. Otherwise it will disturb the whole operation, and if the process is allowed to stay red for a long time, chances are that when it's truly needed, it will not be possible to get it back to green.

With these conclusions in mind, we must try to create predictability in our software projects. Ways to do so include test automation, a clear definition of Done, automated code integration and deployment, and understanding what we know, what we know we don't know, and how to deal with what we don't know that we don't know.

Testing in agile

When testing in agile development, it can be hard to know where in the process testing fits in. This differs depending on what kind of testing it is.

The following figure illustrates how testing fits into an agile process:

Testing in agile

There is testing that should be confined to the sprint, and then there is testing that will happen after the sprint.

When developers work on a feature, they also develop the automatic tests for that feature, making sure the feature is covered with tests. This makes the job of the tester easier, as he or she can focus on investigating the feature and not so much on checking the requirements. It also reduces the number of turnarounds between development and functional testing. The feature gets function-tested during the sprint. There are important aspects to this:

· Functional testing should have its own stable test environment

· All deployments to the test environment have a discrete version, such as 1.1.1234

· The tester controls when deployment to the test environment is done

· The last feature in the sprint must not be finished too late to allow room for functional testing

When it is time for demo, the code of the sprint is deployed to a demo environment where it can be reviewed by the business. This is also where any kind of system testing is applied, such as load testing, stress testing, security testing, and so on.

The following figure shows what happens when a bug is found by functional testing:

Testing in agile

The problems here are obvious. The bug reports that come back from functional testing disrupt feature development. There is overhead in the back and forth between development and testing, which makes the process of fixing bugs costly and tiresome. As the client who told us to do it right the first time would argue, this is exactly what test automation should be used to achieve, as fixing bugs from functional testing is too costly.

In sprint planning, you very seldom make room in the sprint for fixing bugs in the features. Bug fixing is handled in whatever space remains in the sprint after all the features are done. Bugs not fixed in the sprint can be planned to be fixed in the next sprint. That is why you often deliver a sprint with known issues.

It is important that bugs are not all fixed at the end of the sprint, but evenly distributed across it. The key is to keep the tester supplied with work by following these steps:

1. Fix planned bugs from the previous sprint at the start of the sprint, to have the tester verify the fixes.

2. Pick up reported bugs between features. When one feature is done, before starting work on the next, check if there are any open bugs to fix.

3. Bugs fixed late in the sprint will most likely be added as planned bugs to the next sprint.

Summary

In order to begin performing quality practices such as testing in a software project, the project must first enable those practices by being in good health. You need an agile project that embraces change instead of preventing it. You need a product owner who understands that requirements change, and management that sees that a plan is not equal to reality. The team members of the project must understand the value of predictability and of doing quality work like test automation.

In this chapter, we've discussed what a developer must know about software projects in order to perform test automation. In the next chapter, we'll talk about test smells, also called anti-patterns: which are the most common ones and how to avoid them.