Setting the stage for improvements - The Mikado Method (2014)

Appendix B. Setting the stage for improvements

This chapter covers

· Technical preparations for a restructuring effort

· Personal preparations

· What to think about before, during, and after a restructuring effort

At this point in the book, you should be pretty confident with the technical aspects of restructuring code the Mikado way. Now we’ll show you a few things that can give you advantages and boost your work right from the start. You want to make sure that you spend your time wisely and direct your energy toward the essential parts of the problems.

In this appendix, we’ve gathered some tips, tricks, and good practices that can really make a difference when you restructure code using the Mikado Method. We’ll present a mix of technical and organizational advice that we’ve found very useful in solving and mitigating common problems. This appendix is organized to give you a rough hint about when the advice is most applicable—before, during, or after your restructuring. We hope you’ll find a few gems in here.

B.1. Before your restructuring

This section presents things you should think about before heading out on your restructuring. It’s a rather long list, and you probably don’t need to have them all in place before you start. Look at your context and see what needs to be done, if anything.

B.1.1. Shorten your feedback cycles

Every developer has at some point implemented a feature and had a user report a bug in a seemingly unrelated part of the application. Every developer has at some point wondered, “How could this thing possibly affect that?” And every developer has also thought, “If I only knew then what I know now...”

When you deal with code improvements, you’re much better off if you can verify that your changes don’t break some other functionality, and this is why programming benefits a lot from closing feedback loops and shortening feedback cycles. You learn as soon as possible what your changes have affected, and you minimize the bugs users have to report.

The faster you get the feedback, the faster you can dare to go. You’ll also want a development environment where you can try things and get feedback from failure safely, in order to learn how the system behaves at its boundaries. The more things you can try without disturbing or depending on others, the better.


One way to simplify the Mikado Method dance is to have an easy way to verify that a change didn’t break anything. Quickly running compilers and tests is one thing, but for a complex product you need an automated build process. Automation will give you consistency and repeatability, which is very important for reliable feedback. In fact, try to automate as much as possible, as soon as possible. If you need the results of the automation, it’s time well spent.

Don’t automate what you don’t need

Spending time on automating things you don’t need is an utter waste and should be avoided. Only produce results that are asked for and that benefit all parties involved, including you.

A typical build process setup involves the following steps:

1. Pull all code from a versioning system

2. Compile

3. Run micro tests

4. Put together artifacts

5. Deploy

6. Run macro tests

7. Create documentation

8. Create the distributable artifact

Pick the parts of that process that give you the fastest relevant feedback for the type of change you’re making. For instance, if you rename a local variable in a method, you only need to compile to check that you didn’t miss anything. On the other hand, if you change the internal structures of an algorithm, you’ll probably want to run the micro tests to check that it still works as expected. If you’re pulling apart your system into new packages, you’ll probably want to run the entire build. The book Continuous Delivery by Jez Humble and David Farley (Addison-Wesley Professional, 2010) offers detailed advice on how to automate building, testing, and deployment.
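To make the idea concrete, here’s a minimal sketch of that selection logic. The change types, step names, and mapping are our assumptions for the example, not something your build tool provides:

```python
# Illustrative sketch: map the kind of change you're making to the
# subset of build steps that gives the fastest relevant feedback.
# The change types and step names here are hypothetical.
BUILD_STEPS = {
    "rename-local": ["compile"],
    "change-algorithm": ["compile", "micro-tests"],
    "restructure-packages": ["compile", "micro-tests", "package",
                             "deploy", "macro-tests"],
}

def steps_for(change_type):
    """Return the build steps to run; unknown changes run the full build."""
    return BUILD_STEPS.get(change_type, BUILD_STEPS["restructure-packages"])
```

The fallback matters: when you don’t know how far a change reaches, run the whole build.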

Compiler support

For statically typed languages, a compiler gives even faster feedback than tests if something breaks, because compilation precedes running tests. There are also so-called eager compilers in many development environments that check code as it’s typed, and that feedback is almost instantaneous.

Early signs that something broke after a change, such as compiler errors, are important input when you create a Mikado Graph. If you don’t have a statically typed language, micro tests are the first level of feedback.

In this book, we talk a lot about using the IDE’s refactoring automation for many atomic and composed refactorings. But for an automated refactoring to work properly, it’s essential that the code compiles. Before we discovered the Mikado Method, we often tried to refactor code in a noncompiling codebase, usually with very poor results. Then, when we couldn’t compile our code, we had to revert to refactoring using search and replace, which is much slower and more error-prone than the automated refactorings of an IDE.

Automated tests

Our best friend when we’re up against a bigger refactoring or restructuring is automated tests. When we refer to automated tests, we mean pieces of code that run a part of the system and assert that the expected functionality is there, without human intervention. Tests like that can be written both on a macro level and on a micro level.

Languages without static typing require you to execute tests to verify changes because there’s no option to get feedback from a compiler. Automated tests can be viewed as the compilers of nonstatically typed languages, in that they actually invoke the code and force the runtime interpreter to check it. If no tests are in place before making a change in a dynamically typed language, it’s a good idea to make “Cover area X with tests” a prerequisite to the change in the Mikado Graph, or even to make adding tests a separate graph before you start on the actual goal. These types of tests are often referred to as characterization tests (as described in Michael Feathers’ Working Effectively with Legacy Code (Prentice Hall, 2004)), which are tests written to define the actual functionality of the code. IDEs for dynamically typed languages usually provide some refactoring support, but when dynamic language features are used, refactorings might occasionally fail.
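A characterization test can be tiny. The sketch below pins down the current behavior of a hypothetical legacy function, `format_price`, by asserting what it actually does today rather than what we think it should do:

```python
# A minimal characterization test in the spirit of Feathers:
# run the legacy code, observe its output, and lock that output in.
# format_price is a hypothetical stand-in for real legacy code.
def format_price(amount):
    # Imagine this is legacy code whose behavior we want to pin down.
    return "$%.2f" % amount

import unittest

class FormatPriceCharacterization(unittest.TestCase):
    def test_rounds_to_two_decimals(self):
        # We ran the code, observed "$3.14", and recorded that fact.
        self.assertEqual(format_price(3.14159), "$3.14")

    def test_negative_amounts_keep_the_sign(self):
        self.assertEqual(format_price(-2.5), "$-2.50")
```

Note that the assertions make no claim that the behavior is *correct*, only that it’s *current*; that’s exactly the safety net you want before restructuring.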

Macro-level tests

Automated tests that work the entire system, or parts of it, from the outside like a user would are particularly valuable when a system needs to change. The Naive Approach partly relies on this safety net, and it also tells you what prerequisites you should consider. Those tests are sometimes called macro, system, functional, or acceptance tests. They typically launch a GUI, click buttons, enter text, select check boxes, and more, just like a user would. By doing so, they exercise the system in an end-to-end fashion, and it’s also very common for these actions, in turn, to store state in a persistent manner, generally by using some sort of database. As the suite of tests is run, it verifies the business value of the system, as well as its wiring and plumbing. It’s this end-to-end aspect that makes these tests so useful when you make structural changes, because you don’t want the functionality to change as internal changes are made.

The downside of such tests is also their end-to-end characteristic—they often take a long time to execute. In return, they’re able to provide loads of high-quality feedback, but that feedback arrives more slowly than the feedback from micro-level tests. Improving the execution speed and tightening the feedback cycle is possible to a certain degree, but dramatic improvements come at the price of the amount and quality of feedback they provide. Making major improvements in execution time often means bypassing important parts of an end-to-end test, like keeping data in memory instead of using the actual database. In order to get faster feedback, you need a complement, namely, micro-level tests.

Micro-level tests

Micro-level tests provide the fastest feedback you can get, only bested by the speed of generating compiler errors in statically typed environments. If you don’t have either tests or a compiler, you’ll be severely crippled in your ability to use the Mikado Method; you’ll have to rely on manual tests and analysis, both of which are relatively slow.

The micro-level tests we’re talking about don’t trigger the use of any external resources, such as files, sockets, or screens, and this is what makes them so fast. They only test small chunks of logic and should execute entirely in memory. This means that hundreds, thousands, or even tens of thousands of tests can run within seconds. When they execute that quickly, it becomes possible to verify a lot of the logic in the code in a very short time. The downside is that they normally don’t verify that a user can actually perform a business-value function in the end product.
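As an illustration, here’s what such a test might look like. The shopping-cart class is hypothetical; the point is that everything runs in memory, so thousands of tests like this finish in seconds:

```python
# Sketch of a micro-level test: pure in-memory logic, no files,
# sockets, or screens. The Cart class is a hypothetical example.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, price, quantity=1):
        self.items.append((price, quantity))

    def total(self):
        return sum(price * quantity for price, quantity in self.items)

def test_total_sums_price_times_quantity():
    cart = Cart()
    cart.add(10, quantity=2)
    cart.add(5)
    assert cart.total() == 25
```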

Tests on a micro level can cause problems if they’re written a certain way. If the tests use knowledge about the structure and implementation details of the code they test, they’re on the same level of detail as that code. Hence, for every piece of code that needs to change, several tests have to change as well. For instance, when using refactorings, you often change a lot of structure at the same time, which can break a substantial number of micro tests. Fixing them can be a daunting task, and in that way, poorly constructed micro tests can hold the code hostage.

You can usually solve that problem by removing those tests, but before you remove the tests, you should usually add a replacement for the tests you’re removing—a higher-level test. The job of the higher-level test is to verify the module’s behavior in a way that’s less dependent on the way the application is structured. This often implies testing a small cluster of classes that perform some cohesive service for the application. Sometimes this becomes a macro-level test, and sometimes it’s a micro-level test on that cluster of classes. Such tests are usually more stable in the face of change. When that higher-level test, or maybe several tests, are in place, the old tests can safely be removed.
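The sketch below shows what we mean by a higher-level replacement test. The checkout and discount classes are hypothetical; what matters is that the test goes through the cluster’s public entry point and asserts only observable behavior:

```python
# Hedged sketch: a higher-level test exercises a small cluster of
# classes through its public entry point, so internal restructuring
# doesn't break it. The classes here are hypothetical examples.
class DiscountPolicy:
    def apply(self, amount):
        return amount * 0.9 if amount >= 100 else amount

class Checkout:
    def __init__(self, policy):
        self._policy = policy

    def price(self, amount):
        return self._policy.apply(amount)

def test_checkout_gives_ten_percent_off_large_orders():
    # Only observable behavior is asserted; DiscountPolicy can be
    # split, renamed, or inlined without touching this test.
    assert Checkout(DiscountPolicy()).price(200) == 180.0

def test_checkout_leaves_small_orders_alone():
    assert Checkout(DiscountPolicy()).price(50) == 50
```

Because the test never mentions how `Checkout` and `DiscountPolicy` collaborate, the structure underneath is free to change.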

Add tests “on the way out”

If the tests in the codebase don’t verify the behavior you think they should, a test is clearly missing. This is normal, especially when you work on legacy code, but too often you’ll find this out too late. To mitigate nasty surprises, we’ve found that it’s best to cover the code with tests as we explore the graph, rather than when we make all the changes. We add high-level tests while we’re still exploring the graph. This is the classic “cover (with tests), then modify” approach.

Load and performance tests

In addition to the macro- and micro-level tests that verify the functionality and logic of the application, load and performance tests are used to verify that a change doesn’t introduce severe bottlenecks.

Load tests usually simulate having multiple clients accessing an application to see, for example, how many simultaneous users or requests the application can handle. This normally requires the system to be set up in a somewhat realistic fashion, which usually results in rather long feedback cycle times.

Performance tests measure how fast certain requests or algorithms are performed in the system. Performance tests may or may not need the system to be set up in a realistic fashion, and can sometimes give fast feedback.

From the Mikado Method perspective, we prefer faster feedback while exploring a system and growing a graph, so load and performance tests can be a bit too slow. But if you have those types of tests, run them as often as you can, such as over lunch, or during the night, so that you’re alerted if any of your changes have degraded performance significantly. Keep in mind that sometimes you need to first make structural changes and then optimize the new structure, meaning that performance will be degraded for a while. During this time, the system isn’t releasable, and extra care must be taken to avoid getting stuck in this situation for too long.
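A performance test can be as simple as timing a critical operation against a budget, as in this sketch. The operation and the one-second budget are assumptions for illustration; a real budget comes from your requirements:

```python
# Rough performance-test sketch: time a critical operation and fail
# if it exceeds a budget. The operation and budget are hypothetical.
import time

def critical_operation(n):
    return sum(range(n))  # stand-in for the real algorithm

def test_stays_within_time_budget():
    start = time.perf_counter()
    critical_operation(100_000)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, "performance regression: %.3fs" % elapsed
```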

Tests of other system properties

In many applications and domains there are other properties, such as security and regulatory rules, that limit what can be done in an application, or specify what must be done. Because the changes from the Mikado Method could break such properties, just like any changes you make to your system, it’s equally important to get feedback on such properties and use that information to build your Mikado Graph.

Try to automate those verifications as well. Be creative. A colleague of ours implemented a plugin for his automated tests that tested the rules of the system and generated regulatory documentation at the same time. This might not be directly applicable to your context, but it shows the creativity that might be needed to avoid long feedback loops.

Manual feedback

Automation can provide fast feedback on things that are measurable by a computer. By putting a person in the loop, you can get another dimension of more complex feedback.

Manual testing

In some settings, teams of testers manually click through (or use other means to stimulate) the system under test to verify that it acts as expected. We recommend automating the repetitive tasks of testing as much as possible via micro- or macro-level tests. If this is done properly, the automated tests enable faster feedback cycles, and they also contribute to the confidence in the system by acting as executable specifications. In a Mikado Method setting, where maintaining momentum is usually important, waiting for system-wide manual testing kills the flow and focus.

Manual testing is still needed, but the time invested should be used to explore the system and go beyond the scripted tests rather than finding errors on a daily basis. Repetitive tasks are perfect for machines. Thinking outside of the box is something humans do a lot better.

Deploying and then letting the users report problems is also a way of getting feedback. We don’t recommend this as a strategy for finding errors, especially not in a Mikado Method setting, but releasing and getting user feedback is essential in order to build the best product possible.

Pair programming and mob programming

One of our favorite ways to get fast feedback is to use pair programming or, if you have the opportunity, mob programming. When done properly, the co-driver(s) will give you immediate feedback on the code you’re writing. You can also get feedback on your ideas in a design discussion, before you write any code. The feedback can be at many levels, from details about the programming language or shortcuts in the development environment to paradigm philosophies and principles of software development.

Code reviews

Programming with other people—in pairs or in a mob—isn’t the same as a traditional code review, where a developer’s code is scrutinized by another developer. Pair or mob programming gives instant feedback, in context, whereas a code review is usually performed after the fact, with the potential for plenty of time wasted on a bad solution or a bad Mikado Method path. Although a code review can provide valuable feedback, we value the fast feedback when programming together more.

When we pair or mob program, we let the co-driver focus on strategic decisions so the driver can focus on tactical decisions. We also give the co-driver the responsibility of updating the graph.

Balancing speed and confidence

In the ideal Mikado Method situation, you’d have full test coverage of the whole system, and it would run in the blink of an eye, so you’d know immediately what problems a change caused. In reality, though, you must strike a balance between how long you spend verifying a system, and how sure you are that you’ve found all the problems your change introduces. You can adjust the balance by adding good tests and making the tests faster, but there will always be a trade-off between speed and confidence when using the Mikado Method. The more critical the system is, the more you should shift the balance toward confidence, adding and improving tests, and spending time verifying the system after the changes you make. On the other hand, if time to market is more important, the balance should shift toward speed and away from confidence. The latter choice involves some hoping for the best, but it’s a valid option when a significant deadline must be met.

B.1.2. Use a version control system

We’ve seen businesses that don’t use a proper version control system (VCS), and we can’t stress this enough: the VCS is one of the most important tools in a software project.

A VCS is an application that keeps track of all the changes made to files in a filesystem. The beauty of using a VCS is that you can revert, or roll back, all the changes to a specific date, or to a specific tag or label created to identify a version. The more often code is checked in, the more fine-grained the history is. Other names for a VCS are revision control system and source code management (SCM) system.

The Mikado Method uses the revert functionality of the VCS systematically and extensively. Without it, you can’t expect any real progress at all.

Pet projects are also projects

Even when we’re working alone on our pet projects, we use a VCS to save us from any wasted time and agony. There are good VCSs available for free that are easy to set up, so there’s really no excuse for not using one. If you think your VCS at your day job could be better, evaluate another one when you work on your pet project.

B.1.3. Use automated refactorings

Another thing we’ve covered to some extent in this book is the automated refactoring support of your IDE. Most modern IDEs have automation support for an abundance of atomic and composed refactorings from Martin Fowler’s Refactoring (Addison-Wesley Professional, 1999) and other literature.

When we’re working with a codebase, we make constant use of these tools to improve, change, delete, restructure, and implement code. With these automated tools, you can make changes that span the entire codebase, changing and moving thousands of files with a single command.

Because automated refactorings in modern IDEs, or changes using regular expressions, can affect literally thousands of files, we don’t rely too much on the undo and redo functions of these tools. If one change creates one or two errors, you can use the undo function of the IDE, but if there are more than three or four errors, it’s probably better to undo using the VCS.

B.1.4. Use one workspace

Before you start changing code, it’s good to have all the editable code in a single workspace in the IDE, so you can make as much use of the refactoring tools as possible. If the code is spread out over several different workspaces, changing the code using automated refactoring becomes very cumbersome, because some code is changed properly and some isn’t changed at all. This can hide compilation and runtime errors, or delay the discovery of such problems until the right workspace is opened.

B.1.5. Use one repository

When making extended changes to a codebase, it’s simpler, more robust, and better for development performance to move all related code into a single repository root. This is something we strongly recommend. If you have reason not to do so, the benefits of having different repository roots must be carefully weighed against the increased overall complexity of the development environment.

We’ve seen cases where a single codebase is spread out over several VCS repository roots. This makes configuration and version management more complex and increases the risk for anyone involved to make mistakes. When you make changes across repository boundaries, it’s more difficult to maintain consistency and integrity across the different repositories. Taking care of the release-reuse versioning of the different repositories can also be cumbersome.

B.1.6. Merge frequently with the main branch

If you use a VCS, you always work in branches. The checked-out code at your local workstation is an implicit branch of the central main branch. Each developer has one such branch and possibly more locally defined branches in addition to any explicit branches created centrally in the versioning system. The longer you wait before checking in your local implicit branch, the harder the merge becomes, because the trunk and your code will diverge from the point when you checked it out.

When we use the Mikado Method, we try to merge our local branch with the main branch frequently, sometimes as often as 10 times a day. This may sound like a lot, but it allows everyone on the project to have the very latest code to build their changes on; likewise, it allows us to build our changes on the latest code our colleagues provide. One of the main characteristics of the Mikado Method is the small safe steps that continuously take the code in a new direction. When you work like that, the only sane approach is to work with one branch or very short-lived local ones.

What you really want to avoid is branches that diverge for a long time. The more they diverge, the harder they become to merge. The divergence will increase with the rate of change and the time between merges. With a large codebase and with the use of modern refactoring tools, you can sometimes change and move hundreds or thousands of files with a single command. This makes the rate of change very high, and the only variable you can affect is the time between merges.

Refactoring branches

A common, but bad, idea is to make a separate branch for the improvement efforts, and to keep on implementing new functionality in the main branch, with the intention of merging these two branches when the improvements are done. The rationale is usually that implementing new features shouldn’t be disturbed by improvements.

This is far from optimal and not the Mikado Method style of merging, mainly because the branches have different goals and will diverge very quickly. The better you become at using the Mikado Method, the faster they’ll diverge. A common, but equally suboptimal, solution to that problem is to merge the refactoring branch with the development branch continuously, but only from the main branch to the improvement branch.

As the improvements move in one direction and the new features in another, the merges will become more and more difficult, eventually choking the improvement efforts. The preferred solution to that problem is to merge both ways, but that’s like having only one branch to begin with.

Our recommendation is that the number of “central” branches be kept to an absolute minimum, with preferably only one main branch and possibly a branch that’s created when problems in the production environment arise and need immediate attention.

There are VCSs that can help you with complex merges, provided that there’s a fine-grained check-in history in each of the branches. But even then, there will always be cases where that fails. Use this VCS ability as an extra safety net instead of letting your improvement effort stand or fall with it.

Moving to “trunk development”

Learning how to work in a single branch is one of the things that makes companies such as Google, Amazon, and Yahoo successful at developing profitable software quickly. The most common pattern for enabling working in a single branch is the latent code pattern, where you can turn features on and off. This means that you can deploy half-finished features that are disconnected from the execution flows. This is also called a dark release.
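The latent code pattern can be sketched in a few lines. The toggle store and feature name below are hypothetical; in practice the toggles often live in configuration rather than in code:

```python
# Sketch of the latent code pattern: a feature toggle lets
# half-finished code be deployed "dark", disconnected from the
# execution flow. The toggle and the pricing rules are hypothetical.
FEATURES = {"new_pricing": False}  # flipped on when the feature is done

def new_pricing(amount):
    return amount + amount // 5  # work in progress: deployed but dormant

def quote(amount):
    if FEATURES["new_pricing"]:
        return new_pricing(amount)  # latent, unfinished path
    return amount + amount // 4     # current behavior stays live
```

Everyone merges to the same branch, yet users never see the unfinished path until the toggle is flipped.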

B.1.7. Scripting languages to the rescue

Some improvements are difficult to make with the basic refactoring and regular expression tools of an IDE, such as when you need more complex search-and-replace operations or to access different sources of data to produce the result. Tasks like that, however, are easy to perform with most scripting languages. Sometimes when you’re using the Mikado Method, the number of edits required to fix a single prerequisite can be in the hundreds. If there is a pattern, using a scripting language can help immensely in making such edits.
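As an illustration, here’s the kind of small script we have in mind: a pattern-based rename applied across a whole source tree, which would be hundreds of hand edits otherwise. The class names, mapping, and file extension are assumptions for the example:

```python
# Hedged sketch: apply a mapping of renames across a source tree,
# the kind of many-file edit that's tedious by hand. The names and
# the *.java glob are hypothetical examples.
import pathlib
import re

RENAMES = {"OldService": "CustomerService", "OldRepo": "CustomerRepo"}

def rewrite(text):
    for old, new in RENAMES.items():
        # \b keeps us from mangling names that merely contain "OldService"
        text = re.sub(r"\b%s\b" % old, new, text)
    return text

def rewrite_tree(root):
    for path in pathlib.Path(root).rglob("*.java"):
        path.write_text(rewrite(path.read_text()))
```

Run it, let the compiler and tests tell you what broke, and record any fallout as prerequisites in the graph.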

B.1.8. Use the best tools

Get the best tools for the job and learn how to use them. In a for-profit context, the time spent discussing whether or not to buy a tool is usually worth more than the cost of the tool. On top of that, tools are usually so cheap that using them once pays for the whole investment. If work is carried out in a pro bono environment, there are many free tools that are sufficiently good and sometimes better than their commercial counterparts.

Be a toolsmith

The best tools are not necessarily the most expensive ones. Sometimes it can even be more cost-effective to take a simple free tool and extend or modify it than to buy an expensive tool that doesn’t do exactly what’s needed. Occasionally, building your own tool might even be warranted. As for any application, start out small with a clear business case and add functionality incrementally as long as you can warrant it.

B.1.9. Buy the books

Investing in knowledge pays good interest, and books are a cheap way of extending your knowledge. If the advice from a book is applied only once, the book has probably paid for itself. Managers should keep that in mind, and always give approval when their employees ask to buy a work-related book. You could say that we, as authors, are a bit biased about this, but we also run a company where our employees are encouraged to buy the books they think they need, at our company’s expense.

B.1.10. Know the dynamic code

Fast feedback and fast graph exploration are a lot harder if there are dynamic, or reflective, parts in the code. It’s important to know what parts of a codebase are dynamic, and to specifically verify those parts using automated tests, or to use the frozen partition trick from section 7.3.2 of this book.

Sometimes, these dynamic parts are needless complexity that can, for example, be replaced easily with proper abstractions. We always try to minimize the use of dynamic parts, and we’re often surprised how well that works.

B.1.11. Consistency first, then improvements

There are two things in a codebase that are good to keep consistent: the formatting of the code and the programming style. When you keep the formatting consistent, your brain can make more sense of a complex situation and doesn’t get distracted by irrelevant formatting differences. By keeping your programming style consistent, you can also reduce the cognitive load of having to understand different solutions to similar problems.

When formatting the code, there are several tools you can use to get a consistent result. In terms of programming style, it’s important to keep adding code in the same style that already exists in the system, until a change to a new style is performed.

If the code is formatted the same way everywhere, with regard to line endings, blank lines, and curly braces, it’s easier to modify the code with scripting languages and regular expressions.

B.1.12. Learn to use regular expressions

Most development environments have tools to perform search and replace using regular expressions. A regular expression is a powerful text-matching pattern that’s processed to find and replace text in files in very intricate ways.

There are a few different regular expression dialects for different platforms, but they usually contain approximately the same functionality. To avoid overly complex regular expressions, start by making the code consistent, as described in section B.1.11.
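Here’s a small illustration of what capture groups buy you, using Python’s `re` dialect. The field name and the getter convention are hypothetical:

```python
# Illustration: a regex with a capture group converts field access
# into a getter call. The names here are hypothetical examples.
import re

pattern = r"(\w+)\.name\b"      # group 1 captures the receiver
replacement = r"\1.getName()"   # \1 reinserts the captured receiver

line = "print(customer.name, order.name)"
result = re.sub(pattern, replacement, line)
# result: "print(customer.getName(), order.getName())"
```

The same idea scales from a single line to a whole codebase when combined with your editor’s project-wide search and replace.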

B.1.13. Remove unused code

One of our favorite code improvements is removing unused code. After removing the code, the system reads better, compiles faster, and takes up less disk space and working memory. Of course, the code removed must not perform any work that’s relevant to the system. When removing unused code, go for the low-hanging fruit and don’t try to clean-sweep the codebase.

There are a few categories of unused code, and we’ve taken some inspiration from the horror genre to describe them.

Figure B.1. Ghost code

Ghost code

Sometimes you stare at code that’s commented out, thinking, “Why is this still here? What is it saying? It’s probably very important, or it wouldn’t be here. Let’s keep it.”

Programmers sometimes feel the urge to comment out a particular piece of code, perhaps to short-circuit the execution, or just to save it for later possible use. But there’s no reason for that commented-out code to stay in that state for more than a couple of minutes. Code that’s commented out is ghost code, and like any ghost, it should move on to the other side as quickly as possible. In this case, it should be deleted. If it’s not, it will haunt you as long as it remains in this world. On the upside, ghost code is easy to locate with a normal text search.

You should use the VCS instead of keeping old code in commented-out blocks all over the codebase.

Zombie code

Figure B.2. Zombie code

Zombie code is the living dead code. It’s not part of any business value flow; it’s only called from automated tests, like unit tests or other unused code. It’s just walking around, feeding off the living code by taking up maintenance resources without giving anything back.

This kind of code is very hard to distinguish from real code. It looks like real code, but it isn’t really used. It’s also hard to locate, because it’s covered by tests and may not be marked by static analysis.

To kill it, cut off its head, which is usually the unit tests. This will make the zombie code plain old dead code.

Dead code

Nothing invokes dead code. There are no user calls from a UI, no calls from tests, nor any other usage from anyplace else. If a compiled language is used, it’s somewhat easier to find these dead parts, because static code analysis can help you. Dead code is of no use at all and should be given a proper burial by deletion.

Finding removable code

When you look for dead or zombie code, try to run the system using the acceptance test suite with code coverage turned on. This way you can exercise the parts of the code that an actual user will use. Better yet, if the system allows for some overhead, monitor it in production. Then look through the coverage reports to find uncovered code. The covered code is clearly used and can be ignored. The uncovered parts are candidates for more detailed analysis.

Ghost code can usually be found using a regular expression in a text search. The trick is to look for the start of a comment, and then any characters that are common in code, but uncommon in normal text. Parentheses and semicolons are good candidates, depending on your programming language.
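A sketch of such a search, for C-style `//` comments; the exact pattern is an assumption you’d tune for your language and comment syntax:

```python
# Sketch: flag lines that look like commented-out code rather than
# prose, here a // comment containing parentheses and a semicolon.
# Tune the pattern for your language's comment and statement syntax.
import re

GHOST = re.compile(r"^\s*//.*\(.*\).*;")

print(bool(GHOST.search("    // account.close();")))        # likely ghost code
print(bool(GHOST.search("// closes the account when done")))  # ordinary comment
```

Expect false positives; the pattern only narrows the list of lines worth a human look.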

B.1.14. Prepare yourself mentally

Improving code takes time as well as determination and a vision, but mostly it requires courage. Fear of changing code is not an option, although respect for its complexity is. Errors will be made, and the important thing is to deal with them as soon as possible. Leading, and especially leading change, can be lonely at times. Find strength by including the people around you instead of pushing them away.

B.1.15. Prepare others

You also need to communicate with the people around you, telling them what is about to happen and explaining why the codebase needs to be changed, in terms they can relate to. A good way to kick off the dialogue is to review the main goal of the Mikado Graph, and to explain in a broad sense how the goal will be achieved, perhaps showing a draft Mikado Graph. Take the time to explain the consequences if this isn’t done, as well as the upside of it being implemented. You might also need to explain that this is the job of every single person in the company, especially the developers. The Mikado Graph provides a visual tool through which the goal, scope, opportunity, and consequences can be discussed without going into technical detail.

B.1.16. Measure code problems

There are many tools out there that provide help when analyzing a codebase, as well as plenty of relevant metrics, or code quality parameters, that can give a hint regarding the state of your code:

· When you get confused by how many ways the execution flows due to conditionals, the cyclomatic complexity index will give you a number to quantify that confusion.

· When your mind is spinning from dependencies going back and forth between your packages, you can draw a dependency diagram, or take a look at efferent and afferent couplings in Robert C. Martin’s book Agile Software Development (Prentice Hall, 2002).

· When you need to know how much you can trust your tests, first look at the quality of the tests, and then check the code coverage of the tests. If the tests look good in that they assert the right things, and most of the code is executed by the tests, all is well. Otherwise, you’ll have to find the parts that aren’t tested, and determine what the impact of that is.
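As a rough illustration of the first of these metrics, the following Python sketch computes a basic cyclomatic complexity by starting at 1 and adding 1 for each branching construct. The set of counted node types is a simplification; real tools such as the `mccabe` plugin or `radon` handle more cases:

```python
import ast

# Constructs that add an independent path through the code.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.And, ast.Or, ast.IfExp)

def cyclomatic_complexity(source):
    """Count decision points in a piece of Python source, plus one."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

tangled = """
def grade(score, extra_credit):
    if score > 90 or extra_credit:
        return "A"
    elif score > 75:
        return "B"
    if score < 0:
        raise ValueError("negative score")
    return "C"
"""
# Three if branches plus one 'or' give a complexity of 5.
```

A number like this won't tell you what to fix, but tracking it over time shows whether a module is getting more or less confusing as you change it.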

These are just a few of the available metrics, and all metrics are good in that they provide numbers you can learn from, and you can see how they change as you make changes to the codebase. They can give you some idea of where you might have problems, but there’s more to developing software, and especially legacy code, than just improving metrics.

The Mikado Method starts with what needs to be done, and even though a piece of code scores well on metrics such as complexity, package coupling, and coverage, you might have to change it. At the same time, you might never have to read or change the parts of the code with really bad numbers.

In addition, your gut feeling often tells you where the real problems are. It might take some initial working with the codebase to get that gut feeling, but you usually bump into the worst problems every other day. Code metrics are a good way of getting some backing for that gut feeling, but you shouldn’t base a restructuring effort on metrics alone.

B.1.17. Hire a critical mass of change agents

If bad development habits led to your problems, and that isn’t changed, the problems will never go away. You need to change the way the system is developed, and that requires a lot of effort and determination. Our experience is that on a software development team, 25–50% of its members have to support a new way of working to make a sustainable change.

The fifth principle of the Agile Manifesto states, “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.”[1] It’s the same for an improvement effort: find the people who want to make the improvements. If the goal is to gradually restructure the code, you want to pick the people who want to work that way. If they can’t be found inside the company, search outside of the company for consultants or new employees. To improve your chance of finding the right people, let the people inside the company who want to work that way select the people they want to work with.

1 See the principles behind the Agile Manifesto (agilemanifesto.org).

For more detailed advice on change, we recommend reading Fearless Change by Mary Lynn Manns and Linda Rising (Addison-Wesley Professional, 2004).

B.1.18. Encourage collective code ownership

In Extreme Programming, collective code ownership is mentioned as a good practice, and it’s one of the cornerstones of software development teamwork. It basically means that any person on the team is allowed to change any part of the code. (See Extreme Programming Explained, second edition, by Kent Beck, Addison-Wesley Professional, 2004.)

The opposite approach is giving some people the exclusive right to change certain parts of the code, either by convention in a team or by rules and procedures in a company. Denying developers access to some parts of the code might seem like a good idea at first glance—what they can’t change they can’t ruin. The problem is that restricting access to code increases the resistance to changing it. Necessary restructurings become very tedious to perform, to the point where building large workarounds is easier than fixing the real problem.

When doing larger restructurings with the Mikado Method, you can end up in any part of the codebase and the system, and you need to be able to perform the necessary changes. At the same time, you must, of course, make sure the users of the system aren’t affected.

B.2. During the restructuring

There are a lot of ways to initiate and maintain an effort to improve a codebase. This section discusses some things to keep in mind when you’re in the middle of a restructuring.

B.2.1. All things are relative

Try to always be sensitive to what other team members think is problematic in the code. Just because your last project was even worse doesn’t mean their concerns aren’t relevant.

The shared purpose of always changing the code for the better is a great way to boost morale and feel good about what you’re doing. If you stop improving because the code is in fairly good shape, you’ll end up in triple trouble: first by losing the ability to improve, second by losing the good morale, and third by ending up with bad code. Our best advice is to never stop improving the code, no matter how good it may seem.

B.2.2. Be aware of the code

To effectively reflect on the code you create, you need regular input from other programmers. One of the best ways to see code through fresh eyes is to practice pair programming or mob programming. Then you get the eyes of the other people in the room, and an additional perspective, namely, the one where you reflect on how you think others perceive your work. Pairing or mob programming also enables informal discussions about newly produced code in a fitting context with one or more people who most certainly have knowledge that complements your own.

Another opportunity for reflection can be achieved by using the Pomodoro Technique (see Pomodoro Technique Illustrated by Staffan Nöteberg, Pragmatic Bookshelf, 2009). This technique suggests 25 minutes of work followed by 5 minutes of some other activity. By focusing on something else for 5 minutes, your brain gets a chance to restructure new information and knowledge. Compare this to all the bright ideas that come to mind when standing in the shower, or when folding laundry. Pausing regularly makes it possible to use new information sooner, and it also gives your brain a chance to refocus. Pair programming, or pair work in general, often fits nicely with the Pomodoro Technique. When mob programming, switching programmers at 15-minute intervals works well.

B.2.3. Create an opinion about what good code looks like

In order to improve, you need to have an opinion about what constitutes good code. You need to set a current standard for your team. After that, the hard work begins: the job of keeping the code up to, or above, the standard you’ve set. Discuss the standard with others, and raise it when a better way of doing things appears. The standard will never be perfect, but rather something that is always being perfected.

B.2.4. The road to better often runs through worse

When you’re working with small-scale improvements, the code improves incrementally and is in a little bit better shape after each change. But when a larger refactoring or restructuring is started, you’ll sometimes feel as if the code is heading off in the wrong direction. The stepping stones leading to better code actually make the situation look worse!

Imagine a situation where a lot of related functionality is spread out in several places throughout the codebase, and you use the temporary shelter pattern to move it all into one big and ugly class, just to get an overview. At this stage, the code wouldn’t get through any decent code review.

This is only a transient state though, and the code will eventually be moved into more appropriate places. When you start finding the connections and abstractions in the code, you can create better homes for the pieces of code, making it all look better.

This also applies to when you implement new functionality. You want the code to tell you the abstractions needed in order to avoid overdesign. This often means that you create a small mess to get to know exactly what’s needed and to get an idea of what the right abstractions are, and then you refactor to those abstractions.

If you’re afraid to make the system look a bit worse initially, you might never find the ways to make it look good eventually.

B.2.5. Code of conduct

Dealing frequently with legacy code can be frustrating, and it’s easy to revert to code bashing, such as, “This code sucks!!” or “What kind of idiot wrote this crap?!” Phrases like that are often formulated in a programmer’s mind, uttered between gritted teeth, or even shouted out loud in team rooms and over cubicles.

If you bash each other’s code, eventually you might stop speaking with each other, not daring to ask for help, and actually becoming afraid of each other. Name-calling only leads to negativity and a downward spiral.

For those situations when you need to blow off some steam, find a safe space for it, like an agreement with a colleague that it’s OK to be very open and direct.

Often the real problem isn’t the code, but something else, and the code just gets the blame. Finding what you’re really upset about often helps relieve some of the agony. Remembering that the code you’re bashing might be that of a future friend, or the result of your own actions, might help too.

When working with spaghetti code, big balls of mud, or just a plain mess, keep your focus on developing good code. And after all, it’s just code.

It’s just code

Yup, it’s just code. Relax!

B.2.6. Find the ubiquitous language

You should always strive to bring technical and business people as close to each other as possible. Invite everyone to discussions and create, or rather discover, the common or ubiquitous language and understanding for your domain. Try to incorporate this language in the code, represented as names of classes, packages, methods, and functions.
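As a tiny, hypothetical illustration (the insurance domain and every name here are invented), compare a technically named class with the same responsibility expressed in the domain’s language:

```python
# Before: technical naming that hides the domain.
class RecordProcessor:
    def process(self, amount, rate):
        return amount * rate

# After: the same computation named in the ubiquitous language, so a
# conversation with an underwriter maps directly onto the code.
class PremiumCalculator:
    def annual_premium(self, insured_amount, risk_rate):
        return insured_amount * risk_rate

premium = PremiumCalculator().annual_premium(100_000, 0.02)  # 2000.0
```

The behavior is identical; what changes is that business people can now read the names and tell you when the model is wrong.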

This often requires a closer collaboration between developers and the business side than most organizations are used to. For more details, tricks, and tips, check out Eric Evans’ Domain-Driven Design (Addison-Wesley Professional, 2003) and Gojko Adzic’s Specification by Example (Manning, 2011).

B.2.7. Pay attention to your gut feelings

Nicely formatted code, well-balanced comments, and documentation could at first glance make the code look good. But it might also be a way to just “pimp the code” and trick its readers into thinking it’s better than it actually is.

Many developers have a good gut feeling for code, but without experience, that gut feeling is hard to trust. After a couple of years of experience, some developers start to understand that they’re actually right when they feel something is overly complex, whereas others just seem to get numb to the pain.

Try to listen to what you feel about the code, questioning how things are done and looking for better ways. You don’t want to get numb.

Ola remembers

When I started out programming, I was doing a lot of web development. At that time there weren’t a lot of fancy web frameworks, so there was a fair amount of text parsing, database handling, and even some low-level HTTP stuff to be done. The only frameworks available were the ones that we developed ourselves.

I was working on a rather big web project. This time we had developed some sort of action classes, a dispatcher, and support for views. Very basic stuff. But it still made the development go faster.

One day I was struggling a bit more than usual with some difficult code in a view, and I turned to a more seasoned colleague for some help. I explained the logic and the flow of the code, how this affected that, and how the state of this parameter made the program enter this code. It was a huge mess with nested conditionals, state scattered all over the code, and way too many responsibilities in a single class.

After 10 minutes of explaining, I looked at my colleague and sort of waited for him to point out what I had missed. I didn’t get the response I was looking for though.

“Ola, this is too complex, I don’t get this,” my colleague started. “It doesn’t have to be this way...”

What did he mean by that?

It took at least 5 minutes for that to sink in. Maybe even a week for it to really sink in. What did he mean? I don’t need to have 20 if statements? How am I supposed to do it any other way?

Looking back at this incident, with years of experience, it’s easy to laugh. But for me to see this back then, when I was digging an ever deeper hole, creating a more complex mess every day by adding one if statement at a time, it was a totally different story. I had a blindness to flaws that kept me from seeing that I was creating a complex mess. In addition to that, I had insufficient knowledge about what good code is supposed to look like. I also lacked that hard-to-describe skill that experienced programmers have—the ability to distinguish good code from bad code, which can be harder than one first thinks.

B.3. When the crisis is over

Once you’ve gotten your head above the surface, you want to keep it there and start swimming, and eventually get dry land under your feet. The following sections describe what you need to do to get out of the water and stay there. This behavior takes a little bit of effort to keep up, but the joy of working in such a way will give you much more energy back when things start to look good, feel good, and work well.

B.3.1. Refactor relentlessly

For every change that’s made to the system, try to take a step back and see if there are more parts that can be simplified, split up, or removed completely.

There are some great books out there describing how to make the code better, such as

· Refactoring by Martin Fowler et al. (Addison-Wesley Professional, 1999)

· Refactoring to Patterns by Joshua Kerievsky (Pearson, 2005)

· Clean Code by Robert C. Martin (Prentice Hall, 2008)

· Implementation Patterns by Kent Beck (Pearson, 2008)

· Test-Driven Development by Kent Beck (Addison-Wesley Professional, 2002)

There are also great tools in most development environments that make it easier to carry out a lot of those smaller refactoring and restructurings.

All this knowledge and all these tools are of no use unless they’re put into practice. Play with your tools and your new knowledge to see how they work. Playing around is without a doubt one of the best ways to learn how to perform refactorings and restructurings.

B.3.2. Design never stops

A software system is a model (or several models) of the real world, usually modeling the business you’re developing for. Like any model, it represents a few concepts of the real world in order to be able to perform some tasks. But the real world will change, and so will your understanding of it. The model should then change too, based on your knowledge, to favor your business.

For that reason, design is something that never stops. You constantly need to rework the model, introduce new concepts, and think about how you can make the model serve its purpose most effectively.

Also, the natural habitat for a software system is in production. If the software isn’t making money, saving money, protecting property, or avoiding costs, no benefit or value will come out of it. This means that you need to be able to stay out of trouble and at the same time keep deploying new increments of value into production. This requires you to design as you go.

B.3.3. You ain’t gonna need it—YAGNI

Never develop a feature because you might need it later on. You most likely won’t, and if you do, you can implement it when the need actually arises.

This is one of the most important rules of software development. Few things add complexity as much as speculative development, and it only makes changing the code harder in the future. The cheapest software to write and maintain is the software that’s never developed.

B.3.4. Red-green-refactor

The test-driven development mantra is red-green-refactor, and it describes the three parts of test-driven development:

1. Write a test for the functionality you want to implement. Then run the test, expecting it to fail. When you run the test in an IDE, there’s often an indicator or a progress bar that turns red to signal failing tests; hence, red.

2. Implement just enough code for the test to pass, and nothing more. This results in the indicator turning green as the test passes; hence, green.

3. If all tests pass, you can safely refactor to remove duplications, reorganize the code, and clean up.

If any test fails after such a refactoring, revert the code to the last green state and continue from there. This approach makes sure a lot of problems are discovered before the code reaches other developers or the users.

The tests are good to have, but the refactoring step is immensely important. If you omit it, the code will be tested, but complexity will build up in the codebase. Eventually, you can’t refactor the built-up complexity within a single task. This might force you to take on even more complexity in a workaround, and then you’d need to ask your stakeholders for permission to plan time for a major cleanup. It’s better to continuously refactor.
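A single red-green-refactor cycle might look like the following Python sketch, built around a hypothetical `format_price` function (all names are invented for illustration):

```python
import unittest

# Step 1 (red): write the test first. Run it before format_price exists
# and watch it fail.
class FormatPriceTest(unittest.TestCase):
    def test_formats_two_decimals_with_currency(self):
        self.assertEqual(format_price(3.5), "$3.50")

# Step 2 (green): implement just enough for the test to pass.
def format_price(amount):
    return "${:.2f}".format(amount)

# Step 3 (refactor): with the bar green, clean up safely -- rename,
# remove duplication, extract helpers -- re-running the test after each move.

suite = unittest.TestLoader().loadTestsFromTestCase(FormatPriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If a cleanup in step 3 turns the bar red, revert to the last green state rather than debugging forward, exactly as described above.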

B.3.5. Continue using the Mikado Method

Even though the crisis is over, there are usually more things to take care of in a codebase that just emerged from a crisis. The Mikado Method can, of course, be used for day-to-day work on nontrivial tasks. In addition, it’s good to keep your tools sharp, just in case you need them. When you use the Mikado Method continuously, you’re up to speed with it when you really need it.

B.4. Summary

In this appendix, you’ve seen a plethora of things to think about before, during, and after using the Mikado Method in a change effort.

A very important aspect is getting sufficient and timely feedback, preferably from automated tests, but also from compilers and other tools, as well as from your colleagues using pair or mob programming. In addition, by setting up your technical environment in a helpful way (such as for main branch development), you can move more quickly toward your goal.

Any change effort must also include the people involved, starting with you and your team, extending to the team’s surroundings and possibly to the entire company.

When the crisis is over, you should continue improving the codebase bit by bit to avoid falling back into that hole again.

Try this

· Of all the preparations described in this appendix, what do you have in place in your organization or team?

· What preparations would you need to make before making a bigger change in your codebase?

· What preparations would you omit? Why?

· Which of the preparations are you unable to do? Why?

· Is there any preparation that feels more difficult than others? Why? What would make it easier?

· How do you verify that your application works after a change?

· What metrics do you generate from your codebase? Do you know what they mean?

· Find the automated refactoring tools in your IDE (you might need a plugin). Try each of them. Think of an occasion when they would have been useful.

· Find the regular expression option in the search dialog of your IDE. Write different regular expressions to locate things in your codebase. Can you find what you expect? Does formatting the code consistently improve your success?

· Try to write regular expressions to replace code in your system, using parentheses to pick parts of the found string to use in the replacement. In particular, find the negating match and use it.
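As a sketch of that last exercise, here is a hypothetical Python rewrite that uses a capture group and the negated character class `[^"]` (match anything except a quote) to transform one call style into another. The `log.debug`/`logger.fine` names are invented for illustration:

```python
import re

source = 'log.debug("starting"); log.debug("done"); log.info("hi");'

# ([^"]*) captures the message without running past the closing quote;
# \1 in the replacement reuses the captured text.
rewritten = re.sub(r'log\.debug\("([^"]*)"\)',
                   r'logger.fine("\1")',
                   source)
# Only the debug calls are rewritten; log.info is left untouched.
```

The same capture-and-reuse syntax (with `$1` instead of `\1` in some tools) works in the search-and-replace dialogs of most IDEs.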