Automated Unit Testing - Real World .NET, C#, and Silverlight: Indispensible Experiences from 15 MVPs (2012)

Chapter 15

Automated Unit Testing

by Caleb Jenkins

As a consultant, development mentor, and agile coach, I've worked with many dozens of organizations and hundreds of developers who have struggled through the realities of implementing automated unit tests. Most developers have at least heard of unit tests. They even believe that implementing automated tests would be a good idea and would make their code better, even if they're not sure exactly how or why — or rather, they're not fully convinced that the added time and complexity of writing tests can make their code better.

If you are a developer, you already know how to write code — you probably get paid for it — and won't tests just slow you down? Won't they give you more code to write and maintain? If you've ever struggled with these notions about writing tests, know that you're not alone! If you have these concerns now, you will probably be a better test writer in the long run.

This chapter lays the right foundation for your tests, and gives you the practical skills and tools to make unit testing a natural part of your coding activity.

Understanding Unit Tests

At their core, unit tests are just code you write to test code that you wrote — or, better yet, code that you are going to write. With automated unit tests, you can run and rerun hundreds, or even thousands, of tests with the click of a single button, or have the test run automatically on the server whenever a developer checks in new code to source control.

Scope, LEGOs, and Connected Parts

Think of software development in terms of building with LEGO blocks (the classic children's interlocking building bricks), except that you must individually build most of the blocks. A LEGO block is like a class, and, as you build your software, you assemble the various parts.

Thinking about software this way is helpful. It is also a bit sobering when you realize that you are responsible for building the blocks, and for making sure that they fit together and function properly.

Understanding Test-Driven Development

It has been said that the worst word in Test-Driven Development is the word test. Why is that? Because when you think of a test, it is often something that is done after the fact.

For example, when you build a house, an inspector comes after the house has been built to confirm that it is structurally sound and stable. Compare that to the role of the architect who originally designed the house. He wrote specifications that the builders then had to follow.

In Test-Driven Development (TDD), think of your tests as executable specifications, and not as after-the-fact tests. By writing the tests first, you in essence tell your code what it is supposed to do, and you tell it in such a way that you can easily execute and confirm that it does what you told it to do!

The basic workflow to TDD is simple enough, but it is also a powerful practice if your team embraces it. As Yoda might say, “The tests are my ally, and a powerful ally they are.”

Following are the four basic steps for TDD:

1. Write the test.

2. Run the test and watch it fail.

3. Write code to pass the test.

4. Refactor the code.


This process can be summarized as red, green, refactor — rinse and repeat.

The first step is easy. Write the test that tells your code what it should do.

Next, you run your test and watch it fail (red). Of course, it should fail — you haven't written the code to make it pass yet! It's important to see your test fail to ensure that it can. I've worked with far too many systems that had passing tests — tests that would and could never actually fail. Simply put, as tests, systems, and scenarios grow in complexity, it is all too easy to get caught in the weeds, and inadvertently write a passing test (one that will never fail). So, ensure that your tests can fail, okay?

Then write just enough code to pass your tests. This is actually more difficult than it sounds. For developers, it is too easy to get caught up in the moment and the momentum of where the code is going. “We know what it should do next; I'll write the tests later.”

The problem is that this is often where scope creeps its way into your systems. Use the power of the tests to help guide and push your code where you want it to go. Write just enough code to pass your test (green). I know, I know, you are probably thinking that the code is ugly, it's a hack, and it just does the minimum to pass. Yep — and that is why you have Step 4.

In Step 4, you refactor your code to make it perform better, as well as ensure that it is maintainable and readable. Basically, Step 4 is a cleanup step. Now that you have a passing test, you can refactor and test, refactor and test, and so on, and have a level of confidence that the refactoring you do here isn't breaking something else. Then, when you have it all cleaned up, you write another test!
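The four steps can be sketched without any testing framework at all. Everything here is a hypothetical stand-in (the Account class mirrors the example later in this chapter), and a real cycle would use a proper framework, but it shows the red/green rhythm:

```csharp
using System;

// Step 1: write the "test" -- an executable specification for code
// that may not exist yet.
public static class DepositSpec
{
    public static bool NewDepositIncreasesTotal()
    {
        var account = new Account { Total = 100 };
        account.Deposit(10);
        return account.Total == 110; // the assertion
    }
}

// Steps 2-3: run it, watch it fail (red), then write just enough
// code to make it pass (green).
public class Account
{
    public int Total { get; set; }

    public void Deposit(int amount)
    {
        Total += amount; // the minimal implementation that turns red to green
    }
}

public static class Program
{
    public static void Main()
    {
        // Step 4 (refactor) would happen here, rerunning the spec
        // after each change to confirm nothing broke.
        Console.WriteLine(DepositSpec.NewDepositIncreasesTotal() ? "green" : "red");
    }
}
```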

Understanding the Benefits of a Test-First Approach

I'm not saying that everyone must always write tests first before any code is written (although that is preferred). The argument could be made that fast prototyping and mock-ups are examples of when that might be overkill. But take a quick look at several of the direct benefits of following a test-first approach.

Testable Code

Although I am not a stickler on having massive test coverage in every scenario, I am a huge stickler when it comes to writing testable code. If you find that your code is difficult to test, writing your tests first can dramatically help your ability to write testable code. Most people who start down this path soon realize that they like it so much that it becomes uncomfortable to not have tests first. Give it a try!

Self-Documenting Code

Do you find yourself coming back to your code after the fact, and trying to explain it to other developers? Wouldn't it be great if you had a whole host of example code that showed exactly how to use your library?

Defensive Code

Do you work with other developers who check in code without asking first? (I'm being a little facetious here. Of course, they don't ask for permission to check in code — and you wouldn't want them to!) Having your unit tests already in place is the best way to ensure that other developers (or yourself, for that matter) aren't breaking your code.

Maintainable Code

Some people try to argue that writing tests slows down development and increases the cost of software. However, it is well established in our industry that the cost of initial software creation is pennies on the dollar compared to the cost of software maintenance.

In other words, the real problem with quick-and-dirty is that dirty always outlasts quick. Having a healthy suite of automated tests is quite possibly the single largest and fastest thing that any team could do to decrease the cost of software maintenance, decrease the fragility of its code, reduce the risk of future change, and lower the total cost of software ownership.

Code Smell Detector

If you find that your code is increasingly difficult to test, that is a huge smell indicating that it will be difficult to use. Think of writing your tests like dogfooding your own code — you know, eating what you make to see how it tastes. Writing tests is a way to use your own code early and often.

Writing software is often about writing small, useful pieces of functionality that are then chained together to meet a larger need. The larger need (although necessary to software development) is often obscured from view when the smaller pieces break. Unit tests enable you to quickly test at the smallest level of usefulness.

Getting Oriented with a Basic Example

Okay, enough with the chitchat — let's code! Let's start by taking a look at a simple (albeit contrived) example.

Assume that you are going to write a library to manage account balances. You want to withdraw funds, add funds, and get the current balance. What would the class for that look like? Check out the following:

public class Account
{
    public int Total { get; set; }

    public void Withdraw(int amount)
    {
        Total -= amount;
    }

    public void Deposit(int amount)
    {
        Total += amount;
    }
}
That's simple enough, right? But can it do what you need? Does it work as expected? Can you spot any issues with the code? How would you test to see if the code can do what you want it to do?

You could do a simple test of this code with something like this:

var testAccount = new Account() { Total = 100 };
testAccount.Deposit(10);

bool passed = (testAccount.Total == 110);

But then you have to ask yourself, “How am I going to run this test code?” Back in the day, you might have whipped up a quick Windows Form application, drawn a button, double-clicked it, written your test code in the event handler, and ended with a big MessageBox telling you that it worked! Don't laugh. It's sad, but many developers “unit” test their code that way, and, frankly, at least it is a unit test.

Later in this chapter, you learn about automated unit tests and using mocking frameworks and test runners to make your life easier and more productive. But first, look at the fundamental components that make up every bit of test code — “The Three A's.”

Assign, Act, Assert

Aside from being a catchy homily, the “Three A's” do help to ensure that you write appropriate tests. Now dig a little deeper into what each means.

Assign
The assign section is basically the setup for the test. This is where you assign your starting values for your test. In the earlier example pseudocode, I created the test account and then assigned the property Total to 100.

In many cases, it makes sense to move the starting position (or assign section) for a test to a common method that gets called before every test — especially if you group your tests together by common scenarios.

Act
The act (or action) section is the method or event that you test. In the previous example, it was the act of depositing 10 into the account that was the action.

It's tempting to cram multiple scenarios and actions into a single test, but that's actually just a headache waiting to happen, which can lead to confusion and uncertainty when the tests fail. It's a much better practice to adhere to “one act per test.”

Assert
The assert statement is what tells your test if it has passed. An assert statement is always a Boolean expression — you either pass or fail, and nothing in between. In the preceding example code, I set the result to a bool (passed) and left what I was going to do with that bool up to the imagination.

If you wrote a Windows Form application, you'd be tempted to do a MessageBox at this point with the message, “It passed!” If you wrote a Console app to handle your unit tests, you would have done a Console.Write to display the result of your test. But, again, you have better options. You have moved on. You have embraced the unit testing framework.

A unit testing framework is like having a test application for your code already in place. You leverage the testing framework as infrastructure to host your code, and run your tests against your code.

Code, Tests, Frameworks, and Runners

If you think about it, there are just four major components to an automated unit testing environment:

· Code

· Tests

· Testing framework

· Test runner

Now take a look at each of these components.

Code
This is simply the code that you wrote or are going to write, and that you want to test.

Tests
This is the code that tests your code. (This is what was discussed earlier in this chapter.)

Testing Framework

The testing framework is what makes the whole automated unit testing environment possible; it is the enabler for the entire process. Testing frameworks are available for almost every language and environment.

In the .NET world, nUnit has been around the longest; it's freely available and completely Open Source. Also, Microsoft's VS Test is included with all versions of Visual Studio Professional and up, so it's also extremely ubiquitous and readily available. Other testing frameworks are available, but this discussion focuses on nUnit and VS Test. As a consultant and agile coach, these are the two that I run into with clients more than anything else in the field.

Choosing a Testing Framework

In my experience as a consultant, Microsoft VS Test is typically used at companies that are new to testing (it is the easiest to adopt if you are already using Microsoft Visual Studio), or have made the complete shift to Microsoft Team Foundation Server and Microsoft Team Build as a build server and for Continuous Integration (CI).

nUnit is typically used at companies that have been unit testing longer than Microsoft has officially supported it, or in combination with other Open Source tools and non-Microsoft build servers. For example, Hudson, TeamCity, and CruiseControl.NET are all build server/CI environments that play nicely with nUnit.

If you use Team Foundation Server, VS Test integrates nicely out-of-the-box. (This isn't to say that you can't mix and match these, but they might require more effort to configure.)

nUnit and VS Test are not the only testing frameworks available in .NET. Following are a couple of others worth looking at:

· xUnit.NET

· MbUnit

Microsoft's testing framework is composed of multiple pieces:

· MS Test (MSTest.exe, the Microsoft Test Runner console application)

· Visual Studio Testing Framework (Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll)

· Visual Studio Test Professional (the version of Visual Studio that specifically incorporates all the testing features from Microsoft)

I refer to the Microsoft collections of unit testing features as VS Test. Fortunately, automated unit testing is included in all versions of Visual Studio from Visual Studio Professional on up, not just the Test Professional version.

The testing framework provides a set of APIs and attributes that your test runner can use to execute your tests, and your tests can use to notify the test runner of your test results. For example, if you were to write the test code shown earlier with VS Test, it might look something like this:

[TestMethod]
public void TestDeposit()
{
    var testAccount = new Account() { Total = 100 };
    testAccount.Deposit(10);

    Assert.AreEqual(110, testAccount.Total);
}
Did you notice the [TestMethod] attribute? That is the part of the testing framework that tells the test runner that this is a test that must be executed. You should also notice that the method is public and returns void (Public Sub in Visual Basic). All tests need to be public voids.

If you were using nUnit instead of VS Test, then you would have used the nUnit [Test] attribute instead, as shown here:

[Test]
public void TestDeposit()
{
    var testAccount = new Account() { Total = 100 };
    testAccount.Deposit(10);

    Assert.That(testAccount.Total, Is.EqualTo(110));
}

You might also notice that nUnit offers the Assert.That constraint style. nUnit also supports the classic Assert.AreEqual style shown in the VS Test snippet, so you could have moved your test code over and changed only the attribute. It is a matter of preference and readability as to whether you prefer Assert.AreEqual() or Assert.That().

Test Runner

Running a single test with a Windows Form or Console application would not be that big of a deal; maybe even running 4 or 5 tests would be fine. But what happens when you have 20, 30, 100, or 500 various automated tests that you want to run all at once? This is where the test runner comes in.

Figure 15.1 shows an easy-to-use test runner included with nUnit. Figure 15.2 shows a VS Test test runner integrated directly in Visual Studio.

Figure 15.1 Test runner in nUnit


Figure 15.2 Test runner in VS Test


It's easy to think of the test runner as being synonymous with the testing framework. I try to make a distinction between the two because you will probably use other test runners from time to time, and you might prefer some of those.

For example, if you use Team Foundation Server for Continuous Integration (CI), then Team Build will be your build server and Team Test would actually be the piece executing your tests. If you use NAnt as your build tool, then you might use CruiseControl.NET for CI and have an NAnt script run your nUnit tests.
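As a sketch of that last scenario, an NAnt target that shells out to the nUnit console runner might look roughly like this. The paths, target names, and assembly names are hypothetical, and the exact invocation depends on your NAnt and nUnit versions:

```xml
<target name="test" depends="build">
  <!-- Run the nUnit console runner against the compiled test assembly. -->
  <exec program="tools\nunit\nunit-console.exe">
    <arg value="build\MyApp.Tests.dll" />
    <arg value="/xml=build\test-results.xml" />
  </exec>
</target>
```

A CI server such as CruiseControl.NET would then call this target on every check-in and pick up the XML results for its report view.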

Other examples of test runners include the following plug-ins for Visual Studio:

· Test Driven .NET

· ReSharper

· CodeRush

Other test runner plug-ins for Visual Studio are available, but these three tend to be the most common and most popular. (I use the integrated test runner included with CodeRush from DevExpress.) What I like about all these is that you end up with a consistent testing environment and test execution workflow, regardless of the testing framework. As a consultant, I use whatever testing framework the client happens to use. So, using a test runner that spans multiple test frameworks helps reduce my friction switching between projects.

Using CI Servers and Source Control

Now take a moment to clarify exactly what CI servers are, and how they fit into the overall mix. The basic workflow is this:

1. A developer checks code into a source control server.

2. Your CI server monitors the source control for check-ins and grabs the latest source, pulls it to a known location, and engages a build server (such as MS Build or Nant) to compile the code and run the unit tests.

3. In a good CI environment, the results of the build, the unit tests, and any other metrics tools that you include post to a report view of some kind.


With Team Foundation Server, all this is included out-of-the-box (although it does need to be configured properly). Other well-known CI servers worth looking at include JetBrains' TeamCity, Hudson CI, and CruiseControl.

At this point, you should have a good grasp on the major players or the big-picture items in unit testing (your code, your tests, testing frameworks, and test runners), and, more important, why these are all important pieces in your testing environment. Take a minute to dig into some important specifics, such as the overall organizational structure of the solution/project.

Solution/Project Structure

If you use Visual Studio's VS Test, then you can simply right-click in your code and select Generate Unit Tests. This creates a new Test Project, makes the appropriate references, and does a lot of the wire-up details for you. Figure 15.3 shows the result of this approach.

Figure 15.3 VS Test file and directory organization


There are times, however, when you might want to configure your tests in a specific way, or use other testing frameworks (such as nUnit), so take a quick look at some important details:

· Your code and your tests should be in separate assemblies (projects).

· The projects with your tests should reference your testing framework and the project that you want to test. (In VS Test, this is a Visual Studio Test Project; if you use nUnit, then it's just a C# or VB Library.)

· The code that you test should never reference your test code. This may seem obvious, but it's an important step to keep the direction of dependencies in the right order. Keep in mind that you want this separation because you never want to ship your tests in your production code.
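For the nUnit case, the dependency direction described above shows up directly in the test project's file. A sketch of the relevant fragment follows; the assembly names and paths are hypothetical:

```xml
<!-- Inside MyApp.Tests.csproj: the test library references the testing
     framework and the project under test; MyApp never references back. -->
<ItemGroup>
  <Reference Include="nunit.framework">
    <HintPath>..\packages\NUnit\lib\nunit.framework.dll</HintPath>
  </Reference>
</ItemGroup>
<ItemGroup>
  <ProjectReference Include="..\MyApp\MyApp.csproj" />
</ItemGroup>
```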

Using NuGet to Blend nUnit and VS 2010

In the past there were certain challenges using tools and libraries that weren't included with Visual Studio “out-of-the-box.” These challenges might include knowing what libraries could be trusted, downloading the latest versions and their dependencies, and then repeating this for every project.

As mentioned several times, VS Test is included with Visual Studio. It's also worth mentioning how easy it is to add nUnit's testing framework to your solution with Microsoft's new Open Source NuGet tool. As noted on the official NuGet website, "NuGet is a Visual Studio extension that makes it easy to install and update open source libraries and tools in Visual Studio." In other words, NuGet makes working with third-party Open Source libraries a breeze.

To add NuGet to Visual Studio, follow these steps:

1. In Visual Studio 2010, go to Tools → Extension Manager.

2. Select Online Gallery, and search for “NuGet.”

3. Select NuGet Package Manager and click Install.

To add nUnit from NuGet, follow these steps:

1. Create a class library for your nUnit tests.

2. Right-click the class library project, and select Add Library Package Reference.

3. In the search bar, type nUnit.

4. As shown in Figure 15.4, select the nUnit package, and click Install.

Figure 15.4 Adding nUnit from NuGet


Methods with Fakes and Mocks

Earlier, you learned how it is desirable to fully test each code block or class so that you know that it fits properly and behaves as it should. Some parts are easier to test than others. For example, a method that takes in parameters and returns a value with no other dependencies is easy to test, but what about more complicated software applications? For example, what about the kind that actually connects to other areas of your application and has dependencies and connections to other methods and classes?

Faking with Dependency Injection

The single greatest thing that you can do to make your code more testable and healthy is to start taking a Dependency Injection (DI) approach to writing software. The full breadth of DI is beyond the scope of this chapter, but, simply put, DI enables you to inject objects into a class, as opposed to the class creating the object. A quick example may help to put this into context.

Start with a common scenario. Say that you are inside of your business layer and you need to make a call to your data layer. Sound familiar? What does that code look like?

// Some business layer code where I'm doing business logic stuff …

IMyAwesomeDataLayer data = new MyAwesomeDataLayer();
data.Update(account);

// more business layer stuff…
Setting aside the lack of error handling, does that code snippet look about right? Now, think about how you will test that call. How will you test how the business layer responds to an exception from the data layer — in other words, how will you force an error at this point?

Note that you're not testing the data layer. Right now you're testing the business layer that is supposed to call the data layer, with certain parameters and in a specific scenario. At this point in your test, you don't care if the data layer actually exists. You are not testing that the data layer is doing its awesome thing. You care only that the business layer is calling it when and how it's supposed to.

In this example code snippet, you have an interface-backed data layer (IMyAwesomeDataLayer), but you have a hard dependency on the instantiation of a specific implementation of that interface. One testing solution would be to introduce a simple abstract factory and call that instead of specifically “newing” up your class in code. Unfortunately, an abstract factory would solve one problem and introduce a whole new one. You'd have a new dependency to your class that would then resurface as an issue in less obvious ways later. So, what else would work?

What if you simply told your class that you need an instance of IMyAwesomeDataLayer without telling it where to get it? How could you do that?

You could simply add it as a constructor parameter, or a property that gets set before you call your method. (Without going into the religious DI debate, I'll just say that I prefer constructor injection for anything that is a required dependency.)
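As a minimal sketch of constructor injection (the class and member names below are hypothetical, in the spirit of the example above):

```csharp
using System;

public interface IMyAwesomeDataLayer
{
    void Update(string accountNumber);
}

public class MyAwesomeDataLayer : IMyAwesomeDataLayer
{
    public void Update(string accountNumber)
    {
        Console.WriteLine("updated " + accountNumber);
    }
}

// The business class declares what it needs; callers decide which
// implementation it gets. No "new MyAwesomeDataLayer()" in sight.
public class BusinessService
{
    private readonly IMyAwesomeDataLayer _data;

    public BusinessService(IMyAwesomeDataLayer data)
    {
        if (data == null)
        {
            throw new ArgumentNullException("data");
        }
        _data = data;
    }

    public void Save(string accountNumber)
    {
        _data.Update(accountNumber);
    }
}

public static class Program
{
    public static void Main()
    {
        // Production code wires in the real implementation...
        var service = new BusinessService(new MyAwesomeDataLayer());
        service.Save("12345");
        // ...while a test would pass in a fake or a mock instead.
    }
}
```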

Moving to a DI style of software development can help you make your code more modular, easier to test, and easier to maintain. A number of DI frameworks exist (including Windsor from the Castle Project, Microsoft Unity, Ninject, StructureMap, and others) that make a dependency-injected style of code easier to work with and maintain. However, those are beyond the scope of this chapter, and, typically, they are also outside of your unit tests.

Now that you have a way to get your specific implementation of IMyAwesomeDataLayer into the class, you can create implementations specifically for testing. For example, if you wanted to test how the business layer handles various exceptions that the data layer could throw, you could create a class that implements the IMyAwesomeDataLayer interface, but that always throws a specific exception whenever you call the update method.

public class FakeDataLayerForTest : IMyAwesomeDataLayer
{
    public void Update(Account account)
    {
        throw new InvalidAccountNumberException();
    }
}
A fake class like this would be useful only for testing that specific scenario, but think how useful a scenario that could be to test! So, now what do you do? You end up writing (and maintaining) a new set of fake classes for every possible scenario.

Take a minute to consider the types of fake classes that you would want to build for various scenarios. Maybe you could build a validation fake class that always returned true, a data class that always returned the same exact record, or even a networking class that always acted like the network was down.
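Hand-rolled, a test for the exception scenario could look like the following. The AccountService class and its TrySave behavior are hypothetical (the chapter doesn't define the business layer), and a real version would live inside a testing framework:

```csharp
using System;

public class InvalidAccountNumberException : Exception { }

public class Account
{
    public int Total { get; set; }
}

public interface IMyAwesomeDataLayer
{
    void Update(Account account);
}

// The fake: always throws, so we can exercise the business layer's
// error handling for this one specific scenario.
public class FakeDataLayerForTest : IMyAwesomeDataLayer
{
    public void Update(Account account)
    {
        throw new InvalidAccountNumberException();
    }
}

// Hypothetical business-layer class that should absorb the data-layer error.
public class AccountService
{
    private readonly IMyAwesomeDataLayer _data;

    public AccountService(IMyAwesomeDataLayer data)
    {
        _data = data;
    }

    public bool TrySave(Account account)
    {
        try
        {
            _data.Update(account);
            return true;
        }
        catch (InvalidAccountNumberException)
        {
            return false; // the behavior under test
        }
    }
}

public static class Program
{
    public static void Main()
    {
        var service = new AccountService(new FakeDataLayerForTest());
        bool saved = service.TrySave(new Account { Total = 100 });
        Console.WriteLine(saved ? "unexpected success" : "handled data-layer failure");
    }
}
```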

This chapter began with a comparison between writing code and building with children's LEGO bricks. Just as LEGO pieces rarely sit by themselves, building software requires multiple moving parts that all work together, as shown in Figure 15.5.

Figure 15.5 Thinking of building software applications as building with LEGO bricks


In this scenario, you need something to represent what the UI sends to the Validation piece, plus you need something to act as the Integration and Service Proxy pieces that talk back to the Validation component. It's nice to know that you don't need to re-create the entire application to isolate the Validation piece, just the parts that touch it. Figure 15.6 shows this process.

Figure 15.6 Validation process


Although creating fake classes for each component is possible (and can work well), your tests become less readable as fake classes accumulate in your test suites. To be more specific, to understand what a test does and how it behaves, you end up looking inside each fake class that you created and discerning its purpose and behaviors. It doesn't take long for the scenario-specific fake classes in your test suites to build up. This is why I prefer using mocking frameworks instead of fake classes.

Mocking Frameworks

A mocking framework enables you to create fake classes on-the-fly and in line with your test code. That may be a bit of a simplification: mocking frameworks use a combination of IL emit, reflection, and generics to create runtime instance implementations of .NET interfaces. In other words, they create fake classes on-the-fly!

Choosing a Mocking Framework

A number of popular mocking frameworks are worth looking at, including the following:

· Rhino Mocks

· MoQ

· nMock

Rhino Mocks has been around the longest. It is probably the most mature and most used mocking framework for .NET, so that's what is used in this chapter. You can use the same instructions provided earlier in this chapter for installing nUnit with NuGet to add any of these mocking frameworks.

Now look at some test code with a mocking framework. First, get a mock object that matches the interface, as shown here:

var loggerMock = MockRepository.GenerateMock<ILog>();

Remember, you're not testing the ILog object. You're testing something that has a dependency on it, and so you use a mock to fake the expected interactions.

Next, you tell the mock object what you expect, as shown here:

loggerMock.Expect(x => x.Error("Error Happened")).Repeat.Once();

In this case, you expect the method that you call to need to log some error, specifically the message, “Error Happened.” Obviously, this would not be a useful message in real code, but it makes things nice for this example. You also tell the mock that you expect this method to get called, with this parameter, and that you expect it to be called once, and only once.

Now that you have set up your mock, let's “new up” the class that you actually want to test, and pass in the mock as a dependency. Here is the constructor:

public AcmeBankingService(ILog logger)

This enables you to inject the dependency (or in the case of this test, the mock instance), as shown here:

var serviceUnderTest = new AcmeBankingService(loggerMock);

Finally, you perform your action and then verify your expectations, as shown here:

var tx = serviceUnderTest.CashWithdraw(Account_Number, amount_withdraw);

loggerMock.VerifyAllExpectations();
In this simple example, the only thing that you ask the mock to do is receive a method call a certain number of times, with a certain parameter being passed in to it. Even with this basic example, you verify quite a bit!

By using mocking frameworks, you can go much deeper into the interactions that your code is expected to have with its dependencies. A mocking framework lets you expect method calls, return specific values, throw exceptions, and raise events. Your mocks can quickly become quite complicated, although that should also be a bit of a code smell. Remember, if it takes a tremendous amount of work to test a single class, you might be trying to do too much in one place.
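To demystify what the framework automates, here is a hand-rolled equivalent of the mock from this example: a fake ILog that records its calls so the test can verify them afterward. The ILog shape and the AcmeBankingService internals are assumptions for illustration:

```csharp
using System;

public interface ILog
{
    void Error(string message);
}

// A hand-rolled "mock": it records calls so expectations can be
// checked afterward, which is what the framework generates for you.
public class RecordingLog : ILog
{
    public int ErrorCalls;
    public string LastMessage;

    public void Error(string message)
    {
        ErrorCalls++;
        LastMessage = message;
    }
}

// Hypothetical service that logs once when a withdrawal is invalid.
public class AcmeBankingService
{
    private readonly ILog _logger;

    public AcmeBankingService(ILog logger)
    {
        _logger = logger;
    }

    public bool CashWithdraw(string accountNumber, int amount)
    {
        if (amount <= 0)
        {
            _logger.Error("Error Happened");
            return false;
        }
        return true;
    }
}

public static class Program
{
    public static void Main()
    {
        var loggerMock = new RecordingLog();
        var serviceUnderTest = new AcmeBankingService(loggerMock);

        serviceUnderTest.CashWithdraw("12345", -1); // act

        // Verify: called once, and only once, with the expected message.
        bool passed = loggerMock.ErrorCalls == 1
            && loggerMock.LastMessage == "Error Happened";
        Console.WriteLine(passed ? "expectations met" : "expectations failed");
    }
}
```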

Class Attributes, Test Attributes, and Special Methods

Testing frameworks provide attributes that you will use in your tests to let the test runner know what it should do. For example, the test runner will need to know which classes contain your test methods, and then which public methods are the actual tests. You might also have supporting methods that you'll want the test runner to call at specific points in the testing. Wouldn't it be nice if you could say, "Run this method before anything else; it sets up the test environment," or "Run this same method before each test in this suite"? This is where these special test attributes come in.

If you use VS Test, not only do you start by creating a Test Project, but also Visual Studio starts you off with an “example” unit test, including placeholders for several attributes that you will probably never use or need. Now take a look at the attributes that will be the most helpful to you.

Earlier in this chapter, you learned about the Three A's in unit testing: assign, act, and assert. It doesn't take too many tests to realize that you seem to be using the same sort of “assign” repeatedly in your tests. Remember, the “assign” section of your tests is where you set up the starting values that you are going to test against.

So, if you were testing some sort of validator, the setup might include initializing the validator, setting some values on the test object that you are going to validate, and so on. If you were using mocks, you would typically register your mocks with the mocking framework and set up your expectations here as well.

The testing framework infrastructure recognizes this common pattern (using a single method to set up the majority of your tests), and provides helpful attributes for your use. With nUnit, you mark a method with [TestFixtureSetUp], and with VS Test you mark a method with [ClassInitialize]. As with tests, what you actually name these methods is irrelevant, as long as they are public and return void. (In VS Test, a [ClassInitialize] method must also be static and accept a TestContext parameter.)

By marking a method with the appropriate attribute, the test framework promises to call that method one time before it runs any of the tests. Conversely, you can probably quickly figure out what [ClassCleanup] (VS Test) and [TestFixtureTearDown] (nUnit) do: they run after all the tests in that class. This would be useful if, for example, you were running unit tests against a database and needed to reset some test data to a known state, or to reset any system environment variables that you might have changed during the tests.


I am a huge proponent of isolated unit tests and leveraging mocking frameworks to reduce those sorts of dependencies. I almost never have a need for the tear-down attributes.

What about times when you need to do a certain amount of setup before each unit test, not just before the class of unit tests? Well, if you use Microsoft's VS Test, you can mark a method with [TestInitialize]. If you use NUnit, then [SetUp] does the job. I end up taking this approach more often than not. You will probably start with a certain amount of setup that spans all your tests and needs to run only once. Quickly, though, you'll find that the quality of your tests improves when each test runs in isolation.
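As a sketch of how these attributes fit together (using NUnit 2.x attribute names from this era, and a hypothetical OrderValidator class under test), a test fixture might look like this:

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderValidatorTests
{
    private OrderValidator validator;   // hypothetical class under test

    [TestFixtureSetUp]      // runs once, before any test in this fixture
    public void FixtureSetUp()
    {
        // expensive, one-time setup shared by every test goes here
    }

    [SetUp]                 // runs before *each* test
    public void SetUp()
    {
        validator = new OrderValidator();   // a fresh instance keeps tests isolated
    }

    [Test]
    public void Validate_EmptyOrder_ReturnsFalse()
    {
        Assert.IsFalse(validator.Validate(new Order()));
    }

    [TestFixtureTearDown]   // runs once, after all tests have finished
    public void FixtureTearDown()
    {
        // release any shared resources here
    }
}
```

In VS Test, the same shape uses [ClassInitialize], [TestInitialize], and [ClassCleanup] instead.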

Testing the Hard to Test — Pushing the Edges

So far, this discussion has focused on the interactions and tests with libraries and utility functions. On one hand, this makes sense because the majority of your application should live in the business logic of a class library. On the other hand, libraries and utility functions are easy to demonstrate in an automated unit test — so this is where most examples stop. However, now take a look at some of the hard-to-test areas.

The edges of your application are always the hardest to test. Think about a normal three-tier application with UI, Business, and Data layers, as shown in Figure 15.7.

Figure 15.7 Normal three-tier application


Even if this basic diagram doesn't fit your application's model, the edges of your application are still hard to isolate in a unit test. Are you writing a device driver? The edge is easy to find there. What about a web service? The edge is an obvious service endpoint, and it would be easy to point to the many record-and-playback web tools out there for testing web services. However, those would not be unit tests; they would be full integration tests, or at a minimum, functional tests that require the entire application to be up and running. Remember, you're not testing applications; you're testing the bricks that they are built from.

So, what is the solution? Make the edge as thin as possible, and increase your testable area, as shown in Figure 15.8. Several UI and data patterns have emerged (or, more accurately, have become newly popular) that help achieve this goal.

Figure 15.8 Increasing the testable area


By separating the UI and data logic from the actual edge implementation, you increase your testable area and create a better separation between your UI and data implementations and the rest of the application. One way to achieve this separation is to use a framework built on a trusted UI pattern.

Three UI patterns surface as being notable for unit testing:

· Model-View-Controller (MVC)

· Model-View-Presenter (MVP)

· Model-View-ViewModel (MVVM)

In each of these patterns, a model represents the application model, or a simple data object. A model holds data but, other than that, should be dumb. The same holds true for views. They should look pretty and be dumb as rocks. Why? Because you want those views to be as thin as possible so that as much of your UI logic as possible can be pushed into the controller (MVC), presenter (MVP), or ViewModel (MVVM).

Although each of these patterns is touched upon in other chapters of this book, take a glance at each in the context of unit testing.

Model View Controller (MVC)

With MVC, your controller is where your UI logic should live. In MVC, all input is routed to a controller. The controller decides what actions to take, what classes to call, and what view to display. This is ideal for certain web scenarios, in which all input essentially goes straight to the web server where it gets processed.

MVC separates the UI “edge” from the UI logic, with the controller giving you a nice clean place to add automated unit testing to your UI layer.


Take a look at Microsoft's latest ASP.NET MVC Framework for examples on using MVC as a proven pattern.
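To make this concrete, an MVC controller is just a class, so a controller action can be unit tested without spinning up a web server. The following sketch assumes a hypothetical HomeController in an ASP.NET MVC project:

```csharp
using NUnit.Framework;
using System.Web.Mvc;

[TestFixture]
public class HomeControllerTests
{
    [Test]
    public void Index_ReturnsAView()
    {
        var controller = new HomeController();          // hypothetical controller

        var result = controller.Index() as ViewResult;  // invoke the action directly

        Assert.IsNotNull(result);                       // the action returned a view
    }
}
```

No HTTP request, no web server; the controller is exercised like any other class.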

Model View Presenter (MVP)

In an MVP approach, a presenter object contains all your UI logic. Your view implements an interface that enables you to mock your view when you want to test your presenter object. This is a helpful pattern where the view itself is rich enough to actually bubble up its own events.

Contrast a button on a rich-client Windows Forms/WPF screen with a button in a web application. In a rich-client application, the button is componentized enough that it owns its own event handler and raises its own click event. In an ASP.NET Web Forms application, the click actually results in an HTTP request going across to a web server, where it bubbles up through a managed pipeline (leveraging View State) to eventually end up in an event handler.

This artificial pipeline in ASP.NET Web Forms makes the MVP pattern a good fit in ASP.NET Web Forms scenarios, as well as in rich-client applications.

When using the MVP pattern, make sure that your presenter objects go in a separate class library from your main UI application. This isn't as critical with Windows applications, though it's always a good idea. In web applications, you want that separation to keep your unit tests lightweight. You don't want to spin up a whole web server just to run your unit tests!

MVP separates the UI “edge” from the UI logic with the presenter object, making it the ideal place to unit test the UI logic in your UI layer.
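Here is a minimal sketch of the idea (the ILoginView interface, LoginPresenter, and FakeLoginView are all hypothetical names). Because the presenter depends only on a view interface, a hand-rolled fake can stand in for the real screen:

```csharp
using NUnit.Framework;

public interface ILoginView
{
    string UserName { get; }
    void ShowError(string message);
}

public class LoginPresenter
{
    private readonly ILoginView view;

    public LoginPresenter(ILoginView view)
    {
        this.view = view;
    }

    public void Login()
    {
        if (string.IsNullOrEmpty(view.UserName))
            view.ShowError("User name is required.");
    }
}

// In the test project: a fake view that records what the presenter did.
public class FakeLoginView : ILoginView
{
    public string UserName { get; set; }
    public string LastError { get; private set; }
    public void ShowError(string message) { LastError = message; }
}

[TestFixture]
public class LoginPresenterTests
{
    [Test]
    public void Login_WithEmptyUserName_ShowsError()
    {
        var view = new FakeLoginView { UserName = "" };
        var presenter = new LoginPresenter(view);

        presenter.Login();

        Assert.AreEqual("User name is required.", view.LastError);
    }
}
```

A mocking framework could generate the fake for you, but the principle is the same: the test never touches a real form or page.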

Model View ViewModel (MVVM)

In the MVVM (or simply ViewModel) approach, your UI architecture is broken into three key pieces. As in the previous two patterns, your application models hold your application data, and your views display the data. But with MVVM, you have view-specific models called ViewModels.

MVVM takes advantage of the rich data binding available in Microsoft platforms such as Silverlight and WPF, so that, instead of having a presenter that manipulates your view like a puppet master, you have a single ViewModel that exposes properties and event handlers that are bound to the view. Think of the view as a thin shell that gets wrapped around the ViewModel. All the UI logic and behaviors are rolled into the ViewModel, which can easily be tested.

MVVM separates the UI “edge” from the UI logic with the ViewModel object, so that's where your unit test should focus.
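As a sketch (all names hypothetical), a ViewModel exposes bindable properties and raises change notifications, both of which can be asserted directly in a unit test with no view at all:

```csharp
using System.ComponentModel;
using NUnit.Framework;

public class CustomerViewModel : INotifyPropertyChanged
{
    private string name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            // Notify the bound view that the property changed
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

[TestFixture]
public class CustomerViewModelTests
{
    [Test]
    public void SettingName_RaisesPropertyChanged()
    {
        var viewModel = new CustomerViewModel();
        string changedProperty = null;
        viewModel.PropertyChanged += (s, e) => changedProperty = e.PropertyName;

        viewModel.Name = "Caleb";

        Assert.AreEqual("Name", changedProperty);
    }
}
```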

Table 15.1 provides a summary for the use of these three patterns.

Table 15.1 UI Patterns for Unit Testing

UI Pattern | Useful Scenario | Testable UI Logic
MVC | Web and non-componentized Windows applications | Controller
MVP | Windows and Web Forms applications | Presenter
MVVM | Rich client applications with strong data binding (Silverlight/WPF/some JavaScript frameworks) | ViewModel

Now that you understand some of the common approaches to making the hard-to-test scenarios easier to test, let's look at some strategies for dealing with really difficult (or even impossible) code to test.

Using Sensing Variables to Refactor Nontestable Code

Every developer has “inherited” some hard-to-test code from time to time. Perhaps after the umpteenth time that a new release broke some part of that code, it became obvious that it was time for a change.

To solve this problem, you add some unit tests to protect this code from other changes in the system. And although a complete refactoring might be in order, there may be some simple steps you can take to quickly achieve a level of testability.

To see how this could be done, look at the following example:

private int qualifiedPoints;
private string accountNumber;   // field assumed by UpdatePoints; set elsewhere in the class

public void ValidatePoints(decimal amount, AccountType accountCategory)
{
    if (amount > 100 && accountCategory == AccountType.Business)
        qualifiedPoints += 4;
    else if (amount > 100 && accountCategory == AccountType.Personal)
        qualifiedPoints += 2;
    else if (amount <= 100 && accountCategory == AccountType.Business)
        qualifiedPoints += 1;

    // nothing for Personal account <= 100
}

public void UpdatePoints()
{
    if (qualifiedPoints > 0)
    {
        var data = new AccountData();
        data.UpdatePoints(accountNumber, qualifiedPoints);
        qualifiedPoints = 0;
    }
}

In this code snippet, the first thing you should notice is that there seems to be a lot of business logic tied up in the ValidatePoints method, and there is not an easy way to test it. After you toss in an expense, you must add 4 points, or 2 points, or 1 point, or no points, depending on whether this was a Personal account or a Business account, and depending on how large the transaction was.

This is the sort of code that changes frequently, and yet you have no easy way to validate the logic and functionality. One approach could be to mock the data layer and then test how many points were pushed through it. Of course, that would check only the final output, and would not easily verify that the logic in ValidatePoints was actually working correctly.

This introduces the need for a sensing variable. A sensing variable (or method) is simply a way to peek at the internal state of an object, which would be immediately useful for testing this object. What would that look like? What if you simply added a read-only property exposing the current number of qualified points?

public int QualifiedPoints
{
    get { return qualifiedPoints; }
}
And just like that, this class just became a whole lot easier to test.
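With that sensing property in place, the business rules in ValidatePoints can now be asserted directly. A sketch of such a test (assuming the code above lives in a hypothetical Account class):

```csharp
using NUnit.Framework;

[TestFixture]
public class AccountPointsTests
{
    [Test]
    public void LargeBusinessTransaction_AddsFourPoints()
    {
        var account = new Account();                        // hypothetical class holding ValidatePoints

        account.ValidatePoints(150m, AccountType.Business);

        Assert.AreEqual(4, account.QualifiedPoints);        // read the sensing variable
    }

    [Test]
    public void SmallPersonalTransaction_AddsNoPoints()
    {
        var account = new Account();

        account.ValidatePoints(50m, AccountType.Personal);

        Assert.AreEqual(0, account.QualifiedPoints);
    }
}
```

Each branch of the point logic can now get its own focused test, without mocking the data layer at all.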

Now that you are starting to refactor your code to make it testable, and using your tests to drive better coding practices, you might be asking, “Now that I have tests, what can I do beyond running them?” Let's take a look at how automated unit tests build a foundational layer for many other engineering practices that help reduce your technical debt and improve the quality of your code.

Using Automated Unit Testing with Other Practices

As an agile coach, I work with teams of developers all over the world, and one of the things that I enjoy about embracing automated unit tests is not only what they do for an individual's code, but also what having a suite of tests can do for a team of developers working together. Automated unit tests make up one of the core staples in reducing technical debt, and improving a team's overall code quality.

One of the key benefits of building up your automated unit tests is the multiplier effect they have when combined with the additional services and metrics of code health that you can now take advantage of.

As your code base and team grow, it becomes more challenging to ensure a consistent practice around unit testing. Although nothing can replace solid development practices like code pairing to help ensure higher code quality, you can look at several metrics to help monitor quality practices within larger teams.

Code coverage is a measurement of what percentage of your code your unit tests cover. Although some people might want to hit some artificial code-coverage number (such as 80 percent, or 30 percent, or whatever), I'm usually more interested in the trend of coverage than in any specific number. I want to see the coverage going up, or at least remaining steady. Otherwise, I know that new code is being added without test coverage.

Coverage isn't everything. As all developers know, not all code is created equal in importance or criticality. So, for example, I typically don't unit-test getters and setters on properties for simple data transfer objects (DTOs). (You could, but is there value there?) Rather, I want to ensure that the critical and complex areas of my code have the most coverage.

In addition to covering the most important areas, I also want to ensure that I'm accounting for the various scenarios within a specific section of code. So, instead of relying on code coverage to be everything, I also want to ensure that my number of unit tests continues to grow over time.


NCover is a product that integrates with multiple unit testing frameworks and CI servers to provide code coverage metrics. Microsoft's VS Test features include code coverage metrics that work with MS Test, and integrates out-of-the-box with Team Foundation Server and Team Build for a CI solution.

Microsoft's Work with Automated Unit Testing

Microsoft is doing some incredibly interesting work with automated unit testing in its research group. It has developed a couple of tools that do a good job of analyzing your existing code and generating a large number of high-quality unit tests. This is significant work that yields a useful tool to add to your utility belt (although not a replacement for consistent TDD). It is especially useful for applications that you inherit and need to refactor, where you want a safety net of sorts added to the code before you start working on it.

Check out Microsoft Research's Pex & Moles projects. Pex is the tool that actually automatically creates the unit tests. Moles is a separate tool (that Pex uses — and that you can just use directly yourself) that enables your tests to “mock” objects that can't normally be mocked, similarly to Isolator (a product worth looking at from a company called TypeMock).

Although no tool can (or should) replace the solid coding practice and separation that TDD typically leads to, having a broad set of tools on your belt should be helpful!


Summary

This chapter discussed the fundamentals of automated unit testing. You learned about the Three A's — Assign, Act, and Assert. The discussion broke down automated unit testing into its four major components — code, tests, testing framework, and test runners. You learned a bit about Dependency Injection and working with mocking frameworks, and then about some of the added benefits of having automated unit tests — code metrics, continuous integration, and code coverage. Finally, you learned about some of the leading-edge work that Microsoft Research is doing, and you were encouraged to go beyond the basics so that you may continue to grow your unit testing developer skills.