Overview of Build Automation - Team Foundation Build - Professional Team Foundation Server 2013 (2013)

Part IV

Team Foundation Build

· CHAPTER 17: Overview of Build Automation

· CHAPTER 18: Using Team Foundation Build

· CHAPTER 19: Customizing the Build Process

· CHAPTER 20: Release Management

Chapter 17
Overview of Build Automation

What's in this chapter?

· Getting to know build automation

· Scripting a build

· Using build automation servers

· Adopting build automation

After version control, automating the build is the second most important thing you can do to improve the quality of your software. This chapter defines build automation and examines why it benefits the overall software engineering process. This is followed by a look at the high-level capabilities and limitations of Team Foundation Server, and a comparison with other common build systems in use today. Finally, some general advice is provided to assist in adopting build automation in your environment today.

Subsequent chapters of this book dive deeper into Team Foundation Server's build capabilities, discuss how to customize the build process, and demonstrate this by working through a series of common build customizations used in real-world scenarios.

What's New in Build Automation

Team Foundation Server 2013 and Visual Studio Online have shipped with some improvements to the automated build system that make it easier to get an automated build running and to quickly extend the build functionality. These improvements include the ability to host your build servers in Windows Azure, store build outputs in the Team Foundation Server or Visual Studio Online server, and extend your build to perform custom actions using PowerShell scripts. Each of these will be discussed further in this and in the following chapters of Part IV.

Hosted Build Service

Within the Visual Studio Online service ecosystem is a capability known as the Hosted Build Service. This service provides a relatively unlimited pool of build machines that are managed by Microsoft and hosted in Windows Azure. The services provided mimic the Team Foundation Build architecture described in Chapter 18 but without the cost of hardware acquisition, setup, and maintenance.

Visual Studio Online provides a hosted build controller that will provision a temporary build agent to service your build request. The output of the build will be placed in the new server build drop location in your Visual Studio Online account or in a version control folder that you specify.

The build agents provided in the service have a plethora of preinstalled software packages that your build can utilize. Anything else it needs will have to be pulled from your version control repository during the build.


To get a list of the software packages provided on the hosted build agent, you can view the official list at http://aka.ms/SoftwareOnHostedBuild. You can also see a live list of the available software packages by browsing to http://listofsoftwareontfshostedbuildserver.azurewebsites.net/.

If you find that you need software that is not provided by Microsoft, you still have the option to register additional build controllers and agents that run on premises. These machines are registered with your Visual Studio Online account, but their configuration is fully controlled by you.

Server-Based Build Drops

In all versions of Team Foundation Server, you have the option to either have the build process copy all of the outputs of compilation to a folder on a file server known as the build drop or to not copy any files off the build agent machine. In Visual Studio Online and Team Foundation Server 2013, you now have the option to have the outputs of compilation stored in a special location on the server. The reason for this addition is that when the Hosted Build Service in Visual Studio Online was implemented, it didn't have any way to access your local file share. Storing the outputs of compilation on the server solved this problem.

A nice side effect of this change is that your build drops can now be managed by Team Foundation Server, so you don't have to go to IT to get access to a file share for your build outputs. The server drops are also backed up with all of the other Team Foundation Server data.

Server drops are discussed further in Chapter 18.

Let's Build Something

Imagine building a house. You visit the site every day and see nothing but a muddy field. The construction team tells you, “Yup, we're making excellent progress. The doors are all done and look great. Walls are 80 percent there. Plumbing is ready to go, and the kitchen is ready to drop in.” Every day that you visit, you see that same muddy field. The construction team tells you how well progress is going. Sometimes they regale you with stories about how they decided the doors were not going to be up to the job, so they threw them on a bonfire and built new ones from scratch that can open both ways, and even have little flaps ready should you ever decide to get a cat.

But you never see any progress—just a muddy field with lots of busy people running around looking stressed.

Then, the day before you are due to move in, everything arrives on site at the same time. Things are stuck together in a hurry—but it takes longer than everyone seemed to think it would. The day draws on, night begins to fall, and everyone gets tired, but they heroically continue trying to get the thing to fit together.

In the morning, you take a look at your house. It's a house for sure. A couple of the rooms are not quite finished yet, because they didn't fit when they arrived onsite. A few of the screws are missing, none of the paint is the color you would have chosen, and many things aren't exactly how you'd envisioned them when you drew up the plans six months ago. More embarrassingly for you, now when you see the house you think of several places where it would have been great to have an extra power outlet, and you realize you will probably never get to use the expensive hot tub that you asked for. You can't help wondering why they spent all that time putting cat flaps in your doors when you are allergic to cats, and yet they didn't get the toilet plumbed in the main bathroom.

Now, try to imagine how your customers feel when dealing with something as ephemeral as software. How do you show progress to a customer? How do you know how complete you are? How do you know if everything works? How do you know if you are done with a particular feature or if a feature is done enough to move onto the next one?

The only way to know all this is to assemble your software together and try it out as a whole, to run your application, or to visit your website. Sure, some areas are missing or not quite functional yet. But once you are able to see your application running, you know how close you are to finishing, and it is also very easy for your customer to know how things are going. Once the customer sees it for real, he or she might say that a particular feature you thought was only partially implemented is actually enough to do what he or she wanted. The customer can suggest some course corrections early on, which will make everyone happier with the end result. But you didn't have to change too much to get there.

The problem is that assembling your application can take time. But by making an investment in automating this experience as you go through the software development process, you not only ensure that you can accurately measure and demonstrate progress, but you also remove a huge source of error when it comes to that last-minute push to completion.

If you are serious about the quality of the software you deliver then you need to be serious about build automation.

What Is Build Automation?

Build automation is the process of streamlining your build process so that it is possible to assemble your application into a usable product with a simple, single action. This entails assembling not just the part of the code a particular developer is working on, but also performing other typical activities such as the following:

· Compiling source code into binaries

· Packaging binaries into installable modules such as MSI files, XAP files, JAR files, DMG images, and so on

· Running tests

· Creating documentation

· Deploying results ready for use

Only after the parts of your application come together can you tell if your application works and does what it is supposed to. Assembling the parts of an application is often a complex, time-consuming, and error-prone process. There are so many parts to building the application that, without an automated build, the activity usually falls on one or two individuals on the team who know the secret. Without an automated build, even they sometimes get it wrong, with show-stopping consequences that are often discovered very late, making any mistakes expensive to fix.

Imagine having to recall an entire manufacturing run of a DVD because you missed an important file. Worse still, imagine accidentally including the source code for your application in a web distribution, or leaving embarrassing test data in the application when it was deployed to production. All these things made headlines when they happened to organizations building software, yet they could easily have been avoided.

Integration of software components is the difficult part. Developers work on their features in isolation, making various assumptions about how other parts of the system function. Only after the parts are assembled do the assumptions get tested. If you integrate early and often, these integrations get tested as soon as possible in the development process—thus reducing the cost of fixing the inevitable issues.

It should be trivial for everyone involved in the project to run a copy of the latest build. Only then can you tell if your software works and does what it is supposed to. Only then can you tell if you are going to have your product ready on time. A regular, automated build is the heartbeat of your team.

In Visual Studio, a developer can usually run his or her application by pressing the famous F5 key to run the code in debug mode. This assembles the code together on the local workstation and executes it, which makes it trivial for the developer to test his or her part of the code base. But what it doesn't do is ensure that the code works with all the latest changes committed by other members of the team. In addition, pressing the F5 key simply compiles the code for you to run and test manually.

As part of an automated build, not only can you test that the code correctly compiles, but you can also ensure that it always runs a full suite of automated tests. This instantly gives you a high degree of confidence that no changes that have been introduced have broken something elsewhere.

Pressing the F5 key is easy for a developer. You want your automated build to make it just as easy to run your application—if not easier. This is where a build automation server plays a part.

The build automation server is a machine that looks for changes in version control and automatically rebuilds the project. This can be on demand, on a regular schedule (such as nightly or daily builds), or can be performed every time a developer checks in a file—a process that is often referred to as continuous integration. By giving you rapid feedback when there is a problem with something that has been checked in, the software development team has the opportunity to fix it right away when it is fresh in the mind of the person just checking in code. Fixing the issue early minimizes the cost of the repair as well as the impact the problem code would have on the development efforts of your team members.

However, before you can set up a continuous integration build on a build server, you must script your build so that it can be run with a single command.
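The trigger logic at the heart of such a build server can be sketched in a few lines. The following Python sketch is purely illustrative (PollingBuildServer, get_latest_changeset, and run_build are invented names, and real build servers are considerably more involved): it runs the single build command only when version control reports a changeset that has not yet been built.

```python
class PollingBuildServer:
    """Illustrative sketch of a continuous integration build loop.

    get_latest_changeset and run_build are hypothetical callables supplied
    by the caller; a real server would wire them to a version control
    system and to the scripted build command.
    """

    def __init__(self, get_latest_changeset, run_build):
        self.get_latest_changeset = get_latest_changeset
        self.run_build = run_build
        self.last_built = None   # changeset the last build was made from
        self.results = []        # (changeset, succeeded) history

    def poll_once(self):
        changeset = self.get_latest_changeset()
        if changeset == self.last_built:
            return None          # nothing new checked in; no build needed
        succeeded = self.run_build()
        self.last_built = changeset
        self.results.append((changeset, succeeded))
        return succeeded
```

Calling poll_once on a timer gives scheduled nightly or daily builds; calling it in response to a check-in notification gives the continuous integration trigger described previously.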

Martin Fowler on Continuous Integration

The term continuous integration (CI) emerged from Agile software development methodologies such as Extreme Programming (XP) at the turn of the millennium. Martin Fowler's paper on continuous integration from 2000 is still worth reading today at http://www.martinfowler.com/articles/continuousIntegration.html.

Note that, as originally described, the term refers to increasing the speed and quality of software delivery by decreasing the integration times, and not simply the practice of performing a build for every check-in. Many of the practices expounded by Fowler's paper are supported by tooling in Team Foundation Server—not simply this one small feature of the build services. However, the term continuous integration has come to be synonymous with building after a check-in has occurred and is, therefore, used by Team Foundation Server as the name for this type of trigger, as discussed in Chapter 18.

Scripting a Build

The most basic form of build automation is to write a script that performs all the operations necessary for a clean build. This could be a shell script, batch file, PowerShell script, and so on. However, because of the common tasks that you perform during a build (such as dependency tracking, compiling files, batching files together, and so on), a number of specialized build scripting languages have been developed over the years.


Make

The granddaddy of specialized build scripting languages is Make. Originally created at Bell Labs by Dr. Stuart Feldman in 1977, Make is still commonly used on UNIX-based platforms to create programs from source code by reading the build configuration stored in a text-based file called a makefile. Typically, to build an executable, you had to enter a number of commands to compile and link the source code, also ensuring that dependent code had been correctly compiled and linked. Make was designed specifically to help C programmers manage this build process in an efficient manner.

A makefile defines a series of targets, with each command indented by a tab inside the target:


target: dependencies
	command


For example, a simple Hello World application could have the following makefile:

# Define C Compiler and compiler flags
CC = gcc
CFLAGS = -g

# The default target, called if make is executed with no target.
all: helloworld

helloworld: helloworld.o
	$(CC) $(CFLAGS) -o $@ $<    # Note: Lines start with a TAB

helloworld.o: helloworld.c
	$(CC) $(CFLAGS) -c -o $@ $<

clean:
	rm -rf *o helloworld

Note that one of the main features of Make is that it simplifies dependency management. That is to say that to make the executable helloworld, it checks if the target helloworld.o exists and that its dependencies are met. helloworld.o is dependent on the C source filehelloworld.c. Only if helloworld.c has changed since the last execution of Make will helloworld.o be created and, therefore, helloworld.
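The timestamp check at the heart of Make's dependency management can be sketched as follows (an illustration of the principle only; needs_rebuild is an invented name, and Make itself also recurses through each dependency's own rules):

```python
import os

def needs_rebuild(target, dependencies):
    """Make's core rule, sketched: rebuild the target if it does not
    exist, or if any dependency has been modified more recently."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(dep) > target_mtime for dep in dependencies)
```

In the example above, helloworld.o would only be recreated when needs_rebuild("helloworld.o", ["helloworld.c"]) is true, which is what saves Make from recompiling unchanged code.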

The previous script is the same as typing the following commands in sequence at the command line:

gcc -g -c -o helloworld.o helloworld.c

gcc -g -o helloworld helloworld.o

In a simple makefile like the one shown previously, everything is very readable. With more complex makefiles that perform packaging and deployment activities, it can take a while to figure out which commands are executed in which order. Make uses a declarative language that can be difficult to read for developers used to coding in more imperative languages (as most modern programming languages are). For many developers, it feels like you must read a makefile slightly backward—that is, you must look at the target, then follow all its dependencies, and then their dependencies, to track back what will actually occur first in the sequence.

Since its inception, Make has gone through a number of rewrites and has a number of derivatives that have used the same file format and basic principles, as well as providing some of their own features. There are implementations of Make for most platforms, including NMAKE from Microsoft for the Windows platform.

Apache Ant

Ant is a build automation tool similar to Make, but it was designed from the ground up to be a platform-independent tool. James Duncan Davidson originally developed Ant at Sun Microsystems, and it was first released in 2000. According to Davidson, the name “Ant” is an acronym for “Another Neat Tool.” It is a Java-based tool and reads its build configuration from an XML file, typically called build.xml. With its Java heritage and platform independence, Ant is typically used to build Java projects.

Ant shares a fair number of similarities with Make. The build file is composed of a project that contains a number of targets. Each target defines a number of tasks that are executed and a set of dependencies. Ant is declarative and does automatic dependency management. For example, a simple Hello World application in Java could have the following build.xml to compile it using Ant:

<?xml version="1.0" encoding="utf-8"?>
<project name="helloworld" basedir="." default="package">

    <target name="compile">
        <mkdir dir="${basedir}/bin" />
        <javac srcdir="${basedir}/src"
               destdir="${basedir}/bin" />
    </target>

    <target name="jar">
        <jar destfile="${basedir}/helloworld.jar"
             basedir="${basedir}/bin" />
    </target>

    <target name="clean">
        <delete file="helloworld.jar" />
        <delete dir="${basedir}/bin" />
    </target>

    <target name="package" depends="compile,jar">
        <!-- Comments are in standard XML format -->
    </target>

</project>

The tasks in Ant are implemented as a piece of compiled Java code implementing a particular interface. In addition to the large number of standard tasks that ship as part of Ant, a number of tasks are available in the open source community. Manufacturers of Java-related tooling will often provide Ant tasks to make it easier to work with their tools from Ant.

Ant scripts can get quite complex, and because the XML used in an Ant script is quite verbose, scripts can quickly get very large and complicated. Therefore, for complex build systems, the main build.xml file can be broken down into more modular files.

Ant is so common among the Java community that most of the modern IDEs ship with a version of Ant to allow automated builds to be easily executed from inside the development environment as well as with tooling to help author Ant scripts.

Apache Maven

Maven is an open source project management and build automation tool written in Java. It is primarily used for Java projects. The central concept in Maven is the Project Object Model (pom.xml) file that describes the project being built. While Maven is similar in functionality to Make and derivations such as Ant, it has some novel concepts that define a distinct new category of build tools, making Maven worth discussing in this book.

Make and Ant allow a completely free-form script to be coded, and allow you to locate your source files in any manner. Maven makes the not-unreasonable assumption that you are performing a build, and uses conventions for where files should be located during the build process. It applies the Convention over Configuration software design paradigm to builds. The main advantage of this paradigm is that it helps you find your way around any Maven project, because they all must follow certain patterns to be built (at the cost of losing some flexibility).
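For example, part of the standard directory layout that Maven expects looks like this (my-app is a placeholder project name):

```
my-app/
    pom.xml                 <-- Project Object Model describing the build
    src/main/java/          <-- application source code
    src/main/resources/     <-- resources packaged with the application
    src/test/java/          <-- unit test source code
```

Because every Maven project follows this layout, the build needs no configuration to say where sources and tests live.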

The other main difference between Maven and the Make-inspired build tools is that it takes dependency management to the next level. While Make and Ant handle dependencies inside the project being built, Maven can manage the dependencies on external libraries (which are especially common in many Java projects). If your code takes a dependency on a certain version of a library, then Maven will download this from a project repository and store it locally, making it available for build. This helps the portability of builds because it means that all you need to get started is Java and Maven installed. Executing the build should take care of downloading everything else you need to run the build.
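For example, a dependency is declared in the pom.xml by its group, artifact, and version, and Maven downloads it from a repository as needed. The following sketch uses JUnit as a typical external library:

```xml
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

The test scope shown here tells Maven that the library is needed only to compile and run the tests, not to run the application itself.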


For more information about Maven, visit http://maven.apache.org/.


NAnt

NAnt (http://nant.sourceforge.net/) was inspired by Apache Ant, but it was written in .NET and designed to build .NET projects. Like Ant, it is also an open source project and was originally released in 2001. Interestingly, according to the NAnt FAQ, the name NAnt comes from the fact that the tool is “Not Ant,” which, to extract Ant from its original acronym, would mean that NAnt was “Not Another Neat Tool.” But, in fact, NAnt was a very neat way of performing build automation, and it was especially useful in early .NET 1.0 and 1.1 projects.

Syntactically very similar to Ant, NAnt files are stored with a .build suffix such as nant.build. Each file is composed of a project that contains a number of targets. Each target defines a number of tasks that are executed and a set of dependencies. There are tasks provided to perform common .NET activities such as <csc /> to execute the C# command-line compiler tool.
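For example, a minimal NAnt build file for a Hello World C# program might look something like the following sketch, using the <csc /> task mentioned previously (the file names are invented for illustration):

```xml
<?xml version="1.0"?>
<project name="helloworld" default="build">

  <target name="build">
    <csc target="exe" output="helloworld.exe">
      <sources>
        <include name="helloworld.cs" />
      </sources>
    </csc>
  </target>

  <target name="clean">
    <delete file="helloworld.exe" />
  </target>

</project>
```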

The main problem with NAnt files is that they are not understood by Visual Studio, and so changes made to the Visual Studio solution files (.sln) and project files must also be made in the NAnt file; otherwise, the dependencies would not be known to the automated build script. To execute a build using the .sln file or the .vbproj/.csproj files, you must install Visual Studio on the build server and use the devenv task to drive Visual Studio from the command line, which most people avoid.


MSBuild

MSBuild is the build system that has been used by Visual Studio since Visual Studio 2005. However, the MSBuild platform is installed as part of the .NET Framework, and it is possible to build projects using MSBuild.exe from the command line without using the Visual Studio IDE.

Visual Studio keeps the MSBuild file up-to-date for the project. In fact, the .csproj and .vbproj files that are well known to developers in Visual Studio are simply MSBuild scripts.

MSBuild was heavily influenced by XML-based build automation systems such as Ant or NAnt, and also by its predecessor NMAKE (and therefore Make). MSBuild files typically end with a *proj extension (for example, TFSBuild.proj, MyVBProject.vbproj, or MyCSharpProject.csproj). The MSBuild file follows what should by now be a familiar pattern. It consists of a project, and inside the project, a number of properties and targets are defined. Each target contains a number of tasks.

Following is an example of a simple MSBuild script that you could execute from a Visual Studio command prompt with the command msbuild helloworld.proj:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="SayHello" >

  <!-- Define name to say hello to -->
  <PropertyGroup>
    <Name>World</Name>
  </PropertyGroup>

  <Target Name="SayHello">
    <Message Text="Hello $(Name)!" />
  </Target>

</Project>

However, MSBuild has some notable differences. In addition to simple properties in a PropertyGroup, as shown previously (which can be thought of as key-value pairs), there is also a notion of an Item. Items are lists of values that can be thought of as similar to an array or enumeration in programming terms. An Item also has metadata associated with it. When you create an Item, it is actually a .NET object (implementing the ITaskItem interface). There is a predefined set of metadata available on every Item, but you can also add your own properties as child nodes of the Item in the ItemGroup.
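For illustration, the following sketch defines an Item with custom metadata and then uses it from a target (the SourceFile item type and the Author metadata are invented names for this example):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="ListSources">

  <ItemGroup>
    <SourceFile Include="helloworld.vb">
      <Author>A. Developer</Author>   <!-- custom metadata -->
    </SourceFile>
  </ItemGroup>

  <Target Name="ListSources">
    <!-- %(Filename) is one of the predefined metadata values available
         on every Item; %(Author) is the custom metadata defined above. -->
    <Message Text="%(SourceFile.Filename) was written by %(SourceFile.Author)" />
  </Target>

</Project>
```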

Another way that the use of MSBuild differs from tools such as Ant or NAnt is that Visual Studio and Team Foundation Server ship with a number of templates for the build process. These are stored in an MSBuild script with a .targets extension. They are usually stored in %ProgramFiles%\MSBuild, %ProgramFiles(x86)%\MSBuild, or the .NET Framework folder on the individual machine. The actual build script created by Visual Studio usually just imports the relevant .targets file and provides a number of properties to customize the behavior of the build process defined in the .targets file. In this way, MSBuild shares some slight similarities to Maven in that a typical build pattern is presented, which the project customizes to fit.

In an MSBuild script, reading the file from top to bottom, the last place to define a property or target wins (unlike in Ant, where the first place defined is the winner). This behavior means that anything you write after you import the .targets file in your MSBuild script will override behavior in the imported build template.

The standard templates provided by Microsoft include many .targets files that are already called in the standard template prefixed with Before or After, which are designed as hook points for your own custom logic to run before or after these steps. A classic example would be BeforeBuild and AfterBuild. It is considered good practice to override only targets designed to be overridden like this, or to override properties designed to control the build process. The imported .targets files are typically well-commented and can be read if you would like to learn more about what they do.

The following is a basic .vbproj file as generated by Visual Studio 2013 for a simple Hello World style application. Hopefully, you will now recognize and understand many of the elements of the file. Notice that it doesn't contain any actual Targets—these are all in the imported Microsoft.VisualBasic.targets file, including the actual callout to the Visual Basic compiler. The .vbproj file just contains properties and ItemGroups, which configure how that .targets file behaves:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="12.0" DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props"
          Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <!-- Project-wide properties such as OutputType, RootNamespace, and
         TargetFrameworkVersion appear here. -->
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <!-- Debug-specific properties such as DebugType and OutputPath appear here. -->
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <!-- Release-specific properties appear here. -->
  </PropertyGroup>
  <ItemGroup>
    <Reference Include="System" />
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
    <Reference Include="System.Core" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System.Data.DataSetExtensions" />
  </ItemGroup>
  <ItemGroup>
    <Import Include="Microsoft.VisualBasic" />
    <Import Include="System" />
    <Import Include="System.Collections" />
    <Import Include="System.Collections.Generic" />
    <Import Include="System.Data" />
    <Import Include="System.Diagnostics" />
    <Import Include="System.Linq" />
    <Import Include="System.Xml.Linq" />
    <Import Include="System.Threading.Tasks" />
  </ItemGroup>
  <ItemGroup>
    <Compile Include="Class1.vb" />
    <Compile Include="My Project\AssemblyInfo.vb" />
    <Compile Include="My Project\Application.Designer.vb">
      <!-- AutoGen and DependentUpon metadata appear here. -->
    </Compile>
    <Compile Include="My Project\Resources.Designer.vb">
      <!-- AutoGen and DependentUpon metadata appear here. -->
    </Compile>
    <Compile Include="My Project\Settings.Designer.vb">
      <!-- AutoGen and DependentUpon metadata appear here. -->
    </Compile>
  </ItemGroup>
  <ItemGroup>
    <EmbeddedResource Include="My Project\Resources.resx">
      <!-- Generator and LastGenOutput metadata appear here. -->
    </EmbeddedResource>
  </ItemGroup>
  <ItemGroup>
    <None Include="My Project\Application.myapp">
      <!-- Generator and LastGenOutput metadata appear here. -->
    </None>
    <None Include="My Project\Settings.settings">
      <!-- Generator and LastGenOutput metadata appear here. -->
    </None>
  </ItemGroup>
  <Import Project="$(MSBuildToolsPath)\Microsoft.VisualBasic.targets" />
  <!-- To modify your build process, add your task inside one of the
       targets below and uncomment it.
       Other similar extension points exist, see Microsoft.Common.targets.
  <Target Name="BeforeBuild">
  </Target>
  <Target Name="AfterBuild">
  </Target>
  -->
</Project>

Windows Workflow Foundation

Although this chapter has familiarized you with specialized build scripting languages, so far no mention has been made of other programming and scripting methods that could also be used to create a build (such as PowerShell, batch files, or even UNIX shell scripts). But one such general-purpose framework is worth mentioning here because of its use by the build automation functionality in Team Foundation Server—Windows Workflow Foundation (WF).

WF is a programming framework from Microsoft used for defining and executing workflows. The WF version used by Team Foundation Server is version 4.5 and is part of the .NET Framework 4.5. WF can be coded using the XML-based XAML markup or in any .NET language directly against the Windows Workflow Foundation APIs, which ship with the .NET Framework.

Unlike the specialized build languages, WF contains no functionality built in for dependency management—or even methods for mass manipulation of files. Therefore, its use by Team Foundation Server for build automation might seem a little odd at first. However, WF provides a couple of capabilities that traditional build scripting languages do not.

Build scripting languages do not typically store state between runs, but workflow is all about state. WF maintains state, gets input from and sends output to the world outside of the workflow engine, provides the control flow, and executes the code that makes up the work.

In addition, most build scripting languages control the execution on a single machine. The state persistence nature of WF brings with it the ability to take components of the build and deploy them across multiple machines. This means that you can split some of the workload of your build across several machines and bring the results back together before proceeding with the rest of the build process. For example, you could perform compilation on one machine, while generating documentation from the source on another, and bring them both together when you package your build. This capability provides another weapon in your arsenal when trying to reduce the overall time for a build to complete, and thus tightening the feedback loop for your builds.

For activities that require more traditional build capabilities (such as amassing a bunch of files together and compiling them), the WF templates used by Team Foundation Server rely on the traditional build scripting languages—typically MSBuild.

Chapters 18 and 19 explain more about WF and how it is used by Team Foundation Build. The rest of this chapter looks in more detail at the concept of a build automation server.

Using Build Automation Servers

Once you have a single command that can run your build, the next step is to run it periodically. This not only ensures that the product in version control is always in a runnable state, but also removes yet another manual step in the chain and fully automates the build process. Having the build runnable on a server ensures that the build is repeatable on a machine other than the one used by the developer to code the project. Just this simple act of separation helps ensure that all dependencies are known and taken into account by the build, which is what makes the build repeatable.

In the earliest days of build automation, the build was performed periodically (typically weekly, nightly, or daily) using a simple cron job or scheduled task.

Building on every single check-in to the version control system requires a machine with dedicated build server logic. It was exactly this logic that was built for a project being implemented by a company called ThoughtWorks (an IT consultancy focused on Agile software development practices). The Continuous Integration (CI) build server logic was later extracted into a standalone project, which became CruiseControl.


CruiseControl

CruiseControl (http://cruisecontrol.sourceforge.net/) is an open source build server implemented in Java. Therefore, it runs on many platforms, including Windows and Linux. At the heart of CruiseControl is the build loop that periodically checks the configured version control system for changes to the code and, if a change is detected, will trigger a new build. Once the build is complete, a notification can be sent regarding the state of the build.

Configuration of CruiseControl is performed using a single config.xml file. Because of its long life as a vibrant and active open source project, many extensions have been contributed to CruiseControl over time. Many different version control systems (including Team Foundation Server) can be queried by CruiseControl using these extensions. An equal number of notification extensions exist including e-mail, a web-based console, instant messenger, or even a system tray application in Windows. Output from the build (including results of unit tests, code coverage reports, API documentation, and so on) is available via the web interface.

While any build process can, in theory, be executed by CruiseControl, it is typically used to automate Ant builds and, therefore, to build Java projects.

As discussed, Team Foundation Server is supported by CruiseControl as a version control repository. However, data about the build and build notifications are kept within the CruiseControl system.

CruiseControl.NET

CruiseControl.NET (http://www.cruisecontrolnet.org/) is also an open source build server, but, as the name suggests, it is implemented using .NET. It was loosely based on the original Java version of CruiseControl, and it, too, was originally developed by the ThoughtWorks consultancy.

Configuration of CruiseControl.NET is typically performed by editing an XML file called ccnet.config. It is also capable of working with a number of version control systems, including Team Foundation Server, and because of its focus on .NET developers, it is capable of building .NET projects by using NAnt or MSBuild scripts and notifying the developers of the results.
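A sketch of the corresponding structure in ccnet.config follows. Again, this is illustrative only: the element names are approximate, the server URL, paths, and addresses are placeholders, and the exact schema (particularly for Team Foundation Server integration) depends on the CruiseControl.NET version and any plug-ins installed:

```xml
<cruisecontrol>
  <project name="MyProject">
    <!-- Watch a Team Foundation Server folder for check-ins
         (requires the appropriate source control plug-in) -->
    <sourcecontrol type="vsts">
      <server>http://tfs:8080/tfs</server>
      <project>$/TeamProject/MyProject</project>
    </sourcecontrol>
    <!-- Build the solution with MSBuild when a change is detected -->
    <tasks>
      <msbuild>
        <projectFile>MyProject.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
    <!-- E-mail the developers the result -->
    <publishers>
      <email from="build@example.com" mailhost="smtp.example.com"/>
    </publishers>
  </project>
</cruisecontrol>
```

The overall shape (source control, tasks, publishers) is the same build loop as the Java version, just expressed in .NET-friendly terms.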

Hudson and Jenkins

Hudson is another open source build server implemented in Java. Hudson was the original name for the server, but after an acrimonious falling out between the project's main community maintainer and the holders of the Hudson trademark, a community fork of the project was created called Jenkins. In recent years, Hudson and Jenkins have become popular alternatives to CruiseControl, not least because of their easy-to-use web-based interface for configuring new builds, rather than relying on manual editing of XML files.

While Hudson/Jenkins is capable of building many kinds of projects (including Ant and even MSBuild builds), it has special features for handling Maven builds and for tracking dependencies between the builds of Maven projects, which is why it merits particular mention in this book.

Hudson is capable of working with many version control tools, including Team Foundation Server. However, as with all external build systems, data about these builds is kept inside the Hudson system itself, though it does have some useful build reporting capabilities.

Team Foundation Server

Build automation is so vital to improving the quality of software development that, since its original release in 2005, Team Foundation Server has included build automation capabilities. Internally, the feature was known by the name “Big Build,” but people refer to the build automation component of Team Foundation Server as Team Build or Team Foundation Build.

MSBuild first shipped with Visual Studio in 2005, and the original incarnation of Team Foundation Build in 2005 was based heavily on MSBuild. A build was defined by an MSBuild script called TFSBuild.proj, located in a folder under $/TeamProject/TeamBuildTypes/BuildTypeName in Team Foundation Server version control.
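An abbreviated sketch of such a TFSBuild.proj follows; the solution path and configuration are placeholders, and the real file generated by Visual Studio contained additional properties:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Pull in the Team Build targets that drive the build process -->
  <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
  <ItemGroup>
    <!-- The solution(s) to build (path is a placeholder) -->
    <SolutionToBuild Include="$(BuildProjectFolderPath)/../../MySolution.sln" />
    <!-- The configuration/platform combination(s) to build -->
    <ConfigurationToBuild Include="Release|Any CPU">
      <FlavorToBuild>Release</FlavorToBuild>
      <PlatformToBuild>Any CPU</PlatformToBuild>
    </ConfigurationToBuild>
  </ItemGroup>
</Project>
```

The imported targets file did the heavy lifting (get sources, compile, test, drop outputs); the project file mostly declared what to build.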

When a build was triggered, the TFSBuild.proj file was downloaded to the build server and executed. Results of the build were published back to the server, and, importantly, metrics about it were fed into the powerful Team Foundation Server data warehouse. Additionally, the build results were automatically linked with entries in the Team Foundation Server work item tracking engine.

However, in the original 2005 release, the capabilities of Team Foundation Build were very limited. There was no built-in process for triggering builds—they had to be triggered manually, or users had to configure their own jobs to trigger builds periodically or listen for check-ins and trigger continuous integration builds.

Thankfully, the 2008 release of Team Foundation Server saw huge improvements in the build capabilities. In fact, Team Foundation Build was probably the single biggest reason to upgrade from Team Foundation Server 2005. In 2008, you had the ability to trigger builds by one of several trigger types, including scheduled builds and continuous integration style builds.

The 2010 release saw even more improvements to Team Foundation Server's build capabilities. The biggest of these was the move from a totally MSBuild-based solution to one using Windows Workflow Foundation (WF) as the build orchestration engine. This had several important advantages, including the ability to easily surface common build configuration properties into the user interface in Visual Studio, as well as the ability to distribute a build across multiple servers (or Build Agents).

While you get very rich integration in Team Foundation Server across the version control, build, and work item tracking functionality, it is important to note that the build automation capabilities of Team Foundation Server can be used only with Team Foundation Server version control or Git. While this should not be an issue for readers of this book, it is an important element to factor in when planning your migration to Team Foundation Server. Only after your source code is in Team Foundation Server does it make sense to switch on its build automation capabilities.

Adopting Build Automation

Hopefully, by now, you are suitably convinced that build automation is something that you want to do. But how should you go about adopting build automation as a practice?

The first step is to ensure that you have a single command that you can run to fully build and package your product ready for deployment. If you use Visual Studio, this is very easy because most Visual Studio project types are easily built using MSBuild. However, if you have components developed in Java or other languages, then you will need to do some work to put together your build script using the most appropriate scripting language (such as Ant or Maven).
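For a typical Visual Studio solution, that single command might be no more than an MSBuild invocation such as the following (the solution name and configuration are placeholders):

```
msbuild MySolution.sln /t:Rebuild /p:Configuration=Release
```

A Java component would have an equivalent single entry point, such as an Ant or Maven goal, so the whole product can still be built with one command per component, or one top-level script that calls them all.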

Next, you should ensure that everyone knows how to build the project and that all the developers can run this from their machines.

Once the build is easy to run, the next step is to periodically run the build to ensure that you always have a clean code base. If you have sufficient resources, and your build is fast enough, then strive for a continuous integration style of build and ensure that you have a fast (and hopefully fun) method of notification to tell the developers when the build has been broken. A simple e-mail notification will suffice and should be used at a minimum, but you can be more creative if you would like.

Brian the Build Bunny

Some ways of making the team pay attention to the state of the build are more imaginative than others. A popular way of encouraging the team to pay attention to the current state of the build is to make creative and eye-catching build status notification mechanisms. While wall displays and lava lamps are a good way of communicating this information to the team, Martin has even gone so far as to connect a talking, moving robot rabbit into Team Foundation Server. For more information on this project (including a prize-winning YouTube video and full source code), see http://aka.ms/BrianTheBuildBunny.

Sadly, the company that created Brian the Build Bunny is no longer in business.

Just this simple step of running the build regularly will significantly improve the productivity of your team. No longer will developers need to roll back changes they have downloaded because those changes do not compile in their environment. At this point in your adoption of build automation, the trick is to keep things fun, but to gradually introduce a little peer pressure to ensure that the build status is usually good. If a build fails for some reason, that build failure should be the team's immediate priority. What change to the system made the build fail? Who just broke the build? Fix the build and then resume normal work.

If you are developing a website, then make your build automatically deploy to a server so that people can easily play with the latest version of the code. If you are building a client-side application, try to package it so that it can easily be executed by anyone involved in the project. MSI files or ClickOnce installers are good ways of doing this on Windows, but DMG images for the Mac, RPM/DEB files on Linux, or Eclipse Update sites for Eclipse developers are all great ways of making it easy to run the latest build.

Once the team has become familiar with the notion of builds happening automatically, and gotten into the habit of ensuring that the build is “good” at all times, you can gradually raise the bar on determining what makes a good build.

To begin with, simply being able to compile the build and package it for deployment is good enough. Next, you want to introduce things such as automated unit tests (again, slowly at first) so that not only does the build compile, but it also actually works as originally intended. You can also introduce other code-quality indicators at this point, such as ensuring that code meets team-wide coding standards. Over time, you can introduce targets—such as 20 percent of code being covered by the unit tests—and then gradually increase this percentage. In Team Foundation Server, you can also increase quality by making the build run before the code being checked in is committed to the version control system. This feature is known as Gated Check-in and is discussed in Chapter 18. Using the Lab Management features described in Chapter 26, you can even deploy your complex n-tier application out to a series of servers in your lab, and then execute full integration tests in that environment, to validate your build. For the developer, this is still incredibly easy; all she has to do is check in her code.

The trick is to be constantly improving the quality of your builds but still ensuring that checking in and getting a clean build is fast and easy. By keeping the feedback loop between a check-in and a working, deployable product to test as short as possible, you will maximize the productivity of your team, while also being able to easily demonstrate progress to the people sponsoring the development activity in the first place.

Summary
This chapter provided a glimpse of what's new in build automation in Team Foundation Server 2013. It explained what build automation is and the benefits it brings. You learned about some of the various ways to script an automated build and how to run that build periodically using a build automation server. Finally, the chapter provided tips on how to adopt build automation, in general, inside the organization.

Once you have migrated your source code into Team Foundation Server, getting builds configured is an important next step. As discussed in this chapter, if you are already using an existing build automation server, then most of these are already able to use Team Foundation Server as a version control repository from which to draw when automating the build. However, there are several advantages to using Team Foundation Server's built-in build automation capabilities—primarily the integration you get between the version control and work item tracking systems, but also the excellent reporting capabilities provided by Team Foundation Server. The regular builds act as a heartbeat to which you can track the health of your project once all the data from version control, work item tracking, and build automation is combined in the reports provided by Team Foundation Server.

Things have been somewhat generalized in this chapter because build automation is important regardless of the technology or platform you choose to use. However, Team Foundation Server has some innovative features around build automation.

Chapter 18 describes in detail how to create automated builds inside Team Foundation Server. It describes all the features that Team Foundation Server provides, and it highlights new features in the 2013 release. You will learn about the architecture of the Team Foundation Build system and how to work with builds. Finally, the build process will be examined in detail, describing what it does and how it works.