JavaScript Application Design: A Build First Approach (2015)

Part 2. Managing complexity

Chapter 8. Testing JavaScript components

This chapter covers

· Applying unit testing fundamentals to JavaScript components

· Writing unit tests in Tape

· Mocking, spying, and proxying

· Testing browsers hands-on

· Using Grunt for test automation

· Understanding integration and visual testing

By writing tests, you’ll improve the reliability of the modules and applications you build and ensure they work the way you intend. In typical Build First fashion, you’ll get the necessary insight to automate those tests and run them in the cloud. This chapter includes a few guidelines that will help you write tests, and you’ll also get hands-on experience in testing components. In some cases I’ll walk you through the tests you might write for a given piece of code, helping you visualize the thought process behind writing thoughtful unit tests.

While I’m not an advocate for the Test-Driven Development (TDD) paradigm, which encourages you to write tests before you develop any functionality, I think tests are important, and you should write them. In this chapter we’ll go back and forth between process design and application design. You’ll look at how to write tests, and then I’ll give you the tools to automate testing.

What do you mean you’re not an advocate for TDD?

That’s right. I wouldn’t recommend you use TDD, so let me elaborate on that. I don’t have anything against TDD itself, but writing tests is already a large commitment. If you’re getting started and throw TDD into your learning process, it probably won’t work out well for you. It definitely didn’t work for me when I was first getting into testing! TDD can be overwhelming, and maybe you don’t write any tests because you don’t know where to start. Or maybe you write pointless ones, testing against the implementation itself rather than testing the underlying interfaces and their expected behavior. Before attempting to learn TDD, I suggest that you try writing a few tests for existing code. That way when (and if) you decide to go down the TDD route you’ll know how your tests should be structured, what parts are important to test, and what parts are not. More importantly, you’ll know whether writing a particular test case is necessary or even helpful. That being said, if you already have experience writing unit tests, and Test-Driven Development suits you, then I have nothing against that!

You learned about modularity, mostly in chapter 5; improving your asynchronous flows, as discussed in chapter 6; and structuring your code in a more organized manner, thanks to the MVC pattern in chapter 7. All that modularity helps drive down the complexity in your application designs by creating smaller components that are easier to work on and understand. A benefit of the work you’ve accomplished so far in part 2 is that testing becomes much simpler.

8.1. JavaScript testing crash course

The essence of testing lies in learning how to isolate functionality so that it can be easily tested. This is the reason modularity is so important for attaining more testable code, which in turn improves quality, the cornerstone of Build First. Modular, loosely coupled code is easier to test because you have fewer things to account for, and your tests can be contained in small units that are only concerned with one small piece of code getting something right. In contrast, monolithic, tightly coupled code is harder to test because more things can go wrong, many of which might be completely unrelated to the piece of functionality you were attempting to test.

8.1.1. Logical units in isolation

Consider the following contrived example for reference. You have a method that queries an API endpoint (you’ll learn about API design in chapter 9, so hang tight), and then crunches numbers before returning a value. Suppose you want to make sure the data, whatever it was, was correctly multiplied by 555:

function getWorkDone () {
  return get('/api/data').then(function (res) {
    return res.data * 555;
  });
}

In this case, you don’t care about the bits of this method that don’t have to do with the computational part, and they get in the way of your testing. Testing becomes harder, as you now need to deal with the Promise stuff to verify that the data gets computed correctly. You might want to consider refactoring this into two smaller methods, one that does computation only, and one that deals with querying the API:

function getWorkDone () {
  return get('/api/data').then(function (res) {
    return compute(res.data);
  });
}

function compute (data) {
  return data * 555;
}

This kind of separation of concerns enables reusability, because you could run the computation in other places in your code that might need it. More importantly, it’s much easier to test the computation in isolation now. The following piece of code is good enough at making sure the compute method works as intended:

if (compute(3) !== 1665) {
  throw new Error('assertion failed!');
}

Things become much easier when you use a library equipped to help with testing requirements, and I’ll teach you how to use the Tape library, which adheres to a unit testing protocol called Test Anything Protocol[1] (TAP). Other popular JavaScript testing libraries include Jasmine and Mocha, but we’ll stay away from those. They involve more complicated setups, often requiring a test harness and polluting the global namespace. We’ll be using Tape, which doesn’t rely on globals or a test harness, and makes it easy to test code regardless of whether it’s written for Node.js or the browser.

1 Visit http://testanything.org to learn more about the Test Anything Protocol.

8.1.2. Using the Test Anything Protocol (TAP)

TAP is a test protocol implemented in a variety of languages, including Node.js. There are a few ways in which you can execute tap tests:

· Using node to run the tests directly in your terminal

· In a browser, compiling the tests to client-side JavaScript using Browserify

· Remotely, using automation services such as Travis-CI, the way you did in chapter 4

To get things started, you’ll look at how to use Tape in your local environment by plainly firing up a browser. In section 8.4 you’ll learn how to automate this process using Grunt to avoid firing up the browser on your own, and I’ll explain how to include it in your CI workflows.

Getting started with JavaScript unit tests that need a browser can be confusing at first. You’ll set up a pointless unit test in Node first, and then you’ll run that in the browser before getting to unit testing principles and advice, which you’ll find in section 8.2.

8.1.3. Putting together our first unit test

To create your first unit test and run it in the browser, start with the compute function from the previous examples in this chapter, placed in a CommonJS module. This example is available as ch08/01_your-first-tape-test in the samples. You can save this file in src/compute.js:

module.exports = function (data) {
  return data * 555;
};

In the following code you’ll find the unit test written using tape, which provides an interface to perform basic assertions. When you create a test, you give it a name and a function; Tape then provides that function with an object exposing the assertion interface. You’ll learn more about assertions in section 8.2. Each test case in Tape is defined using a description and a test method. You’ll place this file in test/compute.js:

var test = require('tape');
var compute = require('../src/compute.js');

test('compute() should multiply by 555', function (t) {
  t.equal(1665, compute(3));
  t.end();
});

Note that you have to require the compute function to test it. Tape won’t load your source code for you. Similarly, the tape module should also be required. The API is fairly simple and requires you to call t.end() to denote when a test has finished. Tape is mostly concerned with assertions about your assumptions and tracking test results. To run any tests written using tape, you merely need to run the code using Node:

node test/compute.js

Let’s see what it takes to run these tests in the browser as well.

8.1.4. Tape in the browser

Running Tape tests in the browser is mostly a matter of Browserifying your tests. You could do this once by using the global Browserify package, or you could automate it using Grunt. Let’s automate it. You’ll need to use grunt-browserify to do that:

npm install --save-dev grunt grunt-browserify

Once you’ve installed grunt-browserify, you need to set up a Gruntfile the way you did throughout part 1, and configure the browserify task to compile your CommonJS code down to something browsers can interpret seamlessly. In the case of the unit test you’ve seen, your configuration could look like the following listing (you can find this example under ch08/02_tape-in-the-browser).

Listing 8.1. Compiling code for a browser to interpret

module.exports = function (grunt) {
  grunt.initConfig({
    browserify: {
      tests: {
        files: {
          'test/build/test-bundle.js': ['test/**/*.js']
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-browserify');
};

Using the browserify:tests target, you can compile the code so it can be referenced in an HTML file. As a last step you need to put together the HTML file. Luckily, you won’t need to touch it once it’s put together, because the JavaScript will be taken care of by the Browserify bundler, and you won’t need to change the script tags by hand or anything else in your HTML, as shown in the following listing.

Listing 8.2. Compiling code to be referenced by the HTML file

<!doctype html>
<html>
  <head>
    <meta charset='utf-8'>
    <title>Unit Testing JavaScript with Tape</title>
  </head>
  <body>
    <script src='build/test-bundle.js'></script>
  </body>
</html>

Running the tests will only be a matter of opening this HTML file with a browser. You’ll come back to Grunt later in the chapter to look at automating your testing process. Let’s talk about testing principles and how to apply them in JavaScript tests.

8.1.5. Arrange, Act, Assert

Writing unit tests is often made out to be a difficult and tedious process, but it doesn’t have to be. If your code is written with modularity and testability in mind, it’ll be much easier to test. Monolithic, tightly coupled code does turn testing into a complicated process. That’s because tests are most effective when they can verify small components in isolation, so you shouldn’t have to worry about dependencies. This type of testing is referred to as unit testing. The second most common type of testing is integration testing, which involves testing that the interaction between components works as expected, focusing on how the network of components operates. Figure 8.1 compares both types of testing.

Figure 8.1. Differences between unit and integration testing strategies. Note that a combination of the two should be used. Unit tests and integration tests are not exclusive. Pure functions are discussed in section 8.1.15.

8.1.6. Unit testing

In contrast with integration tests, which focus on interaction, good unit tests actively disregard interaction, only focusing on how a single component works in isolation. Furthermore, good unit tests don’t care about a component’s implementation details; they only focus on the component’s public API. That means good unit tests can be read as examples of how a component is expected to work. Even though not ideal, sometimes unit tests are the next best thing when a package’s documentation is lacking.

Good unit tests often follow the “Arrange Act Assert” (AAA) pattern, creating fake versions of dependencies in unit tests and spying on methods to make sure they are invoked. The following subsections explore those concepts. Before you get to section 8.3, you’ll go through real unit testing case scenarios.

The AAA pattern can help you to develop concise and organized unit tests. It consists of building your unit tests in three stages:

· Arrange: You create instances of everything needed by your test.

· Act: You execute your tests and track their results.

· Assert: You verify whether the results match the expected output.

Following these simple steps, it’s easy to find your place when skimming through a unit test. Assertions are used to verify, for instance, that the result of typeof {} matches 'object'. Note that when these steps can be simplified into a single, readable line, you probably should do so.

8.1.7. Convenience over convention

Some purists will tell you to do only a single assertion per unit test. I suggest you stay pragmatic and allow yourself to write a few assertions in the same test, as long as they test the same specific piece of functionality. It won’t hurt if you do, because the test harness (Tape, in your case) will tell you exactly which assertion failed in which test. Using a single assertion per test often leads to massive code duplication and frustrating testing sessions.

8.1.8. Case study: unit testing an event emitter

Let’s write tests against the emitter method we saw back in chapter 6, which augments objects, allowing them to emit and listen to events. That should give you a good idea of what a real unit test might look like. The following listing (available as ch08/03_arrange-act-assert in the samples) shows the full method in all its glory. This is the same event emitter method you implemented in section 6.4.2.

Listing 8.3. Your event emitter implementation

function emitter (thing) {
  var events = {};

  if (!thing) {
    thing = {};
  }

  thing.on = function (type, listener) {
    if (!events[type]) {
      events[type] = [listener];
    } else {
      events[type].push(listener);
    }
  };

  thing.emit = function (type) {
    var evt = events[type];
    if (!evt) {
      return;
    }
    var args = Array.prototype.slice.call(arguments, 1);
    for (var i = 0; i < evt.length; i++) {
      evt[i].apply(thing, args);
    }
  };

  return thing;
}

How do you test all of that? It’s pretty big! Repeat after me: test against the interface. The rest doesn’t matter that much. You want to make sure that, given the correct parameters, each of the public API methods does what you expect it to do. In the case of the emitter function, the API consists of the emitter function itself, the on method, and the emit method. The API is anything that can be accessed by the consumer, which is what you want to verify.

You can think of writing good unit tests as asserting the right things. The assertions your tests will verify should be deterministic, and they should also disregard implementation details, such as how the event listeners are stored. Private methods are typically implementation details, and you shouldn’t worry about testing them; only the public interface matters. If you want to test private methods, you’ll have to expose them so that they can be unit tested like any other public interface method.

8.1.9. Testing the event emitter

To get things going, let’s start with a test asserting whether calling emitter with different arguments results in an emitter object. This is a basic test in which you’ll verify that an object is returned with the expected properties (on and emit) on it.

Listing 8.4. Your first test using TAP
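A minimal sketch of that first test might look like this, assuming the emitter module is exposed from ../src/emitter.js (a hypothetical path):

var test = require('tape');
var emitter = require('../src/emitter.js'); // hypothetical path to the module under test

test('emitter() should return an object with on and emit', function (t) {
  var thing = emitter(); // Arrange, Act
  t.ok(thing, 'returns an object'); // Assert
  t.ok(thing.on, 'has an on property');
  t.ok(thing.emit, 'has an emit property');
  t.end();
});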

It’s always good to have unit tests that assert the basics of how something is expected to operate. Keep in mind that you only need to write these tests once, and they’ll help you assert these validations anytime. Let’s write a few more basic assertions in the following listing, making sure the returned object is indeed the same object you provided.

Listing 8.5. Writing basic assertions

test('emitter(thing) should reference the same object', function (t) {
  var data = { a: 1 }; // Arrange
  var thing = emitter(data); // Act
  t.equal(data, thing); // Assert
  t.end();
});

test('emitter(thing) should reference the same array', function (t) {
  var data = [1, 2]; // Arrange
  var thing = emitter(data); // Act
  t.equal(data, thing); // Assert
  t.end();
});

In the “basic JavaScript unit test” department, you’ll sometimes find tests asserting whether something that’s supposed to be a function is indeed a function. Although it’s true that any other test would fail if emitter wasn’t a function, redundancy is a good thing to have when it comes to unit testing. In addition, your tests should fail at assertions rather than while arranging or acting. If your tests fail somewhere else, it might indicate it’s time to add more tests to assert that doesn’t happen, or maybe the problem lies with your code.

Testing for object types might seem trivial, but it can pay off. Even more important is testing return value types. The first test you wrote made sure the properties were there, but it didn’t check if they were functions. Let’s rework it, adding type checks. These will seem like trivial changes, but you want to be explicit about the purposes of an assertion, for clarity.

Listing 8.6. Type checking in your tests
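A reworked sketch of the first test, now with explicit type checks, might look like this (using the same hypothetical emitter module):

test('emitter() should return an object with on and emit methods', function (t) {
  var thing = emitter(); // Arrange, Act
  t.equal(typeof thing, 'object', 'returns an object'); // Assert
  t.equal(typeof thing.on, 'function', 'on is a function');
  t.equal(typeof thing.emit, 'function', 'emit is a function');
  t.end();
});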

8.1.10. Testing for the .on method

Next we’ll write tests for the .on method. This time around, we’ll be content if calling .on does not throw. In a bit, we’ll make sure that the listeners work when we test the emit method. Note how I wrote two different tests which are almost identical, even though they have different purposes. In testing, it’s fairly common to find duplicate code, and it’s fine to copy and paste, although it’s not encouraged to abuse it.

Listing 8.7. Testing the .on function
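Two near-identical sketches along those lines, each asserting only that calling .on doesn’t throw, might look like this:

test('on(type, listener) should not throw when adding a listener', function (t) {
  var thing = emitter(); // Arrange
  t.doesNotThrow(function () {
    thing.on('foo', function () {}); // Act, Assert
  });
  t.end();
});

test('on(type, listener) should not throw when adding two listeners', function (t) {
  var thing = emitter(); // Arrange
  t.doesNotThrow(function () {
    thing.on('foo', function () {}); // Act, Assert
    thing.on('foo', function () {});
  });
  t.end();
});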

Last, you need to test the emit function. To do that, you’ll attach a few listeners, as before, and then you’ll emit the event. Then you’ll assert that the listeners fired correctly, once for each call to .on. Notice how if you changed emit to be asynchronous by wrapping the event handlers in a setTimeout call, this test would fail. In those cases, you can either adapt the test to the new functionality or avoid changing the functionality in the first place.

Listing 8.8. Testing the .emit function
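A sketch of that test might attach two listeners that increment a counter and assert on the counter afterward; note that it only passes while emit stays synchronous:

test('emit(type) should invoke every listener registered for that type', function (t) {
  var thing = emitter(); // Arrange
  var called = 0;
  thing.on('foo', listener);
  thing.on('foo', listener);

  thing.emit('foo'); // Act

  t.equal(called, 2, 'each listener fired exactly once'); // Assert
  t.end();

  function listener () {
    called++;
  }
});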

Finally, let’s add one more method to make sure that emit passes any arguments to the event listener the way we expect.

Listing 8.9. Further testing on .emit
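A sketch of that last test might use a listener that both counts its calls and asserts on the arguments it receives:

test('emit(type) should pass any extra arguments to the listener', function (t) {
  var thing = emitter(); // Arrange
  var calls = 0;
  thing.on('foo', listener);

  thing.emit('foo', 'a', 2); // Act

  t.equal(calls, 1, 'listener called exactly once'); // Assert
  t.end();

  function listener (first, second) {
    calls++;
    t.equal(first, 'a', 'first argument matches');
    t.equal(second, 2, 'second argument matches');
  }
});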

That’s it! Your event emitter implementation is fully tested. You only wrote assertions that verify how the public API works, and you didn’t meddle with implementation details. At this point, you could add tests that deal with unconventional usage of the API, such as calling emit() without any arguments. Then you could decide whether you’d want emit to throw an exception in that particular case. Think of your tests as a formal and stricter API documentation.

In the following section you’ll learn about creating mocks, spying on function calls, and proxying require statements.

8.1.11. Mocks, spies, and proxies

Sometimes you want greater isolation, even though two parts of an application can’t be decoupled any further. The application might need to query a real database, fetch data using a service, or connect together different modules, or there may be some other reason why you can’t decouple the implementation. You can use a variety of different tools, such as mocks, spies, and proxies, to circumvent the testing issues introduced by tight coupling. Figure 8.2 depicts the issue and the solution provided by these stubs.

Figure 8.2. Using source code as-is versus using mocks when testing

Next up you’ll learn about mocking dependencies, which can come in handy if you’re working with a component that has external dependencies.

8.1.12. Mocking

Mocking creates fake instances of the dependencies (such as services or other objects) in your System Under Test (SUT). In statically typed languages, mocking typically involves access to the compiler, often referred to as reflection. One of the advantages of JavaScript being a dynamically typed language is that you can create an object with a couple of properties and that’s it. Suppose you have to test the following snippet of code:

function (http, done) {
  http.get('/api/data', done);
}

In a real application, maybe that snippet accessed the network and queried an endpoint, getting back data from the application’s API. You should never need to connect to external services to run a unit test, making this an ideal scenario for mocking. In this case in particular, you’re making a GET request and calling back a done function with an optional error and data in return.

Mocking the http object using plain JavaScript, as it turns out, is easy. Note how you’re using setTimeout to keep the method asynchronous, the way the original code expected, and how you can conjure up any response you like to fit your test:

{
  get: function (endpoint, done) {
    setTimeout(function () {
      done(null, { data: 'dummy' });
    }, 0);
  }
}

The server-side aspect of this test, querying the real HTTP endpoint, should be handled in server tests, which isn’t a client-side concern anymore. Another option might be testing these things in integration tests, which is a topic you’ll navigate later in the chapter. I’ll introduce Sinon.js next. Sinon is a library for creating mocks, spies, and stubs. It also allows you to fake XHR requests, server responses, and timers. Let’s look at it.

8.1.13. Introducing Sinon.js

Sometimes it’s not enough to mock values by hand, and in those more advanced scenarios, using a library such as Sinon.js might come in handy. Sinon helps you easily test setTimeout delays, dates, XHR requests, and even set up fake HTTP servers to use in your tests. Using Sinon, it’s trivial to create functions called spies. Spies are functions that are prepared to tell you whether they’ve been called, how many times, and what arguments they were invoked with. As it turns out, you’ve already used a custom flavor of spies in listing 8.9, where we had a listener function that kept track of how many times it was called. Let’s see how using spies helps assert function calls.

8.1.14. Spying on function calls

Spies can be used whenever a function you’re testing requires function parameters, and you can use them to easily assert whether they’ve been used and how.

Let’s go through a simple example (found as ch08/04_spying-on-function-calls). Here’s a pair of functions that take a callback function parameter:

var maxwell = {
  immediate: function (cb) {
    cb('foo', 'bar');
  },
  debounce: function (cb) {
    setTimeout(cb, 0);
  }
};

Sinon makes it easy to test these. Without the need to construct a custom callback, you can ensure that immediate invoked your callback exactly once:

test('maxwell.immediate invokes a callback immediately', function (t) {
  var cb = sinon.spy();
  maxwell.immediate(cb);
  t.plan(2);
  t.ok(cb.calledOnce, 'called once');
  t.ok(cb.calledWith('foo', 'bar'), 'arguments match expectation');
});

Note how I switched from t.end to t.plan. Using t.plan(n) allows you to define how many assertions you expect to be made during the execution of your test case. The test will fail if the number of assertions doesn’t exactly match the plan. This is most useful for asynchronous tests, where your code may or may not end up invoking a callback that contains a few more assertions. Using t.plan verifies that the correct number of assertions was indeed executed.

Testing delayed execution is a bit trickier, but Sinon provides an easy-to-use interface for that, as shown in the following listing. By calling sinon.useFakeTimers(), any subsequent calls to setTimeout or setInterval are going to be faked. You also get a simple tick API to manually change the clock.

Listing 8.10. Testing delayed execution

test('maxwell.debounce invokes a callback after a timeout', function (t) {
  var clock = sinon.useFakeTimers();
  var cb = sinon.spy();
  maxwell.debounce(cb);
  t.plan(2);
  t.ok(cb.notCalled, 'not called before tick');
  clock.tick(0);
  t.ok(cb.called, 'called after tick');
});

Sinon.js has more tricks you can perform, such as creating fake XHR requests. The last topic I want to discuss regarding mocking is the case where you need to create a mock for the results provided by invoking require on any given module. Let’s check out how that works!

8.1.15. Proxying require calls

The issue here is that sometimes modules require other modules, which in turn require additional modules, and you don’t want all that in unit tests. Unit tests are about controlling the environment, detecting the absolutely necessary pieces that are needed to execute a test, and mocking everything else. There’s a nice npm package called proxyquire that can help with that situation. Consider that you’d like to test the code in the following listing (available as ch08/05_proxying-your-dependencies in the samples), in which you’d like to fetch a user from the database and then return a subset of the model for security reasons.

Listing 8.11. Using the require method

var User = require('../models/User.js');

module.exports = function (id, done) {
  User.findOne({ id: id }, function (err, user) {
    if (err || !user) {
      done(err); return;
    }
    done(null, {
      name: user.name,
      email: user.email
    });
  });
};

Let’s consider a small refactor for a moment. It’s always best to isolate “pure” functionality. A pure function is a concept that comes from functional programming, and it describes a function whose output is defined solely by its inputs and nothing else. Pure functions return the same value every time they receive the same inputs. In the example above, your pure and reusable piece of functionality is mapping the user model to its “safe” subset, so let’s extract that into its own function and make your code a little prettier and easier to follow.

Listing 8.12. Creating a pure function

var User = require('./models/User.js');

function subset (user) {
  return {
    name: user.name,
    email: user.email
  };
}

module.exports = function (id, done) {
  User.findOne({ id: id }, function (err, user) {
    done(err, user ? subset(user) : null);
  });
};

As you can see, though, unless you expose the subset function on its own, you’re stuck with querying the database to get a user. You could argue that the module should get a user object, instead of merely an id, and you’re right. Sometimes, however, you have to query the database. Maybe you have a user parameter and do something with it, but you also want to ask the database about their permissions or the groups they belong to. In those cases, as well as in the previous case, assuming you don’t refactor it any further, a good way to get around the situation is to return a fake result from require calls.

The good news is that using proxyquire means you don’t have to change the application code at all. The following listing demonstrates how to use proxyquire to mock up a required module without resorting to a database at all. Note how the mock object you’re passing to proxyquire is a map of require paths and the results you want to get (rather than what you’d normally get).

Listing 8.13. Mocking up a required module

var proxyquire = require('proxyquire');

var user = {
  id: 123,
  name: 'Marian',
  email: 'marian@company.com'
};

var mapperMock = {
  './models/User.js': {
    findOne: function (query, done) {
      setTimeout(done.bind(null, null, user));
    }
  }
};

var mapper = proxyquire('../src/mapper.js', mapperMock);

Once you isolate the mapping functionality without resorting to a database connection, the test becomes trivial. You’re using the mapper function, complete with fake database access, and asserting whether it gives back an object with the name and email properties on it. Note that you’re using Sinon’s cb.args to figure out the arguments when the cb spy was first called.

Listing 8.14. Creating spies with Sinon
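Continuing from the setup in listing 8.13, and assuming tape and sinon are already required at the top of the same test file, such a test might be sketched like this:

test('mapper() should give back only the name and email', function (t) {
  var cb = sinon.spy(); // Arrange

  mapper(123, cb); // Act

  t.plan(3);
  setTimeout(function () {
    var args = cb.args[0]; // arguments used in the first call to the spy: [err, result]
    t.ok(cb.calledOnce, 'callback invoked exactly once'); // Assert
    t.equal(args[1].name, 'Marian');
    t.equal(args[1].email, 'marian@company.com');
  }, 0);
});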

In the following section I’ll go a bit deeper into client-side testing, talking about fake XHR (XMLHttpRequest). You’ll also get a feel for DOM interaction testing before you look at other forms of automation and a mention of non-unit testing flavors.

8.2. Testing in the browser

Testing client-side code is typically a hassle because of both AJAX requests and DOM interaction. That, often paired with a complete lack of modularity and code organization, spells chaos for the client-side JavaScript test developer. That being said, in chapter 5 you resolved your browser modularity concerns by settling for Browserify. Browserify allows you to use self-contained CommonJS modules even in client-side code but at the cost of an extra build step.

You also resolved code organization issues by resorting to an MVC framework on the client side, to keep your concerns properly separated. In chapter 9, you’ll learn about REST API design, which you’ll apply to future web applications you write, getting rid of the endpoint chaos that usually characterizes front-end application development.

In the next section, you’ll learn how to write tests for your client-side code by mocking XHR requests and isolating DOM interaction so that you can write tests against it. Let’s start with the easy part: mocking up XHR requests and server responses.

8.2.1. Faking XHR and server communication

Similarly to the way you created fake require results with proxyquire, you can use Sinon to mock any XHR requests you’d like, without modifying your source code. Use Sinon to simulate server responses and snoop request data. Those are the only reasons you’ll need to deal with XHR. Figure 8.3 shows how these mocks can help you to isolate and test code that would normally depend on an external resource.

Figure 8.3. Native XMLHttpRequest compared with fake XHR mocks during tests

To see how that might look in code, here’s a snippet of client-side JavaScript that makes an HTTP request and gives you the response text (see sample ch08/06_fake-xhr-requests). I’m using the superagent module to make the HTTP requests, because it works seamlessly in the server or the browser. Perfect for Browserifying action!

module.exports = function (done) {
  require('superagent')
    .get('https://api.github.com/zen')
    .end(cb);

  function cb (err, res) {
    done(null, res.text);
  }
};

In this case you don’t want to write tests for superagent itself. You don’t want to test the API call, either. You probably want to make sure that an AJAX call is made, though. The method is supposed to call you back with the response text, so you should test for that as well, as shown in the following listing.

Listing 8.15. Creating a method that sends response text

var test = require('tape');
var sinon = require('sinon');

test('qotd service should make an XHR call', function (t) {
  var quote = require('../src/qotdService.js');
  var cb = sinon.spy();

  quote(cb);

  t.plan(2);
  setTimeout(function () {
    t.ok(cb.called);
    t.ok(cb.calledWith(null, sinon.match.string));
  }, 2000);
});

That’s fine for testing the outcome, but you can’t afford to have tests depend on network conditions or to spend that long waiting to make assertions. The right way to test your method is to simulate the responses. Sinon allows you to do this by creating a fake server, which provides two-fold value. It captures real requests made by your code and transforms them into testable objects it controls. It also allows you to create responses for those requests within your tests, simulating an operational server. To get that functionality, create the fake server using sinon.fakeServer.create() before invoking the method under test. Then, once the method that’s supposed to create an AJAX request is invoked, you can respond to the request, setting your response’s status code, headers, and body. Let’s update your test method to reflect those changes.

Listing 8.16. Testing the “Quote of the Day” service

test('qotd service should make an XHR call', function (t) {
  var quote = require('../src/qotdService.js');
  var cb = sinon.spy();
  var server = sinon.fakeServer.create();
  var headers = { 'Content-Type': 'text/html' };

  quote(cb);

  t.plan(4);
  t.equals(server.requests.length, 1);
  t.ok(cb.notCalled);

  server.requests[0].respond(200, headers, 'The cake is a lie.');
  t.ok(cb.called);
  t.ok(cb.calledWith(null, 'The cake is a lie.'));
});

As you can see, you verified that a single request was made and that you got called back with exactly the same value as the response text.

The last piece of browser testing to dabble in before heading over to the automation department is DOM interaction testing. Much like testing AJAX calls, DOM testing is complicated because you’re interacting with something that’s across a gap. Mind the gap.

8.2.2. Case study: testing DOM interaction

Client-side development and testing are funny in that way. You have three layers: HTML, JavaScript, and CSS, all working together to serve a sophisticated concoction of bits. Yet, as any good developer will, you must keep the concerns separated across the three technologies, trying not to couple them too tightly together. CSS is easy to leave untied. You create classes in CSS and assign them to DOM elements by giving them their matching class attributes. Your CSS starts falling apart when it makes assumptions about the structure of your HTML. The best pieces of CSS are those that don’t depend on the HTML being structured exactly in a particular way, those that aren’t tightly coupled to the HTML.

JavaScript and HTML are similar to CSS and HTML in that your HTML shouldn’t make any assumptions about your JavaScript. HTML should work fairly well even with JavaScript turned off; this is called progressive enhancement and it helps deliver primary content to your users faster, resulting in a better experience overall. The problem is that your JavaScript code must make assumptions about your HTML. Finding the inner text of a DOM node, attaching event listeners, reading data attributes, setting attributes, or any other form of DOM manipulation starts from the assumption that a DOM node is there.

Let’s get to your imaginary application where events come to party and decimal numbers get rounded.

Setting up the HTML

In this application, you have an input where you’re meant to enter decimal numbers and then click on a button to get the rounded version of that same number back. Each result is written into a list that’s displayed on the page. There’s also another button to clear the result list. Figure 8.4 depicts how the application should look.

Figure 8.4. The application you’ll be building in this case study

We’ll start by going through the application, and explain the choices made along the way. Then, I’ll show you what you should be testing in this small application, and how you can get test coverage on those factors without worrying about implementation details.

Consider the following piece of HTML. Note that you’re not writing any JavaScript in the DOM directly. Keeping your concerns separated is extremely important to testability:

<h1>Event Bar</h1>
<p>Enter a number and see it rounded!</p>
<input class='square' placeholder='Decimals only please.' />
<button class='barman'>Another Round!</button>
<button class='clear'>Clear Results</button>
<div class='result'>
  <h4>Results come here to cool off!</h4>
</div>

Next you’ll learn how to implement JavaScript functionality.

Implementing the JavaScript functionality

Next we’ll discuss a small JavaScript application that interacts with the HTML shown in the previous example, using the JavaScript DOM API. To begin, you’ll use querySelector, a (relatively) little-known but powerful native browser API that allows you to find DOM nodes in a similar fashion to how jQuery works, using CSS selectors. querySelector is supported in all major browsers, going as far back as Internet Explorer 8. The API is present on the document root as well as on any DOM nodes, allowing you to limit the search to their children. If you want to look for many elements, instead of the first one, you can use querySelectorAll instead.

var barman = document.querySelector('.barman');
var square = document.querySelector('.square');
var result = document.querySelector('.result');
var clear = document.querySelector('.clear');

Note

I never use the id attribute in HTML. It causes all sorts of problems, such as CSS selector precedence, leading to developers using !important style rules and the inability to reuse the value, because HTML id attributes are meant to be unique.

Let’s implement the code in charge of figuring out how your input did. If it’s not a number, then that’s a mistake. If it’s an integer, that’s a problem too. Otherwise, you’ll return the rounded value:

function rounding (number, done) {
  if (isNaN(number)) {
    done(new Error('Do you even know what a number is?'));
  } else if (number === Math.round(number)) {
    done(new Error('You are such a unit. Integers cannot be rounded!'));
  } else {
    done(null, Math.round(number));
  }
}

The done callback should create a new paragraph in your result list and fill it with the error message, if any, or the rounded value, if present. You’ll also set a different CSS class if you see an error than when you’re successful, to help a designer style the page accordingly without you making additional changes to your JavaScript, as shown in the following listing.

Listing 8.17. Using the done callback

function report (err, value) {
  var p = document.createElement('p');
  if (err) {
    p.className = 'error';
    p.innerText = err.message;
  } else {
    p.className = 'rounded';
    p.innerText = 'Rounded to ' + value + '. Another round?';
  }
  result.appendChild(p);
}

The last piece to the puzzle is binding the click event and parsing the input before handing it off to the two methods you put together in listing 8.17. The following code snippet will do:

barman.addEventListener('click', round);

function round () {
  var number = parseFloat(square.value);
  rounding(number, report);
}

Wiring up the Reset button is even easier. Your listener should remove every paragraph created by the barman; that’s as straightforward as it gets! The following listing shows how you might do it.

Listing 8.18. Wiring a Reset button

clear.addEventListener('click', reset);

function reset () {
  var all = result.querySelectorAll('.result p');
  var i = all.length;
  while (i--) {
    result.removeChild(all[i]);
  }
}

That’s it; your application is fully operational. How can you make sure future refactorings don’t break existing code? You need to identify tests that ensure your code works as intended and then write those tests.

Identifying the test cases

First off, let me go on a tangent to mention that you need to completely disregard the HTML at the beginning of this case study. You shouldn’t write any HTML in your tests. If you need a DOM, you should build it using JavaScript inside your tests. As you’ll see when you implement the tests, this can be even easier than writing HTML. Separating concerns is one of the most important aspects of unit testing.

Next, you should try and identify your application concerns and differentiate them from implementation details. For the sake of this experiment, consider everything you wrote previously to be implementation details, because your application doesn’t provide an API or even build a public-facing object of any sort. When everything in the implementation is an implementation detail, you can still unit test, but you need to test against what the application is supposed to do, as opposed to what each method is supposed to do.

The test cases are supposed to assert that the statements you can find in the application definition presented previously, quoted here, hold true when checked against its implementation.

Application definition

In this application you have an input where you enter decimal numbers and then click on a button to get the rounded version of that same number back. Each result is written into a list that’s displayed on the page. There’s also another button to clear the result list.

Several test cases are noted in the following list. These were derived from the quoted definition and other logic constraints imposed in the implementation (which you’d like to turn into part of the definition). Keep in mind you could prepare any test cases you want, as long as they satisfy the definition. These are the ones I designed:

· Clicking barman without input should result in an error message.

· Clicking barman with an integer should result in an error message.

· Clicking barman with a number should result in a rounded number.

· Clicking barman twice, with two values should produce two results.

· Clicking clear when the list is empty does not throw.

· Clicking clear removes any results in the list.

Let’s get to the testing. I mentioned earlier that you’d create the DOM in code in every test. You’ll do that by creating a Setup task, called before every test, and a Teardown task, called after every test. Setup will create the elements. Teardown will remove them. This gives every test a clean slate even after another test has run.

Setup and Teardown

Most JavaScript testing frameworks, for baffling reasons, include globals in your test program. For instance, if you want to run a task before each test when using the Mocha test framework (Buster.js and Jasmine also do this), you’d pass a callback function to the beforeEach global method. In fact, test cases should be described with other globals, such as describe and it, as shown in the following listing.

Listing 8.19. Using describe to describe test cases

var assert = require('assert');

function setup () {
  // prepare something
}

describe('foo()', function () {
  beforeEach(setup);

  it('should not throw', function () {
    assert.doesNotThrow(function () {
      foo();
    });
  });
});

This is terrible! Indiscriminate use of globals, even in tests, shouldn’t be the norm. Luckily tape doesn’t submit to this nonsense, and it’s still easy to run something before each test. The following listing shows the same piece of code, using tape instead.

Listing 8.20. Using tape to describe test cases

var test = require('tape');

function testCase (name, cb) {
  var t = test(name, cb);
  t.once('prerun', setup);
}

function setup () {
  // prepare something
}

testCase('foo() should not throw', function (t) {
  t.doesNotThrow(function () {
    foo();
  });
  t.end();
});

Granted, it looks more verbose, but it doesn’t pollute the global namespace, which would break one of the oldest conventions in programming. In tape, tests emit events, such as prerun, at different points in the test run. To set up and tear down your tests, you’ll need to create and use a testCase method. The name is irrelevant, but I find testCase applies well in this situation:

function testCase (name, cb) {
  var t = test(name, cb);
  t.once('prerun', setup);
  t.once('end', teardown);
}

Now that you know how to run these methods for every test, it’s time to code them!

Preparing the test harness

In the setup method, you need to create each DOM element you’ll need in the tests and set any default values made available through the HTML. Note that testing the HTML itself isn’t part of these tests, which is why you completely disregard it. Your concern is that, assuming the HTML is what you expect, the application will run successfully. Testing the HTML is a concern of integration testing.

The setup method is found in the following listing. The bar module is your application’s code, wrapped in a function so you can execute it whenever you want. In this case, you need to run the application before every test. That will attach event listeners to your freshly baked DOM elements.

Listing 8.21. Using the setup method

var bar = require('../src/event-bar.js');

function setup () {
  function add (type, className) {
    var element = document.createElement(type);
    element.className = className;
    document.body.appendChild(element);
  }
  add('input', 'square');
  add('div', 'barman');
  add('div', 'result');
  add('div', 'clear');
  bar();
}

The teardown method is even easier, because you give it a few selectors and iterate through them, removing the elements created during setup:

function teardown () {
  var selectors = ['.barman', '.square', '.result', '.clear'];
  selectors.forEach(function (selector) {
    var element = document.querySelector(selector);
    element.parentNode.removeChild(element);
  });
}

Woo-hoo! Onto the tests.

Coding your test cases

As long as you keep your concerns cleanly separated between Arrange, Act, and Assert, you shouldn’t have any issues writing or reading your tests. In the first one you get the barman element, click it, and get any results. You verify there’s one result. Then you assert that the CSS class and text in that result are correct, as shown in the following listing.

Listing 8.22. Asserting the CSS class and text are correct

testCase('barman without input should show an error', function (t) {
  // Arrange
  var barman = document.querySelector('.barman');
  var result;

  // Act
  barman.click();
  result = document.querySelectorAll('.result p');

  // Assert
  t.plan(4);
  t.ok(barman);
  t.equal(result.length, 1);
  t.equal(result[0].className, 'error');
  t.equal(result[0].innerText, 'Do you even know what a number is?');
});

The next test also does error checking. Making sure your error checking works as expected is as important as making sure the happy path does indeed work. In the following listing, you’re also setting a value in the input, before the click.

Listing 8.23. Error checking your code
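A sketch of that test, feeding the input an integer and expecting the corresponding error message from the rounding function, might look like this:

testCase('barman with an integer should show an error', function (t) {
  // Arrange
  var barman = document.querySelector('.barman');
  var square = document.querySelector('.square');
  var result;

  // Act
  square.value = '2';
  barman.click();
  result = document.querySelectorAll('.result p');

  // Assert
  t.plan(3);
  t.equal(result.length, 1);
  t.equal(result[0].className, 'error');
  t.equal(result[0].innerText, 'You are such a unit. Integers cannot be rounded!');
});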

By now you should start to see the pattern. See how easy it is to identify what each test does when they follow the AAA convention? This next one, shown in the following listing, verifies that the happy path works as intended. It sets the input to a decimal value and clicks on the button, and then it checks that the result was a rounded number.

Listing 8.24. Verifying the path works
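A sketch of the happy-path test might set a decimal value, click the button, and assert on the rounded class and message:

testCase('barman with a decimal number should show the rounded value', function (t) {
  // Arrange
  var barman = document.querySelector('.barman');
  var square = document.querySelector('.square');
  var result;

  // Act
  square.value = '2.4';
  barman.click();
  result = document.querySelectorAll('.result p');

  // Assert
  t.plan(3);
  t.equal(result.length, 1);
  t.equal(result[0].className, 'rounded');
  t.equal(result[0].innerText, 'Rounded to 2. Another round?');
});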

It’s certainly good to write tests that interact with your code the way you expect humans to interact with it. Sometimes humans do the unexpected, and that should be tested for as well.

Testing possible outcomes

We’re wired in a certain way, where we believe in three possible outcomes: something either never works, works once, or always works. I often joke that only three numbers exist: 0, 1, and infinity. As shown in the following listing, asserting that making two clicks works as intended should be enough. You can always go back and add more tests.

Listing 8.25. Making sure two clicks works
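A sketch of that test might click the button twice with two different decimals and assert that two results show up:

testCase('barman clicked twice should produce two results', function (t) {
  // Arrange
  var barman = document.querySelector('.barman');
  var square = document.querySelector('.square');
  var result;

  // Act
  square.value = '2.4';
  barman.click();
  square.value = '3.6';
  barman.click();
  result = document.querySelectorAll('.result p');

  // Assert
  t.plan(3);
  t.equal(result.length, 2);
  t.equal(result[0].innerText, 'Rounded to 2. Another round?');
  t.equal(result[1].innerText, 'Rounded to 4. Another round?');
});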

When developing code, you might find that your code is throwing errors, wearing down your productivity. Simple tests such as the one in the following listing that asserts a method call does not throw are helpful in these types of cases. The next section talks about automated testing, which definitely helps as well.

Listing 8.26. Asserting a method call does not throw errors

testCase('clearing empty list does not throw', function (t) {
  // Arrange
  var clear = document.querySelector('.clear');

  // Assert
  t.plan(2);
  t.ok(clear);
  t.doesNotThrow(function () {
    clear.click();
  });
});

The last test in your embarrassingly small suite is close to an integration test. It clicks repeatedly, and then it asserts that clicking the Clear button does indeed remove the accumulated results.

Listing 8.27. Verifying the Clear button works

testCase('clicking clear removes any results in the list', function (t) {
  // Arrange
  var barman = document.querySelector('.barman');
  var square = document.querySelector('.square');
  var clear = document.querySelector('.clear');
  var result;
  var resultCleared;

  // Act
  square.value = '3.4';
  barman.click();
  square.value = '3';
  barman.click();
  square.value = '';
  barman.click();
  result = document.querySelectorAll('.result p');
  clear.click();
  resultCleared = document.querySelectorAll('.result p');

  // Assert
  t.plan(2);
  t.equal(result.length, 3);
  t.equal(resultCleared.length, 0);
});

The most value in your tests always comes when it’s time to refactor. Suppose you changed the implementation of your Event Bar program. You run the tests again. If they succeed, all is good, unless you find a bug testing by hand, in which case you add more tests and fix the issue. If they fail, two possibilities exist. The tests now may be outdated. For example, the Clear button may have been changed to “remove only the oldest result” when clicked. In that case you should update the tests to reflect those changes. The other reason why the tests may fail is because of an oversight in your changes, which would break functionality. The fact that these tests are forever repeatable, at no extra cost, is what makes them so valuable.

You can check out the fully working example, with all the code I’ve shown you, in the accompanying code samples, as ch08/07_dom-interaction-testing. Next up we’ll go back to the case study we developed during chapter 7 and add unit tests to it.

8.3. Case study: unit testing the MVC shopping list

In chapter 7 we reached quite a few milestones in developing an MVC shopping list application, and in this section we’ll unit test one of the iterations of that application. Concretely, you’ll pair with me in unit testing the application at the end of section 7.4, right before we added Rendr to the solution in section 7.5. You can check out the source code for that application at ch07/10_the-road-show in the samples. Its unit-tested counterpart can be found under ch08/07b_testability-boulevard.

The Road Show was a small application, yet large enough to show how you could slowly add tests and end up with a well-tested application. Taking this gradual approach to testing would have been much harder if we hadn’t put effort into modularizing our application, but we learned to do that in chapter 5 and applied those concepts when putting the application together in chapter 7. This section guides us through writing tests for the view router and model validation. You’re then free to explore adding test coverage for the view controllers.

8.3.1. Testing the view router

The first step you always need to take before any testing can begin is configuring the environment so tests can run. In this case that means you’ll copy the application (from ch07/10_the-road-show) to be used as a starting point, and then add the test harness built in this chapter for running Tape in the browser (the ch08/02_tape-in-the-browser sample) on top of that.

Once the initial setup is put together (ch08/07b_testability-boulevard in the samples), you can start fleshing out your tests using Tape. We’ll start with the router (which was shown in listing 7.18 in chapter 7) because that’s the simplest module we want to test. For reference, the following listing is how the module looks at the moment.

Listing 8.28. The view router module

var Backbone = require('backbone');
var ListView = require('../views/list.js');
var AddItemView = require('../views/addItem.js');

module.exports = Backbone.Router.extend({
  routes: {
    '': 'root',
    'items': 'listItems',
    'items/add': 'addItem'
  },
  root: function () {
    this.navigate('items', { trigger: true });
  },
  listItems: function () {
    new ListView();
  },
  addItem: function () {
    new AddItemView();
  }
});

In testing this module, we want to assert a few things. We want to know that

· There are three routes.

· Their associated route handlers do in fact exist.

· The root route handler properly redirects to the listItems action.

· View routes would render the correct view in each case.

You may already be drooling over the possibilities, considering creating mocks for the views, or maybe using proxyquire to stub those modules altogether. To get started, we’ll assert that three routes are in fact registered, and that their route handlers exist on the router.

To achieve this, the following listing uses proxyquireify (a flavor of proxyquire that works on the client side) combined with sinon and tape to put together the routes.js test module.

Listing 8.29. The first View Router tests
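The following is a sketch of how such a routes.js test module might start; the module paths are placeholders for wherever the router and views live in the sample, and proxyquireify also needs to be registered as a Browserify plugin when compiling the test bundle:

var test = require('tape');
var sinon = require('sinon');
var proxyquire = require('proxyquireify')(require);

// hypothetical paths; adjust them to where the router and views live in your tree
var ListView = sinon.spy();
var AddItemView = sinon.spy();
var stubs = {
  '../views/list.js': ListView,
  '../views/addItem.js': AddItemView
};
var Router = proxyquire('../src/routers/router.js', stubs);

test('there are three routes and a handler for each of them', function (t) {
  var router = new Router();
  var routes = Object.keys(router.routes);
  t.plan(1 + routes.length);
  t.equal(routes.length, 3, 'three routes are registered');
  routes.forEach(function (route) {
    var handler = router.routes[route];
    t.equal(typeof router[handler], 'function', handler + ' handler exists');
  });
});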

Once the test file is ready, you can verify that the tests pass by going through the same process as in section 8.4: opening up a browser with the compiled test bundle and checking the developer console for any error messages.

Test runner HTML file

First off, you’ll need a test runner HTML file like the following one. There’s nothing special about it, except that it loads the built test bundle:

<!doctype html>
<html>
  <head>
    <meta charset='utf-8'>
    <title>Unit Testing JavaScript with Tape</title>
  </head>
  <body>
    <script src='build/test-bundle.js'></script>
  </body>
</html>

Once you’ve created both the routes.js test module and the runner.html test runner, you should create a Grunt task to build the bundle.

Create a Grunt task to build the bundle

Because you’ve learned how to write your own tasks, and as a way of reinforcing that knowledge, you’ll create your own task to compile the Browserify bundle! To make that work, you should include all of the following listing in a Gruntfile. It uses the browserify package directly, without the grunt-browserify plugin intermediary. Sometimes using a package directly instead of through a plugin can offer greater flexibility in what your tasks can do.

Listing 8.30. Creating a custom Browserify task
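A sketch of such a task might look like the following; it assumes the test modules live under test/ and writes the bundle to test/build/test-bundle.js, matching the runner shown earlier:

var browserify = require('browserify');

module.exports = function (grunt) {
  grunt.registerTask('browserify_tests', 'Compile the test bundle', function () {
    var done = this.async(); // bundling is asynchronous, so the task must be too
    var bundler = browserify();

    grunt.file.expand('test/*.js').forEach(function (file) {
      bundler.add('./' + file); // every test module becomes an entry point
    });

    bundler.bundle(function (err, src) {
      if (err) {
        grunt.log.error(err.message);
        done(false); return;
      }
      grunt.file.write('test/build/test-bundle.js', src);
      done();
    });
  });
};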

See test execution

When everything is set up, you can run the following command and see the tests being executed in your browser:

grunt browserify_tests

open test/runner.html

A browser window should pop up. If we open the developer console, we’d see the output shown in figure 8.5.

Figure 8.5. Developer Tools showing the results for the tests we’ve provided

There are a few more routing tests to be had. Next up, you’ll make sure that each route handler does what it’s meant to, whether it’s meant to redirect users to a different route or render a particular view.

A few more tests

The following listing contains the code for the remaining tests. You can add it to the end of the routes.js test suite.

Listing 8.31. Testing route handlers individually
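Continuing with the Router constructor and view stubs from the earlier sketch, the remaining route handler tests might be sketched like this:

test('root() should navigate to the items route', function (t) {
  var router = new Router();
  var navigate = sinon.stub(router, 'navigate'); // avoid real navigation
  router.root();
  t.plan(2);
  t.ok(navigate.calledOnce, 'navigate called exactly once');
  t.ok(navigate.calledWith('items', { trigger: true }), 'redirected to items');
  navigate.restore();
});

test('listItems() should render the list view', function (t) {
  var router = new Router();
  ListView.reset(); // clear call counts on the stubbed constructor
  router.listItems();
  t.plan(1);
  t.ok(ListView.calledOnce, 'ListView was instantiated');
});

test('addItem() should render the add item view', function (t) {
  var router = new Router();
  AddItemView.reset();
  router.addItem();
  t.plan(1);
  t.ok(AddItemView.calledOnce, 'AddItemView was instantiated');
});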

Once all of the tests are in your routes.js file, you can run the Grunt task again and reload the browser. Figure 8.6 contains the results of executing the new test suite.

Figure 8.6. The results of our modest test suite and its ten assertions

While our tests for the router are minimal, in that they don’t assert much, we’re at least ensuring that the routes exist and that their route handlers do what they’re expected to. Routing in an application is typically a convergence point where configuration is plumbed together, and tests help ensure that the correct modules are used.

8.3.2. Testing validation on a view model

The application also needs to test model validation with a few different inputs, making sure that a model is invalid under certain circumstances, and valid when every validation condition is met. For reference, code for the Shopping Item module is included in the following listing.

Listing 8.32. The shopping item model

var Backbone = require('backbone');

module.exports = Backbone.Model.extend({
  addToOrder: function (quantity) {
    this.set('quantity', this.get('quantity') + quantity, {
      validate: true
    });
  },
  validate: function (attrs) {
    if (!attrs.name) {
      return 'Please enter the name of the item.';
    }
    if (typeof attrs.quantity !== 'number' || isNaN(attrs.quantity)) {
      return 'The quantity must be numeric!';
    }
    if (attrs.quantity < 1) {
      return 'You should keep your groceries to yourself.';
    }
  }
});

Validation brings us to an interesting use case for JavaScript when it comes to testing. Given that we want to set up a test for each possible validation scenario, we could set up a list of test cases in an array, and then create a single test for each test case.

The following listing shows one possible way to stay DRY in our tests by using a test case factory and a battery of test cases. I’ve thrown in a test that’s not part of the test cases array for contrast.

Listing 8.33. Model validation test case battery
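A sketch of that approach might keep the test cases in an array of attributes and expected validation messages, plus one standalone test for addToOrder (the model path is a placeholder):

var test = require('tape');
var ShoppingItem = require('../src/models/shoppingItem.js'); // hypothetical path

var cases = [
  { attrs: { name: null, quantity: 1 }, expected: 'Please enter the name of the item.' },
  { attrs: { name: 'Bananas', quantity: 'three' }, expected: 'The quantity must be numeric!' },
  { attrs: { name: 'Bananas', quantity: 0 }, expected: 'You should keep your groceries to yourself.' },
  { attrs: { name: 'Bananas', quantity: 2 }, expected: undefined } // valid input yields no error
];

cases.forEach(function (testCase) {
  test('validate() on ' + JSON.stringify(testCase.attrs), function (t) {
    var model = new ShoppingItem();
    t.plan(1);
    t.equal(model.validate(testCase.attrs), testCase.expected);
  });
});

test('addToOrder(quantity) should increment the quantity', function (t) {
  var model = new ShoppingItem({ name: 'Bananas', quantity: 1 });
  model.addToOrder(2);
  t.plan(1);
  t.equal(model.get('quantity'), 3);
});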

Imagine if you had to write each test case as an individual test: much copy-pasting would ensue, breaking the DRY principle.

Following the practices we’ve discussed in this chapter, you could write tests for the views as well. Good test cases could be

· Making sure the template assigned to the view is the one intended for that view

· Checking that event handlers are declared in the events property

· Ensuring those event handlers do what they’re expected to

You could use sinon to mock the different properties in the view before invoking each method under test. I’ll leave those test cases as an exercise for you.

When you finish writing your tests for the view controllers, it’ll be time to shift your attention toward more automation. This time, you’ll automate Tape tests using Grunt, and you’ll also learn how to run these tests continuously on a remote integration server.

8.4. Automating Tape tests

You automated the Browserify process using Grunt in section 8.1.4. How can you add the tape tests to your Grunt builds? Running the tests on Node is significantly easier than executing them on the browser. As you learned earlier, you could run them on Node by providing the node CLI with the test file path:

node test/something.js

Automating the process shown in the previous code by using the grunt-tape plugin couldn’t be easier. The following code snippet (found as ch08/08_grunt-tape-node in the samples) is all you need in your Gruntfile to run the tape tests in Grunt. Note that you don’t have to run Browserify because, in this case, the tests will run in Node:

module.exports = function (grunt) {
  grunt.initConfig({
    tape: {
      files: ['test/something.js']
    }
  });
  grunt.loadNpmTasks('grunt-tape');
  grunt.registerTask('test', ['tape']);
};

That was fast. How about in the browser?

8.4.1. Automating Tape tests for the browser

Running tape tests on browsers from your command line is also fairly easy. You can use testling to do it. Testling is a tool written by James Halliday (also known as substack), a tremendously prolific Node contributor, who's also the author of Tape and a modularity fanatic. There wasn't a readily available grunt-testling package, but I decided not to disappoint. I created grunt-testling so that you could run Testling from Grunt. The grunt-testling package doesn't require any Grunt configuration, but you do need to configure Testling itself, by placing a 'testling' property in your package.json and telling it where the test files are. The following listing (found as ch08/09_grunt-tape-browser) shows a sample package.json that does exactly that.

Listing 8.34. Automating Tape tests

{
  "name": "buildfirst",
  "version": "0.1.0",
  "author": "Nicolas Bevacqua <buildfirst@bevacqua.io>",
  "homepage": "https://github.com/bevacqua/buildfirst",
  "repository": "git://github.com/bevacqua/buildfirst.git",
  "devDependencies": {
    "grunt": "^0.4.4",
    "grunt-contrib-clean": "^0.5.0",
    "grunt-testling": "^1.0.0",
    "tape": "~2.10.2",
    "testling": "^1.6.1"
  },
  "testling": {
    "files": "test/*.js"
  }
}

Once you’ve configured testling, installed grunt-testling, and added it to your Gruntfile, you should be all set!

module.exports = function (grunt) {
  grunt.initConfig({});
  grunt.loadNpmTasks('grunt-testling');
  grunt.registerTask('test', ['testling']);
};

You can now run the tests in a browser by entering the following command into your terminal:

grunt test

Figure 8.7 shows the results of using Testling with Grunt.

Figure 8.7. Driving tests through the Testling CLI using Grunt

Next up, let me briefly reiterate a concept you first saw in chapter 3: continuous development, adapted to testing.

8.4.2. Continuous testing

An important aspect of running tests is to do so on every change, making sure you don’t spend a long time with broken code in your local development environment. You might recall a particular watch configuration snippet in chapter 3 that allowed you to run specific tasks when file changes were detected somewhere in your code base. The following listing is an adapted version of that snippet to run tests and lint when files change.

Listing 8.35. Running tests and lint when files change

watch: {
  lint: {
    tasks: ['lint'],
    files: ['src/**/*.less']
  },
  unit: {
    tasks: ['test'],
    files: ['src/**/*.js', 'test/**/*.js']
  }
}

Automating tests in both Node and the browser is important, and so is watching for changes and running those tests locally as you work. At this point, you might want to revisit chapter 4, section 4.4, where I discussed continuous integration, which is fundamental to setting up your project so tests are executed on every push to your version control system.

Testing components in isolation isn’t the only way to test an application. In fact, countless types of testing exist, and we’ll briefly discuss a few interesting ones in the next section.

8.5. Integration, visual, and performance testing

As I mentioned a few times before, testing comes in various sizes and shapes. Integration testing, for instance, allows you to test different paths in your application workflow, making sure that component interaction works as expected. Components were already tested in isolation, and integration testing provides both a redundancy layer and the ability to capture bugs that aren’t evident without executing an application and seeing for yourself.

8.5.1. Integration testing

Integration tests are no different from unit tests in terms of tooling. You can still use Tape, Sinon, and Proxyquire to run them. The difference lies in what's being tested. In integration testing, you no longer strive to test a completely isolated version of a component, but rather test as many interconnections as you can get away with, mocking the rest. For instance, you might run your application's web server, hit it with real HTTP requests, and check that the responses match your assertions.
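The following is a minimal sketch of that kind of test, written with Tape and Node's http module. It assumes the Express application is exported from a hypothetical app.js and that it serves a JSON endpoint at /api/items; both are placeholders.

var test = require('tape');
var http = require('http');
var app = require('../app.js'); // hypothetical module that exports the Express application

test('GET /api/items responds with JSON', function (t) {
  var server = http.createServer(app).listen(0, function () { // port 0 picks a free port
    var port = server.address().port;
    http.get('http://localhost:' + port + '/api/items', function (res) {
      t.equal(res.statusCode, 200, 'responds with 200 OK');
      t.ok(/application\/json/.test(res.headers['content-type']), 'responds with JSON');
      server.close();
      t.end();
    });
  });
});

Notice how the real routing, middleware, and response serialization are all exercised together, while the test still runs from the command line like any other Tape test.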

You may also use Selenium, a browser automation tool, to help drive these comprehensive tests on the client side. Selenium drives a browser through a web server that exposes its API, which has bindings in a variety of languages; you send commands to the browser through the Selenium server. You write down a series of steps for your test to follow, and then Selenium fires up a browser and performs those actions for you. A running web server and browser automation, working together, allow you to automate tests that you'd otherwise do by hand. Remember, you only have to put the test together once! Then you can run it as many times as needed, and you can always go back and change the tests, too. I'm not going to lie to you, though: setting up Selenium is cumbersome and generally frustrating, and it's poorly documented. But once you've put together a few tests, you'll reap the benefits.
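As a rough illustration of what such a test could look like, here's a sketch using the selenium-webdriver package for Node together with Tape. The URL, selectors, and expected title are hypothetical placeholders, and you'd need Selenium set up locally for it to run.

var test = require('tape');
var webdriver = require('selenium-webdriver');

test('searching from the navbar shows matching results', function (t) {
  var driver = new webdriver.Builder().forBrowser('firefox').build();
  driver.get('http://localhost:3000'); // hypothetical local instance of the app
  driver.findElement(webdriver.By.css('.search-input')).sendKeys('bananas'); // hypothetical selectors
  driver.findElement(webdriver.By.css('.search-submit')).click();
  driver.getTitle().then(function (title) {
    t.ok(/bananas/i.test(title), 'the results page mentions the search term');
    return driver.quit();
  }).then(function () {
    t.end();
  });
});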

Integration tests aren’t limited to browser automation with a tool like Selenium, though. You could run integration tests that work solely on your back-end stack or merely in the front end.

8.5.2. Visual testing

Visual testing mostly consists of taking screenshots of an application, at different viewport dimensions, and validating that the layout isn’t broken. You can perform these validations by either comparing a screenshot to what you expect or by superimposing the latest screenshot with the previous one, generating what’s called a “diff.” These diffs let you quickly identify what changed from one version to another by highlighting the differences and shading the parts of the screenshot that haven’t changed.

Many Grunt plugins can take screenshots of an application for you. Several even go the extra mile and compare the latest screenshot with the previous one, showing you where the differences may lie. One such Grunt plugin is grunt-photobox. Configuring it is mostly a matter of deciding which URL you want to load and what resolution you want the viewport to be when taking the screenshots. This is particularly useful if your site follows the Responsive Web Design paradigm, which uses CSS media queries to change the appearance of a page based on the dimensions of the viewport and other variables. The following code snippet shows how you might configure grunt-photobox to take pictures of a page in three different sizes. Let me go over the options:

· The urls field is an array of pages you want to take pictures from.

· In screenSizes you can define the width of each screenshot you want to take; the height will be the full height of the page. Make sure you use strings. Note that Photobox will take a picture of each site in each of the resolutions you’ve decided on:

photobox: {
  buildfirst: {
    options: {
      urls: ['http://bevacqua.io/bf'],
      screenSizes: ['320', '960', '1440'] // these must be strings
    }
  }
}

Once you’ve configured Photobox in Grunt, you can run the following command and Photobox will generate a site you can browse to compare the screenshots:

grunt photobox:buildfirst

You can find the fully working example in the accompanying code samples as ch08/10_visual-testing. Finally, let’s shift our attention to performance testing.

8.5.3. Performance testing

Keeping tabs on the performance of your application can help you quickly identify the root cause of performance issues. You can monitor performance in web applications using tools such as Google PageSpeed or Yahoo YSlow. Both tools give you similar insights, and both can be automated using Grunt plugins, although there are a few differences between their services. The PageSpeed plugin gives you more insight into what improvements you should make to your site; for example, it might let you know that you aren't caching your static assets as aggressively as you should. The YSlow plugin gives you a more compact report, telling you how many requests were made, how long the page took to load, how much content was downloaded, and a performance score.

The PageSpeed plugin, grunt-pagespeed, requires you to get an API key from Google.[2] You can then configure the plugin as shown in listing 8.36 (sample ch08/11_pagespeed-insights). In the code, you're telling PageSpeed which URL to hit, what locale the results should be generated in, what strategy to use ('desktop' or 'mobile'), and the minimum score (out of 100) for the test to be considered successful. Note that you purposely avoid including the API key in the Gruntfile; instead, you'll read it from an environment variable to keep the secret safe.

2 Get the API key from https://code.google.com/apis/console.

Listing 8.36. Configuring the PageSpeed plugin

pagespeed: {
  desktop: {
    url: 'http://bevacqua.io/bf',
    locale: 'en_US',
    strategy: 'desktop',
    threshold: 80
  },
  options: {
    key: process.env.PAGESPEED_KEY
  }
}

To run the example, you’ll have to take the key you got from Google and enter the following command into your terminal:

PAGESPEED_KEY=$YOUR_API_KEY grunt pagespeed:desktop

For more information about the reasons for storing secrets in environment variables, go back to chapter 3, section 3.2.

In the case of grunt-yslow, the Grunt plugin for YSlow, you won’t need to get any API keys, which makes matters considerably simpler. Configuring the plugin is a matter of specifying the website URL you want to hit and setting the threshold levels for page weight, page load speed, performance score (out of 100), and request count, as shown in the following listing (sample ch08/12_yahoo-yslow).

Listing 8.37. Configuring the YSlow plugin

yslow: {
  options: {
    thresholds: {
      weight: 1000,
      speed: 5000,
      score: 80,
      requests: 30
    }
  },
  buildfirst: {
    files: [
      { src: 'http://bevacqua.io/bf' }
    ]
  }
}

To run these YSlow tests, enter the following command into your terminal:

grunt yslow:buildfirst

All of these examples can be found in the accompanying source code samples, under ch08. Make sure to check them out!

8.6. Summary

That was exciting! We covered many concepts in a short time:

· You got a crash course on unit testing and learned how to tune your components, making them more suitable to test.

· I explained Tape and how you can use it to seamlessly run tests on both the client side and the server side, without duplicating your code.

· You learned about mocks, spies, and proxies; why you need them; and how you can use them in JavaScript code.

· I showed you several case studies to help you figure out what things you should be testing for and how you should test them.

· You looked at automation using Grunt to run Tape tests on both the server and the browser without leaving the command line.

· I introduced you to integration and visual testing, and you learned how to automate those tasks using Grunt.

If you’re interested in learning more about testing, I suggest you check out Test-Driven JavaScript Development, by Christian Johansen (Developer’s Library, 2010).