
Responsive Web Design, Part 2 (2015)

Testing And Debugging Responsive Web Design

Phase 3: Expertly Applying Testing To Your Development Workflow

The final testing phase in your IT career is when you work out the best way of implementing testing within your development workflow. You normally reach it either when your thirst for testing has gone too far, or when the best practices that worked for the small product in your brand-new startup have become anti-patterns for the legacy product maintained by your now large, established company.

You’ll get to a point where you think you’ve tested enough, or that your entire testing framework needs refactoring. Congratulations! You’ve reached peak testing. Everything from here on is relative to your own unique development requirements.

Without a constant eye on the big picture, every IT project will eventually go bad. Too much testing or the wrong kind of testing ends up hurting your productivity. You mustn’t blame testing for this, though — the technique is not to blame for how it is implemented.

The best piece of advice you can ever give to someone in IT is this (it encapsulates but isn’t limited to testing): don’t build big IT projects. Big IT is an anti-pattern. You can quote me on that.

“Big IT is an anti-pattern.”
– Tom Maslen, 2015

A key programming paradigm for any IT project should be (again): simplicity. The opposite of simplicity, complexity, destroys IT projects. More requirements, more features and more team members all increase the complexity of what you’re trying to achieve. Complexity leads to longer release cycles, adds more risk, increases the potential for failure and requires more testing.

You should be trying to simplify your product at all its levels. Simplify what browsers have to do, and keep designs simple so they can be reused in slightly different configurations. My advice for testing is the same. Now, don’t mistake “simple” for “easy” — they’re two different things. Easy means the task will be done in five minutes, or that anyone can do it. Simple means splitting your problem into many smaller, logical parts, giving each part a single responsibility. Simple means clarity, making what each part of your codebase does predictable and, ultimately, reliable.

Testing is no different. The difference when moving from a small site to a large site is that there is more to test. I’ve learned the hard way that you simplify testing a large website by breaking the site up into smaller, self-contained parts. A large media organization’s website doesn’t have to be one product. You could break it up into multiple products, each one responsible for a different content page: an index page application, a story page application.

If you have multiple mini products, then you can test each one in isolation. Don’t get yourself into a position where making one change to your codebase means having to test everything before you can release. Having to test everything before each release only works for a limited amount of time. As a product matures and more features are added, the testing process becomes more complex. You’ll reach a point in your product’s life cycle where you will have accrued so much technical debt that you will have to stop delivering features and split up your codebase.

Massive single codebase visualized
Code as red units; tests as gray units; changed code as blue units. With a large monolithic codebase, making a single change means running all of your tests and deploying your entire codebase.

Multiple small codebases visualized
By breaking your codebase into multiple smaller parts you can make a single change, run only the relevant tests and deploy a much smaller section of your code.

Splitting up your testing into chunks is only possible if you don’t have a single, large codebase that can only be shipped in one piece. This means the secret to simplifying your testing process lies in your application architecture.

If the product you are working on has grown organically into a much larger proposition, there is no magic testing bullet. There isn’t any methodology you can apply to improve testing. Your only option in staying sane is to refactor and break up your application into smaller pieces. It doesn’t matter who you are, how many testers and actual devices you have, or what methodology you use. There are no best practices for making large IT products. If you’re testing everything in one go, you’re doing it wrong.

IDENTIFYING WHEN TESTING HAS BECOME A PROBLEM

Here are some workflow smells that will tell you if your codebase has become too big:

•Sections of your codebase have had tests run against them more times than they have lines of code.

•Testing takes longer than development.

•Deployments used to be easy; now they are hard and take longer.

•Team members don’t like running the tests.

•Team members start disabling tests to get the build to pass.

If this sounds familiar to you, then it’s time to break up your product into many smaller pieces. You need to get your project manager to recognize the issue. The longer you leave this, the more features you add, the harder it will be to untangle your codebase and split it into smaller pieces.

COMMON TESTING PROBLEMS AND HOW TO RESOLVE THEM

There are many common pitfalls that all teams get themselves into when testing. Here are a few of them, with advice on how to get out.

NOT HAVING ENOUGH TIME TO TEST ALL THE BROWSERS

The biggest problem responsive web design presents us with is the increase in what you now need to support. Not only do we have to support many more browsers, we also have to support a crowd of different devices all accessing the internet on a multitude of connection types.

Before the iPhone we had to test across the following variables:

•Browser (Chrome, IE6–IE8, Opera, Safari and Firefox)

•Input type (keyboard and mouse)

After the iPhone we now have to test across the following variables:

•Browser (desktop and mobile Chrome, IE8–IE11, multiple Operas, more than one version of desktop Safari, more than one version of iOS Safari, multiple WebKits, desktop Firefox, mobile Firefox, too many versions of Android, Nokia Ovi, Dolphin and many more I can’t think of right now)

•Device size (240px up to greater than 1,000px)

•Input type (touch, keyboard, mouse, and D-pad)

•Connection speed (GPRS, 2G, 3G, 4G, and broadband)

All of these different variables and their combinations make testing not only much more important than before but also much harder. Even companies like the BBC (where I work) struggle with this new paradigm. When your support matrix becomes this complicated, making websites at speed or trying to hit tight deadlines becomes risky. The potential to release bugs into production increases.

You can scale up your testing to meet this demand. You could hire a whole team of testers or try to outsource your testing to India, but these are the most expensive options. A cheaper alternative is to split your testing into groups and prioritize what you test. If you test a select few browsers that you know represent over 50% of your audience, you can release your latest change and then test the next group of browser/device combos.

This won’t work for everyone. If you have a product, or a specific feature of a product, where the cost of a bug going live is high, then you should be more deliberate and take time with your testing. But if the feature is minor, or the consequences of a bug amount to a minor rendering issue, then this technique will work for you.

Another alternative is to make your product really simple. Simplify the design, make the JavaScript-driven interface a progressive enhancement of a static webpage, and don’t use thousands of lines of CSS or cutting-edge HTML5 capabilities. The simpler a website is, the more likely it is to work across a range of different browser/device combos. You should never underestimate the power of simplicity. Simplicity scales.

NOT HAVING ENOUGH DEVICES

Lack of devices, or not having a large budget to purchase devices, is a common problem. Testing responsive web design is costly: even the minimum recommended group of test devices (in 2015 this is: a non-Samsung Android 4.x device, a Samsung Android 4.x device, an iPhone 6, an iPhone 5 and your development PC) is going to be expensive.

While testing on real devices is always preferable, sometimes it’s not possible, and we have to look for alternatives. For desktop browsers, it helps if your development machine is a Mac, as you can easily run Windows virtualized via VirtualBox or VMware to test all versions of Internet Explorer. Microsoft maintains an excellent website, www.modern.ie, that allows you to download virtual machines of its operating system. Testing the other way round, on a PC with a virtualized version of Mac OS X, is problematic.

Google and Apple have IDEs you can download with built-in emulators for Android and iPhone device testing. Although it’s not trivial to get them set up and running, once you have, this is a much cheaper alternative to purchasing devices. Your final option should be a service like BrowserStack. Although the barrier to entry of using BrowserStack is much lower than buying devices or setting up emulators, in the long term it is less useful because loading webpages can take a long time. You also lose the immediacy of having an actual device in your hand. BrowserStack lets you test that your layout works and that there are no bugs in your JavaScript, but you can’t test how the experience feels on the device.

TROUBLESHOOTING BUGS ON MOBILE DEVICES

The hardest bugs to fix are bugs you can’t replicate. You might have a webpage looking really nice in Chrome on your development machine, but then see it rendering incorrectly on your boss’s Sony Xperia Tipo. You have a great array of debugging tools on your development machine, but you really need to be able to use them with the browsers on mobile devices. Luckily this problem has been solved with the introduction of remote debugging.

Safari and Chrome both allow you to connect your local browser to a browser on a mobile device. You need to connect the devices either over Wi-Fi or by USB cable (USB is always easier). The syncing can be a bit complicated, but once it’s done you can use your local debugging tools on the browser on the phone. When I first saw this demonstrated I wept with joy (I also considered throwing money at the person).

Remote debugging
You can connect your local browser to a browser on a mobile device over Wi-Fi or by USB.

Explaining how this works in a book is useless as the techniques and commands change regularly. I recommend you follow up this technique by visiting the Chrome17 and Apple18 online documentation.

Another pain point with mobile debugging is getting your content viewable on a device. Small teams creating static sites will be able to FTP their content to a publicly viewable space to check on a mobile. But if you are working on a branched version of an application, or the application has a non-trivial back-end that is not easy to deploy, you may want to make your local web server available to view on a device connected directly to the internet.

Services like ProxyLocal19, once installed on your computer, can make your local web server available on the internet. This allows you to connect any mobile device to your local machine, so you don’t have to deploy your code to a second location.

TESTS TAKE TOO LONG OR GET IN THE WAY OF DEPLOYMENT

Tests that take too long to complete can be remedied in a few different ways depending on your situation. The first thing to consider is whether you are testing too much. As outlined above, I strongly recommend that you break up large codebases into smaller fragments; logical divisions of codebases should be tested in isolation. But sometimes this isn’t possible.

If this is true for you, then here are a few other solutions you can try to improve the time the tests take to run.

Reduce HTTP Requests

Look for and remove HTTP requests in the code you are testing. HTTP requests are often performance bottlenecks within web applications. If you are testing a web application that makes its own HTTP requests for data, add a hook into your app so you can set it to use local fixture data instead.
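
For example, a data module could expose a flag that tests flip so the app reads from local fixtures rather than the network. This is a rough sketch; the module, URL and fixture names are illustrative, not from the book:

var dataSource = {
  useFixtures: false,
  fixtures: {
    '/api/stories': [{ id: 1, title: 'Local fixture story' }]
  },
  get: function (url, callback) {
    if (this.useFixtures) {
      callback(this.fixtures[url]); // no HTTP request made
      return;
    }
    $.getJSON(url, callback); // real request in production
  }
};

// In a test: dataSource.useFixtures = true;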

If you are running unit tests for JavaScript that makes AJAX requests, use a library like Sinon.JS to mock the AJAX requests. In your tests, this is actually preferable to making real AJAX requests as you can control the response. You can deliberately throw 404 and 500 errors and test how your code reacts.
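
Here is a minimal sketch using Sinon’s fake server inside a Jasmine test; loadStories and the .error-message element are hypothetical stand-ins for your own code:

describe('story loader', function () {
  var server;

  beforeEach(function () {
    server = sinon.fakeServer.create(); // intercepts XMLHttpRequest
    server.respondWith('GET', '/api/stories',
      [500, { 'Content-Type': 'application/json' }, '{}']);
  });

  afterEach(function () {
    server.restore();
  });

  it('shows an error message when the request fails', function () {
    loadStories();    // hypothetical function under test; makes the AJAX request
    server.respond(); // deliver the canned 500 response
    expect($('.error-message').length).toBe(1);
  });
});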

Aside: Speed up Your Testing

There are many tools and techniques to help you reduce the time it takes to test. Gremlins.js20 is a monkey testing script written in JavaScript for Node and browsers. It tests the robustness of your application by randomly clicking everywhere, entering random strings into web forms and randomly firing events. Leaving gremlins.js running on your application will also eventually detect memory leaks.
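
If you want to try it, the project’s README shows usage along these lines (a minimal sketch; check the gremlins.js documentation for current options):

// Run in the browser after including gremlins.min.js on the page under test
var horde = gremlins.createHorde();
horde.unleash(); // clicks, types, scrolls and fires events at random until stopped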

A more deliberate way of testing forms is to prefill your forms with meaningful data. I often find myself filling in a web form quickly, testing something, making a change to the code then refreshing the page, before filling in the web form again. This can be especially tiresome on a mobile device. Chris Coyier’s article “Prefill Your Own Forms in Dev21” gives a great example of how you can get around this issue.
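
A crude sketch of the idea, assuming a dev-only check and hypothetical field names (Chris Coyier’s article covers more refined approaches):

// Prefill form fields during development only
if (window.location.hostname === 'localhost') {
  document.querySelector('[name="email"]').value = 'test@example.com';
  document.querySelector('[name="postcode"]').value = 'W1A 1AA';
}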

Testing multiple devices at the same time is also a great way of decreasing the time spent testing. Ghostlab22 provides a service that lets you synchronize scrolls, clicks, reloads and form inputs across multiple devices.

If your JavaScript uses the setTimeout method, your unit tests will need to wait for the callback to fire, slowing down your tests. Again, Sinon.JS can help: it provides a mocked setTimeout method, so you can call a method in your JavaScript, tick the time along and then run your test without having to wait.
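
A minimal sketch with Sinon’s fake timers; startPolling is a hypothetical function that schedules work with setTimeout:

var clock = sinon.useFakeTimers();

startPolling();   // schedules its callback with setTimeout
clock.tick(5000); // advance the mocked clock five seconds instantly

// make your assertions here, then put the real timers back
clock.restore();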

Don’t Run All of Your Tests All of the Time

If you are running 300 tests on each commit, and committing many times a day, you’ll spend much of your day waiting for tests to pass. Much of the time, a good portion of your tests will be checking parts of your codebase that you no longer actively develop. Do these tests really need to be run on every commit? While it’s important to make sure your codebase doesn’t go stale, you can choose to run parts of your test suite on a daily basis rather than on each commit. Running a smaller subset of your entire test suite will speed up your commit workflow.
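
As a sketch of how this might look with Grunt (the task names and targets are illustrative), you could register separate aliases for the fast per-commit suite and the full nightly suite:

// Gruntfile.js — task names are illustrative
grunt.registerTask('test-commit', ['jasmine:critical']);        // small, fast subset run on every commit
grunt.registerTask('test-nightly', ['jasmine', 'cucumberjs']);  // full suite run on a schedule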

Run Tests in Parallel

Modern computers contain multiple processor cores, allowing you to take advantage of parallelization: you can run multiple tests at once. You should invest time in seeing whether your tests can be run this way. For example, if you use Cucumber to run BDD tests, you can use the ParallelTests project23. If you have JavaScript tests, take advantage of Grunt’s concurrent task24: split your testing into groups and run the groups simultaneously.
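
A sketch of a concurrent test group using grunt-concurrent; the group and task names are illustrative:

// Gruntfile.js
grunt.loadNpmTasks('grunt-concurrent');

grunt.initConfig({
  concurrent: {
    tests: ['jasmine:ui', 'jasmine:data', 'cucumberjs'] // run these task groups at the same time
  }
});

grunt.registerTask('test', ['concurrent:tests']);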

TOO MUCH IS BEING TESTED, OR TESTING IS DUPLICATED

When using several types of testing, it is easy to create duplication in your tests. With BDD tests you can confirm whether JavaScript is working on a page, but you can do this with a JavaScript testing framework too. You don’t need to test something twice. Unit tests are preferable to functional tests as they generally execute faster, because a functional test needs to spin up a web browser and then make HTTP requests.

It’s easy to get into the rhythm of continually piling feature upon feature into a codebase. You must not lose focus on the big picture. Continually review your tests. Can two tests now be merged into one? Can we remove certain tests completely? Another common problem is that we end up testing too much, especially with BDD.

Because BDD tests are created by listing functionality, it’s very easy to map each requirement to a test. While you do want to make sure you capture all of your business logic, you don’t want to create too much granularity with a BDD test. Often simply testing to see if a value from the database has been added to the page is enough. Use unit testing to confirm the rest of your business logic.

TESTS BREAK ALL THE TIME, OR THEY ARE TOO FRAGILE

When testing HTML, your tests can become fragile because of the way certain BDD test libraries work. For example, Jasmine (a JavaScript framework) allows you to make assertions by searching for elements in the DOM using jQuery:

expect($('.pageTitle').html()).toEqual('The end of the world is here!!!');

If another developer (let’s call him John) comes along and decides to refactor all the CSS to use BEM notation, he will change our class name from pageTitle to page__title. When John runs the Jasmine tests he will get an error: .pageTitle no longer fetches anything from the DOM.

A better way to write your tests is to use hooks created specifically for tests in your elements. You can do this using data attributes:

<h1 class="page__title" data-test-id="page_title">The end of the world is here!!!</h1>

You can then test for the element’s existence using this hook, like so:

expect($('[data-test-id="page_title"]').html()).toEqual('The end of the world is here!!!');

With this in place, John can refactor the CSS as much as he likes, and he won’t break our test.

ADDING THIRD-PARTY CODE ALWAYS CAUSES ISSUES

A common pattern on the web is to include third-party components in your website. From comments plugins to interactive content like maps and charts, to social media buttons, this code written by someone else can cause all kinds of problems. At BBC News, this problem also arises between development teams. We often build responsive infographics and rich, interactive content in isolation from the website. These mini projects are developed by a separate team from the one which develops the website itself. For example, as part of the 100 Women 2014 campaign we produced an infographic-heavy page25 covering domestic violence in India. The page contains these two responsive infographics that were created independently of the page:

Responsive infographic 1
Creating a responsive infographic can be difficult due to the sheer amount of content to display.

Responsive infographic 2
One option to create responsive infographics is by embedding them into a responsive <iframe>.

The infographics are static content: once deployed, we never intend to go back and maintain them. However, they reside within a codebase that changes many times a month. If we only had to support Chrome on a desktop computer, much of the complexity of maintaining compatibility between two codebases would disappear. Unfortunately, this problem is exacerbated by responsive web design because the area in the DOM where the component needs to reside is not one consistent size.

Badly written CSS pollutes the other party’s code: CSS’s global nature means any CSS can affect any element on the page. If you’re lucky, then both parties have written their CSS using the BEM methodology. If you’ve used BEM and the other party has not, then unwanted CSS will be applied to your HTML. Even if there is no pollution, or you’ve managed to reset all the styles applied to your HTML, you still have the problem of CSS selectors and properties that you don’t yet know about affecting your page in the future.

JavaScript has the same concerns but, fortunately, a main feature of the language is scope. You’d expect the creator of the third-party HTML not to do something stupid, like defining a vague-sounding name (var i, for example) in the global scope, which would cause logic from the two parties’ code to leak into one another. A big issue with third-party code is often not the code they write themselves, but the dependencies that come with it. Another problem is the other code relying on a slightly different version of jQuery than your page does. Double-loading jQuery is an embarrassing performance issue and often causes your code to run in unexpected ways.

Responsive web design makes matching the layout between the two parties’ code extremely difficult. If the included code is a very small UI element or something like Google Maps, where the interface basically works at any size or aspect ratio, then you’ll get away with this issue. The creator of the plugin needs to make their user interface very flexible. Media queries do help with this, but plugin authors can’t know what percentage of the width their code will be allowed to expand out into.

It’s easy to end up with a very fragile implementation. Fortunately, there is a solution. It involves some JavaScript and an HTML element that belongs to an earlier, more arcane version of the web. An element that’s traditionally been very wicked, never positively spoken about, whose only real friends until now have been adverts, and is probably older than you: the iframe.

An iframe (inline frame), along with its older brother <frameset>, is an element that allows you to include an external HTML document in another. The very idea of this breaks one of the main concepts of the web: one URL per resource. They act like HTML documents too; you can follow links in them and load another page within the iframe. This action is added to your browser’s history, so it can confuse the heck out of users when they press the Back button and the main page doesn’t change.

Iframes have also traditionally given us accessibility issues, but modern browsers and screen readers are much better at handling them today. The benefit of the iframe is that it’s a separate HTML document, so any code you put in it (HTML, JavaScript, CSS, and so on) is completely isolated from the host page. The iframe essentially acts as a sandbox.

The <iframe> element is not responsive by default. It acts like a viewport into another document. This means that the height of the iframe element works independently of the height of the iframe’s HTML document. Typically, the iframe creates an additional scrollbar in the page allowing users to access content below the fold of the element.

An iframe with an additional scrollbar
An iframe with an additional scrollbar. (An example of this made responsive26 is available online.)

As the width of the HTML document changes, so too will its height. The variability of the HTML document’s width means that we need to set the height of the <iframe> element according to the height of the HTML document.

Two images of the same responsive infographic at two different resolutions, showing the variable difference in height and width
Two images of the same responsive infographic at two different resolutions, showing the variable difference in height and width. The example on the left is rendered in a smaller viewport but requires more height compared with the example on the right.

We can do this dynamically using two snippets of JavaScript. The first, inside the iframe’s HTML document, informs the second, in the host page, what the height of the <iframe> element needs to be. The source domain of the iframe will often be different from that of the host page, so these two snippets of JavaScript can’t talk directly to each other. Standard browser security models will protect your code from malicious third-party attacks by blocking any attempts to interact between scripts that come from different domains. To get around this, our two JavaScript snippets communicate via a browser feature called window.postMessage. This is an asynchronous communication method in JavaScript. Here is an example of how it works.

First, set up the host page to listen for a sent message:

window.addEventListener('message', function (e) {
  console.log(e.data); // will output "Hello"
}, false);

Then, send a message from the iframe using the postMessage method on the parent’s window object:

// iframe page
window.parent.postMessage("Hello", "*");

Note: this technique can potentially allow any third-party JavaScript to communicate with your code. To find out how to use this feature in a more robust and secure way, read about it on the MDN website27. You can download an example of this codebase from Github28. Many publishing companies such as the BBC, the Guardian, Telegraph and New York Times use this technique.
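
Building on that, here is a rough sketch of the height-reporting pattern described above. The iframe element’s id and the message format are illustrative, and the downloadable Github example linked above is the complete version:

// Inside the iframe's HTML document: report the body height to the host page.
function reportHeight() {
  window.parent.postMessage(JSON.stringify({ height: document.body.offsetHeight }), '*');
}
window.addEventListener('load', reportHeight, false);
window.addEventListener('resize', reportHeight, false);

// In the host page: resize the iframe when a height message arrives.
window.addEventListener('message', function (e) {
  // In production, check e.origin against the domain you expect the iframe to be served from.
  var data;
  try { data = JSON.parse(e.data); } catch (err) { return; }
  if (data && data.height) {
    document.getElementById('infographic-iframe').style.height = data.height + 'px';
  }
}, false);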

The responsive iframe will give you a much more robust way of including third-party code in your page. There are a few gotchas to note, though:

•JavaScript calculates the required height by checking the height of the body in the iframe’s HTML document. Any absolutely positioned elements in the document will render separately from the body and so won’t be taken into consideration when determining the required height.

•Using position: fixed on an element within the iframed HTML document positions the element relative to the iframe, not the host page. This means the element will not be fixed in the browser.

•Using any kind of modal state within the iframe will only work within the confines of the iframe itself. If you wanted to use a modal from within the iframe, you need to set the iframe to take up the full viewport of the host page.

•By default, linking to other documents using hyperlinks within the iframe will load the requested page within the iframe. To load a new document into the host page, use the target attribute on the hyperlink: <a href="http://www.google.com" target="_top">Google</a>.

Ideally, we shouldn’t have to use an iframe in this manner to isolate parts of the page from one another. The shadow DOM, which is part of the web components specification, gives us natively sandboxed CSS and JavaScript, as well as a truly responsive element. For now, unfortunately, web components don’t have enough browser support to warrant putting this technique into practice, but it does give us a future escape strategy from our current reliance on iframes.

WELL-MAINTAINED, PREPROCESSED CSS MAKES DEBUGGING CSS REALLY HARD

In this chapter, I’ve evangelized breaking up your CSS into multiple Sass files. Sass, like any other preprocessor (LESS, Stylus, etc.), will concatenate all your Sass files into a single CSS file. This is great for performance as the browser will make only one HTTP request for all your styles. Unfortunately, this makes debugging issues harder as the browser’s dev tools are not looking at the source files that we edit. When we find a style we wish to change, mapping it to the original source file is not easy. If you’ve minified the concatenated output file, the browser dev tools will tell you the style comes from the first line of the CSS file. Not helpful at all.

Styles pointed at CSS
Preprocessed CSS makes debugging really hard because the browser’s dev tools are not looking at the source files.

From the very beginning, Sass and LESS helped us by outputting debug comments next to each line of generated CSS. These comments pointed to where in the preprocessor files the styling came from.

/* line 76, _normalize.scss */
body { margin: 0; }

This is great when you’re developing locally, but it adds considerable bloat to the file. We wouldn’t want to deploy this to our live site as it would increase the rendering time of the page. Once finished working on the page, we’d deploy the CSS without the debugging comments in them. This made investigating CSS issues on the live site hard, as we were once again dealing with minified CSS.

Fortunately, there is a handy feature that helps us with this problem: CSS source maps. Generated by a CSS preprocessor, a source map is an additional file that informs the browser of the origin of each line of CSS in the source files. To enable CSS source maps you need to:

1. Tell browsers to enable their use. There’s typically a setting in the browsers’ dev tools. For example, in Chrome 38 it’s in the general settings tab of the dev tools.

Enable CSS source maps

2. Tell your CSS preprocessor to generate a source map. The preprocessor will append each generated CSS file with a comment informing browsers where to find the map file.

main.css
/*# sourceMappingURL=main.css.map */

Once these two instructions have been completed, browsers will download and use the CSS source map when the dev tools panel is open. When inspecting a DOM element, each line of styling will point to the original source file instead of the generated CSS output.

Styles pointed at SASS
You can now benefit from the concatenation offered by CSS preprocessors without the difficulty a single file presents.

You may find setting up your preprocessor to output source maps a little frustrating. I use Grunt for all my front-end dev tasks. Once Grunt runs the Sass task, I instruct it to run grunt-contrib-cssmin to minify the generated CSS output. At the time of writing, grunt-contrib-cssmin strips out the comment in the CSS that points to the source map. If you are trying to apply source mapping and can’t get it to work, check whether any task downstream of the preprocessing is stripping it out.
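
As an illustration, a Gruntfile entry along these lines should emit the map file and the comment. This is a sketch assuming the grunt-sass plugin; option names vary between Sass plugins and versions, so check your plugin’s documentation:

// Gruntfile.js — a sketch assuming the grunt-sass plugin
grunt.initConfig({
  sass: {
    dist: {
      options: {
        sourceMap: true // writes main.css.map and appends the sourceMappingURL comment
      },
      files: {
        'dist/main.css': 'src/main.scss'
      }
    }
  }
});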

Summary

The one theme that I’ve tried to carry throughout this chapter is simplicity, and its importance is the key idea I hope you take away. Apart from death and taxes, the only other guarantee the future brings is increased complexity. As more people access the web with diverse types of devices, we’ll find the best way to tackle this is to make the way we build our products simpler. As developers, we now think about performance as a key objective for our codebase (or NFR — non-functional requirement — in product management speak). I’d strongly argue that simplicity should also be a key objective.

Simplicity — and its crime-fighting partner, predictability — will persist if you keep doing the following with your codebase:

•Plan the structure of your HTML, CSS, and JavaScript — don’t make it up as you go along. Modular thinking is key to building maintainable structures of CSS. Always keep in mind the single responsibility principle, not just for splitting content, presentation and interaction into HTML, CSS and JavaScript, but also for breaking up your code into isolated units that are responsible for just one thing.

•When designing and building responsive structures, plan from the content out. Add breakpoints at the point that the design starts to break.

•Start using visual regression testing in your projects today, and start to get a better understanding of other ways of testing as well.

•Be pragmatic. During my career I’ve swung from one deeply dogmatic way of working (cowboy development) to an opposing and yet still just as dogmatic idealism (testing too much). It’s important to understand that delivering on time in a sustainable manner is the preferred way of working.

Hopefully, you’ve enjoyed this chapter as much as I’ve enjoyed writing it for you. It’s taken a long time in my career to get to a point where I can confidently say I’m no longer a terrible web developer.

Cutting The Mustard

Cutting the mustard is a technique that we’ve used on the BBC News website to answer the most difficult question responsive web design asks us: how do you support a MacBook Pro with a Retina display and a Nokia C5 with the exact same webpage?

These devices are massively different. Let’s take a quick look at them:

Property      | 15" MacBook Pro (Retina)        | Nokia C5
Screen size   | 15 inches                       | 2.2 inches
Pixel count   | 2,880×1,800                     | 240×320
CPU           | 2.8GHz dual-core Intel Core i5  | 600MHz ARM 11
Memory        | 16GB RAM                        | 128MB RAM
Graphics card | Intel Iris Graphics             | n/a
Input         | Full keyboard + multi-touch pad | Number pad + D-pad

They are completely different types of device. Their only shared features are:

•a screen

•an internet connection

•a web browser

•a means of input

These devices have as much in common as a luxury yacht and a raft made out of mango trees. To support them both, we have to find common ground, somewhere for us to start that we can build on.

The first step of cutting the mustard is to find this common ground: build a basic webpage. This webpage is going to be rendered on a Nokia C5, as well as all the other weird, strange and obscure phones and web browsers out there. This base experience needs to do the following:

•Provide a very basic layout: a single-column design using only HTML4 elements (no fancy stuff!).

•Build this layout using old CSS2 selectors and properties (the only caveat to this is the box model).

•Don’t use any JavaScript, or at the very least use only minimal JavaScript (Google Analytics doesn’t count in this statement, so add that in if you want to).

•If you have content that you want to load in with JavaScript, add a link to the body of your page connecting to another webpage with that content in it (we’ll talk about this in a moment).

This webpage is now so basic that anything can render it. The webpage should be able to resolve the needs of your users at the most basic level. If your site is a content publishing site, then users should be able to read all your content. If your site is an e-commerce site, then your forms should be POST-able, and work without JavaScript. If your site is a web application, then your users should be able to log in without JavaScript and immediately see stuff and potentially interact using POST-able forms.

Supplying a base experience to all users essentially lowers your site’s barrier to entry. I like to think of each website I build as a shop. You wouldn’t make a shop with a door that is thin, short, shut and takes 30 seconds to open. You’d want the shop door to be as wide and high as possible, open and requiring no effort to walk through. You’d make sure there was nothing stopping people from entering your shop. Yet by making websites that aren’t accessible or usable, don’t work without JavaScript and are slow to load, you’re essentially raising that barrier to entry, making it hard for people to enter your shop.

The next step in the cutting the mustard technique is to add a JavaScript application that improves the base experience. This JavaScript will progressively enhance the base experience, adding more CSS to improve the layout (this is where we can actually implement a responsive layout) and interactive elements into the page (drop-down or slide-out navigation, a dynamic shopping basket or auto save when features are toggled).

Not all browsers get this content, however. The JavaScript application should be conditionally loaded by checking to see how modern the browser is. You can do this with a simple bit of JavaScript feature detection:

if (
  "addEventListener" in window &&
  "localStorage" in window &&
  "querySelector" in document
) {
  // load the premium experience
}
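
Inside that if block, one common way to load the premium experience is to inject a script element for the enhanced application. A minimal sketch, with a hypothetical bundle path:

// Runs only when the browser cuts the mustard
var script = document.createElement('script');
script.src = '/js/enhanced-app.js'; // illustrative file name
script.async = true;
document.getElementsByTagName('head')[0].appendChild(script);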

If a browser passes this test, we say it cuts the mustard. Can you see what we did with the name?

There is a nice correlation between all the browsers that cut the mustard. They are all modern browsers with good HTML5 and CSS3 support. They also have good support for W3C specifications, so you can depend on small, modular JavaScript libraries like Zepto, Ender and jQuery 2. But best of all, the problematic browsers that have caused us issues for years — browsers like IE6, 7 and 8 — don’t cut the mustard, so they only get the base experience.

A big criticism of responsive web design is that it doesn’t decrease load time and actually makes websites less performant. This is true of many implementations of responsive web design, but it’s not an issue with RWD techniques themselves. You can try to polyfill all the browsers and give them the same experience. You can load hundreds of kilobytes over GPRS connections and force your users to wait until JavaScript is ready before accessing the content. Or you can use the natural ingredients of the web to build an experience that progressively enhances based on the capabilities of the web. Cutting the mustard is an implementation of responsive web design that doesn’t try to offer all browsers the same experience, because not all browsers are equal. It’s an implementation that recognizes the hostility of the web and tries to use the way browsers work by default to improve the chances of users getting to the content.