
An Ongoing Battle

The entire performance optimization was an exercise in figuring out just the right strategy for making the website faster while keeping the effort required for it well balanced. As we introduced more and more changes, we had to review and adjust our editorial workflow to ensure that everybody on the team had a clear roadmap and a solid workflow when publishing articles on the site.

When working with authors, we ask them to submit images, photos and illustrations at the largest possible resolution, either as high-resolution JPEGs, PNGs or SVGs. If an article requires JavaScript for an example, we make sure to defer the loading of that JavaScript until after the DOMContentLoaded event has fired. We process images with Grunt, generating variants depending on the context (usually four or five), and always produce losslessly optimized PNGs and progressive JPEGs at 80% quality, following a naming convention. We then generate markup for those images using srcset, and we review articles for performance before publishing to make sure that nothing blocks the rendering of an article and that all images are properly optimized. The goal is to have a PageSpeed score of at least 98 for every new article or page, both on mobile and desktop, and a start render time within one second on a moderate 3G connection.
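
To illustrate the idea, here is a minimal sketch of how srcset markup can be generated from such a naming convention. The srcsetMarkup helper, the widths and the sizes attribute are assumptions for illustration, not our actual Grunt setup.

```js
// Hypothetical helper: build <img> markup with srcset from a naming
// convention such as "sketch-preview-400.jpg", "sketch-preview-800.jpg", etc.
// The widths, file names and sizes attribute are assumptions for illustration.
function srcsetMarkup(baseName, widths, alt) {
  var srcset = widths.map(function (width) {
    return baseName + '-' + width + '.jpg ' + width + 'w';
  }).join(', ');

  // The smallest variant doubles as the fallback src for older browsers.
  return '<img src="' + baseName + '-' + widths[0] + '.jpg"' +
    ' srcset="' + srcset + '"' +
    ' sizes="(min-width: 800px) 50vw, 100vw"' +
    ' alt="' + alt + '">';
}

// Example: four variants of an article illustration.
console.log(srcsetMarkup('sketch-preview', [400, 800, 1200, 1600], 'Sketch preview'));
```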

When implementing new features, we follow a similar procedure, but we mostly tend to avoid JavaScript libraries and keep the priority list in mind for every new feature or adjustment. A good example is responsive tables. We load Filament Group’s Tablesaw JavaScript and CSS for responsive table behavior via custom fields in WordPress, but only if they are actually required on the page. Again, the JavaScript is loaded asynchronously, so it doesn’t block page rendering and becomes available once the main content has already started rendering.
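
As a rough sketch of this kind of conditional, non-blocking loading (the selector and file paths below are assumptions, not our exact implementation):

```js
// Sketch: load table enhancements only when the page actually contains one.
// The ".responsive-table" selector and the file paths are assumptions.
(function () {
  if (!document.querySelector('.responsive-table')) {
    return; // nothing to enhance on this page
  }

  // The stylesheet is injected via a <link>, so it doesn't block the
  // rendering of the critical CSS that is already inlined.
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/css/tablesaw.css';
  document.head.appendChild(link);

  // The script is loaded asynchronously, so it never blocks parsing.
  var script = document.createElement('script');
  script.src = '/js/tablesaw.js';
  script.async = true;
  document.head.appendChild(script);
})();
```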

Once you compartmentalize the entire layout into independent content blocks, you can list how all the different components affect performance on a page and identify what can be deferred. With such an overview at hand, keeping performance a high priority becomes a matter of doing things right the first time. This is exactly the principle we emphasize within the entire editorial team to ensure that we consider performance at every step, with every new article, page, feature or change that goes live on the website.

Because traffic varies a lot depending on how well articles perform, SEO and the time of the year (late winter and late summer usually aren’t heavy on traffic), we haven’t been able to precisely measure the impact of performance optimization on traffic. Traffic improved by 1.9% on average, in terms of visitors (+0.7%), actions (+2.1%) and average time per visit (+3%), but most importantly reader satisfaction has improved significantly, with readers applauding our efforts publicly and in numerous emails. We also noticed quite a remarkable technical improvement. By deferring and caching web fonts, inlining CSS and optimizing the critical rendering path for the first 14KB, we were able to achieve dramatic improvements in loading times. The start render time began hovering around one second for an uncached page on 3G and was around 700ms (including latency!) on subsequent loads.
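
As a simplified sketch of the deferred web font loading with a localStorage cache mentioned above (the stylesheet URL and cache key below are hypothetical, and a production version also needs cache versioning and better error handling):

```js
// Simplified sketch of deferred web font loading with a localStorage cache.
// The stylesheet URL and cache key are hypothetical assumptions.
(function () {
  var key = 'fonts-css-v1';
  var url = '/fonts/fonts-data-uri.css'; // CSS containing base64-encoded WOFF fonts

  function applyCSS(css) {
    var style = document.createElement('style');
    style.textContent = css;
    document.head.appendChild(style);
  }

  try {
    var cached = localStorage.getItem(key);
    if (cached) {
      applyCSS(cached); // repeat visits: fonts are applied immediately
      return;
    }
  } catch (e) {
    // localStorage unavailable (private mode, quota); fall through to the XHR path
  }

  // First visit: fetch the fonts after rendering has started and cache them.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    if (xhr.status === 200) {
      applyCSS(xhr.responseText);
      try { localStorage.setItem(key, xhr.responseText); } catch (e) { /* ignore quota errors */ }
    }
  };
  xhr.send();
})();
```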

We’ve been using WebPagetest a lot for running tests. Our waterfall chart46 has become better over time and reflects the priorities we had defined earlier.

On average, Smashing Magazine’s home page makes 45 HTTP requests and transfers around 440KB on the first uncached load. Because we heavily cache everything but ads, subsequent visits come down to around 15 HTTP requests and 180KB of traffic. The first-byte time is still around 300–600ms (which is a lot), yet the start render time is usually under 0.7s47 on a DSL connection in Amsterdam (for the very first, uncached load), and usually under 1.7s on slow 3G48. On a fast cable connection, the site starts rendering within 0.8s49, and on fast 3G within 1.1s50. Obviously, the results vary significantly depending on the first-byte time and the network configuration. The first-byte time is the only factor that introduces unpredictability into the loading process, and as such it has a decisive impact on overall performance.

Just by following basic guidelines from the colleagues mentioned above and Google’s recommendations, we were able to achieve a Google PageSpeed score51 of 97–99, both on desktop and on mobile. The score varies depending on the quality and the optimization level of the advertising assets displayed randomly in the sidebar.

After a few optimizations, we achieved a Google PageSpeed score of 99 on mobile52.

We got a Google PageSpeed score of 99 on desktop as well.

Scott Jehl has also published a wonderful article on the front-end techniques53 Filament Group uses to extract critical CSS, load it inline and then load the full CSS afterwards without download overhead. Patrick Hamann’s talk “Breaking News at 1000ms54” explains a few techniques that the Guardian uses to hit the speed index 1,000 mark. Both are definitely worth reading and watching, and are indeed quite similar to what we implemented on our site.
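
The general pattern looks roughly like the sketch below. The stylesheet path and cookie name are hypothetical, and the real implementations described in those articles handle more edge cases than this does.

```js
// Rough sketch of the pattern: critical CSS is inlined by the server on the
// first visit; the full stylesheet is loaded afterwards without blocking
// rendering, and a cookie tells the server to skip inlining next time.
// The cookie name and stylesheet path are hypothetical.
(function () {
  function loadFullCSS() {
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = '/css/main.css';
    document.head.appendChild(link);

    // The full CSS is now in the browser cache, so the server can send a
    // plain <link> instead of inlined critical CSS on subsequent requests.
    document.cookie = 'fullcss=true; max-age=31536000; path=/';
  }

  // Defer until the document has been parsed and the first render can happen.
  if (document.readyState === 'loading') {
    document.addEventListener('DOMContentLoaded', loadFullCSS);
  } else {
    loadFullCSS();
  }
})();
```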

Work To Be Done

While the results we have been able to achieve are quite satisfactory, there is still some work and experimentation to be done. The web fonts solution works fairly well, but we are aiming for a more bulletproof solution with WOFF2 delivery to capable browsers, similar to what Filament Group has achieved. We are also looking into back-end architecture and CDNs to ensure fast loading times from major locations worldwide. We haven’t mastered responsive images just yet, so over time we hope to learn more about how to achieve the best results within a minimal timeframe. Advertising images still often lack an appropriate max-age directive and Expires header, and Gravatar assets have very short Expires headers.

Our next challenge will be to launch a redesigned, responsive Smashing shop by following the very same principles we’ve used for Smashing Magazine. A lot of work has been done already, but many micro-optimizations still require close attention, most notably the checkout flow. Of course, we will publish the results on Smashing Magazine.

We’re playing around with DNS prefetching and HTML5 preloading to resolve DNS lookups way ahead of time. We also considered using InstantClick to preload the content of HTML pages before the user actually follows the links. However, we are being careful and hesitant here because we don’t want to create loading overhead for users on slow or expensive connections. Besides, we’ve added third-party metadata55 to make our articles easier to share: if you link to an article on Facebook, Facebook will pull an optimized image, a description and a title from our metadata, which is crafted individually for each article. We’ve also happily noticed that article pages scroll smoothly at 60fps56, even with relatively large images and ads.

Performance Optimization Strategy

To sum up, optimizing the performance of Smashing Magazine took quite some effort to figure out, yet many aspects of the optimization can be achieved very quickly. In particular, front-end optimization is quite straightforward as long as you have a shared understanding of priorities. Yes, that’s right: you optimize content delivery, and defer everything else.

Strategically speaking, the following could be your performance optimization roadmap:

•Remove blocking scripts from the <head> of the page.

•Identify and defer non-critical CSS and JavaScript.

•Identify critical CSS and load it inline in the <head>, and then load the full CSS after rendering. (Make sure to set a cookie to prevent inline styles from loading with every page load.) Also, keep reviewing and adjusting inline CSS to avoid unnecessary reflows and jumps.

•Keep all critical HTML and CSS to under 14KB, and aim for a speed index of under 1,000.

•Defer loading web fonts and store them in localStorage or AppCache.

•Consider using WOFF2 to further reduce latency and file size of web fonts.

•Implement a responsive images solution. The easiest way is to use srcset and generate the markup with an integrated CMS plugin, or with Grunt or Gulp.

•Replace JavaScript libraries with leaner JavaScript modules.

•Aim for a Google PageSpeed score of at least 98 for both mobile and desktop for every new page launched.

•Avoid unnecessary libraries and look into options for removing Respond.js and Modernizr (by cutting the mustard57, for example, placing browsers into discrete buckets; see the sketch after this list). Legacy browsers could get a fixed-width layout. Clever SVG fallbacks58 also exist.
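
Cutting the mustard boils down to a short capability test; the feature checks and the bundle name in this sketch are illustrative assumptions.

```js
// "Cutting the mustard": a small capability test splits browsers into a core
// and an enhanced experience instead of shipping polyfills to everyone.
// The feature checks and the bundle name are assumptions for illustration.
if ('querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window) {
  // Capable browsers get the enhanced JavaScript, loaded asynchronously.
  var script = document.createElement('script');
  script.src = '/js/enhancements.js';
  script.async = true;
  document.head.appendChild(script);
}
// Older browsers simply get the core, fixed-width experience.
```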

That’s basically it. By following these guidelines, you can make your responsive website really, really fast.

Conclusion

Finding the right strategy to make our website fast took a lot of experimentation, blood, sweat and cursing. Our discussions kept circling around next steps and critical and not-so-critical components, and sometimes we had to take three steps back in order to pivot in a different direction. But we learned a lot along the way, and we have a pretty clear idea of where we are heading now and, most importantly, how to get there.

There you have it: a little story about the things that worked well (and failed) on Smashing Magazine over the last year. If you notice any issues, please let us know on Twitter @smashingmag59 and we’ll hunt them down for good.

Ah, and thanks for reading all these years. It means a lot. You are quite smashing indeed. You should know that.

A big thank you to Andy Hume, Tim Kadlec, Ilya Pukhalski and Horia Dragomir for their fantastic support throughout the years. Also a big thank you to our front-end engineer, Marco, for his help with this chapter and for his thorough and tireless front-end work, which involved many experiments, failures and successes along the way. Finally, kind thanks to the Inpsyde team, the MediaTemple team and Florian Sander for technical support and implementations.