Hitting the hundred

July 2016

A guide to obtaining a perfect 100 in Google PageSpeed

Google, like many others, have recognised a worrying trend in web design. The complexity and weight of websites are going up, and our connection speeds are not. Or rather, they are, but not quickly enough, and with too much variation to be reliable.

In addition to this, Google have been using speed metrics to formulate their rankings for years now. It’s not only important for you, it’s important for your users.

As such, they have created a useful tool called PageSpeed Insights. This is part of a larger suite of tools created to help achieve response-time zen, and can be used to analyse and report on performance-related issues your site may be exhibiting.

A word of warning

Like snowflakes, all websites are different. You might have any number of performance issues that aren’t covered in this guide, and that’s fine. Most of the issues come with helpful advice — albeit a little technical at times — that you can follow to come up with your own solutions.

Another thing to note is that some content management systems, platforms or even servers might not allow you to perform some of the optimisations required. For instance, as of writing, I’m waiting for a PHP 7 compatible version of the superb W3 Total Cache plugin for my WordPress installation in order to get the final 2% based on HTML minification. True, I could use another plugin, but W3 comes with so many other options, it’s worth the wait.

With that said, these are the areas I’ll be focusing on for this guide:

  • Blocking resources (JS and CSS)
  • Inlining your critical CSS
  • Making use of Web Font Loader
  • Image optimisations (especially PNGs)
  • Leveraging browser caching

The render path

Browsers love to load files. Servers love to serve files. The internet, however, isn’t keen on either. Bandwidth utilisation and latency are the main causes of a website appearing to be slow — especially with today’s powerful mobile and desktop processors capable of handling almost anything thrown at them.

Every request your browser makes to a server makes your web page feel a little bit slower to load. With modern JS plugins, CSS frameworks and design heavy sites, these requests can easily hit over 100 even on a small site. All the while this is happening, your browser is patiently waiting to display the first word, and your visitor is impatiently considering the other website they heard about yesterday. If you’re an online store, you can see how this would be a problem.

Minification and combination of your CSS/JS can help a lot with this, but only takes you so far. The real trick is to give the user what they need first, and what your designers want second.

All browsers must follow the render path in order to display visible content. This will be the case for the foreseeable future. Your goal here is to make that path as short as possible. Patrick Sexton has written a great article explaining this, so I’ll leave the details to him. Feel free to come back once you’ve read up.

Make sense? Mostly? Good. Let’s optimise our sites.

Eliminating render blocking

“Eliminate render-blocking JavaScript and CSS in above-the-fold content” or, “Prioritize visible content”

Of all the nuts, this one is the toughest to crack reliably. If you’ve seen the above message, you’re not alone. Almost every site I’ve run through this tool — no matter how well optimised — has displayed it.

It’s actually two problems. The first concerns how you’re loading in the CSS/JS, and the second is about optimising delivery of the CSS rules themselves. We’ll cover both here.

Optimising the JavaScript

JavaScript has, by and large, thankfully remained an enhancement to sites. There are notable exceptions to the rule (Angular, React etc.), but usually the content is legible on a page before a line of JS has been executed.

Why on earth does a browser wait for it before showing you that content, then? Well, JavaScript can do almost anything, including things that change the way a page looks, which means the browser might need to re-think its rendering strategy.

Minification & concatenation

If, like me, you use some form of AMD/CommonJS module stack for your JavaScript, you’ll already be familiar with the benefits of concatenation. If you’re not, look into it. It can help massively with making your JS easier to manage as a single unit of predetermined dependencies.

Without libraries such as RequireJS or Browserify, minification is still possible and highly recommended — especially if we’re talking about making the PageSpeed tool happy. You can fairly easily implement some form of minification using a local task runner such as Grunt or Gulp.

Personally, I’m using a combination of Browserify for Node-style module loading and Grunt to concatenate and minify the JS.

A sample of my current Gruntfile.js is as follows:

browserify: {
    // development build, with source maps enabled for easier debugging
    dev: {
        options: {
            browserifyOptions: {
                debug: true
            }
        },
        files: {
            'js/global.js': ['js/core.js']
        }
    },
    // production build, written to the dist directory
    prod: {
        files: {
            'dist/js/global.js': ['js/core.js']
        }
    }
},

This makes use of the popular Browserify module in order to manage module dependencies within individual files, as well as concatenating into a single file (global.js) for development and production.
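
The minification half isn’t shown in that snippet. As a rough sketch of one way to handle it (assuming the grunt-contrib-uglify plugin is installed and loaded, rather than this being exactly what my Gruntfile contains), a task that minifies the production bundle in place could look like this:

uglify: {
    prod: {
        // grunt-contrib-uglify mangles and compresses by default, so no
        // extra options are needed for a basic minify
        files: {
            // minify the Browserify bundle in place once it has been built
            'dist/js/global.js': ['dist/js/global.js']
        }
    }
},

Running grunt browserify:prod uglify:prod would then produce a single, minified global.js ready for deployment.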

core.js is a simple loading script which takes in the rest of my JS and libraries in order to achieve the base functions:

(function() {
    // loadcss and its rel=preload polyfill are pulled in here so Browserify
    // bundles them alongside the main application module
    var cssrelpreload = require('./lib/cssrelpreload.js'),
        loadcss = require('./lib/loadcss.js'),
        app = require('./modules/app.js');

    app.init();
}());

And in case you were wondering, the start of my app.js, which initialises 90% of the site’s interactive functionality:

(function() {
    var $ = require("../lib/jquery-3.0.0.min.js"),
        hljs = require("../lib/highlight.pack.js"),
        modernizr = require("../lib/modernizr-2.7.1.min.js"),
        highlight = require("../lib/highlight.pack.js"),
        $doc = $(document.body),
        $win = $(window),
        app;
...

(Some of these libraries can be retrieved via npm, which makes their inclusion with Browserify even easier.)

All of these files convert to a single global.js file which is then included in the footer. Including JS in the footer, as close to the closing </body> tag as possible, allows the JS to interact with the DOM without having to rely on the document ready event or the super-slow onload. It also means the browser gets your markup before the JavaScript.

async/defer

In addition to combining the JavaScript, the async and defer attributes will come in handy. Simply put, async tells the browser not to interrupt the render path for this one file, and defer tells the browser “this can wait”.

Both of these attributes are applied to the global.js file in the footer:

<script src="/path/to/my/js/global.js" async defer></script>

Interestingly, PageSpeed doesn’t mind certain scripts being inline and/or synchronous, so in addition to the above tag, I also include a few lines of “pre-flight” JS that executes as quickly as possible. Included within this is the webfont handling, which is covered later.

With minification and use of async and defer, my footer area now looks like the following:

<script src="//ajax.googleapis.com/ajax/libs/webfont/1.6.16/webfont.js"></script>
<script id="preflight"> // preflight js, loading webfonts </script>

<script src="/path/to/dist/js/global.js" async defer></script>

Which was enough to keep Google — and modern browsers — happy.

Optimising the CSS

The CSS is slightly more convoluted. link tags support many attributes, but unfortunately, async and defer aren’t among them. There are other techniques, but by far the most effective and widely supported is still to use JavaScript.

“JavaScript!?”, you say? Yes, JavaScript. Relying on it does mean that browsers that can’t or won’t load JavaScript for whatever reason will end up dead in the water, with a page reminiscent of the early days of the internet. “Under construction” GIFs are sadly out of style these days, so there’s a quick workaround: <noscript> to the rescue, as usual. The full resulting code is as follows:

<style>
    /* critical css */
</style>

<link rel="preload" href="/my/styles/css/styles.css" as="style" onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="/my/styles/css/styles.css"></noscript>

There’s a few things going on here, so we’ll cover them bit by bit.

Inlining critical CSS

The first part is completely inline. Yes, the technique everyone recommended against back in the day. There was good reason for it, too. Above all, it was messy, harder to maintain and almost impossible to cache reliably. Some of these reasons are still true, but there’s inline, and there’s “inline”.

<style>
    /* critical css */
</style>

The CSS within the <style> tag isn’t really inline. It’s the result of a file called critical.scss, which contains the CSS affecting everything above a certain point on the homepage. Everything else is included within the link tags below.

My approach has been completely manual. There are pros and cons to this. The main con is that, as you can imagine, what might be critical for one page might not be for the next. Addy Osmani has created a fantastic Node module aptly named Critical, which will generate critical CSS for every page, but it comes with its own caveats. You’ll have to manage caching of the CSS output for every URL yourself, and there are issues with some CSS files (such as external font files).
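
For reference, here’s a minimal sketch of how Critical could be wired into a build step. It assumes the critical package has been installed via npm, and the option names have shifted between versions, so treat it as illustrative rather than gospel:

var critical = require('critical');

critical.generate({
    base: 'dist/',                 // directory the paths below are relative to
    src: 'index.html',             // page to analyse
    dest: 'index-critical.html',   // output file with the critical CSS inlined
    inline: true,
    width: 1300,                   // viewport used to decide what counts as
    height: 900                    // "above the fold"
});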

Each to their own, though – and whichever method you choose, the aim is to place within inline <style> tags the “above the fold” CSS that most browsers will see first.

Thankfully, Google and PageSpeed aren’t too strict about this. So long as you make a concerted effort at inlining a certain amount of head CSS, they’ll give you a good mark, and the general user experience will be that your site appears to load much faster, with styles almost instantly applied at little expense to the load time.

Asynchronously loading the rest

The remaining CSS still needs to be loaded, otherwise you’ll end up with a very pretty header and hero, but your content might not look so hot. Here’s where the Filament Group have already done the leg work for us. A module called loadCSS is used to sort-of polyfill the existing-but-not-yet-implemented preloading functionality for CSS. The idea here is that when the major browsers do implement this technique, the polyfill will become unnecessary but everything will (in theory) still work.

<link rel="preload" href="/my/styles/css/styles.css" as="style" onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="/my/styles/css/styles.css"></noscript>

I’m a fan of using polyfills for new technology instead of going in a different direction for this very reason. Specs change, though, so it pays to be wary of any new technique that isn’t yet implemented.

The basic process is:

  • CSS file is preloaded (i.e. requested but not implemented)
  • An onload event fires when the data has been retrieved
  • The CSS is then applied to the page (generally after the page has already displayed in some form)

If JavaScript isn’t or can’t be loaded, the <noscript> tag takes care of going back to basics. Google happy, browsers happy. Another tick for the PageSpeed checker.
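
If you’d rather kick the load off from JavaScript instead of markup (from the pre-flight script, say), loadCSS can also be called directly as a function. A minimal sketch, assuming the loadCSS function is available on the page (in my setup it’s bundled into global.js):

// request the main stylesheet without blocking rendering; loadCSS injects a
// <link> element and returns it, applying the styles once they arrive
loadCSS('/my/styles/css/styles.css');

// the optional third argument sets a media attribute; the print stylesheet
// here is purely illustrative
loadCSS('/my/styles/css/print.css', null, 'print');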

Using Web Font Loader

Let’s face it. Web fonts aren’t going anywhere. They’re just too darn pretty to avoid, and when we started telling big organisations that their brand could be maintained without using Flash or images, they positively leapt at the opportunity for their custom fonts to adorn every pixel of the company website.

There are so many techniques for loading typefaces, but many of them actually focus on prioritising the font files themselves, instead of making the page legible as quickly as possible.

FOUT (Flash of unstyled text) is real, and many agencies, corporations and brand champions hate it with a passion. Working in an agency myself, I empathise with them somewhat. Consumers are fickle, and to give a real-world example, even the colour of a sign being wrong is enough to make you think twice.

If for instance, you saw a can of Coke in a petrol station fridge, but the ‘Coke red’ was ever so slightly lighter than usual, you’d wonder if it was genuine. The same can apply to online brands, and affects everything from colour, to type, even to basic layout.

However, that being said, the message your company is trying to get across won’t mean squat if nobody can see it. What’s more dangerous than a brand being represented in the wrong typeface is a brand not being represented at all. This is where the Web Font Loader can help.

It’s a JavaScript-based solution (with no non-JS fallback, unfortunately) for loading fonts asynchronously, supporting the big font networks without getting in the way of browser rendering. It also supports locally hosted fonts.

Here’s my implementation for this very site:

// init webfont loader
WebFont.load({
    google: {
        families: [
            'Open Sans:300,300i,400',
            'Scope One'
        ]
    }
});

Pretty simple stuff, really. I’ve stuck this in the preflight.js file which is embedded straight into the markup. PageSpeed is happy, browsers are relatively happy.

There are more options to the Web Font Loader, including a nifty timeout which will trigger after a defined number of milliseconds if the fonts fail to load, as well as classes which you can use to mitigate the flash of unstyled text.
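
As a rough sketch of those options (the timeout value and the empty callbacks below are illustrative, not what this site actually ships):

WebFont.load({
    google: {
        families: ['Open Sans:300,300i,400', 'Scope One']
    },
    // give up after two seconds and let the fallback fonts stand
    timeout: 2000,
    // while loading, the <html> element carries a wf-loading class, which
    // your CSS can use to hide or restyle text and soften the FOUT;
    // wf-active and wf-inactive are applied once loading finishes
    loading: function() {},
    active: function() {},
    inactive: function() {}
});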

Optimising images

Images are easy to optimise. There are so many tools, CMS plugins and scripts that let you practically automate it. I’ll skip over JPEGs and GIFs if only to remind you: optimise your images! It’ll be better for your visitors, and cheaper for everyone involved.

PNGs

PNG is a relatively newer format than GIF/JPEG in terms of widespread use – it wasn’t until IE7/8 that we could reliably use it to its fullest, and even then it came with caveats.

Now we’re using PNGs all over the shop. Logos, backgrounds, layout elements and even photos are uploaded in the venerable format. However, as useful and flexible as the format is, it’s easy to end up with file sizes nearing a few hundred kilobytes when you assumed they would be a fraction of that. Example:

A cat. Found on tumblr - how-sarcastic.tumblr.com - 242kb

The single most powerful optimisation you can make for a lot of PNG images isn’t removal of metadata, or lossy compression – it’s a palette reduction. Software such as ImageAlpha will let you reduce the palette of a PNG, just like you can with GIFs, while retaining full alpha blending. The cat, after applying such treatment:

The same cat, 128 colours - 68kb

That’s a 72% smaller file with almost negligible quality loss.
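
Palette reduction can also be automated as part of a build. A rough sketch using grunt-contrib-imagemin with the imagemin-pngquant plugin (both hypothetical additions to my own Gruntfile, and their option formats vary between versions):

// at the top of Gruntfile.js
var pngquant = require('imagemin-pngquant');

// inside grunt.initConfig, alongside the browserify config
imagemin: {
    dist: {
        options: {
            // hand PNGs to pngquant for lossy palette reduction
            use: [pngquant({ quality: '65-80' })]
        },
        files: [{
            expand: true,
            cwd: 'images/',
            src: ['**/*.png'],
            dest: 'dist/images/'
        }]
    }
},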

Leveraging browser caching

It’s at this point I should mention that Google have provided automated PageSpeed modules for both nginx and Apache which can be installed and will do a lot of this work for you.

I’ll come clean – I’ve never installed either of them, and I likely never will except out of curiosity. I’m a firm believer in optimising sites through conscious decisions which will persevere through multiple projects and be echoed throughout future designs and implementations. The approach these modules take is more that of “install, configure and forget”. This could work for some sites, but you’ll have to take my word for it that making real improvements to the sites yourself will make you a better developer.

Optimising Apache

I’m using Apache on the server hosting this site, and as such, will cover the basics for leveraging browser caching through this software.

Browsers are fond of caching data. It means fewer round trips to servers and faster load times. However, browsers are also aware that they can’t just cache everything. This is where HTTP headers come in: little pieces of metadata sent by the server to tell a browser which resources on a site can be cached and, more importantly, for how long.

Google understand this is important, and penalise your site if you aren’t caching properly. Here’s how I’ve handled this in the .htaccess file for the site:

# Cache control headers
ExpiresActive On
ExpiresByType image/gif "access plus 30 days"
ExpiresByType image/jpeg "access plus 30 days"
ExpiresByType image/jpg "access plus 30 days"
ExpiresByType image/png "access plus 30 days"
ExpiresByType image/svg "access plus 30 days"
ExpiresByType image/svg+xml "access plus 30 days"
ExpiresByType image/x-icon "access plus 6 months"
ExpiresByType text/css "access plus 30 days"
ExpiresByType text/javascript "access plus 30 days"
ExpiresByType text/plain "access plus 30 days"
ExpiresByType application/javascript "access plus 30 days"
ExpiresByType application/json "access plus 30 days"

Essentially, it’s a lot of “keep this file around for x days” notices to the browser. It means browsers can serve many assets from their own caches without having to come back to my server first.
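
One caveat: those Expires directives depend on mod_expires being enabled on the server. A slightly more defensive version (a sketch, not a verbatim excerpt of my .htaccess) wraps them so the site doesn’t fall over with a 500 error where the module is missing:

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 30 days"
    ExpiresByType text/css "access plus 30 days"
    ExpiresByType application/javascript "access plus 30 days"
    # ...and so on for the remaining types
</IfModule>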

The one drawback to this is that when you update some CSS, an image, some JS etc, it might not show for everyone immediately. The only people that use the reload button are developers and users wondering if it’ll fix a specific problem. Nobody randomly reloads your website.

However, it keeps PageSpeed happy, and it keeps your bandwidth costs down.

The future

Well, that was a lot to get through. The future, however, could completely invalidate this entire guide. And you know what? That’s OK — the web is complicated enough as it is, with JavaScript concerning itself with stylesheets, shims upon shims, and server administration, all just to get around the issues created by the tools we’ve already been given.

Why can’t we just put the <script> tag in the head, where it feels more comfortable? Why do we need to complicate the <link> tag? Will we ever agree on the best way to load fonts?

Largely, the issues I’ve presented here will be mitigated — if not completely nullified — by the HTTP/2 spec. We’ll get much more power to give the browser hints about what content is needed up front, what can be delayed, and even to compress and combine certain assets on the fly.

Until then, we’ll have to stick with the tried and tested techniques.
