I think it’s always worth revisiting accomplishments like this—it’s absolutely astounding that we don’t even think about polio (or smallpox!) in our day-to-day lives, when just two generations ago it was something that directly affected everybody.
The annual number of people paralyzed by polio was reduced by over 99% in the last four decades.
I must admit I’ve been wincing a little every time I see a graph with a logarithmic scale in a news article about COVID-19. It takes quite a bit of cognitive work to translate to a linear scale and get the real story.
Tom makes an endpoint for generating QR codes so you don’t have to rely on the Google Charts API.
He also provides a good definition of “serverless”:
> Now, serverless is a very silly buzzword dreamed up by someone from the consultant class who loves coming up with terrible names, so I promise I won’t use it any further. Your code obviously runs on a server. It just means it runs on a server someone else manages.
>
> Amazon call it a ‘Lambda Function’. Google call it a ‘Cloud Function’. Microsoft Azure call it simply a ‘Function’. But none of those are very descriptive, because, well, anyone who writes in any kind of programming language generally writes functions pretty much all the time, in much the same way as anyone who writes English writes paragraphs, and we don’t call our blogging software “Cloud Paragraphs”. (Someone will now, I’m guessing.)
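Something like Tom’s endpoint might be sketched as a cloud function. The AWS Lambda-style handler and the npm qrcode package here are my assumptions, not details from Tom’s implementation:

```javascript
// A minimal sketch, not Tom's actual code: an AWS Lambda-style handler
// using the npm "qrcode" package to return a PNG for a given URL.
const QRCode = require('qrcode');

exports.handler = async (event) => {
  // e.g. GET /qr?url=https://example.com
  const url = (event.queryStringParameters || {}).url || 'https://example.com';
  const png = await QRCode.toBuffer(url, { width: 300 });
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'image/png' },
    body: png.toString('base64'),
    isBase64Encoded: true // API Gateway turns this back into binary
  };
};
```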
I mean, I’m not a huge fan of trying to get the damn things to work consistently—thanks, browsers—but I love the fact that they exist (although I’ve come across a worrying number of web developers who weren’t aware of their existence). Print stylesheets are one more example of the assumption-puncturing nature of the web: don’t assume that everyone will be reading your content on a screen. News articles, blog posts, recipes, lyrics …there are many situations where a well-considered print stylesheet can make all the difference to the overall experience.
You know what I don’t like? QR codes!
It’s not because they’re ugly, or because they’ve been over-used by the advertising industry in completely inappropriate ways. No, I don’t like QR codes because they aren’t an open standard. Still, I must grudgingly admit that they’re a convenient way of providing a shortcut to a URL (albeit a completely opaque one—you never know if it’s actually going to take you to the URL it promises or to a Rick Astley video). And now that the parsing of QR codes is built into iOS without the need for any additional application, the barrier to usage is lower than ever.
So, much as I might grit my teeth, QR codes and print stylesheets make for good bedfellows.
I picked up a handy tip from a Smashing Magazine article about print stylesheets a few years back. You can use a combination of @media print and generated content to provide a QR code for the URL of the page being printed out. Google’s Chart API provides a really handy shortcut for generating QR codes. Here’s a sketch of the idea (note that CSS can’t read the current page’s URL, so in practice you’d generate this rule per page; the URL below is illustrative):
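```css
/* A sketch, not production code: the chl parameter carries the
   URL-encoded address of the page; generate this rule per page. */
@media print {
  article::after {
    content: url('https://chart.googleapis.com/chart?cht=qr&chs=150x150&chl=https%3A%2F%2Fexample.com%2F');
  }
}
```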
Except that there’s no telling how long that will continue to work. Google being Google, they’ve deprecated the simple image chart API in favour of the over-engineered JavaScript alternative. So just as I recently had to migrate all my maps over to Leaflet when Google changed their Maps API out from under the feet of developers, the clock is ticking on when I’ll have to find an alternative to the Image Charts API.
I’ve been thinking about another potential use for QR codes. I’m preparing a new talk for An Event Apart Seattle. The talk is going to be quite practical—for a change—and I’m going to be encouraging people to visit some URLs. It might be fun to include the biggest possible QR code on a slide.
I’d better generate the images before Google shuts down that API.
We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.
When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.
I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools (or with a little scripting, as sketched after this list):
- overall file size (page weight + assets), and
- number of requests.
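If you’d rather script it than read the numbers out of dev tools, here’s a rough sketch using the Resource Timing API. One caveat: transferSize reports 0 for cross-origin resources that don’t send a Timing-Allow-Origin header, so treat the byte count as a lower bound.

```javascript
// Approximate request count and page weight from the Resource Timing API.
const resources = performance.getEntriesByType('resource');
const requests = resources.length + 1; // +1 for the HTML document itself
const bytes = resources.reduce((total, r) => total + (r.transferSize || 0), 0);
console.log(`${requests} requests, ~${Math.round(bytes / 1024)} KB transferred`);
```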
I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:
- time to first byte,
- time to first render,
- time to first meaningful paint, and
- time to first meaningful interaction.
There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.
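As a rough sketch, a couple of these moments can be read straight from the browser’s performance APIs. The standardised Paint Timing API exposes first-paint and first-contentful-paint; “first meaningful paint” and “first meaningful interaction” have no standard equivalents, which is why a tool has to estimate them.

```javascript
// Time to first byte, measured from the start of navigation.
const [nav] = performance.getEntriesByType('navigation');
console.log('1st byte:', Math.round(nav.responseStart), 'ms');

// First paint and first contentful paint: the closest standard
// proxies for "time to first render".
for (const paint of performance.getEntriesByType('paint')) {
  console.log(`${paint.name}:`, Math.round(paint.startTime), 'ms');
}
```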
Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.
By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.
|  | Factors |
| --- | --- |
| 1st byte | server speed, network speed |
| 1st render | network speed, critical path assets |
| 1st meaningful paint | network speed, font-loading strategy, image optimisation |
| 1st meaningful interaction | network speed, device processing power, JavaScript size |
So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.
|  | First visit factors | Repeat visit factors |
| --- | --- | --- |
| 1st byte | server speed, network speed | server speed, network speed, caching |
| 1st render | network speed, critical path assets | network speed, critical path assets, caching |
| 1st meaningful paint | network speed, font-loading strategy, image optimisation | network speed, font-loading strategy, image optimisation, caching |
| 1st meaningful interaction | network speed, device processing power, JavaScript size | network speed, device processing power, JavaScript size, caching |
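A quick illustration of that caching factor: it mostly comes down to HTTP response headers. For a fingerprinted static asset you might send something like this (illustrative values):

```
# Illustrative response headers for a fingerprinted asset like styles.abc123.css
Cache-Control: public, max-age=31536000, immutable
```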
Alright. Now it’s time to get some numbers for each of the four moments in time. I use WebPageTest for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.
Here are some numbers for adactio.com:
|  | First visit time | Repeat visit time |
| --- | --- | --- |
| 1st byte | 1.476 seconds | 1.215 seconds |
| 1st render | 2.633 seconds | 1.930 seconds |
| 1st meaningful paint | 2.633 seconds | 1.930 seconds |
| 1st meaningful interaction | 2.868 seconds | 2.083 seconds |
I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.
I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
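For reference, the usual pattern for that looks something like this. A sketch only, assuming a single external /styles.css; it’s the well-known preload trick for loading a stylesheet without blocking render:

```html
<head>
  <style>
    /* Critical styles inlined here */
  </style>
  <!-- Fetch the full stylesheet without blocking first render -->
  <link rel="preload" href="/styles.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/styles.css"></noscript>
</head>
```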
A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working order, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.
My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.
Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one listing first-visit targets, the other listing repeat-visit targets for each moment in time. Try to keep them realistic.
For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:
|  | First visit goal | Repeat visit goal |
| --- | --- | --- |
| 1st byte | 1.476 seconds | 1.215 seconds |
| 1st render | 2.133 seconds | 1.430 seconds |
| 1st meaningful paint | 2.133 seconds | 1.430 seconds |
| 1st meaningful interaction | 2.368 seconds | 1.583 seconds |
See how the 0.5 seconds saving cascades down into the other numbers?
Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.
|  | Priority |
| --- | --- |
| 1st byte | Medium |
| 1st render | High |
| 1st meaningful paint | Low |
| 1st meaningful interaction | Low |
Your goals and priorities may be quite different.
I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.
I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.
Danielle and I have been doing some front-end consultancy for a local client recently.
We’ve both been enjoying it a lot—it’s exhausting but rewarding work. So if you’d like us to come in and spend a few days with your company’s dev team, please get in touch.
I’ve certainly enjoyed the opportunity to watch Danielle in action, leading a workshop on refactoring React components in a pattern library. She’s incredibly knowledgeable in that area.
I’m clueless when it comes to React, but I really enjoy getting down to the nitty-gritty of browser features—HTML, CSS, and JavaScript APIs. Our skillsets complement one another nicely.
This recent work was what prompted my thoughts around the principles of robustness and least power. We spent a day evaluating a continuum of related front-end concerns: semantics, accessibility, performance, and SEO.
When it came to performance, a lot of the work was around figuring out the most suitable metric to prioritise:
- time to first byte,
- time to first render,
- time to first meaningful paint, or
- time to first meaningful interaction.
And that doesn’t even cover the more easily measurable numbers like:
- overall file size,
- number of requests, or
- PageSpeed Insights score.
One outcome was to realise that there’s a tendency (in performance, accessibility, or SEO) to focus on what’s easily measurable, not because it’s necessarily what matters, but precisely because it is easy to measure.
Then we got down to some nuts’n’bolts technology decisions. I took a step back and looked at the state of performance across the web. I thought it would be fun to rank the most troublesome technologies in order of tricksiness. I came up with a top four list.
Here we go, counting down from four to the number one spot…
4. Web fonts
Coming in at number four, it’s web fonts. Sometimes it’s the combined weight of multiple font files that’s the problem, but more often than not, it’s the perceived performance that suffers (mostly because of when the web fonts appear).
Fortunately there’s a straightforward question to ask in this situation: WWZD—What Would Zach Do?
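One of the simplest answers from that school of thought is the font-display descriptor. A sketch, with an assumed font family and file path:

```css
@font-face {
  font-family: 'Example Sans'; /* assumed name */
  src: url('/fonts/example-sans.woff2') format('woff2');
  /* Show fallback text immediately; swap in the web font when it arrives */
  font-display: swap;
}
```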
3. Images

In third place, it’s images. As with web fonts, it feels like the impact of images on performance can be handled, as long as you give them some time and attention.
2. Our JavaScript
Just missing out on making the top spot is the JavaScript that we send down the pipe to our long-suffering users. There’s nothing wrong with the code itself—I’m sure it’s very good. There’s just too damn much of it. And that’s a real performance bottleneck, especially on mobile.
1. Other people’s JavaScript

At number one with a bullet, it’s all the crap that someone else tells us to put on our websites. Analytics. Ads. Trackers. Beacons. “It’s just one little script”, they say. And then that one little script calls in another, and another, and another.
It’s so disheartening when you’ve poured time and energy into your web font loading strategy, optimising your images, and unbundling your JavaScript …only to have someone else’s JavaScript just shit all over your nice performance budget.
Here’s the really annoying thing: when I go to performance conferences, or participate in performance discussions, you know who’s nowhere to be found? The people making those third-party scripts.
The narrative around front-end performance is that it’s up to us developers to take responsibility for how our websites perform. But by far the biggest performance impact comes from third-party scripts.
There is a solution to this, but it’s not a technical one. We could refuse to add overweight (and in many cases, unethical) third-party scripts to the sites we build.
That’s the line that AMP takes:

> No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts.

This is A-okay with me.
The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web.
But how can we take that lesson from AMP and apply it to all our web pages? If we simply refuse to be the ones to add those third-party scripts, we get fired, and somebody else comes in who is willing to poison web pages with third-party scripts. There’s nothing to stop companies doing that.
Unless…
Suppose we were to all make a pact that we would stand in solidarity with any of our fellow developers in that sort of situation. A sort of joining-together. A union, if you will.
> There is power in a factory, power in the land, power in the hands of the worker, but it all amounts to nothing if together we don’t stand.