Journal tags: amp

Fidinpamp

If you’re a fan of gratuitous initialisms, you’ll love Google’s core web vitals. Just get a load of the obfuscation in the important-sounding metrics like CLS, FCP, LCP, and more.

To be fair to Google, this is a problem in the web performance world in general. Practitioners prefer to talk about TTFB rather than “time to first byte” even though both contain exactly the same number of syllables.

The big news in the web performance community this month is the arrival of a new initialism. INP sounds like one of those pseudo-scientific psychological profiles but it’s meant to stand for Interaction to Next Paint (even if they were to swear off pointless initialisms, you’d still have to pry Pointless Capitalisation from Google’s cold dead hands).

This new metric is a welcome one. It’s replacing first input delay. Sorry, First Input Delay, or FID, one of the few web vital initialisms that can be spoken as a word, making it a true acronym (fortunately fid’s successor, inp, also works as an acronym).

First Input Delay has long outstayed its welcome. It was always an outlier in the core web vitals. It didn’t seem to measure anything actually useful. I know it sounds like it’s measuring the delay until the user can interact with a web page, but when you dive into what it actually does, it’s a mess:

FID measures the time from when a user first interacts with a page (that is, when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction.

See that word “begin” in there? It’s doing a lot of work. First Input Delay doesn’t measure the lag between the user interaction and the browser response; it only measures the lag between the user interaction and the browser beginning to respond. The actual response could take ages, but that lag doesn’t get measured. Unlike the other core web vitals, this metric is very far removed from what actually matters to the user’s experience.
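
You can see the difference for yourself with the browser’s Event Timing API. Here’s a rough sketch (not an official snippet, and assuming a browser that supports the first-input entry type) that logs both the delay FID reports and the full duration of the interaction:

// A rough sketch using the Event Timing API.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // FID only covers the wait before the event handler *begins*…
    const fid = entry.processingStart - entry.startTime;
    // …whereas entry.duration runs on until the next paint after the interaction.
    console.log(`Input delay: ${fid}ms; full interaction: ${entry.duration}ms`);
  }
}).observe({ type: 'first-input', buffered: true });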

What the fid were they thinking? How the fid did this measurement ever get included in core web vitals in the first place?

Well, feel free to take what I’m about to say as pure gossip, but I have my sources, I trust ’em, and no, I’m not going to reveal ’em…

It’s because of AMP.

Remember Google AMP? An acronym so pointless they eventually just forgot it ever stood for anything?

The AMP project ended up doing incredible damage to Google’s developer relations. By colluding with the search team to privilege the appearance of AMP pages in the top news carousel, Google effectively blackmailed the entire publishing industry into using their format.

In the end, it didn’t work. It was a shit format. All they did was foster resentment and animosity:

AMP seems to have faded away. Most publishers have started dropping support, and even Google doesn’t seem to care much anymore.

It turns out that Google search wasn’t the only team infected by AMP. The core web vitals team also had to play ball.

Originally they had a genuinely useful metric for measuring the lag between input and response. But guess which pages did terribly? That’s right: AMP pages.

Rather than ship an actually-useful measurement, the core web vitals team instead had to include the broken First Input Delay, brainchild of a certain someone on the AMP team.

Now it all makes sense.

So good riddance to FID. Welcome to INP. And here’s hoping it won’t be much longer till we’re finally burying AMP.

Indie webbing

The past weekend’s Indie Web Camp Brighton was wonderful! Many thanks to Mark and Paul for all their work putting it together.

There was a great turn-out. It felt like the perfect time for an Indie Web Camp. There’s a real appetite for getting away from ever more extractive silos and staking claim to our own corners of the web. Most of the attendees were at their first ever Indie Web Camp.

Paul asked me to oversee the schedule planning on day one, which I was happy to do. We made sure that first-timers got first dibs on proposing sessions. In the end, every single session was proposed by new attendees.

Day two was all about putting ideas into practice: coding, designing, and writing on our own website. I’m always blown away by how much gets done in just one short day. Best of all is when there’s someone who starts the weekend without their own website but finishes with a live site. That happened again this time.

I spent the second day tinkering with something I started at Indie Web Camp Nuremberg in October. Back then, I got related posts working here on my journal; a list of suggested follow-up posts to read based on the tags of the current post.

I wanted to do the same for my links; show links related to the one I’m currently linking to. It didn’t take too long to get that up and running.

But then I thought about it some more and realised it would be good to also show blog posts related to the link. So I did that. Then I realised it would be really good to show related links under blog posts too.

So now, if everything’s working correctly, then at the end of this post you will not only see related blog posts I’ve previously written, but also links related to the content of this post.

It was a very inspiring weekend. There’s something about being in a room with other people working on their websites that makes me super productive.

While we were hacking away on day two, somebody mentioned that they still find it hard to explain the indie web to people.

“It’s having your own website”, I said.

But surely there’s more to it than that, they wondered.

Nope. If someone has their own website, then they’re part of the indie web. It doesn’t matter if that website is made with a complicated home-rolled tech stack or if it’s a Squarespace site.

What you do with your own website is entirely up to you. The technologies are just plumbing, whether it’s webmentions, RSS, or anything else. None of it is a requirement. Heck, even HTML is optional. If you want to put plain text files on your website, go for it. It’s your website.

Bookmarklets for testing your website

I’m at day two of Indie Web Camp Brighton.

Day one was excellent. It was really hard to choose which sessions to go to because they all sounded interesting. That’s a good problem to have.

I ended up participating in:

  • a session on POSSE,
  • a session on NFC tags,
  • a session on writing, and
  • a session on testing your website that was hosted by Ros

In that testing session I shared some of the bookmarklets I use regularly.

Bookmarklets? They’re bookmarks that sit in the toolbar of your desktop browser. Just like any other bookmark, they’re links. The difference is that these links begin with javascript: rather than http. That means you can put programmatic instructions inside the link. Click the bookmark and the JavaScript gets executed.

In my mind, there are two different approaches to making a bookmarklet. One kind of bookmarklet contains lots of clever JavaScript—that’s where the smart stuff happens. The other kind of bookmarklet is deliberately dumb. All they do is take the URL of the current page and pass it to another service—that’s where the smart stuff happens.

I like that second kind of bookmarklet.
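
For example, a dumb URL-passing bookmarklet for the HTML validator can be as simple as this (a sketch of the idea rather than the exact code behind the links below):

javascript:location.href='https://validator.w3.org/nu/?doc='+encodeURIComponent(location.href)

All the JavaScript does is grab the address of the current page and hand it off to the validator; the validator does the smart stuff.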

Here are some bookmarklets I’ve made. You can drag any of them up to the toolbar of your browser. Or you could create a folder called, say, “bookmarklets”, and drag these links up there.

Validation: This bookmarklet will validate the HTML of whatever page you’re on.

Validate HTML

Carbon: This bookmarklet will run the domain through the website carbon calculator.

Calculate carbon

Accessibility: This bookmarklet will run the current page through the Web Accessibility Evaluation Tools.

WAVE

Performance: This bookmarklet will take the current page and run it through PageSpeed Insights, which includes a Lighthouse test.

PageSpeed

HTTPS: This bookmarklet will run your site through the SSL checker from SSL Labs.

SSL Report

Headers: This bookmarklet will test the security headers on your website.

Security Headers

Drag any of those links to your browser’s toolbar to “install” them. If you don’t like one, you can delete it the same way you can delete any other bookmark.

Patterns Day and more

Patterns Day is exactly six weeks away—squee!

If you haven’t got your ticket yet, get one now. (And just between you and me, use the discount code JOINJEREMY to get a 10% discount.)

I’ve been talking to the speakers and getting very excited about what they’re going to be covering. It’s shaping up to be the perfect mix of practical case studies and big-picture thinking. You can expect talks on design system governance, accessibility, design tokens, typography, and more.

I’m hoping to have a schedule for the day ready by next week. It’s fun trying to craft the flow of the day. It’s like putting together a set list for a concert. Or maybe I’m just overthinking it and it really doesn’t matter because all the talks are going to be great anyway.

There are sponsors for Patterns Day now too. Thanks to Supernova and Etch you’re going to have bountiful supplies of coffee, tea and pastries throughout the day. Then, when the conference talks are done, we’ll head across the road to the Hare And Hounds for one of Luke Murphy’s famous geek pub quizzes, with a bar tab generously provided by Zero Height.

Now, the venue for Patterns Day is beautiful but it doesn’t have enough space to provide everyone with lunch, so you’re going to have an hour and a half to explore some of Brighton’s trendy lunchtime spots. I’ve put together a list of lunch options for you, ordered by proximity to the Duke of York’s. These are all places I can personally vouch for.

Then, after the conference day, and after the pub quiz, there’s Vitaly’s workshop the next day. I will most definitely be there feeding on Vitaly’s knowledge. Get a ticket if you want to join me.

But wait! That’s not all! Even after the conference, and the pub quiz, and the workshop, the nerdy fun continues on the weekend. There’s going to be an Indie Web Camp here in Brighton on the Saturday and Sunday after Patterns Day.

If you’ve been to an Indie Web Camp before, you know how inspiring and fun it is. If you haven’t been to one yet, you should definitely come along. It’s free! If you’ve got your own website, or if you’re even just thinking about having your own website, it’s a great opportunity to meet with like-minded people.

So that’s going to be four days of non-stop good stuff here in Brighton. I’m looking forward to seeing you then!

Indie Web Camp Nuremberg

After two days at border:none in Nuremberg, it was time for two days at Indie Web Camp, also in Nuremberg.

I hadn’t been to an Indie Web Camp since before The Situation. It felt very good to be back. I had almost forgotten how inspiring and productive they can be.

This one had a good turnout of around twenty people. We had ourselves an excellent first day of thought-provoking sessions. Then on day two it was time to put some of those ideas into action.

A little trick I like to do on the practical day is to have two tasks to attempt: one of them quite simple, and the other more ambitious. That way, as long as I get the simpler task done, I’ll always have at least something to demo at the end of the day.

This time I attempted three bits of home improvement on my website.

Autolinking Mastodon usernames

The first problem I set myself was ostensibly the simple one. But it involved regular expressions, so then I had two problems.

I wanted to automatically link up Mastodon usernames if I mentioned one in my notes. For example, during border:none I mentioned Brian’s mastodon username in a note: @briansuda@loðfíll.is.

That turned out to be an excellent test case. Those Icelandic characters made sure I wasn’t making unwarranted assumptions about character sets.

Here’s the regular expression I came up with. It’s not foolproof by any means. Basically it looks for anything matching the pattern @username@domain.
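
A sketch of that kind of regular expression (not necessarily the one I ended up with) might look like this, using Unicode property escapes so those Icelandic characters are covered:

// Rough sketch: match @user@domain patterns, including non-ASCII letters.
const mastodonPattern = /@([\p{L}\p{N}_]+)@([\p{L}\p{N}_.-]+\.\p{L}{2,})/gu;

function autolinkMastodon(text) {
  return text.replace(
    mastodonPattern,
    (match, user, server) => `<a href="https://${server}/@${user}">${match}</a>`
  );
}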

Good enough. Ship it.

Related posts

My next task was a bit more ambitious. It involved SQL queries, something I’m slightly better at than regular expressions but that’s a very low bar.

I wanted to show related posts when you get to the end of one of my blog posts.

I’ve been tagging all my blog posts for years so that’s the mechanism I used for finding similar posts. There’s probably a clever SQL statement that could do this, but I ended up brute-forcing it a bit.

I don’t feel too bad about the hacky clunky nature of my solution, because I cache blog post pages. That means only the first person to view the blog post (usually me) will suffer any performance impacts from my clunky database queries. After that everything’s available straight from a cached file.

Let’s say you’re reading a blog post of mine that I’ve tagged with ten different keywords. I make a separate SQL query for each keyword to get all the other posts that use that tag. Then it’s a matter of sorting through all the results.

I loop through the results of each tag and apply a score to the tagged post. If the post shares one tag with the post you’re looking at, it has a score of one. If it shares two tags, it has a score of two, and so on.

I decided that for a post to be considered related, it had to share at least three tags. I also decided to limit the list of related posts to a maximum of five.
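
In JavaScript-flavoured pseudocode, the scoring works something like this (a sketch with made-up names, not the actual server-side code):

// resultsByTag maps each of the current post's tags to the IDs of
// other posts that share that tag (one SQL query per tag).
function relatedPosts(resultsByTag, minSharedTags = 3, maxResults = 5) {
  const scores = new Map();
  for (const postIDs of Object.values(resultsByTag)) {
    for (const id of postIDs) {
      scores.set(id, (scores.get(id) || 0) + 1); // one point per shared tag
    }
  }
  return [...scores]
    .filter(([, score]) => score >= minSharedTags) // must share at least three tags
    .sort((a, b) => b[1] - a[1]) // most related first
    .slice(0, maxResults) // maximum of five
    .map(([id]) => id);
}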

It worked out pretty well. If you scroll down on my recent post about JavaScript, you’ll see links to related posts about JavaScript. If you read through a post on accessibility testing, you’ll find other posts about accessibility testing. If you make it to the end of this post about Mars colonisation you’ll see links to more posts about exploring our solar system.

Right now I’m just doing this for my blog but I’d like to do it for my links too. A job for a future Indie Web Camp.

Link rot

I was very inspired by Remy’s recent post on how he’s tackling link rot on his site. I wanted to do the same for mine.

On the first day at Indie Web Camp I led a session on link rot to gather ideas and alternative approaches. We had a really good discussion, though it’s always worth bearing in mind that there’ll never be a perfect solution. There’ll always be some false positives and some false negatives.

The other Jeremy at Indie Web Camp Nuremberg blogged about the session. Sebastian Greger was attending remotely and the session inspired him to spend the second day also tackling link rot.

In the end I decided to stick with Remy’s two-pronged approach:

  1. a client-side script that—as a progressive enhancement—intercepts outbound links and re-routes them to
  2. a server-side script that redirects to the Internet Archive if the link is broken.

Here’s the JavaScript I wrote for the first part.

It’s very similar to Remy’s but with one little addition. I check to see if the clicked link is inside an h-entry and if it is, I pass on the date from the post’s dt-published value.
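
Roughly speaking, the script does something like this (a sketch of the idea; the /loiter endpoint name is made up to stand in for my real redirector):

// Intercept outbound links and re-route them through a server-side redirector.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[href^="http"]');
  if (!link || link.href.startsWith(location.origin)) return; // internal links pass through
  // If the link is inside an h-entry, pass on the post's dt-published value.
  const entry = link.closest('.h-entry');
  const published = entry?.querySelector('.dt-published')?.getAttribute('datetime');
  const redirect = new URL('/loiter', location.origin); // hypothetical endpoint
  redirect.searchParams.set('url', link.href);
  if (published) redirect.searchParams.set('date', published);
  event.preventDefault();
  location.href = redirect.href;
});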

Here’s the PHP I wrote for the server-side redirector. The comments tell the story of what the code is doing:

  • Check that the request is coming from my site.
  • There also has to be a URL provided in the query string.
  • Make a very quick curl request to get the response headers from the URL. The time limit is set to 1 second.
  • If there was any error (like a time out), give up and go to the URL.
  • Pick the response headers apart to get the HTTP status code.
  • If the response is OK, go to the URL.
  • If the response is a redirect, go around again but this time use the redirect URL.
  • Construct the archive.org search endpoint.
  • If we have a date, provide it. Otherwise ask for the latest snapshot.
  • Ping that archive.org URL. This time there’s no time limit; this might take a while.
  • If there’s an archived copy, redirect to that.
  • There’s no archived copy. Give up and go to the URL anyway.

Not perfect by any means, but it works for the most common cases of link rot.
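
The actual code is PHP, but the flow translates into something like this JavaScript sketch (the archive.org availability API is my assumption about the endpoint; the real script may differ):

// Rough translation of the flow described above, not the actual PHP.
async function reroute(url, date) {
  try {
    // Quick request with a one-second time limit; redirects are followed automatically.
    const response = await fetch(url, { method: 'HEAD', redirect: 'follow', signal: AbortSignal.timeout(1000) });
    if (response.ok) return url; // the link still works: go to the URL
  } catch (error) {
    return url; // any error (like a timeout): give up and go to the URL
  }
  // Otherwise ask archive.org for a snapshot, closest to the date if we have one.
  const endpoint = new URL('https://archive.org/wayback/available');
  endpoint.searchParams.set('url', url);
  if (date) endpoint.searchParams.set('timestamp', date.replace(/\D/g, ''));
  const data = await (await fetch(endpoint)).json(); // no time limit; this might take a while
  return data.archived_snapshots?.closest?.url || url; // no archived copy: go to the URL anyway
}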

For the demo at the end of the day I went back into my archive of over 10,000 links and plucked out some old posts, like this one from December 2005. It takes a little while to do the rerouting but eventually you get to see the archived version from the same time period as when I linked to it.

Here’s another link from 2005. Here’s another. Those links are broken now, but with a little patience, you’ll still get to read them on the Internet Archive.

The Internet Archive’s wayback machine really is a gift. I can’t imagine how it would be even remotely possible to try to address link rot on my site without archive.org.

I will continue to donate money to the Internet Archive and I encourage you to do the same.

Decision time

I’ve always associated good design with thoughtfulness. Like, I should be able to point to any element in an interface and the designer should be able to tell me the reasons it’s there. Those reasons may be rooted in user needs or aesthetics or some other consideration, but the point is that there’s a justification for it. Justify every pixel!

But I’ve come to realise that this is a bit reductionist. Now when I point at an interface element, I still expect the designer to be able to justify its inclusion, but I’d also like to know the trade-offs that were made.

Suppose there’s a large hero image. I’m sure the designer would have no problem justifying its inclusion on the basis of impact and the emotional heft it delivers. But did they also understand the potential downsides? Were they aware of the performance implications of including a large image?

I hope the answer to both questions is yes. They understood the costs, but they decided that, on balance, the positives outweighed the negatives.

When it comes to the positives, universal principles of design often apply. Colour theory, typography, proximity, and so on. But the downsides tend to be specific to the medium that the design is delivered in.

Let’s say you’re designing for print. You want to include an extra typeface just for footnotes. No problem. There isn’t really a downside. In print, you can use all the typefaces you want. But if this were for the web, then the calculation would be different. Every extra typeface comes with a performance penalty. A decision that might be justified in one medium might not work in another medium.

It works both ways; on the web you can use all the colours you want, without incurring any penalties, but in print—depending on the process you’re using—you might have to weigh up that decision very differently.

From this perspective, every design decision is like a balance sheet. A good web designer understands the benefits and the costs behind each decision they make.

It’s a similar story when it comes to web development. Heck, we even have the term “tech debt” to describe decisions that we know aren’t for the best in the long term.

In fact, I’d say that consideration of the long-term effects is something that should play a bigger part in technical decisions.

When we’re weighing up the pros and cons of using a particular tool, we have a tendency to think in the here and now. How might this help me right now? How might this hinder me right now?

But often a decision that delivers short-term gain may well end up delivering long-term pain.

Alexander Petros describes this succinctly:

Reopen a node repository after 3 months and you’ll find that your project is mired in a flurry of security warnings, backwards-incompatible library “upgrades,” and a frontend framework whose cultural peak was the exact moment you started the project and is now widely considered tech debt.

When I wrote about making the Patterns Day website I described my process as doing it “the long hard stupid way”—a term that Frank coined in a talk he gave a few years back. But perhaps my hands-on approach is only long, hard and stupid in the short term. With each passing year, the codebase will retain a degree of readability and accessibility that I would’ve sacrificed had I depended on automated build processes.

Robin Berjon puts this into the historical perspective of Taylorism and Luddism:

Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy.

Or as Marshall McLuhan put it:

Every extension is also an amputation.

…which is fine as long as the benefits of the extension outweigh the costs of the amputation. My worry is that, when it comes to evaluating technology for building on the web, we aren’t considering the longer-term costs.

Maintenance matters. With the passing of time, maintenance matters more and more.

Maybe we avoid thinking about the long-term costs because it would lead to decision paralysis. That’s understandable. But I take comfort from some words of wisdom on the web from the 1990s. Tim Berners-Lee’s style guide for hypertext:

Because hypertext is potentially unconstrained you are a little daunted. Do not be. You can write a document as simply as you like. In many ways, the simpler the better.

Relative times

Last week Phil posted a little update about his excellent site, ooh.directory:

If you’re in the habit of visiting the Recently Updated Blogs page, and leaving it open, the times when each blog was updated will now keep up with the relentless passing of time.

Does that make sense? “3 minutes ago” will change to “4 minutes ago” and so on and on and on, until you refresh the page.

I thought that was a nice little addition, and I immediately thought of The Session. There are time elements all over the site with relative times as the text content: 2 minutes ago, 7 hours ago, 1 year ago, and so on. Those strings of text are generated on the server. But I figured it would be a nice enhancement to periodically update them in the browser after the page has loaded.

I viewed source to see how Phil was doing it. The code is nice and short, using a library called Day.js with a plug-in for relative time.

“Hang on”, I thought, “isn’t there some web standard for doing this kind of thing?” I had a vague memory of some JavaScript API for formatting dates and times.

Sure enough, we’ve now got the Intl.RelativeTimeFormat object. It’s got browser support across the board.

Here’s the code I wrote.

I’ve got a function that loops through all the time elements with datetime attributes. It compares the current timestamp to that value to get the elapsed time. Then that’s formatted using the format() method and output as innerText.

You need to tell the format() method which units you want to use: seconds, minutes, hours, days, etc. So there’s a little bit of looping to figure out which unit is most appropriate. If the elapsed time is less than a minute, use seconds. If the elapsed time is less than an hour, use minutes. If the elapsed time is less than a day, use hours. You get the idea.

It’s a pity there isn’t some kind of magic unit like “auto” to do this, but it’s not much extra code to figure it out.

Anyway, that function runs periodically using setInterval(). I’ve set it to run every 30 seconds in my gist. On The Session I’ve set it to one minute.

You’ll notice that I’m grabbing all the relevant time elements—using document.querySelectorAll('time[datetime]')—every time the function is run. That may seem inefficient. Couldn’t I just grab them once and then keep them stored as an array? But I want this to work even if the page contents have been updated with Ajax. (Do people even say “Ajax” any more? Get off my lawn, you pesky kids!)

I think I’ve written this code in an abstract way so that you should be able to drop it into any web page. For the calculations to work, make sure your datetime attributes include timezone information; if there’s no timezone info, UTC is assumed.
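
Putting all of that together, the approach looks something like this rough reconstruction (not my exact code, but the same idea):

// Update every time[datetime] element with a relative time string.
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

function updateRelativeTimes() {
  const now = Date.now();
  document.querySelectorAll('time[datetime]').forEach((element) => {
    const elapsed = (new Date(element.getAttribute('datetime')).getTime() - now) / 1000;
    // Figure out which unit is most appropriate for the elapsed time.
    const units = [['year', 31536000], ['month', 2628000], ['day', 86400], ['hour', 3600], ['minute', 60], ['second', 1]];
    for (const [unit, seconds] of units) {
      if (Math.abs(elapsed) >= seconds || unit === 'second') {
        element.innerText = rtf.format(Math.round(elapsed / seconds), unit);
        break;
      }
    }
  });
}

if ('RelativeTimeFormat' in Intl) {
  updateRelativeTimes();
  setInterval(updateRelativeTimes, 60000); // every minute, as on The Session
}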

This was a fun little piece of functionality to play around with. Now I know a little more about this Intl.RelativeTimeFormat object. The way I’m using it is a classic example of progressive enhancement. If a browser doesn’t support it, or if my code breaks, it’s no big deal. The functionality is a little bonus that almost nobody will notice anyway. Just a small delighter …if you’re the kind of person who finds it delightful when relative time strings automatically update.

Eventing

In-person events are like buses. You go two years without one and then three come along at once.

My buffer is overflowing from experiencing three back-to-back events. Best of all, my participation was different each time.

First of all, there was Leading Design New York, where I was the host. The event was superb, although it’s a bit of a shame I didn’t have any time to properly experience Manhattan. I wasn’t able to do any touristy things or meet up with my friends who live in the city. Still the trip was well worth it.

Right after I got back from New York, I took the train to Edinburgh for the Design It Build It conference where I was a speaker. It was a good event. I particularly enjoyed Rafaela Ferro’s talk on accessibility. The last time I spoke at DIBI was 2011(!) so it was great to make a return visit. I liked that the audience was seated cabaret style. That felt safer than classroom-style seating, allowing more space between people. At the same time, it felt more social, encouraging more interaction between attendees. I met some really interesting people.

I got back from Edinburgh just in time for UX Camp Brighton on the weekend, where I was an attendee. I felt like a bit of a moocher not giving a presentation, but I really, really enjoyed every session I attended. It’s been a long time since I’ve been at a Barcamp-style event—probably the last Indie Web Camp I attended, whenever that was. I’d forgotten how well the format works.

But even with all these in-person events, online events aren’t going anywhere anytime soon. Yesterday I started hosting the online portion of Leading Design New York and I’ll be doing it again today. The post-talk discussions with Julia and Lisa are lots of fun!

So in the space of just a couple of weeks I’ve been a host, a speaker, and an attendee. Now it’s time for me to get my head back into one other event role: conference curator. No more buses/events are on the way for the next while, so I’m going to be fully devoted to organising the line-up for UX London 2022. Exciting!

Writing on web.dev

Chrome Dev Summit kicked off yesterday. The opening keynote had its usual share of announcements.

There was quite a bit of talk about privacy, which sounds good in theory, but then we were told that Google would be partnering with “industry stakeholders.” That’s probably code for the kind of ad-tech sharks that have been making a concerted effort to infest W3C groups. Beware.

But once Una was on-screen, the topics shifted to the kind of design and development updates that don’t have sinister overtones.

My favourite moment was when Una said:

We’re also partnering with Jeremy Keith of Clearleft to launch Learn Responsive Design on web.dev. This is a free online course with everything you need to know about designing for the new responsive web of today.

This is what’s been keeping me busy for the past few months (and for the next month or so too). I’ve been writing fifteen pieces—or “modules”—on modern responsive web design. One third of them are available now at web.dev/learn/design:

  1. Introduction
  2. Media queries
  3. Internationalization
  4. Macro layouts
  5. Micro layouts

The rest are on their way: typography, responsive images, theming, UI patterns, and more.

I’ve been enjoying this process. It’s hard work that requires me to dive deep into the nitty-gritty details of lots of different techniques and technologies, but that can be quite rewarding. As is often said, if you truly want to understand something, teach it.

Oh, and I made one more appearance at the Chrome Dev Summit. During the “Ask Me Anything” section, quizmaster Una asked the panelists a question from me:

Given the court proceedings against AMP, why should anyone trust FLOC or any other Google initiatives ostensibly focused on privacy?

(Thanks to Jake for helping craft the question into a form that could make it past the legal department but still retain its spiciness.)

The question got a response. I wouldn’t say it got an answer. My verdict remains:

I’m not sure that Google Chrome can be considered a user agent.

The fundamental issue is that you’ve got a single company that’s the market leader in web search, the market leader in web advertising, and the market leader in web browsers. I honestly believe all three would function better—and more honestly—if they were separate entities.

Monopolies aren’t just damaging for customers. They’re damaging for the monopoly too. I’d love to see Google Chrome compete on being a great web browser without having to also balance the needs of surveillance-based advertising.

Resigning from the AMP advisory committee

Inspired by Terence Eden’s example, I applied for membership of the AMP advisory committee last year. To my surprise, my application was successful.

I’ve spent the time since then participating in good faith, but I can’t do that any longer. Here’s what I wrote in my resignation email:

Hi all,

As mentioned at the end of the last call, I’m stepping down from the AMP advisory committee.

I can’t in good faith continue to advise on the AMP project for the OpenJS Foundation when it has become clear to me that AMP remains a Google product, with only a subset of pieces that could even be considered open source.

If I were to remain on the advisory committee, my feelings of resentment about this situation would inevitably affect my behaviour. So it’s best for everyone if I step away now instead of descending into outright sabotage. It’s not you, it’s me.

I’d like to thank the OpenJS Foundation for allowing me to participate. It’s been an honour to watch Tobie and Jory in action.

I wish everyone well and I hope that the advisory committee can successfully guide the AMP project towards a happy place where it can live out its final days in peace.

I don’t have a replacement candidate to nominate but I’ll ask around amongst other independent sceptical folks to see if there’s any interest.

All the best,

Jeremy

I wrote about the fundamental problem with Google AMP when I joined the advisory committee:

This is an interesting time for AMP …whatever AMP is.

See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is.

There’s the collection of web components. If that were all AMP is, it would be a very straightforward project, similar to other collections of web components (like Polymer). But then there’s the concept of validation. The validation comes from a set of rules, defined by Google. And there’s the AMP cache, or more accurately, Google hosting.

Only one piece of that trinity—the collection of web components—is eligible for the label of being open source, and even that’s a stretch considering that most of the contributions come from full-time Google employees. The other two parts are firmly under Google’s control.

I was hoping it was a marketing problem. We spent a lot of time on the advisory committee trying to figure out ways of making it clearer what AMP actually is. But it was a losing battle. The phrase “the AMP project” is used to cover up the deeply intertwingled nature of its constituent parts. Bits of it are open source, but most of it is proprietary. The OpenJS Foundation doesn’t seem like a good home for a mostly-proprietary project.

Whenever a representative from Google showed up at an advisory committee meeting, it was clear that they viewed AMP as a Google product. I never got the impression that they planned to hand over control of the project to the OpenJS Foundation. Instead, they wanted to hear what people thought of their project. I’m not comfortable doing that kind of unpaid labour for a large profitable organisation.

Even worse, Google representatives reminded us that AMP was being used as a foundational technology for other Google products: stories, email, ads, and even some weird payment thing in native Android apps. That’s extremely worrying.

While I was serving on the AMP advisory committee, a coalition of attorneys general filed a suit against Google for anti-competitive conduct:

Google designed AMP so that users loading AMP pages would make direct communication with Google servers, rather than publishers’ servers. This enabled Google’s access to publishers’ inside and non-public user data.

We were immediately told that we could not discuss an ongoing court case in the AMP advisory committee. That’s fair enough. But will it go both ways? Or will lawyers acting on Google’s behalf be allowed to point to the AMP advisory committee and say, “But AMP is an open source project! Look, it even resides under the banner of the OpenJS Foundation.”

If there’s even a chance of the AMP advisory committee being used as a Potemkin village, I want no part of it.

But even as I’m noping out of any involvement with Google AMP, my parting words have to be about how impressed I am with the OpenJS Foundation. Jory and Tobie have been nothing less than magnificent in their diplomacy, cat-herding, schedule-wrangling, timekeeping, and other organisational superpowers that I’m crap at.

I sincerely hope that Google isn’t taking advantage of the OpenJS Foundation’s kind-hearted trust.

Doing the right thing for the wrong reasons

I remember trying to convince people to use semantic markup because it’s good for accessibility. That tactic didn’t always work. When it didn’t, I would add “By the way, Google’s searchbot is indistinguishable from a screen-reader user so semantic markup is good for SEO.”

That usually worked. It always felt unsatisfying though. I don’t know why. It doesn’t matter if people do the right thing for the wrong reasons. The end result is what matters. But still. It never felt great.

It happened with responsive design and progressive enhancement too. If I couldn’t convince people based on user experience benefits, I’d pull up some official pronouncement from Google recommending those techniques.

Even AMP, a dangerously ill-conceived project, has one very handy ace in the hole. You can’t add third-party JavaScript cruft to AMP pages. That’s useful:

Beleaguered developers working for publishers of big bloated web pages have a hard time arguing with their boss when they’re told to add another crappy JavaScript tracking script or bloated library to their pages. But when they’re making AMP pages, they can easily refuse, pointing out that the AMP rules don’t allow it. Google plays the bad cop for us, and it’s a very valuable role.

AMP is currently dying, which is good news. Google have announced that core web vitals will be used to boost ranking instead of requiring you to publish in their proprietary AMP format. The really good news is that the political advantage that came with AMP has also been ported over to core web vitals.

Take user-hostile obtrusive overlays. Perhaps, as a conscientious developer, you’ve been arguing for years that they should be removed from the site you work on because they’re so bad for the user experience. Perhaps you have been met with the same indifference that I used to get regarding semantic markup.

Well, now you can point out how those annoying overlays are affecting, for example, the cumulative layout shift for the site. And that number is directly related to SEO. It’s one thing for a department to over-ride UX concerns, but I bet they’d think twice about jeopardising the site’s ranking with Google.

I know it doesn’t feel great. It’s like dealing with a bully by getting an even bigger bully to threaten them. Still. Needs must.

Numbers

Core web vitals from Google are the ingredients for an alphabet soup of exclusionary initialisms. But once you get past the unnecessary jargon, there’s a sensible approach underpinning the measurements.

From May—no, June—these measurements will be a ranking signal for Google search so performance will become more of an SEO issue. This is good news. This is what Google should’ve done years ago instead of pissing up the wall with their dreadful and damaging AMP project that blackmailed publishers into using a proprietary format in exchange for preferential search treatment. It was all done supposedly in the name of performance, but in reality all it did was antagonise users and publishers alike.

Core web vitals are an attempt to put numbers on user experience. This is always a tricky balancing act. You’ve got to watch out for the McNamara fallacy. Harry has already started noticing this:

A new and unusual phenomenon: clients reluctant (even refusing) to fix performance issues unless they directly improve Vitals.

Once you put a measurement on something, there’s a danger of focusing too much on the measurement. Chris is worried that we’re going to see tips’n’tricks for gaming core web vitals:

This feels like the start of a weird new era of web performance where the metrics of web performance have shifted to user-centric measurements, but people are implementing tricky strategies to game those numbers with methods that, if anything, slightly harm user experience.

The map is not the territory. The numbers are a proxy for user experience, but it’s notoriously difficult to measure intangible ideas like pain and frustration. As Laurie says:

This is 100% the downside of automatic tools that give you a “score”. It’s like gameification. It’s about hitting that perfect score instead of the holistic experience.

And Ethan has written about the power imbalance that exists when Google holds all the cards, whether it’s AMP or core web vitals:

Google used its dominant position in the marketplace to force widespread adoption of a largely proprietary technology for creating websites. By switching to Core Web Vitals, those power dynamics haven’t materially changed.

We would do well to remember:

When you measure, include the measurer.

But if we’re going to put numbers to user experience, the core web vitals are a pretty good spread of measurements: largest contentful paint, cumulative layout shift, and first input delay.

(If you prefer using initialisms, remember that CFP is Certified Financial Planner, CLS is Community Legal Services, and FID is Flame Ionization Detector. Together they form CWV, Catholic War Veterans.)

Ampvisory

I was very inspired by something Terence Eden wrote on his blog last year. A report from the AMP Advisory Committee Meeting:

I don’t like AMP. I think that Google’s Accelerated Mobile Pages are a bad idea, poorly executed, and almost-certainly anti-competitive.

So, I decided to join the AC (Advisory Committee) for AMP.

Like Terence, I’m not a fan of Google AMP—my initially positive reaction to it soured over time as it became clear that Google were blackmailing publishers by privileging AMP pages in Google Search. But all I ever did was bitch and moan about it on my website. Terence actually did something.

So this year I put myself forward as a candidate for the AMP advisory committee. I have no idea how the election process works (or who does the voting) but thanks to whoever voted for me. I’m now a member of the AMP advisory committee. If you look at that blog post announcing the election results, you’ll see the brief blurb from everyone who was voted in. Most of them are positively bullish on AMP. Mine is not:

Jeremy Keith is a writer and web developer dedicated to an open web. He is concerned that AMP is being unfairly privileged by Google’s search engine instead of competing on its own merits.

The good news is that my main beef with AMP is already being dealt with. I wanted exactly what Terence said:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

That’s happening as of May of this year. Just as well—the AMP advisory committee have absolutely zero influence on Google search. I’m not sure how much influence we have at all really.

This is an interesting time for AMP …whatever AMP is.

See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is. At the outset, it seemed pretty straightforward. AMP is a format. It has a doctype and rules that you have to meet in order to be “valid” AMP. Part of that ruleset involved eschewing HTML elements like img and video in favour of web components like amp-img and amp-video.

That messaging changed over time. We were told that AMP is the collection of web components. If that’s the case, then I have no problem at all with AMP. People are free to use the components or not. And if the project produces performant accessible web components, then that’s great!

But right now it’s not at all clear which AMP people are talking about, even in the advisory committee. When we discuss improving AMP, do we mean the individual components or the set of rules that qualify an AMP page being “valid”?

The use-case for AMP-the-format (as opposed to AMP-the-library-of-components) was pretty clear. If you were a publisher and you wanted to appear in the top stories carousel in Google search, you had to publish using AMP. Just using the components wasn’t enough. Your pages had to be validated as AMP-the-format.

That’s no longer the case. From May, pages that are fast enough will qualify for the top stories carousel. What will publishers do then? Will they still maintain separate AMP-the-format pages? Time will tell.

I suspect publishers will ditch AMP-the-format, although it probably won’t happen overnight. I don’t think anyone likes being blackmailed by a search engine:

An engineer at a major news publication who asked not to be named because the publisher had not authorized an interview said Google’s size is what led publishers to use AMP.

The pre-rendering (along with the lightning bolt) that happens for AMP pages in Google search might be a reason for publishers to maintain their separate AMP-the-format pages. But I suspect publishers don’t actually think the benefits of pre-rendering outweigh the costs: pre-rendered AMP-the-format pages are served from Google’s servers with a Google URL. If anything, I think that publishers will look forward to having the best of both worlds—having their pages appear in the top stories carousel, but not having their pages hijacked by Google’s so-called-cache.

Does AMP-the-format even have a future without Google search propping it up? I hope not. I think it would make everything much clearer if AMP-the-format went away, leaving AMP-the-collection-of-components. We’d finally see these components being evaluated on their own merits—usefulness, performance, accessibility—without unfair interference.

So my role on the advisory committee so far has been to push for clarification on what we’re supposed to be advising on.

I think it’s good that I’m on the advisory committee, although I imagine my opinions could easily be dismissed given my public record of dissent. I may well be fooling myself though, like those people who go to work at Facebook and try to justify it by saying they can accomplish more from inside than outside (or whatever else they tell themselves to sleep at night).

The topic I’ve volunteered to help with is somewhat existential in nature: what even is AMP? I’m happy to spend some time on that. I think it’ll be good for everyone to try to get that sorted, regardless of how you feel about the AMP project.

I have no intention of giving any of my unpaid labour towards the actual components themselves. I know AMP is theoretically open source now, but let’s face it, it’ll always be perceived as a Google-led project so Google can pay people to work on it.

That said, I’ve also recently joined a web components community group that Lea instigated. Remember she wrote that great blog post recently about the failed promise of web components? I’m not sure how much I can contribute to the group (maybe some meta-advice on the nature of good design principles?) but at the very least I can serve as a bridge between the community group and the AMP advisory committee.

After all, AMP is a collection of web components. Maybe.

Portals and giant carousels

I posted something recently that I think might be categorised as a “shitpost”:

Most single page apps are just giant carousels.

Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:

I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.

For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”

If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.

But I’ve never actually seen that happen in a software business.

A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.

It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.

The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.

Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:

  1. We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
  2. Why are the JavaScript dependencies so large?
  3. We need all that JavaScript to create the single page app functionality.

To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a white blank page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding” which eliminates the herky-jerkiness.

So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.

Except…

If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.

This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The problems that JavaScript frameworks are solving today should be seen as the R&D departments for web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)

I linked to Jake’s excellent proposal in my shitpost saying:

bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers

But then I added—and I almost didn’t—this:

(not portals)

Now you might be asking yourself what Paul said out loud:

Excuse my ignorance but… WTF are portals!?

I replied with a link to the portals proposal and what I thought was an example use case:

Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).

That was based on my reading of the proposal:

…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.

It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.

But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sounds like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).

Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.

After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.

But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.

Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementors than authors, but in its current form, it doesn’t seem to address the use case of single page apps.

Kenji said:

we haven’t seen interest from SPA folks in portals so far.

I’m not surprised! He goes on:

Maybe, they are happy / benefits aren’t clear yet.

From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.

Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.

I guess the web I want includes giant carousels.

Design maturity on the Clearleft podcast

The latest episode of the Clearleft podcast is zipping through the RSS tubes towards your podcast-playing software of choice. This is episode five, the penultimate episode of this first season.

This time the topic is design maturity. Like the episode on design ops, this feels like a hefty topic where the word “scale” will inevitably come up.

I talked to my fellow Clearlefties Maite and Andy about their work on last year’s design effectiveness report. But to get the big-scale picture, I called up Aarron over at Invision.

What a great guest! I already had plans to get Aarron on the podcast to talk about his book, Designing For Emotion—possibly a topic for next season. But for the current episode, we didn’t even mention it. It was design maturity all the way.

I had a lot of fun editing the episode together. I decided to intersperse some samples. If you’re familiar with Bladerunner and Thunderbirds, you’ll recognise the audio.

The whole thing comes out at a nice 24 minutes in length.

Have a listen and see what you make of it.

Dark mode revisited

I added a dark mode to my website a while back. It was a fun thing to do during Indie Web Camp Amsterdam last year.

I tied the colour scheme to the operating system level. If you choose a dark mode in your OS, my website will adjust automatically thanks to the prefers-color-scheme: dark media query.

But I’ve seen notes from a few friends, not about my site specifically, but about how they like having an explicit toggle for dark mode (as well as the media query). Whenever I read those remarks, I’d think “I’m really not sure I’ve got time to deal with adding that kind of toggle to my site.”

But then I realised, “Jeremy, you absolute muffin! You’ve had a theme switcher on your website for almost two decades now!”

Doh! I had forgotten about that theme switcher. It dates back to the early days of CSS. I wanted my site to be a demonstration of how you could apply different styles to the same underlying markup (this was before the CSS Zen Garden came along). Those themes are very dated now, but if you like you can view my site with a Zeldman theme or a sci-fi theme.

To offer a dark-mode theme for my site, all I had to do was take the default stylesheet, pull out the custom properties from the prefers-color-scheme: dark media query, and done. It took less than five minutes.

So if you want to view my site in dark mode, it’s one of the options in the “Customise” dropdown on every page of the website.

Sass and clamp

CSS got some pretty nifty features recently. There’s the min() and max() functions. If you use them for, say, width you can use one rule where previously you would’ve needed to use two (a width declaration followed by either min-width or max-width). But they can also be applied to font-size! That’s very nifty—we’ve never had min-font-size or max-font-size properties.

There’s also the clamp() function. That allows you to set a minimum size, a default size, and a maximum size. Again, it can be used for lengths, like width, or for font-size.

Over on thesession.org, I’ve had some media queries in place for a while now that would increase the font-size for larger screens. It’s nothing crucial, just a nice-to-have so that on wide screens, the font is bumped up accordingly. I realised I could replace all those media queries with one clamp() statement, thanks to the vw (viewport width) unit:

font-size: clamp(1rem, 1.333vw, 1.5rem);

By default, the font-size is 1.333vw (1.333% of the viewport width), but it will never get smaller than 1rem and it will never get larger than 1.5rem.

That works, but there’s a bit of an issue with using raw vw units like that. If someone is on a wide screen and they try to adjust the font size, nothing will happen. The viewport width doesn’t change when you bump the font size up or down.

The solution is to mix in some kind of unit that does respond to the font size being bumped up or down (like, say, the rem unit). Handily, clamp() allows you to combine units, just like calc(). So I can do this:

font-size: clamp(1rem, 0.5rem + 0.666vw, 1.5rem);

The result is much the same as my previous rule, but now—thanks to the presence of that 0.5rem value—the font size responds to being adjusted by the user.

You could use a full 1rem in that default value:

font-size: clamp(1rem, 1rem + 0.333vw, 1.5rem);

…but if you do that, the minimum size (1rem) will never be reached—the default value will always be larger. So in effect it’s no different than saying:

font-size: min(1rem + 0.333vw, 1.5rem);

I mentioned this to Chris just the other day.

Anyway, I got the result I wanted. I wanted the font size to stay at the browser default size (usually 16 pixels) until the screen was larger than around 1200 pixels. From there, the font size gets gradually bigger, until it hits one and a half times the browser default (which would be 24 pixels if the default size started at 16). I decided to apply it to the :root element (which is html) using percentages:

:root {
  font-size: clamp(100%, 50% + 0.666vw, 150%);
}

(My thinking goes like this: if we take a screen width of 1200 pixels, then 1vw would be 12 pixels: 1200 divided by 100. So for a font size of 16 pixels, that would be 1.333vw. But because I’m combining it with half of the default font size—50% of 16 pixels = 8 pixels—I need to cut the vw value in half as well: 50% of 1.333vw = 0.666vw.)

So I’ve got the CSS rule I want. I dropped it in to the top of my file and…

I got an error.

There was nothing wrong with my CSS. The problem was that I was dropping it into a Sass file (.scss).

Perhaps I am showing my age. Do people even use Sass any more? I hear that post-processors usurped Sass’s dominance (although no-one’s ever been able to explain to me why they’re different to pre-processors like Sass; they both process something you’ve written into something else). Or maybe everyone’s just writing their CSS in JS now. I hear that’s a thing.

The Session is a looooong-term project so I’m very hesitant to use any technology that won’t stand the test of time. When I added Sass into the mix, back in—I think—2012 or so, I wasn’t sure whether it was the right thing to do, from a long-term perspective. But it did offer some useful functionality so I went ahead and used it.

Now, eight years later, it was having a hard time dealing with the new clamp() function. Specifically, it didn’t like the values being calculated through the addition of multiple units. I think it was clashing with Sass’s in-built ability to add units together.

I started to ask myself whether I should still be using Sass. I looked at which features I was using…

Variables. Well, now we’ve got CSS custom properties, which are even more powerful than Sass variables because they can be updated in real time. Sass variables are like const. CSS custom properties are like let.
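
To illustrate the difference (a contrived sketch rather than code from my site): a custom property can be reassigned at runtime (inside a media query, say) and every rule using it updates accordingly, whereas a Sass variable is fixed once the stylesheet is compiled.

:root {
  --spacing: 1rem;
}

@media (min-width: 50em) {
  :root {
    --spacing: 2rem;
  }
}

main {
  padding: var(--spacing);
}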

Mixins. These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() functions are handy, though, when it comes to colours.
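
For instance, where I might previously have written a little mixin to do some arithmetic, calc() handles that kind of thing natively now (a made-up example, not from my stylesheet):

.sidebar {
  width: calc(100% / 3 - 1rem);
}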

Nesting. I’ve never been a fan. I know it can make the source files look tidier but I find it can sometimes obfuscate what you’re final selectors are going to look like. So this wasn’t something I was using much any way.

Multiple files. Ah! This is the thing I would miss most. Having separate .scss files for separate interface elements is very handy!

But globbing a bunch of separate .scss files into one .css file isn’t really a Sass task. That’s what build tools are for. In fact, that’s what I was already doing with my JavaScript files; I write them as individual .js files that then get concatenated into one .js file using Grunt.

(Yes, this project uses Grunt. I told you I was showing my age. But, you know what? It works. Though seeing as I’m mostly using it for concatenation, I could probably replace it with a makefile. If I’m going to use old technology, I might as well go all the way.)

I swapped out Sass variables for CSS custom properties, mixins for calc(), and removed what little nesting I was doing. Then I stripped the Sass parts out of my Grunt file and replaced them with some concatenation and minification tasks. All of this makes no difference to the actual website, but it means I’ve got one less dependency …and I can use clamp()!

Remember a little while back when I was making a dark mode for my site? I made this observation:

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop.

It feels like something similar has happened with tools like Sass. Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.

It’s like when we used to need something like jQuery to do DOM Scripting succinctly using CSS selectors. Then we got things like querySelector() in JavaScript so we no longer needed the trailblazer.

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

You could argue that this is what happened with Flash. It certainly happened with jQuery and Sass. I’m pretty sure we’ll see the same cycle play out with frameworks like React.

Indie Web Camp London 2020

Do you have plans for the weekend of March 14th and 15th?

If you live anywhere near London, might I suggest that you sign up for Indie Web Camp.

Cheuk and Ana are putting it together with assistance from Calum. As always, there will be one day of Barcamp-style discussions, followed by a fun hands-on day of making.

If you’re wondering whether this is for you, ask yourself if any of this situations apply:

  • You don’t have your own website yet, but you want one.
  • You have your own website, but you need some help with it.
  • You have some ideas about the independent web.
  • You have your own website but you never seem to find the time to update it.
  • You’d like to help other people with their websites.

If you recognise yourself in any one of those scenarios, then you should definitely come along to Indie Web Camp London 2020!

On this day

I’m in San Francisco to speak at An Event Apart, which kicks off tomorrow. But I arrived a few days early so that I could attend Indie Web Camp SF.

Yesterday was the discussion day. Most of the attendees were seasoned indie web campers, so quite a few of the discussions went deep on some of the building blocks. It was a good opportunity to step back and reappraise technology decisions.

Today is the day for making, tinkering, fiddling, and hacking. I had a few different ideas of what to do, mostly around showing additional context on my blog posts. I could, for instance, show related posts—other blog posts (or links) that have similar tags attached to them.

But I decided that a nice straightforward addition would be to show a kind of “on this day” context. After all, I’ve been writing blog posts here for eighteen years now; chances are that if I write a blog post on any given day, there will be something in the archives from that same day in previous years.

So that’s what I’ve done. I’ll be demoing it shortly here at Indie Web Camp, but you can see it in action now. If you look at the page for this blog post, you should see a section at the end with the heading “Previously on this day”. There you’ll see links to other posts I’ve written on December 8th in years gone by.

It’s quite a mixed bag. There’s a post about when I used to have a webcam from sixteen years ago. There’s a report from the Flash On The Beach conference from thirteen years ago (I wrote that post while I was in Berlin). And five years ago, I was writing about markup patterns for web components.

I don’t know if anyone other than me will find this feature interesting (but as it’s my website, I don’t really care). Personally, I find it fascinating to see how my writing has changed, both in terms of subject matter and tone.

Needless to say, the further back in time you go, the more chance there is that the links in my blog posts will no longer work. That’s a real shame. But then it’s a pleasant surprise when I find something that I linked to that is still online after all this time. And I can take comfort from the fact that if anyone has ever linked to anything I’ve written on my website, then those links still work.

Indy maps

Remember when I wrote about adding travel maps to my site at the recent Indie Web Camp Brighton? I must confess that the last line I wrote was an attempt to catch a fish from the river of the lazy web:

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

In the spirit of Cunningham’s Law, I was hoping that somebody was going to respond with “It’s totally possible to use Stamen’s watercolour tiles for static maps, dumbass—look!” (to which my response would have been “thank you very much!”).

Alas, no such response was forthcoming. The hoped-for schooling never forthcame.

Still, I couldn’t quite let go of the idea of using those lovely watercolour maps somewhere on my site. But I had decided that dynamic maps would have been overkill for my archive pages:

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles.

Then I had a thought. What if I keep the static maps on my archive pages, but make them clickable? Then, on the other end of that link, I can have the dynamic version. In other words, what if I had a separate URL just for the dynamic maps?

This seemed like a good plan to me, so while I was travelling by Eurostar—the only way to travel—back from the lovely city of Antwerp where I had been speaking at Full Stack Europe, I started hacking away on making the dynamic maps even more dynamic. After all, now that they were going to have their own pages, I could go all out with any fancy features I wanted.

I kept coming back to my original goal:

I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I found a plug-in for Leaflet.js that animates polylines—thanks, Iván! With a bit of wrangling, I was able to get it to animate between the lat/lon points of whichever archive section the map was in. Rather than have it play out automatically, I also added a control so that you can start and stop the animation. While I was at it, I decided to make that “play/pause” button do something else too. Ahem.

If you’d like to see the maps in action, click the “play” button on any of these maps:

You get the idea. It’s all very silly really. It’s right up there with the time I made my sparklines playable. But that’s kind of the point. It’s my website so I can do whatever I want with it, no matter how silly.

First of all, the research department for adactio.com (that’s me) came up with the idea. Then that had to be sold in to upper management (that’s me too). A team was spun up to handle design and development (consisting of me and me). Finally, the finished result went live thanks to the tireless efforts of the adactio.com ops group (that would be me). Any feedback should be directed at the marketing department (no idea who that is).