Journal tags: web

Blog Questions Challenge

I’ve been tagged in a good ol’-fashioned memetic chain letter, first by Jon and then by Luke. Only by answering these questions can my soul find peace…

Why did you start blogging in the first place?

All the cool kids were doing it. I distinctly remember thinking it was far too late to start blogging. Clearly I had missed the boat. That was in the year 2001.

So if you’re ever thinking of starting something but you think it might be too late …it isn’t.

Back then, I wrote:

I’ll try and post fairly regularly but I don’t want to make any promises I can’t keep.

I’m glad I didn’t commit myself but I’m also glad that I’m still posting 24 years later.

What platform are you using to manage your blog and why did you choose it? Have you blogged on other platforms before?

I use my own hand-cobbled mix of PHP and MySQL. Before that I had my own hand-cobbled mix of PHP and static XML files.

On the one hand, I wouldn’t recommend anybody do what I’ve done. Just use an off-the-shelf content management system and start publishing.

On the other hand, the code is still working fine decades later (with the occasional tweak) and the control freak in me likes knowing what every single line of code is doing.

It’s very bare-bones though.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?

I usually open a Markdown text editor and write in that. I use the Mac app Focused, which was made by Realmac Software. I don’t think you can even get hold of it these days, but it does the job for me. Any Markdown text editor would do though.

Then I copy what I’ve written and paste it into the textarea of my hand-cobbled CMS. It’s pretty rare for me to write directly into that textarea.

When do you feel most inspired to write?

When I’m supposed to be doing something else.

Blogging is the greatest procrastination tool there is. You’re skiving off doing the thing you should be doing, but then when you’ve published the blog post, you’ve actually done something constructive so you don’t feel too bad about avoiding that thing you were supposed to be doing.

Sometimes it takes me a while to get around to posting something. I find myself blogging out loud to my friends, which is a sure sign that I need to sit down and bash out that blog post.

When there’s something I’m itching to write about but I haven’t got around to it yet, it feels a bit like being constipated. Then, when I finally do publish that blog post, it feels like having a very satisfying bowel movement.

No doubt it reads like that too.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

I publish immediately. I’ve never kept drafts. Usually I don’t even save the Markdown file while I’m writing—I open up the text editor, write the words, copy them, paste them into that textarea and publish it. Often it takes me longer to think of a title than it takes to write the actual post.

I try to remind myself to read it through once to catch any typos, but sometimes I don’t even do that. And you know what? That’s okay. It’s the web. I can go back and edit it at any time. Besides, if I miss a typo, someone else will catch it and let me know.

Speaking for myself, putting something into a draft (or even just putting it on a to-do list) is a guarantee that it’ll never get published. So I just write and publish. It works for me, though I totally understand that it’s not for everyone.

What’s your favourite post on your blog?

I’ve got a little section of “recommended reading” in the sidebar of my journal.

But I’m not sure I could pick just one.

I’m very proud of the time I wrote 100 posts in 100 days and each post was exactly 100 words long. That might be my favourite tag.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

I like making little incremental changes. Usually this happens at Indie Web Camps. I add some little feature or tweak.

I definitely won’t be redesigning. But I might add another “skin” or two. I’ve got one of those theme-switcher things, y’see. It was like a little CSS Zen Garden before that existed. I quite like having redesigns that are cumulative instead of destructive.

Next?

You. Yes, you.

A long-awaited talk

Back in 2019 I had the amazing experience of going to CERN and being part of a team building an emulator of the first ever browser.

Remy was on the team too. He did the heavy lifting of actually making the thing work—quite an achievement in just five days!

Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.

Remy and I were both keen to talk about the work, which is why we did a joint talk at Fronteers in Amsterdam that year. We’re both quite sceptical of talks given by duos; people think it means it’ll be half the work, when actually it’s twice the work. In the end we came up with a structure for the talk that we both liked.

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well.

You can watch the video of that talk in Amsterdam. You can also read the transcript.

After putting so much work into the talk, we were keen to give it again somewhere. We had the chance to do that in Nottingham in early March 2020. (cue ominous foreboding)

The folks from local Brighton meetup Async had also asked if we wanted to give the talk. We were booked in for May 2020. (ominous foreboding intensifies)

We all know what happened next. The Situation. Lockdown. No conferences. No meetups.

But technically the talk wasn’t cancelled. It was just postponed. And postponed. And postponed. Before you know it, five years have passed.

Part of the problem was that Async is usually on the first Thursday of the month and that’s when I host an Irish music session in Hove. I can’t miss that!

But finally the stars aligned and last week Remy and I finally did the Async talk. You can watch a video of it.

I really enjoyed giving the talk and the discussion that followed. There was a good buzz.

It also made me appreciate the work that we put into structuring the talk. We’ve only given it a few times but with a five-year gap between presentations, I can confidently say that it’s a timeless topic.

Words I wrote in 2024

People spent a lot of time and energy in 2024 talking about (and on) other people’s websites. Twitter. Bluesky. Mastodon. Even LinkedIn.

I observed it all with the dispassionate perspective of Dr. Manhattan on Mars. While I’m happy to see more people abandoning the cesspool that is Twitter, I’m not all that invested in either Mastodon or Bluesky. Or any other website, for that matter. I’m glad they’re there, but if they disappeared tomorrow, I’d carry on posting here on my own site.

I posted to my website over 850 times in 2024.

I shared over 350 links.

I posted over 400 notes.

I published just one article.

And I wrote almost 100 blog posts here in my journal this year.

Here are some cherry-picked highlights:

Progressively enhancing maps

The Session has been online for over 20 years. When you maintain a site for that long, you don’t want to be relying on third parties—it’s only a matter of time until they’re no longer around.

Some third party APIs are unavoidable. The Session has maps for sessions and other events. When people add a new entry, they provide the address but then I need to get the latitude and longitude. So I have to use a third-party geocoding API.

My code is like a lesson in paranoia: I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.
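The Session’s server-side code is PHP, but the idea translates to any language. Here’s a rough JavaScript sketch of what swappable providers can look like; the provider name and the Nominatim endpoint are illustrative, not necessarily what I actually use:

const providers = {
    // Each provider is normalised to return {latitude, longitude},
    // so switching providers is a one-word change.
    async nominatim(address) {
        const url = `https://nominatim.openstreetmap.org/search?format=json&q=${encodeURIComponent(address)}`;
        const [first] = await (await fetch(url)).json();
        return { latitude: first.lat, longitude: first.lon };
    }
    // …more providers, each with the same signature…
};

function geocode(address, provider = 'nominatim') {
    return providers[provider](address);
}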

Things are better on the client side. I’m using other people’s JavaScript libraries—like the brilliant abcjs—but at least I can self-host them.

I’m using Leaflet for embedding maps. It’s a great little library built on top of Open Street Map data.

A little while back I linked to a new project called OpenFreeMap. It’s a mapping provider where you even have the option of hosting the tiles yourself!

For now, I’m not self-hosting my map tiles (yet!), but I did want to switch to OpenFreeMap’s tiles. They’re vector-based rather than bitmap, so they’re lovely and crisp.

But there’s an issue.

I can use OpenFreeMap with Leaflet, but to do that I also have to use the MapLibre GL library. But whereas Leaflet is 148K of JavaScript, MapLibre GL is 800K! Yowzers!

That’s mahoosive by the standards of The Session’s performance budget. I’m not sure the loveliness of the vector maps is worth increasing the JavaScript payload by so much.

But this doesn’t have to be an either/or decision. I can use progressive enhancement to get the best of both worlds.

If you land straight on a map page on The Session for the first time, you’ll get the old-fashioned bitmap map tiles. There’s no MapLibre code.

But if you browse around The Session and then arrive on a map page, you’ll get the lovely vector maps.

Here’s what’s happening…

The maps are embedded using an HTML web component called embed-map. The fallback is a static image between the opening and closing tags. The web component then loads up Leaflet.
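The markup for that pattern looks something like this. The attribute names here are illustrative (I’m not reproducing my exact code); the image is what you get if none of the JavaScript runs:

<embed-map data-latitude="53.3" data-longitude="-6.3">
  <img src="/path/to/static-map.png" alt="Map showing the venue’s location">
</embed-map>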

Here’s where the enhancement comes in. When the web component is initiated (in its connectedCallback method), it uses the Cache API to see if MapLibre has been stored in a cache. If it has, it loads that library:

caches.match('/path/to/maplibre-gl.js')
.then( responseFromCache => {
    if (responseFromCache) {
        // load maplibre-gl.js
    }
});

Then when it comes to drawing the map, I can check for the existence of the maplibreGL object. If it exists, I can use OpenFreeMap tiles. Otherwise I use the old Leaflet tiles.
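In code, that check looks something like this sketch (the global for MapLibre GL JS is maplibregl; the style and tile URLs are placeholders, and assume latitude and longitude have already been read from the component’s attributes):

const container = this.querySelector('.map');
if (typeof maplibregl !== 'undefined') {
    // The library was found in the cache earlier, so draw the vector map
    new maplibregl.Map({
        container: container,
        style: 'https://tiles.openfreemap.org/styles/liberty',
        center: [longitude, latitude],
        zoom: 12
    });
} else {
    // No MapLibre: fall back to Leaflet with the old bitmap tiles
    const map = L.map(container).setView([latitude, longitude], 12);
    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);
}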

But how does the MapLibre library end up in a cache? That’s thanks to the service worker script.

During the service worker’s install event, I give it a list of static files to cache: CSS, JavaScript, and so on. That includes third-party libraries like abcjs, Leaflet, and now MapLibre GL.
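For anyone unfamiliar with that part of a service worker script, it looks something like this minimal sketch (the cache name and file paths are placeholders):

const staticCacheName = 'static-v1';

addEventListener('install', (installEvent) => {
    installEvent.waitUntil(
        caches.open(staticCacheName)
        .then((staticCache) => staticCache.addAll([
            '/path/to/styles.css',
            '/path/to/abcjs.js',
            '/path/to/leaflet.js',
            '/path/to/maplibre-gl.js'
        ]))
    );
});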

Crucially this caching happens off the main thread. It happens in the background and it won’t slow down the loading of whatever page is currently being displayed.

That’s it. If the service worker installation works as planned, you’ll get the nice new vector maps. If anything goes wrong, you’ll get the older version.

By the way, it’s always a good idea to use a service worker and the Cache API to store your JavaScript files. As you know, JavaScript is unduly expensive to performance; not only does the JavaScript file have to be downloaded, it then has to be parsed and compiled. But JavaScript stored in a cache during a service worker’s install event is already parsed and compiled.

Going Offline is online …for free

I wrote a book about service workers. It’s called Going Offline. It was first published by A Book Apart in 2018. Now it’s available to read for free online.

If you want you can read the book as a PDF, an ePub, or .mobi, but I recommend reading it in your browser.

Needless to say the web book works offline. Once you go to goingoffline.adactio.com you can add it to the homescreen of your mobile device or add it to the dock on your Mac. After that, you won’t need a network connection.

The book is free to read. Properly free. Not the kind of “free” where you have to supply an email address first. Why would I make you go to the trouble of generating a burner email account?

The site has no analytics. No tracking. No third-party scripts of any kind whatsoever. By complete coincidence, the site is fast. Funny that.

For the styling of this web book, I tweaked the stylesheet I used for HTML5 For Web Designers. I updated it a little bit to use logical properties, some fluid typography and view transitions.

In the process of converting the book to HTML, I got reacquainted with what I had written almost seven years ago. It was kind of fun to approach it afresh. I think it stands up pretty darn well.

Ethan wrote about his feelings when he put two of his books online, illustrated by that amazing photo that always gives me the feels:

I’ll miss those days, but I’m just glad these books are still here. They’re just different than they used to be. I suppose I am too.

Anyway, if you’re interested in making your website work offline, have a read of Going Offline. Enjoy!

Going Offline

Syndicating to Bluesky

Last year I described how I syndicate my posts to different social networks.

Back then my approach to syndicating to Bluesky was to piggy-back off my micro.blog account (which is really just the RSS feed of my notes):

Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.

It worked well enough, but it wasn’t real-time and I didn’t have much control over the formatting. As Bluesky is having quite a moment right now, I decided to upgrade my syndication strategy and use the Bluesky API.

Here’s how it works…

First you need to generate an app password. You’ll need this so that you can generate a token. You need the token so you can generate …just kidding; the chain of generated gobbledegook stops there.

Here’s the PHP I’m using to generate a token. You’ll need your Bluesky handle and the app password you generated.
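The gist of it is a single call to the createSession endpoint. Here’s the same flow sketched in JavaScript rather than the PHP I actually use; BLUESKY_HANDLE and APP_PASSWORD are placeholders for your own values:

const response = await fetch('https://bsky.social/xrpc/com.atproto.server.createSession', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        identifier: BLUESKY_HANDLE,
        password: APP_PASSWORD
    })
});
// accessJwt is the token; did identifies your account
const { accessJwt, did } = await response.json();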

Now that I’ve got a token, I can send a post. Here’s the PHP I’m using.

There’s some extra code in there to spot URLs and turn them into links. Bluesky has a very weird way of doing this.
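Leaving that link handling aside, the core of sending a post is one call to the createRecord endpoint. Again, this is a JavaScript sketch rather than my actual PHP; accessJwt and did come from the session response above:

await fetch('https://bsky.social/xrpc/com.atproto.repo.createRecord', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${accessJwt}`
    },
    body: JSON.stringify({
        repo: did,
        collection: 'app.bsky.feed.post',
        record: {
            $type: 'app.bsky.feed.post',
            text: 'Hello from my website!',
            createdAt: new Date().toISOString()
        }
    })
});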

It didn’t take too long to get posting working. After some more tinkering I got images working too. Now I can post straight from my website to my Bluesky profile. The Bluesky API returns an ID for the post that I’ve created there so I can link to it from the canonical post here on my website.

I’ve updated my posting interface to add a toggle for Bluesky right alongside the toggle for Mastodon. There used to be a toggle for Twitter. That’s long gone.

Now when I post a note to my website, I can choose if I want to send a copy to Mastodon or Bluesky or both.

One day Bluesky will go away. It won’t matter much to me. My website will still be here.

Feed reading

I described using my feed reader like this:

I would hate if catching up on RSS feeds felt like catching up on email.

Instead it’s like this:

When I open my RSS reader to catch up on the feeds I’m subscribed to, it doesn’t feel like opening my email client. It feels more like opening a book.

It also feels different to social media. Like Lucy Bellwood says:

I have a richer picture of the group of people in my feed reader than I did of the people I regularly interacted with on social media platforms like Instagram.

There’s also the blessed lack of any algorithm:

Because blogs are much quieter than social media, there’s also the ability to switch off that awareness that Someone Is Always Watching.

Cory Doctorow has been praising the merits of RSS:

This conduit is anti-lock-in, it works for nearly the whole internet. It is surveillance-resistant, far more accessible than the web or any mobile app interface.

Like Lucy, he emphasises the lack of algorithm:

By default, you’ll get everything as it appears, in reverse-chronological order.

Does that remind you of anything? Right: this is how social media used to work, before it was enshittified. You can single-handedly disenshittify your experience of virtually the entire web, just by switching to RSS, traveling back in time to the days when Facebook and Twitter were more interested in showing you the things you asked to see, rather than the ads and boosted content someone else would pay to cram into your eyeballs.

The only algorithm at work in my feed reader—or on Mastodon—is good old-fashioned serendipity, when posts just happened to rhyme or resonate. Like this morning, when I read this from Alice:

There is no better feeling than walking along, lost in my own thoughts, and feeling a small hand slip into mine. There you are. Here I am. I love you, you silly goose.

And then I read this from Denise:

I pass a mother and daughter, holding hands. The little girl is wearing a sequinned covered jacket. She looks up at her mother who says “…And the sun’s going to come out and you’re just going to shine and shine and shine.”

content-visibility in Safari

Earlier this year I wrote about some performance improvements to The Session using the content-visibility property in CSS.

If you say content-visibility: auto you’re telling the browser not to bother calculating the layout and paint for an element until it needs to. But you need to combine it with the contain-intrinsic-block-size property so that the browser knows how much space to leave for the element.
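The CSS amounts to a couple of declarations. The selector and the placeholder size here are made up; the pairing of the two properties is the important bit:

.setting {
  content-visibility: auto;
  /* Reserve roughly this much block-size until it’s actually rendered */
  contain-intrinsic-block-size: auto 40em;
}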

I mentioned the browser support:

Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working).

Well, that’s happened! Safari 18 supports content-visibility. I didn’t have to do a thing and it just started working.

But …I think I’ve discovered a little bug in Safari’s implementation.

(I say I think it’s a bug with the browser because, like Jim, I’ve made the mistake in the past of thinking I had discovered a browser bug when in fact it was something caused by a browser extension. And when I say “in the past”, I mean yesterday.)

So here’s the issue: if you apply content-visibility: auto to an element that contains an SVG, and that SVG contains a text element, then Safari never paints that text to the screen.

To see an example, take a look at the fourth setting of Cooley’s reel on The Session archive. There’s a text element with the word “slide” (actually the text is inside a tspan element inside a text element). On Safari, that text never shows up.

I’m using a link to the archive of The Session I created recently rather than the live site because on the live site I’ve removed the content-visibility declaration for Safari until this bug gets resolved.

I’ve also created a reduced test case on Codepen. The only HTML is the element containing the SVGs. The only CSS—apart from the content-visibility stuff—is just a little declaration to push the content below the viewport so you have to scroll it into view (which is when the bug happens).
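If you don’t want to click through, the shape of the test case is roughly this (a sketch, not the exact CodePen code):

<style>
  .spacer {
    block-size: 150vh; /* push the content below the viewport */
  }
  .lazy {
    content-visibility: auto;
    contain-intrinsic-block-size: auto 10em;
  }
</style>
<div class="spacer"></div>
<div class="lazy">
  <svg viewBox="0 0 200 40">
    <text x="10" y="25"><tspan>slide</tspan></text>
  </svg>
</div>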

I’ve filed a bug report. I know it’s a fairly niche situation, but there are some other issues with Safari’s implementation of content-visibility so it’s possible that they’re all related.

Docks and home screens

Back in June I documented a bug on macOS in how Spaces (or whatever they call their desktop management thingy now) works with websites added to the dock.

I’m happy to report that after upgrading to Sequoia, the latest version of macOS, the bug has been fixed! Excellent!

Not only that, but there’s another really great little improvement…

Let’s say you’ve installed a website like The Session by adding it to the dock. Now let’s say you get an email in Apple Mail that includes a link to something on The Session. It used to be that clicking on that link would open it in your default web browser. But now clicking on that link opens it in the installed web app!

It’s a lovely little enhancement that makes the installed website truly feel like a native app.

Websites in the dock also support the badging API, which is really nice!

Like I said at the time:

I wonder if there’s much point using wrappers like Electron any more? I feel like they were mostly aiming to get that parity with native apps in having a standalone application launched from the dock.

Now all you need is a website.

The biggest issue remains discovery. Unless you already know that it’s possible to add a website to the dock, you’re unlikely to find out about it. That’s why I’ve got a page with installation instructions on The Session.

Still, the discovery possibilities on Apple’s desktop devices are waaaaay better than on Apple’s mobile devices.

Apple are doing such great work on their desktop operating system to make websites first-class citizens. Meanwhile, they’re doing less than nothing on their mobile operating system. For a while there, they literally planned to break all websites added to the homescreen. Fortunately they were forced to back down.

But it’s still so sad to see how Apple are doing everything in their power to prevent people from finding out that you can add websites to your homescreen—despite (or perhaps because of) the fact that push notifications on iOS only work if the website has been added to the home screen!

So while I’m really happy to see the great work being done on installing websites for desktop computers, I remain disgusted by what’s happening on mobile:

At this point I’ve pretty much given up on Apple ever doing anything about this pathetic situation.

The datalist element on iOS

The datalist element is good. It was a bit bumpy there for a while, but browser implementations have improved over time. Now it’s by far the simplest and most robust way to create an autocompleting combobox widget.

Hook up an input element with a datalist element using the list and id attributes and you’re done. You can even use a bit of Ajax to dynamically update the option elements inside the datalist in response to the user’s input. The browser takes care of all the interaction. If you try to roll your own combobox implementation, it’s almost certainly going to involve a lot of JavaScript and still probably won’t account for all use cases.
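In case you’ve never used it, the markup is wonderfully small. The values here are made-up examples:

<label for="tune">Find a tune</label>
<input type="text" id="tune" name="tune" list="tunes">
<datalist id="tunes">
  <option value="Cooley’s">
  <option value="Drowsy Maggie">
  <option value="The Banshee">
</datalist>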

Safari on iOS—and therefore all browsers on iOS—didn’t support datalist for quite a while. But once it finally shipped, it worked really nicely. The options showed up just like autocomplete suggestions above the keyboard.

But that broke a while back.

The suggestions still appeared, but if you tapped on one of them, nothing happened. The input element didn’t get updated. You had to tap on a little downward arrow inside the input in order to see the list of options.

That was really frustrating for anybody on iOS using The Session. By far the most common task on the site is searching for a tune, something that’s greatly (progressively) enhanced with a dynamically-updating datalist.

I just updated to iOS 18 specifically to see if this bug has been fixed, and it has:

Fixed updating the input value when selecting an option from a datalist element.

Hallelujah!

But now there’s some additional behaviour that’s a little weird.

As well as showing the options in the autocomplete list above the keyboard, Safari on iOS—and therefore all browsers on iOS—also pops up the options as a list (as if you had tapped on that downward arrow). If the list is more than a few options long, it completely obscures the input element you’re typing into!

I’m not sure if this is a bug or if it’s the intended behaviour. It feels like a bug, but I don’t know if I should file something.

For now, I’ve updated the datalist elements on The Session to only ever hold three option elements in order to minimise the problem. Seeing as the autosuggest list above the keyboard only ever shows a maximum of three suggestions anyway, this feels like a reasonable compromise.

Wallfacing

The Dark Forest idea comes from the Remembrance of Earth’s Past books by Liu Cixin. It’s an elegant but dispiriting solution to the Fermi paradox. Maggie sums it up:

Dark forest theory suggests that the universe is like a dark forest at night - a place that appears quiet and lifeless because if you make noise, the predators will come eat you.

This theory proposes that all other intelligent civilizations were either killed or learned to shut up. We don’t yet know which category we fall into.

Maggie has described The Expanding Dark Forest and Generative AI:

The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk.

It’s like a dark forest that seems eerily devoid of human life – all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators.

Those of us in the cozy web try to keep our heads down, attempting to block the bots plundering our work.

I advocate for taking this further. We should fight back. Let’s exploit the security hole of prompt injections. Here are some people taking action:

I’ve taken steps here on my site. I’d like to tell you exactly what I’ve done. But if I do that, I’m also telling the makers of these bots how to circumvent my attempts at prompt injection.

This feels like another concept from Liu Cixin’s books. Wallfacers:

The sophons can overhear any conversation and intercept any written or digital communication but cannot read human thoughts, so the UN devises a countermeasure by initiating the “Wallfacer” Program. Four individuals are granted vast resources and tasked with generating and fulfilling strategies that must never leave their own heads.

So while I’d normally share my code, I feel like in this case I need to exercise some discretion. But let me give you the broad brushstrokes:

  • Every page of my online journal has three pieces of text that attempt prompt injections.
  • Each of these is hidden from view and hidden from screen readers.
  • Each piece of text is constructed on-the-fly on the server and they’re all different every time the page is loaded.

You can view source to see some examples.

I plan to keep updating my pool of potential prompt injections. I’ll add to it whenever I hear of a phrase that might potentially throw a spanner in the works of a scraping bot.

By the way, I should add that I’m doing this as well as using a robots.txt file. So any bot that ingests a prompt injection deserves it.

I could not disagree with Manton more when he says:

I get the distrust of AI bots but I think discussions to sabotage crawled data go too far, potentially making a mess of the open web. There has never been a system like AI before, and old assumptions about what is fair use don’t really fit.

Bollocks. This is exactly the kind of techno-determinism that boils my blood:

AI companies are not going to go away, but we need to push them in the right directions.

“It’s inevitable!” they cry as though this was a force of nature, not something created by people.

There is nothing inevitable about any technology. The actions we take today are what determine our future. So let’s take steps now to prevent our web being turned into a dark, dark forest.

Web App install API

My bug report on Apple’s websites-in-the-dock feature on desktop has me thinking about how starkly different it is on mobile.

On iOS if you want to add a website to your home screen, good luck. The option is buried within the “share” menu.

First off, it makes no sense that adding something to your homescreen counts as sharing. Secondly, how is anybody supposed to know that unless they’re explicitly told?

It’s a similar situation on Android. In theory you can prompt the user to install a progressive web app using the botched BeforeInstallPromptEvent. In practice it’s a mess. What it actually does is defer the installation prompt so you can offer it a more suitable time. But it only works if the browser was going to offer an installation prompt anyway.
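Here’s the documented way you’re supposed to use it. This is a sketch; installButton stands in for whatever UI you’d offer:

let deferredPrompt;

window.addEventListener('beforeinstallprompt', (event) => {
    event.preventDefault(); // suppress the browser’s own prompt
    deferredPrompt = event; // stash the event for later
});

// Later, in response to a user interaction…
installButton.addEventListener('click', async () => {
    if (!deferredPrompt) return; // the browser never offered a prompt
    deferredPrompt.prompt();
    await deferredPrompt.userChoice;
    deferredPrompt = null;
});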

When does Chrome on Android decide to offer the installation prompt? It’s a mix of required criteria—a web app manifest, some icons—and an algorithmic spell determined by the user’s engagement.

Other browser makers don’t agree with this arbitrary set of criteria. They quite rightly say that a user should be able to add any website to their home screen if they want to.

What we really need is an installation API: a way to programmatically invoke the add-to-homescreen flow.

Now, I know what you’re going to say. The security and UX implications would be dire. But this should obviously be like geolocation or notifications, only available in secure contexts and gated by user interaction.

Think of it like adding something to the clipboard: it’s something the user can do manually, but the API offers a way to do it programmatically without opening it up to abuse.

(I’d really love it if this API also had a declarative equivalent, much like I want button type="share" for the Web Share API. How about button type="install"?)

People expect this to already exist.

The beforeinstallprompt flow is an absolute mess. Users deserve better.

Space dock

Apple announced some stuff about artificial insemination at their Worldwide Developers Conference, none of which interests me one whit. But we did get a twitch of the WebKit curtains to let us know what’s coming in Safari. That does interest me.

I’m really pleased to see that on desktop, websites that have been added to the dock will be able to intercept links for that domain:

Now, when a user clicks a link, if it matches the scope of a web app that the user has added to their Dock, that link will open in the web app instead of their default web browser.

Excellent! This means that if I click on a link to thesession.org from, say, my Mastodon site-in-the-dock, it will open in The Session site-in-the-dock. Make sure you’ve got the scope property set in your web app manifest.
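That property lives in the manifest file, something like this (the values here are illustrative):

{
  "name": "The Session",
  "start_url": "/",
  "scope": "/",
  "display": "standalone"
}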

I have a few different sites added to my dock: The Session, Mastodon, Google Calendar. Sure beats the bloat of Electron apps.

I have encountered a small bug. I’ll describe it here because I have no idea where to file it.

It’s to do with Spaces, Apple’s desktop management thingy. Maybe they don’t call it Spaces anymore. Maybe it’s called Mission Control now. Or Stage Manager. I can’t keep track.

Anyway, here are the steps to reproduce:

  1. In Safari on Mac, go to a website like adactio.com
  2. From either the File menu or the share icon, select Add to dock.
  3. Click on the website’s icon in the dock to open it.
  4. Using Apple’s desktop management (Spaces?) available through the F3 key, drag that window to a desktop other than desktop 1.
  5. Right click on the site’s icon in the dock and select Options, then Assign To, then This Desktop.
  6. Quit the app/website.
  7. Return to desktop 1.

Expected behaviour: when I click on the icon in the dock to open the site, it will open in the desktop that it has been assigned to.

Observed behaviour: focus moves to the desktop that the site has been assigned to, but it actually opens in desktop 1.

If someone from Apple is reading, I hope that’s useful.

On the one hand, I hope this isn’t one of those bugs that only I’m experiencing because then I’ll feel foolish. On the other hand, I hope this is one of those bugs that only I’m experiencing because then others don’t have to put up with the buggy behaviour.

Our web

Gregory Bennett chronicles the enshittification of everything online in his piece Heat Death of the Internet. It makes for grim reading.

There’s a note of hope at the end. It’s the same note of hope that Charles Digges amplifies in his great piece, Viva la Library!:

Rebel against The Algorithm. Get a library card.

Molly White has also chronicled the decline of everything good on the web, but her piece has hope threaded throughout. We can have a different web:

Though we now face a new challenge as the dominance of the massive walled gardens has become overwhelming, we have tools in our arsenal: the memories of once was, and the creativity of far more people than ever before, who entered the digital expanse but have grown disillusioned with the business moguls controlling life within the walls.

And if anything, it is easier now to do all of this than it ever was.

Like I’ve repeatedly said, having your own website has gone from being something uncontroversial to being downright transgressive.

Still, the barrier to entry remains too high for my liking. I wish more smart minds were working on making publishing on the web easier instead of just working on getting people to consume.

But even if you don’t have your own website, Andrew Stephens says you can still Save the Web by Being Nice:

The very best thing to keep the web partly alive is to maintain some content yourself - start a blog, join a forum and contribute to the conversation, even podcast if that is your thing. But that takes a lot of time and not everyone has the energy or the knowhow to create like this.

The second best thing to do is to show your support for pages you enjoy by being nice and making a slight effort.

To paraphrase Shakespeare, being nice “is twice blest; It blesseth him that gives and him that takes.” Tell someone that you liked something they put on the web. You’ll feel good. They’ll feel even better.

My approach to HTML web components

I’ve been deep-diving into HTML web components over the past few weeks. I decided to refactor the JavaScript on The Session to use custom elements wherever it made sense.

I really enjoyed doing this, even though the end result for users is exactly the same as before. This was one of those refactors that was for me, and also for future me. The front-end codebase looks a lot more understandable and therefore maintainable.

Most of the JavaScript on The Session is good ol’ DOM scripting. Listen for events; when an event happens, make some update to some element. It’s the kind of stuff we might have used jQuery for in the past.

Chris invoked Betteridge’s law of headlines recently by asking Will Web Components replace React and Vue? I agree with his assessment. The reactivity you get with full-on frameworks isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM.

I’ve written about my preferred way to do DOM scripting: event.target.closest. One of the advantages to that approach is that even if the DOM gets updated—perhaps via Ajax—the event listening will still work.
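As a reminder, that pattern means attaching one listener to the document and working out what was clicked. A sketch (the selector is an illustrative example):

// Event delegation: one listener survives any Ajax updates to the DOM
document.addEventListener('click', (event) => {
    const button = event.target.closest('button.copy-to-clipboard');
    if (!button) return;
    // …handle the click on (or within) the matching button…
});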

Well, this is exactly the kind of thing that custom elements take care of for you. The connectedCallback method gets fired whenever an instance of the custom element is added to the document, regardless of whether that’s in the initial page load or later in an Ajax update.

So my client-side scripting style has updated over time:

  1. Adding event handlers directly to elements.
  2. Adding event handlers to the document and using event.target.closest.
  3. Wrapping elements in a web component that handles the event listening.

None of these progressions were particularly ground-breaking or allowed me to do anything I couldn’t do previously. But each progression improved the resilience and maintainability of my code.

Like Chris, I’m using web components to progressively enhance what’s already in the markup. In fact, looking at the code that Chris is sharing, I think we may be writing some very similar web components!

A few patterns have emerged for me…

Naming custom elements

Naming things is famously hard. Every time you make a new custom element you have to give it a name that includes a hyphen. I settled on the convention of using the first part of the name to echo the element being enhanced.

If I’m adding an enhancement to a button element, I’ll wrap it in a custom element that starts with button-. I’ve now got custom elements like button-geolocate, button-confirm, button-clipboard and so on.
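So the markup ends up looking something like this (an illustrative example):

<button-clipboard>
  <button type="button">Copy link</button>
</button-clipboard>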

Likewise if the custom element is enhancing a link, it will begin with a-. If it’s enhancing a form, it will begin with form-.

The name of the custom element tells me how it’s expected to be used. If I find myself wrapping a div with button-geolocate I shouldn’t be surprised when it doesn’t work.

Naming attributes

You can use any attributes you want on a web component. You made up the name of the custom element and you can make up the names of the attributes too.

I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.

So I use data- attributes. I’ve already got a hyphen in the name of my custom element, so it makes sense to have hyphens in my attributes too. And by using data- attributes, the browser gives me automatic reflection of the value in the dataset property.

Instead of getting a value with this.getAttribute('maximum') I get to use this.dataset.maximum. Nice and neat.

The single responsibility principle

My favourite web components aren’t all-singing, all-dancing powerhouses. Rather they do one thing, often a very simple thing.

Here are some examples:

  • Jason’s aria-collapsable for toggling the display of one element when you click on another.
  • David’s play-button for adding a play button to an audio or video element.
  • Chris’s ajax-form for sending a form via Ajax instead of a full page refresh.
  • Jim’s user-avatar for adding a tooltip to an image.
  • Zach’s table-saw for making tables responsive.

All of those are HTML web components in that they extend your existing markup rather than JavaScript web components that are used to replace HTML. All of those are also unambitious by design. They each do one thing and one thing only.

But what if my web component needs to do two things?

I make two web components.

The beauty of custom elements is that they can be used just like regular HTML elements. And the beauty of HTML is that it’s composable.

What if you’ve got some text that you want to be a level-three heading and also a link? You don’t bemoan the lack of an element that does both things. You wrap an a element in an h3 element.

The same goes for custom elements. If I find myself adding multiple behaviours to a single custom element, I stop and ask myself if this should be multiple custom elements instead.

Take some of those button- elements I mentioned earlier. One of them copies text to the clipboard, button-clipboard. Another throws up a confirmation dialog to complete an action, button-confirm. Suppose I want users to confirm when they’re copying something to their clipboard (not a realistic example, I admit). I don’t have to create a new hybrid web component. Instead I wrap the button in the two existing custom elements.
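The markup for that contrived example would simply nest the two; each custom element stays oblivious to the other:

<button-confirm>
  <button-clipboard>
    <button type="button">Copy to clipboard</button>
  </button-clipboard>
</button-confirm>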

Rather than having a few powerful web components, I like having lots of simple web components. The power comes with how they’re combined. Like Unix pipes. And it has the added benefit of stopping my code getting too complex and hard to understand.

Communicating across components

Okay, so I’ve broken all of my behavioural enhancements down into single-responsibility web components. But what if one web component needs to have awareness of something that happens in another web component?

Here’s an example from The Session: the results page when you search for sessions in London.

There’s a map. That’s one web component. There’s a list of locations. That’s another web component. There are links for traversing backwards and forwards through the locations via Ajax. Those links are in web components too.

I want the map to update when the list of locations changes. Where should that logic live? How do I get the list of locations to communicate with the map?

Events!

When a list of locations is added to the document, it emits a custom event that bubbles all the way up. In fact, that’s all this component does.

You can call the event anything you want. It could be a newLocations event. That event is dispatched in the connectedCallback of the component.

Meanwhile in the map component, an event listener listens for any newLocations events on the document. When that event handler is triggered, the map updates.

The web component that lists locations has no idea that there’s a map on the same page. It doesn’t need to. It just needs to dispatch its event, no questions asked.
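Here’s a sketch of both halves. The newLocations event name comes from above; the element names and the redraw method are illustrative, not my actual code:

customElements.define('location-list', class extends HTMLElement {
    connectedCallback() {
        // Announce our arrival; bubbles: true lets it travel up the document
        this.dispatchEvent(new CustomEvent('newLocations', { bubbles: true }));
    }
});

customElements.define('location-map', class extends HTMLElement {
    connectedCallback() {
        // Listen on the document so it doesn’t matter where the list lives
        document.addEventListener('newLocations', () => this.redraw());
    }
    redraw() {
        // …update the map markers here…
    }
});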

There’s nothing specific to web components here. Event-driven programming is a tried and tested approach. It’s just a little easier to do thanks to the connectedCallback method.

I’m documenting all this here as a snapshot of my current thinking on HTML web components when it comes to:

  • naming custom elements,
  • naming attributes,
  • the single responsibility principle, and
  • communicating across components.

I may well end up changing my approach again in the future. For now though, these ideas are serving me well.

Displaying HTML web components

Those HTML web components I made for date inputs are very simple. All they do is slightly extend the behaviour of the existing input elements.

This would be the ideal use-case for the is attribute:

<input is="input-date-future" type="date">

Alas, Apple have gone on record to say that they will never ship support for customized built-in elements.

So instead we have to make HTML web components by wrapping existing elements in new custom elements:

<input-date-future>
  <input type="date">
</input-date-future>

The end result is the same. Mostly.

Because there’s now an additional element in the DOM, there could be unexpected styling implications. Like, suppose the original element was a direct child of a flex or grid container. Now that will no longer be true.

So something I’ve started doing with HTML web components like these is adding something like this inside the connectedCallback method:

connectedCallback() {
    this.style.display = 'contents';
  …
}

This tells the browser that, as far as styling is concerned, there’s nothing to see here. Move along.

Or you could (and probably should) do it in your stylesheet instead:

input-date-future {
  display: contents;
}

Just to be clear, you should only use display: contents if your HTML web component is augmenting what’s within it. If you add any behaviours or styling to the custom element itself, then don’t add this style declaration.

It’s a bit of a hack to work around the lack of universal support for the is attribute, but it’ll do.

Pickin’ dates

I had the opportunity to trim some code from The Session recently. That’s always a good feeling.

In this case, it was a progressive enhancement pattern that was no longer needed. Kind of like removing a polyfill.

There are a couple of places on the site where you can input a date. This is exactly what input type="date" is for. But when I was making the interface, the support for this type of input was patchy.

So instead the interface used three select dropdowns: one for days, one for months, and one for years. Then I did a bit of feature detection and if the browser supported input type="date", I replaced the three selects with one date input.

It was a little fiddly but it worked.

Fast forward to today and input type="date" is supported across the board. So I threw away the JavaScript and updated the HTML to use date inputs by default. Nice!

I was discussing date inputs recently when I was talking to students in Amsterdam:

They’re given a PDF inheritance-tax form and told to convert it for the web.

That form included dates. The dates were all in the past so the students wanted to be able to set a max value on the datepicker. Ideally that should be done on the server, but it would be nice if you could easily do it in the browser too.

Wouldn’t it be nice if you could specify past dates like this?

<input type="date" max="today">

Or for future dates:

<input type="date" min="today">

Alas, no such syntactic sugar exists in HTML so we need to use JavaScript.

This seems like an ideal use-case for HTML web components:

Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.

In this case, it would be nice to augment an existing input type="date" element. Something like this:

 <input-date-past>
   <input type="date">
 </input-date-past>

Here’s the JavaScript that does the augmentation:

 customElements.define('input-date-past', class extends HTMLElement {
     constructor() {
         super();
     }
     connectedCallback() {
         this.querySelector('input[type="date"]').setAttribute('max', new Date().toISOString().substring(0,10));
     }
 });

That’s it.

Here’s a CodePen where you can see it in action along with another HTML web component for future dates called, you guessed it, input-date-future.

See the Pen Date input HTML web components by Jeremy Keith (@adactio) on CodePen.
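The future-dates version is the same pattern with min swapped in for max:

customElements.define('input-date-future', class extends HTMLElement {
    connectedCallback() {
        // Today becomes the earliest selectable date
        this.querySelector('input[type="date"]').setAttribute('min', new Date().toISOString().substring(0,10));
    }
});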

Fidinpamp

If you’re a fan of gratuitous initialisms, you’ll love Google’s core web vitals. Just get a load of the obfuscation in the important-sounding metrics like CLS, FCP, LCP, and more.

To be fair to Google, this is a problem in the web performance world in general. Practitioners prefer to talk about TTFB rather than “time to first byte” even though both contain exactly the same number of syllables.

The big news in the web performance community this month is the arrival of a new initialism. INP sounds like one of those pseudo-scientific psychological profiles but it’s meant to stand for Interaction to Next Paint (even if they were to swear off pointless initialisms, you’d still have to pry Pointless Capitalisation from Google’s cold dead hands).

This new metric is a welcome one. It’s replacing first input delay. Sorry, First Input Delay, or FID, one of the few web vital initialisms that can be spoken as a word, making it a true acronym (fortunately fid’s successor, inp, also works as an acronym).

First Input Delay has long outstayed its welcome. It was always an outlier in the core web vitals. It didn’t seem to measure anything actually useful. I know it sounds like it’s measuring the delay until the user can interact with a web page, but when you dive into what it actually does, it’s a mess:

FID measures the time from when a user first interacts with a page (that is, when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction.

See that word “begin” in there? It’s doing a lot of work. First Input Delay doesn’t measure the lag between the user interaction and the browser response; it only measures the lag between the user interaction and the browser beginning to respond. The actual response could take ages, but that lag doesn’t get measured. Unlike the other core web vitals, this metric is very far removed from what actually matters to the user’s experience.

What the fid were they thinking? How the fid did this measurement ever get included in core web vitals in the first place?

Well, feel free to take what I’m about to say as pure gossip, but I have my sources, I trust ’em, and no, I’m not going to reveal ’em…

It’s because of AMP.

Remember Google AMP? An acronym so pointless they eventually just forgot it ever stood for anything?

The AMP project ended up doing incredible damage to Google’s developer relations. By colluding with the search team to privilege the appearance of AMP pages in the top news carousel, Google effectively blackmailed the entire publishing industry into using their format.

In the end, it didn’t work. It was a shit format. All they did was foster resentment and animosity:

AMP seems to have faded away. Most publishers have started dropping support, and even Google doesn’t seem to care much anymore.

It turns out that Google search wasn’t the only team infected by AMP. The core web vitals team also had to play ball.

Originally they had a genuinely useful metric for measuring the lag between input and response. But guess which pages did terribly? That’s right: AMP pages.

Rather than ship an actually-useful measurement, the core web vitals team instead had to include the broken First Input Delay, brainchild of a certain someone on the AMP team.

Now it all makes sense.

So good riddance to FID. Welcome to INP. And here’s hoping it won’t be much longer till we’re finally burying AMP.

What the world needs

I was having a discussion with some people recently about writing. It was quite cathartic. Everyone was sharing the kinds of things that their inner critic tells them. We were all encouraging each other to ignore that voice.

I mentioned that the two reasons for not writing that I hear most often from people are variations on “I’ve got nothing to say.”

The first version is when someone says they’ve got nothing to say because they’re not qualified to write on a particular topic. “After all, there are real experts out there who know far more than me. So I’ve got nothing to say.”

But then once you do actually understand a topic, the second version appears. “If I know about this, then everyone knows about this. It’s obvious. So I’ve got nothing to say.”

In both cases, you absolutely should be writing and sharing! In the first instance, you’ve got the beginner’s mind—a valuable perspective. In the second instance, you’ve got personal experience—another valuable perspective.

In other words, while it seems like there’s never a good time to write about something, the truth is that there’s never a bad time to write about something.

So write! Share! Publish!

Then someone in the discussion said something I always find a bit deflating. They said they had no problem writing, but they’re not so keen on publishing.

“After all”, they said, “the world doesn’t need yet another opinion.”

This gets me down because it’s hard to argue with. It’s true that the world doesn’t need another think piece. The world doesn’t need to hear your thoughts on some topic. The world doesn’t need to hear what you’ve been up to recently.

But you know what? Screw what the world needs.

If we’re going to be hardnosed about this, then the world doesn’t need any more books. The world doesn’t need any more music. The world doesn’t need art. Heck, the world doesn’t need us at all.

So don’t publish for the world.

When I write something here on my website, I’m not thinking about the world reading it. That would be paralyzing. I do sometimes imagine that one person is reading it; someone just like me who hasn’t yet had this particular thought, or come up with that particular idea.

I’m writing for myself. I write to figure out what I think. I also publish mostly for myself—a public archive for future me. But if what I publish just happens to connect with one other person, I’m glad.

So, yeah, it’s true that the world doesn’t need you to write and share and publish. Isn’t that liberating? You’re free to write and share and publish for yourself.

Indie webbing

The past weekend’s Indie Web Camp Brighton was wonderful! Many thanks to Mark and Paul for all their work putting it together.

There was a great turn-out. It felt like the perfect time for an Indie Web Camp. There’s a real appetite for getting away from ever more extractive silos and staking claim to our own corners of the web. Most of the attendees were at their first ever Indie Web Camp.

Paul asked me to oversee the schedule planning on day one, which I was happy to do. We made sure that first-timers got first dibs on proposing sessions. In the end, every single session was proposed by new attendees.

Day two was all about putting ideas into practice: coding, designing, and writing on our own website. I’m always blown away by how much gets done in just one short day. Best of all is when there’s someone who starts the weekend without their own website but finishes with a live site. That happened again this time.

I spent the second day tinkering with something I started at Indie Web Camp Nuremberg in October. Back then, I got related posts working here on my journal; a list of suggested follow-up posts to read based on the tags of the current post.

I wanted to do the same for my links; show links related to the one I’m currently linking to. It didn’t take too long to get that up and running.

But then I thought about it some more and realised it would be good to also show blog posts related to the link. So I did that. Then I realised it would be really good to show related links under blog posts too.

So now, if everything’s working correctly, then at the end of this post you will not only see related blog posts I’ve previously written, but also links related to the content of this post.

It was a very inspiring weekend. There’s something about being in a room with other people working on their websites that makes me super productive.

While we were hacking away on day two, somebody mentioned that they still find it hard to explain the indie web to people.

“It’s having your own website”, I said.

But surely there’s more to it than that, they wondered.

Nope. If someone has their own website, then they’re part of the indie web. It doesn’t matter if that website is made with a complicated home-rolled tech stack or if it’s a Squarespace site.

What you do with your own website is entirely up to you. The technologies are just plumbing, whether it’s webmentions, RSS, or anything else. None of it is a requirement. Heck, even HTML is optional. If you want to put plain text files on your website, go for it. It’s your website.