Journal tags: thesession

content-visibility in Safari

Earlier this year I wrote about some performance improvements to The Session using the content-visibility property in CSS.

If you say content-visibility: auto you’re telling the browser not to bother calculating the layout and paint for an element until it needs to. But you need to combine it with the contain-intrinsic-block-size property so that the browser knows how much space to leave for the element.
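
In other words, something like this (the selector and pixel value here are just for illustration):

.sheetmusic {
  content-visibility: auto;
  contain-intrinsic-block-size: 380px;
}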

I mentioned the browser support:

Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working).

Well, that’s happened! Safari 18 supports content-visibility. I didn’t have to do a thing and it just started working.

But …I think I’ve discovered a little bug in Safari’s implementation.

(I say I think it’s a bug with the browser because, like Jim, I’ve made the mistake in the past of thinking I had discovered a browser bug when in fact it was something caused by a browser extension. And when I say “in the past”, I mean yesterday.)

So here’s the issue: if you apply content-visibility: auto to an element that contains an SVG, and that SVG contains a text element, then Safari never paints that text to the screen.

To see an example, take a look at the fourth setting of Cooley’s reel on The Session archive. There’s a text element with the word “slide” (actually the text is inside a tspan element inside a text element). On Safari, that text never shows up.

I’m using a link to the archive of The Session I created recently rather than the live site because on the live site I’ve removed the content-visibility declaration for Safari until this bug gets resolved.

I’ve also created a reduced test case on Codepen. The only HTML is the element containing the SVGs. The only CSS—apart from the content-visibility stuff—is just a little declaration to push the content below the viewport so you have to scroll it into view (which is when the bug happens).

I’ve filed a bug report. I know it’s a fairly niche situation, but there are some other issues with Safari’s implementation of content-visibility so it’s possible that they’re all related.

Train coding

When I went up to London for the State of the Browser conference last month, I shared the train journey with Remy.

I always like getting together with Remy. We usually end up discussing sci-fi books we’re reading, commiserating with one another about conference-organising, discussing the minutiae of browser APIs, or talking about the big-picture vision of the World Wide Web.

On this train ride we ended up talking about the march of time and how death comes for us all …and our websites.

Take The Session, for example. It’s been running for two and a half decades in one form or another. I plan to keep it running for many more decades to come. But I’m the weak link in that plan.

If I get hit by a bus tomorrow, The Session will keep running. The hosting is paid up for a while. The domain name is registered for as long as possible. But inevitably things will need to be updated. Even if no new features get added to the site, someone’s got to install updates to keep the underlying software safe and secure.

Remy and I discussed the long-term prospects for widening out the admin work to more people. But we also discussed smaller steps I could take in the meantime.

Like, there’s the actual content of the website. Now, I currently share exports from the database every week in JSON, CSV, and SQLite. That’s good. But you need to be a tech nerd to do anything useful with that data.

The more I talked about it with Remy, the more I realised that HTML would be the most useful format for the most people.

There’s a cute acronym in the world of digital preservation: LOCKSS. Lots Of Copies Keep Stuff Safe. If there were multiple copies of The Session’s content out there in the world, then I’d have a nice little insurance policy against some future catastrophe befalling the live site.

With the seed of the idea planted in my head, I waited until I had some time to dive in and see if this was doable.

Fortunately I had plenty of opportunity to do just that on some other train rides. When I was in Spain and France recently, I spent hours and hours on trains. For some reason, I find train journeys very conducive to coding, especially if you don’t need an internet connection.

By the time I was back home, the code was done. Here’s the result:

The Session archive: a static copy of the content on thesession.org.

If you want to grab a copy for yourself, go ahead and download this .zip file. Be warned that it’s quite large! The .zip file is over two gigabytes in size and the unzipped collection of web pages is almost ten gigabytes. I plan to update the content every week or so.

I’ve put a copy up on Netlify and I’m serving it from the subdomain archive.thesession.org if you want to check out the results without downloading the whole thing.

Because this is a collection of static files, there’s no search. But you can use your browser’s “Find in Page” feature to search within the (very long) index pages of each section of the site.

You don’t need a web server to click around between the pages: they should all work straight from your file system. Double-clicking any HTML file should give you a starting point.

I wanted to reduce the dependencies on each page to as close to zero as I could. All the CSS is embedded in the page. Likewise with most of the JavaScript (you’ll still need an internet connection to get audio playback and dynamic maps). This keeps the individual pages nice and self-contained. That means they can be shared around (as an email attachment, for example).

I’ve shared this project with the community on The Session and people are into it. If nothing else, it could be handy to have an offline copy of the site’s content on your hard drive for those situations when you can’t access the site itself.

Preventing automated sign-ups

The Session goes through periods of getting spammed with automated sign-ups. I’m not sure why. It’s not like they do anything with the accounts. They’re just created and then they sit there (until I delete them).

In the past I’ve dealt with them in an ad-hoc way. If the sign-ups were all coming from the same IP addresses, I could block them. If the sign-ups showed some pattern in the usernames or emails, I could use that to block them.

Recently though, there was a spate of sign-ups that didn’t have any patterns, all coming from different IP addresses.

I decided it was time to knuckle down and figure out a way to prevent automated sign-ups.

I knew what I didn’t want to do. I didn’t want to put any obstacles in the way of genuine sign-ups. There’d be no CAPTCHAs or other “prove you’re a human” shite. That’s the airport security model: inconvenience everyone to stop a tiny number of bad actors.

The first step I took was the bare minimum. I added two form fields—called “wheat” and “chaff”—that are randomly generated every time the sign-up form is loaded. There’s a connection between those two fields that I can check on the server.

Here’s how I’m generating the fields in PHP:

$saltstring = 'A string known only to me.';
$wheat = base64_encode(openssl_random_pseudo_bytes(16));
$chaff = password_hash($saltstring.$wheat, PASSWORD_BCRYPT);

See how the fields are generated from a combination of random bytes and a string of characters never revealed on the client? To keep it from going stale, this string—the salt—includes something related to the current date.

Now when the form is submitted, I can check to see if the relationship holds true:

if (!password_verify($saltstring.$_POST['wheat'], $_POST['chaff'])) {
    // Spammer!
}

That’s just the first line of defence. After thinking about it for a while, I came to the conclusion that it wasn’t enough to just generate some random form field values; I needed to generate random form field names.

Previously, the names for the form fields were easily-guessable: “username”, “password”, “email”. What I needed to do was generate unique form field names every time the sign-up page was loaded.

First of all, I create a one-time password:

$otp = base64_encode(openssl_random_pseudo_bytes(16));

Now I generate form field names by hashing that random value with known strings (“username”, “password”, “email”) together with a salt string known only to me.

$otp_hashed_for_username = md5($saltstring.'username'.$otp);
$otp_hashed_for_password = md5($saltstring.'password'.$otp);
$otp_hashed_for_email = md5($saltstring.'email'.$otp);

Those are all used for form field names on the client, like this:

<input type="text" name="<?php echo $otp_hashed_for_username; ?>">
<input type="password" name="<?php echo $otp_hashed_for_password; ?>">
<input type="email" name="<?php echo $otp_hashed_for_email; ?>">

(Remember, the name—or the ID—of the form field makes no difference to semantics or accessibility; the accessible name is derived from the associated label element.)

The one-time password also becomes a form field on the client:

<input type="hidden" name="otp" value="<?php echo $otp; ?>">

When the form is submitted, I use the value of that form field along with the salt string to recreate the field names:

$otp_hashed_for_username = md5($saltstring.'username'.$_POST['otp']);
$otp_hashed_for_password = md5($saltstring.'password'.$_POST['otp']);
$otp_hashed_for_email = md5($saltstring.'email'.$_POST['otp']);

If those form fields don’t exist, the sign-up is rejected.

As an added extra, I leave honeypot hidden form fields named “username”, “password”, and “email”. If any of those fields are filled out, the sign-up is rejected.
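
Putting those server-side checks together, the gatekeeping looks roughly like this (a sketch: the real code does more than this):

$rejected = false;
// The form fields with hashed names must be present…
if (!isset($_POST[$otp_hashed_for_username], $_POST[$otp_hashed_for_password], $_POST[$otp_hashed_for_email])) {
    $rejected = true;
}
// …and the honeypot fields must be empty
if (!empty($_POST['username']) || !empty($_POST['password']) || !empty($_POST['email'])) {
    $rejected = true;
}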

I put that code live and the automated sign-ups stopped straight away.

It’s not entirely foolproof. It would be possible to create an automated sign-up system that grabs the names of the form fields from the sign-up form each time. But this puts enough friction in the way to make automated sign-ups a pain.

You can view source on the sign-up page to see what the form fields are like.

I used the same technique on the contact page to prevent automated spam there too.

Speculation rules

There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.

Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).

The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".

Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.

That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.

I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.

I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:

{
  "prerender": [{
    "where": {
        "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}

The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).

I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:

Speculation-Rules: "/speculationrules.json"

I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.

Here’s the PHP I added to send that header:

header('Speculation-Rules: "/speculationrules.json"');

There’s one extra thing I had to do. The JSON file needs to be served with a MIME type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:

<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-type application/speculationrules+json
  </FilesMatch>
</IfModule>

A bit of a faff, that.

You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.

Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.

Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!
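
For what it’s worth, the rules syntax does let you carve out exceptions. It would look something like this (a sketch: the /logout URL is just an example):

{
  "prerender": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": { "href_matches": "/logout" } }
      ]
    },
    "eagerness": "moderate"
  }]
}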

Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.

Securing client-side JavaScript

I mentioned that I overhauled the JavaScript on The Session recently. That wasn’t just so that I could mess about with HTML web components. I’d been meaning to consolidate some scripts for a while.

Some of the pages on the site had inline scripts. These were usually one-off bits of functionality. But their presence meant that my content security policy wasn’t as tight as it could’ve been.

Being a community website, The Session accepts input from its users. Literally. I do everything I can to sanitise that input. It would be ideal if I could make sure that any JavaScript that slipped by wouldn’t execute. But as long as I had my own inline scripts, my content security policy had to allow them with 'unsafe-inline' in the script-src directive.

That’s why I wanted to refactor the JavaScript on my site and move everything to external JavaScript files.

In the end I got close, but there are still one or two pages with internal scripts. But that’s okay. I found a way to have my content security policy cake and eat it.

In my content security policy header I can specify that inline scripts are allowed, but only if they include a one-time token.

This one-time token is called a nonce. No, really. Stop sniggering. Naming things is hard. And occasionally unintentionally hilarious.

On the server, every time a page is requested it gets sent back with a header like this:

content-security-policy: script-src 'self' 'nonce-Cbb4kxOXIChJ45yXBeaq/w=='

That gobbledegook string is generated randomly every time. I’m using PHP to do this:

base64_encode(openssl_random_pseudo_bytes(16))
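
Putting that together, the server-side code is something along these lines (a sketch, with a variable name of my own choosing):

$nonce = base64_encode(openssl_random_pseudo_bytes(16));
header("Content-Security-Policy: script-src 'self' 'nonce-$nonce'");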

Then in the HTML I use the same string in any inline scripts on the page:

<script nonce="Cbb4kxOXIChJ45yXBeaq/w==">
…
</script>

Yes, HTML officially has an attribute called nonce.

It’s working a treat. The security headers for The Session are looking good. I have some more stuff in my content security policy—check out the details if you’re interested.

I initially thought I’d have to make an exception for the custom offline page on The Session. After all, that’s only going to be accessed when there is no server involved so I wouldn’t be able to generate a one-time token. And I definitely needed an inline script on that page in order to generate a list of previously-visited pages stored in a cache.

But then I realised that everything would be okay. When the offline page is cached, its headers are cached too. So the one-time token in the content security policy header still matches the one-time token used in the page.

Most pages on The Session don’t have any inline scripts. For a while, every page had an inline script in the head of the document like this:

<script nonce="Cbb4kxOXIChJ45yXBeaq/w==">
document.documentElement.classList.add('hasJS');
</script>

This is something I’ve been doing for years: using JavaScript to add a class to the HTML. Then I can use the presence or absence of that class to show or hide elements that require JavaScript. I have another class called requiresJS that I put on any elements that need JavaScript to work (like buttons for copying to the clipboard, for example).

Then in my CSS I’d write:

:not(.hasJS) .requiresJS {
  display: none;
}

If the hasJS class isn’t set, hide any elements with the requiresJS class.

I decided to switch over to using a scripting media query:

@media (scripting: none) {
  .requiresJS {
    display: none;
  }
}

This isn’t bulletproof by any means. It doesn’t account for browser extensions that disable JavaScript and it won’t get executed at all in older browsers. But I’m okay with that. I’ve put the destructive action in the more modern CSS:

I feel that the more risky action (hiding content) should belong to the more complex selector.

This means that there are situations where elements that require JavaScript will be visible, even if JavaScript isn’t available. But I’d rather that than the other way around: if those elements were hidden from browsers that could execute JavaScript, that would be worse.

My approach to HTML web components

I’ve been deep-diving into HTML web components over the past few weeks. I decided to refactor the JavaScript on The Session to use custom elements wherever it made sense.

I really enjoyed doing this, even though the end result for users is exactly the same as before. This was one of those refactors that was for me, and also for future me. The front-end codebase looks a lot more understandable and therefore maintainable.

Most of the JavaScript on The Session is good ol’ DOM scripting. Listen for events; when an event happens, make some update to some element. It’s the kind of stuff we might have used jQuery for in the past.

Chris invoked Betteridge’s law of headlines recently by asking Will Web Components replace React and Vue? I agree with his assessment. The reactivity you get with full-on frameworks isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM.

I’ve written about my preferred way to do DOM scripting: event.target.closest. One of the advantages to that approach is that even if the DOM gets updated—perhaps via Ajax—the event listening will still work.

Well, this is exactly the kind of thing that custom elements take care of for you. The connectedCallback method gets fired whenever an instance of the custom element is added to the document, regardless of whether that’s in the initial page load or later in an Ajax update.

So my client-side scripting style has updated over time:

  1. Adding event handlers directly to elements.
  2. Adding event handlers to the document and using event.target.closest.
  3. Wrapping elements in a web component that handles the event listening.

None of these progressions were particularly ground-breaking or allowed me to do anything I couldn’t do previously. But each progression improved the resilience and maintainability of my code.

Like Chris, I’m using web components to progressively enhance what’s already in the markup. In fact, looking at the code that Chris is sharing, I think we may be writing some very similar web components!

A few patterns have emerged for me…

Naming custom elements

Naming things is famously hard. Every time you make a new custom element you have to give it a name that includes a hyphen. I settled on the convention of using the first part of the name to echo the element being enhanced.

If I’m adding an enhancement to a button element, I’ll wrap it in a custom element that starts with button-. I’ve now got custom elements like button-geolocate, button-confirm, button-clipboard and so on.

Likewise if the custom element is enhancing a link, it will begin with a-. If it’s enhancing a form, it will begin with form-.

The name of the custom element tells me how it’s expected to be used. If I find myself wrapping a div with button-geolocate I shouldn’t be surprised when it doesn’t work.

Naming attributes

You can use any attributes you want on a web component. You made up the name of the custom element and you can make up the names of the attributes too.

I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.

So I use data- attributes. I’ve already got a hyphen in the name of my custom element, so it makes sense to have hyphens in my attributes too. And by using data- attributes, the browser gives me automatic reflection of the value in the dataset property.

Instead of getting a value with this.getAttribute('maximum') I get to use this.dataset.maximum. Nice and neat.
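
For example, a made-up button-counter element might look like this in the markup:

<button-counter data-maximum="10">
  <button>Add one</button>
</button-counter>

And inside its class definition, the attribute shows up in dataset:

// button-counter is a hypothetical element, just for illustration
customElements.define('button-counter', class extends HTMLElement {
  connectedCallback() {
    // this.dataset.maximum reflects the data-maximum attribute
    const maximum = Number(this.dataset.maximum);
  }
});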

The single responsibility principle

My favourite web components aren’t all-singing, all-dancing powerhouses. Rather they do one thing, often a very simple thing.

Here are some examples:

  • Jason’s aria-collapsable for toggling the display of one element when you click on another.
  • David’s play-button for adding a play button to an audio or video element.
  • Chris’s ajax-form for sending a form via Ajax instead of a full page refresh.
  • Jim’s user-avatar for adding a tooltip to an image.
  • Zach’s table-saw for making tables responsive.

All of those are HTML web components in that they extend your existing markup rather than JavaScript web components that are used to replace HTML. All of those are also unambitious by design. They each do one thing and one thing only.

But what if my web component needs to do two things?

I make two web components.

The beauty of custom elements is that they can be used just like regular HTML elements. And the beauty of HTML is that it’s composable.

What if you’ve got some text that you want to be a level-three heading and also a link? You don’t bemoan the lack of an element that does both things. You wrap an a element in an h3 element.

The same goes for custom elements. If I find myself adding multiple behaviours to a single custom element, I stop and ask myself if this should be multiple custom elements instead.

Take some of those button- elements I mentioned earlier. One of them copies text to the clipboard, button-clipboard. Another throws up a confirmation dialog to complete an action, button-confirm. Suppose I want users to confirm when they’re copying something to their clipboard (not a realistic example, I admit). I don’t have to create a new hybrid web component. Instead I wrap the button in the two existing custom elements.
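
In the markup, that nesting looks something like this (the button text is just an example):

<button-confirm>
  <button-clipboard>
    <button>Copy to clipboard</button>
  </button-clipboard>
</button-confirm>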

Rather than having a few powerful web components, I like having lots of simple web components. The power comes with how they’re combined. Like Unix pipes. And it has the added benefit of stopping my code getting too complex and hard to understand.

Communicating across components

Okay, so I’ve broken all of my behavioural enhancements down into single-responsibility web components. But what if one web component needs to have awareness of something that happens in another web component?

Here’s an example from The Session: the results page when you search for sessions in London.

There’s a map. That’s one web component. There’s a list of locations. That’s another web component. There are links for traversing backwards and forwards through the locations via Ajax. Those links are in web components too.

I want the map to update when the list of locations changes. Where should that logic live? How do I get the list of locations to communicate with the map?

Events!

When a list of locations is added to the document, it emits a custom event that bubbles all the way up. In fact, that’s all this component does.

You can call the event anything you want. It could be a newLocations event. That event is dispatched in the connectedCallback of the component.

Meanwhile in the map component, an event listener listens for any newLocations events on the document. When that event handler is triggered, the map updates.

The web component that lists locations has no idea that there’s a map on the same page. It doesn’t need to. It just needs to dispatch its event, no questions asked.
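
Here’s a sketch of both halves (the element names location-list and location-map are made up for illustration):

customElements.define('location-list', class extends HTMLElement {
  connectedCallback() {
    // Announce the arrival of a fresh list of locations
    this.dispatchEvent(new CustomEvent('newLocations', { bubbles: true }));
  }
});

customElements.define('location-map', class extends HTMLElement {
  connectedCallback() {
    // Listen at the document level so the event can come from anywhere on the page
    document.addEventListener('newLocations', () => this.update());
  }
  update() {
    // Redraw the map here
  }
});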

There’s nothing specific to web components here. Event-driven programming is a tried and tested approach. It’s just a little easier to do thanks to the connectedCallback method.

I’m documenting all this here as a snapshot of my current thinking on HTML web components when it comes to:

  • naming custom elements,
  • naming attributes,
  • the single responsibility principle, and
  • communicating across components.

I may well end up changing my approach again in the future. For now though, these ideas are serving me well.

Pickin’ dates

I had the opportunity to trim some code from The Session recently. That’s always a good feeling.

In this case, it was a progressive enhancement pattern that was no longer needed. Kind of like removing a polyfill.

There are a couple of places on the site where you can input a date. This is exactly what input type="date" is for. But when I was making the interface, the support for this type of input was patchy.

So instead the interface used three select dropdowns: one for days, one for months, and one for years. Then I did a bit of feature detection and if the browser supported input type="date", I replaced the three selects with one date input.

It was a little fiddly but it worked.

Fast forward to today and input type="date" is supported across the board. So I threw away the JavaScript and updated the HTML to use date inputs by default. Nice!

I was discussing date inputs recently when I was talking to students in Amsterdam:

They’re given a PDF inheritance-tax form and told to convert it for the web.

That form included dates. The dates were all in the past so the students wanted to be able to set a max value on the datepicker. Ideally that should be done on the server, but it would be nice if you could easily do it in the browser too.

Wouldn’t it be nice if you could specify past dates like this?

<input type="date" max="today">

Or for future dates:

<input type="date" min="today">

Alas, no such syntactic sugar exists in HTML so we need to use JavaScript.

This seems like an ideal use-case for HTML web components:

Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.

In this case, it would be nice to augment an existing input type="date" element. Something like this:

<input-date-past>
  <input type="date">
</input-date-past>

Here’s the JavaScript that does the augmentation:

customElements.define('input-date-past', class extends HTMLElement {
    constructor() {
        super();
    }
    connectedCallback() {
        this.querySelector('input[type="date"]').setAttribute('max', new Date().toISOString().substring(0,10));
    }
});

That’s it.

Here’s a CodePen where you can see it in action along with another HTML web component for future dates called, you guessed it, input-date-future.

See the Pen Date input HTML web components by Jeremy Keith (@adactio) on CodePen.
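
The future-dates version is the mirror image, setting min instead of max. Something along these lines:

customElements.define('input-date-future', class extends HTMLElement {
    connectedCallback() {
        this.querySelector('input[type="date"]').setAttribute('min', new Date().toISOString().substring(0,10));
    }
});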

Speedier tunes

I wrote a little while back about improving performance on The Session by reducing runtime JavaScript in favour of caching on the server. This is on the pages for tunes, where the SVGs for the sheetmusic are now inlined instead of being generated on the fly.

It worked. But I also wrote:

A page like that with lots of sheetmusic and plenty of comments is going to have a hefty page weight and a large DOM size. I’ve still got a fair bit of main-thread work happening, but now the bulk of it is style and layout, whereas previously I had the JavaScript overhead on top of that.

Take a tune like Out On The Ocean. It has 27 settings. That’s a lot of SVG markup that needs to be parsed, styled and rendered, even if it’s inline.

Then I remembered a very handy CSS property called content-visibility:

It enables the user agent to skip an element’s rendering work (including layout and painting) until it is needed — which makes the initial page load much faster.

Sounds great! But there are two gotchas.

The first gotcha is that if a browser doesn’t paint the element, it doesn’t know how much space the element should take up. So you need to provide dimensions. At the very least you need to provide a height value. Otherwise when the element comes into view and gets rendered, it pushes down on the content below it. You’d see a sudden jump in the scrollbar position.

The solution is to provide a value for contain-intrinsic-size. If your content is dynamic—from, say, a CMS—then you’re out of luck. You don’t know how long the content is.

Luckily, in my case, I could take a stab at it. I know how many lines of sheetmusic there are for each tune setting. Each line takes up roughly the same amount of space. If I multiply that amount of space by the number of lines then I’ve got a pretty good approximation of the height of the sheetmusic. I apply this with the contain-intrinsic-block-size property.
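
The back-of-the-envelope calculation on the server looks something like this (a sketch: the per-line height is an estimate, not the exact figure I use):

// $number_of_lines comes from counting the lines of ABC notation (not shown)
$height_per_line = 100; // rough pixel height of one line of sheetmusic
$intrinsic_block_size = $height_per_line * $number_of_lines;
$style = 'content-visibility: auto; contain-intrinsic-block-size: '.$intrinsic_block_size.'px;';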

So each piece of sheetmusic has an inline style attribute with declarations like this:

content-visibility: auto;
contain-intrinsic-block-size: 380px;

It works a treat. I did a before-and-after check with PageSpeed Insights on the page for Out On The Ocean. The “style and layout” part of the main thread work went down considerably. Total blocking time went from more than 600 milliseconds to less than 400 milliseconds.

Not a bad result for a little bit of CSS!

I said there was a second gotcha. That’s browser support.

Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working). I’ve said it before and I’ll say it again: the forgiving error-parsing in HTML and CSS is a killer feature of the web. Browsers just ignore what they don’t understand. That’s what makes progressive enhancement like this possible.

And actually, there’s something you can do for all browsers. Even browsers that don’t support content-visibility still understand containment. So they’ll understand contain-intrinsic-size. Pair that with a contain declaration like this to tell the browser that this chunk of content isn’t going to reflow or get repainted:

contain: layout paint;

Here’s what MDN says about contain:

The contain CSS property indicates that an element and its contents are, as much as possible, independent from the rest of the document tree. Containment enables isolating a subsection of the DOM, providing performance benefits by limiting calculations of layout, style, paint, size, or any combination to a DOM subtree rather than the entire page.

So if you’ve got a chunk of static content, you might as well apply contain to it.

Again, not bad for a little bit of CSS!

How green is my server?

The Session does very well in terms of performance. You can see the data from the Chrome UX Report (CrUX).

What’s good for performance is good for the environment. Sure enough, The Session gets a very high score from the website carbon calculator:

Hurrah! This web page achieves a carbon rating of A+

This is cleaner than 99% of all web pages globally

But under the details about hosting it says:

Oh no, it looks like this web page uses bog standard energy

The Session is hosted on DigitalOcean, who tend to be quite tight-lipped about their energy suppliers. Fortunately others have done some sleuthing to figure out which facilities are running on green energy.

One of the locations to get the green thumbs up is the Amsterdam facility housed by Equinix. That’s where The Session is hosted.

I’m glad that I was able to find out that the site is running on 100% renewable energy, but I wish I didn’t need to go searching to find this out. DigitalOcean need to be a lot more transparent about the energy sources for their hosting facilities.

Secure tunes

The caching strategy for The Session that I wrote about is working a treat.

There are currently about 45,000 different tune settings on the site. One week after introducing the server-side caching, over 40,000 of those settings are already cached.

But even though it’s currently working well, I’m going to change the caching mechanism.

The eagle-eyed amongst you might have raised an eagle eyebrow when I described how the caching happens:

The first time anyone hits a tune page, the ABCs get converted to SVGs as usual. But now there’s one additional step. I grab the generated markup and send it as an Ajax payload to an endpoint on my server. That endpoint stores the sheetmusic as a file in a cache.

I knew when I came up with this plan that there was a flaw. The endpoint that receives the markup via Ajax is accepting data from the client. That data could be faked by a malicious actor.

Sure, I’m doing a whole bunch of checks and sanitisation on the server, but there’s always going to be a way of working around that. You can never trust data sent from the client. I was kind of relying on security through obscurity …except it wasn’t even that obscure because I blogged about it.

So I’m switching over to using a headless browser to extract the sheetmusic. You may recall that I wrote:

I could spin up a headless browser, run the JavaScript and take a snapshot. But that’s a bit beyond my backend programming skills.

That’s still true. So I’m outsourcing the work to Browserless.

There’s a reason I didn’t go with that solution to begin with. Like I said, over 40,000 tune settings have already been cached. If I had used the Browserless API to do that work, it would’ve been quite pricey. But now that the flood is over and there’s just a trickle of caching happening, Browserless is a reasonable option.

Anyway, that security hole has now been closed. Thank you to everyone who wrote in to let me know about it. Like I said, I was aware of it, but it was good to have it confirmed.

Funnily enough, the security lesson here is the same as my conclusion when talking about performance:

If that means shifting the work from the browser to the server, do it!

Speedy tunes

Performance is a high priority for me with The Session. It needs to work for people all over the world using all kinds of devices.

My main strategy for ensuring good performance is to diligently apply progressive enhancement. The core content is available to any device that can render HTML.

To keep things performant, I’ve avoided as many assets (or, more accurately, liabilities) as possible. No unnecessary images. No superfluous JavaScript libraries. Not even any web fonts (gasp!). And definitely no third-party resources.

The pay-off is a speedy site. If you want to see lab data, run a page from The Session through Lighthouse. To see field data, take a look at data from the Chrome UX Report (CrUX).

But the devil is in the details. Even though most pages on The Session are speedy, the outliers have bothered me for a while.

Take a typical tune page on the site. The data is delivered from the server as HTML, which loads nice and quick. That data includes the notes for the tune settings, written in ABC notation, a nice lightweight text format.

Then the enhancement happens. Using Paul Rosen’s brilliant abcjs JavaScript library, those ABCs are converted into SVG sheetmusic.

So on tune pages there’s an additional download for that JavaScript library. That’s not so bad though—I’m using a service worker to cache that file so there’ll only ever be one initial network request.

If a tune has just a few different versions, the page remains nice and zippy. But if a tune has lots of settings, the computation starts to add up. Converting all those settings from ABC to SVG starts to take a cumulative toll on the main thread.

I pondered ways to avoid that conversion step. Was there some way of pre-generating the SVGs on the server rather than doing it all on the client?

In theory, yes. I could spin up a headless browser, run the JavaScript and take a snapshot. But that’s a bit beyond my backend programming skills, so I’ve taken a slightly different approach.

The first time anyone hits a tune page, the ABCs get converted to SVGs as usual. But now there’s one additional step. I grab the generated markup and send it as an Ajax payload to an endpoint on my server. That endpoint stores the sheetmusic as a file in a cache.
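
The client-side part of that is a little bit of Ajax once abcjs has done its rendering. Something along these lines (a sketch: the endpoint URL, class name, and data attribute are all made up for illustration):

document.querySelectorAll('.notes').forEach(setting => {
  fetch('/cache/sheetmusic', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      setting: setting.dataset.setting,
      svg: setting.innerHTML
    })
  });
});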

Next time someone hits that page, there’s a server-side check to see if the sheetmusic has been cached. If it has, send that down the wire embedded directly in the HTML.

The idea is that over time, most of the sheetmusic on the site will transition from being generated in the browser to being stored on the server.

So far it’s working out well.

Take a really popular tune like The Lark In The Morning. There are twenty settings, and each one has four parts. Previously that would have resulted in a few seconds of parsing and rendering time on the main thread. Now everything is delivered up-front.

I’m not out of the woods. A page like that with lots of sheetmusic and plenty of comments is going to have a hefty page weight and a large DOM size. I’ve still got a fair bit of main-thread work happening, but now the bulk of it is style and layout, whereas previously I had the JavaScript overhead on top of that.

I’ll keep working on it. But overall, the speed improvement is great. A typical tune page is now very speedy indeed.

It’s like a microcosm of web performance in general: respect your users’ time, network connection and battery life. If that means shifting the work from the browser to the server, do it!

Relative times

Last week Phil posted a little update about his excellent site, ooh.directory:

If you’re in the habit of visiting the Recently Updated Blogs page, and leaving it open, the times when each blog was updated will now keep up with the relentless passing of time.

Does that make sense? “3 minutes ago” will change to “4 minutes ago” and so on and on and on, until you refresh the page.

I thought that was a nice little addition, and I immediately thought of The Session. There are time elements all over the site with relative times as the text content: 2 minutes ago, 7 hours ago, 1 year ago, and so on. Those strings of text are generated on the server. But I figured it would be a nice enhancement to periodically update them in the browser after the page has loaded.

I viewed source to see how Phil was doing it. The code is nice and short, using a library called Day.js with a plug-in for relative time.

“Hang on”, I thought, “isn’t there some web standard for doing this kind of thing?” I had a vague memory of some JavaScript API for formatting dates and times.

Sure enough, we’ve now got the Intl.RelativeTimeFormat object. It’s got browser support across the board.

Here’s the code I wrote.

I’ve got a function that loops through all the time elements with datetime attributes. It compares the current timestamp to that value to get the elapsed time. Then that’s formatted using the format() method and output as innerText.

You need to tell the format() method which units you want to use: seconds, minutes, hours, days, etc. So there’s a little bit of looping to figure out which unit is most appropriate. If the elapsed time is less than a minute, use seconds. If the elapsed time is less than an hour, use minutes. If the elapsed time is less than a day, use hours. You get the idea.

It’s a pity there isn’t some kind of magic unit like “auto” to do this, but it’s not much extra code to figure it out.

Anyway, that function runs periodically using setInterval(). I’ve set it to run every 30 seconds in my gist. On The Session I’ve set it to one minute.
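
Here’s a simplified sketch of that function (not the exact code from the gist, but the same idea):

const formatter = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

const updateRelativeTimes = function () {
  document.querySelectorAll('time[datetime]').forEach(element => {
    // Elapsed time in seconds (negative values are in the past)
    let value = (new Date(element.getAttribute('datetime')) - Date.now()) / 1000;
    const units = [['second', 60], ['minute', 60], ['hour', 24], ['day', 7], ['week', 4.35], ['month', 12], ['year', Infinity]];
    for (const [unit, amount] of units) {
      if (Math.abs(value) < amount) {
        element.innerText = formatter.format(Math.round(value), unit);
        break;
      }
      value = value / amount;
    }
  });
};

setInterval(updateRelativeTimes, 60000);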

You’ll notice that I’m grabbing all the relevant time elements—using document.querySelectorAll('time[datetime]')—every time the function is run. That may seem inefficient. Couldn’t I just grab them once and then keep them stored as an array? But I want this to work even if the page contents have been updated with Ajax. (Do people even say “Ajax” any more? Get off my lawn, you pesky kids!)

I think I’ve written this code in an abstract way so that you should be able to drop it into any web page. For the calculations to work, your datetime attributes should include timezone information; if there’s no timezone info, UTC is assumed.

This was a fun little piece of functionality to play around with. Now I know a little more about this Intl.RelativeTimeFormat object. The way I’m using it is a classic example of progressive enhancement. If a browser doesn’t support it, or if my code breaks, it’s no big deal. The functionality is a little bonus that almost nobody will notice anyway. Just a small delighter …if you’re the kind of person who finds it delightful when relative time strings automatically update.

Add view transitions to your website

I must admit, when Jake told me he was leaving Google, I got very worried about the future of the View Transitions API.

To recap: Chrome shipped support for the API, but only for single page apps. That had me worried:

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

Well, the multi-page version still hasn’t yet shipped in Chrome stable, but it is available in Chrome Canary behind a flag, so it looks like it’s almost here!

Robin took the words out of my mouth:

Anyway, even this cynical jerk is excited about this thing.

Are you the kind of person who flips feature flags on in nightly builds to test new APIs?

Me neither.

But I made an exception for the View Transitions API. So did Dave:

I think the most telling predictor for the success of the multi-page View Transitions API – compared to all other proposals and solutions that have come before it – is that I actually implemented this one. Despite animations being my bread and butter for many years, I couldn’t be arsed to even try any of the previous generation of tools.

Dave’s post is an excellent step-by-step introduction to using view transitions on your website. To recap:

Enable these two flags in Chrome Canary:

chrome://flags#view-transition
chrome://flags#view-transition-on-navigation

Then add this meta element to the head of your website:

<meta name="view-transition" content="same-origin">

You could stop there. If you navigate around your site, you’ll see that the navigations now fade in and out nicely from one page to another.

But the real power comes with transitioning page elements. Basically, you want to say “this element on this page should morph into that element on that page.” And when I say morph, I mean morph. As Dave puts it:

Behind the scenes the browser is rasterizing (read: making an image of) the before and after states of the DOM elements you’re transitioning. The browser figures out the differences between those two snapshots and tweens between them similar to Apple Keynote’s “Magic Morph” feature, the liquid metal T-1000 from Terminator 2: Judgement Day, or the 1980s cartoon series Turbo Teen.

If those references are lost on you, how about the popular kids book series Animorphs?

Some classic examples would be:

  • A thumbnail of a video on one page morphs into the full-size video on the next page.
  • A headline and snippet of an article on one page morphs into the full article on the next page.

I’ve added view transitions to The Session. Where I’ve got index pages with lists of titles, each title morphs into the heading on the next page.

Again, Dave’s post was really useful here. Each transition needs a unique name, so I used Dave’s trick of naming each transition with the ID of the individual item being linked to.

In the recordings section, for example, there might be a link like this on the index page:

<a href="/recordings/7812" style="view-transition-name: recording-7812">The Banks Of The Moy</a>

Which, if you click on it, takes you to the page with this heading:

<h1><span style="view-transition-name: recording-7812">The Banks Of The Moy</span></h1>

Why the span? Well, like Dave, I noticed some weird tweening happening between block and inline elements. Dave solved the problem with width: fit-content on the block-level element. I just stuck in an extra inline element.

Anyway, the important thing is that the name of the view transition matches: recording-7812.

I also added a view transition to pages that have maps. The position of the map might change from page to page. Now there’s a nice little animation as you move from one page with a map to another page with a map.

thesession.org View Transitions

That’s all good, but I found myself wishing that I could just have those enhancements. Every single navigation on the site was triggering a fade in and out—the default animation. I wondered if there was a way to switch off the default fading.

There is! That default animation is happening on a view transition named root. You can get rid of it with this snippet of CSS:

::view-transition-image-pair(root) {
  isolation: auto;
}
::view-transition-old(root),
::view-transition-new(root) {
  animation: none;
  mix-blend-mode: normal;
  display: block;
}

Voila! Now only the view transitions that you name yourself will get applied.

You can adjust the timing, the easing, and the animation properties of your view transitions. Personally, I was happy with the default morph.

In fact, that’s one of the things I like about this API. It’s another good example of declarative design. I say what I want to happen, but I don’t need to specify the details. I’ll let the browser figure all that out.

That’s what’s got me so excited about this API. Yes, it’s powerful. But just as important, it’s got a very low barrier to entry.

Chris has gathered a bunch of examples together in his post Early Days Examples of View Transitions. Have a look around to get some ideas.

If you like what you see, I highly encourage you to add view transitions to your website now.

“But wait,” I hear you cry, “this isn’t supported in any public-facing browser yet!”

To which, I respond “So what?” It’s a perfect example of progressive enhancement. Adding one meta element and a smidgen of CSS will do absolutely no harm to your website. And while no-one will see your lovely view transitions yet, once browsers do start shipping with support for the API, your site will automatically get better.

Your website will be enhanced. Progressively.

Update: Simon Pieters quite rightly warns against adding view transitions to live sites before the API is done:

in general, using features before they ship in a browser isn’t a great idea since it can poison the feature with legacy content that might break when the feature is enabled. This has happened several times and renames or so were needed.

Good point. I must temper my excitement with pragmatism. Let me amend my advice:

I highly encourage you to experiment with view transitions on your website now.

Assumption

While I’m talking about the SVGs on The Session, I thought I’d share something else related to the rendering of the sheet music.

Like I said, I use the brilliant abcjs JavaScript library. It converts ABC notation into sheet music on the fly, which still blows my mind.

If you view source on the rendered SVG, you’ll see that the path and rect elements have been hard-coded with a colour value of #000000. That makes sense. You’d want to display sheet music on a light background, probably white. So it seems like a safe assumption.

Ah, but when it comes to front-end development, assumptions are like little hidden bombs just waiting to go off!

I got an email the other day:

Hi Jeremy,

I have vision problems, so I need to use high-contrast mode (using Windows 11). In high-contrast mode, the sheet-music view is just black!

Doh! All my CSS adapts just fine to high-contrast mode, but those hardcoded hex values in the SVG aren’t going to be affected by high-contrast mode.

Stepping back, the underlying problem was that I didn’t have a full separation of concerns. Most of my styling information was in my CSS, but not all. Those hex values in the SVG should really be encoded in my style sheet.

I couldn’t remove the hardcoded hex values—not without messing around with JavaScript beyond my comprehension—so I made the fix in CSS:

[fill="#000000"] {
  fill: currentColor;
}
[stroke="#000000"] {
  stroke: currentColor;
}

That seemed to do the trick. I wrote back to the person who had emailed me, and they were pleased as punch:

Well done, Thanks!  The staff, dots, etc. all appear as white on a black background.  When I click “Print”, it looks like it still comes out black on a white background, as expected.

I’m very grateful that they brought the issue to my attention. If they hadn’t, that assumption would still be lying in wait, preparing to ambush someone else.

Workaround

Two weeks ago, I wrote:

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Paul Rosen, who makes abcjs, the JavaScript library that renders sheet music on The Session, managed to get a fix out pretty quickly. But I use an older version of the library and updating it would introduce some side-effects that would take me a while to work around. So that option wasn’t available to me.

In this situation, when the problem is caused by a browser bug, the correct course of action is to file a bug with the browser. That had already been done. Now all I could do was twiddle my thumbs and wait for the next release of the browser, which would hopefully ship with the fix.

But I figured I may as well try to find a temporary workaround in the meantime.

At first, I looked at diving into the internals of the JavaScript—that’s where the instructions are given for drawing the SVGs.

But then I stopped and thought, “If the problem is with the rendering of the SVG, maybe CSS can help.”

I started messing around with SVG-specific CSS properties like stroke, fill, and so on. With dev tools open, I started targeting the paths that acted as bar lines in the sheet music, playing around with widths, opacities, and fills.

It was the debugging equivalent of throwing spaghetti at the wall. Remarkably, it actually worked.

I found a solution with this nonsensical bit of CSS:

stroke: currentColor;
stroke-opacity: 0;

For some reason, rather than making all the barlines disappear, this ensured they were visible.

It’s the worst kind of hacky fix—the kind where you have no idea why it works, but it does.

So I shipped it.

And at pretty much exactly the same time, a new version of Firefox dropped …with the bug fixed.

I can’t deny that there was a certain satisfaction in being able to work around a browser bug. But there’s much more satisfaction in deleting the hacky workaround when it’s no longer needed.

Progressive disclosure with HTML

Robin penned a little love letter to the details element. I agree. It is a joyous piece of declarative power.

That said, don’t go overboard with it. It’s not a drop-in replacement for more complex widgets. But it is a handy encapsulation of straightforward progressive disclosure.

Just last week I added a couple of more details elements to The Session …kind of. There’s a bit of server-side conditional logic involved to determine whether details is the right element.

When you’re looking at a tune, one of the pieces of information you see is how many recordings there are of that tune. Now if there are a lot of recordings, then there’s some additional information about which other tunes this one gets recorded with. That information is extra. Mere details, if you will.

You can see it in action on this tune listing. Thanks to the details element, the extra information is available to those who want it, but by default that information is tucked away—very handy for not clogging up that part of the page.

<details>
<summary>There are 181 recordings of this tune.</summary>
This tune has been recorded together with
<ul>
<li>…</li>
<li>…</li>
<li>…</li>
</ul>
</details>

Likewise, each tune page includes any aliases for the tune (in Irish music, the same tune can have many different titles—and the same title can be attached to many different tunes). If a tune has just a handful of aliases, they’re displayed in situ. But once you start listing out more than twenty names, it gets overwhelming.

The details element rides to the rescue once again.

Compare the tune I mentioned above, which only has a few aliases, to another tune that is known by many names.

Again, the main gist is immediately available to everyone—how many aliases are there? But if you want to go through them all, you can toggle that details element open.

You can effectively think of the summary element as the TL;DR of HTML.

<details>
<summary>There are 31 other names for this tune.</summary>
<p>Also known as…</p>
</details>

There’s another classic use of the details element: frequently asked questions. In the case of The Session, I’ve marked up the house rules and FAQs inside details elements, with the rule or question as the summary.

But there’s one house rule that’s most important (“Be civil”) so that details element gets an additional open attribute.

<details open>
<summary>Be civil</summary>
<p>Contributions should be constructive and polite, not mean-spirited or contributed with the intention of causing trouble.</p>
</details>

Web Audio API update on iOS

I documented a weird bug with web audio on iOS a while back:

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

I found a workaround but it was really hacky. By playing a one-second long silent mp3 file using audio, you could “kick” the sound into behaving. Then you can use the Web Audio API and it would play consistently.

Well, that’s all changed with the latest release of Mobile Safari. Now what happens is that the Web Audio stuff plays …for one second. And then stops.

I removed the hacky workaround and the Web Audio API started behaving itself again …but your device can’t be set to silent.

The good news is that the Web Audio behaviour seems to be consistent now. It only plays if the device isn’t muted. This restriction doesn’t apply to video and audio elements; they will still play even if your device is set to silent.

This discrepancy between the two different ways of playing audio is kind of odd, but at least now the Web Audio behaviour is predictable.

You can hear the Web Audio API in action by going to any tune on The Session and pressing the “play audio” button.

Tweaking navigation labelling

I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?

That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.
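
To make that concrete, here’s a hedged sketch of what requesting an alternative format might look like from client-side JavaScript on a page of the site. The tune ID is a placeholder and I’m assuming a format query parameter, so treat the specifics as illustrative rather than definitive:

<script>
// Hypothetical example: same URL structure as the regular page,
// but asking for a JSON representation instead of HTML.
fetch('/tunes/1?format=json')
  .then((response) => response.json())
  .then((tune) => console.log(tune));
</script>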

This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.

Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:

  • /things
  • /things/new
  • /things/search

And then an individual item can be found at:

  • /things/ID

That’s all nice and predictable, and the naming of the URLs matches what you’d expect to find: tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:

/recordings

When I was coming up with the URL structure twenty years ago, it was obvious what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on demand. The bandwidth constraints and technical limitations of the time ruled that out.

Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.

So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.

Here’s my dilemma: if I update the label, should I also update the URL structure?

Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.

If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.

Is that okay? Am I the only one that would be bothered by that?

I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:

  • /discography
  • /discography/new
  • /discography/search
  • /discography/ID

It doesn’t seem as tidy as:

  • /recordings
  • /recordings/new
  • /recordings/search
  • /recordings/ID

But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.

I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.

Tweaking navigation sizing

Gerry talks about “top tasks” a lot. He literally wrote the book on it:

Top tasks are what matter most to your customers.

Seems pretty obvious, right? But it’s actually pretty rare to see top tasks presented any differently than other options.

Look at the global navigation on most websites. Typically all the options are given equal prominence. Even the semantics under the hood often reflect this egalitarian ideal, with each option marked up as a list item in an unordered list. All the navigation options are equal, but I bet that the reality for most websites is that some navigation options are more equal than others.

I’ve been guilty of this on The Session. The site-wide navigation shows a number of options: tunes, events, discussions, etc. Each one is given equal prominence, but I can tell you without even looking at my server logs that 90% of the traffic goes to the tunes section—that’s the beating heart of The Session. That’s why the home page has a search form that defaults to searching for tunes.

I wanted the navigation to reflect the reality of what people are coming to the site for. I decided to make the link to the tunes section more prominent by bumping up the font size a bit.
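
In case you’re wondering, the CSS for this kind of thing is tiny. Here’s a rough sketch; I’m assuming a hypothetical top-task class on the link to the tunes section, which isn’t necessarily how the markup on The Session is actually written:

<style>
/* hypothetical class on the “tunes” link in the site-wide navigation */
nav .top-task {
  font-size: 1.25em;
}
</style>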

I was worried about how weird this might look; we’re so used to seeing all navigation items presented equally. But I think it worked out okay (though it might take a bit of getting used to if you’re accustomed to the previous styling). It helps that “tunes” is a nice short word, so bumping up the font size on that word doesn’t jostle everything else around.

I think this adjustment is working well for this situation where there’s one very clear tippy-top task. I wouldn’t want to apply it across the board, making every item in the navigation proportionally bigger or smaller depending on how often it’s used. That would end up looking like a ransom note.

But giving one single item prominence like this tweaks the visual hierarchy just enough to favour the option that’s most likely to be what a visitor wants.

That last bit is crucial. The visual adjustment reflects what visitors want, not what I want. You could adjust the size of a navigation option that you want to drive traffic to, but in the long run, all you’re going to do is train people to trust your design less.

You don’t get to decide what your top task is. The visitors to your website do. Trying to foist an arbitrary option on them would be the tail wagging the dog.

Anyway, I’m feeling a lot better about the site-wide navigation on The Session now that it reflects reality a little bit more. Heck, I may even bump that font size up a little more.

Overloading buttons

It’s been almost two years since I added audio playback on The Session. The interface is quite straightforward. For any tune setting, there’s a button that says “play audio”. When you press that button, audio plays and the button’s text changes to “pause audio.”

By updating the button’s text like this, I’m updating the button’s accessible name. In other situations, where the button text doesn’t change, you can indicate whether a button is active or not by toggling the aria-pressed attribute. I’ve been doing that on the “share” buttons that act as the interface for a progressive disclosure. The label on the button—“share”—doesn’t change when the button is pressed. For that kind of progressive disclosure pattern, the button also has an aria-controls and aria-expanded attribute.
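
For context, here’s roughly what that share pattern looks like, simplified and with hypothetical IDs rather than the actual markup from The Session:

<button aria-pressed="false" aria-expanded="false" aria-controls="share-options">
share
</button>
<div id="share-options" hidden>
<!-- sharing interface goes here -->
</div>

<script>
// Hypothetical toggle: the label stays “share”; only the state attributes change.
const shareButton = document.querySelector('[aria-controls="share-options"]');
const sharePanel = document.querySelector('#share-options');

shareButton.addEventListener('click', () => {
  const isOpen = shareButton.getAttribute('aria-expanded') === 'true';
  shareButton.setAttribute('aria-expanded', String(!isOpen));
  shareButton.setAttribute('aria-pressed', String(!isOpen));
  sharePanel.hidden = isOpen;
});
</script>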

From all the advice I’ve read about button states, you should either update the accessible name or toggle the aria-pressed attribute, but not both. Doing both would lead to the confusing situation of a button labelled “pause audio” also being announced as “pressed” when in fact the audio is playing.

That was all fine until I recently added some more functionality to The Session. As well as being able to play back audio, you can now adjust the tempo of the playback. The interface element for this is a slider: input type="range".

But this means that the “play audio” button now does two things. It plays the audio, but it also acts as a progressive disclosure control, revealing the tempo slider. The button is simultaneously a push button for playing and pausing music, and a toggle button for showing and hiding another interface element.

So should I be toggling the aria-pressed attribute now, even though the accessible name is changing? Or is it enough to have the relationship defined by aria-controls and the state defined by aria-expanded?

Based on past experience, my gut feeling is that I’m probably using too much ARIA. Maybe it’s an anti-pattern to use both aria-expanded and aria-pressed on a progressive disclosure control.

I’m kind of rubber-ducking here, and now that I’ve written down what I’m thinking, I’m pretty sure I’m going to remove the toggling of aria-pressed in any situation where I’m already toggling aria-expanded.
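
In practice, that would mean markup and behaviour something like this. Again, this is a hedged sketch with hypothetical IDs and values, not the actual code running on The Session:

<button aria-expanded="false" aria-controls="tempo-control">
play audio
</button>
<div id="tempo-control" hidden>
<label>
Tempo
<input type="range" min="60" max="180" value="120">
</label>
</div>

<script>
// Hypothetical sketch: the accessible name changes (“play audio” / “pause audio”),
// the disclosure state lives in aria-expanded, and aria-pressed isn’t used at all.
const playButton = document.querySelector('[aria-controls="tempo-control"]');
const tempoPanel = document.querySelector('#tempo-control');

playButton.addEventListener('click', () => {
  const isPlaying = playButton.getAttribute('aria-expanded') === 'true';
  playButton.setAttribute('aria-expanded', String(!isPlaying));
  playButton.textContent = isPlaying ? 'play audio' : 'pause audio';
  tempoPanel.hidden = isPlaying;
  // …start or stop the Web Audio API playback here
});
</script>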

What I really need to do is enlist the help of actual screen reader users. There are a number of members of The Session who use screen readers. I should get in touch and see if the new functionality makes sense to them.