I wrote a book about service workers. It’s called Going Offline. It was first published by A Book Apart in 2018. Now it’s available to read for free online.
Needless to say the web book works offline. Once you go to goingoffline.adactio.com you can add it to the homescreen of your mobile device or add it to the dock on your Mac. After that, you won’t need a network connection.
The book is free to read. Properly free. Not the kind of “free” where you have to supply an email address first. Why would I make you go to the trouble of generating a burner email account?
The site has no analytics. No tracking. No third-party scripts of any kind whatsoever. By complete coincidence, the site is fast. Funny that.
For the styling of this web book, I tweaked the stylesheet I used for HTML5 For Web Designers. I updated it a little bit to use logical properties, some fluid typography and view transitions.
In the process of converting the book to HTML, I got reacquainted with what I had written almost seven years ago. It was kind of fun to approach it afresh. I think it stands up pretty darn well.
UX London isn’t the only event from Clearleft coming your way in 2025. There’s a brand new spin-off event dedicated to user research happening in February. It’s called Research By The Sea.
I’m not curating this one, though I will be hosting it. The curation is being carried out most excellently by Benjamin, who has written more about how he’s doing it:
We’ve invited some of the best thinkers and doers in the research space to explore how researchers might respond to today’s most gnarly and pressing problems. They’ll challenge current perspectives, tools, practices and thinking styles, and provide practical steps for getting started today to shape a better tomorrow.
If that sounds like your cup of tea, you should put February 27th 2025 in your calendar and grab yourself a ticket.
Although I’m not involved in curating the line-up for the event, I offered Benjamin my swor… my web dev skillz. I made the website for Research By The Sea and I really enjoyed doing it!
I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.
I took the same approach with Research By The Sea. I had a design language to work with, based on UX London, but with more of a playful, brighter feel. The idea was that the website (and the event) should feel connected to UX London, while also being its own thing.
I kept the typography of the UX London site more or less intact. The page structure is also very similar. That was my foundation. From there I was free to explore some other directions.
I took the opportunity to explore some new features of CSS. But before I talk about the newer stuff, I want to mention the bits of CSS that I don’t consider new. These are the things that are just the way things are done ‘round here.
Custom properties. They’ve been around for years now, and they’re such a life-saver, especially on a project like this where I’m messing around with type, colour, and spacing. Even on a small site like this, it’s still worth having a section at the start where you define your custom properties.
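Something like this at the top of the stylesheet (the names and values here are made up for illustration):

:root {
  /* Defined once, used everywhere */
  --color-accent: #0b5bd3;
  --space-small: 0.5rem;
  --space-large: 2rem;
}
.card {
  padding: var(--space-small) var(--space-large);
  border-color: var(--color-accent);
}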
Logical properties. Again, they’ve been around for years. At this point I’ve trained my brain to use them by default. Now when I see a left, right, width or height in a style sheet, it looks like a bug to me.
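If you’re used to physical properties, the translation looks like this (a small sketch):

/* Physical properties—these now look like bugs to me */
.box {
  margin-left: 1rem;
  width: 20rem;
}

/* Logical equivalents—they adapt to writing mode and direction */
.box {
  margin-inline-start: 1rem;
  inline-size: 20rem;
}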
Fluid type. It’s kind of a natural extension of responsive design to me. If a website’s typography doesn’t adjust to my viewport, it feels slightly broken. On this project I used Utopia because I wanted different type scales as the viewport increased. On other projects I’ve just used one clamp declaration on the body element, which can also get the job done.
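That single-declaration approach looks something like this (the values are illustrative, not the ones generated for this site):

body {
  /* Scales smoothly between 1rem and 1.25rem as the viewport grows */
  font-size: clamp(1rem, 0.8rem + 0.5vw, 1.25rem);
}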
Okay, so those are the things that feel standard to me. So what could I play around with that was new?
View transitions. So easy! Just point to an element on two different pages and say “Hey, do a magic move!” You can see this in action with the logo as you move from the homepage to, say, the venue page. I’ve also added view transitions to the speaker headshots on the homepage so that when you click through to their full page, you get a nice swoosh.
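Here’s the shape of that “magic move”—a sketch, assuming the cross-document opt-in (the selector is made up):

/* Opt in to view transitions between pages */
@view-transition {
  navigation: auto;
}

/* Give the element the same name on both pages and
   the browser morphs it from one to the other */
.logo {
  view-transition-name: logo;
}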
Unless, like me, you’re using Firefox. In that case, you won’t see any view transitions. That’s okay. They are very much an enhancement. Speaking of which…
Scroll-driven animations. You’ll only get these in Chromium browsers right now, but again, they’re an enhancement. I’ve got multiple background images—a bunch of cute SVG shapes. I’m using scroll-driven animations to change the background positions and sizes as you scroll. It’s a bit silly, but hopefully kind of cute.
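Stripped down, the pattern looks something like this (the keyframe values are placeholders, not the ones on the real site):

@keyframes drift {
  to {
    background-position: 50% 100%;
    background-size: 150%;
  }
}

body {
  animation-name: drift;
  animation-timing-function: linear;
  /* Drive the animation with the scroll position instead of the clock */
  animation-timeline: scroll();
}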
You might be wondering how I calculated the movements of each background image. Good question. I basically just messed around with the values. I had fun! But imagine what an actually-skilled interaction designer could do.
That brings up an interesting observation about both view transitions and scroll-driven animations: Figma will not help you here. You need to be in a web browser with dev tools popped open. You’ve got to roll up your sleeves and get your hands into the machine. I know that sounds intimidating, but it’s also surprisingly enjoyable and empowering.
Oh, and I made sure to wrap both the view transitions and the scroll-driven animations in a prefers-reduced-motion: no-preference @media query.
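So the declarations from the sketches above actually live inside something like this:

@media (prefers-reduced-motion: no-preference) {
  @view-transition {
    navigation: auto;
  }
  .logo {
    view-transition-name: logo;
  }
  body {
    animation-name: drift;
    animation-timing-function: linear;
    animation-timeline: scroll();
  }
}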
I’m pleased with how the website turned out. It feels fun. More importantly, it feels fast. There is zero JavaScript. That’s the main reason why it’s very, very performant (and accessible).
Smooth transitions across pages; smooth animations as you scroll: it’s great what you can do with just HTML and CSS.
I have a richer picture of the group of people in my feed reader than I did of the people I regularly interacted with on social media platforms like Instagram.
There’s also the blessed lack of any algorithm:
Because blogs are much quieter than social media, there’s also the ability to switch off that awareness that Someone Is Always Watching.
This conduit is anti-lock-in, it works for nearly the whole internet. It is surveillance-resistant, far more accessible than the web or any mobile app interface.
Like Lucy, he emphasises the lack of algorithm:
By default, you’ll get everything as it appears, in reverse-chronological order.
Does that remind you of anything? Right: this is how social media used to work, before it was enshittified. You can single-handedly disenshittify your experience of virtually the entire web, just by switching to RSS, traveling back in time to the days when Facebook and Twitter were more interested in showing you the things you asked to see, rather than the ads and boosted content someone else would pay to cram into your eyeballs.
The only algorithm at work in my feed reader—or on Mastodon—is good old-fashioned serendipity, when posts just happened to rhyme or resonate. Like this morning, when I read this from Alice:
There is no better feeling than walking along, lost in my own thoughts, and feeling a small hand slip into mine. There you are. Here I am. I love you, you silly goose.
I pass a mother and daughter, holding hands. The little girl is wearing a sequin-covered jacket. She looks up at her mother who says “…And the sun’s going to come out and you’re just going to shine and shine and shine.”
If you say content-visibility: auto you’re telling the browser not to bother calculating the layout and paint for an element until it needs to. But you need to combine it with the contain-intrinsic-block-size property so that the browser knows how much space to leave for the element.
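In practice the two declarations travel together—something like this (the selector and size are placeholders):

.long-piece-of-content {
  content-visibility: auto;
  /* Reserve an estimated amount of space so the scrollbar
     doesn’t jump around as things get rendered */
  contain-intrinsic-block-size: auto 300px;
}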
I mentioned the browser support:
Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working).
But …I think I’ve discovered a little bug in Safari’s implementation.
(I say I think it’s a bug with the browser because, like Jim, I’ve made the mistake in the past of thinking I had discovered a browser bug when in fact it was something caused by a browser extension. And when I say “in the past”, I mean yesterday.)
So here’s the issue: if you apply content-visibility: auto to an element that contains an SVG, and that SVG contains a text element, then Safari never paints that text to the screen.
To see an example, take a look at the fourth setting of Cooley’s reel on The Session archive. There’s a text element with the word “slide” (actually the text is inside a tspan element inside a text element). On Safari, that text never shows up.
I’m using a link to the archive of The Session I created recently rather than the live site because on the live site I’ve removed the content-visibility declaration for Safari until this bug gets resolved.
I’ve also created a reduced test case on Codepen. The only HTML is the element containing the SVGs. The only CSS—apart from the content-visibility stuff—is just a little declaration to push the content below the viewport so you have to scroll it into view (which is when the bug happens).
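The gist of the reduced test case is something like this (simplified—the real Codepen has multiple SVGs):

div {
  /* Push the element below the viewport so it has to be scrolled into view */
  margin-block-start: 150vh;
  content-visibility: auto;
  contain-intrinsic-block-size: auto 100px;
}

<!-- In Safari, the word "slide" never gets painted -->
<div>
  <svg viewBox="0 0 100 50">
    <text x="10" y="30"><tspan>slide</tspan></text>
  </svg>
</div>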
I always like getting together with Remy. We usually end up discussing sci-fi books we’re reading, commiserating with one another about conference-organising, discussing the minutiae of browser APIs, or talking about the big-picture vision of the World Wide Web.
On this train ride we ended up talking about the march of time and how death comes for us all …and our websites.
Take The Session, for example. It’s been running for two and a half decades in one form or another. I plan to keep it running for many more decades to come. But I’m the weak link in that plan.
If I get hit by a bus tomorrow, The Session will keep running. The hosting is paid up for a while. The domain name is registered for as long as possible. But inevitably things will need to be updated. Even if no new features get added to the site, someone’s got to install updates to keep the underlying software safe and secure.
Remy and I discussed the long-term prospects for widening out the admin work to more people. But we also discussed smaller steps I could take in the meantime.
Like, there’s the actual content of the website. Now, I currently share exports from the database every week in JSON, CSV, and SQLite. That’s good. But you need to be a tech nerd to do anything useful with that data.
The more I talked about it with Remy, the more I realised that HTML would be the most useful format for the most people.
There’s a cute acronym in the world of digital preservation: LOCKSS. Lots Of Copies Keep Stuff Safe. If there were multiple copies of The Session’s content out there in the world, then I’d have a nice little insurance policy against some future catastrophe befalling the live site.
With the seed of the idea planted in my head, I waited until I had some time to dive in and see if this was doable.
Fortunately I had plenty of opportunity to do just that on some other train rides. When I was in Spain and France recently, I spent hours and hours on trains. For some reason, I find train journeys very conducive to coding, especially if you don’t need an internet connection.
By the time I was back home, the code was done. Here’s the result:
If you want to grab a copy for yourself, go ahead and download this .zip file. Be warned that it’s quite large! The .zip file is over two gigabytes in size and the unzipped collection of web pages is almost ten gigabytes. I plan to update the content every week or so.
I’ve put a copy up on Netlify and I’m serving it from the subdomain archive.thesession.org if you want to check out the results without downloading the whole thing.
Because this is a collection of static files, there’s no search. But you can use your browser’s “Find in Page” feature to search within the (very long) index pages of each section of the site.
You don’t need a web server to click around between the pages: they should all work straight from your file system. Double-clicking any HTML file should give you a starting point.
I wanted to reduce the dependencies on each page to as close to zero as I could. All the CSS is embedded in the page. Likewise with most of the JavaScript (you’ll still need an internet connection to get audio playback and dynamic maps). This keeps the individual pages nice and self-contained. That means they can be shared around (as an email attachment, for example).
I’ve shared this project with the community on The Session and people are into it. If nothing else, it could be handy to have an offline copy of the site’s content on your hard drive for those situations when you can’t access the site itself.
This seems to be the attitude of many of my fellow nerds—designers and developers—when presented with tools based on large language models that produce dubious outputs based on the unethical harvesting of other people’s work and requiring staggering amounts of energy to run:
This is the future! I need to start using these tools now, even if they’re flawed, because otherwise I’ll be left behind. They’ll only get better. It’s inevitable.
Whereas this seems to be the attitude of those same designers and developers when faced with stable browser features that can be safely used today without frameworks or libraries:
The Session goes through periods of getting spammed with automated sign-ups. I’m not sure why. It’s not like they do anything with the accounts. They’re just created and then they sit there (until I delete them).
In the past I’ve dealt with them in an ad-hoc way. If the sign-ups were all coming from the same IP addresses, I could block them. If the sign-ups showed some pattern in the usernames or emails, I could use that to block them.
Recently though, there was a spate of sign-ups that didn’t have any patterns, all coming from different IP addresses.
I decided it was time to knuckle down and figure out a way to prevent automated sign-ups.
I knew what I didn’t want to do. I didn’t want to put any obstacles in the way of genuine sign-ups. There’d be no CAPTCHAs or other “prove you’re a human” shite. That’s the airport security model: inconvenience everyone to stop a tiny number of bad actors.
The first step I took was the bare minimum. I added two form fields—called “wheat” and “chaff”—that are randomly generated every time the sign-up form is loaded. There’s a connection between those two fields that I can check on the server.
Here’s how I’m generating the fields in PHP:
// A secret salt that never leaves the server
$saltstring = 'A string known only to me.';
// "Wheat": sixteen random bytes, encoded to sit safely in a form field
$wheat = base64_encode(openssl_random_pseudo_bytes(16));
// "Chaff": a bcrypt hash tying the wheat to the secret salt
$chaff = password_hash($saltstring.$wheat, PASSWORD_BCRYPT);
See how the fields are generated from a combination of random bytes and a string of characters never revealed on the client? To keep it from going stale, this string—the salt—includes something related to the current date.
Now when the form is submitted, I can check to see if the relationship holds true:
// Recompute the hash; if the wheat and chaff no longer match, it's a bot
if (!password_verify($saltstring.$_POST['wheat'], $_POST['chaff'])) {
// Spammer!
}
That’s just the first line of defence. After thinking about it for a while, I came to the conclusion that it wasn’t enough to just generate some random form field values; I needed to generate random form field names.
Previously, the names for the form fields were easily-guessable: “username”, “password”, “email”. What I needed to do was generate unique form field names every time the sign-up page was loaded.
Now I generate a random value—a one-time password, if you will—every time the sign-up page is loaded. The form field names are generated by hashing that one-time password with known strings (“username”, “password”, “email”) together with a salt string known only to me.
(Remember, the name—or the ID—of the form field makes no difference to semantics or accessibility; the accessible name is derived from the associated label element.)
The one-time password also becomes a form field on the client.
If those form fields don’t exist, the sign-up is rejected.
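Here’s a rough sketch of the idea in PHP. This isn’t my exact code—the sha256 hashing and the variable names are just for illustration:

// A fresh one-time password for every page load
$onetime = base64_encode(openssl_random_pseudo_bytes(16));
// Derive unguessable field names from known strings,
// the one-time password, and the secret salt
$usernamefield = hash('sha256', $saltstring.$onetime.'username');
$passwordfield = hash('sha256', $saltstring.$onetime.'password');
$emailfield = hash('sha256', $saltstring.$onetime.'email');
// The one-time password travels with the form submission,
// so the same field names can be recomputed on the server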
As an added extra, I leave honeypot hidden form fields named “username”, “password”, and “email”. If any of those fields are filled out, the sign-up is rejected.
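A honeypot field can be as simple as this (how you hide it is up to you—this is just one way):

<!-- Humans never see this; bots tend to fill it in anyway -->
<div hidden>
  <label for="username">Leave this field empty</label>
  <input type="text" id="username" name="username" tabindex="-1" autocomplete="off">
</div>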
I put that code live and the automated sign-ups stopped straight away.
It’s not entirely foolproof. It would be possible to create an automated sign-up system that grabs the names of the form fields from the sign-up form each time. But this puts enough friction in the way to make automated sign-ups a pain.
You can view source on the sign-up page to see what the form fields are like.
I used the same technique on the contact page to prevent automated spam there too.
Hook up an input element with a datalist element using the list and id attributes and you’re done. You can even use a bit of Ajax to dynamically update the option elements inside the datalist in response to the user’s input. The browser takes care of all the interaction. If you try to roll your own combobox implementation, it’s almost certainly going to involve a lot of JavaScript and still probably won’t account for all use cases.
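The basic pattern is gloriously small (the tune names here are just examples):

<label for="tune">Search for a tune</label>
<input type="search" id="tune" name="tune" list="tunes">
<datalist id="tunes">
  <option value="Cooley's"></option>
  <option value="The Silver Spear"></option>
  <option value="The Butterfly"></option>
</datalist>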
Safari on iOS—and therefore all browsers on iOS—didn’t support datalist for quite a while. But once it finally shipped, it worked really nicely. The options showed up just like autocomplete suggestions above the keyboard.
But that broke a while back.
The suggestions still appeared, but if you tapped on one of them, nothing happened. The input element didn’t get updated. You had to tap on a little downward arrow inside the input in order to see the list of options.
That was really frustrating for anybody on iOS using The Session. By far the most common task on the site is searching for a tune, something that’s greatly (progressively) enhanced with a dynamically-updating datalist.
I just updated to iOS 18 specifically to see if this bug has been fixed, and it has:
Fixed updating the input value when selecting an option from a datalist element.
Hallelujah!
But now there’s some additional behaviour that’s a little weird.
As well as showing the options in the autocomplete list above the keyboard, Safari on iOS—and therefore all browsers on iOS—also pops up the options as a list (as if you had tapped on that downward arrow). If the list is more than a few options long, it completely obscures the input element you’re typing into!
I’m not sure if this is a bug or if it’s the intended behaviour. It feels like a bug, but I don’t know if I should file something.
For now, I’ve updated the datalist elements on The Session to only ever hold three option elements in order to minimise the problem. Seeing as the autosuggest list above the keyboard only ever shows a maximum of three suggestions anyway, this feels like a reasonable compromise.
Funnily enough, many build tools advertise their superior “Developer Experience” (DX). For my money, there’s no better DX than shipping code straight to the browser and not having to worry about some cryptic node_modules error in between.
Making websites without a build step is a gift to your future self. When you open that project six months or a year or two years later, there’ll be no faffing about with npm updates, installs, or vulnerabilities.
Need to edit the CSS? You edit the CSS. Need to change the markup? You change the markup.
It’s remarkably freeing. It’s also very, very performant.
If you’re thinking that your next project couldn’t possibly be made without a build step, let me tell you about a phrase I first heard in the indie web community: “Manual ‘till it hurts”. It’s basically a two-step process:
Start doing what you need to do by hand.
When that becomes unworkable, introduce some kind of automation.
It’s remarkable how often you never reach step two.
I’m not saying premature optimisation is the root of all evil. I’m just saying it’s premature.
Start simple. Get more complex if and when you need to.
I’ve been on a sabbatical from work for the past six weeks.
At Clearleft, you’re eligible for a sabbatical after five years. For some reason I haven’t taken one until now, 19 years into my tenure at the agency. I am an idiot.
My six-week sabbatical has been lovely, alternating between travel and homebodying.
Belfast
The first week was spent in Belfast at the excellent Belfast Trad Fest. There were workshops in the morning, sessions in the afternoon, and concerts in the evening. Non-stop music!
This year’s event was a little bit special for me. The festival runs an excellent bursary sponsorship programme for young people who otherwise wouldn’t be able to attend:
The bursary secures a place for a young musician to attend and experience a week-long intensive and immersive summertime learning course of traditional music, song and dance and can be transformative.
Starting from today, and for the whole month of April, any donations made to The Session, which normally go towards covering the costs of running the site, will instead go towards sponsoring bursary places for this year’s Belfast Summer school.
Needless to say, I was thrilled! The Trad Fest team were very happy too—they very kindly gave me a media pass for the duration of the event, which meant I could go to any of the concerts for free. I made full use of this.
That said, one of the absolute highlights of the week wasn’t a concert, but a session. Piper Mick O’Connor and fiddler Sean Smyth led a session out at the American Bar one evening that was absolutely sublime. There was a deep respect for the music combined with a lovely laid-back vibe.
Brighton
There was no shortage of sessions once we returned from Belfast to Brighton. In fact, when we got the train back from Gatwick we hopped in a cab straight to a session instead of going home first. Can’t stop, won’t stop.
The weather hadn’t been great in Belfast, which was fine because we were mostly indoors. But once we got back to Brighton we were treated to a week of glorious sunshine.
Needless to say, Jessica did plenty of swimming. I even went in the ocean myself on one of the hottest days.
I also went into the air. Andy took me up in a light aircraft for a jolly jaunt over the south of England. We flew from Goodwood over the New Forest, and around the Isle of Wight where we landed for lunch. Literally a flying visit.
I can attest that Andy is an excellent pilot. No bumpy landings.
Cork
Our next sojourn took us back to the island of Ireland, but this time we were visiting the Republic. We spent a week in the mightiest of all the Irish counties, Cork.
Our friends Dan and Sue came over from the States and a whole bunch of us went on a road trip down to west Cork, a beautiful part of the country that I shamefully hadn’t visited before. Sue did a magnificent job navigating the sometimes tiny roads in a rental car, despite Dan being a nervous Nellie in the passenger seat.
We had a lovely couple of days in Glengarriff, even though the weather wasn’t great. On the way back to Cork city, we just had to stop off in Baltimore—Dan and Sue live in the other Baltimore. I wasn’t prepared for the magnificent and rugged coastline (quite different to its Maryland counterpart).
Boston
We were back in Brighton for just one day before it was time for us to head to our next destination. We flew to Boston and spent a few days hanging around in Cambridge with our dear friends Ethan and Liz. It was a real treat to just pass the time with good people. It had been far too long.
I did manage to squeeze in an Irish music session in the legendary Druid pub. ’Twas a good night.
After all the excitement of Frostapalooza, Jessica and I went on to spend a week decompressing in Saint Augustine, Florida.
We went down to the beach every day. We went in the water most days. Sometimes the water was a bit too choppy for a proper swim, but it was still lovely and warm. And there was one day when the water was just perfectly calm.
When we weren’t on the beach, we were probably eating shrimp.
It was all very relaxing.
Brighton
I’ve spent the sixth and final week of my sabbatical back in Brighton. The weather has remained good so there’s been plenty of outdoor activities, including a kayaking trip down the river Medway in Kent. I may have done some involuntary wild swimming at one point.
I have very much enjoyed these past six weeks. Music. Travel. Friends. It’s all been quite lovely.
It all started back in July of last year when I got an email from Brad:
Next summer I’m turning 40, and I’m going to use that milestone as an excuse to play a big concert with and for all of my friends and family. It’ll sorta be like The Last Waltz, but with way more web nerds involved.
Originally it was slated for July of 2024, which was kind of awkward for me because it would clash with Belfast Trad Fest but I said to mark me down as interested. Then when the date got moved to August of 2024, it became more doable. I knew that Jessica and I would be making a transatlantic trip at some point anyway to see her parents, so we could try to combine the two.
In fact, the tentative plans we had to travel to the States in April of 2024 for the total solar eclipse ended up getting scrapped in favour of Brad’s shindig. That’s right—we chose rock’n’roll over the cosmic ballet.
Over the course of the last year, things began to shape up. There were playlists. There were spreadsheets. Dot voting was involved.
Anyone with any experience of playing live music was getting nervous. It’s hard enough to rehearse and soundcheck for a four piece, but Brad was planning to have over 40 musicians taking part!
We did what we could from afar, choosing which songs to play on, recording our parts and sending them onto Brad. Meanwhile Brad was practicing like hell with the core band. With Brad on bass and his brother Ian on drums for the whole night, we knew that the rhythm section would be tight.
A few months ago we booked our flights. We’d fly into Boston first to hang out with Ethan and Liz (it had been too long!), then head down to Pittsburgh for Frostapalooza before heading on to Florida to meet up with Jessica’s parents.
When we got to Pittsburgh, we immediately met up with Chris and together we headed over to Brad’s for a rehearsal. We’d end up spending a lot of time playing music with Chris over the next couple of days. I loved every minute of it.
The evening before Frostapalooza, Brad threw a party at his place. It was great to meet so many of the other musicians he’d roped into this.
Then it was time for the big day. We had a whole afternoon to soundcheck, but we needed it. Drums, a percussion station, a horn section …not to mention all the people coming and going on different songs. Fortunately the tech folks at the venue were fantastic and handled it all with aplomb.
We finished soundchecking around 5:30pm. Doors were at 7pm. Time to change into our rock’n’roll outfits and hang out backstage getting nervous and excited.
I wasn’t playing on the first few songs so I got to watch the audience’s reaction as they realised what was in store. Maybe they thought this would be a cute gathering of Brad and his buddies jamming through some stuff. What they got was an incredibly tight powerhouse of energy from a seriously awesome collection of musicians.
I had the honour of playing on five songs over the course of the night. I had an absolute blast! But to be honest, I had just as much fun being in the audience dancing my ass off.
Oh, I was playing mandolin. I probably should’ve mentioned that.
The first song I played on was The Weight by The Band. There was a real Last Waltz vibe as Brad’s extended family joined him on stage, along with me and Chris.
Later I hopped on stage as one excellent song segued into another—Maps by Yeah Yeah Yeahs.
I’ve loved this song since the first time I heard it. In the dot-voting rounds to figure out the set list, this was my super vote.
You know the way it starts with that single note tremolo on the guitar? I figured that would work on the mandolin. And I know how to tremolo.
Jessica was on bass. Jessi Hall was on vocals. It. Rocked.
I stayed on stage for Radiohead’s The National Anthem complete with horns, musical saw, and two basses played by Brad and Jessica absolutely killing it. I added a little texture over the singing with some picked notes on the mandolin.
Then it got truly epic. We played Wake Up by Arcade Fire. So. Much. Fun! Again, I laid down some tremolo over the rousing chorus. I’m sure no one could hear it but it didn’t matter. Everyone was just lifted along by the sheer scale of the thing.
That was supposed to be it for me. But during the rehearsal the day before, I played a little bit on Fleetwood Mac’s The Chain and Brad said, “You should do that!”
So I did. I think it worked. I certainly enjoyed it!
With that, my musical duties were done and I just danced and danced, singing along to everything.
At the end of the night, everyone got back on stage. It was a tight fit. We then attempted to sing Bohemian Rhapsody together. It was a recipe for disaster …but amazingly, it worked!
That could describe the whole evening. It shouldn’t have worked. It was far too ambitious. But not only did it work, it absolutely rocked!
What really stood out for me was how nice and kind everyone was. There was nary an ego to be found. I had never met most of these people before but we all came together and bonded over this shared creation. It was genuinely special.
Days later I’m still buzzing from it all. I’m so, so grateful to Brad and Melissa for pulling off this incredible feat, and for allowing me to be a part of it.
And then in the middle of this traumatic medical emergency, our mentally-unstable neighbor across the street began accosting my family, flipping off our toddler and nanny, racially harassing my wife, and making violent threats. We fled our home for fear of our safety because he was out in the street exposing himself, shouting belligerence, and threatening violence.
After that, Brad started working with Project Healthy Minds. In fact, all the proceeds from Frostapalooza go to that organisation along with NextStep Pittsburgh.
Just think about that. Confronted with intimidation and racism, Brad and Melissa still managed to see the underlying systemic inequality, and work towards making things better for the person who drove them out of their home.
Good people, man. Good people.
I sincerely hope they got some catharsis from Frostapalooza. I can tell you that I felt frickin’ great after being part of an incredible event filled with joy and love and some of the best music I’ve ever heard.
I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.
All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.
I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.
I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.
Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.
I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?
And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.
There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.
A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”
What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?
I’m just throwing this out there. I’d love it if someone were to run with it.
The Brighton chapter of codebar was the second one ever, founded six months after the initial London chapter. There are now 33 chapters all around the world.
Clearleft played host to that first ever codebar in Brighton. We had already been hosting local meetups like Async in our downstairs event space, so we were up for it when Rosa, Dot, and Ryan asked about having codebar happen there.
In fact, the first three Brighton codebars were all at 68 Middle Street. Then other places agreed to play host and it moved to a rota system, with the Clearleft HQ as just one of the many Brighton venues.
With ten years of perspective, it’s quite amazing to see how many people went from learning to code in the evenings, to getting jobs in web development, and becoming codebar coaches themselves. It’s a really wonderful community.
Over the years the baton of organising codebar has been passed on to a succession of fantastic people. These people are my heroes.
It worked out well for Clearleft too. Thanks to codebar, we hired Charlotte. Later we hired Cassie. And it was thanks to codebar that I first met Amber.
Codebar Brighton has been very, very good to me. Here’s to the next ten years!
My bug report on Apple’s websites-in-the-dock feature on desktop has me thinking about how starkly different it is on mobile.
On iOS if you want to add a website to your home screen, good luck. The option is buried within the “share” menu.
First off, it makes no sense that adding something to your homescreen counts as sharing. Secondly, how is anybody supposed to know that unless they’re explicitly told?
It’s a similar situation on Android. In theory you can prompt the user to install a progressive web app using the botched BeforeInstallPromptEvent. In practice it’s a mess. What it actually does is defer the installation prompt so you can offer it at a more suitable time. But it only works if the browser was going to offer an installation prompt anyway.
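The dance goes something like this (a sketch—assume there’s an install button somewhere on the page):

let deferredPrompt;

window.addEventListener('beforeinstallprompt', (event) => {
  // Stop the browser showing its own prompt…
  event.preventDefault();
  // …and stash the event so it can be used later
  deferredPrompt = event;
});

installButton.addEventListener('click', () => {
  // Only works if the browser was going to prompt anyway
  if (deferredPrompt) {
    deferredPrompt.prompt();
  }
});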
When does Chrome on Android decide to offer the installation prompt? It’s a mix of required criteria—a web app manifest, some icons—and an algorithmic spell determined by the user’s engagement.
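The manifest part at least is well documented. A bare-bones one looks something like this (the values are illustrative):

{
  "name": "The Session",
  "short_name": "The Session",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}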
Other browser makers don’t agree with this arbitrary set of criteria. They quite rightly say that a user should be able to add any website to their home screen if they want to.
What we really need is an installation API: a way to programmatically invoke the add-to-homescreen flow.
Now, I know what you’re going to say. The security and UX implications would be dire. But this should obviously be like geolocation or notifications, only available in secure contexts and gated by user interaction.
Think of it like adding something to the clipboard: it’s something the user can do manually, but the API offers a way to do it programmatically without opening it up to abuse.
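To be concrete, here’s what I imagine it might look like—this is entirely hypothetical; navigator.install() doesn’t exist in any browser:

// Purely hypothetical—no browser implements this
installButton.addEventListener('click', async () => {
  try {
    // Secure context only, and only in response to user interaction
    await navigator.install();
  } catch (err) {
    // The user declined the add-to-homescreen prompt
  }
});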
My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.
During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collateral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.
The event itself was, as always, excellent. Both in terms of content and organisation.
Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.
The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.
Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.
It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!
I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.
The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.
But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.
Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.
For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!
During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually-incompatible claims:
The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
The other great thing about it is that it’s utility-first, not utility-only so you can break out of it and use regular CSS if you want.
Insert .gif of character from The Office looking to camera.
Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.
Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.
Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.
A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.
Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.
In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”
I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.
Either way, I’ll be back next year for sure.
It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.
There was a discussion at Clearleft recently about browser support. Rich has more details but the gist of it is that, even though we were confident that we had a good approach to browser support, we hadn’t written it down anywhere. Time to fix that.
This is something I had been thinking about recently anyway—see my post about Baseline and progressive enhancement—so it didn’t take too long to put together a document explaining our approach.
We’re not just making it public. We’re releasing it under a Creative Commons attribution license. You can copy this browser-support policy verbatim, you can tweak it, you can change it, you can do what you like. As long you include a credit to Clearleft, you’re all set.
I think this browser-support policy makes a lot of sense. It certainly beats tying browser support to specific browsers or version numbers:
We don’t base our browser support on specific browser names and numbers. Instead, our support policy is based on the capabilities of those browsers.
The more organisations adopt this approach, the better it is for everyone. Hence the liberal licensing.
So next time your boss or your client is asking what your official browser-support policy is, feel free to use browsersupport.clearleft.com
If I’m shown into someone’s home and left alone while the host goes and does something, you can bet I’m going to peruse their bookshelves.
I don’t know why. Maybe I’m looking for points of commonality. “Oh, you read this book too?” Or maybe it’s the opposite, and I’m looking for something new and—pardon the pun—novel. “Oh, that book sounds interesting!”
I like it when people have a kind of bookshelf on their website.
I like having an overview of what I’ve been reading. One of the reasons I started keeping track was so that I could try to have a nice balance of fiction and non-fiction.
I always get a little sad when I see someone’s online reading list and it only consists of non-fiction books that are deemed somehow useful, rather than simply pleasurable. Cameron wrote about when he used to do this:
I felt like reading needed to have some clear purpose. The topic of any book I read needed to directly contribute to being an “adult”. So I started to read non-fiction – stuff that reflected what was happening in my career. Books on management, books on finance, books on economics, books on the creative process. And for many years this diet of dry, literary roughage felt wholesome … but joyless. Each passage I highlighted in yellow marker was a point for the scoreboard, not a memory to be treasured.
Now he’s redressing the imbalance and rediscovering the unique joy of entering other worlds by reading fiction.
For a while, I forced myself to have perfect balance. I didn’t allow myself to read two non-fiction books in a row or two fiction books in a row. I made myself alternate between the two.
I’ve let that lapse now. I’m reading more fiction than non-fiction, and I’m okay with that.
But I still like looking back on what I’ve been reading and seeing patterns emerge. Like there’s a clear boom in the last year of reading retellings of the Homeric epics (all kicked off by reading the brilliant Circe by Madeline Miller).
I also keep an eye out for a different kind of imbalance. I want to make sure that I’m not just reading books by people like me—middle-aged heterosexual white dudes.
You may think that a balanced reading diet would emerge naturally, but I’m not so sure. Here’s an online bookshelf. Here’s another online bookshelf. Plenty of good stuff in both. But do either of those men realise that they’ve gone more than a year without reading a book written by a woman?
It’s almost certainly not a conscious decision. It’s just that in the society we live in, the default mode tends towards what’s been historically privileged (see also films, music, and painting).
It’s like driving in a car that subtly pulls to one side. Unless you compensate for it, you’ll end up in the ditch without even realising.
I feel bad for calling attention to those two reading lists. It feels very judgy of me. And reading is something that doesn’t deserve judgement. Any reading is good reading.
Mind you, maybe being judgemental is exactly why I’m drawn to people’s bookshelves in the first place. It might be that I’m subconsciously looking for compatibility signals. If I see a bookshelf filled with books I like, I’m bound to feel predisposed to like that person. And if I see a bookshelf dominated by Jordan Peterson and Ayn Rand, I’m probably going to pass judgement on the reader, even if I know I shouldn’t.
A lot of work has gone into distilling WCAG down to these four guidelines. Here’s how I apply them in my work…
Perceivable
I interpret this as:
Content will be legible, regardless of how it is accessed.
For example:
The contrast between background and foreground colours will meet the ratios defined in WCAG 2.
Content will be grouped into semantically-sensible HTML regions such as navigation, main, footer, etc.
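In markup, that grouping is just a handful of landmark elements:

<header>…</header>
<nav>…</nav>
<main>
  <!-- the unique content of the page -->
</main>
<footer>…</footer>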
Operable
I interpret this as:
Core functionality will be available, regardless of how it is accessed.
For example:
I will ensure that interactive controls such as links and form inputs will be navigable with a keyboard.
Every form control will be labelled, ideally with a visible label.
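For example, a form control with a visible label:

<label for="email">Email address</label>
<input type="email" id="email" name="email">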
Understandable
I interpret this as:
Content will make sense, regardless of how it is accessed.
For example:
Images will have meaningful alternative text.
I will make sensible use of heading levels.
This is where it starts to get quite collaborative. Working at an agency, there will be some parts of website creation and maintenance that will require ongoing accessibility knowledge even when our work is finished.
For example:
Images uploaded through a content management system will need sensible alternative text.
Articles uploaded through a content management system will need sensible heading levels.
Robust
I interpret this as:
Content and core functionality will still work, regardless of how it is accessed.
For example:
Drop-down controls will use the HTML select element rather than a more fragile imitation.
I will only use JavaScript to provide functionality that isn’t possible with HTML and CSS alone.
If you’re applying a mindset of progressive enhancement, this part comes for free. If you take a different approach, you’re going to have a bad time.
Taken together, these four guidelines will get you very far without having to dive too deeply into the rest of WCAG.
I think that this kind of feature is not good, because someone else (web publisher) decides that I (my connection, browser, device) have to do work that very often is not needed. All that blurred by blackbox algorithm in the browser.
But this issue isn’t something new with speculation rules. We’ve already got service workers, which allow the site author to unilaterally declare that a bunch of pages should be downloaded.
I’m doing that for Resilient Web Design—when you visit the home page, a service worker downloads the whole site. I can justify that decision to myself because the entire site is still smaller in size than one article from Wired or the New York Times. But still, is it right that I get to make that call?
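The pattern inside the service worker is straightforward—a simplified sketch, not the actual Resilient Web Design code (the URLs are placeholders):

// On install, download every page of the site into a cache
addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('site-cache').then((cache) =>
      cache.addAll(['/', '/introduction/', '/chapter1/'])
    )
  );
});

// Serve from the cache first, falling back to the network
addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});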
So I’m very much in favour of browsers acting as true user agents—doing what’s best for the user, even in situations where that conflicts with the wishes of a site owner.
Going back to speculation rules, David asked:
Do we really need this kind of (easily turned to evil) enhancement in the current state of (web) affairs?
That question could be asked of many web technologies.
There’s always going to be a tension with any powerful browser feature. The more power it provides, the more it can be abused. Animations, service workers, speculation rules—these are all things that can be used to improve websites or they can be abused to do things the user never asked for.
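If you haven’t come across speculation rules yet, by the way, they’re declared in a chunk of JSON in the page—something like this (the URLs are made up):

<script type="speculationrules">
{
  "prefetch": [
    {
      "source": "list",
      "urls": ["/next-article/", "/another-article/"]
    }
  ]
}
</script>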
Or take the elephant in the room: JavaScript.
Right now, a site owner can link to a JavaScript file that’s tens of megabytes in size, and the browser has no alternative but to download it. I’d love it if users could specify a limit. I’d love it even more if browsers shipped with a default limit, especially if that limit is related to the device and network.
I don’t think speculation rules will be abused nearly as much as client-side JavaScript is already abused.
I really like the newly-launched website for this year’s XOXO festival. I like that the design is pretty much the same for really small screens, really large screens, and everything in between because everything just scales. It’s simultaneously a flyer, a poster, and a billboard.
When responsive design was on the rise, it was a real treat to come across a responsive site. After a while, it stopped being remarkable. Now if I come across a site that isn’t responsive, it feels broken.
And now it’s a treat to come across a site that uses fluid type. But how long will it be until it feels unremarkable? How long will it be until a website that doesn’t use fluid type feels broken?