New CSS that can actually be used in 2024 | Thomasorus
Logical properties, container queries, :has, :is, :where, min(), max(), clamp(), nesting, cascade layers, subgrid, and more.
Three great examples of HTML web components:
What I hope is that you now have the same sort of epiphany that I had when reading Jeremy Keith’s post: HTML Web Components are an HTML-first feature.
A blog post can be a plain text document uploaded to a server. It can be an image hosted on a social network. It can be a voice note shared with your friends.
Title, dates, comments, links, and text are all optional.
No one is policing this.
Fifteen years after the first one, Papercamp is back this September and the line-up looks good.
Long live the papernet!
If you’re a fan of gratuitous initialisms, you’ll love Google’s core web vitals. Just get a load of the obfuscation in the important-sounding metrics like CLS, FCP, LCP, and more.
To be fair to Google, this is a problem in the web performance world in general. Practitioners prefer to talk about TTFB rather than “time to first byte” even though both contain exactly the same number of syllables.
The big news in the web performance community this month is the arrival of a new initialism. INP sounds like one of those pseudo-scientific psychological profiles but it’s meant to stand for Interaction to Next Paint (even if they were to swear off pointless initialisms, you’d still have to pry Pointless Capitalisation from Google’s cold dead hands).
This new metric is a welcome one. It’s replacing first input delay. Sorry, First Input Delay, or FID, one of the few web vital initialisms that can be spoken as a word, making it a true acronym (fortunately fid’s successor, inp, also works as an acronym).
First Input Delay has long outstayed its welcome. It was always an outlier in the core web vitals. It didn’t seem to measure anything actually useful. I know it sounds like it’s measuring the delay until the user can interact with a web page, but when you dive into what it actually does, it’s a mess:
FID measures the time from when a user first interacts with a page (that is, when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction.
See that word “begin” in there? It’s doing a lot of work. First Input Delay doesn’t measure the lag between the user interaction and the browser response; it only measures the lag between the user interaction and the browser beginning to respond. The actual response could take ages, but that lag doesn’t get measured. Unlike the other core web vitals, this metric is very far removed from what actually matters to the user’s experience.
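If you want to see the difference for yourself, Google’s web-vitals JavaScript library exposes both measurements. This is just a sketch, and it assumes version 3 or later of the library, which is when onINP landed:

```javascript
import { onFID, onINP } from 'web-vitals';

// FID: only the delay before the browser *begins* handling the first input.
onFID(({ value }) => console.log('First Input Delay:', value, 'ms'));

// INP: the full input-to-paint latency, sampled across the whole visit.
onINP(({ value }) => console.log('Interaction to Next Paint:', value, 'ms'));
```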
What the fid were they thinking? How the fid did this measurement ever get included in core web vitals in the first place?
Well, feel free to take what I’m about to say as pure gossip, but I have my sources, I trust ’em, and no, I’m not going to reveal ’em…
It’s because of AMP.
Remember Google AMP? An acronym so pointless they eventually just forgot it ever stood for anything?
The AMP project ended up doing incredible damage to Google’s developer relations. By colluding with the search team to privilege the appearance of AMP pages in the top news carousel, Google effectively blackmailed the entire publishing industry into using their format.
In the end, it didn’t work. It was a shit format. All they did was foster resentment and animosity:
AMP seems to have faded away. Most publishers have started dropping support, and even Google doesn’t seem to care much anymore.
It turns out that Google search wasn’t the only team infected by AMP. The core web vitals team also had to play ball.
Originally they had a genuinely useful metric for measuring the lag between input and response. But guess which pages did terribly? That’s right: AMP pages.
Rather than ship an actually-useful measurement, the core web vitals team instead had to include the broken First Input Delay, brainchild of a certain someone on the AMP team.
Now it all makes sense.
So good riddance to FID. Welcome to INP. And here’s hoping it won’t be much longer till we’re finally burying AMP.
An in-depth look at Indie Web Camp Brighton with some suggestions for improving future events. Also, this insightful nugget:
There was something really energising about being with a group of people that had a diverse range of backgrounds, ideas, and interests, but who all shared a specific outlook on one problem space. We definitely didn’t all agree on what the ideal solution to a given problem was, but we were at least approaching topics from a similar starting point, which was great.
I had a fantastic time and hope it will become a frequent event.
Same!
I just attended IndiewebCamp Brighton, where I had a mind-expanding time with a bunch of folks as enthusiastic about the web as I am. It left me with a sense of hope that there are pockets of people keeping the dream of a free and open web alive.
Mark’s write-up of the excellent Indie Web Camp Brighton that he co-organised with Paul.
The past weekend’s Indie Web Camp Brighton was wonderful! Many thanks to Mark and Paul for all their work putting it together.
There was a great turn-out. It felt like the perfect time for an Indie Web Camp. There’s a real appetite for getting away from ever more extractive silos and staking claim to our own corners of the web. Most of the attendees were at their first ever Indie Web Camp.
Paul asked me to oversee the schedule planning on day one, which I was happy to do. We made sure that first-timers got first dibs on proposing sessions. In the end, every single session was proposed by new attendees.
Day two was all about putting ideas into practice: coding, designing, and writing on our own website. I’m always blown away by how much gets done in just one short day. Best of all is when there’s someone who starts the weekend without their own website but finishes with a live site. That happened again this time.
I spent the second day tinkering with something I started at Indie Web Camp Nuremberg in October. Back then, I got related posts working here on my journal; a list of suggested follow-up posts to read based on the tags of the current post.
I wanted to do the same for my links; show links related to the one I’m currently linking to. It didn’t take too long to get that up and running.
But then I thought about it some more and realised it would be good to also show blog posts related to the link. So I did that. Then I realised it would be really good to show related links under blog posts too.
So now, if everything’s working correctly, then at the end of this post you will not only see related blog posts I’ve previously written, but also links related to the content of this post.
It was a very inspiring weekend. There’s something about being in a room with other people working on their websites that makes me super productive.
While we were hacking away on day two, somebody mentioned that they still find it hard to explain the indie web to people.
“It’s having your own website”, I said.
But surely there’s more to it than that, they wondered.
Nope. If someone has their own website, then they’re part of the indie web. It doesn’t matter if that website is made with a complicated home-rolled tech stack or if it’s a Squarespace site.
What you do with your own website is entirely up to you. The technologies are just plumbing, whether it’s webmentions, RSS, or anything else. None of it is a requirement. Heck, even HTML is optional. If you want to put plain text files on your website, go for it. It’s your website.
I’m at day two of Indie Web Camp Brighton.
Day one was excellent. It was really hard to choose which sessions to go to because they all sounded interesting. That’s a good problem to have.
I ended up participating in:
In that testing session I shared some of the bookmarklets I use regularly.
Bookmarklets? They’re bookmarks that sit in the toolbar of your desktop browser. Just like any other bookmark, they’re links. The difference is that these links begin with javascript: rather than http. That means you can put programmatic instructions inside the link. Click the bookmark and the JavaScript gets executed.
In my mind, there are two different approaches to making a bookmarklet. One kind of bookmarklet contains lots of clever JavaScript—that’s where the smart stuff happens. The other kind of bookmarklet is deliberately dumb. All they do is take the URL of the current page and pass it to another service—that’s where the smart stuff happens.
I like that second kind of bookmarklet.
Here are some bookmarklets I’ve made. You can drag any of them up to the toolbar of your browser. Or you could create a folder called, say, “bookmarklets”, and drag these links up there.
Validation: This bookmarklet will validate the HTML of whatever page you’re on.
Carbon: This bookmarklet will run the domain through the website carbon calculator.
Accessibility: This bookmarklet will run the current page through the Website Accessibility Evaluation Tools.
Performance: This bookmarklet will take the current page and run it through PageSpeed Insights, which includes a Lighthouse test.
HTTPS: This bookmarklet will run your site through the SSL checker from SSL Labs.
Headers: This bookmarklet will test the security headers on your website.
Drag any of those links to your browser’s toolbar to “install” them. If you don’t like one, you can delete it the same way you can delete any other bookmark.
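In case you’re curious what’s inside one of these, here’s a sketch of what the Validation bookmarklet above might boil down to (using the W3C validator’s doc query parameter; this is the general pattern, not necessarily the exact code behind the link). The whole thing collapses into a single javascript: URL:

```javascript
javascript:(function () {
  // Take the URL of whatever page you're on...
  var url = encodeURIComponent(document.location.href);
  // ...and hand it to the service where the smart stuff happens.
  document.location.href = 'https://validator.w3.org/nu/?doc=' + url;
})();
```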
This isn’t just a great explanation of :has(), it’s an excellent way of understanding selectors in general. I love how the examples are interactive!
Paul has been doing so much fantastic work with the indie web community, not least of which is co-organising Indie Web Camp Brighton—just ten days away now!
This is going to be a fun weekend!
Not got a personal website? Bring your laptop or mobile device, and we’ll help you get set up so that you can publish somewhere you control and can make your own.
Seasoned web developer? Learn about the different open web services, software and technologies that can help empower yourself and others to own their content and online identity.
Wouldn’t it be great if all web tools gave warnings like this?
As you generate and tweak your type scale, Utopia will now warn you if any steps fail WCAG SC 1.4.4, and tell you between which viewports the problem lies.
Patterns Day is exactly six weeks away—squee!
If you haven’t got your ticket yet, get one now. (And just between you and me, use the discount code JOINJEREMY to get a 10% discount.)
I’ve been talking to the speakers and getting very excited about what they’re going to be covering. It’s shaping up to be the perfect mix of practical case studies and big-picture thinking. You can expect talks on design system governance, accessibility, design tokens, typography, and more.
I’m hoping to have a schedule for the day ready by next week. It’s fun trying to craft the flow of the day. It’s like putting together a set list for a concert. Or maybe I’m just overthinking it and it really doesn’t matter because all the talks are going to be great anyway.
There are sponsors for Patterns Day now too. Thanks to Supernova and Etch you’re going to have bountiful supplies of coffee, tea and pastries throughout the day. Then, when the conference talks are done, we’ll head across the road to the Hare And Hounds for one of Luke Murphy’s famous geek pub quizzes, with a bar tab generously provided by Zero Height.
Now, the venue for Patterns Day is beautiful but it doesn’t have enough space to provide everyone with lunch, so you’re going to have an hour and a half to explore some of Brighton’s trendy lunchtime spots. I’ve put together a list of lunch options for you, ordered by proximity to the Duke of York’s. These are all places I can personally vouch for.
Then, after the conference day, and after the pub quiz, there’s Vitaly’s workshop the next day. I will most definitely be there feeding on Vitaly’s knowledge. Get a ticket if you want to join me.
But wait! That’s not all! Even after the conference, and the pub quiz, and the workshop, the nerdy fun continues on the weekend. There’s going to be an Indie Web Camp here in Brighton on the Saturday and Sunday after Patterns Day.
If you’ve been to an Indie Web Camp before, you know how inspiring and fun it is. If you haven’t been to one yet, you should definitely come along. It’s free! If you’ve got your own website, or if you’re even just thinking about having your own website, it’s a great opportunity to meet with like-minded people.
So that’s going to be four days of non-stop good stuff here in Brighton. I’m looking forward to seeing you then!
After two days at border:none in Nuremberg, it was time for two days at Indie Web Camp, also in Nuremberg.
I hadn’t been to an Indie Web Camp since before The Situation. It felt very good to be back. I had almost forgotten how inspiring and productive they can be.
This one had a good turnout of around twenty people. We had ourselves an excellent first day of thought-provoking sessions. Then on day two it was time to put some of those ideas into action.
A little trick I like to do on the practical day is to have two tasks to attempt: one of them quite simple, and the other more ambitious. That way, as long as I get the simpler task done, I’ll always have at least something to demo at the end of the day.
This time I attempted three bits of home improvement on my website.
The first problem I set myself was ostensibly the simple one. But it involved regular expressions, so then I had two problems.
I wanted to automatically link up Mastodon usernames if I mentioned one in my notes. For example, during border:none I mentioned Brian’s Mastodon username in a note: @briansuda@loðfíll.is.
That turned out to be an excellent test case. Those Icelandic characters made sure I wasn’t making unwarranted assumptions about character sets.
Here’s the regular expression I came up with. It’s not foolproof by any means. Basically it looks for @username@example.com.
Good enough. Ship it.
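The regular expression itself isn’t preserved in this copy of the post, but a sketch of the idea might look like this, using Unicode property escapes so that Icelandic characters in the domain don’t trip it up:

```javascript
// A sketch, not the actual regex: usernames are ASCII, but server
// names can contain non-ASCII characters, hence \p{L} with the u flag.
const handle = /@([a-zA-Z0-9_]+)@((?:[\p{L}\p{N}-]+\.)+\p{L}{2,})/gu;

const note = 'I mentioned @briansuda@loðfíll.is in a note.';
const linked = note.replace(handle, (match, user, server) =>
  `<a href="https://${server}/@${user}">${match}</a>`
);
// => 'I mentioned <a href="https://loðfíll.is/@briansuda">@briansuda@loðfíll.is</a> in a note.'
```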
My next task was a bit more ambitious. It involved SQL queries, something I’m slightly better at than regular expressions but that’s a very low bar.
I wanted to show related posts when you get to the end of one of my blog posts.
I’ve been tagging all my blog posts for years so that’s the mechanism I used for finding similar posts. There’s probably a clever SQL statement that could do this, but I ended up brute-forcing it a bit.
I don’t feel too bad about the hacky clunky nature of my solution, because I cache blog post pages. That means only the first person to view the blog post (usually me) will suffer any performance impacts from my clunky database queries. After that everything’s available straight from a cached file.
Let’s say you’re reading a blog post of mine that I’ve tagged with ten different keywords. I make a separate SQL query for each keyword to get all the other posts that use that tag. Then it’s a matter of sorting through all the results.
I loop through the results of each tag and apply a score to the tagged post. If the post shares one tag with the post you’re looking at, it has a score of one. If it shares two tags, it has a score of two, and so on.
I decided that for a post to be considered related, it had to share at least three tags. I also decided to limit the list of related posts to a maximum of five.
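Here’s a rough, language-agnostic sketch of the scoring in JavaScript, with a hypothetical getPostsForTag() standing in for the per-tag SQL query:

```javascript
// A sketch of the scoring, assuming a hypothetical getPostsForTag(tag)
// that runs one SQL query and returns the IDs of posts with that tag.
function relatedPosts(currentPost) {
  const scores = new Map();
  for (const tag of currentPost.tags) {
    for (const id of getPostsForTag(tag)) {
      if (id === currentPost.id) continue;
      // Sharing one tag scores one, two tags score two, and so on.
      scores.set(id, (scores.get(id) ?? 0) + 1);
    }
  }
  return [...scores]
    .filter(([, score]) => score >= 3) // related means at least three shared tags
    .sort(([, a], [, b]) => b - a)
    .slice(0, 5)                       // a maximum of five related posts
    .map(([id]) => id);
}
```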
It worked out pretty well. If you scroll down on my recent post about JavaScript, you’ll see links to related posts about JavaScript. If you read through a post on accessibility testing, you’ll find other posts about accessibility testing. If you make it to the end of this post about Mars colonisation you’ll see links to more posts about exploring our solar system.
Right now I’m just doing this for my blog but I’d like to do it for my links too. A job for a future Indie Web Camp.
I was very inspired by Remy’s recent post on how he’s tackling link rot on his site. I wanted to do the same for mine.
On the first day at Indie Web Camp I led a session on link rot to gather ideas and alternative approaches. We had a really good discussion, though it’s always worth bearing in mind that there’ll never be a perfect solution. There’ll always be some false positives and some false negatives.
The other Jeremy at Indie Web Camp Nuremberg blogged about the session. Sebastian Greger was attending remotely and the session inspired him to spend the second day also tackling link rot.
In the end I decided to stick with Remy’s two-pronged approach:
Here’s the JavaScript I wrote for the first part.
It’s very similar to Remy’s but with one little addition. I check to see if the clicked link is inside an h-entry and if it is, I pass on the date from the post’s dt-published value.
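The script itself didn’t survive in this copy, but the gist might be sketched like this (the redirector path is hypothetical, and this is the idea rather than the exact code):

```javascript
// A sketch of the idea, not the exact script. On click, external links
// get sent via a server-side redirector (the path here is hypothetical),
// along with the publication date of the containing h-entry.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[href^="http"]');
  if (!link || link.href.startsWith(location.origin)) return;
  const redirect = new URL('/links/redirect.php', location.origin); // hypothetical
  redirect.searchParams.set('url', link.href);
  // The one little addition: pass on the post's dt-published value.
  const published = link
    .closest('.h-entry')
    ?.querySelector('.dt-published')
    ?.getAttribute('datetime');
  if (published) redirect.searchParams.set('date', published);
  link.href = redirect.href;
});
```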
Here’s the PHP I wrote for the server-side redirector. The comments tell the story of what the code is doing: it makes a curl request to get the response headers from the URL, with the time limit set to 1 second. Not perfect by any means, but it works for the most common cases of link rot.
For the demo at the end of the day I went back into my archive of over 10,000 links and plucked out some old posts, like this one from December 2005. It takes a little while to do the rerouting but eventually you get to see the archived version from the same time period as when I linked to it.
Here’s another link from 2005. Here’s another. Those links are broken now, but with a little patience, you’ll still get to read them on the Internet Archive.
The Internet Archive’s wayback machine really is a gift. I can’t imagine how it would be even remotely possible to try to address link rot on my site without archive.org.
I will continue to donate money to the Internet Archive and I encourage you to do the same.
I’ve always associated good design with thoughtfulness. Like, I should be able to point to any element in an interface and the designer should be able to tell me the reasons it’s there. Those reasons may be rooted in user needs or aesthetics or some other consideration, but the point is that there’s a justification for it. Justify every pixel!
But I’ve come to realise that this is a bit reductionist. Now when I point at an interface element, I still expect the designer to be able to justify its inclusion, but I’d also like to know the trade-offs that were made.
Suppose there’s a large hero image. I’m sure the designer would have no problem justifying its inclusion on the basis of impact and the emotional heft it delivers. But did they also understand the potential downsides? Were they aware of the performance implications of including a large image?
I hope the answer to both questions is yes. They understood the costs, but they decided that, on balance, the positives outweighed the negatives.
When it comes to the positives, universal principles of design often apply. Colour theory, typography, proximity, and so on. But the downsides tend to be specific to the medium that the design is delivered in.
Let’s say you’re designing for print. You want to include an extra typeface just for footnotes. No problem. There isn’t really a downside. In print, you can use all the typefaces you want. But if this were for the web, then the calculation would be different. Every extra typeface comes with a performance penalty. A decision that might be justified in one medium might not work in another medium.
It works both ways; on the web you can use all the colours you want, without incurring any penalties, but in print—depending on the process you’re using—you might have to weigh up that decision very differently.
From this perspective, every design decision is like a balance sheet. A good web designer understands the benefits and the costs behind each decision they make.
It’s a similar story when it comes to web development. Heck, we even have the term “tech debt” to describe decisions that we know aren’t for the best in the long term.
In fact, I’d say that consideration of the long-term effects is something that should play a bigger part in technical decisions.
When we’re weighing up the pros and cons of using a particular tool, we have a tendency to think in the here and now. How might this help me right now? How might this hinder me right now?
But often a decision that delivers short-term gain may well end up delivering long-term pain.
Alexander Petros describes this succinctly:
Reopen a node repository after 3 months and you’ll find that your project is mired in a flurry of security warnings, backwards-incompatible library “upgrades,” and a frontend framework whose cultural peak was the exact moment you started the project and is now widely considered tech debt.
When I wrote about making the Patterns Day website I described my process as doing it “the long hard stupid way”—a term that Frank coined in a talk he gave a few years back. But perhaps my hands-on approach is only long, hard and stupid in the short term. With each passing year, the codebase will retain a degree of readability and accessibility that I would’ve sacrificed had I depended on automated build processes.
Robin Berjon puts this into the historical perspective of Taylorism and Luddism:
Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy.
Or as Marshall McLuhan put it:
Every extension is also an amputation.
…which is fine as long as the benefits of the extension outweigh the costs of the amputation. My worry is that, when it comes to evaluating technology for building on the web, we aren’t considering the longer-term costs.
Maintenance matters. With the passing of time, maintenance matters more and more.
Maybe we avoid thinking about the long-term costs because it would lead to decision paralysis. That’s understandable. But I take comfort from some words of wisdom on the web from the 1990s. Tim Berners-Lee’s style guide for hypertext:
Because hypertext is potentially unconstrained you are a little daunted. Do not be. You can write a document as simply as you like. In many ways, the simpler the better.
Oh, this is a nice addition to the Utopia set of tools: when you don’t need a full-on type scale but you still want to figure out fluid clamp() values, the clamp calculator has you covered.
It’s got permalinks too!
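To give a flavour of what the calculator is doing, here’s the underlying arithmetic sketched in JavaScript (simplified to pixels; Utopia itself works in rems and offers plenty more options):

```javascript
// A simplified sketch of the fluid type arithmetic behind a clamp() value:
// interpolate linearly from minSize at minViewport to maxSize at maxViewport.
function fluidClamp(minSize, maxSize, minViewport = 320, maxViewport = 1240) {
  const slope = (maxSize - minSize) / (maxViewport - minViewport);
  const intercept = minSize - slope * minViewport; // the size at a 0px-wide viewport
  return `clamp(${minSize}px, ${intercept.toFixed(2)}px + ${(slope * 100).toFixed(2)}vw, ${maxSize}px)`;
}

fluidClamp(16, 24); // "clamp(16px, 13.22px + 0.87vw, 24px)"
```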
BarCamp London is back this year, the day after ffconf.