JavaScript dos and don’ts @ Mu-An Chiou
Straightforward, smart, sensible advice that you can apply to any feature on a website.
UX London isn’t the only event from Clearleft coming your way in 2025. There’s a brand new spin-off event dedicated to user research happening in February. It’s called Research By The Sea.
I’m not curating this one, though I will be hosting it. The curation is being carried out most excellently by Benjamin, who has written more about how he’s doing it:
We’ve invited some of the best thinkers and doers in the research space to explore how researchers might respond to today’s most gnarly and pressing problems. They’ll challenge current perspectives, tools, practices and thinking styles, and provide practical steps for getting started today to shape a better tomorrow.
If that sounds like your cup of tea, you should put February 27th 2025 in your calendar and grab yourself a ticket.
Although I’m not involved in curating the line-up for the event, I offered Benjamin my swor… my web dev skillz. I made the website for Research By The Sea and I really enjoyed doing it!
These one-day events are a great chance to have a bit of fun with the website. I wrote about how enjoyable it was making the website for this year’s Patterns Day:
I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.
I took the same approach with Research By The Sea. I had a design language to work with, based on UX London, but with more of a playful, brighter feel. The idea was that the website (and the event) should feel connected to UX London, while also being its own thing.
I kept the typography of the UX London site more or less intact. The page structure is also very similar. That was my foundation. From there I was free to explore some other directions.
I took the opportunity to explore some new features of CSS. But before I talk about the newer stuff, I want to mention the bits of CSS that I don’t consider new. These are the things that are just the way things are done ‘round here.
Custom properties. They’ve been around for years now, and they’re such a life-saver, especially on a project like this where I’m messing around with type, colour, and spacing. Even on a small site like this, it’s still worth having a section at the start where you define your custom properties.
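For example, the top of a style sheet might look something like this (the names and values here are invented for illustration, not taken from the site):

```css
/* Design tokens defined once, used everywhere */
:root {
  --colour-brand: #0b7261;
  --font-body: "Source Sans", sans-serif;
  --spacing: 1.5rem;
}

.card {
  color: var(--colour-brand);
  font-family: var(--font-body);
  padding: var(--spacing);
}
```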
Logical properties. Again, they’ve been around for years. At this point I’ve trained my brain to use them by default. Now when I see a `left`, `right`, `width` or `height` in a style sheet, it looks like a bug to me.
Fluid type. It’s kind of a natural extension of responsive design to me. If a website’s typography doesn’t adjust to my viewport, it feels slightly broken. On this project I used Utopia because I wanted different type scales as the viewport increased. On other projects I’ve just used one `clamp` declaration on the `body` element, which can also get the job done.
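That simpler approach can be as little as one declaration. The values here are placeholders, not the ones from either site:

```css
/* Type scales smoothly between 1rem and 1.25rem as the viewport grows */
body {
  font-size: clamp(1rem, 0.75rem + 1vw, 1.25rem);
}
```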
Okay, so those are the things that feel standard to me. So what could I play around with that was new?
View transitions. So easy! Just point to an element on two different pages and say “Hey, do a magic move!” You can see this in action with the logo as you move from the homepage to, say, the venue page. I’ve also added view transitions to the speaker headshots on the homepage so that when you click through to their full page, you get a nice swoosh.
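Here’s roughly what that looks like in CSS (the selector and transition name are my own invention, not the site’s actual code):

```css
/* Opt the document in to cross-document view transitions */
@view-transition {
  navigation: auto;
}

/* Give the element the same name on both pages to get the "magic move" */
.logo {
  view-transition-name: logo;
}
```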
Unless, like me, you’re using Firefox. In that case, you won’t see any view transitions. That’s okay. They are very much an enhancement. Speaking of which…
Scroll-driven animations. You’ll only get these in Chromium browsers right now, but again, they’re an enhancement. I’ve got multiple background images—a bunch of cute SVG shapes. I’m using scroll-driven animations to change the background positions and sizes as you scroll. It’s a bit silly, but hopefully kind of cute.
You might be wondering how I calculated the movements of each background image. Good question. I basically just messed around with the values. I had fun! But imagine what an actually-skilled interaction designer could do.
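As a rough sketch, with made-up values rather than the ones on the site, the technique looks like this:

```css
/* Shift a background as the document scrolls */
@keyframes drift {
  to {
    background-position: 100% 100%;
    background-size: 150%;
  }
}

body {
  animation: drift linear both;
  /* Drive the animation by scroll position instead of time */
  animation-timeline: scroll(root);
}
```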
That brings up an interesting observation about both view transitions and scroll-driven animations: Figma will not help you here. You need to be in a web browser with dev tools popped open. You’ve got to roll up your sleeves and get your hands into the machine. I know that sounds intimidating, but it’s also surprisingly enjoyable and empowering.
Oh, and I made sure to wrap both the view transitions and the scroll-driven animations in a `prefers-reduced-motion: no-preference` `@media` query.
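In practice that means the animation styles only apply inside the guard, something like:

```css
@media (prefers-reduced-motion: no-preference) {
  body {
    animation: drift linear both;
    animation-timeline: scroll(root);
  }
}
```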
I’m pleased with how the website turned out. It feels fun. More importantly, it feels fast. There is zero JavaScript. That’s the main reason why it’s very, very performant (and accessible).
Smooth transitions across pages; smooth animations as you scroll: it’s great what you can do with just HTML and CSS.
Oh, how I wish that every team building for the web would use this sensible approach!
I’m very glad to see that work has moved away from a separate `selectmenu` element to instead enhancing the existing `select` element—I could never see an upgrade path for `selectmenu`, but now there are plenty of opportunities for progressive enhancement.
Perhaps the tide is finally turning against complex web frameworks.
I want to be a part of a frontend culture that accepts and promotes our responsibilities to others, rather than wallowing in self-centred “DX” puffery. In the hierarchy of priorities, users must come first.
Alex doesn’t pull his punches in this four-part truth-telling:
The React anti-pattern of hugely bloated single-page apps has to stop. And we can stop it.
Success or failure is in your hands, literally. Others in the equation may have authority, but you have power.
Begin to use that power to make noise. Refuse to go along with plans to build YAJSD (Yet Another JavaScript Disaster). Engineering leaders look to their senior engineers for trusted guidance about what technologies to adopt. When someone inevitably proposes the React rewrite, do not be silent. Do not let the bullshit arguments and nonsense justifications pass unchallenged. Make it clear to engineering leadership that this stuff is expensive and is absolutely not “standard”.
This is an interesting thought from Scott: using Shadow DOM in HTML web components but only as a way of providing sort-of user-agent styles:
providing some default, low-specificity styles for our slotted light-dom HTML elements while allowing them to be easily overridden.
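Here’s a rough sketch of the idea; the element name and styles are mine, not Scott’s:

```js
// Shadow DOM used only to ship overridable defaults for light-DOM content
customElements.define('ui-button', class extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        /* For slotted elements, document styles beat ::slotted() rules,
           so these behave like user-agent defaults */
        ::slotted(button) {
          font: inherit;
          padding: 0.5em 1em;
        }
      </style>
      <slot></slot>`;
  }
});
```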
Three great examples of HTML web components:
What I hope is that you now have the same sort of epiphany that I had when reading Jeremy Keith’s post: HTML Web Components are an HTML-first feature.
Progressive enhancement is a design and development principle where we build in layers which automatically turn themselves on based on the browser’s capabilities.
The idea of progressive enhancement is that everyone gets the perfect experience for them, rather than a pre-determined “perfect” experience from a design and development team.
There was a discussion at Clearleft recently about browser support. Rich has more details but the gist of it is that, even though we were confident that we had a good approach to browser support, we hadn’t written it down anywhere. Time to fix that.
This is something I had been thinking about recently anyway—see my post about Baseline and progressive enhancement—so it didn’t take too long to put together a document explaining our approach.
You can find it at browsersupport.clearleft.com
We’re not just making it public. We’re releasing it under a Creative Commons attribution license. You can copy this browser-support policy verbatim, you can tweak it, you can change it, you can do what you like. As long as you include a credit to Clearleft, you’re all set.
I think this browser-support policy makes a lot of sense. It certainly beats trying to pin browser support to specific browsers or version numbers:
We don’t base our browser support on specific browser names and numbers. Instead, our support policy is based on the capabilities of those browsers.
The more organisations adopt this approach, the better it is for everyone. Hence the liberal licensing.
So next time your boss or your client is asking what your official browser-support policy is, feel free to use browsersupport.clearleft.com
Web Content Accessibility Guidelines—or WCAG—looks very daunting. It’s a lot to take in. It’s kind of overwhelming. It’s hard to know where to start.
I recommend taking a deep breath and focusing on the four principles of accessibility. Together they spell out the cutesy acronym POUR:

- Perceivable
- Operable
- Understandable
- Robust
A lot of work has gone into distilling WCAG down to these four guidelines. Here’s how I apply them in my work…
Perceivable

I interpret this as:
Content will be legible, regardless of how it is accessed.
For example:

- Provide text alternatives for images.
- Make sure there’s sufficient colour contrast.
- Don’t rely on colour alone to convey information.
Operable

I interpret this as:
Core functionality will be available, regardless of how it is accessed.
For example:

- Make sure everything can be operated with a keyboard.
- Keep focus styles visible.
- Use links for navigating and buttons for actions.
Understandable

I interpret this as:
Content will make sense, regardless of how it is accessed.
For example:

- Label form fields clearly.
- Write error messages that explain how to recover.
- Declare the language of the page.
This is where it starts to get quite collaborative. Working at an agency, there will be some parts of website creation and maintenance that will require ongoing accessibility knowledge even when our work is finished.
For example:

- Writing new content with a sensible heading structure.
- Providing alternative text for any newly-added images.
- Writing clear link text.
Robust

I interpret this as:
Content and core functionality will still work, regardless of how it is accessed.
For example:

- Write valid, well-structured HTML.
- Build in layers, using progressive enhancement.
- Use semantic elements rather than a div soup.
If you’re applying a mindset of progressive enhancement, this part comes for free. If you take a different approach, you’re going to have a bad time.
Taken together, these four guidelines will get you very far without having to dive too deeply into the rest of WCAG.
Photoshop in the browser? That needs JS.
But the reality is, most of what we build is either static HTML or mostly just forms and page reloads. We can build the web that way by default, and progressively enhance a more Ajaxy experience on top of it.
The result is an app that’s faster to load, faster to run, and less prone to breaking… without much additional work for your developers.
We’re all tired of: write some code, come back to it in six months, try to make it do more, and find the whole project is broken until you upgrade everything.
Progressive enhancement allows you to do the opposite: write some code, come back to it in six months, and it’s doing more than the day you wrote it!
There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.
Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).
The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned `link rel="prerender"`.
Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a `script` element with a `type` value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.
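For reference, an inline rule set looks something like this (using the same rules as the JSON file below):

```html
<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>
```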
That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.
I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like `srcset` than `source`: you give the browser some options, but ultimately you can’t force it to do anything.
I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:
```json
{
  "prerender": [{
    "where": {
      "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}
```
The `eagerness` value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).
I still need to point to that JSON file from my HTML. Usually this would be done with something like a `link` element, but for this particular API, I can send a response header instead:

```
Speculation-Rules: "/speculationrules.json"
```
I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.
Here’s the PHP I added to send that header:
```php
header('Speculation-Rules: "/speculationrules.json"');
```
There’s one extra thing I had to do. The JSON file needs to be served with a mime-type of “application/speculationrules+json”. Here’s how I set that up in the `.conf` file for The Session on Apache:
```apache
<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-type application/speculationrules+json
  </FilesMatch>
</IfModule>
```
A bit of a faff, that.
You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.
Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.
Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!
Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.
Support for view transitions for regular websites (as opposed to single-page apps) will ship in Chrome 126. As someone who’s a big fan—to put it mildly—I am very happy about this!
Hopefully Firefox and Safari won’t be too far behind. But it’s still worth adding view transitions to your website even if not every browser supports them. They’re the perfect example of a progressive enhancement.
The browsers that don’t yet support view transitions won’t be harmed in any way if you give them the CSS for view transitions. They’ll just ignore it. For users of those browsers, nothing changes.
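That CSS can be as small as a single at-rule; browsers without support skip right past it:

```css
@view-transition {
  navigation: auto;
}
```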
Then when those browsers do ship support for view transitions, your website automatically gets an upgrade for those users. Code you’ve already written starts working from one day to the next.
Don’t wait, is what I’m saying.
I really like the Baseline initiative as a way to track browser support. It’s great to see it in use on MDN and Can I Use. It’s very handy having a glanceable indication of which browser features are newly available and which are widely available.
But…
Not all browser features work the same way. For features that work as progressive enhancements you don’t need to wait for them to be widely available.
Service workers. Preference queries. View transitions.
If a browser doesn’t support one of those features, that’s fine. Your website won’t break in that browser.
Now that’s not true of all browser features, particularly some JavaScript APIs. If a feature is critical for your site to function then you definitely want to wait until it’s widely supported.
Baseline won’t tell you the difference between those two different kinds of features.
I don’t want Baseline to get too complicated. Like I said, I really like how it’s nice and glanceable right now. But it would be nice if there were some indication that a newly-available feature is a progressive enhancement.
For now it’s up to us to make that distinction. So don’t fall into the trap of thinking that just because a feature isn’t listed as widely-available you can’t use it yet.
Really you want to ask two questions:

- How widely available is this feature?
- Can this feature be used as a progressive enhancement?
If Baseline tells you that the answer to the first question is “newly-available”, move on to the second question. If the answer to that is “no, it can’t be used as a progressive enhancement”, don’t ship that feature in production just yet.
But if the answer to that second question is “hell yeah, it’s a progressive enhancement!” then go for it, regardless of the answer to the first question.
Y’know, there’s a real irony in a common misunderstanding around progressive enhancement: some people seem to think it’s about not being able to use advanced browser features. In reality it’s the opposite. Progressive enhancement allows you to use advanced browser features even before they’re widely supported.
So many of the problems and challenges of working with Web Components just fall away when you ditch the shadow DOM and use them as a light wrapper for progressive enhancement.
Some lovely HTML web components—perfect for progressive enhancement!
I’ve been deep-diving into HTML web components over the past few weeks. I decided to refactor the JavaScript on The Session to use custom elements wherever it made sense.
I really enjoyed doing this, even though the end result for users is exactly the same as before. This was one of those refactors that was for me, and also for future me. The front-end codebase looks a lot more understandable and therefore maintainable.
Most of the JavaScript on The Session is good ol’ DOM scripting. Listen for events; when an event happens, make some update to some element. It’s the kind of stuff we might have used jQuery for in the past.
Chris invoked Betteridge’s law of headlines recently by asking Will Web Components replace React and Vue? I agree with his assessment. The reactivity you get with full-on frameworks isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM.
I’ve written about my preferred way to do DOM scripting: `event.target.closest`. One of the advantages to that approach is that even if the DOM gets updated—perhaps via Ajax—the event listening will still work.
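A minimal sketch of that pattern (the selector is invented for illustration):

```js
// One listener on the document handles clicks for current and future elements
document.addEventListener('click', (event) => {
  const button = event.target.closest('.copy-button');
  if (!button) return;
  // …do something with the button, even if it arrived via Ajax
});
```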
Well, this is exactly the kind of thing that custom elements take care of for you. The `connectedCallback` method gets fired whenever an instance of the custom element is added to the document, regardless of whether that’s in the initial page load or later in an Ajax update.
So my client-side scripting style has updated over time: from jQuery-style DOM scripting, to event delegation with `event.target.closest`, to custom elements with `connectedCallback`.

None of these progressions were particularly ground-breaking or allowed me to do anything I couldn’t do previously. But each progression improved the resilience and maintainability of my code.
Like Chris, I’m using web components to progressively enhance what’s already in the markup. In fact, looking at the code that Chris is sharing, I think we may be writing some very similar web components!
A few patterns have emerged for me…
Naming things is famously hard. Every time you make a new custom element you have to give it a name that includes a hyphen. I settled on the convention of using the first part of the name to echo the element being enhanced.
If I’m adding an enhancement to a `button` element, I’ll wrap it in a custom element that starts with `button-`. I’ve now got custom elements like `button-geolocate`, `button-confirm`, `button-clipboard` and so on.
Likewise if the custom element is enhancing a link, it will begin with `a-`. If it’s enhancing a form, it will begin with `form-`.
The name of the custom element tells me how it’s expected to be used. If I find myself wrapping a `div` with `button-geolocate` I shouldn’t be surprised when it doesn’t work.
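So the markup reads like a sentence about the enhancement. A sketch, not the site’s actual code:

```html
<button-clipboard>
  <button type="button">Copy to clipboard</button>
</button-clipboard>
```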
You can use any attributes you want on a web component. You made up the name of the custom element and you can make up the names of the attributes too.
I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.
So I use `data-` attributes. I’ve already got a hyphen in the name of my custom element, so it makes sense to have hyphens in my attributes too. And by using `data-` attributes, the browser gives me automatic reflection of the value in the `dataset` property.
Instead of getting a value with `this.getAttribute('maximum')` I get to use `this.dataset.maximum`. Nice and neat.
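A hypothetical example (the element and attribute names here are made up):

```js
// Markup: <input-maximum data-maximum="10"><input type="number"></input-maximum>
customElements.define('input-maximum', class extends HTMLElement {
  connectedCallback() {
    // data-maximum is reflected automatically in the dataset property
    this.querySelector('input').setAttribute('max', this.dataset.maximum);
  }
});
```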
My favourite web components aren’t all-singing, all-dancing powerhouses. Rather they do one thing, often a very simple thing.
Here are some examples:
- `aria-collapsable` for toggling the display of one element when you click on another.
- `play-button` for adding a play button to an `audio` or `video` element.
- `ajax-form` for sending a form via Ajax instead of a full page refresh.
- `user-avatar` for adding a tooltip to an image.
- `table-saw` for making tables responsive.

All of those are HTML web components in that they extend your existing markup rather than JavaScript web components that are used to replace HTML. All of those are also unambitious by design. They each do one thing and one thing only.
But what if my web component needs to do two things?
I make two web components.
The beauty of custom elements is that they can be used just like regular HTML elements. And the beauty of HTML is that it’s composable.
What if you’ve got some text that you want to be a level-three heading and also a link? You don’t bemoan the lack of an element that does both things. You wrap an `a` element in an `h3` element.
The same goes for custom elements. If I find myself adding multiple behaviours to a single custom element, I stop and ask myself if this should be multiple custom elements instead.
Take some of those `button-` elements I mentioned earlier. One of them copies text to the clipboard, `button-clipboard`. Another throws up a confirmation dialog to complete an action, `button-confirm`. Suppose I want users to confirm when they’re copying something to their clipboard (not a realistic example, I admit). I don’t have to create a new hybrid web component. Instead I wrap the `button` in the two existing custom elements.
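The markup for that composition is just nesting. A sketch:

```html
<button-confirm>
  <button-clipboard>
    <button type="button">Copy to clipboard</button>
  </button-clipboard>
</button-confirm>
```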
Rather than having a few powerful web components, I like having lots of simple web components. The power comes with how they’re combined. Like Unix pipes. And it has the added benefit of stopping my code getting too complex and hard to understand.
Okay, so I’ve broken all of my behavioural enhancements down into single-responsibility web components. But what if one web component needs to have awareness of something that happens in another web component?
Here’s an example from The Session: the results page when you search for sessions in London.
There’s a map. That’s one web component. There’s a list of locations. That’s another web component. There are links for traversing backwards and forwards through the locations via Ajax. Those links are in web components too.
I want the map to update when the list of locations changes. Where should that logic live? How do I get the list of locations to communicate with the map?
When a list of locations is added to the document, it emits a custom event that bubbles all the way up. In fact, that’s all this component does.
You can call the event anything you want. It could be a `newLocations` event. That event is dispatched in the `connectedCallback` of the component.
Meanwhile in the map component, an event listener listens for any `newLocations` events on the document. When that event handler is triggered, the map updates.
The web component that lists locations has no idea that there’s a map on the same page. It doesn’t need to. It just needs to dispatch its event, no questions asked.
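Here’s a sketch of both halves; the element names and implementation details are my assumptions, not the actual code from The Session:

```js
// The list announces its arrival; it doesn't know or care who's listening
customElements.define('location-list', class extends HTMLElement {
  connectedCallback() {
    this.dispatchEvent(new CustomEvent('newLocations', { bubbles: true }));
  }
});

// The map listens at the document level and redraws when the event arrives
customElements.define('location-map', class extends HTMLElement {
  connectedCallback() {
    document.addEventListener('newLocations', () => this.update());
  }
  update() {
    // …redraw the map from the current list of locations
  }
});
```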
There’s nothing specific to web components here. Event-driven programming is a tried and tested approach. It’s just a little easier to do thanks to the `connectedCallback` method.
I’m documenting all this here as a snapshot of my current thinking on HTML web components when it comes to:

- naming custom elements,
- using `data-` attributes,
- sticking to single responsibilities, and
- communicating between components.
I may well end up changing my approach again in the future. For now though, these ideas are serving me well.
Those HTML web components I made for date inputs are very simple. All they do is slightly extend the behaviour of the existing `input` elements.
This would be the ideal use-case for the `is` attribute:
```html
<input is="input-date-future" type="date">
```
Alas, Apple have gone on record to say that they will never ship support for customized built-in elements.
So instead we have to make HTML web components by wrapping existing elements in new custom elements:
```html
<input-date-future>
  <input type="date">
</input-date-future>
```
The end result is the same. Mostly.
Because there’s now an additional element in the DOM, there could be unexpected styling implications. Like, suppose the original element was a direct child of a flex or grid container. Now that will no longer be true.
So something I’ve started doing with HTML web components like these is adding something like this inside the `connectedCallback` method:
```js
connectedCallback() {
  this.style.display = 'contents';
  …
}
```
This tells the browser that, as far as styling is concerned, there’s nothing to see here. Move along.
Or you could (and probably should) do it in your stylesheet instead:
```css
input-date-future {
  display: contents;
}
```
Just to be clear, you should only use `display: contents` if your HTML web component is augmenting what’s within it. If you add any behaviours or styling to the custom element itself, then don’t add this style declaration.
It’s a bit of a hack to work around the lack of universal support for the `is` attribute, but it’ll do.
I had the opportunity to trim some code from The Session recently. That’s always a good feeling.
In this case, it was a progressive enhancement pattern that was no longer needed. Kind of like removing a polyfill.
There are a couple of places on the site where you can input a date. This is exactly what `input type="date"` is for. But when I was making the interface, the support for this type of input was patchy.
So instead the interface used three `select` dropdowns: one for days, one for months, and one for years. Then I did a bit of feature detection and if the browser supported `input type="date"`, I replaced the three `select`s with one `date` input.
It was a little fiddly but it worked.
Fast forward to today and `input type="date"` is supported across the board. So I threw away the JavaScript and updated the HTML to use date inputs by default. Nice!
I was discussing date inputs recently when I was talking to students in Amsterdam:
They’re given a PDF inheritance-tax form and told to convert it for the web.
That form included dates. The dates were all in the past so the students wanted to be able to set a `max` value on the datepicker. Ideally that should be done on the server, but it would be nice if you could easily do it in the browser too.
Wouldn’t it be nice if you could specify past dates like this?
```html
<input type="date" max="today">
```
Or for future dates:
```html
<input type="date" min="today">
```
Alas, no such syntactic sugar exists in HTML so we need to use JavaScript.
This seems like an ideal use-case for HTML web components:
Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.
In this case, it would be nice to augment an existing `input type="date"` element. Something like this:
```html
<input-date-past>
  <input type="date">
</input-date-past>
```
Here’s the JavaScript that does the augmentation:
```js
customElements.define('input-date-past', class extends HTMLElement {
  constructor() {
    super();
  }
  connectedCallback() {
    this.querySelector('input[type="date"]').setAttribute('max', new Date().toISOString().substring(0,10));
  }
});
```
That’s it.
Here’s a CodePen where you can see it in action along with another HTML web component for future dates called, you guessed it, `input-date-future`.
See the Pen Date input HTML web components by Jeremy Keith (@adactio) on CodePen.