Journal tags: process

Manual ’till it hurts

I’ve been going buildless on a few projects recently—or, as Brad crudely puts it, raw-dogging websites. Not just obviously simple things like Clearleft’s Browser Support page, but sites like:

They also have 0 dependencies.

Like Max says:

Funnily enough, many build tools advertise their superior “Developer Experience” (DX). For my money, there’s no better DX than shipping code straight to the browser and not having to worry about some cryptic node_modules error in between.

Making websites without a build step is a gift to your future self. When you open that project six months or a year or two years later, there’ll be no faffing about with npm updates, installs, or vulnerabilities.

Need to edit the CSS? You edit the CSS. Need to change the markup? You change the markup.

It’s remarkably freeing. It’s also very, very performant.

If you’re thinking that your next project couldn’t possibly be made without a build step, let me tell you about a phrase I first heard in the indie web community: “Manual ‘till it hurts”. It’s basically a two-step process:

  1. Start doing what you need to do by hand.
  2. When that becomes unworkable, introduce some kind of automation.

It’s remarkable how often you never reach step two.

I’m not saying premature optimisation is the root of all evil. I’m just saying it’s premature.

Start simple. Get more complex if and when you need to.

You might never need to.

Creativity

It’s like a little mini conference season here in Brighton. Tomorrow is ffconf, which I’m really looking forward to. Last week was UX Brighton, which was thoroughly enjoyable.

Maybe it’s because the theme this year was all around creativity, but all of the UX Brighton speakers gave entertaining presentations. The topics of innovation and creativity were tackled from all kinds of different angles. I was having flashbacks to the Clearleft podcast episode on innovation—have a listen if you haven’t already.

As the day went on though, something was tickling at the back of my brain. Yes, it’s great to hear about ways to be more creative and unlock more innovation. But maybe there was something being left unsaid: finding novel ways of solving problems and meeting user needs should absolutely be done …once you’ve got your basics sorted out.

If your current offering is slow, hard to use, or inaccessible, that’s the place to prioritise time and investment. It doesn’t have to be at the expense of new initiatives: this can happen in parallel. But there’s no point spending all your efforts coming up with the most innovative lipstick for a pig.

On that note, I see that more and more companies are issuing breathless announcements about their new “innovative” “AI” offerings. All the veneer of creativity without any of the substance.

Making the Patterns Day website

I had a lot of fun making the website for Patterns Day.

If you’re interested in the tech stack, here’s what I used:

  1. HTML
  2. CSS

Actually, technically it’s all HTML because the styles are inside a style element rather than a separate style sheet, but you know what I mean. Also, there is technically some JavaScript but all it does is register a service worker that takes care of caching and going offline.
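
For the record, that JavaScript really is tiny. Here’s a minimal sketch of what registering a service worker looks like (the file name serviceworker.js is just for illustration, not necessarily what the Patterns Day site uses):

if ('serviceWorker' in navigator) {
  // Register the service worker that handles caching and offline fallbacks
  navigator.serviceWorker.register('/serviceworker.js');
}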

I didn’t use any build tools. There was no pipeline. There is no node_modules folder filling up my hard drive. Nothing was automated. The website was hand-crafted the long hard stupid way.

I started with the content. I wrote out the words and marked them up with the most appropriate HTML elements.

A screenshot of an unstyled web page for Patterns Day.

Time to layer on the presentation.

For the design, I turned to Michelle for help. I gave her a brief, describing the vibe of the conference, and asked her to come up with an appropriate visual language.

Crucially, I asked her not to design a website. Instead I asked her to think about other places where this design language might be used: a poster, social media, anything but a website.

Partly I was doing this for my own benefit. If you give me a pixel-perfect design for a web page and tell me to code it up, either I won’t do it or I won’t enjoy it. I just don’t get any motivation out of that kind of direct one-to-one translation.

But give me guardrails, give me constraints, give me boundary conditions, and off I go!

Michelle was very gracious in dealing with such a finicky client as myself (“Can you try this other direction?”, “Hmm… I think I preferred the first one after all!”). She delivered a colour palette, a type scale, typeface choices, and some wonderful tiling patterns …it is Patterns Day after all!

With just a few extra lines of CSS, the basic typography was in place.

A screenshot of the web page for Patterns Day with web fonts applied.

I started layering on the colours. Even though this was a one-page site, I still made liberal use of custom properties in the CSS. It just feels good to be able to update one value and see the results, well …cascade.
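
Something along these lines, though the property names and values here are made up for illustration rather than lifted from the actual stylesheet:

:root {
  --ink: hsl(270, 50%, 25%);
  --accent: hsl(35, 95%, 55%);
}
h1, h2 {
  color: var(--ink);
}
a {
  color: var(--accent);
}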

A screenshot of the web page for Patterns Day with colours added.

I had a lot of fun with the tiling background images. SVG was the perfect format for these. And because the tiles were so small in file size, I just inlined them straight into the CSS.
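
If you haven’t inlined SVG in CSS before, it looks something like this. This is a made-up dot pattern rather than one of Michelle’s actual tiles, and note that the # in the fill colour has to be encoded as %23:

.tiled {
  background-image: url('data:image/svg+xml,<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24"><circle cx="12" cy="12" r="3" fill="%23593196"/></svg>');
}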

By this point, I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.

I’m not sure it’s possible to engineer that kind of serendipity in Figma. Figma was the perfect tool for exploring ideas around the visual vocabulary, and for handing over design decisions around colour, typography, and texture. But when it comes to how the content is going to behave on the World Wide Web, nothing beats a browser for fidelity.

A screenshot of the web page for Patterns Day with some changes applied.

By this point I was really sweating the details, like getting the logo just right and adjusting the type scale for different screen sizes. Needless to say, Utopia was a godsend for that.

I was also checking back in with Michelle to get her take on design decisions I was making.

I could’ve kept tinkering but the diminishing returns were a sign that it was time to put this out into the world.

A screenshot of the web page for Patterns Day with the logo in place.

It felt really good to work on a web page like this. It felt like I was getting my hands into the soil of the web. I don’t think it’s an accident that the result turned out to be very performant.

Getting hands-on like this stops me from getting rusty. And honestly, working with CSS these days is a joy. There’s such power to be had from using var() in combination with functions like calc() and clamp(). Layout is a breeze with flexbox and grid. Browser differences are practically non-existent. We’ve never had it so good.

Here’s something I noticed about my relationship to CSS: my brain has finally made the switch to logical properties. Now if I’m looking at some CSS and I see left, right, top, or bottom, it looks like a bug to me. Those directional properties feel loaded with assumptions whereas logical properties feel much more like working with the grain of the web.
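
For example, here’s the directional version of some declarations followed by their logical equivalents for a horizontal, left-to-right writing mode (the .card selector is hypothetical):

/* directional: hard-codes assumptions about writing direction */
.card {
  margin-left: 1em;
  padding-top: 0.5em;
}

/* logical: works with the writing mode of the document */
.card {
  margin-inline-start: 1em;
  padding-block-start: 0.5em;
}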

You can call me AI

I’ve mentioned before that I’m not a fan of initialisms and acronyms. They can be exclusionary.

It bothers me doubly when everyone is talking about AI.

First of all, the term is so vague as to be meaningless. Sometimes—though rarely—AI refers to general artificial intelligence. Sometimes AI refers to machine learning. Sometimes AI refers to large language models. Sometimes AI refers to a series of if/else statements. That’s quite a spectrum of meaning.

Secondly, there’s the assumption that everyone understands the abbreviation. I guess that’s generally a safe assumption, but sometimes AI could refer to something other than artificial intelligence.

In countries with plenty of pastoral agriculture, if someone works in AI, it usually means they’re going from farm to farm either extracting or injecting animal semen. AI stands for artificial insemination.

I think that abbreviation might work better for the kind of things currently described as using AI.

We were discussing this hot topic at work recently. Is AI coming for our jobs? The consensus was maybe, but only the parts of our jobs that we’re more than happy to have automated. Like summarising some findings. Or perhaps as a kind of lorem ipsum generator. Or for just getting the ball rolling with a design direction. As Terence puts it:

Midjourney is great for a first draft. If, like me, you struggle to give shape to your ideas then it is nothing short of magic. It gets you through the first 90% of the hard work. It’s then up to you to refine things.

That’s pretty much the conclusion we came to in our discussion at Clearleft. There’s no way that we’d use this technology to generate outputs for clients, but we certainly might use it to generate inputs. It’s like how we’d do a quick round of sketching to get a bunch of different ideas out into the open. Terence is spot on when he says:

Midjourney lets me quickly be wrong in an interesting direction.

To put it another way, using a large language model could be a way of artificially injecting some seeds of ideas. Artificial insemination.

So now when I hear people talk about using AI to create images or articles, I don’t get frustrated. Instead I think, “Using artificial insemination to create images or articles? Yes, that sounds about right.”

Chain of tools

I shared this link in Slack with my co-workers today:

Cultivating depth and stillness in research by Andy Matuschak.

I wasn’t sure whether it belonged in the #research or the #design channel. While it’s ostensibly about research, I think it applies to design more broadly. Heck, it probably applies to most fields. I should have put it in the Slack channel I created called #iiiiinteresting.

The article is all about that feeling of frustration when things aren’t progressing quickly, even when you know intellectually that not everything should always progress quickly.

The article is filled with advice for battling this feeling, including this observation on curiosity:

Curiosity can also totally change my relationship to setbacks. Say I’ve run an experiment, collected the data, done the analysis, and now I’m writing an essay about what I’ve found. Except, halfway through, I notice that one column of the data really doesn’t support the conclusion I’d drawn. Oops. It’s tempting to treat this development as a frustrating impediment—something to be overcome expediently. Of course, that’s exactly the wrong approach, both emotionally and epistemically. Everything becomes much better when I react from curiosity instead: “Oh, wait, wow! Fascinating! What is happening here? What can this teach me? How might this change what I try next?”

But what really resonated with me was this footnote attached to that paragraph:

I notice that I really struggle to generate curiosity about problems in programming. Maybe it’s because I’ve been doing it so long, but I think it’s because my problems are usually with ephemeral ideas, incidental to what I actually care about. When I’m fighting some godforsaken Javascript build system, I don’t feel even slightly curious to “really” understand those parochial machinations. I know they’re just going to be replaced by some new tool next year.

I feel seen.

I know I’m not alone. I know people who were driven out of front-end development because they felt the unspoken ultimatum was to either become a “full stack” developer or see yourself out.

Remember Chris’s excellent post, The Great Divide? Zach referenced it recently. He wrote:

The question I keep asking though: is the divide borne from a healthy specialization of skills or a symptom of unnecessary tooling complexity?

Mostly I feel sad about the talented people we’ve lost because they felt their front-of-the-front-end work wasn’t valued.

But wait! Can I turn my frown upside down? Can I take Andy Matuschak’s advice and say, “Oh, wait, wow! Fascinating! What is happening here? What can this teach me?”

Here’s one way of squinting at the situation…

There’s an opportunity here. If many people—myself included—feel disheartened and ground down by the amount of time they need to spend dealing with toolchains and build systems, what kind of system would allow us to get on with making websites without having to deal with that stuff?

I’m not proposing that we get rid of these complex toolchains, but I am wondering if there’s a way to make it someone else’s job.

I guess this job is DevOps. In theory it’s a specialised field. In practice everyone adding anything to a codebase partakes in continual partial DevOps because they must understand the toolchains and build processes in order to change one line of HTML.

I’m not saying “Don’t Make Me Think” when it comes to the tooling. I totally get that some working knowledge is probably required. But the ratio has gotten out of whack. You need a lot of working knowledge of the toolchains and build processes.

In fact, that’s mostly what companies hire for these days. If you’re well versed in HTML, CSS, and vanilla JavaScript, but you’re not up to speed on pipelines and frameworks, you’re going to have a hard time.

That doesn’t seem right. We should change it.

Accessibility is systemic

I keep thinking about this blog post I linked to last week by Jacob Kaplan-Moss. It’s called Quality Is Systemic:

Software quality is more the result of a system designed to produce quality, and not so much the result of individual performance. That is: a group of mediocre programmers working with a structure designed to produce quality will produce better software than a group of fantastic programmers working in a system designed with other goals.

I think he’s on to something. I also think this applies to design just as much as development. Maybe more so. In design, there’s maybe too much emphasis placed on the talent and skill of individual designers and not enough emphasis placed on creating and nurturing a healthy environment where anyone can contribute to the design process.

Jacob also ties this into hiring:

Instead of spending tons of time and effort on hiring because you believe that you can “only hire the best”, direct some of that effort towards building a system that produces great results out of a wider spectrum of individual performance.

I couldn’t agree more! It’s just one of the reasons why the smart long-term strategy can be to concentrate on nurturing junior designers and developers rather than head-hunting rockstars.

As an aside, if you think that the process of nurturing junior designers and developers is trickier now that we’re working remotely, I highly recommend reading Mandy’s post, Official myths:

Supporting junior staff is work. It’s work whether you’re in an office some or all of the time, and it’s work if Slack is the only office you know. Hauling staff back to the office doesn’t make supporting junior staff easier or even more likely.

Hiring highly experienced designers and developers makes total sense, at least in the short term. But I think the better long-term solution—as outlined by Jacob—is to create (and care for) a system where even inexperienced practitioners will be able to do good work by having the support and access to knowledge that they need.

I was thinking about this last week when Irina very kindly agreed to present a lunch’n’learn for Clearleft all about inclusive design.

She answered a question that had been at the front of my mind: what’s the difference between inclusive design and accessibility?

The way Irina put it, accessibility is focused on implementation. To make a website accessible, you need people with the necessary skills, knowledge and experience.

But inclusive design is about the process and the system that leads to that implementation.

To use that cliché of the double diamond, maybe inclusive design is about “building the right thing” and accessibility is about “building the thing right.”

Or to put it another way, maybe accessibility is about outputs, whereas inclusive design is about inputs. You need both, but maybe we put too much emphasis on the outputs and not enough emphasis on the inputs.

This is what made me think of Jacob’s assertion that quality is systemic.

Imagine someone who’s an expert at accessibility: they know all the details of WCAG and ARIA. Now put that person into an organisation that doesn’t prioritise accessibility. They’re going to have a hard time and they probably won’t be able to be very effective despite all their skills.

Now imagine an organisation that prioritises inclusivity. Even if their staff don’t (yet) have the skills and knowledge of an accessibility expert, just having the processes and priorities in place from the start will make it easier for everyone to contribute to a more accessible experience.

It’s possible to make something accessible in the absence of a system that prioritises inclusive design but it will be hard work. Whereas making sure inclusive design is prioritised at an organisational level makes it much more likely that the outputs will be accessible.

Declarative design systems

When I wrote about the idea of declarative design it really resonated with a lot of people.

I think that there’s a general feeling of frustration with the imperative approach to designing and developing—attempting to specify everything exactly up front. It just doesn’t scale. As Jason put it, the traditional web design process is fundamentally broken:

This is the worst of all worlds—a waterfall process creating dozens of artifacts, none of which accurately capture how the design will look and behave in the browser.

In theory, design systems could help overcome this problem; spend a lot of time up front getting a component to be correct and then it can be deployed quickly in all sorts of situations. But the word “correct” is doing a lot of work there.

If you’re approaching a design system with an imperative mindset then “correct” means “exact.” With this approach, precision is seen as valuable: precise spacing, precise numbers, precise pixels.

But if you’re approaching a design system with a declarative mindset, then “correct” means “resilient.” With this approach, flexibility is seen as valuable: flexible spacing, flexible ranges, flexible outputs.

These are two fundamentally different design approaches and yet the results of both would be described as a design system. The term “design system” is tricky enough to define as it is. This is one more layer of potential misunderstanding: one person says “design system” and means a collection of very precise, controlled, and exact components; another person says “design system” and means a predefined set of boundary conditions that can be used to generate components.

Personally, I think the word “system” is the important part of a design system. But all too often design systems are really collections rather than systems: a collection of pre-generated components rather than a system for generating components.

The systematic approach is at the heart of declarative design; setting up the rules and ratios in advance but leaving the detail of the final implementation to the browser at runtime.

Let me give an example of what I think is a declarative approach to a component. I’ll use the “hello world” of design system components—the humble button.

Two years ago I wrote about programming CSS to perform Sass colour functions. I described how CSS features like custom properties and calc() can be used to recreate mixins like darken() and lighten().

I showed some CSS for declaring the different colour elements of a button using hue, saturation and lightness encoded as custom properties. Here’s a CodePen with some examples of different buttons.

See the Pen Button colours by Jeremy Keith (@adactio) on CodePen.

If these buttons were in an imperative design system, then the output would be the important part. The design system would supply the code needed to make those buttons exactly. If you need a different button, it would have to be added to the design system as a variation.

But in a declarative design system, the output isn’t as important as the underlying ruleset. In this case, there are rules like:

For the hover state of a button, the lightness of its background colour should dip by 5%.

That ends up encoded in CSS like this:

button:hover {
    background-color: hsl(
        var(--button-colour-hue),
        var(--button-colour-saturation),
        calc(var(--button-colour-lightness) - 5%)
    );
}
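
The custom properties themselves would be declared elsewhere. Something like this, with values that are purely illustrative rather than anything canonical:

button {
    --button-colour-hue: 210;
    --button-colour-saturation: 60%;
    --button-colour-lightness: 50%;
    background-color: hsl(
        var(--button-colour-hue),
        var(--button-colour-saturation),
        var(--button-colour-lightness)
    );
}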

In this kind of design system you can look at some examples to see the results of this rule in action. But those outputs are illustrative. They’re not the final word. If you don’t see the exact button you want, that’s okay; you’ve got the information you need to generate what you need and still stay within the pre-defined rules about, say, the hover state of buttons.

This seems like a more scalable approach to me. It also seems more empowering.

One of the hardest parts of embedding a design system within an organisation is getting people to adopt it. In my experience, nobody likes adopting something that’s being delivered from on high as a pre-made set of components. It’s meant to be helpful: “here, use these pre-made components to save time not reinventing the wheel”, but it can come across as overly controlling: “we don’t trust you to exercise good judgement so stick to these pre-made components.”

The declarative approach is less controlling: “here are pre-defined rules and guidelines to help you make components.” But this lack of precision comes at a cost. The people using the design system need to have the mindset—and the ability—to create the components they need from the systematic rules they’ve been provided.

My gut feeling is that the imperative mindset is a good match for most of today’s graphic design tools like Figma or Sketch. Those tools deal with precise numbers rather than ranges and rules.

The declarative mindset, on the other hand, increasingly feels like a good match for CSS. The language has evolved to allow rules to be set up through custom properties, calc(), clamp(), minmax(), and so on.
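
A grid rule like this is a small example of that declarative mindset: you declare the boundary conditions (each column should be at least 12em wide) and the browser works out how many columns to generate. The .cards selector here is hypothetical:

.cards {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(12em, 1fr));
  gap: 1em;
}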

So, as always, there isn’t a right or wrong approach here. It all comes down to what’s most suitable for your organisation.

If your designers and developers have an imperative mindset and Figma files are considered the source of truth, then they would be better served by an imperative design system.

But if you’re lucky enough to have a team of design engineers that think in terms of HTML and CSS, then a declarative design system will be a force multiplier. A bicycle for the design engineering mind.

Getting back

The three-part almost nine-hour long documentary Get Back is quite fascinating.

First of all, the fact that all this footage exists is remarkable. It’s as if Disney had announced that they’d found the footage for a film shot between Star Wars and The Empire Strikes Back.

Still, does this treasure trove really warrant the daunting length of this new Beatles documentary? As Terence puts it:

There are two problems with this Peter Jackson documentary. The first is that it is far too long - are casual fans really going to sit through 9 hours of a band bickering? The second problem is that it is far too short! Beatles obsessives (like me) could happily drink in a hundred hours of this stuff.

In some ways, watching Get Back is like watching one of those Andy Warhol art projects where he just pointed a camera at someone for 24 hours. It’s simultaneously boring and yet oddly mesmerising.

What struck myself and Jessica watching Get Back was how much it was like our experience of playing with Salter Cane. I’m not saying Salter Cane are like The Beatles. I’m saying that The Beatles are like Salter Cane and every other band on the planet when it comes to how the sausage gets made. The same kind of highs. The same kind of lows. And above all, the same kind of tedium. Spending hours and hours in a practice room or a recording studio is simultaneously exciting and dull. This documentary captures that perfectly.

I suppose Peter Jackson could’ve made a three-part fly-on-the-wall documentary series about any band and I would’ve found it equally interesting. But this is The Beatles and that means there’s a whole mythology that comes along for the ride. So, yes, it’s like watching paint dry, but on the other hand, it’s paint painted by The Beatles.

What I liked about Get Back is that it demystified the band. The revelation for me was really understanding that this was just four lads from Liverpool making music together. And I know I shouldn’t be surprised by that—the Beatles themselves spent years insisting they were just four lads from Liverpool making music together, but, y’know …it’s The Beatles!

There’s a scene in the Danny Boyle film Yesterday where the main character plays Let It Be for the first time in a world where The Beatles have never existed. It’s one of the few funny parts of the film. It’s funny because to everyone else it’s just some new song but we, the audience, know that it’s not just some new song…

Christ, this is Let It Be! You’re the first people on Earth to hear this song! This is like watching Da Vinci paint the Mona Lisa right in front of your bloody eyes!

But truth is even more amusing than fiction. In the first episode of Get Back, we get to see when Paul starts noodling on the piano playing Let It Be for the first time. It’s a momentous occasion and the reaction from everyone around him is …complete indifference. People are chatting, discussing a set design that will never get built, and generally ignoring the nascent song being played. I laughed out loud.

There’s another moment when George brings in the song he wrote the night before, I Me Mine. He plays it while John and Yoko waltz around. It’s in 3/4 time and in a minor key. I turned to Jessica and said “That’s the most Salter Cane sounding one.” Then, I swear, at that moment, after George has stopped playing that song, he plays a brief little riff on the guitar that sounds exactly like a Salter Cane song we’re working on right now. Myself and Jessica turned to each other and said, “What was that‽”

Funnily enough, when we told this to Chris, the singer in Salter Cane, he mentioned how that was the scene that had stood out to him as well, but not for that riff (he hadn’t noticed the similarity). For him, it was about how George had brought just a scrap of a song. Chris realised it was the kind of scrap that he would come up with, but then discard, thinking there’s not enough there. So maybe there’s a lesson here about sharing those scraps.

Watching Get Back, I was trying to figure out if it was so fascinating to me and Jessica (and Chris) because we’re in a band. Would it resonate with other people?

The answer, it turns out, is yes, very much so. Everyone’s been sharing that clip of Paul coming up with the beginnings of the song Get Back. The general reaction is one of breathless wonder. But as Chris said, “How did you think songs happened?” His reaction was more like “yup, accurate.”

Inevitably, there are people mining the documentary for lessons in creativity, design, and leadership. There are already Medium think-pieces and newsletters analysing the processes on display. I guarantee you that there will be multiple conference talks at UX events over the next few years that will include footage from Get Back.

I understand how you could watch this documentary and take away the lesson that these were musical geniuses forging remarkable works of cultural importance. But that’s not what I took from it. I came away from it thinking they’re just a band who wrote and recorded some songs. Weirdly, that made me appreciate The Beatles even more. And it made me appreciate all the other bands and all the other songs out there.

Priority of design inputs

As you may already know, I’m a nerd for design principles. I collect them. I did a podcast episode on them. I even have a favourite design principle. It’s from the HTML design principles. The priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s all about priorities, see?

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Jason is also a fan of the priority of constituencies. He recently wrote about applying it to design systems and came up with this:

User needs come before the needs of component consumers, which come before the needs of component developers, which come before the needs of the design system team, which come before theoretical purity.

That got me thinking about how this framing could be applied to other areas, like design.

Designers are used to juggling different needs (or constituencies): user needs, business needs, and so on. But what I’m interested in is how designers weigh up different inputs into the design process.

The obvious inputs are the insights you get from research. But even that can be divided into different categories. There’s qualitative research (talking to people) and quantitative research (sifting through numbers). Which gets higher priority?

There are other inputs too. Take best practices. If there’s a tried and tested solution to a problem, should that take priority over something new and untested? Maybe another way of phrasing it is to call it experience (whether that’s the designer’s own experience or the collective experience of the industry).

And though we might not like to acknowledge it because it doesn’t sound very scientific, gut instinct is another input into the design process. Although maybe that’s also related to experience.

Finally, how do you prioritise stakeholder wishes? What do you do if the client or the boss wants something that conflicts with user needs?

I could imagine a priority of design inputs that looks like this:

Qualitative research over quantitative research over stakeholder wishes over best practices over gut instinct.

But that could change over time. Maybe an experienced designer can put their gut instinct higher in the list, overruling best practices and stakeholder wishes …and maybe even some research insights? I don’t know.

I’ve talked before about how design principles should be reversible in a different context. The original priority of constituencies, for example, applies to HTML. But if you were to invert it, it would work for XML. Different projects have different priorities.

I could certainly imagine company cultures where stakeholder wishes take top billing. There are definitely companies that value quantitative research (data and analytics) above qualitative research (user interviews), and vice-versa.

Is a priority of design inputs something that should change from project to project? If so, maybe it would be good to hammer it out in the discovery phase so everyone’s on the same page.

Anyway, I’m just thinking out loud here. This is something I should chat more about with my colleagues to get their take.

Accessibility testing

I was doing some accessibility work with a client a little while back. It was mostly giving their site the once-over, highlighting any issues that we could then discuss. It was an audit of sorts.

While I was doing this I started to realise that not all accessibility issues are created equal. I don’t just mean in their severity. I mean that some issues can—and should—be caught early on, while other issues can only be found later.

Take colour contrast. This is something that should be checked before a line of code is written. When designs are being sketched out and then refined in a graphical editor like Figma, that’s the time to check the ratio between background and foreground colours to make sure there’s enough contrast between them. You can catch this kind of thing later on, but by then it’s likely to come with a higher cost—you might have to literally go back to the drawing board. It’s better to find the issue when you’re at the drawing board the first time.

Then there’s the HTML. Most accessibility issues here can be caught before the site goes live. Usually they’re issues of omission: form fields that don’t have an explicitly associated label element (using the for and id attributes); images that don’t have alt text; pages that don’t have sensible heading levels or landmark regions like main and nav. None of these are particularly onerous to fix and they come with the biggest bang for your buck. If you’ve got sensible forms, sensible headings, alt text on images, and a solid document structure, you’ve already covered the vast majority of accessibility issues with very little overhead. Some of these checks can also be automated: alt text for images; labels for inputs.
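
To be clear about what I mean by that low-hanging fruit, here’s a sketch of the kind of markup involved (the content is invented):

<main>
  <h1>Register your interest</h1>
  <img src="venue.jpg" alt="The front entrance of the conference venue.">
  <form>
    <label for="email">Email address</label>
    <input type="email" id="email" name="email">
  </form>
</main>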

Then there’s interactive stuff. If you only use native HTML elements you’re probably in the clear, but chances are you’ve got some bespoke interactivity on your site: a carousel; a mega dropdown for navigation; a tabbed interface. HTML doesn’t give you any of those out of the box so you’d need to make your own using a combination of HTML, CSS, JavaScript and ARIA. There’s plenty of testing you can do before launching—I always ask myself “What would Heydon do?”—but these components really benefit from being tested by real screen reader users.
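
To give a sense of what’s involved, a tabbed interface ends up needing roughly this kind of markup, plus JavaScript for toggling the panels and handling arrow keys. This is a rough sketch of the usual pattern, not a drop-in component:

<div role="tablist" aria-label="Conference details">
  <button role="tab" id="tab-speakers" aria-controls="panel-speakers" aria-selected="true">Speakers</button>
  <button role="tab" id="tab-schedule" aria-controls="panel-schedule" aria-selected="false" tabindex="-1">Schedule</button>
</div>
<div role="tabpanel" id="panel-speakers" aria-labelledby="tab-speakers">…</div>
<div role="tabpanel" id="panel-schedule" aria-labelledby="tab-schedule" hidden>…</div>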

So if you commission an accessibility audit, you should hope to get feedback that’s mostly in that third category—interactive widgets.

If you get feedback on document structure and other semantic issues with the HTML, you should fix those issues, sure, but you should also see what you can do to stop those issues going live again in the future. Perhaps you can add some steps in the build process. Or maybe it’s more about making sure the devs are aware of these low-hanging fruit. Or perhaps there’s a framework or content management system that’s stopping you from improving your HTML. Then you need to execute a plan for ditching that software.

If you get feedback about colour contrast issues, just fixing the immediate problem isn’t going to address the underlying issue. There’s a process problem, or perhaps a communication issue. In that case, don’t look for a technical solution. A design system, for example, will not magically fix a workflow issue or route around the problem of designers and developers not talking to each other.

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

Done

Remember how I said I was preparing an online conference talk? Well, I’m happy to say that not only is the talk prepared, but I’ve managed to successfully record it too.

If you want to see the finished results, come along to An Event Apart Spring Summit on April 19th. To sweeten the deal, I’ve got a discount code you can use when you buy any multi-day pass: AEAJEREMY.

Recording the talk took longer than I thought it would. I think it was because I said this:

It feels a bit different to prepare a talk for pre-recording rather than live delivery on stage. In fact, it feels less like preparing a conference talk and more like making a documentary.

Once I got that idea in my head, I think I became a lot fussier about the quality of the recording. “Would David Attenborough allow his documentaries to have the sound of a keyboard audibly being pressed? No! Start again!”

I’m pleased with the final results. And I’m really looking forward to the post-presentation discussion with questions from the audience. The talk gets provocative—and maybe a bit ranty—towards the end so it’ll be interesting to see how people react to that.

It feels good to have the presentation finished, but it also feels …weird. It’s like the feeling that conference organisers get once the conference is over. You spend all this time working towards something and then, one day, it’s in the past instead of looming in the future. It can make you feel kind of empty and listless. Maybe it’s the same for big product launches.

The two big projects I’ve been working on for the past few months were this talk and season two of the Clearleft podcast. The talk is in the can and so is the final episode of the podcast season, which drops tomorrow.

On the one hand, it’s nice to have my decks cleared. Nothing work-related to keep me up at night. But I also recognise the growing feeling of doubt and moodiness, just like the post-conference blues.

The obvious solution is to start another big project, something on the scale of making a brand new talk, or organising a conference, or recording another podcast season, or even writing a book.

The other option is to take a break for a while. Seeing as the UK government has extended its furlough scheme, maybe I should take full advantage of it. I went on furlough for a while last year and found it to be a nice change of pace.

Preparing an online conference talk

I’m terrible at taking my own advice.

Hana wrote a terrific article called You’re on mute: the art of presenting in a Zoom era. In it, she has very kind things to say about my process for preparing conference talks.

As it happens, I’m preparing a conference talk right now for delivery online. Am I taking my advice about how to put a talk together? I am on me arse.

Perhaps the most important part of the process I shared with Hana is that you don’t get too polished too soon. Instead you get everything out of your head as quickly as possible (probably onto disposable bits of paper) and only start refining once you’re happy with the rough structure you’ve figured out by shuffling those bits around.

But the way I’ve been preparing this talk has been more like watching a progress bar. I started at the start and even went straight into slides as the medium for putting the talk together.

It was all going relatively well until I hit a wall somewhere between the 50% and 75% mark. I was blocked and I didn’t have any rough sketches to fall back on. Everything was a jumbled mess in my brain.

It all came to a head at the start of last week when that jumbled mess in my brain resulted in a very restless night spent tossing and turning while I imagined how I might complete the talk.

This is a terrible way of working and I don’t recommend it to anyone.

The problem was I couldn’t even return to the proverbial drawing board because I hadn’t given myself a drawing board to return to (other than this crazy wall of connections on Kinopio).

My sleepless night was a wake-up call (huh?). The next day I forced myself to knuckle down and pump out anything even if it was shit—I could refine it later. Well, it turns out that just pumping out any old shit was exactly what I needed to do. The act of moving those fingers up and down on the keyboard resulted in something that wasn’t completely terrible. In fact, it turned out pretty darn good.

Past me said:

The idea here is to get everything out of my head.

I should’ve listened to that guy.

At this point, I think I’ve got the talk done. The progress bar has reached 100%. I even think that it’s pretty good. A giveaway for whether a talk is any good is when I find myself thinking “Yes, this has good points well made!” and then five minutes later I’m thinking “Wait, is this complete rubbish that’s totally obvious and doesn’t make much sense?” (see, for example, every talk I’ve ever prepared ever).

Now I just have to record it. The way that An Event Apart are running their online editions is that the talks are pre-recorded but followed with live Q&A. That’s how the Clearleft events team have been running the conference part of the Leading Design Festival too. Last week there were three days of this format and it worked out really, really well. This week there’ll be masterclasses, which are delivered in a more synchronous way.

It feels a bit different to prepare a talk for pre-recording rather than live delivery on stage. In fact, it feels less like preparing a conference talk and more like making a documentary. I guess this is what life is like for YouTubers.

I think the last time I was in a cinema before The Situation was at the wonderful Duke of York’s cinema here in Brighton for an afternoon showing of The Proposition followed by a nice informal chat with the screenwriter, one Nick Cave, local to this parish. It was really enjoyable, and that’s kind of what Leading Design Festival felt like last week.

I wonder if maybe we’ve been thinking about online events with the wrong metaphor. Perhaps they’re not like conferences that have moved online. Maybe they’re more like film festivals where everyone has the shared experience of watching a new film for the first time together, followed by questions to the makers about what they’ve just seen.

In the zone

I went to art college in my younger days. It didn’t take. I wasn’t very good and I didn’t work hard. So I dropped out before they could kick me out.

But I remember one instance where I actually ended up putting in more work than my fellow students—an exceptional situation.

In the first year of art college, we did a foundation course. That’s when you try a bit of everything to help you figure out what you want to concentrate on: painting, sculpture, ceramics, printing, photography, and so on. It was a bit of a whirlwind, which was generally a good thing. If you realised you really didn’t like a subject, you didn’t have to stick it out for long.

One of those subjects was animation—a relatively recent addition to the roster. On the first day, the tutor gave everyone a pack of typing paper: 500 sheets of A4. We were told to use them to make a piece of animation. Put something on the first piece of paper. Take a picture. Now put something slightly different on the second piece of paper. Take a picture of that. Repeat another 498 times. At 24 frames a second, the result would be just over 20 seconds of animation. No computers, no mobile phones. Everything by hand. It was so tedious.

And I loved it. I ended up asking for more paper.

(Actually, this was another reason why I ended up dropping out. I really, really enjoyed animation but I wasn’t able to major in it—I could only take it as a minor.)

I remember getting totally absorbed in the production. It was the perfect mix of tedium and creativity. My mind was simultaneously occupied and wandering free.

Recently I’ve been re-experiencing that same feeling. This time, it’s not in the world of visuals, but of audio. I’m working on season two of the Clearleft podcast.

For both seasons and episodes, this is what the process looks like:

  1. Decide on topics. This will come from a mix of talking to Alex, discussing work with my colleagues, and gut feelings about what might be interesting.
  2. Gather material. This involves arranging interviews with people: sometimes co-workers, sometimes peers in the wider industry. I also trawl through the archives of talks from Clearleft conferences for relevant presentations.
  3. Assemble the material. This is where I’m chipping away at the marble of audio interviews to get at the nuggets within. I play around with the flow of themes, trying different juxtapositions and narrative structures.
  4. Tie everything together. I add my own voice to introduce the topic and segue from point to point.
  5. Release. I upload the audio, update the RSS feed, and publish the transcript.

Lots of podcasts (that I really enjoy) stop at step two: record a conversation and then release it verbatim. Job done.

Being a glutton for punishment, I wanted to do more of an amalgamation for each episode, weaving multiple conversations together.

Right now I’m in step three. That’s where I’ve found the same sweet spot that I had back in my art college days. It’s somewhat mindless work, snipping audio waveforms and adjusting volume levels. At the same time, there’s the creativity of putting those audio snippets into a logical order. I find myself getting into the zone, losing track of time. It’s the same kind of flow state you get from just the right level of coding or design work. Normally this kind of work lends itself to having some background music, but that’s not an option with podcast editing. I’ve got my headphones on, but my ears are busy.

I imagine that is what life is like for an audio engineer or producer.

When I first started the Clearleft podcast, I thought I would need to use GarageBand for this work, arranging multiple tracks on a timeline. Then I discovered Descript. It’s been an enormous time-saver. It’s like having GarageBand and a text editor merged into one. I can see the narrative flow as a text document, as well as looking at the accompanying waveforms.

Descript isn’t perfect. The transcription accuracy is good enough to allow me to search through my corpus of material, but it’s not accurate enough to publish as is. Still, it gives me some nice shortcuts. I can eliminate ums and ahs in one stroke, or shorten any gaps that are too long.

But even with all those conveniences, this is still time-consuming work. If I spend three or four hours with my head down sculpting some audio and I get anything close to five minutes worth of usable content, I consider it time well spent.

Sometimes when I’m knee-deep in a piece of audio, trimming and arranging it just so to make a sentence flow just right, there’s a voice in the back of my head that says, “You know that no one is ever going to notice any of this, don’t you?” I try to ignore that voice. I mean, I know the voice is right, but I still think it’s worth doing all this fine tuning. Even if nobody else knows, I’ll have the satisfaction of transforming the raw audio into something a bit more polished.

If you aren’t already subscribed to the RSS feed of the Clearleft podcast, I recommend adding it now. New episodes will start showing up …sometime soon.

Yes, I’m being a little vague on the exact dates. That’s because I’m still in the process of putting the episodes together.

So if you’ll excuse me, I need to put my headphones on and enter the zone.

Sass and clamp

CSS got some pretty nifty features recently. There’s the min() and max() functions. If you use them for, say, width you can use one rule where previously you would’ve needed to use two (a width declaration followed by either min-width or max-width). But they can also be applied to font-size! That’s very nifty—we’ve never had min-font-size or max-font-size properties.

There’s also the clamp() function. That allows you to set a minimum size, a default size, and a maximum size. Again, it can be used for lengths, like width, or for font-size.

Over on thesession.org, I’ve had some media queries in place for a while now that would increase the font-size for larger screens. It’s nothing crucial, just a nice-to-have so that on wide screens, the font is bumped up accordingly. I realised I could replace all those media queries with one clamp() statement, thanks to the vw (viewport width) unit:

font-size: clamp(1rem, 1.333vw, 1.5rem);

By default, the font-size is 1.333vw (1.333% of the viewport width), but it will never get smaller than 1rem and it will never get larger than 1.5rem.

That works, but there’s a bit of an issue with using raw vw units like that. If someone is on a wide screen and they try to adjust the font size, nothing will happen. The viewport width doesn’t change when you bump the font size up or down.

The solution is to mix in some kind of unit that does respond to the font size being bumped up or down (like, say, the rem unit). Handily, clamp() allows you to combine units, just like calc(). So I can do this:

font-size: clamp(1rem, 0.5rem + 0.666vw, 1.5rem);

The result is much the same as my previous rule, but now—thanks to the presence of that 0.5rem value—the font size responds to being adjusted by the user.

You could use a full 1rem in that default value:

font-size: clamp(1rem, 1rem + 0.333vw, 1.5rem);

…but if you do that, the minimum size (1rem) will never be reached—the default value will always be larger. So in effect it’s no different than saying:

font-size: min(1rem + 0.333vw, 1.5rem);

I mentioned this to Chris just the other day.

Anyway, I got the result I wanted. I wanted the font size to stay at the browser default size (usually 16 pixels) until the screen was larger than around 1200 pixels. From there, the font size gets gradually bigger, until it hits one and a half times the browser default (which would be 24 pixels if the default size started at 16). I decided to apply it to the :root element (which is html) using percentages:

:root {
  font-size: clamp(100%, 50% + 0.666vw, 150%);
}

(My thinking goes like this: if we take a screen width of 1200 pixels, then 1vw would be 12 pixels: 1200 divided by 100. So for a font size of 16 pixels, that would be 1.333vw. But because I’m combining it with half of the default font size—50% of 16 pixels = 8 pixels—I need to cut the vw value in half as well: 50% of 1.333vw = 0.666vw.)

So I’ve got the CSS rule I want. I dropped it into the top of my file and…

I got an error.

There was nothing wrong with my CSS. The problem was that I was dropping it into a Sass file (.scss).

Perhaps I am showing my age. Do people even use Sass any more? I hear that post-processors usurped Sass’s dominance (although no-one’s ever been able to explain to me why they’re different to pre-processors like Sass; they both process something you’ve written into something else). Or maybe everyone’s just writing their CSS in JS now. I hear that’s a thing.

The Session is a looooong-term project so I’m very hesitant to use any technology that won’t stand the test of time. When I added Sass into the mix, back in—I think—2012 or so, I wasn’t sure whether it was the right thing to do, from a long-term perspective. But it did offer some useful functionality so I went ahead and used it.

Now, eight years later, it was having a hard time dealing with the new clamp() function. Specifically, it didn’t like the values being calculated through the addition of multiple units. I think it was clashing with Sass’s in-built ability to add units together.

I started to ask myself whether I should still be using Sass. I looked at which features I was using…

Variables. Well, now we’ve got CSS custom properties, which are even more powerful than Sass variables because they can be updated in real time. Sass variables are like const. CSS custom properties are like let.

Mixins. These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() functions are handy though when it comes to colours.

Nesting. I’ve never been a fan. I know it can make the source files look tidier but I find it can sometimes obfuscate what your final selectors are going to look like. So this wasn’t something I was using much anyway.

Multiple files. Ah! This is the thing I would miss most. Having separate .scss files for separate interface elements is very handy!

But globbing a bunch of separate .scss files into one .css file isn’t really a Sass task. That’s what build tools are for. In fact, that’s what I was already doing with my JavaScript files; I write them as individual .js files that then get concatenated into one .js file using Grunt.

(Yes, this project uses Grunt. I told you I was showing my age. But, you know what? It works. Though seeing as I’m mostly using it for concatenation, I could probably replace it with a makefile. If I’m going to use old technology, I might as well go all the way.)
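
For what it’s worth, the concatenation part of a Gruntfile really is minimal. A sketch along these lines, using the grunt-contrib-concat plugin (the file paths are invented):

module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      js: {
        src: ['js/*.js'],      // individual source files
        dest: 'build/main.js'  // the single concatenated file
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.registerTask('default', ['concat']);
};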

I swapped out Sass variables for CSS custom properties, mixins for calc(), and removed what little nesting I was doing. Then I stripped the Sass parts out of my Grunt file and replaced them with some concatenation and minification tasks. All of this makes no difference to the actual website, but it means I’ve got one less dependency …and I can use clamp()!

Remember a little while back when I was making a dark mode for my site? I made this observation:

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop.

It feels like something similar has happened with tools like Sass. Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.

It’s like when we used to need something like jQuery to do DOM Scripting succinctly using CSS selectors. Then we got things like querySelector() in JavaScript so we no longer needed the trailblazer.

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

You could argue that this is what happened with Flash. It certainly happened with jQuery and Sass. I’m pretty sure we’ll see the same cycle play out with frameworks like React.

Overlay gap

I think a lot about Danielle’s talk at Patterns Day last year.

Around about the six minute mark she starts talking about gaps and overlaps.

Gaps are where hidden complexity lives. If we don’t have a category to cover it, in effect it becomes invisible. But that doesn’t mean it’s not there. Unidentified gaps cause inconsistency and confusion.

Overlaps occur when two separate categories encompass some of the same areas of responsibility. They cause conflict, duplication of effort, and unnecessary friction.

This is the bit I keep thinking about. It’s such an insightful lens to view things through. On just about any project, tensions are almost always due to either gaps (“I thought someone else was doing that”) or overlaps (“Oh, you’re doing that? I thought we were doing that”).

When I was talking to Gerry on his new podcast recently, we were trying to figure out why web performance is in such a woeful state. I mused that there may be a gap. Perhaps designers think it’s a technical problem and developers think it’s a design problem. I guess you could try to bridge this gap by having someone whose job is to focus entirely on performance. But I suspect the better—but harder—solution is to create a shared culture of performance, of the kind Lara wrote about in her book:

Performance is truly everyone’s responsibility. Anyone who affects the user experience of a site has a relationship to how it performs. While it’s possible for you to single-handedly build and maintain an incredibly fast experience, you’d be constantly fighting an uphill battle when other contributors touch the site and make changes, or as the Web continues to evolve.

I suspect there’s a similar ownership gap at play when it comes to the ubiquitous obtrusive overlays that are plastered on so many websites these days.

Kirill Grouchnikov recently published a gallery of screenshots showcasing the beauty of modern mobile websites:

There are two things common between the websites in these screenshots that I took yesterday.

  1. They are beautifully designed, with great typography, clear branding, all optimized for readability.
  2. I had to install Firefox, Adblock Plus and uBlock Origin, as well as manually select and remove additional elements such as subscription overlays.

The web can be beautiful. Except it’s not right now.

How is this dissonance possible? How can designers and developers who clearly care about the user experience be responsible for unleashing such user-hostile interfaces?

PM/Legal/Marketing made me do it

I get that. But surely the solution can’t be to shrug our shoulders, pass the buck, and say “not my job.” Somebody designed each one of those obtrusive overlays. Somebody coded up each one and pushed them into production.

It’s clear that this is a problem of communication and understanding, rather than a technical problem. As always. We like to talk about how hard and complex our technical work is, but frankly, it’s a lot easier to get a computer to do what you want than to convince a human. Not least because you also need to understand what that other human wants. As Danielle says:

Recognising the gaps and overlaps is only half the battle. If we apply tools to a people problem, we will only end up moving the problem somewhere else.

Some issues can be solved with better tools or better processes. In most of our workplaces, we tend to reach for tools and processes by default, because they feel easier to implement. But as often as not, it’s not a technology problem. It’s a people problem. And the solution actually involves communication skills, or effective dialogue.

So let’s say it is someone in the marketing department who is pushing to have an obtrusive newsletter sign-up form get shoved in the user’s face. Talk to them. Figure out what their goals are—what outcome they’re hoping to get to. If they don’t seem to understand the user-experience implications, talk to them about that. But it needs to be a two-way conversation. You need to understand what they need before you start telling them what you want.

I realise that makes it sound patronisingly simple, and I know that in actuality it’s a Sisyphean task. It may be that genuine understanding between people is the wickedest of design problems. But even if this problem seems insurmountable, at least you’d be tackling the right problem.

Because the web can’t survive like this.

Design systems roundup

When I started writing a post about architects, gardeners, and design systems, it was going to be a quick follow-up to my post about web standards, dictionaries, and design systems. I had spotted an interesting metaphor in one of Frank’s posts, and I thought it was worth jotting it down.

But after making that connection, I kept writing. I wanted to point out the fetishism we have for creation over curation; building over maintenance.

Then the post took a bit of a dark turn. I wrote about how the most commonly cited reasons for creating a design system—efficiency and consistency—are the same processes that have led to automation and dehumanisation in the past.

That’s where I left things. Others have picked up the baton.

Dave wrote a post called The Web is Industrialized and I helped industrialize it. What I said resonated with him:

This kills me, but it’s true. We’ve industrialized design and are relegated to squeezing efficiencies out of it through our design systems. All CSS changes must now have a business value and user story ticket attached to it. We operate more like Taylor and his stopwatch and Gantt and his charts, maximizing effort and impact rather than focusing on the human aspects of product development.

But he also points out the many benefits of systematising:

At the same time, I have seen first hand how design systems can yield improvements in accessibility, performance, and shared knowledge across a willing team. I’ve seen them illuminate problems in design and code. I’ve seen them speed up design and development allowing teams to build, share, and validate prototypes or A/B tests before undergoing costly guesswork in production. There’s value in these tools, these processes.

Emphasis mine. I think that’s a key phrase: “a willing team.”

Ethan tackles this in his post The design systems we swim in:

A design system that optimizes for consistency relies on compliance: specifically, the people using the system have to comply with the system’s rules, in order to deliver on that promised consistency. And this is why that, as a way of doing something, a design system can be pretty dehumanizing.

But a design system need not be a constraining straitjacket—a means of enforcing consistency by keeping creators from colouring outside the lines. Used well, a design system can be a tool to give creators more freedom:

Does the system you work with allow you to control the process of your work, to make situational decisions? Or is it simply a set of rules you have to follow?

This is key. A design system is the product of an organisation’s culture. That’s something that Brad digs into in his post, Design Systems, Agile, and Industrialization:

I definitely share Jeremy’s concern, but also think it’s important to stress that this isn’t an intrinsic issue with design systems, but rather the organizational culture that exists or gets built up around the design system. There’s a big difference between having smart, reusable patterns at your disposal and creating a dictatorial culture designed to enforce conformity and swat down anyone coloring outside the lines.

Brad makes a very apt comparison with Agile:

Not Agile the idea, but the actual Agile reality so many have to suffer through.

Agile can be a liberating empowering process, when done well. But all too often it’s a quagmire of requirements, burn rates, and story points. We need to make sure that design systems don’t suffer the same fate.

Jeremy’s thoughts on industrialization definitely struck a nerve. Sure, design systems have the ability to dehumanize and that’s something to actively watch out for. But I’d also say to pay close attention to the processes and organizational culture we take part in and contribute to.

Matthew Ström weighed in with a beautifully-written piece called Breaking looms. He provides historical context to the question of automation by relaying the story of the Luddite uprising. Automation may indeed be inevitable, according to his post, but he also provides advice on how to approach design systems today:

We can create ethical systems based in detailed user research. We can insist on environmental impact statements, diversity and inclusion initiatives, and human rights reports. We can write design principles, document dark patterns, and educate our colleagues about accessibility.

Finally, the ouroboros was complete when Frank wrote down his thoughts in a post called Who cares?. For him, the issue of maintenance and care is crucial:

Care applies to the built environment, and especially to digital technology, as social media becomes the weather and the tools we create determine the expectations of work to be done and the economic value of the people who use those tools. A well-made design system created for the right reasons is reparative. One created for the wrong reasons becomes a weapon for displacement. Tools are always beholden to values. This is well-trodden territory.

Well-trodden territory indeed. Back in 2015, Travis Gertz wrote about Design Machines:

Designing better systems and treating our content with respect are two wonderful ideals to strive for, but they can’t happen without institutional change. If we want to design with more expression and variation, we need to change how we work together, build design teams, and forge our tools.

Also on the topic of automation, in 2018 Cameron wrote about Design systems and technological disruption:

Design systems are certainly a new way of thinking about product development, and introduce a different set of tools to the design process, but design systems are not going to lessen the need for designers. They will instead increase the number of products that can be created, and hence increase the demand for designers.

And in 2019, Kaelig wrote:

In order to be fulfilled at work, Marx wrote that workers need “to see themselves in the objects they have created”.

When “improving productivity”, design systems tooling must be mindful of not turning their users’ craft into commodities, alienating them, like cogs in a machine.

All of this is reminding me of Kranzberg’s first law:

Technology is neither good nor bad; nor is it neutral.

I worry that sometimes the messaging around design systems paints them as an inherently positive thing. But design systems won’t fix your problems:

Just stay away from folks who try to convince you that having a design system alone will solve something.

It won’t.

It’s just the beginning.

At the same time, a design system need not be the gateway drug to some kind of post-singularity future where our jobs have been automated away.

As always, it depends.

Remember what Frank said:

A well-made design system created for the right reasons is reparative. One created for the wrong reasons becomes a weapon for displacement.

The reasons for creating a design system matter. Those reasons will probably reflect the values of the company creating the system. At the level of reasons and values, we’ve gone beyond the bounds of the hyperobject of design systems. We’re dealing in the area of design ops—the whys of systemising design.

This is why I’m so wary of selling the benefits of design systems in terms of consistency and efficiency. Those are obviously tempting money-saving benefits, but followed to their conclusion, they lead down the dark path of enforced compliance and eventually, automation.

But if the reason you create a design system is to empower people to be more creative, then say that loud and proud! I know that creativity, autonomy and empowerment is a tougher package to sell than consistency and efficiency, but I think it’s a battle worth fighting.

Design systems are neither good nor bad (nor are they neutral).

Addendum: I’d just like to say how invigorating it’s been to read the responses from Dave, Ethan, Brad, Matthew, and Frank …all of them writing on their own websites. Rumours of the demise of blogging may have been greatly exaggerated.

Web standards, dictionaries, and design systems

Years ago, the world of web standards was split. Two groups—the W3C and the WHATWG—were working on the next iteration of HTML. They had different ideas about the nature of standardisation.

Broadly speaking, the W3C followed a specification-first approach. Figure out what should be implemented, and write the spec first. From this perspective, specs can be seen as blueprints for browsers to work from.

The WHATWG, by contrast, were implementation led. The way they saw it, there was no point specifying something if browsers weren’t going to implement it. Instead, specs are there to document existing behaviour in browsers.

I’m over-generalising somewhat in my descriptions there, but the point is that there was an ideological difference of opinion around what standards bodies should do.

This always reminded me of a similar ideological conflict when it comes to language usage.

Language prescriptivists attempt to define rules about what’s right or wrong in a language. Rules like “never end a sentence with a preposition.” Prescriptivists are generally fighting a losing battle and spend most of their time bemoaning the decline of their language because people aren’t following the rules.

Language descriptivists work the exact opposite way. They see their job as documenting existing language usage instead of defining it. Lexicographers—like Merriam-Webster or the Oxford English Dictionary—receive complaints from angry prescriptivists when dictionaries document usage like “literally” meaning “figuratively”.

Dictionaries are descriptive, not prescriptive.

I’ve seen the prescriptive/descriptive divide somewhere else too. I’ve seen it in the world of design systems.

Jordan Moore talks about intentional and emergent design systems:

There appears to be two competing approaches in designing design systems.

An intentional design system. The flavour and framework may vary, but the approach generally consists of: design system first → design/build solutions.

An emergent design system. This approach is much closer to the user needs end of the scale by beginning with creative solutions before deriving patterns and systems (i.e the system emerges from real, coded scenarios).

An intentional design system is prescriptive. An emergent design system is descriptive.

I think we can learn from the worlds of web standards and dictionaries here. A prescriptive approach might give you a beautiful design system, but if it doesn’t reflect the actual product, it’s fiction. A descriptive approach might give a design system with imperfections and annoying flaws, but at least it will be accurate.

I think it’s more important for a design system to be accurate than beautiful.

As Matthew Ström says, you should start with the design system you already have:

Instead of drawing a whole new set of components, start with the components you already have in production. Document them meticulously. Create a single source of truth for design, warts and all.

The Technical Side of Design Systems by Brad Frost

Day two of An Event Apart San Francisco is finishing with a talk from Brad on design systems (so hot right now!):

You can have a killer style guide website, a great-looking Sketch library, and robust documentation, but if your design system isn’t actually powering real software products, all that effort is for naught. At the heart of a successful design system is a collection of sturdy, robust front-end components that powers other applications’ user interfaces. In this talk, Brad will cover all that’s involved in establishing a technical architecture for your design system. He’ll discuss front-end workshop environments, CSS architecture, implementing design tokens, popular libraries like React and Vue.js, deploying design systems, managing updates, and more. You’ll come away knowing how to establish a rock-solid technical foundation for your design system.

I will attempt to liveblog the Frostmeister…

“Design system” is an unfortunate name …like “athlete’s foot.” You say it to someone and they think they know what you mean, but nothing could be further from the truth.

As Mina said:

A design system is a set of rules enforced by culture, process and tooling that govern how your organization creates products.

A design system is the story of how an organisation gets things done.

When Brad talks to companies, he asks “Have you got a design system?” They invariably say they do …and then point to a Sketch library. When the focus goes on the design side of the process, the production side can suffer. There’s a gap between the comp and the live site. The heart and soul of a design system is a code library of reusable UI components.

Brad’s going to talk through the life cycle of a project.

Sell

He begins with selling in a design system. That can start with an interface inventory. This surfaces visual differences. But even if you have, say, buttons that look the same, the underlying code might not be consistent. Each one of those buttons represents time and effort. A design system gives you a number of technical benefits:

  • Reduce technical debt—less frontend spaghetti code.
  • Faster production—less time coding common UI components and more time building real features.
  • Higher-quality production—bake in and enforce best practices.
  • Reduce QA efforts—centralise some QA tasks.
  • Potentially adopt new technologies faster—a design system can help make additional frameworks more manageable.
  • Useful reference—an essential resource hub for development best practices.
  • Future-friendly foundation—modify, extend, and improve over time.

Once you’ve explained the benefits, it’s time to kick off.

Kick off

Brad asks “What’s yer tech stack?” There are often a lot of tech stacks. And you know what? Users don’t care. What they see is one brand. That’s the promise of a design system: a unified interface.

How do you make a design system deal with all the different tech stacks? You don’t (at least, not yet). Start with a high priority project. Use that as a pilot project for the design system. Dan talks about these projects as being like television pilots that could blossom into a full season.

Plan

Where to build the design system? The tech stack under the surface is often an order of magnitude greater than the UI code—think of node modules, for example. That’s why Brad advocates locking off that area and focusing on what he calls a frontend workshop environment. Think of the components as interactive comps. There are many tools for this frontend workshop environment: Pattern Lab, Storybook, Fractal, Basalt.

How are you going to code this? Brad gets frontend teams in a room together and they fight. Have you noticed that developers have opinions about things? Brad asks questions. What are your design principles? Do you use a CSS methodology? What tools do you use? Spaces or tabs? Then Brad gets them to create one component using the answers to those questions.

Guidelines are great but you need to enforce them. There are lots of tools to automate coding style.

Then there’s CSS architecture. Apparently we write our styles in React now. Do you really want to tie your CSS to one environment like that?

You know what’s really nice? A good ol’ sturdy cacheable CSS file. It can come in like a fairy applying all the right styles regardless of tech stack.

Design and build

Brad likes to break things down using his atomic design vocabulary. He echoes what Mina said earlier:

Embrace the snowflakes.

The idea of a design system is not to build 100% of your UI entirely from components in the code library. The majority, sure. But it’s unrealistic to expect everything to come from the design system.

When Brad puts pages together, he pulls in components from the code library but he also pulls in one-off snowflake components where needed.

The design system informs our product design. Our product design informs the design system.

—Jina

Brad has seen graveyards of design systems. But if you make a virtuous circle between the live code and the design system, the design system has a much better chance of not just surviving, but thriving.

So you go through those pilot projects, each one feeding more and more into the design system. Lather, rinse, repeat. The first one will be time consuming, but each subsequent project gets quicker and quicker as you start to get the return on investment. Velocity increases over time.

It’s like tools for a home improvement project. The first thing you do is look at your current toolkit. If you don’t have the tool you need, you invest in buying that new tool. Now that tool is part of your toolkit. Next time you need that tool, you don’t have to go out and buy one. Your toolkit grows over time.

The design system code must be intuitive for developers using it. This gets into the whole world of API design. It’s really important to get this right—naming things consistently and having predictable behaviour.

Mina talked about loose vs. strict design systems. Open vs. locked down. Make your components composable so they can adapt to future requirements.

You can bake best practices into your design system. You can make accessibility a requirement in the code.
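
As a minimal sketch of what that could look like, assuming a hand-rolled component library (the function name and class name here are hypothetical), an accessible label can be made a hard requirement of the component’s API:

    // Hypothetical design system component: an icon button whose API refuses
    // to produce an unlabelled control for assistive technology.
    export function createIconButton({ icon, label, onClick }) {
      if (!label) {
        throw new Error('createIconButton: a text label is required');
      }
      const button = document.createElement('button');
      button.type = 'button';
      button.className = 'ds-icon-button';
      button.setAttribute('aria-label', label);
      button.innerHTML = icon; // assumes the icon is trusted SVG markup
      if (onClick) {
        button.addEventListener('click', onClick);
      }
      return button;
    }

Consumers can compose the result however they like, but they can’t skip the accessible name.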

Launch

What does it mean to “launch” a design system?

A design system isn’t a project with an end, it’s the origin story of a living and evolving product that’ll serve other products.

—Nathan Curtis

There’s a spectrum of integration—how integrated the design system is with the final output. The levels go from:

  1. Least integrated: static.
  2. Front-end reference code.
  3. Most integrated: consumable components.

Chris Coyier in The Great Divide talked about how wide the spectrum of front-end development is. Brad, for example, is very much at the front of the front end. Consumable UI components can create a bridge between the back of the front end and the front of the front end.

Consumable UI components need to be bundled, packaged, and published.

Maintain

Now we’ve entered a new mental space. We’ve gone from “Let’s build a website” to “Let’s maintain a product which other products use as a dependency.” You need to start thinking about things like semantic versioning. A version number is a promise.

A 1.0.0 designation comes with commitment. Freewheeling days of unstable early foundations are behind you.

—Nathan Curtis
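
To make that promise concrete, here’s a minimal sketch (the helper function is hypothetical, purely for illustration) of the decision a consuming product gets to make when a new version of the design system appears:

    // Under semantic versioning, breaking changes require a new MAJOR number,
    // so a consuming product can tell at a glance whether an update needs migration work.
    function isSafeUpdate(current, next) {
      const [currentMajor] = current.split('.').map(Number);
      const [nextMajor] = next.split('.').map(Number);
      return nextMajor === currentMajor;
    }

    console.log(isSafeUpdate('1.4.2', '1.5.0')); // true: new features, nothing existing breaks
    console.log(isSafeUpdate('1.4.2', '2.0.0')); // false: breaking changes, plan for migration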

What do you do when a new tech stack comes along? How does your design system serve the new hotness? It gets worse: you get products that aren’t even web based—iOS, Android, etc.

That’s where design tokens come in. You can define your design language in a platform-agnostic way.
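
As a minimal sketch, assuming a hand-rolled tokens module (the names and values here are hypothetical), the tokens can live in one platform-agnostic data structure, with platform-specific formats generated from it:

    // Hypothetical platform-agnostic design tokens.
    export const tokens = {
      color: {
        brand: '#bb1122',
        text: '#111111',
        background: '#ffffff',
      },
      spacing: {
        small: '0.5rem',
        medium: '1rem',
        large: '2rem',
      },
    };

    // One possible transform: emit the colour tokens as CSS custom properties.
    // Other transforms could target iOS, Android, or framework theme objects.
    const customProperties = Object.entries(tokens.color)
      .map(([name, value]) => `  --color-${name}: ${value};`)
      .join('\n');

    console.log(`:root {\n${customProperties}\n}`);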

Summary

This is hard.

  • Your design system must live in the technologies your products use.
  • Look at your product roadmaps for design system pilot project opportunities.
  • Establish code conventions and use tooling and process to enforce them.
  • Build your design system and pilot project UI screens in a frontend workshop environment.
  • Bake best practices into reusable components & make them as rigid or flexible as you need them to be.
  • Use semantic versioning to manage ongoing design system product work.
  • Use design tokens to feed common design properties into different platforms.

You won’t do it all at once. That’s okay. Baby steps.

Making Research Count by Cyd Harrell

The brilliant Cyd Harrell is opening up day two of An Event Apart in Chicago. I’m going to attempt to liveblog her talk on making research count…

Research gets done …and then sits in a report, gathering dust.

Research matters. But how do we make it count? We need allies. Maybe we need more money. Perhaps we need more participation from people not on the product team.

If you’re doing real research on a schedule, sharing it on a regular basis, making people’s eyes light up …then you’ve won!

Research counts when it answers questions that people care about. But you probably don’t want to directly ask “Hey, what questions do you want answered?”

Research can explain oddities in analytics, weird feedback from customers, unexpected uses of products, and strange hunches (not just your own).

Curious people with power are the most useful ones to influence. Not just hierarchical power. Engineers often have a lot of power. So ask, “Who is the most curious engineer, and how can I drag them out on a research session with me?”

At 18F, Cyd found that a lot of the nodes of power were people in the mid-level of the organisation who had been there a while—they know a lot of people up and down the chain. If you can get one of those people excited about research, they can spread it.

Open up your practice. Demystify it. Put as much effort into communicating as into practicing. Create opportunities for people to ask questions and learn.

You can think about communities of practice in the obvious way: people who do similar things to us, and other people who make design decisions. But really, everyone in the organisation is affected by design decisions.

Cyd likes to do office hours. People can come by and ask questions. You could open a Slack channel. You can run brown bag lunches to train people in basic user research techniques. In more conventional organisations, a newsletter is a surprisingly effective tool for sharing the latest findings from research. And use your walls to show work in progress.

Research counts when people can see it for themselves—not just when it’s reported from afar. Ask yourself: who in your organisation is disconnected from their user? It’s difficult for people to maintain their motivation in that position.

When someone has been in the field with you, the data doesn’t have to be explained.

Whoever’s curious. Whoever’s disconnected. Invite them along. Show them what you’re doing.

Think about the qualities of a good invitation (for a party, say). Make the rules clear. Make sure they want to come back. Design the experience of observing research. Make sure everyone has tools. Give everyone a responsibility. Be like Willy Wonka—he gave clear rules to the invited guests. And sure, things didn’t go great when people broke the rules, but at the end, everyone still went home with the truckload of chocolate they were promised.

People who get to ask a question buy in to the results. Those people feel a sense of ownership for the research.

Research counts when methods fit the question. Think about what the right question is and how you might go about answering it.

You can mix your methods. Interviews. Diary studies. Card sorting. Shadowing. You can ground the user research in competitor analysis.

Back in 2008, Cyd was contacted by a company who wanted to know: how do people really use phones in their cars? Cyd’s team would ride along with people, interviewing them, observing them, taking pictures and video.

Later at the federal government, Cyd was asked: what are the best practices for government digital transformation? How to answer that? It’s so broad! Interviews? Who knows what?

They refined the question: what makes modern digital practices stick within a government entity? They looked at what worked when companies were going online, to see if there was anything that government could learn from. Then they created a set of really focused interview questions. What does digital transformation mean? How do you know when you’re done? What are the biggest obstacles to this work? How do you make changes last?

They used a technique called cluster recruiting to figure out who else to talk to (by asking participants who else they should be talking to).

There is no one research method that will always work for you. Cutting the right corners at the right time lets you be fast and cheap. Cyd’s bare-bones research kit costs about $20: a notebook, a pen, a consent form, and the price of a cup of coffee. She also created a quick score sheet for when she’s not in a position to have research transcribed.

Always label your assumptions before beginning your research. Maybe you’re assuming that something is a frustrating experience that needs fixing, but it might emerge that it doesn’t need fixing—great! You’ve just saved a whole lotta money.

Research counts when researchers tell the story well. Synthesis works best as a conversational practice. It’s hard to do by yourself. You start telling stories when you come back from the field (sometimes it starts when you’re still out in the field, talking about the most interesting observations).

Miller’s Law is a great conceptual framework:

To understand what another person is saying, you must assume that it is true and try to imagine what it could be true of.

You’re probably familiar with the “five whys”. What about the “five ways”? If people talk about something five different ways, it’s virtually certain that one of them will be an apt metaphor. So ask “Can you say that in a different way?” five times.

Spend as much time on communicating outcomes as you did on executing the work.

After research, play back how many people you spoke to, the most valuable insight you gained, the themes that are emerging. Describe the question you wanted to answer, what answers you got, and what you’re going to do next. If you’re in an organisation that values memos, write a memo. Or you could make a video. Or you could write directly into backlog tickets. And don’t forget the wall work! GDS have wonderfully full walls in their research department.

In the end, the best tool for research is an illuminating story.

Cyd was doing research at the Bakersfield courthouse. The hypothesis was that a lot of people weren’t engaging with technology in the court system. She approached a man named Manuel who was positively quaking. He was going through a custody battle. He said, “I don’t know technology but it doesn’t scare me. I’m shaking because this paperwork just gets to me—it’s terrifying.” He said he would gladly pay for someone to help him with the paperwork. Cyd wrote a report on this story. Months later, they heard people in the organisation asking questions like “How would this help Manuel?”

Sometimes you do have to fight (nicely).

People will push back on the time spent on research—they’ll say it doesn’t fit the sprint plan. You can have a three day research plan. Day 1: write scripts. Day 2: go to the users and talk to them. Day 3: play it back. People on a project spend more time than that in Slack.

People will say you can’t talk to the customers. In that situation, you could talk to people who are in the same sector as your company’s customers.

People will question the return on investment for research. Do it cheaply and show the very low costs. Then people stop talking about the money and start talking about the results.

People will claim that qualitative user research is not statistically significant. That’s true. But research is something else. It answers different questions.

People will question whether a senior person needs to be involved. It is not fair to ask the intern to do all the work involved in research.

People will say you can’t always do research. But Cyd firmly believes that there’s always room for some research.

  • Make allies in customer research.
  • Find the most curious engineer on the team, go to lunch with them, and feed them the most interesting research insights.
  • Record a pain point and send a video to executives.
  • If there’s really no budget, maybe you can get away with not paying incentives, but perhaps you can provide some other swag instead.

One of the best things you can do is be there, non-judgementally, making friends. It takes time, but it works. Research is like a dandelion in flight: the more it gets out and about, taking root, the more that research counts.

Toast

Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable.

First off, this all kicked off with the announcement of “intent to implement”. That makes it sound like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing and this is something that will hopefully be addressed.

Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. I’m assuming that, even if the experiment proves successful, it’s not a foregone conclusion that the final element will be named toast (minus the sexually-transmitted-disease prefix). If this turns out to be a useful feature, there will surely be a discussion between implementers about the naming of the finished element.

This is the ideal candidate for a web component. It makes total sense to create a custom element along the lines of std-toast. At first I was confused about why this was happening inside of a browser instead of first being created as a standalone web component, but it turns out that there’s been a fair bit of research looking at existing implementations in libraries and web components. So this actually looks like a good example of paving an existing cowpath.
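
To illustrate, here’s a rough sketch of a standalone web component for notifications. It’s not the actual std-toast proposal; the tag name and attribute are hypothetical:

    // A self-dismissing notification as a plain custom element.
    class MyToast extends HTMLElement {
      connectedCallback() {
        // Announce the message politely to assistive technology.
        this.setAttribute('role', 'status');
        this.setAttribute('aria-live', 'polite');
        // Dismiss automatically after the given duration (default: three seconds).
        const duration = Number(this.getAttribute('duration')) || 3000;
        setTimeout(() => { this.hidden = true; }, duration);
      }
    }
    customElements.define('my-toast', MyToast);

    // Usage in markup: <my-toast duration="5000">Change saved.</my-toast>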

But it didn’t come across that way. The timing of announcements felt like this was something that was happening without prior discussion. Terence Eden writes:

It feels like a Google-designed, Google-approved, Google-benefiting idea which has been dumped onto the Web without any consideration for others.

I know that isn’t the case. And I know how many dedicated people have worked hard on this proposal.

Adrian Roselli also remarks on the optics of this situation:

To be clear, while I think there is value in minting a native HTML element to fill a defined gap, I am wary of the approach Google has taken. A repo from a new-to-the-industry Googler getting a lot of promotion from Googlers, with Googlers on social media doing damage control for the blowback, WHATWG Googlers handling questions on the repo, and Google AMP strongly supporting it (to reduce its own footprint), all add up to raise alarm bells with those who advocated for a community-driven, needs-based, accessible web.

Dave Cramer made a similar point:

But my concern wasn’t so much about the nature of the new elements, but of how we learned about them and what that says about how web standardization works.

So there’s a general feeling (outside of Google) that there’s something screwy here about the order of events. A lot of discussion and research seems to have happened in isolation before announcing the intent to implement:

It does not appear that any discussions happened with other browser vendors or standards bodies before the intent to implement.

Why is this a problem? Google is seeking feedback on a solution, not on how to solve the problem.

Going back to my early confusion about putting a web component directly into a browser, this question on Discourse echoes my initial reaction:

Why not release std-toast (and other elements in development) as libraries first?

It turns out that std-toast and other in-browser web components are part of an idea called layered APIs. In theory this is an initiative in the spirit of the extensible web manifesto.

The extensible web movement focused on exposing low-level APIs to developers: the fetch API, the cache API, custom elements, Houdini, and all of those other building blocks. Layered APIs, on the other hand, focuses on high-level features …like, say, an HTML element for displaying “toast” notifications.

Layered APIs is an interesting idea, but I’m worried that it could be used to circumvent discussion between implementers. It’s a route to unilaterally creating new browser features first and standardising after the fact. I know that’s how many features already end up in browsers, but I think that the sooner that authors, implementers, and standards bodies get a say, the better.

I certainly don’t think this is a good look for Google given the debacle of AMP’s “my way or the highway” rollout. I know that’s a completely different team, but the external perception of Google amongst developers has been damaged by the AMP project’s anti-competitive abuse of Google’s power in search.

Right now, a lot of people are jumpy about Microsoft’s move to Chromium for Edge. My friends at Microsoft have been reassuring me that while it’s always a shame to reduce browser engine diversity, this could actually be a good thing for the standards process: Microsoft could theoretically keep Google in check when it comes to what features are introduced to the Chromium engine.

But that only works if there is some kind of standards process. Layered APIs in general—and std-toast in particular—hint at a future where a single browser vendor can plough ahead on their own. I sincerely hope that’s a misreading of the situation and that this has all been an exercise in miscommunication and misunderstanding.

Like Dave Cramer says:

I hear a lot about how anyone can contribute to the web platform. We’ve all heard the preaching about incubation, the Extensible Web, working in public, paving the cowpaths, and so on. But to an outside observer this feels like Google making all the decisions, in private, and then asking for public comment after the feature has been designed.