{ "version": "https://jsonfeed.org/version/1", "title": "Adactio: Articles", "description": "Longer thoughts and ramblings from Jeremy Keith, an Irish web developer living and working in Brighton, England.", "icon": "https://adactio.com/images/photo-600.jpg", "favicon": "https://adactio.com/favicon-96x96.png", "home_page_url": "https://adactio.com/articles/", "feed_url": "https://adactio.com/articles/feed.json", "author": { "name": "Jeremy Keith" }, "items": [ { "id": "21110", "url": "https://adactio.com/article/21110", "title": "Declarative Design", "summary": "A presentation given at the final An Event Apart in San Francisco, as well as at Pixel Pioneers in Bristol, Web Summer Camp in Croatia, and Wey Wey Web in Spain.", "date_published": "2024-05-07 11:30:38", "tags": [ "declarative", "design", "frontend", "development", "systems", "mindset", "approach", "worldwideweb", "culture", "programming", "thinking", "assumptions", "fluid", "type", "layout" ], "content_html": "

I want to talk to you today about declarative design. But before I get to that I want to talk to you about music.

I want to talk about two different approaches to musical composition. I want to compare and contrast.

Classical

On the one hand, I’ll take this man as an example. This is Wolfgang Amadeus Mozart, a classical composer. He’s got this amazing music in his head and he can get that down onto sheet music which is very specific, very accurate. You can note down what notes to play, how long to play each note for each instrument, even some instructions on how to play the music.

So very accurate, very specific. That’s one approach to musical composition.

Jazz

The other approach is very different and I’m going to use this man as an example. This is Miles Davis, from the jazz side of things. And his approach to composition was very different to Mozart’s.

Take something like his classic album, Kind Of Blue. He comes into the studio with sketches, with maybe some scales and no rehearsal for the musicians. And what resulted was absolutely amazing!

Like “So What”, the opening track. That’s where the instructions were literally “Let’s do 16 bars in D Dorian, then eight bars in E flat Dorian, then back to D Dorian for eight bars.” That’s as much structure as there is. And what comes out is absolutely amazing.

So these are two different approaches to musical composition: the classical and the jazz approach. Very different styles.

I think there’s a corresponding kind of split that you can see in the world of programming. You can divide programming into two different categories as well.

Imperative programming

On the one hand you’ve got imperative programming. Most programming languages are imperative programming languages. This is where you describe step-by-step what the computer should do. You give it clear instructions. Very specific. This makes it very powerful. It makes it general purpose. Maybe hard to learn because there’s a lot you have to understand before you can write those instructions.

Classic pseudo-code for an imperative language would be something like this. These step-by-step instructions:
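
Say, something like this in JavaScript-flavoured pseudo-code (the function and property names here are mine, purely for illustration):

function getItemsInStock(items) {
  const results = [];
  for (let i = 0; i < items.length; i++) {
    if (items[i].inStock) {
      results.push(items[i]);
    } else {
      continue; // nothing to do with items that aren’t in stock
    }
  }
  return results;
}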

You’ve got arrays and loops and ifs and else’s and returns: all the classic bits that go into general purpose imperative programming languages.

Declarative programming

The other category of programming language is declarative programming. This is where you don’t give step-by-step instructions to the computer. Instead you describe the outcome you want and you leave the implementation details to the language. You don’t describe the specific steps.

These are often domain-specific. They’re usually not general purpose languages and they’re probably less powerful than imperative languages. But on the other hand they may be easier to learn because you don’t have to get into all the nitty-gritty that you do with an imperative programming language.

A classic example of a declarative programming language would be something like Structured Query Language or SQL. You describe what you want. You leave the details to the database. So you write something practically in English here:

SELECT items FROM table WHERE condition IS TRUE

Now it might be that under the hood the programming language is going to create an array of items, loop through each one, check if the condition is true, and return the result. But you don’t need to care about that. You don’t have to write those specific instructions. You describe the outcome you want.

Those are the two categories of programming language: imperative and declarative.

With that in mind let’s turn our attention to the medium where I work and I’m sure a lot of you do as well, which is the World Wide Web.

The World Wide Web

Let’s go back to the birth of the World Wide Web, which is thanks to this gentleman. This is Sir Tim Berners-Lee working at CERN in the 1980s. He’s trying to solve the problem of just how much information there is, and how to manage it. So he writes this memo, this proposal. Information Management: A Proposal.

Terrible title, riddled with typos, incomprehensible diagrams, and yet his supervisor Mike Sendall must have seen some promise in it because he scrawled across the top:

vague but exciting.

This was March 1989. Tim Berners-Lee gets the go-ahead to do that information management project, which turns into the World Wide Web.

Now Tim Berners-Lee had some ideas about how to approach programming in general but also information management. He’s got design principles.

The principle of least power

For example, one of the principles he wants to adhere to is the principle of least power. The principle of least power states that:

for any given purpose, choose the least powerful language

…which sounds really counterintuitive. Why on Earth would I choose a less powerful language to accomplish what I want to do?

But this goes back to the declarative/imperative split, because the less powerful language may also be easier to learn, maybe more robust, less fragile.

HTML

He certainly puts that into practice when he creates HTML which is very much a declarative language. The classic case of a declarative language. It’s domain-specific. It’s for the web only.

Interestingly it’s fault-tolerant. By that I mean if you write some HTML and you make a mistake, or you type out an element that doesn’t even exist, the browser doesn’t throw an error. It doesn’t refuse to render any of the HTML that comes after that mistake. It just ignores what it doesn’t understand and carries on.
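
For example (the made-up element is mine):

<p>This paragraph renders as expected.</p>
<marklar>This element doesn’t exist, but the browser still renders its text.</marklar>
<p>And this paragraph renders as expected too.</p>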

That actually turns out to be very powerful and I think maybe it’s only possible because HTML is a declarative language.

CSS

A couple of years later we get the next language on the web which is CSS. This is also a declarative language. It is also fault tolerant. If you’re writing CSS and you give the browser something it doesn’t understand it doesn’t choke on it. It just ignores it and carries on processing the style sheet. Again that’s very powerful.
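
To illustrate (the made-up declaration is mine):

p {
  color: teal;
  made-up-property: 2vibes; /* not CSS: the browser ignores this line */
  font-weight: bold; /* …and carries on with this one */
}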

But you might be thinking, “Hang on— CSS? Declarative? No no no, surely when I’m writing CSS I’m telling the computer exactly what to do? I’m telling the computer exactly how to style things.”

No. You’re not. It’s worth remembering that with CSS you’re not telling the browser to do anything; you’re making suggestions. Eric Meyer once said that every line of CSS is a suggestion to the browser. I think that’s worth bearing in mind.

Cascading HTML Style Sheets

Now CSS did not come from Tim Berners-Lee. The original proposal came from Håkon Wium Lie in an email in 1994. The original proposal was called Cascading HTML Style Sheets and he puts forward some ideas for how this could look.

h1.font.size = 24pt 100%
h2.font.size = 20pt 40%

This isn’t what we ended up with. You can see the syntax isn’t like the CSS of today but you can kind of make sense of it. You can kind of understand what’s going on. I can see this is setting the font size of all the H1s to 24 points and all the H2s to 20 points.

But hang on. What’s going on here with these percentages? 100? 40?

Well, what the percentages indicate is this early idea in CSS called influence, where you as the author would basically specify how much you care about this particular style. Like 100: I really want the H1s to be 24 points. But 40: I don’t care so much.

Influence

The idea was that there were these competing concerns. There’s you, the author of the style sheet, the person making the website. There’s the browser itself, which will have opinions in the user agent style sheets about how things should be styled. And really importantly, the user. The user should also have a say.

And this was the default in early browsers. You would have user style sheets—not user agent style sheets—user style sheets so the user could specify how they wanted things.

The idea with influence was that the browser would somehow hand-wavingly figure out exactly the right number to come up with. It obviously didn’t work. And we don’t have user style sheets anymore in any of the major browsers, which is a real shame. You can install browser extensions to do the same thing, so it’s a bit of a power-user feature, although if anyone’s played around with the Arc browser, they’ve kind of got a similar thing. They call them boosts but it’s basically like user style sheets.

Preference queries

I do see almost a little bit of a resurgence of this idea of influence, though, coming back into CSS. I see it in Media Queries Level 5 where we have the so-called preference queries, where now the user can at least specify their preference, usually at the operating system level. Like: I prefer reduced motion. I prefer reduced data. I prefer a dark color scheme.
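
Honouring one of those preferences in CSS looks something like this (a minimal sketch; the selectors and values are mine):

/* only animate for people who haven’t asked for reduced motion */
@media (prefers-reduced-motion: no-preference) {
  .toggle {
    transition: transform 0.3s ease-out;
  }
}

/* respect a preference for a dark colour scheme */
@media (prefers-color-scheme: dark) {
  body {
    background-color: #222;
    color: #eee;
  }
}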

Now it’s not quite the same as influence because it’s up to us as developers to honor those preferences. But I like it because it’s again getting back to not thinking of CSS as giving precise instructions; it turns it into more of a dialogue with the user. Instead of the designer dictating their wishes upon all the users, it’s more like a conversation.

“What do you want? What do you care about?”

“Well, here’s what I care about. Let’s come to some kind of agreement.”

I like that. I think that’s good. I want to see more of that kind of dialogue happen on the web rather than just us dictating to users.

JavaScript

There’s one final language on the web. That’s JavaScript. That came along a few years later.

This is not a declarative language. This is an imperative language which makes it more powerful. You can do a lot more with JavaScript.

But JavaScript is not fault tolerant. In other words, if you do make a mistake in your JavaScript or give the browser something it doesn’t understand, it will stop at that point; it will not parse any further. So it’s less resilient than HTML or CSS.

So this is what we have on the front end of the web. These three languages: HTML, CSS, and JavaScript. Two declarative and fault-tolerant languages. One imperative language that isn’t fault tolerant.

Fault tolerance

I fell into a trap of thinking for a while “Oh, all declarative languages are fault-tolerant.” I guess I was seeing causation where there’s just correlation. That’s not true. You can absolutely make a declarative language that is not fault-tolerant.

In fact XML is a perfect example. The parsing instructions for XML say “if you see a mistake, stop: do not render anything; just quit right then.”

So not all declarative languages are fault tolerant. But I think we’re lucky on the web that the two we have are fault tolerant. I think that makes the web more accessible to people.

I think it would be very hard to make a fault-tolerant imperative language. It would be impossible to debug an imperative language if it just ignored what it didn’t understand and carried on.

This is kind of how I like to approach building on the web, in this order:

  1. the foundations of HTML for the structure,
  2. I’ll layer on my suggestions for styling with CSS and then, if I need to,
  3. I’ll reach for JavaScript to do the more powerful behavioral stuff.

Mindset

So in programming we’ve got these two approaches, these two categories of programming languages: imperative and declarative. But I think more importantly what we have is two different mindsets, two ways of approaching how you solve a problem: an imperative way and a declarative way. And the kind of mindset you have will probably influence the kind of languages you prefer, the kind of languages you gravitate towards.

Let’s take a problem we’re trying to solve on the web where we want to make a component, a button component.

How will we go about making this button component? Because a nice thing on the web is there’s no one way of doing things. You can do things however you like.

One approach would be a very imperative way. We reach straight for JavaScript. We’d have the bare minimum HTML, just a div or something. Throw in a bunch of event handlers to handle those clicks. Don’t forget you’ve got to handle keyboard support as well. Got to make sure it’s accessible too, so you’ll need to put in some ARIA roles and all that stuff.
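
It might look something like this (a deliberately rough sketch; the class name and the save function are made up):

<div class="fake-button" role="button" tabindex="0">Save</div>

function save() {
  // whatever saving means for this interface
}

const fakeButton = document.querySelector('.fake-button');
fakeButton.addEventListener('click', save);
fakeButton.addEventListener('keydown', (event) => {
  // a real button responds to both Enter and Space
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault();
    save();
  }
});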

So that’s one way to do it.

Or just use a freaking button element.

This would be the declarative approach.
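
That is, the whole thing (the same hypothetical Save button):

<button type="button">Save</button>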

So which approach would you go for?

Now I know what I would do. Mostly because I’m just lazy. I don’t want to have to do all this when I can do this. I’m absolutely going to reach for the declarative approach, which kind of reflects my general approach with client-side JavaScript on the web which is:

JavaScript should only do what only JavaScript can do.

That means if there’s a way of doing it in HTML or CSS, do it in HTML or CSS. Only reach for JavaScript when you absolutely need to.

(This is client-side JavaScript I’m talking about here. When you’re doing something on a server, do what you want. It’s a different environment. I’m talking about in the browser on the World Wide Web.)

Okay, so I know how I would do it. But I’m trying to understand why would anybody do this? Why would anybody reach for JavaScript immediately and do the over-engineered version?

I don’t want to just think “Oh, they don’t know what they’re doing!” That doesn’t feel right. I want to understand the mindset that’s going on.

I’m going to reach for a button element because I get so much for free. I get some styling for free but I can change it. I get interactivity for free. I get the keyboard support for free. It’s accessible by default. So if the browser gives you all the styling and behavior for free, why would you invent everything from scratch and reject that?

I think it’s about control.

Control

This goes back to that mindset. I think if you have this imperative mindset then control is probably a priority. You want to understand exactly how things work in precise step-by-step instructions. I feel like people with this imperative mindset, they’re probably coming from a computer science background where you’re used to specifying everything very precisely. And if that’s your attitude then those free stylings and behavior you get from the browser, those aren’t features— they’re bugs.

Because, well, what if there’s some unintended side effects? If you haven’t written it yourself how do you know how it’s going to behave in every situation? You don’t. Whereas if you invent everything from scratch it feels like you’ve got total control.

Declarative approaches can feel like giving up control: I’m going to leave the details to the web browser, I’m going to trust that the web browser handles the keyboard support and accessibility.

But I feel like this control comes at a cost. I’m not sure you really gain control. I think you get the feeling of control. But it always involves making assumptions, whereas a declarative mindset is about avoiding assumptions.

Assumptions

Let me illustrate what I mean. To illustrate this I’m going to use a declarative language but show you how the mindset could still be imperative.

Let’s say we’ve got a button and we want to style it so we’re going to use CSS, a declarative language. I might write some CSS like this:

button {
  font-size: 16px;
  padding-left: 16px;
}

Absolutely nothing wrong with the CSS. Font size 16 pixels, padding left 16 pixels, on a button. That’s fine.

But is that what I really mean?

When I say font-size: 16px do I really mean the default browser font size? (which is 16 pixels most of the time)

I could be more intentional about this. When I say 16 pixels what I mean is the default browser font size: one rem.

button {
  font-size: 1rem;
  padding-left: 1rem;
}

Now if the user changes their default font size my button will change accordingly. I’ve removed that assumption that the default browser font size is always going to be 16 pixels.

Actually when I say padding-left is one rem, do I really mean padding-left? Or do I mean the padding at the start of the text?

If that’s what I mean then I can say that. I can say padding-inline-start is one rem.

button {
  font-size: 1rem;
  padding-inline-start: 1rem;
}

Now inline-start is the same as left if it’s a left-to-right language like English. But I’m making an assumption that it’s going to be presented in a left-to-right language like English when I use padding-left.

This is again removing that assumption. I might not have any plans to translate my website myself, but users can translate my website. It’s more of that two-way conversation with the user. If they want to do that, let’s allow them to do that. So again I’ve removed an assumption there.

But if I want to really get into a conversation with the user and how they come to look at this website, I could start to adjust the font size to be relative to their browser width by using the viewport width unit here: vw.

button {
  font-size: calc(0.5rem + 0.666vw);
  padding-inline-start: 1rem;
}

I throw it into a calc with a bit of rem. The reason I’m doing this is that if you just use viewport width units then the font size can’t respond to the user changing their default font size. So mixing in a relative unit like this keeps it accessible; people can still change their font size.

But the point here is that the text in the button now could change depending on the browser width. I could do that with media queries but then you’d be jumping between these different breakpoints. This keeps it fluid the whole time.

But maybe I’ve gone too far here. Because maybe at small sizes it’s going to get way too small and at large screen sizes it’s going to get way too big. That’s okay. I can put guard rails in place.

button {
  font-size: clamp(
    1rem,
    0.5rem + 0.666vw,
    1.5rem
  );
  padding-inline-start: 1rem;
}

We’ve got this clamp function in CSS. In the middle I’ve got the same declaration I had before (though I don’t even need to use calc when I use clamp), but what I’ve got on either side is like a minimum and a maximum: don’t ever go below one rem and don’t ever go above 1.5 rems. Let it flow in between those.

Now the truth is I probably wouldn’t do this on a button element. I’d probably want my whole interface to respond like this. So I’d probably do it on the body and say: let the font size adjust. It’s going to be relative to the viewport, but it’s never going to go below the minimum and it’s never going to go above the maximum. And then you end up with an interface that just scales fluidly with font size.

Now I’ve kind of given up a lot of control here. Because if you were to ask me “What is the font size when the screen width is 1024 pixels wide?” my answer would be “I don’t know …but I know it’ll look good.”

I know that everything will be in proportion and I know it will never get too small and it’ll never get too big.

In a way I’m giving up control so that I can have more of that dialogue with the user, more of that conversation.

Utopia

This idea of fluid typography—and fluid spacing as well—is what’s at the heart of a project called Utopia. I highly encourage you to check out utopia.fyi.

Full disclosure: Utopia started life at Clearleft, the agency I work at. It’s the work of Trys Mudford and James Gilyead.

It’s kind of like clamp on steroids. You set the boundary conditions. In this case it’s what type scale you want to use. And that type scale is probably going to be different for small screens than large screens. So you might say: for small screens I want my type scale to be 1.2. That’s the ratio. But on the large screen I want a larger type scale: 1.3.

[Image: setting the type scale boundary conditions in Utopia]

So you specify those edges and then you let the browser figure it out. It does the calculations. Figures it out for you.
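
The kind of CSS that comes out the other end looks something like this (illustrative values only, not actual Utopia output):

:root {
  /* fluid type scale steps, generated from the boundary conditions */
  --step-0: clamp(1rem, 0.91rem + 0.43vw, 1.25rem);
  --step-1: clamp(1.2rem, 1.05rem + 0.74vw, 1.63rem);
  --step-2: clamp(1.44rem, 1.21rem + 1.17vw, 2.11rem);
}

h2 {
  font-size: var(--step-1);
}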

I like this diagram that James put together for it where it shows I designed the minimum and the maximum and then maths figured out the in-between. Handing over those controls.

[Diagram: the human designs the minimum and the maximum; maths figures out the in-between]

I think it’s this great use of machines and humans working together, doing what they’re best at. Because you’ve got the human defining the boundary conditions, doing the design. And then handing over to the machine to do the calculations, to do the maths, which is what machines are good at.

You know when Steve Jobs described computers as a bicycle for the mind? I wonder: can CSS be a bicycle for design? Where it’s amplifying design. You set the boundary conditions and then the browser does all the in-between.

Because CSS now is full of these functions that work like this, where you give it the edges, you give it the boundary conditions, and then it figures out the specifics. It does the calculations. It does the maths for you.

calc()
clamp()
min() and max()
fit-content
min-content and max-content
repeat()
minmax()

Intrinsic web design

This all opens up whole new ways of designing for the web that some people have been exploring. Jen Simmons in particular. She calls this intrinsic web design. She’s got a YouTube channel called Layout Land. It’s well worth checking out.

Watching her talks on this, I feel like it’s kind of got the same mindset that the Utopia project has: define the boundary conditions; let the browser figure out the specifics.

Be the browser’s mentor, not its micromanager

Every-Layout.dev is a project from Heydon Pickering and Andy Bell. It’s a book with various designs you can implement. Well worth checking out. I feel like the same mindset is there as well.

Even if you don’t buy the book it’s worth going to the website and reading the thinking behind it. If you go to the axioms there’s a quote there that says:

Instead of thinking of designing for the web as creating visual artifacts, think of it as writing programs for generating visual artifacts.

Declarative programs I would say.

Andy Bell, one of the guys behind that, also built this website for a conference talk. It’s called Build Excellent Websites. The talk was called “Be the browser’s mentor, not its micromanager”. Chef’s kiss! I think that is exactly what I’m trying to get at here:

Give the browser some solid rules and hints then let it make the right decisions for the people that visit it.

That’s exactly what I’m getting at!

Declarative design

So I feel like there’s something here. There’s this kind of declarative approach to design I’m trying to put my finger on, and I feel like these approaches all share a mindset, a declarative mindset.

Now am I then saying that the declarative approach is inherently better than the imperative approach?

The answer to that question is …it depends

…which is a bullshit answer. Anytime someone gives you this answer to a question you must follow up with the question, “it depends on what?”

Culture

In this case, whether the declarative approach is inherently better than an imperative approach depends on …culture, for one thing. The culture of the organization that’s producing the website, the culture of the organization that’s shipping it.

Companies have different cultures which manifest in different ways, like management: how a company does management.

I think you could divide management into two categories like you can do with programming languages. There is a very imperative school of management where it’s all about measurements, it’s all about those performance reports, it’s all about metrics, time tracking. Maybe they install software on your machine to track how long you’ve been working. It’s all about measuring those outputs.

That’s one approach to management. Then there’s a more declarative approach, where you just care about the work getting done and you don’t care how people do it. So if they want to work from home, let them work from home. If they want to work strange hours, let them work strange hours. What do you care as long as the work gets done? This is more about giving people autonomy and trust.

Now I know which approach I prefer and I would say at Clearleft we’ve definitely got a declarative approach. We give people lots of autonomy and trust.

That’s not always necessarily a good thing. If someone’s coming from an imperative background and on the first day of work is told “hey, you’re a designer, you solve the problem; we’re not going to tell you what to do”, people can really flounder if they’re not used to that level of autonomy and trust.

So again I’m not saying one is right or wrong. I know which place I’d rather work at but I’m not saying one is right and the other is wrong.

So companies have different cultures. There’s no one-size-fits-all way of thinking that will work for all company cultures.

Culture is one of those things that’s usually implicit and it’s unspoken. It’s rarely made explicit. It can be made explicit and one of the ways I think you can make a company culture explicit is through design systems.

Design systems

I know that normally when we think of design systems we’re thinking of component libraries. We’re thinking of interface elements gathered together. To me that’s not what a design system is. My favorite definition of design systems comes from Jina Anne, who knows a lot about design systems, and she said that design systems are “the way we do things around here.”

I think focusing on those components is maybe focusing on the artifacts.

It will come as no surprise to you that I think you can kind of divide design systems into two categories. What are those categories? Imperative and declarative.

Before I look at how we divide design systems into imperative and declarative design systems I’m going to take a step back and talk about how we think about thinking.

I’m going to get meta here. And I don’t mean just on the web I mean how we approach problem solving as human beings.

Analytical thinking

I think there are two different ways historically to approach thinking and problem solving. One is analytical thinking. This isn’t new. This goes back centuries. This is what Immanuel Kant was talking about in his Critique of Pure Reason.

With analytical thinking you understand how something works by breaking it down into its constituent parts. This is what underpins the scientific method. It’s really, really useful for things that work exactly the same way everywhere in the universe: chemistry and physics and mathematics. I would say it’s probably not so great when you’re dealing with messy unpredictable things like human beings but analytical thinking has served us very, very well in our civilization.

Systems thinking

On the other hand you’ve got systems thinking, which is almost the opposite of analytical thinking, where you understand the parts by looking at how they all work together as a whole.

To be very crude about it, analytical thinking is about zooming in and systems thinking is about zooming out.

So with that in mind let’s look at design systems.

Well the word “system” is right there in the title so surely what we’re talking about here is systems thinking, right?

Surprisingly most design systems to me seem to exhibit very analytical thinking: breaking things down into their constituent parts. Maybe you do an interface inventory; break things down into components, atoms, molecules.

It doesn’t have to be that way.

Let’s go back to our friend the button. Let’s say we’ve got our design system with a bunch of different button styles and we want to document this. That’s what a design system is for, right? Documenting the different styles you have.

[Image: three differently styled buttons]

You can see these buttons have different background colours and different borders, and one approach would be to document those colours. You put those in the design system. Maybe there are nice tokens, right? And then when someone needs that information they go to the design system and they pluck out the colours. Done.

[Image: documenting the button colours as design tokens]

That’s one way of doing it. That’s a very imperative, very precise way of doing it.

The other way is to pull back and say “what’s the commonality across all these buttons?” And then you could come up with that rule, that boundary condition, which would be:

the border colour should be 10% lighter than the background colour.

You could convert this to CSS now using custom properties. We’re even gonna have a color-mix function in CSS.
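
Something along these lines, say (the custom property name and the colour value are mine):

button {
  --button-color: #2266aa;
  background-color: var(--button-color);
  /* the border colour is 10% lighter than the background colour */
  border-color: color-mix(in srgb, var(--button-color), white 10%);
}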

But you see how this is a very different approach to just specifying what the final colours are.

So am I saying that when it comes to design systems a declarative approach is always better than an imperative approach?

I’m saying it depends.

“It depends on what?” you ask.

It depends on your team. It depends on their background and their mindset.

Teams

In a way it doesn’t matter whether you have a declarative or imperative mindset. What matters is what’s the mindset of the team working on this? That should dictate the approach you take.

When I hear someone say “design systems stifle creativity” what I’m hearing is “I’m a declarative designer and the design system is being very imperative, it’s fencing me in.”

But if you’re from an imperative background, “I need to know the color of the border for this button!” And if the design system says “well, you should make the border 10% lighter than the background,” that’s not useful—“Just give me the colors!” That’s the mindset I’ve got: “don’t make me think; just give me the data!”

I was talking about composers and I was comparing Mozart and Miles Davis. But let’s also compare their teams.

Classical musicians are trained to sight-read. It’s like a superpower they have. Whereas jazz musicians are trained to improvise. That’s their superpower.

With classical musicians, they could play a whole new piece of music without ever having heard it just from sight, which is absolutely amazing. But if Miles Davis had tried to record Kind Of Blue with a bunch of classical musicians I think it would have turned out very, very differently.

And then there’s the process, the design process.

The design process

I struggle with the design process on the web. I assume everyone does.

Jason Grigsby has been blogging about this on the Cloud Four blog. The very first post in the series is The Traditional Web Design Process Is Fundamentally Broken. I think we’d share that frustration.

He shows the traditional sort of very waterfall-y process that happens a lot, where particularly you’ve got this handover stage. The design all happens on this side of the handover and the development all happens on that side of the handover.

[Diagram: the traditional waterfall process, with design on one side of the handover and development on the other]

I think we can all agree that’s not good. That’s not going to lead to a great result.

He proposes something more like this, which goes backwards and forwards in time and also intertwingles design and development. And that’s clearly better.

[Diagram: a process that moves back and forth between design and development]

I still think there’s an issue here. Right here in the middle. Static mock-ups. Very precise, high-fidelity pictures of how something should look before you start actually building the thing.

Tools

Our tools influence us. They can’t help but do that.

We shape our tools and then our tools shape us.

Look at the tools we use for designing on the web. It used to be Photoshop and then it was Sketch. Now it’s Figma. Whatever it is, they’re very precise, high-fidelity, pixel-perfect tools. They’re very imperative. So we take an imperative approach and then we try to translate it into a declarative language like CSS, which is supposed to be about setting boundary conditions, leaving the browser to figure out the details.

There’s a mismatch there. So what’s the alternative?

We could jump straight to CSS, designing in the browser. Is that the answer? I’m not sure.

I like the phrase that Dan Mall uses, which isn’t designing in the browser but deciding in the browser. He’s kind of talking about sign-off. Instead of having something done in a mock-up and that being what gets signed off as the ideal we want, do as much as you want in the design tool, but that’s not the sign-off. You get it into the browser, and at some point in the browser you go “Yes, that’s what we want.” Sign off on that.

Now it’s already been translated into the language of the browser, so there isn’t going to be that disappointment you get with the traditional design process: the designer is disappointed because the end result doesn’t look like what they designed; the developer is disappointed because the designer did all this without consulting them; the client is disappointed; everybody is disappointed.

So I like the idea of deciding in the browser. But your approach to taking things into the browser sooner—maybe even designing in the browser—will depend entirely on your attitude to CSS.

If you’re from an imperative mindset you might think:

CSS is broken and I want my tools to work around the way that CSS has been designed.

Which is fair enough if you’re coming from that imperative mindset. I totally understand why you’d think that way. And there are tools to help you. This is pretty much what Tailwind does. Or CSS-in-JS. Treat the C in CSS as a bug and work around it. In that situation maybe you do want to stick to designing as much as possible in an imperative tool like Figma.

But if your mindset is more declarative you may think:

CSS is awesome and I want my tools to amplify the way that CSS has been designed.

Then, yeah, go check out Utopia, go check out those other websites I was talking about. Because that’s what they do. They go with the flow of CSS.

I’ve been saying “it depends”. It depends on your team. It depends on the culture of your company. You have to choose the approach that works best for you and your team. But to finish I want to point out one more thing that it depends on.

The medium

Choosing a declarative or an imperative approach to design also depends on the medium. The medium that you’re working in.

Now if you’re working in a medium that’s very precise like print or native apps, mobile or operating system specific apps for desktop, then you do actually have a lot of control. So an imperative approach might make a lot more sense. Those assumptions are maybe justified because you’ve got such a tightly controlled environment. So in that situation, yeah, go for it— be like Mozart; take the imperative approach.

But I don’t think that maps very well to the World Wide Web.

I think on the World Wide Web you want to be more Miles. You want to be more declarative.

We’ve tried in the past on the World Wide Web to shoehorn in an imperative mindset, to say “Well, let’s say things are really defined—let’s make assumptions about, you know, how wide everybody’s screen is.”

I’m showing my age here but when I started making websites, the assumption was everybody’s screen is 640 pixels wide. Later on it was 800 pixels wide. 1024.

Then mobile phones came along and everybody shat the bed because they didn’t know what they were supposed to do.

You understand why designers were doing this. It was to try and feel that sense of control. But it was always an illusion of control. It was a consensual hallucination to say “Yes, everybody’s screen is 800 pixels wide.” It doesn’t match with the reality, the messy beautiful reality of the World Wide Web.

To paraphrase the senator from Alderaan:

The more you tighten your grip, the more the World Wide Web slips through your fingers.

[Image: Princess Leia]

People have been talking about this for a long time. There’s an article called A Dao of Web Design by John Allsopp. It was published in 2000, 23 years ago, and it still holds up today. I recommend going back and reading it.

It was all about rejecting assumptions. Assumptions about devices. Assumptions about browsers. Assumptions about networks. Assumptions about people.

Look, you don’t have control over how people access the web. And that’s okay. That’s not a bad thing.

Ten years after John published A Dao Of Web Design on A List Apart, Ethan Marcotte published Responsive Web Design on A List Apart. And the first thing he does in this article is he quotes John Allsopp’s A Dao Of Web Design. He was building on top of it. It was the same approach, the same mindset that was behind responsive web design.

That was 2010. And now, are we in a new era? Is it time for the next phase? A new era of declarative design maybe?

[Image: screenshots of A Dao of Web Design and Responsive Web Design on A List Apart]

That’s up to you.

CSS is an incredibly powerful tool now. So let’s use it. Let’s lean into the fluid, ever-changing nature of the World Wide Web.

It is a very, very exciting time for design on the web right now and I can’t wait to see what you build.

" } , { "id": "20638", "url": "https://adactio.com/article/20638", "title": "Of Time And The Web", "summary": "A presentation from border:none held in Nuremberg in October 2023, ten years after the first border:none in 2013.", "date_published": "2023-11-15 16:07:08", "tags": [ "worldwideweb", "time", "design", "frontend", "development", "history", "standards", "browsers", "html", "css", "javascript" ], "content_html": "

I want to tell you about a word. And the reaction I had to reading that word. I had a very big reaction to a very small word.

This is the word.

Was.

The past tense of the verb “to be.”

I saw this word and I was struck with a sense of awe.

Now, obviously it’s not just about the word itself—I don’t get struck by feelings of awe anytime anyone says the word “was.” The context matters. Here’s the context…

I was reading the Wikipedia entry for smallpox. Don’t ask me how I ended up there, I don’t remember. But I read the second word in the first sentence, “Smallpox was…” And I think for the first time I really grasped what an astonishing achievement it is to eradicate a deadly disease.

We did that! Humans! Using science! Surely this must stand as one of the greatest achievements of our species?

I was curious as to how this was reported in news outlets at the time. I went searching through newspaper archives on the web to find out.

Here’s an article on Page 21 of The New York Times. It’s five paragraphs long. The first paragraph says:

Smallpox, one of the deadliest and oldest viral diseases of humans, has been eradicated, World Health Organization officials said today in a news conference broadcast here from Geneva.

A few paragraphs down it says:

The health organization began the intensified smallpox eradication program in 1967. In that year, smallpox was reported in 42 countries and killed 2 million people. It also scarred and blinded another eight million people.

In one year! About ten years later, the disease was eradicated! But it was a gradual process.

If something changes gradually, we don’t notice it. We literally exhibit something called change blindness.

But we are hard-wired to notice sudden changes. We pay attention to moments of change.

“Where were you when JFK was assassinated?”

“Where were you on September 11th?”

Nobody is ever going to ask “where were you when smallpox was eradicated?”

We mark the moments when an election is won or lost, the moment war breaks out, the moment a ceasefire is signed.

We also seem to be particularly attuned to breaking changes—the moments when bad things happen.

We’re downright suspicious of good news.

We have this phrase: “sounds too good to be true.”

But we don’t have this phrase: “sounds too bad to be true.”

There may well be solid evolutionary reasons for being attuned to danger and threats. But maybe we should occasionally take a step back and notice the changes we’ve been blind to.

The past

What if there were a newspaper that wasn’t published daily or weekly, but once every 100 years? What would the headline on the front page be?

Maybe it would be a moment, like “Yuri Gagarin went into space”—that was a pretty big one!

But maybe the headline would be something we don’t even think about.

Maybe the headline would be about how many more people can read the headline! Adult literacy rates have skyrocketed over the last hundred years—and that’s in percentages, not raw numbers; with raw numbers the chart would be even more dramatic.

What about a 50 year newspaper? What would the headline be?

Maybe it would be about climate change: “burning coal and oil turns out to have been really bad.” Or maybe the headline would be about the remarkable drop in extreme poverty. Any percentage is still too much, but still, look at that rapid fall… particularly over the last 25 years.

Every single day, a real-time newspaper could’ve run the headline:

“130,000 People Escape Extreme Poverty Today”

…but no newspaper has ever reported that. Sounds too good to be true.

What about a newspaper published once every 25 years?

Maybe the headline would still be about the drop in extreme poverty. Or maybe the headline would be about the equally steep drop in violent crime. Or deaths in natural disasters.

Or perhaps the editor would run with the story of infant mortality rates being halved. This time we are looking at the raw numbers; if this were percentages of the world population, the effect would be even more dramatic.

What about a newspaper published every 10 years? That’s the time frame we’re looking at here today, right?

For a newspaper published once a decade, I reckon today’s headline would probably be about climate change and the environment.

Maybe this is the most surprising development in the last 10 years: the huge increase in solar and wind energy. There’s a corresponding graph showing an equally dramatic drop in price, especially for solar.

We could be looking at part of an exponential curve here—time will tell!

But let’s get more specific. Let’s look at the World Wide Web.

How has the web changed since I was standing here 10 years ago?

Thanks to the Internet Archive, I can show what my own website looked like at the moment I was speaking at the first border:none.

Now let’s compare this to how my website looks today…

Hmmm …not exactly an astonishing amount of change, is it?

I think we need to go further back than 10 years. I think we need to go all the way back.

This is the first website ever made, displayed in the first web browser ever made. They were both made by Tim Berners-Lee 30 years ago. And this website is still online today at its original URL, because it’s cool.

Colour

The World Wide Web was somewhat lacking in colour originally. When I started making websites in the mid nineties, colour had arrived but it was limited.

We had a palette of 216 web safe colours. You knew a colour was “web safe” if its hexadecimal notation was three sets of duplicated values. If you altered one of those values even slightly, there was no guarantee that the colour would display consistently on the monitors of the time. I have a confession to make: I kind of liked this constraint in a weird way. To this day, if I have a colour value that’s almost web-safe, I can’t resist nudging it slightly.
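
For example (the colour values are mine):

a {
  color: #336699; /* web safe: each channel is one of 00, 33, 66, 99, CC, FF */
  color: #346699; /* nudged ever so slightly: no longer web safe */
}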

Fortunately, monitors improved. They got flatter for one thing. They were also capable of displaying plenty of colours.

And we also got more and more ways of specifying colours. As well as hexadecimal, we got RGB: Red, Green, Blue. Better yet, we got RGBa …with alpha transparency. That’s opacity to you and me.

Then we got HSL: hue, saturation, lightness. Or should I say HSLa: hue, saturation, lightness, and alpha transparency.

And there are more colour spaces available today. HWB (hue, whiteness, blackness), LAB, LCH. And now we’ve got a color() function so you can specify even more colour spaces.
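
Here’s one colour making its way through several of those notations (the LCH and color() values are approximate):

.example {
  color: #663399;                          /* hexadecimal */
  color: rgb(102, 51, 153);                /* RGB */
  color: rgba(102, 51, 153, 0.8);          /* RGB with alpha */
  color: hsl(270, 50%, 40%);               /* HSL */
  color: lch(32% 61 309);                  /* LCH, approximately */
  color: color(display-p3 0.38 0.21 0.57); /* the color() function, approximately */
}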

Sounds too good to be true.

Typography

In the beginning, typography on the World Wide Web was non-existent. Your browser used whatever was available on your operating system.

That situation continued for quite a while. You’d have to guess which fonts were likely to be available on Windows or Mac.

If you wanted to use a sans-serif typeface, there was Arial on Windows and Helvetica on the Mac. Verdana was a pretty safe bet too.

For a while your only safe option for a serif typeface was Times New Roman. When Matthew Carter’s Georgia was released, it was a godsend. Here was a typeface specifically designed for the screen.

Later Microsoft released another four fonts designed for the screen. Four new fonts! It felt like we were being spoiled.

But what if you wanted to use a typeface that didn’t come installed with an operating system? Well, you went into Photoshop and made an image of the text. Now the user had to download additional images. The text wasn’t selectable and it was a fixed width.

We came up with all sorts of clever techniques to do what was called “image replacement” for text. Some of the techniques involved CSS and background images. One of the techniques involved Flash. It was called sIFR: Scalable Inman Flash Replacement. A later technique called Cufón converted the letter shapes into paths in Canvas.

All of these techniques were hacks. Very clever hacks, but hacks nonetheless. They were clever and they worked but they always reminded me of Samuel Johnson’s description of a dog walking on its hind legs:

It is not done well but you are surprised to find it done at all.

What if you wanted to use an actual font file in a web page?

There was only one browser that supported font embedding: Microsoft’s Internet Explorer. The catch was that you had to use a proprietary font format called Embedded Open Type.

Both type foundries and browser makers were nervous about allowing regular font files to be embedded in web pages. They were worried about licensing. Wouldn’t this lead to even more people downloading fonts illegally? How would the licensing be enforced?

The impasse was broken with a two-pronged approach. First of all, we got a new font format called Web Open Font Format or WOFF. It could be used to take a regular font file and wrap it in a light veneer of metadata about licensing. There’s a sequel that’s even better than the original, WOFF2.

The other breakthrough was the creation of intermediary services like Typekit and Fontdeck. They would take care of serving the actual font files, making sure they couldn’t be easily downloaded. They could also keep track of numbers to ensure that type foundries were being compensated fairly.

Over time it became clear to type foundries that most web designers wanted to do the right thing when it came to licensing fonts. And so these days, you can probably license a font straight from a type foundry for use on the web and host it yourself.

You might need to buy a few different weights. Regular. Bold. Maybe italic. What about extra bold? Or a light weight? It all starts to add up, especially for the end user who has to download all those files.

I remember being at the web typography conference Ampersand years ago and hearing a talk from Nick Sherman. He asked us to imagine one single font file that could go from light to regular to bold and everything in between. What he described sounded like science fiction.

It is now science fact, indistinguishable from magic. Variable fonts are here. You can typeset text on the web to be light, or regular, or bold, or anything in between.

When you use CSS to declare the font-weight property, you can use keywords like “normal” or “bold” but you can also use corresponding numbers like 400 or 700. There’s a scale with nine options from 100 to 900. But why isn’t the scale simply one to nine?

Well, even though the idea of variable fonts would have been pure fantasy when this part of CSS was being specced, the authors had some foresight:

One of the reasons we chose to use three-digit numbers was to support intermediate values in the future.

With the creation of variable fonts, Håkon Wium Lie added:

And the future is now.

On today’s web you could have 999 font-weight options.
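
In CSS, that might look like this (the font name and file path are made up):

@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans.woff2") format("woff2-variations");
  font-weight: 100 900; /* the whole range in one file */
}

h1 {
  font-family: "Example Sans", sans-serif;
  font-weight: 637; /* any value in between works */
}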

Sounds too good to be true.

Images

In the beginning, the World Wide Web was a medium for text only. There were no images and certainly no videos.

In an early mailing list discussion, there was talk of creating a new HTML element for images. Perhaps it should be called “icon”. Or maybe it should be more generic and be called “embed”. Tim Berners-Lee said he imagined using the rel attribute on the A element for embedding images.

While this discussion was happening, Marc Andreessen popped in to say that he had just shipped a new HTML element in the Mosaic browser. It’s called IMG and it takes an attribute called SRC that points to the source of the image.

This was a self-closing tag so there was no way to put fallback content in between the opening and closing tags if the image couldn’t be displayed. So the ALT attribute was introduced instead to provide an alternative description of the image.

For the images themselves, there were really only two choices. JPG for photographic images. GIF for icons or anything that needed basic transparency. GIFs could also do animation and today, that’s pretty much all they’re used for. That’s because there was a concerted campaign to ditch the GIF format on the web. Unisys, who owned the rights to a compression algorithm used by the GIF format, had started to make noises about potentially demanding license fees for its use.

The Portable Network Graphics format—or PNG—was created in response. It was more performant and it allowed you to have proper alpha transparency.

These were all bitmap formats. What if you wanted a vector format for images that would retain crispness at any size or resolution? There was only one option: Flash. You’d have to embed a Flash movie in your web page just to get the benefit of vector graphics.

By the 21st century there were some eggheads working on a text-based vector file format that could be embedded in webpages, but it sounded like a pipe dream. It was called SVG for Scalable Vector Graphics. The format was dreamed up in 2001 but for years, not a single browser supported it. It was like some theoretical graphical Shangri-La.

But by 2011, every major browser supported it. Styleable, scriptable, animatable, vector graphics have gone from fantasy to reality.

There’s more choice in the world of bitmap images too. WebP is well supported. AVIF is gaining support.

The IMG element itself has grown too. You can use the srcset attribute to give the browser a range of images to choose from to best suit the user’s device and network connection. You can use the loading attribute to get lazy loading of images for free—no JavaScript required.
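
Put together, that might look like this (the file names are mine):

<img
  src="photo-800.jpg"
  srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
  sizes="(min-width: 40em) 50vw, 100vw"
  alt="A description of the photo"
  loading="lazy">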

We now have audio in HTML. No JavaScript required. We now have video in HTML. No JavaScript required.

These elements have been designed with more thought than the IMG element. They are not self-closing elements, by design. You can put fallback content between the opening and closing tags.
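
For example (the file name is mine):

<video src="performance.mp4" controls>
  <p>This browser doesn’t support video. <a href="performance.mp4">Download the video</a> instead.</p>
</video>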

The audio and video elements arrived long after the IMG element. For a long time, there was no easy way to do video or audio on the web.

That was very frustrating for me. The first websites I ever built were for bands. The only way to stream music was with a proprietary plug-in like RealAudio.

Or Flash.

While the web standards were still being worked on, Flash delivered the goods with streaming audio and video. This happened over and over. Flash gave us vector graphics, animation, video, and more. But the price was lock-in. Flash was a proprietary format.

Still, Flash showed the web standards bodies the direction of travel. Flash was the hare. Web standards were the tortoise.

We know how that race ended.

In a way, Flash was like the Research and Development incubator for the World Wide Web. We got CSS animations, SVG, and streaming video because Flash showed that there was an appetite for them.

Until web standards provide a way to do something, designers and developers will reach for whatever tool gets the job done. Take layout, for example.

Layout

In the early days of the web, you could have any layout you wanted …as long as it was a single column.

Before long, HTML expanded to provide some rudimentary formatting for that single column of text. Presentational elements and attributes were invented. And even when elements and attributes weren’t meant to be used for formatting, people got creative.

Tables for layout. A single pixel GIF that could be given width and height. These were clever solutions. But they were hacks. And they were in danger of turning HTML into a presentational language instead of a language for structuring content.

CSS came to the rescue. A language specifically for presentation. But we still didn’t get proper layout tools. There was a lot of debate in the early days about whether CSS should even attempt to provide layout tools or whether that was a job for a separate technology.

We could lay things out using the float property, but really that was just another hack.

Floats were an improvement over tables for layout, but we only swapped one tool for another. Our collective thinking still wasn’t very web-like.

For example, designers and developers insisted on building websites with a fixed width. This started in the era of table layouts and carried over into CSS.

To start with, the fixed width was 640 pixels. Then it was 800 pixels. Then people settled on the magical number of 960 pixels. Designers and developers didn’t seem at all concerned that people had different sized screens.

That was until the iPhone came out. Everyone shat the bed. What fixed width were we supposed to design for now?

The answer was there all along. Even before the web appeared in mobile devices, it was possible to build fluid layouts that would adapt to screen size. It’s just that the majority of designers and developers chose not to build in this way.

I was pleased that mobile came along and shook things up. It exposed the assumptions that people were making. And it forced designers and developers to think in a more fluid, webby way.

Even better, CSS had expanded to include media queries so it was possible to alter layouts at different breakpoints.

Ethan came along and put a nice bow on it with his definition of responsive design: fluid media, fluid layouts, and media queries.

I fell in love with responsive web design instantly because it matched how I was already thinking about the web. I was one of the handful of weirdos who insisted on building fluid websites when everyone else was using fixed-width layouts.

But I thought that responsive web design would struggle to take hold. I’m delighted to say that I was wrong. Responsive web design has become the default!

If I could go back to my past self in the mid 2000s, I’d love to tell them that in the future, everyone would be building with fluid layouts (and also that time travel had been invented apparently).

Not only that, but we finally have proper layout tools for the web. Flexbox. Grid. No more hacks. We’ve even got container queries, which for years we were told were literally too good to be true.
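
A container query, for instance, lets a component respond to the size of its container rather than the viewport (the selectors are mine):

.card-wrapper {
  container-type: inline-size;
}

@container (min-width: 30em) {
  .card {
    display: flex;
  }
}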

Web browsers now are positively overflowing with fantastic design tools that would have been unimaginable to my past self. Support for these technologies is pretty much universal.

When browsers differ today, it’s only in terms of which standards they don’t yet support.

The present

The web has come a long way. It has grown and evolved. Browsers have become more and more powerful while maintaining backward compatibility.

In the past we had to hack our way around the technological limitations of the web and we had a long wish list of features we wanted.

I’m not saying we’re done. I’m sure that more features will keep coming. But our wish list has shrunk.

The biggest challenges facing the World Wide Web today are not technical challenges.

Today it is possible to create beautiful websites that make full use of colour, typography, layout, animation, and more. But this isn’t what users experience.

This is what users experience. A tedious frustrating game of whack-a-mole with websites that claim to value our privacy while asking us to relinquish it.

This is not a technical problem. It is a design decision. The decision might not be made by anyone with designer in their job title, but make no mistake, business decisions have a direct effect on user experience.

On the face of it, the problem seems to be with the business model of advertising. But that’s not quite right. To be more precise, the problem is with the business model of behavioural advertising. That relies on intermediaries to amass huge amounts of personal data so that they can supposedly serve up relevant advertising.

But contextual advertising, which serves up ads based on the content you’re looking at, doesn’t require the invasive collection of personal data. And it works. Behavioural advertising, despite being a huge industry that depends on people giving up their privacy, doesn’t even work very well. And on the few occasions when it does work, it just feels creepy.

The problem is not advertising. The problem is tracking. The greatest trick the middlemen ever pulled was convincing us that you can’t have effective advertising without tracking. That is false. But they’ve managed to skew our sense of perspective so that invasive advertising seems inevitable.

Advertising was always possible on the web. You could publish anything and an ad is just one more thing you could choose to publish. But tracking was impossible. That’s because the early web was stateless. A browser requests a resource from a server and once that transaction is done, they both promptly forget about it. That made it very hard to do things like online shopping or logging into an account.

Two technologies were created later that enabled state on the web. Cookies and JavaScript. If these technologies had been limited to a same origin policy (like how cookies were originally specified), they would have nicely solved the problems of online shopping and authentication.

But these technologies work across domains. Third-party cookies and third-party JavaScript enable users to be tracked as they move from site to site. The web has gone from having no state to having too much.

There is hope. Browsers like Firefox and Safari are blocking third-party cookies by default. Personally, I’d love it if third-party JavaScript got the same treatment. You can also install add-ons to make your browser more secure, although these add-ons are often labelled ad-blockers, which is a shame. Because the problem is not advertising. The problem is tracking.

Perhaps none of this applies to you anyway. You may be thinking that this is a problem for websites. But you build web apps that don’t rely on behavioural advertising.

As I said here ten years ago, I’m not keen on the idea of dividing the entirety of the World Wide Web into two vaguely-defined categories. Ten years on, I still have yet to hear a good definition of “web app” other than “a website that requires JavaScript to work.”

But the phrase “single page app” has a more definite meaning. It refers to an architectural decision. That decision is to reinvent the web browser inside a web browser.

In a sense, it’s a testament to the power of JavaScript that you can choose to do this. Browsers render content and perform navigations, but if you’d rather recreate that functionality from scratch in JavaScript, you can.

But should you? Browsers have increased in complexity so that we can build without complexity. We can use the built-in power of modern HTML, CSS, and JavaScript to make web browsers do the work. If we work with the grain of the web, we can accomplish more and more with less and less code.

But that isn’t what happened. Instead developers have recreated form controls like dropdowns and datepickers from scratch using divs and lashings and lashings of JavaScript.

Perhaps this points to some missing features on the web. It’s still too hard to style native dropdowns and datepickers (but that’s being worked on—there’s standards work underway to give us more styling control over form elements). But that doesn’t explain why developers would choose to recreate something like a button using divs and JavaScript when the button element already exists and can be styled any way you like.

I think there’s a certain mindset being applied to web development here. And that mindset comes from the world of software. Again, it’s a testament to how far the web has come that it can be treated as a software platform on par with operating systems like iOS, Android, or Windows. There’s a lot to be learned from the world of software development, like testing, for example. But the web is different. When a user navigates to a URL, it shouldn’t feel like they’re installing a piece of software.

We should be aiming to keep our payloads as small as possible. And given how powerful browsers have become, we need fewer and fewer dependencies—fewer and fewer polyfills.

But performance has gotten worse. Payloads have gotten bigger. Dependencies like JavaScript frameworks have become more and more widespread even as they became less and less necessary.

When asked to justify the enormous payloads, web developers have responded by saying that users’ expectations have changed. That is correct, but not in the way that I think they mean.

When I talk to people about using the web—especially on mobile—their expectations are that they will have a terrible experience. That websites will be slow to load. And I guarantee you that none of them are saying, “Well I’d be annoyed if this were a website but seeing as this is a web app, I’m absolutely fine with this terrible experience.”

I said that the biggest challenges facing the World Wide Web today are not technical challenges. I think the biggest challenge facing the web today is people’s expectations.

There is no technical reason for websites or web apps to be so frustrating. But we have collectively led people to expect a bad experience on the web.

Our intentions may have been good. We thought users wanted nice page transitions and form elements that were on-brand. But if you talk to people, you find out that what they want is to accomplish their task without megabytes of JavaScript getting in the way.

There’s a great German word, “Verschlimmbessern”: the act of making something worse in the attempt to make it better. Perhaps we verschlimmbessert the web.

Let’s step back. Get some perspective. Instead of assuming that a single page app architecture is needed, ask what users need to accomplish. Instead of assuming you need a CSS framework or a JavaScript library, see what you can do in browsers today with native CSS and vanilla JavaScript. Don’t include a bunch of dependencies by default just in case you might need them. Instead, as Rachel puts it:

Stop solving problems you don’t yet have.

Lean into what web browsers can accomplish today. If you find something missing, that’s the time to reach for a library …but treat it like a polyfill. Whereas web standards stick around, every library and framework comes with a limited lifespan. Treat them as cattle, not pets.

I understand that tools and frameworks can make your life easier. And if we’re talking about server-side frameworks, then I say “Go for it.” Or if you’re using build tools that sit on your computer to do version control, linting, pre-processing, or transpiling, then I say “Go for it.” But once you make users download tools or frameworks, you’re making them pay a tax for your developer convenience.

We need to value user needs above developer convenience. If I have the choice of making something the user’s problem or making it my problem, I’ll make it my problem every time. That’s my job.

We need to change people’s expectations of the World Wide Web, especially on mobile. Otherwise, the web will be lost.

The future

In 2019, I had the great honour of being invited to CERN to mark the 30th anniversary of the original proposal for the World Wide Web. One of the other people there was the journalist Zeynep Tüfekçi. She was on a panel along with Tim Berners-Lee and other luminaries of the early web. At the end of the panel discussion, she was asked:

What would you tell the next generation about how to use this wonderful tool?

She replied:

If you have something wonderful, if you do not defend it, you will lose it. If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.

I believe that we can save the web. I believe that we can change people’s expectations. We’ll do that by showing them what the web is capable of.

" } , { "id": "20290", "url": "https://adactio.com/article/20290", "title": "“Web3” and “AI”", "summary": "A short talk delivered at a gathering in Brighton by the Design Business Association in July 2023 on the topic of “Web3, AI and Design”.", "date_published": "2023-07-04 17:44:14", "tags": [ "ai", "machinelearning", "language", "models", "web3", "cryptobollocks", "blockchain", "hype", "marketing", "design", "process", "tools" ], "content_html": "

Hello. I was asked by the Design Business Association to talk to you today about “web3 and AI.”

I’d like to explain what those terms mean.

“Web3”

Let’s start with “web3.” Fortunately I don’t have to come up with an explanation for this term because my friend Heydon Pickering has recorded a video entitled “what is web 3.0?”

What is web trois point nought?

Web uno dot zilch was/is a system of interconnected documents traversable by hyperlink.

However, web deux full stop nowt was/is a system of interconnected documents traversable by hyperlink.

On the other hand, web drei dot zilch is a system of interconnected documents traversable by hyperlink.

Should you wish to upgrade to web three point uno, expect a system of interconnected documents traversable by hyperlink.

If we ever get to web noventa y cinco, you can bet your sweet @rse, it will be a system of interconnected documents traversable by f*!king hyperlink.

There you have it. “Web3” is a completely meaningless term. If someone uses it, they’re probably trying to sell you something.

If you ask for a definition, you’ll get a response like “something something decentralisation something something blockchain.”

As soon as someone mentions blockchain, you can tune out. It’s the classic example of a solution in search of a problem (although it’s still early days; it’s only been …more than a decade).

I can give you a definition of what a blockchain is. A blockchain is multiple copies of a spreadsheet.

I find it useful to be able to do mental substitutions like that when it comes to buzzwords. Like, remember when everyone was talking about “the cloud” but no one was asking what that actually meant? Well, by mentally substituting “the cloud” with “someone else’s server” you get a much better handle on the buzzword.

So, with “web3” out of the way, we can move onto the next buzzword. AI.

“AI”

The letters A and I are supposed to stand for Artificial Intelligence. It’s a term that’s almost as old as digital computing itself. It goes right back to the 1950s.

These days we’d use the term Artificial General Intelligence—AGI—to talk about that original vision of making computers as smart as people.

Vision is the right term here, because AGI remains a thought experiment. This is the realm of super intelligence: world-ending AI overlords; paperclip maximisers; Roko’s basilisk.

These are all fascinating thought experiments but they’re in the same arena as speculative technologies like faster-than-light travel or time travel. I’m happy to talk about any of those theoretically-possible topics, but that’s not what we’re here to talk about today.

When you hear about AI today, you’re probably hearing about specific technologies like large language models and machine learning.

Let’s take a look at large language models and their visual counterparts, diffusion models. They both work in the same way. You take a metric shit ton of data and you assign each one to a token. So you’ve got a numeric token that represents a bigger item: a phrase in a piece of text, or an object in an image.

The author Ted Chiang used a really good analogy to describe this process when he said ChatGPT is like a blurry JPEG of the web.

Just as image formats like JPG use compression to smush image data, these models use compression to smush data into tokens.

By the way, the GPT part of ChatGPT stands for Generative Pre-trained Transformer. The pre-training is that metric shit ton of data I mentioned. The generative part is about combining—or transforming—tokens in a way that should make probabilistic sense.

Terminology

Here’s some more terminology that comes up when people talk about these tools.

Overfitting. This is when the output produced by a generative pre-trained transformer is too close to the original data that fed the model. Another word for overfitting is plagiarism.

Hallucinations. People use this word when the output produced by a generative pre-trained transformer strays too far from reality. Another word for this is lying. Although the truth is that all of the output is a form of hallucination—that’s the generative part. Sometimes the output happens to match objective reality. Sometimes it doesn’t.

What about the term AI itself? Is there a more accurate term we could be using?

I’m going to quote Ted Chiang again. He proposes that a more accurate term is applied statistics. I like that. It points to the probabilistic nature of these tools: take an enormous amount of inputs, then generate something that feels similar based on implied correlations.

I like to think of “AI” as a kind of advanced autocomplete. I don’t say that to denigrate it. Quite the opposite. Autocomplete is something that appears mundane on the surface but has an incredible amount of complexity underneath: real-time parsing of input, a massive database of existing language, and on-the-fly predictions of the next most suitable word. Large language models do the same thing, but on a bigger scale.

What’s it good for?

So what is AI good for? Or rather, what is a language or diffusion model good for? Or what is applied statistics or advanced autocomplete good for?

Transformation. These tools are really good at transforming between formats. Text to speech. Speech to text. Text to images. Long form to short form. Short form to long form.

Think of transcripts. Summaries. These are smart uses of this kind of technology.

Coding, to a certain extent, can be considered a form of transformation. I’ve written books on programming, and I always advise people to first write out what they want in English. Then translate each line of English into the programming language. Large language models do a pretty good job of this right now, but you still need a knowledgable programmer to check the output for errors—there will be errors.

(As for long-form and short-form text transformations, the end game may be an internet filled with large language models endlessly converting our written communications.)

When it comes to the design process, these tools are good at quantity, not quality. If you need to generate some lorem ipsum placeholder text—or images—go for it.

What they won’t help with is problem definition. And it turns out that understanding and defining the problem is the really hard part of the design process.

Use these tools for inputs, not outputs. I would never publish the output of one of these tools publicly. But I might use one of these tools at the beginning of the process to get over the blank page. If I want to get a bunch of mediocre ideas out of the way quickly, these tools can help.

There’s an older definition of the initialism AI that dairy farmers would be familiar with, when “the AI man” would visit the farm. In that context, AI stands for artificial insemination. Perhaps that’s also a more helpful definition of AI tools in the design process.

But, like I said, the outputs are not for public release. For one thing, the generated outputs aren’t automatically copyrighted. That’s only fair. Technically, it’s not your work. It is quite literally derivative.

Why all the hype?

Everything I’ve described here is potentially useful in some circumstances, but not Earth-shattering. So what’s with all the hype?

Venture capital. With this model of funding, belief in a technology’s future matters more than the technology’s actual future.

We’ve already seen this in action with self-driving cars, the metaverse, and cryptobollocks. Reality never matched the over-inflated expectations but that made no difference to the people profiting from the investments in those technologies (as long as they make sure to get out in time).

By the way, have you noticed how all your crypto spam has been replaced by AI spam? Your spam folder is a good gauge of what’s hot in venture capital circles right now.

The hype around AI is benefiting from a namespace clash. Remember, AI as in applied statistics or advanced autocomplete has nothing in common with AI as in Artificial General Intelligence. But because the same term is applied to both, the AI hype machine can piggyback on the AGI discourse.

It’s as if we decided to call self-driving cars “time machines”—we’d be debating the ethics of time travel as though it were plausible.

For a refreshing counter-example, take a look at what Apple is saying about AI. Or rather, what it isn’t saying. In the most recent Apple keynote, the term AI wasn’t mentioned once.

Technology blogger Om Malik wrote:

One of the most noticeable aspects of the keynote was the distinct lack of mention of AI or ChatGPT.

I think this was a missed marketing opportunity for the company.

I couldn’t disagree more. Apple is using machine learning a-plenty: facial recognition, categorising your photos, and more. But instead of over-inflating that work with the term AI, they stick to the more descriptive term of machine learning.

I think this will pay off when the inevitable hype crash comes. Other companies that have tied their value to the mast of AI will see their stock prices tank. But because Apple is not associating themselves with that term, they’re well positioned to ride out that crash.

What should you do?

Alright, it’s time for me to wrap this up with some practical words of advice.

Beware of the Law of the instrument. You know the one: when all you have is a hammer, everything looks like a nail. There’s a corollary to that: when the market is investing heavily in hammers, everyone’s going to try to convince you that the world is full of nails. See if you can instead cultivate a genuine sense of nailspotting.

It should ring alarm bells if you find yourself thinking “how can I find a use for this technology?” Rather, spend your time figuring out what problem you’re trying to solve and only then evaluate which technologies might help you.

Never make any decision out of fear. FOMO—Fear Of Missing Out—has been weaponised again and again, by crypto, by “web3”, by “AI”.

The message is always the same: “don’t get left behind!”

“It’s inevitable!” they cry. But you know what’s genuinely inevitable? Climate change. So maybe focus your energy there.

Links

I’ll leave you with some links.

I highly recommend you get a copy of the book, The Intelligence Illusion by Baldur Bjarnason. You can find it at illusion.baldurbjarnason.com

The subtitle is “a practical guide to the business risks of generative AI.” It doesn’t get into philosophical debates on potential future advances. Instead it concentrates squarely on the pros and cons of using these tools in your business today. It’s backed up by tons of research with copious amounts of footnotes and citations if you want to dive deeper into any of the issues.

If you don’t have time to read the whole book, Baldur has also created a kind of cheat sheet. Go to needtoknow.fyi and you can get a one-page list of cards to help you become an AI bullshit detector.

I keep track of interesting developments in this space on my own website, tagging with “machine learning” at adactio.com/tags/machinelearning

Thank you very much for your time today.

" } , { "id": "19210", "url": "https://adactio.com/article/19210", "title": "In And Out Of Style", "summary": "A presentation from An Event Apart Spring Summit held online in April 2022 and the opening presentation at CSS Day held in Amsterdam in June 2022.", "date_published": "2022-06-22 12:20:28", "tags": [ "css", "styles", "styling", "worldwideweb", "history", "standards", "agreements", "browsers", "technology", "conference", "talk", "presentation", "medium:id=e269e88d47b7" ], "content_html": "

Hello, my name is Jeremy and I am speaking to you today from Brighton on the south coast of England.

I want to tell you about something that happened here in Brighton back in 1985 (pretty sure it took place in one of those buildings along the seafront there). In 1985 Brighton was host to the International Information Theory Symposium. Fascinating.

Something exciting happened there. Word began to go around that there was an unexpected guest attending the event. This unexpected guest was this man, Claude Shannon. The way it was described later was somebody said it was as if Newton had showed up at a physics conference.

He wasn’t even meant to be there, but he was convinced at the dinner after the event to get up and say a few words. And he did, he got up and he started to talk, but he felt like he was losing the audience. So he proceeded to do some juggling.

That’s so Claude Shannon. He was very much into games. He took games very seriously. He was a very playful kind of person.

For example, he invented this machine, which is called the most beautiful machine, also the most useless machine. But I think it’s just wonderful. I mean, it’s like the perfect encapsulation of cybernetics, the ideal feedback loop.

But the reason why people were excited that Claude Shannon was at this event wasn’t because of the most beautiful machine. And it wasn’t because of his juggling. It was because of information theory.

Because Claude Shannon, it’s not like he just revolutionized the field of information theory; Claude Shannon pretty much invented the field of information theory in one fell swoop, in a 1948 paper called “A Mathematical Theory of Communication”.

Here’s the TLDR. This is the mathematical part. I won’t go into the details of the mathematical part, but what I recall from Claude Shannon’s work is that he was able to effectively boil information down into fundamental particles. The idea that there’s a single bit of information.

This idea of entropy, the idea that for information to travel between communicator and the receiver, you’ve got the signal that you’re trying to transmit, but there’s also noise. And this noise is unavoidable.

And like I said, this idea of the bit; that any piece of information could be reduced down to a fundamental particle: a one or a zero; on or off, which of course is exactly how computers work. So it’s no exaggeration to say that Claude Shannon is like the father of the digital age.

And one other thing I take from Claude Shannon’s work is that when it comes to communication of information, context matters. In other words, that the expectation between the receiver and the communicator can make a lot of difference.

Shared context

So to give you an example of shared context being very important in information communication I want to illustrate it with a story from the pre-digital age. This is a story from the age of the electrical telegraph.

Now this story is probably completely apocryphal. In some versions of the story, it involves the novelist Victor Hugo. In other versions, it’s Oscar Wilde. But the point is there’s an author. He’s just published a book and now he’s gone off on holiday after writing the book. But while he’s on holiday, he’s really curious to know, how is the book doing? What are the sales like?

So he sends a telegram to his publisher, but because there’s enough shared context between the publisher and the author, all he sends is a single character. A question mark.

?

And then, because there’s this shared context between the publisher and the author and the publisher wants to let the author know that actually sales are going really, really well, the publisher also sends back a telegram with a single character. An exclamation mark.

!

So this is a classic example of the importance of context. I mean, you’re just sending a single character and yet both parties understand the message being conveyed.

Context matters. Shared context matters.

Now I want to try an experiment with you to test how much shared context there is between me (I’m going to try to transmit a message) and you (the receiver of the message).

So we’re starting with a blank slate. And now I’m going to provide one piece of information. Okay. Here’s the piece of information. Probably doesn’t tell you much. A diagonal line. There’s not enough context here.

All right. Back to the blank slate. I’m now going to provide another piece of information.

Okay. Again, in isolation, this probably doesn’t tell you much just another diagonal line. But if I combine it with the first bit of information, then now maybe it starts to become something you can parse. And if I provide just one more bit of information, now maybe it clicks into place that the piece of information I’m trying to convey is ten minutes past ten.

And yet all I’ve done here is I’ve provided you with two diagonal lines in a circle. Yet somehow two diagonal lines in a circle, when we have the shared context of how to read an analogue clock face, is enough to communicate ten minutes past ten.

(The time, by the way, is completely arbitrary. The only reason I chose that time is just that if you ever look at an advertisement for a watch, it’ll usually be ten past ten because the angles of the arms on the watch nicely frame the logo of the watchmaker.)

But anyway, the point is: with enough shared context, two diagonal lines in a circle are enough for me to communicate the piece of information, “ten minutes past ten o’clock.”

I mean, maybe you’re a digital native born in the 21st century, in which case you’re looking at this and thinking, “I just see two diagonal lines in a circle”, but if you can read an analogue clock face, then we have that shared context.

Where did this context come from? Why is it that that clock faces are set up the way they are? Why do clocks go clockwise? It seems like a fairly arbitrary decision.

It is somewhat arbitrary, but one neat explanation is that the reason why clocks go in a clockwise direction is that that’s the way that a shadow on a sundial would travel …in the Northern hemisphere.

Now if you look at a sundial in the Southern hemisphere, like this one here—this is in Wellington, in New Zealand—the shadow would actually go in a counterclockwise direction.

\"A

So really it’s almost an accident of history that we have clocks that go clockwise. If clocks had been invented in the Southern hemisphere, then they would go in the other direction. It’s pretty arbitrary, but now we’ve decided, we’ve kind of settled on this arbitrary movement of clocks that they go clockwise and we’re stuck with it.

Because inertia is a very powerful force. If you tried to change the way the clocks work you’d have your work cut out for you, even if the reason why clocks work the way they do is arbitrary to begin with.

Inertia

You know, a very wise person once said the most dangerous phrase in the English language is “We’ve always done it that way.” And that very wise person was the brilliant computer scientist, rear admiral Grace Hopper.

\"Grace

She used to say:

Humans are allergic to change.

She said:

I try to fight that. And that’s why I have a clock on my wall that runs counterclockwise.

Right? It kind of drives home this idea that, hey, this is an arbitrary decision.

And it’s kind of weird for us to look at a clock that runs counterclockwise. I actually managed to find a watch a few years ago that worked like this, that ran counterclockwise. And I wore it for a while and I was able to train my brain to read the clock this way. And it worked fine, but it completely broke my brain for reading normal clocks. So I kind of had to just stop doing it.

But I’m fascinated by these examples of fairly arbitrary decisions made sometime in the past that you’re then stuck with, because it’s very hard to change the inertia. But only recently did I find out that there’s a term for this phenomenon. This is called path dependence.

Path dependence

History is full of path dependence. The classic example is if you wanted to make a new train or a new stretch of railway track, you’re gonna have to use the existing gauge of the railway in question. Now it’s not that there’s one gauge of railway that’s better or worse than any other gauge, but if someone’s made that decision in the past, it’s very hard to change. And you really do want to settle on one gauge so that you don’t have to switch trains when you move between different parts of a country (this actually happened down in Australia, where they had different gauges for the railways, it was kind of a mess).

It’s the canonical example of why you need standards. But really the point of standards isn’t necessarily to enshrine the best way of doing something. The point of standards is to enshrine the agreement. “Hey, let’s all agree to do things this way.”

Whether the standard is good, bad, better or worse than other ways of doing things is in some ways less important than the agreement. You just need to have everyone agree on something.

Like, there’s a standard for which side of the road you drive on in your country. And it doesn’t really matter whether the standard is for it to be on the left side of the road or the right hand side of the road. But it really matters that you all agree on the same side of the road.

Agreement and standards brings us very nicely to the World Wide web. Because I think the World Wide web is a fantastic example of agreement.

The web is agreement

There’s a friend of mine, Paul Downey, who does these wonderful illustrations. Fantastic. He has this one called “the web is agreement.” Whenever I think about the word agreement, this is what I think of: that the web is agreement. And he does these kind of Hieronymus Bosch and Breughel-like images of all the different formats and standards that we use on the web.

\"The

And if you think about what the World Wide Web is, this combination of HTTP and URLs and HTML. This was, you know, when the web was first created. Yeah, these are just agreements.

I mean, HTTP is a protocol and that word protocol literally means agreement. (If you think about diplomatic protocols that are like diplomatic agreements, right?)

URLs, “Hey, let’s all agree to use this addressing scheme.”

And HTML, “If we all agree to use this format, then we get interoperability.”

So these formats, these protocols, this set of standards or agreements came from Sir Tim Berners-Lee. This was back when he was working at CERN at the nuclear physics laboratory in the late 1980s, early ’90s.

I’m somewhat fascinated about the birth of the web, which is why it was a huge honour and pleasure for me to be invited to CERN a few years back. This was in the run up to the 30th anniversary of the original proposal that Tim Berners-Lee submitted for what later became the World Wide web. And we did this project and you can check out the project at worldwideweb30.com.

This wonderful group of people came together for a week to kind of hack on something. And what we were hacking on was this project to recreate the very first web browser …but that you could run it in a modern web browser. This is what it looked like.

The first web browser was also, confusingly, called WorldWideWeb. It was created by Tim Berners-Lee on his NeXT machine. And this was the first demonstration of those three things working together: HTTP, URLs and HTML.

Now I say we were working on this. I didn’t make this part. This was the really smart bit; much cleverer people were working on the smart bit. What I worked on was the website that accompanied the project.

And specifically I worked on creating a timeline.

\"Timeline\"

Remember this is coming up on the 30th anniversary of the web, and I thought it’d be fascinating to not just graph out the 30 years that the web has existed, but also look back at the 30 years before the web existed to see what were the influences that fed into the web.

What I was looking at here really was the path dependencies. What were the path dependencies in computing and networks, hypertext and formats that all fed into that creation of the World Wide Web?

So let’s take formats, for example. Tim Berners-Lee creates HTML along with URLs and HTTP.

And if I show you these elements, they should be quite familiar to you. You recognize what this language is, right? Yeah. Clearly this language is…

SGML. Standard Generalized Markup Language.

Specifically, it’s a flavor of SGML that was being used in CERN at the time. And Tim Berners-Lee thought rather than create his own format, he would use what people were already used to.

He kind of had the same insight that Grace Hopper had, that humans are allergic to change. But instead of trying to change that, he sort of went with the flow.

So he took SGML and basically copied it to make HTML, introducing one new element, which is the A element.

So, you know, there’s a path dependency, even in the name, right? You think Standard Generalized Markup Language. Oh right. And now we have HyperText Markup Language. So even the name HTML seems to have a path dependency to SGML.

But it goes deeper.

SGML was a specialized version of GML. And GML was supposed to stand for Generalized Markup Language. Except the people who created GML were named Goldfarb, Mosher, and Lorie, which is probably the real reason why it was named GML.

And later we got SGML and then we got HTML. So it turns out there’s a path dependency in the phrase “HTML” that goes back to three dudes wanting to get their names into a format many, many years ago.

Styling

What about styling when it came to the early web?

There’s no CSS at this point. But if you look at this first web browser—and this is the very first web page on the web, which is still available at its original URL—you can see that different types of elements are styled differently.

The links are styled differently than the text around them. The heading is styled differently to the body copy. This definition list has formatting going on. You can see the spacing there. So something is doing some styling.

Well, when we were at this hack week at CERN, we had access to the original source code for this project. We found this file in there.

And this is, I guess, the user agent style sheet. This is the bit that tells the first web browser how to style headings, how to style lists, how to style definitions.

It’s not very readable, but you can tell that, you know, there’s a lot of values here, not many property names, but if you squint at it just right, you can imagine, okay, this is some form of a style sheet.

It became clear though that it wasn’t enough to just allow the user agent to do the styling. Authors—that’s developers and designers—authors also needed a way to provide styling information.

Now for a while there, it looked like the way this was gonna happen was going to be in HTML. People started adding these proprietary elements and attributes to HTML that were all about presentation, all about styling. And that’s not what HTML is for. HTML is about structure, about the semantics, the meaning of a document.

It was really important that there’d be some kind of separation of concerns, that you would use one format—HTML—for your structure, and that there should be a separate format, some other kind of language, for styling.

The question is, what should that other format be?

Well, pretty early on, some proposals started coming in early mailing lists to do with the World Wide Web. Pretty much everyone on the web in the first few months was making their own web browser. It was by nerds for nerds.

Rob Raisch

I think the first proposal for some kind of styling for authors came from Rob Raisch, who was at O’Reilly at the time.

He sent an email to the www-talk mailing list in June of 1993, with this proposal as a way of styling.

@BODY fo(fa=he,si=18)

Now, again, looking at this, it’s not CSS, but if you squint just right, you can sort of make sense of it. It’s kind of like looking at a clock running counterclockwise. It’s not what we’re used to, but you feel like you could parse it.

Clearly the priority here was to do with brevity. We’ve got these two character things like FO for font, FA for face, HE for Helvetica, SI for size. You put that all together and you can say the font face should be Helvetica and the font size should be 18 of whatever unit we’re talking about here.

So, you know, just about able to parse it, there is the concept here of, you know, some kind of selector, right? The way that we say we’re talking about the BODY, we’re talking about the paragraph or talking about B or I.

Pei Wei

The next proposal that came along was by Pei Wei, who was building the Viola web browser. He sent an email to the www-talk mailing list in October of 1993. And he was able to put his entire style sheet in his proposal.

This is what it looks like. Kind of similar to what we saw before. We’ve got the idea of properties and values, but with equal signs rather than the colons that we’re used to now.

But what’s really interesting here is this idea of nesting. We’ve got nesting going on in this proposal which is something that we’re really only just getting in CSS.
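For comparison, here’s roughly what nesting looks like in CSS today, now that it has finally landed (the selectors are invented for illustration):

nav {
  background-color: #eee;

  & a {
    color: inherit;
  }
}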

Håkon Wium Lie

Now, the next proposal came from HÃ¥kon Wium Lie. This was October of 1994 and he called his proposal Cascading HTML Style Sheets. And it looked like this.

h1.font.size = 24pt 100%

And again, you can kind of parse this, right? It’s not what we’re used to, but you squint at it and you can make sense of it. You can see the way that the selector and the properties are kind of scrunched together with this dot syntax. And again, there’s an equal sign rather than a colon, but we get it. It’s like, okay, the font size of an H1 element should be 24 points. Got it.

But wait, what’s this percentage after it, like this 100% or this 40%?

Influence

Well, this is a really interesting part of this proposal. This was this idea of influence. The idea that an author should be able to effectively say how much they care about a particular style being applied.

So if you really want that heading to be that size, you say 100%, I care about this. But if you only half care, you could say 50%.

And the idea was that users would also be providing styles and users would also specify how much they cared, how much influence they wanted to exert on the styles.

And then there’s kind of a bit of hand-wavy logic where it’s like, “And then the user agent figures out what the final style should be.”

And that last part it turned out was really hard to do. So this idea of influence somewhat fell by the wayside. But I think it’s very powerful and it definitely matched the ideas of Tim Berners-Lee with his first web browser, this idea that the web should be a read and write medium.

Because that first web browser—WorldWideWeb—wasn’t just a browser. It was also an editor.

The idea was you would open a document from the web, you’re looking at it and you think “I wanna make changes to this document. I’m gonna create my own copy, put it on my server and make the changes.”

Now it turned out that was really hard. And so that was one of the first things that got dropped from the World Wide Web, which is a bit of a shame because I think it is a very, very powerful idea, a very empowering idea.

We somewhat got back this idea of a read/write web with things like wikis and blogs and even social media to a certain extent.

And the idea that users should have influence over the styles of a website? Well, that survived in web browsers for quite a while, with this concept of user style sheets.

This is different to user agent style sheets. This was literally that in your browser, you could specify styles to override what an author has specified.

This got dropped from browsers over time because it turned out to be a real power-user feature. Most people weren’t using this. These days, if you want to apply styles as a user, you have to install a browser extension, some kind of plugin. Or your operating system has some kind of translation of like, “these are my preferences at the operating system level” and those get translated to the browser.

I think it’s a real shame that we lost user style sheets. I thought it was a very empowering feature. I get it. It was, you know, somewhat of an edge case. It was a power-user feature, but I think it’s a shame we lost it.

I do see, however, a bit of a resurgence in the idea of giving users control over styling with some of the things we’re seeing in CSS, particularly in Media Queries Level 5: this idea of what are being called preference queries. You’ve got prefers-color-scheme (think dark mode), prefers-reduced-motion, and prefers-reduced-data.

Now it’s a bit different because it’s still up to the author of the style sheet on that website to honour these preferences, right? You still have to write the styles to do the right thing to respect the colour scheme or reduced motion or reduced data.
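Here’s a minimal sketch of honouring a couple of those preferences (the values are arbitrary):

@media (prefers-color-scheme: dark) {
  body {
    background-color: #111;
    color: #eee;
  }
}

@media (prefers-reduced-motion: reduce) {
  * {
    animation: none;
    transition: none;
  }
}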

Though, you know, some browsers are looking into applying some of this stuff automatically: inverting colors and reducing motion.

And on the whole, I welcome the idea that users should have more of a say in how websites are styled. I think it’s a good thing.

So we’re seeing a bit of a resurgence of this idea of influence in modern CSS.

And speaking of modern CSS being somewhat foreshadowed in Håkon’s original proposal, here’s something else that was in that original proposal…

font.size = 12pt
h1.font.size = font.size * 3
h2.font.size = font.size * 2.5
h3.font.size = font.size * 2

If you look at this, you can kind of figure out what’s going on. That there’s kind of a declaration at the top to say there’s a variable, if you like, that’s 12 points. And then that variable is used throughout the style sheet. It’s multiplied by different numbers.

And really, we’ve got this now in CSS, thanks to custom properties and calc(), right? The ability to set variables and do calculations on those variables. But it took a long time between this original proposal and this very modern CSS that we have today.
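Here’s one way Håkon’s example might translate into today’s CSS, with a custom property and calc():

:root {
  --font-size: 12pt;
}

h1 { font-size: calc(var(--font-size) * 3); }
h2 { font-size: calc(var(--font-size) * 2.5); }
h3 { font-size: calc(var(--font-size) * 2); }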

Bert Bos

The final proposal I want to mention came from Bert Bos. This was also in 1994 and his proposal looked like this.

I think this is the first time we started to see colons rather than equal signs for properties and values. But again, you can see the way that the selectors and the properties sort of munged together with this dot syntax.

It’s parsable, right? Again, it’s like looking at a clock running counterclockwise, but you can understand what’s going on here.

So Bert’s proposal is interesting. It’s one more proposal. But what I really like about what Bert did was Bert also provided design principles.

In other words, the thinking behind the proposal. Because like I was kind of saying, you know, the standard itself, in some ways, isn’t the important thing. The important thing is the agreement. So let’s all try and agree on what we’re trying to accomplish with some kind of style sheet language. And I will also freely admit I’m just a sucker for design principles.

Design principles

I’m fascinated by design principles. I even collect them. This is like my equivalent of my interesting rock collection. A collection of design principles at principles.adactio.com.

And if you go there, I’ve collected design principles from individuals, from organizations, and I have Bert’s principles there.

And they’re worth reading through, but one of the issues with Bert’s principles is there’s a lot of them. These are all the different things that feed into the design of a style sheet language. And these are all good things, but I think what’s missing here is some kind of prioritization.

Because the hard part about design principles, isn’t saying what you value. The hard part about design principles is saying we value one thing over another.

So let’s take two of these. We see simplicity and longevity. Well, do we value simplicity more than longevity? Do we value longevity more than simplicity? That’s actually the hard part, to specify the priorities.

So I think it’s a bit of a shame that there isn’t prioritization here, but I think it’s still fascinating that we can look at all of the things that Bert was imagining we have to balance in some kind of style sheet language.

Well, it became pretty clear that Bert and Håkon were working on the same sort of thing. And so they pooled their resources together and kids, that’s where CSS comes from. Jointly from Bert and Håkon.

CSS

And what they settled on—with all of those different design principles and all of the ideas from the different proposals that came before—this is what we got:

selector {
  property: value;
}

This one pattern. You’ve got a selector, a property and a value. Then we’ve got these special characters for syntax, right? Curly braces, colons, semicolons, but really it’s somewhat arbitrary. The point is that all of CSS pretty much can be boiled down to this one pattern: selector, property, value.

It’s a very simple pattern. And yet it allows for endless complexity. I mean, this is our shared context on the web for styling. If you think about the number of websites out there, right? Billions. And every one of them has a different style sheet and every one of them is different. And yet all of them use the same pattern at its root.

It’s the classic example of how a simple rule can create a complex system. And I think this might also be at the heart of why CSS can be misunderstood. Because this pattern is very simple and because it’s very simple, people might think well, CSS is therefore easy.

But there’s a difference between simplicity and easiness.

Like, you can learn the idea of CSS in an hour, right? Because effectively this pattern is it. You need to get your head around selector, property, value.

But you can then spend a lifetime trying to master CSS because of all the possible combinations of selectors and properties and values, right? It’s a lifetime of learning.

So this is where I think some of the disconnect comes with people thinking, “Oh yeah, I’ll pick up CSS. No problem. It’s easy.” And actually, no. It’s simple, but it’s not easy.

And CSS has grown over time, right? We keep getting more selectors, we keep getting more properties and we keep getting more values. It grows and grows while still maintaining this fundamental pattern.

Hacks

And if we look at where the growth of CSS has come from, you know, a lot of the time it came from hacks. And I don’t mean literal CSS hacks, like the box model hack or tan hack for anybody old enough to remember that.

I mean hacks in a sense of its original use of a clever solution to a problem, but probably not a great long term solution.

Layout

So the classic example of hacks on the web would’ve been layout. You know, in the early days we were using tables for layout. We had transparent gifs, one pixel by one pixel gifs that we would give width and height to allow us to make all the layouts we wanted. And it worked, but it was a hack.

So then we got CSS and we switched to using floats for layout, which was better. But, you know, it was still a hack because floats weren’t intended for layout. They were intended for, you know, having text flow around images.

And it’s only relatively recently in the history of the web that we finally are able to throw away our hacks and use proper layout tools.

Because now we’ve got flexbox and we’ve got grid and these are made for layout. It took quite a while for us to get there.

But you know, in the early days of the web, it wasn’t even clear if CSS should attempt to do layout or whether there should be a third format specifically for layout. Because maybe there needed to be that separation of concerns between structure (you’ve got HTML), styling (you’ve got CSS), and some third technology for layout which could be considered as its own thing.

I mean, if you think about it today, we kind of sequester layout into media queries. So you could imagine that being a separate technology.

But anyway, it became clear over time that CSS should be the home for layout as well as other kinds of styling.

And that’s what we’ve got now. We’ve got flexbox. We’ve got grid. We got proper layout on the web so we were able to stop using our hacks and use the real native tools.
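As a sketch of what “made for layout” means: a responsive grid of columns is now just a few declarations (the class name is made up):

.layout {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
  gap: 1em;
}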

Typography

It’s a similar story with typography.

If you wanted to use a font that wasn’t one of the system fonts that most people would have installed, well, you went into Photoshop and you made an image of text using the font you wanted, and now the user would have to download that image file and the text wasn’t selectable, it was a fixed width, it came with all sorts of problems. And we came up with very clever solutions to do what’s called image replacement in CSS, but they were all hacks.

And now we don’t need the hacks because we’ve got the @font-face rule. So we can literally specify the typeface we want to use. We can stop using the hacks.
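A minimal @font-face sketch, assuming a hypothetical font file at /fonts/example.woff2:

@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example.woff2") format("woff2");
  font-display: swap;
}

body {
  font-family: "Example Sans", sans-serif;
}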

Graphic design

We used a lot of hacks for graphic design as well. Things that we weren’t quite able to do natively in CSS.

I’m gonna air my laundry here and show you a website I made back in 2005. This was the website for my first book called DOM Scripting and it’s very much of its time.

This is the very 2005-feeling design. You see the way we’ve got that element there with rounded corners? And you see the way that there’s a gradient in the background of the page? Well, back in 2005, we didn’t have rounded corners in CSS and we didn’t have gradients.

So those rounded corners? Those are images that have been absolutely positioned into that element.

And that gradient is actually an image. It’s a one pixel wide, but very, very tall image that is tiled across the entire background.

So these were hacks and they worked, but obviously I wouldn’t need to do that today. Today, I’ve got border-radius to do my rounded corners and I’ve got linear-gradient to do my gradients.
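Something like this would do the job today (the colours here are approximations for illustration, not the actual values from that 2005 design):

.rounded-box {
  border-radius: 10px;
  background-image: linear-gradient(to bottom, #ffffff, #dcd7c8);
}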

Ironically, right as we got the power to do rounded corners and gradients in CSS natively, we all collectively decided that, nah, actually what we want is flat design …which we could have been doing all along!

Anyway, let me show you another example. This is a website that dates back even further. This is my own personal website, adactio.com.

This design hasn’t really changed in over 20 years, but I have updated the technology.

Let’s take the image at the top. You can see that’s been given some treatment: a corner has been sliced out of one edge and a gradient has been applied so it fades out.

Now it used to be that I would have to do that in Photoshop. I’d take the original image, I’d slice off the corner, I’d add a gradient layer on top of it.

Well now I don’t need Photoshop because I can use clip-path to take off the corner. I can use a linear gradient using generated content—using the ::after pseudo element—to get the exact same result.

So now I’ve got something that looks pretty much the same, except where I had to do it in Photoshop before, now I’m able to do it natively in CSS.
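Roughly, the technique looks like this (the coordinates and colours are illustrative, not the actual values on adactio.com):

.banner {
  position: relative;
}

.banner img {
  display: block;
  clip-path: polygon(0 0, 100% 0, 100% 75%, 85% 100%, 0 100%);
}

.banner::after {
  content: "";
  position: absolute;
  inset: 0;
  background-image: linear-gradient(to right, transparent 60%, white);
}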

And that might not sound like much of a win because the end result looks the same, right?

Except now when I’m applying a prefers-color-scheme style sheet and I give a dark mode to my site, I don’t have to make a separate image. Because this is being done natively in CSS. The gradient is in CSS. The clip path is in CSS. It’s all native to CSS.

Material honesty

This comes down to this idea of material honesty. About using the right material for the job, rather than using a material that’s pretending to be something else, whether that’s, you know, an image of text, pretending to be a font or an image of a gradient, pretending to be a real gradient or, you know, any of those graphic design tricks.

We can now be materially honest in CSS because we’ve got grid and flexbox and border-radius and @font-face. It’s more honest.

And also it’s easier, right? It’s less work to avoid the hacks.

Like, one of my favorite examples of something we got recently in CSS to avoid the hacks (it’s a small thing, but it makes a big difference in my opinion) is just styling things like checkboxes and radio buttons.

We’ve always been able to do it, but it involved a hack where you’d hide the real checkbox off screen. And then you’d use a background image to show the different states of the checkbox. And it was fine and it worked and we could make it accessible.

But now we can just use accent-color and it’s easier.
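It really is just one declaration (the colour is arbitrary):

input[type="checkbox"],
input[type="radio"] {
  accent-color: rebeccapurple;
}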

So there’s been this movement from hacks to native CSS. And in a way, the hacks show the direction of travel. The hacks show us what the future could be.

Tools

The other place CSS has borrowed from or learned from, has been in our tools. Like the tools we use to generate our CSS.

Sass is the classic example, right? Sass is this enormously popular pre-processor for CSS and people were using Sass to do things you couldn’t do natively in CSS.

And I feel like one of the genius bits of design in Sass was that it worked a lot like the way HTML worked in the early days compared to SGML.

Remember, I was talking about how Tim Berners-Lee took SGML and literally turned it into HTML? Like, you were able to take an SGML document and change the file extension, and it would be a valid HTML document. And so that really helped with the adoption of HTML.

Well, that’s the same with Sass. You could take your existing style sheet document and just change the file extension to .scss and it was already valid Sass, right? You didn’t have to learn a new syntax.

This isn’t supposition on my part—that this was a reason for the huge success of Sass—because actually Sass had two options, two different syntaxes, and you could choose.

There was this .sass syntax and the .scss syntax.

And with the .sass syntax, it used significant white space. It was more condensed, right? It used indentation.

But with .scss it used the syntax you were already familiar with from CSS.

So you could say that the .sass syntax was more efficient. You could say it’s a better format, but humans are allergic to change. And the .scss syntax was familiar enough that people go, “oh yeah, I get this.”

You could take your existing CSS and just start using the features of Sass you wanted to.

And people overwhelmingly chose the .scss syntax over the .sass syntax. I got to meet Hampton Catlin who invented Sass and he confirmed the numbers for me. He said, yeah, it was a no brainer. Basically the .sass syntax lacks familiarity. It’s like looking at a clock running counter-clockwise.

But anyway, people started using Sass to do things like nesting, calculations, mixins, variables; all these things that we now do in CSS.

Of course, the reason we can do these things in CSS is because Sass proved that there was a desire for these things.

So now you really don’t need Sass for a lot of stuff, but the reason you don’t need Sass is because of Sass. Sass paved the way. Sass showed that there was a demand for this stuff. And now it’s native in the CSS. We don’t need that tool anymore.

And I feel like that’s a test of a really good tool. A really successful tool is when it becomes redundant.

In the JavaScript world, jQuery is a classic example of this, right?

With jQuery you were able to do things using a CSS syntax, whereas otherwise you had to use this long-winded DOM syntax of document.getElementsByTagName or getElementById.

Whereas in jQuery the idea was, “Hey, if you already know how to select something in CSS, just use that syntax again!”

It’s using what people are familiar with. Humans are allergic to change.

But these days we don’t need jQuery because in the DOM, we’ve got querySelectorAll, we’ve got querySelector. We can use CSS selectors to do our DOM scripting.

Why don’t we need jQuery anymore? Because of jQuery.

jQuery showed that this was a really clever idea. It was something people wanted. And so now it’s been standardized. We don’t need jQuery.

So I really feel like the goal of any good library should be to make itself obsolete. It’s so successful, it’s no longer needed.

And you could kind of see this in the history of the web with Sass, with jQuery, even with something like Flash.

You know, Flash showed the way. It showed that, “Hey, we want a way to do animation. We need some way to do video on the web.” And people were using Flash because there was no other way to do those things.

Now we don’t need Flash or jQuery or Sass because we get them natively.

So all of these are almost like research and development for the web.

They’re kind of like hacks, but I think a better way of thinking about them is they’re more like polyfills—these are things we can use until we get a standardized way of doing it.

I think it’s fascinating to look at our tools and see what can they tell us about what’s coming into standards.

Methodologies

A whole set of tools are these methodologies that people came up with, like OOCSS from Nicole Sullivan. And there’s BEM. And SMACSS was another one. There’s a whole bunch.

But I’m fascinated by these because these aren’t an example of some technology we needed to lobby browser makers to implement. Because these are really agreements.

These are agreements. These are saying, “let’s all agree to structure our CSS in a certain way.” Nothing needed to change in browsers, right?
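For instance, here’s what that agreement looks like in BEM, which is nothing more than a naming convention of block__element--modifier (class names hypothetical):

.card { padding: 1em; }
.card__title { font-size: 1.5em; }
.card__title--featured { color: #663399; }

Plain old CSS. The only thing that changed was what we agreed to call things.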

And all of these are testament to the power of the cascade. Because what they do is deliberately limit the cascade, which is seen as being almost too powerful.

So they’re not really tools, they’re methodologies. Or another way of putting it is they are agreements.

Again, the power of saying “let’s all agree to do something.”

Scale

And the problem that most of them are trying to solve is doing CSS at scale, doing CSS when you’ve got a large team. Which is interesting to think about: why wasn’t CSS designed to scale like this?

Miriam Suzanne said:

Large companies find HTML and CSS frustrating “at scale” because the web is fundamentally an anticapitalist mashup art experiment designed to give consumers all the power.

Okay, it’s funny, but it’s funny ’cause it’s true. If you look at those design principles that Bert Bos came up with, it was very much about empowering the end user, that CSS needed to be accessible. It needed to be something you could learn quickly.

So, you know, thinking about CSS as something that needs to be able to scale to multiple teams of people? That wasn’t really on the cards for CSS back then. It wasn’t a priority.

And maybe that’s why we came up with these methodologies like BEM and OOCSS and SMACSS to try and manage this stuff.

But even these methodologies, now the ideas behind them are finding their way into the standards. Now we’re getting cascade layers and scope in CSS.
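Here’s a minimal sketch of cascade layers, for example. Declarations in a later layer win over earlier layers, regardless of specificity:

@layer reset, components, overrides;

@layer components {
  .card {
    padding: 1em;
  }
}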

To me, this feels like a return of this idea of influence that Håkon Wium Lie was talking about all those years ago.

Now it’s not so much about the influence between an author and a user; it’s about the influence between multiple authors working on a giant code base.

So it’s a very exciting time for CSS to see these new tools arrive that can solve these scale problems.

But I do ask myself, what’s still missing in CSS? And this is a great question to ask of JavaScript as well:

What’s still missing?

And you can answer the question by looking at what we’re still using tools for. What are we still having to polyfill because we don’t yet have it natively in the browser?

And to answer this question, I’m gonna just quickly finish with three components that kind of demonstrate where CSS is still missing some features.

button

Let’s start with a button component.

All right. If you wanna implement a button component, you’ve kind of got two options. You can use a button element and then you style it with CSS to look however you want. Or you can, you know, make up your own button component using a div or a span, add the CSS and JavaScript and ARIA.

Really there’s absolutely no reason to do that. In this case, it’s a no-brainer. You use a button element and you style it with CSS. I cannot think of any reason why you would not do that. There used to be reasons: ages ago in Internet Explorer it was hard to style buttons. Those days are long gone.

So in this case, really simple answer to the question. Material honesty. Use a button element. Use CSS to style it.
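Something along these lines (the class name is made up):

<button type=\"button\" class=\"fancy\">Save</button>

.fancy {
  border: 0;
  border-radius: 0.5em;
  padding: 0.5em 1em;
  background-color: #663399;
  color: white;
}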

dropdown

All right. What about a dropdown component? You click on it and you get a select dropdown with a number of options.

Well, again, you can use a select element styled with CSS, or you can just make your own dropdown component with a div and JavaScript and a bunch of ARIA to try and make it accessible.

Now here, I would still say, use a select element and style it with CSS, but you are gonna hit a limit. Like the open state of the dropdown is kind of out of our control. There’s not much you can do in CSS. So if you really care about styling the open state of a dropdown, I guess I can understand why you would reach for making your own with a div and JavaScript and ARIA.

I mean, I personally wouldn’t do it. But I guess I can see it, because CSS isn’t there yet. We don’t yet have the power to style a dropdown.
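For what it’s worth, here’s roughly how far CSS will take you today; a sketch of styling the closed state:

select {
  font: inherit;
  padding: 0.5em 1em;
  border: 2px solid #663399;
  border-radius: 0.5em;
}

The open list of options, though, is drawn by the operating system. CSS can’t reach it.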

date picker

What about a date picker component? You click into it, the user chooses a date.

Again, there appears to be an obvious solution here, which is use input type=\"date\". Boom. You’re done. Style it how you want, right?

Well, good luck with that. If you’ve ever tried to style input type=\"date\", you’ll know there’s actually very little you can do. And so if you care about the styling of the date picker, you probably are gonna have to make up your own with a bunch of divs and JavaScript and try to make it accessible with ARIA.
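A sketch of about as much as you can reliably do:

input[type=\"date\"] {
  font: inherit;
  padding: 0.5em;
  border: 2px solid #663399;
}

The calendar widget itself is mostly off-limits. WebKit exposes a few non-standard pseudo-elements like ::-webkit-calendar-picker-indicator, but there’s no standard way in.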

Yeah, it’s a real shame.

So I think these three components kind of show the battleground, if you will, where CSS is still falling short a bit, where we still have to use hacks. It’s kind of this battle between the under-engineered solution—just use the native HTML element—and the over-engineered solution, right? We’re gonna have to create all the functionality and all the accessibility by ourselves.

And I used to get mad at people choosing the hacks, choosing the over-engineered solution. But I realized that it’s kind of like, you know, trying to reduce teen pregnancy by telling people to just stop having sex. Abstinence isn’t realistic. People are going to do it anyway. And the question is, well, how do we make it better and safer in the long run?

So I think that’s the real battleground: how do we style elements like select and date pickers natively in CSS? And that’s why I think the work being done by the Open UI group is really, really important.

It’s at open-ui.org.

And they say:

The purpose of Open UI to the web platform is to allow web developers to style and extend built-in web controls, such as select dropdowns, checkboxes, radio buttons, and date and color pickers.

I think it’s really important work. And I think that’s where we should be putting our effort.

The future

Because, you know, the difference between doing something natively in a web browser and doing something with a framework or with JavaScript is context.

Web browsers are the shared context between users and authors.

Whereas, if you want to use a framework or a library, you have to ship that context to the end user. And that puts a burden on them. It’s not good for performance. It’s not good for the user experience.

So web browsers are where past agreements live on today and they live on into the future.

When something lands in a browser, it stays in a browser. So by using what’s in web browsers, you are benefiting from decades of work by multitudes of people. And it’s better for users.

So treat frameworks and libraries as polyfills. Use them as a temporary measure when there’s something that’s still not possible in browsers (this is very true of JavaScript frameworks).

They can point the way to a shared context in the future, but they themselves are not the future. So don’t get too attached. Treat them as cattle, not pets.

Use frameworks and libraries as scaffolding to help you build. But they are not a foundation.

Web standards in the browser are your foundation to build upon.

You know, having an awareness of the history of technologies from sundials to web browsers can help you understand the way things are today. And in some ways the lessons of path dependence and inertia are sort of grim, right? Because of some arbitrary decision in the past, we are now stuck with the consequences in our clock faces, in our CSS. And it’s very, very hard to change that.

But there’s another way to look at this.

Nothing was inevitable. Which means nothing is inevitable (you know, except for entropy and the heat death of the universe).

So if someone tells you, “Hey, that’s just the way things are; accept it”, don’t believe it.

Understand your position in the timeline.

Yes, the present moment is the result of decisions made in the past, many of them arbitrary, but that also means the future will be highly influenced by the decisions you make today even if those decisions seem small and inconsequential.

The choices you make now could turn out to have long-lasting repercussions into the future.

So make your decisions wisely. You are literally creating the future.

And I’m looking forward to seeing the results.

Thank you.

" } , { "id": "18676", "url": "https://adactio.com/article/18676", "title": "Ain’t no party like a third party", "summary": "This was originally published on CSS Tricks in December 2021 as part of a year-end round-up of responses to the question “What is one thing people can do to make their website bettter?”", "date_published": "2021-12-09 11:25:04", "tags": [ "csstricks", "thirdparty", "scripts", "security", "performance", "frontend", "development", "javascript", "cookies", "privacy", "surveillance", "tracking", "medium:id=cef643cc37b7" ], "content_html": "

I’d like to tell you something not to do to make your website better. Don’t add any third-party scripts to your site.

That may sound extreme, but at one time it would’ve been common sense. On today’s modern web it sounds like advice from a tinfoil-hat-wearing conspiracy nut. But just because I’m paranoid doesn’t mean they’re not out to get your users’ data.

All I’m asking is that we treat third-party scripts like third-party cookies. They were a mistake.

Browsers are now beginning to block third-party cookies. Chrome is dragging its heels because the same company that makes the browser also runs an advertising business. But even they can’t resist the tide. Third-party cookies are used almost exclusively for tracking. That was never the plan.

In the beginning, there was no state on the web. A client requested a resource from a server. The server responded. Then they both promptly forgot about it. That made it hard to build shopping carts or log-ins. That’s why we got cookies.

In hindsight, cookies should’ve been limited to a same-origin policy from day one. That would’ve solved the problems of authentication and commerce without opening up a huge security hole that has been exploited to track people as they moved from one website to another. The web went from having no state to having too much.

Now that vulnerability is finally being closed. But only for cookies. I would love it if third-party JavaScript got the same treatment.

When you add any third-party file to your website—an image, a style sheet, a font—it’s a potential vector for tracking. But third-party JavaScript files go one further. They can execute arbitrary code.

Just take a minute to consider the implications of that: any third-party script on your site is allowing someone else to execute code on your web pages. That’s astonishingly unsafe.

It gets better. One of the pieces of code that this invited intruder can execute is the ability to pull in other third-party scripts.

You might think there’s no harm in adding that one little analytics script. Or that one little Google Tag Manager snippet. It’s such a small piece of code, after all. But in doing that, you’ve handed over your keys to a stranger. And now they’re welcoming in all their shady acquaintances.
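To make that concrete, here’s a sketch with made-up domains. You add one script element:

<script src=\"https://analytics.example.com/tracker.js\"></script>

And inside tracker.js, nothing stops code like this from running:

var script = document.createElement('script');
script.src = 'https://partner.example.net/more-tracking.js';
document.head.appendChild(script);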

Request Map Generator is a great tool for visualizing the resources being loaded on any web page. Try pasting in the URL of an interesting article from a news outlet or magazine that someone sent you recently. Then marvel at the sheer size and number of third-party scripts that sneak in via one tiny script element on the original page.

That’s why I recommend that the one thing people can do to make their website better is to not add third-party scripts.

Easier said than done, right? Especially if you’re working on a site that currently relies on third-party tracking for its business model. But that exploitative business model won’t change unless people like us are willing to engage in a campaign of passive resistance.

I know, I know. If you refuse to add that third-party script, your boss will probably say, “Fine, I’ll get someone else to do it. Also, you’re fired.”

This tactic will only work if everyone agrees to do what’s right. We need to have one another’s backs. We need to support one another. The way people support one another in the workplace is through a union.

So I think I’d like to change my answer to the question that’s been posed.

The one thing people can do to make their website better is to unionize.

" } , { "id": "18580", "url": "https://adactio.com/article/18580", "title": "The State Of The Web", "summary": "The opening presentation from An Event Apart Spring Summit held online in April 2021.", "date_published": "2021-11-02 17:02:21", "tags": [ "conference", "presentation", "transcript", "aea", "aneventapart", "frontend", "development", "web", "history", "medium:id=ff0ad070c19a" ], "content_html": "

Hello, my friends. I’d like us to try to collectively achieve something today. What I’d like us to achieve is a sense of perspective.

To do this we need to take a step back and cast an eye on the past.

For example, I can look back and say “Wow, what a terrible year!”

A year of death. A year of polarisation. Of inequality. A corrupt government. Protests in the street as people struggled to fight against systemic racism.

Yes, I am of course talking about the year 1968.

The past

By the end of 1968, the United States of America was a nation in turmoil. Civil rights. The war in Vietnam. It felt like the polarising issues of the day were splitting the country in two.

But in the final week of the year, something happened that offered a sense of perspective.

In an audacious move, NASA decided to bring forward the schedule of its Apollo programme. Apollo 7 was a success but that mission was confined to Earth orbit. For Apollo 8, human beings would leave Earth’s orbit for the first time in history. The bold plan was to fly to and around the moon before returning safely to Earth.

From today’s perspective, you might just see it as a dry run for Apollo 11 when human beings would step foot on the moon. But at the time, it was an unbelievably bold move. A literal moonshot.

On the winter solstice, December 21st 1968, Jim Lovell, Frank Borman, and Bill Anders were launched on their six day mission to the moon and back.

The mission was a success. Everything went according to plan. But the reason why we remember the Apollo 8 mission today is for something that wasn’t planned.

First of all, after the translunar injection when the crew had left Earth orbit and were on their way to the moon (already the furthest distance ever travelled by our species), someone—probably Bill Anders—pointed a camera back at Earth.

This was the first picture ever taken by a human being of the whole Earth. It’s quite a perspective-setting sight, seeing the whole Earth. To us today, it’s almost commonplace. But remember that there was a time when no one had ever seen this view.

In fact, throughout the 1960s activist Stewart Brand had a campaign, handing out buttons with the question, “Why haven’t we seen a photograph of the whole Earth yet?”

I like the “yet” at the end of that. It gives it a conspiracy-tinged edge.

Stewart Brand suspected that if people could see their home planet in one image, it could reset their perspectives. They would truly grok the idea of Spaceship Earth, as Buckminster Fuller would say. The idea came to Brand when he was on a rooftop, tripping on acid, experiencing the horizon curve away from him and giving him quite a sense of perspective.

Later, he would start the Whole Earth Catalog. It was like a print version of Wikipedia, with everything you needed to know to run a commune.

Later still, he went on to found the Long Now Foundation, an organisation dedicated to long-term thinking. I’m a proud member.

Their most famous project is the clock of the long now, which will keep time for 10,000 years. This is just a scale model in the Science Museum in London. The full-size clock is being built inside a mountain on geologically stable ground. Just thinking about the engineering challenges involved is bound to give you a certain sense of perspective.

But let’s snap back from 10,000 years in the future to that Apollo 8 mission in December of 1968.

This picture of the whole earth wasn’t the most important picture taken by Bill Anders on that flight. By Christmas Eve, the crew had reached the moon and successfully entered lunar orbit.

Oh my God! Look at that picture over there! There’s the Earth coming up. Wow, that’s pretty.

Hey, don’t take that, it’s not scheduled.

You got a color film, Jim? Hand me that roll of color quick, would you…

Oh man, that’s…

Quick! Quick!

This is what Bill Anders captured.

Earthrise.

I could try to describe it. But they should’ve sent a poet.

Fifty years later, this poet puts it beautifully. This is Amanda Gorman’s poem Earthrise.

On Christmas Eve, 1968, astronaut Bill Anders

Snapped a photo of the earth

As Apollo 8 orbited the moon.

Those three guys

Were surprised

To see from their eyes

Our planet looked like an earthrise

A blue orb hovering over the moon’s gray horizon,

with deep oceans and silver skies.
It was our world’s first glance at itself

Our first chance to see a shared reality,

A declared stance and a commonality;
A glimpse into our planet’s mirror,

And as threats drew nearer,

Our own urgency became clearer,

As we realize that we hold nothing dearer

than this floating body we all call home.

Astronauts have been known to experience something called the overview effect. It’s a profound change in perspective that comes from seeing the totality of our home planet in all its beauty and fragility.

The Earthrise photograph gave the world a taste of the overview effect, right at a time when it was most needed.

The World Wide Web

I wonder if it’s possible to get an overview effect for the World Wide Web?

There is no photograph of the whole web. We can’t see the web. We can’t travel into space and look back at our online home.

But we can travel back in time. Let’s travel back to 1945.

That was the year that an article was published in The Atlantic Monthly by Vannevar Bush. He was a pop scientist of his day, like Neil deGrasse Tyson or Bill Nye.

The article was called As We May Think. In the article, Bush describes a hypothetical device called a memex.

Imagine a desk filled with reams and reams of microfilm. The operator of this device can find information and also make connections between bits of information, linking them together in whatever way makes sense to them.

This sounds a lot like hypertext. That word would be coined decades later by Ted Nelson to describe “text which is not constrained to be linear.”

Vannevar Bush’s idea of the memex and Ted Nelson’s ideas about hypertext would be a big influence on Tim Berners-Lee, the creator of the World Wide Web.

But his big breakthrough wasn’t just making hypertext into a reality. Other people had already done that.

Douglas Engelbart, who wanted to make the computer equivalent of the memex, had already demonstrated a working hypertext system in 1968 in an astonishing demonstration that came to be known as The Mother Of All Demos.

The idea of hypertext was kind of like a choose-your-own-adventure book. Individual pieces of text in a book are connected with unique identifiers and you can jump from one piece of text to another within the same book.

But what if you could jump between books? That’s the other piece of the puzzle.

The idea of connecting computers together came from the concept of “time sharing” allowing you to remotely access another computer.

With funding from the US Department of Defense’s Advanced Research Projects Agency, time sharing was taken to the next level with the creation of a computer network called the ARPANET.

It grew. And it grew. Until it was no longer just a network of computers. It was a network of networks. Or internetwork. Internet, for short.

Tim Berners-Lee took the infrastructure of the internet and mashed it up with the idea of hypertext. Instead of imagining hypertext as a book with interconnected concepts, he imagined a library of books where you could jump from one idea in one book to another idea in a completely different book in a completely different part of the library.

This was the World Wide Web. And Tim Berners-Lee called it the World Wide Web even when it only existed on his computer. You have to admire the chutzpah of that!

But the really incredible thing is that it worked! In March of 1989 he proposed a global hypertext system, where anybody could create new pages without asking anyone for permission, and anyone could access those pages no matter what kind of device or operating system they were using.

And that’s what we have today. While the World Wide Web might seem inevitable in hindsight, it was anything but. It is a remarkable achievement.

The World Wide Web was somewhat lacking in colour originally.

Colour

When I started making websites in the mid nineties, colour had arrived but it was somewhat limited.

We had a palette of 216 web-safe colours. You knew a colour was “web safe” if its hexadecimal notation was three pairs of duplicated digits from the set 0, 3, 6, 9, C, and F, like #336699. If you altered one of those values even slightly, there was no guarantee that the colour would display consistently on the monitors of the time.

I have a confession to make: I kind of liked this constraint in a weird way. To this day, if I have a colour value that’s almost web-safe, I can’t resist nudging it slightly.

Fortunately, monitors improved. They got flatter for one thing. They were also capable of displaying plenty of colours.

And we also got more and more ways of specifying colours. As well as hexadecimal, we got RGB: Red, Green, Blue. Better yet, we got RGBa …with alpha transparency. That’s opacity to you and me.

Then we got HSL: hue, saturation, lightness. Or should I say HSLa: hue, saturation, lightness, and alpha transparency.

And there are more colour spaces on the way. HWB (hue, whiteness, blackness), LAB, LCH. And there’s work on a color() function so you can specify even more colour spaces.
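Here’s the same purple written a few different ways; a sketch with a hypothetical class name, and approximate values for the newer colour spaces:

.swatch {
  color: #663399;                 /* hexadecimal */
  color: rgb(102, 51, 153);       /* RGB */
  color: rgba(102, 51, 153, 0.5); /* RGBa: with alpha transparency */
  color: hsl(270, 50%, 40%);      /* HSL */
  color: lch(32% 61 309);         /* LCH, approximately */
}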

Typography

In the beginning, typography on the World Wide Web was non-existent. Your browser used whatever was available on your operating system.

That situation continued for quite a while. You’d have to guess which fonts were likely to be available on Windows or Mac.

If you wanted to use a sans-serif typeface, there was Arial on Windows and Helvetica on the Mac. Verdana was a pretty safe bet too.

For a while your only safe option for a serif typeface was Times New Roman. When Matthew Carter’s Georgia was released, it was a godsend. Here was a typeface specifically designed for the screen.

Later Microsoft released another four fonts designed for the screen. Four new fonts! It felt like we were being spoiled.

But what if you wanted to use a typeface that didn’t come installed with an operating system? Well, you went into Photoshop and made an image of the text. Now the user had to download additional images. The text wasn’t selectable and it was a fixed width.

We came up with all sorts of clever techniques to do what was called “image replacement” for text. Some of the techniques involved CSS and background images. One of the techniques involved Flash. It was called sIFR: Scalable Inman Flash Replacement. A later technique called Cufón converted the letter shapes into paths in Canvas.

All of these techniques were hacks. Very clever hacks, but hacks nonetheless. They were clever and they worked but they always reminded me of Samuel Johnson’s description of a dog walking on its hind legs:

It is not done well but you are surprised to find it done at all.

What if you wanted to use an actual font file in a web page?

There was only one browser that supported font embedding: Microsoft’s Internet Explorer. The catch was that you had to use a proprietary font format called Embedded Open Type.

Both type foundries and browser makers were nervous about allowing regular font files to be embedded in web pages. They were worried about licensing. Wouldn’t this lead to even more people downloading fonts illegally? How would the licensing be enforced?

The impasse was broken with a two-pronged approach. First of all, we got a new font format called Web Open Font Format or WOFF. It could be used to take a regular font file and wrap it in a light veneer of metadata about licensing. There’s a sequel that’s even better than the original, WOFF2.

The other breakthrough was the creation of intermediary services like Typekit and Fontdeck. They would take care of serving the actual font files, making sure they couldn’t be easily downloaded. They could also keep track of numbers to ensure that type foundries were being compensated fairly.

Over time it became clear to type foundries that most web designers wanted to do the right thing when it came to licensing fonts. And so these days, you can probably license a font straight from a type foundry for use on the web and host it yourself.

You might need to buy a few different weights. Regular. Bold. Maybe italic. What about extra bold? Or a light weight? It all starts to add up, especially for the end user who has to download all those files.

I remember being at the web typography conference Ampersand years ago and hearing a talk from Nick Sherman. He asked us to imagine one single font file that could go from light to regular to bold and everything in between. What he described sounded like science fiction.

It is now science fact, indistinguishable from magic. Variable fonts are here. You can typeset text on the web to be light, or regular, or bold, or anything in between.

When you use CSS to declare the font-weight property, you can use keywords like “normal” or “bold” but you can also use corresponding numbers like 400 or 700. There’s a scale with nine options from 100 to 900. But why isn’t the scale simply one to nine?

Well, even though the idea of variable fonts would have been pure fantasy when this part of CSS was being specced, the authors had some foresight:

One of the reasons we chose to use three-digit numbers was to support intermediate values in the future.

With the creation of variable fonts, Håkon Wium Lie added:

And the future is now.

On today’s web you could have 999 font-weight options.
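A sketch of what that looks like in CSS, with a made-up font name:

@font-face {
  font-family: \"Example Variable\";
  src: url(\"example-variable.woff2\") format(\"woff2\");
  font-weight: 100 900; /* the whole range in one file */
}

h1 {
  font-family: \"Example Variable\", sans-serif;
  font-weight: 650; /* anywhere on the scale, not just the named stops */
}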

Images

In the beginning, the World Wide Web was a medium for text only. There were no images and certainly no videos.

In an early mailing list discussion, there was talk of creating a new HTML element for images. Perhaps it should be called “icon”. Or maybe it should be more generic and be called “embed”. Tim Berners-Lee said he imagined using the rel attribute on the A element for embedding images.

While this discussion was happening, Marc Andreessen popped in to say that he had just shipped a new HTML element in the Mosaic browser. It’s called IMG and it takes an attribute called SRC that points to the source of the image.

This was a self-closing tag so there was no way to put fallback content in between the opening and closing tags if the image couldn’t be displayed. So the ALT attribute was introduced instead to provide an alternative description of the image.

For the images themselves, there were really only two choices. JPG for photographic images. GIF for icons or anything that needed basic transparency. GIFs could also do animation and today, that’s pretty much all they’re used for. That’s because there was a concerted campaign to ditch the GIF format on the web. Unisys, who owned the rights to a compression algorithm used by the GIF format, had started to make noises about potentially demanding license fees for its use.

The Portable Network Graphics format—or PNG—was created in response. It was more performant and it allowed you to have proper alpha transparency.

These were all bitmap formats. What if you wanted a vector format for images that would retain crispness at any size or resolution? There was only one option: Flash. You’d have to embed a Flash movie in your web page just to get the benefit of vector graphics.

By the 21st century there were some eggheads working on a text-based vector file format that could be embedded in webpages, but it sounded like a pipe dream. It was called SVG for Scalable Vector Graphics. The format was dreamed up in 2001 but for years, not a single browser supported it. It was like some theoretical graphical Shangri-La.

But by 2011, every major browser supported it. Styleable, scriptable, animatable, vector graphics have gone from fantasy to reality.

There’s more choice in the world of bitmap images too. WebP is well supported. AVIF is gaining support.

The IMG element itself has grown too. You can use the srcset attribute to give the browser a range of images to choose from to best suit the user’s device and network connection. You can use the loading attribute to get lazy loading of images for free—no JavaScript required.
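Something like this, with made-up file names:

<img
  src=\"photo-small.jpg\"
  srcset=\"photo-small.jpg 600w, photo-large.jpg 1200w\"
  sizes=\"(min-width: 40em) 50vw, 100vw\"
  alt=\"A description of the photo\"
  loading=\"lazy\">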

We now have audio in HTML. No JavaScript required. We now have video in HTML. No JavaScript required.

These elements have been designed with more thought than the IMG element. They are not self-closing elements, by design. You can put fallback content between the opening and closing tags.
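A sketch, with a made-up file name:

<video src=\"interview.mp4\" controls>
  <p>Your browser doesn’t support video.
  <a href=\"interview.mp4\">Download the interview instead.</a></p>
</video>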

The audio and video elements arrived long after the IMG element. For a long time, there was no easy way to do video or audio on the web.

That was very frustrating for me. The first websites I ever built were for bands. The only way to stream music was with a proprietary plug-in like Real Audio.

Or Flash.

While the web standards were still being worked on, Flash delivered the goods with streaming audio and video. This happened over and over. Flash gave us vector graphics, animation, video, and more. But the price was lock-in. Flash was a proprietary format.

Still, Flash showed the web standards bodies the direction of travel. Flash was the hare. Web standards were the tortoise.

We know how that race ended.

In a way, Flash was like the Research and Development incubator for the World Wide Web. We got CSS animations, SVG, and streaming video because Flash showed that there was an appetite for them.

Until web standards provide a way to do something, designers and developers will reach for whatever tool gets the job done. Take layout, for example.

Layout

In the early days of the web, you could have any layout you wanted …as long as it was a single column.

Before long, HTML expanded to provide some rudimentary formatting for that single column of text. Presentational elements and attributes were invented. And even when elements and attributes weren’t meant to be used for formatting, people got creative.

Tables for layout. A single pixel GIF that could be given width and height. These were clever solutions. But they were hacks. And they were in danger of turning HTML into a presentational language instead of a language for structuring content.

CSS came to the rescue. A language specifically for presentation.

But we still didn’t get proper layout tools. There was a lot of debate in the early days about whether CSS should even attempt to provide layout tools or whether that was a job for a separate technology.

We could lay things out using the float property, but really that was just another hack.

Floats were an improvement over tables for layout, but we only swapped one tool for another. Our collective thinking still wasn’t very web-like.

For example, designers and developers insisted on building websites with a fixed width. This started in the era of table layouts and carried over into CSS.

To start with, the fixed width was 640 pixels. Then it was 800 pixels. Then people settled on the magical number of 960 pixels. Designers and developers didn’t seem at all concerned that people had different sized screens.

That was until the iPhone came out. It caused a panic. What fixed width were we supposed to design for now?

The answer was there all along. Even before the web appeared in mobile devices, it was possible to build fluid layouts that would adapt to screen size. It’s just that the majority of designers and developers chose not to build in this way.

I was pleased that mobile came along and shook things up. It exposed the assumptions that people were making. And it forced designers and developers to think in a more fluid, webby way.

Even better, CSS had expanded to include media queries so it was possible to alter layouts at different breakpoints.

Ethan came along and put a nice bow on it with his definition of responsive design: fluid media, fluid layouts, and media queries.
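The recipe is short; here’s a minimal sketch with hypothetical selectors. Fluid by default, then enhanced at a breakpoint:

main {
  width: 90%;
  margin: 0 auto;
}

@media (min-width: 50em) {
  main {
    display: flex;
    gap: 2em;
  }
}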

I fell in love with responsive web design instantly because it matched how I was already thinking about the web. I was one of the handful of weirdos who insisted on building fluid websites when everyone else was using fixed-width layouts.

But I thought that responsive web design would struggle to take hold.

I’m delighted to say that I was wrong. Responsive web design has become the default!

If I could go back to my past self in the mid 2000s, I’d love to tell them that in the future, everyone would be building with fluid layouts (and also that time travel had been invented apparently).

Not only that, but we finally have proper layout tools for the web. Flexbox. Grid. No more hacks. We’re even getting container queries soon (thanks, Miriam!).
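Grid can even do fluid layouts without any media queries at all. A sketch, with a hypothetical class name:

.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
  gap: 1em;
}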

Web browsers now are positively overflowing with fantastic design tools that would have been unimaginable to my past self. Support for these technologies is pretty much universal.

When browsers differ today, it’s only in terms of which standards they don’t yet support. There was a time when browsers differed massively in how they handled basic web technologies.

There was a time when being a web developer meant understanding all the different quirks between browsers.

And browser makers spent a ludicrous amount of time reverse-engineering the quirky behaviour of whichever browser was the market leader.

That changed with HTML5. We remember HTML5 for introducing new APIs, new form fields, and new structural elements. But the biggest innovation was completely invisible. For the first time, error-handling was standardised. Browsers had a set of rules they could work from. Once browsers adopted this consistent approach to error-handling, cross-browser differences dried up.

That was good news for web developers. We were sick of dealing with different browsers taking different approaches. We had been burned with JavaScript.

JavaScript

In the beginning, there was no scripting on the web, just like there was no styling. Tim Berners-Lee wasn’t opposed to the idea of executing arbitrary code on the web. But he pointed out that you’d need everyone to agree on which programming language browsers would use.

You need something really powerful, but at the same time ubiquitous. Remember a facet of the web is universal readership. There is no universal interpreted programming language.

This problem of which language to choose was solved in the usual way. Brendan Eich, who was working at Netscape, created a completely new programming language in just ten days. It would be called… LiveScript. Then the marketing department got involved and because Java was the new hotness at the time, this scripting language was renamed to… JavaScript. Even though it has nothing to do with Java. Java is to JavaScript as ham is to hamster.

The important thing is that multiple browsers implemented it. Then the hype started. We were told about this great new technology called DHTML. The D stood for dynamic! This would allow us to programmatically manipulate elements in a web page.

But… the two major browsers at the time, Netscape Navigator and Internet Explorer, used two completely incompatible syntaxes. For Netscape Navigator you’d use document.layers. For Internet Explorer it was document.all.
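As best I can reconstruct it, with a hypothetical element name, showing or hiding something looked like this in each browser:

// Netscape Navigator 4
document.layers['menu'].visibility = 'show';

// Internet Explorer 4
document.all['menu'].style.visibility = 'visible';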

This was when developers said enough was enough. We wanted standards. The Web Standards Project was formed and we lobbied browser makers to implement web standards, like CSS and also the Document Object Model. This was a standardised way of manipulating elements in a web page. You could use methods like getElementById and getElementsByTagName.

That worked fine, but it was yet another vocabulary to learn. If you already knew CSS, then you already understood how to get an element by ID and get elements by tag name, but with a different syntax.

John Resig created jQuery so that you could use the CSS syntax to do DOM scripting. There were lots of other JavaScript libraries released around the same time, but jQuery was by far the most popular. Clearly, this syntax was something that developers wanted.

Now we no longer need jQuery. We’ve got querySelector and querySelectorAll. But the reason we no longer need jQuery is because of jQuery. Just like Flash, jQuery showed what developers wanted. And just as with Flash, the web standards took more time. But now jQuery is obsolete …precisely because it was so successful.

It’s a similar story with Sass and CSS. There was a time when Sass was the only way to have a feature like variables. But now with custom properties available in CSS, Sass is becoming increasingly obsolete …precisely because it was so successful and showed the direction of travel.

In a way, jQuery and Sass (and maybe even Flash) were kind of like polyfills. That’s a term that my friend Remy Sharp coined for JavaScript libraries that fill in the gaps until browsers have implemented web standards.

By way of explanation to anyone in the States, Polyfilla is a brand name in the UK for what you call spackling paste. So JavaScript libraries like jQuery spackled over the gaps in browsers.

But some capabilities can’t be polyfilled. If a browser doesn’t provide API access to a particular sensor, for example, there’s no way to spackle that gap.

For quite a while, if you wanted access to device APIs, you’d have to build a native app. But over time, that has changed. Now browsers are capable of providing app-like experiences. You can get location data. You can access the camera. You can provide notifications. You can even make websites work offline using service workers.
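For instance, here’s a bare-bones offline fallback; a sketch assuming a cached /offline.html page:

// In your page
navigator.serviceWorker.register('/sw.js');

// In sw.js
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('static').then(function (cache) {
      return cache.add('/offline.html');
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    fetch(event.request).catch(function () {
      return caches.match('/offline.html');
    })
  );
});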

Native apps had all these capabilities before web browsers. Just as with Flash and jQuery, native apps pointed the way. The gap always looks insurmountable to begin with. But over time, the web always manages to catch up.

At the beginning of 2021, Ire said:

By the end of the year, I would predict that any major native mobile application could be instead built using native web capabilities.

The present

The web has come a long way. It has grown and evolved. Browsers have become more and more powerful while maintaining backward compatibility.

In the past we had to hack our way around the technological limitations of the web and we had a long wish list of features we wanted.

I’m not saying we’re done. I’m sure that more features will keep coming. But our wish list has shrunk.

The biggest challenges facing the World Wide Web today are not technical challenges.

Today it is possible to create beautiful websites that make full use of colour, typography, layout, animation, and more. But this isn’t what users experience.

This is what users experience. A tedious, frustrating game of whack-a-mole with websites that claim to value our privacy while asking us to relinquish it.

This is not a technical problem. It is a design decision. The decision might not be made by anyone with designer in their job title, but make no mistake, business decisions have a direct effect on user experience.

On the face of it, the problem seems to be with the business model of advertising. But that’s not quite right. To be more precise, the problem is with the business model of behavioural advertising. That relies on intermediaries to amass huge amounts of personal data so that they can supposedly serve up relevant advertising.

But contextual advertising, which serves up ads based on the content you’re looking at, doesn’t require the invasive collection of personal data. And it works. Behavioural advertising, despite being a huge industry that depends on people giving up their privacy, doesn’t even work very well. And on the few occasions when it does work, it just feels creepy.

The problem is not advertising. The problem is tracking. The greatest trick the middlemen ever pulled was convincing us that you can’t have effective advertising without tracking. That is false. But they’ve managed to skew our sense of perspective so that invasive advertising seems inevitable.

Advertising was always possible on the web. You could publish anything and an ad is just one more thing you could choose to publish. But tracking was impossible. That’s because the early web was stateless. A browser requests a resource from a server and once that transaction is done, they both promptly forget about it. That made it very hard to do things like online shopping or logging into an account.

Two technologies were created later that enabled state on the web. Cookies and JavaScript. If these technologies had been limited to a same origin policy, they would have nicely solved the problems of online shopping and authentication.

But these technologies work across domains. Third-party cookies and third-party JavaScript enable users to be tracked as they move from site to site. The web went from having no state to having too much.

There is hope. Browsers like Firefox and Safari are blocking third-party cookies by default. Personally, I’d love it if third-party JavaScript got the same treatment. You can also install add-ons to make your browser more secure, although these add-ons are often labelled ad-blockers, which is a shame. Because the problem is not advertising. The problem is tracking.

Perhaps none of this applies to you anyway. You may be thinking that this is a problem for websites. But you build web apps.

Personally I’m not keen on the idea of dividing the entirety of the World Wide Web into two vaguely-defined categories. I have yet to hear a good definition of “web app” other than “a website that requires JavaScript to work.”

But the phrase “single page app” has a more definite meaning. It refers to an architectural decision. That decision is to reinvent the web browser inside a web browser.

In a sense, it’s a testament to the power of JavaScript that you can choose to do this. Browsers render content and perform navigations, but if you’d rather recreate that functionality from scratch in JavaScript, you can.

But should you? Browsers have increased in complexity so that we can build without complexity. We can use the built-in power of modern HTML, CSS, and JavaScript to make web browsers do the work. If we work with the grain of the web, we can accomplish more and more with less and less code.

But that isn’t what happened. Instead developers have recreated form controls like dropdowns and datepickers from scratch using divs and lashings and lashings of JavaScript.

Perhaps this points to some missing features on the web. It’s still too hard to style native dropdowns and datepickers (but that’s being worked on—there’s standards work underway to give us more styling control over form elements). But that doesn’t explain why developers would choose to recreate something like a button using divs and JavaScript when the button element already exists and can be styled any way you like.

I think there’s a certain mindset being applied to web development here. And that mindset comes from the world of software. Again, it’s a testament to how far the web has come that it can be treated as a software platform on par with operating systems like iOS, Android, or Windows. There’s a lot to be learned from the world of software development, like testing, for example. But the web is different. When a user navigates to a URL, it shouldn’t feel like they’re installing a piece of software.

We should be aiming to keep our payloads as small as possible. And given how powerful browsers have become, we need fewer and fewer dependencies—fewer and fewer polyfills.

But performance has gotten worse. Payloads have gotten bigger. Dependencies like JavaScript frameworks have become more and more widespread even as they became less and less necessary.

When asked to justify the enormous payloads, web developers have responded by saying that users’ expectations have changed. That is correct, but not in the way that I think they mean.

When I talk to people about using the web—especially on mobile—their expectations are that they will have a terrible experience. That websites will be slow to load. And I guarantee you that none of them are saying, “Well I’d be annoyed if this were a website but seeing as this is a web app, I’m absolutely fine with this terrible experience.”

I said that the biggest challenges facing the World Wide Web today are not technical challenges. I think the biggest challenge facing the web today is people’s expectations.

There is no technical reason for websites or web apps to be so frustrating. But we have collectively led people to expect a bad experience on the web.

Our intentions may have been good. We thought users wanted nice page transitions and form elements that were on-brand. But if you talk to people, you find out that what they want is to accomplish their task without megabytes of JavaScript getting in the way.

There’s a great German word, “Verschlimmbessern”: the act of making something worse in the attempt to make it better. Perhaps we verschlimmbessert the web.

Let’s step back. Get some perspective. Instead of assuming that a single page app architecture is needed, ask what users need to accomplish. Instead of assuming you need a CSS framework or a JavaScript library, see what you can do in browsers today with native CSS and vanilla JavaScript. Don’t include a bunch of dependencies by default just in case you might need them. Instead, as Rachel puts it:

Stop solving problems you don’t yet have.

Lean into what web browsers can accomplish today. If you find something missing, that’s the time to reach for a library …but treat it like a polyfill. Whereas web standards stick around, every library and framework comes with a limited lifespan. Treat them as cattle, not pets.

I understand that tools and frameworks can make your life easier. And if we’re talking about server-side frameworks, then I say “Go for it.” Or if you’re using build tools that sit on your computer to do version control, linting, pre-processing, or transpiling, then I say “Go for it.”

But once you make users download tools or frameworks, you’re making them pay a tax for your developer convenience.

We need to value user needs above developer convenience. If I have the choice of making something the user’s problem or making it my problem, I’ll make it my problem every time. That’s my job.

We need to change people’s expectations of the World Wide Web, especially on mobile. Otherwise, the web will be lost.

The future

Two years ago, I had the great honour of being invited to CERN to mark the 30th anniversary of the original proposal for the World Wide Web. One of the other people there was the journalist Zeynep Tüfekçi. She was on a panel along with Tim Berners-Lee and other luminaries of the early web. At the end of the panel discussion, she was asked:

What would you tell the next generation about how to use this wonderful tool?

She replied:

If you have something wonderful, if you do not defend it, you will lose it. If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.

I believe that we can save the web. I believe that we can change people’s expectations. We’ll do that by showing them what the web is capable of.

It sounds like a moonshot. But, y’know, moonshots aren’t made possible by astronauts. They’re made possible by people like Poppy Northcutt in mission control. Katherine Johnson running the numbers. And Margaret Hamilton inventing the field of software engineering to create the software for the lunar lander. Individual people working together on something bigger than any one person.

There’s a story told about the first time President Kennedy visited NASA. While he was getting a tour of the place, he introduced himself to a janitor. And the president asked the janitor what he did. The janitor answered:

I’m helping put a man on the moon.

It’s the kind of story that’s trotted out by company bosses to make you feel good about having your labour exploited for the team. But that janitor’s loyalty wasn’t to NASA, an organisation. He was working for something bigger.

I encourage you to have that sense of perspective. Whatever company or organisation you happen to be working for right now, remember that you are building something bigger.

The future of the World Wide Web is in good hands. It’s in your hands.

" } , { "id": "18223", "url": "https://adactio.com/article/18223", "title": "Sci-fi and Me", "summary": "A talk about my personal relationship with science fiction literature, delivered at Beyond Tellerrand’s Stay Curious series in June 2021.", "date_published": "2021-06-19 13:54:38", "tags": [ "sci-fi", "sciencefiction", "books", "literature", "talk", "presentation", "btconf", "staycurious", "reading", "writing", "publishing", "medium:id=2e18ba48da8" ], "content_html": "

I’m going to talk about sci-fi, in general. Of course, there isn’t enough time to cover everything, so I’ve got to restrict myself.

First of all, I’m just going to talk about science fiction literature. I’m not going to go into film, television, games, or anything like that. But of course, in the discussion, I’m more than happy to talk about sci-fi films, television, and all that stuff. But for brevity’s sake, I thought I’ll just stick to books here.

Also, I can’t possibly give an authoritative account of all of science fiction literature, so it’s going to be very subjective. I thought what I can talk about is myself. In fact, it’s one of my favourite subjects.

So, that’s what I’m going to do. I’m going to talk about sci-fi and me.

So, let me tell you about my childhood. I grew up in a small town on the south coast of Ireland called Cobh. Here it is. It’s very picturesque when you’re looking at it from a distance. But I have to say, growing up there (in the 1970s and 1980s), there really wasn’t a whole lot to do.

There was no World Wide Web at this point. It was, frankly, a bit boring.

But there was one building in town that saved me, and that was this building here in the town square. This is the library. It was inside the library (amongst the shelves of books) that I was able to pass the time and find an escape.

It was here that I started reading the work, for example, of Isaac Asimov, a science fiction writer. He’s also a science writer. He wrote a lot of books. I think it might have even been a science book that got me into Isaac Asimov.

I was a nerdy kid into science, and I remember there was a book in the library that was essays and short stories. There’d be an essay about science followed by a short story that was science fiction, and it would keep going like that. It was by Isaac Asimov. I enjoyed those science fiction stories as much as the science, so I started reading more of his books, books about galactic empires, books about intelligent robots, detective stories but set on other planets.

There was a real underpinning of science to these books, hard science, in Isaac Asimov’s work. I enjoyed it, so I started reading other science fiction books in the library. I found these books by Arthur C. Clarke, which were very similar in some ways to Isaac Asimov in the sense that they’re very grounded in science, in the hard science.

In fact, the two authors used to get mistaken for one another in terms of their work. They formed an agreement. Isaac Asimov would graciously accept a compliment about 2001: A Space Odyssey and Arthur C. Clarke would graciously accept a compliment about the Foundation series.

Anyway, so these books, hard science fiction books, I loved them. I was really getting into them. There were plenty of them in the local library.

The other author that seemed to have plenty of books in the local library was Ray Bradbury. This tended to be more short stories than full-length novels and also, it was different to the Isaac Asimov and Arthur C. Clarke in the sense that it wasn’t so much grounded in the science. You got the impression he didn’t really care that much about how the science worked. It was more about atmosphere, stories, and characters.

These were kind of three big names in my formative years of reading sci-fi. I kind of went through the library reading all of the books by Isaac Asimov, Arthur C. Clarke, and Ray Bradbury.

Once I had done that, I started to investigate other books that were science fiction (in the library). I distinctly remember these books being in the library by Ursula K. Le Guin, The Left Hand of Darkness, and The Dispossessed. I read them and I really enjoyed them. They are terrific books.

These, again, are different to the hard science fiction of something like Isaac Asimov and Arthur C. Clarke. There were questions of politics and gender starting to enter into the stories.

Also, I remember there were two books by Alfred Bester, these two books, The Demolished Man and Tiger! Tiger! (also called The Stars My Destination). These were just wild. These were almost psychedelic.

I mean they were action-packed, but also, the writing style was action-packed. It was kind of like reading the Hunter S. Thompson of science fiction. It was fear and loathing in outer space.

These were opening my mind to other kinds of science fiction, and I also had my mind opened (and maybe warped) by reading the Philip K. Dick books that were in the library. Again, you got the impression he didn’t really care that much about the technology or the science. It was all about the stuff happening inside people’s heads, questioning what reality is.

At this point in my life, I hadn’t yet done any drugs. But reading Philip K. Dick kind of gave me a taste, I think, of what it would be like to do drugs.

These were also names that loomed large in my early science fiction readings: Ursula K. Le Guin, Alfred Bester, and Philip K. Dick.

Then there were the one-offs in the library. I remember coming across this book by Frank Herbert called Dune, reading it, and really enjoying it. It was spaceships and sandworms, but also kind of mysticism and environmentalism, even.

I remember having my tiny little mind blown by reading this book of short stories by Fredric Brown. They’re kind of like typical Twilight Zone short stories with a twist in the tale. I just love that.

I think a lot of science fiction short stories can almost be the natural home for it because there is one idea explored fairly quickly. Short stories are really good for that.

I remember reading stories about the future. What would the world be like in the year 1999? Like in Harry Harrison’s Make Room! Make Room! A tale of overpopulation that we all had to look forward to.

I remember this book by Walter M. Miller, A Canticle for Leibowitz, which was kind of a book about the long now (civilisations rising and falling). Again, it blew my little mind as a youngster and maybe started an interest I have to this day in thinking long-term.

So, this is kind of the spread of the science fiction books I read as a youngster, and I kept reading books after this. Throughout my life, I’ve read science fiction.

I don’t think it’s that unusual to read science fiction. In fact, I think just about anybody who reads has probably read science fiction because everyone has probably read one of these books. Maybe they’ve read Brave New World or 1984, some Kurt Vonnegut like Slaughterhouse 5 or The Sirens of Titan, the Margaret Atwood books like The Handmaid’s Tale, or Kazuo Ishiguro books.

Now, a lot of the time the authors of these books, who are mainstream authors, maybe wouldn’t be happy about having their works classified as sci-fi or science fiction. The term was maybe seen as a little downmarket, so sometimes people will try to argue that these books are not science fiction even though clearly the premise of every one of these books is science fictional. But it’s almost like these books are too good to be science fiction. There’s a little bit of snobbishness.

Brian Aldiss has a wonderful little poem, a little couplet to describe this attitude. He said:

“SF is no good,” they cry until we’re deaf.
“But this is good.”
“Well, then it’s not SF!”

Recently, I found out that there’s a term for these books by mainstream authors that cross over into science fiction, and these are called slipstream books. I think everyone at some point has read a slipstream science fiction book that maybe has got them interested in diving further into science fiction.

What is sci-fi?

Now, the question I’m really skirting around here is, what is sci-fi? I’m not sure I can answer that question.

Isaac Asimov had a definition. He said it’s that branch of literature which deals with the reaction of human beings to changes in science and technology. I think that’s a pretty good description of his books and the hard science fiction books of Arthur C. Clarke. But I don’t think that necessarily describes some of the other authors I’ve mentioned, so it feels a little narrow to me.

Pamela Sargent famously said that science fiction is the literature of ideas. There is something to that, like when I was talking about how short stories feel like a natural home for sci-fi because you’ve got one idea, you explore it in a short story, and you’re done.

But I also feel like that way of phrasing science fiction as the literature of ideas almost leaves something unsaid, like, it’s the literature of ideas as opposed to plot, characterisation, and all this other kind of stuff that happens in literature. I always think, why not both? You know. Why can’t we have ideas, plot, characters, and all the other good stuff?

Also, ideas aren’t unique to sci-fi. Every form of literature has to have some idea or there’s no point writing the book. Every crime novel has to have an idea behind it. So, I’m not sure if that’s a great definition either.

Maybe the best definition came from Damon Knight, who said sci-fi is what we point to when we say it. It’s an “I know it when I see it” kind of thing. I think there’s something to that.

Any time you come up with a definition of sci-fi, it’s always hard to draw hard lines between sci-fi and other adjacent genres like fantasy. They’re often spoken about together, sci-fi and fantasy. I think I can tell the difference between sci-fi and fantasy, but I can’t describe the difference. I don’t think there is a hard line.

Science fiction feels like it’s looking towards the future, even when it isn’t. Maybe the sci-fi story isn’t actually set in the future. But it feels like it’s looking to the future and asking, “What if?” whereas fantasy feels like it’s looking to the past and asking, “What if?” But again, fantasy isn’t necessarily set in the past, and science fiction isn’t necessarily set in the future.

You could say, “Oh, well, science fiction is based on science, and fantasy is based on magic,” but any sci-fi book that features faster-than-light travel is effectively talking about magic, not science. So, again, I don’t think you can draw those hard lines.

There are other genres that are very adjacent and cross over with sci-fi and fantasy, like horror. You get sci-fi horror, fantasy horror. What about any mainstream book that has magical realism to it? You could say that’s a form of fantasy or science fiction.

Ultimately, I think this question, “What is sci-fi?” is a really interesting question if you’re a publisher. It’s probably important for you to answer this question if you are a publisher. But if you are a reader, honestly, I don’t think it’s that important a question.

What is sci-fi for?

There’s another question that follows on from this, which is, “What is sci-fi for? What’s its purpose?” Is it propaganda for science, almost the way Isaac Asimov described it?

Sometimes, it has been used that way. In the 1950s and ’60s, it was almost like a way of getting people into science. Reading science fiction certainly influenced future careers in science, but that feels like a very limiting way to describe a whole field of literature.

Is sci-fi for predicting the future? Most sci-fi authors would say, “No, no, no.” Ray Bradbury said, “I write science fiction not to predict the future, but to prevent it.” But there is always this element of trying to ask what if and play out the variables into the future.

Frederik Pohl said, “A good science fiction story should be able to predict not the automobile but the traffic jam,” which is kind of a nice way of looking at how it’s not just prediction.

Maybe thinking about sci-fi as the literature of the future obscures the fact that most science fiction is really about today, the time it’s published. It might be set in the future but, often, it’s dealing with issues of the day.

Ultimately, it’s about the human condition. Really, so is every form of literature. So, I don’t think there’s a good answer for this either. I don’t think there’s an answer for the question, “What is sci-fi for?” that you could put all science fiction into.

Sci-fi history

Okay, so we’re going to avoid the philosophical questions. Let’s get down to something a bit more straightforward. Let’s have a history of science fiction and science fiction literature.

Caveats again: this is going to be very subjective; it’s just my history. It’s also going to be a very Western view because I grew up in Ireland, a Western country.

Where would I begin the history of science fiction? I could start with the myths and legends and religions of most cultures, which have some kind of science fiction or fantasy element to them. You know, the Bible, a work of fantasy.

1818

But if I wanted to start with what I would consider the modern birth of the sci-fi novel, I think Mary Wollstonecraft Shelley’s Frankenstein; or, The Modern Prometheus could be said to be the first sci-fi novel, and it invents a whole bunch of tropes that we still use to this day: the mad scientist meddling with powers beyond their control.

It’s dealing with electricity, and I talked about how sci-fi is often about topics of the day; this is when electricity is just coming on the scene. There were all sorts of questions about the impact of electricity, and science fiction was a way of exploring them.

It talks about reanimating the dead and, in a way, about artificial intelligence too. It set the scene for a lot of what was to come.

1860s, 1890s

Later, in the 19th Century, in the 1860s and then the 1890s, we have these two giants of early science fiction. In France, we have Jules Verne, and he’s writing books like 20,000 Leagues Under the Sea, From the Earth to the Moon, and Journey to the Centre of the Earth: adventure stories with technology often at the centre of them.

Then in England, we have H.G. Wells, and he’s creating entire genres from scratch. He writes The Time Machine, The War of the Worlds, The Invisible Man, and The Island of Doctor Moreau.

Over in America, you’ve got Edgar Allan Poe mostly doing horror, but there’s definitely sci-fi or fantasy aspects to what he’s doing.

1920s, 1930s

Now, as we get into the 20th Century, where sci-fi really starts to boom – even though the term doesn’t exist yet – is with the pulp fiction of the 1920s and 1930s. This is literally the pulp paper that cheap books were printed on. They were cheap to print. They were cheap for the authors, too. As in, the authors did not get paid much. People were just churning out these stories. There were pulp paperbacks and also magazines.

Hugo Gernsback, here in the 1920s, was the editor of Amazing Stories, and he talked about “scientifiction” stories. That was kind of his agenda.

Then later, in the 1930s, John W. Campbell became the editor of Astounding Stories. In 1937, he changed the name of it from Astounding Stories to Astounding Science Fiction. This is when the term really comes to prominence.

He does have an agenda. He wants stories grounded in plausible science. He wants that hard kind of science.

What you have here, effectively, is, yes, the genre getting this huge boost, but also you’ve got gatekeepers. You’ve got two old, white dude gatekeepers deciding what gets published and what doesn’t. It’s setting the direction.

1940s, 1950s

What happens next, though, is that a lot of science fiction does get published. A lot of good science fiction gets published in what’s known as the Golden Age of Science Fiction in the 1940s and 1950s. This, it turns out, is when authors like Isaac Asimov, Ray Bradbury, and Robert Heinlein are publishing those early books I was reading in the library. I didn’t realise it at the time, but they were books from the Golden Age of Science Fiction.

This tended to be the hard science fiction. It’s grounded in technology. It’s grounded in science. There tend to be scientific explanations for everything in the books.

1960s, 1970s

It’s all good stuff. It’s all enjoyable. But there’s an interesting swing of the pendulum in the 1960s and ’70s. This swing kind of comes from Europe, from the UK. This is known as the New Wave. That term was coined by Michael Moorcock in New Worlds magazine, which he edited.

It’s led by these authors like Brian Aldiss and J.G. Ballard where they’re less concerned with outer space and they’re more concerned with inner space: the mind, language, drugs, the inner world. It’s some exciting stuff, quite different to the hard science that’s come before.

Like I say, it started in Europe, but then there was also this wave of it in America, broadening the scope of what sci-fi could be. You got less gatekeeping and you got more new voices. You got Ursula K. Le Guin and Samuel R. Delany expanding what sci-fi could be.

1980s

That trend continued into the 1980s when you began to see the rise of authors like Octavia Butler who, to this day, has a huge influence on Afrofuturism. You’re getting more and more voices. You’re getting a wider scope of what science fiction could be.

I think the last big widening of sci-fi happened in the 1980s with William Gibson. He practically invented (from scratch) the genre of cyberpunk. If Mary Shelley was concerned with electricity then, by the 1980s, we were all concerned with computers, digital networks, and technology.

The difference with cyberpunk is that where the Asimov or Clarke story might be talking about someone in a position of power (a captain or an astronaut) and how technology impacts them, cyberpunk looks at technology at the street level, where the street finds its own uses for things. That was expanded into other things as well.

After the 1980s, we start to get the new weird. We get people like Jeff Noon, China Miéville, and Jeff VanderMeer writing stuff. Is it sci-fi? Is it fantasy? Who knows?

Today

Which brings us up to today. Today, we have, I think, a fantastic range of writers writing a fantastic range of science fiction, like Ann Leckie with her Imperial Radch stories, N.K. Jemisin with the fantastic Broken Earth trilogy, Yoon Ha Lee writing Machineries of Empire, and Ted Chiang with terrific short stories and his collections like Exhalation. I wouldn’t be surprised if, in the future, we look back on now as a true Golden Age of Science Fiction where it is wider, there are more voices and, frankly, more interesting stories.

Sci-fi subjects

Okay, so on the home stretch, I want to talk about the subjects of science fiction, the topics that sci-fi tends to cover. I’m going to go through ten topics of science fiction, list off what the topic is, name a few books, and then choose one book to represent that topic. It’s going to be a little tricky, but here we go.

Planetary Romance

Okay, so planetary romance is a sci-fi story that’s basically set on a single planet where the planet is almost like a character: the environment of the planet, the ecosystem of the planet. This goes back a long way. The Edgar Rice Burroughs stories of John Carter of Mars were kind of early planetary romance and even spawned a little sub-genre of Sword and Planet.

Brian Aldiss did a terrific trilogy called Helliconia, a series where the orbits of a star system are kind of the driving force behind the stories that take place over generations.

Philip Jose Farmer did this fantastic series (the Riverworld series). Everyone in history is reincarnated on this one planet with a giant river spanning it.

If I had to pick one planetary romance to represent the genre, I am going to go with a classic. I’m going to go with Dune by Frank Herbert. It really is a terrific piece of work.

All right.

Space Opera

Space opera – the term was intended to denigrate the genre but, actually, it’s quite fitting. Space opera is what you think of when you think of sci-fi. It’s intergalactic empires, space battles, and good rip-roaring yarns. You can trace it back to these early works by E.E. ‘Doc’ Smith. It’s the good ol’ stuff.

Space opera kind of fell out of favour for a while there, but it started coming back in the last few decades. We got some really great, hard sci-fi space opera from Alastair Reynolds and, more recently, Yoon Ha Lee with Ninefox Gambit – all good stuff.

But if I had to pick one space opera book to represent the genre, I’m going to go with Ancillary Justice by Ann Leckie. It is terrific. It’s like taking the best of Asimov, Clarke, and Ursula K. Le Guin and putting it all into one series – great stuff.

Generation starships

Now, in space opera, generally, they come up with some way of travelling around the galaxy at faster-than-light speed – warp speed or something like that – which makes it kind of a fantasy, really.

If you accept that you can’t travel faster than light, then maybe you’re going to write about generation starships. This is where you accept that you can’t zip around the galaxy, so you have to take your time getting from star system to star system, which means the journey takes multiple generations.

Brian Aldiss’s first book was a generation starship book called Non-Stop. But there’s one book that I think has the last word on generation starships, and it’s by Kim Stanley Robinson. It is Aurora. I love this book, a really great book. Definitely the best generation starship book there is.

Utopia

All right. What about writing about utopias? Funnily enough, there aren’t as many utopias as there are of their counterpart, dystopias. Maybe the most famous utopia in recent sci-fi is from Iain M. Banks with his Culture series. The Culture is a post-scarcity socialist utopia in space. It’s great galaxy-spanning space opera stuff.

What’s interesting, though, is most of the stories are not about living in a utopia because living in a post-scarcity utopia is, frankly, super boring. All the stories are about the edge cases – which are literally called Special Circumstances.

All good fun, but the last word on utopian science fiction must go to Ursula Le Guin with The Dispossessed. It’s an anarcho-syndicalist utopia – or is it? It depends on how you read it.

I definitely have some friends who read this like it was a manual and other friends who read it like it was a warning. I think, inside every utopia, there’s a touch of dystopia, and dystopias are definitely the more common topic for science fiction. Maybe it’s easier to ask, “What’s the worst that can happen?” than to ask, “What’s the best that can happen?”

Dystopia

A lot of the slipstream books would be based on dystopias, like Margaret Atwood’s terrific The Handmaid’s Tale. I remember being young and reading (in that library) Fahrenheit 451 by Ray Bradbury, a book about burning books – terrific stuff.

But I’m going to choose one. If I’m going to choose one dystopia, I think I have to go with a classic. It’s never been beaten. George Orwell’s 1984 is the last word on dystopias. It’s a fantastic work, a fantastic piece of literature.

I think George Orwell’s 1984 is what got a lot of people into reading sci-fi. With me, it almost went the opposite way. I was already reading sci-fi. But after reading 1984, I ended up going on to read everything ever written by George Orwell, which I can highly recommend. There’s no more sci-fi in there, but he’s a terrific writer.

Post-apocalypse

All right. Here’s another topic: the post-apocalypse story. You also get pre-apocalypse stories – you know, there’s a big asteroid coming or there’s a black hole in the centre of the Earth or something, and how do we live out our last days – but, generally, authors tend to prefer post-apocalyptic settings, whether that’s post-nuclear-war, post-environmental-catastrophe, or post-plague. Choose your disaster and then have a story set afterward.

J. G. Ballard writes stories about not enough water and too much water, and I think, basically, he wants to find a reason to put his characters in large, empty spaces because that’s what he enjoys writing about.

Very different, you’d have the post-apocalyptic stories of someone like John Wyndham, somewhat derided by Brian Aldiss as “cosy catastrophes”. Yes, the world is ending, but we’ll make it back home in time for tea.

At the complete other extreme from that, you would have something like Cormac McCarthy’s The Road, which is a relentlessly grim tale of the post-apocalypse.

I almost picked Margaret Atwood’s Oryx and Crake trilogy for the ultimate post-apocalyptic story, and it’s really great stuff: post-plague – a genetically engineered plague – very timely.

But actually, even more timely – and a book that’s really stayed with me – is Station Eleven by Emily St. John Mandel. Not just because the writing is terrific and it’s a plague book (so, yes, timely), but because it also tackles questions like: What is art for? What is the human condition all about?

Artificial intelligence

All right. Another topic that’s very popular amongst the techies: artificial intelligence. Actual artificial intelligence, not what we in the tech world call artificial intelligence, which is a bunch of if/else statements.

Stories of artificial intelligence are also very popular in slipstream books from mainstream authors: recently we had a book from Ian McEwan and a new book from Kazuo Ishiguro tackling this topic.

But again, I’m going to go back to the classic, right back to my childhood, and I’ll pick I, Robot, a collection of short stories by Isaac Asimov, where he first raises this idea of the three laws of robotics. “Robotics”, by the way, is a word he coined, derived from “robot”, which comes from Czech.

These three laws are almost like design principles for artificial intelligence. All the subsequent works in this genre kind of push at those design principles. It’s good stuff. Not to be confused with the movie of the same name.

First contact

Here’s another topic: first contact with an alien species. Well, sometimes the first contact doesn’t go well, and the original book on this is H.G. Wells’s The War of the Worlds. Every alien invasion book since then has kind of just been a reworking of The War of the Worlds. It’s terrific stuff.

For more positive views on first contact, Arthur C. Clarke dives in with books like Childhood’s End. In Rendezvous with Rama, what’s interesting is that we don’t actually contact the alien civilisation, but we have an artifact that we must decode and get information from. It’s good stuff.

More realistically, though, Solaris by Stanislaw Lem is frustrating because it’s realistic in the sense that we couldn’t possibly understand an alien intelligence. In the book – spoiler alert – we don’t.

For realism set in the world of today, Carl Sagan’s book Contact is terrific. Well worth a read. It really tries to answer what a first contact situation would look like today.

But I’ve got to pick one first contact story, and I’m actually going to go with a short story: Story of Your Life by Ted Chiang. I recommend getting the whole collection, Stories of Your Life and Others, and reading every short story in it because it’s terrific.

This is the short story that the film Arrival was based on, which is an amazing piece of work because I remember reading this fantastic short story and distinctly thinking, “This is unfilmable. This could only exist in literature.” Yet, they did a great job with the movie, which bodes well for the movie of Dune, which is also being directed by Denis Villeneuve.

Time travel

All right. Time travel as a topic. I have to say, I think that time travel is sometimes better handled in media like TV and movies than it is in literature. That said, you’ve got the original time travel story, The Time Machine. Again, H.G. Wells just made this stuff from scratch, and it really holds up. It’s a good book. I mean, it’s really more about class warfare than it is about time travel, but it’s solid.

Actually, I highly recommend reading a nonfiction book called Time Travel by James Gleick where he looks at the history of time travel as a concept in both fiction and in physics.

You’ve got some interesting concepts like Lauren Beukes’s The Shining Girls, whose premise is a time-travelling serial killer – a really interesting mashup of genres. You’ve got evidence showing up out of chronological sequence.

By the way, this is being turned into a TV show as we speak, as is The Peripheral by William Gibson, a recent book by him. It’s terrific.

What I love about this is that it’s a time travel story where the only thing that travels in time is information. But that’s enough with today’s technology, so it’s like time travel for remote workers. Again, very timely, as all of William Gibson’s stuff tends to be.

But if I’ve got to choose one, I’m going to choose Kindred by Octavia Butler because it’s just such a terrific book. To be honest, the time travel aspect isn’t the centre of the story, but it’s absolutely worth reading as just a terrific, terrific piece of literature.

Alternative history

Now, you’ve generally got two kinds of time travel. You’ve got the closed-loop time travel, which is kind of like a Greek tragedy: you try to change the past but, in trying to change it, you probably bring about the very thing you were trying to change. The Shining Girls was something like that.

Or you have the multiverse version of time travel, where going back in time forks the universe, and that’s what The Peripheral is about. That multiverse idea is explored in another subgenre, alternative history, which kind of asks, “What if something different had happened in history?” and then plays out the what-if from there. They’re also known as counterfactuals.

I remember growing up and going through the shelves of that library in Cobh and coming across this book, A Transatlantic Tunnel, Hurrah! by Harry Harrison. It’s set in a world where the American War of Independence failed, and now, in the modern day, the disgraced descendant of George Washington is in charge of building a transatlantic tunnel for the British Empire.

That tends to be the kind of premise that gets explored in alternative history: what if the other side had won the war? There’s a whole series of books set in a world where the South won the Civil War in the United States.

For my recommendation, though, I’m going to go with The Man in the High Castle, which is asking what if the other side won the war. In this case, it’s WWII. It’s by Philip K. Dick. I mean it’s not my favourite Philip K. Dick book, but my favourite Philip K. Dick books are so unclassifiable, I wouldn’t be able to put them under any one topic, and I have to get at least one Philip K. Dick book in here.

Cyberpunk

A final topic and, ooh, this is a bit of a cheat because it’s not really a topic – it’s a subgenre – cyberpunk. But as I said, cyberpunk deals with the topic of computers or, more specifically, networked computers, and there’s some good stuff like Neal Stephenson’s Snow Crash. Really ahead of its time. It definitely influenced a lot of people in tech.

Everyone I know that used to work at Linden Lab, the people who were making Second Life, says that when you joined, you were basically handed Snow Crash on your first day and told, “This is what we’re trying to build here.”

But if I’ve got to pick one cyberpunk book, you can’t beat the original Neuromancer by William Gibson. Just terrific stuff.

What’s interesting about cyberpunk is, yes, it’s dealing with the technology of computers and networks, but it’s also got this atmosphere, a kind of noir atmosphere that William Gibson basically created from scratch. Then a whole bunch of other genres spun off from that, asking, “Well, what if we could have a different atmosphere?” – genres like steampunk. It’s kind of like, “Well, what if the Victorians had computers and technology? What would that be like?”

Basically, if there’s a time in history that you like the aesthetic of, there’s probably a subgenre ending in the word “punk” that describes that aesthetic. You can go to conventions, and you can have your anime and your manga and your books and your games set in these kind of subgenres. They are generally, like I say, about aesthetics with the possible exception of solarpunk, which is what Steph is going to talk about.

Living in the future

I am going to finish with these books as my recommendations for a broad range of science fiction topics, from 50 years of reading science fiction. I think about what it would be like to go back and talk to my younger self, in that town on the south coast of Ireland, about the world of today. I’m sure it would sound like a science fictional world.

By the way, I wouldn’t go back in time to talk to my younger self because I’ve read enough time travel stories to know that that never ends well. But still, here we are living in the future. I mean this past year with a global pandemic, that is literally straight out of a bunch of science fiction books.

But also, just the discoveries and advancements we’ve made are science fictional. Like when I was growing up and reading science books in that library, we didn’t know if there were any planets outside our own solar system. We didn’t know if exoplanets even existed.

Now, we know that most solar systems have their own planets. We’re discovering them every day. It’s become commonplace.

We have sequenced the human genome, which is a remarkable achievement for a species.

And we have the World Wide Web, this world-spanning network of information that you can access with computers in your pockets. Amazing stuff.

But of all of these advancements by our species, if I had to pick the one that I think is in some ways the most science-fictional, the most far-fetched idea, I would pick the library. If libraries didn’t exist and you tried to make them today, I don’t think you could succeed. You’d be laughed out of the venture capital room, like, “How is that supposed to work?” It sounds absolutely ridiculous, a place where people can go and read books and take those books home with them without paying for them. It sounds almost too altruistic to exist.

But Ray Bradbury, for example – I know he grew up in the library. He said, “I discovered me in the library. I went to find me in the library.” He was a big fan of libraries. He said, “Reading is at the centre of our lives. The library is our brain. Without the library, you have no civilisation.” He said, “Without libraries, what have we? We have no past and no future.”

So, to end this, I’m not going to end with a call to read lots of sci-fi. I’m just going to end with a call to read – full stop. Read fiction, not just non-fiction. Read fiction. It’s a way of expanding your empathy.

And defend your local library. Use your local library. Don’t let your local library get closed down.

We are living in the future by having libraries. Libraries are science fictional.

With that, thank you.

" } , { "id": "17733", "url": "https://adactio.com/article/17733", "title": "Design Principles For The Web", "summary": "The opening presentation from An Event Apart Online Together: Front-End Focus held online in August 2020.", "date_published": "2021-01-04 11:53:21", "tags": [ "design", "principles", "process", "systems", "frontend", "development", "robustness", "resilience", "clearleft", "conference", "talk", "presentation", "aea", "aneventapart", "transcript", "medium:id=f4b051dfae3c" ], "content_html": "

I’d like to take you back in time, just over 100 years ago, at the beginning of World War One. It’s 1914. The United States would take another few years to join, but the European powers were already at war in the trenches, as you can see here.

\"A

What I want to draw your attention to is what they’re wearing, specifically what they’re wearing on their heads. This is the standard issue for soldiers at the beginning of World War One, a very fetching cloth cap. It looks great. Not very effective at stopping shrapnel from ripping through flesh and bone.

It wasn’t long before these cloth caps were replaced with metal helmets; much sturdier, much more efficient at protection. This is the image we really associate with World War One; soldiers wearing metal helmets fighting in the trenches.

\"A

Now, an interesting thing happened after the introduction of these metal helmets. If you were to look at the records from the field hospitals, you would see that there was an increase in the number of patients being admitted with severe head injuries after the introduction of these metal helmets. That seems odd, and the only conclusion we could draw seems to be that the cloth caps were actually better than the metal helmets at stopping these kinds of injuries. But that wouldn’t be correct.

You can see the same kind of data today. In any state that introduces motorcycle helmet laws making it mandatory to wear a helmet, you will see an increase in the number of emergency room admissions for severe head injuries among motorcyclists.

Now, in both cases, what’s missing is the complete data set because, yes, while in World War One there was an increase in the field hospital admissions for head injuries, there was a decrease in deaths. Just as today, if there’s an increase in emergency room admissions for severe head injuries because of motorcycle helmets, you will see a decrease in the number of people going to the morgue.

Analytics

I kind of like these stories of analytics where there’s a little twist in the tale where the obvious solution turns out not to be the correct answer and our expectations are somewhat subverted. My favorite example of analytics on the web comes from a little company called YouTube. This is from a few years back.

It was documented by an engineer at YouTube called Chris Zacharias. He blogged about this. He was really frustrated with the page weight on a YouTube video page, which at the time was 1.2 megabytes. That’s without the video. That’s the HTML, the CSS, the JavaScript, the images. This just seemed too big (and I would agree: it is too big).

Chris set about working on making a smaller version of a video page. He called this Project Feather. He worked and worked at it, and he managed to get a page down to just 98 kilobytes, so from 1.2 megabytes to 98 kilobytes. That’s an order of magnitude difference.

Then he set about shipping this to different segments of the audience and watching the analytics to see what rolled in. He was hoping to see a huge increase in the number of people engaging with the content. But here’s what he blogged.

The average aggregate page latency under Feather had actually increased. I had decreased the total page weight and number of requests to a tenth of what they were previously and somehow the numbers were showing that it was taking longer for videos to load on Feather, and this could not be possible. Digging through the numbers more (and after browser testing repeatedly), nothing made sense.

I was just about to give up on the project with my world view completely shattered when my colleague discovered the answer: geography. When we plotted the data geographically and compared it to our total numbers (broken out by region), there was a disproportionate increase in traffic from places like Southeast Asia, South America, Africa, and even remote regions of Siberia.

A further investigation revealed that, in those places, the average page load time under Feather was over two minutes. That means that a regular video page (at over a megabyte) was taking over 20 minutes to load.

Again, what was happening here was that there was a whole new set of data. There were people who literally couldn’t even load the page because it would take 20 minutes who couldn’t access YouTube who now, because of this Project Feather, for the first time were able to access YouTube. What that looked like, according to the analytics, was that page load time had overall gone up. What was missing was the full data set.
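As a back-of-the-envelope sanity check on those numbers (my own arithmetic, not anything from Chris’s post): if the 98 kilobyte Feather page took about two minutes on those connections, the effective throughput was roughly

98 KB ÷ 120 s ≈ 0.8 KB/s

At that speed, the regular 1.2 megabyte page (roughly 1,200 KB) works out at

1,200 KB ÷ 0.8 KB/s ≈ 1,500 s ≈ 25 minutes

So “over 20 minutes” is exactly what that kind of connection predicts.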

Expectations

I really like these stories that kind of play with our expectations. When the reveal comes, it’s almost like hearing the punchline to a joke, right? Your expectations are set up and then subverted.

Jeff Greenspan is a comedian who talks about this. He talks about expectations in terms of music and comedy. He points out that they both deal with expectations over time.

In music, the pleasure comes from your expectations being met. A song sets up a rhythm. When that rhythm is met, that’s pleasurable. A song is using a particular scale and when those notes on that scale are hit, it’s pleasurable. Music that’s not fun to listen to tends to be arrhythmic and atonal, where you can’t really get a handle on what’s going to come next.

Comedy works the other way where it sets up expectations and then pulls the rug out from under you — the surprise.

Now, you can use music and you can use comedy in your designs. If you’re setting up a lovely grid and a vertical rhythm, that’s like music. There’s a lovely, predictable feeling to that. But you can also introduce a bit of comedy: something that peeks out from the grid, where you just occasionally upset things with a bit of subverted expectations.

You don’t want something that’s all music. Maybe that’s a little boring. You don’t want something that’s all comedy because then it’s just crazy and hard to get a handle on.

You can see music and comedy in how you consume news. You notice that when you read your news sources, all it does is confirm what you already believe. You read something about someone, and you think, “Yes, they’ve done something bad and I always thought they were bad, so that has confirmed my expectations.” It’s like music.

I read something that somebody has done and I always thought they were a good person. This now confirms that they are a good person. That is music to my ears. If your news feels like that, feels like music, then you may be in a bubble.

The comedy approach to news would be more like the clickbait you see at the bottom of the Internet, where it’s like, “Click here. You won’t believe what these child stars look like now.” The promise there is that we will subvert your expectations, and that’s where the pleasure will come from.

Survivorship bias

My favorite story from history about analytics is not from World War One but from the sequel, World War Two, where again the United States were a few years late to this world war. But when they did arrive and started their bombing raids on Germany, they were flying from England. The bombers would come back all shot up, and so there was a whole think tank dedicated to figuring out how to reinforce these planes in certain areas.

You can’t reinforce the whole plane. That would make it too heavy, but you could apply some judicious use of metal reinforcement to protect the plane.

\"A

They treated this as a data problem, as an analytics problem. They looked at the planes coming back. They plotted where the bullet holes were, and that led them to conclude where they should put the reinforcements. You can see here that the wings were getting all shot up, the middle of the fuselage, so clearly that’s where the reinforcements should go.

There was a statistician, a mathematician named Abraham Wald. He looked at the exact same data and he said, “No, we need to reinforce the front of the plane where there are no bullet holes. We need to reinforce the back of the fuselage where there are no bullet holes.”

What he realized was that all the data they were seeing was actually a subset of the complete data set. They were only seeing the planes that made it back. What was missing were all the planes that got shot down. If all the planes that made it back didn’t have any bullet holes in the front of the plane, then you could probably conclude that if you get a bullet hole in the front of the plane, you’re not going to make it back.

This became the canonical example of what we now call survivorship bias, which is this tendency to look at the subset of data — the winners.

You see survivorship bias all the time. You walk into a bookstore and you look at the business section, and it’s books by successful business people; that’s survivorship bias. Really, the whole section should be ten times as big and feature ten times as many books written by people who had unsuccessful businesses because that would be a much more representative sample.

We see survivorship bias. You go onto Instagram and you look at people’s Instagram photos. Generally, they’re posting their best life, right? It’s the perfect selfie. It’s the perfect shot. It’s not a representative sample of what somebody’s life looks like. That’s survivorship bias.

Design systems

We have a tendency to do it on the web, too, when people publish their design systems. Don’t get me wrong. I love the fact that companies are making their design systems public. It’s something I’ve really lobbied for. I’ve encouraged people to do this. Please, if you have a design system, make it public so we can all learn from it.

I really appreciate that people do that, but they do tend to wait until it’s perfect. They tend to wait until they’ve got the success.

What we’re missing are all the stories of what didn’t work. We’re missing the bigger picture of the things they tried that just failed completely. I feel like we could learn so much from that. I feel like we can learn as much from anti-patterns as we can from patterns, if not more so.

Robin Rendle talked about this in a blog post recently about design systems. He said:

The ugly truth is that design systems work is not easy. What works for one company does not work for another. In most cases, copying the big tech company of the week will not make a design system better at all. Instead, we have to acknowledge how difficult our work is collectively. Then we have to do something that seems impossible today—we must publicly admit to our mistakes. To learn from our community, we must be honest with one another and talk bluntly about how we’ve screwed things up.

I completely agree. I think that would be wonderful if we shared more openly. I do try to encourage people to share their stories, successes, and failures.

I organized a conference a few years back all about design systems called Patterns Day and invited the best and brightest: Alla Kholmatova, Jina Anne, Paul Lloyd, Alice Bartlett – all these wonderful people. It was wonderful to hear people come up and sort of reassure you, “Hey, none of us have got this figured out. We’re all trying to figure out what we’re doing here.” The audience really needed to hear that. They really needed to hear that reassurance that this is hard.

Gaps and overlaps

I did Patterns Day again last year. My favorite talk at Patterns Day last year, I think, was probably from Danielle Huntrods. I’m biased here because I used to work with Danielle. She used to work at Clearleft, and she’s an absolutely brilliant front-end developer.

She had this lens that she used when she was talking about design systems and other things. She talked about gaps and overlaps, which is one of those things that’s lodged in my brain. I kind of see it everywhere.

She said that when you’re categorizing things, you’re putting things into categories, that means some things will fall between those categories. That leaves you with the gaps, the things that aren’t being covered. It’s almost like Donald Rumsfeld, the unknown unknowns and all that.

What can also happen when you put things into categories is you get these overlaps where there’s duplication; two things are responsible for the same task. This duplication of effort, of course, is what we’re trying to avoid with design systems. We’re trying to be efficient. We don’t want multiple versions of the same thing. We want to be able to reuse one component. There’s a danger there.

She’s saying what we do with the design system is we concentrate on cataloging these components. We do our interface inventory, but we miss the connective part. We miss the gaps between the components. Really, what makes something a system is not so much a collection of components but how those components fit together, those gaps between them.

Fluffy edges

Danielle went further. She didn’t just talk about gaps and overlaps in terms of design systems and components. She talked about it in terms of roles and responsibilities. If you have two people who believe they’re responsible for the same thing, that’s going to lead to a clash.

Worse, you’re working on a project and you find out that there was nobody responsible for doing something. It’s a gap. Everyone assumes that the other person was responsible for getting that thing done.

“Oh, you’re not doing that?”
“I thought you were doing that.”
“Oh, I thought you were doing that.”

This is the source of so much frustration in projects, either these gaps or these overlaps in roles and responsibilities. Whenever we start a project at Clearleft, we spend quite a bit of time getting this role mapping correct, trying to make sure there aren’t any gaps and there aren’t any overlaps. Really, it’s about surfacing those assumptions.

“Oh, I assumed I was responsible for that.”
“No, no. I assumed I was the one who would be doing that.”

We clarify this stuff as early as possible in the design process. We even have a game we play called Fluffy Edges. It’s literally like a card game. We’d ask these questions, “Who is responsible for this? Who is going to do this?” It’s kind of good fun, but really it is about surfacing those assumptions and getting clarity on the roles at the beginning of the design process.

The design process

Now, I’m talking about the design process like it’s this known thing, and it really isn’t. The design process is a notoriously difficult thing to talk about.

\"A

Here’s one way of thinking about the design process. This is The Design Squiggle by Damien Newman. He used to be at IDEO. I actually think this is a pretty accurate representation of what the design process feels like for an individual designer. You go into the beginning and it’s chaos, it’s a mess, and it’s entropy. Then, over time, you begin to get a handle on things until you get to this almost inevitable result at the end.

I’m not sure it’s an accurate representation of what the collaborative design process feels like. There’s a different diagram that resonates a lot with us at Clearleft, which is the Double Diamond diagram from Chris Vanstone at the Design Council. The way of thinking about the Double Diamond is almost like it’s two design squiggles back-to-back.

\"Two

It’s a bit of an oversimplification, but the idea is that the design process is split into these triangles. First, it’s the discovery. Then we define. So we’re going out wide with discovery. Then we narrow it down with the definition. Then it’s time to build a thing and we open up wide again to figure out how we’re going to execute this thing. Once we got that figured out, we narrow down into the delivery phase.

The way of thinking about this is the first diamond (discovery and definition), that’s about building the right thing. Make sure you’re building the right thing first. The second diamond (about execution and delivery), that’s about building the thing right. Building the right thing and building the thing right.

The important thing is they follow this pattern of going wide and going narrow. This divergent phase with discovery and then convergent for definition. There’s a divergent phase for execution and then convergent for delivery.

If you take nothing else from the Double Diamond approach, it’s this way of making explicit when you’re in a divergent or convergent phase. Again, it’s kind of about surfacing that assumption.

“Oh, I assumed we were converging.”
“No, no, no. We are diverging here.”

That’s super, super useful.

I’ll give you an example. If you are in a meeting, at the beginning of the meeting, state whether it’s a divergent meeting or a convergent meeting. If you’re in a meeting where the idea is to generate as many ideas as possible, make that clear at the beginning, because what you don’t want is somebody in the meeting who thinks the point is to converge on a solution.

You’ve got these people generating ideas and then there’s one person going, “No, that will never work. Here’s why. Oh, that’s technically impossible. Here’s why.” No, if you make it clear at the start, “There are no bad ideas. We’re in a divergent meeting,” everyone is on the same page.

Conversely, if it’s a convergent meeting, you need to make that clear and say, “The point of this meeting is that we come to a decision, one decision,” and you need to make that clear because what you don’t want in a convergent meeting is it’s ten minutes to launch time, converging on something, and then somebody in the meeting goes, “Hey, I just had an idea. How about if we…?” You don’t want that. You don’t want that.

If you take nothing else from this, this idea of making divergence and convergence explicit is really, really, really useful. Again, like I say, this pattern of just assumptions being surfaced is so useful.

This initial diamond of the Double Diamond phase, it’s where we spend a lot of our time at Clearleft. I think, early in the years of Clearleft, we spent more time on the second diamond. We were more about execution and delivery. Now, I feel like we deliver a lot more value in the discovery and definition phase of the design process.

There’s so much we do in this initial discovery phase. I mentioned already we have this fluffy edges game we play for role mapping to figure out the roles and responsibilities. We have things like a project canvas we use to collaborate with the clients to figure out the shape of what’s to come.

We sometimes run an exercise called a pre-mortem. I don’t know if you’ve ever done that. It’s like a post-mortem except you do it at the beginning of the project. It’s kind of a scenario planning.

You say, “Okay, it’s so many months after the launch and it’s been a complete disaster. What went wrong?” You map that out. You talk about it. Then once you’ve got that mapped out, you can then take steps to avoid that disaster happening.

Of course, what we do in the discovery phase, almost more than anything else, is research. You can’t go any further without doing the research.

Assumptions

All of these things, all of these exercises, these ways of working are about dealing with assumptions, either surfacing assumptions that we didn’t know were there or turning assumptions into hypotheses that can be tested. If you think about what an assumption is, it kind of goes back to expectations that I was talking about.

Assumptions are expectations plus internal biases. That gives you an assumption. The things that you don’t even realize you believe; they lead to assumptions. This can obviously be very bad. This is like you’ve got blind spots in your assumptions because of your own biases that you didn’t even realize you had.

They’re not necessarily bad things. Assumptions aren’t necessarily bad. If you think about your expectations plus your biases, that’s another way of thinking about your values. What do you hold to be really dear to you? The things that are self-evident to you, those are your values, your internal expectations and biases.

Values

Now, at Clearleft, we have our company values, our core values, the things we believe. I am not going to share the Clearleft values with you. There are two reasons for that.

One is that they’re Clearleft’s values. They are useful for us. That’s for us to know internally.

Secondly, there’s nothing more boring than a company sharing their values with you. I say nothing more boring. Maybe the only thing more boring than a company sharing their values is when a so-called friend tells you about a dream they had and you have to sit there and smile and nod politely while they tell you about something that is only of interest to them.

Purpose

These values are essentially what give you purpose, whether at an individual level – your personal moral values give you your purpose – or at the level of a company, an organization, or any endeavor. Think about the founding of a nation-state like the United States of America. You’ve got the Declaration of Independence. That encodes the values. That has the purpose. It’s literally saying, “We hold these truths to be self-evident.” Those are assumptions. Your purpose is something like the Declaration of Independence.

Principles

Then you get the principles, how you’re going to act. The Constitution would be an example of a collection of principles. These principles must be influenced by the purpose. Your values must influence the principles you’re going to use to act in the world.

Patterns

Then those principles have an effect on the final patterns, the outputs that you’ll see. In the case of a nation-state like America, I would say the patterns are the laws that you end up with. Those laws come from the principles encoded in the Constitution. The Constitution, those principles in the Constitution are influenced and encoded from the purpose in the Declaration of Independence.

The purpose influences the principles. The principles influence the pattern. This would be true in the case of software as well. You think about the patterns are the final interface elements, the user interface. Those are the patterns. Those have been influenced by the principles of that company, how they choose to act, and those principles are influenced by the purpose of that company and what they believe.

Design principles

This is why I find principles, in particular, to be fascinating because they sit in the middle. They are influenced by the purpose and they, in turn, influence the patterns. I’m talking about design principles, something I’m really into. I’m so into design principles, I actually have a website dedicated to design principles at principles.adactio.com.

Now, all I do on this website is collect design principles. I don’t pass judgment. I don’t say whether I think they’re good design principles or bad design principles. I just document them. That’s turned out to be a good thing to do over time because sometimes design principles disappear, go away, or get changed. I’ve got a record of design principles from the past.

For example, Google used to have a set of principles called Ten Things We Know to Be True — we know to be true, right? We hold these truths to be self-evident. That’s no longer available on the Google website, those ten things, those ten principles. One of them was, “You can make money without doing evil.” Like I said, that’s gone now. That’s not available on the Google website.

There was another set of design principles from Google that’s also not available anymore. That was called Ten Principles That Contribute to a Googley User Experience. I think we understand why those are no longer available. The sheer embarrassment of saying the word Googley out loud, I think.

I’ll tell you something I notice when I see design principles. Like I say, I catalog them without judgment, but I do have ideas. I think about what makes for good or bad design principles or sets of design principles.

Whenever I see somebody with a list that’s exactly ten principles, I’m suspicious. Like, “Really? That’s such a convenient round number. You didn’t have nine principles that contribute to a Googley user experience? You didn’t have 11 things that we know to be true? It happened to be exactly ten?” It feels almost like a bad code smell to me that it’s exactly ten principles.

Even the brilliant designer Dieter Rams has a fantastic set of design principles called Ten Principles for Good Design. But even there I have to think, “Hmm. That’s a bit convenient, isn’t it, that it’s exactly ten principles for good design? Isn’t it, Dieter?”

Now, just in case you think I’m being blasphemous by suggesting that Dieter Rams’ Ten Principles for Good Design is not a good set of design principles, I am not being blasphemous. I would be blasphemous if I pointed out that, in the Old Testament, God supposedly delivers 10 commandments. Not 9, not 11: exactly 10 commandments. Really, Moses, ten?

Anyway, what I’m talking about here is, like I say, almost like these code smells for design principles. Can we evaluate design principles? Are there heuristics for saying whether a design principle is a good design principle or a bad design principle?

Universal principles

To get meta about this, what I’m talking about is, are there design principles for design principles? I kind of think there are. I think you can evaluate design principles and say that’s a good one or that’s a bad one. You can evaluate them by how useful they are.

Let’s take an example. Let’s say you’ve got a design principle like this:

Make it usable.

That’s a design principle. I think this is a bad design principle. It’s not because I don’t agree with it. It’s actually a bad design principle because I agree with it and everyone agrees with it. It’s so agreeable that it’s hard to argue with and that’s not what a design principle is for.

Design principles aren’t these things to go, “Rah-rah! Yes! I feel good about this.” They are there to surface stuff and have discussions, have disagreements – get it out in the open.

Let’s say we took this design principle, “make it usable”, and rephrased it as something more contentious. Let’s say somebody had a design principle like:

Usability is more important than profitability.

Ooh! Now we’re talking.

See, I think this is a good design principle. I’m not saying I agree with it. I’m saying it’s a good design principle because what it has now is priority.

We’re saying something is valued more than something else, and that’s what you want from design principles: to figure out what the priorities of this organization are. What do they value? How are they going to behave?

I think this is a great phrasing for design principles. If you can phrase a design principle like this:

___, even over ___

Then that’s really going to make it clear what your values are. You can phrase a design principle as:

Usability, even over profitability.

That’s good.

Now you can have that discussion early on about whether everyone is on board with that. If there’s disagreement, you need to hammer that out and figure it out early on in the process.

Here’s another thing about this phrasing that I really like, “blank, even over blank.” It passes another test of a good design principle, which is reversibility. Rather than being a universal thing, a design principle should be reversible for a different organization.

One organization might have a design principle that says “usability, even over profitability,” and another organization, you can equally imagine having a design principle that says, “profitability, even over usability.” The fact that this principle is reversible like that is a good thing. That shows that it’s an effective design principle because it’s about priorities.
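If you wanted to get nerdy about it, you could even mechanise these smell tests. Here’s a minimal, purely illustrative sketch in TypeScript – the function names and the report shape are inventions for this example, not taken from any real tool – that looks for the “blank, even over blank” phrasing, derives the reversed version for the reversibility test, and flags a suspiciously convenient list of exactly ten:

// A purely illustrative sketch: these names are invented for this
// example, not taken from any real design-principles tool.

interface PrincipleReport {
  hasPriority: boolean; // does it trade one value off against another?
  reversed?: string;    // the rival organisation's version, if derivable
}

function checkPrinciple(principle: string): PrincipleReport {
  // Look for the "blank, even over blank" phrasing.
  const parts = principle.split(/,?\s*even over\s*/i);
  if (parts.length !== 2) {
    // "Make it usable" lands here: agreeable, but it states no priority.
    return { hasPriority: false };
  }
  const [valued, sacrificed] = parts.map((part) => part.trim());
  // The reversibility test: a different organisation could plausibly
  // adopt the opposite trade-off.
  return {
    hasPriority: true,
    reversed: `${sacrificed}, even over ${valued.toLowerCase()}`,
  };
}

function smellTest(principles: string[]): string[] {
  const smells: string[] = [];
  if (principles.length === 10) {
    smells.push("Exactly ten principles? A suspiciously convenient round number.");
  }
  for (const principle of principles) {
    if (!checkPrinciple(principle).hasPriority) {
      smells.push(`"${principle}" states no priority, so nobody can disagree with it.`);
    }
  }
  return smells;
}

checkPrinciple("Usability, even over profitability").reversed;
// "profitability, even over usability"
smellTest(["Make it usable"]);
// [ '"Make it usable" states no priority, so nobody can disagree with it.' ]

Toy as it is, it captures the point: a principle only passes if it has a priority baked in, and the reversed output is what a rival organisation could plausibly adopt.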

My favorite design principle of all—because I’m such a nerd for design principles, I do have a favorite—is from the HTML design principles. It’s called The Priority of Constituencies. It states:

In case of conflict, consider users over authors over implementors over theoretical purity.

That’s so good.

First of all, it just starts with, “In case of conflict.” Yes! That is exactly what design principles are for. Again, they’re not there to be like, “Rah-rah! Feel-good design principles.” No, they are there to sort out conflict.

Then, “consider users over authors.” That’s like:

Users, even over authors. Authors, even over implementors. Implementors, even over theoretical purity.

Really good stuff.

There are, I think, design principles for design principles, these kind of smell tests that you can run your design principles past and see if they pass or fail.

I talked about how design principles are unique to the organization. The reversibility test kind of helps with that. You can imagine a different organization that has the complete opposite design principles to you.

Eponymous laws

I do wonder: are there some design principles that are truly universal? Well, there’s a whole category of principles that we treat as universal truths: the eponymous laws. They’re usually named after a person and they capture some kind of universal truth. There are a lot of them out there.

Hofstadter’s law

Hofstadter’s law, that’s from Douglas Hofstadter. Hofstadter’s law states:

It always takes longer than you expect, even when you take into account Hofstadter’s law.

That does sound like a universal truth and certainly, my experience matches that. Yeah, I would say Hofstadter’s law feels like a universal design principle.

Sturgeon’s law

90% of everything is crap.

Theodore Sturgeon was a science fiction writer. People would pooh-pooh science fiction and point out that it was crap. He would say, “Yeah, 90% of science fiction is crap because 90% of everything is crap.” That became Sturgeon’s law.

Yeah, you look at movies, books, and music. It’s hard to argue with Sturgeon’s law. Yeah, 90% of everything is crap. That feels like a universal law.

Murphy’s law

Here’s one we’ve probably all heard of. Murphy’s law:

Anything that can go wrong will go wrong.

It tends to get treated as this funny thing but, actually, it’s a genuinely useful design principle and one we could use on the web a lot more.

Cole’s law

There’s Cole and Cole’s law. You’ve probably heard of that. That’s:

Shredded raw cabbage with a vinaigrette or mayonnaise dressing.

Cole’s law.

Moving swiftly on, there’s another category of these laws, these universal principles, with a different phrasing: the idea of a razor. Like “in case of conflict”, a razor is explicit: when you’re trying to choose between two explanations or options, it tells you which to choose.

Hanlon’s razor

Hanlon’s razor is a famous example that states:

Never attribute to malice that which can be adequately explained by incompetence.

If you’re trying to find a reason for something, don’t go straight to assuming malice. Incompetence tends to be a greater force in the world than malice.

I think it’s generally true, although, there’s also a law by Arthur C. Clarke, Clarke’s third law, which states that, “Any sufficiently advanced technology is indistinguishable from magic.” If you take Clarke’s third law and you mash it up with Hanlon’s razor, then the result is that any sufficiently advanced incompetence is indistinguishable from malice.

Occam’s razor

Another razor that we hear about a lot is Occam’s razor. This is very old; it goes back to William of Occam. Sometimes it’s misrepresented as saying the most obvious solution is the correct solution. We know that’s not true because we saw, in the stories of the metal helmets in World War One, the motorcycle helmets, the bombers in World War Two, and the YouTube videos, that it’s not about the most obvious solution.

What Occam’s razor actually states is:

Entities should not be multiplied without necessity.

In other words, if you’re coming up with an explanation for something and your explanation requires that you now have to explain even more things—you’re multiplying the things that need to be explained—it’s probably not the true thing.

If your explanation for something is “aliens did it,” well, now you’ve got to explain the existence of aliens and explain how they got here and all this. You’re multiplying the entities. Most conspiracy theories fail the test of Occam’s razor because they unnecessarily multiply entities.

World Wide Web

So there are these universal design principles we can borrow. I also think we can borrow from specific projects and find things that would apply to us. Certainly, when we’re building things on the World Wide Web, we could look at the design principles that informed the World Wide Web when it was being built by Tim Berners-Lee, who created it, and Robert Cailliau, who worked with him.

The World Wide Web started life at CERN in 1989 as just a proposal. Tim Berners-Lee wrote this really quite boring memo called “Information Management: A Proposal”, with indecipherable diagrams on it, on March 12, 1989. His supervisor, Mike Sendall, saw the proposal and must have seen the possibility in it, because he scrawled across the top:

Vague but exciting.

Tim Berners-Lee did get the go-ahead to work on this project, this World Wide Web project, and he created the first web browser. He created the first web server. He created HTML.

\"I

You can see the world’s first web server in the Science Museum in London. It’s this NeXTcube. NeXT was the company that Steve Jobs formed after leaving Apple.

I have a real soft spot for this machine because I was very lucky to be invited to CERN last year to take part in this project where we were trying to recreate the experience of using that first web browser that Tim Berners-Lee created on that NeXT machine. You can go to this website worldwideweb.cern.ch and you can see what it feels like to use this web browser. You can use a modern browser with this emulation inside of it. It’s really good fun.

My colleagues were spending their time actually doing the hard work. I spent most of my time working on the website about the project. I built this timeline because I was fascinated about what was influencing Tim Berners-Lee.

\"Timeline\"

It’s kind of easy to look at the 30 years of the web, but I thought it would be more interesting to also look back at the 30 years before the web and see what influenced Tim Berners-Lee when it came to networks, hypertext, and format. Were there design principles that he adhered to?

We don’t have to look far because Tim Berners-Lee himself has published design principles (ones he formulated or borrowed from elsewhere) in a document called Axioms of Web Architecture. I think he first published this in 1998. These are really useful things that we can take and apply when we’re building on the web.

Particularly, now I’m talking about the second diamond of the Double Diamond. When we are choosing how we’re going to execute something or how we’re going to deliver it, building the thing right, that’s when these design principles come in handy.

Tim Berners-Lee was borrowing from what had come before, the existing creations that the web is built on top of, like the Internet and computing. He said:

Principles such as simplicity and modularity are the stuff of software engineering.

So he borrowed those principles about simplicity and modularity.

He also said:

Decentralization and tolerance are the life and breath of the Internet.

Those principles, tolerance and decentralization, they’d proven themselves to work on the Internet. The web is built on top of the Internet. So, it makes sense to carry those principles forward on the World Wide Web.

Robustness

That principle of tolerance, in particular, is something I think you really see on the web. It comes from the principles underlying the Internet. In particular, from this person, Jon Postel, who was responsible for maintaining the Domain Name System, DNS. He has an eponymous law, also known as the Robustness Principle or Postel’s law. It states:

Be conservative in what you send. Be liberal in what you accept.

Now, he was talking about packet switching on the Internet: if you’re going to send a packet over the Internet, try to make it as well-formed as possible. But on the other hand, when you receive a packet and it’s got errors or something, try to deal with it. Be liberal in what you accept.

I see this at work all the time on the web, not just in technical terms but in terms of UX and usability. The example I always use is a form on the web. Be conservative in what you send: send as few form fields as possible down the wire to the end user. But when the user is filling out that form, be liberal in what you accept: don’t make them enter their telephone number or credit card number in one particular format.
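To make that concrete, here’s a minimal sketch of being liberal in what you accept for a payment form; the field name and the normalizing code are my own invention, not from any particular library:

<label for="card">Card number</label>
<input type="text" id="card" inputmode="numeric" autocomplete="cc-number">

<script>
// Accept "4111 1111 1111 1111", "4111-1111-1111-1111", and so on,
// normalizing the input instead of rejecting it.
document.querySelector('#card').addEventListener('change', (event) => {
  event.target.value = event.target.value.replace(/[\s-]/g, '');
});
</script>

The user types whatever feels natural; the machine quietly does the formatting work instead of bouncing the form back.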

“Be conservative in what you send” matters when it comes to front-end development. Literally, in terms of what we’re sending down the wire to the end user, we should be more conservative. We don’t think about this enough: the sheer weight of the things we’re sending.

I was doing some consulting with a client and we did a kind of top four of where the weight was coming from. I think this applies to websites in general.

4: Web fonts

Coming in at number four, we had web fonts. They can get quite weighty, but we have ways of dealing with this now. We’ve got font-display in CSS. We can subset our web fonts. Variable fonts can be a way of reducing the size of fonts. So there are solutions; there are ways of handling it.
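For instance, here’s a minimal sketch of font-display (the font name and file path are placeholders):

@font-face {
  font-family: "Example Sans"; /* placeholder name */
  src: url("/fonts/example-sans.woff2") format("woff2");
  font-display: swap; /* show fallback text immediately, swap in the web font when it arrives */
}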

3: Images

At number three, images. Images do account for a lot of the sheer weight of the web. But again, we have solutions here. We’ve got responsive images with srcset and picture. Using the right format: not using a PNG if you should be using a JPEG, using WebP, using SVG where possible. We can deal with this. There are solutions out there, as long as we’re aware of it.
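As a quick sketch (the file names here are placeholders), srcset and picture let the browser pick the lightest appropriate file:

<picture>
  <source type="image/webp" srcset="photo-small.webp 600w, photo-large.webp 1200w" sizes="(min-width: 40em) 50vw, 100vw">
  <img src="photo-small.jpg" srcset="photo-small.jpg 600w, photo-large.jpg 1200w" sizes="(min-width: 40em) 50vw, 100vw" alt="A description of the photo">
</picture>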

2: Your JavaScript

At number two, your JavaScript: the JavaScript that you’ve written and send down the wire to the client. Between libraries and your own code, it’s gotten very, very weighty. This is bad, but not as bad as number one, which is other people’s JavaScript: third-party JavaScript.

1: Other people’s JavaScript

“Oh, the marketing department just wanted to add that one line, that one script that then pulls in another script that pulls in three more scripts.” Before you know it, it’s out of hand. Third-party JavaScript is really tough to deal with because so often it’s out of our hands. It’s like we don’t have control over that.

JavaScript

JavaScript is particularly troublesome because, with all the other things, images and web fonts, we’re talking about weight: the file size is the issue. That’s only part of the issue with JavaScript. Yes, we are sending too much JavaScript, but it’s also expensive in another way: the end user has to not just download that JavaScript but parse it and execute it. It’s particularly expensive compared to CSS, HTML, images, or fonts.

It is eating the world. We heard that software is eating the world. I’d say JavaScript is eating the world. There’s another eponymous law from Jeff Atwood. Atwood’s law states that:

Any application that can be written in JavaScript will eventually be written in JavaScript.

We’re seeing that now.

Back in my day, we used to joke about, “Well, you could never build a Photoshop in a web browser.” Now, everything is migrating to being written in JavaScript, which is kind of amazing and speaks to the power of JavaScript. It’s fantastic in one way, but it does feel like we’re using JavaScript to do everything, including things that could be done with other languages.

When it comes to choosing a language, there’s a fantastic design principle that Tim Berners-Lee used when he was designing the World Wide Web. It’s the principle of least power. The principle of least power states:

Choose the least powerful language suitable for a given purpose.

That sounds very counterintuitive. Why would you want to choose the least powerful language? Well, in a way, it’s about keeping things simple. There’s another design principle, “Keep it simple, stupid.” KISS.

It’s kind of related to Occam’s razor, not multiplying entities unnecessarily. Choose the simplest language. The simplest language is likely to be more universal and, because it’s simpler, it might not be as powerful but it’ll generally be more robust.

I’ll give you an example. I’ll quote from Derek Featherstone. He said:

In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

He’s absolutely right. This is about robustness here. It’s less fragile.

The classic example with ARIA: the best ARIA attribute is no ARIA attribute. Rather than having div role="button", just use a button. If you can do something in CSS rather than JavaScript, do it in CSS. Choose the least powerful language.

Instead, we’re using JavaScript to send our content down the wire; that could be done in HTML. We’re doing CSS-in-JS now, right? We’re using the most powerful language, JavaScript, to do everything, which kind of violates the principle of least power.

There’s a set of design principles from the Government Digital Service here in the U.K. and they’re really good design principles. One of them stuck out to me. The design principle itself says:

Do less.

By way of explanation, they say:

Government should only do what only government can do.

Government shouldn’t try to be all things to all people. Where private enterprise can do something, government doesn’t need to. Government should do only the things that only government can do.

I thought that this could be extrapolated out and made into a more universal design principle. You could say:

Any particular technology should only do what only that particular technology can do.

If that’s too abstract, let’s formulate it into this design principle:

JavaScript should only do what only JavaScript can do.

We can call this Keith’s Law or Keith’s Razor or something. I think it’s a good principle.

I remember the early uses of JavaScript for things like image rollovers and form validation. Now, I wouldn’t use JavaScript for image rollovers or hover effects; I’d use CSS. I wouldn’t use JavaScript for form validation if I can just use the required attribute or input type="email". Apply the principle of least power.
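A minimal sketch of that declarative approach to validation; the browser does the checking, no JavaScript required:

<form>
  <label for="email">Email</label>
  <!-- required and type="email" give you validation for free -->
  <input type="email" id="email" name="email" required>
  <button>Subscribe</button>
</form>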

Components

Let’s see whether we’re applying the principle of least power on the web. Take an example. Let’s say you’ve got a button component. How are you going to go about building it? You could have bare minimum HTML, just a div or a span. You’ve got some CSS to make it look good. Sure. Then you apply all the JavaScript and ARIA that you need to make it work like a button.

Or, alternatively, you could use a button and you style it however you want using CSS.

Now, in this example, this particular component, I would say it’s a no-brainer. You go with the native button element. Don’t make your own button component with a div and JavaScript and ARIA. Use a native button element.

Okay. That seems pretty straightforward and that is a perfect example of the principle of least power. Choose the least powerful language suitable for the purpose.
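Side by side, the two approaches might look something like this (a sketch; the class name is arbitrary):

<!-- Over-engineered: a div pretending to be a button -->
<div class="btn" role="button" tabindex="0">Save</div>
<!-- ...plus JavaScript to handle clicks, Enter, and the space bar. -->

<!-- Least power: focus, keyboard handling, and semantics come for free -->
<button class="btn">Save</button>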

But then what if you’ve got a drop-down component, selecting an option from a list of options? Well, you could build this using bare minimum HTML. Again, divs, maybe. You style it however you want it to look and you give it that opening and closing functionality. You give it accessibility using ARIA. Now you’ve got to think about making sure it works with a keyboard — all that stuff, all the edge cases.

Or you just use a select element — job done. You style it with CSS …Ah, well, yes, you can style it to a certain degree with CSS, but if you ever try to style the open state of a select element, you’re going to have a hard time.

Now, this is where it gets interesting. What do you care about more? Can you live with that open state not being styled exactly the way you might want it to be styled? If so, yes, choose the least powerful technology. Go with select. But I can kind of start to see why somebody would maybe roll their own in that case.

Or take this example: a date picker component. Again, you could have bare minimum HTML. Style it how you want. Write it all yourself using JavaScript. Make it accessible using ARIA. Or just use the native HTML input type="date" …and then have fun trying to style that in CSS. You won’t be able to do much, to be honest.

Do you still pick the least powerful technology here?

This would be kind of the under-engineered approach: just use the native HTML: input type="date", select, button.

The over-engineered approach is to do it all yourself: write the JavaScript to make it behave exactly the way you want.

It feels like there’s this pendulum swing between the over-engineered and the under-engineered. Like I say, what it comes down to is: what do you prioritize?

Universality

What you get with the native approach is access. You get that universality by using the least powerful language; there’s more universal support.

What you get by rolling your own is much more control. You’re moving along a spectrum from least power to most power, and that’s also a spectrum from most available (widest access) to least available, but with more control.

You have to decide where your priorities lie. This is where I think, again, we can look at the web and we can take principles from the web.

Eric said something recently that really resonates with me:

The web does not value consistency. The web values ubiquity.

That’s the purpose of the web. It’s the universal access. That’s the value encoded into it.

To put this in another way, we could formulate it as:

Ubiquity, even over consistency.

That’s the design principle of the web.

This passes the reversibility test. We can picture other projects that would say:

Consistency, even over ubiquity.

Native apps value consistency, even over ubiquity. iOS apps are very consistent on iOS devices, but just don’t work at all on Android devices. They’re consistent; they’re not ubiquitous.

We saw this in action with Flash and the web. Flash valued consistency, but you had to have the Flash plugin installed, so it was not ubiquitous. It was not universal.

The World Wide Web is about ubiquity, even over consistency. I think we should remember that.

When we look here in the world’s first-ever web browser, we are looking at the world’s first-ever webpage, which is still available at its original URL. That’s incredibly robust.

What’s amazing is that you can not only look at the world’s first webpage in the world’s first web browser; you can look at the world’s first webpage in a modern web browser and it still works. That’s kind of amazing. If you took a word processing document from 30 years ago and tried to open it in a modern word processor, good luck. It just doesn’t work that way. But the web values ubiquity over consistency.

Let’s apply those principles, apply the principle of least power, apply the robustness principle. Value ubiquity even over consistency. Value universal access over control. That way, you can make products and services that aren’t just on the web, but of the web.

Thank you.

" } , { "id": "17701", "url": "https://adactio.com/article/17701", "title": "npm ruin dev", "summary": "This was originally published on CSS Tricks in December 2020 as part of a year-end round-up of responses to the question “What is one thing you learned about building websites this year?”", "date_published": "2020-12-16 13:56:31", "tags": [ "css", "html", "javascript", "2020", "frontend", "development", "clearleft", "podcast", "process", "build", "tools", "dependencies", "pipeline", "entropy", "longevity", "medium:id=2f1c188843c3" ], "content_html": "

In 2020, I rediscovered the enjoyment of building a website with plain ol’ HTML, CSS, and JavaScript—no transpilin’, no compilin’, no build tools other than my hands on the keyboard.

Seeing as my personal brand could be summed up as “so late to the game that the stadium has been demolished”, I decided to start a podcast in 2020. It’s the podcast of my agency, Clearleft, and it has been given the soaringly imaginative title of The Clearleft Podcast. I’m really pleased with how the first season turned out. I’m also really pleased with the website I put together for it.

The website isn’t very big, though it will grow with time. I had a think about what the build process for the site should be and after literally seconds of debate, I settled on a build process of none. Zero. Nada.

This turned out to be enormously liberating. It felt very hands-on to write the actual HTML and CSS that will be delivered to end users, without any mediation. I felt like I was getting my hands into the soil of the site.

CSS has evolved so much in recent years—with features like calc() and custom properties—that you don’t have to use preprocessors like Sass. And vanilla JavaScript is powerful, fully-featured, and works across browsers without any compiling.
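To give a flavor of that (a sketch; the property and class names are my own), custom properties and calc() cover a lot of what I used to reach for a preprocessor to do:

:root {
  --spacing: 1rem; /* a custom property instead of a Sass variable */
}

.sidebar {
  /* calc() instead of preprocessor arithmetic */
  width: calc(100% - (var(--spacing) * 2));
}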

Don’t get me wrong—I totally understand why complicated pipelines are necessary for complicated websites. If you’re part of a large team, you probably need to have processes in place so that everyone can contribute to the codebase in a consistent way. The more complex that codebase is, the more technology you need to help you automate your work and catch errors before they go live.

But that set-up isn’t appropriate for every website. And all those tools and processes that are supposed to save time sometimes end up wasting time further down the road. Ever had to revisit a project after, say, six or twelve months? Maybe you just want to make one little change to the CSS. But you can’t because a dependency is broken. So you try to update it. But it relies on a different version of Node. Before you know it, you’re Bryan Cranston changing a light bulb. You should be tweaking one line of CSS but instead you’re battling entropy.

Whenever I’m tackling a problem in front-end development, I like to apply the principle of least power: choose the least powerful language suitable for a given purpose. A classic example would be using a simple HTML button element instead of trying to recreate all the native functionality of a button using a div with lashings of ARIA and JavaScript. This year, I realized that this same principle applies to build tools too.

Instead of reaching for an all-singing, all-dancing toolchain by default, I’m going to start with a boring baseline. If and when that becomes too painful or unwieldy, then I’ll throw in a task runner. But every time I add a dependency, I’ll be limiting the lifespan of the project.

My new year’s resolution for 2021 will be to go on a diet. No more weighty node_modules folders; just crispy and delicious HTML, CSS, and JavaScript.

" } , { "id": "16334", "url": "https://adactio.com/article/16334", "title": "Building", "summary": "The opening presentation from the New Adventures conference held in Nottingham in January 2019.", "date_published": "2020-01-22 16:20:16", "tags": [ "building", "layers", "language", "metaphors", "architecture", "engineering", "frontend", "development", "web", "design", "history", "longnow", "newadventures", "naconf2019", "conference", "presentation", "talk", "transcript", "medium:id=5cfabe6afacc" ], "content_html": "

Good morning, everybody. It is a real honour to be here. As Simon said, I was here six, seven, eight years ago attending this conference because it’s such a great conference. I’m kind of feeling the pressure now that I’m up here on the stage speaking at this conference. I’m just glad I’m on first so I can get it over with and then listen to all these great talks.

I’m here today to talk to you …which is kind of weird when you think about it. I mean, first, the fact that it’s me up here on the stage through some clerical error.

But also, I’m going to talk to you. I’m going to vibrate air over my vocal cords and move this big meaty piece of flesh inside my jaw up and down vibrating the airwaves and you’re going to listen to me doing that. It seems like a crazy thing to do except for the fact that, of course, I’ll be using language.

Language

Maybe the great distinguishing feature of our species, language. The great leap forward that happened—who knows—50,000, 100,000 years ago when we, as a species, developed language. With language, by moving those vocal cords and that big piece of flesh in my jaw, we can tell stories. I can recount something that happened in the past.

Perhaps more amazingly, we can imagine things that might come to be. I could tell you something that might happen in the future. So language is a kind of time travel.

It’s all possible because we’re speaking the same codebase. The particular language I’m talking now is English. As long as you can decode English, then all these noises I’m making will make sense to you, even if there isn’t actually any information in the words. I can say Chomsky’s famous example:

Colourless green ideas sleep furiously.

You can parse that. It doesn’t make any sense, but you can parse it.

Most of the time, the sentences we use also convey some kind of information. Language is not just time travel. Language is also communication.

There can be an idea that’s sitting in my head and I’ll, you know, vibrate the air and vocal cords, flap this big fleshy thing in my jaw around, and transfer the idea from my head to your head. Language is almost like a virus. You can’t help but take the idea in.

I can say to you, “Don’t think of an elephant,” right? Now you’ve just thought of an elephant. It’s the language equivalent of the chicken game which, if you haven’t played before, sorry. You’ve just lost.

\"Chicken

This sentence, “Don’t think of an elephant,” is actually the title of a book by George Lakoff. George Lakoff is a linguist. He’s written many books. He wrote Women, Fire, and Dangerous Things. He wrote this, Metaphors We Live By, because he’s kind of obsessed with metaphors.

We use metaphor all the time in language. We use conceptual metaphor, so when we take one idea and we use the language of that idea to talk about a different idea. The classic example being something intangible.

Let’s say time. How do we talk about time when we can’t touch it, we can’t feel it, it’s intangible? Well, we use metaphor.

We talk about time as though it’s a physical object moving through space. We say time flies or time drags or we talk about time as though it’s a resource. We talk about saving time, wasting time.

You can’t do any of those things with time. That’s not how time works. But the metaphor is very helpful.

The other kind of metaphor is the cognitive metaphor. This is what George Lakoff is interested in, particularly in things like political language. How we frame a debate can tip the scales of how that debate would unfold. If we were about to have a debate about tax relief, well, before the debate has even begun, we’ve framed taxation as something you need relief from and the scales have been tipped.

I’m very interested in this idea of metaphor, analogy, and simile and how we talk about the work we do. It’s such a young industry. What we do is we borrow from other industries. We’re not the first to do this. There’s a great book called Understanding Comics by Scott McCloud. Who’s read Understanding Comics? It’s great.

It’s about comics but, really, it’s just a fantastic book. It’s written as a comic. In it, Scott McCloud makes the point that this new medium, comics, had to borrow from the existing mediums that came before. He points out that this isn’t new. He says:

Each new medium begins its life by imitating its predecessors. Many early movies were like filmed stage plays. Much early television was like radio with pictures.

Right? That it takes time.

Now, this idea of a new medium having to borrow the tropes and the language of the medium that came before, this idea pops up again on the web in this article published in the year 2000 by John Allsopp on A List Apart, A Dao of Web Design. Can I get a show of hands of who’s read A Dao of Web Design? Awesome. You are my people. The rest of you, please read it. It’s such a wonderful article.

It’s crazy that I’m standing up here recommending, “Oh, yeah, you should totally read this article from the year 2000,” but it is relevant. It’s amazingly relevant still today. It’s maybe more relevant today than when it was written.
In the article, John says:

When a new medium borrows from an existing one, some of what it borrows makes sense, but much of the borrowing is thoughtless, it’s ritual, and it often constrains the new medium. Over time, the new medium develops its own conventions, throwing off existing conventions that don’t make sense.

Now, at the time John was writing this, in 2000, we were of course borrowing from what had come before in the previous medium, and that was print. We were trying to figure out how to get the same level of control on the web that we were used to in the world of print. We did that using clever techniques thanks to David Siegel, who wrote this book, Creating Killer Websites. David Siegel, if you don’t know the name, you’re certainly familiar with his work because he’s the guy who came up with the idea of using tables for layout and having a one-pixel by one-pixel spacer GIF.

Hey, listen. That was the only way we could do it back then. They were hacks, yes, but they were necessary hacks. He did actually recant. Years later, he wrote a piece called “The Web Is Ruined and I Ruined It.” That may be overstating the case, but you know.

He was pointing out that we could use these techniques, these hacks, to constrain the web and make it work like print. We could get pixel-perfect control. John Allsopp, in his article, is pushing against that, going, no, no, no:

The web is a new medium. It has emerged from the medium of printing whose skills and design language and convention strongly influence it. It is too often shaped by that from which it sprang. Killer websites are usually those which tame the wildness of the web, constraining pages as if they were made of paper. Desktop publishing for the web.

So, I mean, John totally acknowledges that there is a lot to learn from this rich history of print and, before print, just writing. Writing is clearly the second great leap of our species. We had language where we could communicate ideas, tell stories, imagine the future, as long as we were in the same physical space. Then we came up with writing. Now we can communicate ideas, talk about the future and the past, and we don’t even have to be in the same physical place. Someone who died centuries ago can put an idea in your head by putting language onto a medium like vellum or, later, paper.

You can see this evolution over centuries from illuminated manuscripts to the printing press, Gutenberg, until we get to the 20th Century and we really start to refine the design. We got the Swiss School of Design, the fonts, typography, and the grid system. There’s a lot to learn here.

\"The

What’s interesting to me, though, is what seems to be this battle of extremes. We’ve got David Siegel talking about desktop publishing for the web, effectively, and John Allsopp talking about, “No, the web is its own medium. It needs to have its own conventions.”

They seem to be at opposite ends of a spectrum. Yet, they actually have a commonality because, on both sides, when they’re talking about this, they’re talking about websites — web sites. Now, that in itself is a metaphor. You don’t have physical sites on the web. It’s intangible like time. Yet, we chose this metaphor. The idea of a site, a place where you go to a physical place.

Site actually is pretty good with connotations of a building site, a construction site. That was literally the metaphor in the ’90s. The web is like a construction site. It kind of is constantly under construction. Oh, you want the full nostalgic effect?

\"Under

There we go. We’re back to Geocities. But I feel like then we decided to grow out of this metaphor and use more grownup metaphors. We got professional. We had to borrow from other industries, other mediums, and here’s one that people are very fond of borrowing: architecture—describing what we do as architecture.

Architects

Whether it’s on the design side or the development side, talking about us as architects. It seems like a very appealing industry to borrow from, which is fascinating. If you ever talk to architects, man, it’s a shitty industry. Spec work, awards, and competition, it’s not a great industry.

But we seem to hold it up as, like, “Oh, yeah, we’re like architects because architects are awesome.” I think of Hollywood because every Hollywood movie that has an architect in it, the architects are always really nice people. They’re always like the protagonist, never the antagonist. The architect is never the villain.

It’s fair enough. It’s fair enough to borrow things from something like architecture. For example, I know plenty of designers who would say that this book is the best book about UX that they’ve ever read, 101 Things I Learned in Architecture School by Matthew Frederick. It was published in 2007. It’s not written for UX designers. It’s not written about the web, but there are lessons in there that are directly applicable.

There are other works from the world of architecture that have definitely influenced the work we are doing today like the classic from Christopher Alexander, A Pattern Language. Now this—I say classic rightly—this is a classic book. A classic book is a book everyone has heard of and nobody has read.

That is certainly the case here. Published in 1977, and it influenced lots of people doing things in the digital space. Ward Cunningham, the inventor of the wiki, he said, yeah, he was really influenced by A Pattern Language.

The idea of a pattern language: it’s architecture, but breaking things down into components with parameters you could change, for use in public spaces, buildings, things like that. It’s a modular approach. Later on, in the software world, the Gang of Four wrote Design Patterns: Elements of Reusable Object-Oriented Software, and they were directly influenced by Christopher Alexander: this idea of a pattern language, components, patterns, modularity.

What’s interesting is that there’s another book, by Molly Wright Steenson, whom you may remember as a blogger at Girl Wonder. She worked in the world of architecture and she’s written a book about the influence of architects and designers on the digital space: Richard Saul Wurman and information architecture (a very direct metaphor there), but also Christopher Alexander.

She points out, actually, the funny thing is, he’s had way more of an influence in the digital space than he ever had in architecture. Most architects don’t like him. They think he’s a bit preachy. But his influence in the digital space is massive. Here I am talking about modularity, components, and patterns. Well, I mean, that is so hot right now. Design systems, we’re breaking things down into patterns.
In fact, I ended up organizing a conference in 2017, purely about design systems, pattern libraries, styles, all this stuff called Patterns Day. It was great. We had these wonderful speakers. Jina Anne was there, Rachel Andrew, Alla Kholmatova, Alice Bartlett. It was great.

But, by the end of the day, I was half-joking, saying we should have had a drinking game where, every time someone referenced Christopher Alexander, we had to take a drink, because his spirit loomed large over the whole event. The full rules of the drinking game I came up with afterward were: any time someone references Christopher Alexander, you take a drink. Any time someone says Lego, you take a drink. Any time someone says that naming things is hard, take a drink. Any time someone says atomic or atomic design, take a drink. Any time someone says Bootstrap, you puke the drink back up.

A Pattern Language is a work of architecture that directly not just influenced but is still influencing our work today; the idea of breaking things down into components to reuse.

Now, there’s another work from the world of architecture that has a big influence on me. It’s a classic book, again, How Buildings Learn. It’s the best book I’ve never read, published in 1994, by Stewart Brand. There was also a TV series that went with this that’s pretty fascinating.

In this, he talks about the work of a British architect named Frank Duffy and Duffy’s idea of something he called shearing layers. What Duffy said was that a building properly conceived is several layers of longevity. He kind of broke these down. You’ve got the sites that the building is on. We’re talking about geological time scales.

Then above that, the structure you hope will last for centuries. Then you’ve got the infrastructure inside the building that you might have to swap out every few decades. Change the plumbing. Then you’ve got the walls and the doors. You can change them every so often until you get into the room. You’ve got furniture, which you can move on a daily basis.

The time scales get faster as you move inward. He diagrammed it like this. This is shearing layers diagrammed for the building. I find this really interesting, this idea of different time scales.

\"Shearing

But there’s another factor here I’m kind of fascinated by, which is that each layer depends on the layer below. You can’t have a structure until you’ve got a site to build on. You can’t have furniture inside a room until you’ve got the room. You need to have the walls there. Each layer is building on top of what’s come before. You can’t jump straight ahead to furniture without first having all those other layers.

Now, this reminds me of another idea that the writer Steven Johnson talks about a lot in his work, for example, this book, Where Good Ideas Come From. This is the idea of the adjacent possible, that certain inventions leap forward that can’t happen until other things have happened before them.

There’s a reason why the microwave oven wasn’t invented in medieval France. Too many other things had to be invented first before something like the microwave oven becomes inevitable.

Everything we do is built on this idea of the adjacent possible, because businesses and services on the web sit on top of a whole stack of adjacent possibilities. You can’t have Twitter, Facebook, or Wikipedia until the web exists. The web itself is built on all of these layers that had to happen first.

We have to have the Industrial Revolution. We have to have electricity. Then somebody has to create circuitry. We have to get to the idea of having computers and then networked computers, something like the Internet. Then the web becomes possible. Once the web is possible, then all these businesses on top of the web become possible.

This idea of the adjacent possible, the shearing layers, they kind of fascinate me because I’m seeing a parallel there.

Now, Stewart Brand, who wrote about shearing layers in architecture, revisited the idea and took it out of the world of architecture in a later work called The Clock of the Long Now. Stewart Brand is one of the founders of the Long Now Foundation. If you haven’t heard of it, it’s an organization dedicated to long-term thinking. I’m a card-carrying member; the card is designed to last for a few thousand years as well.

They’re currently building a clock that will tell time for 10,000 years. Brian Eno has written an algorithm for the chimes so that when it chimes once a century, it will never be quite the same chime. It’s encouraging long now thinking.

In this book, the full title of the book being The Clock of the Long Now: Time and Responsibility: The Ideas Behind the World’s Slowest Computer, he extrapolates shearing layers into something he calls pace layers. If you take the shearing layers model and look around you, it’s everywhere. It’s kind of like systems thinking, the Donella Meadows idea that systems are everywhere.

\"Pace

It’s kind of true. You look around and these pace layers, shearing layers applied to the real world, are everywhere. The example he gives is our species. If we look at the human race, we have these different time scales. The slowest is our physical nature, as in our DNA, our physiology. That takes millennia to change. Physiologically, there’s no difference between a caveman and a spaceman.

Above that, you’ve got culture. This takes centuries, maybe longer, to accumulate over time.

Then systems of governance; not governments — governance. How are we going to run the societies?

Infrastructure, you want that to move faster, but not too fast or it could be very disruptive.

Then you get into commerce, trading. Very fast-moving.

Then, finally, you’ve got fashion, which is super-fast. By fashion, he means things like popular music, anything that’s supposed to move fast. If fashion moved slowly, that wouldn’t be a good thing. It’s meant to move fast, to try things out: “What about this? No, what about this? Try this.” Right? You don’t want that for the things further down.

He’s mapped this onto these layers. From shearing layers, we go to pace layers. They have different timescales.

I’m talking about the difference between these really fast layers at the top, you know, “What about this? Try this? Today, we’re doing that,” compared to the really slow layers at the bottom that move slowly and are resistant to change.

He says:

Fast learns but slow remembers. Fast proposes and slow disposes. Fast is discontinuous but slow is continuous. Fast and small instructs slow and big by accrued innovation and occasional revolution, and slow and big controls small and fast by constraint and constancy. Fast gets all our attention, but slow has all the power.

Now, once I was exposed to this idea and this virus had landed in my head, I found that I couldn’t get it out of my head. I started seeing pace layers everywhere. At Clearleft, where I work, it’s a running joke. On every project, we have a kickoff and it’s like, what’s the time to pace layers? How long will it be before someone makes a pace layer analogy? It’s like my brain has now been rewired to see pace layers everywhere.

It’s like, you know, the first time that someone points out the arrow in the FedEx logo. There was your life before that and there’s your life after that.

You’ve all seen the arrow in the FedEx logo. Yeah.

What about Toblerone? You’ve all seen the bear? Ah, yeah! Right? You will never be able to unsee that.

Consider the duck.

It’s a perfectly normal, ordinary duck. Agreed? But then your brain is exposed to the idea that all ducks are actually wearing dog masks.

All ducks are actually wearing dog masks. Now, when I show you the same picture of the same duck—

—you will never be able to unsee that. That’s how my brain feels when it comes to pace layers. I see them everywhere. It’s like the crazy wall part of the serial killer’s lair in the murder mystery. It’s just pace layers.

I couldn’t help but apply pace layers to the work we do mapping our medium to pace layers. Let’s try it with the World Wide Web.

\"The

Well, we build on top of the Internet. We can’t have the web before having Internet. At the very bottom layer, you’ve got the protocols of the Internet itself, you know, TCP/IP, which have been pretty much unchanged for decades. They were there from the ARPANET before the Internet. It’s a good thing that they’re unchanged. You would not want to be swapping out that low layer very quickly.

Above that, we have all the different protocols we use: protocols for email, protocols for file transfer, and the protocol for the World Wide Web, HTTP, the hypertext transfer protocol. Now, this has evolved over time; we now have HTTP/2. But it’s been a slow process, and that feels right. Again, we shouldn’t be swapping this layer out too quickly, but it’s a bit faster-moving than the Internet protocols.
On top of HTTP, we can put our URLs. Now, I would love it if URLs were right down at the bottom layer and they were permanent and they never changed and they never went away. That is the web I want, but I must acknowledge that, alas, you have to work hard to keep URLs alive. They do change. They do move. They do get destroyed, which is a bit of a shame, but we can work at it, people. We can work on keeping our URLs alive.

What we put at that those URLs, at the simplest level, we’ve got HTML. It was there from the start. From day one of the web, HTML was there and it’s still there today, but it’s evolved. It’s changed over time. Initially, HTML had 21 elements and now it’s got 121 elements, so it’s evolved.

But it feels like you can keep up with the pace of change. The last big evolution of HTML was around 2010, with HTML5. We do get new additions every now and then, but it’s fine. We can keep up with it.

Then CSS. CSS definitely changes more rapidly than HTML, and that feels like a good thing. We kind of want more: give us more CSS! And now we’ve got Grid and we’ve got Flexbox. We’ve got all these great new CSS things. Custom properties.

I don’t feel too overwhelmed by that. I still feel like, “Oh, no, this is good. We’ve got new CSS. I’m feeling I can keep on top of this, you know, read the right articles, read the right books, try them out. It’s fine.”

Then there’s the JavaScript ecosystem.

Specifically, the ecosystem, not the language, because the JavaScript language itself doesn’t actually change that often. ES6, ES2015, whatever we call the evolutions of the language, they’re not so rapid that we’d get overwhelmed. But the language ecosystem, the culture of JavaScript, that feels overwhelming to me. Right? Since I’ve been speaking up here, two new JavaScript frameworks have been released.

The pace, I constantly feel like I’m falling behind like, “Oh, I haven’t even heard of this new thing that apparently everybody is using.”

Does anyone else feel overwhelmed by this pace of change? Okay, good. Keep your hands up for a sec and just look around. All right? You are not alone. This turns out to be normal.

But here’s the thing. By mapping these different rates onto this model of pace layers, I actually start to feel better about this because let’s say the JavaScript ecosystem is fashion: “It’s going to do this. No, no, today we’re doing that. Try this. Try that.”

Whereas, “Oh, okay. It’s supposed to move fast. It would be bad if it moved slow. It’s meant to be trying stuff out. We see what sticks.”

With fashion, the best of pop music will probably last and find its way down the layers into culture, a slower pace layer. With the JavaScript, the patterns that work in JavaScript may find their way down into the slower moving layers.

To give you an example, when JavaScript was first invented—I’m showing my age here—I remember the common use cases were image rollovers and form validation. Mousing over something and changing how it looks: we’d use JavaScript for that. If someone was filling in a form and there was a required field, we’d use JavaScript to make sure that required field was filled in.

These days, we wouldn’t even use JavaScript for either of those. We’d use CSS to do rollovers. We’d use HTML to add just one required attribute. The pattern, it stuck. The spaghetti stuck to the wall and it moved down the layers into something more stable.
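As a sketch of that migration down the layers, what once took JavaScript and a second image is now a single CSS rule (the class name here is arbitrary):

/* Then: JavaScript swapping images on mouseover. Now: one declaration. */
.nav a:hover,
.nav a:focus {
  background-color: #0b7261;
}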

That’s what JavaScript is kind of supposed to do. When we were trying to do responsive images, we had JavaScript solutions until we got something further down the stack, in HTML.

I do feel overwhelmed by the pace of change. But I’m starting to feel a little better about feeling overwhelmed, that it’s okay. JavaScript is meant to feel overwhelming. It’s where we try stuff out. It’s where stuff moves fast.

Now, the other thing I realized by mapping our technology stack of the web onto this pace layer model is that this is how I build. When I’m building a website, I pretty much start at the third layer. I don’t worry about whether the Internet is on.

I start with URLs. I think URL design is a really good place to start designing. It is a design discipline, a neglected one, but it is design. Then I think about the content, and then I structure that content using the best available markup in HTML. I think about the presentation, maybe on a small screen first, and then the presentation on larger screens, using CSS. Then I start thinking about extra behaviors that I can’t get with HTML and CSS, so I reach for JavaScript to add those.

This seems to me to make sense as a way of building on the web because it maps to the structure of the pace layers of the web. But it’s also a testament to the flexibility of the web that you don’t have to build this way. If you don’t want to build in this layered way, you don’t have to.

In fact, you can build like this. You can put something on the Internet and just do everything in JavaScript. URL routing? Let’s do that in the browser, in JavaScript. The Document Object Model? Let’s generate that in the browser, in JavaScript. CSS? Apparently we’re doing it in JS now.

Everything in JavaScript. This is an absolutely legitimate choice. You can choose to build things on the web like this. The web allows this. Again, it’s a testament to the flexibility of the web.

Now, personally, I don’t build like this and this doesn’t feel quite right to me. It doesn’t feel like it maps to the web too well. It kind of turns it into this all or nothing situation where, as long as we’ve got JavaScript, everything is going to be great. But if we don’t, there’s nothing.

You end up with this situation where we’ve turned what we’re building on the web into a binary situation. Either it works great or it just doesn’t work at all. There’s this kind of single point of failure there with the JavaScript.

Now, this model makes complete sense in other mediums. I think other mediums have influenced our thinking on the web. Maybe we’ve borrowed the metaphors of these other mediums.

For example, if you’re building a native app, this makes complete sense. If you’re building an iOS app and I have an iOS device, it works great. I get 100% of what you designed. But if you build an iOS app and I have, say, an Android device, it doesn’t work at all. You can’t install an iOS app onto an Android device. Those are your options: either it works great or it doesn’t work at all. This mental model makes complete sense in that field.

On the web, because we can have this layered approach, that means we can build like this. We can go from something that doesn’t work at all to something that just about works—maybe it’s just text on a screen—to something that works fine—maybe it’s missing a bunch of behaviors, but the user can accomplish what they want to do—to something that works well, but maybe the latest and greatest browser APIs aren’t supported by a particular browser—and then to something that works great like the latest browser running the best device, great network.

\"Building

Most people are going to be somewhere on this continuum. Maybe nobody is going to get 100% of what you hope they get, but no one is going to get zero percent either as long as you’re building in this way, as long as you’re building with the grain of the web, building in layers, one thing on top of the other.

I’m going to quote Ethan here. Hi, Ethan. Ethan said:

I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it would change if, say, fonts aren’t available or JavaScript doesn’t work or if someone doesn’t see the design as you and I might and is having the page read aloud to them.

In a way, this is a way of busting assumptions, the what-ifs. What if something isn’t supported? By building in a layered way, it’s okay. Everything will fall back to the layer below, the adjacent possible.

Now, Ethan, of course, we all know from this article, Responsive Web Design, published on A List Apart. When was that? 2010. My God, nine years ago. Ten years after John Allsopp published A Dao of Web Design on A List Apart. One of the first things Ethan does in his article is reference A Dao of Web Design. You could say that Ethan was building on top of that foundational layer laid down by John Allsopp.

Architecture again. Responsive web design. The reason why Ethan chose that term was because there was this idea in architecture called responsive architecture about buildings that could respond to the conditions of the people in the buildings. That made a really good metaphor for talking about the web on large screens, small screens, and everything in between.

This architecture thing, as a metaphor, it’s not bad. We can learn from it. I think, just be careful not to take it too far.

It’s not the only metaphor we use. Here’s another one. When we don’t talk about ourselves as architects, we’re engineers. Yeah.

Engineers

It sounds good. This one predates the web. We’ve been talking about the idea of software engineering for a long time. I’m very partial to the term software engineering: not so much for the term itself, not that I think it’s a particularly good metaphor, but for where it comes from, which is fricken’ awesome.

\"Margaret

The term “software engineering” comes from Margaret Hamilton. Margaret Hamilton was in charge of the onboard flight software on the Apollo moon landing. This is engineering. That is the code base she’s standing next to there, which would then literally be woven into the computers onboard Apollo.

But as a metaphor, engineering, well, there’s a whole bunch of different kinds of it. What kind of engineer are we talking about here? Is it material engineering, structural engineering, chemical engineering, aeronautical engineering? They all have commonalities. One being, as an engineer, you’ve got to know two things. There’s the materials you’re going to be working with and the tools you’re going to use to shape those materials.

Now, I think that we can use that metaphor and apply it to the web. I would say the materials on the web are HTML, CSS, and JavaScript, hopefully in that order. Then we’ve got the tools we use to design for the materials of the web.
Now, the most obvious tools we could think of are graphic design tools. We started using Photoshop even though that was never intended for Web design. Since then, we’ve evolved and we’ve got tools that are much more focused on the web, things like Sketch, Figma, and all this kind of stuff.

These are the obvious tools we use to build the web, but there are less obvious tools. If you’re working on a web project, these tools also get used: you’re going to be talking over email, communicating over Slack, organizing spreadsheets. Spreadsheets, people.

We talk about these as productivity tools, though sometimes I know it feels like they reduce productivity rather than increase it. But “productivity tools” is kind of a misnomer when you think about it. All tools are productivity tools. That’s literally what tools are for: to make you more productive.

I think we should acknowledge that these are legitimate design tools. You can’t launch a project without putting in some time with some kind of communication tool.

Then, when it comes to the actual welding of these materials, we’ve got a whole bunch of tools that sit on our machines or on our web servers. Now I feel like I’m back up at that top layer of the pace layers and I’m getting overwhelmed by the task runners, the build tools, the toolchains, the transpilers, and the preprocessors. Apparently, it changes every week. Oh, you’re still using Grunt? No, we’re using Gulp. No, webpack. That’s what’s so overwhelming.

It also feels like it’s quite complicated. This is complicated stuff, but it’s like we’ve chosen it. We’ve chosen to make our lives complicated, in a way.

I’ll tell you what it reminds me of. Do you remember that startup, Juicero?

Where they sold a big, expensive, complicated machine to make juice, but you had to buy exactly the right juice packets to put in the big, expensive machine to make the juice. It works. It works great. The big, expensive, complicated machine does its job but somebody noticed that you could actually just take the packets and squeeze them by hand and it still produces juice. I’m just saying that squeezing by hand is still an option. You can build websites by squeezing by hand. (I think this metaphor has been stretched just about as far as it can do, so I will leave it there.)

There’s this other kind of spectrum, I guess, between the materials and the tools and then the people that will be exposed to the materials and the tools. They kind of fall into two categories: the engineers themselves and the end-users.

When we’re evaluating our tools and asking, “Is this the right tool to use?” we should evaluate it from our perspective, yes, “Is this going to be a helpful tool to me as an engineer?” if we’re using that metaphor. But I strongly feel we should also ask, “Is this going to be useful for the end-user?”

If those two things come into conflict, what then? Do we privilege our own experience over the user experience? I would hope not, but I worry that we do in a lot of tool choices, particularly for stuff that gets sent down to the browser. “Oh, I’m going to use a CSS framework.” Great. Good for you. That’s helping you out, but now the user has to pay the cost of the benefit you get from that CSS framework, because they have to download the whole thing.

Sometimes these things come into conflict, and I feel like maybe we privilege the developer experience over the user experience, and that worries me. Other times they don’t come into conflict at all. All those tools like preprocessors and task runners just sit on your own computer and have no direct effect on the end-user experience. Frankly, use whatever you like.

When we’re evaluating tools, there are all these questions to ask. Who benefits from the tool? If I choose to use this tool, will it benefit the users? Will it benefit the engineers? Neither? Both?

There are other questions we ask, like: well, just how good is this tool? To evaluate that, we ask how well it works. Does this tool do what it says it will do, and do it well?

This, of course, is a completely valid question to ask, but there’s a corollary that I think is even more important, and that’s to ask not just how well it works, but how well it fails.

What happens when something goes wrong?

This is exactly why I think this layered approach makes sense because, if you build in this layered way, each one of these layers can fail well. If you build like this, then JavaScript can fail well. What if something goes wrong and you’ve got an error in your JavaScript? You fall back to something that still works. Not as great as it worked before, but it still works. It fails well.
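
Here’s a minimal sketch of what that can look like. (The /search URL, the class names, and the search panel markup are all just placeholders I’ve made up for illustration.)

    <!-- The link works all by itself; JavaScript only enhances it. -->
    <a href="/search" class="search-toggle">Search</a>

    <div class="search-panel" hidden>
      <input type="search" aria-label="Search this site">
    </div>

    <script>
    // If this script never loads, or throws an error, the link above
    // still works: the user simply goes to the /search page instead.
    var toggle = document.querySelector('.search-toggle');
    var panel = document.querySelector('.search-panel');
    if (toggle && panel) {
      toggle.addEventListener('click', function (event) {
        event.preventDefault();
        panel.hidden = false;
      });
    }
    </script>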

These technologies on the web fail well by design. CSS fails well. Use a CSS property or value the browser doesn’t understand, and the browser just ignores it. It fails well.

HTML: Make up an HTML element. Throw it into a webpage. The browser doesn’t throw an error. The browser doesn’t stop parsing the webpage. It just ignores it and moves on. It fails well.
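
You can see both of those for yourself with something like this. (The property and the element here are deliberately made-up nonsense.)

    <style>
    p {
      color: navy;
      text-wibble: 5em; /* not a real property: this declaration is ignored */
    }
    </style>

    <!-- An invented element: no error, no halted parsing. The browser
         treats it as an unknown element and still renders its contents. -->
    <fancy-greeting>Hello there.</fancy-greeting>

    <p>This paragraph still comes out navy. Parsing carried on regardless.</p>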

It actually makes sense not to jump ahead to the powerful stuff at the top of the pace layers, but to build in layers and stay low for as long as possible. This is actually a principle that underlies the architecture of the web itself, called the Principle of Least Power: you should choose the least powerful language for a given purpose. That seems really counterintuitive.

Why would I choose the least powerful language to do something? Surely I want more power. The idea here is that power comes at an expense: complexity and fragility. A more powerful technology is maybe more likely to fail badly.

Derek Featherstone put it well. He said:

In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof. It just works.

The example there was rollovers. How are you going to do rollovers? Do it in JavaScript? No, do it in CSS. :hover - done. Right? Oh, you need to make an interactive button? Use the button element. Be lazy.
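
Something like this is all it takes. (The class name is just for illustration.) The rollover happens lower in the stack with no JavaScript at all, and the button element brings focus handling and keyboard support along for free.

    <style>
    .nav-link:hover {
      background-color: gold; /* the whole rollover, done in CSS */
    }
    </style>

    <a class="nav-link" href="/about">About</a>

    <!-- Focusable and keyboard-operable by default, unlike a generic
         element wired up with click handlers. -->
    <button type="button">Open menu</button>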

This makes a lot of sense, the Principle of Least Power. It makes a lot of sense to me on the web, especially when you combine it with a universal law that definitely applies on the web, and that’s Murphy’s Law:

Anything that can possibly go wrong will go wrong.

This comes directly from the world of engineering. Edward Aloysius Murphy Jr. was an aerospace engineer, and it’s because he had this attitude that he never lost anybody on his watch.

I think we tend to dismiss things going wrong as edge cases. We kind of assume the average case. Other industries test for things going wrong. When they’re making cars, they strap crash test dummies in and smack them into walls at high speed.

To be fair, a lot of the reason why they have to do that is because of regulation. They didn’t necessarily choose to do it, but still. Can you imagine if they went, well, actually, we realize that most people are going to drive cars on roads and people driving into walls is an edge case, so we’re not going to worry too much about that?

Now, obviously, you want to hope for the best but you should prepare for the worst. Trent Walton said:

Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability.

The web’s inherent variability, that gets to the heart of it.

What Dave Siegel was trying to battle with his pixel-perfect tables was the web’s inherent variability. What John Allsopp was calling for was for us to embrace the web’s inherent variability. It’s a feature, not a bug.

Are we engineers? Can we call ourselves engineers? Well, let me tell you something from the world of structural engineering.

This is the plan for the Quebec Bridge in Canada, a cantilever bridge. Construction started at the start of the 20th century. There was a competition to see who would get to design and build the bridge, because that’s the way the industry works.

The engineer in charge was named Theodore Cooper. Now, originally the bridge was meant to be 490 meters long, but Theodore Cooper changed the specification to make it 550 meters long, mostly because the Firth of Forth Bridge, up in Scotland, was the longest cantilever bridge in the world at the time. He wanted this bridge to exceed that, so he made the bridge longer, but he did not recalculate the already high stresses being placed on the material of the bridge.

Oh, also, Theodore Cooper refused to work on site. He was down in New York, supposedly overseeing construction from there. And when it was proposed that somebody should check his calculations, he took that as a personal affront and said, “No, no, no. No, no, that won’t work,” so there were no code reviews happening on this project.

Someone was on site, though: a young engineer named Norman McLure. By August 6th, 1907, he had started to notice that the steel was bending under the stress. By August 27th, it had got worse.

Cooper was notified down in New York, and he did send a telegram back to Quebec: “Place no more load on Quebec bridge until all facts considered - stop.” But he was only implying that the work should stop. He never explicitly said, “Stop the work right now,” so the telegram was ignored and work continued.

On August 29th, 1907, the bridge collapsed. It was shortly before the end of the working day; the whistle was just about to blow. There were 86 workers on the bridge, and 75 of them died.

Now, something started happening in Canada a few years after this. By 1925, engineering schools in Canada had started holding private ceremonies around graduation time. The ceremony was separate from qualifications; it wasn’t about whether you were qualified to be an engineer. It was called The Ritual of the Calling of the Engineer. You would speak an obligation penned by Rudyard Kipling, which I won’t repeat here because it’s meant to stay within the confines of the ritual.

You would also receive an iron ring. The iron ring is a symbol of pride in being an engineer, but also a symbol of humility. For the longest time, the myth persisted that the iron itself came from the steel of the Quebec Bridge. It’s not true, but the Quebec Bridge certainly looms over the idea of the iron ring. You’d wear it on the little finger of your working hand, so it would brush against the paper or the computer keyboard during your working day, a constant reminder of your responsibility as an engineer.

\"The

When we call ourselves engineers, I do have to ask, have we earned it? Do we take our responsibility seriously?

Maybe we don’t call ourselves engineers, but then what do we call ourselves? Does it even matter?

Builders

Well, we could go back to that original metaphor from the ’90s, under construction. Maybe we’re builders. We build things. The web is under construction. We’re the ones constructing it. It’s not so bad, you know, to be the ones literally building the web. It’s kind of awesome when you think about it.

Christopher Alexander, when he was talking about his reasons for coming up with A Pattern Language, said:

Most of the wonderful places in the world were not made by architects but by the people.

Maybe we’re at the bottom of the layer stack here as workers just building the web, but maybe we also have all the power — more power than we realize. Our collective power is greater than anything any architect could wield.

Yeah, maybe we’re builders. Maybe we’re bricklayers. I know Simon comes from a long line of bricklayers. It is a noble profession. Think about what our building blocks are, the building blocks of the World Wide Web.

The World Wide Web, I think, is the next great leap forward. We had language, writing, the printing press, and now hypertext in the form of the World Wide Web. Who gets to build it? We do, with this kind of building block: the URL, the link. What an amazing building block that is.

I can make a webpage and put two links on it linking to two different things. That combination of those two links has never existed before in the history of the web. We’ve created something new, link by link, building block by building block, building in layers.

I’m reminded of an apocryphal story, maybe from medieval times (who knows), about a traveler coming across three workers. All three workers are doing the same thing. They’re building. They’re moving stones, putting stones one on top of the other.

The traveler says to the first builder, “What are you doing?”

He says, “Oh, I’m moving stones.”

He says to the second builder, “What are you doing?”

He says, “I’m building a wall.”

He says to the third builder, “What are you doing?”

He says, “I’m building a cathedral.”

They’re all doing the same task but thinking about it in different ways. Maybe that’s what we need to do. Forget about the labels and the metaphors: architect, engineer, builder, whatever. Just think about what a privilege it is to be doing this. Embrace the fact that we are the builders. We are the bricklayers.

Today, for example, we’re going to hear from quite an amazing collection of bricklayers, and I’m really looking forward to it. I want to hear what they’re building. I want to hear their stories of how they built it and why they built it.

But to do that, I need to stop moving air over these vocal cords and flapping this fleshy piece of meat around in my mouth and just stop talking. Thank you for listening.

" } ] }