The fact that writing can be hard is one of the things that makes it meaningful. Removing this difficulty removes that meaning.
There is significant enthusiasm for this attitude inside the companies that produce and distribute media like books, movies, and music, for obvious reasons. Removing the expense of humans making art is a real saving to the bottom line.
But the idea of this being an example of democratizing creativity is absurd. Outsourcing is not democratizing. Ideas are not the most important part of creation; execution is.
Generative AI is like the ultimate idea guy’s idea! Imagine… if all they needed to create a business, software or art was their great idea, and a computer. No need to engage (or pay) any of those annoying makers who keep talking about limitations, scope, standards, artistic integrity etc. etc.
Don’t get me wrong, there are some features under the mislabeled bracket of AI that have made a huge impact and improvement to my process. Audio transcription has been an absolute game-changer for research analysis, giving me back hours of time to focus on the deep thinking work. This is a perfect example of a problem seeking a solution, not the other way around. The latest wave of features feels a lot like “because we can” rather than “because we should”.
In other words, it leans heavily on averages; the closer the training data matches an average, the higher the degree of confidence that the result is “correct,” or at least desirable.
The problem is that this is the polar opposite of what we consider creativity to be. Creativity isn’t about averages. It’s about the outliers, sometimes the one thing that’s different from all the rest.
It’s enormously valuable to simply follow your curiosity—and follow it for a really long time, even if it doesn’t seem to be leading anywhere in particular.
Surprisingly big breakthrough ideas come when you bridge two seemingly unconnected areas.
I find, more often than not, that I understand something much less well when I sit down to write about it than when I’m thinking about it in the shower. In fact, I find that I change my own mind on things a lot when I try to write them down. It really is a powerful tool for finding clarity in your own mind. Once you have clarity in your own mind, you’re much more able to explain it to others.
You don’t need to write for anyone else. You don’t need to share, or even keep it. You just need the act of it. Writing is a particle collider for reality and the imagination. And new discoveries are the result.
(That’s why I write here, of course. It’s how I think.)
A lovely heartfelt personal look back at dConstruct.
dConstruct was about the big ideas, but not in a wanky TED way. It was about ideas on the horizon brought into focus, it always left me wanting to know more.
dConstruct was never about the big showy thing that will make you millions. It was about the interesting. It gave you seeds to take away with you, and that’s important.
I think it was Jason who once told me that if you want to make someone’s life a misery, teach them about typography. After that they’ll be doomed to notice all the terrible type choices and kerning out there in the world. They won’t be able to unsee it. It’s like trying to unsee the arrow in the FedEx logo.
I think that Stewart Brand’s pace layers model is a similar kind of mind virus, albeit milder. Once you’ve been exposed to it, you start seeing it in all kinds of systems.
Each layer is functionally different from the others and operates somewhat independently, but each layer influences and responds to the layers closest to it in a way that makes the whole system resilient.
Last month I sent out an edition of the Clearleft newsletter that was all about pace layers. I gathered together examples of people infected with the pace-layer mindworm who were applying the same layered thinking to other areas:
Hey, wait a minute! If you put that list in reverse order it looks an awful lot like the pace-layers model with the slowest moving layer at the bottom and the fastest moving layer at the top. Perhaps there’s even room for an additional layer when patterns go into production:
Production
Patterns
Principles
Purpose
Your purpose should rarely—if ever—change. Your principles can change, but not too frequently. Your patterns need to change quite often. And what you’re actually putting out into production should be constantly updated.
As you travel from the most abstract layer—“purpose”—to the most concrete layer—“production”—the pace of change increases.
I can’t tell if I’m onto something here or if I’m just being apopheniac. Again.
Chip Delany and Octavia Butler on a panel together in 1998, when hypertext and “cyberspace” were in the air. Here’s Octavia Butler on her process (which reminds me of when I’m preparing a conference talk):
I generally have four or five books open around the house—I live alone; I can do this—and they are not books on the same subject. They don’t relate to each other in any particular way, and the ideas they present bounce off one another. And I like this effect. I also listen to audio-books, and I’ll go out for my morning walk with tapes from two very different audio-books, and let those ideas bounce off each other, simmer, reproduce in some odd way, so that I come up with ideas that I might not have come up with if I had simply stuck to one book until I was done with it and then gone and picked up another.
So, I guess, in that way, I’m using a kind of primitive hypertext.
The books I have written are created from words invented by others, filled with ideas created by others. Even the few ideas that are new depend on older ideas to work. What I had to say would probably be said by someone else not long after me. (More probably they have already been said by someone I was not aware of.) I may be the lucky person to claim those rare new ideas, but the worth of my art primarily resides in the great accumulation of the ideas and works of thousands of writers and thinkers before me — what I call the commons. My work was born in the commons, it gets its value by being deeply connected to the commons, and after my brief stewardship of those tiny new bits, it should return to the commons as fast as possible, in as many ways as possible.
Violence is never the answer, unless you’re dealing with nazis or your inner critic.
The excuses—or, I’m sorry, reasons—I hear folks say they can’t write include: I’m not very good at writing (you can’t improve if you don’t write often), my website isn’t finished (classic, and also guilty so shut up), and I don’t know what to write about, there’s nothing new for me to add (oh boy).
The title sounds clickbaity, but this is a thoughtful list of project ideas from Chris (partly prompted by the way other lists seem to involve nothing but JavaScript frameworks).
This is an utterly fascinating interactive description of network effects, complete with Nicky Case style games. Play around with the parameters and suddenly you can see things “going viral”:
We can see similar things taking place in the landscape for ideas and inventions. Often the world isn’t ready for an idea, in which case it may be invented again and again without catching on. At the other extreme, the world may be fully primed for an invention (lots of latent demand), and so as soon as it’s born, it’s adopted by everyone. In-between are ideas that are invented in multiple places and spread locally, but not enough so that any individual version of the idea takes over the whole network all at once. In this latter category we find e.g. agriculture and writing, which were independently invented ~10 and ~3 times respectively.
Play around some more and you start to see why cities are where ideas have sex:
What I learned from the simulation above is that there are ideas and cultural practices that can take root and spread in a city that simply can’t spread out in the countryside. (Mathematically can’t.) These are the very same ideas and the very same kinds of people. It’s not that rural folks are e.g. “small-minded”; when exposed to one of these ideas, they’re exactly as likely to adopt it as someone in the city. Rather, it’s that the idea itself can’t go viral in the countryside because there aren’t as many connections along which it can spread.
This really is a wonderful web page! (and it’s licensed under a Creative Commons Zero licence)
We tend to think that if something’s a good idea, it will eventually reach everyone, and if something’s a bad idea, it will fizzle out. And while that’s certainly true at the extremes, in between are a bunch of ideas and practices that can only go viral in certain networks. I find this fascinating.
Web standards don’t exist. At least, they don’t physically exist. They are intangible.
They’re in good company.
Feelings are intangible, but real. Hope. Despair.
Ideas are intangible: liberty, justice, socialism, capitalism.
The economy. Currency. All intangible. I’m sure we’ve all had those “college thoughts”:
Money isn’t real, man! They’re just bits of metal and pieces of paper! Wake up, sheeple!
Nations are intangible. Geographically, France is a tangible, physical place. But France, the Republic, is an idea. Geographically, North America is a real, tangible, physical land mass. But ideas like “Canada” and “The United States” only exist in our minds.
Faith—the feeling—is intangible.
God—the idea—is intangible.
Art—the concept—is intangible.
A piece of art is an instantiation of the intangible concept of what art is.
Incidentally, I quite like Brian Eno’s working definition of what art is. Art is anything we don’t have to do. We don’t have to make paintings, or sculptures, or films, or music. We have to clothe ourselves for practical reasons, but we don’t have to make clothes beautiful. We have to prepare food to eat it, but don’t have to make it a joyous event.
By this definition, sports are also art. We don’t have to play football. Sports are also intangible.
A game of football is an instantiation of the intangible idea of what football is.
Football, chess, rugby, quidditch and rollerball are equally (in)tangible.
But football, chess and rugby have more consensus.
(Christianity, Islam, Judaism, and The Force are equally intangible, but Christianity, Islam, and Judaism have a bit more consensus than The Force).
HTML is intangible.
A web page is an instantiation of the intangible idea of what HTML is.
But we can document our shared consensus.
A rule book for football is like a web standard specification. A documentation of consensus.
By the way, economics, religions, sports and laws are all examples of intangibles that can’t be proven, because they all rely on their own internal logic—there is no outside data that can prove football or Hinduism or capitalism to be “true”. That’s very different to ideas like gravity, evolution, relativity, or germ theory—they are all intangible but provable. They are discovered, rather than created. They are part of objective reality.
Consensus reality is the collection of intangibles that we collectively agree to be true: economy, religion, law, web standards.
We treat consensus reality much the same as we treat objective reality: in our minds, football, capitalism, and Christianity are just as real as buildings, trees, and stars.
Sometimes consensus reality and objective reality get into fights.
Some people have tried to make a consensus reality around the accuracy of astrology or the efficacy of homeopathy, or ideas like the Earth being flat, 9-11 being an inside job, the moon landings being faked, the holocaust never having happened, or vaccines causing autism. These people are unfazed by objective reality, which disproves each one of these ideas.
For a long time, the consensus reality was that the sun revolved around the Earth. Copernicus and Galileo demonstrated that the objective reality was that the Earth (and all the other planets in our solar system) revolves around the sun. After the dust settled on that particular punch-up, we switched up our consensus reality. We changed the story.
That’s another way of thinking about consensus reality: our currencies, our religions, our sports and our laws are stories that we collectively choose to believe.
Web standards are a collection of intangibles that we collectively agree to be true. They’re our stories. They’re our collective consensus reality. They are what web browsers agree to implement, and what we agree to use.
The web is agreement.
For human beings to collaborate, they need a shared purpose. They must have a shared consensus reality—a shared story.
Once a group of people share a purpose, they can work together to establish principles.
Design principles are points of agreement. There are design principles underlying every human endeavour. Sometimes they are tacit. Sometimes they are written down.
Patterns emerge from principles.
Here’s an example of a human endeavour: the creation of a nation state, like the United States of America.
The purpose is agreed in the Declaration of Independence.
The principles are documented in the Constitution.
The patterns emerge in the form of laws.
HTML elements, CSS features, and JavaScript APIs are all patterns (that we agree upon). Those patterns are informed by design principles.
Here’s one of the design principles behind HTML5. It’s my personal favourite—the priority of constituencies:
In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.
“In case of conflict”—that’s exactly what a good design principle does! It establishes the boundaries of agreement. If you disagree with the design principles of a project, there probably isn’t much point contributing to that project.
Also, it’s reversible. You could imagine a different project that favoured theoretical purity above all else. In fact, that’s pretty much what XHTML 2 was all about.
XHTML 1 was simply HTML reformulated with the syntax of XML: lowercase tags, lowercase attributes, always quoting attribute values.
Remember HTML doesn’t care whether tags and attributes are uppercase or lowercase, or whether you put quotes around your attribute values. You can even leave out some closing tags.
So XHTML 1 was actually kind of a nice bit of agreement: professional web developers agreed on using lowercase tags and attributes, and we agreed to quote our attributes. Browsers didn’t care one way or the other.
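Here’s a rough sketch of what that agreement looked like in practice (my own example, not one from the original):

    <!-- HTML is happy with uppercase tags, unquoted attributes,
         and a missing closing tag… -->
    <P CLASS=intro>Hello, world

    <!-- …but the XHTML 1 convention we agreed to write was this: -->
    <p class="intro">Hello, world</p>

Both render exactly the same way; the second is simply the stricter style we chose to hold ourselves to.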
But XHTML 2 was going to take the error-handling model of XML and apply it to HTML. This is the error handling model of XML: if the parser encounters a single error, don’t render the document.
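To make that concrete (an illustrative example of my own): a single unescaped ampersand is enough. Serve the page below with the application/xhtml+xml content type and it refuses to render at all; serve it as plain old HTML and the browser shrugs and renders it anyway.

    <p>Fish &amp; chips</p>  <!-- fine in both HTML and XHTML -->
    <p>Fish & chips</p>      <!-- a fatal, page-stopping error under XML parsing -->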
Of course nobody agreed to this. Browsers didn’t agree to implement XHTML 2. Developers didn’t agree to use it. It ceased to exist.
It turns out that creating a format is relatively straightforward. But how do you turn something into a standard? The really hard part is getting agreement.
Sturgeon’s Law states:
90% of everything is crap.
Coincidentally, 90% is also the percentage of the world’s crap that gets transported by ocean. Your clothes, your food, your furniture, your electronics …chances are that at some point they were transported within an intermodal container.
These shipping containers are probably the most visible—and certainly one of the most important—standards in the physical world. Before the use of intermodal containers, loading and unloading cargo from ships was a long, laborious, and dangerous task.
Along came Malcom McLean who realised that the whole process could be made an order of magnitude more efficient if the cargo were stored in containers that could be moved from ship to truck to train.
But he wasn’t the only one. The movement towards containerisation was already happening independently around the world. But everyone was using different sized containers with different kinds of fittings. If this continued, the result would be a tower of Babel instead of smoothly running global logistics.
Malcom McLean and his engineer Keith Tantlinger designed two crate sizes—20ft and 40ft—that would work for ships, trucks, and trains. Their design also incorporated an ingenious twistlock mechanism to secure containers together. But the extra step that would ensure that their design would win out was this: Tantlinger convinced McLean to give up the patent rights.
This wasn’t done out of any hippy-dippy ideology. These were hard-nosed businessmen. But they understood that a rising tide lifts all boats, and they wanted all boats to be carrying the same kind of containers.
Without the threat of a patent lurking beneath the surface, ready to torpedo the potential benefits, the intermodal container went on to change the world economy. (The world economy is very large and intangible.)
The World Wide Web also ended up changing the world economy, and much more besides. And like the intermodal container, the World Wide Web is patent-free.
Again, this was a pragmatic choice to help foster adoption. When Tim Berners-Lee and his colleague Robert Cailliau were trying to get people to use their World Wide Web project they faced some stiff competition. Lots of people were already using Gopher. Anyone remember Gopher?
The seemingly unstoppable growth of the Gopher protocol was somewhat hobbled in the early ’90s when the University of Minnesota announced that it was going to start charging fees for using it. This was a cautionary lesson for Berners-Lee and Cailliau. They wanted to make sure that CERN didn’t make the same mistake.
On April 30th, 1993, the code for the World Wide Web project was made freely available.
This is for everyone.
If you’re trying to get people to adopt a standard or use a new hypertext system, the biggest obstacle you’re going to face is inertia. As the brilliant computer scientist Grace Hopper used to say:
The most dangerous phrase in the English language is “We’ve always done it this way.”
Rear Admiral Grace Hopper waged war on business as usual. She was well aware of how arbitrary business as usual is. Business as usual is simply the current state of our consensus reality. She said:
Humans are allergic to change.
I try to fight that.
That’s why I have a clock on my wall that runs counter‐clockwise.
Our clocks are a perfect example of a ubiquitous but arbitrary convention. Why should clocks run clockwise rather than counter-clockwise?
One neat explanation is that clocks are mimicking the movement of a shadow across the face of a sundial …in the Northern hemisphere. Had clocks been invented in the Southern hemisphere, they would indeed run counter-clockwise.
But on the clock face itself, why do we carve up time into 24 hours? Why are there 60 minutes in an hour? Why are there 60 seconds in a minute?
It probably all goes back to Babylonian accountants. Early cuneiform tablets show that they used a sexagesimal system for counting—that’s because 60 is the lowest number that can be divided evenly by 6, 5, 4, 3, 2, and 1.
But we don’t count in base 60; we count in base 10. That in itself is arbitrary—we just happen to have a total of ten digits on our hands.
So if the sexagesimal system of telling time is an accident of accounting, and base ten is more widespread, why don’t we switch to a decimal timekeeping system?
It has been tried. The French revolution introduced not just a new decimal calendar—much neater than our base 12 calendar—but also decimal time. Each day had ten hours. Each hour had 100 minutes. Each minute had 100 seconds. So much better!
It didn’t take. Humans are allergic to change. Sexagesimal time may be arbitrary and messy but …we’ve always done it this way.
Incidentally, this is also why I’m not holding my breath in anticipation of the USA ever switching to the metric system.
Instead of trying to completely change people’s behaviour, you’re likely to have more success by incrementally and subtly altering what people are used to.
That was certainly the case with the World Wide Web.
The Hypertext Transfer Protocol sits on top of the existing TCP/IP stack.
The key building block of the web is the URL. But instead of creating an entirely new addressing scheme, the web uses the existing Domain Name System.
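A single ordinary link shows just how much was borrowed rather than invented (the annotations are mine; this is a sketch, not something from the original):

    <!-- One URL, three borrowed agreements: -->
    <a href="http://info.cern.ch/hypertext/WWW/TheProject.html">the first web page</a>
    <!-- http:          the transfer protocol, layered on top of TCP/IP         -->
    <!-- info.cern.ch:  a hostname, resolved by the existing Domain Name System -->
    <!-- /hypertext/…:  a path on that host, fetched with an HTTP request       -->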
Then there’s the lingua franca of the World Wide Web. These elements probably look familiar to you:
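(A representative handful, reconstructed rather than copied from the original:)

    <title>…</title>
    <h1>…</h1>
    <p>…</p>
    <ol>
      <li>…</li>
      <li>…</li>
    </ol>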
You recognise this language, right? That’s right—it’s SGML. Standard Generalised Markup Language.
Specifically, it’s CERN SGML—a flavour of SGML that was already popular at CERN when Tim Berners-Lee was working on the World Wide Web project. He used this vocabulary as the basis for the HyperText Markup Language.
Because this vocabulary was already familiar to people at CERN, convincing them to use HTML wasn’t too much of a hard sell. They could take an existing SGML document, change the file extension to .htm and it would work in one of those newfangled web browsers.
In fact, HTML worked better than expected. The initial idea was that HTML pages would be little more than indices that pointed to other files containing the real meat and potatoes of content—spreadsheets, word processing documents, whatever. But to everyone’s surprise, people started writing and publishing content in HTML.
Was HTML the best format? Far from it. But it was just good enough and easy enough to get the job done.
It has since changed, but that change has happened according to another design principle:
Evolution, not revolution
From its humble beginnings with the handful of elements borrowed from CERN SGML, HTML has grown to encompass an additional 100 elements over its lifespan. And yet, it’s still technically the same format!
This is a classic example of the paradox called the Ship of Theseus, also known as Trigger’s Broom.
You can take an HTML document written over two decades ago, and open it in a browser today.
Even more astonishing, you can take an HTML document written today and open it in a browser from two decades ago. That’s because the error-handling model of HTML has always been to simply ignore any tags it doesn’t recognise and render the content inside them.
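A quick sketch of that error-handling model in action (my example, not one from the original): hand a twenty-year-old browser some present-day markup and it skips the tags it has never heard of, but keeps the content.

    <!-- <main> and <mark> don't exist as far as an old browser is concerned… -->
    <main>
      <p>…but this paragraph still renders: the unknown tags are ignored
      and the text inside them is displayed (just without any highlighting
      on <mark>this bit</mark>).</p>
    </main>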
That pattern of behaviour is a direct result of the design principle:
Degrade Gracefully
…document conformance requirements should be designed so that Web content can degrade gracefully in older or less capable user agents, even when making use of new elements, attributes, APIs and content models.
Here’s a picture from 2006.
That’s me in the cowboy hat—the picture was taken in Austin, Texas. This is an impromptu gathering of people involved in the microformats community.
Microformats, like any other standards, are sets of agreements. In this case, they’re agreements on which class values to use to mark up some of the missing elements from HTML—people, places, and events. That’s pretty much it.
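For example, marking up a person might look something like this (a sketch using the h-card vocabulary; the name and URL are placeholders, not anything from the original):

    <!-- Plain HTML, plus some agreed-upon class values -->
    <div class="h-card">
      <a class="p-name u-url" href="https://example.com">Jane Doe</a>,
      <span class="p-locality">Brighton</span>
    </div>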
And yes, they do have design principles—some very good ones—but that’s not why I’m showing this picture.
Some of the people in this picture—Tantek Çelik, Ryan King, and Chris Messina—were involved in the creation of BarCamp, a series of grassroots geek gatherings.
BarCamps sound like they shouldn’t work, but they do. The schedule for the event is arrived at collectively at the beginning of the gathering. It’s kind of amazing how the agreement emerges—rough consensus and running events.
In the run-up to a BarCamp in 2007, Chris Messina posted this message to the fledgeling social networking site, twitter.com:
how do you feel about using # (pound) for groups. As in #barcamp [msg]?
This was when tagging was all the rage. We were all about folksonomies back then. Chris proposed that we would call this a “hashtag”.
I wasn’t a fan:
Thinking that hashtags disrupt the reading flow of natural language. Sorry @factoryjoe
But it didn’t matter what I thought. People agreed to this convention, and after a while Twitter began turning the hashtagged words into links.
In doing so, they were following another HTML design principle:
Pave the cowpaths
It sounds like advice for agrarian architects, but its meaning is clarified:
When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.
Twitter had previously paved a cowpath when people started prefacing usernames with the @ symbol. That convention didn’t come from Twitter, but they didn’t try to stop it. They rolled with it, and turned any username prefaced with an @ symbol into a link.
The @ symbol made sense because people were used to using it from email. The choice to use that symbol in email addresses was made by Ray Tomlinson. He needed a symbol to separate the person and the domain, looked down at his keyboard, saw the @ symbol, and thought “that’ll do.”
Perhaps Chris followed a similar process when he proposed the symbol for the hashtag.
It could have just as easily been called a “number tag” or “octothorpe tag” or “pound tag”.
This symbol started life as a shortcut for “pound”, or more specifically “libra pondo”, meaning a pound in weight. Libra pondo was abbreviated to lb when written. That got turned into a ligature ℔ when written hastily. That shape was the common ancestor of two symbols we use today: £ and #.
The eight-pointed symbol was (perhaps jokingly) renamed the octothorpe in the 1960s when it was added to telephone keypads. It’s still there on the digital keypad of your mobile phone. If you were to ask someone born in this millennium what that key is called, they would probably tell you it’s the hashtag key. And if they’re learning to read sheet music, I’ve heard tell that they refer to the sharp notes as hashtag notes.
If this upsets you, you might be the kind of person who rages at the word “literally” being used to mean “figuratively” or supermarkets with aisles for “10 items or less” instead of “10 items or fewer”.
Tough luck. The English language is agreement. That’s why English dictionaries exist not to dictate usage of the language, but to document usage.
It’s much the same with web standards bodies. They don’t carve the standards into tablets of stone and then come down the mountain to distribute them amongst the browsers. No, it’s what the browsers implement that gets carved in stone. That’s why it’s so important that browsers are in agreement. In the bad old days of the browser wars of the late 90s, we saw what happened when browsers implemented their own proprietary features.
Standards require interoperability.
Interoperability requires agreement.
So what can we learn from the history of standardisation?
Well, there are some direct lessons from the HTML design principles.
The priority of constituencies
Consider users over authors…
Listen, I want developer convenience as much as the next developer. But never at the expense of user needs.
I’ve often said that if I have the choice between making something my problem, and making it the user’s problem, I’ll make it my problem every time. That’s the job.
I worry that these days developer convenience is sometimes prized more highly than user needs. I think we could all use a priority of constituencies on every project we work on, and I would hope that we would prioritise users over authors.
Degrade gracefully
Web content can degrade gracefully in older or less capable user agents…
I know that I go on about progressive enhancement a lot. Sometimes I make it sound like a silver bullet. Well, it kinda is.
I mean, you can’t just buy a bullet made of silver—you have to make it yourself. If you’re not used to crafting bullets from silver, it will take some getting used to.
Again, if developer convenience is your priority, silver bullets are hard to justify. But if you’re prioritising users over authors, progressive enhancement is the logical methodology to use.
Evolution, not revolution
It’s a testament to the power and flexibility of the web that we don’t have to build with progressive enhancement. We don’t have to build with a separation of concerns like structure, presentation, and behaviour.
We don’t have to use what the browser gives us: buttons, dropdowns, hyperlinks. If we want to, we can make these things from scratch using JavaScript, divs and ARIA attributes.
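Here’s roughly what that trade-off looks like (a sketch of my own, not code from the original):

    <!-- What the browser gives us: focusable, keyboard-operable,
         announced as a button by assistive technology -->
    <button type="button">Save</button>

    <!-- The from-scratch version: every one of those behaviours
         has to be rebuilt by hand, plus JavaScript for click and
         Enter/Space key handling -->
    <div role="button" tabindex="0">Save</div>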
But why do that? Is it because those native buttons and dropdowns might be inconsistent from browser to browser?
Consistency is not the purpose of the world wide web.
Universality is the key principle underlying the web.
Our patterns should reflect the intent of the medium.
Use what the browser gives you—build on top of those agreements. Because that’s the bigger lesson to be learned from the history of web standards, clocks, containers, and hashtags.
Our world is made up of incremental improvements to what has come before. And that’s how we will push forward to a better tomorrow: by building on top of what we already have instead of trying to create something entirely from scratch. And by working together to get agreement instead of going it alone.
The future can be a frightening prospect, and I often get people asking me for advice on how they should prepare for the web’s future. Usually they’re thinking about which programming language or framework or library they should be investing their time in. But these specific patterns matter much less than the broader principles of working together, collaborating and coming to agreement. It’s kind of insulting that we refer to these as “soft skills”—they couldn’t be more important.
Working on the web, it’s easy to get downhearted by the seemingly ephemeral nature of what we build. None of it is “real”; none of it is tangible. And yet, looking at the history of civilisation, it’s the intangibles that survive: ideas, philosophies, culture and concepts.
The future can be frightening because it is intangible and unknown. But like all the intangible pieces of our consensus reality, the future is something we construct …through agreement.
Now let’s agree to go forward together to build the future web!