Remy was on the team too. He did the heavy lifting of actually making the thing work—quite an achievement in just five days!
Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.
Remy and I were both keen to talk about the work, which is why we did a joint talk at Fronteers in Amsterdam that year. We’re both quite sceptical of talks given by duos; people think it means it’ll be half the work, when actually it’s twice the work. In the end we came up with a structure for the talk that we both liked:
Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.
After putting so much work into the talk, we were keen to give it again somewhere. We had the chance to do that in Nottingham in early March 2020. (cue ominous foreboding)
The folks from local Brighton meetup Async had also asked if we wanted to give the talk. We were booked in for May 2020. (ominous foreboding intensifies)
We all know what happened next. The Situation. Lockdown. No conferences. No meetups.
But technically the talk wasn’t cancelled. It was just postponed. And postponed. And postponed. Before you know it, five years have passed.
Part of the problem was that Async is usually on the first Thursday of the month and that’s when I host an Irish music session in Hove. I can’t miss that!
I really enjoyed giving the talk and the discussion that followed. There was a good buzz.
It also made me appreciate the work that we put into structuring the talk. We’ve only given it a few times but, with a five-year gap between presentations, I can confidently say that it’s a timeless topic.
The primary goals of this strategy are to inform decision-making and enhance the success of accessibility-related activities within the GOV.UK Design System team.
Interestingly, accessibility concerns are put into two categories: theoretical and evidenced (with the evidenced concerns being prioritised):
Theoretical: A question or statement regarding the accessibility of an implementation within the Design System without evidence of real-world impact.
Evidenced: Sharing new research, data or evidence showing that an implementation within the Design System could cause barriers for disabled people.
I wonder what kinds of conditions would need to be true for another platform to be built in a similar way? Lots of people have tried, but none of them have the purity of participation for the love of it that the web has.
Thirty years later, it is easy to overlook the web’s origins as a tool for sharing knowledge. Key to Tim Berners-Lee’s vision were open standards that reflected his belief in the Rule of Least Power, a principle that choosing the simplest and least powerful language for a given purpose allows you to do more with the data stored in that language (thus, HTML is easier for humans or machines to interpret and analyze than PostScript). Along with open standards and the Rule of Least Power, Tim Berners-Lee wanted to make it easy for anyone to publish information in the form of web pages. His first web browser, named Nexus, was both a browser and editor.
I was supposed to be in Plymouth yesterday, giving the opening talk at this year’s Future Sync conference. Obviously, that train journey never happened, but the conference did.
The organisers gave us speakers the option of pre-recording our talks, which I jumped on. It meant that I wouldn’t be reliant on a good internet connection at the crucial moment. It also meant that I was available to provide additional context—mostly in the form of a deluge of hyperlinks—in the chat window that accompanied the livestream.
The whole thing went very smoothly indeed. Here’s the video of my talk. It was The Layers Of The Web, which I’ve only given once before, at Beyond Tellerrand Berlin last November (in the Before Times).
As well as answering questions in the chat room, people were also asking questions in Sli.do. But rather than answering those questions there, I was supposed to respond in a social medium of my choosing. I chose my own website, with copies syndicated to Twitter.
Actually, I think the original WWW project got things mostly right. If anything, I’d correct what came later: cookies and JavaScript—those two technologies (which didn’t exist on the web originally) are the source of tracking & surveillance.
The one thing I wish had been done differently is I wish that JavaScript were a same-origin technology from day one:
Great question! Yes, there are limits, but we’re generally talking megabytes here. It varies from browser to browser and depends on the available space on the device.
But files stored using the Cache API are less likely to be deleted than files stored in the browser cache.
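If you want to see the numbers for yourself, the Storage API will give you a rough estimate of the quota for your origin, and you can request persistent storage to make eviction less likely. A quick sketch (support varies from browser to browser):

// Ask for an estimate of how much storage is available to this origin.
if (navigator.storage && navigator.storage.estimate) {
  navigator.storage.estimate().then(({usage, quota}) => {
    console.log(`Using ${usage} bytes of roughly ${quota} available.`);
  });
}

// Ask the browser not to evict this origin’s storage under pressure.
if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then((granted) => {
    console.log(granted ? 'Storage marked as persistent.' : 'Storage may still be cleared.');
  });
}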
More worrying is the announcement from Apple to only store files for a week of browser use:
The relative timelines of Gopher and HTTP are reflected in their default port numbers: Gopher is 70, HTTP is 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.
Kimberly was spelunking down the original source code, when she came across this line in the HTUtils.h file:
#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */
We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).
Then he told us something interesting about the next line of code:
#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */
Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.
Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.
Those were the last four digits of his parents’ phone number.
Like a little mini CSS Zen Garden, here’s one component styled five very different ways.
Crucially, the order of the markup isn’t dictated by the appearance—it’s concerned purely with what makes sense semantically. And now, with CSS grid, elements can be rearranged regardless of source order.
CSS is powerful and capable of doing amazingly beautiful things. Let’s embrace that and keep the HTML semantic instead of adapting it to the needs of the next design change.
Guten Morgen. All right. I’m just going to get started because I’ve got a lot to talk about and I’m very, very excited to be here.
I’m excited to talk about the web. I’ve been thinking a lot about the web. You know, I think a lot about the web all the time, but this year, in particular, thinking about where the web came from; asking myself where the web came from, which is kind of a dumb question because it’s pretty obvious where the web came from.
It came from this guy. This is Tim Berners-Lee and he is the creator of the World Wide Web. It was 30 years ago, March 1989, that he wrote a proposal while he was at CERN, a very dull-looking proposal called “Information Management: A Proposal” that had incomprehensible diagrams trying to explain what he had in mind. But a supervisor, Mike Sendall, saw the potential and scrawled across the top, “Vague but exciting.”
Tim Berners-Lee starts working on this idea he has for a global hypertext system and he starts creating the world’s first web browser and the world’s first web server, which is this NeXT machine which is in the Science Museum in London, a lovely machine, the NeXT box.
I have a great affection for it because, earlier this year, I was very honored to be invited to CERN, along with this bunch of hackers, to take part in a project related to the 30th anniversary of that proposal. I will show you a video that explains the project.
So, we came to CERN this week in order to create some sort of modern-day interpretation of the very first web browser.
—Kimberly Blessing
Well, the project is to restore the first browser which was developed by the inventor of the Web, and the idea is to create an experience for the people who could not use the web in its early days to have an idea how it felt to use the web at that time.
—Martin Akolo Chiteri
I think the biggest difficulty was to make the browser work in the NeXT machine that we had.
—Angela Ricci
We really needed to work with an original NeXT box in order to really understand what that experience was like in order to be able to write some code and replicate that experience.
—Kimberly Blessing
My role is code, so generating the code to create the interactive aspect of the World Wide Web browser, the recreated browser. It’s very much writing JavaScript to kind of create all the NeXT operating system UI, making requests to servers to go and get the HTML and massage the HTML back into a format that looks good in the World Wide Web browser; and making sure we end up with a URL that goes into production that someone can visit and see their own webpages. The tangible software is what I’m responsible for, so I have to make sure it all gets done. Otherwise, we have no browser to look at, basically.
—Remy Sharp
We got together a few years back to do a similar sort of hack project here at CERN which was creating the world’s second-ever web browser, which was the Line Mode browser. We had a lot of fun with it and it’s a great bunch of people from all over the world. It’s been really great to get back together and it’s always amazing to be here at CERN, to be at not just the birthplace of the Web, but the most important place on the planet for science.
Yeah, it’s been a lot of fun. I kind of don’t want it to be over because we are in our element, hacking away, having fun, and just soaking up the atmosphere, and we are getting to chat with people who were there 30 years ago, Jean-François Groff and Robert Cailliau, these people who were involved in the creation of the World Wide Web. To me, that’s amazing to be surrounded by so much World Wide Web history.
The plan is that this will go online and anyone will be able to access it because it’s on the web, and that’s the beautiful thing about the web is that anyone can visit a website, and so everyone will have the opportunity to try using the world’s first web browser and see what modern webpages would look like if they were passed through this first web browser.
—Jeremy Keith
Well, spoiler alert. The project was a success and you can, indeed, look at your websites in a recreation of the first-ever web browser. This is the URL. It’s worldwideweb.cern.ch.
Success, that was good. But as you could probably tell from that video, Remy was the one basically making this all happen. He was the one writing the JavaScript to recreate this in a modern browser. This is the first-ever web page viewed in the first-ever web browser.
As you gathered, again, I was really fascinated by the history of the Web, like, where did it come from, and the people who were there at the time and getting to pick their brains. I spent most of my time working on the accompanying website to go with this project. I was creating this timeline.
Because this was to mark the 30th anniversary of this proposal, I thought, well, we could easily look at what has happened in the last 30 years: websites, web servers, formats, standards - all that stuff. But I thought it would be fascinating to look at the previous 30 years as well and try and figure out the things that were happening that influenced Tim Berners-Lee in terms of hypertext, networks, computing, and all this stuff.
But I’d kind of given myself this arbitrary cut-off point of 30 years to make this nice symmetry of it being the 30th anniversary of the World Wide Web. I could go further back. I could start asking, well, what happened before 30 years ago? What were the biggest influences on Tim Berners-Lee and the World Wide Web?
Now, if you were to ask Tim Berners-Lee himself who his biggest influences were, he would give you a straight-up answer. He would say his biggest influences were Conway Berners-Lee and Mary Lee Woods, his father and mother, which is fair enough. Normally, when you ask people who their influences are, they say, “Oh, my parents. They gave me a loving environment. They kindled my curiosity,” and all that stuff.
I’m sure that’s true but, in this case, it was also a big influence in a practical sense in that both Mary and Conway worked on the Ferranti Mark 1. That’s where they met. They were programmers. Tim Berners-Lee’s parents were programmers on the Ferranti Mark 1, a very early computer. This is in the 1950s in Britain.
Okay, this feels like a good origin story for the web, right? They were working on this early computer.
But it’s an early computer; it’s not the first computer. Maybe I need to go back further. How far back do I go to find the first computer?
Is this the first computer, the Antikythera mechanism? You can see this in a museum in Athens. This was recovered from a shipwreck. It was recovered at the start of the 20th Century, but it dates back thousands of years, a mechanism for predicting the position of stars and planets. It does calculations. It is a calculating device. Not a programmable computer as such, though.
If you’re thinking about the origins of the idea of a programmable computer, I think we could start to look at this gentleman, Charles Babbage. This is half of Charles Babbage’s brain, which is in the Science Museum in London along with that original NeXT box that the World Wide Web was created on. The other half is in the Computer History Museum in California.
Charles Babbage lived in the 19th Century, and kind of got a lot of seed funding from the U.K. government to build a device, the Difference Engine, which would do calculations. Later on, he scrapped that and started working on the Analytical Engine which would be even better — a 2.0 version. It never got finished, by the way, but it was a really amazing idea because you could see the architecture of like a central processing unit, but it was still fundamentally a calculator, a calculating machine.
The breakthrough in terms of programming maybe came from Charles Babbage’s collaborator. This is Ada Lovelace. She was translating documents by an Italian mathematician about Difference Engines and calculations. She realized that—hang on—if we’re doing operations on numbers, what if those numbers could stand for other concepts, non-numerical like words or thoughts? Then we could do operations on things other than numbers, which is exactly what we do today in modern computing.
If you use a word processor, you’re not really processing words; you’re operating on ones and zeros. If you use a graphics program, you’re not actually moving pixels around; you’re operating on ones and zeros. This idea that ones and zeros could stand for things other than numbers kind of started with Ada Lovelace.
But, as I said, the Difference Engine and the Analytical Engine, they never got finished, and this was kind of a dead-end. It turns out, they weren’t an influence.
Later on, for example, this genius who was definitely responsible for the first working computers, Alan Turing, he wasn’t aware of the work of Babbage and Lovelace, which is a shame. He was kind of working in isolation.
He came up with the idea of the universal machine, the Turing Machine. Give it an infinitely long tape and enough state, enough time, you could calculate literally anything, which is pretty much what computers are.
He was working at Bletchley Park breaking the code for the Enigma machines, and that leads to the creation of what I think would be the first programmable computer. This is Colossus at Bletchley Park. This was created by a colleague of Turing, Tommy Flowers.
It is programmable. It’s using valves, but it’s absolutely programmable. It was top secret, so even for years after the war, this was not known about. In the history books, even to this day, you’ll often see ENIAC listed as the first programmable computer, but I think that honor goes to Tommy Flowers and Colossus.
By the way, Alan Turing, after the war, after 1945, he did go on to work and keep on working in the field of computing. In fact, he worked as a consultant at Ferranti. He was working on the Ferranti Mark 1, the same computer where Tim Berners-Lee’s parents met when they were programmers.
As I say, that was after the war ended in 1945. Now, we can’t say that the work at Bletchley Park was responsible for winning the war, but we could probably say that it’s certainly responsible for shortening the war. If it weren’t for the work done by the codebreakers at Bletchley Park, the war might not have finished in 1945.
1945 is the year that this gentleman wrote a piece that was certainly influential on Tim Berners-Lee. This is Vannevar Bush, a scientist, a thinker. In 1945, under the heading “A Scientist Looks at Tomorrow,” he published a piece in the Atlantic Monthly called “As We May Think.”
In this piece, he describes an imaginary device. It’s a mechanical device inside a desk, and the operator is allowed to work on reams and reams of microfilm and to connect ideas together, make these associative trails. This is kind of like hypertext before the word hypertext has been coined. Vannevar Bush calls this device the Memex. That’s published in 1945.
Also, in 1945, this young man has been drafted into the U.S. Navy and he’s shipping out to the Pacific. His name is Douglas Engelbart. Literally as the ship is leaving the harbor to head to the Pacific, word comes through that the war is over.
Now, he still gets shipped out to the Pacific. He’s in the Philippines. But now, instead of fighting against the Japanese, he’s lounging around in a hut on stilts reading magazines and that’s where he reads “As We May Think,” by Vannevar Bush.
Fast-forward years later; he’s trying to decide what to do with his life other than settle down, get married, have a job, you know, that kind of thing. He thinks, “No, no, I want to make the world a better place.” He realizes that computers could be the way to do this if they could implement something very much like the Memex. Instead of a mechanical device, what if computers could create the Memex, this hypertext system? He devotes his life to this and effectively invents the field of Human Computer Interaction.
On December 9th, 1968, he demonstrates what he’s been working on. This is in San Francisco, and he demonstrates bitmap screens. He demonstrates real-time collaboration on documents, working hypertext …and also he invents the mouse for the demo.
We have a pointing device called a mouse, a standard keyboard, and a special key set we have here. And we are going to go for a picture down on our laboratory in Menlo Park and pipe it up. It’ll show you, from another point of view, more about how that mouse works.
Come in, Menlo Park. Okay, there’s Don Anders’ hand in Menlo Park. In a second, we’ll see the screen that he’s working and the way the tracking spot moves in conjunction with movements of that mouse. I don’t know why we call it a mouse sometimes. I apologize; it started that way and we never did change it.
—Douglas Engelbart
This was ground-breaking. The mother of all demos, it came to be known as. This was a big influence on Tim Berners-Lee.
At this point, we’ve entered the time cone of those 30 years before the proposal that Tim Berners-Lee made, which is good because this is the moment where I like to branch off from this timeline and sort of turn it around.
The question I’m sure nobody is asking—because you saw there was a video link-up there; Douglas Engelbart is in San Francisco, and he has a video link-up with Menlo Park to demonstrate real-time collaboration with computers—the question nobody is asking is, who is operating the video camera in Menlo Park?
Well, I’ll tell you the answer to that question that nobody is asking. The man operating the video camera in Menlo Park is this man. His name is Stewart Brand. Now, Stewart Brand has spent most of the ‘60s doing what you would do in the ‘60s; he was dropping acid. This was all kosher. This was before it was illegal.
He was on the Merry Pranksters bus with Ken Kesey and, on one particular acid trip, he literally saw the Earth curving away and realized that, yeah, we’re all on one planet, man! He started a campaign with badges asking, “Why haven’t we seen a photograph of the whole Earth yet?” I like the “yet” part in there, like it’s a conspiracy that we haven’t seen a photograph of the whole Earth.
He was kind of onto something here, realizing that seeing our planet as a whole planet from space could be a consciousness-changing thing much like LSD is a consciousness-changing thing. Sure enough, people did talk about the effect it had when we got photographs like Earthrise from Apollo 8, and he used those pictures when he published the Whole Earth Catalog, which was a series of books.
The Whole Earth Catalog was basically like Wikipedia before the internet. It was this big manual of how to do everything. The idea was, if you were running a commune, living in a commune, you needed to know about technology, and agriculture, and weather, and all the stuff, and you could find that in the Whole Earth Catalog.
He was quite an influential guy, Stewart Brand. You’ve probably heard the Steve Jobs commencement speech where he quotes Stewart Brand, “Stay hungry, stay foolish,” all that stuff.
Stewart Brand also did a lot of writing. After Douglas Engelbart’s demo, he started to see that this computer thing was something else. He literally said computers are the new LSD, so he starts really investigating computing and computers.
He writes this great article in “Rolling Stone” magazine in 1972 about Spacewar, one of the first games you could play on a screen. But he has a wide range of interests. He kind of kicked off the environmental movement in some ways.
At one point, he writes a book about architecture. He writes a book called “How Buildings Learn.” There’s a television series that goes with it as well. This is a classic book (the definition of a classic book being a book that everyone has heard of and nobody has read).
In this book, he starts looking at the work of a British architect called Frank Duffy. Frank Duffy has this idea about architecture he calls shearing layers. The way that Frank Duffy puts it is that a building, properly conceived, consists of several layers of longevity, so kind of different rates of change.
He diagrams this out in terms of a building, and you see that you’ve got the site that the building is on that’s moving at a geological timescale, right? That should be around for thousands of years, we would hope.
Then you’ve got the actual structure that could stand for centuries.
Then you get into the infrastructure inside. You know, the plumbing and all that, you probably want to swap out every few decades.
Basically, until you get down to the stuff inside a room, the furniture that you can move around on a daily basis. You’ve got all these timescales moving from slow to fast as you move inwards into the house.
What I find fascinating about this idea of these different layers as well is the way that each layer depends on the layer below. Like, you can’t have the structure of a building without first having a site to put it on. You can’t move furniture around inside a room until you’ve made the room using the walls and the doors, right? This idea of shearing layers is kind of fascinating, and we’re going to get back to it.
Something else that Stewart Brand went on to do; he was one of the co-founders of the Long Now Foundation. Anybody here part of the Long Now Foundation? Any members of the Long Now Foundation?
Ah… It’s a great organization. It’s literally dedicated to long-term thinking. It was founded by Stewart Brand and Danny Hillis, the computer scientist, and Brian Eno, the musician and producer. Like I said, dedicated to long-term thinking. This is my membership card, made out of a durable metal because it’s got to last for thousands of years.
If you go on the website of the Long Now Foundation, you’ll notice that the years are made up of five digits, so instead of 2019, it will be 02019. Well, you know, you’ve got to solve the Y10K problem. They’re dedicated to long-term thinking, to trying to think in the longer now.
One of the most famous projects is the Clock of the Long Now. This is a clock that will tell time for 10,000 years. Brian Eno has done the chimes. They’re generative. It’ll never chime the same way twice. It chimes once a century. This is a scale model that’s in the Science Museum in London, along with half of Charles Babbage’s brain and the original NeXT machine that Tim Berners-Lee created the World Wide Web on.
This is just a scale model. The full-sized clock is going to be inside a mountain in west Texas. You’ll be able to visit it. It’ll be like a pilgrimage. Construction is underway. I hope to visit the clock one day.
It’s a really fascinating project when you think about it: how do you design something to last 10,000 years? How do you communicate over 10,000 years? It’s one of those tricky design problems, almost like the Voyager Golden Record or the Yucca Mountain waste disposal site. How do you communicate to future generations? You can’t rely on language. You can’t rely on semiotics.
Anyway, Stewart Brand collected a lot of his thoughts into this book called “The Clock of the Long Now,” subtitled “Time and Responsibility: The Ideas Behind the World’s Slowest Computer.” He’s thinking about time. That’s when he comes back to shearing layers and these different layers of rates of change; different layers of time.
Stewart Brand abstracts the idea of shearing layers into something called “pace layers.” What if it’s not just architecture? What if any kind of system has these different rates of change, these layers?
He diagrams this out in terms of the human species, so think of humans. We have these different layers that we operate at.
At the lowest, slowest level, there is our nature, literally, like what makes us human in terms of our DNA. That doesn’t change for tens of thousands of years. Physiologically, there’s no difference between a caveman and an astronaut, right?
Then you’ve got culture, which accumulates over centuries, and the tribal identities we have around things like nations, language, and things like that.
Governance, models of governance, so not governments but governance, as in the way we choose to run things, whether that’s a feudal society, or a monarchy, or representative democracy, right? Those things do change, but not too fast, hopefully.
Infrastructure: you’ve got to keep up with the times, you know? This needs to move at a faster pace, again.
Commerce: much more fast-moving. Commerce needs to — you’re getting into the faster timescales there.
Then he puts fashion at the top. By fashion, he means anything that is supposed to be new and exciting, so that includes pop music, for example. The whole idea with fashion is that it’s there to try stuff out and discard it very quickly.
“What about this?” “No.” “What about that?” “Try this.” “No, try that.”
The good stuff, the stuff that kind of sticks to the wall, will maybe find its way down to the longer-lasting layers. Maybe a really good pop song from fashion ends up becoming part of culture, over time.
Here’s the way that Stewart Brand describes pace layers. He says:
Fast learns; slow remembers. Fast proposes; slow disposes. Fast is discontinuous; slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution, but slow and big controls small and fast by constraint and constancy.
He says:
Fast gets all of our attention but slow has all the power.
Pace layers is one of those ideas that, once you see it, you can’t unsee it. You know when you want to make someone’s life a misery, you just teach them about typography. Now they can’t unsee all the terrible kerning in the world. I can’t unsee pace layers. I see them wherever I look.
Does anyone remember this book, UX designers in the room, “The Elements of User Experience,” by Jesse James Garrett? It’s old now; we’re going back a bit. But, in it, he’s got this diagram about the different layers of a user experience. You’ve got the strategy at the bottom, and that finally ends up with an interface at the top.
I look at this, and I go, “Oh, right. It’s pace layers. It’s literally pace layers.” Each layer depending on the layer below, the slower layers at the bottom, the faster-moving things at the top.
With this mindset that pace layers are everywhere, I thought, “Can I map out the web in terms of pace layers, the technology stack of the web?” I’m going to give it a go.
At the bottom of the stack, the slowest moving layer, I would say there’s the internet itself, as in TCP/IP: the Transmission Control Protocol and the Internet Protocol, created by Bob Kahn and Vint Cerf in the ‘70s and pretty much unchanged since then. Deliberately dumb, deliberately simple: all it does is move packets around. Pretty much unchanged.
On top of that, you get the other protocols that use TCP/IP, like in the case of the web, the hypertext transfer protocol. Now, this has changed over time. We now have HTTP/2. But it hasn’t been rapid change. It’s been gradual. Again, that kind of feels right. It feels good that HTTP isn’t constantly changing underneath us too much.
Then what we serve up over HTTP are URLs. I wish that URLs were down here. I wish that URLs were everlasting, never changing. But, unfortunately, I must acknowledge that that’s not true. Links die. We have to really work hard to keep them alive. I think we should work hard to keep them alive.
What do you put at those URLs? At the simplest level, it’s supposed to be plain text. But this is the web, so let’s say structured text. This is going to be HTML, the hypertext markup language, which Tim Berners-Lee came up with when he created the World Wide Web. I say, “Came up with.” He basically stole it from SGML that scientists at CERN were already using and sprinkled in one or two new tags, as they were calling it back then.
There were maybe like 20-something tags in HTML when Tim Berners-Lee created the web. Now we’ve got over 100 elements, as we call them. But I feel like I’ve been able to keep up with the pace of change. I mean, the vast big kind of growth spurt with HTML was probably HTML5. That’s been a while back now. It’s definitely change that I can keep on top of.
Then we have CSS, the presentation layer. That feels like it’s been moving at a nice clip lately. I feel like we’ve been getting a lot of cool stuff in CSS, like Flexbox and Grid, and all this new stuff that browsers are shipping. Still, I feel like, yeah, yeah, this is good. It’s right that we get lots of CSS pretty rapidly. It’s not completely overwhelming.
Then there’s the JavaScript ecosystem. I specifically say the “JavaScript ecosystem” as opposed to the “JavaScript language” because the JavaScript language is being developed at a nice pace. I feel like it’s going at a good speed of standardization. But the ecosystem, the frameworks, the libraries, the build tools, all of that stuff, that feels like, “You know what? Try this. No, try that. What about this? What about that? Oh, you’re still using that framework? No, no, we stopped using that last week. Oh, you’re still using that build tool? No, no, no, that’s so … we’ve moved on.”
I find this very overwhelming. Can I get a show of hands of anybody else who feels overwhelmed by this rate of change? All right. Keep your hands up. Keep your hands up and just look around. I want you to see you are not alone. You are not alone.
But I tell you what; after mapping these layers out into the pace layer diagram, I realized, wait a minute. The JavaScript layer, the fashion layer, if you will, it’s supposed to be like that. It’s supposed to be trying stuff out. Throw this at the wall. No, throw that at the wall. How about this? How about that?
It’s true that the good stuff does stick. Like if I think back to the first uses of JavaScript—okay, I’m showing my age, but—when JavaScript first came along, we’d use it for things like image rollovers or form validation, right? These days, if I wanted to do an image rollover—you mouseover something and it changes its appearance—I wouldn’t use JavaScript. I’d use CSS because we’ve got :hover.
If I was doing a form validation, like, “Oh, has that field actually been filled in?” because it’s required and, “Does that field actually look like an email address?” because it’s supposed to be an email address, I wouldn’t even use JavaScript. I would use HTML; input type="email" required. Again, the good stuff moves down into the sort of slower layers. Fast learns; slow remembers.
The other thing I realized when I diagrammed this out was that, “Huh, this kind of maps to how I approach building on the web.” I pretty much take this for granted that it’s going to be on the Internet. There’s not much I can do about that. Then I start thinking about URLs like URL-first design, the information architecture of a site. I think it’s underrated. I think people should create a URL-first design. URL design, in general, I think it’s a really good place to start if you’re building a product or a service.
Then I think about the content in terms of structure. What is the most important thing on this page? That should be an H1. Is this a paragraph? Is it a list? Thinking about the structure first and then going on to think about the appearance, which is definitely the way you want to go if you’re making something responsive. Think about the structure first and then the appearance in all these different form factors.
Then finally, add in behavior with JavaScript. Whatever HTML and CSS can’t do, that’s what I will use JavaScript for to kind of enhance it from there.
This maps really nicely to how I personally approach building things on the web. But, it is a testament to the flexibility of the World Wide Web that, if you don’t want to build in this way, you don’t have to.
If you want, you could build like this. JavaScript is a really powerful language. If you wanted to do navigations and routing in JavaScript, you can. If you want to inject all your content into the page using JavaScript, you can. CSS in JS? You can. Right? I mean, this is pretty much the architecture of a single-page app. It’s on the internet and everything is in JavaScript. The internet is a delivery mechanism for a chunk of JavaScript that does everything: the markup, the CSS, the routing.
This isn’t how I approach building on the web. I kind of ask myself why this doesn’t feel quite right to me. I think it’s because of the way it turns everything into a single point of failure, which is the JavaScript, rather than spreading out those points of failure. We’re on the internet and, as long as the JavaScript runs okay, the user gets everything. It turns what you’re building into a binary proposition: either it doesn’t work at all or it works great. Those are your only two options.
Now I’ll point out that, in another medium, this would make complete sense. Like if you’re building a native app. If you build an iOS app and I’ve got an iOS device, I get 100% of what you’ve designed and built. But if you’re building an iOS app and I have an Android device, I get zero percent of what you’ve designed and built because you can’t install an iOS app on an Android device. Either it works great or it doesn’t work at all; 100% or zero percent.
The web doesn’t have to be like that. If you build in that layered way on the web, then maybe I don’t get 100% of what you’ve designed and built, but I don’t get zero percent either. I’ll get something somewhere along the way, hopefully closer to working great. It goes from not working at all, to just about working, to working fine, to working well, to working great.
You’re building up these layers of experience, the idea being that nobody gets left behind. Everybody gets something regardless of their device, their network, their browser. Everyone is not going to get the same experience, but everybody gets something. That feels very true to the original sort of founding ideas of the web and it maps so nicely to our technical stack on the web, the fact that you can start to think about things like URLs-first and think about the structure, then the presentation, and then the behavior.
I’m not the only one who likes thinking in this layered kind of way when it comes to the web. I’m going to quote my friend Ethan Marcotte. He says:
I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it will change if, say, fonts aren’t available, or JavaScript doesn’t work, or someone doesn’t see the design as you or I might and is having the page read aloud to them.
That’s a really good point that when you build in the layered way, you’re building in the resilience that something can fall back to a layer a little further down.
This brings up something I’ve mentioned here before at Beyond Tellerrand, which is that, when we’re evaluating technologies, the question we tend to ask is, how well does it work? That’s an absolutely valid question. You’re about to try a new tool, a new framework, a new standard. You ask yourself; how well does it work?
I think the more important question to ask is:
How well does it fail?
What happens if that piece of technology fails? That’s why I like this layered approach because this fails really well. JavaScript’s no longer a single point of failure. Neither is CSS, frankly. If the CSS never loaded, the user still gets something.
Now, this brings up an idea, a principle that definitely influenced Tim Berners-Lee. It was at the heart of his design principles for the World Wide Web. It’s called the Principle of Least Power that states, “Choose the least powerful language suitable for a given purpose,” which sounds really counterintuitive. Why would I choose the least powerful language to do something?
It’s kind of down to the fact that there’s a trade-off. With power, you get a fragility, right? Maybe something that is really powerful isn’t as universal as something simpler. It makes sense to figure out the simplest technology you can use to achieve a task.
I’ll give an example from my friend Derek Featherstone. He says:
In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.
Again, he’s talking about the resilience you get by building in a layered way and choosing the least powerful technology.
A classic example is ARIA. The first rule of ARIA is: don’t use ARIA if you don’t have to. Rather than using a div and then adding the event handlers and the ARIA roles to make it look like a button, just use a button. Use the simpler technology lower in the stack.
Now, I get pushback on this because people will tell me, “Well, that’s fine if you’re building something simple, but I’m not building something simple. I’m building something complex.” Everyone likes to think they’re building something complex, right? Everyone is convinced they’re working on really hard things, which makes sense. That’s human nature.
If you’re at a cocktail party and someone says, “What do you do?” and you describe your work and they say, “Oh, okay. That sounds really easy,” you’d be offended, right?
If you’re at a party and someone says, “What do you do?” you describe your work and they go, “Wow, God, that sounds hard,” you’d be like, “Yeah! Yeah, it is hard. What I do is hard.”
I think we gravitate to this, especially when someone markets it as, “This is a serious tool for serious, complex sites.” I’m like, “That’s me. I’m working on a serious, complex site.”
I don’t think the reality is quite like that. Reality is just messier. There’s nothing quite that simple. Very few things are really that complex.
Everything kind of exists on this continuum somewhere along the way. Even the simplest website has some form of interaction, something appy about it.
Then there are those other two terms people use when talking about simple and complex: website and web app, as if you can divide the entirety of the World Wide Web into two categories: websites and web apps.
Again, that just doesn’t make sense to me. I think the truth is that things are messier and schmooshier, somewhere along this continuum between websites and web apps. I don’t get why we even need a separate word. It’s all web stuff.
Though, there is this newer term, “Progressive Web App,” that I kind of like.
Who has heard of Progressive Web Apps? All right.
Who thinks they have a good handle on what a Progressive Web App is?
Okay. See, that’s a lot fewer hands, which is totally understandable because, if you start googling, “What is a Progressive Web App?” you get these Zen-like articles. “It’s a state of mind.” “It’s about rich, native-like interactions, man.”
No. No, it’s not. Worse still, you read, “Oh, a Progressive Web App is a Single Page App.” No, you’ve lost me there. No, it’s not. Or at least it can be, but any website can be a Progressive Web App. You can elevate a website to be a Progressive Web App.
I don’t mean in some sort of Zen-like fashion. I mean using technologies, three particular technologies.
You make sure that website is running HTTPS,
you have a web app manifest that’s a JSON file with metadata, and
you have a service worker that gets installed on the user’s device.
That’s it. These three technologies turn a website into a Progressive Web App — no mystery about it.
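The HTTPS part is server configuration and the manifest is just a JSON file; the service worker is the only bit that involves writing code, and even then the page’s side of it is tiny. A minimal sketch, assuming the service worker script lives at /sw.js (that file name is just a placeholder):

// Register the service worker; browsers without support simply skip this.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then((registration) => {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch((error) => {
      console.error('Service worker registration failed:', error);
    });
}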
The tricky bit is that service worker part. It’s kind of a weird thing because it’s JavaScript but it’s JavaScript that gets installed on the user’s device and acts like a proxy. It intercepts network requests and can do different things like grab things from the cache instead of going to the network.
I’m not going to go into how it works because I’ve written plenty about that in this book “Going Offline,” so if you want to know the code, you can go read the book.
I will say that when I first came across service workers, it totally did my head in because this is my mental model of the web. We’ve got the stack of technologies that we’re building on top of, each layer depending on the layer below. Then service workers come along and say, “Well, actually, you could have a website like this,” where the lowest layer, the network, the Internet goes away and the website still works. Mind is blown!
It took me a while to get my head around that. The service worker file is on the user’s device and, if they’ve got no internet connection, it can still make decisions and serve up something like a custom offline page.
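Roughly speaking, the service worker code for that custom offline page pattern looks something like this. It’s a minimal sketch, assuming a page at /offline.html (a placeholder name) gets put into a cache at install time:

// In the service worker: stash the offline page when the worker is installed.
addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('static').then((cache) => cache.addAll(['/offline.html']))
  );
});

// For page navigations, try the network; if that fails, show the offline page.
addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).catch(() => caches.match('/offline.html'))
    );
  }
});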
Here’s a website I run called huffduffer.com. It’s for making your own podcast out of found sounds. If you’re offline or the website is down, which happens, and you visit Huffduffer, you get this offline page saying, “Sorry, you’re offline.” Not very useful, but it’s branded like the site, okay? It’s almost like the way you have a custom 404 page. Now you can have a custom offline page that matches your site. It’s a small thing, but it can be handy.
We ran this conference in Brighton for two years, Ampersand. It’s a web typography conference. That also has a very simple offline page that just says, “Sorry. You’re offline,” but then it has the bare minimum information you need about the conference like where is this conference happening; what time does it start?
You can imagine a restaurant website having this, an offline page that tells you, “Here is the address. Here are the opening hours.” I would like it if restaurant websites had that information when you’re online as well, but…
You can also have fun with this, like Trivago. Their site relies on search, so there’s not much you can do when you’re offline, so they give you a game to play, the offline maze to keep you entertained.
That’s kind of at the simplest level of what you could do, a custom offline page.
Then at the other end of the scale, I’ve written this book called “Resilient Web Design.” A lot of the ideas I’m talking about here are in this book. The book is a website. You go to the website and you read the book. That’s it. It’s free. You just go to resilientwebdesign.com. I mean free: I don’t ask for your email address and I’m not tracking any information at all.
This is how it looks when you’re online, and then this is how it looks when you’re offline. It is exactly the same. In fact, the moment you visit the website, it basically downloads the whole book.
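Under the hood that’s a cache-first strategy: look in the cache, and only go to the network if something isn’t there. As a rough sketch, the fetch handler in the service worker is about this short:

// Cache-first: answer from the cache if possible, otherwise fall back to the network.
addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cachedResponse) => {
      return cachedResponse || fetch(event.request);
    })
  );
});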
Now, that’s the extreme example. Most websites, you wouldn’t want to do that because you kind of want the HTML to be fresh. This book is never going to get updated. I’m done with it, so I’m totally fine with going straight to the cache and never even going to the network. It’s absolutely offline first. You’re probably going to want something in between those two extremes.
On my own website, adactio.com, if you’re browsing around the website and you’re reading things, that’s all fine. Then what if you lose your internet connection? You get the custom offline page that says, “Sorry, you’re offline,” but then it also shows you the things you’ve previously visited.
You can revisit any of these pages. These have all been cached, so you can cache things as people are browsing around the site. That’s a nice little pattern that a lot of websites could benefit from. It only suffers from the fact that all I can show you is stuff you’ve already seen. You have to have already visited these pages for them to show up in this list.
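The service worker side of that pattern is fairly small too. Something like this, very roughly: go to the network for pages, keep a copy of each successful response, and fall back to the cache (and ultimately the offline page) when the network lets you down. The cache name and /offline.html are placeholders:

// Network-first for page navigations, stashing a copy of each page as it’s visited.
addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') {
    return;
  }
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        const copy = response.clone();
        caches.open('pages').then((cache) => cache.put(event.request, copy));
        return response;
      })
      .catch(() =>
        caches.match(event.request)
          .then((cached) => cached || caches.match('/offline.html'))
      )
  );
});

The offline page itself can then open that same cache and use cache.keys() to list out the pages that have been stashed.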
Another pattern that I think is maybe better from a user experience point of view is when you put the control in the hands of the user.
This website, archive.dconstruct.org, this is what it sounds like. It’s an archive. It’s conference talks.
We ran a conference called dConstruct for 10 years from 2005 to 2015. Breaking news; we’re bringing it back for a one-off event next year, September 2020.
Anyway, all the talks from ten years are online here as audio files. You can browse around and listen to these talks.
You’ll also see that there’s this option to save for offline, exposed in the interface. What that does is it doesn’t just save the page offline; it also saves the audio offline. Then, when you’re on an airplane, or at the bottom of the ocean, or whatever, you can listen to the things you explicitly asked to be saved offline. It’s effectively a podcast player in the browser.
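In terms of code, that “save for offline” control doesn’t need much: when it’s activated, add the page and its audio file to a cache. A hypothetical sketch (the selector, cache name, and audio URL are all made up for illustration):

// When the “save for offline” button is clicked, cache this page and its audio.
document.querySelector('.save-offline').addEventListener('click', () => {
  caches.open('saved-talks')
    .then((cache) => cache.addAll([
      window.location.href,
      '/audio/talk.mp3' // placeholder for the talk’s actual audio file
    ]))
    .then(() => {
      console.log('Saved for offline listening.');
    });
});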
You see there’s a lot of things you can do. There are kind of a lot of layers you can build upon once you have a service worker.
At the very least, you can do caching because that’s the stuff we do anyway, like put this file in the cache, your CSS, your JavaScript, your icons, whatever.
Then think about, well, maybe I should have a custom offline page, even if it’s just for the branding reasons of having that nice page, just like we have a custom 404 page.
Then you start thinking, well, I want the adding to home screen experience to be good, so you’ve got the web app manifest.
You implement one of those patterns there allowing the users to save things offline, maybe.
Also, push notifications are now possible thanks to service workers. It used to be, if you wanted to make someone’s life a misery, you had to build a native app to give them push notifications all day long. Now you can make someone’s life a misery on the web too.
There’s even more advanced APIs like background sync where the website can talk to the web server even when that website isn’t open in the browser and sync up information — super powerful stuff.
Now, the support for something like service workers and the Cache API is almost universal at this point. The support for stuff like background sync and notifications is spottier, not universal, and that’s okay because, as long as you’re adding these things in layers, it’s fine if something doesn’t have universal support. It’s making something work great but, if someone doesn’t get that, it still works well. It’s all about building in that layered way.
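In practice, that layering is often just feature detection: register the service worker, then only reach for background sync or push if the browser actually supports them. A rough sketch (sw.js is a placeholder):

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then((registration) => {
    // Background sync: an extra layer for browsers that support it.
    if ('sync' in registration) {
      // e.g. registration.sync.register('send-messages') when there’s work queued up.
    }
    // Push notifications: another optional layer (and only with the user’s permission).
    if ('pushManager' in registration) {
      // e.g. registration.pushManager.subscribe({userVisibleOnly: true, ...})
    }
  });
}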
Now you’re probably thinking, “Ah-ha! I’ve hoisted him by his own petard because service workers use JavaScript. That means they rely on JavaScript. You’ve made JavaScript a single point of failure!” Exactly what I was complaining about with Single Page Apps, right?
There’s a difference. With a Single Page App, you’re relying on JavaScript. The user gets absolutely nothing if JavaScript doesn’t work.
In the case of service workers, you literally cannot make a website that relies on a service worker. You have to make a website that works first without a service worker and then add the service worker on top, because, think about it; the first time anybody visits the website, even if their browser supports service workers, the service worker is not installed. So you have to build in layers.
I think this is why it appeals to me so much. The design of service workers is a layered design. You have to have something that works first, and then you elevate it. You improve the user experience using these technologies, but you don’t rely on them. It’s not a single point of failure. It’s an enhancement.
That means you can take any website (somebody’s homepage, a book that’s online, this archived stuff, something that is more appy, sure) and make it work pretty much like a native app. It can appear full screen, be added to the home screen, and be indistinguishable from native apps, so that the latest and greatest browsers and devices get the best experience, making full use of the newest technologies.
But, as well as these things working in the latest and greatest browsers, they still work in the first web browser ever created. You can still look at these things in that very first web browser that Tim Berners-Lee created at worldwideweb.cern.ch.
It’s like it is an unbroken line over 30 years on the web. We’re not talking about the Long Now when we’re talking about 30 years but, in terms of technology, that does feel special.
You can also look at the world’s first webpage in the first-ever web browser but, almost more amazingly, you can look at the world’s first web page at its original URL in a modern web browser and it still works.
We managed to make the web so much better with new APIs, new technologies, without breaking it, without breaking that backward compatibility. There’s something special about that. There’s something special about the web if you build in layers.
I’m encouraging you to think in terms of layers and use the layers of the web.
Thanks to the quick work of Marc and his team, the talk I gave at Beyond Tellerrand on Thursday was online within hours!
I’m really pleased with how this turned out. I wasn’t sure if anybody was going to be interested in the deep dive into history that I took for the first 15 or 20 minutes, but lots of people told me that they really enjoyed that part, so that makes me happy.
This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.
Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.
That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.
At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.
But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.
I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.
There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.
We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:
It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.
This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.
This talk about recreating the first ever web browser was a joint presentation with Remy Sharp, delivered at the Fronteers conference in Amsterdam in October 2019.
This sets a chain of events in motion that gives us elementary particles, then more complex particles like atoms, which form stars and planets, including our own, on which life evolves, which brings us to the recent past when this whole process results in the universe generating a way of looking at itself: physicists.
A physicist is the atom’s way of knowing about atoms.
—George Wald
By the end of World War Two, physicists in Europe were in short supply. If they hadn’t already fled during Hitler’s rise to power, they were now being actively wooed away to the United States.
64 years ago
To counteract this brain drain, a coalition of countries forms the European Organization for Nuclear Research, or to use its French acronym, CERN.
They get some land in a suburb of Geneva on the border between Switzerland and France, where they set about smashing particles together and recreating the conditions that existed at the birth of the universe.
Every year, CERN is host to thousands of scientists who come to run their experiments.
Remy
Fast forward to February 2019, when a group of nine of us were invited to CERN as an elite group of hackers to recreate a different experiment.
We are there to recreate a piece of software first published 30 years ago. Given this goal, we need to answer some important questions first:
How does this software look and feel?
How does it work?
How do you interact with it?
How does it behave?
The software is so old that it doesn’t run on any modern machines, so we have a NeXT machine specially shipped from the nearby museum. This is no ordinary machine. It was one of only two NeXT machines that existed at CERN in the late 80s.
Now we have the machine to run this special software.
By some fluke, the good people of the web have captured several different versions of this software and published them on GitHub.
So we selected the oldest version we could find. We download it from GitHub to our computers. Now we have to transfer it to the NeXT machine.
Except there’s no USB drive. It didn’t exist. CD ROM? Floppy drive? The NeXT computer had a “floptical drive”—bespoke to NeXT computers—all very well, but in 2019 we don’t have those drives.
To transfer the software from our machines to the NeXT machine, we needed to use the network.
Jeremy
62 years ago
In 1957, J.C.R. Licklider was the first person to publicly demonstrate the idea of time sharing: linking one computer to another.
56 years ago
Six years later, he expanded on the idea in a memo that described an Intergalactic Computer Network.
By this time, he was working at ARPA: the Department of Defense’s Advanced Research Projects Agency. They were very interested in the idea of linking computers together, for very practical reasons.
America’s military communications had a top-down command-and-control structure. That was a single point of failure. One pre-emptive strike and it’s game over.
The solution was to create a decentralised network of computers that used Paul Baran’s brilliant idea of packet switching to move information around the network without any central authority.
This idea led to the creation of the ARPANET. Initially it connected a few universities. The ARPANET grew until it wasn’t just computers at each endpoint; it was entire networks. It was turning into a network of networks …an internetwork, or internet, for short. In order for these networks to play nicely with one another, they needed to agree on using the same set of protocols for packet switching.
Bob Kahn and Vint Cerf crafted the simplest possible set of low-level protocols: the Transmission Control Protocol and the Internet Protocol. TCP/IP.
TCP/IP is deliberately dumb. It doesn’t care about the contents of the packets of data being passed around the internet. People were then free to create more task-specific protocols to sit on top of TCP/IP.
There are protocols specifically for email, for example. Gopher is another example of a bespoke protocol. And there’s the File Transfer Protocol, or FTP.
Remy
Back in our war room in 2019, we finally work out that we can use FTP to get the software across. FTP is an arcane protocol, but it’s one that both eras can agree on.
We do have to manually install FTP servers onto our own machines, though; FTP no longer ships with new machines because it’s generally considered insecure.
Now we finally have the software installed on the NeXT computer and we’re able to run the application.
We double-click the shaded, partly hand-drawn icon with a lightning bolt on it, and we wait…
Once the software’s finally running, we’re able to see that it looks a bit like an ancient word processor. We can read, edit and open documents. There are some basic styles and lots of heavy margins. There’s some super weird menu navigation in place.
But there’s something different about this software. Something that makes this more than just a word processor.
These documents, they have links…
Jeremy
Ted Nelson is fond of coining neologisms. You can thank him for words like “intertwingled” and “teledildonics”.
56 years ago
He also coined the word “hypertext” in 1963. It is defined by what it is not.
Hypertext is text which is not constrained to be linear.
Ever played a “choose your own adventure” book? That’s hypertext.
You can jump from one point in the book to a different point that has its own unique identifier.
The idea of hypertext predates the word. In 1945, Vannevar Bush published a visionary article in The Atlantic Monthly called As We May Think.
He imagines a mechanical device built into a desk that can summon reams of information stored on microfilm, allowing the user to create “associative trails” as they make connections between different concepts. He calls it the Memex.
Also in 1945, a young American named Douglas Engelbart has been drafted into the navy and is shipping out to the Pacific to fight against Japan. Literally as the ship is leaving the harbour, word comes through that the war is over. He still gets shipped out to the Philippines, but now he’s spending his time lounging in a hut reading magazines. That’s how he comes to read Vannevar Bush’s Memex article, which lodges in his brain.
51 years ago
Douglas Engelbart decides to dedicate his life to building the computer equivalent of the Memex.
On December 9th, 1968, he unveils his oNLine System—NLS—in a public demonstration. Not only does he have a working implementation of hypertext, he also shows collaborative real-time editing, windows, graphics, and oh yeah—for this demo, he invents the mouse.
It truly is The Mother of All Demos.
39 years ago
There were a number of other attempts at creating hypertext systems. In 1980, a young computer scientist named Tim Berners-Lee found himself working at CERN, where scientists were having a heck of a time just keeping track of information.
He created a system somewhat like Apple’s HyperCard, but with clickable links. He named it ENQUIRE, after a Victorian book of manners called Enquire Within Upon Everything.
ENQUIRE didn’t work out, but Tim Berners-Lee didn’t give up on the problem of managing information at CERN. He thinks about all the work done before: Vannevar Bush’s Memex; Ted Nelson’s Xanadu project; Douglas Engelbart’s oNLine System.
A lot of hypertext ideas really are similar to a choose-your-own-adventure: jumping around from point to point within a book. But what if, instead of imagining a hypertext book, we could have a hypertext library? Then you could jump from one point in a book to a different point in a different book in a completely different part of the library.
In other words, what if you took the world of hypertext and the world of networks, and you smashed them together?
30 years ago
On the 12th of March, 1989, Tim Berners-Lee circulates the first draft of a document titled Information Management: A Proposal.
The diagrams are incomprehensible. But his supervisor at CERN, Mike Sendall, sees the potential. He reads the proposal and scrawls these words across the top: “vague, but exciting.”
Tim Berners-Lee gets the go-ahead to spend some time on this project. And he gets the budget for a nice shiny NeXT machine. With the support of his colleague Robert Cailliau, Berners-Lee sets about making his theoretical project a reality. They kick around a few ideas for the name.
They thought of calling it The Mesh. They thought of calling it The Information Mine, but Tim rejected that, knowing that whatever they called it, the words would be abbreviated to letters, and The Information Mine would’ve seemed quite egotistical.
So, even though it’s only going to exist on one single computer to begin with, and even though the letters of the abbreviation take longer to say than the words being abbreviated, they call it …the World Wide Web.
As Robert Cailliau told us, they were thinking “Well, we can always change it later.”
Tim Berners-Lee brainstorms a new protocol for hypertext called the HyperText Transfer Protocol—HTTP.
He thinks about a format for hypertext called the Hypertext Markup Language—HTML.
He comes up with an addressing scheme that uses Universal Document Identifiers—UDIs, later renamed to URIs, and later renamed again to URLs.
But he needs to put it all together into running code. And so Tim Berners-Lee sets about writing a piece of software…
Remy
At this stage, 30 years ago, Tim Berners-Lee’s document is still just a proposal. It’s just theory. So he needs to build a prototype to actually demonstrate how the World Wide Web would work.
The NeXT computer is the perfect environment for rapid software development, because the NeXT operating system ships with a program called NSBuilder.
NSBuilder is software to build software. In fact, the “NS” (meaning NeXTSTEP) can still be found in software today: you’ll find references to NSText in Safari and in Mac developer documentation.
Using NSBuilder, Tim Berners-Lee was able to create a working prototype of this software in just six weeks.
He called it: WorldWideWeb.
We finally have the software working the way it ran 30 years ago.
But our project is to replicate this browser so that you can try it out, and see how web pages look through the lens of 1990.
But HTTPS doesn’t work. There was no HTTPS back then. There was no HTTP/2. HTTP/1.0 hadn’t even been invented.
So I make a proxy. Effectively a monster-in-the-middle attack on all web requests, stripping the SSL layer and then returning the HTML over the HTTP 0.9 protocol.
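Roughly, the proxy idea looks something like this in Node.js (a sketch under assumptions, not the actual project code): listen for plain-HTTP requests, fetch the real page over HTTPS, and hand back just the markup, much as an HTTP 0.9 response was nothing more than the document itself.

const http = require('http');
const https = require('https');

http.createServer((request, response) => {
  // Hypothetical convention: the target host is passed in the path, e.g. /example.com/
  const target = 'https://' + request.url.replace(/^\//, '');
  https.get(target, (upstream) => {
    let html = '';
    upstream.on('data', (chunk) => { html += chunk; });
    upstream.on('end', () => {
      // HTTP 0.9 had no status line and no headers, just the document.
      // (Node's http server still adds its own headers, so this only
      // approximates the original behaviour.)
      response.end(html);
    });
  }).on('error', () => response.end(''));
}).listen(8080);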
And finally, we see…
We see junk.
We can see the text content of the website, but there are a lot of HTML junk tags being spat out onto the screen, particularly at the start of the document.
Jeremy
<h1> <h2> <h3> <h4> <h5> <h6>
<ol> <ul> <li> <p>
These tags are probably very familiar to you. You recognise this language, right?
That’s right. It’s SGML.
SGML is the successor to GML, which supposedly stands for Generalised Markup Language. But that may well be a backronym. The format was created by Goldfarb, Mosher, and Lorie: G, M, L.
SGML is supposed to be short for Standard Generalised Markup Language.
A flavour of SGML was already being used at CERN when Tim Berners-Lee was working on his World Wide Web project. Rather than create a whole new format from scratch, he repurposed what people were already familiar with. This was his HyperText Markup Language, HTML.
One thing he did add was a tag called A for anchor.
Its href attribute is short for “hypertext reference”. Plop a URL in there and you’ve got a link.
The hypertext community thought this was a terrible way to make links.
They believed that two-way linking was vital. With two-way linking, the linked resource connects back to where the link originates. So if the linked resource moved, the link would stay intact.
That’s not the case with the World Wide Web. If the linked resource moves, the link is broken.
Perhaps you’ve experienced broken links?
When Tim Berners-Lee wrote the code for his WorldWideWeb browser, there was a grand total of 26 tags in HTML. I know that we’d refer to them as elements today, but that term wasn’t being used back then.
Now there are well over 100 elements in HTML. The reason why the language has been able to expand so much is down to the way web browsers today treat unknown elements: ignore any opening and closing tags you don’t recognise and only render the text in between them.
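You can see this forgiving behaviour in any modern browser’s console. (A quick illustration; the made-up foo tag is obviously not a real element.)

// Unknown tags are kept in the tree but given no special rendering;
// the text inside them still shows up as plain text.
const el = document.createElement('div');
el.innerHTML = '<h1>Known</h1><foo>Still visible text</foo>';
document.body.appendChild(el);
console.log(el.lastElementChild.constructor.name); // "HTMLUnknownElement"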
Remy
The parsing algorithm was brittle (when compared to modern parsers). There’s no DOM tree being built up. Indeed, the DOM didn’t exist.
Remember that WorldWideWeb was a browser that effectively smooshed together a word processor and network requests; the styling method was based (mostly) around adding margins as the tags were parsed.
Kimberly Blessing was digging through the original 7344 lines of code for the WorldWideWeb source. She found the code that could explain why we were seeing junk.
<link rel="..."
In this case, when the parser encountered <link rel="…" it would see the <.
<
“Yes, a tag; let’s slurp it up”.
<li
Then it reads li and the parser is thinking, “This looks like a list item, good stuff.”
<lin
Then it encounters the n (of link) and, forgiving the parsing algorithm because it was the first, it aborts the style it was about to apply and promptly spits the rest of the content out on screen, having already swallowed up the first four characters: <lin.
k rel="stylesheet" href="...">
With that, we made the executive design decision to strip out any elements that were unknown to the original WorldWideWeb browser — link, script, video and img — since, of course, the world’s first browser had no image support.
This is the first little cheat we applied, so that the page would be more pleasing to you, the visitor of our emulator. Otherwise you’d be presented with a lot of scary looking junk.
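For illustration, that clean-up can be as crude as something like this (a sketch with assumed names, not the code we actually shipped):

// Strip out elements the 1990 browser never knew about before the
// markup reaches the emulator.
function stripModernTags(html) {
  return html
    // remove scripts along with their contents
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    // remove the tags themselves for other modern elements
    .replace(/<\/?(link|img|video)[^>]*>/gi, '');
}

console.log(stripModernTags('<link rel="stylesheet" href="site.css"><p>Hello, 1990</p>'));
// -> <p>Hello, 1990</p>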
So now we have all the reference material we need to be able to replicate this browser:
The machine running the original operating system, which gives us colours, fonts, menus and so on.
The browser itself, how windows behave, what’s in the menus, what makes the experience unique to that period of time.
And finally how it looks when we visit URLs.
So off we go.
Jeremy
While Remy set about recreating the functionality of the WorldWideWeb browser, Angela was recreating the user interface using CSS.
Inputs. Buttons. Icons. Menus. All with the exact borders, highlights and shadows used in the UI of the NeXT operating system, including having the scrollbar on the left side of windows.
Meanwhile the rest of us were putting together an explanatory website to give some backstory to what we were doing. I spent most of my time working on a timeline showing thirty years before and thirty years after the original proposal for the web.
The WorldWideWeb browser inherited fonts from the NeXTSTEP operating system. It mostly used Helvetica and a font called Ohlfs (created by Keith Ohlfs). Helvetica is ubiquitous but Ohlfs was never seen outside of a NeXT machine.
Our teammates Mark and Brian were obsessed with accurately recreating the typography. We couldn’t use modern fonts, which are vector based. We needed pixeliness.
So Mark and Brian took a screenshot of the NeXT machine’s alphabet. With help from afar from font designer David Jonathan Ross, they traced each square pixel in a vector program and then exported that as a web font. Now we’ve got a web font that deliberately isn’t anti-aliased. It’s a vector format that recreates the look of a bitmap.
Put the pixelly font together with the CSS interface elements and you’ve got something that really looks like the old WorldWideWeb program.
Remy
This is the final product of our work at CERN that week: a fully working WorldWideWeb emulator giving a reasonably close experience of what it was like to surf the web as if it were 30 years ago.
This is entirely in the browser and was written using:
React,
React Draggable for the windows and menus,
React Hotkeys for keyboard combo shortcuts (we replicated the original OS as much as we could),
idb-keyval for some local storage,
Parcel for bundling.
These tools weren’t chosen particularly because they were the best tools for the job, but because they were the tools I knew well enough to speed up my development process.
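As a small example of what that glue looks like, saving and restoring an edited document with idb-keyval might be as simple as this (the key name and document shape here are assumptions for illustration, not our actual data model):

import { get, set } from 'idb-keyval';

// Persist the edited markup under the document's name.
async function saveDocument(name, html) {
  await set('document:' + name, html);
}

// Load it back later, or return null if it was never saved.
async function loadDocument(name) {
  return (await get('document:' + name)) ?? null;
}

// e.g. saveDocument('fronteers.html', editedMarkup);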
We worked hard to replicate the look and feel as much as we could. We even replicated typos found throughout the WorldWideWeb app:
An excercise in global information availability
Why don’t we see how it looks…
Jeremy
There’s a kind of irony in this, in that it relies heavily on JavaScript. In fact, there’s nothing there other than JavaScript. But of course the WorldWideWeb browser couldn’t deal with JavaScript—JavaScript hadn’t been invented yet. So the one URL that definitely wouldn’t work in this emulator is …the emulator itself.
We’ll go ahead and open the Fronteers website. I go to “Document” and then I go to “Open from full document reference” (because the word URL didn’t exist). I’m going to pop the Fronteers URL in here. And there it is. We’ve got the Fronteers website. Looks pretty good. (One of my favourite UI bits is this scrollbar on the left hand side instead of the right.)
We can follow the links. Actually one of my favourite features that was in this original browser that we replicated was this “Navigate” menu. I’ve just opened the first link in the document, but I can click on “Next”, and “Next” a bunch of times and it will cycle through each one of the links on the page that I launched from and let me read all the pages that the Fronteers site links to (which I really like). I can go backwards and forwards, and so on.
One thing you might have already noticed is that there are no URLs here. In fact, viewing source was considered a kind of diagnostic option, and it was very, very tucked away. The reason for this is that URLs—and the source HTML or SGML—were considered ugly and potentially a bad user experience.
But there’s one thing about navigating here that’s different. To open this link, I had to double-click.
Jeremy
The WorldWideWeb browser was more of a prototype than anything else. It demonstrated the potential of the World Wide Web project, but it only worked on NeXT machines.
To show how the World Wide Web could work on any computer, the second ever web browser was the Line Mode Browser, coded by Nicola Pellow. It had a very basic text interface—no clicking on links—but it could be installed anywhere.
Lots of other geeks and nerds were working on their own web browsers, but it was Marc Andreessen’s Mosaic browser that really blew the doors open for the web. It had a nice usable interface, and it (unilaterally) introduced the innovation of images on the web.
Andreessen went on to found Netscape. The World Wide Web took off at an unprecedented rate. Microsoft brought out their Internet Explorer browser and started trying to catch up with Netscape. We had the browser wars. Later we got even more browsers, like Safari and Chrome, while Netscape morphed into Firefox and Internet Explorer morphed into Edge. And the rest is history.
But all of these browsers were missing something that was in the original WorldWideWeb browser.
Remy
The reason I have to double-click on these links is that, when I do a single click, it actually places the cursor. The cursor is blinking there on “Fronteers.” And the reason I can place the cursor is because I can edit the document.
I see Fronteers here is missing a heading. We want to welcome you all:
Welkom
We want to make that a heading. Let’s style that. It’s a heading.
So the browser was meant to edit documents. Let’s put a bit of text here:
Great talks from Remy and Jeremy
(forget about everyone else). Now if I want to create a link, I’ll go ahead and navigate to Jeremy’s site, https://adactio.com. I’m going to do “Link”, then “Mark all”, which is a way of copying the URL to that window. Then I go back to the Fronteers website, select “Jeremy”, and then do “Link to marked.” Now I can double-click on Jeremy’s name and it will open up his website.
I can save this document as well. I’m going to call it fronteers.html.
Let’s do a hard reboot—a browser refresh. I come back to my machine a couple of days later: “Ah, the Fronteers page!” I open it again, and there’s the link to that really handsome guy in the sprite shirt. And yes, the links still work.
In fact, this documentation that you see when the WorldWideWeb browser launches was written, styled, and linked using the WorldWideWeb browser. The WorldWideWeb browser was for a web that you could read and write.
But this didn’t survive. It was a hurdle that was too tricky to propose or implement across the different types of servers that existed, and for the upcoming browsers that were on the horizon.
And so it wasn’t standardised and doesn’t exist today.
But this is an important lesson from the time: reducing complexity increases the chances of mass adoption.
In the end, simplicity wins.
Jeremy
I think that’s a pattern we see over and over again, not just in the history of the web, but before the web. Simplicity wins.
Ted Nelson famously thinks, to this day, that the World Wide Web is weak sauce. It didn’t try to solve complex problems right out of the gate, like handling micropayments.
As we saw, the hypertext community thought that one-way linking was ridiculous. But simplicity does win out.
Unfortunately that’s why browsers ended up just being browsers. We got some of the functionality back with wikis, content management systems, and social media to a certain extent. But I think it’s still a bit of a shame that when I want to browse a web page, I’m using one piece of software—the browser—but when I want to make a web page, I’m using another piece of software (or multiple pieces of software) to get something on to the web.
I feel like we lost something.
Remy
We head home after a week of hacking.
We were all invited back in March of this year for the Web@30 event, celebrating the web and also Sir Tim Berners-Lee.
A few of us, Jeremy, Martin, and myself, went back to CERN for the first leg of the event. There was even a video showing off our work as part of the main conference. Jeremy and I even chased Tim Berners-Lee back to London to the Science Museum like obsessive web fanboys. It was a lot of fun!
The night before, I got a message from Jean-François Groff, pictured here on the right. JF Groff joined Tim Berners-Lee 30 years ago and created libwww (a precursor to libcurl).
The message read:
Sitting with Tim right now. He loves your browser!
Crushed it.
It’s amazing that we were able to pull this off in a week just with text editors and information that’s freely available. It’s mind boggling how much we can do today and how far it can reach. And it all started on that NeXTSTEP machine 30 years ago.
What I really loved about this project was working with this brilliantly old technology, digging around at the birth of browsers and the web.
I wouldn’t be stood here today, if it weren’t for the web.
I wouldn’t even know Jeremy, if it weren’t for the web.
I wouldn’t have a career, if it weren’t for the web.
I loved seeing how such old technology, the original WorldWideWeb browser, was still able to render my blog. Because I put content first and delivered markup from the server, the page rendered. HTML really is backward compatible.
HTML and HTTP are just text. Nothing terribly fancy. Dare I say, beautifully simple, and as we said before, simplicity wins the day.
This same simplicity is what allows us all to have the chance for an equal voice. The web allows us to freely publish our thoughts and experiences. We have to fight to protect that kind of web.
And we’ve got to work at keeping it simple.
Jeremy
When we returned to CERN for the 30th anniversary celebrations, one of the other people there was the journalist Zeynep Tüfekçi.
She was on a panel along with Tim Berners-Lee, Robert Cailliau, Jean-François Groff, and Lou Montulli. At the end of the panel discussion, she was asked:
What would you tell the next generation about how to use this wonderful tool?
She replied:
If you have something wonderful, if you do not defend it, you will lose it.
If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.
Defend the simplicity and resilience that’s so central to the web.
I don’t know about you, but I often feel that just trying to make a web page has become far too complicated. But this is complexity that we have chosen with our tools, processes, and assumptions. We’ve buried the magic. The magic of linking web pages together. The magic of a working global hypertext system, where nobody needs to ask for permission to publish.
Tim Berners-Lee prototyped the first web browser, but the subsequent world wide web wasn’t created by any one person. It was created by everyone. That. Is. Magical.
I don’t want the web to become a place where only an elite priesthood get to experience the magic of creation. I’m going to fight to defend the openness of the world wide web. This is for everyone. Not just for everyone to use; it’s for everyone to create.
Here’s the talk that Remy and I gave at Fronteers in Amsterdam, all about our hack week at CERN. We’re both really pleased with how this turned out and we’d love to give it again!
Back in the late 2000s, I used to go to Copenhagen every year for an event called Reboot. It was a fun, eclectic mix of talks and discussions, but alas, the last one was over a decade ago.
A few months ago, I got an email from Thomas about the new event he’s running in Copenhagen called Techfestival. He was wondering if there was some way of making the WorldWideWeb project part of the event. We ended up settling on having a stand—a modern computer running a modern web browser running a recreation of the first ever web browser from almost three decades ago.
So I showed up at Techfestival and found that the computer had been set up in a Shoreditchian shipping container. I wasn’t exactly sure what I was supposed to do, so I just hung around nearby until someone wandering by would pause and start tentatively approaching the stand.
“Would you like to try the time machine?” I asked. Nobody refused the offer. I explained that they were looking at a recreation of the world’s first web browser, and then showed them how they could enter a URL to see how the oldest web browser would render a modern website.
Lots of people entered facebook.com or google.com, but some people had their own websites, either personal or for their business. They enjoyed seeing how well (or not) their pages held up. They’d take photos of the screen.
People asked lots of questions, which I really enjoyed answering. After a while, I was able to spot the themes that came up frequently. Some people were confusing the origin story of the internet with the origin story of the web, so I was more than happy to go into detail on either or both.
The experience helped me clarify in my own mind what was exciting and interesting about the birth of the web—how much has changed, and how much has stayed the same.
The World Wide Web turned 30 years old this year. To mark the occasion, a motley group of web nerds gathered at CERN, the birthplace of the web, to build a time machine. The first ever web browser was, confusingly, called WorldWideWeb. What if we could recreate the experience of using it …but within a modern browser! Join (Je)Remy on a journey through time and space and code as they excavate the foundations of Tim Berners-Lee’s gloriously ambitious and hacky hypertext system that went on to conquer the world.
Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.
We’ve been honing the material and doing some run-throughs at the Clearleft HQ at 68 Middle Street this week. The talk has a somewhat unusual structure with two converging timelines. I think it’s going to work really well, but I won’t know until we actually deliver the talk in Amsterdam. I’m excited—and a bit nervous—about it.
Whether it’s in a shipping container in Copenhagen or on a stage in Amsterdam, I’m starting to realise just how much I enjoy talking about web history.
This history of the World Wide Web from 1996 is interesting for the way it culminates with …Java. At that time, the language seemed like it would become the programmatic lingua franca for the web. Brendan Eich sure upset that apple cart.
In the end, March 12, 1989 is as good a date as any to mark the birth of the web. The date that Tim Berners-Lee shared his proposal. That’s when the work began.
Exactly thirty years later, myself, Martin, and Remy are registered and ready to attend the anniversary event in the very same room where the existence of the Higgs boson was announced. There’s coffee, and there are croissants, but despite the presence of Lou Montulli, there are no cookies.
The doors to the auditorium open and we find some seats together. The morning’s celebrations include great panel discussions, and an interview with Tim Berners-Lee himself. In the middle of it all, they show a short film about our hack week recreating the very first web browser.
It was surreal. There we were, at CERN, in the same room as the people who made the web happen, and everyone’s watching a video of us talking about our fun project. It was very weird and very cool.
Afterwards, there was cake. And a NeXT machine—the same one we had in the room during our hack week. I feel a real attachment to that computer.
We chatted with lots of lovely people. I had the great pleasure of meeting Peggie Rimmer. It was her late husband, Mike Sendall, who gave Tim Berners-Lee the time (and budget) to pursue his networked hypertext project. Peggie found Mike’s copy of Tim’s proposal in a cupboard years later. This was the copy that Mike had annotated with his now-famous verdict, “vague but exciting”. Angela has those words tattooed on her arm—Peggie got a kick out of that.
Eventually, Remy and I had to say our goodbyes. We had to get to the airport to catch our flight back to London. Taxi, airport, plane, tube; we arrived at the Science Museum in time for the evening celebrations. We couldn’t have been far behind Tim Berners-Lee. He was making a 30 hour journey from Geneva to London to Lagos. We figured seeing him at two out of those three locations was plenty.
By the end of the day we were knackered but happy. The day wasn’t all sunshine and roses. There was a lot of discussion about the negative sides of the web, and what could be improved. A lot of that was from Sir Tim himself. But mostly it was a time to think about just how transformative the web has been in our lives. And a time to think about the next thirty years …and the web we want.
This is the lovely little film about our WorldWideWeb hack project. It was shown yesterday at CERN during the Web@30 celebrations. That was quite a special moment.