On a long bet – A Whole Lotta Nothing
Matt’s thoughts on that bet. Not long now…
This presentation on the indie web was delivered as the opening keynote at Webstock in Wellington, New Zealand in February 2018.
Thank you. Thank you very much, Mike; it really is an honour and a privilege to be here. Kia Ora! Good morning.
This is quite intimidating. I’m supposed to open the show, and all these amazing, exciting speakers are going to be speaking for the next two days on amazing, exciting topics, so I’d better level up: I’d better talk about something exciting. So, let’s do it!
Yeah, we need to talk about capitalism. Capitalism is one of a few competing theories on how to structure an economy. The theory goes that you have a marketplace, the marketplace rules all, and the marketplace self-regulates through its kind of invisible hand. Pretty good theory, the idea being that through this invisible hand in the marketplace, wealth can be distributed relatively evenly: sort of a bell-curve distribution of wealth where some people have more, some people have less, but it kind of evens out. You see bell-curve distributions for things like height and weight or IQ. Some have more, some have less, but the difference isn’t huge. That’s the idea, that economies could follow this bell-curve distribution.
But, without any kind of external regulation, what tends to happen in a capitalistic economy is that the rich get richer, the poor get poorer and it runs out of control. Instead of a bell-curve distribution you end up with something like this which is a power-law distribution, where you have the wealth concentrated in a small percentage and there’s a long tail of poverty.
That’s the thing about capitalism: it sounds great in theory, but not so good in practice.
The theory I actually agree with: the theory being that competition is good and that competition is healthy. I think that’s a sound theory. I like competition: I think there should be a marketplace of competition, to try and avoid the kind of monopolies or duopolies that you get in these power-law distributions. The reason I say that is that we as web designers and web developers have seen what happens when monopolies kick in.
We were there: we were there when there was a monopoly; when Microsoft had an enormous share of the browser market. Internet Explorer’s share was up in the high nineties, percentage-wise, which they achieved because they had an enormous monopoly in the desktop market as well: they were able to bundle Internet Explorer with the Windows operating system. Not exactly an invisible hand regulating there.
But we managed to dodge this bullet. I firmly believe that the more browsers we have, the better. I think a diversity of browsers is a really good thing. I know sometimes as developers we think, “Oh, wouldn’t it be great if there was just one browser to develop for: wouldn’t that be great?” No. No it would not! We’ve been there and it wasn’t pretty. Firefox was pretty much a direct result of this monopoly situation in browsers. But like I say, we dodged this bullet. In some ways the web interpreted this monopoly as damage and routed around it. This idea of interpreting something as damage and routing around it is a phrase that comes from network architecture.
As with economies, there are competing schools of thought on how you would structure a network; there’s many ways to structure a network.
One way, a sort of centralised network approach, is the hub-and-spoke model, where you have lots of smaller nodes connecting to a single large hub, and those large hubs can connect with one another. This is how the telegraph worked, and then the telephone system worked like this too. Airports still pretty much work like this: you’ve got regional airports connecting to a large hub, and those large hubs connect to one another. It’s a really good system; it works really well until the hub gets taken out. If the hub goes down, you’ve got these nodes that are stranded: they can’t connect to one another. That’s a single point of failure; that’s a vulnerability in this network architecture.
It was the single point of failure, this vulnerability, that led to the idea of packet-switching and different network architectures that we saw in the ARPANET, which later became the Internet. The impetus there was to avoid the single point of failure. What if you took these nodes and gave them all some connections?
This is more like a bell-curve distribution of connections now. Some nodes have more connections than others, some have fewer, but there isn’t a huge difference between the amount of connections between nodes. Then the genius of packet-switching is that you just need to get the signal across the network by whatever route works best at the time. That way, if a node were to disappear, even a relatively well-connected one, you can route around the damage. Now you’re avoiding the single point of failure problem that you would have with a hub and spoke model.
Like I said, this kind of thinking came from the ARPANET, later the Internet and it was as a direct result of trying to avoid having that single point of failure in a command-and-control structure. If you’re in a military organisation, you don’t want to have that single point of failure. You’ve probably heard that the internet was created to withstand a nuclear attack and that’s not exactly the truth but the network architecture that we have today on the internet was influenced by avoiding that command-and-control, that centralised command-and-control structure.
Some people kind of think there’s blood on the hands of the internet because it came from this military background: DARPA, the Defence Advanced Research Projects Agency. But actually, the thinking behind this was not to give one side the upper hand in case of a nuclear conflict: quite the opposite. The understanding was that if there were the chance of a nuclear conflict and you had a hub-and-spoke model of communication, then you know that if they take out your hub, you’re screwed, so you’d better strike first. Whereas if you’ve got this kind of distributed network, then if there’s the possibility of attack, you might be more willing to wait it out and see, because you know you can survive that initial first strike. So this network approach was not designed to give any one side the upper hand in case of a nuclear war, but to actually avoid nuclear war in the first place, on the understanding that this was in place for both sides. That’s why Paul Baran and the other people working on the ARPANET were in favour of sharing this technology with the Russians, even at the height of the Cold War.
In this kind of network architecture there are no hubs, there are no regional nodes; there are just nodes on the network. It’s very egalitarian and the network can grow and shrink infinitely; it’s scale-free. You can just keep adding nodes to the network, and you don’t need to ask permission to add a node. That same sort of architecture then influenced the World Wide Web, which is built on top of the Internet. Tim Berners-Lee used this model where anybody can add a website to the World Wide Web: you don’t need to ask for permission, you just add one more node to the network. There’s no plan to it, there’s no structure, and so it’s a mess! It’s a sprawling, beautiful mess with no structure.
And it’s funny, because in the early days of the Web it wasn’t clear that the Web was going to win; not at all. There was competition. You had services like Compuserve and AOL. I’m not talking about AOL the website. Before it was a website, it was this thing separate to the web, which was much more structured, much safer; these were kind of the walled gardens and they would make wonderful content for you and warn you if you tried to step outside the bounds of their walled gardens into the wild, lawless lands of the World Wide Web and say, ooh, you don’t want to go out there. And yet the World Wide Web won, despite its chaoticness, its lawlessness. People chose the Web and despite all the content that Compuserve and AOL and these other walled gardens were producing, they couldn’t compete with the wild and lawless nature of the Web. People left the walled gardens and went out into the Web.
And then a few years later, everyone went back into the walled gardens. Facebook, Twitter, Medium. Very similar in a way to AOL and Compuserve: the nice, well-designed places, safe spaces not like those nasty places out in the World Wide Web and they warn you if you’re about to head out into the World Wide Web. But there’s a difference this time round: AOL and Compuserve, they were producing content for us, to keep us in the walled gardens. With the case of Facebook, Medium, Twitter, we produce the content. These are the biggest media companies in the world and they don’t produce any media. We produce the media for them.
How did this happen? How did we end up with this situation when we returned into the walled gardens? What happened?
Facebook, I used to wonder, what is the point of Facebook? I mean this in the sense that when Facebook came along, there were lots of different social networks, but they all kinda had this idea of being about a single, social object. Flickr was about the photograph and upcoming.org was about the event and Dopplr was about travel. And somebody was telling me about Facebook and saying, “You should get on Facebook.” I was like, “Oh yeah? What’s it for?” He said: “Everyone’s on it.” “Yeah, but…what’s it for? Is it photographs, events, what is it?” And he was like, “Everyone’s on it.” And now I understand that it’s absolutely correct: the reason why everyone is on Facebook is because everyone is on Facebook. It’s a direct example of Metcalfe’s Law.
Again, it’s a power law: the value of the network is proportional to the square of the number of nodes on the network. Basically, the more people on the network the better. The first person to have a fax machine has a useless lump of plastic. As soon as one more person has a fax machine, it’s exponentially more useful. Everyone is on Facebook because everyone is on Facebook. Which turns it into a hub. It is now a centralised hub, which means it is a single point of failure, by the way. In security terms, I guess you would talk about it having a large attack surface.
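To put rough numbers on that (a back-of-the-envelope sketch): a network of $n$ nodes has

\[ \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2} \]

possible pairwise connections (two fax machines give one connection, ten give 45, a hundred give 4,950), which is why the value grows with the square of the number of nodes.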
Let’s say you wanted to attack media outlets, I don’t know, let’s say you were trying to influence an election in the United States of America… Instead of having to target hundreds of different news outlets, you now only need to target one because it has a very large attack surface.
It’s just like, if you run WordPress as a CMS, you have to make sure to keep it patched all the time because it’s a large attack surface. It’s not that it’s any more or less secure or vulnerable than any other CMS, it’s just that it’s really popular and therefore is more likely to be attacked. Same with a hub like Facebook.
OK. Why then? Why did we choose to return to these walled gardens? Well, the answer’s actually pretty obvious, which is: they’re convenient. Walled gardens are nice to use. The user experience is pretty great; they’re well-designed, they’re nice.
The disadvantage is what you give up when you gain this convenience. You give up control. You no longer have control over the content that you publish. You don’t control who’s going to even see what you publish. Some algorithm is taking care of that. These silos—Facebook, Twitter, Medium—they now have control of the hyperlinks. Walled gardens give you convenience, but the cost is control.
This is where this idea of the indie web comes in, to try and bridge this gap that you could somehow still have the convenience of using these beautiful, well-designed walled gardens and yet still have the control of owning your own content, because let’s face it, having your own website, that’s a hassle: it’s hard work, certainly compared to just opening up Facebook, opening a Facebook account, Twitter account, Medium account and just publishing, boom. The barrier to entry is really low whereas setting up your own website, registering a domain, do you choose a CMS? There’s a lot of hassle involved in that.
But maybe we can bridge the gap. Maybe we can get both: the convenience and the control. This is the idea of the indie web. At its heart, there’s a fairly uncontroversial idea here, which is that you should have your own website. I mean, there would have been a time when that would’ve been a normal statement to make and these days, it sounds positively disruptive to even suggest that you should have your own website.
I have my own personal reasons for wanting to publish on my own website. I was here at Webstock six years ago, which was a great honour, talking about digital preservation and longevity, and for me that’s one of the reasons why I like to have control over my own content, because these things do go away. If you published your content on, say, MySpace: I’m sorry. It’s gone. And there was a time when it was unimaginable that MySpace could be gone; it was like Facebook, you couldn’t imagine the web without it. Give it enough time. MySpace is just one example; there are many more. I used to publish on GeoCities. Delicious, Magnolia, Pownce, Dopplr. Many, many more.
Now, I’m not saying that everything should be online for ever. What I’m saying is, it should be your choice. You should be able to choose when something stays online and you should be able to choose when something gets taken offline. The web has a real issue with things being taken offline. Linkrot is a real problem on the web, and partly that’s to do with the nature of the web, the fundamental nature of the way that linking works on the web.
When Tim Berners-Lee and Robert Cailliau were first coming up with the World Wide Web, they submitted a paper to a hypertext conference, I think it was 1991, 92, about this project they were building called the World Wide Web. The paper was rejected. Their paper was rejected by these hypertext experts who said, this system: it’ll never work, it’s terrible. One of the reasons why they rejected it was it didn’t have this idea of two-way linking. Any decent hypertext system, they said, has a concept of two-way linking where there’s knowledge of the link at both ends, so in a system that has two-way linking, if a resource happens to move, the link can move with it and the link is maintained. Now that’s not the way the web works. The web has one-way linking; you can just link to something, that’s it and the other resource has no knowledge that you’re linking to it but if that resource moves or goes away, the link is broken. You get linkrot. That’s just the way the web works.
But. There’s a little technique that, if you sort of squint at it just the right way, sort of looks like two-way linking on the web, and it involves a very humble bit of HTML: the rel attribute. Now, you’ve probably seen the rel attribute before; you’ve probably seen it on the link element. Rel is short for relationship, so the value of the rel attribute describes the relationship of the linked resource, whatever’s inside the href. I’m sure you’ve probably typed this at some point, where you say rel=stylesheet on a link element. What you’re saying is: the linked resource, what’s in the href, has the relationship of being a style sheet for the current document.
<link rel="stylesheet" href="...">
You can use it on A elements as well. There are rel values like prev for previous and next: this is the relationship of being the next document, or this is the relationship of being the previous document. Really handy for pagination of search results, for example.
<a rel="prev" href="...">
<a rel="next" href="...">
And then there’s this really silly value, rel=me.
<a rel="me" href="...">
Now, how does that work? The linked document has a relationship of being me? Well, I use this on my own website: I have A elements that link off to my profiles on these other sites, so I’m saying, that Twitter profile over there: that’s me. And that’s me on Flickr, and that’s me on GitHub.
<a rel="me" href="https://twitter.com/adactio">
<a rel="me" href="https://flickr.com/adactio">
<a rel="me" href="https://github.com/adactio">
OK, but still, these are just regular, one-way hyperlinks I’m making. I’ve added a rel value of “me”, but so what?
Well, the interesting thing is, if you go to any of those profiles: when you’re signing up, you can add your own website; that’s one of the fields. There’s a link from that profile to your own website, and in that link they also use rel=me.
<a rel="me" href="https://adactio.com">
I’m linking to my profile on Twitter saying, rel=me: that’s me. And my Twitter profile is linking to my website saying, rel=me: that’s me. And so you’ve kind of got two-way linking. You’ve got this confirmed relationship; these claims have been verified. Fine, but what can you do with that?
Well, there’s a technology called RelMeAuth that uses this. It kind of piggy-backs on something that all these services, Twitter and Flickr and GitHub, have in common: OAuth authentication. Now, if I wanted to build an API properly, I’d probably need to be an OAuth provider. I am not smart enough to be an OAuth provider; that sounds way too much like hard work to me. But I don’t need to be, because Twitter and Flickr and GitHub are already OAuth providers, so I can just piggy-back on the functionality that they provide, just by adding rel=me.
Here’s an example of this in action. There’s an authentication service called IndieAuth and I literally sign in with my URL. I type in my website name; it then finds the rel=me links, the ones that are reciprocal. I choose which one I feel like logging in with today, let’s say Twitter; I get bounced to Twitter (I have to be logged in on Twitter for this to work) and then I’ve authenticated. I’ve authenticated with my own website; I’ve used OAuth without having to write OAuth, just by adding rel=me to a couple of links that were already on my site.
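To make that verification concrete, here’s a minimal sketch in Python of the reciprocal rel=me check, assuming the third-party requests and beautifulsoup4 libraries (a real implementation would also handle redirects and URL canonicalisation):

import requests
from bs4 import BeautifulSoup

def rel_me_links(url):
    # Fetch a page and collect every URL it marks up with rel="me"
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {a["href"].rstrip("/") for a in soup.find_all("a", rel="me", href=True)}

def is_reciprocal(my_site, profile):
    # The claim is verified when both pages point at each other
    return (profile.rstrip("/") in rel_me_links(my_site)
            and my_site.rstrip("/") in rel_me_links(profile))

# e.g. is_reciprocal("https://adactio.com", "https://twitter.com/adactio")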
Why would I want to authenticate? Well, there’s another piece of technology called micropub. Now, this is definitely more complicated than just adding rel=me to a few links. This is an end-point on my website, and it can be an end-point on your website, and it accepts POST requests; that’s all it does. If I’ve already got authentication taken care of, and now I’ve got an end-point for POST requests, I’ve basically got an API, which means I can post to my website from other places. Once that end-point exists, I can use somebody else’s website to post to my website, as long as they’ve got this micropub support. I log in with that IndieAuth flow and then I’m using somebody else’s website to post to my website. That’s pretty nice. As long as these services have micropub support, I can post from somebody else’s posting interface to my own website and choose how I want to post.
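To give a sense of how small that end-point can be, here’s a minimal sketch in Python using Flask; verify_token and save_post are hypothetical helpers standing in for real token verification and storage:

from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/micropub", methods=["POST"])
def micropub():
    # The IndieAuth access token arrives in the Authorization header
    token = request.headers.get("Authorization", "")
    if not verify_token(token):  # hypothetical helper: validate the token
        abort(401)
    # A form-encoded micropub request sends properties like "content"
    content = request.form.get("content", "")
    url = save_post(content)  # hypothetical helper: store the post, return its URL
    return "", 201, {"Location": url}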
In this example there, I was using a service called Quill; it’s got a nice interface. You can do long-form writing on it. It’s got a very Medium-like interface for long-form writing because a lot of people—when you talk about why are they on Medium—it’s because the writing experience is so nice, so it’s kind of been reproduced here. This was made by a friend of mine named Aaron Parecki and he makes some other services too. He makes OwnYourGram and OwnYourSwarm, and what they are is they’re kind of translation services between Instagram and Swarm to micropub.
Instagram and Swarm do not provide micropub support, but by using these services, authenticating with them using the rel=me links, I can then post from Instagram and from Swarm to my own website, which is pretty nice. If I post something on Swarm, it then shows up on my own website. And if I post something on Instagram, it goes up on my own website. Again I’m piggy-backing: there’s all this hard work by big teams of designers and engineers building these apps, Instagram and Swarm, and I’m taking all that hard work and using it to post to my own website. It feels kind of good.
There’s an acronym for this approach, and it’s PESOS, which means you Publish Elsewhere and Syndicate to your Own Site. There’s an alternative to this approach and that’s POSSE, or you Publish on your Own Site and then you Syndicate Elsewhere, which I find preferable, but sometimes it’s not possible. For example, you can’t publish on your own site and syndicate it to Instagram; Instagram does not allow any way of posting to it except through the app. It has an API but it’s missing a very important method which is post a photograph. But you can syndicate to Medium and Flickr and Facebook and Twitter. That way, you benefit from the reach, so I’m publishing the original version on my own website and then I’m sending out copies to all these different services.
For example, I’ve got this section on my site called Notes which is for small little updates of say, oh, I don’t know, 280 characters and I’ve got the option to syndicate if I feel like it to say, Twitter or Flickr. When I post something on my own website—like this lovely picture of an amazingly good dog called Huxley—I can then choose to have that syndicated out to other places like Flickr or Facebook. The Facebook one’s kind of a cheat because I’m just using an “If This Then That” recipe to observe my site and post any time I post something. But I can syndicate to Twitter as well. The original URL is on my website and these are all copies that I’ve sent out into the world.
OK, but what about when people comment or like or retweet or fave the copy? They don’t come to my website to leave a comment or a fave or a like; they’re doing it on Twitter or Flickr. Well, I get those on my website too, and that’s possible because of another building block called webmention. Webmention is another end-point that you can have on your site, but it’s very, very simple: it just accepts pings. It’s basically a ping tracker. Anyone remember pingback? We used to have pingbacks on blogs, and it was quite complicated because it was XML-RPC and all this stuff. This is literally just a POST that goes “ping”.
Let’s say you link to something on my website; I have no way of knowing that you’ve linked to me, I have no way of knowing that you’ve effectively commented on something I’ve posted, so you send me a ping using webmention and then I can go check and see, is there really a link to this article or this post or this note on this other website and if there is: great. It’s up to me now what I do with that information. Do I display it as a comment? Do I store it to the database? Whatever I want to do.
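Here’s a minimal sketch in Python of such an end-point, again using Flask and requests (a real receiver would validate the URLs and queue the check rather than doing it inline); store_mention is a hypothetical helper:

from flask import Flask, request, abort
import requests

app = Flask(__name__)

@app.route("/webmention", methods=["POST"])
def webmention():
    # A webmention is just a form-encoded POST with source and target URLs
    source = request.form.get("source")
    target = request.form.get("target")
    if not source or not target:
        abort(400)
    # Go and check: does the source document really link to the target?
    if target not in requests.get(source, timeout=10).text:
        abort(400)
    store_mention(source, target)  # hypothetical helper: display it, store it, whatever
    return "", 202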
And you don’t even have to have your own webmention end-point. There’s webmentions as a service that you can subscribe to. Webmention.io is one of those; it’s literally like an answering service for pings. You can check in at the end of the day and say, “Any pings for me today?” Like a telephone answering service but for webmentions.
And then there’s this really wonderful service, a piece of open source software called Bridgy, which acts as a bridge. Places like Twitter and Flickr and Facebook do not send webmentions every time someone leaves a reply, but Bridgy, once you’ve authenticated with the rel=me values, monitors your social media accounts, and when somebody replies, it’ll take that reply, translate it into a webmention and send it to your webmention end-point. Now, any time somebody makes a response on one of the copies of your post, you get that on your own website, which is pretty neat.
It’s up to you what you do with those webmentions. I just display them in a fairly boring manner, the replies appear as comments and I just say how many shares there were, how many likes, but this is a mix of things coming from Twitter, from Flickr, from Facebook, from anywhere where I’ve posted copies. But you could make them look nicer too. Drew McLellan has got this kind of face-pile of the user accounts of the people who are responding out there on Twitter or on Flickr or on Facebook and he displays them in a nice way.
Drew, along with Rachel Andrew, is one of the people behind Perch CMS; a nice little CMS where a lot of this technology is already built in. It has support for webmention and all these kinds of things, and a lot of CMSs have done this, so you don’t have to invent this stuff from scratch. If you like what you see and you think, “Oh, I want to have a webmention end-point, I want to have a micropub end-point”, chances are it already exists for the CMS you’re using. If you’re using something like WordPress or Perch or Jekyll or Kirby, a lot of these CMSs already have plug-ins available for you to use.
Those are a few technologies that we can use to try and bridge that gap: to still get the control of owning your own content on your own website, and still have the convenience of those third-party services, where we get to use their interfaces, we get to have those conversations, the social effects that come with having a large network. Relatively simple building blocks: rel=me, micropub and webmention.
But they’re not the real building blocks of the indie web; they’re just technologies. Don’t get too caught up with the technologies. I think the real building blocks of the indie web can be found here in the principles of the indie web.
There’s a great page of design principles about why we’re even doing this. There are principles like: own your own data; focus on the user experience first; make tools for yourself and then see how you can scale them and share them with other people. But the most important design principle on that list, I think, comes at the very end, and it’s this: that we should 🎉 have fun (and the emoji is definitely part of the design principle).
Your website is your playground; it’s a place for you to experiment. You hear about some new technologies and you want to play with them? You might not get the chance at work, but you can try out CSS grid or service workers or the latest JavaScript APIs on your own site. Use your website as a playground to do that.
I also think we should remember the original motto of the World Wide Web, which was: let’s share what we know. And over the next few days, you’re going to hear a lot of amazing, inspiring ideas from amazing, inspiring people and I hope that you would be motivated to maybe share your thoughts. You could share what you know on Mark Zuckerberg’s website. You could share what you know on Ev Williams’s website. You could share what you know on Biz Stone and Jack Dorsey’s website. But I hope you’ll share what you know on your own website.
Thank you.
Here’s the talk I gave at Webstock earlier this year all about the indie web:
In these times of centralised services like Facebook, Twitter, and Medium, having your own website is downright disruptive. If you care about the longevity of your online presence, independent publishing is the way to go. But how can you get all the benefits of those third-party services while still owning your own data? By using the building blocks of the Indie Web, that’s how!
For the shortest month of the year, February managed to pack a lot in. I was away for most of the month. I had the great honour of being asked back to speak at Webstock in New Zealand this year—they even asked me to open the show!
I had no intention of going straight to New Zealand and then turning around to get on the first flight back, so I made sure to stretch the trip out (which also helps to mitigate the inevitable jet lag). Jessica and I went to Hong Kong first, stayed there for a few nights, then went on to Sydney for a while (and caught up with Charlotte while we were out there), before finally making our way to Wellington. Then, after Webstock was all wrapped up, we retraced the same route in reverse. Many flat whites, dumplings, and rays of sunshine later, we arrived back in the UK.
As well as giving the opening keynote at Webstock, I did a full-day workshop, and I also ran a workshop in Hong Kong on the way back. So technically it was a work trip, but I am extremely fortunate that I get to go on adventures like this and still get to call it work.
The fascinating history of India’s space program is the jumping-off point for a comparison of differing cultural attitudes to space exploration in Anab’s transcript of her Webstock talk, published on Ev’s blog.
From astronauts to afronauts, from cosmonauts to vyomanauts, how can deep space exploration inspire us to create more democratic future visions?
This is a wonderful piece by Maciej—a magnificent historical narrative that leads to a thunderous rant. Superb!
Matt has transcribed the notes from his excellent Webstock talk. I highly recommend giving this a read.
When I went to Webstock, I prepared a new presentation called Of Time And The Network:
Our perception and measurement of time has changed as our civilisation has evolved. That change has been driven by networks, from trade routes to the internet.
I was pretty happy with how it turned out. It was a 40 minute talk that was pretty evenly split between the past and the future. The first 20 minutes spanned from 5,000 years ago to the present day. The second 20 minutes looked towards the future, first in years, then decades, and eventually in millennia. I was channeling my inner James Burke for the first half and my inner Jason Scott for the second half, when I went off on a digital preservation rant.
You can watch the video and I had the talk transcribed so you can read the whole thing.
It’s also on Huffduffer, if you’d rather listen to it.
During the talk, I pointed to my prediction on the Long Bets site:
The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.
I made the prediction on February 22nd last year (a terrible day for New Zealand). The prediction will reach fruition on 02022-02-22 …I quite like the alliteration of that date.
Here’s how I justified the prediction:
“Cool URIs don’t change” wrote Tim Berners-Lee in 01999, but link rot is the entropy of the web. The probability of a web document surviving in its original location decreases greatly over time. I suspect that even a relatively short time period (eleven years) is too long for a resource to survive.
Well, during his excellent Webstock talk Matt announced that he would accept the challenge. He writes:
Though much of the web is ephemeral in nature, now that we have surpassed the 20 year mark since the web was created and gone through several booms and busts, technology and strategies have matured to the point where keeping a site going with a stable URI system is within reach of anyone with moderate technological knowledge.
The prediction has now officially been added to the list of bets.
We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.
The sysadmin for the Long Bets site is watching this bet with great interest. I am, of course, treating this bet in much the same way that Paul Gilster is treating this optimistic prediction about interstellar travel: I would love to be proved wrong.
The detailed terms of the bet have been set as follows:
On February 22nd, 2022 from 00:01 UTC until 23:59 UTC,
entering the characters http://www.longbets.org/601
into the address bar of a web browser or command line tool (like curl)
OR
using a web browser to follow a hyperlink that points to http://www.longbets.org/601
MUST
return an HTML document that still contains the following text:
“The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.”
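For what it’s worth, the condition is easy to check mechanically; here’s a minimal sketch in Python, assuming the requests library:

import requests

# Fetch the prediction page and see whether the quoted text is still there
html = requests.get("http://www.longbets.org/601").text
print("will no longer be available in eleven years" in html)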
The suspense is killing me!
A presentation about history, networks, and digital preservation, from the Webstock conference held in Wellington, New Zealand in February 2012.
Our next speaker is Jeremy Keith. Jeremy’s one of the leading practitioners and thinkers on building stuff on the Web. He’s also one of the most humane thinkers around that subject, which is really important too. We’ve been trying to get Jeremy to Webstock for a little while, and we’re absolutely delighted that he’s come here this year. He’s really pleased that he’s speaking early in the conference so he can, in his words, kick back and relax for the next couple of days; I’m not sure if that’s a threat or a promise, but I guess we’ll find out. Please give Jeremy a huge welcome.
Thank you. Kia ora. Hello. It is an absolute pleasure to be here. I’ve actually had this campaign to get myself to Webstock for years. I’ve been hearing about how awesome Webstock is from my friends who’ve spoken here before so I’ve been telling them, “If you happen to be speaking to Tash, put in a good word for me, get me out to New Zealand.” So it is an absolute thrill for me to be here.
I now have forty minutes of your time. And time is what I want to talk about today.
Well, two things: I want to talk about time and I want to talk about networks. I want to talk about our relationship with time, how we measure time, how we think of time, and how that has changed as our networks have changed, as the speed of our networks has changed. So it’s going to be a little bit of a history lesson.
I’m going to dive into this big ball of timey-wimey stuff.
I’d like to begin maybe about five thousand years ago. This is when human beings first started to clump into the nodes of the civilisation network. We settled down and we started domesticating barley, probably to make bread, though more likely, I think, to make beer.
At this point we really didn’t need to be able to tell the time that accurately. Essentially all you need to know is the time of the seasons; when is it the right time to plant barley? So we needed some way of measuring that kind of time. You will find these kind of ancient ways of measuring time still to this day.
This is Newgrange in Ireland, in County Meath. It’s about five thousand years old. It’s actually the oldest free-standing structure on the face of planet Earth. This is older than the Pyramids. It’s a burial chamber primarily, but it also functions as an astronomical clock for just one day a year. Because one day a year, the sunshine comes through this lintel above the doorway and illuminates the grave passage. That one day is the winter solstice, December 21st. So you’re able to calculate the rest of the year around that one day.
Actually, the sun comes through about four minutes after sunrise, if you ever get the chance to be in Newgrange on December 21st, but they did the calculations with the precession of the equinoxes, and it turns out five thousand years ago, it would’ve come through at exactly sunrise, but the change in the wobble of planet Earth has just put it out by four minutes.
Just knowing the time of year was enough back then. We began to build our networks. Transport networks across the planet; Romans building roads. But still, the speed of travel was essentially horses, or perhaps signal fires. We didn’t really have particularly fast networks.
Now the big change in our relationship to time came with a different network, and that was the trade routes, trying to travel across sea. Travelling across land was relatively straightforward. You always had landmarks you could gauge yourself by. Trying to travel by sea and figure out where you are at sea is actually a really tough problem. This is the longitude problem.
To explain the issue, there are three factors at work here.
You’ve got the position of objects in the sky—like the sun or the stars at night—and they had star charts for this, very valuable star charts. When people first started mapping the sky and creating star charts, that knowledge was hugely valuable. If your ship got boarded by pirates, they didn’t go for the treasure; they went for your maps, because your star charts were the valuable thing.
You’ve got the position of objects in the sky, what time it is, and position on the earth, and these are related.
For example, Kathy was showing the Star Walk apps. You get these apps for the iPad and iPhone that you can hold up, and you can see the position of objects in the sky overlaid on the screen. They work because if you know any two of these factors, you can figure out the missing third. In the case of those apps, your phone knows the time, and it knows your position on the earth; it’s got your latitude and longitude. Therefore, it can display the position of objects in the sky, overlaid onto a screen.
Okay, so if you have any two of these things, you can figure out the third; there’s a direct relationship.
For ships travelling at sea, they’re trying to figure out their position: where am I on the earth? And to do that, all they need to know is the position of objects in the sky and what time it is.
Now, figuring out the position of objects in the sky: relatively straightforward. We have tools for that like the sextant. Figuring out what time it is: that was actually the really tough bit.
If we know the position of objects in the sky as seen from a certain point on the planet at a certain time, we’re all set. The certain point on the planet they settled on was here, in Greenwich. This is the zero meridian line: zero degrees longitude.
And by the way, right now, I am as far from this spot as I have ever been in my life, which is amazing. I am used to being within one or two degrees of zero, and now I’m in the triple digits. It’s crazy!
Anyway…
If the star charts say that at this time at Greenwich, this constellation will be in this position, and you’re at sea and can see that constellation in a different position, and you know the time, you can calculate how far you’ve travelled east or west, because the planet turns three hundred and sixty degrees in twenty-four hours. It’s easy to calculate, as long as you know the time. And that was the problem: the time. Because telling time back then relied on pendulums, the pendulum clock: a wonderful invention by Huygens, but not that practical on a ship. Very hard to keep a pendulum clock going on a ship.
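To make the arithmetic concrete, here’s a worked sketch in Python (illustrative only):

# The Earth turns 360 degrees in 24 hours: 15 degrees per hour
DEGREES_PER_HOUR = 360 / 24

def longitude(greenwich_time_at_local_noon):
    # Observe local noon (the sun at its highest), note what your
    # Greenwich-set clock says, and the difference gives your longitude.
    # Positive means west of Greenwich, negative means east.
    return (greenwich_time_at_local_noon - 12.0) * DEGREES_PER_HOUR

print(longitude(15.0))  # local noon at 15:00 Greenwich time: 45 degrees west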
So there was a prize. The longitude prize was put forward by the Board of Longitude to try and solve this issue. Some people were coming at it from the time aspect. Other people tried different things.
Edmund Halley was mapping the magnetic fields of the Earth, thinking that if you knew the magnetic variation at a particular spot, maybe that was a way of figuring out where you were.
Or there were people saying, if you could see the moons of Jupiter, it effectively functions like a little clockwork clock in the sky, but you’d need really high-powered telescopes for that.
Other people thought of the idea of weapon salve: the powder of sympathy. Science was in its early phase back then, but the idea was, you would take a weapon and injure an animal with it, a dog for example, and you’d keep the dog on the ship. Then, on the hour in Greenwich, you would dip that weapon into this powder of sympathy, and the dog on the ship would yelp in sympathy. It was really important that you kept the wound open for the whole voyage. It was a theory. Citation needed.
But the more obvious approach was that if you had a way of accurately telling the time on a ship, you could figure out where you were, and that was finally solved by John Harrison. John Harrison created these amazing time-pieces. This is the Harrison Number Five; Harrison Number Four was the clock that really clinched it, that really solved the longitude problem. Actually, Cook had a copy of H4 on his second voyage, when he was circumnavigating New Zealand.
So this is a beautiful and historically interesting time-piece.
Here’s another time-piece. Anyone recognise where this is? Anyone? Christchurch, yes. This is the clock in Christchurch. This is an example of a Jubilee Clock.
Across Britain and the Empire, these clocks were springing up to celebrate Victoria’s Jubilee. This was 1897. Here’s another example of a Jubilee Clock, and this is from Brighton, where I live. Brighton, England. The clock tower in the middle, it’s the Jubilee Clock Tower.
It’s also got this fantastic steampunky thing. It was created by this mad steampunk inventor called Magnus Volk. He had an electric railway and all sorts of cool stuff. But you see that ball, that copper ball? It would travel up that pole over the course of an hour, and then drop on the hour. There was a line linking it to Greenwich, an electrical wire that linked Brighton to Greenwich, so that the ball would drop on the hour and everyone would have the correct time.
It didn’t really work that well for two reasons: one, all the neighbours complained because every hour you’re dropping a metal ball, and two, they were ruining the structural integrity of the building. It was essentially like a really slow wrecking ball over time.
But the reason why it was important to know what time it is in Greenwich was because of a network, because of a new network that was emerging, and that was the railway network. Before this time, it didn’t matter if we were using slightly different times at different places. If it was one o’clock in London and ten past one in Brighton, it didn’t make any difference to anybody. But now if you’ve got a schedule of networked devices, you need to make sure that you’re all running off the same clock, so you want to make sure that the railway schedule is the same everywhere. And so time now, it needed to be much more precise. And telling the time became a really important aspect of your business.
There’s this woman, Ruth Belville. She sold time. She lived in London. Every day she would go to Greenwich and she would set her watch by the clock in Greenwich, and then she would go around to her customers—she had about two hundred customers in London—and make sure that their watches were synched up. Literally treating time as a commodity. She actually ended up doing this right up until 1940, which is kind of amazing.
But the network invention that changed the world the most, I think, would have to be the telegraph. Cooke and Wheatstone’s telegraph. It’s really hard to over-state how important this was, this ability to communicate across really vast distances pretty much instantaneously. That’s a completely new, world-changing thing.
And literally world-changing. It practically terraformed the planet, laying this transatlantic cable between Ireland and Newfoundland. It was a huge, huge engineering undertaking. It took two tries, but they eventually managed to do it. It was an incredible feat, a really incredible feat. Before long, the whole world was starting to get wired up for this network.
The difference this made was enormous; the speed of communication was so fast now. To give you an example of the difference it made: in the War of 1812 between Britain and America, there was the Battle of New Orleans, a bloody battle. Many people died. The heartbreaking aspect is that this battle was fought after the Treaty of Ghent had already been signed. The war was technically over, but they didn’t know that down in New Orleans. The news hadn’t travelled fast enough.
You compare that to something like the Crimean War after the invention of the telegraph, and people in London were reading about what happened the day before in their newspaper because of the telegraph, an absolutely amazing invention.
Before long, we were able to get rid of the wires and go wireless. We got radio, thanks to Tesla. Now our networks are wireless, and we’re sending signals over radio, and during wartime, those would be encrypted signals.
The Germans were encrypting their signals with the Lorenz and Shark ciphers and their Enigma machines, and the Allies were attempting to crack those ciphers. And that’s where we get to the invention of the computer: Alan Turing, Tommy Flowers, trying to crack the German codes, and they invent Colossus. They invent the first computer.
Or is it the first computer? Because if you go back a hundred years, there’s an entrepreneur in London who has a plan. He has a plan to build a computer. The Difference Engine. This is Charles Babbage. Actually, this is half of Charles Babbage’s brain, which you can see in the Science Museum in London.
He came up with a plan for the Difference Engine, and later the Analytical Engine: machines to do computing, to calculate with moving parts. It really pre-figures a lot of what we think about computers today. It effectively had a central processing unit (there was this mill that acted like the central processing unit) and it could store calculations: it had memory, effectively.
So it seems to be this direct forerunner to our modern computers. But actually, it never got built. It remained vapourware pretty much the whole time; he kept starting over again, and despite Government funding, it never really got off the ground. And when Alan Turing was building Colossus in Bletchley Park, he didn’t have any knowledge of Babbage. They found out about Babbage later, but there was no direct link.
So why am I mentioning Babbage? Is it because of his sidekick, Ada Lovelace, world’s first programmer? Actually, no. Her contribution has maybe been over-estimated.
The reason I’m mentioning Babbage and the Difference Engine is because of this man: Joseph Whitworth. He was the engineer on the project; he had to actually build the damn thing. He’s the sys-admin of the Difference Engine. And his contribution to society, I think, has in some ways been greater than Babbage’s.
What Joseph Whitworth gave us was standards. He standardised screw-threads. Something we completely take for granted today, but that’s the beauty of standards: we take them for granted. We only notice them when they’re not there.
Before Whitworth, if you wanted to make something, you pretty much made every screw from scratch. He introduced a standard, British Standard Whitworth, so that screws, and the tools we needed to make other tools, were of a set size, and we could just get on with making stuff.
Standards, as I say, are so important that you really only notice them when they’re not there.
Do we have any visitors from Australia? Yes. So you could tell us a few stories about standards with your railway system, for example. Australian history is littered with the problem of incompatible standards. Building different railway gauges in different parts of the country. Works fine until you try to link them up.
Standards help us evolve as a civilisation. They help us just get on with doing stuff, and if I look back at, say, the twentieth century, of the most important standards that we came up with, I would say there are two:
In the world of atoms, in the world of physical things, the shipping container. It’s the most important standard for physically moving stuff around the planet. It is a standard. It’s ISO 6346, the dimensions of the shipping container. There’s about seventeen million of these in the world, and about ten thousand go missing every year; they just fall off a ship.
The other set of standards I would consider the most important invention of the twentieth century doesn’t come from the world of atoms; it comes from the world of bits. And that’s the standards given to us by Tim Berners-Lee.
Because if you think about what he invented, yes he did invent the world’s first web browser called WorldWideWeb, but really what he came up with was a set of standards. A simple idea that you would have resources, identified with URLs, transmitted over HTTP—a standard, the HyperText Transfer Protocol—and those resources would probably contain HTML, another standard.
That’s all the web is. The web isn’t a physical thing. It’s built on top of a physical network which is the Internet, but the web itself is agreement. It’s a set of standards that we all agree to use.
And so we get the World Wide Web, this fantastic, fantastic network, transmitting information.
What’s interesting about the web is that, even though we use it, we aren’t its natural citizens. What the web is really good at, because it’s this world-wide network of instantaneous transmission, is being inhabited by algorithms, by bots, and this can be really useful, right? We build websites like Netflix and Amazon, where you’ve got recommendation engines: algorithms, effectively, trying to figure out that people who like this also like that. Maybe this would be the next book you’d like to buy.
But the algorithms turn out to be better than human beings at some things, because of the speed they can operate at. Human beings, we have to evaluate information, make decisions in our head, and then act on that decision, maybe clicking on a mouse. That can be way too slow for some transactions, like trading on the Stock Exchange. So now what we have is algorithmic trading. These happen much faster than any human being; in fact the speed of light becomes a factor in algorithmic trading. You’ve got bots making decisions about what to do with our money, and they’re making decisions faster than we possibly could.
This is a map of where the best loci are of the algorithmic network, where you can get the fastest connection. And there’s obvious things like, if you’re near a backbone connection to the internet, you’re going to get a faster connection, right? So you can see those big red circles are where the backbones are, coming into these countries.
What’s really interesting is that some of these places are in the middle of the ocean, and I guarantee you, you will see people building server farms on those spots in the ocean. We will have human beings working for algorithms. We have human beings working for algorithms already. If you work in the field of Search Engine Optimisation, you are a human being working for an algorithm.
People are already starting to terraform the world to get those few nanoseconds, microseconds edge over their competitors. There’s a company that’s built a straight line fibre-optic cable from New York to Chicago, just to get those few nanoseconds.
The bots are trading. And we don’t know what they’re doing. We built them, but what they then do is beyond our comprehension. They do things we don’t understand. Like on this particular day, May 6th 2010: the flash crash. The Stock Market crashed a thousand points in minutes. At the time, they didn’t know why this happened. They still don’t know why this happened. It was algorithms; algorithms that we build, but we don’t control.
At least these algorithms were made by human beings. Once we start designing algorithms for building algorithms, we’re totally screwed.
There’s a company in Boston that started analysing the patterns of behaviour of these algorithms, discovering them within the networks and giving them names. Every day on their website, they would post the Stock Market crop circle of the day, because of these interesting patterns they would see: these things we’ve created that are totally beyond our comprehension.
We get to the position where the more important the decision is, the more likely it is it’s going to be made by an algorithm. So, “will I get a new mortgage?” That’s going to be left to an algorithm to decide. Trivial things like, “would you like fries with that?” A human being will be able to answer.
So this is the present day, and I appear to have arrived at quite a dark place. But this network, the web, built on the Internet, is also good for human beings. It’s inhabited by bots, but we can use it to our own ends.
The networks that are most obviously useful to us are the social networks that we use every day. And yes, I think we all know that with Facebook and Google and Twitter, we are the product, right? They’re using us to get at our data. But I think we also know that it’s more of a symbiotic relationship; that we get to use those tools, and we get to stay in touch with people, instantaneously. I can stay in touch with people on the other side of the world thanks to these networks, and that is wonderful.
But I want to highlight a danger with these networks and that ease of access, and that danger is our focus on the real time, our focus on the here and now. And like I said, it’s wonderful. There’s a lot of talk about the real-time web, and I think it’s great. But I think we tend to neglect the other aspect of time: the longer time, that long-term view.
Robin Sloan uses an example from economics to talk about this. He talks about flow and stock—these are terms from economics—to talk about the real-time web and the long-term web. He says:
Flow is the feed. It’s the stream of daily and sub-daily updates that remind people that you exist. Stock is the durable stuff; it’s what people discover via search, it’s what spreads slowly but surely. Flow is ascendant these days, but we neglect stock at our peril.
And I also don’t think it needs to be an either-or situation. I think that all those daily updates over time are contributing to the stock of the web. My friend, Matt Ogle wrote a lovely blog post about that, about this archive we’re all collectively building with these instant updates. He says:
We’ve all been so distracted by the now, that we’ve hardly noticed the beautiful comet-tails of personal history trailing in our wake.
But one of the reasons I think why we don’t consider this is our attitude to what we put on the network. Our attitude tends to be that it’s written in stone. It’s on the network. How often have you heard the phrase, ‘the internet never forgets’ or ‘Google never forgets’, ‘Facebook never forgets’? These phrases are thrown around as truisms, and we nod our heads, oh yeah, yeah, the internet never forgets. Just like the Eskimos have fifty words for snow and everyone at the time of Columbus thought that the world was flat. These things sound logical and reasonable, but they’re all completely false.
Where do you get this data that the internet never forgets? If you actually look at the data, the internet forgets incredibly quickly.
This is a diagram that was put together a few years ago, at the height of the Web 2.0 boom, of the logos of all these different Web 2.0 companies that wanted access to our data, to our updates. And I think the reason why this was put together was just to show how unimaginative our logo design was. Lots of pastel colours and nice, friendly corners and all that.
But not long after this, Meg Pickard—who spoke here at Webstock before—she took a look at all the companies that were bought up by larger companies.
She also noted down all the companies that were gone, no longer online. And any data that had been poured into those companies was also gone. The internet forgets. The internet will forget our hopes and our dreams, our information, our data that we pour into it.
Case in point: GeoCities.
It was ugly, right? We don’t care. We don’t care if GeoCities is gone. Under construction signs. Who’s going to miss that?
It was such an important part of our history. It formed us. It formed us as citizens of the web.
Phil Gyford put it well. He was writing at the time of the destruction of GeoCities by Yahoo!, and he said:
GeoCities is an awful, ugly, decrepit mess, and this is why it will be sorely missed. GeoCities sites showed what normal, non-design people will create if given the tools available around the turn of the millennium. As companies like Yahoo! switch off swathes of our on-line universe, little fragments of our collective history disappear. They might be ugly and neglected fragments of our history, but they’re still what got us where we are today.
All that’s gone.
The situation was summed up even better, I think, by Jason Scott. He formed Archive Team. They attempted to rescue as much of GeoCities as they could. He gave us the long-term perspective. He put it like this, and I quote:
When history takes a look at the lives of Jerry Yang and David Filo, this is what it will probably say: Two graduate students intrigued by a growing wealth of material on the internet built a huge fucking lobster trap, absorbed as much of human history and creativity as they could, and destroyed all of it.
Now, we could be fatalistic about this and say, that’s just the nature of the network. My friend Dan wrote this, that the reality of it is very little on the web will be permanent. Embrace that.
Reality: very little on the web will be permanent. Embrace that.
— Dan Cederholm (@simplebits) January 13, 2011
Now I agree with the first part. Very little on the web will be permanent. But embrace that? No. No, no, no, no, no; I will not accept that.
Another friend, Mandy Brown, put it better. She said:
No civilisation has ever saved everything. Acknowledging that fact does not obviate the need to try and save as much as we can. The technological means to produce an archive are not beyond our skills. Sadly, right now at least, the will to do so is insufficient.
But, in a way, that gives me hope: the fact that the problem isn’t technological, that the problem is about us deciding to do something about it. And the first step is acknowledging that this phrase, “the internet never forgets”, is complete bollocks. If we begin with that, then maybe this task of preserving our culture won’t be beyond us.
So if all these companies are absorbing our data, then selling out and closing down, and they really don’t give a fuck about us, what can we do?
Well, you could host your own data, right? Create your own place on the web, and that’s where you keep your stuff: your own island of your hopes and your dreams and your data and your information.
It’s still a bit geeky though, right? We’re nerds; we can do this. We can host our own, we can set all this stuff up, but ordinary, everyday people are going to need help, and that’s where these third-party companies really do help them. There’s an opportunity here. I think as more people get burned by things like GeoCities and other destroyers of culture, they will learn that it is better to host your own.
Now I’m not saying that you host your own data and hold onto it and don’t let anyone at it. It’s more that you want the canonical copy to be under your control. Yes, you can let Facebook have a copy, you can let Google have a copy, but I want to host the canonical copy. I want that canonical URL.
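To make the canonical-copy idea concrete, here’s a minimal sketch, with an invented domain and slug, of a page that declares its own canonical URL using the rel="canonical" convention, so that any copy living elsewhere can point back to the original:

```python
# Render a post at its canonical URL, marked with rel="canonical"
# so that copies on other services can point back to the original.
# The domain and slug are hypothetical.
def render_post(slug: str, title: str, body_html: str) -> str:
    url = f"https://example.com/articles/{slug}/"
    return f"""<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>{title}</title>
  <link rel="canonical" href="{url}">
</head>
<body>
{body_html}
</body>
</html>"""

print(render_post("long-bets", "On a long bet", "<p>Hello, world.</p>"))
```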
Then you’ve got to decide what you’re going to store and what format it’s going to be in, because with a long term view, formats don’t last very long at all, especially electronic formats.
There’s a particular format I’m fond of, and it’s the one that Tim Berners-Lee gave us: HTML. Obviously I’m biased because I really like HTML; I work with it every day. But longevity is actually built into the format: it is designed to be a long-term format. It’s been around for twenty years now. That’s really long for an electronic format.
It’s evolved over time, but it’s always maintained its backwards compatibility. You can still go to the very first Web page ever created, and it works in web browsers, because it’s written in HTML. That’s not an accident.
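That’s a claim you can test. Here’s a rough sketch, assuming network access and that CERN keeps serving the page at its original address, which fetches the first web page ever published and parses it with nothing but the standard library:

```python
# Fetch the first web page ever published and pull out its <title>.
# It still parses, two decades on, because it's plain HTML.
from html.parser import HTMLParser
from urllib.request import urlopen

class TitleFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        self.in_title = (tag == "title")
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = urlopen("http://info.cern.ch/hypertext/WWW/TheProject.html").read()
finder = TitleFinder()
finder.feed(html.decode("utf-8", errors="replace"))
print(finder.title)  # expected: "The World Wide Web project"
```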
Here’s a blog post from 2001 from Owen Briggs of the Noodle Incident. He was talking about validation of your code at the time, and why it mattered, but I think it speaks to the bigger picture of why HTML matters. He says:
The code has to expand its capabilities as we do, yet never in a way that blocks out earlier documents. Everything we put down now, will continue to be readable as long as it was recorded using markup valid in its time. This was an attempt to make a code that can go decades and centuries getting broader in scope, without ever shutting out its early versions.
That sounds really idealistic, but it’s a pretty accurate description of the ongoing standardisation of HTML by the HTML Working Group. In fact, Ian Hickson wrote in 2007 that the very reason he is the editor of the HTML spec was this:
I decided that for the sake of our future generations, we should document exactly how to process today’s documents so that when they look back, they can still re-implement HTML browsers and get our data back.
This isn’t a side-product of HTML; it’s a fundamental design principle of HTML.
All right. So we can host our own data, written in HTML, written in standards, at our own domain, our own URL.
There’s actually a problem with the URL part as well, and it’s with the domain aspect. Because we don’t buy domains; we rent domains, and usually on time-scales that are pretty short, right? A year, two years, maybe five years tops. How many domain names have you let lapse, just in the last decade?
It’s interesting: the domain name system relies on centralisation, with ICANN, whereas everything else that’s good about the web is decentralised. The positive sides of the web come from the fact that it doesn’t have a central authority, but when it comes to domain names, we do have one. Now I’m not saying I’ve got a better solution; I’m just pointing out that if we’re looking for weaknesses in our network, centralisation is a weakness, and maybe domain names are a problem for that reason.
But again, if we acknowledge the problem, if we’re aware of this fact, then that’s the first step in taking care of our URLs.
Tim Berners-Lee wrote in 1999:
Cool URIs don’t change.
URIs, URLs, more or less interchangeable, right? So this is a fundamental principle of the web.
Oh, I also like that he had a little footnote, right, you see the little asterisk there? That was to clarify, he said:
Historical note: at the end of the twentieth century, when this was written, “cool” was an epithet of approval, particularly among young, indicating trendiness, quality or appropriateness.
I love that. That’s some long-term thinking right there.
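In practice, keeping URIs cool mostly means never letting an old address die when a site is reorganised: the old URL should answer with a permanent redirect to the new one. A minimal sketch, with hypothetical paths, using only Python’s standard library:

```python
# Keep cool URIs cool: serve 301 redirects from old addresses to new ones.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {
    "/archive/2001/some-old-post.html": "/articles/some-old-post/",
}

class LegacyURLHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REDIRECTS:
            self.send_response(301)  # moved permanently
            self.send_header("Location", REDIRECTS[self.path])
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), LegacyURLHandler).serve_forever()
```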
Thinking in the long term is a great design exercise to make us really focus on what we’re building and appreciate its long-term qualities.
So confident am I in HTML, in standards, and in the longevity of the web, that I’ve decided to put my money where my mouth is.
There’s a site called LongBets.org, and you can go there and make a prediction: not a short-term prediction, a long-term prediction. You have to put some money down to make the prediction. You put money on the table and you nominate a charity for that money, and if someone disagrees with your prediction, they can put money down too and nominate their own charity. When the time of the prediction rolls around, somebody wins, somebody loses, and one charity benefits from that money.
I put money down and made this prediction at LongBets.org/601. My prediction is: in eleven years, the URL for this prediction will no longer be available.
I like to think of this as a win-win situation, because if the URL is still available, I lose the bet but it proves that the web is better than I thought. If the URL is not available, I get some money. Somebody pointed out it’s actually a lose-lose situation, because if the URL isn’t available, there’s no organisation there to hand out the money.
I made this bet last year, in February of last year, predicting that in eleven years’ time the URL would not be available. So we’ve got until February 22nd, 2022. If anybody would like to take me up on that bet, you can put your money down.
The Long Bets site is great. It’s from the organisation called the Long Now Foundation. Anybody know the Long Now Foundation? Do we have any members of the Long Now Foundation? Oh, just me. Okay.
It’s this wonderful organisation where they’re dedicated to long-term thinking. It was put together by a number of people: Stewart Brand…
Stewart Brand: throughout the twentieth century, he was there. He wrote How Buildings Learn, a fantastic book about architecture. He created the Whole Earth Catalogue for the communes in the 1960s; it was like the internet printed out, a guide to how to run a civilisation. And he started up the Long Now Foundation together with some other people, like the musician Brian Eno: Roxy Music, ambient music.
It was Eno, living in New York at the time, who noticed that everybody was treating time as a really short-term thing. He would ask people, “what are you doing?”, and they would tell him what they were doing that day, maybe even just in the last few hours. That was their approach to time. He wanted to encourage long-term thinking.
They came together, formed the Long Now Foundation, and they have a number of different projects. One of their projects is this, the Clock of the Long Now.
The idea is that they want to build a clock that will tell time for ten thousand years. Quite a design challenge. I’m not sure they’re getting everything quite right, but just the fact that they’re thinking about this is really great. You can go to the website of the Clock of the Long Now, where they have their design principles written down, and they’re some of the best design principles I’ve ever seen. There’s the idea that it should be scale-free: it should work at different scales, so there’s a working prototype in the Science Museum in London, but it’s also going to work at a very large scale.
One of the places it’s going to be is in Nevada, I think: they found a mountain with some mine-works in it, and that’s where they’re going to build the clock. There’s also a site in Texas.
And this is the thing: if this remained just a thought exercise, it wouldn’t be that useful. But they’re actually going to build the thing. They are building a ten-thousand-year clock, and they’ve had to think about how it will adjust for time: learn the lessons of Newgrange, right? Take the precession of the equinoxes into account. It’s got to be self-adjusting. And because Brian Eno’s working on it, every chime, which will happen once every millennium, will be different; it’ll be generated sound.
I love this. I love the fact that there’s this long-term thinking going on. When the human race applies itself to long-term thinking, I think the results are always pretty fantastic.
Another example of long-term thinking would be the global seed bank in Svalbard in Norway: making back-ups of the seeds of every species of plant that we know about. That’s good, long-term thinking. Preparing for the unknown.
Really long-term thinking. What do we do with nuclear waste? We’re storing it in the ground. How do we warn future generations to stay away?
We can’t rely on signage. Iconography is very culturally specific; it has to be learned. We can’t rely on written text. We don’t know what languages will still be used, or whether writing will still be used in thousands of years’ time. It’s very difficult.
There was actually a think-tank that got together to figure out how to get this message across to future generations. They condensed the message they were trying to convey down to this:
This place is a message and part of a system of messages. Pay attention to it. Sending this message was important to us. We considered ourselves to be a powerful culture. This place is not a place of honour. No highly esteemed deed is commemorated here. Nothing valued is here.
That’s actually a fairly complex message to try to encode without using iconography or writing. I think they settled on menacing earthworks as a way of warning people off. There was also a school of thought saying: don’t put anything there to mark it at all, because drawing attention to it essentially invites the grave-robbers of the Pyramids.
Again, long-term thinking, long-term design challenges I think are wonderful. But even this is not the most long-term design challenge there is.
This is a really long-term design challenge: the Voyager Golden Record. Like the Pioneer plaques before them, these were sent out on the Voyager probes, currently the furthest man-made objects from our planet.
Carl Sagan put this together, and it’s a gramophone record. It contains audio; it contains pictures. Greetings from Planet Earth. How do you encode the instructions to decode it?
It’s really interesting that this was 1977 and they decided to go analogue: a record, rather than digital, even though computers were definitely around back then. They decided it was easier to encode the information “here’s how to build your own record player” than to encode the information “here’s how to build your own computer.”
But even then, all the standards and measurements that we take for granted in our culture and on our planet, we can’t take for granted on a universal scale. If some other civilisation were to come across this artefact (and it’s really unlikely), how could we let them know the units of measurement that we use? We can’t use seconds and minutes and hours. Those are just left-over Babylonian bugs.
So what they used was this diagram here: the hyperfine transition of a neutral hydrogen atom. The time that transition takes is essentially universal clockwork; it’s the same everywhere in the universe. And if a civilisation could figure this out, they could decode this record and get greetings from planet Earth: greetings in many languages, snippets of music, Bach, Chuck Berry, a little time capsule of our civilisation.
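The arithmetic behind that universal clockwork is straightforward, assuming the textbook figure for the hydrogen line: the transition radiates at about 1420.4 MHz, so one period of it, the base unit of time in the record’s diagrams, is roughly 0.7 nanoseconds.

```python
# The hyperfine transition of neutral hydrogen (the 21 cm line)
# has a frequency of about 1420.405751 MHz. Its period is the
# base unit of time used in the Voyager record's diagrams.
frequency_hz = 1_420_405_751.77
period_s = 1.0 / frequency_hz
print(f"one hydrogen 'tick' = {period_s:.3e} seconds")  # ~7.04e-10 s
```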
So what I want you to do is think about this: if you had to put together a Voyager record, what would you put on it? What matters to you? Of everything that the human race has ever created, what would you put on this time capsule for other civilisations?
Think about that, think about what you’d put on there, and then the next thing I want to ask you is, what are you waiting for?
Publish it. Publish it on the network. Put it out there, using standardised formats. Care for the URLs. Make sure those URLs last.
And make lots of copies, and allow people to make lots of copies. There’s a term in digital preservation, LOCKSS: Lots Of Copies Keep Stuff Safe. So don’t hold onto this stuff. Publish it, put it out there, care for those URLs.
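As a toy illustration of the LOCKSS principle (the mirror paths are made up, and real preservation systems add checksums, audits and repair), copying a published file to several independent locations can be this simple:

```python
# Lots Of Copies Keep Stuff Safe: mirror a file to several places,
# so that losing any single location loses nothing.
import shutil
from pathlib import Path

MIRRORS = [Path("/mnt/backup-a"), Path("/mnt/backup-b"), Path("/mnt/offsite")]

def replicate(source: Path) -> None:
    for mirror in MIRRORS:
        mirror.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, mirror / source.name)  # preserves timestamps

replicate(Path("index.html"))
```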
I really think it’s important that this network survives for the long term, not just as a real-time network, but to allow our species to go further.
I talked about lots of copies keeping stuff safe, but as a species, we have a problem: we’ve only got local back-up. All our DNA is stored on one planet. The dinosaurs died out because they didn’t have a space programme. We need to get off this planet, store our DNA elsewhere, colonise the solar system first and the galaxy second, and to do that, the network can help us: tell each other things faster, don’t lose things, get more people involved in solving problems.
I hope that you will do that. I hope that you will consider what you want to put out there and just do it. Publish it. Take care of those URLs. Use standards. Contribute to the race.
Thank you very much for your time, and I’d also like to thank these people for licensing their photographs liberally and allowing me to use them so that lots of copies can keep those photographs safe, and thank you once more.
Thank you.
This presentation is licensed under a Creative Commons Attribution licence: you are free to share and adapt it, on the condition that you credit the original.
I really enjoyed Matt’s talk from Webstock. I know some people thought it might be a bit of a downer but I actually found it very inspiring.
The video of my talk from Webstock, all about wibbly-wobbly, timey-wimey stuff like networks and memory.
I’m genuinely touched by Matt’s kind words on my Webstock talk. It really means a lot to me, coming from him.
I spent most of February on the far side of the world. I had the great honour and pleasure of speaking at Webstock in New Zealand.
This conference’s reputation had preceded it. I had heard from many friends who have spoken in previous years that the event is great and that the organisers really know how to treat their speakers. I can confirm that both assertions are absolutely true. Jessica and I were treated with warmth and affection, and the whole conference was a top-notch affair.
I gave a new talk called Of Time and the Network taking a long-zoom look at how our relationship with time has changed as our communications networks have become faster and faster. That’s a fairly pretentious premise but I had a lot of fun with it. I’m pretty pleased with how it turned out, although maybe the ending was a little weak. The video and audio should be online soon so you can judge for yourself (and I’ll be sure to get the talk transcribed and published in the articles section of this site).
Perhaps it was because I was looking for the pattern, but I found that a number of the other talks were also taking a healthy long-term view of the web. Matt’s talk in particular—Lessons from a 40 Year Old—reiterated the importance of caring for URLs.
Wilson reprised his Build talk, When We Build, the title of which is taken from John Ruskin’s Lamp of Memory:
When we build, let us think that we build forever. Let it not be for present delight nor for present use alone. Let it be such work as our descendants will thank us for; and let us think, as we lay stone on stone, that a time is to come when those stones will be held sacred because our hands have touched them, and that men will say, as they look upon the labour and wrought substance of them, “See! This our fathers did for us.”
Another recurring theme was that of craft …which is somewhat related to the theme of time (after all, craftsmanship takes time). Erin gave a lovely presentation, ostensibly on content strategy but delving deeply into the importance of craft, while Jessica, Adam and Michael gave us insights into their respective crafts of lettering and film-making.
All in all, it was a great event—although the way the schedule split into two tracks in the middle of the day led to the inevitable feelings of fomo.
Matt wrote about his Webstock experience when he got back home:
It was easily the best speaking gig I’ve ever had. I’m planning on making this conference a regular visit in the future (as an attendee) and I urge anyone else (especially in America) that has yearned for a top-quality, well-run technology conference in a place you’ve always wanted to visit to do the same.
I love these sketchnotes from my presentation at Webstock.
I can’t fave this picture enough. One moment of Webstock captured by Michael B. Johnson.
Over an after-work beer with Andy yesterday evening, he told me about some of the great presentations he had been listening to from the Webstock conference held in New Zealand earlier this year.
I didn’t even realise there was audio (and video!) available, but there is. Strangely though, there are no RSS feeds. That means you have to go in and manually download each presentation you want to listen to. How quaint.
It doesn’t make it very easy to get the audio from the website to your computer to your iPod. This sounds like a job for a podcast.
I’ve put together a feed of the presentations from Webstock. You can subscribe to the feed in iTunes and listen to the presentations at your leisure.
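For the curious, a feed like that is little more than RSS 2.0 with an enclosure element per audio file. A minimal sketch, with invented titles and URLs; a real feed would set each enclosure’s length to the file’s size in bytes:

```python
# Build a minimal podcast feed: RSS 2.0 items with audio enclosures.
from xml.etree import ElementTree as ET

EPISODES = [
    ("Opening keynote", "https://example.com/audio/keynote.mp3"),
    ("Closing talk", "https://example.com/audio/closing.mp3"),
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Webstock presentations"
ET.SubElement(channel, "link").text = "https://example.com/"
ET.SubElement(channel, "description").text = "Audio from the conference"

for title, url in EPISODES:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    # The enclosure is what makes an RSS item a podcast episode;
    # "length" should be the file size in bytes (0 as a placeholder here).
    ET.SubElement(item, "enclosure", url=url, type="audio/mpeg", length="0")

ET.ElementTree(rss).write("feed.xml", encoding="utf-8", xml_declaration=True)
```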
The weird thing is that podcasts were promised but never delivered. I wonder if perhaps there was some confusion between the terms “podcast” and “audio file”.
Anyway, grab the feed here and get an earful of the great speakers: