The use-case is slightly different—this is about personal archives, like paperwork, screenshots, and bookmarks. But we both came up with the same process:
I’m deliberately going low-scale, low-tech. There’s no web server, no build system, no dependencies, and no JavaScript frameworks.
And we share the same hope:
Because this system has no moving parts, and it’s just files on a disk, I hope it will last a long time.
You should read the whole thing, where Alex describes all the other approaches they took before settling on plain ol’ HTML files in a folder:
HTML is low maintenance, it’s flexible, and it’s not going anywhere. It’s the foundation of the entire web, and pretty much every modern computer has a web browser that can render HTML pages. These files will be usable for a very long time – probably decades, if not more.
I’m enjoying this approach, so I’m going to keep using it. What I particularly like is that the maintenance burden has been essentially zero – once I set up the initial site structure, I haven’t had to do anything to keep it working.
They also talk about digital preservation:
I’d love to see static websites get more use as a preservation tool.
I concur! And it’s particularly interesting for Alex to be making this observation in the context of working with the Flickr Foundation. That’s where they’re experimenting with the concept of a data lifeboat.
I always like getting together with Remy. We usually end up discussing sci-fi books we’re reading, commiserating with one another about conference-organising, discussing the minutiae of browser APIs, or talking about the big-picture vision of the World Wide Web.
On this train ride we ended up talking about the march of time and how death comes for us all …and our websites.
Take The Session, for example. It’s been running for two and a half decades in one form or another. I plan to keep it running for many more decades to come. But I’m the weak link in that plan.
If I get hit by a bus tomorrow, The Session will keep running. The hosting is paid up for a while. The domain name is registered for as long as possible. But inevitably things will need to be updated. Even if no new features get added to the site, someone’s got to install updates to keep the underlying software safe and secure.
Remy and I discussed the long-term prospects for widening out the admin work to more people. But we also discussed smaller steps I could take in the meantime.
Like, there’s the actual content of the website. Now, I currently share exports from the database every week in JSON, CSV, and SQLite. That’s good. But you need to be a tech nerd to do anything useful with that data.
The more I talked about it with Remy, the more I realised that HTML would be the most useful format for the most people.
There’s a cute acronym in the world of digital preservation: LOCKSS. Lots Of Copies Keep Stuff Safe. If there were multiple copies of The Session’s content out there in the world, then I’d have a nice little insurance policy against some future catastrophe befalling the live site.
With the seed of the idea planted in my head, I waited until I had some time to dive in and see if this was doable.
Fortunately I had plenty of opportunity to do just that on some other train rides. When I was in Spain and France recently, I spent hours and hours on trains. For some reason, I find train journeys very conducive to coding, especially if you don’t need an internet connection.
By the time I was back home, the code was done. Here’s the result:
If you want to grab a copy for yourself, go ahead and download this .zip file. Be warned that it’s quite large! The .zip file is over two gigabytes in size and the unzipped collection of web pages is almost ten gigabytes. I plan to update the content every week or so.
I’ve put a copy up on Netlify and I’m serving it from the subdomain archive.thesession.org if you want to check out the results without downloading the whole thing.
Because this is a collection of static files, there’s no search. But you can use your browser’s “Find in Page” feature to search within the (very long) index pages of each section of the site.
You don’t need a web server to click around between the pages: they should all work straight from your file system. Double-clicking any HTML file should give you a starting point.
I wanted to reduce the dependencies on each page to as close to zero as I could. All the CSS is embedded in the page. Likewise with most of the JavaScript (you’ll still need an internet connection to get audio playback and dynamic maps). This keeps the individual pages nice and self-contained. That means they can be shared around (as an email attachment, for example).
I’ve shared this project with the community on The Session and people are into it. If nothing else, it could be handy to have an offline copy of the site’s content on your hard drive for those situations when you can’t access the site itself.
Hurrah! This web page achieves a carbon rating of A+
This is cleaner than 99% of all web pages globally
But under the details about hosting it says:
Oh no, it looks like this web page uses bog standard energy
The Session is hosted on DigitalOcean, who tend to be quite tight-lipped about their energy suppliers. Fortunately others have done some sleuthing to figure out which facilities are running on green energy.
One of the locations to get the green thumbs up is the Amsterdam facility housed by Equinix. That’s where The Session is hosted.
I’m glad that I was able to find out that the site is running on 100% renewable energy, but I wish I didn’t need to go searching to find this out. DigitalOcean need to be a lot more transparent about the energy sources for their hosting facilities.
I was chatting with my new colleague Alex yesterday about a link she had shared in Slack. It was the Nielsen Norman Group’s annual State of Mobile User Experience report.
There’s nothing too surprising in there, other than the mention of Apple’s app clips and Google’s instant apps.
Remember those?
Me neither.
Perhaps I lead a sheltered existence, but as an iPhone user, I don’t think I’ve come across a single app clip in the wild.
I remember when they were announced. I was quite worried about them.
See, the one thing that the web can (theoretically) offer that native can’t is instant access to a resource. Go to this URL—that’s it. Whereas for a native app, the flow is: go to this app store, find the app, download the app.
(I say that the benefit is theoretical because the website found at the URL should download quickly—the reality is that the bloat of “modern” web development imperils that advantage.)
App clips—and instant apps—looked like a way to route around the convoluted install process of native apps. That’s why I was nervous when they were announced. They sounded like a threat to the web.
In reality, the potential was never fulfilled (if my own experience is anything to go by). I wonder why people didn’t jump on app clips and instant apps?
Perhaps it’s because what they promise isn’t desirable from a business perspective: “here’s a way for users to accomplish their tasks without downloading your app.” Even though app clips can in theory be a stepping stone to installing the full app, from a user’s perspective, their appeal is the exact opposite.
Or maybe they’re just too confusing to understand. I think there’s another technology that suffers from the same problem: progressive web apps.
Hear me out. Progressive web apps are—if done well—absolutely amazing. You get all of the benefits of native apps in terms of UX—they even work offline!—but you retain the web’s frictionless access model: go to a URL; that’s it.
So what are they? Are they websites? Yes, sorta. Are they apps? Yes, sorta.
That’s confusing, right? I can see how app clips and instant apps sound equally confusing: “you can use them straight away, like going to a web page, but they’re not web pages; they’re little bits of apps.”
I’m mostly glad that app clips never took off. But I’m sad that progressive web apps haven’t taken off more. I suspect that their fates are intertwined. Neither suffers from technical limitations. The problem they both face is inertia:
The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.
True of progressive web apps. Equally true of app clips.
But when I was chatting to Alex, she made me look at app clips in a different way. She described a situation where somebody might need to interact with some kind of NFC beacon from their phone. Web NFC isn’t supported in many browsers yet, so you can’t rely on that. But you don’t want to make people download a native app just to have a quick interaction. In theory, an app clip—or instant app—could do the job.
In that situation, app clips aren’t a danger to the web—they’re polyfills for hardware APIs that the web doesn’t yet support!
I love having my perspective shifted like that.
The specific situations that Alex and I were discussing were in the context of museums. Museums offer such interesting opportunities for the physical and the digital to intersect.
The other dConstruct talk that’s very relevant to this liminal space between the web and native apps is the 2012 talk from Scott Jenson. I always thought the physical web initiative had a lot of promise, but it may have been ahead of its time.
I loved the thinking behind the physical web beacons. They were deliberately dumb, much like the internet itself. All they did was broadcast a URL. That’s it. All the smarts were to be found at the URL itself. That meant a service could get smarter over time. It’s a lot easier to update a website than swap out a piece of hardware.
But any kind of technology that uses Bluetooth, NFC, or other wireless technology has to get over the discovery problem. They’re invisible technologies, so by default, people don’t know they’re even there. But if you make them too discoverable—intrusively announcing themselves like one of the commercials in Minority Report—then they’re indistinguishable from spam. There’s a sweet spot of discoverability right in the middle that’s hard to get right.
Over the past couple of years—accelerated by the physical distancing necessitated by The Situation—QR codes stepped up to the plate.
They still suffer from some discoverability issues. They’re not human-readable, so you can’t be entirely sure that the URL you’re going to go to isn’t going to be a Rick Astley video. But they are visible, which gives them an advantage over hidden wireless technologies.
They’re cheaper too. Printing a QR code sticker costs less than getting a plastic beacon shipped from China.
QR codes turned out to be just good enough to bridge the gap between the physical and digital for those one-off interactions like dining outdoors during a pandemic:
I can see why they chose the web over a native app. Online ordering is the only way to place your order at this place. Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.
Ironically, the nail in the coffin for app clips and instant apps might’ve been hammered in by Apple and Google when they built QR-code recognition into their camera software.
We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.
I’m very happy to lose this bet.
When I made the original prediction eleven years ago that a URL on the longbets.org site would no longer be available, I did so in a spirit of mischief—it was a deliberately meta move. But it was also informed by a genuine feeling of pessimism around the longevity of links on the web. While that pessimism was misplaced in this case, it was informed by data.
The lifetime of a URL on the web remains shockingly short. What I think has changed in the intervening years is that people may have become more accustomed to the situation. People used to say “once something is online it’s there forever!”, which infuriated me because the real problem is the exact opposite: if you put something online, you have to put in real effort to keep it online. After all, we don’t really buy domain names; we just rent them. And if you publish on somebody else’s domain, you’re at their mercy: Geocities, MySpace, Facebook, Medium, Twitter.
These days my view towards the longevity of online content has landed somewhere in the middle of the two dangers. There’s a kind of Murphy’s Law around data online: anything that you hope will stick around will probably disappear and anything that you hope will disappear will probably stick around.
One huge change in the last eleven years that I didn’t anticipate is the migration of websites to HTTPS. The original URL of the prediction used HTTP. I’m glad to see that original URL now redirects to a more secure protocol. Just like most of the World Wide Web. I think we can thank Let’s Encrypt for that. But I think we can also thank Edward Snowden. We are no longer as innocent as we were eleven years ago.
I think if I could tell my past self that most of the web would be using HTTPS by 2022, my past self would be very surprised …’though not as surprised at discovering that time travel had also apparently been invented.
The Internet Archive has also been a game-changer for digital preservation. While it’s less than ideal that something isn’t reachable at its original URL, knowing that there’s probably a copy of the content at archive.org lessens the sting considerably. I couldn’t be happier that this fine institution is the recipient of the stakes of this bet.
It’s a bit of a cliché to talk about living in the future. It’s also a bit pointless. After all, any moment after the big bang is a future when viewed from any point in time before it.
Still, it’s kind of fun when a sci-fi date rolls around. Like in 2015 when we reached the time depicted in Back To The Future 2, or in 2019 when we reached the time of Blade Runner.
In 2022 we are living in the future of web standards. Again, technically, we’re always living in the future of any past discussion of web standards, but this year is significant …in a very insignificant way.
It all goes back to 2008 and an interview with Hixie, editor of the HTML5 spec at the WHATWG at the time. In it, he mentioned the date 2022 as the milestone for having two completely interoperable implementations.
The far more important—and ambitious—date was 2012, when HTML5 was supposed to become a Candidate Recommendation, which is standards-speak for done’n’dusted.
But the mere mention of the year 2022 back in the year 2008 was too much for some people. Jeff Croft, for example, completely lost his shit (Jeff had a habit of posting angry rants and then denying that he was angry or ranty, but merely having a bit of fun).
God knows where I’ll be in 13 years. Quite frankly, I’ll be pretty fucking disappointed in myself (and our entire industry) if I’m writing HTML in 13 years.
That always struck me as odd. If I thought like that, I’d wonder what the point would be in making anything on the web to begin with (bear in mind that both my own personal website and The Session are now entering their third decade of life).
Many web developers were disgusted that such a seemingly far-off date was even being mentioned. My reaction was the opposite. I began to pay attention to HTML5.
If you’re thinking that planning how the web will look and work 13 years from now is a little bit ridiculous, you’re not alone.
Even if your 2022 ronc-o-matic web-enabled toaster (It slices! It dices! It browses! It arouses!) does ship with Firefox v22.3, will HTML still be the dominant language of web? Given that no one can really answer that question, does it make sense to propose a standard so far in the future?
(I’m re-reading that article in the current version of Firefox: 95.0.2.)
Two-thousand-twenty-two. That’s 14 years from now. Can any of us think that far? Wouldn’t our robot overlords, whether you welcome them or not, have taken over by then? Will the internet even matter then?
From the comments on Jeff’s post, there’s Corey Dutson:
2022: God knows what the Internet will look like at that point. Will we even have websites?
Dan Rubin, who has indeed successfully moved from web work to photography, wrote:
I certainly don’t intend to be doing “web work” by that time. I’m very curious to see where the web actually is in 14 years, though I can’t imagine that HTML5 will even get that far; it’ll all be obsolete before 2022.
Joshua Works made a prediction that’s worryingly close to reality:
I’ll be surprised if website-as-HTML is still the preferred method for moving around the tons of data we create, especially in the manner that could have been predicted in 2003 or even today. Hell, iPods will be over 20 years old by then and if everything’s not run as an iPhone App, then something went wrong.
In 2022 I’ll be 34, and hopefully the internet will be obsolete by then.
Perhaps the most level-headed observation came from Jonny Axelsson:
The world in 2022 will be pretty much like the world in 2009.
The world in 2009 is pretty much like 1996 which was pretty much like the world in 1983 which was pretty much like the world in 1970. Some changes are fairly sudden, others are slow, some are dramatic, others subtle, but as a whole “pretty much the same” covers it.
The Web in 2022 will not be dramatically different from the Web in 2009. It will be less hot and it will be less cool. The Web is a project, and as it succeeds it will fade out of our attention and into the background. We don’t care about things when they work.
Now that’s a sensible perspective!
So who else is looking forward to seeing what the World Wide Web is like in 2036?
I must remember to write a blog post then and link back to this one. I have no intention of trying to predict the future, but I’m willing to bet that hyperlinks will still be around in 14 years.
First of all, it’s me pointing at something and saying “Check this out!”
Secondly, it’s a way for me to stash something away that I might want to return to. I tag all my links so when I need to find one again, I just need to think “Now what would past me have tagged it with?” Then I type the appropriate URL: adactio.com/links/tags/whatever
There are some links that I return to again and again.
Back in 2008, I linked to a document called A Few Notes on The Culture. It’s a copy of a post by Iain M Banks to a newsgroup back in 1994.
But in 2013 I linked to the same document on a different domain. That link still works even though I believe it was first published around twenty(!) years ago (view source for some pre-CSS markup nostalgia).
Anyway, A Few Notes On The Culture is a fascinating look at the world-building of Iain M Banks’s Culture novels. He talks about the in-world engineering, education, biology, and belief system of his imagined utopia. The part that sticks in my mind is when he talks about economics:
Let me state here a personal conviction that appears, right now, to be profoundly unfashionable; which is that a planned economy can be more productive—and more morally desirable—than one left to market forces.
The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources. The market, for all its (profoundly inelegant) complexities, remains a crude and essentially blind system, and is — without the sort of drastic amendments liable to cripple the economic efficacy which is its greatest claimed asset — intrinsically incapable of distinguishing between simple non-use of matter resulting from processal superfluity and the acute, prolonged and wide-spread suffering of conscious beings.
It is, arguably, in the elevation of this profoundly mechanistic (and in that sense perversely innocent) system to a position above all other moral, philosophical and political values and considerations that humankind displays most convincingly both its present intellectual immaturity and — through grossly pursued selfishness rather than the applied hatred of others — a kind of synthetic evil.
Those three paragraphs might be the most succinct critique of unfettered capitalism I’ve come across. The invisible hand as a paperclip maximiser.
Like I said, it’s a fascinating document. In fact I realised that I should probably store a copy of it for myself.
I have a section of my site called “extras” where I dump miscellaneous stuff. Most of it is unlinked. It’s mostly for my own benefit. That’s where I’ve put my copy of A Few Notes On The Culture.
Here’s a funny thing …for all the times that I’ve revisited the link, I never knew anything about the site it was hosted on—vavatch.co.uk—so this most recent time, I did a bit of clicking around. Clearly it’s the personal website of a sci-fi-loving college student from the early 2000s. But what came as a revelation to me was that the site belonged to …Adrian Hon!
I’m impressed that he kept his old website up even after moving over to the domain mssv.net, founding Six To Start, and writing A History Of The Future In 100 Objects. That’s a great snackable book, by the way. Well worth a read.
My last long-distance trip before we were all grounded by The Situation was to San Francisco at the end of 2019. I attended Indie Web Camp while I was there, which gave me the opportunity to add a little something to my website: an “on this day” page.
I’m glad I did. While it’s probably of little interest to anyone else, I enjoy scrolling back to see how the same date unfolded over the years.
’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.
But when I scroll down through my “on this day” page, it also feels like descending deeper into the dark waters of linkrot. For each year back in time, the probability of a link still working decreases until there’s nothing but decay.
Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone.
In one sense, linkrot is the price we pay for the web’s particular system of hypertext. We don’t have two-way linking, which means there’s no centralised repository of links, which would be prohibitively complex to maintain. So when you want to link to something on the web, you just do it. An a element with an href attribute. That’s it. You don’t need to check with the owner of the resource you’re linking to. You don’t need to check with anyone. You have complete freedom to link to any URL you want to.
But it’s that same simple system that makes the act of linking a gamble. If the URL you’ve linked to goes away, you’ll have no way of knowing.
As I scroll down my “on this day” page, I come across more and more dead links that have been snapped off from the fabric of the web.
If I stop and think about it, it can get quite dispiriting. Why bother making hyperlinks at all? It’s only a matter of time until those links break.
And yet I still keep linking. I still keep pointing to things and saying “Check this out!” even though I know that over a long enough timescale, there’s little chance that the link will hold.
In a sense, every hyperlink on the World Wide Web is a little act of hope. Even though I know that when I link to something, it probably won’t last, I still harbour that hope.
If hyperlinks are built on hope, and the web is made of hyperlinks, then in a way, the World Wide Web is quite literally made out of hope.
The Long Now Foundation is dedicated to long-term thinking. I’ve been a member for quite a few years now …which, in the grand scheme of things, is not very long at all.
One of their projects is Long Bets. It sets out to tackle the problem that “there’s no tax on bullshit.” Here’s how it works: you make a prediction about something that will (or won’t) happen by a particular date. So far, so typical thought leadery. But then someone else can challenge your prediction. And here’s the crucial bit: you’ve both got to place your monies where your mouths are.
Ten years ago, I made a prediction on the Long Bets website. It’s kind of meta:
The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.
One year later I was on stage in Wellington, New Zealand, giving a talk called Of Time And The Network. I mentioned my prediction in the talk and said:
If anybody would like to take me up on that bet, you can put your money down.
Matt was also speaking at Webstock. When he gave his talk, he officially accepted my challenge.
So now it’s a bet. We both put $500 into the pot. If I win, the Bletchley Park Trust gets that money. If Matt wins, the money goes to The Internet Archive.
That was ten years ago today. There’s just one more year to go until the pleasingly alliterative date of 2022-02-22 …or as the Long Now Foundation would write it, 02022-02-22 (gotta avoid that Y10K bug).
It is looking more and more likely that I will lose this bet. This pleases me.
I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.
The sentiment I expressed resonated with a lot of people. Like, a lot of people.
I was talking specifically about web development and technology choices, but I think the broader point applies to other disciplines too.
There were the digital-first companies like Spotify, Deliveroo, and Bulb—companies forged in the fires of start-up culture. Then there were the older companies that had to make the move to digital (transform, if you will). I decided to get a show of hands from the audience to see which kind of company most people were from. The overwhelming majority of attendees were from more old-school companies.
Just as most of the ink spilled in the web development world goes towards the newest frameworks and toolchains, I feel like the majority of coverage in the design world is spent on the latest outputs from digital-first companies like AirBnB, Uber, Slack, etc.
The end result is the same. A typical developer or designer is left feeling that they—and their company—are behind the curve. It’s like they’re only seeing the Instagram version of their industry, all airbrushed and filtered, and they’re comparing that to their day-to-day work. That can’t be healthy.
Personally, I’d love to hear stories from the trenches of more representative, traditional companies. I also think that would help get an important message to people working in similar companies:
If you don’t like reading in a web browser, you might like to know that Resilient Web Design is now available in more formats.
Jiminy Panoz created a lovely EPUB version. I tried it out in Apple’s iBooks app and it looks great. I tried to submit it to the iBooks store too, but that process threw up a few too many roadblocks.
Oliver Williams has created a MOBI version. That means you can read it on a Kindle. I plugged my old Kindle into my computer, dragged that file onto its disc image, and it worked a treat.
Oh, and there’s the podcast. I’ve only released two chapters so far. The Christmas break and an untimely cold have slowed down the release schedule a little bit.
I’d love to make a physical, print-on-demand version of Resilient Web Design available—maybe through Lulu—but my InDesign skills are non-existent.
If you think the book should be available in any other formats, and you fancy having a crack at it, please feel free to use the source files.
They’re all hosted on the same (virtual) box as adactio.com—Ubuntu 14.04 running Apache 2.4.7 on Digital Ocean. If you’ve got a similar configuration, this might be useful for you.
First off, I’m using Let’s Encrypt. Except I’m not. It’s called Certbot now (I’m not entirely sure why).
I installed the Let’s Encertbot client with this incantation (which, like everything else here, will need root-level access so if none of these work, retry using sudo in front of the commands):
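# something along these lines: fetch the certbot-auto wrapper script
# from the EFF and make it executable
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto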
Seems like a good idea to put that certbot-auto thingy into a directory like /etc:
mv certbot-auto /etc
Rather than have Certbot generate conf files for me, I’m just going to have it generate the certificates. Here’s how I’d generate a certificate for yourdomain.com:
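# something like this: the webroot plugin drops its temporary challenge
# files into your site's directory, so point -w at wherever your files live
/etc/certbot-auto certonly --webroot -w /path/to/yourdomain.com -d yourdomain.com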
The first time you do this, it’ll need to fetch a bunch of dependencies and it’ll ask you for an email address for future reference (should anything ever go screwy). For subsequent domains, the process will be much quicker.
The result of this will be a bunch of generated certificates that live here:
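/etc/letsencrypt/live/yourdomain.com/cert.pem
/etc/letsencrypt/live/yourdomain.com/chain.pem
/etc/letsencrypt/live/yourdomain.com/fullchain.pem
/etc/letsencrypt/live/yourdomain.com/privkey.pem

Next, Apache needs to know about those certificates. Open up the configuration file for your domain in nano (that’s /etc/apache2/sites-available/yourdomain.com.conf) and add a virtual host for port 443. Something along these lines should do it (a sketch rather than gospel; note that Apache 2.4.7 wants the chain file listed separately):

<VirtualHost *:443>
    ServerName yourdomain.com
    DocumentRoot /path/to/yourdomain.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.com/chain.pem
</VirtualHost>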
Make sure you update the /path/to/yourdomain.com part—you probably want a directory somewhere in /var/www or wherever your website’s files are sitting.
To exit the infernal text editor, hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.
If the yourdomain.com.conf didn’t previously exist, you’ll need to enable the configuration by running:
a2ensite yourdomain.com
Time to restart Apache. Fingers crossed…
service apache2 restart
If that worked, you should be able to go to https://yourdomain.com and see a lovely shiny padlock in the address bar.
Assuming that worked, everything is awesome! …for 90 days. After that, your certificates will expire and you’ll be left with a broken website.
Not to worry. You can update your certificates at any time. Test for yourself by doing a dry run:
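# simulate a renewal without actually replacing any certificates
/etc/certbot-auto renew --dry-run

If all goes well, you should get a response along these lines: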
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded.
You could set yourself a calendar reminder to do the renewal (without the --dry-run bit) every few months. Or you could tell your server’s computer to do it by using a cron job. It’s not nearly as rude as it sounds.
You can fire up and edit your list of cron tasks with this command:
crontab -e
This tells the machine to run the renewal task at quarter past six every evening and log any results:
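# at 18:15 every day, attempt a renewal and append the results to a log
# (the log location here is just a suggestion, put it wherever you like)
15 18 * * * /etc/certbot-auto renew >> /var/log/certbot-renewal.log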
(Don’t worry: it won’t actually generate new certificates unless the current ones are getting close to expiration.) Leave the crontab editor by doing the ctrl o, enter, ctrl x dance.
Hopefully, there’s nothing more for you to do. I say “hopefully” because I won’t know for sure myself for another 90 days, at which point I’ll find out whether anything’s on fire.
If you have other domains you want to secure, repeat the process by running:
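# same again, swapping in each additional domain and its web root
/etc/certbot-auto certonly --webroot -w /path/to/yourotherdomain.com -d yourotherdomain.com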
You know, I probably should have said this at the start of this post, but I should clarify that any advice I’ve given here should be taken with a huge pinch of salt—I have little to no idea what I’m doing. I’m not responsible for any flame-bursting-into that may occur. It’s probably a good idea to back everything up before even starting to do this.
Yeah, I definitely should’ve mentioned that at the start.
I know I say this every year, but this month—and this week in particular—is a truly wonderful time to be in Brighton. I am, of course, talking about The Brighton Digital Festival.
It’s already underway. Reasons To Be Creative just wrapped up. I managed to make it over to a few talks—Stacey Mulcahey, Jon, Evan Roth. The activities for the Codebar Code and Chips scavenger hunt are also underway. Tuesday evening’s event was a lot of fun; at the end of the night, everyone wanted to keep on coding.
I popped along to the opening of Georgina’s Familiars exhibition. It’s really good. There’s an accompanying event on Saturday evening called Unfamiliar Matter which looks like it’ll be great. That’s the same night as the Miniclick party though.
I guess clashing events are unavoidable. Like tonight. As well as the Guardians Of The Galaxy screening hosted by Chris (that I’ll be going to), there’s an Async special dedicated to building a 3D Lunar Lander.
But of course the big event is dConstruct tomorrow. I’m really excited about it. Partly that’s because I’m not the one organising it—it’s all down to Andy and Kate—but also because the theme and the line-up are right up my alley.
Andy has asked me to compere the event. I feel a little weird about that seeing as it’s his baby, but I’m also honoured. And, you know, after talking to most of the speakers for the podcast—which I enjoyed immensely—I feel like I can give an informed introduction for each talk.
I just can’t get excited about the prospect of building something for any particular operating system, be it desktop or mobile. I think about the potential lifespan of what would be built and end up asking myself “why bother?” If something isn’t on the web—and of the web—I find it hard to get excited about it. I’m somewhat jealous of people who can get equally excited about the web, native, hardware, print …in my mind, if it hasn’t got a URL, it’s missing some vital spark.
I know that this is a problem, but I can’t help it. At the very least, I have enough presence of mind to recognise it as being my problem.
Given these unreasonable feelings of attachment towards the web, you might expect me to wish it to become the one technology to rule them all. But I’ve never felt that any such victory condition would make sense. If anything, I’ve always been grateful for alternative avenues of experimentation and expression.
When Flash was a thriving ecosystem for artists to push the boundaries of what was possible to deliver to a web browser, I never felt threatened by it. I never wished for web technologies to emulate those creations. Don’t get me wrong: I’m happy that we’ve got nice smooth animations in CSS, but I never thought the lack of animation was crippling the web’s potential.
Now we have native technologies that can do more than the web can do. iOS and Android apps can access device APIs that web browsers can’t (yet). And, once again, while I look forward to the day that websites will be able to do all the things that native apps can do today, I don’t think that the lack of those capabilities is dooming the web to irrelevance.
There will always be some alternative that is technologically more advanced than the web. First there were CD-ROMs. Then we had Flash. Now we have native apps. Each one of those platforms offered more power and functionality than you could get from a web browser. And yet the web persists. That’s because none of the individual creations made with those technologies could compete with the collective power of all of the web, hyperlinked together. A single native app will “beat” a single website every time …but an app store pales when compared to the incredible reach and scope of the entire World Wide Web.
The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.
The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.
Like I said, I’m okay with that. I’m okay with the web not being as advanced as some other technology stack at any particular moment. I can wait.
In fact, as PPK points out, we could do real damage to the web by attempting to make it mimic some platform that’s currently in the ascendent. I disagree with his framing of it as a battle—rather than conceding defeat, I see it more like waiting out a siege—but I agree completely with this assessment:
The web cannot emulate native perfectly, and it never will.
If we accept that, then we can play to the web’s strengths (while at the same time, playing a slow game of catch-up behind the scenes). The danger comes when we try to emulate the capabilities of something that isn’t the web:
Emulating native leads to bad UX (or, at least, a UX that’s clearly a sub-optimal copy of native UX).
Whenever a website tries to emulate something from an operating system—be it desktop or mobile—the result is invariably something that gets really, really close …but falls just a little bit short. It feels like entering an uncanny valley of interaction design.
I think you make what I call “bicycle bear websites.” Why? Because my response to both is the same.
“Listen bub,” I say, “it is very impressive that you can teach a bear to ride a bicycle, and it is fascinating and novel. But perhaps it’s cruel? Because that’s not what bears are supposed to do. And look, pal, that bear will never actually be good at riding a bicycle.”
This is how I feel about so many of the fancy websites I see. “It is fascinating that you can do that, but it’s really not what a website is supposed to do.”
It’s time to recognise that this is the wrong approach. We shouldn’t try to compete with native apps in terms set by the native apps. Instead, we should concentrate on the unique web selling points: its reach, which, more or less by definition, encompasses all native platforms, URLs, which are fantastically useful and don’t work in a native environment, and its hassle-free quality.
This is something that Cennydd talked about recently on an episode of the Design Details podcast. The web, he argues, is great for the sharing of information, but not so great for applications.
I think PPK, Cennydd, and I are all in broad agreement, but we almost certainly differ in the details. PPK, for example, argues that maybe news sites should be native apps instead, but for me, those are exactly the kind of sites that benefit from belonging to no particular platform. And when Cennydd talks about applications on the web, it raises the whole issue of what constitutes a web app anyway. If we’re talking about having access to device APIs—cameras, microphones, accelerometers—then yes, native is the way to go. But if we’re talking about interface elements and motion design, then I think the web can hold its own …sometimes.
Of course not every web browser can match the capabilities of a native app—that’s why it’s so important to approach web development through the lens of progressive enhancement rather than treating it as software development no different from that of native platforms. The web is not a platform—that’s the whole point of the web; it’s cross-platform. As Baldur put it:
Treating the web like another app platform makes sense if app platforms are all you’re used to. But doing so means losing the reach, adaptability, and flexibility that makes the web peerless in both the modern media and software industries.
The price we pay for that incredible cross-platform reach is that features on the web will always be lagging behind, and even when they do arrive, they won’t be available in all web browsers.
To paraphrase William Gibson: capabilities on the web will always be here, but they will never be evenly distributed.
But let’s take a step back from the surface-level differences between web and native. Just as happened with CD-ROMs and Flash, the web is catching up with native when it comes to motion design, visual feedback, and gestures like swiping and dragging. I don’t think those are where the fundamental differences lie. I don’t even think the fundamental differences lie in accessing device APIs like cameras, microphones, and offline storage—the web is (slowly) catching up in those areas too.
What if the fundamental differences lie deeper than the technical implementation? What if the web is suited to some things more than others, not because of technical limitations, but because of philosophical mismatches?
The web was born at CERN, an amazing environment that’s free of many of the economic and hierarchical pressures that shape technology decisions elsewhere. The web’s heritage as a hypertext document sharing system for pure scientific research is often treated as a handicap, something that must be overcome in this age of applications and monetisation. But I see this heritage as a feature, not a bug. It promotes ideals of universal access above individual convenience, creation above consumption, and sharing above financial gain.
For web development to grow as a craft and as an industry, we have to follow the money. Without money the craft becomes a hobby and unmaintained software begins to rot.
But I think there’s a danger here. If we allow the web to be led by money-making, we may end up changing the fundamental nature of the web, and not for the better.
Now, personally, I believe that it’s entirely possible to run a profitable business on the web. There are plenty of them out there. But suppose we allow that other avenues are more profitable. Let’s assume that there’s more friction in making money on the web than there is in, say, making money on iOS (or Android, or Facebook, or some other monolithic stack). If that were the case …would that be so bad?
Suppose, to use PPK’s phrase, we “concede defeat” to Apple, Google, Microsoft, and Facebook. When you think about it, it makes sense that platforms born of profit-driven companies are going to be better at generating profit than something created by a bunch of idealistic scientists trying to improve the knowledge of the human race. Suppose we acknowledged that the web isn’t that well-suited to capitalism.
I think I’d be okay with that.
Would the web become little more than a hobbyist’s playground? A place for amateurs rather than professional businesses?
Maybe.
I’d be okay with that too.
Y’see, what attracted me to the web—to the point where I have this blind spot—wasn’t the opportunity to make money. What attracted me to the web was its remarkable ability to allow anyone to share anything, not just for the here and now, but for the future too.
If you’ve been reading my journal or following my links for any time, you’ll be aware that two of my biggest interests are progressive enhancement and digital preservation. In my mind, these two things are closely intertwingled.
For me, progressive enhancement is a means of practising universal design, a way of providing access to as many people as possible. That includes access across time, hence the crossover with digital preservation. I’ve noticed again and again that what’s good for accessibility is also good for longevity, and vice versa.
Whenever the ephemerality of the web is mentioned, two opposing responses tend to surface. Some people see the web as a conversational medium, and consider ephemerality to be a virtue. And some people see the web as a publication medium, and want to build a “permanent web” where nothing can ever disappear.
I don’t want a web where “nothing can ever disappear” but I also don’t want the default lifespan of a resource on the web to be ephemeral. I think that whoever published that resource should get to decide how long or short its lifespan is. The problem, as Maciej points out, is in the mismatch of expectations:
I’ve come to believe that a lot of what’s wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.
I completely agree with Bret’s woeful assessment of the web when it comes to link rot:
It is this common record of public thought — the “great conversation” — whose stability and persistence is crucial, both for us alive today and for those who will come after.
I believe we can and should do better. But I completely and utterly disagree with him when he says:
Photos from your friend’s party are not part of the common record.
Nor are most casual conversations. Nor are search histories, commercial transactions, “friend networks”, or most things that might be labeled “personal data”. These are not deliberate publications like a bound book; they are not intended to be lasting contributions to the public discourse.
We can agree when it comes to search histories and commercial transactions, but it makes no sense to lump those in with the ordinary plenty that I’ve written about before:
My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.
For me, this lies at the heart of what the web does. The web removes the need for tastemakers who get to decide what gets published. The web removes the need for gatekeepers who get to decide what gets saved.
Other avenues of expressions will always be more powerful than the web in the short term: CD-ROMs, Flash, and now native. But they all come with gatekeepers. The collective output of the human race—from the most important scholarly papers to the most trivial blog post—is too important to put in the hands of the gatekeepers of today who may not even be around tomorrow: Apple, Google, Microsoft, et al.
The web has no gatekeepers. The web has no quality control. The web is a mess. The web is for everyone.
In an article entitled The future of loneliness, Olivia Laing writes about the promises and disappointments provided by the internet as a means of sharing and communicating. This isn’t particularly new ground and she readily acknowledges the work of Sherry Turkle in this area. The article is the vanguard of a forthcoming book called The Lonely City. I’m hopeful that the book won’t be just another baseless Luddite reactionary moral panic as exemplified by the likes of Andrew Keen and Susan Greenfield.
But there’s one section of the article where Laing stops providing any data (or even anecdotal evidence) and presents a supposition as though it were unquestionably fact:
With this has come the slowly dawning realisation that our digital traces will long outlive us.
Citation needed.
I recently wrote a short list of three things that are not true, but are constantly presented as if they were beyond question:
Personal publishing is dead.
JavaScript is ubiquitous.
Privacy is dead.
But I didn’t include the most pernicious and widespread lie of all:
The internet never forgets.
This truism is so pervasive that it can be presented as a fait accompli, without any data to back it up. If you were to seek out the data to back up the claim, you would find that the opposite is true—the internet is in a constant state of forgetting.
Laing writes:
Faced with the knowledge that nothing we say, no matter how trivial or silly, will ever be completely erased, we find it hard to take the risks that togetherness entails.
You will be able to view your posts, messages, and photos until April 9th. On April 9th, we’ll be shutting down FriendFeed and it will no longer be available.
What if I shared on Posterous? Or Vox (back when that domain name was a social network hosting 6 million URLs)? What about Pownce? Geocities?
These aren’t the exceptions—this is routine. And yet somehow, despite all the evidence to the contrary, we still keep a completely straight face and say “Be careful what you post online; it’ll be there forever!”
The problem here is a mismatch of expectations. We expect everything that we post online, no matter how trivial or silly, to remain forever. When instead it is callously destroyed, our expectation—which was fed by the “knowledge” that the internet never forgets—is turned upside down. That’s where the anger comes from; the mismatch between expected behaviour and the reality of this digital dark age.
Being frightened of an internet that never forgets is like being frightened of zombies or vampires. These things do indeed sound frightening, and there’s something within us that readily responds to them, but they bear no resemblance to reality.
If you want to imagine a truly frightening scenario, imagine an entire world in which people entrust their thoughts, their work, and pictures of their family to online services in the mistaken belief that the internet never forgets. Imagine the devastation when all of those trivial, silly, precious moments are wiped out. For some reason we have a hard time imagining that dystopia even though it has already played out time and time again.
I am far more frightened by an internet that never remembers than I am by an internet that never forgets.
And worst of all, by propagating the myth that the internet never forgets, we are encouraging people to focus in exactly the wrong area. Nobody worries about preserving what they put online. Why should they? They’re constantly being told that it will be there forever. The result is that their history is taken from them:
If we lose the past, we will live in an Orwellian world of the perpetual present, where anybody that controls what’s currently being put out there will be able to say what is true and what is not. This is a dreadful world. We don’t want to live in this world.
In order to write a history, you need evidence of what happened. When we talk about preserving the stuff we make on the web, it isn’t because we think a Facebook status update, or those GeoCities sites have such significance now. It’s because we can’t know.
When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history.
He warns of the dangers of rapidly-obsoleting file formats:
We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised.
It was a little weird that the Guardian headline refers to Vint Cerf as “Google boss”. On the BBC he’s labelled as “Google’s Vint Cerf”. Considering he’s one of the creators of the internet itself, it’s a bit like referring to Neil Armstrong as a NASA employee.
I have to say, I just love listening to him talk. He’s so smooth. I’m sure that the character of The Architect from The Matrix Reloaded is modelled on him.
Vint Cerf knows a thing or two about long-term thinking when it comes to data formats. He has written many RFCs for the IETF (my favourite being RFC 2468). Back in 1969, he wrote RFC 20, proposing the ASCII format for network interchange. If you’ve ever used the keypress event in JavaScript and wondered why, for example, the number 13 corresponds to a carriage return, this is where all those numbers come from.
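To see that in action, here’s a quick sketch (nothing more than an illustration):

document.addEventListener('keypress', function (event) {
  // 13 is the ASCII code for a carriage return, as per RFC 20
  if (event.which === 13) {
    console.log('That was the enter key.');
  }
});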
I love the idea of owning your content and then syndicating it out to social networks, photo sites, and the like. It makes complete sense… Web-based services have a habit of disappearing, so we shouldn’t rely on them. The only Web that is permanent is the one we control.
But he quite rightly points out that we never truly own our own domains: we rent them. And when it comes to our servers, most of us are renting those too.
It looks like print is a safer bet for long-term storage. Although when someone pointed out that print isn’t any guarantee of perpetuity either, Aaron responded:
Sure, print pieces can be destroyed, but important works can be preserved in places like the Beinecke
Ah, but there’s the crux—that adjective, “important”. Print’s asset—the fact that it is made of atoms, not bits—is also its weak point: there are only so many atoms to go around. And so we pick and choose what we save. Inevitably, we choose to save the works that we deem to be important.
The problem is that we can’t know today what the future value of a work will be. A future president of the United States is probably updating their Facebook page right now. The first person to set foot on Mars might be posting a picture to her Instagram feed at this very moment.
One of the reasons that I love the Internet Archive is that they don’t try to prioritise what to save—they save it all. That’s in stark contrast to many national archival schemes that only attempt to save websites from their own specific country. And because the Internet Archive isn’t a profit-driven enterprise, it doesn’t face the business realities that caused Google to back-pedal from its original mission. Or, as Andy Baio put it, never trust a corporation to do a library’s job.
But even the Internet Archive, wonderful as it is, suffers from the same issue that Aaron brought up with the domain name system—it’s centralised. As long as there is just one Internet Archive organisation, all of our preservation eggs are in one magnificent basket:
Should we be concerned that the technical expertise and infrastructure for doing this work is becoming consolidated in a single organization?
Which brings us back to Aaron’s original question. Perhaps it’s less about “What do we own?” and more about “What are we responsible for?” If we each take responsibility for our own words, our own photos, our own hopes, our own dreams, we might not be able to guarantee that they’ll survive forever, but we can still try everything in our power to keep them online. Maybe by acknowledging that responsibility to preserve our own works, instead of looking for some third party to do it for us, we’re taking the most important first step.
My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.
They’re the raw stuff of communication. Same for tweets, and Facebook posts, and the whole bit. And this is where some cynic usually says, “Pah! This is about preserving all that rubbish on Facebook? All that garbage on Twitter? All those pictures of cats?” This is the emblem of people who want to dismiss all the stuff that happens on the internet.
And I’m supposed to turn around and say “No, no, there’s noble things on the internet too. There’s people talking about surviving abuse, and people reporting police violence, and so on.” And all that stuff is important but I’m going to speak for the banal and the trivial here for a moment.
Because when my wife comes down in the morning—and I get up first; I get up at 5am; I’m an early riser—when my wife comes down in the morning and I ask her how she slept, it’s not because I want to know how she slept. I sleep next to my wife. I know how my wife slept. The reason I ask how my wife slept is because it is a social signal that says:
I see you. I care about you. I love you. I’m here.
And when someone says something big and meaningful like “I’ve got cancer” or “I won” or “I lost my job”, the reason those momentous moments have meaning is because they’ve been built up out of this humus of a million seemingly-insignificant transactions. And if someone else’s insignificant transactions seem banal to you, it’s because you’re not the audience for that transaction.
The medieval scribes of Ireland, out on the furthermost edges of Europe, worked to preserve the “important” works. But occasionally they would also note down their own marginalia like:
Pleasant is the glint of the sun today upon these margins, because it flickers so.
Short observations of life in fewer than 140 characters. Like this lovely example written in ogham, a Morse-like system of encoding the western alphabet in lines and scratches. It reads simply “latheirt”, which translates to something along the lines of “massive hangover.”
I’m glad that those “unimportant” words have also been preserved.
Centuries later, the Irish poet Patrick Kavanagh would write about the desire to “wallow in the habitual, the banal”:
But the best is yet to come. Tomorrow’s the big day: dConstruct 2014. I’ve been preparing for this day for so long now, it’s going to be very weird when it’s over. I must remember to sit back, relax and enjoy the day. I remember how fast the day whizzed by last year. I suspect that tomorrow’s proceedings might display equal levels of time dilation—I’m excited to see every single talk.
Even when dConstruct is done, the Brighton festivities will continue. I’ll be at Indie Web Camp here at 68 Middle Street on Saturday and Sunday. Also on Saturday, there’s the brilliant Maker Faire, and when the sun goes down, Brighton will be treated to Seb’s latest project which features frickin’ lasers!
We all grieve in different ways. We all find solace and comfort in different places.
There can be solace in walking. There can be comfort in music. Tears. Rage. Sadness. Whatever it takes.
Personally, I have found comfort in reading what others have written about Chloe …but I know Chloe would be really embarrassed. She never liked getting attention.
Chloe must have known that people would want to commemorate her in some way. She didn’t want a big ceremony. She didn’t want any fuss. She left specific instructions (her suicide was not a spur-of-the-moment decision).
If you would like to mourn the death—and celebrate the life—of Chloe Weil, she asked that you contribute to one or both of these institutions: