Journal tags: intern


One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

02022-02-22

Eleven years ago, I made a prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

One year later, Matt called me on it and the prediction officially became a bet:

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

I’m very happy to lose this bet.

When I made the original prediction eleven years ago that a URL on the longbets.org site would no longer be available, I did so in a spirit of mischief—it was a deliberately meta move. But it was also informed by a genuine feeling of pessimism around the longevity of links on the web. While that pessimism was misplaced in this case, it was informed by data.

The lifetime of a URL on the web remains shockingly short. What I think has changed in the intervening years is that people may have become more accustomed to the situation. People used to say “once something is online it’s there forever!”, which infuriated me because the real problem is the exact opposite: if you put something online, you have to put in real effort to keep it online. After all, we don’t really buy domain names; we just rent them. And if you publish on somebody else’s domain, you’re at their mercy: Geocities, MySpace, Facebook, Medium, Twitter.

These days my view towards the longevity of online content has landed somewhere in the middle of the two dangers. There’s a kind of Murphy’s Law around data online: anything that you hope will stick around will probably disappear and anything that you hope will disappear will probably stick around.

One huge change in the last eleven years that I didn’t anticipate is the migration of websites to HTTPS. The original URL of the prediction used HTTP. I’m glad to see that original URL now redirects to a more secure protocol. Just like most of the World Wide Web. I think we can thank Let’s Encrypt for that. But I think we can also thank Edward Snowden. We are no longer as innocent as we were eleven years ago.

I think if I could tell my past self that most of the web would be using HTTPS by 2022, my past self would be very surprised …though not as surprised as they would be at discovering that time travel had also apparently been invented.

The Internet Archive has also been a game-changer for digital preservation. While it’s less than ideal that something isn’t reachable at its original URL, knowing that there’s probably a copy of the content at archive.org lessens the sting considerably. I couldn’t be happier that this fine institution is the recipient of the stakes of this bet.

SafarIE

I was moaning about Safari recently. Specifically I was moaning about the ridiculous way that browser updates are tied to operating system updates.

But I felt bad bashing Safari. It felt like a pile-on. That’s because a lot of people have been venting their frustrations with Safari recently.

I think it’s good that people share their frustrations with browsers openly, although I agree with Baldur Bjarnason that it’s good to avoid Kremlinology and the motivational fallacy when blogging about Apple.

It’s also not helpful to make claims like “Safari is the new Internet Explorer!” Unless, that is, you can back up the claim.

On a recent episode of the HTTP 203 podcast, Jake and Surma set out to test the claim that Safari is the new IE. They did it by examining Safari according to a number of different measurements and comparing it to the olden days of Internet Explorer. The result is a really fascinating trip down memory lane along with a very nuanced and even-handed critique of Safari.

And the verdict? Well, you’ll just have to listen to the podcast episode.

If you’d rather read the transcript, tough luck. That’s a real shame because, like I said, it’s an excellent and measured assessment. I’d love to add it to the links section of my site, but I can’t do that in good conscience while it’s inaccessible to the Deaf community.

When I started the Clearleft podcast, it was a no-brainer to have transcripts of every episode. Not only does it make the content more widely available, but it also makes it easier for people to copy and paste choice quotes.

Still, I get it. A small plucky little operation like Google isn’t going to have the deep pockets of a massive corporation like Clearleft. But if Jake and Surma were to open up a tip jar, I’d throw some money in to get HTTP 203 transcribed (I recommend getting Tina Pham to do it—she’s great!).

I apologise for my note of sarcasm there. But I share because I care. It really is an excellent discussion; one that everyone should be able to access.

Update: the bug with that episode of the HTTP 203 podcast has been fixed. Here’s the transcript! And all future episodes will have transcripts too.

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a GitHub thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronos waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  });
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

// 5. try looking in the cache instead, but if that fails, show the offline page
.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  // 4. return the contents.
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)
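
To see how the two combine, here’s the whole fetch handler assembled from the pieces above—steps one to five in a single block (assuming cacheName is defined elsewhere in the service worker script):

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(
    // 2. fetch that file from the network
    fetch(request)
    .then( responseFromFetch => {
      // 3. create a copy of the file and cache it
      let copy = responseFromFetch.clone();
      try {
        fetchEvent.waitUntil(
          caches.open(cacheName)
          .then( cache => {
            cache.put(request, copy);
          })
        );
      } catch (error) {
        console.log(error);
      }
      // 4. return the contents.
      return responseFromFetch;
    })
    // 5. try looking in the cache instead, but if that fails, show the offline page
    .catch( fetchError => {
      console.log(fetchError);
      return caches.match(request)
      .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
      });
    })
  );
});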

At some point, when support for asynchronous waitUntil is universal, this precautionary measure won’t be needed, but for now wrapping those calls inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

Origin story

In an excellent piece called The First Web Apps: 5 Apps That Shaped the Internet as We Know It, Matthew Guay wrote:

The world wide web wasn’t supposed to be this fun. Berners-Lee imagined the internet as a place to collaborate around text, somewhere to share research data and thesis papers.

In his somewhat confused talk at FFConf this year, James Kyle said:

The web was designed to share documents.

Douglas Crockford said

The web was not designed to do any of things it is doing. It was intended to be a simple—even primitive—document retrieval system.

Some rando on Hacker News declared:

Essentially every single aspect of the web is terrible. It was designed as a static document presentation system with hyperlinks.

It appears to be a universally accepted truth. The web was designed for sharing documents, and was never meant for the kind of applications we can build these days.

I don’t think that’s quite right. I think it’s fairer to say that the first use case for the web was document retrieval. And yes, that initial use case certainly influenced the first iteration of HTML. But right from the start, the vision for the web wasn’t constrained by what it was being asked to do at the time. (I mean, if you need an example of vision, Tim Berners-Lee called it the World Wide Web when it was just on one computer!)

The original people working on the web—Tim Berners-Lee, Robert Cailliau, Jean-Francois Groff, etc.—didn’t try to define the edges of what the web would be capable of. Quite the opposite. All of them really wanted a more interactive read-write web where documents could not only be read, but also edited and updated.

As for the idea of having a programming language in browsers (as well as a markup language), Tim Berners-Lee was all for it …as long as it could be truly ubiquitous.

To say that the web was made for sharing documents is like saying that the internet was made for email. It’s true in the sense that it was the most popular use case, but that never defined the limits of the system.

The secret sauce of the internet lies in its flexibility—it’s a deliberately dumb network that doesn’t care about the specifics of what runs on it. This lesson was then passed on to the web—another deliberately simple system designed to be agnostic to use cases.

It’s true that the web of today is very, very different to its initial incarnation. We got CSS; we got JavaScript; HTML has evolved; HTTP has evolved; URLs have …well, cool URIs don’t change, but you get the idea. The web is like the ship of Theseus—so much of it has been changed and added to over time. That doesn’t mean its initial design was flawed—just the opposite. It means that its initial design wasn’t unnecessarily rigid. The simplicity of the early web wasn’t a bug, it was a feature.

The web (like the internet upon which it runs) was designed to be flexible, and to adjust to future use-cases that couldn’t be predicted in advance. The best proof of this flexibility is the fact that we can and do now build rich interactive applications on the World Wide Web. If the web had truly been designed only for documents, that wouldn’t be possible.

Installing Progressive Web Apps

When I was testing the dConstruct Audio Archive—which is now a Progressive Web App—I noticed some interesting changes in how Chrome on Android offers the “add to home screen” prompt.

It used to literally say “add to home screen.”

The “add to home screen” prompt for https://huffduffer.com/ and https://html5forwebdesigners.com/ in Chrome on Android. HTTPS + manifest.json + service worker = “add to home screen” prompt.
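
That formula—HTTPS plus a web app manifest plus a service worker—is all it takes. In case you haven’t poked at a manifest before, it’s just a small JSON file linked from your pages; a minimal sketch looks something like this (all the values here are made up for illustration):

{
  "name": "Example Audio Archive",
  "short_name": "Archive",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#bada55",
  "icons": [
    {
      "src": "/icon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}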

Now it simply says “add.”

The dConstruct Audio Archive is now a Progressive Web App

I vaguely remember there being some talk of changing the labelling, but I could’ve sworn it was going to change to “install”. I’ve got to be honest, just having the word “add” doesn’t seem to provide much context. Based on the quick’n’dirty usability testing I did with some co-workers, it just made things confusing. “Add what?” “What am I adding?”

Additionally, the prompt appeared immediately on the first visit to the site. I thought there was supposed to be an added “engagement” metric in order for the prompt to appear; that the user needs to visit the site more than once.

You’d think I’d be happy that users will be presented with the home-screen prompt immediately, but based on the behaviour I saw, I’m not sure it’s a good thing. Here’s what I observed:

  1. The user types the URL archive.dconstruct.org into the address bar.
  2. The site loads.
  3. The home-screen prompt slides up from the bottom of the screen.
  4. The user immediately moves to dismiss the prompt (cue me interjecting “Don’t close that!”).

This behaviour is entirely unsurprising for three reasons:

  1. We web designers and web developers have trained users to dismiss overlays and pop-ups if they actually want to get to the content. Nobody’s going to bother to actually read the prompt if there’s a 99% chance it’s going to say “Sign up to our newsletter!” or “Take our survey!”.
  2. The prompt appears below the “line of death” so there’s no way to tell it’s a browser or OS-level dialogue rather than a JavaScript-driven pop-up from the site.
  3. Because the prompt now appears on the first visit, no trust has been established between the user and the site. If the prompt only appeared on later visits (or later navigations during the first visit) perhaps it would stand a greater chance of survival.

It’s still possible to add a Progressive Web App to the home screen, but the option to do that is hidden behind the mysterious three-dots-vertically-stacked icon (I propose we call this the shish kebab icon to distinguish it from the equally impenetrable hamburger icon).

I was chatting with Andreas from Mozilla at the View Source conference last week, and he was filling me in on how Firefox on Android does the add-to-homescreen flow. Instead of a one-time prompt, they’ve added a persistent icon above the “line of death” (the icon is a combination of a house and a plus symbol).

When a Firefox 58 user arrives on a website that is served over HTTPS and has a valid manifest, a subtle badge will appear in the address bar: when tapped, an “Add to Home screen” confirmation dialog will slide in, through which the web app can be added to the Android home screen.

This kind of badging also has issues (without the explicit text “add to home screen”, the user doesn’t know what the icon does), but I think a more persistently visible option like this works better than a one-time prompt.

Firefox is following the lead of the badging approach pioneered by the Samsung Internet browser. It provides a plus symbol that, when pressed, reveals the options to add to home screen or simply bookmark.

What does it mean to be an App?

I don’t think Chrome for Android has any plans for this kind of badging, but they are working on letting the site authors provide their own prompts. I’m not sure this is such a good idea, given our history of abusing pop-ups and overlays.

Sadly, I feel that any solution that relies on an unrequested overlay is doomed. That’s on us. The way we’ve turned browsing the web—especially on mobile—into a frustrating chore of dismissing unwanted overlays is a classic tragedy of the commons. We blew it. Users don’t trust unrequested overlays, and I can’t blame them.

For what it’s worth, my opinion is that ambient badging is a better user experience than one-time prompts. That opinion is informed by a meagre amount of testing though. I’d love to hear from anyone who’s been doing more detailed usability testing of both approaches. I assume that Google, Mozilla, and Samsung are doing this kind of testing, and it would be really great to see the data from that (hint, hint).

But it might well be that ambient badging is just too subtle to even be noticed by the user.

On one end of the scale you’ve got the intrusiveness of an add-to-home-screen prompt, but on the other end of the scale you’ve got the discoverability problem of a subtle badge icon. I wonder if there might be a compromise solution—maybe a badge icon that pulses or glows on the first or second visit?

Of course that would also need to be thoroughly tested.

Teaching in Porto, day one

Today was the first day of the week long “masterclass” I’m leading here at The New Digital School in Porto.

When I was putting together my stab-in-the-dark attempt to provide an outline for the week, I labelled day one as “How the web works” and gave this synopsis:

The internet and the web; how browsers work; a history of visual design on the web; the evolution of HTML and CSS.

There ended up being less about the history of visual design and CSS (we’ll cover that tomorrow) and more about the infrastructure that the web sits upon. Before diving into the way the web works, I thought it would be good to talk about how the internet works, which led me back to the history of communication networks in general. So the day started from cave drawings and smoke signals, leading to trade networks, then the postal system, before getting to the telegraph, and then telephone networks, the ARPANET, and eventually the internet. By lunch time we had just arrived at the birth of the World Wide Web at CERN.

It wasn’t all talk though. To demonstrate a hub-and-spoke network architecture I had everyone write down someone else’s name on a post-it note, then stand in a circle around me, and pass me (the hub) those messages to relay to their intended receiver. Later we repeated this exercise but with a packet-switching model: everyone could pass a note to their left or to their right. The hub-and-spoke system took almost a minute to relay all six messages; the packet-switching version took less than 10 seconds.

Over the course of the day, three different laws came up that were relevant to the history of the internet and the web:

Metcalfe’s Law
The value of a network is proportional to the square of the number of users.
Postel’s Law
Be conservative in what you send, be liberal in what you accept.
Sturgeon’s Law
Ninety percent of everything is crap.

There were also references to the giants of hypertext: Ted Nelson, Vannevar Bush, and Douglas Engelbart—for a while, I had the mother of all demos playing silently in the background.

After a most-excellent lunch in a nearby local restaurant (where I can highly recommend the tripe), we started on the building blocks of the web: HTTP, URLs, and HTML. I pulled up the first ever web page so that we could examine its markup and dive into the wonder of the A element. That led us to the first version of HTML which gave us enough vocabulary to start marking up documents: p, h1-h6, ol, ul, li, and a few others. We went around the room looking at posters and other documents pinned to the wall, and started marking them up by slapping on post-it notes with opening and closing tags on them.

At this point we had covered the anatomy of an HTML element (opening tags, closing tags, attribute names and attribute values) as well as some of the history of HTML’s expanding vocabulary, including elements added in HTML5 like section, article, and nav. But so far everything was to do with marking up static content in a document. Stepping back a bit, we returned to HTTP, and talked about the difference between GET and POST requests. That led on to ways of sending data to a server, which led to form fields and the many types of inputs at our disposal: text, password, radio, checkbox, email, url, tel, datetime, color, range, and more.
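
To give a flavour of the kind of markup that ended up on those post-it notes, here’s a small made-up example combining the structural elements and form fields we covered (the URLs and field names are invented for illustration):

<article>
  <h1>Posters on the wall</h1>
  <nav>
    <ul>
      <li><a href="#events">Events</a></li>
      <li><a href="#contact">Contact</a></li>
    </ul>
  </nav>
  <section id="events">
    <p>Upcoming talks and workshops.</p>
  </section>
  <form action="/signup" method="post">
    <label for="email">Email</label>
    <input type="email" id="email" name="email">
    <label for="colour">Favourite colour</label>
    <input type="color" id="colour" name="colour">
    <input type="submit" value="Sign up">
  </form>
</article>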

With that, the day drew to a close. I feel pretty good about what we covered. There was a lot of groundwork, and plenty of history, but also plenty of practical information about how browsers interpret HTML.

With the structural building blocks of the web in place, tomorrow is going to focus more on the design side of things.

Notice

We’ve been doing a lot of soul-searching at Clearleft recently; examining our values; trying to make implicit unspoken assumptions explicit and spoken. That process has unearthed some activities that have been at the heart of our company from the very start—sharing, teaching, and nurturing. After all, Clearleft would never have been formed if it weren’t for the generosity of people out there on the web sharing with myself, Andy, and Richard.

One of the values/mottos/watchwords that’s emerging is “Share what you learn.” I like that a lot. It echoes the original slogan of the World Wide Web project, “Share what you know.” It’s been a driving force behind our writing, speaking, and events.

In the same spirit, we’ve been running internship programmes for many years now. John is the latest of a long line of alumni that includes Anna, Emil, and James.

By the way—and this should go without saying, but apparently it still needs to be said—the internships are always, always paid. I know that there are other industries where unpaid internships are the norm. I’ve even heard otherwise-intelligent people defend those unpaid internships for the experience they offer. But what kind of message does it send to someone about the worth of their work when you withhold payment for it? Our industry is young. Let’s not fall foul of the pernicious traps set by older industries that have habitualised exploitation.

In the past couple of years, Andy concocted a new internship scheme:

So this year we decided to try a different approach by scouring the end of year degree shows for hot new talent. We found them not in the interaction courses as we’d expected, but from the worlds of Product Design, Digital Design and Robotics. We assembled a team of three interns, with a range of complementary skills, gave them a space on the mezzanine floor of our new building, and set them a high level brief.

The first such programme resulted in Chüne. The latest Clearleft internship project has just come to an end. The result is Notice.

This time ‘round, the three young graduates were Chloe, Chris and Monika. They each have differing but complementary skill sets: Chloe is a user interface designer; Chris is a product designer; Monika is an artist who knows her way around hardware hacking and coding.

I’ll miss having this lot in the Clearleft office.

Once again, they were set a fairly loose brief. They should come up with something “to enrich the lives of local residents” and it should have a physical and digital component to it.

They got stuck in to researching and brainstorming ideas. At the end of each week, we’d all gather together to get a playback of what they were coming up with. It was at these playbacks that the interns were introduced to a concept that they will no doubt encounter again in their professional lives: seagulling AKA the swoop and poop. For once, it was the Clearlefties who were in the position of being swoop-and-poopers, rather than swoop-and-poopies.

Playback at Clearleft

As the midway point of the internship approached, there were some interesting ideas, but no clear “winner” to pursue. Something else was happening around this time too: dConstruct 2015.

Chloe, Monika and Chris at dConstruct

The interns pitched in with helping out at the event, and in return, we kidnapped some of the speakers—namely John Willshire and Chris Noessel—to offer them some guidance.

There was also plenty of inspiration to be had from the dConstruct talks themselves. One talk in particular struck a chord: Dan Hill’s The City Of Things …especially the bit where he railed against the terrible state of planning application notices:

Most of the time, it ends up down the bottom of the lamppost—soiled and soggy and forgotten. This should be an amazing thing!

Hmm… sounds like something that could enrich the lives of local residents.

Not long after that, Matt Webb came to visit. He encouraged the interns to focus in on just the two ideas that really excited them rather than the 5 or 6 that they were considering. So at the next playback, they presented two potential projects—one about biking and the other about city planning. They put it to a vote and the second project won by a landslide.

That was the genesis of Notice. After that, they pulled out all the stops.

Exciting things are afoot with the @Clearleftintern project.

Not content with designing one device, they came up with a range of three devices to match the differing scope of planning applications. They set about making a working prototype of the device intended for the most common applications.

Monika and Chris, hacking

Last week marked the end of the project and the grand unveiling.

Playing with the @notice_city prototype. Chris breaks it down. Playback time. Unveiling.

They’ve done a great job. All the details are on the website, including this little note I wrote about the project:

This internship programme was an experiment for Clearleft. We wanted to see what would happen if you put talented young people in a room together for three months to work on a fairly loose brief. Crucially, we wanted to see work that wasn’t directly related to our day-to-day dealings with web design.

We offered feedback and advice, but we received so much more in return. Monika, Chloe, and Chris brought an energy and enthusiasm to the Clearleft office that was invigorating. And the quality of the work they produced together exceeded our wildest expectations.

We hereby declare this experiment a success!

Personally, I think the work they’ve produced is very strong indeed. It would be a shame for it to end now. Perhaps there’s a way that it could be funded for further development. Here’s hoping.

Out on the streets of Brighton Prototype

As impressed as I am with the work, I’m even more impressed with the people. They’re not just talented and hard-working—they’re a jolly nice bunch to have around.

I’m going to miss them.

The terrific trio!

100 words 054

In between publishing the Whole Earth Catalog and spinning up the Long Now Foundation, Stewart Brand wrote an article in Rolling Stone magazine about one of the earliest video games, Spacewar.

Except it isn’t really about Spacewar at all. It’s about the oncoming age of the personal computer.

The article was published in 1972. At the end, there’s an appendix listing some communal places where “one can step in off the street and compute.” One of those places—with 16 terminals available—was run by a certain Bob Kahn.

Together with Vint Cerf he created the Internet’s Transmission Control Protocol.

100 words 041

At Clearleft, we’ve been running internships for quite a while now (paid internships, of course; there’s no justification for unpaid internships—don’t let anyone tell you otherwise). In that time we’ve been incredibly fortunate in getting fantastic people to jump on board our agency train for a few months.

The newest member of our pantheon is James Madson. He took time out of his globetrotting travels to become a temporary Clearlefty. It was great having him around. He’s a smart, humble, talented guy. I’m very glad that he enjoyed his time with us.

And his 100 days project is wonderful.

Hope

Cennydd points to an article by Ev Williams about the pendulum swing between open and closed technology stacks, and how that pendulum doesn’t always swing back towards openness. Cennydd writes:

We often hear the idea that “open platforms always win in the end”. I’d like that: the implicit values of the web speak to my own. But I don’t see clear evidence of this inevitable supremacy, only beliefs and proclamations.

It’s true. I catch myself saying things like “I believe the open web will win out.” Statements like that worry my inner empiricist. Faith-based outlooks scare me, and rightly so. I like being able to back up my claims with data.

Only time will tell what data emerges about the eventual fate of the web, open or closed. But we can look to previous technologies and draw comparisons. That’s exactly what Tim Wu did in his book The Master Switch and Jonathan Zittrain did in The Future Of The Internet—And How To Stop It. Both make for uncomfortable reading because they challenge my belief. Wu points to radio and television as examples of systems that began as egalitarian decentralised tools that became locked down over time in ever-constricting cycles. Cennydd adds:

I’d argue this becomes something of a one-way valve: once systems become closed, profit potential tends to grow, and profit is a heavy entropy to reverse.

Of course there is always the possibility that this time is different. It may well be that fundamental architectural decisions in the design of the internet and the workings of the web mean that this particular technology has an inherent bias towards openness. There is some data to support this (and it’s an appealing thought), but again; only time will tell. For now it’s just one more supposition.

The real question—when confronted with uncomfortable ideas that challenge what you’d like to believe is true—is what do you do about it? Do you look for evidence to support your beliefs or do you discard your beliefs entirely? That second option looks like the most logical course of action, and it’s certainly one that I would endorse if there were proven facts to be acknowledged (like gravity, evolution, or vaccination). But I worry about mistaking an argument that is still being discussed for an argument that has already been decided.

When I wrote about the dangers of apparently self-evident truisms, I said:

These statements aren’t true. But they are repeated so often, as if they were truisms, that we run the risk of believing them and thus, fulfilling their promise.

That’s my fear. Only time will tell whether the closed or open forces will win the battle for the soul of the internet. But if we believe that centralised, proprietary, capitalistic forces are inherently unstoppable, then our belief will help make them so.

I hope that openness will prevail. Hope sounds like such a wishy-washy word, like “faith” or “belief”, but it carries with it a seed of resistance. Hope, faith, and belief all carry connotations of optimism, but where faith and belief sound passive, even downright complacent, hope carries the promise of action.

Margaret Atwood was asked about the futility of having hope in the face of climate change. She responded:

If we abandon hope, we’re cooked. If we rely on nothing but hope, we’re cooked. So I would say judicious hope is necessary.

Judicious hope. I like that. It feels like a good phrase to balance empiricism with optimism; data with faith.

The alternative is to give up. And if we give up too soon, we bring into being the very endgame we feared.

Cennydd finishes:

Ultimately, I vote for whichever technology most enriches humanity. If that’s the web, great. A closed OS? Sure, so long as it’s a fair value exchange, genuinely beneficial to company and user alike.

This is where we differ. Today’s fair value exchange is tomorrow’s monopoly, just as today’s revolutionary is tomorrow’s tyrant. I will fight against that future.

To side with whatever’s best for the end user sounds like an eminently sensible metric to judge a technology. But I’ve written before about where that mindset can lead us. I can easily imagine Asimov’s three laws of robotics rewritten to reflect the ethos of user-centred design, especially that first and most important principle:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

…rephrased as:

A product or interface may not injure a user or, through inaction, allow a user to come to harm.

Whether the technology driving the system behind that interface is open or closed doesn’t come into it. What matters is the interaction.

But in his later years Asimov revealed the zeroth law, overriding even the first:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

It may sound grandiose to apply this thinking to the trivial interfaces we’re building with today’s technologies, but I think it’s important to keep drilling down and asking uncomfortable questions (even if they challenge our beliefs).

That’s why I think openness matters. It isn’t enough to use whatever technology works right now to deliver the best user experience. If that short-term gain comes with a long-term price tag for our society, it’s not worth it.

I would much rather have an imperfect open system than a perfect proprietary one.

I have hope in an open web …judicious hope.

Forgetting again

In an article entitled The future of loneliness Olivia Laing writes about the promises and disappointments provided by the internet as a means of sharing and communicating. This isn’t particularly new ground and she readily acknowledges the work of Sherry Turkle in this area. The article is the vanguard of a forthcoming book called The Lonely City. I’m hopeful that the book won’t be just another baseless luddite reactionary moral panic as exemplified by the likes of Andrew Keen and Susan Greenfield.

But there’s one section of the article where Laing stops providing any data (or even anecdotal evidence) and presents a supposition as though it were unquestionably fact:

With this has come the slowly dawning realisation that our digital traces will long outlive us.

Citation needed.

I recently wrote a short list of three things that are not true, but are constantly presented as if they were beyond question:

  1. Personal publishing is dead.
  2. JavaScript is ubiquitous.
  3. Privacy is dead.

But I didn’t include the most pernicious and widespread lie of all:

The internet never forgets.

This truism is so pervasive that it can be presented as a fait accompli, without any data to back it up. If you were to seek out the data to back up the claim, you would find that the opposite is true—the internet is in a constant state of forgetting.

Laing writes:

Faced with the knowledge that nothing we say, no matter how trivial or silly, will ever be completely erased, we find it hard to take the risks that togetherness entails.

Really? Suppose I said my trivial and silly thing on Friendfeed. Everything that was ever posted to Friendfeed disappeared three days ago:

You will be able to view your posts, messages, and photos until April 9th. On April 9th, we’ll be shutting down FriendFeed and it will no longer be available.

What if I shared on Posterous? Or Vox (back when that domain name was a social network hosting 6 million URLs)? What about Pownce? Geocities?

These aren’t the exceptions—this is routine. And yet somehow, despite all the evidence to the contrary, we still keep a completely straight face and say “Be careful what you post online; it’ll be there forever!”

The problem here is a mismatch of expectations. We expect everything that we post online, no matter how trivial or silly, to remain forever. When instead it is callously destroyed, our expectation—which was fed by the “knowledge” that the internet never forgets—is turned upside down. That’s where the anger comes from; the mismatch between expected behaviour and the reality of this digital dark age.

Being frightened of an internet that never forgets is like being frightened of zombies or vampires. These things do indeed sound frightening, and there’s something within us that readily responds to them, but they bear no resemblance to reality.

If you want to imagine a truly frightening scenario, imagine an entire world in which people entrust their thoughts, their work, and pictures of their family to online services in the mistaken belief that the internet never forgets. Imagine the devastation when all of those trivial, silly, precious moments are wiped out. For some reason we have a hard time imagining that dystopia even though it has already played out time and time again.

I am far more frightened by an internet that never remembers than I am by an internet that never forgets.

And worst of all, by propagating the myth that the internet never forgets, we are encouraging people to focus in exactly the wrong area. Nobody worries about preserving what they put online. Why should they? They’re constantly being told that it will be there forever. The result is that their history is taken from them:

If we lose the past, we will live in an Orwellian world of the perpetual present, where anybody that controls what’s currently being put out there will be able to say what is true and what is not. This is a dreadful world. We don’t want to live in this world.

Brewster Kahle

100 words 012

We had a houseguest yesterday evening—Emil was back in Brighton. Emil was the first ever intern at Clearleft. He’s back in the country for Cennydd and Anna’s housewarming party tomorrow. Anna was also an intern at Clearleft; that’s where Anna and Cennydd first met.

Jon was also an intern at Clearleft. He enjoyed the experience so much that he ended up moving to Brighton. Good thing too: this is where he met Hannah, the love of his life. Today, Hannah and Jon got married. It was all rather lovely.

And now they’re off to San Sebastian on their honeymoon.

Cerf rocks

After I wrote about digital preservation and the need to save everything, not just the so-called “important” stuff, Jason wrote a lovely piece with his own thoughts on the matter:

In order to write a history, you need evidence of what happened. When we talk about preserving the stuff we make on the web, it isn’t because we think a Facebook status update, or those GeoCities sites have such significance now. It’s because we can’t know.

In a timely coincidence, Vint Cerf also spoke about the importance of digital preservation:

When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history.

He warns of the dangers of rapidly-obsoleting file formats:

We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised.

It was a little weird that the Guardian headline refers to Vint Cerf as “Google boss”. On the BBC he’s labelled as “Google’s Vint Cerf”. Considering he’s one of the creators of the internet itself, it’s a bit like referring to Neil Armstrong as a NASA employee.

I have to say, I just love listening to him talk. He’s so smooth. I’m sure that the character of The Architect from The Matrix Reloaded is modelled on him.

Vint Cerf knows a thing or two about long-term thinking when it comes to data formats. He has written many RFCs for the IETF (my favourite being RFC 2468). Back in 1969, he wrote RFC 20, proposing the ASCII format for network interchange. If you’ve ever used the keypress event in JavaScript and wondered why, for example, the number 13 corresponds to a carriage return, this is where all those numbers come from.
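
To make that concrete, here’s the kind of snippet you’ll find in countless old scripts—purely illustrative, not taken from any particular site:

document.addEventListener('keypress', function (event) {
  // 13 is the ASCII code for a carriage return, as specified in RFC 20
  if (event.keyCode === 13) {
    console.log('Enter was pressed');
  }
});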

Last month, over 45 years after the RFC’s original publication, it became an official standard.

So when Vint Cerf warns about the dangers of digitising into file formats that could become unreadable, I think we should pay attention to him.

9,125 days later

The World Wide Web turned 25 last week. Happy birthday!

As is so often the case when web history is being discussed, there is much conflating of “the web” and “the internet” in some mainstream media outlets. The internet—the network of networks that allows computers to talk to each other across the globe—is older than 25 years. The web—a messy collection of HTML files linked via URLs and delivered with the Hypertext Transfer Protocol (HTTP)—is just one of the many types of information services that uses the pipes of the internet (yes, pipes …or tubes, if you prefer—anything but “cloud”).

Now, some will counter that although the internet and the web are technically different things, for most people they are practically the same, because the web is by far the most common use-case for the internet in everyday life. But I’m not so sure that’s true. Email is a massive part of the everyday life of many people—for some poor souls, email usage outweighs web usage. Then there’s streaming video services like Netflix, and voice-over-IP services like Skype. These sorts of proprietary protocols make up an enormous chunk of the internet’s traffic.

The reason I’m making this pedantic distinction is that there’s been a lot of talk in the past year about keeping the web open. I’m certainly in agreement on that front. But if you dig deeper, it turns out that most of the attack vectors are at the level of the internet, not the web.

Net neutrality is hugely important for the web …but it’s hugely important for every other kind of traffic on the internet too.

The Snowden revelations have shown just how shockingly damaging the activities of the NSA and GCHQ are …to the internet. But most of the communication protocols they’re intercepting are not web-based. The big exception is SSL, and the fact that they even thought it would be desirable to attempt to break it shows just how badly they need to be stopped—that’s the mindset of a criminal organisation, pure and simple.

So, yes, we are under attack, but let’s be clear about where those attacks are targeted. The internet is under attack, not the web. Not that that’s a very comforting thought; without a free and open internet, there can be no World Wide Web.

But by and large, the web trundles along, making incremental improvements to itself: expanding the vocabulary of HTML, updating the capabilities of HTTP, clarifying the documentation of URLs. Forgive my anthropomorphism. The web, of course, does nothing to itself; people are improving the web. But the web always has been—and always will be—people.

For some time now, my primary concern for the web has centred around what I see as its killer feature—the potential for long-term storage of knowledge. Yes, the web can be (and is) used for real-time planet-spanning communication, but there are plenty of other internet technologies that can do that. But the ability to place a resource at a URL and then to access that same resource at that same URL after many years have passed …that’s astounding!

Using any web browser on any internet-enabled device, you can instantly reach the first web page ever published. 23 years on, it’s still accessible. That really is something special. Digital information is not usually so long-lived.

On the 25th anniversary of the web, I was up in London with the rest of the Clearleft gang. Some of us were lucky enough to get a behind-the-scenes peek at the digital preservation work being done at the British Library:

In a small, unassuming office, entire hard drives, CD-ROMs and floppy disks are archived, with each item meticulously photographed to ensure any handwritten notes are retained. The wonderfully named ‘ancestral computing’ corner of the office contains an array of different computer drives, including 8-inch, 5 1⁄4-inch, and 3 1⁄2-inch floppy disks.

Most of the data that they’re dealing with isn’t much older than the web, but it’s an order of magnitude more difficult to access; trapped in old proprietary word-processing formats, stuck on dying storage media, readable only by specialised hardware.

Standing there looking at how much work it takes to rescue our cultural heritage from its proprietary digital shackles, I was struck once again by the potential power of the web. With such simple components—HTML, HTTP, and URLs—we have the opportunity to take full advantage of the planet-spanning reach of the internet, without sacrificing long-term access.

As long as we don’t screw it up.

Right now, we’re screwing it up all the time. The simplest way that we screw it up is by taking it for granted. Every time we mindlessly repeat the fallacy that “the internet never forgets,” we are screwing it up. Every time we trust some profit-motivated third-party service to be custodian of our writings, our images, our hopes, our fears, our dreams, we are screwing it up.

The evening after the 25th birthday of the web, I was up in London again. I managed to briefly make it along to the 100th edition of Pub Standards. It was a long time coming. In fact, there was a listing on Upcoming.org for the event. The listing was posted on February 5th, 2007.

Of course, you can’t see the original URL of that listing. Upcoming.org was “sunsetted” by Yahoo, the same company that “sunsetted” Geocities in much the same way that the Enola Gay sunsetted Hiroshima. But here’s a copy of that listing.

Fittingly, there was an auction held at Pub Standards 100 in aid of the Internet Archive. The schwag of many a “sunsetted” startup was sold off to the highest bidder. I threw some of my old T-shirts into the ring and managed to raise around £80 for Brewster Kahle’s excellent endeavour. My old Twitter shirt went for a pretty penny.

I was originally planning to bring my old Pownce T-shirt along too. But at the last minute, I decided I couldn’t part with it. The pain is still too fresh. Also, it serves a nice reminder for me. Trusting any third-party service—even one as lovely as Pownce—inevitably leads to destruction and disappointment.

That’s another killer feature of the web: you don’t need anyone else. You can publish to this world-changing creation without asking anyone for permission. I wish it were easier for people to do this: entrusting your heritage to the Yahoos and Pownces of the world is seductively simple …but only in the short term.

In 25 years time, I want to be able to access these words at this URL. I’m going to work to make that happen.

Chüne

We’ve had an internship programme at Clearleft for a few years now, and it has served us in good stead. Without it, we never would have had the pleasure of working with Emil, Jon, Anna, Shannon, and other lovely, lovely people. Crucially, it has always been a paid position: I can’t help but feel a certain level of disgust for companies that treat interns as a source of free manual labour.

For the most recent internship round, Andy wanted to try something a bit different. He’s written about it on the Clearleft blog:

So this year we decided to try a different approach by scouring the end of year degree shows for hot new talent. We found them not in the interaction courses as we’d expected, but from the worlds of Product Design, Digital Design and Robotics. We assembled a team of three interns, with a range of complementary skills, gave them a space on the mezzanine floor of our new building, and set them a high level brief to create a product that turned an active digital behaviour into a passive one.

The three interns were Killian, Zassa, and Victor—thoroughly lovely chaps all. It was fun having them in the office—and at Hackfarm—especially as they were often dealing with technologies beyond our usual ken: hardware hacking, and the like. They gave us weekly updates, and we gave them feedback and criticism; a sort of weekly swoop’n’poop on all the work they had been doing.

It was fascinating to watch the design process unfold, without directly being a part of it. At the end of their internship, they unveiled Chüne. They describe it as:

…a playful social music service that intelligently curates playlists depending on who is around, and how much fun they’re having.

They specced it out, built a prototype, and walked us through the interactions involved. It’s a really nice piece of work.

You can read more about it around the web:

Victor has written about the experience from his perspective, concluding:

Clearleft is by far the nicest company and working environment I have come across. All I can say is, if you are thinking about applying for next years internship programme, then DO IT, and if you aren’t thinking about it, well maybe you should start thinking!

Aw, isn’t that nice?

Dealing with IE again

People have been linking to—and saying nice things about—my musings on dealing with Internet Explorer. Thank you. You’re very kind. But I think I should clarify a few things.

If you’re describing the techniques I showed (using Sass and conditional comments) as practical or useful, I appreciate the sentiment but personally I wouldn’t describe them as either. Jake’s technique is genuinely useful and practical.

I wasn’t really trying to provide any practical “take-aways”. I was just thinking out loud. The only real point to my ramblings was at the end:

When we come up with clever hacks and polyfills for dealing with older versions of Internet Explorer, we shouldn’t feel pleased about it. We should feel angry.

My point is that we really shouldn’t have to do this. And, in fact, we don’t have to do this. We choose to do this.

Take the particular situation I was describing with a user of The Session who was using IE8 on Windows XP with a monitor set to 800x600 pixels. A lot of people picked up on this observation:

As a percentage, this demographic is tiny. But this isn’t a number. It’s a person. That person matters.

But here’s the thing: that person only started to experience problems when I chose to try to “help” IE8 users. If I had correctly treated IE8 as the legacy browser that it is, those users would have received the baseline experience …which was absolutely fine. Not great, but fine. But instead, I decided to jump in with my hacks, my preprocessor, my conditional comments, and worst of all, my assumptions about the viewport size.

In this case, I only have myself to blame. This is a personal project so I’m the client. I decided that I wanted to give IE8 and IE7 users the same kind of desktop navigation that more modern browsers were getting. All the subsequent pain for me as the developer, and for the particular user who had problems, is entirely my fault. If you’re working at a company where your boss or your client insists on parity for IE8 or IE7, I guess you can point the finger at them.

My point is: all the problems and workarounds that I talked about in that post were the result of me trying to crowbar modern features into a legacy browser. Now, don’t get me wrong—I’m not suggesting that IE8 or IE7 should be shut out or get a crap experience: “baseline” doesn’t mean “crap”. There’s absolutely nothing wrong with serving up a baseline experience to a legacy browser as long as your baseline experience is pretty good …and it should be.

So, please, don’t think that my post was a hands-on, practical example of how to give IE8 and IE7 users a similar experience to modern browsers. If anything, it was a cautionary tale about why trying to do that is probably a mistake.

Dealing with IE

Laura asked a question on Twitter the other day about dealing with older versions of Internet Explorer when you’ve got your layout styles nested within media queries (that older versions of IE don’t understand):

It’s a fair question. It also raises another question: how do you define “dealing with” Internet Explorer 8 or 7?

You could justifiably argue that IE7 users should upgrade their damn browser. But that same argument doesn’t really hold for IE8 if the user is on Windows XP: IE8 is as high as they can go. Asking users to upgrade their browser is one thing. Asking them to upgrade their operating system feels different.

But this is the web and websites do not need to look the same in every browser. Is it acceptable to simply give Internet Explorer 8 the same baseline experience that any other old out-of-date browser would get? In other words, is it even a problem that older versions of Internet Explorer won’t parse media queries? If you’re building in a mobile-first way, they’ll get linearised content with baseline styles applied.

That’s the approach that Alex advocates in the Q&A after his excellent closing keynote at Fronteers. That’s what I’m doing here on adactio.com. Users of IE8 get the linearised layout and that’s just fine. One of the advantages of this approach is that you are then freed up to use all sorts of fancy CSS within your media query blocks without having to worry about older versions of IE crapping themselves.

On other sites, like Huffduffer, I make an assumption (always a dangerous thing to do) that IE7 and IE8 users are using a desktop or laptop computer and so they could get some layout styles. I outlined that technique in a post about Windows mobile media queries. Using that technique, I end up splitting my CSS into two files:

<link rel="stylesheet" href="/css/global.css" media="all">
<link rel="stylesheet" href="/css/layout.css" media="all and (min-width: 30em)">
<!--[if (lt IE 9) & (!IEMobile)]>
<link rel="stylesheet" href="/css/layout.css" media="all">
<![endif]-->

The downside to this technique is that now there are two HTTP requests for the CSS …even for users of modern browsers. The alternative is to maintain one stylesheet for modern browsers and a separate stylesheet for older versions of Internet Explorer. That sounds like a maintenance nightmare.

Pre-processors to the rescue. Using Sass or LESS you can write your CSS in separate files (e.g. one file for basic styles and another for layout styles) and then use the preprocessor to combine those files in two different ways: one with media queries (for modern browsers) and another without media queries (for older versions of Internet Explorer). Or, if you don’t want to have your media query styles all grouped together, you can use Jake’s excellent method.
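For the record, here’s a rough sketch of that second kind of approach: each component’s layout styles live right next to its other styles, and a single flag decides whether they get wrapped in a media query or flattened out for old IE. The $fix-mqs variable, the respond-min mixin and the file names are illustrative inventions of mine, not Jake’s actual code:

// _shared.scss: styles written once, imported by both entry points
$fix-mqs: false !default;

@mixin respond-min($width) {
 @if $fix-mqs {
  // legacy stylesheet: skip the media query, but only output styles
  // that apply at the width we're pretending the browser has
  @if $fix-mqs >= $width {
   @content;
  }
 } @else {
  // modern stylesheet: wrap the styles in a real media query
  @media all and (min-width: $width) {
   @content;
  }
 }
}

nav a {
 display: block;
 @include respond-min(30em) {
  display: inline-block;
 }
}

Then the two entry points are tiny: one keeps the media queries, the other pretends the viewport is a fixed width.

// global.scss: media queries intact, for modern browsers
@import "shared";

// legacy.scss: compiled as if the viewport were 64em wide, for IE8 and IE7
$fix-mqs: 64em;
@import "shared";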

When I relaunched The Session last month, I initially just gave Internet Explorer 8 and lower the linearised content—the same layout that small-screen browsers would get. For example, the navigation is situated at the bottom of each page and you get to it by clicking an internal link at the top of each page. It all worked fine and nobody complained.

But I thought that it was a bit of a shame that users of IE8 and IE7 weren’t getting the same navigation that users of other desktop browsers were getting. So I decided to use a preprocessor (Sass in this case) to spit out an extra stylesheet for IE8 and IE7.

So let’s say I’ve got .scss files like this:

  • base.scss
  • medium.scss
  • wide.scss

Then in my standard .scss file, the one that generates the CSS for all browsers (compiled to global.css), I can write:

@import "base.scss";
@media all and (min-width: 30em) {
 @import "medium";
}
@media all and (min-width: 50em) {
 @import "wide";
}

But I can also generate a stylesheet for IE8 and IE7 (called legacy.css) that calls in those layout styles without the media query blocks:

@import "medium";
@import "wide";

IE8 and IE7 will be downloading some styles twice (all the styles within media queries) but in this particular case, that doesn’t amount to too much. Oh, and you’ll notice that I’m not even going to try to let IE6 parse those styles: it would do more harm than good.

<link rel="stylesheet" href="/css/global.css">
<!--[if (lt IE 9) & (!IEMobile) & (gt IE 6)]>
<link rel="stylesheet" href="/css/legacy.css">
<![endif]-->

So I did that (although I don’t really have .scss files named “medium” or “wide”—they’re actually given names like “navigation” or “columns” that more accurately describe what they do). I thought I was doing a good deed for any users of The Session who were still using Internet Explorer 8.

But then I read this. It turned out that someone was not only using IE8 on Windows XP, but they had their desktop’s resolution set to 800×600. That’s an entirely reasonable thing to do if your eyesight isn’t great. And, like I said, I can’t really ask them to upgrade their browser because that would mean upgrading the whole operating system.

Now there’s a temptation here to dismiss this particular combination of old browser + old OS + narrow resolution as an edge case. It’s probably just one person. But that one person is a prolific contributor to the site. This situation nicely highlights the problem of playing the numbers game: as a percentage, this demographic is tiny. But this isn’t a number. It’s a person. That person matters.

The root of the problem lay in my assumption that IE8 or IE7 users would be using desktop or laptop computers with a screen size of at least 1024 pixels. Serves me right for making assumptions.

So what could I do? I could remove the conditional comments and the IE-specific stylesheet and go back to just serving the linearised content. Or I could serve up just the medium-width styles to IE8 and IE7.

That’s what I ended up doing but I also introduced a little bit of JavaScript in the conditional comments to serve up the widescreen styles if the browser width is above a certain size:

<link rel="stylesheet" href="/css/global.css">
<!--[if (lt IE 9) & (!IEMobile) & (gt IE 6)]>
<link rel="stylesheet" href="/css/medium.css">
<script>
if (document.documentElement.clientWidth > 800) {
 document.write('<link rel="stylesheet" href="/css/wide.css">');
}
</script>
<![endif]-->

It works …I guess. It’s not optimal but at least users of IE8 and IE7 are no longer just getting the small-screen styles. It’s a hack, and not a particularly clever one.

Was it worth it? Is it an improvement?

I think this is something to remember when we’re coming up with solutions for “dealing with” older versions of Internet Explorer: whether it’s a dumb solution like mine or a clever solution like Jake’s, we shouldn’t have to do this. We shouldn’t have to worry about IE7 just like we don’t have to worry about Netscape 4 or Mosaic or Lynx; we should be free to build according to the principles of progressive enhancement, safe in the knowledge that older, less capable browsers won’t get all the bells and whistles, but they will be able to access our content. Instead we’re spending time coming up with hacks and polyfills to deal with one particular family of older, less capable browsers simply because of their disproportionate market share.

When we come up with clever hacks and polyfills for dealing with older versions of Internet Explorer, we shouldn’t feel pleased about it. We should feel angry.

Update: I’ve written a follow-up post to clarify what I’m talking about here.

Things

Put the kettle on, make yourself a cup of tea, and settle down to read a couple of thought-provoking pieces about networked devices.

First up, Scott Jenson writes Of Bears, Bats, and Bees: Making Sense of the Internet of Things:

The Internet of Things is a growing, changing meme. Originally it was meant to invoke a giant swarm of cheap computation across the globe but recently has been morphing and blending, even insinuating, into established product concepts.

Secondly, Charles Stross has published an abridged version of a talk he gave back in June called How low (power) can you go?:

The logical end-point of Moore’s Law and Koomey’s Law is a computer for every square metre of land area on this planet — within our lifetimes. And, speaking as a science fiction writer, trying to get my head around the implications of this technology for our lives is giving me a headache. We’ve lived through the personal computing revolution, and the internet, and now the advent of convergent wireless devices — smartphones and tablets. Ubiquitous programmable sensors will, I think, be the next big step, and I wouldn’t be surprised if their impact is as big as all the earlier computing technologies combined.

And I’ll take this opportunity to once again point to one of my favourite pieces on the “Internet of Things” by Russell Davies:

The problem, though, with the Internet Of Things is that it falls apart when it starts to think about people. When big company Internet Of Things thinkers get involved they tend to spawn creepy videos about sleek people in sleek homes living optimised lives full of smart objects. These videos seem to radiate the belief that the purpose of a well-lived life is efficiency. There’s no magic or joy or silliness in it. Just an optimised, efficient existence. Perhaps that’s why the industry persists in inventing the Internet Fridge. It’s top-down design, not based on what people might fancy, but on what technologies companies are already selling.

Fortunately, though, there’s another group of people thinking about the Internet of Things - enthusiasts and inventors who are building their own internet connected things, adding connectivity and intelligence to the world in their own ways.

You can read it on your networked device or you can listen to it on your networked device …while you’re having your cup of tea …in a non-networked cup …with water from a non-networked kettle.

BBC - Podcasts - Four Thought: Russell M. Davies 21 Sept 2011 on Huffduffer

Cables

After speaking at Go Beyond Pixels in St. John’s, I had some time to explore Newfoundland a little bit. Geri was kind enough to drive me to a place I really wanted to visit: the cable station at Heart’s Content.

Heart's Content Cable Station

I’ve wanted to visit Heart’s Content (and Porthcurno in Cornwall) ever since reading The Victorian Internet, a magnificent book by Tom Standage that conveys the truly world-changing nature of the telegraph. Heart’s Content plays a pivotal role in the story: the landing site of the transatlantic cable, spooled out by the Brunel-designed Great Eastern, the largest ship in the world at the time.

Recently I was sent an advance reading copy of Tubes by Andrew Blum. It makes a great companion piece to Standage’s book as Blum explores the geography of the internet:

For all the talk of the placelessness of our digital age, the Internet is as fixed in real, physical places as any railroad or telephone system ever was.

There’s an interview with Andrew Blum on PopTech, a review of Tubes on Brain Pickings, and I’ve huffduffed a recent talk by Andrew Blum in Philadelphia.

Andrew Blum | Tubes: A Journey to the Center of the Internet - Free Library Podcast on Huffduffer

Now there are more places I want to visit: the nexus points on TeleGeography’s Submarine Cable Map; the hubs of Hibernia Atlantic, whose about page reads like a viral marketing campaign for some soon-to-be-released near-future Hollywood cyberpunk thriller.

I’ve got the kind of travel bug described by Neal Stephenson in his classic 1996 Wired piece Mother Earth Mother Board:

In which the hacker tourist ventures forth across the wide and wondrous meatspace of three continents, acquainting himself with the customs and dialects of the exotic Manhole Villagers of Thailand, the U-Turn Tunnelers of the Nile Delta, the Cable Nomads of Lan tao Island, the Slack Control Wizards of Chelmsford, the Subterranean Ex-Telegraphers of Cornwall, and other previously unknown and unchronicled folk; also, biographical sketches of the two long-dead Supreme Ninja Hacker Mage Lords of global telecommunications, and other material pertaining to the business and technology of Undersea Fiber-Optic Cables, as well as an account of the laying of the longest wire on Earth, which should not be without interest to the readers of Wired.

Maybe one day I’ll get to visit the places being designed by Sheehan Partners, currently only inhabited by render ghosts on their website (which feels like it’s part of the same subversive viral marketing campaign as the Hibernia Atlantic site).

Perhaps I can find a reason to stop off in Ashburn, Virginia or The Dalles, Oregon, once infamous as the site of a cult-induced piece of lo-tech bioterrorism, now the site of Google’s Project 02. Not that there’s much chance of being allowed in, given Google’s condescending attitude when it comes to what they do with our data: “we know what’s best, don’t you trouble your little head about it.”

It’s that same attitude that lurks behind that most poisonous of bullshit marketing terms…

The cloud.

What a crock of shit.

The cloud is a lie

Whereas other bullshit marketing terms once had a defined meaning that has eroded over time due to repeated use and abuse—Ajax, Web 2.0, HTML5, UX—“the cloud” is a term that sets out to deceive from the outset, imbued with the same Lakoffian toxicity as “downsizing” or “friendly fire.” It is the internet equivalent of miasma theory.

Death to the cloud! Long live the New Flesh of servers, routers, wires and cables.