Journal tags: open

Resigning from the AMP advisory committee

Inspired by Terence Eden’s example, I applied for membership of the AMP advisory committee last year. To my surprise, my application was successful.

I’ve spent the time since then participating in good faith, but I can’t do that any longer. Here’s what I wrote in my resignation email:

Hi all,

As mentioned at the end of the last call, I’m stepping down from the AMP advisory committee.

I can’t in good faith continue to advise on the AMP project for the OpenJS Foundation when it has become clear to me that AMP remains a Google product, with only a subset of pieces that could even be considered open source.

If I were to remain on the advisory committee, my feelings of resentment about this situation would inevitably affect my behaviour. So it’s best for everyone if I step away now instead of descending into outright sabotage. It’s not you, it’s me.

I’d like to thank the OpenJS Foundation for allowing me to participate. It’s been an honour to watch Tobie and Jory in action.

I wish everyone well and I hope that the advisory committee can successfully guide the AMP project towards a happy place where it can live out its final days in peace.

I don’t have a replacement candidate to nominate but I’ll ask around amongst other independent sceptical folks to see if there’s any interest.

All the best,

Jeremy

I wrote about the fundamental problem with Google AMP when I joined the advisory committee:

This is an interesting time for AMP …whatever AMP is.

See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is.

There’s the collection of web components. If that were all AMP is, it would be a very straightforward project, similar to other collections of web components (like Polymer). But then there’s the concept of validation. The validation comes from a set of rules, defined by Google. And there’s the AMP cache, or more accurately, Google hosting.

Only one piece of that trinity—the collection of web components—is eligible for the label of being open source, and even that’s a stretch considering that most of the contributions come from full-time Google employees. The other two parts are firmly under Google’s control.

I was hoping it was a marketing problem. We spent a lot of time on the advisory committee trying to figure out ways of making it clearer what AMP actually is. But it was a losing battle. The phrase “the AMP project” is used to cover up the deeply intertwingled nature of its constituent parts. Bits of it are open source, but most of it is proprietary. The OpenJS Foundation doesn’t seem like a good home for a mostly-proprietary project.

Whenever a representative from Google showed up at an advisory committee meeting, it was clear that they viewed AMP as a Google product. I never got the impression that they planned to hand over control of the project to the OpenJS Foundation. Instead, they wanted to hear what people thought of their project. I’m not comfortable doing that kind of unpaid labour for a large profitable organisation.

Even worse, Google representatives reminded us that AMP was being used as a foundational technology for other Google products: stories, email, ads, and even some weird payment thing in native Android apps. That’s extremely worrying.

While I was serving on the AMP advisory committee, a coalition of attorneys general filed a suit against Google for anti-competitive conduct:

Google designed AMP so that users loading AMP pages would make direct communication with Google servers, rather than publishers’ servers. This enabled Google’s access to publishers’ inside and non-public user data.

We were immediately told that we could not discuss an ongoing court case in the AMP advisory committee. That’s fair enough. But will it go both ways? Or will lawyers acting on Google’s behalf be allowed to point to the AMP advisory committee and say, “But AMP is an open source project! Look, it even resides under the banner of the OpenJS Foundation.”

If there’s even a chance of the AMP advisory committee being used as a Potemkin village, I want no part of it.

But even as I’m noping out of any involvement with Google AMP, my parting words have to be about how impressed I am with the OpenJS Foundation. Jory and Tobie have been nothing less than magnificent in their diplomacy, cat-herding, schedule-wrangling, timekeeping, and other organisational superpowers that I’m crap at.

I sincerely hope that Google isn’t taking advantage of the OpenJS Foundation’s kind-hearted trust.

Bookshop

Back at the start of the (first) lockdown, I wrote about using my website as an outlet:

While you’re stuck inside, your website is not just a place you can go to, it’s a place you can control, a place you can maintain, a place you can tidy up, a place you can expand. Most of all, it’s a place you can lose yourself in, even if it’s just for a little while.

Last week was eventful and stressful. For everyone. I found myself once again taking refuge in my website, tinkering with its inner workings in the way that someone else would potter about in their shed or take to their garage to strip down the engine of some automotive device.

Colly drew my attention to Bookshop.org, newly launched in the UK. It’s an umbrella website for independent bookshops to sell through. It’s also got an affiliate scheme, much like Amazon. I set up a Bookshop page for myself.

I’ve been tracking the books I’m reading for the past three years here on my own website. I set about reproducing that list on Bookshop.

It was exactly the kind of not-exactly-mindless but definitely-not-challenging task that was perfect for the state of my brain last week. Search for a book; find the ISBN number; paste that number into a form. It’s the kind of task that a real programmer would immediately set about automating but one that I embraced as a kind of menial task to keep me occupied.

I wasn’t able to get a one-to-one match between the list on my site and my reading list on Bookshop. Some titles aren’t available in the online catalogue. For example, the book I’m reading right now—A Paradise Built in Hell by Rebecca Solnit—is nowhere to be found, which seems like an odd omission.

But most of the books I’ve read are there on Bookshop.org, complete with pretty book covers. Then I decided to reverse the process of my menial task. I took all of the ISBN numbers from Bookshop and added them as machine tags to my reading notes here on my own website. Book cover images on Bookshop have predictable URLs that use the ISBN number (well, technically the EAN number, or ISBN-13, but let’s not go down a 927 rabbit hole here). So now I’m using that metadata to pull in images from Bookshop.org to illustrate my reading notes here on adactio.com.

I’m linking to the corresponding book on Bookshop.org using this URL structure:

https://uk.bookshop.org/a/{{ affiliate code }}/{{ ISBN number }}

I realised that I could also link to the corresponding entry on Open Library using this URL structure:

https://openlibrary.org/isbn/{{ ISBN number }}

Here, for example, is my note for The Raven Tower by Ann Leckie. That entry has a tag:

book:ean=9780356506999

With that information I can illustrate my note with this image:

https://images-eu.bookshop.org/product-images/images/9780356506999.jpg

I’m linking off to this URL on Bookshop.org:

https://uk.bookshop.org/a/980/9780356506999

And this URL on Open Library:

https://openlibrary.org/isbn/9780356506999
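
Putting those pieces together, the generated markup for that note ends up as something along these lines (a simplified sketch—the actual markup on my site has a bit more going on):

    <!-- cover image from Bookshop.org, derived from the book:ean machine tag -->
    <a href="https://uk.bookshop.org/a/980/9780356506999">
      <img src="https://images-eu.bookshop.org/product-images/images/9780356506999.jpg"
        alt="The cover of The Raven Tower by Ann Leckie">
    </a>
    <!-- and a corresponding link to the same book on Open Library -->
    <a href="https://openlibrary.org/isbn/9780356506999">The Raven Tower on Open Library</a>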

The end result is that my reading list now has more links and pretty pictures.

Oh, I also set up a couple of shorter lists on Bookshop.org:

The books listed in those are drawn from my end of the year round-ups when I try to pick one favourite non-fiction book and one favourite work of fiction (almost always speculative fiction). The books in those two lists are the ones that get two hearty thumbs up from me. If you click through to buy one of them, the price might not be as cheap as on Amazon, but you’ll be supporting an independent bookshop.

Geneva Copenhagen Amsterdam

Back in the late 2000s, I used to go to Copenhagen every year for an event called Reboot. It was a fun, eclectic mix of talks and discussions, but alas, the last one was over a decade ago.

It was organised by Thomas Madsen-Mygdal. I hadn’t seen Thomas in years, but then, earlier this year, our paths crossed when I was back at CERN for the 30th anniversary of the web. He got a real kick out of the browser recreation project I was part of.

A few months ago, I got an email from Thomas about the new event he’s running in Copenhagen called Techfestival. He was wondering if there was some way of making the WorldWideWeb project part of the event. We ended up settling on having a stand—a modern computer running a modern web browser running a recreation of the first ever web browser from almost three decades ago.

So I showed up at Techfestival and found that the computer had been set up in a Shoreditchian shipping container. I wasn’t exactly sure what I was supposed to do, so I just hung around nearby until someone wandering by would pause and start tentatively approaching the stand.

If you’re at Techfestival.co in Copenhagen, drop in to this shipping container where I’ll be demoing WorldWideWeb.cern.ch

“Would you like to try the time machine?” I asked. Nobody refused the offer. I explained that they were looking at a recreation of the world’s first web browser, and then showed them how they could enter a URL to see how the oldest web browser would render a modern website.

Lots of people entered facebook.com or google.com, but some people had their own websites, either personal or for their business. They enjoyed seeing how well (or not) their pages held up. They’d take photos of the screen.

People asked lots of questions, which I really enjoyed answering. After a while, I was able to spot the themes that came up frequently. Some people were confusing the origin story of the internet with the origin story of the web, so I was more than happy to go into detail on either or both.

The experience helped me clarify in my own mind what was exciting and interesting about the birth of the web—how much has changed, and how much has stayed the same.

All of this is very useful fodder for a conference talk I’m putting together. This will be a joint talk with Remy at the Fronteers conference in Amsterdam in a couple of weeks. We’re calling the talk How We Built the World Wide Web in Five Days:

The World Wide Web turned 30 years old this year. To mark the occasion, a motley group of web nerds gathered at CERN, the birthplace of the web, to build a time machine. The first ever web browser was, confusingly, called WorldWideWeb. What if we could recreate the experience of using it …but within a modern browser! Join (Je)Remy on a journey through time and space and code as they excavate the foundations of Tim Berners-Lee’s gloriously ambitious and hacky hypertext system that went on to conquer the world.

Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.

We’ve been honing the material and doing some run-throughs at the Clearleft HQ at 68 Middle Street this week. The talk has a somewhat unusual structure with two converging timelines. I think it’s going to work really well, but I won’t know until we actually deliver the talk in Amsterdam. I’m excited—and a bit nervous—about it.

Whether it’s in a shipping container in Copenhagen or on a stage in Amsterdam, I’m starting to realise just how much I enjoy talking about web history.

Just change it

Amber and I often have meta conversations about the nature of learning and teaching. We swap books and share ideas and experiences whenever we’re trying to learn something or trying to teach something. A topic that comes up again and again is the idea of “the curse of knowledge”—it’s the focus of Steven Pinker’s book The Sense Of Style. That’s when the author/teacher can’t remember what it’s like not to know something, which makes for a frustrating reading/learning experience.

This is one of the reasons why I encourage people to blog about stuff as they’re learning it; not when they’ve internalised it. The perspective that comes with being in the moment of figuring something out is invaluable to others. I honestly think that most explanatory books shouldn’t be written by experts—the “curse of knowledge” can become almost insurmountable.

I often think about this when I’m reading through the installation instructions for frameworks, libraries, and other web technologies. I find myself put off by documentation that assumes I’ve got a certain level of pre-existing knowledge. But now instead of letting it get me down, I use it as an opportunity to try and bridge that gap.

The brilliant Safia Abdalla wrote a post a while back called How do I get started contributing to open source? I definitely don’t have the programming chops to contribute much to a codebase, but I thoroughly agree with Safia’s observation:

If you’re interested in contributing to open source to improve your communication and empathy skills, you’re definitely making the right call. A lot of open source tools could definitely benefit from improvements in the documentation, accessibility, and evangelism departments.

What really jumps out at me is when instructions use words like “simply” or “just”. I’m with Brad:

“Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. “Just” is a dangerous word.

But rather than letting that feeling overwhelm me, I now try to fix the text. Here are a few examples of changes I’ve suggested, usually via pull requests on Github repos:

They all have different codebases in different programming languages, but they’re all intended for humans, so having clear and kind documentation is a shared goal.

I like suggesting these kinds of changes. That initial feeling of frustration I get from reading the documentation gets turned into a warm fuzzy feeling from lending a helping hand.

The meaning of AMP

Ethan quite rightly points out some semantic sleight of hand by Google’s AMP team:

But when I hear AMP described as an open, community-led project, it strikes me as incredibly problematic, and more than a little troubling. AMP is, I think, best described as nominally open-source. It’s a corporate-led product initiative built with, and distributed on, open web technologies.

But so what, right? Tom-ay-to, tom-a-to. Well, here’s a pernicious example of where it matters: in a recent announcement of their intent to ship a new addition to HTML, the Google Chrome team cited the mood of the web development community thusly:

Web developers: Positive (AMP team indicated desire to start using the attribute)

If AMP were actually the product of working web developers, this justification would make sense. As it is, we’ve got one team at Google citing the preference of another team at Google but representing it as the will of the people.

This is just one example of AMP’s sneaky marketing where some finely-shaved semantics allows them to appear far more reasonable than they actually are.

At AMP Conf, the Google Search team were at pains to repeat over and over that AMP pages wouldn’t get any preferential treatment in search results …but they appear in a carousel above the search results. Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

The same semantic nit-picking can be found in their defence of caching. See, they’ve even got me calling it caching! It’s hosting. If I click on a search result, and I am taken to a page that has a URL beginning with https://www.google.com/amp/s/... then that page is being hosted on the domain google.com. That is literally what hosting means. Now, you might argue that the original version was hosted on a different domain, but the version that the user gets sent to is the Google copy. You can call it caching if you like, but you can’t tell me that Google aren’t hosting AMP pages.

That’s a particularly low blow, because it’s such a bait’n’switch. One of the reasons why AMP first appeared to be different to Facebook Instant Articles or Apple News was the promise that you could host your AMP pages yourself. That’s the very reason I first got interested in AMP. But if you actually want the benefits of AMP—appearing in the not-search-results carousel, pre-rendered performance, etc.—then your pages must be hosted by Google.

So, to summarise, here are three statements that Google’s AMP team are currently peddling as being true:

  1. AMP is a community project, not a Google project.
  2. AMP pages don’t receive preferential treatment in search results.
  3. AMP pages are hosted on your own domain.

I don’t think those statements are even truthy, much less true. In fact, if I were looking for the right term to semantically describe any one of those statements, the closest in meaning would be this:

A statement used intentionally for the purpose of deception.

That is the dictionary definition of a lie.

Update: That last part was a bit much. Sorry about that. I know it’s a bit much because The Register got all gloaty about it.

I don’t think the developers working on the AMP format are intentionally deceptive (although they are engaging in some impressive cognitive gymnastics). The AMP ecosystem, on the other hand, that’s another story—the preferential treatment of Google-hosted AMP pages in the carousel and in search results; that’s messed up.

Still, I would do well to remember that there are well-meaning people working on even the fishiest of projects.

Except for the people working at the shitrag that is The Register.

(The other strong signal that I overstepped the bounds of decency was that this post attracted the pond scum of Hacker News. That’s another place where the “well-meaning people work on even the fishiest of projects” rule definitely doesn’t apply.)

Open source

Building and maintaining an open-source project is hard work. That observation is about as insightful as noting the religious affiliation of the pope or the scatological habits of woodland bears.

Nolan Lawson wrote a lengthy post describing what it feels like to be an open-source maintainer.

Outside your door stands a line of a few hundred people. They are patiently waiting for you to answer their questions, complaints, pull requests, and feature requests.

You want to help all of them, but for now you’re putting it off. Maybe you had a hard day at work, or you’re tired, or you’re just trying to enjoy a weekend with your family and friends.

But if you go to github.com/notifications, there’s a constant reminder of how many people are waiting

Most of the comments on the post are from people saying “Yup, I hear ya!”

Jan wrote a follow-up post called Sustainable Open Source: The Maintainers Perspective or: How I Learned to Stop Caring and Love Open Source:

Just because there are people with problems in front of your door, that doesn’t mean they are your problems. You can choose to make them yours, but you want to be very careful about what to care about.

There’s also help at hand in the shape of Open Source Guides created by Nadia Eghbal:

A collection of resources for individuals, communities, and companies who want to learn how to run and contribute to an open source project.

I’m sure Mark can relate to all of the tales of toil that come with being an open-source project maintainer. He’s been working flat-out on Fractal, sometimes at work, but often at home too.

Fractal isn’t really a Clearleft project, at least not in the same way that something like Silverback or UX London is. We’re sponsoring Fractal as much as we can, but an open-source project doesn’t really belong to anyone; everyone is free to fork it and take it. But I still want to make sure that Mark and Danielle have time at work to contribute to Fractal. It’s hard to balance that with the bill-paying client work though.

I invited Remy around to chat with them last week. It was really valuable. Mind you, Remy was echoing many of the same observations made in Nolan’s post about how draining this can be.

So nobody here is under any illusions that this open-source lark is to be entered into lightly. It can be a gruelling exercise. But then it can also be very, very rewarding. One kind word from somebody using your software can make your day. I was genuinely pleased as punch when Danish agency Shift sent Mark a gift to thank him for all his hard work on Fractal.

People can be pretty darn great (which I guess is an underlying principle of open source).

Ice cold in Copenhagen

I went to Copenhagen last week for the Coldfront conference. It was lovely to be back in Denmark’s capital. I used to go over there every year when the Reboot conference was running, but that wrapped up a few years back so it’s been quite a while since I had the opportunity to savour Copenhagen’s architecture, culture, coffee, food, and beer.

Coldfront was fun. Kenneth has modelled the format of the event on Remy’s Full Frontal conference—one day of a single track of front-end dev talks in a comfy cinema.

Going to a focused conference like this is a great way of getting a short sharp shock of what’s hot—like a State of the Union address for the web. At Coldfront there were some very clear themes around building for resilience, and specifically routing around the damage of inconsistent connectivity. There was a very clear message—from Paul, Alex, and Patrick (blog imminent)—that the network is not always on our side. Making our sites work offline should be much more of a priority than it currently is.

On a related note, the technology that was mentioned the most was Service Workers …and Jake wasn’t even there! Heck, even I mentioned it in glowing terms in my own little presentation. I was admiring the way it has been designed specifically to be used in a progressive enhancement kind of way.

So if I were Mr. McGuire in The Graduate, my line to a web developer equivalent of Dustin Hoffman would be “I want to say one word to you, just one word. Are you listening? …Service Workers.”

100 words 076

The Open Market is situated just off London Road in Brighton. For the longest time, it—along with London Road itself—was the kind of place you really didn’t want to venture into. It was sort of seedy, and not a little bit off-putting.

But in the past few years The Open Market has had a complete overhaul. Now it’s downright pleasant. There are funky stalls, a great butcher shop, a good fishmonger, multiple fruit’n’veg places, and a truly excellent Greek café.

The atmosphere on the weekends is particularly convivial. If this is gentrification, then bring it on I say.

This is for everyone with a certificate

Mozilla—like Google before them—have announced their plans for deprecating HTTP in favour of HTTPS. I’m all in favour of moving to HTTPS. I’ve done it myself here on adactio.com, on thesession.org, and on huffduffer.com. I have some concerns about the potential linkrot involved in the move to TLS everywhere—as outlined by Tim Berners-Lee—but still, anything that makes the work of GCHQ and the NSA more difficult is alright by me.

But I have a big, big problem with Mozilla’s plan to “encourage” the move to HTTPS:

Gradually phasing out access to browser features.

Requiring HTTPS for certain browser features makes total sense, given the security implications. Service Workers, for example, are quite correctly only available over HTTPS. Any API that has access to a device sensor—or that could be used for fingerprinting in any way—should only be available over HTTPS. In retrospect, Geolocation should have been HTTPS-only from the beginning.

But to deny access to APIs where there are no security concerns, where it is merely a stick to beat people with …that’s just wrong.

This is for everyone. Not just those smart enough to figure out how to add HTTPS to their site. And yes, I know, the theory is that it’s going to get easier and easier, but so far the steps towards making HTTPS easier are just vapourware. That makes Mozilla’s plan look like something drafted by underwear gnomes.

The issue here is timing. Let’s make HTTPS easy first. Then we can start to talk about ways of encouraging adoption. Hopefully we can figure out a way that doesn’t require Mozilla or Google as gatekeepers.

Sven Slootweg outlines the problems with Mozilla’s forced SSL. I highly recommend reading Yoav’s post on deprecating HTTP too. Ben Klemens has written about HTTPS: the end of an era …that era being the one in which anyone could make a website without having to ask permission from an app store, a certificate authority, or a browser manufacturer.

On the other hand, Eric Mill wrote We’re Deprecating HTTP And It’s Going To Be Okay. It makes for an extremely infuriating read because it outlines all the ways in which HTTPS is a good thing (all of which I agree with) without once addressing the issue at hand—a browser that deliberately cripples its feature set for political reasons.

100 words 034

It was a busy week with lots of commuting up and down to London, so I’ve been looking forward to a weekend of unwinding.

Jessica and I like to spend our Saturday afternoons doing our shopping for the weekend, planning out some nice leisurely meals. Today we went down to the Open Market, a recently-renovated collection of stalls under one roof selling local produce and goods. The market is also home to an excellent Greek café, where we had lunch.

I guess it’s part of the gentrification of the London Road area. If this is gentrification, bring it on.

Hope

Cennydd points to an article by Ev Williams about the pendulum swing between open and closed technology stacks, and how that pendulum doesn’t always swing back towards openness. Cennydd writes:

We often hear the idea that “open platforms always win in the end”. I’d like that: the implicit values of the web speak to my own. But I don’t see clear evidence of this inevitable supremacy, only beliefs and proclamations.

It’s true. I catch myself saying things like “I believe the open web will win out.” Statements like that worry my inner empiricist. Faith-based outlooks scare me, and rightly so. I like being able to back up my claims with data.

Only time will tell what data emerges about the eventual fate of the web, open or closed. But we can look to previous technologies and draw comparisons. That’s exactly what Tim Wu did in his book The Master Switch and Jonathan Zittrain did in The Future Of The Internet—And How To Stop It. Both make for uncomfortable reading because they challenge my belief. Wu points to radio and television as examples of systems that began as egalitarian decentralised tools that became locked down over time in ever-constricting cycles. Cennydd adds:

I’d argue this becomes something of a one-way valve: once systems become closed, profit potential tends to grow, and profit is a heavy entropy to reverse.

Of course there is always the possibility that this time is different. It may well be that fundamental architectural decisions in the design of the internet and the workings of the web mean that this particular technology has an inherent bias towards openness. There is some data to support this (and it’s an appealing thought), but again, only time will tell. For now it’s just one more supposition.

The real question—when confronted with uncomfortable ideas that challenge what you’d like to believe is true—is what do you do about it? Do you look for evidence to support your beliefs or do you discard your beliefs entirely? That second option looks like the most logical course of action, and it’s certainly one that I would endorse if there were proven facts to be acknowledged (like gravity, evolution, or vaccination). But I worry about mistaking an argument that is still being discussed for an argument that has already been decided.

When I wrote about the dangers of apparently self-evident truisms, I said:

These statements aren’t true. But they are repeated so often, as if they were truisms, that we run the risk of believing them and thus, fulfilling their promise.

That’s my fear. Only time will tell whether the closed or open forces will win the battle for the soul of the internet. But if we believe that centralised, proprietary, capitalistic forces are inherently unstoppable, then our belief will help make them so.

I hope that openness will prevail. Hope sounds like such a wishy-washy word, like “faith” or “belief”, but it carries with it a seed of resistance. Hope, faith, and belief all carry connotations of optimism, but where faith and belief sound passive, even downright complacent, hope carries the promise of action.

Margaret Atwood was asked about the futility of having hope in the face of climate change. She responded:

If we abandon hope, we’re cooked. If we rely on nothing but hope, we’re cooked. So I would say judicious hope is necessary.

Judicious hope. I like that. It feels like a good phrase to balance empiricism with optimism; data with faith.

The alternative is to give up. And if we give up too soon, we bring into being the very endgame we feared.

Cennydd finishes:

Ultimately, I vote for whichever technology most enriches humanity. If that’s the web, great. A closed OS? Sure, so long as it’s a fair value exchange, genuinely beneficial to company and user alike.

This is where we differ. Today’s fair value exchange is tomorrow’s monopoly, just as today’s revolutionary is tomorrow’s tyrant. I will fight against that future.

To side with whatever’s best for the end user sounds like an eminently sensible metric to judge a technology. But I’ve written before about where that mindset can lead us. I can easily imagine Asimov’s three laws of robotics rewritten to reflect the ethos of user-centred design, especially that first and most important principle:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

…rephrased as:

A product or interface may not injure a user or, through inaction, allow a user to come to harm.

Whether the technology driving the system behind that interface is open or closed doesn’t come into it. What matters is the interaction.

But in his later years Asimov revealed the zeroth law, overriding even the first:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

It may sound grandiose to apply this thinking to the trivial interfaces we’re building with today’s technologies, but I think it’s important to keep drilling down and asking uncomfortable questions (even if they challenge our beliefs).

That’s why I think openness matters. It isn’t enough to use whatever technology works right now to deliver the best user experience. If that short-term gain comes with a long-term price tag for our society, it’s not worth it.

I would much prefer an imperfect open system to a perfect proprietary one.

I have hope in an open web …judicious hope.

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at the Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals; the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. You built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your user’s every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and, if you had a link back to your own site, it would include a rel=“me” attribute. Not any more.
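
For anyone who never saw that markup in the wild, it really was that simple—a handful of class names and a rel attribute. Something along these lines (an illustrative example with placeholder values, not Twitter’s or Flickr’s actual markup):

    <!-- hCard: vcard, fn, and url are standard microformat class names -->
    <div class="vcard">
      <a class="fn url" href="https://example.com/janedoe">Jane Doe</a>
      <!-- XFN: rel="me" declares that the linked page belongs to the same person -->
      <a href="https://example.com/" rel="me">my website</a>
    </div>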

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
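
That interchangeability is baked into the format itself. Stripped to its essentials, just about every feed has the same shape (a minimal hand-rolled example, not any particular site’s feed):

    <?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0">
      <channel>
        <title>Example feed</title>
        <link>https://example.com/</link>
        <description>A list of items, the same as any other feed.</description>
        <item>
          <title>An item title</title>
          <link>https://example.com/items/1</link>
          <description>An optional summary.</description>
          <pubDate>Sat, 09 Jun 2007 12:00:00 GMT</pubDate>
        </item>
        <!-- …more items… -->
      </channel>
    </rss>

Write a parser for that and you’ve written a parser for millions of feeds.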

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only in the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

Anniversary

A funny thing happened when I was in Berlin two weekends ago. I was walking down the street that my AirBnB apartment was on when I heard someone say “Jeremy Keith?” It turned out it was Andre Jay Meissner, one of the founders of the excellent Open Device Lab website. We had emailed but never met before. Small world!

The Twitter account for the open device lab in Nuremberg pointed out that it’s been one year since I wrote a blog post about the open device lab I set up:

Much as I’d love to take credit for the idea of an open device lab, it simply isn’t true. Jason and Lyza had been working on setting up the open device lab in Portland for quite a while when I flung open the doors of the Clearleft test lab. But I will take credit for the “Ah, fuck it!” attitude that I introduced to the idea of sharing test devices with the community. Partly because I had seen how long it was taking the Portland device lab to get off the ground while they did everything by the book, I decided to just wait for the worst to happen instead of planning for it:

There are potential pitfalls to opening up a testing suite like this. What about the insurance? What about theft? What about breakage? But the thing about potential pitfalls is that they’re just that: potential. I’m treating all of them as YAGNI issues. I’ll address any problems if and when they occur rather than planning for worst-case scenarios.

It proved to be a great policy. So far, nothing has gone wrong. And it also served as an example to other people thinking about opening up device labs at their companies: “don’t sweat it; I didn’t!”

But as far as anniversaries go, the one-year birthday of the Clearleft device lab is not the most significant event of April 30th. Today is the twentieth anniversary of the publication of one of the most important documents in technological history: the document that officially put the World Wide Web into the public domain.

Open device labs are a small, small part of working on the web but I like to think that they represent the same kind of spirit of openness and sharing that Tim Berners-Lee and his colleagues demonstrated at CERN:

I really, really like the way that communal device labs have taken off. It’s like a physical manifestation of the sharing and openness that has imbued the practice of web design and development right from the start. View source, mailing lists, blog posts, Stack Overflow, and Github are made of bits; device labs are made of atoms. But they are all open for you to use and contribute to.

At UX London I had dinner with a Swiss entrepreneur who was showing off his proprietary native app on his phone and proudly declaring that he had been granted a patent. He seemed like a nice chap, but his attitude kind of made my skin crawl. It seemed so antithetical to the spirit of sharing and openness that I’m used to from the web.

James Gleick once described the web as the patent that never was:

Tim Berners-Lee invented the Web and the Web browser — that is, the world as we now know it — pretty much single-handedly, starting in about 1989, when he was working as a software engineer at CERN, the particle-physics laboratory in Geneva. He didn’t patent it, or any part of it. On the contrary, he has labored tirelessly to keep cyberspace open and nonproprietary.

We are all reaping the benefits of Sir Tim’s kindness and generosity.

Command lines

Here’s a nifty piece of in-browser behaviour…

Fire up Chrome and in the address field type “huffduffer.com” followed by a space and BOOM! …the address field transforms into a site-specific search field: “Search Huffduffer.”

That’s thanks to this XML file that’s been on Huffduffer since day one. It’s the most straightforward example of the OpenSearch format.
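
If you’ve never looked at an OpenSearch description document, there’s really not much to it (this is an illustrative sketch—the search URL template here is made up, but the elements and namespace are standard OpenSearch):

    <?xml version="1.0" encoding="UTF-8"?>
    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
      <ShortName>Huffduffer</ShortName>
      <Description>Search Huffduffer</Description>
      <!-- the browser substitutes the user's query for {searchTerms} -->
      <Url type="text/html" template="https://huffduffer.com/search?q={searchTerms}"/>
    </OpenSearchDescription>

All that’s needed then is a link element with rel=“search” and type=“application/opensearchdescription+xml” in the head of each page, pointing at that file.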

It’s the same format that powers Firefox’s search providers. If you visit Huffduffer with Firefox and click in the browser’s search field, you’ll see the option to “Add Huffduffer” to the list of search engines.

I can’t imagine many people will actually do that but still, no harm in providing the option.

Another option that’s been added to Huffduffer recently (thanks to Andy’s hacking) is the ability to access the API through YQL. Feel free to open up the console and have a play around.

Scandinavian sojourn

I’ve been on a little trip to Copenhagen. Usually when I go to Denmark, it’s for Reboot but alas, there is no Reboot this year. Instead, I was there for Drupalcon.

I have to admit, it was quite a surprise to be asked to speak at a Drupal event. After all, I don’t use the Drupal framework. To be fair, I don’t use any framework—though I did dabble with Django once. Clearleft is a backend-agnostic company: we do UX, IA, front-end, but we’ve deliberately avoided committing to one particular server-side solution.

Anyway, I was kinda nervous about addressing a large group of programmers devoted to a PHP framework that I’m not that familiar with. I needn’t have worried. Everyone was incredibly welcoming and I got a very warm reception.

I had been asked along to speak about HTML5 but rather than just run through a whole bunch of features in the spec, I thought it would be more interesting to talk about why features have been added to HTML5. So I concentrated on the design principles driving the development of the specification.

I’m pretty pleased with how it turned out. The whole thing was streamed live and it’s all been recorded and posted online.

The Drupal community is clearly very vibrant: the 1000+ people gathered in Copenhagen were very enthusiastic about their chosen platform. That said, I did sense some frustration from the theming community—it isn’t always the easiest to change the markup and CSS that’s output by Drupal. This is something that Dries acknowledged in his keynote and people like Jen Simmons are fighting the good fight to improve Drupal’s front-end output.

The Drupal community also know how to party. This was the first conference I’ve been to that had its own beer; the rather excellent Awesomesauce from the world-renowned Mikkeller.

All in all, it was a thoroughly enjoyable experience. And I had enough time before my flight back to Blighty to nip across to Malmö in Sweden, where Emil showed me the sights.

Then it was time to catch a ludicrously comfy train across a remarkable bridge, past a stunning wind farm to the snazzily-designed airport to catch my flight home.

97 Bottles

Almost two years ago, I was having a drink with Keith in The Alembic Bar on my last night in San Francisco:

Keith buys me a beer from the local microbrewery. I opt for an amber ale while he plumps for an IPA. After the first sips, we compare tasting notes and once again speculate about a beer-oriented version of Cork’d.

Well, like Tom says, everyone’s got ideas. It’s following through that counts. So that’s exactly what Blue Flavor have done. They built 97 Bottles:

97 bottles is a totally free, new service that lets you review, recommend, and learn about all kinds of beers. Use it to keep track of your favorite brews, find drinking buddies, and more.

It’s currently in private beta or, as they put it in their nicely-crafted copy, in the cask fermenting. But here’s the twist: if you have an OpenID you can sign in with that without waiting to be added to the list of beta testers. That’s smart. As Jeff put it:

It’s not a bug, it’s a feature. It’s OpenID evangelism!

So head on over and sign in with your OpenID. I’ve been kicking the tyres, adding some new beers to the site and reviewing existing ones. It’s a well-crafted social network: fun and useful.

All it needs is a little sprinkling of microformats.

Open Tech 2008

Open Tech was fun. It was like a more structured version of BarCamp: the schedule was planned in advance and there was a nominal entrance fee of £5 but apart from that, it was pretty much OpenCamp. Most of the talks were twenty minutes long, grouped into hour-long thematically linked trilogies.

Things kicked off with a three way attack by Kim Plowright, Simon Wardley and Matt Webb. I particularly enjoyed Matt’s stroll down the memory lane of the birth of cybernetics. Alas, the fact that I stayed to enjoy this history lesson meant that I missed David Hayes’s introduction to Edenbee. But I did stick around for the next set of environment-related talks including a demo of the Wattson from DIY Kyoto and the always-excellent Gavin Starks of AMEE fame.

After a pub lunch spent being entertained by Ewan Spence’s thoroughly researched plan for a muppet remake of Star Wars, I made it back in time for a well-connected burst of talks from Simon, Gavin and Paul. Simon pimped OpenID. Gavin delivered a healthy dose of perspective from the h’internet. Paul ranted about the technologies depicted in his wonderful illustration entitled The Web is Agreement.

I made sure to catch the state of the nation address from Open Street Map. It was, as expected, inspiring. It’s quite amazing how far the project has come since the last Open Tech in 2005. Hearing about the wealth of data available gave me the kick up the arse to update the dConstruct location page to use Open Street Map tiles. It turned out to be a simple process involving the addition of just a few more lines of JavaScript.

My talk at Open Tech was a reprise of my XTech presentation, Creating Portable Social Networks With Microformats although the title on the schedule was Publishing With Microformats. I figured that the Open Tech audience would be fairly advanced so I decided against my original plan of doing an introductory level talk. The social network portability angle also tied in with quite a few other talks on the day.

I shared my slot with Jeni Tennison who gave a hands-on look at RDFa at the London Gazette. The two talks complemented each other well… just like microformats and RDFa. As Jeni said, microformats are great for doing the easy stuff—the low-hanging fruit—and deliberately avoid more complex data structures: they hit 80% of the use cases with 20% of the effort. RDFa, on the other hand, can handle greater complexity but with a higher learning curve. RDFa covers the other 20% of use cases but with 80% effort. Jeni’s case study was the perfect example. Whereas I had been showing the simple patterns of user profiles and relationships on social networks (easily encoded with hCard and XFN), she was dealing with a very specific data set that required its own ontology.

I was chatting with Dan at the start of Open Tech about this relationship. We’re both pretty fed up with the technologies being set up as somehow being rivals. Personally, I’m very happy that RDFa covers the kind of data structures that microformats doesn’t touch. When someone comes to the microformats community with an idea for a complex data format, it’s handy to have another technology to point them to. If you’re dealing with simple, common structures that have aggregate benefit like contact details, events and reviews, microformats are the perfect fit. But if you’re dealing with more complex structures—and I’m thinking here about museum collections, libraries and laboratories—chances are that some flavour of RDF is going to be more suitable.

Jeni and I briefly discussed whether we should set up our talks as a kind of mock battle. But that kind of rivalry, even when it’s done in a jokey fashion, is unnecessary and frankly, more than a little bit dispiriting. It’s more constructive to talk about real-world use cases. On that basis, I think our Open Tech presentations hit the right note.

Open Tech schedule

I’ll be heading up to the University of London tomorrow for Open Tech 2008. The last Open Tech was in 2005 which was, by all accounts, a legendary affair—it led directly to the creation of the ORG.

I’ll be speaking about microformats, probably reworking some of the things I was talking about at XTech. It looks like there’ll be quite a lot of discussion around social networks, portability and privacy so I’m going to concentrate on XFN and hCard. Speaking of which, be sure to read Ben’s excellent article on Digital Web and then check out David’s superb implementation of the Social Graph API: what a productive pair of flatmates!

I put together an hCalendar schedule for Open Tech so if you’re going along, you might want to subscribe. I recommend subscribing over downloading as the schedule is likely to change. I’ll do my best to update the hCalendar document accordingly. Depending on the WiFi situation and how knackered I am after the early start from Brighton, I may try to do some liveblogging.
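
In case you’re wondering what publishing an hCalendar schedule involves: each event is just ordinary HTML with a few extra class names—something like this (an illustrative example; the date, time, and room details here are placeholders):

    <div class="vevent">
      <h3 class="summary">Publishing With Microformats</h3>
      <!-- the abbr design pattern: human-readable text, machine-readable title -->
      <abbr class="dtstart" title="2008-07-05T14:00">2pm</abbr>,
      <span class="location">University of London</span>
    </div>

Tools can then transform that markup into an iCalendar file that any calendar app can subscribe to.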

The Copenhagen Report

Reboot 10 was everything I thought it would be: chaotic, stimulating, frustrating and fun. It’s an odd conference, pitched somewhere between TED and a BarCamp, carried off with a distinctly European flair.

The speakers delivered the goods on a wide range of subject matter. Howard Rheingold was as thought-provoking and interesting as you would expect, Jyri shared his thoughts on social interaction online and I thoroughly enjoyed listening to David Weinberger riffing on and . With at least three tracks of simultaneous talks at any one time, and with plenty of catching up to do in the corridor, I didn’t get to see all the talks but a superb round of micro-presentations gave me the opportunity to get the quick versions of talks I missed.

My presentation seemed to go down fairly well although I thought I was just rambling on. Maybe the fact that I was accompanying myself on mandolin meant that the audience was more forgiving. I didn’t really have slides, just a few hyperlinked documents to tie the narrative together.

The theme of this year’s Reboot was Free. Fittingly, my presentation resulted in my receiving two free gifts. Michael Rose, a local piano player, gave me a CD on which he accompanies a series of Irish tunes. Nikolai—who was introducing the speakers and taking care of the sound in the room where I was presenting—was reminded by my mention of Lawrence Lessig that he had boxes full of The Future of Ideas that were originally destined for the Danish parliament. They were distributed amongst the attendees of Reboot instead.

Rebooting

I’m off to Copenhagen for Reboot 10. Reboot is always a fun gathering. It might not be the most useful event but as part of a balanced conference diet, it’s got a unique European flavour.

As usual, I’m going to use the opportunity to talk about something a bit different to my usual web development spiels. This time I’ll be talking about The Transmission of Tradition, a subject I’ve already road-tested at BarCamp London 3:

This talk will look at the past, present and future of transmitting traditional Irish music from the dance to the digital, punctuated with some examples of the tunes. This will serve as a starting point for a discussion of ideas such as the public domain, copyright and the emergence of a reputation economy on the Web.

At the very least, it will give me a chance to debut that mandolin I picked up in Nashville.

This will be my third Reboot. My previous talks were:

I recently discovered the video of that last presentation. Jessica was kind enough to transcribe the whole thing. She also transcribed my talk from this year’s XTech. Go ahead and read through them if you have the time.

If you don’t have the time, you can always mark them for later reading using Instapaper. I love that app. It does one simple little thing but does it really well. Hit a bookmarklet labelled “read later” and you’re done.

Here’s a little sampling of documents I’ve marked for later reading:

Maybe I should fire them up in multiple tabs and read them on the flight to Denmark. Or I could spend the time brushing up on my Danish.

If you’re headed to Reboot, I’ll see you there. Otherwise …Farvel!