Why you should have a Web Site (and other Web 3.0 issues)
This presentation by Steven Pemberton increases in value over time.
Steven Pemberton's talk from XTech 2008 in Dublin is becoming more relevant with each passing day as yet another service shuts down; Pownce, Ficlets, Stikkit...
A presentation from XTech 2008 in Dublin.
Hello. Fáilte romhabh go léir.
I’m going to talk about, supposedly, creating portable social networks with microformats. It’s a very grandiose title for something that’s actually very, very simple.
Hands up: who was in David Recordon’s keynote yesterday? He had about a two-minute segment there where he showed himself on his own blog having rel="me"
in links, and then he showed the OpenSocial thing. That’s pretty much it, really. So, he did it in two minutes and I’ve been given 45. We’ll go into a little more detail, but there really isn’t that much to it. It’s pretty straightforward.
So what this is about is all these different social networks—and there are quite a few of them now. I’m on quite a few social networks myself, and I want to see which ones you guys are on as well. So, a show of hands for any of these:
Pownce, anybody on Pownce? Okay, one or two. Magnolia? Magnolia, a few—like del.icio.us, but with more of the whole social aspect going on. Anybody know a site called Edenbee? No? Social network site for environmental stuff. It’s based out of Dublin, which is why I thought I’d mention it. Upcoming, for events—who uses Upcoming? Okay, quite a few. Last.fm? Music? Okay, good, good, lots of Last.fm-ers. Twitter. Oh, I’m surprised not to see every single hand in the room go up for Twitter. But how about Flickr? Okay, most people are using Flickr.
Alright, so, we’ve got a lot of different social networks here, and on each one, you have to sign up, and you have to enter your details when you sign up—and then you have to go and find your friends on each one, and you have to say, “Yes, I know that person, yes, I want to share my photos with that person, yes, I want to share my music with that person.” And by the fourth or fifth social network, it gets a little bit tiring having to go through all this, right? I see some nodding heads—yes, yes, we’re fed up with this.
So, this is one angle of the whole idea of portable social networks: this idea of social network fatigue. But, to be honest, that’s not really such a big problem. It’s kind of something that’s going to affect those of us who use a lot of these social networks, but maybe we’re sort of canaries in the coalmine, and this isn’t really an issue that’s going to affect the average user.
But it’s about much more than that. This isn’t really about the whole social network fatigue thing. It’s about being able to move freely. It’s freedom of travel, the idea that you should be able to seamlessly move between these social networks and not have to deal with the hassles of having to fill in another form or go through another process of finding all your contacts.
And when I talk about social network portability, I’m not talking about, “I’m leaving this social network, I’m taking all my friends with me, and I’m going over to this new social network that everybody’s raving about.” It’s not that kind of portability. It’s more about ease of movement. It’s the idea that I can have as many social networks as I want, and the ease of getting into it and setting it up is nice and simple, and it’s not complicated. So that’s the reason why there’s a lot of different people, very smart people, trying to tackle this problem. It’s really a question of interoperability more than portability.
The ways that people have tried to tackle this problem of trying to make it easy to sign up to a site and then to get all your contacts listed on that site… Well, something that people have been doing for a while is to ask you for your user name and your password—for instance, from a webmail client you might use, like Google Mail or Yahoo Mail or Hotmail.
…Short hiatus while Jeremy is rickrolled by Aral…
Aral just rickrolled me, the bastard!
Audience member: That was antisocial networking.
Jeremy: Yeah. It’s always me! Is everybody familiar with the concept of rickrolling? Okay. Sorry, I just got live-rickrolled, and I think it’s going to be on the Internets in a few minutes.
Where was I? Oh yes, user name and password for third-party sites. So you sign up to a new social network and it says, “Do you use Google Mail, Gmail? Do you use Yahoo Mail? Well then, why don’t you just give us your user name and your password.” Now, crucially here, they’re asking for your Gmail password not on Gmail, they’re asking for it on example.com. They’re asking on the latest social network site that’s probably disemvoweled and ends with the letter “r” without the letter “e” attached.
This is just wrong. This is something that gets referred to as the password anti-pattern, because essentially what you’re doing here is you’re teaching users how to be phished. You’re telling them that it’s okay for you to throw around your password on any site. So instead of saying, “Only ever enter your Gmail password on Gmail,” this is saying, “It’s okay to throw around your Gmail password willy-nilly.” And in the long term, this is really, really harmful.
And it’s just really, really insecure. How do you trust this site? How do you know it’s not going to spam all your friends? How do you know it’s not going to go into your inbox and use that account, which is the same account you use for Google Checkout, if you use Google Checkout?
To get around that, what people have been doing is using APIs, which is a much more secure way of dealing with this issue, where you can authenticate, give authorization to a site like, say, Gmail or Yahoo Mail or whatever. This combination of using an API together with some kind of authentication is much safer and much more secure, and this is why you see this combination of some sort of API and some sort of authentication, like OAuth, or like BBAuth, or AuthSub—there’s all these different kinds of authentication things.
Now, for instance, with Gmail you can do this; there’s a Google Contacts API. So in this case, you say, “Hey, do you have a Gmail account?” They say, “Yes.” “Well, okay, click this button to start importing friends.” And you get sent off to the Google domain—and there you might have to input your user name and password, but that’s an order of magnitude better than doing it on a third-party site. The flow works much better.
OAuth is aimed at making this flow work the same for all these different social networks and sites so that you don’t have to write a different API call for every different social network. And it’s pretty much based on the Flickr model. If you’ve ever used some desktop application that uses the Flickr API, you know that first you have to authorize it, and that involves going to the Flickr site and saying, yes, I give permission to allow this application to look at my photos, or maybe I give it permission to also upload photos. You set the permissions. So this is good. This whole combination of some sort of authentication, like OAuth, and some sort of API works really, really well.
But it is quite complex. That’s not a bad thing for us; we’re all pretty smart developers, and we can implement this kind of stuff. But there is a certain barrier to entry with getting this stuff done. It’s not something you can do overnight.
I would say this is generally the best way if you are trying to get, say, email addresses out of an address book. Whether that really defines who your friends are is another question again. I don’t know how it is for you, but for me, email is no longer really the defining factor of whether somebody is my friend, somebody I know. I’m very good friends with a bunch of people and I don’t even know their email addresses. I might know them on Twitter and Flickr and Last.fm and all these other places, and if I stopped and thought about it and I wanted to write them an email, I’d think, actually, I don’t even know their email addresses. So if you think of email as this way of getting at contacts, then this way works pretty well. But like I said, I’m not even sure that an email address is still an identifier for having a friend.
There’s this other way which kind of complements all these other methods like APIs, which is to use microformats. Microformats are not very full-featured, not a very complex way of transferring information or storing data. It’s really all about being lazy, frankly. There’s a couple of microformats principles, and they’re all pretty much based on being really quite lazy.
Hands up if you’re familiar with microformats. Okay, good, that’s good. I was kind of assuming that I wouldn’t have to go into much detail. But just to give a quick overview of the philosophy behind microformats, and this philosophy of laziness: They’re built on the idea of reusing. At all costs, avoid reinventing the wheel. If somebody’s already solved the problem and there’s some kind of standard out there, steal it. Just take it verbatim, use it. And they’re deliberately simple, they deliberately don’t try and solve every problem. “Avoid boiling the ocean” is one of the principles.
There’s this idea, the Pareto principle—which comes from economics, the Italian economist Pareto—also called the 80/20 principle. I think what he noticed was that 80 percent of the wealth was with 20 percent of the population. These numbers show up in quite a few different places, and the Pareto principle applies to microformats in the sense that if we can hit 80 percent of the use cases with 20 percent of the effort, that’s good enough. Because as soon as you get into that extra 20 percent, the edge cases, the effort required to cover those edge cases gets exponential. So whereas other formats will aim to hit 100 percent of the possible use cases—that you should be able to encode absolutely any possible, conceivable scenario—the formats tend to get kind of complex, because the effort required to design a format that can cover all those scenarios increases exponentially.
With microformats, the idea is, “You know what? There are some cases where this just won’t work.” There’s an event microformat, for instance—hCalendar—and you might have a situation where you say, “Aha, but what about this kind of event, where it starts here and it’s in a leap year on a full moon, and how does the hCalendar microformat cope with that?” Well, it doesn’t. Basically, we’re not even going to try. You’re falling into that 20 percent edge-case stuff, and you’ll need a different format.
Again, it’s kind of being lazy. There’s this whole idea—I think it was Joel Spolsky who said that good programmers were lazy programmers because you just always look for the simplest solution, and that’s the sign of a good programmer. That’s definitely the philosophy behind microformats. It’s like, “Aww, you mean we have to work on this stuff? Can’t we just steal somebody else’s work?” Because that’s what we do: just reuse, borrow, steal, build on existing formats. That’s a key thing: to not try and come up with something brand new, to always build on what’s already out there.
Crucially, as well, the way to decide what needs formatting, what should be a microformat and what shouldn’t, is to look at what people are publishing. Not what we think people should be publishing—we’re not trying to encourage people to publish this kind of data or that kind of data. It’s much more about what kind of data people are publishing anyway that could do with being formatted a bit better.
If you look at the language that people are publishing in, the existing language, that is very much HTML on the World Wide Web. It’s far more popular. Theoretically, you can put a Word document on the Web, a PDF, you can put all sorts of things on the Web. But HTML is the lingua franca of the Web because it’s a good, simple markup language.
There’s this idea, because we’re publishing these microformats in the HTML—and HTML on the World Wide Web is kind of like a RESTful interface: you’ve got a URL, and at that URL there’s a Web document which happens to be HTML rather than XML or JSON, but it’s still an addressable document—that your website could be your API. Could you point at a URL and say, “Extract this data”? And to a certain extent, you can with microformats, though it’s very much a read-only API, whereas with a fully-featured API you can read and you can write.
With microformats, it’s kind of possible to build a super-simple, almost dumb, read-only API. Because if you look at what’s being published inside the HTML, you’ll find there’s a bunch of different stuff. On a social network site, there will be a page that has my details. So that data is already being published. This isn’t something we need to encourage social network sites to publish; they’re publishing it already. Who my contacts are on a social network site—that’s already there, that’s already in the markup at a publicly available URL. And then the details of those contacts are also being published in the same way that my details are.
So each one of these is being published, but if they’re just being published in regular HTML, there’s some semantic fidelity being lost because there are no elements in HTML to say this is a human being and this is that human being’s contact. We’ve got a nice set of elements in HTML, but they don’t provide quite that level of semantic fidelity.
Let’s go through these. For my details, somewhere on a social network site—how do we identify a person? To do this, we have the hCard microformat, which is essentially an HTML version of vCard. vCard is the address book format that you’ll find on your desktop in Outlook Express, you’ll find it on your mobile phone in your address book there, you’ll find it anywhere where addresses are used. So that whole idea of being lazy and reusing existing formats very much applied to the creation of hCard, where, when we were deciding what values should be in hCard, it was every single value that’s in vCard—and that’s it, we’re done. Game over.
Let’s say on a social network site, I’ve got a link like this that’s linking off to my profile. This social network site, example.com, has got a profile page for me. My user name on this site is adactio, because my user name is adactio on pretty much every social networking site. I can wrap that with some kind of containing element—in this case it’s a span, but it could be any element, we don’t mandate what elements you need to use—and this is going to be an hCard. The reason why it says vCard rather than hCard is because, as I said, we took all the values from vCard verbatim. Because vCard begins with the root element of “vCard”, it’s always called vCard in hCard. And I say, “This is the formatted name and the nickname, and this is the URL.”
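As a rough sketch of the kind of markup being described (example.com and the exact elements here are just for illustration):

<!-- the containing element carries the hCard root class name, vcard -->
<span class="vcard">
  <!-- one link acts as formatted name, nickname and URL all at once -->
  <a class="fn nickname url" href="http://example.com/adactio">adactio</a>
</span>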
There’s actually a whole lot of optimization going on here, because technically in vCard, you must have a formatted name—or “name” is the only required value, I think. There are all these optimization rules, like, if you see “fn” and “nickname” together, that means, “This is the nickname and it’s the formatted name because we don’t have the full name.” And for the URL, it knows to look in the href rather than in the text between the opening and closing tags.
There are just a couple of classes there, and, of course, the great thing about the “class” attribute is the fact that you can space-separate values. And I could be throwing on my own classes there and that would be absolutely fine; I could be putting in non-hCard values: fn, nickname, url, user name, profile—any of my own class names are absolutely fine.
What this does is that it basically says: this is a human being, these are the contact details for a person, and this is that person’s nickname, and this is a URL that represents that person—just with the addition of a few class names.
This is something that I think people sometimes forget with hCard. If you’re familiar with hCard, the usual way it’s sold or pushed is that, well, it’s like vCard, so you put up your contact details and then people can translate it to vCard. There are plug-ins for Firefox, there’s a Technorati service that will convert hCard to vCard. So people wonder sometimes: what’s the point of putting up an hCard that’s only got this much information? Because here you’ll see that there’s no email address, there’s no telephone number, there’s no street address—none of these kinds of real contact details are in there. All this really does is identify one piece of string on a page as being a person as opposed to some other piece of string which is just text.
So, there is value in using really, really simple hCards. If you converted this to a vCard, it wouldn’t be much use. You’d have that in your address book, but you couldn’t call me, you couldn’t email me—it’s not much use. But there is still value in identifying this piece of text as being a human being.
What we’re doing here is essentially making up for the fact that there is no “person” element in HTML. The elements I’ve had to use are things like “span” or “a”—like I say, use whatever elements you want. But the great thing about HTML is that we are provided with our own semantic building blocks, like the “class” attribute, which, as the spec says, is for general-purpose processing by user agents. So it’s there for you to add your own semantics. It kind of got co-opted by CSS for many years, where people thought that the only reason you ever used classes was for CSS. But that’s not true. There is a CSS class selector, but you can use classes as hooks for JavaScript, as hooks for whatever you want. It’s for adding your own semantic data.
On this page on this social network site, it’s linking off to a profile page that represents me. And if we go to that profile page, I’d probably have another hCard there. In this case, the element might be h1, say, because now we’re on the page that is about this person. Like I say, you can use whatever element you want. And I’m saying, this is the URL, and it’s linking off to my other URL, my homepage, my blog or whatever. And here I’m saying “fn”, formatted name—here we’ve actually got my full name rather than just my user name, so there’s more information being added here. And this is the kind of stuff that a lot of social networks are publishing anyway.
So, this is useful; we’re getting a lot of different bits of data about me pulled together. But what’s really nice is to tie together the fact that, okay, this page we’re on, on example.com, this represents me, this is a page about me—but this other site as well, adactio.com, that’s also a URL about me. It’s a URL representing a person. So, by adding one value, rel="me"
, we make that explicit. We say, that URL is me also.
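Put together, the profile page might carry something along these lines (a sketch, not the exact markup from the slides):

<h1 class="vcard">
  <!-- fn is the full name; rel="me" says the linked URL also represents this person -->
  <a class="fn url" rel="me" href="http://adactio.com/">Jeremy Keith</a>
</h1>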
So, the rel attribute you’re probably mostly familiar with from using in the link
element at the top of your pages when you link off to a stylesheet; you’ll say link rel="stylesheet"
. And what that is doing is saying, this document I’m linking off to is a stylesheet for the current document. So it’s the relationship—the relationship of the linked document is “stylesheet”. Here, what we’re saying is the relationship of the linked document—the value in the href—is “me”. There’s a relationship being established.
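Side by side, the parallel looks something like this (the stylesheet filename is invented for illustration):

<!-- the linked document is a stylesheet for this document -->
<link rel="stylesheet" href="/css/global.css">
<!-- the linked URL represents the same person as this document -->
<a rel="me" href="http://adactio.com/">adactio.com</a>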
Audience question: Why do you need the URL in the class anymore?
Jeremy Keith: What’s happening here is the fn and the URL are for the hCard, and this rel="me" is part of XFN. So now what we’ve done is that actually we’ve got two different microformats going on. So: fn, URL, hCard, rel="me"
, XFN. And that’s one of the other nice things about microformats: you can mash them up. You can have an hCard and an hCalendar and XFN. They lend themselves well to being built Lego-style and put together like this. That explains why they’re all there: there’s two different microformats here.
So, XFN is this other microformat. I say it’s a microformat, but it’s actually kind of a proto-microformat in that it existed before microformats did. Microformats have to go through this whole Process before you have a finished microformat. It takes a while and, like I say, it has to fulfil all these criteria, it has to solve a real problem, it has to be using stuff that people are already publishing.
XFN didn’t do any of that. XFN was kind of born fully-featured by a bunch of people—Tantek Çelik, Eric Meyer, Matt Mullenweg—who got together and said, okay, we want to have a whole bunch of values for defining relationships between people. They did base it on what people were publishing, because around that time—this was a good few years ago—blogs were taking off and getting very popular, and what you’d have a lot of in blogs was a sidebar with a blogroll. It’s always Americans who call it blogroll. Nobody over here seems to call it blogroll because I think it sounds too much like bogroll. There it sounds like logroll, but here it sounds like bogroll.
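A typical blogroll link from that era, marked up with XFN, might have looked something like this (the URL and the relationship values are just an example):

<!-- rel describes my relationship to the person behind the linked site -->
<a href="http://example.org/" rel="friend met colleague">Somebody’s weblog</a>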
Anyway, what that means is you’ve now got something for representing the idea of contacts, which was the second part of what we were talking about—the kind of data that’s already being used on social networks.
Let’s say that on my profile page, I might have a list—an unordered list, an ordered list, whatever—of people who are contacts of me. I’ll mark them up as hCards as well—might as well, there’s no harm in there—but I will also add an XFN value, in this case “contact”. So here the relationship is the linked resource has the relationship of being a contact of the current resource.
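Something along these lines (the names and URLs are made up for illustration):

<ul>
  <!-- each item is a minimal hCard; rel="contact" is the XFN value -->
  <li class="vcard"><a class="fn url" rel="contact" href="http://example.com/brian">Brian</a></li>
  <li class="vcard"><a class="fn url" rel="contact" href="http://example.com/norm">Norm</a></li>
</ul>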
XFN actually has a whole bunch of values—13, 14 values I think—but really, contact is enough. This is something that Chris Messina talked about recently after we had a panel discussion at South by Southwest, that instead of trying to convince people that you’ve gotta use XFN, you’ve gotta use XFN, it was like, well, actually, you’ve gotta just put these few characters into a link, rel="contact"
. That’s actually enough.
There’s a whole bunch of XFN values like friend, sweetheart, crush, co-worker, colleague, all these things. But I think for social networks, you don’t need them. I think on a personal site, it’s still good to have that. When I’m writing a blog post and I link off to someone, I like to have the ability to say I’ve met that person and I consider them a friend and they work in the same industry, so they’re a colleague. I think that level of fidelity is nice to have on personal publishing sites, but for social networks it can actually be really dangerous to programmatically try and say, “You are friends with that person.” That’s something that a person should be deciding for themselves. So I kind of agree with Chris that, at least in the case of social networks, rel="contact"
is plenty. That’s enough.
If you have a social network that’s, say, a dating site, then maybe you would want to use rel="crush"
or rel="sweetheart"
. But that’s kind of an edge case. Or if you have a professional site like LinkedIn, then there’s a use case for rel="colleague"
, rel="co-worker"
, these kinds of things. But, mostly, rel="contact"
.
So, essentially, out of XFN, two really, really useful values are rel="me"
and rel="contact"
. And that’s it. There are a whole bunch of other values, but you don’t have to worry about it, I would say. For the purpose of building a social network, at least, don’t worry about them.
I’ve probably got a whole bunch of friends like this, and I’m linking to all of them: this is a contact, this person is a contact, and they’d all be marked up as hCards and all that. What happens a lot of the time is that I’ll have so many friends ‘cause I’m so popular that it’ll go on to more than one page. It’ll paginate. This is how it works on Flickr and on Twitter; you’ve got a page full of twenty contacts, click “Next” to see the next twenty and the next twenty and the next twenty. What you can do there is that you can semantically attach some more data here to say, okay, we’re continuing on. So this is another page representing me, and we use the rel="prev"
and rel="next"
values that have actually been baked into HTML for quite a while. They’d be used more in the “link” element, but there’s no reason why you can’t use them in the “a” element as well. So this works really well for pagination. This is another page representing me, the next page, and the previous page representing me as well.
So, again, this rel="me"
thing is very handy. And, like the “class” attribute, the “rel” attribute can take multiple values, space-separated. Very, very useful. So there are a few little things in HTML that are incredibly powerful. Incredibly powerful.
Finally, you’ve got their details. And that’s really just the same as what you did for your own details. In the same way as I had my page that was marked up with hCard and XFN, any one of my contacts—in this case, my friend Brian—he’s got his own profile page, and that’s got his hCard on it, and it’s linking off to his URL, and that’s tied up using rel="me"
. So, if every one of my friends has something like this, that’s quite a large network of friends.
And that’s pretty much it as regards what you need to do to add that extra bit of semantic goodness in there, to make it clear that this is a person, this is a contact of that person, here’s another page of contacts of that person. It’s just using hCard and XFN. Not even a full hCard, a very simple hCard, and not even many XFN values. Two values we’re talking about here, really: “me”, “contact”. “Prev” and “next” aren’t even XFN values, they’re just values. And we’re talking about two attributes of HTML—not even elements, attributes. Not new, custom attributes that we’ve invented; these are attributes that have been in HTML for quite a while.
It isn’t a replacement for an API. I’m not saying you don’t have to worry about making an API. But this makes a nice complement: the fact that, until you get around to building that API, why don’t you just add a few class and rel values and you’ve got a nice, simple, read-only way of allowing people to get at that data.
And generally I would also say: allow people to get at your data in as many formats as possible. So if you are building an API, of course you’ll export in XML, say, but you’ll want to export in JSON as well because developers want to have that format. Well, why not do this as well, so that if they want to get at it in this hCard or XFN format, they have that option too. Basically, you can’t have too many data formats as far as developers are concerned. Give them anything you possibly can.
As regards publishing this stuff, it is that simple. It is that two-minute thing that David showed in his keynote yesterday: rel="me"
, rel="contact"
, some hCard values where applicable.
What about parsing this stuff, though? What if you want to consume this information from a social network that is publishing XFN and hCard? That’s tricky, because that’s hard work. Parsing can be tricky, parsing an HTML page—you’d usually have to run it through something like HTML Tidy to get it all cleaned up so you can then parse it as a well-formed document, because most HTML is actually pretty messy out there. Then what’s even harder than parsing is spidering, having to follow all those links, all those rel="next"
, all those rel="contact"
. Following all of those, programmatically, that’s actually a lot of work to do.
Fortunately, Google have taken care of it for us. This is kind of exactly the thing that Google are good at, because to do this spidering and parsing, you kind of need to have a copy of the World Wide Web stored somewhere on your servers—which they happen to have, which is really handy. There’s a video of Brad Fitzpatrick talking about how they did this, and it’s basically, “Well, we took all the links on the World Wide Web and threw away all the ones that didn’t have rel values.” But that’s pretty much what they did.
So, we can use what they’ve done. They’ve provided an API, the Social Graph API, which, again, David was demonstrating yesterday. And this will spider rel values—it’ll spider rel="me"
, it’ll spider rel="contact"
, it’ll spider all that stuff, it’ll spider the previous and next values—and it will send back a JSON file of what it finds. You can also use this to parse FOAF files, and it will follow all the FOAF files. What it does is it actually takes the FOAF files and converts them to XFN, so it always ends up as XFN right before the end anyway.
Let’s say you’ve got a new social network site, and you’re trying to get sign-ups, you’re trying to get people to join your site, and you want to make it as easy as possible. Give them the option to provide a URL somewhere else, and then use this Social Graph API to spider that URL and look for contacts, and look for other URLs that represent this person, and look for contacts over there. And then you’ve got to try and do some matching and figure out if you’ve found a contact for this person. This isn’t a Boolean thing, this isn’t a one/zero kind of thing where you can say, “Oh, that person on that social network is the same person as this person over here on this social network.” We don’t have any kind of unique identifier for this, but you can use some fuzzy logic here, I think.
If you’ve got a formatted name, which is the fn name in hCard, and it’s the same value over here as it is over here, that’s a pretty good chance. Not 100 percent, as I say, there are edge cases. But if there’s a Jeremy Keith on that site and there’s a Jeremy Keith on your site, it could well be the same person.
You’ve got the nickname, because a lot of people use the same user names on a lot of different sites, so that’s something to check out. If you get a combination of the two, that’s looking really good. And actually, URL is something that people identify with a lot, so when you allow people to put in a URL on a website—it’s usually their blog or something—and if you find a match for that, that’s looking pretty good.
None of these are 100 percent, absolute certainty things to say, “Yes, that is definitely the same person as that person over there.” But with a combination of matching this kind of stuff, you could probably make a pretty good guess.
What you don’t get is email, generally, because this isn’t something that’s generally published on the public Web because of the spam we’ve had to deal with for years. So most social networking sites, on your profile page, they will not display your email. And rightly so; that’s information you wouldn’t necessarily want published. So when it comes to parsing this stuff and matching values, you don’t have access to email, usually, whereas you would if you were doing the whole address book API thing.
But I think you can get by pretty well with what you’ve got. And email is often used as a unique identifier, but I’m not even sure that that’s a safe bet to make.
So, what I’m saying here is, basically, you don’t get 100 percent accuracy, you get maybe, about, 80 percent. So we’re kind of back to this 80/20 principle. You’ll probably get about 80 percent of the people where you say, “I think they’re pretty much the same person,” and you might get some edge cases that fall through and it doesn’t match them up, or you get false matches and say, “Ah, that person isn’t actually the same person.” But that’s pretty much it as regards parsing.
To anticipate and pre-empt some of the questions you might have, questions that get asked a lot include: what about trust? How can you believe the assertion that this person is that same person over there on another social network? I’m signing up to your social network and it says, “Give me a URL,” and I put in a URL. How do you know that’s really my URL? I could be putting in your URL or your URL. How do we trust this input?
Well, we can’t. Or at least, that’s not something that microformats can solve. But this question of trust is something that’s true of everything on the Web. How do you trust any piece of information you read on the Web? And it’s different for every person. You can try and solve this programmatically—you’ve got secure certificates and things like that—but generally, this is actually a problem of just publishing on the Web anywhere.
Some people will read a Wikipedia article and they will trust that source because it is on Wikipedia. Other people will read a Wikipedia article and they will not trust that source because it is on Wikipedia. What I’m saying is that it’s different for every person. So you can’t make any assertions about the veracity of a claim just using the format. However, there are other technologies out there that do aim to solve this. So, OpenID aims to solve this one, to say you can authenticate the fact that that person claiming to reside at that URL really does reside at that URL.
By mashing up microformats for the formatting and OpenID for the authentication, then you can be pretty secure. But the whole idea of trust and authentication is not something to be solved at the formatting level. That is something to be solved at the protocol level.
Now, what about walled gardens? Because everything I’ve been talking about has been publicly published information, and I would say that this whole idea of hCard and XFN does work best on public websites. If you are running a walled garden like Facebook—Facebook does not allow much in the way of public access to profile information or friends lists, contact lists, all that kind of stuff—what do you do in that case? Well, I still think there’s no harm in publishing hCard and XFN because it only takes a few seconds to edit a template and put in those values. But, of course, the Social Graph API or other spiders can’t get to that information because you’ve locked it up in a walled garden behind a user name and password.
Again, some kind of mashup with an authentication layer like OpenID or SSL or password authentication would allow access. But generally, for walled gardens, you’re probably going to be relying on a more complex solution to get at that data: an API together with something like OAuth, or we saw the Open Social building-new-widgets thing and doing all that. But generally, for walled gardens, things get more complex. It’s just the way it is. For stuff that’s public, it’s generally straightforward.
So, who is publishing this stuff on the public Web? Well, every single one of those sites that I asked you about earlier on, every single one of those sites is publishing hCard and XFN. And you could plug one of your profiles on those sites into the Social Graph API and get back a whole list of contacts that you could then suck into another website. So there are a lot of people publishing.
Who’s parsing this stuff? Not many. Surprisingly few, considering how relatively straightforward it is now using the Social Graph API. Although the Social Graph API has only been around since the end of February, start of March. So, okay, that’s not very long. It’s not that surprising. Dopplr is doing some nice stuff, where you enter a URL and it tries to find people who are already on Dopplr who are on some social network site over there. Get Satisfaction does something interesting; it doesn’t have a friends list, a contact list or anything like that, but it does have a single-field sign-up which is really nice. The first question—you can fill in the whole form, saying what’s your name, what’s your email address, all this kind of stuff, or you can fill in a form over here which is one field, which is: What’s your URL? And if that URL is encoded as an hCard, it will suck out the hCard data. But it’s not yet doing the whole friends list stuff.
So there is room for some innovation here, for you people to get in there and start doing this stuff and get a leap. So the future is looking good. Like I said, this idea of one form field for sign-up. And actually, David mentioned this yesterday, and I didn’t even realize that Plaxo and Pulse were doing this, that it asks you for an OpenID URL, a URL that represents your OpenID, and while it’s got that URL, it says, “You know what? I’ll just run this through the Social Graph API and see if I find any rel links that I can match against people who are on my social network and see if you’ve got any friends here.”
So it’s this idea that I can come to a new social network site, and instead of filling out a long form with my contact details and then going through the whole process of saying that these are my friends, I could at least try at the start to say, look, here’s my URL, you do the work. Don’t make me do the work. You go find who my friends are, you go find my contact information. Again, you won’t get 100 percent accuracy, but you’d get maybe 80 percent and you’d do pretty well.
So what you can do, certainly, is start publishing this stuff. The parsing, like I said, is pretty hard, but certainly publish this stuff, because it’s so easy to add these rel values and it’s so easy to add these class names.
But I would say that the real challenge here—because from a technological point of view, it’s really simple—the real challenge is in design. How do you design this stuff to flow nicely, how do you design it to not be creepy? That you make this flow nice… Some of the things would be: don’t use jargon. Don’t ever mention microformats or hCard or XFN or any of that stuff. People don’t care, and nor should they. They shouldn’t know what the technology is.
And don’t make assumptions. You’ve got an 80 percent match, you think that person is the same person as this person, we’ll make them friends on my network—don’t assume that. Always allow people to explicitly make that connection. So you might provide a list of names with checkboxes and allow people to check or uncheck those names. Don’t assume anything like that.
Should you notify people? When you’ve stored their network from over here, and now when a friend of theirs from over there joins a week later, do you let that person know? Do you send out an email saying, “Hey, your buddy from Flickr just joined our new social networking site!” Or is that creepy? These are kind of design challenges.
And what about allowing people to subscribe to it: rather than a one-time import, store that URL. So you say, “Do you have a URL on some other social network?” “Yeah, here’s my Flickr URL.” Instead of throwing that away once you’ve spidered it for XFN values, hold on to it, and every couple of days, spider it again and see if there’s any new people? In other words, allowing people to subscribe to a contact list somewhere else rather than just import. Again, design challenges, trying not to be creepy, that can be tough.
What I would say is that the technological side of things is super simple. It really is just a couple of attributes. The real challenge here is design. Big “D” design thinking sort of stuff. But that’s why we have designers. They’re going to help us solve this.
So, my website is adactio.com, I’ll post these slides up there and blog about this later. And microformats.org is where you can go to learn more about microformats, but you pretty much got everything you need from that little session there.
Thank you very much.
I enjoyed being back in Ireland. Jessica and I arrived into Dublin last Saturday but went straight from the airport to the train station so that we could spend the weekend in my hometown seeing family and friends. Said town was somewhat overwhelmed by the arrival of one of the largest cruise ships in the world.
We were back in Dublin in plenty of time for the start of this year’s XTech conference. A good time was had by the übergeeks gathered in the salubrious surroundings of a newly-opened hotel in the heart of Ireland’s capital. This was my third XTech and it had much the same feel as the previous two I’ve attended: very techy but nice and cosy. In some ways it resembles a BarCamp (but with a heftier price tag). The talks are held in fairly intimate rooms that lend themselves well to participation and discussion.
I didn’t try to attend every talk — an impossible task anyway given the triple-track nature of the schedule — but I did my damndest to liveblog the talks I did attend:
There were a number of emergent themes around social networks and portability. There was plenty of SemWeb stuff which finally seems to be moving from the theoretical to the practical. And the importance of XMPP, first impressed upon me at the Social Graph Foo Camp, was once again made clear.
Amongst all these high-level technical talks, I gave a presentation that was ludicrously simple and simplistic: Creating Portable Social Networks with Microformats. To be honest, I could have delivered the talk in 60 seconds: Add rel="me" to these links, add rel="contact" to those links, and that’s it.
If you’re interested, you can download a PDF of the presentation including notes.
I made an attempt to record my talk using Audio Hijack. It seems to have worked okay so I’ll set about getting that audio file transcribed. The audio includes an unusual gap at around the four minute mark, just as I was hitting my stride. This was the point when Aral came into the room and very gravely told me that he needed me to come out into the corridor for an important message. I feared the worst. I was almost relieved when I was confronted by a group of geeks who proceeded to break into song. You can guess what the song was.
Ian caught the whole thing on video. Why does this keep happening to me?
Simon's slides and demos from his half-day workshop at XTech.
Sean McGrath is delivering the closing keynote at XTech 2008. Sean would like to reach inside and mess with our heads today. He plans to modify our brain structures, talking about the movable Web.
Even though Sean has been doing tech stuff for a long time he freely admits that he doesn’t know what the Web is. He quotes Dylan:
I was so much older then, I’m so much younger now.
Algorithms + Data Structures = Programs is a book by Niklaus Wirth from 1978. Anyone remember Pascal? Sean went to college here at Trinity in 1983 doing four years of computer science, which is where he came across that book.
Computing is all about language …human language. People first, machines second. Information is really about words, not numbers. Words give the numbers context.
Sean used to sit in his student bedsit and think about what algorithms actually are. He was also around at the birth of SGML in 1985. More words, then. Then he got involved in the creation of XML …even more words. Then the Web came along. HTML is, yup, more words. Even JavaScript is words. His epiphany was realising that HTTP was about sending words across the wire. The Web is fundamentally words.
There’s a Bob Dylan documentary called Eat The Document. Sean took this as a sign from God …or at least from Dylan.
Sean explains Ogham stones — horizontal lines from top to bottom. The Book of Ballymote is the Rosetta Stone of Ogham writing. The translation on this particular stone is If I were you, I would not stand here.
The Irish have been using words for a long time. They’ve also been hacking for a long time. Dolmens are an example of neolithic hacking.
Illuminated documents demonstrate the long Irish history of writing unit test cases for Cascading Style Sheets. A common thread in books from the Book Of Ballymote up to the Annals of the Four Masters was that they were from a religious background. Joyce came along with the world’s first hypertext novel, Finnegans Wake. Sean goes from Yeats to Shane MacGowan, quoting Summer In Siam as a sublime piece of Zen metaphysics:
When it’s Summer in Siam then all I really know is that I truly am in the Summer in Siam.
The Irish will even go to war over words. Copyright was a big bone of contention between St. Finnian and his student St. Columba in the 6th Century. St. Columba ran a proto-Pirate Bay. If you saw him coming, you’d bury your books. There was a war between St. Finnian and St. Columba in which 3,000 people lost their lives. Finally, the High King of Ireland said As to every cow its calf, so to every book its copy
, the first official statement on copyright. But because books were actually written on cows (vellum), the statement is ambiguous.
Here’s a picture. Nobody in the room knows what it is. We haven’t had our brains rewired yet.
Sean loves the simplicity of the idea that computing is words. Sadly, it’s just not true. There are plenty of images and video on the Web.
Back to that picture. It’s a cow. One person in the room sees the cow.
Sean likes the idea of the Web as electronic Ogham stones. But he sought the 2nd path to Web enlightenment. He realised that not only is the Web not just all words, the Web doesn’t exist at all.
What is the true nature of the words on the Web? Here’s something Sean created called Finite State Machines for a mobile app called Mission Control that generated documents based on the user, the device, the location and the network. There were no persistent documents. No words, just evaporation
as Leonard Cohen said.
There are three models for the world.
Model A exists within Model B which exists within Model C. Model C is the general case. If you have a system that is that dynamic, you could generate Model B and therefore Model A. Look at the way our sites have evolved over time. We used to create Model A websites. Then we switched over to Model B with Web Standards. Now we’re at Model C — we’re not going to create any actual content at all. There is no content but there is also an infinite amount of content at the same time. We generate a tailor-made document for each user but we don’t hold on to that document, we throw it away. So what content actually exists on the Web?
PHP, Django, Rails, Google App Engine …on the Web, Model C wins. It’s even starting to happen on the client side with Ajax, Silverlight and Air. It’s spooky sometimes to view source and see no actual content, just JavaScript to generate the content.
Doing everything dynamically is fine as long as it scales. It’s better to solve the problems of scalability than to revert to the static model. The benefits of Model C are just so much greater than those of Model A.
Amazon are making great services but they are rubbish at naming things, like Mechanical Turk.
So where are all the words? HTTP still delivers words to me but they are generated on the fly. The programs that generate them are hidden.
The Web is becoming a Web of silos. As the Web becomes more dynamic, it’s harder for the little guy to compete (behind me I hear Simon grumble something about Moore’s Law). So we build silos on the client side; so-called Rich Internet Applications. We’re losing URIs.
Model C is Turing complete, user-sensitive, location-sensitive and device-sensitive. It’s scalable if it’s designed right. It’s commercially viable if it’s deployed right.
But we lose hypertext and deep linking as we know it. Perhaps we will lose search. Will the Googlebot download that JavaScript and eval
it to spider it? URIs have emergent properties because they can be bookmarked, tagged and mashed up. We are also losing simplicity: simply surfing documents.
So is it worth it?
Mu. That means I reject the premise of the question.
We have no choice. We are heading towards Model C whether we want to or not. That’s bad for the librarians such as the Orangutan librarian from Discworld. Read Borges’s The Garden of Forking Paths. Sean recommends reading Borges first and Pratchett second — it just doesn’t work the other way around. Now Sean mentions Borges and John Wilkins — Jesus, this is just like my Hypertext talk at Reboot! Everyone has a good laugh about taxonomies. Model C makes it possible to build the library of Babel — every possible book that is 401 pages long. But the library of Babel is, in Standish’s view, useless. He says that a library is not useful for the books it contains but for the books that it doesn’t contain — the rubbish has been filtered out. How will we filter out the rubbish on a Model C Web?
Information content is inversely related to probability, said Claude Shannon. George Dyson figured out that the library of Babel would be between a googol and a googolplex of books.
Nothing that Sean has seen this week at XTech has rocked his belief that we are marching towards Model C. Our content is going into the cloud, despite what Steven Pemberton would wish for.
When Sean first started using the Web, you had static documents and you had a cgi-bin. Now we generate our documents dynamically. We are at an interesting crossroads right now between Joycean documents and Turing applications. Is there a middle way, a steady-state model? Sean doesn’t think so because he now believes that the Web doesn’t actually exist. The Web is really just HTTP. The value of URIs is that we can name things. It’s still important that we use URIs wisely.
Perhaps HTML is trying to be too clever, to anthropomorphise it. Perhaps HTML, in trying to balance documents and applications, is a jack of all trades and a master of none.
Sean now understands what Fielding was talking about. There is no such thing as a document. All there is is HTTP. Dan Connolly has a URI for his Volkswagen Beetle because it’s on the Web. Sean is now at peace, understanding the real value of HTTP + URIs.
Now Sean will rewire our brains by showing us the cow in the picture. Once we see the cow, we cannot unsee it.
The enigmatic Steven Pemberton is at XTech to tell us Why you should have a Web site: it’s the law! (and other Web 3.0 issues). God, I hope he’s using Web 3.0 ironically.
Steven has heard many predictions in his time: that we will never have LCD screens, that digital photography could never replace film, etc. But the one he wants to talk about is Moore’s Law. People have been saying that it hasn’t got long to go since 1977. Steven is going to assume that Moore’s Law is not going to go away in his lifetime.
In the 1980s the most powerful computers were the Crays. People used to say One day we will all have a Cray on our desk.
In fact most laptops are about 120 Craysworth and mobile phones are about 35 Craysworth.
There is actually an LED correlation to Moore’s Law (brighter and cheaper, faster). Steven predicts that within our lifetime all lighting will be LEDs.
Bandwidth follows a similar trend. Jakob Nielsen likes to claim this law; that bandwidth will double every year. In fact the timescale is closer to 10.5 months.
Following on from Moore’s and Nielsen’s laws, there’s Metcalfe’s Law: the value of a network is proportional to the square of the number of nodes. This is why it’s really good that there is only one email network and bad that there are so many instant messenger networks.
Let’s define the term Web 2.0 using Tim O’Reilly’s definition: sites that gain value by their users adding data to them. Note that these kinds of sites existed before the term was coined. There are some dangers to Web 2.0. When you contribute data to a web site, you are locking yourself in. You are making a commitment just like when you commit to a data format. This was actually one of the justifications for XML — data portability. But there are no standard ways of getting your data out of one Web 2.0 site and into another. What if you want to move your photos from one website to another? How do you choose which social networking sites to commit to? What about when a Web 2.0 site dies? This happened with MP3.com and Stage6. Or what about if your account gets closed down? There are documented cases of people whose Google accounts were hacked so those accounts were subsequently shut down — they lost all their data.
These are examples of Metcalfe’s law in action. What should really happen is that you keep all your data on your website and then aggregators can distribute it across the Web. Most people won’t want to write all the angle brackets but software should enable you to do this.
What do we need to realize this vision? First and foremost, we need machine-readable pages so that aggregators can identify and extract data. They can then create the added value by joining up all the data that is spread across the whole Web. Steven now pimps RDFa. It’s like microformats but it will invalidate your markup.
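Roughly speaking, RDFa lets you write something like this (a sketch of mine using the FOAF vocabulary, not an example from Steven’s slides):

<!-- typeof and property are RDFa attributes; the foaf prefix points at the FOAF vocabulary -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/" about="#me" typeof="foaf:Person">
  <span property="foaf:name">Jeremy Keith</span>
  <a rel="foaf:homepage" href="http://adactio.com/">adactio.com</a>
</div>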
Once you have machine-readable semantics, a browser can do a lot more with the data. If a browser can identify something as an event, it can offer to add it to your calendar, show it on a map, look up flights and so on. (At this point, I really have to wonder… why do the RDFa examples always involve contact details or events? These are the very things that are more easily solved with microformats. If the whole point of RDFa is that it’s more extensible than microformats, then show some examples of that instead of showing examples that apply equally well to hCalendar or hCard)
So rather than putting all your data on other people’s Web sites, put all your data on your Web site and then you get the full Metcalfe value. But where can you store all this stuff? Steven is rather charmed by routers that double up as web servers, complete with FTP. For a personal site, you don’t need that much power and bandwidth. In any case, just look at all the power and bandwidth we do have.
To summarise, Web 2.0 is damaging to the Web. It divides the Web into topical sub-webs. With machine-readable pages, we don’t need those separate sites. We can reclaim our data and still get the value. Web 3.0 sites will aggregate your data (Oh God, he is using the term unironically).
Questions? Hell, yeah!
Kellan kicks off. Flickr is one of the world’s largest providers of RDFa. He also maintains his own site. Even he had to deal with open source software that got abandoned; he had to hack to ensure that his data survived. How do we stop that happening? Steven says we need agreed data formats like RDFa. So, Kellan says, first we have to decide on formats, then we have to build the software and then we have to build the aggregators? Yes, says Steven.
Dan says that Web 2.0 sites like Flickr add the social value that you just don’t get from building a site yourself. Steven points to MP3.com as a counter-example. Okay, says Dan, there are bad sites. Simon interjects, didn’t Flickr build their API to provide reassurance to people that they could get their data out?
Not quite, says Kellan, it was created so that they could build the site in the first place.
Someone says they are having trouble envisioning Steven’s vision. Steven says I’m not saying there won’t be a Flickr
— they’ll just be based on aggregation.
Someone else says that far from being worried about losing their data on Flickr, they use Flickr for backup. They can suck down their data at regular intervals (having written a script on hearing of the Microsoft bid on Yahoo). But what Flickr owns is the URI space.
Gavin Starks asks about the metrics of energy usage increases. No, it drops, says Steven.
Ian says that Steven hit on a bug in social websites: people never read the terms of service. If we encouraged best practices in EULAs we could avoid worst-case scenarios.
Someone else says that our focusing on Flickr is missing the point of Steven’s presentation.
Someone else agrees. The issue here is where the normative copy of your data exists. So instead of the normative copy living on Flickr, it lives on your own server. Flickr can still have a copy though. Steven nods his head. He says that the point is that it should be easy to move data around.
Time’s up. That was certainly a provocative and contentious talk for this crowd.
It’s time for my second Gavin of the day at XTech. Gavin Bell asks Data portability for whom?
To start with, we’ve got a bunch of great technologies like OpenID and OAuth that we’re using to build an infrastructure of openness and portability but right now, these technologies don’t interoperate very cleanly. Getting a show of hands, everyone here knows of OpenID and OAuth and almost everyone here has an OpenID and uses it every week.
But we’re the alpha geeks. We forget how ahead of the curve we are. Think of RSS. We imagine it’s a widely-accepted technology but most people don’t know what it is. That doesn’t matter though as long as they are using RSS readers and subscribing to content; people don’t need to know what the underlying technology is.
Clay Shirky talked about cognitive surplus recently. We should try to tap into that cognitive surplus as Wikipedia has done. Time for some psychology.
Cognitive psychology as a field is about the same age as the study of artificial intelligence. A core tool is something called a schema, a model of understanding of the world. For example, we have a schema for a restaurant. They tend to have tables, chairs, cutlery, waiters, menus. But there is room for variation. Chinese restaurants have chopsticks instead of knives and forks, for example. We have a schema for the Web involving documents that reside at URLs. Schema congruence is the degree to which our model of the world matches the ideal model of the world.
Schemas change and adapt. Our idea of what a mobile phone is, or is capable of, has changed in the last few years. Schemas teach us that gradual change is better than big bang changes. We need a certain level of stability. When you’re pushing the envelope and changing the mental model of how something can work, you still need to support the old mental model. A good example of mental model extension is the graceful way that Flickr added video support. However, because the change was quite sudden, a portion of people got very upset. Gradual change is less scary.
Cognitive dissonance, a phrase that is often misused, is the unfortunate tension that can result from holding two conflicting thoughts at the same time. On the web, the cognitive dissonance of seeing content outside its originating point is dissipating.
J.J. Gibson came up with the idea of affordances. Chairs afford sitting on. Cups afford liquid to be poured into them. When we’re using affordances, it’s important to stick to common convention. If, on a website, you use a plus sign to allow someone to add something to a cart, you shouldn’t use the same symbol later on to allow an image to be enlarged.
Flow is the immensely enjoyable state of being fully immersed in what you’re doing. This is like the WWILFing experience on Wikipedia. You get it on Flickr too. Now we’re getting flow with multiple sites as we move between del.icio.us and Dopplr and Twitter, etc. Previously we would have experienced cognitive dissonance. Now we’re pivoting.
B.F. Skinner did a lot of research into reinforcement. We are sometimes like rats and pigeons on the Web as we click the buttons in an expectation of change (refreshing RSS, email, etc.).
Experience vs. features …don’t be feature led. A single website is just one part of people’s interaction with one another. Here’s the obligatory iPod reference: they split the features up so that the bare minimum were on the device and the rest were put into the iTunes software.
We’ve all lost count of the number of social networks we’ve signed up to. That’s not true of — excuse me, Brian — regular people. Regular people won’t upgrade their browser for your website. Regular people won’t install a plug-in for their browser. We shouldn’t be trying to sell technologies like OpenID, we should be making the technology invisible.
Gavin uses Leslie’s design of the Satisfaction sign-up process as an example. She never mentions hCard. Nobody needs to know that.
We’re trying lots of different patterns and we often get it wrong. The evil password antipattern signup page on the Spokeo website is the classic example of getting it wrong.
We must remember the hinternet. Here’s a trite but true example: Gavin’s mum …she doesn’t have her own email address. She shares it with Gavin’s dad. According to most social network sites, they are one person. And be careful of exposing stuff publicly that people don’t expect. Also, are we being elitist with things like OpenID delegation, which only works for people who have their own web page and can edit it?
Our data might be portable but what about the context? If I can move a picture from Flickr but I can’t move the associated comments then what’s the point?
We’re getting very domain-centric. It would be great if everyone was issued with their own domain name. Most people don’t even think about buying a domain name. They might have a MySpace page or Facebook profile but that’s different.
Some things are getting better. People have stopped mentioning the http:// prefix. But many people don’t even see or care about your lovely URL structure. Anyway, with portable data, when you move something (like a blog post), you lose the lovely URL path.
Larry Tesler came up with the law of conservation of complexity. There is a certain basic level of complexity. We are starting to build this basic foundation with OpenID and OAuth — they could be like copy and paste on the desktop.
We built a Web for us, geeks, but we built it in a social way. We are discoverable. We live online. This lends itself well to smaller, narrower, tailored services like Dopplr for travel, Fire Eagle for location, AMEE for carbon emissions. But everything should integrate even better. Why can’t clicking “done” in Basecamp generate an invoice in Blinksale, for example? If they were desktop applications, we’d script something. Simon interjects that if they were open source, we would modify them. That’s what Gavin is agitating for. The boundaries are blurring. We have lots of applications both on and off the Web but they are all connected by the internet. People don’t care that much these days about what application they are currently using or who built it; it’s the experience that’s important.
Here’s something Gavin wants somebody to make: identity brokerage. This builds on his id6 idea from last year. That was about contact portability. Now he wants something to deal with all the invitations he gets from social networks. Now that we’ve got OpenID, why can’t we automate the acceptance or rejection of friend requests?
We are heading towards a distributed future. DiSo points the way. But let’s learn from RSS and make the technology invisible. We need to make sense of the Web for the people coming after us. That may sound elitist but Gavin doesn’t mean it to be.
Kellan asks if we can just change the schema. Gavin says we can but we should change it gradually.
Step-by-step reassurance is important. Get the details right. Magnolia is starting to get this right with its sign-in form which lists the services you can sign in through, rather than the technology (OpenID).
We are sharing content, not making friends. Dopplr gets this right by never using the word friend. Instead it lists people with whom you share your trips. The Pownce approach of creating sub-groups from a master list is close to how people really work.
Scaffolding and gradual change are important. As a child, we are told two apples plus three apples is five apples. Later we learn that two plus three equals five; the scaffolding is removed. We must first build the scaffolding but we can remove it later.
Gavin wraps up and even though the time is up, the discussion kicks off. Points and counterpoints are flying thick and fast. The main thrust of the discussion is whether we need to teach the people of the hinternet about the way things work or to hide all that stuff from them. There’s a feast of food for thought here.
Simon Batistoni is responsible for Flickr’s internationalisation and he’s going to share his knowledge here at XTech. Flickr is in a lucky position; its core content is pictures. Pictures of cute kittens are relatively universal.
We, especially the people at this conference, are becoming hyperconnected with lots of different ways of communicating. But we tend to forget that there is this brick wall that many of us never run into; we are divided.
In the beginning was the Babelfish. When some people think of translation, this is what they think of. We’ve all played the round-trip translation game, right? “Oh my, that’s a tasty salad” becomes “that’s my OH — this one is insalata of tasty pleasure”. It’s funny but you can actually trace the moment where “tasty” becomes “of tasty pleasure” (it’s “de buen gusto” in Spanish). Language is subtle.
It cannot really be encoded into rules. It evolves over time. Even 20 years ago, if you came into the office and said “I had a good weekend surfing”, it may have meant something different. Human beings can parse and disambiguate very well but machines can’t.
Apocryphal story alert. In 1945, the Japanese response to the terms of surrender used a word intended to convey “no comment”. But the news agencies translated it as “we ignore” and reported it as such. When this was picked up by the Allies, they interpreted it as a rejection of the terms of surrender and so an atomic bomb was dropped on Hiroshima.
Simon plugs The Language Instinct, that excellent Steven Pinker book. Pinker nails the idea of ungrammaticality — it’s essentially a gut instinct. This is why reading machine translations is uncomfortable. Luckily we have access to language processors that are far better than machines …human brains.
Here’s an example from Flickr’s groups feature. The goal was to provide a simple interface for group members to translate their own content: titles and descriptions. A group about abandoned trains and railways was originally Spanish but a week after internationalisation, the group exploded in size.
Here’s another example: 43 Things. The units of content are nice and succinct; “visit Paris”, “fall in love”, etc. So when you provide an interface for people to translate these granular bits, the whole thing snowballs.
Dopplr is another example. They have a “tips” feature. That unit of content is nice and small and so it’s relatively easy to internationalise. Because Dopplr is location-based, you could bubble up local knowledge.
So look out for some discrete chunks of content that you can allow the community to translate. But there’s no magic recipe because each site is different.
Google Translate is the great white hope of translation — a mixture of machine analysis on human translations. The interface allows you to see the original text and offers you the opportunity to correct translations. So it’s self-correcting by encouraging human intervention. If it actually works, it will be great.
Wait, they don’t love you like I love you… Maaa-aa-a-aa-aa-a-aa-aaps.
Maps are awesome, says Simon. Flickr places, created by Kellan who is sitting in front of me, is a great example of exposing the size and variation of the world. It’s kind of like the Dopplr Raumzeitgeist map. Both give you an exciting sense of the larger, international community that you are a part of. They open our minds. Twittervision is much the same; just look at this amazing multicultural world we live in.
Maps are one form of international communication. Gestures are similar. We can order beers in a foreign country by pointing. Careful about what assumptions you make about gestures though. The thumbs up gesture means something different in Corsica. There are perhaps six universal facial expressions. The game Phantasy Star Online allowed users to communicate using a limited range of facial expressions. You could also construct very basic sentences by using drop downs of verbs and nouns.
Simon says he just wants to provide a toolbox of things that we can think about.
Road signs are quite universal. The roots of this communication stretch back years. In a way, they have rudimentary verbs: yellow triangles (“be careful of”), red circles (“don’t”).
Star ratings have become quite ubiquitous. Music is universal so why does Apple segment the star rating portion of reviews between different nationality stores? People they come together, people they fall apart, no one can stop us now ‘cause we are all made of stars.
To summarise:
Grab the slides of this talk at hitherto.net/talks.
It’s question time and I ask whether there’s a danger in internationalisation of thinking about language in a binary way. Most people don’t have a single language, they have a hierarchy of languages that they speak to a greater or lesser degree of fluency. Why not allow people to set a preference of language hierarchy? Simon says that Flickr don’t allow that kind of preference setting but they do something simpler; so if you are on a group page and it isn’t available in your language of choice, it will default to the language of that group. Also, Kellan points out, there’s a link at the bottom of each page to take you to different language versions. Crucially, that link will take you to a different version of the current page you’re on, not take you back to the front of the site. Some sites get this wrong and it really pisses Jessica off.
Someone asks about the percentage of users who are from a non-English speaking country but who speak English. I jump in to warn of thinking about speaking English in such a binary way — there are different levels of fluency. Simon also warns about taking a culturally imperialist attitude to developing applications.
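(A quick aside from me: HTTP already expresses exactly this kind of language hierarchy in the Accept-Language header, with quality values. Here’s a rough, simplified sketch of honouring it — not anything Flickr actually does, just an illustration of the “hierarchy of languages” idea.)

```python
# Simplified sketch: pick the best available translation from an
# Accept-Language header like "ga, en-IE;q=0.8, en;q=0.6".
def preferred_language(accept_language, available):
    prefs = []
    for part in accept_language.split(","):
        bits = part.strip().split(";q=")
        lang = bits[0].strip().lower()
        q = float(bits[1]) if len(bits) > 1 else 1.0
        prefs.append((q, lang))
    for _, lang in sorted(prefs, reverse=True):   # highest preference first
        if lang in available:
            return lang
        base = lang.split("-")[0]                 # fall back from en-IE to en
        if base in available:
            return base
    return None

print(preferred_language("ga, en-IE;q=0.8, en;q=0.6", {"en", "es"}))  # "en"
```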
There are more questions but I’m too busy getting involved with the discussion to write everything down here. Great talk; great discussion.
Gavin Starks, the man behind AMEE — the Avoiding Mass Extinction Engine — is back at XTech this year. The service was launched at XTech in Paris last year.
Data providers have been added in the last year, including the Irish government. There’s also a bunch of new sources that are data mined. There are plenty of consumers too, including Google and change.ie from the Irish government. It’s cool to have countries on board. Here’s Edenbee. Yay! Gavin really likes it. The Carbon Account is another great one. But Gavin’s favourite is probably the Dopplr integration.
AMEE is tracking 850,000 carbon footprints now. That’s all happened in 12 months. There are over 500 organisations and individuals using AMEE. That’s over 500 calls to Gavin’s mobile number which he made available on the website.
Gavin describes AMEE as a neutral aggregation platform. The data is provided by agencies that can license or syndicate their data. This data is then used by developers who can build products and services on top of it. So AMEE is, by design, commercially enabling to third parties.
Gavin says they are trying to catalyse change. They want to create a standard for measuring carbon emissions. To a large extent, they’ve achieved that. Even though there are lots of different data providers, AMEE provides a single point of measurement. The vision is to measure the CO2 emissions of everything. That’s a non-trivial task so they’ve concentrated solely on doing that one thing.
AMEE has profiles for your carbon identity and your energy identity but both are deliberately kept separate. The algorithms for energy measurement might change (for example, how carbon emissions from flights are measured) but your carbon identity should remain constant. This separation allows for real data portability e.g. integrating your Dopplr account with your Edenbee account. AMEE takes care of tracking energy but they don’t care about who you are: everything is anonymous and abstracted. It’s up to you as a developer of social apps to take care of establishing identity. There’s a lot of potential here, kind of like Fire Eagle; a service that concentrates on doing one single thing really well.
They’re partnering on tracking technology. For example, tracking Blackberries and using the speed of travel to guess what mode of transport you are using at any one time.
AMEE has a RESTful API that returns XML and JSON. They also provide more complicated, Enterprise-y stuff to please the Java people.
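I didn’t note the exact endpoints, but consuming a RESTful, JSON-speaking API like this would look roughly like the sketch below — the host, path and credentials are placeholders, not AMEE’s real ones.

```python
# Hedged sketch of talking to a RESTful API in the AMEE style.
# The host, path and credentials below are placeholders, not the real API.
import requests

API_BASE = "https://api.example-amee.com"      # placeholder host
AUTH = ("your-username", "your-api-key")       # placeholder credentials

# Ask for JSON rather than XML via content negotiation.
response = requests.get(
    f"{API_BASE}/profiles/my-profile-id",      # placeholder path
    auth=AUTH,
    headers={"Accept": "application/json"},
)
response.raise_for_status()
print(response.json())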
There are different pricing models. Media companies pay more than other companies. Charities pay nothing.
What’s next? AMEE version 2; making it easier for people to engage with the service. In the long term, let’s go after all the products that exist. Someone has that data in a spreadsheet somewhere — let us get at it.
Why do all this? Why do you think? Does anybody really need to be convinced about climate change at this stage? There will always be debate in science but even senior conservative scientists are coming out and saying that they may have underestimated the impact of carbon emissions. If a level of 450ppm continues long enough (and that’s the level we’re aiming for), that’s a sea rise of up to 75 metres. That’s an extinction-level event. We might well be fucked but as Stephen Fry says:
Doing nothing risks everything and gains comparatively little; doing something risks comparatively little and gains the whole world.
Here’s where AMEE comes in: if we can measure and visualise energy consumption change, that will drive social change. In the long term we will have to completely re-engineer our lifestyles and re-invent the power grid. Shut down power stations, shut down oil platforms, reduce all travel …measure and visualise all of it.
We don’t just need change; we need a systematic redesign of the future. We could start with the political language we use. Instead of using the word “consumer” with its positive connotations, let’s say “waster” which is more accurate.
What will you build? www.amee.cc
I skipped a lot of the afternoon presentations at XTech to spend some time in the Dublin sunshine. I came back to attend Blaine’s presentation on The Real Time Web only to find that Blaine and Maureen didn’t make it over to Ireland because of visa technicalities. That’s a shame. But Matt is stepping into the breach. He has taken Blaine’s slides and assembled a panel with Seth and Rabble from Fire Eagle to answer the questions raised by Blaine.
Matt poses the first question …what is the real-time Web? Rabble says that HTTP lets us load data but isn’t so good at realtime two-way interaction. Seth concurs. With HTTP you have to poll “has anything changed? has anything changed? has anything changed?” As Rabble says, this doesn’t scale very well. With Jabber there is only one connection request and one response and that response is sent when something has changed.
What’s wrong with HTTP, Comet or SMTP? Seth says that SMTP has no verifiable authentication and there’s no consistent API. Rabble says that pinging with HTTP has timeout problems. Seth says that Comet is a nice hack (or family of hacks, as Matt says) but it doesn’t scale. “Bollocks!” says Simon, “Meebo!”
Jabber has a lot of confusing documentation. What’s the state of play for the modern programmer? Rabble dives in. Jabber is just streaming XML documents and the specs tell you what to expect in that stream. Jabber addressing looks a lot like emails. Seth explains the federation aspect. Jabber servers authenticate with each other. The payload, like with email, is a message, explains Rabble. Apart from the basic body of the message, you can include other things like attachments. Seth points out that you can get presence information like whether a mobile device is on roaming. You can subscribe to Jabber nodes so that you receive notifications of change from that node. Matt makes the observation that at this point we’re talking about a lot more than just delivering documents.
So we can send and receive messages from either end, says Matt. There’s a sense of a “roster”: end points that you can send and receive data from. That sounds fine for IM but what happens when you apply this to applications? Twitter and Dopplr can both be operated from a chat client. Matt says that this is a great way to structure an API.
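(To make that concrete for myself: here’s a minimal sketch of a bot sitting behind a Jabber ID, the kind of thing that lets a service be driven from a chat client. It’s my example using the slixmpp library, not anything the panel showed; the JID and password are placeholders.)

```python
# Minimal XMPP bot sketch using slixmpp (placeholder JID and password).
# It connects, announces presence, fetches its roster, and replies to messages.
from slixmpp import ClientXMPP

class EchoBot(ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    async def on_start(self, event):
        self.send_presence()            # tell contacts we're online
        await self.get_roster()         # the "roster" Matt mentions

    def on_message(self, msg):
        if msg["type"] in ("chat", "normal"):
            msg.reply("You said: %s" % msg["body"]).send()

bot = EchoBot("bot@example.com", "secret")
bot.connect()
bot.process(forever=True)
```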
Rabble says that everything old is new again. Twitter, the poster child of the new Web, is applying the concept of IRC channels.
Matt asks Rabble to explain how this works with Fire Eagle. Rabble says that Fire Eagle is a fairly simple app but even a simple HTTP client will ping it a lot because they want to get updated location data quickly. With a subscribable end point that represents a user, you get a relatively real-time update of someone’s location.
What about state? The persistence of state in IM is what allows conversations. What are the gotchas of dealing with state?
Well, says Seth, you don’t have a consistent API. Rabble says there is SOAP over XMPP …the room chuckles. The biggest gotcha, says Seth, is XMPP’s heritage as a chat server. You will have a lot of connections.
Chat clients are good interfaces for humans. Twitter goes further and sends back the human-readable message but also a machine-readable description of the message. Are there design challenges in building this kind of thing?
Rabble says the first thing is to always include a body, the human-readable message. Then you can overload that with plenty of data formats; all the usual suspects. Geo Atom in Fire Eagle, for example.
Matt asks them to explain PubSub. It’s Publish/Subscribe, says Seth. Rather than a one-to-one interaction, you send a PubSub request to a particular node and then get back a stream of updates. In Twitter, for example, you can get a stream of the public timeline by subscribing to a node. Rabble mentions Ralph and Blaine’s Twitter/Jaiku bridge that they hacked together during one night at Social Graph Foo Camp. Seth says you can also filter the streams. Matt points out that this is what Tweetscan does now. They used to ping a lot but now they just subscribe. Rabble wonders if we can handle all of this activity. There’s just so much stuff coming back. With RSS we have tricks like “last modified” timestamps and etags but it would be so much easier if every blog had a subscribable node.
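(Here’s roughly what that subscription looks like in code — again my own hedged sketch using slixmpp’s publish-subscribe plugin, with placeholder server, node and account details.)

```python
# Hedged sketch: subscribing to an XMPP publish/subscribe node with slixmpp.
# Server, node and account details are placeholders; the event name is my
# assumption based on the plugin's pubsub notifications.
from slixmpp import ClientXMPP

class TimelineSubscriber(ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.register_plugin("xep_0060")   # Publish-Subscribe plugin
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("pubsub_publish", self.on_update)

    async def on_start(self, event):
        self.send_presence()
        await self["xep_0060"].subscribe(
            "pubsub.example.com",          # pubsub service (placeholder)
            "public-timeline",             # node to follow (placeholder)
        )

    def on_update(self, msg):
        # Each notification arrives as it happens; no polling involved.
        print("Update received:", msg)

sub = TimelineSubscriber("me@example.com", "secret")
sub.connect()
sub.process(forever=True)
```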
We welcome to the stage a special guest, it’s Ralph. Matt introduces the story of the all-night hackathon from Social Graph Foo Camp and asks Ralph to tell us what happened there. Ralph had two chat windows open: one for the Twitterbot, one for the Jaikubot. They hacked and hacked all night. Data was flowing from the US (Twitter) to Europe (Jaiku) and in the other direction. At 7:10am in Sebastopol one chat window went “ping!” and one and a half seconds later another chat window went “pong!”
Matt asks Ralph to stay on the panel for questions. The questions come thick and fast from Dan, Dave, Simon and Gavin. The answers come even faster. I can’t keep up with liveblogging this — always a good sign. You kind of had to be there.
Michael Smith from the W3C is talking about the changing browser landscape. Just in the last year we’ve had the release of the iPhone with WebKit, the Beta of IE8 and just yesterday, Opera’s Dragonfly technology.
In the mobile browser space, the great thing about the iPhone is that it has the same WebKit engine as Safari on the desktop. Opera Mini 4 — the proxy browser — is getting a lot better too. It even supports CSS3 selectors. Mozilla, having previously expressed no interest in Mobile, have started a project called Fennec. Then there’s Android which will use WebKit as the rendering engine for browsers.
Looking at the DOM/CSS space, there have been some interesting developments. The discovery of IE’s interesting quirk with generated elements was one. The support by other browsers for lots of CSS3 selectors is quite exciting. The selectors API is gaining ground. Michael says he’s not fond of using CSS syntax for DOM traversal but he’s definitely in the minority — this is a godsend.
Now for an interlude to look at Web developer tools in browsers. Firebug really started a trend. IE8 has copied it almost verbatim. WebKit has its pretty Web Inspector and now Opera has Dragonfly. Dragonfly has a remote debugging feature, like Fiddler, which Michael is very excited by.
On the Ajax front, things are looking up in HTML5 for cross-site requests. He pimps Anne’s talk tomorrow. Then there’s Doug’s proposal for JSONRequest. Browser vendors haven’t shown too much interest in that. Meanwhile, Microsoft comes out with XDR, its own implementation that nobody is happy about. The other exciting thing in HTML5 is the offline storage stuff which works like Google Gears.
XSLT is supported very well on the client side now. But apart from Michael, who cares? Give me the selectors API any day. SVG is still strong in Mozilla and Opera.
ARIA is the one I’m happiest with. It’s supported across the board now.
The HTML5 video element is supported in WebKit nightlies and in Mozilla. There’s an experimental Opera build and, of course, no IE support. The biggest issues seem to be around licensing and deciding on a royalty-free format for video. Sun has some ideas for that.
Ah, here comes the version targeting “innovation”. The good news, as Michael notes, is that it now defaults to the latest version. Damn straight!
Here are some Acid 3 measurements so that we can figure out which browser has the biggest willy.
Finally, look at all the CSS innovations that Dave Hyatt is putting in WebKit (and correctly prefixing with -webkit-).
Looking to the year ahead, you’ll see more CSS innovations and HTML5 movement. Michael is rushing through this part because he’s running out of time. In fact, he’s out of time.
The slides are up at w3.org/2008/Talks/05-07-smith-xtech/slides.pdf.
Rob Lee is talking about making the most of user-authored (or user-generated) content. In other words, content written by you, Time’s person of the year.
Wikipedia is the poster child. It’s got lots of WWILFing: What Was I Looking For? (as illustrated by XKCD). Here’s a graph entitled “Mapping the distraction that is Wikipedia”, generated from a Greasemonkey script that tracks link paths.
Rob works for Rattle Research who were commissioned by the BBC Innovation Labs to do some research into bringing WWILFing to the BBC archive.
Grab the first ten internal links from any Wikipedia article and you will get ten terms that really define that subject matter. The external links at the end of an article provide interesting departure points. How could this be harnessed for BBC news articles? Categories are a bit flat. Semantic analysis is better but it takes a lot of time and resources to generate that for something as large as the BBC archives. Yahoo’s Term Extractor API is a handy shortcut. The terms extracted by the API can be related to pages on Wikipedia.
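(I didn’t note the exact request details, but the shape of it is roughly this — a hedged sketch where the term-extraction endpoint is a placeholder standing in for something like Yahoo’s Term Extractor API, and the extracted terms are simply mapped onto Wikipedia URLs.)

```python
# Hedged sketch: send article text to a term-extraction web service and
# map the returned terms onto Wikipedia pages. The endpoint and response
# shape are placeholders, not the real Yahoo API.
import requests

TERM_EXTRACTOR_URL = "https://terms.example.com/extract"   # placeholder

def extract_terms(article_text):
    resp = requests.post(TERM_EXTRACTOR_URL, data={"context": article_text})
    resp.raise_for_status()
    return resp.json()["terms"]        # e.g. ["organic food", "soil association"]

def wikipedia_url(term):
    return "https://en.wikipedia.org/wiki/" + term.replace(" ", "_")

article = "Sales of organic food in the UK rose sharply last year..."
for term in extract_terms(article):
    print(term, "->", wikipedia_url(term))
```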
Look at this news story on organic food sales. The “see also” links point to related stories on organic food but don’t encourage WWILFing. The BBC is a bit of an ivory tower: it has lots of content that it can link to internally but it doesn’t spread out into the rest of the Web very well.
How do you decide what would be interesting terms to link off with? How do you define “interesting”? You could use Google page rank or Technorati buzz for the external pages to decide if they are considered “interesting”. But you still need contextual relevance. That’s where del.icio.us comes in. If extracted terms match well to tags for a URL, there’s a good chance it’s relevant (and del.icio.us also provides information on how many people have bookmarked a URL).
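(The relevance check itself is simple enough. Here’s my own toy version of the kind of overlap scoring involved, with the del.icio.us-style lookup stubbed out rather than calling a real API.)

```python
# Toy sketch of the relevance idea: a candidate link is interesting if the
# tags people gave it overlap with the terms extracted from the article.
# fetch_tags() is a stub standing in for a bookmarking-service lookup.
def fetch_tags(url):
    # Placeholder: the real thing would query a service like del.icio.us
    # for the tags and bookmark count of `url`.
    return {"organic", "food", "farming", "health"}

def relevance(extracted_terms, url):
    tags = fetch_tags(url)
    terms = {t.lower() for t in extracted_terms}
    return len(terms & tags) / float(len(terms)) if terms else 0.0

terms = ["Organic", "food", "supermarkets", "Soil Association"]
print(relevance(terms, "https://example.com/why-organic-matters"))  # 0.5
```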
So that’s what they did. They called it “muddy boots” because it would create dirty footprints across the pristine content of the BBC.
The “muddy boots” links for the organic food article link off to articles on other news sites that are genuinely interesting for this subject matter.
Here’s another story, this one from last week about the dissection of a giant squid. In this case, the journalist has provided very good metadata. The result is that there’s some overlap between the “see also” links and the “muddy boots” links.
But there are problems. An article on Apple computing brings up a “muddy boots” link to an article on apples, the fruit. Disambiguation is hard. There are also performance problems if you are relying on an external API like del.icio.us’s. Also, try to make sure you recommend outside links that are written in the same language as the originating article.
Muddy boots was just one example of using some parts of the commons (Wikipedia and del.icio.us). There are plenty of others out there like Magnolia, for example.
But back to disambiguation, the big problem. Maybe the Semantic Web can help. Sources like Freebase and DBpedia add more semantic data to Wikipedia. They also pull in data from Geonames and MusicBrainz. DBpedia extracts the disambiguation data (for example, on the term “Apple”). Compare terms from disambiguation candidates to your extracted terms and see which page has the highest correlation.
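(As a sketch of that last step — comparing your extracted terms against each disambiguation candidate — something like the following. The SPARQL property name and resource URI are my assumptions, not verified against DBpedia, and the scoring is deliberately crude.)

```python
# Hedged sketch: ask DBpedia for disambiguation candidates of "Apple", then
# pick the candidate that best matches our extracted terms. The property
# wikiPageDisambiguates and the resource name are assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

def disambiguation_candidates(resource_name):
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?page WHERE {
          <http://dbpedia.org/resource/%s>
            <http://dbpedia.org/ontology/wikiPageDisambiguates> ?page .
        }
    """ % resource_name)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [row["page"]["value"] for row in results["results"]["bindings"]]

def best_candidate(candidates, extracted_terms):
    terms = {t.lower().replace(" ", "_") for t in extracted_terms}
    # Crude score: how many extracted terms appear in the candidate URI.
    def score(uri):
        return sum(1 for t in terms if t in uri.lower())
    return max(candidates, key=score) if candidates else None

terms = ["Apple Inc", "iPod", "Steve Jobs"]
print(best_candidate(disambiguation_candidates("Apple_(disambiguation)"), terms))
```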
But why stop there? Why not allow routes back into our content? For example, having used DBpedia to determine that your article is about Apple, the computer company, you could add an hCard for the Apple company to that article.
If you’re worried about the accuracy of commons data, you can stop worrying. It looks like Wikipedia is more accurate than traditional encyclopedias. It has authority, a formal review process and other tools to promote accuracy. There are also third-party services that will mark revisions of Wikipedia articles as being particularly good and accurate.
There’s some great commons data out there. Use it.
Rob is done. That was a great talk and now there’s time for some questions.
Brian asks if they looked into tying in non-text content. In short, no. But that was mostly for time and cost reasons.
Another question, this one about the automation of the process. Is there still room for journalists to spend a few minutes on disambiguating stories? Yes, definitely.
Gavin asks about data as journalism. Rob says that this is particularly relevant for breaking news.
Ian’s got a question. Journalists don’t have much time to add metadata. What can be done to make it easier — is it an interface issue? Rob says we can try to automate as much as possible to keep the time required to a minimum. But yes, building things into the BBC CMS would make a big difference.
Someone questions the wisdom of pushing people out to external sources. Doesn’t the BBC want to keep people on their site? In short, no. By providing good external references, people will keep coming back to you. The BBC understand this.
I’m at the first non-workshop day at XTech 2008 in Dublin’s fair city. David Recordon is delivering the second of the morning keynotes. I missed most of Simon Wardley’s opening salvo in favour of having breakfast — sorry Simon. David, unlike Simon, will not be using Comic Sans and nor will he have any foxes, the natural enemy of the duck.
Let’s take a look at how things have evolved in recent years.
Open is hip. Open can get you funded. Just look at MySQL.
Social is hip. But the sheer number of social web apps doesn’t scale. It’s frustrating repeating who you know over and over again. Facebook apps didn’t have to deal with that. The Facebook App platform was something of a game changer.
Networked devices are hip, from iPhones to Virgin America planes.
All of these things have common needs. We don’t just need portability, we need interoperability. We have some good formats for that:
We need a way to share abstract information …securely. The password anti-pattern, for example, is wrong, wrong, wrong. OAuth aims to solve this problem. Here’s a demo of David authorising FireEagle to have access to his location. He plugs Kellan’s OAuth session which is on tomorrow.
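(For reference, the three-legged OAuth dance that demo runs through looks roughly like this in code — a hedged sketch using the requests_oauthlib library, with placeholder keys and endpoints rather than Fire Eagle’s real ones.)

```python
# Hedged sketch of a three-legged OAuth 1.0 flow like the one David demos.
# Consumer key/secret and endpoints are placeholders, not Fire Eagle's.
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "your-consumer-key"
CONSUMER_SECRET = "your-consumer-secret"
BASE = "https://provider.example.com/oauth"    # placeholder endpoints

oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET)

# 1. Get a request token.
oauth.fetch_request_token(BASE + "/request_token")

# 2. Send the user off to authorise the application...
print("Visit:", oauth.authorization_url(BASE + "/authorize"))
verifier = input("Enter the verifier shown after authorising: ")

# 3. ...then swap the authorised request token for an access token.
oauth.fetch_access_token(BASE + "/access_token", verifier=verifier)

# Signed requests can now be made on the user's behalf — no password shared.
print(oauth.get("https://provider.example.com/api/location").status_code)
```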
We need a way to communicate with people. Email just sucks. The IM wars were harmful. Jabber (XMPP) emerged as a leader because it tackled the interoperability problem. Even AOL are getting into it.
We need to know who someone is. Naturally, OpenID gets bigged up here. It’s been a busy year for OpenID. The number of relying parties has grown exponentially. The ability to get that done — to grow from a few people on a mailing list to having companies like Yahoo and Microsoft and AOL supporting that standard — that’s really something new that you wouldn’t have seen on the Web a few years ago.
But people don’t exist at just one place. The XFN microformat (using rel="me") is great for linking up multiple URLs. David demos his own website which has a bunch of rel="me" links in his sidebar: “this isn’t just some profile, this is my profile.” He plugs my talk — nice one!
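(That rel="me" linking is trivial to consume, by the way. Here’s a quick hedged sketch of my own that walks a page’s rel="me" links with requests and BeautifulSoup — the URL is a placeholder.)

```python
# Hedged sketch: discover someone's other profiles by following the
# rel="me" links (XFN) on their home page. The URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def rel_me_links(url):
    html = requests.get(url).text
    soup = BeautifulSoup(html, "html.parser")
    # [rel~="me"] matches any link whose rel list contains "me".
    return [a.get("href") for a in soup.select('a[rel~="me"]')]

for profile in rel_me_links("https://example.com/"):
    print(profile)
```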
We need to know who someone knows. The traditional way of storing relationships has been address books and social networks but this is beginning to change. David demos the Google Social Graph API, plugging in his own URL. Then he uses Tantek’s URL to show that it works for anybody. The API has problems if you leave off the trailing slash, apparently.
We need to know what people are doing. Twitter, Fire Eagle, Facebook and Open Social all deal with realtime activity in some way. They’re battling things out in this space. Standards haven’t emerged yet. Watch this space. If Google ties Open Social to its App Engine we could see some interesting activity emerge.
Let’s look at where things stand right now. Who’s getting things right and who’s getting things wrong?
When you create an API, use existing standards. Don’t reinvent vcard or hCard, just use what people already publish.
Good APIs enable mashups. Google is a leader here with its mapping API.
Fire Eagle is an interesting one. You will hardly ever visit the site. Fire Eagle doesn’t care who your friends are. It just deals with one task: where are you. Here’s a demo of Fireball.
It’ll be interesting to see how these things play out in the next few years where you have services that don’t really involve a website very much at all. Just look at Twitter: people bitch and moan about the features they think it should support but the API allows you to build on top of Twitter to add the features you want. Tweetscan and Quotably are good examples of this in action.
David shows a Facebook app he built (yes, you heard right, a Facebook app). But this app allows you to publish from Facebook onto other services.
Plaxo Pulse runs your OpenID URL through the Google Social Graph API to see who you already know (I must remember this for my talk tomorrow).
The DiSo project is one to watch: figuring out how to handle activity, relationships and permissions in a distributed environment.
And that’s all, folks.
While I was at XTech in Paris, Ian Forrester took me aside for an interview about microformats. Here's the video of our little chat.
Pausing for breath is for pussies. Simon's slides illustrate how to pack everything including the OpenID kitchen sink into 45 minutes.
The last day of Xtech rolled around and… whaddya mean “what happened to day two?” They can’t have a conference in Paris and not expect me to take at least one day off to explore the city.
So I skipped the second day of XTech and I’m sure I missed some good presentations but I spent a lovely day with Jessica exploring the streets and brasseries of Paris.
Ah, Paris! (uttering this phrase must always be accompanied by the gesture of flinging one arm into the air with abandon)
The conference closed today with a keynote from Matt Webb. It was great: thought-provoking and funny. It really drove home the big take-away message from XTech for me this year which is that hacking on hardware now is as easy as software.
I can has Arduino?
For those of you who attended my XTech talk yesterday (and, indeed, for those of you who didn’t), here are a few jumping off points I mentioned:
Simon's slides from his talk at XTech on JavaScript libraries (which I missed). Good stuff contained within.
I’ve been a very bad conference attendee. I slept in this morning ‘till 11am and missed the opening keynotes. I was looking forward to seeing what Adam Greenfield had to say but I guess I was more tired than I realised.
It’s not like I had a particularly late night last night. I spent a very pleasant evening in a cosy bistro with Jessica, Brian, Andy and Gavin.
By the time I made it over to the conference venue, the morning sessions were wrapping up so I had lunch for breakfast. Once I was all caffeined up, I started getting ready for my talk.
I gave a presentation called microformats: the nanotechnology of the semantic web. I enjoyed myself and I think other people did too. I might have pushed the nanotech analogy too far but I got a kick out of talking about buckyballs and grey goo. I talked for a bit longer than I was planning so I didn’t have as much time for questions as I would have liked but I also think I managed to anticipate a lot of questions during the talk anyway.
I should have really stuck around in the same room after my talk to listen to a presentation on RDFa and GRDDL but I dashed next door to hear Gavin’s presentation on provenance. I loved this. He’s thinking about a lot of the same things that I have in terms of lifestreams and portable social networks but whereas I just talk about this stuff, he’s gone and built some proof-of-concept to illustrate how it’s possible today to join up the dots of identity online. I really wish he was coming to Hack Day.
Speaking of Hack Day (it’s just a month away now), I fully expect to see plenty of hacking on hardware going on. Before XTech, this was unknown territory for me but I know I’d really like to roll up my sleeves and get hacking (and I haven’t even heard what Matt Webb has to say yet).
Today I was introduced to a piece of hardware with a difference: the Nabaztag—a WiFi-enabled rabbit with flashing lights and movable ears. I want one. The Nabaztag presentation also included the quote of the day for me:
If you can connect rabbits, you can connect nearly everything.