Google Profiles, namespace lock-in & social search

I’d originally intended to respond to Joshua Schachter’s post about URL shorteners and how they’re merely the tip of the data iceberg, but since I missed that debate, Google has fortuitously plied me with an even better example by releasing custom profile URLs today.

My point is to reiterate one of Tim O’Reilly’s ever-prescient admonishments about Web 2.0: lock-in can be achieved through owning a namespace. In full:

5. Chief among the future sources of lock in and competitive advantage will be data, whether through increasing returns from user-generated data (eBay, Amazon reviews, audioscrobbler info in last.fm, email/IM/phone traffic data as soon as someone who owns a lot of that data figures out that’s how to use it to enable social networking apps, GPS and other location data), through owning a namespace (Gracenote/CDDB, Network Solutions), or through proprietary file formats (Microsoft Office, iTunes). (“Data is the Intel Inside”)

(I’ll note that the process of getting advantage from data isn’t necessarily a case of companies being “evil.” It’s a natural outcome of network effects applied to user contribution. Being first or best, you will attract the most users, and if your application truly harnesses network effects to get better the more people use it, you will eventually build barriers to entry based purely on the difficulty of building another such database from the ground up when there’s already so much value somewhere else. (This is why no one has yet succeeded in displacing eBay. Once someone is at critical mass, it’s really hard to get people to try something else, even if the software is better.) The question of “don’t be evil” will come up when it’s clear that someone who has amassed this kind of market position has to decide what to do with it, and whether or not they stay open at that point.)

Consider two things:

Owning the “people” namespace will determine whether people see the web through Google’s technicolor glasses or Facebook’s more nuanced and monochrome blue hues.

Curiously, it has been (correctly) argued that Google “doesn’t get social”, a criticism that I generally support. And yet, with their move to more convenient profile URLs that point to profiles that aggregate content from across the web (beating Facebook to the punch), a bigger (albeit incomplete) picture begins to emerge.

When I blogged that my name is not a URL, I wasn’t so much arguing against vanity or custom profile URLs but instead making the point that such things really should go away over time, from a usability perspective.

Let me put it this way: at one point, if you weren’t in the Yellow Pages, you basically didn’t exist. Now imagine there being several competitors to the Yellow Pages — the Red, Green and Blue Pages — each maintaining overlapping but incomplete listings of people. You’re going to want to use the one that has the most complete, exhaustive and easy-to-use list of names, right? And, I bet beyond that, if one of them was able to make the people that you know and actually care about more accessible to you, you’d pick that one over all the others. And this is where owning — and getting people to “live in” — a namespace begins to reveal its significance.

Google Profile Search

So, it’s telling to look at Google’s and Facebook’s respective approaches to their people search engines and indexes. Indeed, having a readily accessible index of living persons — structured by their connections to one another — will become a necessary precondition to getting social search right (see Aardvark for a related approach, which connects to the Facebook and IM portions of your social graph to facilitate question answering).

As social search and living through your social graph becomes “the norm” (i.e. with increasing reliance on social filtering), Google and Facebook’s ability to create compelling experiences on top of data about you and who you know will come to define and differentiate them.

To date, Google’s profile search has been rather unloved and passed over, but with the new, more convenient profile URLs and the location of profile search at google.com/profiles, I suspect that Google is finally getting serious about social.

Compare Facebook and Google’s search results for my buddy, Dave Morin:

Facebook logged out:

Search Names: dave morin | Facebook

Facebook logged in:

Facebook | Search: dave morin

Google results (there’s no difference between logged in and logged out views):

Dave Morin - Google Profile Search

Notice the difference? See how much better Facebook’s search is because it knows which “Dave Morin” is my friend?

Now, consider the profile result when you click through:

Dave’s Facebook profile (logged out):

Dave Morin - San Francisco, CA | Facebook

(Facebook’s logged in profile view is as you’d expect — a typical Facebook profile with stream and wall.)

Now, here’s the clincher. Take a look at Google’s profile for Dave:

Dave Morin - Google Profile

Google is able to provide a much richer and simpler profile that’s much more accessible (without requiring any kind of sign-in) because they’ve radically simplified their privacy model on this page (show what you want, and nothing more). Indeed, Google’s made it easier for people to be open — at least with static information — than Facebook!

So much for Facebook’s claim to openness! 😉

Of course, default Google profiles are pretty sparse, but this is just the beginning. (Bonus: both Facebook and Google public profiles support the hCard microformat!)
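For the curious, the microformat in question is hCard. A stripped-down public profile marked up with hCard might look something like this (a minimal sketch with a placeholder profile URL, not the actual markup either site emits):

```html
<!-- Minimal hCard sketch; the profile URL is a placeholder. -->
<div class="vcard">
  <a class="url fn" href="http://www.google.com/profiles/example">Dave Morin</a>
  <span class="adr">
    <span class="locality">San Francisco</span>,
    <abbr class="region" title="California">CA</abbr>
  </span>
</div>
```

Any hCard-aware parser can lift the name, profile URL and locality straight out of the page, which is what makes public profiles machine-readable building blocks for social search.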

And the point is: where will you build your online identity? Under whose namespace do you want to exist? (Personally, I choose my own.)

Clearly the battle for the future of the social web is heating up in subtle but significant ways, and Google’s move today shouldn’t be thought of as anything less than the opening salvo in moving the battle back to its turf: search.

Independent study on OpenID awareness using Mechanical Turk

Even though I wasn’t able to attend the eighth Internet Identity Workshop this week in Mountain View (check out the latest episode of TheSocialWeb.tv for a glimpse), I wanted to do my part to contribute so I’m sharing the results of a study that Brynn Evans and I performed on Mechanical Turk a short while ago.

I’ll cut to the chase and then go into some background detail.

Heard of OpenID?

Of the 302 responses we received, we only rejected one, leaving us with 301 valid data points to work with. Of those 301:

  • 19.3% had heard of OpenID (58 people)
  • 9.0% knew what OpenID was used for (27) and 8.0% were unsure (24)
  • 1.3% used OpenID (4) and 18.3% were unsure if they used it (55).
  • 5.3% recognized the OpenID icon (16) and 7.0% were unsure (21).

Combining some of the results, we found that:

  • of those who know what OpenID is, 14.81% use it.
  • of those who have merely heard of it, 6.9% use it.

That’s what the data show.
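The raw counts behind those percentages are easy to verify. Here’s a quick sanity check of the arithmetic, using the counts quoted in the bullet points above:

```python
# Survey counts from the post: 301 valid responses.
n = 301
heard = 58            # had heard of OpenID
knew = 27             # knew what OpenID was used for
used = 4              # used OpenID
unsure_used = 55      # unsure whether they used it
recognized_icon = 16  # recognized the OpenID icon

def pct(count, total, places=1):
    """Percentage of total, rounded for display."""
    return round(100 * count / total, places)

assert pct(heard, n) == 19.3
assert pct(knew, n) == 9.0
assert pct(used, n) == 1.3
assert pct(unsure_used, n) == 18.3
assert pct(recognized_icon, n) == 5.3

# Combined results
assert pct(used, knew, 2) == 14.81  # users among those who know what it is
assert pct(used, heard, 1) == 6.9   # users among those who have merely heard of it

# Total payout: 301 valid replies at $0.02 apiece
assert round(n * 0.02, 2) == 6.02
```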

Background

Several weeks ago, Yahoo released usability research and best practices for OpenID (PDF). This research was performed by Beverly Freeman in the Yahoo! Customer Insights division in July of this year and involved 9 female Yahoo! users aged 32–39 with self-declared medium-to-high levels of Internet savvy.

This research, along with Eric Sachs’ later contributions (Google), has taken us from virtually zero research on the usability of OpenID to a much more robust pool of information to pull from. And though I’m sure many would agree that this research only points to opportunities for improvement, many people interpreted the results as an indication that “OpenID is too confusing” or that it “befuddles users”.

A lot of people also took cheap shots, using the Yahoo! results to bolster their long-held arguments against the protocol and its unfamiliar interaction flow. The problem with such criticism, as far as I’m concerned, is that the experiences of nine female Yahoo! users in their thirties are not necessarily representative of the web at large, nor were the conditions favorable to such research. Y’know, Ford got a lot of flak too when he introduced the Model T, because everyone loved their horses and carriages. Good thing Ford was right.

Now, some of the criticism of OpenID is valid, especially if it can be turned into productive outcomes, like making OpenID easier to use, or less awkward.

And it serves no one’s interests to make grandiose claims on the basis of minimal data, so given Brynn’s work using Mechanical Turk (with Ed Chi from PARC), I thought I’d ask her to help me set up a study to discover just what awareness of OpenID might be among a wider segment of the population, especially with Japanese awareness of OpenID topping out around 28% (with usage of OpenID at 15%, more than ten times what we saw with Turkers).

Mechanical Turk Demographics

First, it’s important to point out something about Turker demographics. Because Turkers must have either a US bank account or be willing to be paid in Amazon gift certificates, the quality of participants you get (especially if you design your HIT well) will actually be pretty good (compared with, say, a blog-based survey). Now, Mechanical Turk actually has rules against asking for demographic or personally identifying information, but some information has been gathered by Panos Ipeirotis to shed some light on who the Turkers are and why they participate. I’ll leave the bulk of the analysis up to him, but it’s worth noting that a survey put out on Mechanical Turk about OpenID will likely hit a fairly average segment of the internet-using population (or at least one that doesn’t differ greatly from college undergraduates).

Method

Over the course of a week (October 19 – 26), we fielded 302 responses to our survey, paying $0.02 for each valid reply (yes, we were essentially asking people for their “two cents”). We only rejected one response out of the batch, leaving us with 301 valid data points at a whopping cost of $6.02.

Findings

As I reported above, contrary to the 0% awareness demonstrated in the Yahoo! study of nine participants, we found that nearly 20% of respondents had at least heard of OpenID, though a much smaller percentage (1.3%) actually used it (or at least were consciously aware of using it — nearly everyone who’d heard of OpenID, 18.3% of all respondents, didn’t know whether they used it or not).

There was also at least some familiarity with the OpenID logo/icon (5.3%).

What’s also interesting is that many respondents, upon hearing about “OpenID”, expressed an interest in finding out more: “What is it? LOL.”; “I’ve gotta look it up!”; “This survey has sparked my interest”; “Heading to Google to find out”. I can’t say that this shows clear interest in the concept, but at least some folks showed a curious disposition, like this respondent:

How can I tell for sure whether I’ve used OpenID or not when I don’t know what it is? I’ve surely heard of it. That confuses me mainly in Magnolia {bookmarking service} where I want to sign up, but I can’t as it asks for OpenID. And until you mentioned above, it simply didn’t occur to me to just search it up. Hell, after submitting this hit, I’m going to do that first and foremost. Anyways, thanks a lot for indirectly suggesting a move!!!

Now, I won’t repeat the other findings, as they’ve already been reported above.

Thoughts and next steps

The results of this survey are interesting to me, but not unexpected. They’re not reassuring either, but they do tell me that we’re doing well considering that we’ve only just begun.

Consider that 20% of a random sampling of 300 people on the internet had at least heard of OpenID, before Google, MySpace or Microsoft turned on their support for the protocol (MySpace announced their intention to support OpenID in July).

Consider that nearly a year ago Marshall Kirkpatrick sounded the death knell in what seemed like a foregone conclusion about OpenID:

Big Players are Dragging Their Feet … Sharing User Info is a Whole Other Matter … Public Facing Profiles are Anemic … Ease of Use and Marketing Clarity Remain Low Priorities

Consider that no concerted effort has been made to date to inform or educate the general web population about OpenID, or about the problems with sharing your user credentials all over the web, and that many of the large providers have yet to turn on their OpenID support (despite all coming to the table and agreeing that it’s the way forward for identity on the web; save, as usual, Facebook, looking more Microsoftian by the day).

Consider also that momentum to rev the protocol to accommodate email addresses in OpenID is just now gaining traction.

In other words, with areas of user education becoming obvious, with provider adoption starting to happen (vis-à-vis MySpace demonstrating the value and prevalence of URL-based identifiers) and necessary usability improvements starting to take shape (both in terms of the OpenID and OAuth flows being combined, and in terms of email addresses becoming valid in OpenID flows), we’re truly just getting started with making OpenID ready for mainstream audiences. It’s been a hard slog so far, and it’s bound to continue to be challenging, but the shared vision for where we’re going gets clearer every time there’s an Internet Identity Workshop.

I plan to re-run this study every 3-6 months from this point forward to keep track of our progress. I hope that these numbers will shed some much-needed balanced light on the subject of OpenID awareness and adoption — both to demonstrate how far we have to go, and how far we’ve come.

Musings on Chrome, the rebirth of the location bar and privacy in the cloud

Imagine a browser of the web, by the web, and for the web. Not simply a thick client application that merely opens documents with the http:// protocol instead of file://, but one that runs web applications (efficiently!), that plays the web, that connects people across the boundaries of the silos and gives them local-like access to remote data.

It might not be Chrome, but it’s a damn near approximation, given how people use the web today.

Take a step back. You can see the relics of desktop computing in our applications’ file menus… and we can intuit the assumptions that the original designer must have made about the user, her context and the interaction expectations she brought with her:

Firefox Menubar

This is not a start menu or a Dock. This is a document-driven menubar that’s barely changed since Netscape Communicator.

Indeed, the browser is a funny thing, because it’s really just a wrapper for someone else’s content or someone else’s application. That’s why it’s not about “features“. It’s all about which features, especially for developers.

It’s a hugely powerful place to insert oneself: between a person and the vast expanse that is the Open Web. Better yet: to be the conduit through which anyone projects herself on to the web, or reaches into the digital void to do something.

So if you were going to design a new browser, how would you handle the enormity of that responsibility? How would you seize the monument of that opportunity and create something great?

Well, for starters, you’d probably want to think about that first run experience — what it’s like to get behind the wheel for the very first time with a newly minted driver’s permit — with the daunting realization that you can now go anywhere you please…! Which is of course awesome, until you realize that you have no idea where to go first!

Historically, the solution has been to flip-flop between portals and search boxes, and if we’ve learned anything from Google’s shockingly austere homepage, it comes down to recognizing that the first step of getting somewhere is expressing some notion of where you want to go:

Camino. Start

Inquisitor

The problem is that the location field has, up until recently, been fairly inert and useless. With Spotlight-influenced interfaces creeping into the browser (like David Watanabe’s recently acquired Inquisitor Safari plugin — now powered by Yahoo! Search BOSS — or the flyout in Flock that was inspired by it) it’s clear that browsers can and should provide more direction and assistance to get people going. Not everyone’s got a penchant for remembering URLs (or RFCs) like Tantek’s.

This kind of predictive interface, however, has only slowly made its way into the location bar, like fish being washed ashore and gradually sprouting legs. Eventually they’ll learn to walk and breathe normally, but until then, things might look a little awkward. But yes, dear reader, things do change.

So you can imagine, having recognized this trend, Google went ahead and combined the search box and the location field in Chrome and is now pushing the location bar as the starting place, as well as where to do your searching:

Chrome Start

This change to such a fundamental piece of real estate in the browser has profound consequences on both the typical use of the browser as well as security models that treat the visibility of the URL bar as sacrosanct (read: phishing):

Omnibox

The URL bar is dead! Long live the URL bar!

While cats like us know intuitively how to use the location bar in combination with URLs to get us where we want to go, that practice is now outmoded. Instead we type anything into the “box” and have some likely chance that we’re going to end up close to something interesting. Feeling lucky?

But there’s something else behind all this that I think is super important to realize… and that’s that our fundamental notions and expectations of privacy on the web have to change or will be changed for us. Either we do without tools that augment our cognitive faculties or we embrace them, and in so doing, shim open a window on our behaviors and our habits so that computers, computing environments and web service agents can become more predictive and responsive to them, and in so doing, serve us better. So it goes.

Underlying these changes are new legal concepts and challenges, spelled out in Google’s updated EULA and Privacy Policy… heretofore places where few feared to go, least of all browser manufacturers:

5. Use of the Services by you

5.1 In order to access certain Services, you may be required to provide information about yourself (such as identification or contact details) as part of the registration process for the Service, or as part of your continued use of the Services. You agree that any registration information you give to Google will always be accurate, correct and up to date.

. . .

12. Software updates

12.1 The Software which you use may automatically download and install updates from time to time from Google. These updates are designed to improve, enhance and further develop the Services and may take the form of bug fixes, enhanced functions, new software modules and completely new versions. You agree to receive such updates (and permit Google to deliver these to you) as part of your use of the Services.

It’s not that any of this is unexpected or Draconian: it is what it is, and if things weren’t like this already, they soon would be.

Each of us will eventually need to choose a data broker or two and agree to similar terms and conditions, just like we’ve done with banks and credit card providers; and if we haven’t already, just as we’ve done in embracing webmail.

Hopefully visibility into Chrome’s source code will help keep things honest, and also provide the means to excise those features, or to redirect them to brokers or service providers of our choosing, but it’s inevitable that effective cloud computing will increasingly require more data from and about us than we’ve previously felt comfortable giving. And the crazy thing is that a great number of us (yes, including me!) will give it. Willingly. And eagerly.

But think one more second about the ramifications (see Matt Cutts) of Section 12 up there about Software Updates: by using Chrome, you agree to allow Google to update the browser. That’s it: end of story. You want to turn it off? Disconnect from the web… in the process, rendering Chrome nothing more than, well, chrome (pun intended).

Welcome to cloud computing. The future has arrived and is arriving.

Google Chrome and the future of browsers

Chrome Logo

News came today confirming Google’s plans for Chrome, its own open source browser based on Webkit.

This is big news. As far as I’m concerned, it doesn’t get much bigger than this, at least in my little shed on the internet.

I’ve been struggling to come to grips with my thoughts on this since I first heard about this this morning over Twitter (thanks @rww @Carnage4Life and @furrier). Once I found out that it was based on Webkit, the pieces all fell into place (or perhaps the puzzle that’s been under construction for the past year or so became clearer).

Chrome is powered by Webkit

Last May I ranted for a good 45 minutes or so about the state of Mozilla and Firefox and my concerns for its future. It’s curious to look back and consider my fears about Adobe Air and Silverlight; it’s more curious to think about what Google Chrome might mean now that it’s been confirmed and that those frameworks have little to offer in the way of standards for the open web.

I read the announcement as the kid gloves coming off. I just can’t read this any other way than to think that Google’s finally fed up waiting around for Firefox to get its act together, fix its performance issues in serious ways, provide a tangible and near-term vision and make good on its ultimate promise and value proposition.

Sure, Google re-upped their deal with Firefox, but why wouldn’t they? If this really is a battle against Microsoft, Google can continue to use Firefox as its proxy against the entrenched behemoth. Why not? Mozilla’s lack of concern worries me greatly; if they knew about it, what did they do about it? Although Weave has potential, Google has had Google Browser Sync for ages (announced, to wit, by Chrome’s product manager Brian Rakowski). Aza Raskin might be doing very curious and esoteric experiments on Labs, but how does this demonstrate a wider, clearer, focused vision? Or is that the point?

Therein lies the tragedy: Google is a well-oiled, well-heeled machine. Mozilla, in contrast, is not (and probably never will be). The Webkit team, as a rhizomatic offshoot from Apple, has a similar development pedigree and has consistently produced a high quality — now cross-platform — open source project, nary engaging in polemics or politics. They let the results speak for themselves. They keep their eyes on the ball.

Ultimately this has everything to do with people; with leadership, execution and vision.

When Mozilla lost Ben Goodger I think the damage went deeper than was known or understood. Then Blake Ross and Joe Hewitt went over to Facebook, where they’re probably in the bowels of the organization, doing stuff with FBML and the like, bringing Parakeet into existence (they’ve recently been joined by Mike Schroepfer, previously VP of Engineering at Mozilla). Brad Neuberg joined Google to take Dojo Offline forward in the Gears project (along with efforts from Dylan Schiemann and Alex Russell). And the list goes on.

Start poking around the Google Chrome comic book and the names are there. Scott McCloud’s drawings aren’t just a useful pictorial explanation of what to expect in Chrome; they’re practically a declaration of independence from the traditions of browser design of the past 10 years, going all the way back to Netscape’s heyday, when the notion of the web was a vast collection of interlinked documents. With Chrome, the web starts to look more like a nodal grid of documents, with cloud applications running on momentary instances, being run directly and indirectly by people and their agents. This is the browser caught up.

We get Gears baked in (note the lack of “Google” prefix — it’s now simply “of the web”) and if you’ve read the fine-print closely, you already know that this means that Chrome will be a self-updating, self-healing browser. This means that the web will rev at the speed of the frameworks and the specifications, and will no longer be tied to the monopoly player’s broken rendering engine.

And on top of Gears, we’re starting to see the light of the site-specific browser revolution and the maturing of the web as an application platform, something Todd Ditchendorf, with his Fluid project, knows something about (also based on Webkit — all your base, etc):

Google Chrome + Gears

In spite of its lofty rhetoric in support of a free Internet, Mozilla isn’t the one delivering the pièce de résistance. Turns out that it’s going to be Apple and Google who will usher in the future of browsers, and who will get to determine just what that future is going to look like:

Google Chrome, starting from scratch

To put it mildly, things just got a whole lot more exciting.

The Community Amplifier

Twitter / O'Reilly OSCON: Chris Messina receiving "Be...

os-award

I am honored to be a recipient of this year’s Google O’Reilly Open Source Award for being the “best community amplifier” for my work with the microformats, Spread Firefox and BarCamp communities! (See the original call for nominations).

Inexplicably I was absent when they handed out the award, hanging out with folks at a Python/Django/jQuery drinkup down the street, but I’m humbled all the same… especially since I work on a day to day basis with such high caliber and incredible people without whom none of these projects would exist, would not have found success, and most importantly, would never have ever mattered in the first place.

Also thanks to @bmevans, @TheRazorBlade, @kveton, @anandiyer, @donpdonp, @dylanjfield, @bytebot, @mtrichardson, @galoppini for your tweets of congratulations!

And our work continues. So lucky we are, to have such good work, and such good people to work with.

Facebook, the USSR, communism, and train tracks

Low hills closed in on either side as the train eventually crawled on to high, tabletop grasslands creased with snow. Birds flew at window level. I could see lakes of an unreal cobalt blue to the north. The train pulled into a sprawling rail yard: the Kazakh side of the Kazakhstan-China border.

Workers unhitched the cars, lifted them, one by one, ten feet high with giant jacks, and replaced the wide-gauge Russian undercarriages with narrower ones for the Chinese tracks. Russian gauges, still in use throughout the former Soviet Union, are wider than the world standard. The idea was to prevent invaders from entering Russia by train. The changeover took hours.

— Robert D. Kaplan, The Ends of the Earth

I read this passage today while sunning myself at Hope Springs Resort near Palm Springs. Tough life, I know.

The passage above immediately made me think of Facebook, and I had visions of the old Facebook logo with a washed-out Stalin face next to the wordmark (I’m a visual person). But the thought came from some specific recent developments, and fit into a broader framework that I talked about loosely with Steve Gillmor on his podcast. I also wrote about it last week, essentially calling for Facebook and Google to come together to co-develop standards for the social web, but, having been reading up on Chinese, Russian, Turkish and Central Asian history, and being a beneficiary of the American enterprise system, I’m coming over to Eran and others’ point that 1) it’s too early to standardize and 2) it probably isn’t necessary anyway. Go ahead, let a thousand flowers bloom.

If I’ve learned anything from Spread Firefox, BarCamp, coworking and the like, it’s that propaganda needs to be free to be effective. In other words, you’re not going to convince people of your way of thinking if you lock down what you have, especially if what you have is culture, a mindset or some other philosophical approach that helps people narrow down what constitutes right and wrong.

Look, if Martin Luther had nailed his Ninety-five Theses to the door but had ensconced them in DRM, he would not have been as effective at bringing about the Reformation.

Likewise, the future of the social web will not be built on proprietary, closed-source protocols and standards. Therefore, it should come as no surprise that Google wants OpenSocial to be an “open standard” and Facebook wants to be the openest of them all!

The problem is not about being open here. Everyone gets that there’s little marginal competitive advantage to keeping your code closed anymore. Keeping your IP cards close to your chest makes you a worse card player, not a better one. The problem is with adoption, gaining and maintaining [developer] interest and in stoking distribution. And that brings me to the fall of Communism and the USSR, back where I started.

I wasn’t alive back when the Cold War was in its heyday. Maybe I missed something, but let’s just go on the assumption that things are better off now. From what I’m reading in Kaplan’s book, I’d say that the Soviets left not just social, but environmental disaster in their wake. The whole region of Central Asia, at least in the late 90s, was fucked. And while there are many causes, more complex than I can probably comprehend, a lot of it seems to have to do with a lack of cultural identity and a lack of individual agency in the areas affected by, or left behind by, Communist rule.

Now, when we talk about social networks, I mean, c’mon, I realize that these things aren’t exactly nations, nation-states or even tribal groups warring for control of natural resources, food, potable water, and so forth. BUT, the members of social networks number in the millions in some cases, and it would be foolish not to appreciate that the borders — the meticulously crafted hardline boundaries between digital nation-states — are going to be redrawn when the battle for cultural dominance between Google (et al) and Facebook is done. It’s not the same caliber of détente that we saw during the Cold War, but it’s certainly a situation where two sides with very different ideological bents are competing to determine the nature of the future of the [world]. On the one hand, we have a nanny state who thinks that it knows best and needs to protect its users from themselves, and on the other, a laissez-faire-trusting band of bros who are looking to the free market to inform the design of the Social Web writ large. On the one hand, there’s uncertainty about how to build a “national identity”-slash-business on top of lots of user data (that, oh yeah, I thought was supposed to be “owned” by the creators), and on the other, a model of the web that embraces all its failings, nuances and spaghetti code, but that, more than likely, will stand the test of time as a durable provider of the kind of liberty and agency and free choice that wins out time and again throughout history.

That Facebook is attempting to open source its platform, to me, sounds like offering the world a different rail gauge specification for building train tracks. It may be better, it may be slicker, but the flip side is that the Russians used the same tactic to try to keep outsiders from having any kind of competitive advantage over their people or influence over how they did business. You can do the math, but look where it got ’em.

S’all I’m sayin’.

The battle for the future of the social web

When I was younger, I used to bring over my Super Nintendo games to my friends’ houses and we’d play for hours… that is, if they had an SNES console. If, for some reason, my friend had a Sega system, my games were useless and we had to play something like Sewer Shark. Inevitably less fun was had.

What us kids didn’t know at the time was that we were suffering from a platform war, that manifested, more or less, in the form of a standards war for the domination of the post-Atari video game market. We certainly didn’t get why Nintendo games didn’t work on Sega systems, they just didn’t, and so we coped, mostly by not going over to the kid’s house who had Sega. No doubt, friendships were made — and destroyed — on the basis of which console you had, and on how many games you had for the preferred platform. Indeed, the kids with the richest parents got a pass, since they simply had every known system and could play anyone’s games, making them by default, super popular (in other words, it was good to be able to afford to ignore the standards war altogether).

Fast-forward 10 years and we’re on the cusp of a new standards war, where the players and stakes have changed considerably but the nature of warfare has remained much the same as Carl Shapiro and Hal R. Varian described in Information Rules in 1999. But the casualties, as before, will likely be the consumers, customers and patrons of the technologies in question. So, while we can learn much from history about how to fight the war, I think that, for the sake of the web and for web citizens generally, this coming war can be avoided, and that, perhaps, it should be.

Continue reading “The battle for the future of the social web”

An update to my OpenID Shitlist, Hitlist and Wishlist

OpenID hitlist

Back in December, I posted my OpenID Shitlist, Hitlist and Wishlist. I listed 7 companies on my shitlist, 14 on my hitlist and 12 on my wishlist. Given that it’s been all of two months and we’ve already made some solid progress, I thought I’d go ahead and give a quick update on recent developments.

First, the biggest news is probably that Yahoo has come online as an OpenID 2.0 identity provider. This is old news for anyone who’s been watching the space, but given that I called them out on my wishlist (and that their coming online tripled the number of OpenIDs), they get serious props, especially since Flickr profile URLs can now be used as identity URLs. MyBlogLog (called out on my shitlist) gets a pass here since they’re owned by Yahoo, but I’d still like to see them specifically support OpenID consumption.

Second biggest news is the fact that, via Blogger, Google has become both an OpenID provider (with delegation) and consumer. Separately, Brad Fitzpatrick released the Social Graph API and declared that URLs are People Too.
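For anyone unfamiliar with delegation: it works by adding a couple of <link> tags to the head of the page you want to claim as your identity URL, pointing relying parties at your actual provider. As a minimal sketch of the consumer side (the provider and blog URLs below are hypothetical placeholders, not real endpoints), here’s how a relying party might discover those tags:

```python
from html.parser import HTMLParser

class OpenIDDelegationParser(HTMLParser):
    """Collect the OpenID 1.x / 2.0 delegation <link> tags from a page."""

    RELS = ("openid.server", "openid.delegate",
            "openid2.provider", "openid2.local_id")

    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel, href = attrs.get("rel"), attrs.get("href")
        if rel in self.RELS and href:
            self.links[rel] = href

# A hypothetical blog homepage delegating its identity to a provider;
# a real consumer would fetch this HTML over HTTP instead.
page = """
<html><head>
  <link rel="openid2.provider" href="https://example-provider.invalid/op" />
  <link rel="openid2.local_id" href="https://example.blogspot.com/" />
</head><body></body></html>
"""

parser = OpenIDDelegationParser()
parser.feed(page)
print(parser.links)  # provider endpoint plus the local identity being claimed
```

The nice property of this scheme is that your identity URL stays yours: you can swap the provider behind it by editing two lines of markup, without re-registering anywhere.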

Next, I’ll give big ups to PBWiki for today releasing support for OpenID consumption. This is a big win considering they were also on my shitlist and I’d previously received assurances that OpenID for PBWiki would be coming. Well, today they delivered, and while there are opportunities to improve their specific implementation, I’d say that Joel Franusic did a great job.

And, in other good news, Drupal 6.0 came out this week, with support for OpenID in core (thanks to Walkah!), so there’s another one to take off my hitlist.

I’d really like to take Satisfaction off my list, since they’ve released their API with support for OAuth, but they’ve still not added support for OpenID, so they’re not out of the woods just yet… even though their implementation of OAuth makes me considerably happy.

So, that’s about it for now. I hear rumblings from Digg that they want to support OpenID, but I’ve got no hard dates from them yet, which is fine. There’re plenty more folks who still need to adopt OpenID, and given the support the foundation has recently received from big guys like Microsoft, Google, Yahoo, Verisign and IBM, my job advocating for this stuff is only getting easier.

Blogger adopts OpenID site-wide

Twitter / Case: OpenID FTW!

Clarification: The title of this post is a little misleading, as Oxa pointed out. Blogger has only enabled OpenID commenting site-wide. The author regrets any impression otherwise.

In what has to be a positive sign of things to come, Blogger has taken the OpenID commenting feature from beta to live in a matter of two weeks.

This is huge.

With great progress coming on OAuth Discovery, we’re rapidly approaching the plumbing needed to really start to innovate on citizen-centric web services… and social network portability.

Slow, steady and iterative wins the race

Consider this a cry for anti-agile, anti-hyped solutions for delivering the open social web.

I read posts like Erick Schonfeld’s “OpenSocial Still ‘Not Open for Business’” and I have two reactions: first, TechCrunch loves to predict the demise or tardiness of stuff but isn’t very good at playing soothsayer generally; and second, there’s really no way Google could have gotten the launch of OpenSocial right. And not like my encouragement will make much difference in this case, but I’m with Google this time: just keep trucking, we’ll get there eventually.

On the first point, I’d like to jog your memory back to when TechCrunch declared Ning RIP and that Mozilla’s Coop was bad news for Flock. Let’s just say that both companies are alive and kicking and looking better than ever. I don’t really care to pick on TechCrunch, only to point out that it often doesn’t serve much purpose to try to predict the demise of anyone before they’re really gone, or to lament the latency of some brand-new technology that holds great promise but has been released early.

On my second reaction, well, I have some sympathy for Google for releasing OpenSocial prematurely, before it was fully baked or before they had parity with the Facebook platform. We suffered a similar coal-raking when Flock 0.2 launched. It was literally a bunch of extensions and a theme baked onto the husk of Firefox, and when people asked “hey, where’s the beef?!”, well, we had to tell them it was in the future. You can imagine how well that went over.

The point is, we kept at it. And after I left, Flock kept at it. And so just over two years after we’d released 0.2, Flock 1.0 came out, and the reviews, well, were pretty good.

Had we not released Flock 0.2 when we did and instead gone underground, there’s a chance we would have had more time to prove out the concepts internally before taking them to a wider and less forgiving audience, and we would have avoided the excessive media buzz that we unnecessarily spun up and that blew up in our faces. I learned from that experience that enthusiasm isn’t enough to convince other people to be patient and to believe that you’re going to deliver. I also learned the hard way how long good technology development actually takes, and that whatever you come out with first will typically have to be radically changed through the testing and iterations of any design process.

To look at what Google’s attempting to do, I can see the same kind of aw-shucks, we’re-changing-the-world technical development going on. But unlike Flock’s early days, there wasn’t the same political and economic exposure that I’m amazed to see Google taking on. It’s one thing for Facebook, with its youth and bravado, to force its partners to toe whatever line it sets and to build to its specifications. After all, if they don’t, their apps won’t work.

Google’s doing something slightly different and more dangerous, in that, not only are they basically building a cross-platform operating system that runs on the web itself, but they’re also having to balance the needs and passing fancies of multiple business partners involved in actually implementing the specifications to greater or lesser degrees, with varying amounts of attention and wherewithal.

Worst of all, between Facebook’s and Google’s platforms, we’re quickly heading down a path that leads to a kind of stratified Internet-Explorification of the web that we haven’t seen since Firefox 1.0 came on the scene. Already we’re seeing inconsistencies between the existing OpenSocial containers, and it’s only going to get worse as more adopters try to build against the unfinished specs. As for Facebook apps, well, for every Facebook app built to run inside of Facebook, that’s one less app that, today, can run on the Open Web, and then God kills another kitten.

So the moral here is that slow and steady isn’t as sexy for the media or VCs, but it typically wins races in terms of technology adoption and deployment. So I guess I can’t fault Google for releasing a little too early, but it’ll be interesting to see if it actually helps them get to a stable OpenSocial sooner. Whether it matters when they get there is a matter of debate, of course, but in the meantime, hell, there’s plenty for us to do to improve the web as it is today and to solve some tricky problems that neither OpenSocial nor Facebook has begun to consider. So, I’m with Doc. And I’m in it for the long haul. I’ll keep my eyes on OpenSocial and frankly hope that it succeeds, but in the end, I’m interested in developing the best citizen-centric technology that’s going to shape the next evolution of the web.

So long as it’s open, free, non-proprietary and inclusive, well, even if it’s slow going getting it done, at least we’re not treading back in time.