Stowe Boyd launches Microsyntax.org

Stowe Boyd launched Microsyntax.org this morning and announced that I will be the first member of his advisory board.

Stowe and I have batted around a number of ideas for making posts on Twitter contain more information than what is superficially presented, and this new effort should create a space in which ideas, research, proposals and experiments can be made and discussed.

Ultimately, my hope is that Microsyntax.org will reach beyond Twitter and provide a forum for thinking through how we encapsulate data in channels that don’t natively support metadata by using conventions that express as much meaning as they encode.

Twitter with Channel tags

Since I originally proposed hashtags in August of 2007, I’ve thought a lot about what these conventions mean, and how wide adoption of something can radically elevate the field of competition.

There is a similar opportunity here, where, if the discourse is developed properly, such conventions can actually enable a greater range of expression over narrow channels, allowing for wider participation in and understanding of conversations.

Take, for instance, Stowe’s “GeoSlash” (as christened by Ross Mayfield) proposal. Whether his syntax is the right one (or even necessary!) isn’t something that can be argued rationally. It’s only something that can be investigated through experimentation and observation. To this point, there has been no central convening context in which such a proposal could be brought up, debated, discussed, considered, tinkered with, improved, championed and evaluated.

As a result, countless proposals have been made for baking more “meta” into Twitter’s data stream, but few have really taken off (compared with the relative success of hashtags and @replies).

While I’m sympathetic to arguments (and pleas!) against adding additional structure or formatting to tweets, I think that the bigger opportunity here extends beyond Twitter (which is primarily a public broadcast channel) to other applications, regardless of whether they use Twitter as the message routing infrastructure or not. Indeed, given my recent (and very positive) experience where @AlaskaAir checked me in to my flight over Twitter, you can imagine an opportunity developing where, say, forward-thinking airlines actually collaborate to develop a syntax for expressing checkin requests via some sort of direct SMS-based channel.

The situation of multiple competing-yet-overlapping SMS syntaxes led me, somewhat mockingly, to start documenting what I called “picoformats”. If I’ve learned anything from the microformats process, it’s that anyone can invent a schema or a format, but getting adoption is the hard part (and also the most valuable). So, in order to promote adoption, you should always try to model behavior that already exists in the wild, and then work to make the intentions of the behavior more clear, repeatable and memorable.

Most microsyntax efforts fail to follow this process, and as a result, fail in the wild. Efforts that employ the scientific method tend to see more success: hashtags modeled the convention started by IRC channels and Jaiku (Joshua Schachter also used the hash to denote tags in the early days of Delicious); the $ticker convention (from StockTwits) follows how many financial trade publications denote stock symbols. And so on.
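To make that concrete, here is a minimal sketch (in Python, against a made-up tweet) of the kind of lightweight parsing these conventions enable. The whole point of microsyntax is that meaning can be recovered from plain text with nothing fancier than pattern matching; the patterns below are rough approximations, not any kind of spec:

```python
import re

# Rough patterns for three conventions mentioned above; real-world
# parsing (Unicode, punctuation, edge cases) is messier than this.
HASHTAG = re.compile(r'(?:^|\s)#(\w+)')             # e.g. #barcamp
MENTION = re.compile(r'(?:^|\s)@(\w+)')             # e.g. @stoweboyd
TICKER = re.compile(r'(?:^|\s)\$([A-Z]{1,5})\b')    # e.g. $AAPL (StockTwits-style)

def extract_microsyntax(text):
    """Recover structured data from an otherwise unstructured tweet."""
    return {
        "hashtags": HASHTAG.findall(text),
        "mentions": MENTION.findall(text),
        "tickers": TICKER.findall(text),
    }

# A hypothetical tweet:
print(extract_microsyntax("@stoweboyd watching $AAPL and tagging #microsyntax"))
# {'hashtags': ['microsyntax'], 'mentions': ['stoweboyd'], 'tickers': ['AAPL']}
```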

So when it comes to proposing new behaviors that don’t yet exist in the wild, I think that the Microsyntax.org project will be an excellent place to convene and host conversations and experiments, many of which will admittedly fail. But at minimum, there will be a record of what’s been tried, what the thinking and goals were, and where, hopefully, some modest successes have been achieved.

I’m looking forward to contributing to this effort and helping to stand up the community infrastructure with Stowe. While I’m not eager to see the Twitter stream polluted with characters intended only for computers, I think that there is still much unexplored ground in what can be accomplished through modest modifications of the way that we communicate over these kinds of narrow, unidimensional channels.

Comixology and the future of connected commerce

Custom Burger Receipt

It dawned on me recently that, not only are we in a period of great change and transformation, but that those of us who have been working on the web to make it a more social and humane place have only barely begun the process of taking the “personality-ization” (not “personalization”) and connectedness that we take for granted on the web into the offline world.

All at once, my senses tell me that things are coming to a head, and, as Om Malik pointed out, we are at the end of an era. It’s anyone’s guess how the next chapter of the social web will read, but a few experiences lately got me thinking.

A connected Apple experience

I first saw a glimmer of this in Boston, while shopping at the Apple store for a USB charger. Upon checkout, I was asked whether I wanted a print copy of my receipt or to have it emailed to me. Reluctant to explain the “+apple” in my email address, I hesitated for a moment but submitted: “by email.”

The Apple employee looked at his screen, read back my email address and said, “Is that correct?”

“Yeah…” I stammered, somewhat surprised. “It is.”

Of course, all they had done was correlate my credit card number with the email address I’d previously had my receipts sent to. But that was when I was shopping in San Francisco. Here I was in Boston!

Apple had recorded my email, associated it with my credit card (perhaps more than one), and then shared it with all their stores, providing me with a specific kind of convenience that few other stores — at least that I know of — have attempted. (Aside: And don’t give me any buts about privacy and correlations and any of that bullshit. Privacy has a certain kind of value and importance, but I’ve heard so little vision out of privacy zealots that it’s time to think about the other side of the coin.)

Now, that small example of convenience may not seem significant on the surface, but it does suggest that new connections — between the world of brick and mortar identity and the realm of digital identity — are emerging, creating new opportunities for creative commerce.

Comixology and Isotope

James Sime by Bryan Lee O'Malley

My favorite comic book store is located in Hayes Valley in San Francisco. It’s run by James Sime — someone who belongs in comics, much more so than he belongs selling them. His shop is called Isotope, and every month or so, as time allows, I stop in to pick up my “subscriptions” — known in the comic book universe as my “pull list”.

The pull list is a simple concept, essentially a list of comic books that I want to set aside on an individual or ongoing basis — that I’ll come and pick up later. Since new books arrive every Wednesday, it’s not terribly efficient for me to drop in just to pick up one or two issues, so the pull list is the best way to make sure I don’t miss an issue while stretching the time between visits.

The pull list is also a kind of personal relationship: I trust James to not only grab the titles that I’ve explicitly asked for, but to also suggest new books that I might not otherwise learn about. He also has to set aside inventory that might otherwise be made available to his walk-in patrons — even though I might ultimately decide, “Y’know, I think I’ll pass on this one”, so in that way, he’s trusting me to be a reliable patron.

Some time ago, James told me about a dashboard widget that he had discovered that let him see what comics were coming out soon. I checked it out — but then forgot about it — preferring the high touch relationship I had of visiting the store and browsing the shelves.

On a recent visit, James told me that he’d actually been in touch with the makers of the widget and that they were collaborating on “something big.” Having personally introduced James to both Twitter and Foursquare, I was intrigued… I mean, James has long had a blog, has presented at a BarCamp — as comic book retailers go, he’s about as 2.0 as you can get. And since he knows what a big web dork I am, his excitement told me that he was indeed on to something.

“They have an iPhone app,” he began, “called Comixology. It’s like the dashboard widget, but get this: I’ve been working with them on a pilot to hook up my store to their website.”

“Ok,” I said.

“So go to their website and create an account. Then search for my store. You’ll see a button that says ‘connect’. Hit that. From then on, whenever you add something to your digital pull list on the Comixology service, I’ll see it and add a copy to your stack.”

Retail Connection

“Wow,” I thought, “this changes everything.”

Connected commerce, activity streams and the point

It isn’t that my Apple experience or the Comixology service is the answer to the question “what is the future of retail?”, but together they outline the contours of the nexus between the social web and the real world.

Given what I’ve been working on, in a roundabout way, with the DiSo Project, it is patently clear to me that where Apple connects a credit card number to an email address, I see an OpenID associated with a payment gateway and a transaction dropbox that happens to be hosted by Google (that is, my email); where James and Comixology see a contextualized relationship management and inventory tool, I see an iPhone application that lets me buy physical goods, connect to a real-life merchant of my choosing (based on his high-touch service), and then communicate my tastes and purchases to my friends and fourth-party services through activity streams.

Imagine: after a month or so assembling a good-sized pull list on Comixology.com, I visit Isotope and James presents my selections, suggesting a few new books I might be interested in. I agree to give them a try, he updates my pull list on his Mac through the Comixology site, and the changes immediately appear on my iPhone. I review the list — everything looks good — and tap the “checkout” button in the app. Pre-loaded funds are immediately withdrawn from my Apple iTunes account; James receives an instant payment confirmation and I can take my comics to go without ever having reached for my wallet. Walking out the door with my nose in my phone, I uncheck a few comics from my transaction history and send the rest to my activity broker — which in turn pushes updates out to Facebook, FriendFeed, and to anyone else who is subscribed to my comic book purchases (yeah, like two people) — and in turn, they take my social recommendations (along with James’s) and add some of my picks to their respective pull lists.

The whole thing takes about three minutes, with room for salutations.

This is buyer-mediated commerce (as opposed to vendor-mediated), or what I might call “connected commerce.” This is one potential future in which platforms like Facebook Connect get real, and where I think identity, social, commercial and location technology will begin to hit their stride.
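To ground the activity-stream hand-off in that scenario, here is a sketch of what a “purchase” activity might look like on the wire. The shape is loosely inspired by the Activity Streams efforts mentioned above; every field name and value is an illustrative assumption, not a real schema:

```python
import json

# A hypothetical "purchase" activity, as my activity broker might
# receive it. None of these field names come from a real spec; they
# just illustrate the buyer-mediated flow described above.
activity = {
    "actor": "http://factoryjoe.com/",             # the buyer's identity URL
    "verb": "purchase",
    "object": {
        "type": "comic-book",
        "title": "Example Comic #12",              # placeholder title
        "merchant": "Isotope Comics",
    },
    "audience": ["friends", "comic-subscribers"],  # who may see this
    "published": "2009-05-20T18:30:00Z",
}

# The broker would then fan this out to Facebook, FriendFeed, etc.
print(json.dumps(activity, indent=2))
```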

Google Profiles, namespace lock-in & social search

I’d originally intended to respond to Joshua Schachter’s post about URL shorteners and how they’re merely the tip of the data iceberg, but since I missed that debate, Google has fortuitously plied me with an even better example by releasing custom profile URLs today.

My point is to reiterate one of Tim O’Reilly’s ever-prescient admonishments about Web 2.0: lock-in can be achieved through owning a namespace. In full:

5. Chief among the future sources of lock in and competitive advantage will be data, whether through increasing returns from user-generated data (eBay, Amazon reviews, audioscrobbler info in last.fm, email/IM/phone traffic data as soon as someone who owns a lot of that data figures out that’s how to use it to enable social networking apps, GPS and other location data), through owning a namespace (Gracenote/CDDB, Network Solutions), or through proprietary file formats (Microsoft Office, iTunes). (“Data is the Intel Inside”)

(I’ll note that the process of getting advantage from data isn’t necessarily a case of companies being “evil.” It’s a natural outcome of network effects applied to user contribution. Being first or best, you will attract the most users, and if your application truly harnesses network effects to get better the more people use it, you will eventually build barriers to entry based purely on the difficulty of building another such database from the ground up when there’s already so much value somewhere else. (This is why no one has yet succeeded in displacing eBay. Once someone is at critical mass, it’s really hard to get people to try something else, even if the software is better.) The question of “don’t be evil” will come up when it’s clear that someone who has amassed this kind of market position has to decide what to do with it, and whether or not they stay open at that point.)

Consider two things:

Owning the “people” namespace will determine whether people see the web through Google’s technicolor glasses or Facebook’s more nuanced and monochrome blue hues.

Curiously, it has been (correctly) argued that Google “doesn’t get social”, a criticism that I generally support. And yet, with their move to more convenient profile URLs that point to profiles that aggregate content from across the web (beating Facebook to the punch), a bigger (albeit incomplete) picture begins to emerge.

When I blogged that my name is not a URL, I wasn’t so much arguing against vanity or custom profile URLs but instead making the point that such things really should go away over time, from a usability perspective.

Let me put it this way: at one point, if you weren’t in the Yellow Pages, you basically didn’t exist. Now imagine there being several competitors to the Yellow Pages — the Red, Green and Blue Pages — each maintaining overlapping but incomplete listings of people. You’re going to want to use the one that has the most complete, exhaustive and easy-to-use list of names, right? And, I bet beyond that, if one of them was able to make the people that you know and actually care about more accessible to you, you’d pick that one over all the others. And this is where owning — and getting people to “live in” — a namespace begins to reveal its significance.

Google Profile Search

So, it’s telling to look at Google’s and Facebook’s respective approaches to their people search engines and indexes. Indeed, having a readily accessible index of living persons — structured by their connections to one another — will become a necessary precondition to getting social search right (see Aardvark for a related approach, which connects to the Facebook and IM portions of your social graph to facilitate question answering).

As social search and living through your social graph becomes “the norm” (i.e. with increasing reliance on social filtering), Google and Facebook’s ability to create compelling experiences on top of data about you and who you know will come to define and differentiate them.

To date, Google’s profile search has been rather unloved and passed over, but with the new, more convenient profile URLs and the location of profile search at google.com/profiles, I suspect that Google is finally getting serious about social.

Compare Facebook and Google’s search results for my buddy, Dave Morin:

Facebook logged out:

Search Names: dave morin | Facebook

Facebook logged in:

Facebook | Search: dave morin

Google results (there’s no difference between logged in and logged out views):

Dave Morin - Google Profile Search

Notice the difference? See how much better Facebook’s search is because it knows which “Dave Morin” is my friend?

Now, consider the profile result when you click through:

Dave’s Facebook profile (logged out):

Dave Morin - San Francisco, CA | Facebook

(Facebook’s logged in profile view is as you’d expect — a typical Facebook profile with stream and wall.)

Now, here’s the clincher. Take a look at Google’s profile for Dave:

Dave Morin - Google Profile

Google is able to provide a much richer and simpler profile that’s far more accessible (without requiring any kind of sign-in) because they’ve radically simplified their privacy model on this page (show what you want, and nothing more). Indeed, Google’s made it easier for people to be open — at least with static information — than Facebook!

So much for Facebook’s claim to openness! 😉

Of course, default Google profiles are pretty sparse, but this is just the beginning. (Bonus: both Facebook and Google public profiles support the hCard microformat!)
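And since the profiles are marked up, pulling structured contact data out of one takes only a few lines of scraping. A sketch, assuming the BeautifulSoup library and using an inline snippet in place of a live profile page:

```python
from bs4 import BeautifulSoup

# A trimmed-down stand-in for the markup a public profile might emit;
# real pages are bigger, but the hCard class names are the contract.
html = """
<div class="vcard">
  <a class="url fn" href="http://example.com/dave">Dave Morin</a>
  <span class="locality">San Francisco</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for card in soup.select(".vcard"):
    name = card.select_one(".fn")
    link = card.select_one(".url")
    print(name.get_text(strip=True), "->", link["href"])
# Dave Morin -> http://example.com/dave
```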

And the point is: where will you build your online identity? Under whose namespace do you want to exist? (Personally, I choose my own.)

Clearly the battle for the future of the social web is heating up in subtle but significant ways, and Google’s move today shouldn’t be thought of as anything less than the opening salvo in moving the battle back to its turf: search.

Does OpenID need to be hard?

Prompted by posts by Randy Reddig and Tony Stubblebine and a conversation with Elliott Kember, I wanted to address, yet again, the big fat stinking elephant in the room: OpenID usability and the paradox of choice.

Elliott proposed a pretty clear picture of what he thinks OpenID should look like on StackOverflow, given the relative value of each provider to him:

How OpenID should look, by Elliott Kember

Compare that to how it actually looks today:

Login or Register - Stack Overflow

I’m with him. I get it.

We’re at a crossroads where it really doesn’t matter which OpenID provider you use, because while an OpenID might save you the hassle of creating yet another password, there’s little else that you can do with it beyond that.

And, if you’ve already got more than one OpenID, not much exists to help you decide which OpenID provider you should use (many people tell me: “I hate OpenID! I’ve got like 15 OpenIDs and I never know which one to use!”).

So on the one hand, we’ve done a poor job of building out the value of using an OpenID, and on the other, have failed to explain what it means to have an OpenID (or several) or how to go about deciding which one to use and why (hat tip to OpenID Explained for taking a crack at it).

Meanwhile, there’s a tension between the convenience of having one reusable and durable identity against the desire to express many aspects of one’s identity with many separate IDs, resulting in complex user interfaces.

Fortunately, OpenID as a technology can serve both needs, but communicating and demonstrating that effectively has remained a challenge.

Putting OpenID in context

For my part, I’ve used the metaphor of credit cards to try to explain OpenID:

  • Online identity is moving from its “cash and check” era to the era of “credit cards”.

    Before the advent of charge cards, payment systems were decentralized — inefficient, cumbersome, and prone to fraud. There were a number of different, non-interoperable payment mechanisms that took 30+ years to get straightened out. Indeed, the credit card system that we take for granted today (so much so that airlines have moved to relying on them as the sole form of in-flight payment) only came about in the late 90s, a good 70 years after Western Union began issuing the first credit cards.

    Imagine OpenID taking 70 years to get mass adoption!!

    Taking this metaphor at face value, it’s clear that we’re in the neonatal stages of the build-out of the OpenID network and still have much work ahead of us. Fortunately, adoption cycles have also accelerated — I don’t have the actual numbers off-hand, but I can tell you that it took longer than four years to get the first 500 million credit card users!

  • As with credit cards, you can have as many OpenIDs as you like for different purposes. I presume that common divisions will fall along work, personal, and affinity lines:

    Credit cards

    …and of course there are cases I’ve not even considered yet

  • To close out this metaphor, picking an identity provider should be like picking a bank or credit card company: a fourth-party service provider that advocates for your interests, since you’re their customer! Today, to Elliott’s point, there are not many obvious differences between providers; over time, I expect this to change and for this relationship to become core to one’s experience on (and enjoyment of) the web.

    Instead of agreeing to terms of service that disclaim all responsibility to you, the customer, I hope that competition in the identity space will lead providers to actually take responsibility for their services — charging good money for doing so. If your account gets hacked — no problem! — your identity provider can put back the pieces and make things right again! You could even take out online identity insurance in case your identity is ever stolen — so you can always get back to your life and recover your data without the hassle and interruption that such a theft would cause today.

    Which credit card company would you give your business to? The one that automatically credits back false charges on your account and investigates them or the one that harasses you when you travel and presumes the worst of you? I know which one I’d pick — and I’d apply the same decision heuristics to whoever provides my online identity.

The OpenID “NASCAR”

Apart from confusion over having multiple OpenIDs, the user interface that has resulted from having many top-tier providers in the space also causes confusion.

Elliott’s criticism of the StackOverflow OpenID interface is really aimed at the noise of the brand logos displayed as buttons — intended to help people sign in using an account they already have. This kind of interface is what Daniel Burka refers to as the “OpenID NASCAR” because all the logos look like a NASCAR racecar covered with brand stickers, all jockeying for your attention.

He’s got a point. Since he’s logging in with his Google account, he really only wants a Google button:

How OpenID should look, by Elliott Kember

For all he cares, it could look like this:

OpenID without choice

…and the result would be the same thing.

Indeed, it is this kind of lack of choice that makes Facebook Connect so seductively compelling.

And dangerous.

It’s a frigging button. You can’t mistake it. If you argued that reducing choice increases the likelihood that the user will “get it right” and be able to sign in to your site, you’d be correct.

But, that kind of restriction of freedom of choice impairs healthy competition in the marketplace. And lack of competition is, generally, bad for the health of an ecosystem, and ultimately bad for the consumer.

The harmony in the Yin & Yang of Simplicity and Choice

Ignoring your actual preference for Coke, if this were the universal experience for buying soda, one might argue that simplicity and fewer choices are better:

No Choice

But having choice is a better overall condition. Even when a popular brand is made more prominent, having alternatives means at least maintaining the illusion of control over one’s destiny:

Coke & Others

(Original photo by Bryan Costin shared under the Creative Commons license.)

So the question is, how can we simplify OpenID so that anyone can use it without reducing freedom of choice? Well, what if the backend technology was fundamentally interoperable, but every site simply supported a button, like this:

Uber-sign in button

…and upon clicking it, a new window would pop open and you’d be presented with a box, in which you could type just about anything: an email address, a URL, the name of a social network, your phone number… heck, you could even type your name (and, if you were signed in to a site like Facebook that leaks basic aspects of your identity, you could select yourself from a list of names and photos) and then proceed through the typical OpenID flow to prove that you are who you are, completing the sign-in process.
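As a rough illustration of how a single “type anything” box might work behind the scenes, here is a sketch that guesses what kind of identifier was typed before handing it off to the appropriate discovery step. It is a deliberate simplification, not the OpenID 2.0 normalization algorithm:

```python
def classify_identifier(raw):
    """Guess what a user typed into a universal sign-in box.

    Returns a (kind, value) tuple; each kind would route to a different
    discovery mechanism in a real implementation.
    """
    s = raw.strip()
    if "@" in s and " " not in s:
        return ("email", s)                # e.g. email-based discovery
    digits = s.replace("+", "").replace("-", "").replace(" ", "")
    if digits.isdigit() and len(digits) >= 7:
        return ("phone", digits)           # e.g. SMS-based verification
    if "." in s and " " not in s:
        if not s.startswith(("http://", "https://")):
            s = "http://" + s              # OpenID assumes http when no scheme
        return ("url", s)                  # classic OpenID discovery
    return ("name", s)                     # fall back to a people search

# A few hypothetical inputs:
for raw in ["user@example.com", "factoryjoe.com", "+1 415 555 0100", "Dave Morin"]:
    print(classify_identifier(raw))
```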

One problem that I’ve observed with OpenID input boxes, to date, is that they look far too similar to another solitary but familiar input box. Namely — the Google search box! …where anything goes:

Googlebox

Given the habits that people have formed using Google, we must balance the need for simplicity with the ability to make an informed personal choice about which identity to present to a site. These needs are, in many respects, at odds. Yet the future of OpenID depends on us unraveling these issues and developing suitable interfaces that are streamlined and straightforward while also enhancing individual freedom.

With the recently approved User Interface Working Group, headed up by Allen Tom from Yahoo!, and with the involvement of folks from Facebook and other organizations, I’m optimistic that we will make considerable progress this year.

And so, ultimately, no, OpenID need not be hard. But making it easy just won’t happen overnight.

Generation Open

I spent the weekend in DC at TransparencyCamp, an event modeled after BarCamp focused on government transparency and open access to sources of federal data (largely through APIs and web services). Down the street, a social-media savvy conference called PowerShift convened over 12,000 of the nation’s youth to march on Congress to have their concerns about the environment heard. They were largely brought together on social networks.

Last week, after an imbroglio about a change to their terms of service, Facebook published two plain-language documents setting the course for “governing Facebook in an Open and Transparent way“: a Statement of Rights and Responsibilities coupled with a list of ten guiding principles.

The week before last, the Association for Computing Machinery (ACM) released a set of recommendations for open government that, among other things, called for government data to be available in formats that promote reuse and are available via public APIs.

WTF is going on?

Clearly something has happened since I worked on the Spread Firefox project in 2004 — a time when Mozilla was an easily dismissed outpost for “modern communists” (since meritocracy and sharing equals Communism, apparently).

Seemingly, the culture of “open” has infused even the most conservative and blood-thirsty organizations, with companies falling over each other to claim the mantle of being the most open of them all.

So we won, right?

I wouldn’t say that. In fact, I think it’s now when the hard work begins.

. . .

The people within Facebook not only believe in what they’re doing but are on the leading edge of Generation Open. It’s not merely an age thing; it’s a mindset thing. It’s about having all your references come from the land of the internet rather than TV and becoming accustomed to — and taking for granted — bilateral communications in place of unidirectional broadcast forms. Where authority figures used to be able to get away with telling you not to talk back, Generation Open just turns to Twitter and lets the whole world know what they think.

But it’s not just that the means of publishing have been democratized and the new medium is being mastered; change is flowing from the events that have shaped my generation’s understanding of economics, identity, and freedom.

Maybe it started with Pearl Jam (it did for me!). Or perhaps witnessing AOL incinerate Netscape, only to see a vast network emerge to champion the rise of Firefox from its ashes. Maybe being bombarded by stinking piles of Flash and Real Player one too many times led to a realization that, “yeah, those advertisers ain’t so cool. They’re fuckin’ up my web!” Of course, watching Google become a residue on the web itself, imbuing its colorful primaries on HTTP, as a lichen seduces a redwood, becoming inseparable from the host, also suggests a more organic approach to business as usual.

Talking to people who hack on Drupal or Mozilla, I’m not surprised when they presume openness as a matter of course. They thrive on the work of those who have come before and, in turn, pay it forward. Why wouldn’t their work be open?

Talking to people at Facebook (in light of the arc of their brief history) you might not expect openness to come culturally. Similarly, talking to Microsoft you could presume the same. In the latter case, you’d be right; in the former, I’m not so sure.

See, the people who populate Facebook are largely from Generation Open. They grew up in an era where open source wasn’t just a foregone conclusion; it was central to how many of them learned to code. It wasn’t in computer science classes at top universities — those folks ended up at Arthur Andersen, Accenture or Oracle (and probably became equally boring). Instead, the hobbyist kids cut their teeth writing WordPress plugins, Firefox extensions, or Greasemonkey scripts. They found success because of openness.

That Zuckerberg et al talk about making the web a more “open and social place” where it’s easy to “share and connect” is no surprise: it’s the open, social nature of the web that has brought them such success, and will be the domain in which they achieve their magnum opus. They are the original progeny of the open web, and its natural heirs.

. . .

Obama is running smack against the legacy of the baby boomers — the generation whose parents defeated the Nazis. More relevant is that the boomers fought the Nazis. Their children, in turn, inherited a visceral fear of machinery, in large part thanks to IBM’s contributions to the near-extermination of an entire race of people. If you want to know why privacy is important — look to the power of aggregate knowledge in the hands of xenophobes 70 years ago.

But who was alive 70 years ago? Better: who was six years old and terribly impressionable fifty years ago? Our parents, that’s who.

And it’s no wonder why the Facebook newsfeed (now stream) and Twitter make these folks uneasy. The potential for abuse is so great and our generation — our open, open generation — is so beautifully naive.

. . .

We are the generation that will meet Al Qaeda not “head on”, but by the length of each of its tentacles. Unlike our parents’ enemies, ours are not centralized supernations anymore. Our enemies act like malware, infecting people’s brains, and thus behave like a decentralized zombie-bot horde that cannot be stopped unless you shift the environment or shut off the grid.

We are also the generation that watched our government fail to protect the victims of Katrina — before, during and after the event. The emperor’s safety net — sworn nemesis of fiscal conservatives — turned out not to exist despite all their persistent whining. Stranded, hundreds took to their roofs while helicopters hovered over head, broadcasting FEMA’s failure on the nightly news. While Old Media gawked, the open source community solved problems, delivering the Katrina PeopleFinder database, meticulously culled from public records and disparate resources that, at the time, lacked usable APIs.

But that wasn’t the first time “privacy” worked against us. On September 11, 2001 we flooded the cell networks, just wanting to know whether our friends and family were safe. The network, controlled by a few megacorporations, failed under the weight of our anxiety and calls; those supposed consumer protections designed to keep us safe… didn’t, turning technology and secrecy against us.

. . .

Back to this weekend in DC.

When you put TransparencyCamp in context — and think about all the abuses that have been perpetrated by humans against humans throughout time — you have to stop and wonder: “Geez, what on earth will make this generation any different than the ones that have come before? What’s to say that Zuckerberg — once he assembles a mass of personally identifying information on his peers on an order of magnitude never achieved since humans started counting time — won’t do what everyone in his position has done before?”

Oddly enough, the answer is probably not. The reason is the web. Even weirder is that Facebook, as I write this, seems to be taking steps to embrace the web, seeking to become a part of it — rather than competing against it. It seems, at least in my interactions with folks at Facebook, that a good portion of them genuinely want to work with the web as it is today, as they recognize the power that they themselves have derived from it. As they benefitted from it, they shall benefit it in turn.

Seems counterproductive to all those MBAs who study Microsoft as the masterstroke of the 21st century, but to the citizens of the web — we get it.

What Facebook is attempting — like the Obama administration in parallel — is nothing short of a revolution; you simply can’t evolve out of a culture of fear and paranoia that was passed down to us. You have to disrupt the ecosystem, and create a new equilibrium.

If we are Generation Open, then we are the optimistic generation. Ours only comes around every several generations with the resurgence of pure human spirit coupled with the resplendent realization of intent.

There are, however, still plenty who reject this attitude and approach, suffering from the combined malaise of “proprietariness”, “materialism”, and “consumerism”.

But — I shit you not — as the world turns, things are changing. Sharing and giving away all that you can are the best defenses against fear, obsolescence, growing old, and, even, wrinkles. It isn’t always easy, but it’s how we outlive the shackles of biology and transcend the physicality of gravity.

To transcend is to become transparent, clear, open.

How to use Twimailer securely

Twimailer is a nifty service, launched recently, that makes Twitter BACN (“email that you want, just not right now”) more useful and informative (example).

The only problem is that it requires you to change your Twitter account email to point to an address provided by Twimailer — on the whole, not a big deal if you trust Twimailer, but in general bad practice. (Rod Begbie also pointed out that this prevents people from being able to find you by your email address).

Fortunately there is a better and more secure way to take advantage of Twimailer.

I’ll demonstrate in Gmail, but really all we’re doing is auto-forwarding new follower notifications from Twitter to your Twimailer address. That’s it.

  1. First, go ahead and sign up for a new Twimailer account. To get started, they just need an email address to send your notifications to. Twimailer will assign you a unique email address like [email protected]. Set this aside (copy it to TextEdit or something).
  2. Next, load up your Gmail inbox and search for “is now following you on Twitter!”. Open up one of the notifications from Twitter (the From address should be something like [email protected]). In the right-hand drop-down menu, pick “Filter messages like this”:
    Filter messages like this
  3. You should then see an interface like this (click to enlarge):
    Create a filter
    Go ahead and test this search to make sure it’s working (presuming you haven’t deleted all your notifications).
  4. If everything looks good, go ahead and click Next Step, check off “Forward it to”, and enter the Twimailer email address that you set aside in Step 1.

    If you don’t want duplicate notifications from Twitter and Twimailer, you should also check off “Skip the Inbox” or “Delete it” (the message will still be forwarded).

    My setup looks like this (click to enlarge):

    Twimailer Filter

  5. Bonus: to filter or create a label for Twimailer notices, use this search: from:([email protected]) OR to:([email protected]).

That’s it!
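If you would rather script this than click through Gmail’s filter UI, the same idea can be expressed in a few lines of Python over IMAP and SMTP. This is a sketch only; the hosts, credentials and Twimailer address are placeholders you would fill in yourself:

```python
import email
import imaplib
import smtplib

USER = "you@example.com"             # placeholder account
PASSWORD = "app-password"            # placeholder credential
TWIMAILER = "your-id@example.com"    # the address Twimailer assigned you in Step 1

imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login(USER, PASSWORD)
imap.select("INBOX")

# Find unread Twitter follower notifications
_, data = imap.search(None, '(UNSEEN SUBJECT "is now following you on Twitter!")')

smtp = smtplib.SMTP_SSL("smtp.gmail.com")
smtp.login(USER, PASSWORD)

for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    del msg["To"]                    # re-address the notification
    msg["To"] = TWIMAILER
    smtp.sendmail(USER, [TWIMAILER], msg.as_bytes())

smtp.quit()
imap.logout()
```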

It seems to me that this kind of feature improvement is something that Twitter should really do itself, but of course it’s great to see someone from the community pitch in and add incremental value until Twitter gets around to it.

At the same time, putting Twimailer in between you and Twitter’s password recovery mechanism seems unnecessarily dangerous (i.e. Twimailer could go down, get hacked, get sold, or might simply be implemented insecurely; consider Spotify’s recent security breach). I actually have no insight into whether any of these apply to Twimailer, but I’d rather not take any unnecessary chances.

The approach that I described above should mitigate any risk with using Twimailer and keep you in direct control over your Twitter account.

Future of White Boys’ Clubs Redux #fowaspeak

White Boys (+1)

In September of 2006, I wrote a piece called The Future of White Boy Clubs, taking Ryan Carson to task for putting together a speaker lineup for his Future of Web Apps conference made up entirely of white men (for the record, Tantek resents being lumped in as “white”; he says he’s Turkish).

As a white male speaker, I wanted not just to lament the dearth of female speakers, but to assert a broader point about the value of diversity to tech conferences.

Two and a half years later, the future of the web was yet again being presented from the perspective of a bunch of white guys — and were it not for a last-minute substitution, Kristina Halvorson wouldn’t have made it on stage as the sole female voice.

Kristina Halvorson: I LOVE DUDES by Judson Collier

Kristina felt compelled to say something and so she did, sharing the last 10 of her 25 speaking minutes with Ryan Carson and me, confronting this perennial elephant in the room and calling for specific action.

Without context, some members of the audience felt ambushed.

But Kristina hadn’t planned to bring this up on stage; she wanted to talk about copy! Had progress been made over the last two years, she wouldn’t have had to. But she felt strongly — and after receiving encouragement from Kevin Marks, Daniel Burka and me — she decided to raise the issue because, frankly, no one else had plans to.

She didn’t merely want to complain and didn’t wish to inspire guilt in the predominantly white male audience (what’s there to feel guilty about anyway?). Her point was to frame the issue in a way that helped people recognize the symptoms of the problem, identify where responsibility lies (answer: with all of us) and provide constructive means to address them.

Let’s be real: I doubt it’s lost on anyone that the tech industry and its requisite events lack women. We know this. And we all suffer as a result (for the perspective and experiences they bring, among other things). Lately it’s getting worse: depending on the study you read, there are more females online than males, and yet enrollment by that demographic in computer science is on the wane. Events that purport to be about the “future of web” and yet fail to present speakers that represent the web’s actual diversity serve only to perpetuate this trend.

Turns out, white men also don’t have a monopoly on the best speakers — even in the tech industry — yet their ilk continue to make up a highly disproportionate number of the folks who end up on stage. And that means that good content and good ideas and important perspectives that should be making it into the mix aren’t, and as a result, audiences are getting short-changed.

The question is no longer “where are all the women?” — it’s why the hell aren’t white men making sure that women are up on stage telling their story and sharing the insights that they uniquely can provide!

Why should it only be women who raise their voices on this issue? This isn’t just “their” problem. This is all of our problem, and each of us has something to do about it, or knows someone who should be given an audience but has yet to be discovered.

As a conference organizer, Ryan pointed out that he’s not omniscient. As a fellow conference organizer, I can tell you that you aren’t going to achieve diversity just by talking about it. You have to work at it. To use a lame analogy: if you want food at your event, you’ve got to actually place the order, not just “talk about it”.

Similarly, with female speakers and attendees, you’ve got to work at it, and you’ve got to think about their needs and what will get them to come to you (remember, it’s the audience that’s missing out here).

Now, to be fair, I know that Ryan and his team reached out to women. I know that some were too busy; others unavailable; some accepted only to later cancel. Yet still, only two of eight workshops were run by women (with Kristina doing double duty as the only female speaker). It wasn’t for complete lack of effort that more women weren’t on stage or in the audience; it was also the lack of visibility of — and outreach to — women operating on the cutting edges of technology, business, and the web.

This is what our on-stage discussion sought to address by soliciting recommendations from members of the audience tagged with #fowaspeak. By bringing the negative spaces in the conference agenda to the fore — calling attention to the incidental omission of women presenters — we acknowledged that that lack wasn’t necessarily the realization of intent but something more insidious.

It isn’t that women need “help” from white men; this isn’t about capability. To the contrary, the saturation of men in technology leads to women becoming marginalized and invisible. They are there, and they are present, but somehow we don’t miss them when they’re not up on stage standing next to us. And that’s something that absolutely must change.

Turning the spotlight to deserving women who work just as hard (if not harder) than men does not diminish them, nor should it minimize their accomplishments. An intelligent audience should be able to discern who on stage is meritorious and who is not.

That there are fewer women in the industry means, first, that conference organizers need to work harder to find them and, second, that audiences need to become vigilant about their absence from conference schedules. It is something that all of us must internalize as our own struggle and then take ongoing, explicit actions to address.

As far as I’m concerned, one of the greatest opportunities to seize the future of web apps is to cement the necessity of diversity in our processes and in our thinking, not for the sake of diversity alone (deserving though it is) but because the technology that we produce is better for it, being more robust, more versatile and flexible, and ultimately, more humane.

The future of web apps — and the conferences that tell their stories — should not be gender-neutral or gender-blind — but gender-balanced. Today, as it was two years ago, we suffer from a severe imbalance. It is my hope that, in raising the specter of consequences of the lack of women in technology, we begin to make as much progress in stitching diversity into the fabric of our society as we are making in producing source code.

BBC Digital Planet podcast featuring OpenID

Update: The BBC has posted a write-up of the report called Easy login plans gather pace.

Digital Planet album artwork

I was interviewed by Gareth Mitchell last week about OpenID for the BBC’s Digital Planet podcast.

Our conversation lasted about 10 minutes — of which only about two minutes survived (mirrored here as they currently do not keep an archive of previous episodes).

It was a familiar conversation for me, since the primary concerns Gareth expressed had to do with privacy, identity and the notion that “someone else” could “own” another’s identity on the web. His premise sounded familiar: “Won’t OpenID make my identity more hackable?”

The answer, of course, isn’t that straightforward, and depends on a lot of mitigating factors. However, the fundamental take-away is that OpenID really is no more insecure than email, and even then, provides a future-facing design that leads to many kinds of protection that email, in practice, does not.

. . .

I’ve also noticed over the past several years that Europeans harbor much greater sensitivities to privacy issues while Americans tend to concentrate on matters concerning “property” (physical, personal and intellectual). This is evidenced by yesterday’s blow up around Facebook’s changes to their Terms of Service. On the one hand, there’s this weird American outcry against Facebook owning your data (in common, at least) forever. From the European side, it seems like the concern is centered more around what the changes mean to one’s privacy, rather than whether Facebook can perpetually “make money” off your stuff.

I bring this up because it’s immensely relevant with regards to the conversation I had with Gareth (given that he’s based in the UK).

With the current case, I’m sympathetic to Facebook, because I know that this will be the year that people have their “mindframes” bent around new conceptions of personal privacy and control and ownership of data. I believe (as Facebook purports to) that people’s desire to share will overcome their desire for control over their personal data, and that they will gradually realize that sharing will require letting go. It is this reality — the reality of networked data in the cloud — that necessitated Facebook’s change to their terms of service — not some nefarious desire to steal your first born (or your data).

In other words, the conditions and kind of thinking that led to the backlash against Plaxo known as Scoblegate will cease to exist in the future. Facebook’s change is merely a recognition of this new environment.

It remains unclear to me whether the pundits in this space realize that this shift will occur, and will occur naturally (as it has already begun — consider the integration of Facebook and Flickr in iPhoto ’09), or whether they just want to scream and holler when they notice something that seems astray.

. . .

Last December, I spent time talking to Boaz Sender of HTML Times at length about several of these topics (including discussing the intellectual property issues surrounding many of the technologies that are helping to ensure that the web remain an open playing field) in an interview about Identity in the Network. In juxtaposition to my interview with the BBC, I think this interview gets into some of the deeper issues at work here that must also be considered when it comes to the future of online identity, privacy and data control and (co)-ownership.

What really happened at Ma.gnolia and lessons learned


Citizen Garden 11

Larry (@lhalff) and I have been recording a podcast for the past year called Citizen Garden that covers various topics related to the web, technology, and social networking.

Well, given Ma.gnolia’s recent catastrophe, we decided that episode 11 would be dedicated to exactly what went down and why, and what lessons Larry has learned that others should heed in order to avoid facing a similar crisis.

I think the basic take-away is that, four years ago, when Larry started Ma.gnolia, your IT options were pretty much to use commodity shared hosting or to do it yourself. If you used Ruby on Rails — in which Ma.gnolia is written — your options were even more limited. And so Larry chose to do it himself.

Today, with services like Amazon S3 & EC2, Joyent Accelerators and Google AppEngine, reliable, scalable hosting is no longer as much a problem, as these services have risen to meet the needs of applications like Ma.gnolia. But these are services that Larry did not take complete advantage of and the burden of taking care of over half a terabyte of data eventually caught up with him.
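The broader lesson is mundane but vital: even if you run your own boxes, your data should live somewhere durable that you do not maintain yourself. Today that can be as little as a nightly database dump shipped to S3. A sketch using boto3, with the bucket, database and paths as placeholder assumptions:

```python
import datetime
import subprocess

import boto3  # assumes AWS credentials are configured in the environment

BUCKET = "example-bookmark-backups"      # placeholder bucket name
stamp = datetime.date.today().isoformat()
dump_path = f"/tmp/backup-{stamp}.sql.gz"

# Dump and compress the database (MySQL is assumed for illustration)
subprocess.run(
    f"mysqldump bookmarks_production | gzip > {dump_path}",
    shell=True,
    check=True,
)

# Ship the dump offsite; S3 stores it redundantly across facilities
boto3.client("s3").upload_file(dump_path, BUCKET, f"db/{stamp}.sql.gz")
```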

All is not lost necessarily, and Larry hopes that Ma.gnolia will someday return, perhaps as an invite-only service to start, in order to give him time to earn back people’s trust and scale the service slowly. I’m also confident that he’s decided to completely outsource his IT, taking the lessons from this current situation deeply to heart.

This episode is also downloadable as an MP3.

Where data goes when it dies and other musings

I’ve been wanting to write about Ma.gnolia’s catastrophic data loss last week ever since it happened, but wasn’t quite sure how I wanted to approach it. Larry (Ma.gnolia founder and the sole person who maintained the site) is a good friend of mine, and Ma.gnolia was one of Citizen Agency’s first clients. It’s been painful to see him struggle through this, both personally and professionally, and it’s about the worst possible [preventable] thing that can happen to a Web 2.0 service.

Still, kept in context, it’s made me reconsider some things about the nature and value of open, networked data.

I. How I Learned to Stop Worrying and Love the Bomb

According to Google’s cache of my profile on Ma.gnolia, I had accrued 5758 bookmarks and 6162 tags since I first started using the service on August 08, 2004. That’s a lot of data capital to have instantly wiped out. You might think that I’d be angry, or disappointed. But I’m surprisingly zen about the whole thing. Even if I never got any of my bookmarks back, I don’t think I’d be that upset, and I’m not sure why.

If Flickr went down, I’d be pretty pissed. But Ma.gnolia for me was primarily a tool for publishing — something that I used to broadcast pointers to things that I took a momentary fancy in. There’s a lot of history in my bookmarks, no doubt. In some ways, it’s a record of all the things that I’ve read that I thought might be worth someone else reading (hence why my bookmarks are public), and clearly is a list of things that have affected and informed my thinking on a broad array of topics.

But, the beauty of bookmarks is that they’re secondary references to other things. The payload is elsewhere and distributed. So in some ways, yeah, I mean, there’s a lot of good data there that’s been lost (at least for the moment). But the reality is that the legacy of my bookmarks is forever imbued in my brain as changes in how my synapses fire. The things that I can’t remember, well, perhaps they weren’t that important to begin with.

II. Start over; the blank slate.

Leopard Blank Slate

With the money I won from the Google/O’Reilly Open Source award last summer, I decided I’d break down and buy myself a new MacBook Pro. As I was initially setting it up, I figured I’d transfer my previous system setup over from my Time Machine backup and just pick up from where I left off.

I did this, but once I logged in, the new MacBook lost its feeling of newness, and I felt encumbered. What amounted to bit-for-bit data portability left me feeling claustrophobic and restricted. I wanted the freedom of a clean system back; somehow buying a new machine wasn’t just about better performance, but about giving myself license to forget and to start over and to make new mistakes.

I wiped the hard drive and reinstalled OS X with the minimum options. I’ve installed about ten apps so far, and I intend to hold off on anything that I don’t feel an absolute need to install, taking a hint from Ethan Kaplan:

Twitter / Ethan Kaplan: @factoryjoe only install a ...

III. And the band played on

While I love the form-factor of my MacBook Air (now my previous system), the first generation just isn’t fast enough or beefy enough for the way that I use a Mac. It’s great for email and traveling and it really is the machine that I want to be using — just with better performance (though I hear the new models are much better).

Because the hard drive on the thing is pretty minuscule by today’s standards (80GB), I quickly maxed it out with music, videos, photos and screenshots. I was down to about 6GB of space, and OS X crawls when it can’t cache the shit out of everything, so I decided to take aggressive action and deleted my entire 30GB iTunes library.

Command-A. Command-Delete. Empty Trash.

And then it was done.

I still need iTunes for iPhone syncing, but I no longer have a local music library. With the combination of Spotify, SimplifyMedia and Pandora (using PandoraJam or PandoraBoy), I’ve got a good selection of music wherever I’ve got wifi.

The act of deleting my entire music library (okay, fine, I do have a complete backup on my Mac Mini media center) was cathartic. All that data… in an instant, gone. All those ratings, all that metadata, all those play counts revealing my accumulated listening habits. Gone (well, except for my Last.fm profile).

Of course, it’s not like I had original, irreplaceable copies of these tracks. There are copies upon copies out there. And knowing this, I intentionally destroyed all this data without really worrying about whether I’d ever be able to re-experience or relive my music again. In fact, I didn’t even give it a thought.

But my system sure seems a bit faster now.

IV. Microformats are the vinyl of the web

Vinyl is 4 Ever by Bruce Berrien

The first thing that I thought about when I heard that Ma.gnolia had had “catastrophic data loss” was that Google and Yahoo probably had pretty good caches of the site, especially given its historically high PageRank. The second thing that I thought about was that, since the site was microformatted with XFN, xFolk and other formats, recovering structured data from these caches would likely be the most reliable way of externally reconstituting Ma.gnolia, in lieu of other, more conventional data retrieval methods.

Though Larry is still engaged in a full out recovery process, it gave me some sense of pride and optimism that we had had the forethought to mark up Ma.gnolia with microformats. Indeed, this kind of archival purpose was something that Tantek had presaged in 2006:

Microformats from the beginning in my mind are serving two very important purposes.

  1. Microformats provide simple ways of identifying larger chunks of information on the Web for easily and immediately publishing, sharing, moving, aggregating, and republishing.
  2. Microformats are perhaps a step forward in providing building blocks for the longevity of higher fidelity information as well.

In talking with Tantek about this, he pointed out some interesting things about many modern web services, lamenting their apparent lack of concern over longevity. For example, clearly there is a great deal of movement afoot to advance the state of distributed social networking, as evidenced by XML- and JSON-based protocols like Portable Contacts and Activity Streams. But these are primarily transaction-based protocols, and they archive poorly (another argument for RESTful architecture, certainly).

I would therefore agree with Tantek’s oft-repeated admonishment that services that are serious about their data should always start by marking up their sites with microformats and then add additional APIs to provide functionality (as TripIt did). It’s simply good data hygiene. It’s also about the separation between form and function (or data and interactivity). And with emerging technologies like , people can now build arbitrary mashups from the HTML on your homepage, without even having to know about your custom API.

It also means that, in the event of catastrophe (Ma.gnolia’s case) or dissolution of a service (as in the cases of Pownce, Journalspace or Consumating), there is some hope for data refugees left out in the cold.

When APIs go dark, how do you do a data backup? (Answer: you often can’t.) With public, microformatted content, there will likely be a public archive that can be used to reconstitute at least portions of the service. With dynamic APIs and proprietary data formats, all bets are off.
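As a rough sketch of what that reconstitution might look like in practice, here is how one could pull xFolk-formatted bookmarks out of a cached copy of a page (the cache URL is a placeholder, and the BeautifulSoup library is assumed):

```python
import urllib.request

from bs4 import BeautifulSoup

# Fetch a cached copy of a microformatted page (placeholder URL)
html = urllib.request.urlopen("http://example.com/cached-profile").read()
soup = BeautifulSoup(html, "html.parser")

bookmarks = []
for entry in soup.select(".xfolkentry"):       # one xFolk entry per bookmark
    link = entry.select_one("a.taggedlink")    # the bookmarked URL itself
    if link is None:
        continue
    bookmarks.append({
        "url": link.get("href"),
        "title": link.get_text(strip=True),
        "tags": [a.get_text(strip=True)
                 for a in entry.select('a[rel~="tag"]')],
    })

print(f"recovered {len(bookmarks)} bookmarks")
```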

V. Death and data reincarnation

With both the intentional and unintentional destruction of data recently, it’s given me lots to ponder about in terms of the value, relevance, importance and longevity of data.

I talk about “data capital” like it matters, because I suppose I want it to, and hope that someday it does make a difference just how much of yourself you share with the world, simply because it’s better to share than not to.

And now I’m in this funny situation where, because I did share, and shared openly (specifically on Ma.gnolia), there is the very real possibility of reincarnating my data from the ether of the web. It could just be that all the private data, including messages, private bookmarks and thanks are forever gone, because they were kept private. But those things which were made available to anyone and everyone, through that simple aspect, can be reconstituted by extracting their essence from the caches of the internet’s memory banks.

You think about photographs of people who have died, and of videos and other media. In the past several years we’ve had to start thinking about what happens to social networking profiles on Facebook, MySpace and Twitter of people who are no longer with us. Over time, societies have invented symbols and rituals to commemorate the dead, and often use items imbued with the deceased’s social residue to help them remember and recall and relive.

How does that work when those items are locked away in incompatible and proprietary data stores? How do we cope when technology gets between humans and their humanity?

The web, it turns out, is a fragile place, in spite of its redundancy and distributed design.

Efforts that threaten to close it up, lock it down or wall it into proprietary gardens are turning the web against us, against history and against civilization and the collective memory. This is perhaps one of the primary reasons why the open web is so important to me, and factors so centrally into my work. As I grow older, perhaps I won’t always have perspective on which things will be the most important to me, but it’s critical that, in the future, I don’t inhibit my or my progeny’s ability to access my digital legacy.

I find it fitting that Ma.gnolia uses an organic symbol as its logo. It has, for all intents and purposes, died.

But there is a silver lining here, and I think Larry intuitively understands it: in the Ma.gnolia Open Source (M2) project, he had already sown the seeds for Ma.gnolia’s rebirth. Though it is lamentable that such a disaster would occur, I believe that creative destruction is absolutely necessary to natural systems, just as forest fires are critical to the lifecycle of forests.

I also believe that things happen for a reason and that the soil of this tragedy will lead to a new start and new growth. It’s not accidental that the design of M2 called for a distributed, redundant mesh of independent bookmarking service endpoints. If anything, this situation provides Larry license to start anew, proving the necessity of death, and the wisdom of genetic inheritance and variation.