Well, that didn’t take long. The day after I posted it, I got an email from David Nüscheler, CTO at Day Software, asking if I’d be interested in working with them. Given that my friend and accidental mentor Roy works there, and that the company is obviously very clueful about the Web and open source, it seemed like a terrific fit. In my discussions with them since then, that impression has been confirmed over and over again. Honestly I don’t know why I didn’t approach them.

I’ll be joining their solutions team, which means I’ll be helping with the design, development, and deployment of custom solutions built upon Day’s product line. The position isn’t unlike the consulting work I’d been doing for much of the past six years, only more hands-on (yeah!) and, of course, Day-specific.

It also means a lot more travel, so watch for me in your neck of the woods on Dopplr.

It was about 2.5 years ago now that I joined Research in Motion – makers of the BlackBerry – for what turned out to be the shortest stint of my career. I was brought in as their “Web 2.0” guy, though as part of the standards organization rather than R&D (which should have been a warning sign). My job, initially, was to write a white paper describing what RIM needed to do to embrace the Web. What’s the standards organization doing defining an R&D roadmap, you might ask? Good question. I wondered the same thing. But that’s not what this post is about.

What it is about is that earlier this week, at the BBDC, RIM announced what is, AFAIK, its first offering on the topic of the Web; Web Signals;

BlackBerry Web Signals leverages RIM’s unique push technology to allow online content providers to automatically notify BlackBerry smartphone users when relevant content has been published and to allow streamlined, one-click access to the online information.

So I dug into the technical overview, and spotted this near the beginning;

To push content to users, content providers must first register their web signals with Research In Motion.

Bzzt!

As they don’t seem to realize, the Web is agreement; a large, complex distributed system made possible by parties who agreed to use its constituent protocols. Publishers agreed because it gave them a low-cost path to distribute information directly to the users who had also made those same agreements (by using an agent which implemented the protocols). Imagine now, if you will, what would have happened to the Web had publishers needed to register with, say, AOL to reach AOL users, or Comcast for Comcast users. What a huge burden! It could be worse; the burden could be on the users instead. But why bother with one at all? Remember PQA? My point exactly.

Always, always, always try to do what you need using existing agreement.

In this case – of notification of content changes – RIM had a couple of obvious options. Most simply, they could have used email, though of course the user experience is suboptimal, not to mention the privacy concerns of handing out the user’s email address to every publisher. Alternately, there’s RSS/Atom, something publishers are already pretty comfortable with. It might even sound a little familiar, seeing as I described the architecture necessary to support it in that white paper I wrote for them.
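To make that concrete, here’s roughly what a publisher-side Atom feed for this could look like; a minimal sketch with invented URIs, dates, and titles, not anything taken from RIM’s documentation. RIM’s infrastructure could watch the feed and push new entries out to subscribed users, with no per-publisher registration needed;

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Publisher Updates</title>
  <id>http://publisher.example.com/updates</id>
  <updated>2008-05-15T12:00:00Z</updated>
  <author><name>Example Publisher</name></author>
  <entry>
    <title>New article published</title>
    <id>http://publisher.example.com/articles/42</id>
    <link rel="alternate" href="http://publisher.example.com/articles/42"/>
    <updated>2008-05-15T12:00:00Z</updated>
  </entry>
</feed>

Anyone who can dereference that feed URI, whether RIM, a competitor, or a plain desktop aggregator, gets the same notification, using agreement that already exists.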

If you’ve read ahead in that tech overview, you’ll also notice that they predefine their URI structure, and don’t even mention which HTTP method to use on those URIs to send a notification, which probably means that GET does the deed. Yuck.
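For contrast, a notification changes state on the receiving end, so even a RIM-hosted endpoint ought to be fed by something like the following rather than a GET. The URI, namespace, and body here are invented for illustration, not taken from the actual API;

POST /notifications HTTP/1.1
Host: pushapi.example.com
Content-Type: application/xml
Content-Length: nnn

<notification xmlns='http://example.com/ns/push'>
  <resource href='http://publisher.example.com/articles/42'/>
</notification>

GET is for safe retrieval; hang a side effect off it and every cache, prefetcher, and crawler that touches the URI becomes a potential source of phantom notifications.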

Come on RIM, get your act together. Competition is heating up, and those guys in Cupertino (mostly) have their act together when it comes to the Web.

If you’re a Firefox user (or any browser user for that matter), run, don’t walk, to download the latest Firefox 3 Beta. Damn, this thing is lean and mean. I’ve got my usual 40-50 tabs open right now and it’s consuming about one third to one quarter of the memory FF2 did on WinXP. Plus, tab and new-window creation is instantaneous, even after many hours of use. There are some subtle chrome improvements too, including little things like smooth-scrolling tabs that prevent me from getting lost when I’ve got more than about 10 tabs per window; very useful for Wiki-despamming or reading developer documentation.

Go!

I think Google really missed the mark with its attempt at embeddable maps. I suppose something is better than nothing for the myriad folks who want this functionality, but when a simpler, less opaque solution (read: declarative), GMapEZ, has existed for ages, you have to wonder what Google was thinking. The blob of HTML you get might as well be JavaScript, or heck, even a Java applet, in the sense that it’s opaque to all but the most inquisitive of developers.
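For anyone who hasn’t seen the declarative style, the idea is roughly the following. This is a made-up illustration of the approach, not GMapEZ’s actual markup (see its docs for that); the point is that the page itself says what’s on the map, and a small script library finds the annotated element and swaps in the interactive widget;

<!-- hypothetical markup; the class name and structure are invented for illustration -->
<div class="map" style="width: 400px; height: 300px;">
  <a href="http://maps.google.com/maps?q=Ottawa,+Ontario">Ottawa, Ontario</a>
</div>

That degrades to a plain link, is legible to anything that can parse HTML, and can be generated or inspected without wading through script.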

This is becoming a bad trend.

I have to say, I’m with James in his response to Sam’s long bets;

To say, as Sam and Tim both do, that REST is important is like saying the fan in my laptop is “important”. There’s really nothing to discuss about it. RESTful services are fundamentally critical to the continued evolution of the Web. It just is. You just need to do things in a RESTful way. Period.

REST is just a starting point. What’s more important going forward is the framework which permits us to reason about REST extensions and other changes to the Web (or portions thereof).

Elias has noticed that my blog has fallen silent recently, and suggests that REST’s victory over WS-* has something to do with it.

He’s right.

I haven’t had much contract work the past few weeks, but have been helping a couple of startups. But the main reason I don’t blog is that my curmudgeonly style really only works when I’m the lone voice, arguing for the unpopular-but-superior solution. Now that the arguments I’ve been making for the past seven or so years are finally being recognized as superior, I’m sure I’d come off as just plain mean if I were to go after anybody who said that they were sticking by WS-* (something about kicking a horse when it’s down).

What comes next for me and this weblog then?

Something I considered doing a couple of years ago was a regular “Ask Mark” piece, where I’d publish one of the many REST/Web questions I get via email. I’d been answering those privately for years, but perhaps I could now do so on the condition that I could publish them (though few are really interesting).

Another thought was covering REST/Web esoterica. There’s an abundance of interesting topics to cover on the fringes of REST and the Web. Yet another was a retrospective of some of the more heated battles over the past years, on weblogs and mailing lists.

Let me know what you’d like to see me cover.

BOSH is a specification that defines how XMPP can be used over HTTP. It’s obviously written by people who know what they’re talking about, because they’ve got good requirements, and get into great detail about the design choices they’ve made. Unfortunately, BOSH makes the one big mistake that so many others make; treating HTTP as a transport protocol. To wit;

POST /webclient HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: 188

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message to='[email protected]'
           xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>

  <message to='[email protected]'
           xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

(you might also note that all of their example requests are POSTs to /webclient – a warning sign if ever there was one)

The intent of that message is to send two messages, one to each of the recipients at example.com. If we were treating HTTP as an application protocol, that would be done like this;

POST mailto:[email protected] HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: nnn

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

POST mailto:[email protected] HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: mmm

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

Alternately, if you don’t like proxies, the mailto URIs could be swapped out for an http URI specific to each mail address. But the point is that HTTP semantics should be reused by recasting XMPP onto them, rather than the current approach of grafting XMPP on top (read: obliterating them). Don’t like two messages? Try pipelining them. Can’t pipeline? Does some other feature not map well onto HTTP in this way? Then it wasn’t meant to be.

We use HTTP (and the Web) because we want to be part of the Web; to participate in the network effects, to make information freely available (like, say, my presence status), etc. We don’t do it because we need a way to get past firewalls. Good admins will avoid deploying software behind their firewall that subverts the firewall’s intent.

It’s nice to see Pat Helland join the REST/SOA conversation.

His first post is in a rather quizzical, loose style that I hadn’t seen before, but that’s ok; I think I get what he’s talking about. The point seems to be summed up here;

Is the purchase-order (or even the line-item) a noun or a verb? I would argue it is syntactically a noun but semantically a verb.

Hmm. I’m quite certain it’s pure noun. If it were a verb, then it would have only a single purpose – to order something – and wouldn’t be able to be archived, printed, translated, etc., which it clearly can be. Obviously a message can only have one authoritative application-level verb, and if you’re using HTTP, then the request method is it.
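To sketch that separation, with invented URIs and a plain made-up XML body (nothing from Pat’s post); the purchase order travels and gets stored as a document, the noun, while the single authoritative verb rides on the request line;

POST /purchase-orders HTTP/1.1
Host: example.com
Content-Type: application/xml
Content-Length: nnn

<purchaseOrder>
  <lineItem sku='1234' quantity='2'/>
</purchaseOrder>

GET /purchase-orders/8871 HTTP/1.1
Host: example.com

The very same document can later be retrieved, archived, printed, or translated via GET without anyone mistaking those retrievals for another order being placed; the ordering semantics live entirely in the method.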

This made my day. 8-)

I made a comment on Pete Lacey’s latest in the “RIA” discussion that I wanted to reiterate here;

how is that different than the bad old days when a site was developed for one particular browser?