Apple’s Best Option: Decentralize iCloud
Mark Nottingham, 2025-02-09
https://www.mnot.net/blog/2025/02/09/decentralize-icloud

What can Apple do in the face of a UK order to weaken encryption worldwide? Decentralize iCloud, to start.

As has been widely reported, the government of the United Kingdom has secretly ordered Apple to build a back door into iCloud to allow ‘blanket capability to view fully encrypted material.’

Assuming the UK doesn’t back down, what are Apple’s options? This is my personal take: if I’ve missed something, I’d love to hear about it.

Option 1: Comply

Most companies would just comply with the order, but Apple is not most companies.

That’s not just because they have marketed themselves as privacy and security conscious, although that presumably factors into their decision. From what I’ve seen from interacting with their engineers and observing how they behave (both in technical standards bodies and in their products), this is a commitment that goes much deeper than just marketing.

More significantly, Apple will be considering the secondary and tertiary consequences of compliance. So far, every democratic country around the world has refrained from making such an order; for example, Australia’s widely debated legislation that mirrors the UK “Snooper’s Charter” has an explicit provision to disallow “systemic weakening” of encryption like we see here.

If the UK successfully forces Apple’s hand, every other government in the world is likely to take notice and consider making similar (or even more extreme) demands. CSAM scanning will just be the start: once access to that much data is available, it’s open season for everything from lèse-majesté to punishing activists and protesters to policing sexual orientation, abortion, and other conduct targeted by socially motivated laws. Even if a particular country doesn’t make the same demand of Apple, arrangements like Five Eyes will allow one agency to peer over another’s shoulders.

As I’ve written before, no one should have that much power.

In the tinderbox that politics has become in many parts of the world, this is gasoline. I’d pay good money to be a fly on the wall in the meetings taking place with the Foreign Service, because they of all people should understand the potential global impact of a move like this. Of course, in a world where USAID is shut down by Elon Musk and some teenagers, nothing is off the table – and that’s why we should be so concerned about this outcome.

Option 2: Leave

Apple’s second option is to leave the UK. Full stop.

Close the Apple stores, online and retail. Stop providing iCloud, stop selling iPhones and all the other various i-gear. Close the beautiful new UK HQ at Battersea, and lay off (or transfer overseas) around 8,000 employees (reportedly).

This is (obviously) the nuclear option. It puts Apple outside the jurisdiction of the UK,[1] and at the same time orphans every UK Apple user – their phones and computers don’t quite become bricks, but they will definitely have limited utility and lifetime.

Given that, along with Apple’s claim to support 550,000 UK jobs, it’s likely to be effective – these consequences would make the government extremely unpopular overnight.

However, this option is also massively expensive: reportedly, Apple’s total revenue in the UK is something like £1.5bn. Add the one-time shutdown costs on top, and even Apple’s balance sheet will notice.

Perhaps more importantly, this is also a strategically worrisome direction to go in, because it plays into the narrative that Big Tech is more powerful than sovereign nations. Other countries will take notice, and may coordinate to overcome Apple’s reticence. Apple would then have to choose the markets it operates in based on how it feels about those countries’ commitments to human rights on an ongoing basis – hardly a situation that any CEO wants to be in.

Finally, this option simply won’t work if one of those countries is the United States, Apple’s home. I’ll leave it to you, dear reader, to decide how much you trust your predictions of its actions.

Option 3: Open Up

Apple’s third option is to remove itself as a target in a more subtle way than option two.

The UK is presumably interested in Apple providing this functionality because iCloud’s design makes a massive amount of data conveniently accessible in one location: Apple’s servers. If that data is instead spread across servers operated by many different parties, it becomes less available.

In effect, this is the decentralize iCloud option. Apple would open up its implementation of iCloud so that third-party and self-hosted providers could be used for the same functions. They would need to create interfaces to allow switching, publish some specifications and maybe some test suites, and make sure that there weren’t any intellectual property impediments to implementation.

There could be some impact on Apple revenue here, but I suspect it’s not huge; many people would continue to buy iCloud for convenience, and for non-storage features that Apple bundles in iCloud+.

Think of it this way: Apple provides e-mail service with iCloud, but doesn’t require you to use it; you can use your own or a third-party provider without any drama, because they use common protocols and formats. Why should file sync be any different? Why can’t Apple make using a third-party service as seamless and functional as iCloud?

This isn’t a perfect option. Governments could still order weakened encryption, but they’d have to target many different parties (depending on the details of implementation and deployment), and they’d have to get access to the stored data. If you choose a provider in another jurisdiction, that makes doing so more difficult, depending on what legal arrangements are in place between those jurisdictions; if you self-host, they’ll need to get physical access to your disks.

What Will (and Should) Apple Do?

Computer operating systems are fundamental to security: once we lose trust in them, it’s pretty much game over. The UK has chosen a risky and brash path forward, and Apple will need to think carefully about how to navigate it.

It should be no surprise that I favour option three. While Apple is notoriously a closed company, it’s not completely averse to collaborating and working in the open when doing so is in its interests – and, given its other options, that’s arguably the case here.

Conceivably, Apple might even be forced into taking the “decentralize iCloud” option if regulators like those implementing the Digital Markets Act in the EU decide that doing so is necessary for competition. Apple has been designated as a gatekeeper for the ‘core platform service’ provided by iOS, and while that designation currently doesn’t include file synchronisation services, that might change.

Of course, the UK government may back down. However, the barrier to some other government taking similar steps is now smaller, and Apple would do well to consider its longer term options even if action turns out to be unnecessary right now.

Thanks to Ian Brown for his input to this article.

  1. Presumably. Both inter-jurisdictional coordination and extraterritorial application of the law may complicate that. IANAL. 

Platform Advantages: Not Just Network Effects
Mark Nottingham, 2024-11-29
https://www.mnot.net/blog/2024/11/29/platforms

A new book explores an intriguing idea: that there are core processes in some platforms that naturally tilt the table towards being implemented in a single company.

Over the past few years, there’s been growing legal and academic interest in platforms — their functioning, potential harms, and advantages over competitors.

On that last question, most of the literature that I’ve seen has focused on factors like network effects and access to data. However, a forthcoming book by Carliss Baldwin proposes some significant additional – and structural – advantages that accrue to those who control them. Design Rules Volume 2: How Technology Shapes Organizations[1] builds on Volume One (which I wrote about earlier) with a goal to ‘build and defend a general theory explaining how technologies affect the structure and evolution of organizations that implement the technologies.’

Baldwin argues that “whether a technology will generate the most value through single, unified corporations, through platform-based business ecosystems, or through open source projects depends on the balance of complementarity within the technical system.” Let’s unpack that (in my words, with apologies for any misinterpretation of her work).

Modularity

Imagine a technical system, such as a service provided across the Internet, that comprises numerous components. This is a common occurrence because, as mentioned in Volume One, we manage complexity through modularity. We break down tasks into smaller units that can be distributed among many individuals, preventing any single person from having to comprehend the entire system’s intricacies.

These components can have various degrees of coupling – i.e., interdependency. While we always strive for loose coupling, which allows for easy modification or replacement of components without affecting others, it’s not always feasible to avoid tight coupling when there are close dependencies.

Coupling and Governance

Baldwin points out that systems with many tightly coupled functions are better situated in a single company due to the ease of managing these relationships within the hierarchical and closely related environment of a modern corporation. Conversely, she suggests that those with very loosely coupled functions are more appropriate for implementation across multiple entities because this arrangement enables the generation of greater overall value.

In the middle lies a “Goldilocks zone,” where some amount of coordination is necessary, but there’s still a benefit to distributing functions amongst many actors. These conditions allow formation of a business ecosystem – a set of “independent organizations and individuals engaged in complementary activities and investments.” As Baldwin points out:

Ecosystems rely on distributed governance, meaning that each member has the right to make certain decisions according to his or her own interests and perceptions. In place of direct authority, coordination of an ecosystem requires negotiation among members with different priorities and interests.

Platforms

There are many examples of such distributed governance schemes, including Open Source and Open Standards. However, it’s hard to ignore the dominance of platforms in the current landscape, which she defines as ‘a technological means of coordinating design, production, and exchange within modular architectures.’ Platforms aren’t so much distributed governance schemes as they are centralised control points (or even choke points).

She then goes on to break down a typology of platforms, with particular focus on transaction platforms like eBay, Amazon, and Chrono24, and communication platforms such as Facebook, Bluesky, and X.

Here’s where things get really interesting. Baldwin argues that certain core processes which are essential to implement these types of platforms are bound to be tightly coupled, thereby heavily tilting the table towards implementation by a single company:

The need for tight integration of core processes is the first reason for-profit corporations subject to unified governance have replaced organizations subject to distributed governance in almost all digital exchange platforms. Traditional exchange processes did not require the same high degree of synchronization as algorithmic processes.

In transaction platforms, she identifies search and ad placement, dynamic pricing, and data analysis and prediction as processes that must occur within milliseconds to provide a satisfactory user experience. For communication platforms, the relevant core services are search and ad placement, ad selection, dynamic pricing of an ad, and (again) data analysis and prediction.

What this means for the Internet

Yes, search is more difficult on a federated platform like Mastodon, but it’s possible if you relax the need for immediate updates – and that need can be relaxed if you rework the relationships in that arena. Once you get past that, it’s hard not to notice that these core processes are mostly advertising-related.

And that’s crucial. These companies have stepped in to solve coordination problems (“How do we communicate around the globe? How do we do transactions with people we haven’t met?”) by creating platforms that fully exploit their centralization. They are supported by real-time advertising systems because the table is tilted towards that outcome, and building a real-time advertising-supported ecosystem with distributed governance is hard.[2]

Much of that friction goes away if you relax the constraint of being advertising-supported, or even remove the real-time requirement from advertising (e.g., by using contextual advertising). However, you still have a coordination problem, and because real-time advertising is the most lucrative way to monetise a centralized position, decentralizing these systems means big companies won’t be nearly as interested in these outcomes.

The history of the Internet is illustrative here. We had RSS and Atom feeds, but there wasn’t a business model in that; however, there was in ‘news feeds’ on Facebook. We had open messaging protocols like XMPP, but they were supplanted by proprietary chat platforms that wanted to lock their users in and monetise them. Meanwhile, e-mail is being slowly swallowed by Gmail and a few others as we helplessly watch.

In short: there are less-recognised structural forces that push key Internet services into centralized, real-time advertising-supported platforms. Along with factors like network effects and access to data, they explain some of why the Internet landscape looks like it does.

Decentralized alternatives must overcome those forces where they can’t be avoided. They also need to be developed and supported, and to compete with those centralised platforms, they will need to be well-funded. To go back to the RSS/Atom example, there is a lot of work that could improve that ecosystem, but no one has a strong incentive to do so.

In these conditions, ‘build it and they will come’ is insufficient; simply creating Internet standards and Open Source software won’t solve the coordination challenges. Most current Internet companies lack the incentive to fund such efforts since they’re unlikely to accommodate real-time advertising.

Who might? My thoughts turn to the various discussions surrounding Digital Public Infrastructure. Exploring how to make that viable is a crucial topic that I’ll leave for another day.

This is just one aspect of Design Rules Volume 2; there’s much more to discover in this excellent book. I’ve been enthusiastically recommending it to anyone who takes the time to listen.

Thanks to Robin Berjon for reviewing this article.

  1. To be published on 24 December. Many thanks to Professor Baldwin for an early copy. 

  2. Again, not necessarily impossible; for example, look at what Mozilla et al are doing in the Private Advertising Technology effort. 

On Opting Out of Copyright
Mark Nottingham, 2024-09-18
https://www.mnot.net/blog/2024/09/18/opt-out

The EU AI Act and emerging practice flip copyright’s default opt-in regime to an opt-out one. What effects is this likely to have on the balance of power between rights holders and reuse?

Copyright is a default opt-in regime, from the standpoint of the rights holder. If I publish something on this blog, the presumption is that I retain rights unless I specifically license them – for example, by attaching a Creative Commons license. If I don’t do that, you can’t legally reuse my content (unless your use falls within certain exemptions).

You can think about this arrangement in terms of protocol design: it’s an agreement between parties whose nature creates certain incentives and barriers to behaviour. Someone who wants to reuse my content has the burden of getting a license from me, and proving that they have one if I challenge them. I have the burden of finding misuse of my content and pursuing it.

Technical systems can assist both parties in these tasks. I can use search engines of various sorts to find potential abuses; a licensee can prove that a particular license was available by showing its existence in the cache of a disinterested third party (often, one of the same search engines).

This creates an equilibrium: the burdens are balanced to favour certain behaviours. You might argue that the balance is unjust (and many do), but it is known and stable.

As discussed previously, the EU AI Act and emerging practice flip copyright’s default opt-in regime to an opt-out one. A rights holder now has to take positive action if they want to reserve their rights. While on the face of it they still have the same capability, this ends up being a significant practical shift in power.

That’s partly because of the nature of opt-out itself. The burden shifts: now, the rights holder must find misuse of their content, and prove that they opted out.

Proving that you consistently opted out at every opportunity is difficult, because it’s effectively proving a negative – that you never failed to opt out. Search engines don’t see every request made on the Internet; they just crawl it periodically, sampling what they see. An AI crawler can plausibly claim that the opt out wasn’t present when they crawled, and the rights holder is reduced to proving that the teapot isn’t in orbit.

Notably, this is the case whether the opt-out is attached to the content by a mechanism like robots.txt or embedded in the content itself as metadata. In the former case, content without the opt-out might be obtained at a different location, or at a different time; in the latter, the opt-out might be stripped from the content or a copy of it, either intentionally or unintentionally (e.g., it is common to strip metadata from images to optimise performance and improve privacy).
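
For illustration, a robots.txt-style opt-out might look like the sketch below (GPTBot is used here only as an example of an AI crawler’s published user-agent; whether the file was actually in place at crawl time is exactly the evidentiary problem described above):

    # Reserve rights against an AI training crawler (illustrative only).
    User-agent: GPTBot
    Disallow: /

    # Allow everything else.
    User-agent: *
    Allow: /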

On top of that, using this regime for AI makes finding misuse difficult too. There’s no easy way to query an LLM for a particular bit of content in the corpus that was used to train it; instead, you have to trust the vendor to tell you what they used. While transparency measures are being discussed as a policy solution to this issue, they don’t have the same properties as third-party or technical verification, in that they require trusting assertions from the vendor.

In this manner, changing copyright’s default opt-in to an opt-out for AI dramatically shifts the burden of compliance to rights holders, and the lack of support for managing those burdens brings into question the practical enforceability of the regime. It could be argued that this is appropriate for policy reasons – in particular, to enable innovation. However, it is a mistake to say it doesn’t represent a change in the balance of power as compared to opt-in.

What RSS Needs
Mark Nottingham, 2024-08-25
https://www.mnot.net/blog/2024/08/25/feeds

Web feeds could be so much more if we put some effort into them. This post explores some ideas of how to start.

More than twenty years ago, Web feeds were all the rage. Not proprietary news feeds on Facebook or ‘X’ – openly defined, direct producer-to-user feeds of information that you had total control over. Without ads. ‘Syndication’ meant that publishers could reach wider audiences without intermediaries; ‘aggregation’ meant that you could get updates from everyone you were interested in without having to hop all over the Web.

I’m talking about RSS and Atom, of course. I have fond memories of the community that launched this, having started the Syndication Yahoo! Group and later going on to co-edit the Atom specification. Since that period of busy activity, however, the underlying technology hasn’t seen much care or attention. There are some bright spots – podcasts have effectively profiled RSS to create a distributed ecosystem, and ActivityPub has taken the mantle of social feeds – but the core ‘I want to track updates from the Web in a feed reader’ use case has languished.

Despite that lack of attention, the feed ecosystem is flourishing; there are many feeds out there, helped by things like automatic feed generation in platforms such as WordPress. Right now, I’m subscribed to more than a hundred feeds, tracking everything from personal blogs to academic publications to developments in competition law to senate inquiries, and I check them multiple times a day almost every day.

It’s just that feeds could be so much more with some love and directed care – something that could jump from a niche use case to a widespread ‘normal’ part of the Web for many.

It’s also a good time to revitalise feeds. When Google killed Reader years ago, no one questioned their right to do so, even if we grumbled about it. Now, however, regulators are much more aware of the power that platforms have to tilt markets to their benefit, and many are calling for more decentralised approaches to functions that are currently only provided by concentrated intermediaries. People are also more wary of giving away their e-mail addresses for newsletters (the old-tech solution to feeds) when e-mail addresses are rapidly becoming the replacement for tracking by third-party cookies.

With that in mind, here are some of the areas where I think RSS needs some help.

Community

Communication between implementers of a technology is important; it facilitates coordination for addressing bugs and interoperability problems, and smooths the introduction of new features.

Unfortunately, the feed ecosystem has little such coordination. There are few opportunities for developers of software that consumes or produces feeds to talk. This lack of coordination is compounded by how diverse the ecosystem is: there are many implementations on both sides, so it’s hard to improve things for any one actor.

This situation reminds me of the state of the HTTP protocol in the early 2000s. Back then, that protocol’s implementers were exhausted, because the Web was still scaling up rapidly, and their software needed to mature to match it. HTTP/1.1 had shipped in the late ’90s, and no one was willing to discuss what came next: they were too busy. Even if they did want to talk about the protocol, there was no natural ‘home’ for it – and that lack of community resulted in numerous interoperability issues and one-off workarounds (anyone remember the X-Pad header field?).

What we did to improve HTTP suggests some possible paths forward for feeds. In 2007, we established the HTTP Working Group in the IETF to act as a home for the protocol. At first it was just a few people who had the time and interest, but over time we had more and more of the implementer community paying attention, eventually taking on HTTP/2.

Not everyone has the time or willpower to participate in standards, however. So, several years ago we started holding the HTTP Workshop, an informal community get-together for implementers and practitioners, where we can discuss our experiences, problems, and proposals to keep the protocol healthy and active.

Both of these approaches could be used to revitalise the feed implementer community over time, if we can get a core of people interested.

User Agency

Feed readers are an example of user agents: they act on your behalf when they interact with publishers, representing your interests and preserving your privacy and security. The most well-known user agents these days are Web browsers, but in many ways feed readers do it better – they don’t give sites nearly as much control over presentation, and they don’t allow privacy-invasive technologies like cookies or JavaScript.

However, this excellent user agency isn’t well-defined, and we don’t even know if it’s consistent from reader to reader. We need a common understanding of what a feed reader is and what it isn’t, so that users can evaluate whether their reader is a ‘good’ one, and so we can make principled decisions about what a feed reader does and doesn’t do when we extend them.

I started to write about this a while back in Privacy Considerations for Web Feed Readers, but it fizzled out due to the lack of an active community (see above).

Interoperability Tests

Feed readers need to behave in predictable, compatible ways; otherwise publishers won’t know how their content will be presented to users, and won’t trust them to do it. Most readers have settled on a latent profile of the Web stack when they show feed content, but it’s uneven, and that variability limits the use cases for Web feeds.

For example, many YouTube content creators are looking for alternatives because they don’t want to be at the mercy of Google’s algorithm; some are setting up their own Web sites to host video, but are finding that it’s difficult to hold their users’ attention in a sea of choices. Feeds could help – if video interoperates cleanly in feed readers. Does it? We have no idea.

Another example: some feeds that I view in the excellent Reeder show the ‘hero’ image twice: once because it shows up in the entry’s metadata, and once in the content. I suspect that’s the case because the reader that the publisher used didn’t show the metadata-sourced image. Interop tests would have picked this up and helped to create pressure for one way to do it.
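
To make the duplication concrete, here’s a minimal sketch of the kind of entry that can trigger it; the element names are from RSS 2.0 and Media RSS, and the URLs are placeholders:

    <!-- assumes xmlns:media="http://search.yahoo.com/mrss/" is declared on the enclosing <rss> element -->
    <item>
      <title>Example post</title>
      <link>https://publisher.example/post</link>
      <!-- the 'hero' image declared as entry metadata -->
      <media:thumbnail url="https://publisher.example/hero.jpg"/>
      <description><![CDATA[
        <img src="https://publisher.example/hero.jpg" alt="">
        <p>Article text, which repeats the same image inline…</p>
      ]]></description>
    </item>

A reader that renders both the metadata image and the embedded one shows it twice; one that ignores the metadata shows it once – and publishers have no way to know which they’ll get.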

Let’s not even get started on feed autodiscovery.

Creating interop tests requires both resources and buy-in from the developer community, but if we want Web feeds to be a platform that publishers create content for, it’s necessary: the Web has set the bar high.

Best Practices for Feeds

Publishers need stronger and more current guidance for how to publish their feed content. Some of that is the basics: for example, ‘test your feeds, including for autodiscovery’ – although interop issues (per above) make that a difficult task.
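
As a refresher, autodiscovery just means advertising the feed from the page’s HTML head, along these lines (the titles and paths are placeholders):

    <link rel="alternate" type="application/atom+xml"
          title="Example Blog (Atom)" href="/feed.atom">
    <link rel="alternate" type="application/rss+xml"
          title="Example Blog (RSS)" href="/feed.rss">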

We should also go further and share good patterns. For example, as someone who uses a feed reader, I’m annoyed by all of the ‘subscribe’ banners I see when I click through to the site – it should know that I came from a feed reader. How? If the feed’s links contain a query string that indicates the source, the Web page should be able to hide ‘subscribe’ banners and use cookies to remember that I’m a feed subscriber.
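
A minimal sketch of what the publisher’s side might look like, assuming the feed’s links carry a hypothetical ?src=feed query parameter and the banners use a hypothetical .subscribe-banner class (neither is an established convention):

    // Hide 'subscribe' banners for visitors who arrived via the feed.
    // Assumes feed entry links are generated with a hypothetical ?src=feed parameter.
    function markFeedSubscriber(): void {
      const params = new URLSearchParams(window.location.search);
      if (params.get("src") === "feed") {
        // Remember the hint for a year so later direct visits also skip the banner.
        document.cookie = "feed-subscriber=1; Max-Age=31536000; Path=/; SameSite=Lax";
      }
      if (document.cookie.split("; ").includes("feed-subscriber=1")) {
        document.querySelectorAll(".subscribe-banner").forEach((el) => {
          (el as HTMLElement).hidden = true;
        });
      }
    }

    markFeedSubscriber();

The same hint could just as well live in localStorage; the point is only that the signal originates from the feed link rather than from any tracking.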

(I can do something about this one more easily than the others by updating the RSS and Atom Feed Tutorial. I’ll put that on my TODO list…)

Browser Integration

Web browsers used to know about feeds: you’d know whether a page had a feed, and could subscribe (either locally, or have it dispatched out to a separate reader). The user experience wasn’t great, but it at least made feeds visible on the Web.

Now, the browser vendors have ripped feed support out, seeming to have the attitude that feeds can be accommodated with extensions. That’s true to a point: Reeder has a Safari extension, for example, and it lets you know when there’s a feed on the page and subscribe to it.

However, using an extension has privacy implications: to serve this function, I need to trust my feed reader with the content of every Web page I visit. That’s not great. Also, most users can’t be bothered to walk through the steps of adding extensions: it’s not ‘normal’ on the Web if you have to modify your browser to do it.

Feed support should be built into browsers, and the user experience should be excellent. It should be possible to dispatch to a cloud reader. It should be possible to have customised subscription flows. It should work out of the box so people don’t have to struggle with installing privacy-invasive extensions.

However, convincing the browser vendors that this is in their interest is going to be challenging – especially when some of them have vested interests in keeping users on the non-feed Web.

Authenticated Feeds

Some publishers want to gate their feeds behind a subscription – for example, Ars Technica has both free and subscriber-only feeds. Right now, that’s only possible through clunky mechanisms like capability URLs, which are less than ideal for shared clients like Web feed readers, because they can ‘leak’ the subscription information, and don’t benefit from caching.

We might not be able to do better than that, but it’s worth considering: would publishers trust third parties like cloud feed readers enough to delegate subscription authentication to them? What would that look like?
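
To make the contrast concrete, here’s a rough sketch at the HTTP level; the hostnames, paths, and token formats are made up for illustration:

    # Capability URL: the secret is the URL itself, so anyone who learns it
    # (server logs, a shared cloud reader, a forwarded link) has the
    # subscription, and shared caches can't be used safely.
    GET /feeds/premium-3f9a2c.xml HTTP/1.1
    Host: publisher.example

    # Delegated authentication: the feed URL is stable and cacheable, and the
    # reader presents a credential issued to it on the subscriber's behalf,
    # which the publisher can scope or revoke.
    GET /feeds/premium.xml HTTP/1.1
    Host: publisher.example
    Authorization: Bearer <token issued to the cloud reader>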

Publisher Engagement

Finally, one of the downsides of feeds from a publisher standpoint is that you get very little information about how your feed is used. That’s a huge benefit from a privacy perspective, but it also hinders adoption of Web feeds, forcing people into subscribing by e-mail (which has its own privacy issues).

I recognise that some are fine with this: personally, most of the feed content I consume isn’t commercial, and it’s great. However, if we can make feeds more ‘normal’ by providing some limited feedback to publishers in a privacy preserving way, that could be very good for the ecosystem.

For example, if the Web browser were able to indicate to the Web site what proportion of a site’s audience uses feed readers, publishers could get an idea of how large their potential feed audience is. Keeping in mind that feed readers are likely much ‘stickier’ than other delivery mechanisms, this could be quite attractive.

Or, if feed readers were able to give publishers an indication of what articles were viewed, that would give them the information they need to optimise their content.

On the Web, these kinds of tasks are currently performed with privacy-invasive technologies, often using third-party cookies. Feed readers could take advantage of newer privacy tech like Privacy Preserving Measurement and Oblivious HTTP to provide these functions in much smarter and targeted ways.

However, doing so would require coordination between implementers (see: Community) and a deep respect for the user in how they’re designed (see: User Agency).

What Else?

I’m sure there’s more. I wrote this primarily to gauge interest: who’s up for taking Web feeds to the next level?

If this is interesting to you, I’d love to hear about it. I asked for an IETF mailing list about feeds to be set up a while back, and while it hasn’t been used yet, that’s probably the best place to start – please subscribe and post there!

Are Internet Standards Competitive or Collaborative?
Mark Nottingham, 2024-07-16
https://www.mnot.net/blog/2024/07/16/collaborative_standards

It’s often assumed that standards work is inherently competitive. This post examines why Internet standards are often more collaborative than competitive, and outlines some implications of this approach.

It’s often assumed that standards work is inherently competitive. After all, the legal reason for Standards Developing Organisations (SDOs) to exist at all is as a shelter from prosecution for what would otherwise be anti-competitive behaviour.[1]

That description evokes images of hard-fought, zero-sum negotiation where companies use whatever dirty tricks they can to steer the outcome and consolidate whatever market power they can.

And that does happen. I’ve experienced that testosterone-soaked style of standardisation in the ’00s when IBM, Microsoft, and Oracle attempted to standardise Web Services (i.e., machine-to-machine communication using XML) at the W3C. It did not end well.[2]

Thankfully, the reality of modern Internet and Web standards work differs greatly from that experience. While companies are still competing in relevant markets, and still use standards as strategic tools – and yes, sometimes they behave badly – both the culture and processes of these bodies are geared heavily towards collaboration and cooperation.

In part, that’s due to the history of the Internet and the Web. Both were projects born from collaborative, non-commercial efforts in research environments that reward cooperation. Early on, these attitudes were embedded into the cultures of both the IETF and W3C: Web Services was an anomaly because corporate interests brought that effort from the ‘outside.’

This tendency can be seen in everything from the IETF’s ‘we participate as individuals, there is no membership’ ethic to W3C’s focus on building a positive working environment. When someone attempts to steer an outcome to benefit one company or appears to act in bad faith, people notice – these things are frowned upon.

Internet and Web standards work also tend to attract people who believe in the mission of these organisations – often to the point where they identify more with the standards work than their current employer. In fact, it’s not uncommon for people to shift from company to company over their careers while still working on the same standards, and without appreciably changing their perspectives on what the correct outcomes are.

As a result, long-term standards participants in these bodies often build strong relationships with each other: through years of interaction, they come to understand each other’s points of view, quirks of behaviour, and red lines. That doesn’t mean they always agree, of course. However, those relationships form the backbone of how much of Internet standards work gets done.

Another factor worth mentioning: as SDOs have matured, they’ve created increasingly elaborate process mechanisms and cross-cutting reviews of aspects like security, privacy, operability, and more. In practice, these checks and balances tend to reward collaborative work and discourage unilateral behaviour.

All of this means that it’s best to think of these as communities, rather than mere gatherings of competitors. That’s not to say that they always get along or even that they’re healthy communities – sometimes things get very bad indeed.[3] I’m also not suggesting that companies do not behave competitively in Internet standards: it’s just that competition happens in a more subtle way. Rather than trying to use standards as a way to direct markets towards themselves, companies more often compete in implementation and delivery – building value on top of what’s standardised.

And, to be clear, this is specific to Internet-related standards bodies; other places often still follow the ‘old ways,’ from what I gather.

An Aside on Collaboration and Innovation

All of this might sound counterintuitive if you take the view that innovation primarily comes from deep within companies that produce things – firms that cannily use interoperability to consolidate their market share, or grudgingly share their valuable work with others under pain of anti-trust prosecution. Carliss Baldwin and Eric von Hippel persuasively argue against this view in Modeling a Paradigm Shift: From Producer Innovation to User and Open Collaborative Innovation:

We have seen, and expect to continue to see, single-user innovation and open collaborative innovation growing in importance relative to producer innovation in most sectors of the economy. We do not believe that producer innovation will disappear, but we do expect it to become less pervasive and ubiquitous than was the case during most of the 20th century, and to be combined with user and open collaborative innovation in many settings.

Most interestingly, they point out that decreasing design and communication costs brought by – wait for it – technical innovations like the Internet mean that open collaboration becomes a viable model for innovation in more and more cases.

In short, under the right circumstances, success becomes more likely when cooperating as opposed to attempting to innovate on your own. Open collaborative efforts like Internet standards can be a significant source of innovation, and in some circumstances a distinctly superior one. I think this is an important point to consider when contemplating innovation and competition policies.

Some Less Obvious Implications

The collaborative nature of modern Internet standards work has some interesting implications, and some can be seen as downsides – or at least features that we need to be well aware of.

Most importantly, it means that SDOs have distinct cultures with values and norms. This isn’t surprising to anyone familiar with organisational theory, but it can seem exclusionary or even hostile to those who don’t share them. Outsiders may need to do considerable work to get ‘up to speed’ with those values and norms if they want to be successful in these bodies. Even then, they may face roadblocks if their goals and values aren’t aligned with those of the people already entrenched in the organisation.

It also means that these bodies are opinionated about the work they take on: Internet SDOs don’t typically ‘rubber stamp’ proposals from outside. Because they are communities with context and specific expertise, what they produce is ‘flavoured’ by the values and even tastes of those communities. Again, this can be difficult to understand for those who just want their proposal standardised. This factor also limits the ability of these organisations to address problems outside their areas of expertise: they’re not generic venues for any kind of standards work (a more common model elsewhere).

The clubby nature of collaboration can sit very uneasily with the competitive aspects that are inevitably still present in much of this work. Outcomes are heavily influenced by who shows up: three router vendors collaborating well are likely to come up with something that works well for router vendors. However, if the work negatively affects other parties who show up later and try to change things, integrating their viewpoints can be challenging, because they will be seen as going against an established (albeit smaller) consensus (see also the previous discussion of the limits of openness).

When there is contention over how to apply values to standards work (or over the values themselves) a collaborative SDO can struggle to manage the resulting confrontation, especially if their culture is primarily oriented towards ‘friendly’ engagements. For example, many still harbour significant bitterness regarding the well-documented disagreement about DRM in HTML and how it was resolved. The Do Not Track specification failed because the participants couldn’t agree on the meaning of the term, and because adoption of Internet standards is voluntary. It remains to be seen whether the very divergent views of the advertising, publishing, and privacy communities can be reconciled in more recent efforts.

Ultimately, these factors put pressure on the SDO’s governance: a well-governed venue will ensure that collaboration functions well, while avoiding capture by narrow interests and ensuring that all affected parties have an opportunity to participate – even if they aren’t a good ‘cultural fit’.

  1. See, for example, TFEU Article 101(3), which exempts an agreement between competitors – including those in standards bodies – that “contributes to improving the production or distribution of goods or to promoting technical or economic progress, while allowing consumers a fair share of the resulting benefit[…]” 

  2. To get a sense of what Web Services standards felt like in music video form, try watching Nobody Speak from DJ Shadow (warning: lyrics). 

  3. Note that the IETF culture has changed in significant ways since that article was published. 
