The explosion of digital infrastructure systems over the past thirty years may be the biggest but least recognised shock delivered by the internet. We need to develop the governance capacity to manage it and force it out of the hands of the democracy-hostile corporations that control it today.
People mean well but ethics is hard. In tech, we have a knack for applying ethics in the most useless ways possible — even when we earnestly want to improve humankind's lot. Why does this matter, why are we failing, and how can we fix it?
What if the internet were public interest technology? Is that too wildly speculative? I think not. I am not talking about a utopian project here — a public interest internet would be a glorious, imperfect mess and it would be far from problem-free. But while there is a lot of solid thinking about various digital issues or pieces of internet infrastructure (much of which I rely upon here), I have yet to read an answer to this question: What global digital architecture should we assemble if we take seriously the idea that the internet should be public interest technology?
We know from experience and empirical analysis that open source and open standards projects drift into oligarchies that struggle to reform themselves and become ossified. Often we can simply let them die and replace them with fresher alternatives, but when that option is too costly, we can draw on theoretical models of institutional change to understand how to compost the oligarchy and regrow the project from within.
Over the past few months I've been having many conversations with people who all have a particular set of skills but whose job titles are all over the place. I believe that we form a more coherent group than we realise, that this novel role exists for a good reason, and that we would benefit from making that known.
Thought experiment: how hard would it be to implement ActivityPub over ATProto? The answer might surprise you!
Trust has been the defining constraint on the Web's evolution towards more powerful, more application-like capabilities. In a Web context, the user must be able to safely load any arbitrary URL, to safely click on any arbitrary link. This is achieved by having the runtime place strict limits on what a Web page can do, which in turn necessarily limits powerful capabilities. Could we get more power from a primitive that places more stringent constraints on what pages can do?
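As an aside on that last question, today's platform already contains a small precedent for trading tighter constraints against greater power. Here is a rough TypeScript sketch of both sides of that trade, assuming a browser environment; the URL and function names are illustrative, not from any of the posts:

```typescript
// (1) Arbitrary loading stays safe because the runtime strips power away:
// a no-cors fetch of a cross-origin URL succeeds, but the page only gets
// an opaque response whose status and body it cannot read.
async function probeOpaque(): Promise<void> {
  const res = await fetch("https://example.com/data.json", { mode: "no-cors" });
  console.log(res.type); // "opaque" — the response exists but is sealed off
}

// (2) The converse also exists in miniature: a page served with COOP + COEP
// response headers (cross-origin isolation) accepts *more* stringent rules
// about what it can embed, and in exchange is granted a more powerful
// primitive that is otherwise withheld.
function maybeUseSharedMemory(): SharedArrayBuffer | null {
  if (globalThis.crossOriginIsolated) {
    return new SharedArrayBuffer(1024); // gated behind the stricter mode
  }
  return null; // without the constraint, the capability is denied
}
```

Cross-origin isolation is a narrow mechanism, but it shows the shape of the bargain: accept stricter rules, unlock capabilities the default sandbox cannot safely offer.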
Browsers are hugely load-bearing in the Web's architecture, and yet they haven't changed very much in quite a while. If the Web is indeed for user agency, we should take a hard look at our user agents to see whether they might need improvement.
We take the Web for granted as that thing that's just there, and we talk of things being good or bad for the Web, but we don't ever sit down and really say what the Web is for. I take a look at this question with an eye towards understanding what we need to do to build a Web that's actually better.
This is the kick-off post in a series in which I'm going to explore things that we could change about the Web. The odds are pretty good that I will be wrong, possibly even very wrong. You're going to dislike some of it, perhaps all of it! My point isn't to jump straight into building these ideas — even though I do believe they point in a better direction and are feasible — but rather to break out of the incrementalist rut and stagnant vision in which the Web finds itself mired. The Web is not in a good place, and I feel the need for more vision work and much more thinking about architectural interventions that can bring about radical change.