This is depressing.
The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.
That was the hard part when programmers were punching holes in cards. It was the hard part when they were typing COBOL code. It was the hard part when they were bringing Visual Basic GUIs to life (presumably to track the killer’s IP address). And it’s the hard part when they’re prompting language models to predict plausible-looking Python.
The hard part has always been – and likely will continue to be for many years to come – knowing exactly what to ask for.
I love the small web, the clean web. I hate tech bloat.
And LLMs are the ultimate bloat.
So much truth in one story:
They built a machine to gentrify the English language.
They have built a machine that weaponizes mediocrity and sells it as perfection.
They are strip-mining your confidence to sell you back a synthetic version of it.
Maureen has written a really good overview of web feeds for this year’s HTMHell advent calendar.
The common belief is that nobody uses RSS feeds these days. And while it’s true that I wish more people used feed readers—the perfect antidote to being fed from an algorithm—the truth is that millions of people use RSS feeds every time they listen to a podcast. That’s what a podcast is: an RSS feed with enclosure elements that point to audio files.
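If you want to see how little there is to it, here's a rough sketch in TypeScript of pulling the audio out of a feed in the browser. The function name and the idea of doing this client-side are mine for illustration; a real feed reader would do this on the server, not least because of cross-origin restrictions.

```typescript
// A minimal sketch: what makes an RSS feed a podcast is the <enclosure>
// element on each <item>, whose url attribute points at an audio file.
async function listEpisodeAudio(feedUrl: string): Promise<string[]> {
  const response = await fetch(feedUrl);
  const feed = new DOMParser().parseFromString(await response.text(), 'text/xml');
  return Array.from(feed.querySelectorAll('item > enclosure'))
    .map((enclosure) => enclosure.getAttribute('url'))
    .filter((url): url is string => url !== null);
}
```

That's it: items plus enclosures. Everything else in the feed is the same metadata a blog feed would carry.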
And just as a web feed doesn’t necessarily need to represent a list of blog posts, a podcast doesn’t necessarily need to be two or more people having a recorded conversation (though that does seem to be the most common format). A podcast can tell a story. I like those kinds of podcasts.
The BBC are particularly good at this kind of episodic audio storytelling. I really enjoyed their series Thirteen minutes to the moon, all about the Apollo 11 mission. They followed it up with a series on Apollo 13, and most recently, a series on the space shuttle.
Here’s the RSS feed for the 13 minutes podcast.
Right now, the BBC have an ongoing series about the history of the atomic bomb. The first series was about Leo Szilard, the second series was about Klaus Fuchs, and the third series running right now is about the Cuban missile crisis.
The hook is that each series is presented by people with a family connection to the events. The first series is presented by the granddaughter of one of the Oak Ridge scientists. The second series is presented by the granddaughter of Klaus Fuchs’s spy handler in the UK—blimey! And the current series is presented by Nina Khrushcheva and Max Kennedy—double blimey!
Here’s the RSS feed for The Bomb podcast.
If you want a really deep dive into another pivotal twentieth century event, Evgeny Morozov made a podcast all about Stafford Beer and Salvador Allende’s collaboration on cybernetics in Chile, the fabled Project Cybersyn. It’s fascinating stuff, though there’s an inevitable feeling of dread hanging over events because we know how this ends.
The podcast is called The Santiago Boys, though I almost hesitate to call it a podcast because for some reason, the website does its best to hide the RSS feed, linking only to the silos of Spotify and Apple. Fortunately, thanks to this handy tool, I can say:
Here’s the RSS feed for The Santiago Boys podcast.
The unifying force behind all three of these stories is the cold war:
I suppose it’s not clear to me what a ‘good’ window into unreliable, systemically toxic systems accomplishes, or how it changes anything that matters for the better, or what that idea even means at all. I don’t understand how “ethical AI” isn’t just “clean coal” or “natural gas.” The power of normalization as four generations are raised breathing low doses of aerosolized neurotoxins; the alternative was called “unleaded”, but the poison was called “regular gas”.
There’s a real technology here, somewhere. Stochastic pattern recognition seems like a powerful tool for solving some problems. But solving a problem starts at the problem, not working backwards from the tools.
Delivering total nonsense, with complete confidence.
The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.
That’s it.
That’s the $13T growth story that Morgan Stanley is telling. It’s why big investors and institutionals are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family’s financial security.
Now, if AI could do your job, this would still be a problem. We’d have to figure out what to do with all these technologically unemployed people.
But AI can’t do your job. It can help you do your job, but that doesn’t mean it’s going to save anyone money.
AI has the Jeopardy Phenomenon too.
If you use it to generate code that is outside your expertise, you are likely to think it’s all well and good, especially if it seems to work at first pop. But if you’re intimately familiar with the technology or the code around the code it’s generating, there is a good chance you’ll be like hey! that’s not quite right!
Not just code. I’m astounded by the cognitive dissonance displayed by people who say “I asked an LLM about {topic I’m familiar with}, and here’s all the things it got wrong” who then proceed to say “It was really useful when I asked an LLM for advice on {topic I’m not familiar with, hence why I’m asking an LLM for advice}.”
Like, if you know that the results are super dodgy for your own area of expertise, why would you think they’d be any better for, I don’t know, restaurant recommendations in a city you’ve never been to?
My mind boggles at the thought of using a generative tool based on a large language model to do any kind of qualitative user research, so every single thing that Gregg says here makes complete sense to me.
Whenever anyone states that “AI is the future, so…” or “many people are using AI anyway, so…” they are not only expressing an opinion — they’re shaping that future.
I’ve come to realize that statements about the future aren’t predictions: they’re more like spells. When someone describes something to you as the future, they’re sharing a heartfelt belief that this something will be part of whatever comes next. “Artificial intelligence isn’t going anywhere” quite literally involves casting a technology forward into time. How could that be anything else but a kind of magic?
React is no longer just a library. It’s a full ecosystem that defines how frontend developers are allowed to think.
Browsers now ship View Transitions, Container Queries, and smarter scheduling primitives. The platform keeps evolving at a fair pace, but most teams won’t touch these capabilities until React officially wraps them in a hook or they show up in Next.js docs.
Innovation keeps happening right across the ecosystem, but for many it only becomes “real” once React validates the approach. Which is fine, assuming you enjoy waiting for permission to use the platform you’re already building on.
Zing!
The critique isn’t that React is bad, but that treating any single framework as infrastructure creates blind spots in how we think and build. When React becomes the lens through which we see the web, we stop noticing what the platform itself can already do, and we stop reaching for native solutions because we’re waiting for the framework-approved version to show up first.
If your team’s evolution depends on a single framework’s roadmap, you are not steering your product; you are waiting for permission to move.
Explore the platform. Challenge yourself to discover what the modern web can do natively. Pure HTML, CSS, and a bit of vanilla JS…
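To give just one hedged example of what that looks like: view transitions don't need a framework at all. Here's a sketch using the document.startViewTransition API that some browsers now ship; updateContent is a made-up stand-in for whatever code actually swaps the DOM.

```typescript
// A sketch of native view transitions, no framework required.
// `updateContent` is hypothetical: it's whatever updates the page.
function navigateWithTransition(updateContent: () => void): void {
  // Feature-detect and fall back gracefully in browsers
  // that haven't shipped the View Transitions API yet.
  if (!document.startViewTransition) {
    updateContent();
    return;
  }
  // The browser snapshots the old state, runs the callback,
  // then animates between the old and new states for you.
  document.startViewTransition(updateContent);
}
```

Progressive enhancement in a nutshell: browsers that understand the API get the animation, browsers that don't still get the update.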
This isn’t a rhetorical question. I genuinely want to know why developers choose to build websites using React.
There are many possible reasons. Alas, none of them relate directly to user experience, other than a trickle-down justification: happy productive developers will make better websites. Citation needed.
It’s also worth mentioning that some people don’t choose to use React, but its use is mandated by their workplace (like some other more recent technologies I could mention). By my definition, this makes React enterprise software in this situation. My definition of enterprise software is any software that you use but that you yourself didn’t choose.
By far the most common reason for choosing React today is inertia. If it’s what you’re comfortable with, you’d need a really compelling reason not to use it. That’s generally the reason behind usage mandates too. If we “standardise” on React, then it’ll make hiring more straightforward (though the reality isn’t quite so simple, as the React ecosystem has mutated and bifurcated over time).
And you know what? Inertia is a perfectly valid reason to choose a technology. If time is of the essence, and you know it’s going to take you time to learn a new technology, it makes sense to stick with what you know, even if it’s out of date. This isn’t just true of React, it’s true of any tech stack.
This would all be absolutely fine if React weren’t a framework that gets executed in browsers. Any client-side framework is a tax on the end user. They have to download, parse, and execute the framework in order for you to benefit.
But maybe React doesn’t need to run in the browser at all. That’s the promise of server-side rendering.
There used to be a fairly clear distinction between front-end development and back-end development. The front end consisted of HTML, CSS, and client-side JavaScript. The back end was anything you wanted as long as it could spit out those bits of the front end: PHP, Ruby, Python, or even just a plain web server with static files.
Then it became possible to write JavaScript on the back end. Great! Now you didn’t need to context-switch when you were scripting for the client or the server. But this blessing also turned out to be a bit of a curse.
When you’re writing code for the back end, some things matter more than others. File size, for example, isn’t really a concern. Your code can get really long and it probably won’t slow down the execution. And if it does, you can always buy your way out of the problem by getting a more powerful server.
On the front end, your code should have different priorities. File size matters, especially with JavaScript. The code won’t be executed on your server. It’s executed on all sorts of devices on all sorts of networks running all sorts of browsers. If things get slow, you can’t buy your way out of the problem because you can’t buy every single one of your users a new device and a new network plan.
Now that JavaScript can run on the server as well as the client, it’s tempting to just treat the code the same. It’s the same language after all. But the context really matters. Some JavaScript that’s perfectly fine to run on the server can be a resource hog on the client.
And this is where it gets interesting with React. Because most of the things people like about React still apply on the back end.
When React first appeared, it was touted as a front-end tool. State management and a near-magical virtual DOM were the main selling points.
Over time, that’s changed. The claimed speed benefits of the virtual DOM turned out to be just plain false. That just left state management.
But by that time, the selling points had changed. The component-based architecture turned out to be really popular. Developers liked JSX. A lot. Once you got used to it, it was a neat way to encapsulate little bits of functionality into building blocks that can be combined in all sorts of ways.
For the longest time, I didn’t realise this had happened. I was still thinking of React as a library like jQuery. But React is a framework like Rails or Django. As a developer, it’s where you do all your work. Heck, it’s pretty much your identity.
But whereas Rails or Django run on the back end, React runs on the front end …except when it doesn’t.
JavaScript can run on the server, which means React can run on the server. It’s entirely possible to have your React cake and eat it. You can write all of your code in React without serving up a single line of React to your users.
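Here’s roughly what that looks like, as a sketch rather than a recipe. It uses renderToStaticMarkup from react-dom/server; the Greeting component is just a made-up example.

```tsx
import { renderToStaticMarkup } from 'react-dom/server';

// An ordinary component, authored in JSX as usual.
function Greeting({ name }: { name: string }) {
  return <p>Hello, {name}!</p>;
}

// On the server, turn the component tree into plain HTML.
// renderToStaticMarkup emits no framework attributes and assumes
// no client-side React, so this string is all the user downloads.
const html = renderToStaticMarkup(<Greeting name="world" />);
// html === "<p>Hello, world!</p>"
```

All the componentry and JSX stays on your machine; the browser just gets HTML.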
That’s true in theory. The devil is in the tooling.
Next.js allows you to write in React and do server-side rendering. But it really, really wants to output React to the client as well.
By default, you get the dreaded hydration pattern—do all the computing on the server in JavaScript (yay!), serve up HTML straight away (yay! yay!) …and then serve up all the same JavaScript that’s on the server anyway (ya—wait, what?).
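In code, the pattern looks something like this sketch. The names here (App, the root element ID) are placeholders rather than anything Next.js-specific, and in reality the server and client halves live in separate bundles.

```tsx
// Server and client code squashed together purely for illustration.
import { renderToString } from 'react-dom/server';
import { hydrateRoot } from 'react-dom/client';
import { App } from './App';

// On the server: render the component tree to HTML (yay!)
// and send that HTML to the browser straight away (yay! yay!).
const html = renderToString(<App />);

// In the browser: download React plus the very same App code again,
// re-run it, and attach event handlers to markup that already exists.
hydrateRoot(document.getElementById('root')!, <App />);
```

The HTML arrives quickly, but the browser still has to download and execute the same component code all over again just to wire up the event handlers.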
It’s possible to get Next.js to skip that last step, but it’s not easy. You’ll be battling it every step of the way.
Astro takes a very different approach. It will do everything it can to keep the client-side JavaScript to a minimum. Developers get to keep their beloved JSX authoring environment without penalising users.
Alas, the collective inertia of the “modern” development community is bound up in the React/Next/Vercel ecosystem. That’s a shame, because Astro shows us that it doesn’t have to be this way.
Switching away from using React on the front end doesn’t mean you have to switch away from using React on the back end.
The titular question I asked is too broad and naïve. There are plenty of reasons to use React, just as there are plenty of reasons to use WordPress, Eleventy, or any other technology that works on the back end. If it’s what you like or what you’re comfortable with, that’s reason enough.
All I really care about is the front end. I’m not going to pass judgment on anyone’s choice of server-side framework, as long as it doesn’t impact what you can do in the client. Like Harry says:
…if you’re going to use one, I shouldn’t be able to smell it.
Here’s the question I should be asking:
Why use React in the browser?
Because if the reason you’re using React is cultural—the whole team works in JSX, it makes hiring easier—then there’s probably no need to make your users download React.
If you’re making a single-page app, then …well, the first thing you should do is ask yourself if it really needs to be a single-page app. They should be the exception, not the default. But if you’re determined to make a single-page app, then I can see why state management becomes very important.
In that situation, try shipping Preact instead of React. As a developer, you’ll almost certainly notice no difference, but your users will appreciate the refreshing lack of bloat.
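If your bundler supports aliasing, the swap can be surprisingly small. This is a sketch of the preact/compat approach from Preact’s documentation, written here as a Vite config; adjust to taste if you’re on webpack or something else.

```typescript
// vite.config.ts: a minimal sketch of aliasing React imports to Preact.
// preact/compat re-implements the React API, so existing components and
// JSX keep working while the bundle shipped to users gets much smaller.
import { defineConfig } from 'vite';

export default defineConfig({
  resolve: {
    alias: {
      react: 'preact/compat',
      'react-dom': 'preact/compat',
      'react/jsx-runtime': 'preact/jsx-runtime',
    },
  },
});
```

Your components don’t change; only the code your users download does.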
Mostly though, I’d encourage you to investigate what you can do with vanilla JavaScript in the browser. I totally get why you’d want to hold on to React as an authoring environment, but don’t let your framework limit what you can do on the front end. If you use React on the client, you’re not doing your users any favours.
You can continue to write in React. You can continue to use JSX. You can continue to hire React developers. But keep it on your machine. For your users, make the most of what web browsers can do.
Once you keep React on the server, then a whole world of possibilities opens up on the client. Web browsers have become incredibly powerful in what they offer you. Don’t let React-on-the-client hold you back.
And if you want to know more about what web browsers are capable of today, come to Web Day Out in Brighton on Thursday, 12th March 2026.
Machine learning is amazing if … the value of a correct answer is much higher than the cost of an incorrect answer.
Related to Laissez-faire Cognitive Debt:
And that’s where I start to get really annoyed by a lot of the LLM hype. It’s pushing machine-learning approaches into places where there are significant harms for sometimes giving the wrong answer. And it’s doing so while trying to outsource the liability to the customers who are using these machines in ways in which they are advertised as working. It’s great for translation! Unless a mistranslated word could kill a business deal or start a war. It’s great for summarisation! Unless missing a key point could cost you a load of money. It’s great for writing code! Unless a security vulnerability would cost you lost revenue, or a copyright infringement lawsuit (from having accidentally put something from the training set directly in your codebase in contravention of its license) would kill your business. And so on. Lots of risks that are outsourced and liabilities that are passed directly to the user.
I think of Cognitive Debt as ‘where we have the answers, but not the thinking that went into producing those answers’.
Lately, I have started noticing examples of not just where the debt is being accrued, but who then has the responsibility to pick it up and repay it.
Too often, an LLM doesn’t replace the need for thinking in a group setting, but simply creates more work for others.
I find Brian Eno to be a fascinating chap. His music isn’t my cup of tea, but I really enjoy hearing his thoughts on art, creativity, and culture.
I’ve always loved this short piece he wrote about singing with other people. I’ve passed that link onto multiple people who have found a deep joy in singing with a choir:
Singing aloud leaves you with a sense of levity and contentedness. And then there are what I would call “civilizational benefits.” When you sing with a group of people, you learn how to subsume yourself into a group consciousness because a capella singing is all about the immersion of the self into the community. That’s one of the great feelings — to stop being me for a little while and to become us. That way lies empathy, the great social virtue.
Then there’s the whole Long Now thing, a phrase that originated with him:
I noticed that this very local attitude to space in New York paralleled a similarly limited attitude to time. Everything was exciting, fast, current, and temporary. Enormous buildings came and went, careers rose and crashed in weeks. You rarely got the feeling that anyone had the time to think two years ahead, let alone ten or a hundred. Everyone seemed to be passing through. It was undeniably lively, but the downside was that it seemed selfish, irresponsible and randomly dangerous. I came to think of this as “The Short Now”, and this suggested the possibility of its opposite - “The Long Now”.
I was listening to my Huffduffer feed recently, where I had saved yet another interview with Brian Eno. Sure enough, there was plenty of interesting food for thought, but the bit that stood out to me was relevant to, of all things, prototyping:
I have an architect friend called Rem Koolhaas. He’s a Dutch architect, and he uses this phrase, “the premature sheen.” In his architectural practice, when they first got computers and computers were first good enough to do proper renderings of things, he said everything looked amazing at first.
You could construct a building in half an hour on the computer, and you’d have this amazing-looking thing, but, he said, “It didn’t help us make good buildings. It helped us make things that looked like they might be good buildings.”
I went to visit him one day when they were working on a big new complex for some place in Texas, and they were using matchboxes and pens and packets of tissues. It was completely analog, and there was no sense at all that this had any relationship to what the final product would be, in terms of how it looked.
It meant that what you were thinking about was: How does it work? What do we want it to be like to be in that place? You started asking the important questions again, not: What kind of facing should we have on the building or what color should the stone be?
I keep thinking about that insight: “It didn’t help us make good buildings. It helped us make things that looked like they might be good buildings.”
Replace the word “buildings” with whatever output is supposedly being revolutionised by generative models today. Websites. Articles. Public policy.
I am interested in art—we are interested in art, in any and all of its forms—because humans made it. That’s the very thing that makes it interesting; the who, the how, and especially the why.
The existence of the work itself is only part of the point, and materializing an image out of thin air misses the point of art, in very much the same way that putting a football into a Waymo to drive it up and down the street for a few hours would be entirely missing the point of sports.
The generative AI industry only exists because some people decided that it’s okay for them to take all this work with no permission, let alone compensation for the original creators, and to charge others for the privilege of using the probabilistic plagiarism machines they’ve fed it to.
Can you ship AI-generated code without creating a maintenance nightmare six months from now? Can you debug it when it breaks? Can you modify it when requirements change? Can you onboard new engineers to a codebase they didn’t write and the AI barely explained?
Most teams haven’t realized this shift yet. They’re optimizing for code generation speed while comprehension debt silently accumulates in their repos.
One team I talked to spent 3 days fixing what should have been a 2-hour problem. They had “saved” time by having AI generate the initial implementation. But when it broke, they lost 70 hours trying to understand code they had never built themselves.
That’s comprehension debt compounding. The time you save upfront gets charged back with interest later.