Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

If Not React, Then What?

Frameworkism isn't delivering. The answer isn't a different tool, it's the courage to do engineering.

Over the past decade, my work has centred on partnering with teams to build ambitious products for the web across both desktop and mobile. This has provided a ring-side seat to a sweeping variety of teams, products, and technology stacks across more than 100 engagements.

While I'd like to be spending most of this time working through improvements to web APIs, the majority of time spent with partners goes to remediating performance and accessibility issues caused by "modern" frontend frameworks and the culture surrounding them. Today, these issues are most pronounced in React-based stacks.

This is disquieting because React is legacy technology, but it continues to appear in greenfield applications.

Surprisingly, some continue to insist that React is "modern." Perhaps we can square the circle if we understand "modern" to apply to React in the way it applies to art. Neither demonstrates contemporary design or construction. They are not built for current needs or performance standards, but stand as expensive objets harkening back to the peak of an earlier era's antiquated methods.

In the hope of steering the next team away from the rocks, I've found myself penning advocacy pieces and research into the state of play, as well as giving talks to alert managers and developers of the dangers of today's frontend orthodoxy.

In short, nobody should start a new project in the 2020s based on React. Full stop.1

Code that runs on the server can be fully costed. Performance and availability of server-side systems are under the control of the provisioning organisation, and latency can be actively managed.

Code that runs on the client, by contrast, is running on The Devil's Computer.2 Almost nothing about the latency, client resources, or even API availability are under the developer's control.

Client-side web development is perhaps best conceived of as influence-oriented programming. Once code has left the datacenter, all a web developer can do is send thoughts and prayers.

As a direct consequence, an unreasonably effective strategy is to send less code. Declarative forms generate more functional UI per byte sent. In practice, this means favouring HTML and CSS over JavaScript, as they degrade gracefully and feature higher compression ratios. These improvements in resilience and reductions in costs are beneficial in compounding ways over a site's lifetime.
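
To make the per-byte argument concrete, here's a small sketch (mine, not from any particular engagement) of a disclosure widget that ships zero JavaScript; the browser supplies toggling, keyboard handling, and accessibility semantics natively:

```html
<!-- A zero-JS disclosure widget: open/close behaviour, focus handling,
     and accessibility semantics all come from the browser. -->
<details>
  <summary>Shipping &amp; returns</summary>
  <p>Orders ship within two business days. Returns are free for 30 days.</p>
</details>

<style>
  /* Styling is additive; the widget still works if this CSS never arrives. */
  details { border: 1px solid #ccc; border-radius: 4px; padding: 0.5rem; }
  summary { cursor: pointer; font-weight: 600; }
</style>
```

A framework-rendered equivalent has to ship this markup plus a runtime that recreates behaviour the browser already provides.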

Stacks based on React, Angular, and other legacy-oriented, desktop-focused JavaScript frameworks generally take the opposite bet. These ecosystems pay lip service to the controls that are necessary to prevent horrific proliferations of unnecessary client-side cruft. The predictable consequence is NPM-amalgamated bundles full of redundancies like core-js, lodash, underscore, polyfills for browsers that no longer exist, userland ECC libraries, moment.js, and a hundred other horrors.

This culture is so out of hand that it seems 2024's React developers are constitutionally incapable of building chatbots without including all of these 2010s holdovers, plus at least one extremely chonky MathML or TeX formatting library in the critical path of displaying an <input> for users to type in. A tiny fraction of query responses need to display formulas — and yet.

Tech leads and managers need to break this spell. Ownership has to be created over decisions affecting the client. In practice, this means forbidding React in new work.

This question comes in two flavours that take some work to tease apart:

• The narrow form: taking client-side rendering as a given, which tool should a team pick instead of React?
• The broad form: should the team be building a client-side-rendered, framework-based experience at all?

Teams that have grounded their product decisions appropriately can productively work through the narrow form by running truly objective bakeoffs.

Building multiple small PoCs to determine each approach's scaling factors and limits can even be a great deal of fun.3 It's the rewarding side of real engineering: trying out new materials under well-understood constraints to improve user outcomes.

Just the prep work to run bakeoffs tends to generate value. In most teams, constraints on tech stack decisions have materially shifted since they were last examined. For some, identifying use-cases reveals a reality that's vastly different than product managers and tech leads expect. Gathering data on these factors allows for first-pass cuts about stack choices, winnowing quickly to a smaller set of options to run bakeoffs for.4

But the teams we spend the most time with don't have good reasons to stick with client-side rendering in the first place.

Many folks asking "if not React, then what?" think they're asking in the narrow form but are grappling with the broader version. A shocking fraction of (decent, well-meaning) product managers and engineers haven't thought through the whys and wherefores of their architectures, opting instead to go with what's popular in a responsibility fire brigade.5

For some, provocations to abandon React create an unmoored feeling; a suspicion that they might not understand the world any more.6

Teams in this position are working through the epistemology of their values and decisions.7 How can they know their technology choices are better than the alternatives? Why should they pick one stack over another?

Many need help deciding which end of the telescope to examine frontend problems through because frameworkism has become the dominant creed of frontend discourse.

Frameworkism insists that all problems will be solved if teams just framework hard enough. This is a non sequitur, if not entirely backwards. In practice, the only thing that makes web experiences good is caring about the user experience — specifically, the experience of folks at the margins. Technologies come and go, but what always makes the difference is giving a toss about the user.

In less vulgar terms, the struggle is to convince managers and tech leads that they need to start with user needs. Or as Public Digital puts it, "design for user needs, not organisational convenience".

The essential component of this mindset shift is replacing hopes based on promises with constraints based on research and evidence. This aligns with what it means to commit wanton acts of engineering, because engineering is the practice of designing solutions to problems for users and society under known constraints.

The opposite of engineering is imagining that constraints do not exist, or do not apply to your product. The shorthand for this is "bullshit."

Ousting an engrained practice of bullshitting does not come easily. Frameworkism preaches that the way to improve user experiences is to adopt more (or different) tooling from the framework's ecosystem. This provides adherents with something to do that looks plausibly like engineering, but it isn't.

It can even become a totalising commitment; solutions to user problems outside the framework's expanded cinematic universe are unavailable to the frameworkist. Non-idiomatic patterns that unlock wins for users are bugs to be squashed, not insights to be celebrated. Without data or evidence to counterbalance bullshit artists' assertions, who's to say they're wrong? So frameworkists root out and devalue practices that generate objective criteria in decision-making. Orthodoxy unmoored from measurement predictably spins into absurdity. Eventually, heresy carries heavy sanctions.

And it's all nonsense.

Realists do not wallow in abstraction-induced hallucinations about user experiences; they measure them. Realism requires reckoning with the world as it is, not as we wish it to be, and in that way, it's the opposite of frameworkism.

The most effective tools for breaking this spell are techniques that give managers a user-centred view of their system's performance. This can take the form of RUM data, such as Core Web Vitals (check yours now!), or lab results from well-configured test-benches (e.g., WPT). Instrumenting critical user journeys and talking through business goals are quick follow-ups that enable teams to seize the momentum and formulate business cases for change.
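
For teams starting from scratch, collecting RUM data can be this small. A minimal sketch using the open-source web-vitals library; the /analytics/vitals endpoint is a placeholder for whatever collector you run:

```js
// Report Core Web Vitals from real user sessions to a collection endpoint.
import { onLCP, onINP, onCLS } from 'web-vitals';

function report(metric) {
  const body = JSON.stringify({
    name: metric.name,   // "LCP" | "INP" | "CLS"
    value: metric.value, // ms for LCP/INP; unitless for CLS
    id: metric.id,       // unique per page load, for deduplication
    page: location.pathname,
  });
  // sendBeacon survives page unload; fall back to a keepalive fetch.
  if (!navigator.sendBeacon?.('/analytics/vitals', body)) {
    fetch('/analytics/vitals', { method: 'POST', body, keepalive: true });
  }
}

onLCP(report);
onINP(report);
onCLS(report);
```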

RUM and bench data sources are essential antidotes to frameworkism because they provide data-driven baselines to argue about. Instead of accepting the next increment of framework investment on faith, teams armed with data can begin to weigh up the actual costs of fad chasing versus likely returns.

Prohibiting the spread of React (and other frameworkist totems) by policy is both an incredible cost-saving tactic and a helpful way to reorient teams towards delivery for users. However, better results only arrive once frameworkism itself is eliminated from decision-making. Avoiding one class of mistake won't pay dividends if we spend the windfall on investments within the same error category.

A general answer to the broad form of the problem has several parts:

To see how realism and frameworkism differ in practice, it's helpful to work a few examples. As background, recall that our rubric9 for choosing technologies is based on the number of incremental updates to primary data in a session. Some classes of app, like editors, feature long sessions and many incremental updates where a local data model can be helpful in supporting timely application of updates, but this is the exception.

Sites with short average sessions cannot afford much JS up-front.

It's only in these exceptional instances that SPA architectures should be considered.

Very few sites will meet the qualifications to be built as an SPA.

And only when an SPA architecture is required should tools designed to support optimistic updates against a local data model — including "frontend frameworks" and "state management" tools — ever become part of a site's architecture.

The choice isn't between JavaScript frameworks, it's whether SPA-oriented tools should be entertained at all.

For most sites, the answer is clearly "no".

Sites built to inform should almost always be built using semantic HTML with optional progressive enhancement as necessary.
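
As a sketch of what "optional progressive enhancement" means in practice (the markup and endpoint are illustrative, not prescriptive): the baseline works with HTML alone, and script only improves it:

```html
<!-- Baseline: a plain form post that works in every browser, script or not. -->
<form action="/newsletter/signup" method="post" id="signup">
  <label>Email <input type="email" name="email" required></label>
  <button>Subscribe</button>
</form>

<script type="module">
  // Enhancement: in-page confirmation without a navigation. If this script
  // never loads or throws, the form still submits the old-fashioned way.
  const form = document.getElementById('signup');
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    const res = await fetch(form.action, { method: 'POST', body: new FormData(form) });
    form.replaceWith(res.ok ? 'Thanks for subscribing!' : 'Something went wrong; please retry.');
  });
</script>
```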

Static site generation tools like Hugo, Astro, 11ty, and Jekyll work well for many of these cases. Sites that have content that changes more frequently should look to "classic" CMSes or tools like WordPress to generate HTML and CSS.

Blogs, marketing sites, company home pages, public information sites, and the like should minimise client-side JavaScript payloads to the greatest extent possible. They should never be built using frameworks that are designed to enable SPA architectures.10

Informational sites have short sessions and server-owned application data models; that is, the source of truth for what's displayed on the page is always the server's to manage and own. This means that there is no need for a client-side data model abstraction or client-side component definitions that might be updated from such a data model.

Note: many informational sites include productivity components as distinct sub-applications. CMSes such as WordPress comprise two distinct surfaces: a low-traffic, high-interactivity editor for post authors, and a high-traffic, low-interactivity viewer UI for readers. Progressive enhancement should be considered for both, but is an absolute must for reader views, which do not feature long sessions.9:1

E-commerce sites should be built using server-generated semantic HTML and progressive enhancement.

A large and stable performance gap between Amazon and its React-based competitors demonstrates how poorly SPA architectures perform in e-commerce applications. More than 70% of Walmart's traffic is mobile, making their bet on Next.js particularly problematic for the business.

Many tools are available to support this architecture. Teams building e-commerce experiences should prefer stacks that deliver no JavaScript by default, and buttress that with controls on client-side script to prevent regressions in material business metrics.

The general form of e-commerce sites has been stable for more than 20 years:

• A home page
• Search and category ("browse") pages
• Product detail pages
• Cart and checkout flows

Across all of these page types, a pervasive login and cart status widget will be displayed. This widget, and the site's logo, are sometimes the only consistent elements.

Long experience has demonstrated low UI commonality, highly variable session lengths, and the importance of fresh content (e.g., prices) in e-commerce. These factors argue for server-owned application state. The best way to reduce latency is to optimise for lightweight pages. Aggressive caching, image optimisation, and page-weight reduction strategies all help.

Media consumption sites vary considerably in session length and data update potential. Most should start as progressively-enhanced markup-based experiences, adding complexity over time as product changes warrant it.

Many interactive elements on media consumption sites can be modeled as distinct islands of interactivity (e.g., comment threads). Many of these components present independent data models and can therefore be modeled as Web Components within a larger (static) page.
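
A sketch of one such island (the element name and endpoint are hypothetical): the page remains static HTML, and the custom element hydrates only its own subtree:

```js
// An interactivity "island": a self-contained component with its own data
// model, dropped into static HTML as <comment-thread post-id="42">.
class CommentThread extends HTMLElement {
  async connectedCallback() {
    const postId = this.getAttribute('post-id');
    const comments = await (await fetch(`/api/comments/${postId}`)).json();
    const list = document.createElement('ul');
    for (const c of comments) {
      const item = document.createElement('li');
      item.textContent = `${c.author}: ${c.body}`; // textContent avoids injection
      list.append(item);
    }
    this.append(list);
  }
}
customElements.define('comment-thread', CommentThread);
```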

This model breaks down when media playback must continue across media browsing (think "mini-player" UIs). A fundamental limitation of today's web platform is that it is not possible to preserve some elements from a page across top-level navigations. Sites that must support features like this should consider using SPA technologies while setting strict guardrails for the allowed size of client-side JS per page.

Another reason to consider client-side logic for a media consumption app is offline playback. Managing a local (Service Worker-backed) media cache requires application logic and a way to synchronise information with the server.
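
The Service Worker half of that logic can start small. A sketch, assuming application code saves episodes into a cache named "offline-media" when the user asks; real media caching also has to handle range requests, quota, and eviction, which this elides:

```js
// sw.js: serve saved media from the local cache, falling back to the network.
// Application code populates the cache, e.g.:
//   const cache = await caches.open('offline-media');
//   await cache.add('/media/episode-12.mp4');
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith('/media/')) {
    event.respondWith(
      caches.match(event.request).then((hit) => hit ?? fetch(event.request))
    );
  }
});
```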

Lightweight SPA-oriented frameworks may be appropriate here, along with connection-state resilient data systems such as Zero or Y.js.

Social media apps feature significant variety in session lengths and media capabilities. Many present infinite-scroll interfaces and complex post editing affordances. These are natural dividing lines in a design that align well with session depth and client-vs-server data model locality.

Most social media experiences involve a small, fixed number of actions on top of a server-owned data model ("liking" posts, etc.) as well as a distinct update phase for new media arriving at an interval. This model works well with a hybrid approach, as found in Hotwire and many HTMX applications.
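
In this hybrid style, interactions are small HTML exchanges against the server's data model rather than client-side state transitions. A sketch in htmx's idiom (the endpoint is hypothetical); the server responds with a replacement fragment containing the updated count:

```html
<!-- "Liking" is a POST; the server returns updated button markup, which
     htmx swaps in place of this element. -->
<button hx-post="/posts/42/like" hx-target="this" hx-swap="outerHTML">
  ♥ 128
</button>
```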

Islands of deep interactivity may make sense in social media applications, and aggressive client-side caching (e.g., for draft posts) may aid in building engagement. It may be helpful to think of these as unique app sections with distinct needs from the main site's role in displaying content.

Offline support may be another reason to download a snapshot of user data to the client. This should come as part of an approach that builds resilience against flaky networks for the main application. Teams in this situation should consider a Service Worker-based, multi-page app with "stream stitching". This allows sites to stick with HTML while enabling offline-first logic and synchronisation. Because offline support is so invasive to an architecture, this requirement must be identified up-front.

Note: Many assume that SPA-enabling tools and frameworks are required to build compelling Progressive Web Apps that work well offline. This is not the case. PWAs can be built using stream-stitching architectures that apply the equivalent of server-side templating to data on the client, within a Service Worker.
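
A sketch of the stream-stitching idea, assuming shell fragments are precached at Service Worker install time; the browser starts rendering the cached header while the body is still in flight:

```js
// sw.js: compose navigations from a cached shell plus streamed content.
async function stitch(bodyPromise) {
  const { readable, writable } = new TransformStream();
  (async () => {
    const parts = [
      await caches.match('/fragments/header.html'), // precached at install
      await bodyPromise,                            // network or local data
      await caches.match('/fragments/footer.html'), // precached at install
    ];
    for (const part of parts) {
      await part.body.pipeTo(writable, { preventClose: true });
    }
    await writable.close();
  })();
  return new Response(readable, { headers: { 'Content-Type': 'text/html' } });
}

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (event.request.mode === 'navigate' && url.pathname.startsWith('/articles/')) {
    event.respondWith(stitch(fetch(`/fragments${url.pathname}`)));
  }
});
```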

With the advent of multi-page view transitions, MPA architecture PWAs can present fluid transitions between user states without heavyweight JavaScript bundles clogging up the main thread. It may take several more years for the framework community to digest the implications of these technologies, but they are available today and work exceedingly well, both as foundational architecture pieces and as progressive enhancements.
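
Opting in is itself a progressive enhancement in browsers that support cross-document view transitions; everywhere else, navigations simply behave as they always have:

```css
/* Enable cross-document (MPA) view transitions; unsupported browsers
   ignore this rule entirely. */
@view-transition {
  navigation: auto;
}

/* Optionally name an element so it morphs smoothly across navigations. */
.site-logo {
  view-transition-name: site-logo;
}
```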

Document-centric productivity apps may be the hardest class to reason about, as collaborative editing, offline support, and lightweight (and fast) "viewing" modes with full document fidelity are all general product requirements.

Triage-oriented data stores (e.g. email clients) are also prime candidates for the potential benefits of SPA-based technology. But as with all SPAs, the ability to deliver a better experience hinges both on session depth and up-front payload cost. It's easy to lose this race, as this blog has examined in the past.

Editors of all sorts are a natural fit for local data models and SPA-based architectures to support modifications to them. However, the endemic complexity of these systems ensures that performance will remain a constant struggle. As a result, teams building applications in this style should consider strong performance guardrails, identify critical user journeys up-front, and ensure that instrumentation is in place to ward off unpleasant performance surprises.

Editors frequently feature many updates to the same data (e.g., for every keystroke or mouse drag). Applying updates optimistically and only informing the server asynchronously of edits can deliver a superior experience across long editing sessions.
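
The heart of the pattern is small. A sketch (the edit representation and endpoint are stand-ins): apply each edit to the local model synchronously, then batch notifications to the server off the input path:

```js
// Optimistic local edits with asynchronous, batched server sync.
const doc = { text: '' };  // the session's local source of truth
const outbox = [];         // edits the server hasn't acknowledged yet
let flushTimer = null;

function applyEdit(edit) {
  doc.text = edit.apply(doc.text);        // 1. update locally, immediately
  outbox.push(edit);                      // 2. queue for the server
  flushTimer ??= setTimeout(flush, 500);  // 3. sync later, off the input path
}

async function flush() {
  flushTimer = null;
  const batch = outbox.splice(0);
  try {
    await fetch('/api/doc/edits', { method: 'POST', body: JSON.stringify(batch) });
  } catch {
    outbox.unshift(...batch);             // keep edits; retry with backoff
    flushTimer = setTimeout(flush, 2000);
  }
}
```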

However, teams should be aware that editors may also perform double duty as viewers and that the weight of up-front bundles may not be reasonable for both cases. Worse, it can be hard to tease viewing sessions apart from heavy editing sessions at page load time.

Teams that succeed in these conditions build extreme discipline about the modularity, phasing, and order of delayed package loading based on user needs (e.g., only loading editor components users need when they require them). Teams that get stuck tend to fail to apply controls over which team members can approve changes to critical-path payloads.
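
In code, that discipline frequently reduces to dynamic import() at interaction boundaries, so that viewing sessions never pay for editing machinery. A sketch (the module path is hypothetical):

```js
// Load the heavy editor bundle only when a user actually starts editing;
// pure viewing sessions never download it.
document.getElementById('edit-button')?.addEventListener(
  'click',
  async () => {
    const { mountEditor } = await import('./editor.js'); // fetched on demand
    mountEditor(document.getElementById('doc'));
  },
  { once: true } // later clicks are handled by the mounted editor itself
);
```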

Some types of apps are intrinsically interactive, focus on access to local device hardware, or centre on manipulating media types that HTML doesn't handle intrinsically. Examples include 3D CAD systems, programming editors, game streaming services, web-based games, media-editing, and music-making systems. These constraints often make complex, client-side JavaScript-based UIs a natural fit, but each should be evaluated in a similarly critical style as when building productivity applications:

Success in these app classes is possible on the web, but extreme care is required.

A Word On Enterprise Software: Some of the worst performance disasters I've helped remediate are from a category we can think of, generously, as "enterprise line-of-business apps". Dashboards, workflow systems, corporate chat apps, that sort of thing.

Teams building these excruciatingly slow apps often assert that "startup performance isn't important because people start our app in the morning and keep it open all day". At the limit, this can be true, but what this attempted deflection obscures is that performance is cultural. Teams that fail to define and measure critical user journeys (including loading) always fail to manage post-load interactivity too.

The old saying "how you do anything is how you do everything" is never more true than in software usability.

One consequence of cultures that fail to put the user first is products whose usability is so poor that attributes which didn't matter at the time of sale (like performance) become reasons to switch.

If you've ever had the distinct displeasure of using Concur or Workday, you'll understand what I mean. Challengers win business from them not by being wonderful, but simply by being usable. These incumbents are powerless to respond because their problems are now rooted deeply in the behaviours they rewarded through hiring and promotion along the way. The resulting management blindspot becomes a self-reinforcing norm that no single leader can shake.

This is why it's caustic to product success and brand value to allow a culture of disrespect towards users in favour of venerating developers (e.g., "DX"). The only antidote is to stamp it out wherever it arises by demanding user-focused realism in decision making.

To get unstuck, managers and tech leads that become wedded to frameworkism have to work through a series of easily falsified rationales offered by Over Reactors in service of their chosen ideology. Note, as you read, that none of these protests put the user experience front-and-centre. This admission by omission is a reliable property of the conversations these sketches are drawn from.

This chestnut should be answered with the question: "for how long?"

The dominant outcome of fling-stuff-together-with-NPM, feels-fine-on-my-$3K-laptop development is to get teams stuck in the mud much sooner than anyone expects.

From major accessibility defects to brand-risk levels of lousy performance, the consequence of this approach has been crossing my desk every week for a decade. The one thing I can tell you that all of these teams and products have in common is that they are not moving faster.

Brands you've heard of and websites you used this week have come in for help, which we've dutifully provided. The general prescription is "spend a few weeks/months unpicking this Gordian knot of JavaScript."

The time spent in remediation does fix the revenue and accessibility problems that JavaScript exuberance causes, but teams are dead in the water while they belatedly add ship gates, bundle size controls, and processes to prevent further regression.

This necessary, painful, and expensive remediation generally comes at the worst time and with little support, owing to the JavaScript-industrial-complex's omerta. Managers trapped in these systems experience a sinking realisation that choices made in haste are not so easily revised. Complex, inscrutable tools introduced in the "move fast" phase are now systems that teams must dedicate time to learn, understand deeply, and affirmatively operate. All the while, the pace of feature delivery is dramatically reduced.

This isn't what managers think they're signing up for when accepting "but we need to move fast!"

But let's take the assertion at face value and assume a team that won't get stuck in the ditch (🤞): the idea embedded in this statement is, roughly, that there isn't time to do it right (so React?), but there will be time to do it over.

This is in direct opposition to identifying product-market fit. After all, the way to find out who will want your product is to make it as widely available as possible, then to add UX flourishes.

Teams I've worked with are frequently astonished to find that removing barriers to use opens up new markets and leads to growth in parts of a world they had under-valued.

Now, if you're selling Veblen goods, by all means, prioritise anything but accessibility. But in literally every other category, the returns to quality can be best understood as clarity of product thesis. A low-quality experience — which is what is being proposed when React is offered as an expedient — is a drag on the core growth argument for your service. And if the goal is scale, rather than exclusivity, building for legacy desktop browsers that Microsoft won't even sell you at the cost of harming the experience for the majority of the world's users is a strategic error.

To a statistical certainty, you aren't making Facebook. Your problems likely look nothing like Facebook's early 2010s problems, and even if they did, following their lead is a terrible idea.

And these tools aren't even working for Facebook. They just happen to be a monopoly in various social categories and so can afford to light money on fire. If that doesn't describe your situation, it's best not to overindex on narratives premised on Facebook's perceived success.

React developers are web developers. They have to operate in a world of CSS, HTML, JavaScript, and DOM. It's inescapable. This means that React is the most fungible layer in the stack. Moving between templating systems (which is what JSX is) is what web developers have done fluidly for more than 30 years. Even folks with deep expertise in, say, Rails and ERB, can easily knock out Django or Laravel or Wordpress or 11ty sites. There are differences, sure, but every web developer is a polyglot.

React knowledge is also not particularly valuable. Any team familiar with React's...baroque...conventions can easily master Preact, Stencil, Svelte, Lit, FAST, Qwik, or any of a dozen faster, smaller, reactive client-side systems that demand less mental bookkeeping.

The tech industry has just seen many of the most talented, empathetic, and user-focused engineers I know laid off for no reason other than their management couldn't figure out that there would be some mean reversion post-pandemic. Which is to say, there's a fire sale on talent right now, and you can ask for whatever skills you damn well please and get good returns.

If you cannot attract folks who know web standards and fundamentals, reach out. I'll help you formulate recs, recruiting materials, hiring rubrics, and promotion guides to value these folks the way you should: as unreasonably effective collaborators who will do incredible good for your products at a fraction of the cost of solving the next problem the React community is finally acknowledging it caused.

But even if you decide you want to run interview loops to filter for React knowledge, that's not a good reason to use it! Anyone who can master the dark thicket of build tools, TypeScript foibles, and the million little ways that JSX's fork of HTML and JavaScript syntax trips folks up is absolutely good enough to work in a different system.

Heck, they're already working in an ever-shifting maze of faddish churn. The treadmill is real, which means that the question isn't "will these folks be able to hit the ground running?" (answer: no, they'll spend weeks learning your specific setup regardless), it's "what technologies will provide the highest ROI over the life of our team?"

Given the extremely high costs of React and other frameworkist prescriptions, the odds that this calculus will favour the current flavour of the week over the lifetime of even a single project are vanishingly small.

It makes me nauseous to hear managers denigrate talented engineers, and there seems to be a rash of it going around. The idea that folks who come out of bootcamps — folks who just paid to learn whatever was on the syllabus — aren't able or willing to pick up some alternative stack is bollocks.

Bootcamp grads might be junior, and they are generally steeped in varying strengths of frameworkism, but they're not stupid. They want to do a good job, and it's management's job to define what that is. Many new grads might know React, but they'll learn a dozen other tools along the way, and React is by far the most (unnecessarily) complex of the bunch. The idea that folks who have mastered the horrors of useMemo and friends can't take on board DOM lifecycle methods or the event loop or modern CSS is insulting. It's unfairly stigmatising and limits the organisation's potential.

In other words, definitionally atrocious management.

For more than a decade, the core premise of frameworkism has been that client-side resources are cheap (or are getting increasingly inexpensive) and that it is, therefore, reasonable to trade some end-user performance for developer convenience.

This has been an absolute debacle. Since at least 2012, the rise of mobile falsified this contention, and (as this blog has meticulously catalogued) we are only just starting to turn the corner.

The frameworkist assertion that "everyone has fast phones" is many things, but first and foremost it's an admission that the folks offering it don't know what they're talking about — and they hope you don't either.

No business trying to make it on the web can afford what they're selling, and you are under no obligation to offer your product as sacrifice to a false god.

This is, at best, a comforting fiction.

At worst, it's a knowing falsity that serves to omit the variability in React-based stacks because, you see, React isn't one thing. It's more of a lifestyle, complete with choices to make about React itself (function components or class components?), languages and compilers (TypeScript or nah?), package managers and dependency tools (npm? yarn? pnpm? turbo?), bundlers (webpack? esbuild? swc? rollup?), meta-tools (vite? turbopack? nx?), "state management" tools (redux? mobx? apollo? something that actually manages state?), and so on and so forth. And that's before we discuss plugins to support different CSS transpilation, among other optional side-quests frameworkists have led many teams I've consulted with down.

Across more than 100 consulting engagements, I've never seen two identical React setups, save smaller cases where folks had yet to change the defaults of Create React App — which itself changed dramatically over the years before finally being removed from the React docs as the best way to get started.

There's nothing standard about any of this. It's all change, all the time, and anyone who tells you differently is not to be trusted.

Hopefully, if you've made it this far, you'll forgive a digression into how the "React is industry standard" misdirection became so embedded.

Given the overwhelming evidence that this stuff isn't even working on the sites of the titular React poster children, how did we end up with React in so many nooks and crannies of contemporary frontend?

Pushy know-it-alls, that's how. Frameworkists have a way of hijacking every conversation with assertions like "virtual DOM is fast" without ever understanding anything about how browsers work, let alone the GC costs of their (extremely chatty) alternatives. This same ignorance allows them to confidently assert that React is "fine" when cheaper alternatives exist in every dimension.

These are not serious people. You do not have to entertain arguments offered without evidence. But you do have to oppose them and create data-driven structures that put users first. The long-term costs of these errors are enormous, as witnessed by the parade of teams needing our help to achieve minimally decent performance using stacks that were supposed to be "performant" (sic).

Which part, exactly? Be extremely specific. Which packages are so valuable, yet wedded entirely to React, that a team should not entertain alternatives? Do they really not work with Preact? How much money exactly is the right amount to burn to use these libraries? Because that's the debate.

Even if you get the benefits of "the ecosystem" at Time 0, why do you think that will continue to pay out at T+1? Or T+N?

Every library presents a separate, stochastic risk of abandonment. Even the most heavily used systems fall out of favour with the JavaScript-industrial-complex's in-crowd, stranding you in the same position as you'd have been in if you had accepted ownership of more of your stack up-front, but with less experience and agency. Is that a good trade? Does your boss agree?

And how's that "CSS-in-JS" adventure working out? Still writing class components, or did you have a big forced (and partial) migration that's still creating headaches?

The truth is that every single package that is part of a repo's devDependencies is, or will be, fully owned by the consumer of the package. The only bulwark against uncomfortable surprises is to consider NPM dependencies a high-interest loan collateralized by future engineering capacity.

The best way to prevent these costs spiralling out of control is to fully examine and approve each and every dependency for UI tools and build systems. If your team is not comfortable agreeing to own, patch, and improve every single one of those systems, they should not be part of your stack.

Do you feel lucky, punk? Do you?

Because you'll have to be lucky to beat the odds.

Sites built with Next.js perform materially worse than those from HTML-first systems like 11ty, Astro, et al.

It simply does not scale, and the fact that it drags React behind it like a ball and chain is a double demerit. The chonktastic default payload of delay-loaded JS in any Next.js site will compete with ads and other business-critical deferred content for bandwidth, and that's before any custom components or routes are added. Even when using React Server Components. Which is to say, Next.js is a fast way to lose a lot of money while getting locked in to a VC-backed startup's proprietary APIs.

Next.js starts bad and only gets worse from a shocking baseline. No wonder the only Next sites that seem to perform well are those that enjoy overwhelmingly wealthy userbases, hand-tuning assistance from Vercel, or both.

So, do you feel lucky?

React Native is a good way to make a slow app that requires constant hand-tuning and an excellent way to make a terrible website. It has also been abandoned by its poster children.

Companies that want to deliver compelling mobile experiences into app stores from the same codebase as their web site are better served investigating Trusted Web Activities and PWABuilder. If those don't work, Capacitor and Cordova can deliver similar benefits. These approaches make most native capabilities available, but centralise UI investment on the web side, providing visibility and control via a single execution path. This, in turn, reduces duplicate optimisation and accessibility headaches.

These are essential guides for frontend realism. I recommend interested tech leads, engineering managers, and product managers digest them all:

These pieces are from teams and leaders that have succeeded in outrageously effective ways by applying the realist tenets of looking around for themselves and measuring. I wish you the same success.

Thanks to Mu-An Chiou, Hasan Ali, Josh Collinsworth, Ben Delarre, Katie Sylor-Miller, and Mary for their feedback on drafts of this post.


  1. Why not React? Dozens of reasons, but a shortlist must include:

    • React is legacy technology. It was built for a world where IE 6 still had measurable share, and it shows.
    • Virtual DOM was never fast.
      • React was forced to back away from misleading performance claims almost immediately.11
      • In addition to being unnecessary to achieve reactivity, React's diffing model and poor support for dataflow management conspire to regularly generate extra main-thread work in the critical path. The "solution" is to learn (and zealously apply) a set of extremely baroque, React-specific solutions to problems React itself causes.
      • The only (positive) contribution to performance that React's doubled-up work model can, in theory, provide is a structured lifecycle that helps programmers avoid reading back style and layout information at the moments when it's most expensive.
      • In practice, React does not prevent forced layouts and is not able to even warn about them. Unsurprisingly, every React app that crosses my desk is littered with layout thrashing bugs.
      • The only defensible performance claims Reactors make for their work-doubling system are phrased as a trade; e.g. "CPUs are fast enough now that we can afford to do work twice for developer convenience."
        • Except they aren't. CPUs stopped getting faster about the same time as Reactors began to perpetuate this myth. This did not stop them from pouring JS into the ecosystem as though the old trends had held, with predictably disastrous results. Sales volumes of the high-end devices that continue to get faster have stagnated over the past decade, while the low end exploded in volume yet remained stubbornly fixed in performance.
        • It isn't even necessary to do all the work twice to get reactivity! Every other reactive component system from the past decade is significantly more efficient, weighs less on the wire, and preserves the advantages of reactivity without creating horrible "re-render debugging" hunts that take weeks away from getting things done.
    • React's thought leaders have been wrong about frontend's constraints for more than a decade.
    • The money you'll save can be measured in truck-loads.
      • Teams that correctly cabin complexity to the server side can avoid paying inflated salaries to begin with.
      • Teams that do build SPAs can more easily control the costs of those architectures by starting with a cheaper baseline and building a mature performance culture into their organisations from the start.
    • Not for nothing, but avoiding React will insulate your team from the assertion-heavy, data-light React discourse.

    Why pick a slow, backwards-looking framework whose architecture is compromised to serve legacy browsers when smaller, faster, better alternatives with all of the upsides (and none of the downsides) have been production-ready and successful for years?

  2. Frontend web development, like other types of client-side programming, is under-valued by "generalists" who do not respect just how freaking hard it is to deliver fluid, interactive experiences on devices you don't own and can't control. Web development turns this up to eleven, presenting a wicked effective compression format for UIs (HTML & CSS) but forcing experiences to be downloaded at runtime across high-latency, narrowband connections with little to no caching. To low-end devices. With no control over which browser will execute the code.

    And yet, browsers and web developers frequently collude to deliver outstanding interactivity under these conditions. Often enough that "generalists" don't give a second thought to the miracle of HTML-centric Wikipedia and MDN articles loading consistently quickly, as they gleefully clog those narrow pipes with JavaScript payloads so large that they can't possibly deliver similarly good experiences. All because they neither understand nor respect client-side constraints.

    It's enough to make thoughtful engineers tear their hair out.

  3. Tom Stoppard's classic quip that "it's not the voting that's democracy; it's the counting" chimes with the importance of impartial and objective criteria for judging the results of bakeoffs.

    I've witnessed more than my fair share of stacked-deck proof-of-concept pantomimes, often inside large organisations with tremendous resources and managers who say all the right things. But honesty demands more than lip service.

    Organisations looking for a complicated way to excuse pre-ordained outcomes should skip the charade. It will only make good people cynical and increase resistance. Teams that want to set bales of Benjamins on fire because of frameworkism shouldn't be afraid to say what they want.

    They were going to get it anyway; warts and all.

  4. An example of easy cut lines for teams considering contemporary development might be browser support versus base bundle size.

    In 2024, no new application will need to support IE or even legacy versions of Edge. They are not a measurable part of the ecosystem. This means that tools that took the design constraints imposed by IE as a given can be discarded from consideration, given that the extra client-side weight they required to service IE's quirks makes them uncompetitive from a bundle size perspective.

    This eliminates React, Angular, and Ember from consideration without a single line of code being written; a tremendous savings of time and effort.

    Another example is lock-in. Do systems support interoperability across tools and frameworks? Or will porting to a different system require a total rewrite? A decent proxy for this choice is Web Components support.

    Teams looking to avoid lock-in can remove frameworks from consideration that do not support Web Components as both an export and import format. This will still leave many contenders, and management can rest assured they will not leave the team high-and-dry.14

  5. The stories we hear when interviewing members of these teams have an unmistakable buck-passing flavour. Engineers will claim (without evidence) that React is a great13 choice for their blog/e-commerce/marketing-microsite because "it needs to be interactive" — by which they mean it has a Carousel and maybe a menu and some parallax scrolling. None of this is an argument for React per se, but it can sound plausible to managers who trust technical staff about technical matters.

    Others claim that "it's an SPA". But should it be a Single Page App? Most are unprepared to answer that question for the simple reason they haven't thought it through.9:2

    For their part, contemporary product managers seem to spend a great deal of time doing things that do not have any relationship to managing the essential qualities of their products.

    Most need help making sense of the RUM data already available to them. Few are in touch with the device and network realities of their current and future (🤞) users. PMs that clearly articulate critical user journeys for their teams are like hen's teeth. And I can count on one hand the teams that have run bakeoffs — without resorting to binary.

  6. It's no exaggeration to say that team leaders encountering evidence that their React (or Angular, etc.) technology choices are letting down users and the business go through some things.

    Following the herd is an adaptation to prevent their specific decisions from standing out — tall poppies and all that — and it's uncomfortable when those decisions receive belated scrutiny. But when the evidence is incontrovertible, needs must. This creates cognitive dissonance.

    Few are so entitled and callous that they wallow in denial. Most want to improve. They don't come to work every day to make a bad product; they just thought the herd knew more than they did. It's disorienting when that turns out not to be true. That's more than understandable.

    Leaders in this situation work through the stages of grief in ways that speak to their character.

    Strong teams own the reality and look for ways to learn more about their users and the constraints that should shape product choices. The goal isn't to justify another rewrite, but to find targets the team should work towards, breaking down the problem into actionable steps. This is hard and often unfamiliar work, but it is rewarding. Setting accurate goalposts helps teams take credit as they make progress remediating the current mess. These are all markers of teams on the way to improving their performance management maturity.

    Some get stuck in anger, bargaining, or depression. Sadly, these teams are taxing to help. Supporting engineers and PMs through emotional turmoil is a big part of a performance consultant's job. The stronger the team's attachment to React community narratives, the harder it can be to accept responsibility for defining team success in terms of user success. But that's the only way out of the deep hole they've dug.

    Consulting experts can only do so much. Tech leads and managers that continue to prioritise "Developer Experience" (without metrics, obviously) and "the ecosystem" (pray tell, which parts?) in lieu of user outcomes can remain beyond reach, no matter how much empathy and technical analysis is provided. Sometimes, you have to cut bait and hope time and the costs of ongoing failure create the necessary conditions for change.

  7. Most are substituting (perceived) popularity for the work of understanding users and their needs. Starting with user needs creates constraints to work backwards from.

    Instead of doing this work-back, many sub in short-term popularity contest winners. This goes hand-in-glove with predictable failure to deeply understand business goals.

    It's common to hear stories of companies shocked to find the PHP/Python/etc. system they are replacing with the New React Hotness will require multiples of currently allocated server resources for the same userbase. The impacts of inevitably worse client-side lag cost, too, but only show up later. And all of these costs are on top of the salaries for the bloated teams the New React Hotness invariably requires.

    One team shared that avoidance of React was tantamount to a trade secret. If their React-based competitors understood how expensive React stacks are, they'd lose their (considerable) margin advantage. Wild times.

  8. UIs that work well for all users aren't charity; they're hard-nosed business choices about market expansion and development cost.

    Don't be confused: every time a developer makes a claim without evidence that a site doesn't need to work well on a low-end device, understand it as a true threat to your product's success, if not your own career.

    The point of building a web experience is to maximise reach for the lowest development outlay; otherwise you'd build a bunch of native apps for every platform instead. Organisations that aren't spending bundles to build per-OS proprietary apps...well...aren't doing that. In this context, unbacked claims about why it's OK to exclude large swaths of the web market to introduce legacy desktop-era frameworks designed for browsers that don't exist any more work directly against strategy. Do not suffer them gladly.

    In most product categories, quality and reach are the product attributes web developers can impact most directly. It's wasteful, bordering on insubordinate, to suggest that not delivering those properties is an effective use of scarce funding.

  9. Should a site be built as a Single Page App?

    A good way to work this question is to ask "what's the point of an SPA?". The answer is that they can (in theory) reduce interaction latency, which implies many interactions per session. It's also an (implicit) claim about the costs of loading code up-front versus on-demand. This sets us up to create a rule of thumb.

    Should this site be built as a Single Page App? A decision tree. (hint: at best, maybe)

    Sites should only be built as SPAs, or with SPA-premised technologies, if (and only if):

    • They are known to have long sessions (more than ten minutes) on average
    • More than ten updates are applied to the same (primary) data

    This instantly disqualifies almost every e-commerce experience, for example, as sessions generally involve traversing pages with entirely different primary data rather than updating a subset of an existing UI. Most also feature average sessions that fail the length and depth tests. Other common categories (blogs, marketing sites, etc.) are even easier to disqualify. At most, these categories can stand a dose of progressive enhancement (but not too much!) owing to their shallow sessions.

    What's left? Productivity and social apps, mainly.

    Of course, there are many sites with bi-modal session types or sub-apps, all of which might involve different tradeoffs. For example, a blogging site is two distinct systems combined by a database/CMS. The first is a long-session, heavy-interaction post-writing and editing interface for a small set of users. The other is a short-session interface for a much larger audience who mostly interact by loading a page and then scrolling. As the browser, not developer code, handles scrolling, we omit it from interaction counts. For most sessions, this leaves us only a single data update (the initial page load) to divide all costs by.

    If the denominator of our equation is always close to one, it's nearly impossible to justify extra weight in anticipation of updates that will likely never happen.12

    To formalise slightly, we can understand average latency as the sum of latencies in a session, divided by the number of interactions. For multi-page architectures, a session's average latency (Lavg) is simply a session's summed LCPs divided by the number of navigations in a session (N):

    $$L^{m}_{\text{avg}} = \frac{\sum_{i=1}^{N} \mathrm{LCP}(i)}{N}$$

    SPAs need to add initial navigation latency to the latencies of all other session interactions (I). The total number of interactions in a session N is:

    $$N = 1 + I$$

    The general form of SPA average latency is:

    $$L_{\text{avg}} = \frac{\mathrm{latency}(\text{navigation}) + \sum_{i=1}^{I} \mathrm{latency}(i)}{N}$$

    We can handwave a bit and use INP for each individual update (via the Performance Timeline) as our measure of in-page update lag. This leaves some room for gamesmanship — the React ecosystem is famous for attempting to duck metrics accountability with scheduling shenanigans — so a real measurement system will need to substitute end-to-end action completion (including server latency) for INP, but this is a reasonable bootstrap.
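
    A sketch of that bootstrap, using the Event Timing API; this records per-interaction durations, a simplification of full INP attribution (which groups entries by interactionId and takes a high percentile):

    ```js
    // Log interaction latencies from the Performance Timeline (Event Timing).
    const interactionLatencies = [];
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        interactionLatencies.push(entry.duration); // input-to-next-paint, in ms
      }
    }).observe({ type: 'event', durationThreshold: 16, buffered: true });
    ```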

    INP also helpfully omits scrolling unless the programmer does something problematic. This is correct for the purposes of metric construction as scrolling gestures are generally handled by the browser, not application code, and our metric should only measure what developers control. SPA average latency simplifies to:

    $$L^{s}_{\text{avg}} = \frac{\mathrm{LCP} + \sum_{i=1}^{I} \mathrm{INP}(i)}{N}$$

    As a metric for architecture, this is simplistic and fails to capture variance, which SPA defenders will argue matters greatly. How might we incorporate it?

    Variance (σ²) across a session is straightforward if we have logs of the latencies of all interactions and an understanding of latency distributions. Assuming latencies follow the Erlang distribution, we would have to work to assess variance, except that complete logs simplify this to the usual population variance formula. Standard deviation (σ) is then just the square root:

    $$\sigma^2 = \frac{\sum_{x \in X} (x - \mu)^2}{N}$$

    Where μ is the mean (average) of the population X, the set of measured latencies in a session, with this value summed across all sessions.

    We can use these tools to compare architectures and their outcomes, particularly the effects of larger up-front payloads for SPA architecture for sites with shallow sessions. Suffice to say, the smaller the denominator (i.e., the shorter the session), the worse average latency will be for JavaScript-oriented designs and the more sensitive variance will be to population-level effects of hardware and networks.
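
    These formulas are trivial to run over session logs. A sketch: for an MPA session, pass the per-navigation LCPs; for an SPA session, pass the initial LCP followed by the per-interaction latencies:

    ```js
    // Average latency, variance, and standard deviation for one logged session.
    function sessionStats(latencies) {
      const N = latencies.length;
      const mean = latencies.reduce((a, b) => a + b, 0) / N;
      const variance = latencies.reduce((s, x) => s + (x - mean) ** 2, 0) / N;
      return { mean, variance, stddev: Math.sqrt(variance) };
    }

    sessionStats([1200, 900]);      // MPA: two navigations
    sessionStats([4500, 150, 120]); // SPA: heavy first load, two cheap updates
    ```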

    A fuller exploration will have to wait for a separate post.

  10. Certain frameworkists will claim that their framework is fine for use in informational scenarios because their systems do "Server-Side Rendering" (a.k.a., "SSR").

    Parking for a moment discussion of the linguistic crime that "SSR" represents, we can reject these claims by substituting a test: does the tool in question send a copy of a library to support SPA navigations down the wire by default?

    This test is helpful, as it shows us that React-based tools like Next.js are wholly unsuitable for this class of site, while React-friendly tools like Astro are appropriate.

    We lack a name for this test today, and I hope readers will suggest one.

  11. React's initial claims of good performance because it used a virtual DOM were never true, and the React team was forced to retract them by 2015. But like many retracted, zombie ideas, there seems to have been no reduction in the rate of junior engineers regurgitating this long-falsified idea as a reason to continue to choose React.

    How did such a baldly incorrect claim come to be offered in the first place? The options are unappetising; either the React team knew their work-doubling machine was not fast but allowed others to think it was, or they didn't know but should have.15

    Neither suggests the grounded technical leadership that developers or businesses should invest heavily in.

  12. It should go without saying, but sites that aren't SPAs shouldn't use tools that are premised entirely on optimistic updates to client-side data because sites that aren't SPAs shouldn't be paying the cost of creating a (separate, expensive) client-side data store separate from the DOM representation of HTML.

    Which is the long way of saying that if there's React or Angular in your blogware, 'ya done fucked up, son.

  13. When it's pointed out that React is, in fact, not great in these contexts, the excuses come fast and thick. It's generally less than 10 minutes before they're rehashing some variant of how some other site is fast (without traces to prove it, obvs), and it uses React, so React is fine.

    Thus begins an infinite regression of easily falsified premises.

    The folks dutifully shovelling this bullshit aren't consciously trying to invoke Brandolini's Law in their defence, but that's the net effect. It's exhausting and principally serves to convince the challenged party not that they should try to understand user needs and build to them, but instead that you're an asshole.

  14. Most managers pay lip service to the idea of preferring reversible decisions. Frustratingly, failure to put this into action is in complete alignment with social science research into the psychology of decision-making biases (open access PDF summary).

    The job of managers is to manage these biases. Working against them involves building processes and objective frames of reference to nullify their effects. It isn't particularly challenging, but it is work. Teams that do not build this discipline pay for it dearly, particularly on the front end, where we program the devil's computer.2:1

    But make no mistake: choosing React is a one-way door; an irreversible decision that is costly to relitigate. Teams that buy into React implicitly opt into leaky abstractions, like the timing quirks of React's synthetic event system (unique, in that nobody else has one, because it's costly and slow), and non-portable concepts like portals. React-based products are stuck, and the paths out are challenging.

    This will seem comforting, but the long-run maintenance costs of being trapped in this decision are excruciatingly high. No wonder Reactors believe they should command a salary premium.

    Whatcha gonna do, switch?

  15. Where do I come down on this?

    My interactions with React team members over the years, combined with their confidently incorrect public statements about how browsers work, have convinced me that honest ignorance about their system's performance sat underneath misleading early claims.

    This was likely exacerbated by a competitive landscape in which their customers (web developers) were unable to judge the veracity of the assertions, and by a deference to authority; surely Facebook wouldn't mislead folks?

    The need for an edge against Angular and other competitors also likely played a role. It's underappreciated how tenuous the positions of frontend and client-side framework teams are within Big Tech companies. The Closure library and compiler that powered Google's most successful web apps (Gmail, Docs, Drive, Sheets, Maps, etc.) was not staffed for most of its history. It was literally a 20% project that the entire company depended on. For the React team to justify headcount within Facebook, public success was likely essential.

    Understood in context, I don't entirely excuse the React team for their early errors, but they are understandable. What's not forgivable are the material and willful omissions by Facebook's React team once the evidence of terrible performance began to accumulate. The React team took no responsibility, did not explain the constraints that Facebook applied to their JavaScript-based UIs to make them perform as well as they do — particularly on mobile — and benefited greatly from pervasive misconceptions that continue to cast React in a better light than hard evidence can support.

Platform Strategy and Its Discontents

The web is losing. Badly. But a comeback is possible.

This post is an edited and expanded version of a now-mangled Mastodon thread.

Some in the JavaScript community imagine that I harbour an irrational dislike of their tools when, in fact, I want nothing more than to stop thinking about them. Live-and-let-live is excellent guidance, and if it weren't for React et al.'s predictably ruinous outcomes, the public side of my work wouldn't involve educating about the problems JS-first development has caused.

But that's not what strategy demands, and strategy is my job.1

I've been holding my fire (and the confidences of consulting counterparties) for most of the last decade. Until this year, I only occasionally posted traces documenting the worsening rot. I fear this has only served to make things look better than they are.

Over the past decade, my work helping teams deliver competitive PWAs gave me a front-row seat to a disturbing trend. The rate of failure to deliver usable experiences on phones was increasing over time, despite the eye-watering cost of JS-based stacks teams were reaching for. Worse and costlier is a bad combo, and the opposite of what competing ecosystems did.

Native developers reset hard when moving from desktop to mobile, getting deeply in touch with the new constraints. Sure, developing a codebase multiple times is more expensive than the web's write-once-test-everywhere approach, but at least you got speed for the extra cost.

That's not what web developers did. Contemporary frontend practice pretended that legacy-oriented, desktop-focused tools would perform fine in this new context, without ever checking if they did. When that didn't work, the toxic-positivity crowd blamed the messenger.2

Frontend's tragically timed turn towards JavaScript means the damage isn't limited to the public sector or "bad" developers. Some of the strongest engineers I know find themselves mired in the same quicksand. Today's popular JS-based approaches are simply unsafe at any speed. The rot is now ecosystem-wide, and JS-first culture owns a share of the responsibility.

But why do I care?

I want the web to win.

What does that mean? Concretely, folks should be able to accomplish most of their daily tasks on the web. But capability isn't sufficient; for the web to win in practice, users need to turn to the browser for those tasks because it's easier, faster, and more secure.

A reasonable metric of success is time spent as a percentage of time on device.3

But why should we prefer one platform over another when, in theory, they can deliver equivalently good experiences?

As I see it, the web is the only generational software platform that has a reasonable shot at delivering a potent set of benefits to users:

No other successful platform provides all of these, and others that could are too small to matter.

Platforms like Android and Flutter deliver subsets of these properties but capitulate to capture by the host OS agenda, allowing their developers to be taxed through app stores and proprietary API lock-in. Most treat user mediation like a bug to be fixed.

The web's inherent properties have created an ecosystem that is unique in the history of software, both in scope and resilience.

So why does this result in intermittent antagonism towards today's JS community?

Because the web is losing, and instead of recognising that we're all in it together and pitching in to right the ship, the Lemon Vendors have decided that predatory delay and "I've got mine, Jack"-ism are the best response.

What do I mean by "losing"?

Going back to the time spent metric, the web is cleaning up on the desktop. The web's JTBD percentage and fraction of time spent both continue to rise as we add new capabilities to the platform, displacing other ways of writing and delivering software, one fraction of a percent every year.5

The web is desktop's indispensable ecosystem. Who, a decade ago, thought the web would be such a threat to Adobe's native app business that it would need to respond with a $20BN acquisition attempt and a full-fledged version of Photoshop (real Photoshop) on the web?

Model advantages grind slowly but finely. They create space for new competitors to introduce the intrinsic advantages of their platform in previously stable categories. But only when specific criteria are met.

First and foremost, challengers need a cost-competitive channel. That is, users have to be able to acquire software that runs on this new platform without a lot of extra work. The web drops channel costs to nearly zero, assuming...

80/20 capability. Essential use-cases in the domain have to be (reliably) possible for the vast majority (90+%) of the TAM. Some nice-to-haves might not be there, but the model advantage makes up for it. Lastly...

It has to feel good. Performance can't suck for core tasks.6 It's fine for UI consistency with native apps to wander a bit.7 It's even fine for there to be a large peak performance delta. But the gap can't be such a gulf that it generally changes the interaction class of common tasks.

So if the web is meeting all these requirements on desktop – even running away with the lead – why am I saying "the web is losing"?

Because more than 75% of new devices that can run full browsers are phones. And the web is getting destroyed on mobile.

Utterly routed.

It's not going well

This is what I started warning about in 2019, and more recently on this blog. The terrifying data I had access to five years ago is now visible from space.

Public data shows what I warned about, citing Google-private data, in 2019 (https://vimeo.com/364402896). In the US, time spent in browsers continues to stagnate while smartphone use grows, and the situation is even more dire outside the States. The result is a falling fraction of time spent. This is not a recipe for a healthy web.

If that graph looks rough-but-survivable, understand that it's only this high in the US (and other Western markets) because the web was already a success in those geographies when mobile exploded.

That history isn't shared in the most vibrant growth markets, meaning the web has fallen from "minuscule" to "nonexistent" as a part of mobile-first daily life globally.

This is the landscape. The web is extremely likely to get cast in amber and will, in less than a technology generation, become a weird legacy curio.

What happens then? The market for web developers will stop expanding, and the safe, open, interoperable, gatekeeper-free future for computing will be entirely foreclosed — or at least the difficulty will go from "slow build" to "cold-start problem"; several orders of magnitude harder (and therefore unlikely).

This failure has many causes, but they're all tractable. This is why I have worked so hard to close the capability gaps with Service Workers, PWAs, Notifications, Project Fugu, and structural solutions to the governance problems that held back progress. All of these projects have been motivated by the logic of platform competition, and the urgency that comes from understanding that the web doesn't have a natural constituency.8

If you've read any of my writing over this time, it will be unsurprising that this is why I eventually had to break silence and call out what Apple has done on iOS, and what Facebook and Android have done to more quietly undermine browser choice.

These gatekeepers are kneecapping the web in different, but overlapping and reinforcing, ways. There's much more to say here, but I've tried to lay out the landscape over the past few years. But even if we break open the necessary 80/20 capabilities and restart engine competition, today's web is unlikely to succeed on mobile.

Web developers and browsers have capped the web's mobile potential by ensuring it will feel terrible on the phones most folks have. A web that can win is a web that doesn't feel like sludge. And today it does.

This failure has many fathers. Browsers have not done nearly enough to intercede on users' behalf; hell, we don't even warn users that links they tap on might take them to sites that lock up the main thread for seconds at a time!
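
That warning wouldn't even be hard to ground in data the platform already exposes. A minimal sketch using the Long Tasks API, which reports main-thread blocks over the spec's 50ms threshold:

```js
// Log every stretch where the main thread was busy long enough that the
// page could not respond to user input.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Main thread blocked for ${Math.round(entry.duration)}ms`);
  }
});
observer.observe({ type: 'longtask', buffered: true });
```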

Things have gotten so bad that even the extremely weak pushback on developer excess that Google's Core Web Vitals effort provides is a slow-motion earthquake. INP, in particular, is forcing even the worst JS-first lemon vendors to retreat to the server — a tacit acknowledgement that their shit stinks.
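
Teams that want to know where they stand don't need to wait for Google's reports, either. A sketch of field-measuring INP with the web-vitals library; the /analytics endpoint is a stand-in for wherever your RUM data goes:

```js
import { onINP } from 'web-vitals';

// 200ms is the "good" INP threshold; values above it mean users are
// waiting on a busy main thread after they tap.
onINP(({ value }) => {
  navigator.sendBeacon('/analytics', JSON.stringify({ inp: value }));
});
```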

So this is the strategic logic of why web performance matters in 2024; for the web to survive, it must start to grow on mobile. For that growth to start, we need the web to be a credible way to deliver these sorts of 80/20 capability-enabled mobile experiences with not-trash performance. That depends both on browsers that don't suck (we see you, Apple) and websites that don't consistently lock up phones and drain batteries.

Toolchains and communities that retreat into the numbing comfort of desktop success are a threat to that potential.

There's (much) more for browsers to do here, but developers that want the web to succeed can start without us. Responsible, web-ecology-friendly development is more than possible today, and the great news is that it tends to make companies more money, too!

The JS-industrial-complex culture that pooh-poohs responsibility is self-limiting and a harm to our collective potential.

Nobody has ever hired me to work on performance.

It's (still) on my plate because terrible performance is a limiting factor on the web's potential to heal and grow. Spending nearly half my time diagnosing and remediating easily preventable failure is not fun. The teams I sit with are not having fun either, and goodness knows there are APIs I'd much rather be working on instead.

My work today, and for the past 8 years, has only included performance because until it's fixed the whole web is at risk.

It's actually that dire, and the research I publish indicates that we are not on track to cap our JS emissions or mitigate them with CPUs fast enough to prevent ecosystem collapse.

Contra the framework apologists, pervasive, preventable failure to deliver usable mobile experiences is often because we're dragging around IE 9 compat and long toolchains premised on outdated priors like a ball and chain.9
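
To make the ball and chain concrete, here is a hedged sketch of the transpilation tax. The source below runs natively in every current engine, but an IE 9-era compile target forces toolchains to rewrite it into helper-laden ES5 that every user then downloads, parses, and runs:

```js
// Modern source: classes, private fields, async/await.
class Counter {
  #count = 0;
  async increment() {
    this.#count += 1;
    return this.#count;
  }
}

// A legacy target replaces all of the above with emulation helpers
// (_classCallCheck, _defineProperty, regenerator-runtime, and friends),
// inflating bundles for the benefit of browsers that no longer exist.
```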

Things look bad, and I'd be remiss if I didn't acknowledge that it could just be too late. Apple and Google and Facebook, with the help of a pliant and credulous JavaScript community, might have succeeded where 90s-era Microsoft failed — we just don't know it yet.

But it seems equally likely that the web's advantages are just dormant. When browser competition is finally unlocked, and when web pages aren't bloated with half a megabyte of JavaScript (on average), we can expect a revolution. But we need to prepare for that day and do everything we can to make it possible.

Failure and collapse aren't pre-ordained. We can do better. We can grow the web again. But to do that, the frontend community has to decide that user experience, the web's health, and their own career prospects are more important than whatever JS-based dogma VC-backed SaaS vendors are shilling this month.


  1. My business card says "Product Manager", which is an uncomfortable fudge in the same way "Software Engineer" was an odd fit in my dozen+ years on the Chrome team.

    My job on both teams has been somewhat closer to "Platform Strategist for the Web". But nobody hires platform strategists, and when they do, it's to support proprietary platforms. The tactics, habits of mind, and ways of thinking about platform competition for open vs. closed platforms could not be more different. Indeed, I've seen many successful proprietary-platform folks try their hand at open systems and bounce hard off the different constraints, cultures, and "soft power" thinking they require.

    Doing strategy on behalf of a collectively-owned, open system is extremely unusual. Getting paid to do it is almost unheard of. And the job itself is strange; because facts about the ecosystem develop slowly, there isn't a great deal to re-derive from current events.

    Companies also don't ask strategists to design and implement solutions in the opportunity spaces they identify. But solving problems is the only way to deliver progress, so along with others who do roughly similar work, I have camped out in roles that allow arguments about the health of the web ecosystem to motivate the concrete engineering projects necessary to light the fuse of web growth.

    Indeed, nobody asked me to work on web performance, just as nobody asked me to develop PWAs, in the same way that nobody asked me to work on the capability gap between web and native. Each one falls out of the sort of strategy analysis I'm sharing in this post for the first time. These projects are examples of the sort of work I think anyone would do once they understood the stakes and marinated in the same data.

    Luckily, inside of browser teams, I've found that largely to be true. Platform work attracts long-term thinkers, and those folks are willing to give strategy analysis a listen. This, in turn, has allowed the formation of large collaborations (like Project Fugu and Project Sidecar) to tackle the burning issues that pro-web strategy analysis yields.

    Strategy without action isn't worth a damn, and action without strategy can easily misdirect scarce resources. It's a strange and surprising thing to have found a series of teams (and bosses) willing to support an oddball like me who works both sides of the problem space without direction.

    So what is it that I do for a living? Whatever working to make the web a success for another generation demands.

  2. Just how bad is it?

    This table shows the mobile Core Web Vitals scores for every production site listed on the Next.js showcase web page as of Oct 2024. It includes every site that gets enough traffic to report mobile-specific data, and ignores sites which no longer use Next.js:10

    Mobile Core Web Vitals statistics for Next.js sites from Vercel's showcase, as well as the fraction of mobile traffic to each site. The last column indicates CWV stats (LCP, INP, and CLS) that consistently passed over the past 90 days.
    Site % mobile LCP (ms) INP (ms) CLS 90d pass (of 3)
    Sonos 70% 3874 205 0.09 1
    Nike 75% 3122 285 0.12 0
    OpenAI 62% 2164 387 0.00 2
    Claude 30% 7237 705 0.08 1
    Spotify 28% 3086 417 0.02 1
    Nerdwallet 55% 2306 244 0.00 2
    Netflix Jobs 42% 2145 147 0.02 3
    Zapier 10% 2408 294 0.01 1
    Solana 48% 1915 188 0.07 2
    Plex 49% 1501 86 0.00 3
    Wegmans 58% 2206 122 0.10 3
    Wayfair 57% 2663 272 0.00 1
    Under Armour 78% 3966 226 0.17 0
    Devolver 68% 2053 210 0.00 1
    Anthropic 30% 4866 275 0.00 1
    Runway 66% 1907 164 0.00 1
    Parachute 55% 2064 211 0.03 2
    The Washington Post 50% 1428 155 0.01 3
    LG 85% 4898 681 0.27 0
    Perplexity 44% 3017 558 0.09 1
    TikTok 64% 2873 434 0.00 1
    Leonardo.ai 60% 3548 736 0.00 1
    Hulu 26% 2490 211 0.01 1
    Notion 4% 6170 484 0.12 0
    Target 56% 2575 233 0.07 1
    HBO Max 50% 5735 263 0.05 1
    realtor.com 66% 2004 296 0.05 1
    AT&T 49% 4235 258 0.18 0
    Tencent News 98% 1380 78 0.12 2
    IGN 76% 1986 355 0.18 1
    Playstation Comp Ctr. 85% 5348 192 0.10 0
    Ticketmaster 55% 3878 429 0.01 1
    Doordash 38% 3559 477 0.14 0
    Audible (Marketing) 21% 2529 137 0.00 1
    Typeform 49% 1719 366 0.00 1
    United 46% 4566 488 0.22 0
    Hilton 53% 4291 401 0.33 0
    Nvidia NGC 3% 8398 635 0.00 0
    TED 28% 4101 628 0.07 1
    Auth0 41% 2215 292 0.00 2
    Hostgator 34% 2375 208 0.01 1
    TFL "Have your say" 65% 2867 145 0.22 1
    Vodafone 80% 5306 484 0.53 0
    Product Hunt 48% 2783 305 0.11 1
    Invision 23% 2555 187 0.02 1
    Western Union 90% 10060 432 0.11 0
    Today 77% 2365 211 0.04 2
    Lego Kids 64% 3567 324 0.02 1
    Staples 35% 3387 263 0.29 0
    British Council 37% 3415 199 0.11 1
    Vercel 11% 2307 247 0.01 2
    TrueCar 69% 2483 396 0.06 1
    Hyundai Artlab 63% 4151 162 0.22 1
    Porsche 59% 3543 329 0.22 0
    elastic 11% 2834 206 0.10 1
    Leafly 88% 1958 196 0.03 2
    GoPro 54% 3143 162 0.17 1
    World Population Review 65% 1492 243 0.10 1
    replit 26% 4803 532 0.02 1
    Redbull Jobs 53% 1914 201 0.05 2
    Marvel 68% 2272 172 0.02 3
    Nubank 78% 2386 690 0.00 2
    Weedmaps 66% 2960 343 0.15 0
    Frontier 82% 2706 160 0.22 1
    Deliveroo 60% 2427 381 0.10 2
    MUI 4% 1510 358 0.00 2
    FRIDAY DIGITAL 90% 1674 217 0.30 1
    RealSelf 75% 1990 271 0.04 2
    Expo 32% 3778 269 0.01 1
    Plotly 8% 2504 245 0.01 1
    Sumup 70% 2668 888 0.01 1
    Eurostar 56% 2606 885 0.44 0
    Eaze 78% 3247 331 0.09 0
    Ferrari 65% 5055 310 0.03 1
    FTD 61% 1873 295 0.08 1
    Gartic.io 77% 2538 394 0.02 1
    Framer 16% 8388 222 0.00 1
    Open Collective 49% 3944 331 0.00 1
    Õhtuleht 80% 1687 136 0.20 2
    MovieTickets 76% 3777 169 0.08 2
    BANG & OLUFSEN 56% 3641 335 0.08 1
    TV Publica 83% 3706 296 0.23 0
    styled-components 4% 1875 378 0.00 1
    MPR News 78% 1836 126 0.51 2
    Me Salva! 41% 2831 272 0.20 1
    Suburbia 91% 5365 419 0.31 0
    Salesforce LDS 3% 2641 230 0.04 1
    Virgin 65% 3396 244 0.12 0
    GiveIndia 71% 1995 107 0.00 3
    DICE 72% 2262 273 0.00 2
    Scale 33% 2258 294 0.00 2
    TheHHub 57% 3396 264 0.01 1
    A+E 61% 2336 106 0.00 3
    Hyper 33% 2818 131 0.00 1
    Carbon 12% 2565 560 0.02 1
    Sanity 10% 2861 222 0.00 1
    Elton John 70% 2518 126 0.00 2
    InStitchu 27% 3186 122 0.09 2
    Starbucks Reserve 76% 1347 87 0.00 3
    Verge Currency 67% 2549 223 0.04 1
    FontBase 11% 3120 170 0.02 2
    Colorbox 36% 1421 49 0.00 3
    NileFM 63% 2869 186 0.36 0
    Syntax 40% 2531 129 0.06 2
    Frog 24% 4551 138 0.05 2
    Inflect 55% 3435 289 0.01 1
    Swoosh by Nike 78% 2081 99 0.01 1
    Passing %: LCP 38%, INP 28%, CLS 72%, all three over 90d 8%

    Needless to say, these results are significantly worse than those of responsible JS-centric metaframeworks like Astro or HTML-first systems like Eleventy. Spot-checking their results gives me hope.

    Failure doesn't have to be our destiny; we only need to change the way we build and support efforts that can close the capability gap.

  3. This phrasing — fraction of time spent, rather than absolute time — has the benefit of not being thirsty. It's also tracked by various parties.

    The fraction of "Jobs To Be Done" happening on the web would be the natural leading metric, but it's challenging to track.

  4. Web developers aren't the only ones shooting their future prospects in the foot.

    It's bewildering to see today's tech press capitulate to the gatekeepers. The Register and The New Stack stand apart in using above-the-fold column inches to cover just how rancid and self-dealing Apple, Google, and Facebook's suppression of the mobile web has been.

    Most can't even summon their opinion bytes to highlight how broken the situation has become, or how the alternative would benefit everyone, even though it would directly benefit those outlets.

    If the web has a posse, it doesn't include The Verge or TechCrunch.

  5. There are rarely KOs in platform competitions, only slow, grinding changes that look "boring" to a tech press that would rather report on social media contretemps.

    Even the smartphone revolution, which featured never-before-seen device sales rates, took most of a decade to overtake desktop as the dominant computing form-factor.

  6. The web wasn't always competitive with native on desktop, either. It took Ajax (new capabilities), Moore's Law (RIP), and broadband to make it so. There had to be enough overhang in CPUs and network availability to make the performance hit of the web's languages and architecture largely immaterial.

    This is why I continue to track the mobile device and network situation. For the mobile web to take off, it'll need to overcome similar hurdles to usability on most devices.

    The only way that's likely to happen in the short-to-medium term is for developers to emit less JS per page. And that is not what's happening.

  7. One common objection to metaplatforms like the web is that the look and feel versus "native" UI is not consistent. The theory goes that consistent affordances create less friction for users as they navigate their digital world. This is a nice theory, but in practice the more important question in the fight between web and native look-and-feel is which one users encounter more often.

    The dominant experience is what others are judged against. On today's desktops, it's browsers that set the pace, leaving OSes with reduced influence. OS purists decry this as an abomination, but it just is what it is. "Fixing" it gains users very little.

    This is particularly true in a world where OSes change up large aspects of their design languages every half decade. Even without the web's influence, this leaves a trail of unreconciled experiences sitting side-by-side uncomfortably. Web apps aren't an unholy disaster, just more of the same.

  8. No matter how much they protest that they love it (sus) and couldn't have succeeded without it (true), the web is, at best, an also-ran in the internal logic of today's tech megacorps. They can do platform strategy, too, and know that exerting direct control over access to APIs is worth its weight in Californium. As a result, the preferred alternative is always the proprietary (read: directly controlled and therefore taxable) platform of their OS frameworks. API access gatekeeping enables distribution gatekeeping, which becomes a business model chokepoint over time.

    Even Google, the sponsor of the projects I pursued to close platform gaps, couldn't summon the organisational fortitude to inconvenience the Play team for the web's benefit. For its part, Apple froze Safari funding and API expansion almost as soon as the App Store took off. Both benefit from a cozy duopoly that forces businesses into the logic of heavily mediated app platforms.

    The web isn't suffering on mobile because it isn't good enough in some theoretical sense, it's failing on mobile because the duopolists directly benefit from keeping it in a weakened state. And because the web is a collectively-owned, reach-based platform, neither party has to have their fingerprints on the murder weapon. At current course and speed, the web will die from a lack of competitive vigour, and nobody will be able to point to the single moment when it happened.

    That's by design.

  9. Real teams, doing actual work, aren't nearly as wedded to React vs., e.g., Preact or Lit or FAST or Stencil or Qwik or Svelte as the JS-industrial-complex wants us all to believe. And moving back to the server entirely is even easier.

    React is the last of the totalising frameworks. Everyone else is living in the interoperable, plug-and-play future (via Web Components). Escape doesn't hurt, but it can be scary, and teams that want to do better aren't wanting for off-ramps.
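
    As one example of how short the off-ramps can be, Preact's documented migration path aliases React's packages to preact/compat at the bundler level. A minimal webpack sketch; other bundlers have equivalents:

    ```js
    // webpack.config.js: swaps React's runtime (roughly 40KB gzipped)
    // for Preact's (roughly 4KB) without rewriting components.
    module.exports = {
      resolve: {
        alias: {
          react: 'preact/compat',
          'react-dom/test-utils': 'preact/test-utils',
          'react-dom': 'preact/compat',
        },
      },
    };
    ```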

  10. It's hard to overstate just how fair this is to the over-Reactors. The (still) more common Create React App metaframework starter kit would suffer terribly by comparison, and Vercel has had years of browser, network, and CPU progress to water down the negative consequences of Next.js's blighted architecture.

    Next has also been pitched since 2018 as "The React Framework for the Mobile Web" and "The React Framework for SEO-Friendly Sites". Nobody else's petard is doing the hoisting; these are the sites that Vercel is choosing to highlight, built using their technology, which has consistently advertised performance-oriented features like code-splitting and SSG support.
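
    Mobile Core Web Vitals numbers like those in the table are available to anyone from the public Chrome UX Report API. A hedged sketch, assuming CrUX-style field data as the source; an API key is required and the origin is illustrative:

    ```js
    // Query p75 mobile Core Web Vitals for an origin from the CrUX API.
    const res = await fetch(
      'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY',
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          origin: 'https://www.example.com',
          formFactor: 'PHONE',
          metrics: [
            'largest_contentful_paint',
            'interaction_to_next_paint',
            'cumulative_layout_shift',
          ],
        }),
      },
    );
    const { record } = await res.json();
    console.log(record.metrics.largest_contentful_paint.percentiles.p75);
    ```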
