The Jevons Paradox in action:
Faster networks should fix our performance problems, but so far, they have had an interesting if unintentional impact on the web. This is because historically, faster network speed has enabled developers to deliver more code to users—in particular, more JavaScript code.
And because it’s JavaScript we’re talking about:
Even if folks are on a new fast network, they’re very likely choking on the code we’re sending, rendering the potential speed improvements of 5G moot.
The longer I spend in this field, the more convinced I am that web performance is not a technical problem; it’s a people problem.
5G Will Definitely Make the Web Slower, Maybe adactio.com/links/15808
Many interactions are not possible without JavaScript, but that doesn’t mean we should look to write more than we have to. The server doing something useful is a requirement for building an interesting business. The client doing something is often a nice-to-have.
There’s also this:
It’s really fast
One of the arguments for a SPA is that it provides a more reactive customer experience. I think that’s mostly debunked at this point, due to the performance creep and complexity that come with a more complicated client-server relationship.
Here’s a handy free tool from Calibre that’ll give your website a performance assessment.
Put the kettle on; it’s another epic data-driven screed from Alex. The footnotes on this would be a regular post on any other blog (and yes, even the footnotes have footnotes).
This is a spot-on description of the difference between back-end development and front-end development:
Code that runs on the server can be fully costed. Performance and availability of server-side systems are under the control of the provisioning organisation, and latency can be actively managed by developers and DevOps engineers.
Code that runs on the client, by contrast, is running on The Devil’s Computer. Nothing about the experienced latency, client resources, or even available APIs is under the developer’s control.
Client-side web development is perhaps best conceived of as influence-oriented programming. Once code has left the datacenter, all a web developer can do is send thoughts and prayers.
As a result, an unreasonably effective strategy is to send less code. In practice, this means favouring HTML and CSS over JavaScript, as they degrade gracefully and feature higher compression ratios. Declarative forms generate more functional UI per byte sent. These improvements in resilience and reductions in costs are beneficial in compounding ways over a site’s lifetime.
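To make that “more functional UI per byte” point concrete, here’s a small sketch of my own (not from Alex’s post, and the `/checkout` endpoint is made up): a disclosure widget and a validated form built entirely from declarative HTML, where a scripted equivalent would mean shipping event handlers and validation logic as JavaScript.

```html
<!-- Sketch only: a collapsible section plus client-side validation,
     with zero JavaScript shipped. If scripts fail to load, this
     still renders and still works. -->
<details>
  <summary>Delivery details</summary>
  <form action="/checkout" method="post">
    <label for="email">Email</label>
    <input id="email" name="email" type="email" required>
    <button>Continue</button>
  </form>
</details>
```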
“And so what we did is we started looking at, internally, all of the places where we’re using web technology — so all of our internal web UIs — and realized that they were just really unacceptably slow.”
Why were they slow? The answer: React.
“We realized that our performance, especially on low-end machines, was really terrible — and that was because we had adopted this React framework, and we had used React in probably one of the worst ways possible.”
Picture me holding Trys back and telling him, “Leave it alone, mate, it’s not worth it!”
How I switched to high-resolution maps on The Session without degrading performance.
Safari 18 supports `content-visibility: auto` …but there’s a very niche little bug in the implementation.
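For reference, using it looks something like this (a minimal sketch; the class name is hypothetical):

```css
/* Minimal sketch of content-visibility: auto. Off-screen sections
   skip layout and paint work until the user scrolls near them;
   contain-intrinsic-size provides a placeholder height so the
   scrollbar doesn't jump around. */
.long-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 500px;
}
```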
A performance boost in Chrome.
A small-scale conspiracy theory from the innards of Google.
With this bookmarklet you’re only ever one click away from the Lighthouse results for a page.
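If you wanted to roll your own, the idea fits in one line of JavaScript. This sketch (not necessarily how the linked bookmarklet works) points PageSpeed Insights, which runs Lighthouse, at the current page:

```js
// Sketch of a Lighthouse bookmarklet: opens the PageSpeed Insights
// report (which runs Lighthouse) for whatever page you're viewing.
javascript:window.open('https://pagespeed.web.dev/report?url=' + encodeURIComponent(location.href));
```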