Jeremy Keith

Making websites. Writing books. Hosting a podcast. Speaking at events. Living in Brighton. Working at Clearleft. Playing music. Taking photos. Answering email.

Sunday, February 16th, 2025

The hardest working font in Manhattan – Aresluna

This is absolutely wonderful!

There’s deep dives and then there’s Marcin’s deeeeeeep dives. Sit back and enjoy this wholesome detective work, all beautifully presented with lovely interactive elements.

This is what the web is for!

Friday, February 14th, 2025

The Tyranny of Now — The New Atlantis

I’m not a fan of Nicholas Carr and his moral panics, but this is an excellent dive into some historical media theory.

What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.

AI is Stifling Tech Adoption | Vale.Rocks

Want to use all those great features that have been landing in browsers over the past year or two? View transitions! Scroll-driven animations! So much more!

Well, your coding co-pilot is not going to be of any help.
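
To make that concrete, here's a minimal sketch of the kind of thing an out-of-date model won't suggest: the View Transitions API, which lets the browser animate between two page states with a single call. The `updateTheme` callback is a hypothetical example of my own, not something from the linked post.

    // A hypothetical DOM update: toggle a dark-mode class on the root element.
    function updateTheme(): void {
      document.documentElement.classList.toggle('dark');
    }

    // Where the API is supported, the browser snapshots the old and new
    // states and animates between them; otherwise just apply the update.
    if ('startViewTransition' in document) {
      document.startViewTransition(() => updateTheme());
    } else {
      updateTheme();
    }

Ask a model trained before this feature shipped and you're far more likely to get a hand-rolled JavaScript animation (or a React component) than those few lines.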

Large language models, especially those on the scale of many of the most accessible, popular hosted options, take humongous datasets and long periods to train. By the time everything has been scraped and a dataset has been built, the set is on some level already obsolete. Then, before a model can reach the hands of consumers, time must be taken to train and evaluate it, and then even more to finally deploy it.

Once it has finally released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap. A period between the present and AI’s training cutoff. This gap creates a time between when a new technology emerges and when AI systems can effectively support user needs regarding its adoption, meaning that models will not be able to service users requesting assistance with new technologies, thus disincentivising their use.

So we get this instead:

I’ve anecdotally noticed that many AI tools have a ‘preference’ for React and Tailwind when asked to tackle a web-based task, or even to create any app involving an interface at all.

Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded, pointing out that some things we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

How Indie Devs and Small Teams Can Win in a Tech Downturn - The New Stack

In which Rich nails Clearleft’s superpower:

“Clearleft is a relatively small team, but we can achieve big results because we are nimble and extremely experienced. As strategic design partners, we have a privileged position where we can work around a large company’s politics,” Rutter said. “We need to understand those politics — and help the client staff navigate them — but we don’t need to be bound by them. We bring a thoroughly user-centered approach to our design partnership, and that can be something novel to companies. By showing them what good design looks like (not so much the interface, as the actual process of getting to really well-designed products and services), we can be disruptive within the organization and leave them in a much better place.”

Tech continues to be political | Miriam Eric Suzanne

Being “in tech” in 2025 is depressing, and if I’m going to stick around, I need to remember why I’m here.

This. A million times, this.

I urge you to read what Miriam has written here. She has articulated everything I’ve been feeling.

I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?

Thursday, February 13th, 2025

Putting the ink into design thinking | Clearleft

The power of prototyping:

Most of my work is a set of disposables rather than deliverables, and I celebrate this.

I like the three questions that Chris asks himself:

  1. What’s the quickest, cheapest thing I can create to help make the next design decision?
  2. What can I create to best demonstrate the essence of the concept?
  3. How can I most effectively share the thinking behind the design with decision-makers?

We Live Like Royalty and Don’t Know It — The New Atlantis

Strong Deb Chachra vibes in this ongoing series by Charles C. Mann:

The great European cathedrals were built over generations by thousands of people and sustained entire communities. Similarly, the electric grid, the public-water supply, the food-distribution network, and the public-health system took the collective labor of thousands of people over many decades. They are the cathedrals of our secular era. They are high among the great accomplishments of our civilization. But they don’t inspire bestselling novels or blockbuster films. No poets celebrate the sewage treatment plants that prevent them from dying of dysentery. Like almost everyone else, they rarely note the existence of the systems around them, let alone understand how they work.
