Tags: technology

Tuesday, February 18th, 2025

The Generative AI Con

I Feel Like I’m Going Insane

Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.

Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble.

We are in the midst of a group delusion — a consequence of an economy ruled by people that do not participate in labor of any kind outside of sending and receiving emails and going to lunches that last several hours — where the people with the money do not understand or care about human beings.

Their narrative is built on a mixture of hysteria, hype, and deeply cynical hope in the hearts of men that dream of automating away jobs that they would never, ever do themselves.

Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.

Friday, February 14th, 2025

AI is Stifling Tech Adoption | Vale.Rocks

Want to use all those great features that have been landing in browsers over the past year or two? View transitions! Scroll-driven animations! So much more!

Well, your coding co-pilot is not going to be of any help.

Large language models, especially those on the scale of many of the most accessible, popular hosted options, take humongous datasets and long periods to train. By the time everything has been scraped and a dataset has been built, the set is on some level already obsolete. Then, before a model can reach the hands of consumers, time must be taken to train and evaluate it, and then even more to finally deploy it.

Once it has finally been released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap: a period between the present and the AI’s training cutoff, during which models cannot effectively assist users with newly emerged technologies, thus disincentivising their use.

So we get this instead:

I’ve anecdotally noticed that many AI tools have a ‘preference’ for React and Tailwind when asked to tackle a web-based task, or even to create any app involving an interface at all.
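To make that concrete (my example, not the article’s): a scroll-driven reading-progress bar is just a few lines of CSS with no JavaScript at all, but ask a model trained before animation-timeline shipped and you’ll likely get a scroll event listener and a pile of framework code instead. A minimal sketch:

    /* A reading-progress bar that fills as you scroll the page.
       Pure CSS, no libraries, using the scroll-driven animations spec. */
    @keyframes grow-progress {
      from { transform: scaleX(0); }
      to { transform: scaleX(1); }
    }

    .progress {
      position: fixed;
      inset: 0 0 auto 0; /* pin to the top edge of the viewport */
      height: 4px;
      background: deeppink;
      transform-origin: 0 50%;
      animation: grow-progress auto linear;
      animation-timeline: scroll(); /* drive by scroll position, not time; must come after the shorthand */
    }

In production you’d wrap the .progress rules in @supports (animation-timeline: scroll()) so that browsers without support don’t render a static full-width bar.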

Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded, pointing out that some things we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

Tech continues to be political | Miriam Eric Suzanne

Being “in tech” in 2025 is depressing, and if I’m going to stick around, I need to remember why I’m here.

This. A million times, this.

I urge you to read what Miriam has written here. She has articulated everything I’ve been feeling.

I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?

Wednesday, February 12th, 2025

AI wants to rule the World, but it can’t handle dairy.

AI has the same problem that I saw ten years ago at IBM. And remember that IBM has been at this AI game for a very long time. Much longer than OpenAI or any of the new kids on the block. All of the shit we’re seeing today? Anyone who worked on or near Watson saw or experienced the same problems long ago.

Is it okay?

Robin takes a fair and balanced look at the ethics of using large language models.

Saturday, February 8th, 2025

UI Pace Layers - Jim Nielsen’s Blog

Every UI control you roll yourself is a liability. You have to design it, test it, ship it, document it, debug it, maintain it — the list goes on.

It makes you wonder why we insist on rolling (or styling) our own common UI controls so often. Perhaps we’d be better off asking: What are the fewest components we have to build to deliver value to our users?
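It’s worth making that concrete (my example, not Jim’s): the browser already ships a fully functional disclosure widget, keyboard and assistive-technology support included, in exactly two elements:

    <!-- A native disclosure widget: no JavaScript, no framework;
         the browser handles focus, toggling, and semantics for you. -->
    <details>
      <summary>Shipping details</summary>
      <p>Orders ship within two working days.</p>
    </details>

Every hand-rolled accordion is competing with that baseline.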

Friday, January 17th, 2025

Changing

It always annoys me when a politician is accused of “flip-flopping” for changing their mind on something. Instead of admiring someone for being willing to re-examine previously-held beliefs, we lambast them. We admire conviction, even though that’s a trait that has been at the root of history’s worst atrocities.

When you look at the history of human progress, some of our greatest advances were made by people willing to question their beliefs. Prioritising data over opinion is what underpins the scientific method.

But I get it. It can be very uncomfortable to change your mind. There’s inevitably going to be some psychological resistance, a kind of inertia of opinion that favours the sunk cost of all the time you’ve spent believing something.

I was thinking back to times when I’ve changed my opinion on something after being confronted with new evidence.

In my younger days, I was staunchly anti-nuclear power. It didn’t help that, at the time, nuclear power and nuclear weapons were conceptually linked in the public discourse. In the intervening years I’ve come to believe that nuclear power is far less destructive than fossil fuels. There are still a lot of issues—in terms of cost and time—which make nuclear less attractive than solar or wind, but I honestly can’t reconcile someone claiming to be an environmentalist while simultaneously opposing nuclear power. The data just doesn’t support that conclusion.

Similarly, I remember in the early 2000s being opposed to genetically-modified crops. But the more I looked into the facts, the clearer it became that there was nothing—other than vibes—to bolster that opposition. And yet I know many people who’ve maintained their opposition, often the same people who point to the scientific evidence when it comes to climate change. It’s a strange kind of cognitive dissonance that would allow for that kind of cherry-picking.

There are other situations where I’ve gone more in the other direction—initially positive, later negative. Google’s AMP project is one example. It sounded okay to me at first. But as I got into the details, its fundamental unfairness couldn’t be ignored.

I was fairly neutral on blockchains at first, at least from a technological perspective. There was even some initial promise of distributed data preservation. But over time my opinion went down, down, down.

Bitcoin, with its proof-of-work idiocy, is the poster-child of everything wrong with the reality of blockchains. The astoundingly wasteful energy consumption is just staggeringly pointless. Over time, any sufficiently wasteful project becomes indistinguishable from evil.

Speaking of energy usage…

My feelings about large language models have been dominated by two massive elephants in the room. One is the completely unethical way that the training data has been acquired (by ripping off the work of people who never gave their permission). The other is the profligate energy usage, not just in training these models but also in running queries on them.

My opinion on the provenance of the training data hasn’t changed. If anything, it’s hardened. I want us to fight back against this unethical harvesting by poisoning the well that the training data is drawing from.

But my opinion on the energy usage might just be swaying a little.

Michael Liebreich published an in-depth piece for Bloomberg last month called Generative AI – The Power and the Glory. He doesn’t sugar-coat the problems with current and future levels of power consumption for large language models, but he also doesn’t paint a completely bleak picture.

Effectively there’s a yet-to-be-decided battle between Koomey’s law and the Jevons paradox. Time will tell which way this will go.
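To put toy numbers on it (mine, not Liebreich’s): if the energy needed per query falls by 30% a year while the number of queries grows by 50% a year, total energy use still rises by about 5% a year, because 0.7 × 1.5 = 1.05. Efficiency gains don’t just have to happen; they have to outpace demand.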

The whole article is well worth a read. But what really gave me pause was a recent piece by Hannah Ritchie asking What’s the impact of artificial intelligence on energy demand?

When Hannah Ritchie speaks, I listen. And I’m well aware of the irony there. That’s a classic argument from authority, when the whole point of Hannah Ritchie’s work is that it’s the data that matters.

In any case, she does an excellent job of putting my current worries into a historical context, as well as laying out some potential futures.

Don’t get me wrong, the energy demands of large language models are enormous and are only going to increase, but we may well see some compensatory efficiencies.

Personally, I’d just like to see these tools charge a fair price for their usage. Right now they’re being subsidised by venture capital. If people actually had to pay out of pocket for the energy used per query, we’d get a much better idea of how valuable these tools actually are to people.

Instead we’re seeing these tools being crammed into existing products regardless of whether anybody actually wants them (and in my anecdotal experience, most people resent this being forced on them).

Still, I thought it was worth making a note of how my opinion on the energy usage of large language models is open to change.

But I still won’t use one that’s been trained on other people’s work without their permission.

Wednesday, December 11th, 2024

THE AI CON - How to Fight Big Tech’s Hype and Create the Future We Want

A shame that this must-read book won’t be out in time for Christmas—’twould make a great stocking filler for a lot of people I know.

A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls of technology sold under this banner, and why it’s crucial to recognize the many ways in which AI hype covers for a small set of power-hungry actors at work and in the world.

Thursday, November 21st, 2024

MomBoard: E-ink display for a parent with amnesia

Technology doesn’t have to be terrible. Here’s an absolutely wonderful use of an e-ink display:

I made as much use of vanilla HTML and CSS as possible. I used a small amount of JavaScript but no framework or other libraries.

Tuesday, November 12th, 2024

The meaning of “AI”

There are different kinds of buzzwords.

Some buzzwords are useful. They take a concept that would otherwise require a sentence of explanation and package it up into a single word or phrase. Back in the day, “ajax” was a pretty good buzzword.

Some buzzwords are worse than useless. This is when a word or phrase lacks definition. You could say this buzzword in a meeting with five people, and they’d all understand five different meanings. Back in the day, “web 2.0” was a classic example of a bad buzzword—for some people it meant a business model; for others it meant rounded corners and gradients.

The worst kind of buzzwords are the ones that actively set out to obfuscate any actual meaning. “The cloud” is a classic example. It sounds cooler than saying “a server in Virginia”, but it also sounds like the exact opposite of what it actually is. Great for marketing. Terrible for understanding.

“AI” is definitely not a good buzzword. But I can’t quite decide if it’s merely a bad buzzword like “web 2.0” or a truly terrible buzzword like “the cloud”.

The biggest problem with the phrase “AI” is that there’s a name collision.

For years, the term “AI” has been used in science-fiction. HAL 9000. Skynet. Examples of artificial general intelligence.

Now the term “AI” is also used to describe large language models. But there is no connection between this use of the term “AI” and the science fictional usage.

This leads to the ludicrous situation of otherwise-rational people wanting to discuss the dangers of “AI”, but instead of talking about the rampant exploitation and energy usage endemic to current large language models, they want to spend the time talking about the sci-fi scenarios of runaway “AI”.

To understand how ridiculous this is, I’d like you to imagine if we had started using a different buzzword in another setting…

Suppose that when ride-sharing companies like Uber and Lyft were starting out, they had decided to label their services as Time Travel. From a marketing point of view, it even makes sense—they get you from point A to point B lickety-split.

Now imagine if otherwise-sensible people began to sound the alarm about the potential harms of Time Travel. Given the explosive growth we’ve seen in this sector, sooner or later they’ll be able to get you to point B before you’ve even left point A. There could be terrible consequences from that—we’ve all seen the sci-fi scenarios where this happens.

Meanwhile the actual present-day harms of ride-sharing services around worker exploitation would be relegated to the sidelines. Clearly that isn’t as important as the existential threat posed by Time Travel.

It sounds ludicrous, right? It defies common sense. Just because a vehicle can get you somewhere fast today doesn’t mean it’s inevitably going to be able to break the laws of physics any day now, simply because it’s called Time Travel.

And yet that is exactly the nonsense we’re being fed about large language models. We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

It’s almost as if the labelling of the current technologies was more about marketing than accuracy.

Thursday, November 7th, 2024

Information literacy and chatbots as search • Buttondown

If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.

Saturday, November 2nd, 2024

Unsaid

I went to the UX Brighton conference yesterday.

The quality of the presentations was really good this year, probably the best yet. Usually there are one or two stand-out speakers (like Tom Kerwin last year), but this year, the standard felt very high to me.

But…

The theme of the conference was UX and “AI”, and I’ve never been more disappointed by what wasn’t said at a conference.

Not a single speaker addressed where the training data for current large language models comes from (it comes from scraping other people’s copyrighted creative works).

Not a single speaker addressed the energy requirements for current large language models (the requirements are absolutely mahoosive—not just for the training, but for each and every query).

My charitable reading of the situation yesterday was that every speaker assumed that someone else would cover those issues.

The less charitable reading is that this was a deliberate decision.

Whenever the issue of ethics came up, it was only ever in relation to how we might use these tools: considering user needs, being transparent, all that good stuff. But never once did the question arise of whether it’s ethical to even use these tools.

In fact, the message was often the opposite: words like “responsibility” and “duty” came up, but only in the admonition that UX designers have a responsibility and duty to use these tools! And if that carrot didn’t work, there’s always the stick of scaring you into using these tools for fear of being left behind and having a machine replace you.

I was left feeling somewhat depressed about the deliberately narrow focus. Maggie’s talk was the only one that dealt with any externalities, looking at how the firehose of slop is blasting away at society. But again, the focus was only ever on how these tools are used or abused; nobody addressed the possibility of deliberately choosing not to use them.

If audience members weren’t yet using generative tools in their daily work, the assumption was that they were lagging behind and it was only a matter of time before they’d get on board the hype train. There was no room for the idea that someone might examine the roots of these tools and make a conscious choice not to fund their development.

There’s a quote by Finnish architect Eliel Saarinen that UX designers like repeating:

Always design a thing by considering it in its next larger context. A chair in a room, a room in a house, a house in an environment, an environment in a city plan.

But none of the speakers at UX Brighton chose to examine the larger context of the tools they were encouraging us to use.

One speaker told us “Be curious!”, but clearly that curiosity should not extend to the foundations of the tools themselves. Ignore what’s behind the curtain. Instead look at all the cool stuff we can do now. Don’t worry about the fact that everything you do with these tools is built on a bedrock of exploitation and environmental harm. We should instead blithely build a new generation of user interfaces on the burial ground of human culture.

Whenever I get into a discussion about these issues, it always seems to come back ’round to whether these tools are actually any good or not. People point to the genuinely useful tasks they can accomplish. But that’s not my issue. There are absolutely smart and efficient ways to use large language models—in some situations, it’s like suddenly having a superpower. But as Molly White puts it:

The benefits, though extant, seem to pale in comparison to the costs.

There are no ethical uses of current large language models.

And if you believe that the ethical issues will somehow be ironed out in future iterations, then that’s all the more reason to stop using the current crop of exploitative large language models.

Anyway, like I said, all the talks at UX Brighton were very good. But I wish just one of them had addressed the underlying questions that any good UX designer should ask: “Where did this data come from? What are the second-order effects of deploying this technology?”

Having a talk on those topics would’ve been nice, but I would’ve settled for having five minutes of one talk, or even one minute. But there was nothing.

There’s one possible explanation for this glaring absence that’s quite depressing to consider. It may be that these topics weren’t covered because there’s an assumption that everybody already knows about them, and frankly, doesn’t care.

To use an outdated movie reference, imagine a raving Charlton Heston shouting that “Soylent Green is people!”, only to be met with indifference. “Everyone knows Soylent Green is people. So what?”

Saturday, October 19th, 2024

2004 was the first year of the future

I enjoyed reading through these essays about the web of twenty years ago: music, photos, email, games, television, iPods, phones

Much as I love the art direction, you’d never know that we actually had some very nice-looking websites back in 2004!

My solar-powered and self-hosted website | Dries Buytaert

This is a neat project from Dries:

This project is driven by my curiosity about making websites and web hosting more environmentally friendly, even on a small scale. It’s also a chance to explore a local-first approach: to show that hosting a personal website on your own internet connection at home can often be enough for small sites. This aligns with my commitment to both the Open Web and the IndieWeb.

At its heart, this project is about learning and contributing to a conversation on a greener, local-first future for the web.

Tuesday, October 15th, 2024

She Built a Microcomputer Empire From Her Suburban Home

The story of Lore Harp McGovern is like something from Halt And Catch Fire.

Saturday, October 12th, 2024

It turns out I’m still excited about the web

While I’ve grown more cynical about much of tech, movements like the Indieweb and the Fediverse remind me that the ideals I once loved, and that spirit of the early web, aren’t lost. They’re evolving, just like everything else.

Thursday, October 10th, 2024

Mismatch

This seems to be the attitude of many of my fellow nerds—designers and developers—when presented with tools based on large language models that produce dubious outputs, built on the unethical harvesting of other people’s work, and requiring staggering amounts of energy to run:

This is the future! I need to start using these tools now, even if they’re flawed, because otherwise I’ll be left behind. They’ll only get better. It’s inevitable.

Whereas this seems to be the attitude of those same designers and developers when faced with stable browser features that can be safely used today without frameworks or libraries:

I’m sceptical.

Wednesday, October 9th, 2024

Report: Thinking about using AI? - Green Web Foundation

A solid, detailed, in-depth report.

The sheer amount of resources needed to support the current and forecast demand from AI is colossal and unprecedented.