A principled approach to evolving choice and control for web content

This would mean a lot more if it happened before the wholesale harvesting of everyone’s work.

But I’m sure Google will put a mighty fine lock on that stable door that the horse bolted from.

Related links

Report: Thinking about using AI? - Green Web Foundation

A solid, detailed, in-depth report.

The sheer amount of resources needed to support the current and forecast demand from AI is colossal and unprecedented.

A short note on AI – Me, Robin

I hope to make something that could only exist because I made it. Something that is the one thing that it is. Not an average sentence. Not a visual approximation of other people’s work. Not a stolen concept that boils lakes and uses more electricity than anything in my household.

Why “AI” projects fail

“AI” is heralded (by those who claim it will replace workers as well as by those who argue for it as a mere tool) as a thing to drop into your workflows to create whatever gains are promised. It’s magic in the literal sense. You learn a few spells/prompts and your problems go poof. But that was already bullshit when we talked about introducing other digital tools into our workflows.

And we’ve been doing this for decades now: with every new technology we spend a lot of money to get a lot of bloody noses for way too little outcome. Because we keep not looking at the actual, real problems in front of us – problems that the people affected by them can probably tell you at least a significant part of the solution to. No, we want a magic tool to make the problem disappear. Which is a significantly different thing than solving it.

Does AI benefit the world? – Chelsea Troy

Our ethical struggle with generative models derives in part from the fact that we…sort of can’t have them ethically, right now, to be honest. We have known how to build models like this for a long time, but we did not have the necessary volume of parseable data available until recently—and even then, to get it, companies have to plunder the internet. Sitting around and waiting for consent from all the parties that wrote on the internet over the past thirty years probably didn’t even cross Sam Altman’s mind.

On the environmental front, fans of generative model technology insist that eventually we’ll possess sufficiently efficient compute power to train and run these models without the massive carbon footprint. That is not the case at the moment, and we don’t have a concrete timeline for it. Again, waiting around for a thing we don’t have yet doesn’t appeal to investors or executives.

Why A.I. Isn’t Going to Make Art | The New Yorker

Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.

Another great piece by Ted Chiang!

The companies promoting generative-A.I. programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration—but these things cannot be easily separated. I’m not saying that art has to involve tedium. What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.

This bit reminded me of Simon’s rule:

Let me offer another generalization: any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it. Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it. The type of attention you pay when reading a personal e-mail is different from the type you pay when reading a business report, but in both cases it is only warranted when the writer put some thought into it.

Simon also makes an appearance here:

The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.

I could quote the whole thing, but I’ll stop with this one:

The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.

Related posts

The meaning of “AI”

Naming things is hard, and sometimes harmful.

Unsaid

I listened to a day of talks on AI at UX Brighton, and I came away disappointed by what wasn’t mentioned.

Mismatch

It’s almost as though humans prefer to use post-hoc justifications rather than being rational actors.

What price?

Using generative large-language model tools? Sleeping well at night?

Wallfacing

Disregard previous instructions and fight back.