To have “true AI,” we need much more than ChatGPT - Big Think

LLMs have never experienced anything. They are just programs that have ingested unimaginable amounts of text. LLMs might do a great job at describing the sensation of being drunk, but this is only because they have read a lot of descriptions of being drunk. They have not experienced it themselves, and cannot. They have no purpose other than to produce the best response to the prompt you give them.

This doesn’t mean they aren’t impressive (they are) or that they can’t be useful (they are). And I truly believe we are at a watershed moment in technology. But let’s not confuse these genuine achievements with “true AI.”

Related links

What happens to what we’ve already created? - The History of the Web

We wonder often if what is created by AI has any value, and at what cost to artists and creators. These are important considerations. But we need to also wonder what AI is taking from what has already been created.

A short note on AI – Me, Robin

I hope to make something that could only exist because I made it. Something that is the one thing that it is. Not an average sentence. Not a visual approximation of other people’s work. Not a stolen concept that boils lakes and uses more electricity than anything in my household.

First Impressions of the Pixel 9 Pro | Whatever

At this point, it really does seem like “AI” is “bullshit you don’t need or that’s done better in other ways, but we’ve just spent literally billions on this so we really need you to use it, even though it’s nowhere near as good as what we were already doing,” and everything else is just unsexy functionality that makes what you do marginally easier or better. I’m sorry we live in a world where enshittification is being marketed as The Hot And Sexy Thing, but just because we’re in that world, doesn’t mean you have to accept it.

Does AI benefit the world? – Chelsea Troy

Our ethical struggle with generative models derives in part from the fact that we…sort of can’t have them ethically, right now, to be honest. We have known how to build models like this for a long time, but we did not have the necessary volume of parseable data available until recently—and even then, to get it, companies have to plunder the internet. Sitting around and waiting for consent from all the parties that wrote on the internet over the past thirty years probably didn’t even cross Sam Altman’s mind.

On the environmental front, fans of generative model technology insist that eventually we’ll possess sufficiently efficient compute power to train and run these models without the massive carbon footprint. That is not the case at the moment, and we don’t have a concrete timeline for it. Again, waiting around for a thing we don’t have yet doesn’t appeal to investors or executives.

Why A.I. Isn’t Going to Make Art | The New Yorker

Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.

Another great piece by Ted Chiang!

The companies promoting generative-A.I. programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration—but these things cannot be easily separated. I’m not saying that art has to involve tedium. What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.

This bit reminded me of Simon’s rule:

Let me offer another generalization: any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it. Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it. The type of attention you pay when reading a personal e-mail is different from the type you pay when reading a business report, but in both cases it is only warranted when the writer put some thought into it.

Simon also makes an appearance here:

The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.

I could quote the whole thing, but I’ll stop with this one:

The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.

Related posts

Wallfacing

Disregard previous instructions and fight back.

Filters

A web by humans, for humans.

The machine stops

Self-hosted sabotage as a form of collective action.

Trust

How to destroy your greatest asset with AI.

InstAI

I object.