Information literacy and chatbots as search • Buttondown

If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact-check the 5%.

Report: Thinking about using AI? - Green Web Foundation

A solid, in-depth report.

The sheer amount of resources needed to support the current and forecast demand from AI is colossal and unprecedented.

I wasted a day on CSS selector performance to make a website load 2ms faster | Trys Mudford

Picture me holding Trys back and telling him, “Leave it alone, mate, it’s not worth it!”

A short note on AI – Me, Robin

I hope to make something that could only exist because I made it. Something that is the one thing that it is. Not an average sentence. Not a visual approximation of other people’s work. Not a stolen concept that boils lakes and uses more electricity than anything in my household.

First Impressions of the Pixel 9 Pro | Whatever

At this point, it really does seem like “AI” is “bullshit you don’t need or is done better in other ways, but we’ve just spent literally billions on this so we really need you to use it, even though it’s nowhere as good as what we were already doing,” and everything else is just unsexy functionality that makes what you do marginally easier or better. I’m sorry we live in a world where enshittification is being marketed as The Hot And Sexy Thing, but just because we’re in that world, doesn’t mean you have to accept it.

The State of ES5 on the Web

This is grim:

If you look at the data below on how popular websites today are actually transpiling and deploying their code to production, it turns out that most sites on the internet ship code that is transpiled to ES5, yet still doesn’t work in IE 11—meaning the transpiler and polyfill bloat is being downloaded by 100% of their users, but benefiting none of them.
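If you're wondering what the alternative looks like in practice, here's a rough sketch of my own (assuming a Babel build; none of this is from the article): point @babel/preset-env at browsers that support ES modules instead of letting it fall back to ES5, and only pull in the polyfills those browsers actually need.

    // babel.config.js: a minimal sketch, not taken from the article.
    // Target browsers that understand ES modules (which excludes IE 11),
    // so modern syntax ships untranspiled and the ES5/polyfill bloat goes away.
    module.exports = {
      presets: [
        [
          '@babel/preset-env',
          {
            targets: { esmodules: true },
            // only add the core-js polyfills the targeted browsers still need
            useBuiltIns: 'usage',
            corejs: 3,
          },
        ],
      ],
    };

The same idea works with whatever bundler you prefer; the point is simply to stop pretending IE 11 is still in the audience.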

Why “AI” projects fail

“AI” is heralded (by those who claim it will replace workers as well as those who argue for it as a mere tool) as a thing to drop into your workflows to create whatever gains were promised. It’s magic in the literal sense. You learn a few spells/prompts and your problems go poof. But that was already bullshit when we talked about introducing other digital tools into our workflows.

And we’ve been doing this for decades now: with every new technology we spend a lot of money to get a lot of bloody noses for way too little outcome. Because we keep not looking at actual, real problems in front of us – problems that the people affected by them can probably tell you at least a significant part of the solution to. No, we want a magic tool to make the problem disappear. Which is a significantly different thing than solving it.

Does AI benefit the world? – Chelsea Troy

Our ethical struggle with generative models derives in part from the fact that we…sort of can’t have them ethically, right now, to be honest. We have known how to build models like this for a long time, but we did not have the necessary volume of parseable data available until recently—and even then, to get it, companies have to plunder the internet. Sitting around and waiting for consent from all the parties that wrote on the internet over the past thirty years probably didn’t even cross Sam Altman’s mind.

On the environmental front, fans of generative model technology insist that eventually we’ll possess sufficiently efficient compute power to train and run these models without the massive carbon footprint. That is not the case at the moment, and we don’t have a concrete timeline for it. Again, waiting around for a thing we don’t have yet doesn’t appeal to investors or executives.

Why A.I. Isn’t Going to Make Art | The New Yorker

Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.

Another great piece by Ted Chiang!

The companies promoting generative-A.I. programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration—but these things cannot be easily separated. I’m not saying that art has to involve tedium. What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.

This bit reminded me of Simon’s rule:

Let me offer another generalization: any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it. Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it. The type of attention you pay when reading a personal e-mail is different from the type you pay when reading a business report, but in both cases it is only warranted when the writer put some thought into it.

Simon also makes an appearance here:

The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.

I could quote the whole thing, but I’ll stop with this one:

The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.

s19e01: Do Reply; Use plain language, and tell the truth

Very good writing advice from Dan:

Use plain language. Tell the truth.

Related:

The reason why LLM text is bad, for me, is that it’s insipid, which is not a plain language word to use, but the secret is to use words like that tactically and sparingly, to great effect.

They don’t write plainly because most of the text they’ve been trained on isn’t plain and clear. I’d argue that most of the text that’s ever existed isn’t plain and clear anyway.

Aboard Newsletter: Why So Bad, AI Ads?

The human desire to connect with others is very profound, and the desire of technology companies to interject themselves even more into that desire—either by communicating on behalf of humans, or by pretending to be human—works in the opposite direction. These technologies don’t seem to be encouraging connection as much as commoditizing it.

Pop Culture

Despite all of this hype, all of this media attention, all of this incredible investment, the supposed “innovations” don’t even seem capable of replacing the jobs that they’re meant to — not that I think they should, just that I’m tired of being told that this future is inevitable.

The reality is that generative AI isn’t good at replacing jobs, but at commoditizing distinct acts of labor, and, in the process, the early creative jobs that help people build portfolios to advance in their industries.

One of the fundamental misunderstandings of the bosses replacing these workers with generative AI is that you are not just asking for a thing, but outsourcing the risk and responsibility.

Generative AI costs far too much, isn’t getting cheaper, uses too much power, and doesn’t do enough to justify its existence.

New Web Development. Or, why Copilots and chatbots are particularly bad for modern web dev – Baldur Bjarnason

The paradigm shift that web development is entering hinges on the fact that while React was a key enabler of the Single-Page-App and Component era of the web, in practice it normally tends to result in extremely poor products. Built-in browser APIs are now much more capable than they were when React was first invented.
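To make that concrete, here’s a small sketch of my own (not from Baldur’s piece) of the kind of built-in browser API he’s gesturing at: a reusable component with its own scoped styles, written with nothing but Custom Elements and Shadow DOM, no framework or build step required.

    // A hypothetical <greeting-card> element: the tag name and attribute are my
    // invention, but customElements, HTMLElement and attachShadow are standard
    // browser APIs that work without any library.
    class GreetingCard extends HTMLElement {
      connectedCallback() {
        const name = this.getAttribute('name') ?? 'world';
        const shadow = this.attachShadow({ mode: 'open' });
        // styles inside the shadow root don't leak out into the rest of the page
        shadow.innerHTML = `
          <style>p { font-weight: bold; }</style>
          <p>Hello, ${name}!</p>
        `;
      }
    }
    customElements.define('greeting-card', GreetingCard);

    // In the HTML: <greeting-card name="reader"></greeting-card>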

Ideas Aren’t Worth Anything - The Biblioracle Recommends

The fact that writing can be hard is one of the things that makes it meaningful. Removing this difficulty removes that meaning.

There is significant enthusiasm for this attitude inside the companies that produce and distribute media like books, movies, and music, for obvious reasons. Removing the expense of humans making art is a real savings to the bottom line.

But the idea of this being an example of democratizing creativity is absurd. Outsourcing is not democratizing. Ideas are not the most important part of creation; execution is.

How do we build the future with AI? – Chelsea Troy

This is the transcript of a fantastic talk called “The Tools We Still Need to Build with AI.”

Absorb every word!

The mainstreaming of ‘AI’ scepticism – Baldur Bjarnason

  1. Tech is dominated by “true believers” and those who tag along to make money.
  2. Politicians seem to be forever gullible to the promises of tech.
  3. Management loves promises of automation and profitable layoffs.

But it seems that the sentiment might be shifting, even among those predisposed to believe in “AI”, at least in part.

Because There’s No “AI” in “Failure”

My new favourite blog on Tumblr.

On being human and “creative”

Now we have this collision of those who, with the specific intent of creative expression, make things that are wholly the product of their unique experience and skills and offer them in the marketplace. Then there are those who use machines to produce derivatives of others’ creative work to offer as products in the marketplace. Both are seeking an audience and financial benefit for their offering.

Those who wholly manufacture creative works are asking that the same value be put on their imitation of creative expression as the value inherent in sentient creation. They are saying they deserve the same recognition—be that in respect, attention, acknowledgement or compensation—that works created by a person might receive. But they haven’t earned it.

Using generative AI is to ask What If but then hand off not only the responsibility and effort of answering the question but also accountability for the answer. When the machine creates something pleasing or marketable, it’s “look at what I did”. When the machine creates something terrible or wrong, it’s “not my fault, the machine did it”. The claim of ownership is conditional and only maintained if the output can generate value.

There’s so much to love here, like this:

My art is the story of how I have spent the time in my life.

And this:

The value of an idea comes from the execution of the idea.