Link tags: machinelearning

The Future of Software Development is Software Developers – Codemanship’s Blog

The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.

That was the hard part when programmers were punching holes in cards. It was the hard part when they were typing COBOL code. It was the hard part when they were bringing Visual Basic GUIs to life (presumably to track the killer’s IP address). And it’s the hard part when they’re prompting language models to predict plausible-looking Python.

The hard part has always been – and likely will continue to be for many years to come – knowing exactly what to ask for.
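To make that concrete, here’s a toy sketch (the scenario, names, and thresholds are all invented, not from the post). A wooly requirement like “flag suspicious logins” only becomes a program once a human has decided, precisely, what “suspicious” means:

```python
# A toy illustration: every clause is one human disambiguation of the
# wooly requirement "flag suspicious logins". All thresholds invented.
from datetime import datetime, timedelta

def is_suspicious(previous_logins: list[datetime], new_login: datetime,
                  known_countries: set[str], login_country: str) -> bool:
    # Decision 1: a country this account has never logged in from is suspicious.
    if login_country not in known_countries:
        return True
    # Decision 2: "too many logins" means more than five in ten minutes.
    recent = [t for t in previous_logins
              if timedelta(0) <= new_login - t < timedelta(minutes=10)]
    if len(recent) > 5:
        return True
    # Decision 3: everything else is fine -- including the cases
    # nobody thought to ask about.
    return False
```

The typing is the easy bit; the three decisions are the job.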

The Colonization of Confidence – Sightless Scribbles

I love the small web, the clean web. I hate tech bloat.

And LLMs are the ultimate bloat.

So much truth in one story:

They built a machine to gentrify the English language.

They have built a machine that weaponizes mediocrity and sells it as perfection.

They are strip-mining your confidence to sell you back a synthetic version of it.

Dissent | blarg

I suppose it’s not clear to me what a ‘good’ window into unreliable, systemically toxic systems accomplishes, or how it changes anything that matters for the better, or what that idea even means at all. I don’t understand how “ethical AI” isn’t just “clean coal” or “natural gas.” The power of normalization as four generations are raised breathing low doses of aerosolized neurotoxins; the alternative was called “unleaded”, but the poison was called “regular gas”.

There’s a real technology here, somewhere. Stochastic pattern recognition seems like a powerful tool for solving some problems. But solving a problem starts at the problem, not working backwards from the tools.

AI CEO – Replace Your Boss Before They Replace You

Delivering total nonsense, with complete confidence.

Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI (05 Dec 2025) – Pluralistic: Daily links from Cory Doctorow

The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.

That’s it.

That’s the $13T growth story that Morgan Stanley is telling. It’s why big investors and institutions are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family’s financial security.

Now, if AI could do your job, this would still be a problem. We’d have to figure out what to do with all these technologically unemployed people.

But AI can’t do your job. It can help you do your job, but that doesn’t mean it’s going to save anyone money.

The Jeopardy Phenomenon – Chris Coyier

AI has the Jeopardy Phenomenon too.

If you use it to generate code that is outside your expertise, you are likely to think it’s all well and good, especially if it seems to work at first pop. But if you’re intimately familiar with the technology or the code around the code it’s generating, there is a good chance you’ll be like hey! that’s not quite right!

Not just code. I’m astounded by the cognitive dissonance displayed by people who say “I asked an LLM about {topic I’m familiar with}, and here’s all the things it got wrong” who then proceed to say “It was really useful when I asked an LLM for advice on {topic I’m not familiar with, hence why I’m asking an LLM for advice}.”

Like, if you know that the results are super dodgy for your own area of expertise, why would you think they’d be any better for, I don’t know, restaurant recommendations in a city you’ve never been to?

The only winning move is not to play

My mind boggles at the thought of using a generative tool based on a large language model to do any kind of qualitative user research, so every single thing that Gregg says here makes complete sense to me.

On not choosing nice versions of AI – This day’s portion

Whenever anyone states that “AI is the future, so…” or “many people are using AI anyway, so…” they are not only expressing an opinion — they’re shaping that future.

The line and the stream. — Ethan Marcotte

I’ve come to realize that statements about the future aren’t predictions: they’re more like spells. When someone describes something to you as the future, they’re sharing a heartfelt belief that this something will be part of whatever comes next. “Artificial intelligence isn’t going anywhere” quite literally involves casting a technology forward into time. How could that be anything else but a kind of magic?

David Chisnall (*Now with 50% more sarcasm!*): “I think this needs to be repeated…”

Machine learning is amazing if … the value of a correct answer is much higher than the cost of an incorrect answer.

Related to Laissez-faire Cognitive Debt:

And that’s where I start to get really annoyed by a lot of the LLM hype. It’s pushing machine-learning approaches into places where there are significant harms from sometimes giving the wrong answer. And it’s doing so while trying to outsource the liability to the customers who are using these machines in ways in which they are advertised as working. It’s great for translation! Unless a mistranslated word could kill a business deal or start a war. It’s great for summarisation! Unless missing a key point could cost you a load of money. It’s great for writing code! Unless a security vulnerability would cost you lost revenue, or a copyright infringement lawsuit from having accidentally put something from the training set directly in your codebase in contravention of its license would kill your business. And so on. Lots of risks that are outsourced and liabilities that are passed directly to the user.
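Chisnall’s rule of thumb reduces to one line of arithmetic. A minimal sketch (the scenarios and every number are mine, purely illustrative): acting on a model’s answer only pays off if p·value − (1 − p)·cost > 0, which means its accuracy p has to beat the break-even point cost ÷ (value + cost):

```python
# Back-of-the-envelope version of Chisnall's rule. All figures invented.
# Acting on a model's answer pays off only if
#     p * value_correct - (1 - p) * cost_incorrect > 0,
# i.e. accuracy p must exceed cost_incorrect / (value_correct + cost_incorrect).

def break_even_accuracy(value_correct: float, cost_incorrect: float) -> float:
    """Minimum accuracy at which the expected value of an answer is positive."""
    return cost_incorrect / (value_correct + cost_incorrect)

# Spam filtering: a wrong call costs about as much as a right one gains,
# so even a mediocre classifier clears the bar.
print(break_even_accuracy(value_correct=1, cost_incorrect=1))            # 0.5

# Translating a contract: one bad sentence could kill the deal.
print(break_even_accuracy(value_correct=100, cost_incorrect=1_000_000))  # ~0.9999
```

Which is exactly the point: “sometimes wrong” is a fine property for a spam filter and a terrible one for a contract.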

Laissez-faire Cognitive Debt – Smithery

I think of Cognitive Debt as ‘where we have the answers, but not the thinking that went into producing those answers’.

Lately, I have started noticing examples of not just where the debt is being accrued, but who then has the responsibility to pick it up and repay it.

Too often, an LLM doesn’t replace the need for thinking in a group setting, but simply creates more work for others.

Alchemy - Josh Collinsworth blog

I am interested in art—we are interested in art, in any and all of its forms—because humans made it. That’s the very thing that makes it interesting; the who, the how, and especially the why.

The existence of the work itself is only part of the point, and materializing an image out of thin air misses the point of art, in very much the same way that putting a football into a Waymo to drive it up and down the street for a few hours would be entirely missing the point of sports.

Pink goo and stolen sandwiches | Frederic Marx, Front-End Developer

The generative AI industry only exists because some people decided that it’s okay for them to take all this work with no permission, let alone compensation for the original creators, and to charge others for the privilege of using the probabilistic plagiarism machines they’ve fed it to.

cubic blog: The real problem with AI coding

Can you ship AI-generated code without creating a maintenance nightmare six months from now? Can you debug it when it breaks? Can you modify it when requirements change? Can you onboard new engineers to a codebase they didn’t write and the AI barely explained?

Most teams haven’t realized this shift yet. They’re optimizing for code generation speed while comprehension debt silently accumulates in their repos.

One team I talked to spent 3 days fixing what should have been a 2-hour problem. They had “saved” time by having AI generate the initial implementation. But when it broke, they lost 70 hours trying to understand code they had never built themselves.

That’s comprehension debt compounding. The time you save upfront gets charged back with interest later.
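The interest metaphor is easy to make literal. A toy calculation (the fix times are from the anecdote above; the upfront saving is my assumption, since the post doesn’t quantify it):

```python
# The anecdote as a ledger. Fix times are from the post; the upfront
# saving is an assumed figure, since the post doesn't give one.
hours_saved_upfront = 20   # assumption: AI wrote the first version faster
expected_fix_hours = 2     # what the fix should have cost
actual_fix_hours = 70      # what it cost without comprehension

interest = actual_fix_hours - expected_fix_hours  # 68 hours of comprehension debt
net = hours_saved_upfront - interest              # -48: the "saving" was a loan
print(f"interest paid: {interest}h, net: {net}h")
```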

ChatGPT’s Atlas: The Browser That’s Anti-Web - Anil Dash

I love the web, and this thing is bad for the web.

  1. Atlas substitutes its own AI-generated content for the web, but it looks like it’s showing you the web
  2. The user experience makes you guess what commands to type instead of clicking on links
  3. You’re the agent for the browser, it’s not being an agent for you

It’s very clear that a lot of the new AI era is about dismantling the web’s original design.

eurollm.io

A different world is possible. Here, for example, is an open-source large language model from Europe, designed to support the 24 official languages of the European Union.

I have no idea why their top level domain is for the British Indian Ocean Territory, soon to be no more. That doesn’t instil confidence.

Measured AI | Note to Self

It’s creepy to tell people they’ll lose their jobs if they don’t use AI. It’s weird to assume AI critics hate progress and are resisting some inevitable future.

The AI Gold Rush Is Cover for a Class War

Under the guise of technological inevitability, companies are using the AI boom to rewrite the social contract — laying off employees, rehiring them at lower wages, intensifying workloads, and normalizing precarity. In short, these are political choices masquerading as technical necessities: AI is not the cause of the layoffs but their justification.

Frank Chimero · Beyond the Machine

The transcript of a very thoughtful talk by Frank.

“AI is inevitable” is bullshit · Eric Eggert

LLMs are useful when you need a compromise between fast and good. You will never get a good outcome fast.

I’m afraid we are settling into a status of good enough when using “AI,” which is especially hurtful for accessibility.