Tags: language

Wednesday, March 18th, 2026

Working with agents doesn’t feel like flow — Bill de hÓra

Related to Matt’s thoughts:

…working with agents feels much less like classic deep work, and much more like playing a game. Not to say the work is frivolous—it’s just because it feels like I’m in a game loop.

Flow, at least in the usual sense for me, feels smooth and continuous. The work and your attention starts to line up so cleanly that the experience becomes frictionless. You disappear into the work and meld with it. One notable aspect of flow has been I lose track of time. Working with agents on the other hand, is not like that at all. It’s highly engaging, but in a more jagged, reactive way. I’m focused, but not settled. I’m absorbed, but not merged with the task. I’m paying close attention the whole time, but the attention is dynamic and tactical rather than continuous. I don’t lose track of time at all.

Tuesday, March 17th, 2026

Gas Town and Bullet Hell – Petafloptimism

Matt has some smart reckons on the relationship between time and technology:

The factory bell, the railway timetable, the telegraph wire, the always-on smartphone — each imposed a new temporal discipline, each produced its own characteristic form of exhaustion, and each was eventually (partially, imperfectly) domesticated through a combination of regulation, design, and collective action.

Monday, March 16th, 2026

Stop Sloppypasta: Don’t paste raw LLM output at people

slop·py·pas·ta n. Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

Thursday, March 12th, 2026

Generative AI vegetarianism | Sean Boots

Generative AI vegetarianism, simply put, is avoiding generative AI tools as much as you can in your day-to-day life.

Wednesday, March 11th, 2026

I work, I think? - Annotated

This is about something that’s already happening, that doesn’t show up in employment figures: the quiet destruction of the feedback loop that turns inexperienced people into competent ones. The process by which you get something wrong, feel it, understand why, and become slightly less wrong next time. It’s unglamorous and it’s slow and it’s the only way it’s ever worked.

AI short-circuits that learning completely. Not maliciously. Just structurally. When you can generate something that looks right without doing the thinking, you will (most people, most people being me, will, most of the time, under pressure, with a deadline) and the muscle that thinking would have built never develops.

your ai slop bores me

Mutually assured Mechanical Turk.

This is genuinely much more interesting and wholesome than a chat interface powered by a large language model.

Tuesday, March 10th, 2026

I am in an abusive relationship with the technology industry

The cognitive overload of AI trying to Make You More Productive™️ whilst you’re actually trying to be productive is so shockingly absurd. And yet, we are being made to feel like we are stagnating, being left behind, not good enough, that we are luddites should we not adopt this imposing technology. We are being told we’re missing out, even though we’re probably doing just fine. The technology is gaslighting us.

Thursday, March 5th, 2026

LLMs Are Antithetical to Writing and Humanity

If you’re dyslexic and just trying to communicate more clearly in writing, or you’ve got a bullshit job and you just want to get your bullshit job’s bullshit tasks out of the way so you can move on to more meaningful endeavors, or at least move past the day-to-day slog that permeates your workday and serves no real purpose other than to pay the bills, then I cede; I cannot fault you.

But if, say, you’re a “writer” and you’re using an LLM to “help you” “write” or “think” because it’s easier and takes less time and thought, then I stand my ground; I can and do fault you.

Wednesday, March 4th, 2026

Feedback

If you wanted to make a really crude approximation of project management, you could say there are two main styles: waterfall and agile.

It’s not as simple as that by any means. And the two aren’t really separate things; agile came about as a response to the failures of waterfall. But if we’re going to stick with crude approximations, here we go:

  • In a waterfall process, you define everything up front and then execute.
  • In an agile process, you start executing and then adjust based on what you learn.

So crude! Much approximation!

It only recently struck me that the agile approach is basically a cybernetic system.

Cybernetics is pretty much anything that involves feedback. If it’s got inputs and outputs that are connected in some way, it’s probably cybernetic. Politics. Finance. Your YouTube recommendations. Every video game you’ve ever played. You. Every living thing on the planet. That’s cybernetics.

Fun fact: early on in the history of cybernetics, a bunch of folks wanted to get together at an event to geek out about this stuff. But they knew that if they used the word “cybernetics” to describe the event, Norbert Wiener would show up and completely dominate proceedings. So they invented a new alias for the same thing. They coined the term “artificial intelligence”, or AI for short.

Yes, ironically the term “AI” was invented in order to repel a Reply Guy. Now it’s Reply Guy catnip. In today’s AI world, everyone’s a Norbert Wiener.

The thing that has the Wieners really excited right now in the world of programming is the idea of agentic AI. In this set-up, you don’t do any of the actual coding. Instead you specify everything up front and then have a team of artificial agents execute your plan.

That’s right; it’s a return to waterfall. But that’s not as crazy as it sounds. Waterfall was wasteful because execution was expensive and time-consuming. Now that execution is relatively cheap (you pay a bit of money to line the pockets of the worst people in exchange for literal tokens), you can afford to throw some spaghetti at the wall and see if it sticks.

But you lose the learning. The idea of a cybernetic system like, say, agile development is that you try something, learn from it, and adjust accordingly. You remember what worked. You remember what didn’t. That’s learning.

Outsourcing execution to machines makes a lot of sense.

I’m not so sure it makes sense to outsource learning.

Madra Teanga - Open Source Irish Language Programming

An open source project that has already produced a great app for learning Irish—programmed in a language called Draíocht (that’s “magic” in English)!

I’m supporting this on Open Collective.

Monday, March 2nd, 2026

The nature of the job

Large language models help you build the thing faster, which is the primary end goal for your company but only sometimes for you. My primary goal might be to build the thing faster, but it also might be to learn something durably, to enjoy the work, to look forward to Monday.

I don’t like the mental fragility of not fully understanding how my own code works, where AI-generated code is “mine” in that it’s attributed to me in the git blame and I’m its maintainer going forward.

Tuesday, February 24th, 2026

Webspace Invaders · Matthias Ott

There’s a power imbalance at work here that’s hard to ignore. Large “AI” companies, the ones with billions in venture capital, send their bots to harvest free content. Not only from big publishers or Wikipedia, but from small, independent websites, too. But we, the people running these sites – often as passion projects, as ways to freely share what we’ve learned, as digital gardens we tend in our spare time – we’re the ones paying for the bandwidth and server resources to handle all those additional requests while those companies profit from the training data they extract. It’s an asymmetric battle: small systems absorbing the demands generated at an entirely different, industrial scale.

Sunday, February 22nd, 2026

I guess I kinda get why people hate AI

To be clear, I think AI will be ultimately extremely helpful. I still am using it on my projects. I am going to use it at my next job. I, personally, don’t hate AI.

But I can’t deny that the vibes right now are awful.

Not just bad, awful. It’s not just the “chat we’re cooked you’re the permanent underclass” stuff influencers say. It’s not just the “everybody is fucked” hyperbole CEOs spout. It’s the actual, day-to-day experience with the technology. I’m a programmer—AI actually helps me a lot. But for normal people, their interactions are profoundly more negative, and none of the people behind this technology seem to care.

blakewatson.com - I used Claude Code and GSD to build the accessibility tool I’ve always wanted

You know my thoughts on generative tools based on large language models, but this example of personal empowerment is undeniably liberating.

The Mythology Of Conscious AI

This superb essay by Anil Seth won the 2025 Berggruen Prize Essay Competition.

The future history of AI is not yet written. There is no inevitability to the directions AI might yet take. To think otherwise is to be overly constrained by our conceptual inheritance, weighed down by the baggage of bad science fiction and submissive to the self-serving narrative of tech companies laboring to make it to the next financial quarter. Time is short, but collectively we can still decide which kinds of AI we really want and which we really don’t.

Friday, February 20th, 2026

Training your replacement | Go Make Things

I’ve had a lot of people recently tell me AI is “inevitable.” That this is “the future” and “we all better get used to it.”

For the last decade, I’ve had a lot of people tell me the same thing about React.

And over that decade of React being “the future” and “inevitable,” I worked on many, many projects without it. I’ve built a thriving career.

AI feels like that in many ways. It also feels different in that non-technical people won’t shut the fuck up about it.

Thursday, February 19th, 2026

A considered approach to generative AI in front-end… | Clearleft

A thoughtful approach from Sam:

  1. Use AI only for tasks you already know how to do, on occasions when the time that would be spent completing the task can be better spent on other problems.
  2. When using AI, provide the chosen tool with something you’ve made as an input along with a specific prompt.
  3. Always comprehensively review the output from an AI tool for quality.

A programmer’s loss of identity - ratfactor

We value learning. We value the merits of language design, type systems, software maintenance, levels of abstraction, and yeah, if I’m honest, minute syntactical differences, the color of the bike shed, and the best way to get that perfectly smooth shave on a yak. I’m not sure what we’re called now, “heirloom programmers”?

Do I sound like a machine code programmer in the 1950s refusing to learn structured programming and compiled languages? I reject that comparison. I love a beautiful abstraction just as much as I love a good low-level trick.

If the problem is that we’ve painted our development environments into a corner that requires tons of boilerplate, then that is the problem. We should have been chopping the cruft away and replacing it with deterministic abstractions like we’ve always done. That’s what that Larry Wall quote about good programmers being lazy was about. It did not mean that we would be okay with pulling a damn slot machine lever a couple times to generate the boilerplate.

Wednesday, February 18th, 2026

Deep Blue

My social networks are currently awash with Deep Blue:

…the sense of psychological ennui leading into existential dread that many software developers are feeling thanks to the encroachment of generative AI into their field of work.