Last week, at a conference, I had a random hallway conversation with another engineer. We ended up talking about Zed and he told me he’ll try it, but does it have any AI features? If so, can you turn them off?
I told him that, yes, you can turn them off. Sensing what made him ask, I added that if you do turn them off, it’s all deactivated, no AI in the background, foreground, underground.
Curious now, having a chance for more nuance than a GitHub issue usually allows, with this being a real conversation in the Real World, I asked: so you don’t use AI? Not at all?
No, he said. With a shrug, he added: I tried it once, it was completely wrong, so I stopped using it. Never used it for coding, he said.
What’d you use, I asked. Claude? ChatGPT? Have you tried GPT-4?
Not sure, some website, he said with another shrug.
I haven’t been able to stop thinking about it.
There wasn’t any doubt in those shrugs. A couple of shrugs saying: I don’t care about all that AI stuff, I’m not interested, I just want to turn it off.
And I keep thinking about it and… I don’t get it.
What I do get is if you think AI is over-hyped, or that it’ll never lead to AGI, or that LLMs can’t reason, or that there’s a whole bunch of bullshit flying around in the world with the tag “AI” attached to it, or that it’s too expensive, too inefficient, too restricted, generates too much crap, or isn’t useful for what you’re doing — I get that.
What I don’t get is how you can be a programmer in the year twenty twenty-four and not be the tiniest bit curious about a technology that’s said to be fundamentally changing how we’ll program in the future. Absolutely, yes, that claim sounds ridiculous — but don’t you want to see for yourself?
The boy cried wolf, we won’t fall for that old trick again, hype’s hype and hot air is hot air, but now the whole town is saying there’s a wolf alright and you’re not interested in seeing what it looks like, not at all?
There’s Andreas Kling, creator of SerenityOS and the Ladybird browser, using Copilot to build JIT compilers and often saying how much he values Copilot. Mitchell Hashimoto, founder of Hashicorp and creator of so many successful tools that I don’t know which one to name here, doesn’t use language servers but does use Copilot when hacking on Ghostty. Fabrice Bellard, a hacker with a portfolio so impressive that if someone told you he’s made-up and doesn’t really exist you wouldn’t immediately brush it off, has been getting into LLMs and building tools for them. John Carmack — John Carmack — is working in AI now. Jarred Sumner, who wrote Bun into the world, is using Claude to do something he could easily do himself. Simon Eskildsen, who’s done more engineering on napkins than others have on their computers, is using AI “all the time”. antirez — the antirez — has been getting into LLMs for at least the last year.
That’s just off the top of my head. I could go on and on and on, but I won’t because — somehow, magically? — I can hear you say “that’s an appeal to authority, it doesn’t mea—” Yes, yes, yes, you’re right.
Look. I’m not saying you should kneel in front of the AGI altar.
What I’m saying is that ever since I got into programming I’ve assumed that one shared trait among programmers was curiosity, a willingness to learn, and that our maxim is that we can’t ever stop learning, because what we’re doing is constantly changing beneath our fingers and if we don’t pay attention it might slip away from us, leaving us with knowledge that’s no longer useful.
Maybe that assumption was wrong, maybe we don’t all share this trait, and maybe that’s okay, but even if… I don’t get how you can see some of the world’s best programmers use a new technology to make them better at programming and shrug it off.
How can you see them all use it and not think that, okay, maybe it’s not all bullshit, maybe something’s there, I need to figure out what it is?
Whenever someone asks me this question, my response is pretty similar, followed by "it takes the fun out of programming for me". Programming for me isn't just building product, but also solving new problems and writing good code. AI tools, for me at least, take the fun out of problem solving.
"But what about mundane tasks? Boilerplates?" I do enjoy finishing the switch case with all the cases, typing those makes them engraved into my brain. Plus, is asking Copilot really faster than just typing it? Doesn't it break your "flow"?
One last thing I don't like about AI agents in code editors: they're very eager to complete my code. Whenever I stop to think about how I'm going to write or solve a problem, they're already suggesting code, which distracts me.
I do sometimes use AI tools, but it's usually not once every hour, more like once a week.
Curiosity can take many forms and many directions. Perhaps they're more curious about the domain they work in, for example, or the problem they're solving, not the code they use to express a solution - which might be quite basic, but still valuable code.
Equally, what's mundane for the world's best programmers might be the more interesting part of the job for those whose day-to-day software engineering is generally mundane.
The reality is that most software engineers are building business-as-usual systems, not working on the latest cutting-edge technology.