Re: But...isn't it all just bollocks?
> AI, as with any tool, is only as good as the hands that wield it ... if you already suck at [xyz], then AI will make you suck harder.
I think that's an excellent slogan for the role of AI as a technical tool, and ought to be emblazoned on the box of every AI.
As a research scientist, I very occasionally use AI, primarily as a relevant-literature search tool for a problem area I'm unfamiliar with and need to get up to speed on quickly. It works pretty well for that; and if it turns up less-than-relevant references, that's hardly catastrophic, as I'll realise pretty quickly. (I've not found current LLMs to "invent"1 references; my impression is that more recent models do this much less.)
I've also tried throwing a hard problem (in statistics) at an AI - "hard" meaning I'd been unable to solve it myself. The results were interesting, but not much more; it failed to solve the problem (fair enough, nor did I), replicated some of my own unsuccessful attempts, and even came up with some plausible (but also ultimately unsuccessful) approaches I'd not tried. It was clear, though, and no doubt unsurprising, that while it was doing a fair job of discovering existing techniques and approaches in the problem domain, it wasn't doing anything original - it had no "creative insights". As such, I find AI of very limited usefulness as a research tool.
Some of my students have used AI2 for porting code to a different language (e.g., Matlab to Python), with generally fast and on the whole acceptable results in terms of correctness and efficiency (then again, Matlab and Python are hardly distant relatives). Of course they need to check that the ported code produces identical results; this is harder than the actual porting. IIRC, they tried getting the AI to automate that testing too, but this turned out to require much more tweaking than the port itself.
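For what it's worth, the checking step can be sketched in a few lines. This is a hypothetical illustration (the function names are mine, and the "ported" routine is just a stand-in for whatever got translated): the key point is that exact bit-for-bit equality between Matlab and Python output is usually too strict for floating point, so you compare within a tolerance instead.

```python
import numpy as np

def ported_smoother(x, alpha=0.9):
    """Stand-in for a routine ported from Matlab: simple exponential smoothing."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * y[i - 1] + (1 - alpha) * x[i]
    return y

def check_port(reference, candidate, rtol=1e-9, atol=1e-12):
    """True if the ported output matches the reference within floating-point tolerance."""
    return np.allclose(reference, candidate, rtol=rtol, atol=atol)
```

In practice the `reference` array would be the original Matlab output saved to a file (e.g. loaded with `scipy.io.loadmat`), and you'd run `check_port` over a battery of representative inputs rather than one.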
1May I also take this opportunity to sound off about the usage of "hallucinate" as applied to AIs. It's annoyingly anthropomorphic and annoyingly inaccurate. What people generally mean when they use the word is "It made a mistake", or simply that it delivered an untruth. That is nothing remotely like what happens when humans hallucinate, e.g., under the influence of psychedelic drugs, or due to a medical condition. And no, the AI didn't "lie" either; lying implies intention to deceive, and AIs have no intentions. A better word here, I think, is "confabulate" - roughly, to tell an untruth unintentionally.
2I neither encourage nor discourage this! I don't really care, as long as their work is good, they understand what they're doing, and they're not wasting their (and by extension my) time. I may well run your slogan past them in future, though :-)