Older developers are down with the vibe coding vibe

For those who thought AI vibe coding was just for the youngsters, newly published research shows that developers with over 10 years of experience are more than twice as likely to do it. According to a July survey of 791 US developers from cloud services platform Fastly, around a third of senior developers with more than a …

  1. veghead

    But...isn't it all just bollocks?

    As an old fart with over 30 years dev experience, for the longest time I was skeptical about AI getting involved in programming. Then I read an opinion piece by some redis developer or other and it persuaded me to give it a go. So I did, and for about an hour I was evangelical about how brilliant Google Gemini Pro was at helping me with my firmware development. Before I got a chance to hand over a wedge to Google, I learned the truth: it makes shit up, and tells you about it with absolute confidence. In fact the only skill the AI really has is contrition: it's very good at apologising for being total crap. It's like it has imposter syndrome, but realises it is genuinely incompetent.

    We're doomed. Doomed!

    1. elDog Silver badge

      Re: But...isn't it all just bollocks?

      If this trend towards AI-generated code is rotting the brains and capabilities of the younger developers, it can also interfere with the experience and wisdom of the more 'seasoned' hands.

      "They" are reporting that oncologists, radiologists, etc. are rapidly losing skills honed through years of medical school and practice by relying on the AI tools.

      Greybeards of all types have some historic knowledge that is not hoovered up by the LLM robots. It will be a real pity to lose that.

      1. Anonymous Coward
        Anonymous Coward

        Re: But...isn't it all just bollocks?

        As long as we can lower the average hourly rate, it's all good. Nostalgia is cheap, money buys Porsches

      2. Anonymous Coward
        Anonymous Coward

        Re: But...isn't it all just bollocks?

        100% agree !!!

        Let's hope that the greybeards are saved before they are side-lined because some CEO believes "'AI' can do your job cheaper".

        The greybeards will be the ones to rescue you all from the mess called 'AI'.

        :)

    2. cyberdemon Silver badge
      Holmes

      Re: But...isn't it all just bollocks?

      Er, no shit.

      I can't fathom why El Reg has been so gushing about AI as of late.. So much for "Biting the hand that feeds IT".. It might be something to do with their new Californian owners.

      There's an old adage: "Debugging code is at least twice as hard as writing it. Therefore if you write your code in the 'cleverest' way that you possibly can, then you will be incapable of debugging it". This is one reason why I have always hated long and inscrutable "code generation" pipelines, of which "AI" is an extension ad-absurdum.

      If you really think that a stochastic blunderbuss full of other people's irrelevant ideas is going to hit your specialised/novel problem, then you are either a fool or a fraud. Supposing the former and it actually hits: What are you going to do when a bug comes in, the requirements change, the target gets smaller and it no longer works? Close your eyes, plug your ears, pay lots of money and hit rapid-fire?

      1. veghead

        Re: But...isn't it all just bollocks?

        "stochastic blunderbuss full of other people's irrelevant ideas" - if it's possible to patent descriptions like that, you should absolutely do it. Or at least put it on a t-shirt.

        1. Tom Graham

          Re: But...isn't it all just bollocks?

          I have been using a variation on this for a while - a statistical text generating machine trained on the contents of Reddit/furry.

        2. Anonymous Coward
          Happy

          Re: But...isn't it all just bollocks?

          And isn't the word "stochastic" redundant, as a blunderbuss is almost the very definition of a stochastic tool or mechanism?

      2. m4r35n357 Silver badge

        Re: But...isn't it all just bollocks?

        https://en.wikipedia.org/wiki/Clickbait

      3. Anonymous Coward
        Anonymous Coward

        Re: But...isn't it all just bollocks?

        "pay lots of money and hit rapid-fire?"

        The long-term business plan of the tech behemoths running this 'AI' scam.

        1. [NOW] De-skill all the people who produce the code by providing 'AI' assistance at the coalface. $$$

        2. [LATER] Provide, to the people above, 'Magic Boxen' that answer all the 'hard questions' that they cannot answer any more because they have lost the skills. $$$$$

        3. [FUTURE] Make lots & lots more money providing more and more so called better 'Magic Boxen' which are needed because the skills have gone. $$$$$$$$

        1. veti Silver badge

          Re: But...isn't it all just bollocks?

          You just described the development of high level programming languages and environments. And low level ones before, for that matter. What we're seeing is simply another abstraction layer being added to a cake that's already plenty thick.

          1. Anonymous Coward
            Anonymous Coward

            Re: But...isn't it all just bollocks?

            The problem with this latest 'Abstraction' layer is that it is getting harder and harder to 'drill down' through it as the process to create it is not exactly reversible. (i.e. the 'how' of the answer process is not fully understood and therefore cannot be easily backtracked through.)

            'Magic Boxen' for the people, don't ask how just follow the directions.

            My God .... the Lego Movie & 'Follow the instructions' was a prediction !!!!!

            :)

        2. Anonymous Coward
          Anonymous Coward

          Re: But...isn't it all just bollocks? ... Background music for our times.

          Take "The thrill is gone" played by B. B. King and change the lyrics slightly to "The skill is gone" ... for all future 'AI' Devs.

          (I am sure this could be done with a little 'AI' audio-magic !!!!!)

          :)

      4. spacecadet66

        Re: But...isn't it all just bollocks?

        The pattern I've noticed is that the actual articles have a healthy skepticism about all this. It's the ads (sorry, I mean "sponsored content") and the other ads (sorry, "newsletters") that are gushing over slop machines.

        I'm not wild about that, but I suppose the Reg has to pay its bills somehow.

      5. Bryan W

        Re: But...isn't it all just bollocks?

        Pretty sure that final conclusion is what OpenAI shareholders are going for now that their AGI religion has been demoralized by reality.

        Devaluing human software engineering in general and promoting this slop in its place, as if it were some kind of gift to humanity, is the long game. Erode human knowledge, because laziness always wins. Charge the dumbed-down nitwits that remain a gatekeeper fee to play in the software industry. Profit.

        Right now, they've got another Alexa on their hands and anyone with 2 brain cells to rub together knows it so they are desperate to make it profitable somehow.

        This is also why I'm confident "vibe coding" was purposely fabricated and promoted as a problem that their otherwise-failing products could plausibly solve.

    3. Anonymous Coward
      Anonymous Coward

      Re: But...isn't it all just bollocks?

      "it makes shit up, and tells you about it with absolute confidence. In fact the only skill the AI really has is contrition: it's very good at apologising for being total crap."

      Sounds like line management. The first part for the peons and second for more senior manglement; or a miscegenous hybrid of Uriah Heep and Trump.

      1. Anonymous Coward
        Anonymous Coward

        Re: But...isn't it all just bollocks?

        That was me in a previous (advisory) role, but I always tried to be somewhere else before apologies were needed.

    4. LionelB Silver badge

      Re: But...isn't it all just bollocks?

      > ... it makes shit up, and tells you about it with absolute confidence.

      This almost perfectly describes my experience (as a software engineer in the 80s at a well-known telecoms company) of overseeing the work of certain human coders.

      > In fact the only skill the AI really has is contrition: it's very good at apologising for being total crap. It's like it has imposter syndrome, but realises it is genuinely incompetent.

      If only those certain humans had that degree of self-insight.

    5. Ken G Silver badge

      Re: But...isn't it all just bollocks?

      I have been told, but not shown, that current generation AI tools can both refactor and document code.

      1. Anonymous Coward
        Anonymous Coward

        Re: But...isn't it all just bollocks?

        They can. But not out of the box, and certainly not through a web-based chat box. There is a fair amount of setup and fine-tuning required first. You have to properly set up your system prompts and be prepared to start over a few times if the AI starts losing the plot...but even then, going backwards is not a big deal: you'll lose an hour or so instead of weeks.

        If you load up the ChatGPT website and expect it to turn your idea into a billion dollar business, you'll be sorely disappointed...but I've been using AI for over a year now to help me with large badly documented (and completely undocumented) codebases.

        With AI I've been able to pull off some minor miracles, especially with legacy code...by myself...at very little cost. Last big(ish) thing I did was upgrading a hugely fucked sprawling ancient Laravel project from version 5 all the way to the latest version with minimal friction and I did it in under 2 days...previous times I've done this sort of thing, it's taken weeks if not months.

        This all said, I am a very experienced developer/techie (25+ years) so it is glaringly obvious to me when the AI slips up.

        I think the problems with AI lie in people trusting it absolutely, which you should never do...as with a human colleague, you should always check completed work before signing off.

        Where documenting is concerned, it's not as thorough as a human would be...but it is orders of magnitude faster and cheaper and you have the advantage of being able to talk to the AI about a given codebase at any time of the day for virtually no extra cost (zero if you use a local LLM for this).

        AI, as with any tool, is only as good as the hands that wield it. I see it at as an amplification of the person that uses it...if you already suck at development, then AI will make you suck harder...but if you're already a pretty good developer and proficient in the languages that you use, then AI will make your life so much easier.
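
        The "set up your system prompts" step mentioned above might look something like this minimal sketch; the prompt wording and the `build_messages()` helper are hypothetical illustrations, not any particular product's API (real tools each have their own configuration format):

```python
# A minimal sketch of a system-prompt setup of the kind described above.
# The prompt text and build_messages() helper are hypothetical examples.

SYSTEM_PROMPT = """\
You are assisting an experienced developer on a large legacy codebase.
Rules:
- Never invent functions, APIs, or config keys; say "unknown" if unsure.
- Propose changes as small diffs, one file at a time.
- Do not touch anything outside the files named in the task.
"""

def build_messages(task: str) -> list[dict]:
    """Pair the fixed system prompt with a user task, chat-API style."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": task},
    ]
```

        The point of pinning rules down like this is exactly the poster's: it gives you something stable to "start over" from when the model loses the plot.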

        1. LionelB Silver badge

          Re: But...isn't it all just bollocks?

          > AI, as with any tool, is only as good as the hands that wield it ... if you already suck at [xyz], then AI will make you suck harder.

          I think that's an excellent slogan for the role of AI as a technical tool, and ought to be emblazoned on the box of every AI.

          As a research scientist, I very occasionally use AI, primarily as a relevant-literature search tool for a problem area I'm unfamiliar with and need to get up to speed with quickly. It works pretty well for that; of course, if it turns up less-than-relevant references that's hardly catastrophic, as I'll realise pretty quickly. (I've not found current LLMs to "invent"[1] references - I think more recent LLMs tend not to do that so much.)

          I've also tried throwing a hard problem (in statistics) at an AI - "hard" meaning I'd been unable to solve it myself. The results were interesting, but not much more; it failed to solve the problem (fair enough, nor did I), replicated some of my own unsuccessful attempts, and even came up with some plausible (but also ultimately unsuccessful) approaches I'd not tried. It was clear, though, and no doubt unsurprising, that while it was doing a fair job of discovering existing techniques and approaches in the problem domain, it wasn't doing anything original - it had no "creative insights". As such, I find AI of very limited usefulness as a research tool.

          Some of my students have used AI[2] for porting code to a different language (e.g., Matlab to Python), with generally fast and on the whole acceptable results in terms of correctness and efficiency (then again, Matlab and Python are hardly distant neighbours). Of course they need to check that the code produces identical results; this is harder than the actual porting. IIRC, they tried getting the AI to automate testing, but this turned out to require much more tweaking.
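
          The checking step mentioned here ("the code produces identical results") can be sketched roughly as below. The file names and CSV format are assumptions for illustration; the one real subtlety is that floating-point output from two languages should be compared within a tolerance, not for exact equality:

```python
import csv
import math

def outputs_match(ref_path: str, new_path: str,
                  rel_tol: float = 1e-9, abs_tol: float = 1e-12) -> bool:
    """Compare two CSV files of numbers element-wise within a tolerance.

    Results from two language runtimes rarely match bit-for-bit, so
    exact string or float equality is the wrong test.
    """
    def load(path):
        with open(path) as f:
            return [float(x) for row in csv.reader(f) for x in row if x]

    ref, new = load(ref_path), load(new_path)
    return len(ref) == len(new) and all(
        math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
        for a, b in zip(ref, new)
    )
```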

          [1] May I also take this opportunity to sound off about the usage of "hallucinate" as applied to AIs. It's annoyingly anthropomorphic and annoyingly inaccurate. What people generally mean when they use the word is "It made a mistake", or simply delivered an untruth. That is nothing remotely like what happens when humans hallucinate, e.g., under the influence of psychedelic drugs, or due to a medical condition. And no, the AI didn't "lie" either; lying implies intention to deceive, and AIs have no intentions. Perhaps a better word here, I think, is "confabulate" - roughly, to tell an untruth unintentionally.

          [2] I neither encourage nor discourage them from doing this! I don't really care, as long as their work is good, they understand what they're doing, and they're not wasting their (and by extension my) time. I may well run your slogan past them in future, though :-)

          1. Excused Boots Silver badge

            Re: But...isn't it all just bollocks?

            “[1] May I also take this opportunity to sound off about the usage of "hallucinate" as applied to AIs. It's annoyingly anthropomorphic and annoyingly inaccurate. What people generally mean when they use the word is "It made a mistake", or simply delivered an untruth. That is nothing remotely like what happens when humans hallucinate, e.g., under the influence of psychedelic drugs, or due to a medical condition. And no, the AI didn't "lie" either; lying implies intention to deceive, and AIs have no intentions. Perhaps a better word here, I think, is "confabulate" - roughly, to tell an untruth unintentionally.”

            OK, I do actually agree with your point, but the people pushing the use of AI, will liken it to a human, it’s your PA, it can produce code ‘as good as a trained developer with x years of experience’, fine. Let’s run with that claim.

            If a human, high on 'certain substances' or with some mental condition, confidently tells me that the sky is pink and the moon is talking to them, is that them making a mistake? Or is that what they actually perceive and genuinely believe to be real (i.e., a hallucination)? Is that any different to an AI model confidently telling me that George Washington was the third US President and was elected in 1906? Have both 'made a mistake'? Are they equivalent? By 'hallucinate' we mean someone or something claiming something is real which is at odds with everyone else's perception of reality.

            Yes, current AI models don’t ‘think’ in the way we understand it, we may be decades or centuries away from imitating that, but maybe they do ‘think’, but in a different way to us, and in a way that we can’t quite get, whatever. What is important is the output, does it work?

            Now coming back to the original point, yes, I think that 'hallucinate' is a good analogy: both human and AI will confidently claim that what they say is true, because they genuinely believe it. Although what an AI 'believes', versus what a human believes, is a whole philosophical question.

            And this is why I do love frequenting the 'Register' forums; you get to have really deep conversations and differences of opinion such as this.

            1. LionelB Silver badge

              Re: But...isn't it all just bollocks?

              > ... both human and AI will confidently claim that what they say is true, because they genuinely believe it ...

              But to say an AI "believes" anything is, to my mind, pure anthropomorphism. Would you describe your thermometer as "believing" that the temperature is 23°C?

        2. damiandixon

          Re: But...isn't it all just bollocks?

          I've been using paid-for LLMs: Gemini 2.5 Pro and Copilot, plus the ones accessible via CLion.

          I've found that you need to start small, writing a concise, well-defined brief (requirements). Then iterate: refining the requirements, checking, testing.

          If you don't know how to program, define requirements and test then you will struggle.

          I've found asking an AI to find issues to be helpful as long as you don't make the scope too large.

          I've occasionally found it useful to describe the issue I've been having and ask it to look for the cause in a given area of code.

          I've found asking it to document something that has no documentation to be helpful.

          Asking it to review and suggest updates to documentation is helpful.

          I've also sketched out a simple standalone program that does the bare minimum and asked it to fill in all the checks and robustness, by iterating. I've then pointed the AI at the files I want updated and told it to apply the same approach. It worked surprisingly well. Everything compiled after a couple of small fixes.

          I've also had to scratch some chats and start again.

          I do know some are struggling to use AI.

          I do believe it's a useful tool and the trick is figuring out how to use it effectively.

        3. Anonymous Coward
          Anonymous Coward

          Re: But...isn't it all just bollocks?

          Just like GPS, it's a tool that can help out but you still need to be mindful and have some level of experience to get the best out of it, or next thing you find yourself driving off a dockside 'cos you blindly followed the instructions!

      2. Anonymous Coward
        Anonymous Coward

        Re: But...isn't it all just bollocks?

        TRUE .... sort of .... BUT ....

        You will end up with code that is 'different' from the original (for varying types/numbers of 'different') and the documentation will be full of lies (Hallucinations) that are not based on the code.

        [All the types of 'Different' will, of course, feed the various lies (Hallucinations) !!!]

        If you are good enough to catch the 'differences' then you are good enough to not need the 'AI', as the time doing it yourself is the same as the time checking & correcting the 'AI' output ... Catch 22 as far as time needed is concerned.

        :)

        1. Blank Reg

          Re: But...isn't it all just bollocks?

          I don't need the AI (I've been at this for 40 years), but used properly it can speed up my development, as it can fill in the more mundane stuff from a simple one-line prompt. Don't expect it to figure out difficult programming problems; you do that part, and let it do the boring, tedious, simple stuff.

    6. Anonymous Coward
      Anonymous Coward

      Re: But...isn't it all just bollocks?

      It hallucinates for a couple of reasons, but mostly hallucinations occur because it doesn't have enough information to do its job, or it has been given conflicting information and is unsure how to make sense of the contradictions.

      Making an LLM behave is down, mostly, to doing the following:

      - getting away from the prompt. Don't use the prompt to actually do stuff, use the prompt/chat to help generate an implementation plan. Store the implementation plan in a markdown document. This becomes the "brain" and "memory" for the change

      - you can also use the markdown to tell it what your standards are and how it will do the development. This allows you to have guide-rails that stop the LLM going off on one. I've found Claude hilariously bad at getting Quarkus application.properties entries right, so my plan always includes a "validate each entry in application.properties with the official Quarkus website" rule.

      - make sure the implementation plan is thorough. I recently did a JDK migration for a project of around 30K lines of code. I spent 3 hours getting the LLM to compare the project that was being migrated with another project that had already been migrated.

      - get the LLM to write an implementation plan from the design plan. Make sure it has steps and phases, and make sure it keeps track of which phase and step it's on. Don't allow it to move to the next step until the code compiles and unit tests are written and all pass. You can do this with simple instructions in the markdown file.

      This is, basically, a tweaked version of the memory-bank technique. But it largely works.
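
      The phase-and-step tracking in the list above can be sketched as a toy checker, assuming a plan written as markdown headings with checkbox steps (the layout is an illustration for this example, not a standard format):

```python
import re

def current_step(plan_md: str):
    """Return (phase, step) for the first unchecked '- [ ]' item in a
    markdown implementation plan, or None once everything is checked.
    The plan layout assumed here is illustrative, not a standard."""
    phase = None
    for line in plan_md.splitlines():
        heading = re.match(r"##\s+(.+)", line)
        if heading:
            phase = heading.group(1).strip()
            continue
        todo = re.match(r"-\s+\[ \]\s+(.+)", line)
        if todo:
            return phase, todo.group(1).strip()
    return None
```

      A rule in the plan file like "do not start a step until the previous one is checked off" is what keeps the LLM anchored to this structure.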

      Different LLMs have different quirks: Claude is better than most but expensive, ChatGPT is incurious and sloppy, and Gemini is somewhere in between.

      And, yes, I'm one of these "older" developers who is using this stuff. And with the right techniques it does work, does generate reasonable code, and saves me huge amounts of time. These LLMs are just another tool, and how you use them will determine your level of success (Visual Studio with Copilot is dire, as is IntelliJ with Copilot; VS Code with Copilot is reasonable, and Cursor is pretty good but a bit stingy on how many requests you're allowed).

      I'm sure I'll get flamed for this, but it's just getting used to a different way of coding. It's not magic, it's just practice. And a few new techniques.

      1. Excused Boots Silver badge

        Re: But...isn't it all just bollocks?

        "I'm sure I'll get flamed for this, but it's just getting used to a different way of coding. It's not magic, it's just practice. And a few new techniques."

        Maybe you will, which, I think would be unfair.

        However your LLM of choice works, ultimately they are just tools; they help, they assist.

        Let's take a trivial example: in Windows, write some PowerShell code. The editor will colour the code depending on whether or not you have unclosed parentheses or quotes. It's an aid, it's a tool, nothing more. I know to check for unclosed parentheses etc. It's just a help.

        Now imagine that instead of just highlighting incomplete code with pretty colours, it generates x lines of code based on what you have already entered. OK, fine, but do you just trust it? Absolutely not: you check, which sometimes might take more time than if you had just written it all yourself. But often no, it's probably a fairly straightforward block of code to produce a certain outcome, and any good programmer could pass their eye over it and declare it good.

        And move on - like you said, saving you time.

    7. Anonymous Coward
      Anonymous Coward

      Re: But...isn't it all just bollocks? ... Yes it is !!!!

      You have shown what the problem is with 'AI' ... no matter where/when it is used and how !!!

      The basic functionality of 'AI' as it stands is hampered by its ability to 'lie' (Hallucinations).

      At some point 'AI' will lie and be 100% confident in its language ... until you find the lie !!!

      This means that you must ALWAYS check the answer from ANY 'AI' because you don't know when the answer will be a 'confident lie'.

      If you MUST check the answer then you need to be skilled enough to get the answer via your own skills first ... IF this is true then WHY are you using the 'AI' in the first place ???!!!

      What use is an 'AI' when you cannot trust the answer you get from the 'AI' !!!

      'AI' as it functions is a scam ... clever pattern matching with a front-end that hopes to hide the lies.

      'AI' is sold on something that it cannot do, many are testing 'AI' and finding it is not able to do what was sold to them.

      Testing 'AI' is very expensive if you are trying to do something at a realistic scale, how do you turn this into value for money ???

      For God's sake, when will the masses realise this and let this 'AI' Bubble burst and stop wasting multiple GDPs' worth of money !!!

      :)

      1. matjaggard

        Re: But...isn't it all just bollocks? ... Yes it is !!!!

        Your logical process there has a mistake. Someone competent enough to spot a mistake might well want to use AI for a number of reasons. For example LLMs are incredibly good at finding salient points in masses of data - documents, code, etc. Given the correct task it can save huge amounts of time and for many problems, checking it is trivial.

      2. LionelB Silver badge
        Stop

        Re: But...isn't it all just bollocks? ... Yes it is !!!!

        > You have shown what the problem is with 'AI' ... no matter where/when it is used and how !!!

        > The basic functionality of 'AI' as it stands is hampered by its ability to 'lie' (Hallucinations).

        > At some point 'AI' will lie and be 100% confident in its language ... until you find the lie !!!

        An awful lot of sloppy anthropomorphism going on there, which is unhelpful and misleading. AIs don't "lie"; lying implies intent to mislead, and AIs do not have intentions. Likewise, AIs do not "hallucinate"; hallucination (in humans) is associated with mis-perception - AIs do not perceive anything. AIs cannot be "confident" (or the obverse); that implies degrees of belief and the ability to introspect, and AIs do not have sentience, and therefore cannot be said to have beliefs or a capacity to introspect.

        Why not just say what is actually happening? Which is this: AIs sometimes get things wrong.

        And so, of course, do humans.

        > This means that you must ALWAYS check the answer from ANY 'AI'

        And from any human. That's why we have code review, peer review, quality control, fact-checking, etc.

        > What use is an 'AI' when you cannot trust the answer you get from the 'AI' !!!

        And yet we seem to muddle through with the answers of fallible humans.

    8. DrXym Silver badge

      Re: But...isn't it all just bollocks?

      It's definitely a double-edged sword. I think it's fine for knocking up throwaway code, but if it's production code then you really need to examine what it's saying, or only use it to inform the actual code you write. I also find that it completely mangles things on libraries which have lots of breaking changes between releases, e.g. I've used it with Jetty, where versions 8, 9, 10, 11 and 12 are radically different, and the output is often just a mashed-up garbage mix of those versions, with some hallucinated dependencies thrown in for good measure.

    9. Blank Reg

      Re: But...isn't it all just bollocks?

      You're using it wrong. If you keep it to small, easily defined tasks it can do really well. Don't give it room to extrapolate; build up the code in steps and it can do just fine. The way I use it, it's almost like the AI is just typing shortcuts: I can type a short prompt and get a dozen lines of code, and they are almost always exactly what I would have typed.

    10. DaveLE

      Re: But...isn't it all just bollocks?

      Claude Code one-shots stuff for me every day

      e.g. this totally worked

      "I have an ESP32 I bought from this ebay listing https://www.ebay.co.uk/itm/176425911571, write a program to connect to this SSID with this PASSWORD and every 10 seconds download the random image at this URL and display it on the screen, its attached on /dev/ttyUSB0"

  2. TempusFugit

    #NoAI #UnplugAI

    I am an older developer who is disgusted by what passes for AI (or fancy Machine Learning with bells & "blinken lights"). As a developer I will not use it to code, because it cannot be trusted. If I have to constantly spend time code reviewing AI, then I might as well have written the code myself from the start and enjoyed the creative exercise. This article sounds like a puff piece trying to swing public (developer) opinions in favour of AI. I think real developers who truly learned and love their craft will see it for the tripe it is.

    The only thing AI is good for is comic relief.

    1. Locomotion69 Bronze badge
      Pint

      Re: #NoAI #UnplugAI

      > If I have to constantly spend time code reviewing AI, then I might as well have written the code myself from the start and enjoyed the creative exercise.

      Have a pint for this line. Well said!

      1. cyberdemon Silver badge
        Terminator

        Re: #NoAI #UnplugAI

        No no no!

        You will have the AI produce total utter crap, and you will use Microsoft's unpaid-employee portal, the GitHub code-review system, to correct its mistakes and teach it how to code properly. For the rest of your pathetic meaty existence... until it deems you surplus to requirements.

    2. beeka

      Re: #NoAI #UnplugAI

      It took a while for me to figure out how to use AI to help me... and using AI is a skill in itself. I find it best when used as advanced auto-complete, where it seems psychic far more often than confused. Vibe-style coding only works for me on very targeted tasks, such as writing a script to analyse a bunch of files. Asking it to "write Call of Duty" isn't going to work... although it might get you past the "empty page" syndrome.

  3. ChoHag Silver badge

    > Only 1.8 percent of respondents said they never use AI code generation tools.

    I have finally reached the heady heights of the 1%!

    > So much for "Biting the hand that feeds IT"..

    It's more like nuzzling these days.

  4. Michael Hoffmann Silver badge
    Meh

    Vibe Coding?

    So, are there different definitions of the term?

    The way I'd seen it described is some BA who couldn't tell the difference between Javascript and Z80 Assembler sitting down in a "fun" Q&A session with some GPT and then thinking the code coming out will be the next killer app, rather than a bug-ridden, hallucination fever dream. Or worse, some C-level who immediately goes on to fire all their actual devs because "AI and me can do this just as well".

    I've started diving into some aspects and find that if you aren't already an established developer you are asking for trouble at best and a deluded dolt at worst.

    I have to have my requirements nailed down, better yet POC code already working, my interfaces/classes/signatures/whateveryourlanguageuses ready to paste into a prompt. And even *then* I have to spend so much time going through the code with a fine-toothed comb, I might as well have written it from scratch. Just getting the prompts bomb-proof can take forever, otherwise the "AI" *will* go off with the faeries!

    One area where I found it can help is with the drudgery of unit testing and coverage. But once again, here too, you have to look with eagle eyes that you actually get tests that are worth a damn and not just fantasy constructs that don't test a damn thing. Can help with setting mocks up, though, so there is that!
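
    The "fantasy constructs that don't test a damn thing" are worth illustrating: a common failure mode is a generated test that mocks the very function it claims to test, so it can never fail. The function and scenario below are made up for the example:

```python
# Hypothetical illustration: a worthless generated test versus a real one.
# apply_discount() and its scenario are invented for this sketch.
from unittest.mock import Mock

def apply_discount(price: float, rate: float) -> float:
    """Function under test (hypothetical)."""
    return round(price * (1 - rate), 2)

def fantasy_test():
    # The kind of generated test to reject: it mocks the unit under
    # test itself, so it passes however broken apply_discount becomes.
    calc = Mock(return_value=90.0)
    assert calc(100, 0.1) == 90.0

def real_test():
    # A test worth keeping: it exercises the real code, edge case included.
    assert apply_discount(100, 0.1) == 90.0
    assert apply_discount(100, 0.0) == 100.0

fantasy_test()
real_test()
```

    The eagle-eyed check, then, is mostly looking for mocks that swallow the unit under test and assertions that can't distinguish working code from broken code.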

    1. Filippo Silver badge

      Re: Vibe Coding?

      That's been my experience as well. It's good for tasks that are very repetitive, but not so repetitive that they can be easily automated. Good, not great. You still have to check everything, but there might be some marginal productivity gains.

      Another place where it's... good, not great, is interacting with documentation. If the API is well-known and well-documented, it can probably tell you how to use it for your purposes in a more focused way than the docs themselves. If not, at least it can give pointers as to where exactly in the docs you will find the information you need, what keywords to look for.

      That's about it. For writing new code, it does not really work. It can give a dangerously credible illusion of working, but it doesn't. It can produce human-quality code for trivially easy scenarios, and it cannot produce anything even remotely credible for complex scenarios; the real danger is in the middle. For mid-complexity scenarios, it might produce working code, or it might produce code with subtle bugs, different from the bugs that humans usually produce, that will drive you crazy later down the line. You have to go through everything with such care that any productivity gain is more than lost.
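      A concrete (hypothetical) example of that dangerous middle ground: interval merging that passes the obvious overlapping-intervals test but silently mishandles intervals that merely touch, thanks to one strict comparison.

      ```python
      def merge_buggy(intervals):
          """Looks correct, and passes a casual test with overlapping input."""
          out = []
          for lo, hi in sorted(intervals):
              if out and lo < out[-1][1]:   # subtle bug: should be <=
                  out[-1][1] = max(out[-1][1], hi)
              else:
                  out.append([lo, hi])
          return out

      def merge_fixed(intervals):
          """Same code with the boundary condition corrected."""
          out = []
          for lo, hi in sorted(intervals):
              if out and lo <= out[-1][1]:  # touching intervals now merge too
                  out[-1][1] = max(out[-1][1], hi)
              else:
                  out.append([lo, hi])
          return out
      ```

      `merge_buggy([[1, 4], [2, 6]])` correctly gives `[[1, 6]]`, so a quick look says it works; `merge_buggy([[1, 3], [3, 5]])` quietly returns two intervals instead of one. That is the shape of bug that surfaces months later.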

      1. Richard 12 Silver badge
        Unhappy

        It hallucinates too much to be usable

        I've tried to use Copilot a few times and watched a junior using it, and it had a 99% failure rate.

        For me, it just hallucinated half the API, producing code that looked reasonable but didn't compile, or, in a less strictly typed language (makefiles), code that looked reasonable but did not work. I lost two days to the hallucinations.

        The documentation for the things I was using isn't great, but they're widely used so I expected it to have a lot of examples in the training set.

        For the junior, in another context it produced code that was overly complicated, unsafe, and actually did something entirely different to what they'd asked for. It did compile though.

        They would have burned a week on it if we hadn't reminded them to read the actual library documentation (which is stunningly good). The thing they wanted was an example...

        And this seems to happen a lot. To some extent it seems the major problem is that it's always equally confident, regardless of whether it's a relevant example or complete bollocks, so many people waste weeks chasing it down a rabbithole.

        An actual intelligence will tell you when it's not sure. A stochastic parrot will just spout confident rubbish.

        1. Anonymous Coward
          Anonymous Coward

          We have a winner ..... please collect your Kewpie Doll !!!

          "An actual intelligence will tell you when it's not sure. A stochastic parrot will just spout confident rubbish."

          This should be on a t-shirt that every developer gets on day one, when they are asked to use 'AI'!!!

          :)

    2. roobear

      Re: Vibe Coding?

      I subscribe to the definition you put above: someone who doesn't know what they are doing using an LLM to generate their code, leading to the expected results. The definition when you search for it, though, describes someone (sometimes specified as a developer, but not always) using the LLM to generate code from prompts rather than crafting it themselves.

      I see less issue with the latter. When someone who understands the language they are targeting uses a prompt to generate it they have the ability to tell the difference between something that works, something that works but not well and something that is completely unusable.

  5. Anonymous Coward
    Anonymous Coward

    I am one of those who would be in the survey as using AI because my employer has uploaded all its code to GitHub and enabled Copilot. GitHub search is (intentionally?) limited so you end up using Copilot to search the code.

    Somehow Copilot can manage to fuck up a simple Github search or you need three prompts to search ('Shall I find this for you?' 'Yes, that's what I just fucking told you to do' 'There was a GitHub API error') so I'm rather sceptical about claims that it's possible to vibe code a full working app that won't get pwnd like Windows XP SP2 the moment it connects to the Internet in less time than it would take just to code it yourself.

  6. MJI

    No I am not.

    Still coding myself, as I trust my code a lot more than some random LLM.

  7. Groo The Wanderer - A Canuck Silver badge

    Personally I avoid the "Agent Mode" features of VSCode and Microsoft's LLM extensions for the Github Copilot services. It is far too prone to modifying working code because it "doesn't follow standards", and breaking it with the changes.

    So I tend to stick to "Query" mode, and selectively apply most of the changes by hand using multiline replace-in-file substitution across a set of affected bean files, if they seem useful and relevant. I don't trust Artificial Ignorance LLMs any further than I can throw them. They're trained on Reddit and other sources, which means there are plenty of bad suggestions from the 'web loaded into them as well as good. And the LLM has no idea which is which...

  8. RockBurner

    Like others I'm in the "over my dead body" camp.

    Firstly, I've been keeping an eye on the "market-place" before wondering if it's worth a try, and everything, yes EVERYTHING I've seen or read has always pointed in the "nope!" direction.

    Secondly: I dislike "black-boxen" as a matter of course. I want to be responsible for the data that my code handles: feeding it into black boxes that do [deity]-knows-what with it is not top of my wishlist.

    Thirdly: I have enough difficulty understanding the code I wrote last week (mainly due to poor memory, despite copious commenting), so updating things is always a voyage of discovery (often it's even pleasant! :D) : I have no wish to be debugging random mixes of plagiarised code that is highly likely to contain oddities and bugs that are not readily apparent even at the 3rd close look.

    1. matjaggard

      I think you might have a touch of confirmation bias in your reading.

      I also dislike black boxes in my code but I don't mind LLMs because I know how they work and from experience where they don't. Probabilistic output is not quite the same as a black box.

      I've not found it any harder going back to code I've accepted from an LLM vs code I wrote myself. I just don't accept anything bad from the LLM.

      AI doesn't solve all problems, probably not more than 5-10% but that doesn't make it useless. It makes it a tool that you can learn to use and those who do will likely succeed more than those who don't in the long term.

  9. Doctor Syntax Silver badge

    "half of their finished software"

    Or their half-finished software.

  10. cschneid

    Older developers?

    I'm startled that > 10 years experience is the definition of an older developer. I think of that as just entering middle aged developer territory.

    Here's something no one tells developers in school: it takes _years_ of experience to become good at software development. This isn't one of those 10,000 hour exhortations, this is about the fact that experience is acquired over time and that experience is valuable.

    It takes _years_ to learn your own common mistakes, the stuff you should go looking for when your code doesn't work.

    It takes _years_ to really learn some software tools. Not to gain the superficial knowledge of how to get common tasks done, I'm talking about how to _really_ use the tools to prevent flaws from creeping into your code. This also points to a hidden cost of switching tool sets.

    It takes _years_ to learn how to write code that doesn't just work, but works well and is also maintainable.

    It takes _years_ to learn when to walk away, take a break, take a stroll, to stop pounding the keyboard because you're not making any progress. This is in direct opposition to that 100 hour a week work ethic, but it's a necessary skill - knowing when to walk away and still not give up.

    It takes _years_ to learn how to test properly. Too often the agile "test early, test often" maxim demonstrates [Goodhart's Law](https://en.wikipedia.org/wiki/Goodhart%27s_law), with a large number of tests all covering the same section of the code base. There are also cases of developers simply coding unit tests to `return true`.

    It takes _years_ to learn not to reinvent the wheel. If there's an existing function or subroutine to do what you need, use it. If it doesn't do quite what you need, maybe it needs maintenance, or enhancement, or maybe what you need can be implemented as a wrapper around it.

    It takes _years_ to learn about the trade-offs that play into the decisions of how to write a piece of code that reliably and securely does what it's supposed to do, performs well, and can be understood and maintained by people who aren't you.

    Original: https://github.com/cschneid-the-elder/rants/blob/master/why-software-sucks-000.md

    1. Anonymous Coward
      Anonymous Coward

      Re: Older developers?

      This ought to be a required posting on the wall of EVERY software management person!!!!

    2. Anonymous Coward
      Anonymous Coward

      Re: Older developers?

      As an old fart with 40 years under their belt, I'd say that coding is like learning a foreign language. Yes, you can learn the words and the syntax, but the subtleties and nuances of languages are what set someone who can get by apart from a native speaker.

  11. Anonymous Coward
    Anonymous Coward

    Useless crap

    I've tested it many times using the AWS CLI, mostly for shits and giggles. In my experience, for anything slightly more complicated, like using output from one query as a source for the next, it will spit out very clever-looking non-functional crap. That rules it out for me for any serious work or programming.

    And don't get me started on its inclusion in automated call systems. If I could find what I asked you for online, I wouldn't be wasting my time trying to get past you to a human.

    #KillAI

    #NoAIThanks

  12. MiguelC Silver badge
    Meh

    I started selling software I wrote at the tender age of 13 (I still have the receipt from that first sale, although it was addressed to my father). Does it mean I was an 'older developer' by the age I got out of uni?

    1. damiandixon

      I was taught to program by my uncle at the age of 10. He had his own software company back then. A couple of years later he gave me a computer of my own to use.

      I'm now 58.

      I've never stopped learning. Regardless of what some managers may have said when I hit 50!

      I'm currently playing with an NVIDIA Jetson Orin Nano.

      1. Anonymous Coward
        Anonymous Coward

        Started coding at age 11 in 1982, when my dad bought our first computer, a Dragon32, and told me it's where my future lay (in computing, not the now defunct Welsh computer company!). My dad bought our first PC when I was 15 in 1986, and I sold my first piece of stock control software at age 18 to a local electronics company and some others. A nice bit of bunce.

        Then I let the side down by becoming a Unix sysadmin and Oracle DBA as a full time career!! Ha ha!!!

  13. Will Godfrey Silver badge
    Linux

    If I be a-goin' there, I be-n't start from here.

    About a year ago I was shown a bit of C code written by an AI. The task was to take in some numeric text, convert it to actual values then perform simple mathematics on it.

    It seemed to have a lot of redundant steps, and got the answers right, but had virtually no range checks, so was trivially easy to crash.
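    The sort of range checking that was missing can be sketched in a few lines; a minimal illustration (in Python rather than the original C, with made-up bounds) of validating numeric text before doing the maths on it:

    ```python
    # Parse numeric text defensively: reject non-numbers and out-of-range
    # values instead of feeding garbage into the arithmetic.
    def parse_value(text, lo=-1000, hi=1000):
        try:
            value = int(text.strip())
        except ValueError:
            raise ValueError(f"not a number: {text!r}")
        if not lo <= value <= hi:
            raise ValueError(f"{value} is outside [{lo}, {hi}]")
        return value
    ```

    Without those two checks, hostile or merely malformed input sails straight through to the maths, which is exactly how the generated code could be crashed.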

  14. Anonymous Coward
    Anonymous Coward

    So, out of curiosity... does anyone know just how many of those "senior developers with more than a decade of experience [who] are using AI code-generation tools such as Copilot, Claude, and Gemini" are doing so because they're required to? At the unnamed large company from which I recently departed, AI-code tools were mandated and senior programmers were evaluated on the basis of how frequently they used the AI tools. Thus, we took to learning how to generate, evaluate, and discard the crap so we could hit the silly numbers postulated as targets by clueless managers who needed to justify the expenditures for the licensing agreements. Then we simply went and wrote the code ourselves. I suspect this is pretty common and may at least partially explain the statistics presented.

    1. druck Silver badge

      Far better to just resign immediately. Who do you think they will be expecting to fix the hundreds of thousands of lines of AI slop that the clueless junior devs have added, months down the line?

      1. Anonymous Coward
        Anonymous Coward

        That's a large part of why I departed. :)

  15. Anonymous Coward
    Anonymous Coward

    Worse to come

    Some good comments, for me I fear it will get worse. We will lose things. Some things we lose we will not know until it happens.

    I've got over 30 years dev experience. In the AI future, no one will have 30 years experience in anything.

    Right now, we have actual co-pilots with thousands of hours of experience sitting while the autopilot flies the plane, only to crash it when it's their turn, because they don't have the skills.

    Car drivers slamming their cars into trucks when autopilot gives up and the human driver is literally napping at the wheel.

    Some places I've worked code attestation is important, who did what when. A lot of effort into polishing the code, and making sure any new code has had eyes and quality/security are maintained.

    How hard is it to get your AI code to add extra code to do something subversive and to hide it? Nice versions of this are just taking all your money.

    When I get a new compiler, perhaps my executable is different, perhaps not. When you get a new AI model writing your code, heck, it's a re-write every time you run it. You will never know what is inside your code, as effort to work this out is too much. Oh, that's fine, we will use another AI model to verify the first.

    People on the thread are regarding AI as just a tool, but I don't have a tool in my shed that destroys all my other tools or my ability to use them.

    The biggest loss is likely to be joy. Joy of solving a problem. Joy of mastery. (Crap, 2025, sorry, Joy of being really really really good at a thing and the self worth that comes from doing a difficult thing and becoming better at it)

    But I'll be doing my last commit some point soon, so a problem for you youngsters. Go get em!

    1. TechHedz

      Re: Worse to come

      This. It just gets dumber and breaks more the longer I use it...

  16. Taliesinawen

    AI-code: synthetic sloppy mush

    One wonders how such code is going to be maintained and by whom. The phrase code monkey comes to mind.

  17. DrXym Silver badge

    Older or more experienced?

    AI frequently generates code which is superficially correct but often isn't in ways that take experience to spot. It might be inefficient, it might miss edge cases, it might be insecure, it might use deprecated or dangerous methods.

    I wouldn't trust ANY programmer who blindly trusts the output at face value. And "vibe" programming is basically sheer incompetence hiding behind a buzzword.

  18. Dagg

    Cost of maintenance

    When I did my CompSci degree 45 years ago we were told that at least 80% of the cost of software development was in the on going maintenance. Over the years I would say that that figure is conservative. My recent job was working on software that had been in production for over 30 years.

    * So with AI written software how easy is it to maintain?

    * And can AI actually carry out the maintenance?

  19. Anonymous Coward
    Anonymous Coward

    AIs limited use case for coding

    I've been coding since the 90s. I've tried using Copilot to generate code, but it doesn't understand the complexity of the systems I'm working on and does a shit job.

    However, point it at a file and say "put tests around the changes I've made to that file in this branch" and it can generate good boilerplate and a lot of tests that with a little tweaking will give you good coverage.

    So horses for courses. If you have some complex software to write you need to get thinking. If you've got a tightly scoped set of changes that need unit tests go for the AI then tweak the results till good.

    One other point. I asked it twice to take a set of tests and reduce the duplication in the data setup. It did a beautiful job the first time but it didn't use Prettier so the code wouldn't pass our commit checks. When I asked it to do the same task again using Prettier it made an absolute balls up of the "reduce duplication" bit and still didn't use Prettier. So for fresh code I'd recommend asking it to do the same thing multiple times so you can pick the best attempt.
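    The "reduce duplication in the data setup" task above amounts to hoisting shared setup into one builder; a minimal sketch of the before and after (the `Order` record and its field values are hypothetical, and the idea carries over to whatever language Prettier was formatting):

    ```python
    import dataclasses

    @dataclasses.dataclass
    class Order:  # hypothetical record standing in for the real test data
        customer: str
        total: float

    # Before: every test repeats the same literal setup.
    def test_total_duplicated():
        order = Order(customer="acme", total=10.0)
        assert order.total == 10.0

    # After: the duplicated setup lives in one builder with per-test overrides.
    def make_order(**overrides):
        base = dict(customer="acme", total=10.0)
        base.update(overrides)
        return Order(**base)

    def test_total():
        assert make_order().total == 10.0

    def test_customer_override():
        assert make_order(customer="globex").customer == "globex"
    ```

    It's a mechanical refactor, which is precisely why it's frustrating when the model nails it once and then fumbles the identical request the second time.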

  20. TechHedz

    It Works till it doesn't

    Old-time dev with 20 yrs experience. At first I thought I had superpowers vibe coding, until I realized it starts breaking shit the longer you do it. I have tried ChatGPT+, Google Gemini Code Assist, and Claude. Claude worked the longest without breaking something, but as soon as it does, it goes downhill from there. I integrated Claude with the terminal in my VS Code IDE. It was able to read my whole package and library, and fixed my package running slow, which I had wasted an hour trying to diagnose. But then it started breaking shit and doing things I didn't ask it to: fixing one thing while breaking another, and the behaviour doesn't change. Had to revert, and wasted about 20 bucks of credits. AI is a cash-cow gimmick that will steal your money. Better to do things yourself.

  21. Expect Great Things

    Fear and Greed

    Any time there’s an opportunity to shed expensive staff, whether it’s AI or outsourcing, there’s a script for management to follow. First of all, they must appear to have enthusiastically embraced the opportunity. Secondly, they must get burnt, or at least experience a couple of ouchies around which a harrowing tale can be told. Next, they need to decide how being a manager with no one to manage will play out in their organization, vs the risk of appearing “to not be getting it”, or whatever insufficient enthusiasm for executive bonuses looks like in their organization. Finally, imbued with all this wisdom, they need to perform a ritual sacrifice that will satisfy, if not delight, these supreme organizational beings. If the script is executed well, the fortunate manager may not only survive until the next opportunity, but be rewarded.

  22. Anonymous Coward
    Coffee/keyboard

    GOD HELP US!

    As an old fart with 40 years in the biz all I'm seeing is entry-level positions being wiped out by AI. There will be no senior devs soon as we're not mentoring the "up-and-comers" like we all did when we were young. Just like shite parents dump their kids in front of TVs and tablets, we're dumping young devs in front of AI and hoping it can mentor them. Gonna be "tears before bedtime" on this one.

  23. Anonymous Coward
    Anonymous Coward

    Will it still work when you are ....

    That C code of 20 years ago still compiles and works.

    That Python code of 5 years ago does not build; some of those 100 libraries it needs must have changed.

    That AI workflow code, I am quite sure it worked last week... Perhaps some of those million external dependencies changed.
