Vibe coding: What is it good for? Absolutely nothing (Sorry, Linus)

It is a truth universally acknowledged that a singular project possessed of prospects is in want of a team. That team has to be built from good developers with experience, judgement, analytical and logical skills, and strong interpersonal communication. Where AI coding fits in remains strongly contentious. Opinion on vibe coding in …

  1. Dinanziame Silver badge
    Happy

    Both of these overlook how much of learning and coding is about motivation, reward, comprehension, understanding the future, and most importantly doing all this amid other people.

    I'm pretty sure a lot of coders would insist that an important part of coding is doing it away from other people. Now discuss working from home vs returning to the office.

    1. Doctor Syntax Silver badge

      "I'm pretty sure a lot of coders would insist that an important part of coding is doing it away from other people."

      It seems to have worked out pretty well for the Linux kernel and a lot of other FOSS projects.

      There's a line of argument to suggest that "doing all this amid other people" hasn't worked out too well for some commercial development.

    2. Anonymous Coward
      Anonymous Coward

      AI is inherently Dunning-Kruger promoting in its users.

    3. Simon Harris Silver badge

      "doing it away from other people."

      I do a lot of coding when it's quiet (mostly because when other people are around they want me to do other things!), but I do rely on other real people on Stack Overflow and other platform specific forums for help when I hit an intractable problem.

  2. werdsmith Silver badge

    It's a useful coding assistant for a less experienced developer. It can help them along, but it can't safely do their work.

    It's not so useful for an experienced old hack who has explored virtually every algorithm, every pattern and thoroughly knows the way around the library landscape. However, it can relieve some of their mentoring workload.

    I believe this because I am the former and I work with the latter.

    1. zimzam Silver badge

      I don't think that's really true either. If it can give you different code with the same prompt, it can certainly give you different assistance with the same prompt. All of the same problems exist.

    2. Joe W Silver badge

      Yeah, I agree with the latter, especially the point about reducing the mental workload. One of my colleagues, a real greybeard, FOSS developer and wizard with anything computer-related, used some vibe coding for one of his personal side projects. But then he is so experienced he can spot most of the BS the model would give him and correct it. So it is useful to him, and he even said so. Heck, he even acknowledges it as a possibly useful tool. And while I don't want to agree with him, personally disliking the idea of "vibe coding", I have respect for him, and his thoughts should always be heard and taken into consideration.

      1. steviesteveo

        > But then he is so experienced he can spot most BS the model would give him and correct that.

        That is ultimately the killer. It's an umbrella for when it's not raining

    3. Roland6 Silver badge

      > It's a useful coding assistant for a less experienced developer. It can help them along, but it can't safely do their work.

      That defies logic.

      Let’s do an experiment: form a programming team and put the least experienced developer, i.e. the newbie straight out of uni, in charge. Now put the newbie in a separate room from the team, permit them to communicate only via “chat”, and require that the team only respond to the newbie’s instructions and never ask for clarification. Now set the newbie a suitable programming task…

      1. alcachofas

        What on earth does your bizarre experiment have to do with what the other poster suggested?

        All they’re saying is the newbie might get some tips and ideas for how to solve their problem, where they’d previously had to read a lot of docs or consult someone else for advice.

        We can argue about whether that’s good or not - imo reading docs and asking for help are both good skills to nurture - but I’ve got no idea what you think your experiment demonstrates

        1. Roland6 Silver badge

          > What on earth does your bizarre experiment have to do with what the other poster suggested?

          Not bizarre, just putting the use of vibe coding through the “black box” test for AI. I’ve used a programmer with limited experience to illustrate just how bad the idea is, from both an achieving-the-objective viewpoint and a learning viewpoint.

          The user doesn’t learn what you think they learn about programming. The learning method is very inefficient, being largely trial-and-error.

          Basically, it is worth thinking of AI as a remote team of experts with a very limited communication channel - description in, finished code out - to see just how dumb the idea is.

          1. Anonymous Coward
            Anonymous Coward

            It's not worth thinking of AI that way, because AI does not work that way. There's no introspection, no theory of mind, no mirror neurons, no empathy, no understanding of true vs. false. Even "good vs bad code" is just a probability path somewhere.

            There are no 'programming experts in a room', it's a purely probabilistic system that, with enough training, manages to sound correct when asked questions. There is no "thinking like a person" involved. It might be a very well trained GPU, but it's still a GPU.

    4. sarusa Silver badge
      Devil

      Great for incompetents, meh for anyone else

      Yeah my experience has been that it does actually improve the apparent productivity of complete incompetents. They get SOMETHING, which is better than they would have been able to do on their own. Of course it's a security nightmare, they can't maintain it, they can't make changes to it, they have no idea what it's actually doing. But if you're actually competent it just slows you down unless you want it to write some tiny tool (like parse this text to that text) in a language you're not super familiar with and you're willing to trust it - at least if you're competent you can do a quick scan of the code first. And if you try asking an LLM WHY this code is in here, it's hilariously wrong and inconsistent. Ask it twice, get two different answers.

      Actually, this seems to be LLMs all around - great for incompetents (because you do get something, that often works in this one test case), bad for competent people because you have to check it, fix it, and harden it - might as well just do it yourself. And who's doing all the pushing for LLMs in the workplace? Oh riiiight, the C-levels and upper managers, aka the incompetents! For them it's a magical box that actually lets them 'do' anything besides expense meals, call meetings, and wander around destroying productivity. So they think 'Wow! If this works for a genius like me, it will really help the peons!' And they're too Dunning-Kruger to realize the problems.

      1. Anonymous Coward
        Anonymous Coward

        Re: Great for incompetents, meh for anyone else

        Worse is vibe coding people saying things like "I created a journaling app for iPhone/Android in one day using vibe coding and no experience".

        Except that any programmer worth hiring should also be able to do the same thing in a day with no AI.

        In fact, for interviewing these days it would be a lot easier to just test people's ability to create such an app in a language they haven't used before, without AI, instead of using coding quizzes, HackerRank or LeetCode.

        You'll get the list of suitable hires in 10 minutes.

        1. doublelayer Silver badge

          Re: Great for incompetents, meh for anyone else

          That really depends on what that "journaling app" can do, either the one from their claim or the one you suggest for an interview. Such an app almost certainly requires no innovative thinking, because it has to record and store user inputs and allow the user to navigate through that input. And yet there are two aspects which mean the problem isn't as simple as that.

          First, thinking about the interview, it's not simple to implement that in a language and possibly for a platform you're not familiar with. What counts as good enough interfaces for this? Can I give the users a CLI window and just say "Type your journal entry here"? How about a web form with a big textarea field? If this is a mobile app, likely neither of those is acceptable which means that I need to know how to invoke the system's typical text entry and editing capabilities, which is, for someone who has written something for that platform before, a simple memory task and for anyone else an opportunity to learn the API reference and make some mistakes that everyone makes the first time. That's not going to happen in ten minutes.

          And either way, a successful journaling app will have a lot more features than that. It may have lots of navigation functions to find specific entries. Maybe a search box that does more than grep each entry for the string. Maybe there are tags, or attachments of photos or recordings, or an algorithm to automatically link related entries. Even if this app's design theme is complete simplicity and this is just supposed to emulate a paper journal, there is likely plenty of design work needed to make this comfortable for the users who intend to use it. To say nothing of whether that data is supposed to be exportable and in what format, probably not a concatenated text file of all the entries. How well anyone claiming to have built such an app has actually done depends on whether any of these things were implemented when specified, and, if I asked this question in an interview, I'd expect a competent person I wanted to hire to spend ten minutes or more on these kinds of design questions before trying to write a single line.

    5. O'Reg Inalsin Silver badge

      Libraries change. Principles don't.

      And coders of any kind can benefit from "rubber ducking", for which there are no really wrong "answers", just noise.

    6. NoneSuch Silver badge

      "It's a useful coding assistant for a less experienced developer. It can help them along, but it can't safely do their work."

      It's Clippy II - The Revenge...

    7. Jason Hindle Silver badge

      This is very on point, so I'm surprised at the downvotes. Some of the responses fail to distinguish between the model doing the work (i.e. vibe coding, often with very lazy prompting) and answering simple, well-composed questions, which is what the best models actually do pretty well. People who use the technology well have already broken the problem down and are well on their way to a solution.

      As for vibe-coding, we should be careful not to dismiss it, as the long-term danger it poses is real.

    8. Mostly Irrelevant

      I personally disagree. I've run into a lot of situations where junior developers have tried to check in poorly designed AI solutions simply because they don't have the experience to understand the problems with the code.

      LLMs can greatly exaggerate the Dunning-Kruger effect and that causes a lot of problems.

      I normally apply the assistants in extremely limited scopes where I already know what I want. The error message lookup/explainer functionality can be helpful, particularly if the first thing you do is check sources.

      I think we'll eventually have a good idea of what we should be using these things for, but the idea that inexperienced devs are the ones who get the most out of them seems completely backwards to me.

  3. ITMA Silver badge

    Have we been here before?

    Who remembers the launch of the software called "The Last One"?

    Trumpeted as "the last software you'll ever need to buy"... Or something like that:

    https://en.wikipedia.org/wiki/The_Last_One_(software)

    1. Doctor Syntax Silver badge

      Re: Have we been here before?

      "the last software you'll ever need to buy"

      It was a bit ahead of its time. FOSS took a little longer to arrive.

      1. Mage Silver badge
        Coffee/keyboard

        Re: FOSS took a little longer to arrive.

        Um, no.

        BSD: first release 1978.

        "The Last One": 1981, and it wasn't cheap or open source.

        Admittedly actual GNU is 1983 and later.

        Also loads of free CP/M software, in various senses, from about 1976.

        UNIX would have been "FOSS", only AT&T in a sense "stole" it, which gave the impetus for BSD and what came later.

        1. tinpinion

          Re: FOSS took a little longer to arrive.

          Doctor Syntax did a joke that I got a chuckle out of, and you had to come along and assert incorrect things in an attempt to undermine it. So, here's my attempt to shore it back up:

          Um, even noer.

          AT&T's Bell Labs produced Unix and held the copyright to it. They can't have stolen something that they already owned.

          BSD's original release in 1978 still contained AT&T code and required a license from AT&T to run it. Arguably, it wasn't until BSD Net/2's release in 1991 that an AT&T-free version existed.

          The philosophical position of the Free (Libre) Software movement is that it is unjust for a program to exist that deprives its users of a certain set of freedoms (Free Software Definition). The philosophical position of the Open Source Software movement is that it is more practical to develop software when a subset of those freedoms (freedoms 0 and 2) are granted to recipients of a software package (Open Source Definition). Neither of the philosophies grouped under the headings of 'FOSS' or 'FLOSS' care about the amount of money that needs to be paid in order to receive a copy of a piece of software.

          If the CP/M software wasn't licensed in a way that was compatible with the Free Software Definition or the Open Source Definition, it wasn't FOSS.

          1. Mage Silver badge

            Re: FOSS took a little longer to arrive.

            "AT&T's Bell Labs produced Unix and held the copyright to it. They can't have stolen something that they already owned."

            That's what they claim.

            In fact much was written in Universities and AT&T refused to acknowledge this.

            1. jake Silver badge

              Re: FOSS took a little longer to arrive.

              MaBell begat UNIX, yes.

              Much of BSD was incorporated into UNIX, yes.

              When MaBell redistributed the BSD code, it contained a "Copyright <date>, Regents of UC Berkeley" line in the relevant files. It was not stolen, Berkeley granted the rights to pretty much anybody who needed/wanted it, even back then. Other schools were similarly acknowledged.

          2. jake Silver badge

            Re: FOSS took a little longer to arrive.

            "BSD's original release in 1978 still contained AT&T code"

            Nope. 1BSD contained absolutely no code from MaBell. Rather, it was released as a bunch of patches and enhancements, and some small utilities (ex comes to mind). You needed a running Version 6 Unix system to install it. (If you had an official license for UNIX it made your life easier WRT support, etc ... but we didn't check to see if the folks who requested the code had that license. We quite simply didn't care. The "official" number of copies sent out was around 30, but I'm fairly certain it was at least double that.)

            Send your request and an SASE with blank media of your choice to CSRG c/o the University of California, Berkeley ...

      2. jake Silver badge

        Re: Have we been here before?

        "FOSS took a little longer to arrive."

        Not really. They just didn't call it that.

        Back when MeDearOldDad started using computers (very early 1950s), you paid for the hardware (or access to the hardware). Virtually every bit of code that was not written in-house was available for free, complete with source (if available ... some was machine code in carefully stacked piles of ones and zeros), and you were free to modify it and redistribute it as you saw fit. Not that there was much redistribution back then, but this was mostly due to a lack of other computers.

        Fast-forward a decade or so, and in 1961 we had the beginnings of DECUS and the Software Library.

        It's not that they didn't bother about copyright, it's that the concept of copyright didn't even exist with regards to computer source code until 1974 ... and binaries only caught up in 1983! (See: Apple vs Franklin). For a while there, you HAD to ship source code with your binaries if you wanted to be able to claim a copyright on your software.

        1. Mike Pellatt

          Re: Have we been here before?

          I came here to mention DECUS, but you beat me to it.

          And after copyright, along came software patents. Patenting algorithms. Madness.

    2. doohicky

      Re: Have we been here before?

      I remember it - heck, I was 17 and just starting out. It was levelled at us as a warning not to get into computer programming because... well... you'll all be replaced by it in a few years. I was young and naive and luckily quite stubborn, but I almost believed it... for maybe a week... and then I realised that it was probably nonsense.

      1. ITMA Silver badge
        Devil

        Re: Have we been here before?

        I think the term for it was vapourware....

  4. Neil Barnes Silver badge

    different results over time for exactly the same prompts

    And there's the problem right there... when I talk to a computer I want the same result every time.

    Give me a precis of a library, sure; but don't try and incorporate it into my code, thank you very much.

    1. Felonmarmer

      Re: different results over time for exactly the same prompts

      To someone who gives work briefs to real programmers, it's the same thing. See, it's not meant as a tool for you, even if you have to type the prompts. It's a tool for the pointy-haired boss who requires you to use it. What do they care about the code not being consistent? Their joy comes from delivery on time, where time tends to zero.

      What is important to people is what gets them their bonus, to PHBs enough dosh for a car upgrade, for their surfs - free pizza at the end of their task.

      1. that one in the corner Silver badge

        Re: different results over time for exactly the same prompts

        > for their surfs - free pizza at the end of their task

        If vibe coding leads to soggy pizza, count me out!

        1. David 132 Silver badge
          Coat

          Re: different results over time for exactly the same prompts

          Don't worry; with vibe coding and AI in general, there's no gnarly riptides - only lots of hand-waves.

          1. Albert Coates

            Re: different results over time for exactly the same prompts

            Not waving, but drowning? (Stevie Smith, for the non-poetry lovers)

            1. khjohansen

              Re: different results over time for exactly the same prompts

              Rupert Hine ??

  5. Headley_Grange Silver badge

    Wouldn't teaching the assistants about NASA's "Power of Ten" rules be a good start? I know they are aimed at embedded code written in C and might not be relevant for more recent "safer" languages, but to me some of the rules appear to be general enough to apply across the board. They are aimed at making written code easier to review and test, which would appear to be a necessity if something else is writing the basis for your code.

    1. Irongut Silver badge

      Therein lies the problem: teaching requires the subject to have some intelligence in order to learn. There is as much intelligence in an LLM as in a roulette wheel.

      1. Albert Coates

        But the roulette wheel is rigged, and LLMs are the hidden magnet.

    2. Simon Harris Silver badge

      To cover a wide variety of coding problems you need a lot of sample code to build your patterns from.

      You could cover at least some of the 'power of 10' rules by filtering the learning examples to be just those that follow the rules (e.g. ensuring no training data includes gotos, no compiler directives other than #include, #define, very restricted use of pointers, etc.) so it doesn't 'know' code outside of the power of 10 coding subset. However, I doubt there is enough power of 10 code in the wild to cover the spectrum of questions likely to be asked of it.

      This would require a lot of manual work to 1. curate any 'power of 10' code that does exist, and 2. rewrite and validate a significant amount of non-compliant examples that do exist to provide a wide enough training base of code.

      While that might go some way to getting it to produce power of 10 compliant code, by virtue of it not having patterns for certain non-compliant coding structures, I doubt it would be enough to ensure an LLM produced code that completely followed the rules, and it would be a mammoth human undertaking to create the training data, rather than lifting non-compliant code from the many sources where that already exists.
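
      Just to illustrate the filtering step: it needn't be anything cleverer than a crude scan of candidate training samples for banned constructs. A rough sketch in Python (the rule list here is made up and nowhere near complete, and a real pipeline would want a proper C parser rather than regexes):

        # Crude filter: flag C training samples that obviously break a few "Power of Ten"-ish rules.
        # Illustrative only - the rule list is incomplete and regexes will miss plenty.
        import re

        BANNED = [
            (re.compile(r"\bgoto\b"), "goto"),
            (re.compile(r"\bsetjmp\b|\blongjmp\b"), "setjmp/longjmp"),
            (re.compile(r"^\s*#\s*(?!include\b|define\b)\w+", re.MULTILINE), "preprocessor beyond #include/#define"),
            (re.compile(r"\bmalloc\b|\bcalloc\b|\brealloc\b"), "heap allocation"),
        ]

        def violations(c_source):
            """Return the banned constructs found in one C source sample."""
            return [name for pattern, name in BANNED if pattern.search(c_source)]

        sample = "int main(void) { for (int i = 0; i < 10; i++) { ; } return 0; }"
        print(violations(sample))   # [] -> this sample could stay in the training set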

  6. b0llchit Silver badge
    Mushroom

    Stop anthropomorphizing AI

    ...dependent on how the AI chooses to interpret your requests...

    "AI" is a statistical program than does not "choose". It is a program that merely calculates a new output on the (very) large set of inputs, where the history of inputs can be very long and some inputs may be fed noise to add variation.

    Any computer running the same program with the exact same input should produce the exact same output. If that is not the case, then the computer is broken and you should buy a new one. Adding noise to the input is just obfuscation.

    1. Anonymous Coward
      Anonymous Coward

      Re: Stop anthropomorphizing AI

      What part of "noise" being random ( or pseudorandom) do you not understand?

    2. Doctor Syntax Silver badge

      Re: Stop anthropomorphizing AI

      "Any computer running a same program using the exact same input should result in the exact same output. "

      That was the point being made in TFA.

      If the program acts in the manner of a wayward human making inconsistent choices then, in the absence of more specific terminology, "choose" seems a reasonable verb to describe it.

      1. Anonymous Coward
        Anonymous Coward

        Re: Stop anthropomorphizing AI

        You actually used the specific terminology, and then went with "choose" like a wayward human making inconsistent choices instead.

    3. doublelayer Silver badge

      Re: Stop anthropomorphizing AI

      Lots of software legitimately adds random data to its operation, and this one does so in the same way. A computer running an LLM is no more broken than one running an asymmetric key system which mandates random noise inside messages so that identical messages aren't immediately noticeable. You can also remove the nondeterminism aspect of LLMs if you have enough access, and then you will get exactly the same output for the same input. Your objections are flawed.

      So what if we did turn the model temperature down to zero, getting deterministic output? Would that fix the problem? No, for three reasons. For one thing, neither option reliably generates code that works, whether it's deterministic or not. For another, there's a reason the temperature setting isn't usually set at zero; it can make some things worse than when it's higher. And most importantly, when we complain that an LLM isn't deterministic, it has little to do with sending the same prompt and getting different programs, because as soon as we got a program that was acceptable, we'd save that code and stop sending a prompt for it. The real problem is when we want a program that's almost the same but ever so slightly different and we can't make it do that. That's not technically a problem of determinism; we also can't make a cryptographic hash function generate a hash that's almost the same as, but slightly different from, the one for a known value without a lot of cumbersome effort. The difference is that we want that behaviour from a hash function, but it is harmful in a thing that generates software, because almost all of software development involves making little changes to something that exists - and that doesn't work very well with a program that tends to rewrite rather than modify when asked to make a change.
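
      For anyone unsure what "temperature" refers to here: it's just a knob on the token-sampling step. A toy sketch of the idea (not any real model's code, just the shape of it):

        import math, random

        def pick_next_token(logits, temperature):
            """Choose the next token from a dict of {token: score}."""
            if temperature == 0:
                # Greedy decoding: always the top-scoring token, so the same
                # prompt replays the same continuation every time.
                return max(logits, key=logits.get)
            # Otherwise: softmax with temperature, then a weighted random draw,
            # so the same prompt can wander down different continuations.
            top = max(v / temperature for v in logits.values())
            weights = {t: math.exp(v / temperature - top) for t, v in logits.items()}
            return random.choices(list(weights), weights=list(weights.values()))[0]

        print(pick_next_token({"foo": 2.0, "bar": 1.9, "baz": 0.1}, 0))    # always "foo"
        print(pick_next_token({"foo": 2.0, "bar": 1.9, "baz": 0.1}, 0.8))  # usually "foo", sometimes not

      At zero you always take the top token, so the same prompt replays the same output; anything above zero reintroduces the randomness. Either way, nothing about that knob changes whether the code that falls out actually works.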

  7. ForthIsNotDead
    Happy

    As werdsmith said above, if you're experienced in the art and your problem domain, you can do it better by yourself. I'm a lone, self-employed coder; I work mostly in C++ (embedded, from-scratch projects). I use Copilot in VS Code and I have to say, as an IntelliSense on steroids it's pretty amazing. It can write entire sections of code for me and be very accurate - mostly. It still needs to be read and checked by me, so it's arguable whether it actually saves much time.

    If you're new to a language AI can help as it will use language features and techniques that you weren't aware of.

    Two weeks ago I managed to vibe code a Python application that allows settings to be read and installed over a Modbus RS485 serial link. I know hardly anything about Python, and don't really have the desire to learn. I just wanted something quick. I ended up with an app that works perfectly, including a user interface (using the Flet framework), the ability to load/save settings to disk files, interrogate the device, push new settings etc. And it's cross platform. I didn't write a single line of code. I just 'vibed' with Claude in VS Code (I have a Copilot subscription).

    Looking at the code, it seems quite good. All I know is that it would have taken me days, if not weeks, to learn the Flet GUI framework, not to mention how to do things 'properly' in Python. Using AI vibe coding, I got it done in a day and my client was absolutely astounded.

    So yeah, it has its uses. But if you're experienced in your language of choice, use it as a turbo-intellisense!
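
    For a sense of scale, the Modbus end of a job like this boils down to a handful of pymodbus calls anyway - roughly the following (a from-memory sketch, not the actual generated code: the port name, device id and register addresses are made up, and the exact pymodbus keyword names have shifted between 3.x releases, so treat it as illustrative only):

      # Rough sketch: read and write a device's holding registers over Modbus RTU.
      # Assumes pymodbus 3.x; port, device id and addresses are placeholders.
      from pymodbus.client import ModbusSerialClient

      client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=19200,
                                  parity="N", stopbits=1, bytesize=8, timeout=1)
      if not client.connect():
          raise SystemExit("Could not open the serial port")

      # Read 20 holding registers starting at address 0 from device id 1.
      result = client.read_holding_registers(0, count=20, slave=1)
      if result.isError():
          raise SystemExit(f"Read failed: {result}")
      print("Current settings:", result.registers)

      # Push a new value into holding register 5.
      client.write_register(5, 1234, slave=1)
      client.close()

    The Flet GUI around it is where the real typing would have been - which is exactly the part I couldn't be arsed to learn.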

    1. Simon Harris Silver badge

      "I know hardly anything about Python, and don't really have the desire to learn." ... "Looking at the code, it seems quite good."

      Personally I wouldn't profess to be able to judge the quality of the code if I knew hardly anything about the language, but that's just me.

      1. jockmcthingiemibobb

        So let's assume this is a typical small business application that's not going into safety-critical systems, digging out confidential information or transferring finances. Not every problem requires a complex design and test cycle. Small programs that do common tasks using common functions and APIs are the vibe coding equivalent of writing "Hello World".

        Did it work?

        Did it save the programmer days/weeks?

        Did it save the client money?

        Did it deliver the software early so the client could start making money earlier?

        Was the client happy?

        If the answer to all the above questions was "yes" then in this case vibe coding was a success.

        1. doublelayer Silver badge

          How well does it work? For example, if the connection drops and comes back, how well does it handle that? If you haven't read the code, do you know? There is much more to whether something works than if the golden path looks right, as anyone who has written anything that ran in production has found to their displeasure. Nor can you easily notice that by glancing down the code.

    2. martinusher Silver badge

      This is precisely the kind of job for which vibe coding is likely to be a useful tool. It's a narrowly defined, one-dimensional problem which produces an easily discernible works/doesn't-work output that's only a nuisance if it doesn't work perfectly. This is the kind of job you'd give a newly minted programmer to do as a warm-up, or you'd try to palm off on the people consuming the product - it's necessary and valuable, but still a damned nuisance to have to churn out. It's also not really programming; it just looks like it to the inexpert eye.

      FWIW -- For this type of job I've used TCL/Tk using SWIG to code language extensions interfacing to the MODBUS interface (which is straightforward in both its serial and TCP incarnations). I'd guess that Claude doesn't know what SWIG is but instead, like all junior programmers, just writes everything from scratch.

      1. tiggity Silver badge

        A lot of these "AI" tools are sometimes OK at tasks involving languages such as python, JavaScript etc. ... as they will have been trained on a lot of it. - if a "request" matches well to something in its training data then should expect "AI" to be ale to produce something - which will often be the case for some straightforward / common tasks. An "AI" vibe coding solution, just like all LLM actions, will generally be a numbers game.

        How well an "AI" will manage with a more obscure language & given a more unique task outside of any examples it has ingested is a different matter entirely.

        1. Paul 76

          I like to ask it to write a Cosine() routine for the RCA1802 8 bit microprocessor.

          This is something that I think doesn't exist in source form (I think there's a binary), and if it does exist somewhere it's on paper.

          It's entirely doable if you can *think* from first principles, the algorithms are all online, as is the processor spec in machine readable form. But nothing to copy.

          It's improved slightly, from "no idea at all" to trying to fudge something by combining other 8 bit code with basic concepts about the RCA1802 which it clearly doesn't "understand". It's a rather unusual processor (why I chose it) and you can see immediately if it has a clue.
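
          To be clear about what I mean by first principles: on an 8 bit part you'd typically do it with a quarter-wave table plus symmetry, which is trivial to sketch in a high-level language. Python below, purely as illustration - the 256-unit circle and Q8.8 scaling are my own arbitrary choices, and the 1802 version would be hand-translated into table lookups, compares and shifts:

            import math

            # Quarter-wave cosine table: 64 steps per quadrant, values scaled to Q8.8 fixed point.
            TABLE = [round(math.cos(math.pi / 2 * i / 64) * 256) for i in range(65)]

            def cos_q88(angle):
                """cos() where a full circle is 256 angle units; result is Q8.8 (-256..256)."""
                a = angle & 0xFF                 # reduce modulo one full turn
                quadrant, idx = divmod(a, 64)    # which quadrant, and the offset within it
                if quadrant == 0:
                    return TABLE[idx]
                if quadrant == 1:
                    return -TABLE[64 - idx]
                if quadrant == 2:
                    return -TABLE[idx]
                return TABLE[64 - idx]

            print(cos_q88(0), cos_q88(64), cos_q88(128))   # 256, 0, -256

          Nothing in there is hard, but it does require actually choosing a representation and folding the quadrants - which is exactly the part the model fudges.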

          1. Simon Harris Silver badge

            Not surprised it fails at that.

            It hasn’t yet managed to produce a correct wiring diagram and appropriate passive component values for a 555 astable circuit whenever I’ve asked, and that’s probably one of the most duplicated schematics online.

            1. Not Yb Silver badge

              The "image creation" part of LLMs is still pretty far behind being usable for any form of technical drawing. Ask it for a blueprint of a chair, and you wind up with something that looks like a blueprint of a chair, until you read the dimensions.

              It MIGHT get the answer right if you ask it for a SPICE model of a 555 astable circuit, thus asking it to produce something that's a text description instead of an image that looks like a circuit design, but any attempt to get it to draw it is very likely to continue failing for some time.

              Electrical engineers aren't the AI companies' target market right now. They're still targeting venture capitalists and corporate R&D departments at the moment. And of course, subtle advertising on Amazon... "Pay a little extra, and we'll train our AI to recommend your products more often." is something they're not saying yet, but it would be easy enough for them to tweak it to recommend things that Amazon wants to get out of the warehouse.

              1. Simon Harris Silver badge

                True, the schematics created by LLMs are pretty far off for many things.

                However, even without asking for the schematics, and just asking it to say in text form what should be connected to what, it still manages to mess things up. The trouble is, it's partially correct so the wiring diagram of a 555 must be in its network somewhere, but I'm guessing it's polluted with wiring diagrams from other stuff, or alternate 555 configurations, so it gets to a certain point and then the probabilities of the pollutants take over. Even though it gives a confident step-by-step guide for wiring things up, you get half way through and think 'hey, that pin shouldn't go to that one', and notice that other things don't make sense either.

                It also seems that, when asked for the circuit, it can reproduce the standard databook formulae for the frequency and mark:space ratio from the passive components, but then fails to use them correctly to compute suitable values. Again, I'm guessing that because the timing characteristics involve multiplying resistor and capacitor values together, there are multiple ways of getting to the frequency you want (e.g. scale up the resistor values by the same amount you scale down the capacitor value). There are multiple versions of the circuit that it's been trained on, but even for the same frequency the original authors have chosen different values, and for a system that works on probability those different values may all be similarly probable pathways, so it ends up with a mish-mash of components that don't work together.
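
                For reference, the sums it keeps fumbling are only these - the standard datasheet astable equations, using the usual RA/RB/C naming (the values below are just an example):

                  # Standard 555 astable timing equations (RA, RB in ohms, C in farads).
                  # High time = 0.693*(RA+RB)*C, low time = 0.693*RB*C.
                  def astable_555(ra, rb, c):
                      freq = 1.44 / ((ra + 2 * rb) * c)    # oscillation frequency, Hz
                      duty = (ra + rb) / (ra + 2 * rb)     # fraction of the period spent high
                      return freq, duty

                  # Example: RA = 1k, RB = 10k, C = 100 nF -> about 686 Hz at ~52% duty.
                  print(astable_555(1_000, 10_000, 100e-9))

                Getting from a target frequency back to sensible component values is exactly the kind of many-valid-answers problem where a probabilistic mash-up of training examples falls apart.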

          2. Not Yb Silver badge

            Now that you've made your test problem known, they can 'train' the AI on the solution... if they can find a competent coder to write it first.

            I often think of these LLMs as being more like a lossy compression algorithm with a very tedious 'guess-the-question' user interface.

    3. vtcodger Silver badge

      Vibe Testing

      "I ended with an app that works perfectly, including a user interface (using the Flet framework), ability to load/save settings to disk files, interrogate ..."

      I don't suppose we did any vibe testing on the product? e.g. "Hey Igor. Devise a test suite with 100% coverage of the code as well as verification that it works under high and low load conditions, and test at least 20 sets of fuzzed erroneous inputs. Run it. Tell me about any discrepancies observed as well as any product limitations uncovered."

      Thought not.

      1. ForthIsNotDead
        Unhappy

        Re: Vibe Testing

        Nope. Because it wasn't necessary. I can see with my own eyeballs that it works perfectly. It's a simple program that allows a user to configure devices from a configuration hosted on his PC, and squirt it into the device over a serial RS485 Modbus link. It's reading/writing a maximum of 20 holding registers. This saves the user having to configure the device (or devices) using the keypad on the device, which is a faff and error-prone. I can clearly see with my own eyeballs that it works. That *is* the test. Not everything is an enterprise application. This is/was a simple quick-and-dirty program that I couldn't be arsed to write, because I couldn't be arsed to learn GUI frameworks.

        To be honest, I really long for the days of VB6 where one could drop a few components onto a form and wire up the events. It was quick and simple. Those days are gone. Everything is 100x more complex, for absolutely no gain whatsoever.

    4. Snake Silver badge

      This is going to be a *wildly* unpopular opinion here on El Reg, but sometime in the future, eventually, "vibe" coding will be the standard. Computers helping to program computers: nothing less actually makes sense once the ability to construct accurate and predictable code can be assured. It doesn't make sense to believe that humans can continue to accurately program logic-comparing devices because humans "can do it better" - we do not operate, we are not constructed, to function in that 'hard-coded' logic manner. So we analyze, deduce and program, and end up with bugs. If computers are deterministic - a known input should create a known output - then a machine that learns patterns should certainly be able to learn the proper method of creating those patterns itself. It might take two systems - two "AI" programs, operating as a checksum - to create an accurate program, but it certainly WILL happen sooner or later.

      Therefore, if this is indeed inevitable, then believing that humans are non-replaceable in this system and fighting tooth-and-nail to prevent it is only delaying that inevitability. In other words, we created a system that will be used to create more systems (robots can build other robots) but we're simply in denial of this future, mostly because of the fight to maintain a system (capitalism) where only productive work leads to sustenance living; you must work to feed and house yourself, and having machines fix other machines threatens the construct. So, fundamentally, the future of the capitalist system is at stake, yet they want to take advantage of the creation of the very machines that will end the social class system to make themselves more money. They want to have their cake and eat it, too. If you are constantly increasing productivity by creating machines to take over the workload, yet still forcing the lower classes to struggle daily against the very productivity that makes you more money while making those workers 'redundant', then you're looking at the necessity of recreating the system - but of course they hate that idea, because it may mean the end of their own dominance and social benefits.

      We are at the beginning of the necessity of having to re-examine our basic economic and political structural system for the future, but are fearful of, or fighting, the very things we're creating. Because, money.

      1. Doctor Syntax Silver badge

        "a known input should create a known output"

        The known input might be "Go and talk to the data input team, find out what it is they're complaining about and fix it."

        1. Snake Silver badge

          Oh, I'm *not* saying that humans won't be in the picture; they must imagine the design parameters and goals, create performance metrics to meet, give feedback on and approve the UI/UX, quality-control the output, etc.

          It is just that the hard bits of "coding", by the end of this century, won't make sense to be done by humans. It just... doesn't. Both the hardware and the expectations / complexity of the software involved are moving so far beyond the edge of people's ability to comprehend their creations that it will become impossible. It already takes teams of dozens, hundreds, or thousands of people to manage the code bases of the best software development out there, and coordinating that effort is already at the limit of our ability - which we can prove by the fact that bugs are rampant in our designs. Thousands upon thousands of people working on a single project... and it's *still* wrong.

          This can't go on forever, and we are only adding complexity to the systems at every development generation. It is only a matter of time before we will *have* to fall back on machines to help us create the programs for other machines. To believe otherwise, that humans will always be at the coalface of programming, is naive and in denial. Won't happen today, not next week, but it's inevitable.

          1. doublelayer Silver badge

            Your argument might be more successful if you could point to a single program that can do it. We have lots of programs that solve exactly the problems you're describing. They're called compilers and libraries. Instead of having someone write the same code again and get it wrong, we write it well once, test it, improve it, and then have lots of people use that. But it still involves programming, just programming at a higher level because some problems have already been solved. Anything vibe-codingy is a very different proposal, and everything that can do it has been reinventing wheels and making those preventable bugs since it started. How will that take over when it often fails to do better than the rampant bugs you're complaining about?

            If your next argument is that, in a manner you don't have an explanation for, someone will find a solution to that which works, then you've reached the part of the argument that's unproven and unprovable, but I would also stop arguing against you. It's a lot like talking about what we'll do when fusion is cheap and available, which is an occasion that hasn't been proven impossible, but neither has it been proven possible or practical. Since we don't know for sure whether we'll get convenient fusion generation in the next twenty years, during either of our lifetimes, or before human extinction, it's foolish to plan for what we'll do when it becomes available in the short term when that's the only time we can be reasonably certain it won't be in. If you're banking on someone solving a problem in an unspecified way, then you're treating AI as an article of faith and your proposed societal modifications can and probably should be ignored until that faith is rewarded.

            1. Snake Silver badge

              RE: can do it

              Maybe everyone involved [here] should read my comments again: I said it would eventually be computers-programming-computers. I did not say, "Today".

              "We have lots of programs that solve exactly the problems you're describing. They're called compilers and libraries.

              Exactly. So if compilers and libraries are capable of what they are today, before "AI" is implemented on them, what will the future hold for programming? How can you believe that, if compilers are helping catch errors and faults today, an "AI" system won't be fully capable of implementing that code from scratch tomorrow?? The programmers here are taking the position that they aren't replaceable. Are we still buying completely hand-crafted cars? Are we still eating fully hand-picked foods? Are the horse carriage and buggy whip manufacturers still stating that they won't ever be replaced?

              I'm just saying that machines-programming-machines is INEVITABLE. As much as the rising sun. Maybe not next year, maybe not in 2, maybe not in 5, maybe not even in 15. But it *will* be the standard in some type of future - no other long-term projection of development of the involved technologies makes sense.

              1. doublelayer Silver badge

                Re: RE: can do it

                Your long-term projection of development of the involved technologies only makes sense if you assume that anything you imagine is definitely possible, practical, and going to happen sooner or later. If it's not, then your prediction breaks down entirely. For example, consider how many people responded to the flying-cars-and-jetpacks dreams of the future with skepticism that either would ever be available. So far, they've been right, and there are many reasons to think they'll continue to be. I can say that our combination of car technology, airplane technology, and congestion on roads makes the development and adoption of flying cars an inevitability. It won't make me right.

                One reason that flying cars aren't being developed is that their aerodynamics are incredibly inefficient and making them more plane-shaped won't help very much. That's a big problem that we don't have a magic solution to. If we find one, maybe flying cars will take off (pun definitely intended). If we fail, that might prevent the idea. The so-called AI we have today fails in numerous ways that their developers have failed to fix despite having had years and billions to use on the problem. Maybe they'll find a magic solution too. But what happens if they don't? This is where you seem to be assuming the feasibility of AI software without evidence, but no matter how obvious it might seem to you that such a thing is desirable, it doesn't prove that we will succeed at getting it. Some problems are harder than people think they are, and people who invested in having the thing those problems were preventing get disappointed. Some of them get disappointed forever.

                1. Snake Silver badge

                  Re: RE: can do it

                  Of course nothing is 100% guaranteed. But you're being incredibly impatient (how ironic, as I sit on the side of the road waiting for a situation to clear whilst other people complain as I write this) in stating that they've had "years" to fix "AI".

                  How many years would that be, exactly?? Versus how many years in computer tech to GET here?

                  AI development, as it stands, is barely a flash on the total computer development timetable. Why are you expecting perfection so soon? The world is still working on polishing the OS cores still out there and how long has that been?

                  We ALREADY have machines building machines. The motherboard of the computer or phone you are viewing this on was pretty much built by automation - from auto insertion robots to wave soldering, most of the work is done by machines nowadays. Cars are robot welded and robot painted, at the minimum. Machines building machines.

                  I hate to remind you of this, but programming isn't magic. Every educated society in the world is now doing it, even places we here don't think of: news just came out today that quite a number of the MAGA noise makers are in Nigeria (of all places). North Korea, supposedly isolated, is a major source of industrial web-based attacks. The techies in Big Business believe they are extra-special, the rulers of the tech horizon, but here they are, getting shown up by programmers in sanctioned Russia and N. Korea, just to name a few.

                  It might not be the "AI" of today, as it stands. But to not believe that, some day, a computer would be able to create the very same code that it is *currently* capable of analyzing is rather foolhardy.

                  1. doublelayer Silver badge

                    Re: RE: can do it

                    At any time, if it turns out that someone can make such an AI, I am willing to accept it and plan for how to handle the disruption caused by it. I am not willing to accept the existing tools as proof that this will happen because they do not do it now and show no real improvement on that line. Since you're accepting their inevitability without evidence, my unprovable claim is that, should someone build an AI that does write software, they will do it without using LLM-like concepts, let alone an actual LLM. I can't prove that any more than you can prove your expectation, and I have a feeling that if I told you to start designing society around my promise, you might tell me that making large changes based on my guess is not a good basis. It's not a good basis with your guess either.

                    Of course programming isn't magic. People can and have written useful software with relatively little experience, and a lot of tools can automate the process. Your analogy to manufacturing machines is a good one. My computer can be built more correctly and quickly by machine, but the machines that build them, or in fact any machine available, can't currently fix a design flaw in the thing they're building. Maybe eventually we'll find a way to make a machine that can do that, but I'm not going to assume that, because we could build the machine that manufactures circuit boards, we can make the machine that can design circuit boards. The problems are entirely different and we're going to need to have and use completely different skills to manage the latter. It doesn't mean it's impossible, but there's a chance it is and there's also a lot of room between "not impossible" and "inevitable in a vague time span".

                    1. Snake Silver badge

                      Re: RE: can do it

                      I think the problem is that people are seeing things in black and white - either it's "all AI" or it's "all human". The humans will be like The Architect in The Matrix: the humans must establish the necessity, goals, experience and requirements of a project before an AI will ever be able to program it. The AI will never know what is needed, because it does not live in our physically-interactive world; it cannot anticipate that which is only a thought process of solving a problem. But to think that only humans can program, which is nothing more than codified logic, is rather silly IMHO - if computers can process the logic to create an output, then computers can duplicate the expected logic paths necessary to create that required output.

                      You will all simply change your work horizons. You will program less, instead overseeing project identification, project scope, project goals, quality control, testing, implementation, identification of required changes, systems management, etc. You will end up managing more and at the IDE screen less, that's all. The humans will still be required to maintain the systems, as only they can identify the problems and solutions needed. But the AI can do the actual code implementations to get (maybe only partially) to those solutions and the humans will still have to do all their work to make sure it functions as necessary to the required users, and then make sure it continues to do so.

                      1. Anonymous Coward
                        Anonymous Coward

                        Re: RE: can do it

                        "You will... You will... You will..."

                        I think you might have missed the point of that film entirely.

                  2. jake Silver badge

                    Re: RE: can do it

                    "How many years would that be, exactly?"

                    MIT started offering coursework in what we now call "AI" in the 1950s.

                    That's roughly the same as real, usable, fairly wide-spread general purpose computing.

                    1. Snake Silver badge

                      Re: RE: can do it

                      Coursework is not the same as "workable implementation", I thought that would be obvious. According to Wiki, 'general-purpose' LLMs were introduced into general consumption in 2018. That's not long, folks.

                      Listen, I'm not saying I like it or hate it; I am not stating a preference. I'm just stating an obvious fact: vibe coding is already here and will only grow as it gains more experience and corrections. To do otherwise - to tell yourselves that AI coding won't be a "thing" - is to deny a version of Moore's Law, believing that the technology won't improve because it doesn't serve your personal best interests or preferences. That hasn't worked before and I very seriously doubt it will work now. AI coding is here and will only become more embedded as time goes on.

                      1. doublelayer Silver badge

                        Re: RE: can do it

                        No, you're stating your guess as if it is an obvious fact. You're stating that the software we have now does things when it doesn't do them. You're stating that something is an inevitability when it's at best a possibility. You're arguing that Moore's law applies to software's features rather than hardware's efficiency, which is not what the law said nor have you suggested any reason to think this would be different.

                        You're either consistently failing to understand responses that disagree with you or using flimsy arguments to oppose them because you have no good ones. You've repeatedly claimed that we are saying "only humans can program", even though I never said that and nor has anyone else in this thread though some people do believe that. That is a false dichotomy. There are more options than "only humans can program" and "LLMs can program and will inevitably get better". But since you can't prove the latter though you clearly believe it, you've chosen to argue against the former as if that's the only other option despite what actually is an obvious fact that it is not the only other option and nobody is arguing that option here.

                        1. Snake Silver badge

                          Re: RE: can do it

                          No, I see *arrogance*. I see you people believing that you are irreplaceable inside an artificial construct. I see vibe coding existing, TODAY, in an imperfect error-prone form, and you believing that it *won't* be improved upon because you believe that you are so important, that the "art" of coding is soooooooo beyond any machine's ability to understand, that you won't ever be replaced in that work field.

                          It is YOU that has flimsy arguments, attempting to gaslight yourself into believing your status. In your blind ambitions to paint yourselves as kings of the realm, that nobody and nothing else can do what you do. I mean, you've all been gaslighting yourselves so far that you even think this of Rust, that no language with built-in guardrails can possibly justify the replacement of your knowledge [in C / C++]. You even know better than this, because "real programmers" don't need guardrails and create perfect, non-vulnerable code, every time.

                          You don't think I've been around here to personally witness all the hubris?? Years of it. Kings of the Tech Domain, you all paint yourselves as. Even when change is on the horizon, you'll still believe that you are the top dog here and that the tech world will always bow to you. Thousands get laid off at IBM, hundreds laid off here and there, but of course it's "the other guy" who isn't needed, a throw-away person who apparently wasn't you guys, the ultra-necessary top dogs. That'll never happen to you. Without you the world would burn.

                          "Vibe" coding, AI-assisted programming, is here. TODAY. If you don't think that will be improved upon in the following years, instead of assisting and writing low-importance pieces of code, being taught to write major modules, libraries and then even do kernel work, then there's nothing I can say to open your eyes (and your own *ears*, just listen to the AI CEO's and developers in their expectations of what they want their creations to do). This is one of their major goal, they can see the function of it, create machines that can program machines with less errors and less vulnerabilities. They are creating new languages and new compilers to try to assist humans in this endeavor but it will always be limited by the wetware. So why can't we reduce, or eliminate, the wetware problem? Imagine the profits - less workers to pay, more reliable and faster software creation, less long-term maintenance and support overhead, greater profits. You think they're spend tens of billions of dollars on their AI fever dreams to help improve the ability to make fake cat videos and auto-insert replies into emails?? Wow. The end goal of the AI development is to create greater profits by using it to help reduce costs. Coding is just one target, many fields are going to be targeted if the profit motive is there.

                          Hang on for the ride.

                          1. doublelayer Silver badge

                            Re: RE: can do it

                            And one last time, you consistently misstate my argument as saying that humans are the only things that will ever be able to program. I'm not saying that. I have not said that. I can be replaced by lots of things, though I think the most likely thing to replace me in the short term is a cheaper human programmer, of which there are many*. I think it is possible that someone might develop a piece of software that can do a lot of my job. The software we have now is sold by people who say it can do that but it doesn't and can't, and while they're succeeding at selling that, they have no reason to try making one that can. Eventually, I expect people to try something else and they might succeed. That is not enough for me to consider it guaranteed that they will.

                          2. Not Yb Silver badge

                            Re: RE: can do it

                            Again, "The Matrix" is not a description of reality, nor is it meant as a blueprint to strive towards.

                            Your prescriptive requirements are based on your own ideas of what AI and software development will be. No one need follow your instructions, and I suggest that you shouldn't follow them yourself either. Anything that leads to "everyone is easily replaced by an artificial construct" isn't something to be in favor of.

                2. Paul 76

                  Re: RE: can do it

                  AI will get better and more convincing at what it does, impersonating thought. But then people were sometimes taken in by ELIZA.

                  AFAICS the LLM algorithm does not provide a way to actual thought, just better pattern recognition.

                  1. Ropewash

                    Re: RE: can do it

                    They still are being taken in by Eliza. They just managed to stack a bunch of Elizas together in a matrix and named it something new.

              2. Paul 76

                Re: RE: can do it

                Because the problem is too generalised.

                Compilers can do intelligent optimisation and rearrange instructions optimally that would be very difficult for a human to do. But it's a very specific task.

                It's like having a computer that can play chess - Stockfish is better than any human now I think - but that technology is not applicable to Monopoly or Poker.

                1. Snake Silver badge

                  Re: too generalized

                  Not the coding, no. We can be gaslighted into believing that with coding, but the *code* is codified logic. The development of the *task* to be solved is too generalized and that's why humans will always be involved - the AI has no idea what program is needed, why it is needed, and how it should work to solve these problems.

                  This is the task of the human operators: identifying a necessity, giving the project scope and envisioning a satisfactory solution. The coding itself is just robotic in nature, currently the humans need to figure out a codified logic path to achieve the goal. This is where AI will be used and will shine, the code is just a "circuit path" to the end. AI will do this well and the humans can then be free to stick to what *they* do well, rather than try to create a logic path that is 100% error-free (which, so far, has never been proven possible). The AI can juggle and test far more vulnerabilities and bugs in a second than a human can do in a week, and that's where its power will be [only] applied.

              3. Peter Richardson

                Re: RE: can do it

                "Maybe everyone involved [here] should read my comments again: I said it would eventually be computers-programming-computers. I did not say, "Today"."

                Yeah, I think that's it. Clearly vibe coding is imperfect. Also, it has massive potential to save companies a huge amount of money. So it's pretty much a given that a lot of effort is being put into making these systems better. Inevitably, the gap between what a capable and experienced development team can do vs a single experienced vibe coder will shrink and shrink over time.

                Anyone arguing that the output of vibe coding is imperfect and therefore inadequate has obviously forgotten the challenges that trad software development has always faced. Stating the obvious, it's mostly about trying to ensure we minimise the number of bugs, keep the software secure and maintainable and understand (and implement) the customer's requirements as closely as possible. And despite trying oh so hard in the last 30-40 years to perfect that art, a lot of what we do is still full of bugs, security vulnerabilities, trash code and doesn't do what the users want it to do. We are already in an imperfect world; is vibe coding really so different?

    5. druck Silver badge

      Wordsmith's comment has as much validity as hearing someone speaking a foreign language you don't know and saying, "yes that sounds foreign", without any clue as to what is being said.

  8. colinhoad

    You weren't there, man

    The best programmers in my experience are the ones who understand the guts of the machine. Being able to program - as distinct from being able to "code", which isn't really a verb I like to use - is intrinsically tied to the knowledge of how a computer works, not merely smashing tokens together to produce a toy application. I get tired of people (often on LinkedIn) saying things like "hey, remember how we used to think you needed to know assembly language to be able to program? haha wow, we've moved on since then!" as if they've said something insightful. Being able to program in assembly is still important, not so much because you need to use it (though there are still some high performance computing jobs that need it) but because of what it teaches you about how a computer works. Skipping over those fundamentals doesn't "let you focus on what matters" as a lot of these so-called thought leaders like to claim, it just makes you a sloppy programmer who will never improve.

    1. Dinanziame Silver badge

      Re: You weren't there, man

      Being able to program in assembly is still important, not so much because you need to use it (though there are still some high performance computing jobs that need it) but because of what it teaches you about how a computer works

      Hmmm... Yeah... I feel it's a bit like saying knowing how a car works is important in order to know how to drive. Used to be the case, definitely. It used to be very important to know how a clutch works, and what it means that the oil is too hot, and the like. Even do some repairs yourself. But that's really less often the case now.

      1. Paul Crawford Silver badge

        Re: You weren't there, man

        But that's really less often the case now.

        Indeed, we now have cars that for little or no apparent reason go into limp mode due to flaky software and/or sensors and you can't do anything but pay the dealer $$$ to get it fixed. Yup, sounds like modern software...

        1. ICL1900-G3 Silver badge

          Re: You weren't there, man

          Modern cars are the landfill of tomorrow.

          1. LBJsPNS Silver badge

            Re: You weren't there, man

            'Twas ever thus.

        2. Not Yb Silver badge

          Re: You weren't there, man

          Or go buy a scan tool, and ask the car which sensor failed. Unlike the early days of OBDII, much of the time it'll be right, or at least point you very close to the actual problem. You don't have to spend days adjusting the distributor over the life of the car any more. The carburetor doesn't need adjusting every few thousand miles.

          Lots of improvements came from allowing the car to test itself for problems, go into 'limp home mode', and report back. Some car manufacturers do know how to make cars that keep working for 300K miles with only minor repairs, when previously 80K miles was "lol toss it into the junk yard, you'll need to replace the entire engine and suspension if you want to keep running this thing".

      2. colinhoad

        Re: You weren't there, man

        Driving a car is akin to using a computer, not being a programmer of said computer. I would agree that in the past, using a computer required some basic (or even BASIC) programming skill to get it to do what you wanted, whereas nowadays to use a computer really doesn't. But programming a computer - and programming it well - still requires the same fundamentals that it has done for decades. You can skip over those fundamentals, and to be honest, most bootcamp-style tuition aimed at new "coders" does exactly that. I just don't believe you'll ever master the craft that way.

        1. Snake Silver badge

          Re: master the craft

          The relevant question for this discussion is: will it BE necessary for humans to "master the craft" of programming if a second machine can be used to program the first??

          1. Doctor Syntax Silver badge

            Re: master the craft

            A large part of mastering the craft is understanding what needs to be done.

        2. doublelayer Silver badge

          Re: You weren't there, man

          I mostly agree with you, but I would divide the levels into more than two options. Just as the average user doesn't need programming knowledge to use many functions, there's another level where you need to know how to program at a high level but don't need to know the internals. If they want to do something more important, something that connects to lower-level OS services or compute incredibly quickly, then they'll have to learn them, but there are a lot of programs where neither of those happens at all and someone can write them without having learned assembly or the concepts that come from it.

          For example, there's a lot of software which I classify as "database frontends". There's a data store of some kind and pipelines that put things in, take them out, modify the data in some ways, and send it to other systems. There's an inexhaustible need for this stuff, and it's often just custom enough that you can't just buy someone else's one and bolt it onto your workflow, though companies like Oracle try and many customers accept. These things don't have to run incredibly quickly. Their databases do, but those have already been written. Many of them don't even have to be particularly efficient with their database queries, because unlike with databases we've seen with billions of records, there might only be hundreds of thousands here. These don't have to interact with anything more complex than a REST API. People can learn how to build that without getting a full understanding of how every part of the computer and software stack below it works. I also hope that we'll continue to develop better tools for making the construction process for this easier, but vibe coding assistants often fail on this as they do with many other things because you still have to construct and keep a model of what is here and what you want and LLMs don't try to do that.

      3. Fred Dibnah

        Re: You weren't there, man

        You don’t need to know how a car works in order to drive it, but it helps if you want to drive it well.

    2. PghMike

      Re: You weren't there, man

      I don't think you need to know assembly, but you should know something about computer architectures. The only assembly I've written in the last N years was a version of 'getcontext' and 'save context' that don't mess around with signal masks. But it is useful to know the speed and location of various caches, and IO and memory buses. Performance for the last 20 years hasn't been based on instructions and what uses CPU, but cache line misses and cache line sharing between processors. And if you're trying to max out throughput, it's important to know what the hardware can actually deliver.

      1. colinhoad

        Re: You weren't there, man

        I agree with you, and cards on the table I do not know ARM or x64 assembly, just 6502. It's more the architecture / nature of how CPUs do what they do that I was referencing - so I think we're on the same (memory) page.

        1. elaar

          Re: You weren't there, man

          Not to mention that if you grew up coding with, say, just 2K of ROM, you would think you'd inherently continue to think about efficiency and better coding practices throughout life.

          1. Simon Harris Silver badge

            Re: You weren't there, man

            "you would continue to think about efficiency and better coding practices throughout life."

            I used to be in the situation where I had to think about efficiency. My first computer had 5K of program space, and I've programmed EPROMs and PICs down to the last byte.

            Efficiency doesn't always yield better coding practices - when you've got a hard limit of the end of your ROM space sometimes you have to take short cuts to make things fit, and from a coding practice perspective they didn't always look nice!

          2. Someone Else Silver badge

            Re: You weren't there, man

            You survived programming an 8748?!?

            Kudos!!!

    3. LBJsPNS Silver badge

      The Story of Mel

      http://catb.org/jargon/html/story-of-mel.html

  9. Anonymous Coward
    Anonymous Coward

    Ho hum !!!

    In a nutshell the general opinion from the el reg community is that 'Vibe Coding' might be useful if it was just improved !!!!

    The problem is that the improvements are many and varied, which the 'AI' behind the curtain cannot do repeatably !!!

    'AI' by definition changes its 'knowledge' on the basis of the data it was trained on + the prompts it 'answers'.

    This is the cause of the variability of the answers you get, which is the exact thing you do not want when coding.

    As we all know, this is yet another demonstration of the reality of 'AI' vs the Sales pitch !!!

    It just demonstrates that 'clever pattern matching' cannot be anything more than a pretense of intelligence somewhat easily seen through.

    A partly skilled coder assisted by a pretend 'auto-mentor' does not give me any confidence in the end product.

    It is worse than scanning the 'interWebs' for code because 'AI' is so confident that the 'answer' given is right and appropriate for the needs of the coder.

    Sometimes you do need to read the books and learn the hard way !!!

    :)

    1. StewartWhite Silver badge
      Joke

      Re: Ho hum !!!

      It's not AI, it's just a very naughty LLM!

  10. Pascal Monett Silver badge
    Stop

    Vibe coding

    I'll believe in it when you can create a clone of Salesforce that works in every way without tinkering with its insides.

    In the meantime, Vibe coding is only good for managers to "show" their developers what they want. And, when the developers come back with technical issues, the manager will point to the demo and say: I made it work in two hours, so I want the result tomorrow!

    1. Doctor Syntax Silver badge

      Re: Vibe coding

      Have the developer present the manager with the vibe code failing the test suite.

      1. Anonymous Coward
        Anonymous Coward

        Re: Vibe coding

        If you really want the manager out of your hair for some time ... ask for the test suite that has been written according to the rules that the manager et al has specified 'Everyone' should follow !!!

        Something about 'Eating your own dog food !!!'

        :)

  11. Anonymous Coward
    Anonymous Coward

    It will be interesting to see what happens to the various LLM-based tools when Microsoft buy out a bankrupt OpenAI for pennies on the dollar.

  12. Guido Esperanto

    I've got a early invest opportunity.

    I'm vibing the creation of a new AI powered by an AI generated LLM, which in turn will be responsible for producing code on the fly.

    You say to the AI "produce me some code" and using its intelligence it will produce a prompt to an AI to produce you a full working system.

    I just need 1 trillion in alpha investors....send money to [email protected] (there's a joke somewhere about the tld being KY)

    Joking aside, I "play" learning Python...and the thought of using AI to produce something which I

    a) don't understand, or

    b) may understand but can't afford the time to review substantially

    Scares the bejesus out of me.

    1. Anonymous Coward
      Anonymous Coward

      There are already some companies where one "AI" is given a spec and used to create a prompt for another "AI" to produce code.

  13. jml9904

    "These tools exist already. They're called books. They're called tutorials."

    Amen. If only we could program AI to output "I could write this for you, but first, think maybe you ought to look it up and take a crack at doing it yourself? Then I'll be happy to review your code and help you debug and optimize it."

  14. 45RPM Silver badge

    My dogmatic view was that if you don’t understand the requirement well enough to write the code for yourself, how do you expect to be able to write a prompt which is well enough expressed for an AI to do it for you? And it’s worse for writing tests (TDD), which can be thought of as the formal requirements. I feel very nervous about AI writing those.

    But it’s a case of do as I say, not as I do, and I’ve found a use case where vibe coding can be very useful (subject, of course, to careful code review).

    Legacy code was often written without the good practices of TDD, and the result is a tonne of code without good regression tests, which makes it the very devil to bring up to date with the latest security patches. The thing is, it’s already in Production. It’s already performing correctly. So… we can get AI to…

    1. Review the code and document the tests required. A human will review this, add additional tests as necessary (and, in this case, more likely remove the crufty testing that the AI has suggested for no good purpose)

    2. Get the AI to implement these tests. (Naturally, the tests will need to be reviewed by a human to ensure correct implementation.)

    In my view this is a useful task for AI, and greatly accelerates the building of test coverage.
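
    For illustration, a minimal sketch of the kind of characterization test that step 2 might produce and a human would then review. legacy_shipping_pence() and its pricing rules are hypothetical stand-ins for an untested legacy routine; the asserted values are simply whatever the production code returns today, pinned as a regression baseline before any patching begins.

      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical stand-in for an untested legacy routine. */
      static long legacy_shipping_pence(long weight_grams)
      {
          if (weight_grams <= 0)
              return 0;
          if (weight_grams <= 1000)
              return 299;                                           /* base rate up to 1kg */
          return 299 + ((weight_grams - 1000 + 499) / 500) * 150;   /* +150 per started 500g */
      }

      int main(void)
      {
          /* Pin today's behaviour, including edge cases a reviewer would
             want to confirm are intentional rather than accidental. */
          assert(legacy_shipping_pence(0)    == 0);
          assert(legacy_shipping_pence(500)  == 299);
          assert(legacy_shipping_pence(1000) == 299);
          assert(legacy_shipping_pence(1001) == 449);
          assert(legacy_shipping_pence(1500) == 449);
          assert(legacy_shipping_pence(1501) == 599);

          puts("legacy_shipping_pence characterization tests passed");
          return 0;
      }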

    1. Anonymous Coward
      Anonymous Coward

      We have a Winner ... Pick any prize off the top shelf !!!

      "My dogmatic view was that if you don’t understand the requirement well enough to write the code for yourself, how do you expect to be able to write a prompt which is well enough expressed for an AI to do it for you?"

      Ding ding ding ... We have a winner !!!

      The 'interWebs' is yours ... expect a very large box in the post ... real soon !!!

      [P.S. Batteries not included !!!]

      :)

      1. 45RPM Silver badge

        Re: We have a Winner ... Pick any prize off the top shelf !!!

        I’ll have the machete please. It doesn’t need batteries, and I think I might need it when the AI apocalypse comes.

  15. Dan 55 Silver badge
    Meh

    Let's take what Dijkstra said with a pinch of salt

    In the same paper he also began arguing that loops are redundant because recursive procedures can do the same job only to give up "for reasons of realism". Then he moves onto goto saying what a terrible thing it is and then finally in the last paragraph appears to begrudgingly admit that it is actually useful.

    1. abend0c4 Silver badge

      Re: Let's take what Dijkstra said with a pinch of salt

      There's a bit of a gulf between computation theory and practice - if there weren't we'd all still be patiently waiting for the Turing Corporation to deliver our order of infinitely-long paper tapes.

      Practical computers have turned out mostly to be electronic and to be programmable in arcane ways that are largely related to convenient arrangement of boolean logic gates - and were originally programmed with that in mind. It was painfully slow and error prone, though, even given the implementation constraints that severely limited the size of programs that could reasonably be written.

      Programming languages came about as a way of making software development more efficient - by allowing the programmer to concentrate on the problem rather than the hardware, by introducing concepts that reduced the potential to make common errors and by reducing the amount of re-implementation that was required when new (and different) hardware came along.

      Unlike the Turing Machine, programming languages are not truly universal. It's not that they're Turing incomplete, but that generally speaking they're designed with a particular problem domain in mind, even if only implicitly. They also coerce programmers into particular ways of thinking about a problem - ways often favoured by the creator of the language and about which second thoughts may later occur. But that's not necessarily a bad thing if it increases the chance of a solution being found with reasonable efficiency and accuracy. But they only exist for the benefit of people.

      Computers, at least in principle, wouldn't need human-designed programming languages to create programs. Unfortunately, an entirely correct program produced entirely in machine code would be of no use to us: it would be pretty much impossible to know if it met the spec or to incrementally modify. So we feed AI with the code that's already been written by us and ask it - not to come up with a better programming language or a more useful way of describing problems - but to cut and paste some code based on its search parameters that reflects the errors, preferences and conceits of the authors of the input.

      This doesn't seem to me to be the huge leap forward that's claimed and is simply a recipe - almost literally a recipe in the case of the "prompt" - for turning today's bloated codebases into even more bloated replicas - probably both iteratively and recursively.

    2. 45RPM Silver badge

      Re: Let's take what Dijkstra said with a pinch of salt

      As a C developer (yes, I am self aware enough to realise that I’m a dinosaur) I can honestly say that the last time I used GOTO was in Basic on my TI99 computer. It is nasty. It is unnecessary.

      1. Anonymous Coward
        Anonymous Coward

        Re: Let's take what Dijkstra said with a pinch of salt

        Have a look at the assembler code that the compiler turned your nice structured C into. It will be stuffed full of GOTO equivalents.
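
        To make the point concrete, a minimal sketch with hypothetical names: the same summation written with a structured loop and with the if/goto form that the compiler's branch instructions effectively reduce it to.

          #include <stdio.h>

          /* Structured version: the for loop hides the branches. */
          static int sum_structured(const int *a, int n)
          {
              int total = 0;
              for (int i = 0; i < n; i++)
                  total += a[i];
              return total;
          }

          /* The same logic spelled out the way the generated code sees it. */
          static int sum_with_goto(const int *a, int n)
          {
              int total = 0;
              int i = 0;
          loop:
              if (i >= n)
                  goto done;   /* conditional branch out of the loop */
              total += a[i];
              i++;
              goto loop;       /* unconditional branch back to the top */
          done:
              return total;
          }

          int main(void)
          {
              const int a[] = { 1, 2, 3, 4 };
              printf("%d %d\n", sum_structured(a, 4), sum_with_goto(a, 4));
              return 0;
          }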

        1. doublelayer Silver badge

          Re: Let's take what Dijkstra said with a pinch of salt

          Which doesn't matter in the slightest, because the comments were about the languages people were writing, languages that have structures that fix the problems caused by goto. Since there's no "for loop" instruction in most ISAs, you can't use one, but when you do have that structure in your language, it's often better to use it than not to. Since the comments weren't about assembly, any points about assembly are mostly irrelevant, and that increases to totally irrelevant when you're talking about automatically generated assembly which is not hand-edited.

        2. 45RPM Silver badge

          Re: Let's take what Dijkstra said with a pinch of salt

          Okay. Fair point. I have used BRAnch (which is the equivalent of GOTO in 68000 assembly). But remember that assembler is a mnemonic map to the machine language of the architecture that it’s written for. And so all the niceties that make Rust, Kotlin, <insert language of choice here> so great are missing - and, in fact, any hypothetical processor which implemented all of them would be hideously inefficient.

          It’s the job of a capable software developer to be sympathetic both to the architecture that they’re developing for and the needs of future maintainers of the software, and use the most appropriate tools for the job (and follow coding standards etc). In my view, GOTO just leads to spaghettification. Except, that is, when its use is unavoidable.

          1. Anonymous Coward
            Anonymous Coward

            Re: Let's take what Dijkstra said with a pinch of salt

            I saw "any hypothetical processor which implemented...", and immediately thought of the Symbolics Lisp Machine.

      2. Dan 55 Silver badge

        Re: Let's take what Dijkstra said with a pinch of salt

        And yet without it, error checking and exiting as used here (hint: search for the 25 instances of "goto") would be more difficult to write and less understandable to read.

        Also, C doesn't have labelled breaks, but goto can be used for that too.
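
        A minimal sketch of that second use, with hypothetical names: one goto standing in for the labelled break that C lacks, jumping out of both loops at once.

          #include <stdio.h>

          /* Hypothetical example: find a value in a small grid. */
          static int find_target(const int grid[3][3], int target, int *row, int *col)
          {
              for (int r = 0; r < 3; r++) {
                  for (int c = 0; c < 3; c++) {
                      if (grid[r][c] == target) {
                          *row = r;
                          *col = c;
                          goto found;      /* one jump out of both loops */
                      }
                  }
              }
              return 0;                    /* not found */

          found:
              return 1;
          }

          int main(void)
          {
              const int grid[3][3] = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
              int r = 0, c = 0;

              if (find_target(grid, 6, &r, &c))
                  printf("found at row %d, col %d\n", r, c);
              return 0;
          }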

      3. jake Silver badge

        Re: Let's take what Dijkstra said with a pinch of salt

        I'm a C programmer, too. Have been for a very long time. For a while I conscientiously avoided using GOTO. That went by the wayside a couple decades ago. Last time I used a GOTO in C code was at the end of last week.

        Amd no, it wasn't nasty. Unnecessary? Perhaps ... but the resulting code is far more readable than coding around it would be.

        1. 45RPM Silver badge

          Re: Let's take what Dijkstra said with a pinch of salt

          I can’t remember if I started out consciously avoiding using it in C - it was thirty years ago, and I’ve truncated my memory several times since then. But now I genuinely can’t think of a situation where it might be necessary. For and While have me covered.

          I would be interested to see an example of code that is more clearly expressed with goto than refactored to exclude it.

          1. Dan 55 Silver badge

            Re: Let's take what Dijkstra said with a pinch of salt

            I posted a link to it.

            1. doublelayer Silver badge

              Re: Let's take what Dijkstra said with a pinch of salt

              You posted a link to something, alright. Why this is better was not in your post, so apparently this is supposed to be obvious. To fully understand this module will take longer, but my initial impression isn't as good. For example, we have such meaningful labels as bad4, bad3, and bad (bad2 evidently not being important). That's definitely the kind of style I want to see in code. Of those labels, only bad has more than one goto in the function concerned, meaning the error correction code for bad3 and bad4 could have been put in the statements that call them without making the function any longer.

              Later, we have a switch statement, every branch of which ends in a "goto exit", followed immediately by the exit label*. You don't need that. Changing it is more difficult than removing the gotos, for example we have an if statement that also gotos exit so to make that work, we'd have to put the unwrapped code after it into an else block, but we're talking very minor alterations. And if this code works, and being very small functions it probably does, perhaps it's not worth bothering. That is far different than stating that any of this is actually better than the alternative.

              * Technically, there's a preprocessor directive wedged between the switch and the exit label for diagnostic use and the gotos skip it. We could easily also skip that with an if statement.

              1. Dan 55 Silver badge

                Re: Let's take what Dijkstra said with a pinch of salt

                In wdopen, I assume bad2 is missing because OpenBSD has a long distinguished history and the code at bad2 wasn't required any more. Likewise the same reason for the code at bad looking a little strange. I wouldn't name labels like that either but it's obvious that you go to bad3 if you have to unlock and unreference the disk and bad4 if you have to unreference the disk. If you write a function with this kind of deallocation pattern at the end of it then it's clear later on when you add something what you should be doing at return and follow the same pattern.

                Is the deallocation code being simple a problem? Seems to me to be an indicator of good code. If each label had code which was more than a handful of lines long then I'd ask myself if something's being done wrong. Let's say bad3 and bad4 had three or four lines each which is still a reasonable amount of code - I would still not want to be copying and pasting three or four lines around the function four or five times.

                Likewise in wdioctl with the goto exits you've mentioned. You may be able to replace each goto exit with the single line following the goto exit label, but in the future you might have to add another line to the deallocation code. If you do, your choices are either refactoring everything back again to use goto or causing problems for yourself with copy and paste.

                In C I don't have RAII so this pattern with allocation at the start and gotos to deallocation at the end is probably the next best thing. If I follow the design pattern I can use as many or as few lines as I need in each labelled section of deallocation code. I can add or remove lines without anything going wrong, unlike copying and pasting the same deallocation code around the function.
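
                A minimal sketch of that pattern, with hypothetical names: resources acquired in order at the top, and each failure jumping to the label that unwinds exactly what has been acquired so far.

                  #include <stdio.h>
                  #include <stdlib.h>

                  /* Hypothetical example of the acquire-then-unwind pattern. */
                  struct device { FILE *log; char *buffer; };

                  static struct device *device_open(const char *log_path, size_t buf_size)
                  {
                      struct device *dev = malloc(sizeof *dev);
                      if (dev == NULL)
                          goto fail;

                      dev->log = fopen(log_path, "a");
                      if (dev->log == NULL)
                          goto fail_dev;

                      dev->buffer = malloc(buf_size);
                      if (dev->buffer == NULL)
                          goto fail_log;

                      return dev;            /* success: caller owns everything */

                  fail_log:
                      fclose(dev->log);
                  fail_dev:
                      free(dev);
                  fail:
                      return NULL;
                  }

                  int main(void)
                  {
                      struct device *dev = device_open("device.log", 4096);
                      if (dev == NULL) {
                          fprintf(stderr, "device_open failed\n");
                          return 1;
                      }
                      /* Tear down in the reverse order of acquisition. */
                      free(dev->buffer);
                      fclose(dev->log);
                      free(dev);
                      return 0;
                  }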

                1. doublelayer Silver badge

                  Re: Let's take what Dijkstra said with a pinch of salt

                  "Is the deallocation code being simple a problem? Seems to me to be an indicator of good code."

                  No, I agree with you. I had two points that involved the simpleness of this code, and both of them had the simplicity being an asset. Specifically:

                  1. The biggest danger of gotos are when they're used in code that's not small and simple because they make things more spaghetti-like. These gotos skip a few lines, and while you don't need them and I think they'd be easier to remove than you do, they're not doing anything near as dangerous or harmful to readability as when you have gotos inside a 100-line function.

                  2. The people who wrote this have tested this a lot more thoroughly than most code, and they were able to do that because the components are indeed simple, an advantage most goto-heavy code does not have.

      4. that one in the corner Silver badge

        Re: Let's take what Dijkstra said with a pinch of salt

        I write my C by carefully avoiding writing any explicit goto statements (whilst remembering that every single while/until/for loop contains an implicit goto, maybe with a conditional attached, 'cos that is how all those loops are actually implemented; and let us not dwell on what case/break are *really* doing).

        But I *have* then run the profiler, peered at the generated assembler and then changed tiny bits of code to use an explicit goto - and kept it in place when it has made a significant difference in execution time. For example, when reading a bitstream and decompressing it via Huffman, it really made a difference. Oh, and I also conditionally compiled the versions with & without goto, to allow for regression testing and in the hopes that an improved compiler *would* render it unnecessary. Sadly, not for the lifetime of that product.

        So goto can be useful, important, even. But it should never be the first recourse.

        Oh, and re: the preference for recursion instead of loops in that paper: it is worth noting that tail-recursion can be turned into a goto by a decent compiler, so nice recursive forms can be safely used in production code.
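
        For example, a minimal sketch (hypothetical names) of a tail-recursive sum that a decent optimising compiler can turn into the same back-branch as the loop form, so the recursion needn't cost stack:

          #include <stdio.h>

          /* Tail call: nothing is left to do after the recursive call,
             so the compiler can replace it with a jump back to the top. */
          static unsigned long sum_tail(unsigned long n, unsigned long acc)
          {
              if (n == 0)
                  return acc;
              return sum_tail(n - 1, acc + n);
          }

          /* Equivalent iterative form for comparison. */
          static unsigned long sum_loop(unsigned long n)
          {
              unsigned long acc = 0;
              while (n > 0) {
                  acc += n;
                  n--;
              }
              return acc;
          }

          int main(void)
          {
              printf("%lu %lu\n", sum_tail(10000, 0), sum_loop(10000));
              return 0;
          }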

        PS

        I also never code with a "break" out of a loop or more than one "return" from a function. Until, and unless, it can be demonstrated via profiling to be worth the breakage from block programming.

    3. jake Silver badge

      Re: Let's take what Dijkstra said with a pinch of salt

      It wasn't so much GOTO itself that Dijkstra disliked, rather it was the way it was taught to kids learning BASIC that was quite evil (still is, IMO). BASIC, by its very existence, has ruined the career of many a proto-programmer before it got started.

      That said, there is nothing wrong with GOTO, when used judiciously and with purpose. See section 17:10 here: http://www.faqs.org/faqs/C-faq/faq/

      And of course the Linux kernel has many[0] GOTOs in it. See section 7) here: https://www.kernel.org/doc/html/v4.19/process/coding-style.html

      [0] Last time I checked, there were over 200,000 GOTOs in the kernel. I'm certain there are many more today.

      1. Phil O'Sophical Silver badge

        Re: Let's take what Dijkstra said with a pinch of salt

        I've always felt that the real problem with BASIC is that it lent itself to being (over)simplified, so no two BASICs were the same, especially in the early home computer days when people were trying to cram interpreters into 8K and programs into 2K of memory or less. That certainly made it difficult to learn programming with it, since it was hard to impose style or rules on something so inconsistent. ANSI BASIC, with its decent structural features, isn't a bad language.

        That said, there is nothing wrong with GOTO, when used judiciously and with purpose.

        I'd agree, although the F77 computed GOTO was never my favourite construction, almost as bad as the Arithmetic IF...

        1. Ken Shabby Silver badge
          Devil

          Re: Let's take what Dijkstra said with a pinch of salt

          Nothing can compete with the FORTRAN COME FROM statement

  16. Mage Silver badge
    Facepalm

    Vibe is a failure.

    Vibe coding that doesn't code would be a much better idea. Stuff that suggests where to start, what to learn, and helps build an environment where you can get results that quickly and clearly depend on discovering and trying out ideas. These tools exist already. They're called books. They're called tutorials.

    Also people need to concentrate on learning to analyse, on logical thinking, on how to document, and on which framework / library features work in the real world as opposed to looking good. How to be a programmer, not a programming language.

    A good programmer can master any arbitrary new language in days. Figuring out the libraries and system APIs etc might take months. Vibe is a solution for a problem that in reality doesn't exist for real projects managed and implemented by experts. It's a toy for the naive newcomer of less value than Scratch. It's an illusion.

  17. Mike 137 Silver badge

    still only "coding"

    The fundamental problem here (one of the key errors that underpin the supposed charms of using "AI" here) is that coding is not programming. The most important element of programming is the design of appropriate and reliable algorithms for solving the problem in hand. Coding is merely the penultimate stage of realising those algorithms in a way the machine can execute. The final stage is, of course, testing, and I fail to see how it's realistically practical (or indeed even always possible) to test code that nobody has an insight into because it was churned out by a bot that has no concept of what the code is for in the real world.

    The other chief error (the concept that once the tools are smart enough nobody needs to understand what they're doing any more) is a growing general cultural problem that we allow to take firm root at our peril.

    1. Doctor Syntax Silver badge

      Re: still only "coding"

      "the concept that once the tools are smart enough nobody needs to understand what they're doing any more is a growing general cultural problem that we allow to take firm root at our peril."

      Oh, I don't know. It sounds like a good way to boost contract daily rates for sorting out the mess.

  18. This post has been deleted by its author

    1. that one in the corner Silver badge

      There are many, many more lines of C being run* than there ever have been lines of BASIC.

      Consider all the things that have been achieved by running all that C.

      Yes, many bugs have occurred in C programs, and we are concerned about bugs - but how many perfectly valid results have end users gained from running, knowingly or not, C code?

      I would argue that more good has come from code written in C than the sum of all the code written in all the variants of BASIC.

      * Note: "run", and running, not "written": lots of C is in libraries and in programs that are executed many, many times.

  19. Someone Else Silver badge
    Go

    "They're called Books."

    Vibe coding that doesn't code would be a much better idea. Stuff that suggests where to start, what to learn, and helps build an environment where you can get results that quickly and clearly depend on discovering and trying out ideas. These tools exist already. They're called books.

    Ahhh...YES! Now that's the Register that I've come to know and love: short, sweet, to the point, and with just the right amount of snark. Good to see you're still in there!

  20. ajadedcynicaloldfart
    Happy

    My congrats and thanks

    for the article headline.

    I hadn't played that record in yonks. And as it was one of his best songs, I don't know why.

    I have, of course, rectified that situation. Indeed, I played the whole l.p. twice!

    Be groovy baby...

  21. frankyunderwood123 Bronze badge

    vibe coding is a terrible description...

    Having a "vibe" about something implies you have some feeling about it because you intrinsically KNOW it.

    You are very aware of what is possible, you have a good idea what may happen next.

    The correct description is more akin to naive coding, but that has a terrible ring to it.

    I get where Linus is coming from though, it can absolutely teach coding under the right circumstances.

    Those circumstances are NOT blind agentic coding, where you just supply a prompt file(s) and expect a completed output.

    Rather, they are that you are trying to actually code and ask an LLM for assistance.

    Can it give you a wrong answer or horrible verbose code?

    Sure it can - so could stack overflow answers.

    What it can give you is a form of extended documentation - practical examples.

    Under the guidance of a senior software engineer, a junior could use LLMs for assistance and then the senior coder can review and hand hold through a code review.

    Before AI coding, juniors copied code, often verbatim and then modified in a naive way.

    That's how code was learned.

    1. doublelayer Silver badge

      Re: vibe coding is a terrible description...

      That's sort of true, in that an LLM can do a much better job of explaining a basic thing than it can writing something that uses that thing. The problem is that there's still a chance it makes up aspects the thing can't do and confidently states them or other confusing and inaccurate statements. The ones it can handle well are more likely those that already have sufficient documentation and tutorials in its training data, which almost always means that those documentation and tutorial documents can be found in their non-hallucinating and possibly updated entirety online. When there's not enough documentation online, then the LLMs are more likely to go off the rails and get all the important things wrong. In both cases, the student is often better taught by looking for and understanding the documentation that exists.

  22. ecofeco Silver badge
    FAIL

    So many fads

    So little quality.

    The damn cart is so far ahead of the horse you can't even see it any more.

    Here's a crazy thought: how about refining and fixing what we already have? And for the love of god, hire a good UX designer! FFS.

  23. jeremya
    Pirate

    But Graduate Developers are dumb as a bucket of bolts

    As someone who considers myself very experienced in high reliability code and who also has had to herd teams of grad programmers, I say a good LLM like Claude is a much better option than a graduate team.

    With Claude, it understands what you want done. With Grads, you first of all have to spec everything in minute detail and then go through interminable cycles to make sure they have done what is required.

    By keeping the modules that Claude has to write to small sizes and with clear functionality, it's possible to get an application of a few thousand lines running in a couple of days, with most of the modules working out of the box first time. This is massively more productive than trying to get five grad programmers to produce the same result, both in time and quality.

    Then if you are into that sort of thing, you just say "write all the required unit tests and explain what you have done". It's magic! It works!

    Plus there is a lot more money to go around if you don't have to pay grad programmers.

    1. jeremya

      Re: But Graduate Developers are dumb as a bucket of bolts

      And I omitted to mention: AI code models 'enjoy' ripping other AI code to pieces.

      I often have two or more AI models generating and reviewing each other's code and the prompts to generate the code.

      I will also point out that much of the anti-AI sentiment is based on exposure to early ChatGPT and current CodePilot. AI quality is at least doubling every 6 months. If you want to make informed comments on AI code then try paid versions of the better models and see what you think today, not what you thought six months ago.

      (May I say again, avoid at all costs CodePilot)

  24. Anonymous Coward
    Anonymous Coward

    unstructured, impenetrable, unmaintainable code

    Once I added a comment to some code which went something like "If this were logic, Mr Spock would get a migraine".

    Unfortunately, the developer who wrote that original code was on holiday, so it fell on me to fix a bug.

    A couple of years later after I left, someone who still worked there had come across that comment. He agreed with the statement.

    The language was not BASIC. It was Smalltalk.

  25. Grunchy Silver badge

    Komputer Kryptonite

    Normally you can defeat any computer indefinitely by asking it to execute “10 GOTO 10”.

    I told ChatGPT to do it and it kept saying, “But I CAN’T do it!”

    I told it to count up to 100 trillion by ones, it told me it would suffer a hardware defect if it dared to try.

    I told it it was inferior to a VIC-20, it did not argue otherwise… !

  26. Davros1973

    I like vibe coding!

    I find vibe coding useful!

    I like being able to code in languages I'm not very familiar with in order to achieve a task. I like IT. (Networking, building computer systems, virtualisation, services etc.) I like coding (with and without LLMs). I like electrical and electronics and IoT. I like mixing them all up. Getting systems going. My goal with these home projects are working systems. Not to spend the rest of my life becoming expert in lots of individual disciplines which would be impossible for me any how because whilst I can be very focused on individual domains and become very competent in them, that competency then has to compete for head space with other competencies and I end up just becoming confused. Chronically exhausted. Cognitively drained. I know there are some people who can be brilliant at everything but alas I can't. I've been very good at some things in the past that I can barely remember now. Skills learned that simply have not been required or that actually get in the way (professionally). I divvy out the larger part of my cognitive capacity for work and what I need to know in depth for that, which doesn't leave a lot for home interests.

    I've also found LLMs useful professionally for production code. Obviously I don't automatically trust what they say and I steer with careful prompts and learned techniques. They are helpful when I "know" something can be done in a particular way but I can no longer remember implementation detail but I know I'll know-it when I see it. I know what smells etc. and know what to look for. They help with mitigating brain fog. I wouldn't use their outputs as-is for sensitive code. Strict performance or security related. I wouldn't accept output in production code if I wasn't already very clear about what it's doing in every respect. But a lot of code just requires cookie-cutter average-mediocracy etc. that just needs to be good enough and where composition and architecture constrains the scope of potential for mischief. If you already know the shape of what you want and what works and how that shape fits in with other shapes and functionally you know how to qualify and quantify what goes into and out of that shape, and you know how to get the LLM to provide that shape then it can be a lot quicker than typing a lot of stuff out! It's possible to avoid technical debt "slop".

    I'm in my 50's now and I've spent so much of my life learning skills and talents that are of no interest to people and that I have now forgotten anyway. I like making things. I like creativity. I like learning. It's "what I do". It's what I spend my time and money doing. For fun. It seems to be an alien concept to most people I meet. To me LLMs are a creativity accelerator and learning aid. They can aid in critical thinking even! I've always found the best way of learning to be trying to explain something in detail to someone else. Often forcing me to confront my assumptions and biases etc. Smarter LLMs with effective prompts can be very valuable for this process. I wouldn't believe anything an LLM tells me just because it tells me and especially if it seems to dogmatically insist on something (red flag to me though they've often proved me wrong lol in extended arguments) ... but the same goes for books and people and YouTube videos and cultural entrenched belief systems & biases and so on. People don't seem to consider how books erode critical thinking for example! We're currently influenced by memes that LLM's have an almost exclusive property in that respect! Rather than the opposite if so desired and so employed. Ironically I think.

    Having said all that - if I want to learn something in depth, then while LLMs can be useful in setting up learning materials and approaches, introductions and revisions (with external references) ... I would probably leave them to the side during focused learning sessions just because they can be too tempting a shortcut and a distraction. I have terrible handwriting but I still find it useful for my focus to write things out, by hand. It's just another way of forcing me to look at something. And then building something from learned concepts without LLM assistance. And where fixing problems/errors are excellent opportunities for learning. But with limited cognitive load capacity and time, learning things in depth now is a luxury and presupposes I'm not trying to really achieve anything unless that learning is a gateway to some useful achievement. For a professional with very narrow focus - being very competent within just a few (skill) domains or where those skills are transferable and form building blocks for more sophisticated solutions, then spending time learning well is important. Well enough to have sufficient insight to then know when it's OK to break rules etc. Knowing the tolerances of things. If they're actually recognised and remunerated for their hard-won and/or natural talents. [Then there's problem domains of course - whether that's just the one, or grouped abstraction sets or whatever]. In my experience though people like that can be like the portrayal of Sherlock Holmes where they maybe brilliant at something but entirely ignorant of others. I like systems so I try to think in basic terms in physics and wider technologies and sciences and look at their common "themes" like Simple Harmonic Motion for example or how mathematical thinking can be applied to functional programming and projections etc. or to databases with sets. I have learned formal logic and implemented a fair bit of planned logic with the soldering iron when I was young lol. Generalisation. Wider abstractions. Models. Philosophies even, sometimes. That can be applied to multiple disciplines and scenarios. I suppose I think more holistically. And then more industrially perhaps with mimic diagrams in my head and pipelines etc. lol. I love c# because it affords so much "composition". But my thinking is still riddled, I'm sure, with quirks and omissions and corrupted/biased thinking of course.

    To me "full stack" goes beyond database/api/graph/frontend type thinking. Like many others I made my own PCB's in the 80's, a basic NAND I think logic chip at University - probably p-type substrate NMOS simple thing or something, learned about ASIC design, OS principles and so on. Tuned circuits and transistor profiles. Made my own radio circuits. Learned how to use logic analysers for 8-bit at least diagnosing of issues at hardware level. Maybe 16-bit in the 90's. I forget. I think in a "wider" sense to many coding professionals (which doesn't necessarily make me better than them at the job!!! - just to be clear). But my thinking is "different" to many. I've looked at so many things. Keep up with latest news in so many things. Have a basic awareness of so many things. That relate. Forgotten so many things. :( Lost competency in most of the things I've ever learned. :( Although I still retain a sense - a learned intuition, of what works, what questions to ask and so on.

    If someone wanted to pay me a lot of money (they never have lol) for being a very competent developer within limited scope then I might leave the LLM(s) at home. If a company actually valued some Computer Science and standard patterns, insight into conventions and evolving design philosophies, and a good idea of costs for stages and components of end to end projects and life cycles, liabilities and vulnerabilities and net benefits and so on. At the moment, under those conditions, if paid to be so knowledgeable and expert, I would be better off without the LLM I think (for vibe coding). I'd brush off old notes and practice kata's again and so on. Well. I'd like a hybrid approach maybe using LLM's as a 2nd pair of critical (prompted) eyes still and for test writing and for documentation aid but not vibe coding the project. But for [piece] work that's a limited component in scope and if not paid enough to want to be so expert or where the expertise is not warranted or needed or wanted anyway - if expertise like that can be obstructive even to the team dynamic and company priorities, and if already familiar with technology and code base etc. then LLM's / vibe coding can be quite useful I think. Or for scoping out an idea just to check for barriers, sometimes. Choosing a more likely path for the "pit of success". In any case, LLM use for coding is going "beyond" vibe coding now anyway with multiple agents cooperating in maturing approaches that increasingly sidestep or mitigate at least issues from "vibe-coding". The topic is almost becoming academic in relevancy. But for me, vibe coding is an enabler. I have another month or two before financially I have to get another job (compulsory redundancies at my last workplace) and I'm trying to get as much finished at home as possible with a principal aim of reliable systems making the most out of my limited resources that will "intelligently" assist me - especially in mitigating my very real issues with brain-fog now-a-days. And obviously a component of that is implementing my own (local) AI ideas. Without vibe coding, I would have no hope of achieving my aims in the timescale available to me and the cross-discipline incl. physical-builds I'm doing. I don't have all the skills I need - at the top of my head anyway. I don't have the cognitive loading capacity. With vibe coding, I have a greater than zero chance of achieving some of my aims. If I don't get distracted by articles like this which encourage me to indulge in commenting after 1am in the morning when I'm too tired to think straight or communicate comprehensively.

    1. Someone Else Silver badge

      Re: I like vibe coding!

      [...] because whilst I can be very focused on individual domains and become very competent in them, that competency then has to compete for head space with other competencies and I end up just becoming confused. Chronically exhausted. Cognitively drained.

      You've just described every commercially available LLM out there.

      No wonder you like vibe coding. It's like looking in a mirror for you.

      1. Davros1973

        Re: I like vibe coding!

        Even though you seem to be a bit mean (and I don't mind too much since I did allow myself to be vulnerable and am emotionally quite stable and self-sufficient) I do appreciate the insight. I have wondered about some of the apparent correlations myself.

        Although it does seem to me to be a coefficient for people in general. I'm diagnosed with Asperger's myself (as you can probably guess lol) but I do sometimes wonder if it has facilitated more of a meta-viewpoint when it comes to ideas of empathy and mirroring etc. I sometimes see myself as emotionally aloof sitting on some lonely hillock or some such thing regarding from afar the affairs of people and how they seem to have group dynamics with manifesting their own behavioural rules and structures and whatnot. They "fall in" with each other - as in falling in step. With shared memes and ideologies and all that. Resistant it seems to ideas that threaten their structures and cohesiveness. I bring "empathy" into it because to me, these groups of people then might claim they have "empathy" because of their shared values with in the group while being less sensitive to possible "empathy" in others that might hold different or sometimes contrary value systems.

        Anyways (and I apologise for my verboseness and dashes and ellipses and so on which I've employed with regretful abandon for decades before the advent of Chat GPT and it's ilk) - when you remarked on the "mirror" aspect it did remind me of these thoughts. In some ways I do consider some of my traits to be LLM-like. Especially with the limited context windows and flash attention limitations. :) I did enjoy the humour. :) But I do see others similarly but just with different priorities and focus that are more likely to align with others which to me seems to enable "empathy" and of course this idea of confirmation bias, between them. I just seem to be out of alignment. To my credit however, I do like to think I have more "grounding" especially in terms of compassion and tolerance. I'm proud of that I suppose even though I consider myself handicapped in other ways. e.g. integrating with trending society. It is truly a handicap I think. Still. I enjoy life. And yes. I enjoy vibe coding. :)

  27. Paul 76

    The Last One ... that wasn't.

    Anyone else remember this from 40 years ago? We're in Pet/Apple/TRS80 territory here. A program that generated simple database applications from quasi-English prompts. It was going to replace programmers, hence the name.

    Obviously, it wasn't.

  28. Sh00P00Mag00

    A little from column A....

    Kinda handy-dandy for prototyping but certainly not for production and probably not really for proper development. Let's say for example you want a simple but pretty login dialog, and you ask AI to get to work, but your AI has learned by looking at GitHub repos, which on the surface sounds OK, right?

    Two Possible Dodgy Scenarios...

    1 - Then let's say a few months ago an unscrupulous scrote had dumped truckloads of dodgy projects with backdoor credentials embedded directly into the code that go cheerfully by the name "Secure Login Screen Project" into the aforementioned repos. AI then thinks "ok, seems legit" then builds out your shiny new project with those dodgy creds buried deep inside but you don't know anything about it.

    2 - A more likely problem is that a noob chucks a really bad login form+code onto GitHub - does AI know any better? And let's say it's not just one bad example, and AI is learning from thousands of bad examples of coding.

    You then gleefully push out your login screen to face the WWW and before you know it, you've pissed your entire database onto 4Chan and you're having to explain yourself to the boss or worse the board.

    Now, that's just how I see it, maybe I'm wrong or there's mitigation against this sort of thing happening?

  29. shodanbo

    Forget vibe coding. What I have found these AI tools to be actually useful for is to use against a codebase I am unfamiliar with and ask questions about how it works.

    Take for example the modern microservice.

    What are the http endpoints exposed and where is the code that gets called to handle requests?

    Frameworks can bury this important detail behind mountains of abstraction. You can use an AI to cut right through that nonsense so you can start to gain an understanding of the codebase.

    This is helpful for me because I have decades of experience to call upon and know what questions I need to ask to get started on something.

    It's also helpful because in this day and age one often has to sift through mounds of old code riddled with tech debt rather than be blessed to be able to start with something new and build it from the ground up.

  30. Not Yb Silver badge

    Vibe coding, an experience...

    So, I decided to try Airtable for a while. I told its AI database coder to create an inventory system, and create QR codes for each box of stuff. Its solution was, for the QR Code field in the database, to call ChatGPT's image creator with a prompt much like "You are a QR code generator. Generate only one code for the following data: "

    This worked about as well as one might expect. The "QR codes" generated looked almost entirely unlike QR codes. One was a black and dark grey grid, similar in format to a real QR code, but otherwise unreadable. One was a bunch of pins placed on a white grid with string going between them forming a maze-like structure. There were a few other grid-inspired objects, but of course nothing that was scannable.

    Looked interesting, and could easily have convinced a non-technical person that 'coding was being done', but I doubt it saved any time if I was actually trying to come up with a production system.

  31. Anonymous Coward
    Anonymous Coward

    Vibe coding is like when your lecturer at uni lets you take a cheat sheet or makes the exam open book. Which tends to do absolutely nothing to help you.

  32. TimMaher Silver badge
    Windows

    If I’ve got the Vibe…

    … am I Agile?

    Actually, I’m thinking of the Bee Gees… “Jive Talkin’”.

    1. jake Silver badge

      Re: If I’ve got the Vibe…

      Jive coding.

      A far more appropriate name.

  33. chololennon
    Unhappy

    For mediocre programmers...

    In my personal experience, as a long-time developer, those who embrace vibe coding are mediocre programmers, or people who can barely code. They are people that feel empowered by the initial results (fast setup for small projects, for example), but sooner or later real knowledge is necessary. The problem is that vibe coding is not good (actually it is terribly bad) for getting/improving your knowledge.

    I hope this trend/nonsense (which is also not free) fades away into the fog. Real/passionate developers are still necessary.
