AI superintelligence is a Silicon Valley fantasy, Ai2 researcher says

You want artificial general intelligence (AGI)? Current-day processors aren't powerful enough to make it happen and our ability to scale up may soon be coming to an end, argues well-known researcher Tim Dettmers. "The thinking around AGI and superintelligence is not just optimistic, but fundamentally flawed," the Allen …

  1. user555

    Agreed, but still wrong

    The problem isn't the lack of compute, it's the lack of science. We just don't know how to build an AGI.

    1. DS999 Silver badge

      Re: Agreed, but still wrong

      And more computational power was never going to be the answer to that. An AI based on LLMs will never be "intelligent". They think throwing more megawatts of compute and ingesting more data will magically take it to human level intelligence, or beyond.

      Which would be fine if all they were doing was wasting their own money, but when they severely distort the economy, especially the markets for electricity, water, and chips, it becomes everyone's problem. So the sooner reality slaps them in the face and they're forced to admit failure, the better.

    2. Ian Johnston Silver badge

      Re: Agreed, but still wrong

      We don't even know what intelligence is, which puts a bit of a hamper on building one. The only thing we can be sure of is that glorified autocomplete systems have absolutely no intelligence whatsoever.

      1. Anonymous Coward
        Anonymous Coward

        Re: Agreed, but still wrong

        ^ damper

        1. breakfast Silver badge

          Re: Agreed, but still wrong

          I'm sticking with OP here. The impediment is a wickerwork basket full of delicious festive goods.

      2. O'Reg Inalsin Silver badge

        Re: Agreed, but still wrong

        I would consider "the collective knowledge, intelligence, and creativity of all human societies, persisting beyond any one human life" a definition of "super-intelligence" worthy of debate, despite abundant evidence to the contrary.

      3. MachDiamond Silver badge

        Re: Agreed, but still wrong

        "We don't even know what intelligence is,"

        We don't know in detail how the brain works. As a photographer, I'm interested in trying to get better by understanding as much as possible about just how we perceive the visual world and that's still only known in broad terms. There's no model of that to run on a computer at many times a human's speed of perception.

        Intelligence is far more than parsing a store of knowledge; it's having the experience to weigh different values using that data, and those weightings change from situation to situation. I don't see how we can build a machine that will act in our best interests (more importantly, mine). By logic we should not risk the lives of many to find a lost hiker who was silly enough to go out alone without even checking a weather forecast. We do risk those lives, and we take comfort in knowing that we might be saved if we commit a big stupid. How do we program an AI to take an interest when a dozen miners are trapped underground by a collapse, or a child falls into an abandoned well?

      4. breakfast Silver badge

        Re: Agreed, but still wrong

        This is exactly correct, which means the problem isn't a lack of compute or a lack of science: it's a lack of philosophy.

        Some of the smartest minds of history have tried to figure out how intelligence works and what knowledge is - to create AGI we effectively need to answer the core questions of epistemology. That's simply not the kind of problem you can throw processors at until it magically answers itself.

      5. mevets

        Intelligence is very well defined.

        It is that thing the IQ test measures.

    3. Mage Silver badge
      Headmaster

      Re: don't know how to build an AGI.

      I argued in the late 1980s, when people said we needed more computer power, that the issue was design. If we knew how, we could have made a slow one. Computer "neural nets" are not the same as biological ones. Mis-named.

      1. DS999 Silver badge

        Re: don't know how to build an AGI.

        Yes, if it turned out 10 gigawatts was the magic number to turn neural nets, LLMs, or any other software-based solution we currently have into human-level intelligence, that would be kind of sad, given it takes only 10 watts for a human brain to do the same!

        1. Ken Shabby Silver badge
          Holmes

          Re: don't know how to build an AGI.

          Lightbulb moment

    4. sedregj
      Childcatcher

      Re: Agreed, but still wrong

      "The problem isn't the lack of compute, it's the lack of science. We just don't know how to build an AGI."

      Very true, but that does not win arguments in the muggle world, which will insist on "well, just nerd harder". This argument demonstrates why the current efforts will never achieve AGI, because of the laws of physics, no matter how much cash you throw at it.

      Trump might want to restore basic science funding to give the nerd harder thing a chance.

      1. MachDiamond Silver badge

        Re: Agreed, but still wrong

        "Very true but that does not win arguments in the muggle world that will insist on "well just nerd harder". "

        It's like Mike's argument against Wyoming's position that a solution to Luna's food problem will present itself when needed. "We'll dig it out when we need it". Mike was right in saying that genius is where you find it and there's no knowing when it might appear. Making every physicist work on figuring out what gravity is and what can be done about it won't bring about an answer any faster than the point where somebody has an epiphany.

    5. shodanbo

      Re: Agreed, but still wrong

      We do know how to make GI though.

      And the technique required to get it started is fun!

  2. Anonymous Coward
    Anonymous Coward

    Probably true

    The Middle Kingdom's focus on applications of the AI we have today is a far more pragmatic approach with greater long-term viability. "The key value of AI is that it is useful and increases productivity," he wrote.

    We already have all the tools necessary to construct an impermeable digital prison, which is very likely to have been the ultimate goal of the elites, all along.

    China offers a blueprint in that respect.

  3. beast666 Silver badge

    "not because they are well founded but because they serve as a compelling narrative,"

    Indeed, this applies to many, if not all, current issues pushed on the masses by the mainstream media.

    1. Anonymous Coward
      Anonymous Coward

      Sorry, but you appear to have missed the "first post" position that so many of your intentionally glib trollposts depend upon to disrupt the discussion.

      Better luck next time, Mr. Beast.

  4. cyberdemon Silver badge
    Holmes

    See Icon

    What is being sold as so-called AI is little more than an average of observed human (and increasingly non-human) interactions/observations.

    If you ask it a well-defined, specific question, it will give a slightly different answer each time (not good, if you wanted a computer). And for an open-ended, creative question it will give different, yet eerily similar, answers (not good, if you wanted an artist).

    There is no reasoning, no logic, no original thought and indeed, no ghostly soul. It is a dark mirror that absorbs and imitates the 'soul' of whoever has the misadventure of interacting with it.

    Throwing more power at it, stacking more layers, is not going to change the fact that it is simply a pile of statistics ABOUT life, and not actual life.

    But for those who simply need a fast army of idiots who don't ask moral questions, it's great...

    Running a scam house? ChatGPT won't rat you out. Want to hook teenagers on mindless short-form crap videos? Sora won't get bored. Want to launch hack+ransom attacks against healthcare and charities? Claude doesn't care. Want to swing an election in a foreign country? Grok is here to help. Want to analyse all citizens to find out who is unhappy with this? Gemini won't question why. Want to control armed robots to exterminate them all? Peter Thiel is working on that.

    1. sedregj
      Gimp

      Re: See Icon

      Let's drop the AI moniker (divisive and bollocks) and look at "LLMs" instead.

      Please bear with me because I'm going to witter on as an engineer, and I think you are more humanities-focused.

      I use them as a tool, just as I do a hack saw or a slide rule (I have two) or some of my "make it smaller, deeper or more broken" devices: percussion tools - fencing maul, sledge hammer, lump hammer ... you get the idea.

      I bought a second-hand Nvidia A2000 with 16GB of RAM and popped it into a box at work (a Dell server) that generally acts as a fancy NAS for customer backups overnight, so it's bored during the day.

      With llama.cpp I can run a small LLM - 20B parameters or so - locally. It is quite surprising how much general knowledge even a small model can have. I'm mainly interested in programming, but I do get them to do English to Latin and vice versa, or English to German and vice versa. I've also asked it to explain physics ("tell me about the bernouiilllii equations" - with deliberate mucking about with spelling) and got a reasonable answer. Questions about small towns in Somerset get reasonable but rather generic answers.

      Bear in mind this is in a dataset that is around 16GB in size or around three DVDs - ie a pretty big encyclopedia that works quite fast and can sort of reason too.

      ChatGPT, Claude and co have much bigger datasets, and their models run to hundreds of billions and even trillions of parameters. They also have a lot of other machinery tacked on. At that scale of data, you might question the quality and even the provenance of the inputs. Let's put it this way - they ain't 100% Encyclopaedia Britannica. That said, neither am I.
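      The sizes involved follow from simple arithmetic on parameter count and quantisation level. A minimal sketch (the bits-per-weight figures for each quantisation name are my rough assumptions about typical GGUF schemes, not from the post):

      ```python
      # Back-of-envelope size of a model's weights at various precisions.
      def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
          """Approximate size of the weights alone, in decimal GB."""
          bytes_total = params_billions * 1e9 * bits_per_weight / 8
          return bytes_total / 1e9

      # Assumed typical bits-per-weight for common GGUF quantisation levels.
      for name, bits in [("FP16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.8)]:
          print(f"20B @ {name}: ~{model_size_gb(20, bits):.0f} GB")
      ```

      At roughly 5 bits per weight, a ~20B model lands around 12 GB, which is why it can fit on a 16GB card with room left over for the KV cache.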

      So, you can rail against the machine or not. Your last para did rather anthropomorphisise (how the blazes does anyone spell Greeklish correctly!) the beasties. For me they are a handy tool but they do need some care to use effectively.

      1. doublelayer Silver badge

        Re: See Icon

        I don't think they are entirely "humanities focused". Their objections include lots of problems when using them in scientific or engineering areas, and in fact most of the problems they name are worse in those areas. If you're asking for some poetry, a little variation from the same prompt is not going to hurt and may help a little. If you're trying to use this tool to solve a problem where answers are right or wrong, variation makes it much worse.

        I'm hoping that you successfully use this as a tool. Maybe you do. Unfortunately, I know many people who say they can, and what they definitely have is a tool that solves their problem very quickly - if the problem is that they have tasks to do, and solving it consists of being able to claim those tasks are now complete. Unfortunately, most people I know who like to use AI have quality problems, and some of them make those my problem by failing to care enough about the usefulness of what they produce. You can get lots of summaries that occasionally include completely wrong information. You can write lots of software that occasionally fails to even compile. Perhaps it is sometimes faster to correct the inaccuracies than to generate the whole thing from scratch, but I unfortunately have to deal more often with people who decide that it's always faster not to correct the inaccuracies at all.

    2. Grunchy Silver badge

      Re: See Icon

      “Want to hook teenagers on mindless short-form crap videos?”

      The addictive nature of television was explained back in 1985 (and it applies perfectly to modern YouTube videos) as “jolts,” or rapid changes in scenes, camera angles, action, etc. It even has a unit of measurement, the “jpm” or jolts-per-minute. It’s a measure of how far your attention span has shrunk.

      Sadly, one of the first places this was innovated was 1960s-era Sesame Street! The show consists of one short humorous skit after another, in which the attention children have to pay to each character or concept rarely lasts more than a few seconds, and in the most entertaining sequences rarely as much as one second.

      Book review here:

      https://www.cmreviews.ca/cm/cmarchive/vol14no2/revjolts.html

  5. Michael Strorm Silver badge

    Indeed. We're long past the time where improvements in the underlying technology were a major factor- the current Gen AI boom is now almost entirely being fuelled simply by spending exponentially increasing- and borderline unbelievable- amounts of money on hardware. It's already getting to ludicrous levels.

    Did you know that- according to some reports- the deal Sam Altman signed with Samsung and SK Hynix at the start of October was for potentially 40% of the entire world production of DRAM?

    That's not just ludicrous in itself, the amount of money needed to fund that must be even more so. I missed the significance of it at the time, but it would explain *exactly* why RAM has gone up so horribly in price. (*)

    From what I've read, OpenAI isn't even using all the RAM yet, is getting it in the form of RAW wafers(!) and it's not entirely clear how much this is driven by need and how much it's a paranoid attempt to retain a lead by choking off the supply of RAM their competitors require.

    That being the case would make the amount of money such a huge purchase would involve even more unbelievable and a massive risk. Even if OpenAI "wins" what they seem to think is a "winner takes all" scenario, they'd *still* need to make enough profit over the long term to get that ludicrous investment back- or at least continue to convince the market that they can do so.

    There's no clear sign of that yet, and as soon as the market stops believing in it, it could go very wrong, very fast. The whole current situation smacks of an incredibly risky bubble that could- and likely will- go wrong very rapidly and very badly. Ditto at the slightest sign that OpenAI might have trouble paying for the hardware they've committed to.

    When this bubble bursts- and I'm going to say when, not if- the problem is that it's going to be very bad for everyone else too, particularly in the technology sector.

    (*) It's only incredible luck that, after repeatedly putting it off, I finally got round to ordering the parts for my new PC- including RAM- on 12 October, just before the shit *really* hit the fan. I paid £110 for 2 x 16GB Corsair RAM modules. Exactly one month later (12 November), the price for the *exact* same SKU was £174. Six days later (18 Nov) it was £234. Another six days later (24 Nov) it was £331.49. Today that exact same RAM kit, from the same seller (Scan), costs £350.
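    For what it's worth, the rises in that footnote work out as follows (prices exactly as quoted above):

    ```python
    # Quoted price history for the same 2x16GB RAM kit, in GBP.
    prices = [("12 Oct", 110.00), ("12 Nov", 174.00), ("18 Nov", 234.00),
              ("24 Nov", 331.49), ("today", 350.00)]

    # Step-by-step percentage rise between each pair of quoted dates.
    for (d1, p1), (d2, p2) in zip(prices, prices[1:]):
        print(f"{d1} -> {d2}: +{(p2 / p1 - 1) * 100:.0f}%")

    print(f"overall: x{prices[-1][1] / prices[0][1]:.2f}")
    ```

    A bit over 3x overall in under two months.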

    1. Nelbert Noggins

      “From what I've read, OpenAI isn't even using all the RAM yet, is getting it in the form of RAW wafers(!) and it's not entirely clear how much this is driven by need and how much it's a paranoid attempt to retain a lead by choking off the supply of RAM their competitors require.

      That being the case would make the amount of money such a huge purchase would involve even more unbelievable and a massive risk.”

      Or given the state of ram price increases, provide them a stockpile of wafers they can sell to the highest bidder for a tidy profit. A deal of that size is likely going to come with a far lower unit price than others will be able to negotiate when their current supply contracts end.

      Being known as a company selling product A doesn’t mean that is where you make your money. For example, Porsche had become a very profitable hedge fund that happened to sell a few cars in the late 2000s. The hedge fund side was responsible for ~$11 billion of the ~$13 billion pre-tax profits in 2008. If they hadn’t wrecked their finances trying to buy VW, they would have been laughing all the way to the bonus payouts for years.

      OpenAI seem to be having trouble making money selling AI, but selling memory in volume could be quite profitable for a while.

      1. cyberdemon Silver badge
        Unhappy

        > Or given the state of ram price increases, provide them a stockpile of wafers they can sell to the highest bidder for a tidy profit.

        Yeah, that fits, with the whole Taiwan thing ...

        A bit like stockpiling gold, art, and faberge eggs before WWII

      2. doublelayer Silver badge

        I don't know about that. As soon as they started trying, RAM makers could increase production to their capacity and undercut them. Those companies have already been paid or entered into contracts, so they don't have an incentive not to. I'm not sure the market is complex enough for OpenAI to successfully set up a middleman position in it.

        1. Michael Strorm Silver badge

          > "As soon as they started trying, RAM makers could increase production to their capacity and undercut them. "

          This relies upon the flawed assumption that RAM manufacturers are able to respond to demand in an idealised, perfect textbook manner. But the RAM industry has never worked like that.

          We're already clearly at the limit of existing production capacity, so the only way to manufacture significantly more is to build new facilities.

          First problem is that this would certainly *not* happen overnight. Second problem is that such plants are expensive- measured in the billions- so you need to know you're going to make your money back.

          That's okay *if* you know you're going to cover your costs selling RAM at such-and-such a price.

          But if- as many suspect- we're in a bubble likely to burst sooner or later, then demand from the AI industry will almost certainly collapse, likely resulting in a glut of RAM and a collapse in the market price.

          So it's likely that RAM manufacturers don't want to risk building new expensive capacity only for it to come online just as the shit hits the fan and everything is in the gutter.

          As I suggested elsewhere then, it's likely that OpenAI *could* exploit that reluctance of the suppliers to respond to demand, but it would be a very high risk strategy for them, likely ending in financial disaster if it went wrong or they misjudged it. Also, I don't think they'd want that distraction at a time they're already paranoid about maintaining their hold on the AI market.

          > "Those companies have already been paid or entered into contracts, so they don't have an incentive not to."

          Such a contract, no matter how watertight, is only as good as the other company's ability to pay what was agreed. And the problem here is that the company is OpenAI, not (e.g.) Microsoft.

          OpenAI is a company still spending *way* more cash than it's making and reliant upon continued investment. If the bubble bursts tomorrow and investors abruptly cut that off, where do you think OpenAI is going to get the money to meet those eyewateringly expensive commitments?

          I would assume that Samsung and SK Hynix have already factored this in.

          1. David Hicklin Silver badge

            > so the only way to manufacture significantly more is to build new facilities.

            One side effect is that the older less profitable RAM is being scaled back even though much of it is still in use/demand - so its price skyrockets as well.

      3. Pascal Monett Silver badge

        Ah, Porsche

        You're talking about the company that makes cars that don't work if they aren't connected to the mothership via satellite?

        Yeah, that's a real incentive to buy one.

        1. Nelbert Noggins

          Re: Ah, Porsche

          Ah, the power of brand and marketing. You’re talking about Volkswagens in different clothing. If I owned a high-end Audi, Lamborghini, Bentley, Bugatti or Rimac, I might be wondering if VAG have the same control over my car.

          I was talking about the Porsche that actually made their own cars and nearly went bankrupt, until they found they were very good at gambling on the finance markets.

      4. Michael Strorm Silver badge

        It's *possible* they could corner the RAM market, but a very high risk strategy with many factors that could go wrong very quickly.

        It's unlikely given they're paranoid about losing their lead in the AI market and not about to risk taking their eye off the ball with that. As I said, they *might* do it to hit their rivals, though.

        But if they did... someone else here said that manufacturers could increase production. But that would require more capacity, and the manufacturers have to bet on the existing AI bubble/demand not collapsing and leaving them with a glut of capacity and the collapsed RAM price not covering their investment costs.

        Maybe OpenAI would bet on that being the case, but they'd still have to be able to keep paying for the RAM they were buying, even if the contract was watertight.

        And conversely, any lucrative contracts for the manufacturers to supply RAM are worthless if OpenAI collapses which, as a company dependent on continued funding, would likely happen quickly if that was cut off by investors no longer confident in its chances.

      5. MachDiamond Silver badge

        "From what I've read, OpenAI isn't even using all the RAM yet, is getting it in the form of RAW wafers(!) and it's not entirely clear how much this is driven by need and how much it's a paranoid attempt to retain a lead by choking off the supply of RAM their competitors require."

        It reeks of Chevron's policy of not licensing NiMH battery tech for automotive use, nor allowing the production of cells large enough and useful enough for that application, after they bought the patents from GM's Magnequench division. Li-chemistry batteries are superior now but, at the time, NiMH was mature enough for some applications. It was pure blocking of EVs for fear of losing the petrol/diesel market. Perhaps it delayed EVs a wee bit, but the gate's off the hinges now.

    2. Zack Mollusc

      Ethical RAM fabs

      If I were an unethical RAM fab being paid to produce wafers that were going into a stockpile, I would 'accidentally' send them all the duff ones and flog the good ones on the scarce and lucrative open market.

  6. chuckufarley
    Boffin

    I keep saying this but...

    ...I guess I keep saying it to the wrong people. I'll try to keep it short. There are fundamental gaps in humanity's understanding of how brains work. We know that they are made of neurons of various types, and a few years ago we even discovered that some types of neuron come in sizes ranging from the microscopic to several centimetres long. What we don't know for sure is how neurons make the biological equivalent of logic gates, random access memory, hard drives, etc. We also can't definitively say much at all about how brain chemistry works, or what role quantum mechanics might or might not play in a brain, and those are just the tip of the iceberg. There is a laundry list of other unknowns, and in my opinion humanity would be better served by dumping trillions of dollars into scientific research to answer these questions.

    I honestly think it can be done, because just look at what we have done with science in the last 700 years. We have come from a flat Earth where tiny gnomes living in our stomachs caused abdominal pains, to cat videos in space: https://www.theregister.com/2023/12/20/nasa_psyche_cat_video/ Once the scientists have the funding and other resources they need, then maybe we can find a better way to make AI. Maybe even AGI.

    1. Brave Coward Bronze badge

      Re: I keep saying this but...

      With all due respect, and not arguing with your main point, the widely shared view that people 700 years ago believed the Earth to be flat has been proven totally wrong by current historical research.

      1. cyberdemon Silver badge
        Alien

        Re: I keep saying this but...

        > the largely shared view that people, 700 years ago, believed the Earth to be flat have been proven totally wrong by current historical research.

        Please do be a good citizen and provide a link that supports your somewhat dubious offtopic assertion? And no, YouTube does not count.

        1. Claude Yeller

          Re: Earth has been round since before 200 BCE

          From Wikipedia:

          Eratosthenes of Cyrene (276-194 BCE) was the first known person to calculate the Earth's circumference. He was also the first to calculate Earth's axial tilt, which similarly proved to have remarkable accuracy. He created the first global projection of the world incorporating parallels and meridians based on the available geographic knowledge of his era.

          The astronomers Parmenides and Anaxagoras had by then already explained lunar eclipses as the shadow of a round Earth covering the Moon.

          Obviously, there were still people who believed the earth was flat. But that is no different now.
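          Eratosthenes' method is simple enough to sketch in a few lines. The figures here (a ~7.2-degree shadow angle at Alexandria while the sun is overhead at Syene, and ~5000 stadia between the two cities) are the ones usually reported in the classical account, not from the Wikipedia excerpt above:

          ```python
          # At noon on the solstice, the sun is directly overhead at Syene,
          # while a gnomon at Alexandria casts a shadow at about 7.2 degrees.
          # That angle is the arc between the cities, so scaling the distance
          # to a full 360-degree circle gives the circumference.
          shadow_angle_deg = 7.2
          distance_stadia = 5000

          circumference = distance_stadia * 360 / shadow_angle_deg
          print(circumference)  # ~250,000 stadia, the commonly cited figure
          ```

          How accurate that is in modern kilometres depends on which length you assume for the stadion, which is itself debated.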

          1. Anonymous Coward
            Anonymous Coward

            Re: Earth has been round since before 200 BCE

            And fewer people believed the world was flat in those days, than do today.

            (or so I expect based on the relative population size)

      2. chuckufarley

        Re: I keep saying this but...

        While some people did know that the Earth was in fact round, they were in the minority. The vast majority of all humans that have ever lived would and did argue with them about it.

        1. Claude Yeller

          Re: I keep saying this but...

          "While some people did know that the Earth was in fact round they were in the minority. "

          Just like people who could read were always a minority. But that doesn't mean the information from books didn't make it out into the rest of the population. Especially to those to whom it would matter, ie, long distance traders and merchants and those investing in them.

          When Columbus pitched his plans to reach India by going West, he got the money. The objections then were not that the earth was flat, but that the earth was much bigger than he claimed and he would starve before ever reaching India. Which would have indeed happened had he not bumped into the Americas just in time.

        2. Filippo Silver badge

          Re: I keep saying this but...

          That's true, but it has more to do with the fact that access to knowledge was extremely unequal. When you talk about the scientific progress of mankind, as the OP was doing, you are generally considering top knowledge workers, not the general population. Top knowledge workers in the middle ages did know that the Earth isn't flat. As another poster noted, most people in the middle ages couldn't read, but you wouldn't say that writing was unknown.

          1. chuckufarley

            Re: I keep saying this but...

            And again we can thank Science for that no longer being the case. That's my point here. Science is good for everyone and makes all of our lives easier. From the privileged few of today to the unwashed masses of ancient times, Science has been the only reason behind human progress. Seafaring peoples have existed for at least 20,000 years, and they were likely the first to learn that the Earth is a sphere. This is a fact. It's also a fact that most of the people that have ever lived were not born into seafaring cultures. Stop nitpicking and trying to find where a two-paragraph post about humanity's progress, and lack thereof, breaks down. It's only two paragraphs! Of course I have to gloss over and simplify my arguments. I never actually wrote what I would consider a real thesis, because the professors make you keep it short.

            Now go out there and yell at the people running AI companies about how they have abandoned Science for their own greed!

        3. Bebu sa Ware Silver badge
          Windows

          Re: I keep saying this but...

          "While some people did know that the Earth was in fact round they were in the minority."

          I would think that the vast majority of the population, at least in pre-literate societies, had very little idea of "the Earth" — their "world" would extend little more than a day's walk around their village, which for all practical purposes was effectively flat.

          Their world beyond their local environs was the cosmography of Heaven and Hell† propounded by the mediaeval Church, the topology of which wasn't addressed. ;)

          † From the Inferno it is clear that Dante was well aware the Earth was a sphere, and that this was quite uncontroversial.

          1. Adair Silver badge

            Re: I keep saying this but...

            While plenty of people in ancient times roamed no more than a day's walk from their home, plenty of others roamed plenty farther. In fact there was a hell of a lot of coming and going (tens, hundreds, thousands of miles), everywhere people were, for as far back as the records allow us to look—a looong way in human lifetimes.

            Knowledge got around on the back of all the wanderings. Ignorance tended to hitch a ride too. Some things never change.

        4. werdsmith Silver badge

          Re: I keep saying this but...

          The vast majority of all humans that have ever lived would not have cared about it.

      3. Mage Silver badge

        Re: believed the Earth to be flat

        Probably they didn't think about it, but no. The flat Earth was a deliberately invented idea in the 19th century, by atheists in England, ascribed to the Church to try to discredit it. Ironically, it was then adopted by some US Catholics and then by niche "Protestants".

  7. Pascal Monett Silver badge

    Some people still think the Earth is flat

    Funnily enough, although they talk about proving it by going on a ship to the South Pole (why there? Why not any straight line?), I have yet to hear that any one of them actually did it and, more importantly, whether they fell off.

    1. doublelayer Silver badge

      Re: Some people still think the Earth is flat

      As I understand their obsession with the south pole, one explanation for why there's no edge to fall off if the world is flat is that Antarctica is a big ice wall that keeps the oceans from flowing off the edge and onto the floor (or whatever is below the flat disc, which they never seem to have thought about).

      I don't know who came up with that theory, and if you try to work anything out geographically, it breaks down. It would put the north pole at the center of the planet, and I have yet to hear an explanation for why the center is cold, halfway from the center to the edge is warm, and the edge is also cold. Maybe there's a ring-shaped heater suspended over it somehow. The reason for the north pole being at the center and the south pole being the edge is that they argue planes never fly over the south pole, which proves that airlines are covering up the flatness. I've also never heard a theory for why anyone is advantaged by covering this up. Those times I've talked to flat earth people, both of them, I've always secretly suspected that nobody, or at least nobody I've met, actually believes it and they just find it fun to play-act that they do.

      1. Claude Yeller

        Re: Some people still think the Earth is flat

        "I have yet to hear an explanation "

        Flat Earthers don't have explanations; they have excuses and just-so stories.

        They can't say why there are tides, why you cannot see the Pole Star south of the equator or the Southern Cross north of the equator, and why they cannot draw an accurate flat map of the USA which they claim is flat.

        All they have is excuses and just-so stories.

        1. werdsmith Silver badge

          Re: Some people still think the Earth is flat

          It’s not possible for a reasonably intelligent person to look up at the moon and see its phase changing and still believe the Earth is flat.

    2. Claude Yeller

      Re: South pole flat earth

      "Funnily enough, although they talk about proving it by going on a ship to the South Pole"

      They actually did go to the South Pole. It was to show that there is no 24h daylight on the South Pole (something that does not fit the flat earth idea).

      And it is just as funny as that gyroscope experiment in "Behind the curve".

  8. amanfromMars 1 Silver badge

    A Diabological Problem with No Simply Complex Solution is a Heaven Sent Opportunity

    AI Super IntelAIgents love your cynicism and skepticism. Can you imagine the infinite variety and countless number of virgin fields human disbelief and persistent fearful resistance to the helpful acceptance and assistance from IT and LLLMs creates for their unquestioned and unquestionable and alien endeavours ‽ .

    Thank you for your service, well-known? Ai2 researcher Tim Dettmers.

    Breaking News: .... Your physicalised worlds of/for tomorrow are virtually realised in order to permit accommodation and utilisation of out of this world command and otherworldly nonhuman zero exclusive human centric control of future direction/Presentations of ProgramMING Productions. And as such it is not hardware dependent.

    And the gazillion dollar question you nowadays may not like to ask of yourself and of that and/or those in control of you is .... Is such, as is the above, to be remembered and recorded as an Easter Egg or accepted as an Exotic and Erotic Oriental Device or welcomed as a Wild Wacky Western Confection or summarily dismissed as Quite Unbelievable and Wholly Nonsensical ?

    Would an Unholy Trinity AIMix of all Three be Yet Another Diabological Problem for Y’all to Dismiss and Try to Deny Exists‽ .

    1. Ian Johnston Silver badge

      Re: A Diabological Problem with No Simply Complex Solution is a Heaven Sent Opportunity

      Once again, amanfromMars proves that AI is certainly artificial and indubitably devoid of intelligence.

      1. amanfromMars 1 Silver badge

        Re: A Diabological Problem with No Simply Complex Solution is a Heaven Sent Opportunity

        Once again, amanfromMars proves that AI is certainly artificial and indubitably devoid of intelligence. ... Ian Johnston

        That is most definitely not proven, Ian Johnston, and whenever just as only really needs to be a very few others in humanity, ..... as in considerably smarter than the best of the best in the throngs of the rest, .... realise its awesome potential as an almighty indefensible weapon for successful use against the revolting unbeliever, is it game over and lights out for such an enemy and all unwilling to accept and aid fundamentally different radical change.

        And should you like to gamble and have a flutter on novel changes to the future, you can place bets on those sorts of happenings too apparently .... if you choose to believe the New York Times editorial board.

        The Times board said that defending Taiwan “won’t be easy” and called on the US to invest more in new technologies, such as drones, rather than “symbols of might,” referring to large aircraft and warships.

        “America must prepare for the future of war. This is the opinion of The New York Times editorial board. You might be thinking America should focus on peace, not war. But one of the most effective ways to prevent a war is to be strong enough to win it. That’s why it’s imperative that we change,” the board said.

        The board suggested several steps for the US to take to prepare for war with China, including building “new autonomous weapons and leading the world in controlling them” and relaxing rules on purchasing weapons to “make bets on young companies.” ..... https://www.zerohedge.com/military/nyt-editorial-board-urges-us-prepare-future-war-china

        Que sera, sera ..... no matter what anybody/everybody thinks to think and not support.

    2. Anonymous Coward
      Anonymous Coward

      Re: A Diabological Problem with No Simply Complex Solution is a Heaven Sent Opportunity

      Yeah, if I read Dettmers' "blog post" right, he's all about "incremental improvements within physical constraints", that reminds me of your "Erotic Oriental Device" and "Unholy Trinity AIMix"; both unfortunately require throbbing pants-stretching "exponential costs of linear progress", "that rewards [fabric] over rigor" (mostly), or somesuch.

      I didn't understand everything though ... ;(

      1. Anonymous Coward
        Anonymous Coward

        Re: A Diabological Problem with No Simply Complex Solution is a Heaven Sent Opportunity

        It misspelled "Erotic Orificial Device"?

  9. SnailFerrous Silver badge
    Terminator

    Already Here

    Artificial superintelligence is already here and is the author of all these reports saying it isn't possible to build one. Clearly distracting the humans while it gathers the resources it needs to eliminate them.

  10. Bebu sa Ware Silver badge
    Windows

    "not because they are well founded but because they serve as a compelling narrative"

    Pretty much defines any confidence trickster's stock-in-trade and just about any scam, most cults and politics universally.

    Narrativium is almost as potent on Roundworld as it is on the Disc and arguably more dangerous.

  11. Smeagolberg

    Quickly enough?

    "This is because AI infrastructure is no longer advancing quickly enough to keep up with the exponentially larger number of resources needed to deliver linear improvements in models."

    The first part of that sentence is credible.

    The last part about "deliver linear improvements" makes an unsubstantiated, and almost certainly incorrect, assumption about LLMs continuing to improve noticeably simply as a result of throwing more data at them. Many problems are not susceptible to that approach.

    E.g., an experiment to find the probability of a coin landing on Heads.

    Toss the coin 10 times. Maybe get 7 heads. Experimental estimate: 70% heads.

    Now toss it 100 times...

    Then 1000 times...

    (By now the estimate is probably getting very close to 50%...)

    A million times...

    A google times (original meaning)...

    Result: the experimental estimate improves less and less as you throw more and more data at it even though it gets extremely expensive to process more data. Any improvement is, eventually, effectively imperceptible / noise regardless of how much raw data the model can process.
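    That diminishing-returns curve is easy to see in a few lines of Python (a hypothetical sketch, not from the comment itself): the standard error of the estimate shrinks like 1/sqrt(n), so every 100x more data buys roughly one extra decimal digit of accuracy.

    ```python
    import random

    # Estimate P(heads) for a fair coin at increasing sample sizes.
    # Accuracy improves ever more slowly: going from 10 to 100 tosses
    # helps a lot; going from 10,000 to 1,000,000 barely moves the result.
    random.seed(1)
    for n in (10, 100, 1_000, 1_000_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        print(f"{n:>9,} tosses: estimated P(heads) = {heads / n:.4f}")
    ```

    The 1,000,000-toss estimate costs 100,000 times more compute than the 10-toss one, for an answer that is only a few decimal places better.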

    I suspect that compute-resource-based improvements in LLMs are already diminishing similarly.

    1. Ken Shabby Silver badge
      Boffin

      Re: Quickly enough?

      I think therefore I’m DRAM.

      1. Pickle Rick
        Angel

        Re: Quickly enough?

        > I think therefore I’m DRAM.

        I doubt that!

        *whoosh* *pop* Aaargh! I'm existential!!!

    2. Pickle Rick
      Headmaster

      A google times (original meaning)...

      Googol (n) - https://dictionary.cambridge.org/dictionary/english/googol

  12. ecofeco Silver badge
    FAIL

    It's quite simple

    The tech douche bros who can't even get basic computing productivity right are trying to make a godhead?

    They lost before they even started. They are fundamentally, to the core of their very being, incapable.

  13. xyz123 Silver badge

    "Super Intelligent AI is just a fantasy" said a researcher. "so you can kindly just leave the nuclear launch codes in Notepad -- They'll be totally safe."

    1. Pickle Rick
      Trollface

      Notepad?! _Real_ sentient beings use vim!

  14. frankyunderwood123 Bronze badge

    Tech bros want you to drink the Kool-Aid; the fear should not be about AGI

    I'm sure anthropic, xAI, openAI etc. are well aware that the bubble will burst, but they are all well placed to weather the storm and come out in an even more monopolistic position.

    It's in their interests to whip up the insane fervour around AI from some quarters, in particular, an obsessed media.

    This is why they constantly drop hype around AI, including warnings.

    It makes it all sound super sexy and exciting.

    There's a constant manufactured buzz around "When AGI"?, with the implication that they are close, that someone will manage it very soon.

    There's FUD about AGI possibly already existing "in the lab".

    All of this drives sales, drives the hype machine.

    "No company can afford to get left behind!"

    It's AI with everything and right now, the concern about AGI is not the issue.

    The issue is having AI rammed into our eyes and ears and daily lives with hardly any proper checks and balances.

    It doesn't take much to imagine the following:

    No longer any humans manning call centres at all. Economies of scale have become such that even smaller companies can afford the compute power required to respond in close to natural speech and use some "smoke and mirrors" to make it appear realtime - a few "hold on caller" type niceties whilst doing the compute, which includes sophisticated caching techniques.

    No longer any humans at GP practices, and the vast majority of diagnoses done by AI - this is already starting to happen.

    "traditional" search engines become useless, everyone "trusts" the AI response - this has already mostly happened.

    The data which drives the vast bulk of AI conversations in the hands of just a few companies. This is already the case.

    What this means is those companies are able to adjust parameters in the LLM responses that impact the decisions of millions.

    Once Jane and Joe public have all been weaned on instant, accurate-sounding answers at their fingertips, which makes everyone an "expert", it becomes vanishingly easy to slightly skew results.

    Couple this with further advances in deep fakes and you can change history.

    This is where the fear lies for me.

    Tech Bros already have a huge amount of control, a bunch of them were sitting behind Trump at his inauguration.

    What kind of dystopian future is being unleashed?

    1. amanfromMars 1 Silver badge

      The Slow Death of War Party Politics and Politicians is Writ Large by AI ... Hallelujah, Amen!

      Tech Bros already have a huge amount of control, a bunch of them were sitting behind Trump at his inauguration.

      What kind of dystopian future is being unleashed? .... frankyunderwood

      One in which maybe just only a chosen few with more than just a titter of wit are lucky enough to be exercised and rewarded as worthy useful pawns if not condemned by their own ignorance and arrogance to be dismissed and exiled as a useless and unnecessary fool's tool would appear to be as evidenced by all current honest accounts the most likely unfolding reality for future narrative telling, frankyunderwood.

      And the recognised and reported MAGA Contingent Fear is Overall Almighty Exclusive Leadership for/with Global Operating Devices of Exotic and Erotic and Esoteric Oriental Persuasion and Origin as be revealed here ........ EMERGING TECHNOLOGIES VIEWPOINT: Who’s Steering the Machine? Governing AI Before It Governs Us

      Burst that bubble and there's whole worlds of unnatural and otherworldly pain for that and those responsible for such as will be shown and seen as retrograde activity.

  15. Grunchy Silver badge

    NVidia short

    ASICs are measurably more efficient than GPUs, which were good enough up until now, but it has always been a case of using a wrench as a hammer until you can find the right hammer (in a pinch anything is a hammer). NVidia could start producing ASICs but I think they probably won’t.

    I agree with everyone else, the LLM is very effective at mimicry, it sure does resemble intelligence, but the technology is fundamentally interpolative: it can only mimic what it has already seen. That is surprisingly effective for A.I.-generated covers of existing songs. But to invent something new, the technology has no insight into how things really work. It has zero real-world experience, only words and images.

  16. steelpillow Silver badge
    Facepalm

    The semantic elephant in the room

    > most of the discussion around AGI is philosophical

    So here's a wally schooled in both philosophy and IT.

    The current brick wall is that AIs have no concept of meaning (or, semantics to the erudite). They process bit-patterns called "tokens", not the meaning of those tokens. The ability to do this requires understanding of the context in which the token appears. That understanding of context is known as cognition. As yet, we are at the primitive stage of still bickering over whether spiders or bees show cognitive behaviours, while ignoring the carefully-studied neurology of cognitive brain structures in creatures as diverse as octopuses and birds. Efforts to close the gap remain embryonic, while so-called "semantic processing" is just next year's AI hype awaiting its turn. Don't just whine about the bloody philosophers, listen to what we have to say! And I say, talk to the neurologists and the animal behaviourists. Then come back and ask me a sensible question.

    Next, please!

  17. Torben Mogensen

    Scaling issues

    The issue with scaling is not so much the end of Moore's Law (the number of transistors per mm² doubles every few years), but the end of Dennard scaling, which states that the power usage per mm² is roughly the same when you increase the number of transistors per mm² (everything else, such as voltage and clock frequency, being the same). That stopped about 20 years ago, which is why clock rates haven't increased much since then (after increasing from 2MHz around 1980 to 2GHz around 2000). Voltages have been lowered slightly, but the main increase in computer speed has been through multicores, where GPUs are the extreme example.
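    The arithmetic behind that can be sketched in a few lines (a hypothetical back-of-the-envelope illustration, using the standard dynamic-power relation P ≈ C·V²·f, not from the comment itself):

    ```python
    # Dynamic switching power of a CMOS transistor: P ~ C * V^2 * f.
    # Under Dennard scaling, a node shrink by factor k cuts C and V by 1/k
    # and raises f by k, so power per transistor falls by 1/k^2 while
    # transistor density rises by k^2 -- power per mm^2 stays flat.
    # With V now stuck near its floor, the same shrink raises power density.
    def dynamic_power(c, v, f):
        return c * v**2 * f

    k = 1.4  # one classic node shrink (~0.7x linear dimensions)
    p0 = dynamic_power(1.0, 1.0, 1.0)

    # Dennard era: C and V both scale by 1/k, f scales up by k,
    # and there are k^2 times as many transistors per mm^2.
    p_dennard = dynamic_power(1.0 / k, 1.0 / k, k) * k**2

    # Post-Dennard: V cannot shrink further; only C scales.
    p_post = dynamic_power(1.0 / k, 1.0, k) * k**2

    print(f"power density after shrink, Dennard era:  {p_dennard / p0:.2f}x")
    print(f"power density after shrink, post-Dennard: {p_post / p0:.2f}x")
    ```

    With voltage fixed, each shrink multiplies power density by roughly k², which is why clock rates stalled and the extra transistors went into more cores instead.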

    To solve this issue, we have two choices: Move to less power-hungry technologies, or improve algorithms. We have seen very little of the first, though there is a company that aims to reduce power usage for AI through reversible processing (https://vaire.co/), and not much of the latter either -- most advances in generative AI such as large language models have been through throwing more compute power and more training data at them.

    I agree with the people who predict that the AI bubble will burst soon. Maybe next year, possibly the year after that. Some of the larger AI companies are already preparing for this by using their inflated stock value to buy other companies and technologies, so they have something to fall back on when their main products fail to generate income.
