Artificial Intelligence seems destined to change the world. But it needs to get its act together first or there may be hell to pay, writes one of two Joe Laurias.
By Joe Lauria (Not the Weatherman)
Special to Consortium News
There is no way to reverse the emergence of Artificial Intelligence on its way to dominating our lives. From customer service to censorship and war, AI is making its mark, and the potential for disaster is real.
The amount of face-to-face contact, or even human voice-to-voice interaction on the phone, involved in doing business has been declining for years and is getting worse with AI. But AI is doing far more than just erode community, destroy jobs and enervate people waiting on hold.
AI is increasingly behind the decisions social media companies make on which posts to remove because they violate some vague “community” standards (even as it destroys community), when it’s obvious that AI is programmed to weed out dissident political messages.
AI also decides whom to suspend or ban from a social media site, and seems to evaluate “appeals” of suspensions as well, many of which would be overturned if only a pair of human eyes were applied.
Facebook founder Mark Zuckerberg admitted this week that, “We built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes.” Facebook admits it used AI systems to remove users’ posts.
“Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship,” Zuckerberg said.
The absurd lengths to which AI is being applied include a “think tank” in Washington that is run entirely by Artificial Intelligence. Drop Site News reports on the Beltway Grid Policy Centre, which has no office address.
“Beltway Grid’s lack of a physical footprint in Washington — or anywhere else on the earthly plane of existence — stems from more than just a generous work-from-home policy. The organization does not appear to require its employees to exist at all,” the site writes. Yet it churns out copious amounts of reports, press releases, and pitches to reporters.
While this may be an extreme use of AI, the more usual interaction with the public is plagued by another problem: how wrong AI can be.
Still Quite Experimental
AI is clearly still in an experimental stage and even though significant mistakes are inevitable, they have not slowed its application. The errors range from the mundane to the ridiculous to the dangerous.
I recently discovered an example of the ridiculous.
Anyone who searches Google for their online profile is bound to find other people with their exact name. I’ve long been aware of Joe Lauria, a TV weatherman for the Fox affiliate in Kansas City, Missouri.
We have been confused before. Just two weeks ago, I received an email from a Fox viewer in the Kansas City area praising Lauria for being so well dressed and asking me for advice on where to buy a suit. I was curious who else might be confused, so I asked Grok, Elon Musk’s much touted AI system, who Joe Lauria is.
This is the response I got:
“Joe Lauria is a multifaceted professional known primarily for his work in journalism and meteorology. Here’s an overview of his career based on available information:
Journalism: Joe Lauria has been a foreign affairs journalist based at the United Nations since 1990. He has contributed to numerous publications, including the Boston Globe, the London Daily Telegraph, the Johannesburg Star, the Montreal Gazette, the Wall Street Journal, the New York Times, the Washington Post, Salon.com, and The Guardian.
He has also been an investigative reporter for The Sunday Times of London and Bloomberg News. Lauria is currently the editor-in-chief of Consortium News, where he writes on international affairs with a focus on the Middle East. He has co-authored books with notable figures like former U.S. Senator Mike Gravel, focusing on U.S. foreign policy and militarism.
Meteorology: Joe Lauria is an award-winning meteorologist. He has been working as the evening news meteorologist for FOX4 in Kansas City since January 1995, broadcasting weather forecasts every weeknight at 5, 6, 9, and 10 p.m. His approach to meteorology includes educating viewers on aspects of weather not commonly found on standard weather apps.
Prior to his role at FOX4, he served as Chief Meteorologist at KOSA-TV in Midland, Texas, and has worked in various other locations like Orlando, Florida, and Kansas City, Missouri. He has been recognized for his work by receiving the Seal of Approval from both the American Meteorological Society (AMS) and the National Weather Association (NWA). Lauria also holds the Certified Broadcast Meteorologist (CBM) designation from the AMS.
- Personal Life: Joe Lauria was born in New Rochelle, New York, and currently resides in Olathe, Kansas with his wife, Jaye Lynn, and their two cats. He enjoys Italian restaurants, golf, and has been involved in community service through weather seminars and speeches.
Joe Lauria’s dual career in journalism and meteorology highlights his diverse interests and contributions to public understanding and safety through weather reporting and insightful reporting on international affairs.”
Just for the record, I don’t have any cats, I don’t play golf and I wasn’t born in New Rochelle (though not that far away, in the Bronx). Of course there is a lot of humor in this. (It’s also funny that I entered university undecided whether to become a journalist or a meteorologist. Then I saw how much math is involved in meteorology.)
The potential for such nonsense in a system that seems to be gradually taking over the world is not always a laughing matter, however. It’s troubling that Grok’s man-made intelligence assumed two people are one, rather than just admitting there are several people with the same name.
On the other hand, ChatGPT gave an impressive and incisive, politically neutral dissertation on my work in answer to the question “Who is Joe Lauria?” It was almost as if I’d hired it to be my PR agent. The essay reads as though ChatGPT had spent months reading everything I’d written, when it was generated in seconds. There was no sign of the Kansas City weatherman, either.
However, when I delved a bit deeper into its “knowledge” of me, it made things up out of thin air. When I asked it what books I’d written, instead of naming the books I actually wrote, it came up with a completely fictional title for a supposed non-fiction book: The Assange Agenda: The Legacy of the Wikileaks Founder. It even invented a publisher: Clarity Press.
No such book exists. Based on its knowledge of my reporting on Julian Assange it made a ridiculous guess about an imaginary book it thought I must have written. In short, a lot of AI is BS.
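To see how such a fabrication can happen, here is a deliberately crude, hypothetical sketch. The association weights and the title template below are invented for illustration and describe no real system; they only show the failure mode: a model that predicts the most plausible-sounding continuation from words associated with a name, with nothing in the loop that checks whether the resulting book exists.

```python
# Hypothetical toy model of confabulation. The association weights and the
# title template are invented; no real model works this simply, but the
# failure mode (fluent guessing with no fact-checking) is the one described.

associations = {
    "Assange": 0.9,        # heavily covered in the author's reporting
    "WikiLeaks": 0.8,
    "United Nations": 0.4,
    "meteorology": 0.1,    # noise from the other Joe Lauria
}

def invent_book_title():
    """Stitch the highest-weighted associations into something that merely
    sounds like a real title. Nothing here asks whether the book exists."""
    topics = sorted(associations, key=associations.get, reverse=True)[:2]
    return f"The {topics[0]} Agenda: The Legacy of the {topics[1]} Founder"

print(invent_book_title())
# -> "The Assange Agenda: The Legacy of the WikiLeaks Founder"
```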
AI & War
As ridiculous as these results are, as frustrating as humanless customer-service interactions are becoming (such as appealing social media posts removed by AI), and as many jobs as are being lost to AI, the more harrowing concern about Artificial Intelligence is its use in the conduct of war.
In other words, what happens when AI errors move from the innocuous and comical to matters of life and death?
Time magazine reported in December on Israel’s use of AI to kill civilians in its genocide in Gaza:
“A program known as ‘The Gospel’ generates suggestions for buildings and structures militants may be operating in. ‘Lavender’ is programmed to identify suspected members of Hamas and other armed groups for assassination, from commanders all the way down to foot soldiers. ‘Where’s Daddy?’ reportedly follows their movements by tracking their phones in order to target them—often to their homes, where their presence is regarded as confirmation of their identity. The air strike that follows might kill everyone in the target’s family, if not everyone in the apartment building.
These programs, which the Israel Defense Force (IDF) has acknowledged developing, may help explain the pace of the most devastating bombardment campaign of the 21st century …”
The Israeli magazine +972 and The Guardian broke the story in April, reporting that up to 37,000 targets had been selected by AI (a number that has surely grown since April), streamlining a process that previously would have involved human analysis and legal authorization before a bomb could be dropped.
Lavender became relied upon under intense pressure to drop more and more bombs on Gaza. The Guardian reported:
“’We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us,’ said one [Israeli] intelligence officer. ‘We were told: now we have to fuck up Hamas, no matter what the cost. Whatever you can, you bomb.’
To meet this demand, the IDF came to rely heavily on Lavender to generate a database of individuals judged to have the characteristics of a PIJ [Palestinian Islamic Jihad] or Hamas militant. […]
After randomly sampling and cross-checking its predictions, the unit concluded Lavender had achieved a 90% accuracy rate, the sources said, leading the IDF to approve its sweeping use as a target recommendation tool.
Lavender created a database of tens of thousands of individuals who were marked as predominantly low-ranking members of Hamas’s military wing.”
Ninety percent is not an independent evaluation of its accuracy, but the IDF’s own assessment. Even if we use that, it means that out of every 100 people Israel targets using this system, at least 10 are completely innocent by its own admission.
But we’re not talking about 100 individuals targeted, but “tens of thousands.” Do the math. Out of every 10,000 targeted 1,000 are innocent victims, acceptable to be killed by the IDF.
In April last year, Israel essentially admitted that 3,700 innocent Gazans had been killed because of its AI. That was eight months ago. How many more have been slaughtered?
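For readers who want the arithmetic spelled out, here is a minimal sketch of the calculation above, using only the two figures already cited: the roughly 37,000 AI-selected targets reported by +972 and The Guardian, and the IDF’s own claimed 90 percent accuracy.

```python
# Back-of-the-envelope check of the figures cited above.

targets_selected = 37_000   # AI-selected targets reported by +972 / The Guardian
claimed_accuracy = 0.90     # the IDF's own internal assessment

error_rate = 1 - claimed_accuracy                    # 10% by the IDF's own numbers
per_ten_thousand = round(10_000 * error_rate)        # innocents per 10,000 targets
misidentified_total = round(targets_selected * error_rate)

print(f"{per_ten_thousand:,} misidentified per 10,000 targets")              # 1,000
print(f"{misidentified_total:,} misidentified out of {targets_selected:,}")  # 3,700
```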
The National newspaper out of Abu Dhabi reported:
“Technology experts have warned Israel’s military of potential ‘extreme bias error’ in relying on Big Data for targeting people in Gaza while using artificial intelligence programmes. […]
Israel’s use of powerful AI systems has led its military to enter territory for advanced warfare not previously witnessed at such a scale between soldiers and machines.
Unverified reports say the AI systems had ‘extreme bias error, both in the targeting data that’s being used, but then also in the kinetic action’, Ms Hammond-Errey said in response to a question from The National. Extreme bias error can occur when a device is calibrated incorrectly, so it miscalculates measurements.
The AI expert and director of emerging technology at the University of Sydney suggested broad data sets ‘that are highly personal and commercial’ mean that armed forces ‘don’t actually have the capacity to verify’ targets and that was potentially ‘one contributing factor to such large errors’.
She said it would take ‘a long time for us to really get access to this information’, if ever, ‘to assess some of the technical realities of the situation’, as the fighting in Gaza continues.”
Surely the IDF’s AI must be more sophisticated than commercially available versions of AI for the general public, such as Grok or ChatGPT. And yet the IDF admits there’s at least a 10 percent error rate when it comes to deciding who should live and who should die.
For what it’s worth, ChatGPT, one of the most popular, says the dangers of errors in the Lavender system are:
- “Bias in data: If Lavender is trained on data that disproportionately comes from certain sources, it could lead to biased outcomes, such as misidentifying the behavior of specific groups or misjudging certain types of signals.
- Incomplete or skewed datasets: If the data used for training is incomplete or does not cover a wide range of potential threats, the AI may miss critical signals or misinterpret innocuous activities as threats.” (Emphasis added.)
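A deliberately tiny, hypothetical sketch of that first point follows. This is not the Lavender system, whose internals are not public; the data and the “signal” are invented solely to show the mechanism: a classifier that learns from a skewed sample attaches the skew, not the truth, to everyone who shares the over-collected trait.

```python
from collections import Counter

# Invented training records: (observed_signal, label). The signal is
# hypothetical; it dominates the "threat" labels only because of how the
# sample was collected, not because it is genuinely predictive.
training_data = (
    [("uses_messaging_app_x", "threat")] * 90
    + [("uses_messaging_app_x", "civilian")] * 10
    + [("no_signal", "civilian")] * 100
)

def naive_classifier(signal):
    """Predict whatever label was most common for this signal in training."""
    labels = Counter(label for sig, label in training_data if sig == signal)
    return labels.most_common(1)[0][0]

# Anyone who merely shares the over-collected signal inherits the biased label.
print(naive_classifier("uses_messaging_app_x"))  # -> "threat"
print(naive_classifier("no_signal"))             # -> "civilian"
```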
AI & Nuclear Weapons
The concern that mistakes by Artificial Intelligence could lead to a nuclear disaster is reflected in U.S. Senate bill S. 1394, entitled the Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023. The bill
“prohibits the use of federal funds for an autonomous weapons system that is not subject to meaningful human control to launch a nuclear weapon or to select or engage targets for the purposes of launching a nuclear weapon.
With respect to an autonomous weapons system, meaningful human control means human control of the (1) selection and engagement of targets; and (2) time, location, and manner of use.”
The bill did not get out of the Senate Armed Services Committee. But leave it to NATO to ridicule it. A paper published in NATO Review last April complained that:
“We seem to be on a fast track to developing a diplomatic and regulatory framework that restrains AI in nuclear weapons systems. This is concerning for at least two reasons:
- There is a utility in AI that will strengthen nuclear deterrence without necessarily expanding the nuclear arsenal.
- The rush to ban AI from nuclear defenses seems to be rooted in a misunderstanding of the current state of AI—a misunderstanding that appears to be more informed by popular fiction than by popular science. […]
The kind of artificial intelligence that is available today is not AGI. It may pass the Turing test — that is, it may be indistinguishable from a human as it answers questions posed by a user — but it is not capable of independent thought, and is certainly not self-aware.”
Essentially, NATO says that since AI is not capable of thinking for itself [AGI] there is nothing to worry about. However, highly intelligent humans who are capable of thinking for themselves make errors, let alone a machine dependent on human input.
The paper argues AI will inevitably improve the accuracy of nuclear targeting. Nowhere in the document do the words “error” or “mistake” appear, in singular or plural.
Grok combined two Laurias into one. There are 47 population centers in the United States with the name Moscow. Just sayin’. (I know, the coordinates are all different.)
But the NATO piece doesn’t want diplomatic discussion, complaining that, “The issue was even raised in discussions between the United States and China at the Asia-Pacific Economic Cooperation forum, which met in San Francisco in November (2023).”
The paper concluded, “With potential geopolitical benefits to be realised, banning AI from nuclear defences is a bad idea.”
In a sane world we would return to purely human interactions in our day-to-day lives and if war can’t be prevented, at least return it to more painstaking, human decisions about whom to target.
Since none of this is going to happen, we had better hope AI vastly improves so that comedy doesn’t turn into tragedy, or even worse, total catastrophe.
Joe Lauria is editor-in-chief of Consortium News and a former U.N. correspondent for The Wall Street Journal, Boston Globe, and other newspapers, including The Montreal Gazette, the London Daily Mail and The Star of Johannesburg. He was an investigative reporter for the Sunday Times of London, a financial reporter for Bloomberg News and began his professional work as a 19-year-old stringer for The New York Times. He is the author of two books, A Political Odyssey, with Sen. Mike Gravel, foreword by Daniel Ellsberg; and How I Lost By Hillary Clinton, foreword by Julian Assange.
A well-written article that points out the dangers of what is commonly called AI. This is not artificial intelligence, as it does not actually think and is incapable of thinking. Large Language Models simply process large amounts of data and reflect it back at us in ways that have been called stochastic parrots. Only in the limited mind of NATO brass could it be considered a good idea to put that in charge of nuclear weapons. While it should not be underestimated, so-called AI has its limits and vulnerabilities (just ask the people who fooled one into thinking a 3D-printed turtle was a rifle) and its power needs may turn out to be a natural limiting factor.
Of course real AI, which we invented in the 17th century and called the corporation, has proven to be very dangerous so caution and a healthy skepticism are definitely required.
Joe,
Quote from article:
“In other words, what happens when AI errors move from the innocuous and comical to matters of life and death?”
It’s an oopsie, we move on? Can’t go back. Ah, why bring up yesterday – this train is moving fast, and nonstop at that.
We take this info, invest in a server farm, store it; surely it will be safe?
It will be your bitcoin right up your bitchute. Create a derivative and dump it on the stock market, watch it soar.
All the nonsense saved and stored can be utilized by AI but may require an additional server farm (at least 5 stories high); it saves space to store, it’s your gold.
Really, anything typed can be gathered, keylogged and observed in real time, I’m guessing. Track your mouse movements. Add your own nonsense to the pile; when full, dump the data for cash or feign an intrusion, otherwise bleach and hammers, lol.
I hope askinet doesn’t see this as spam. Don’t wanna pervert their saved data, but they can store it in their server farm and take it to the data sales bank for trade, I suppose.
Sorta like shadow banning: if you type in an incorrect password (yet saved, although you think the pw disappeared) it rejects you, try again. What if by mistake you entered your PayPal password on a different site? Good enuff, but if the verifier knows it’s wrong it has to compare it to another set of data. But it won’t forget (stores) your PayPal pw. That info will be stored in another farm but be quite valuable in the wrong hands for anyone with access? They sell it to someone else? So I know who you are and a pw you use; mistake of input not workin here; let’s see what other sites this user uses that require a pw, one of them probably needs the pw you entered incorrectly. Simplifies hacking, no? Who watches the watchman?
And on and on.
Stick your bucks into data farms, stuff it with bitcoin and you get a real shite sandwich. Put a dispenser on every street corner. Double your fun, sell crap for crap and cash in, lol. You will be the hoarder who saves the wrapper after such a nice lunch.
Maybe we do that now, but the price has gone up and the contents are smaller, and if it’s packaged real well it will take an expert to open. His name is AI.
Enjoy.
So I think what Joe Lauria is trying to say is . . . “Stormy weather ahead, with a chance of Armageddon”?
Jake,
You put all the pieces together. Like the weatherman, you can get it wrong even looking out the window, but rain seems more likely when the clouds are gray.. lol
You post and I’ll read; maybe it’s just climate change?
Ask AI or a chatbot (I never used it yet) – what are the similarities and differences between political climate change and plain ole climate change?
Will the politicians fall from the sky and wreak havoc?
It probably will produce twins of the same pol with opposite positions at the worst time.
But trust them both?
Didn’t someone mention cognitive dissonance?
To play devil’s advocate, let me point out that AI can easily beat the best human Go players or chess players. Wouldn’t you consider a world champion of Go or chess smart, in some ways?
The article does not say AI can’t be smart, but that it is prone to error and concocting information.
TN,
So the chess player knows how to play the game; someone who knows the math programmed the AI (just a program); was it the chess player himself? The AI thinks of the next move based on the player’s possible next moves and calculates a counter based on the move made, but it has sought out and realized every potential move in advance, maybe thousands? The human knows this is a perfect game that cannot make an error because of the intellect (program) of the AI and its ability to perceive(?) all possibilities. This takes a lot of time for the human, which would be a boring challenge imo. The fact is that AI can use its given “programmed instructions” for every option in an instant for a response. The human could do the same thing with eternal time and a cheat sheet of sorts, but not in an instant.
Is it time travel we witness? We cannot perceive the AI’s journey in that short a timeframe, but we can use maths and computers to do quickly what finger counting can’t.
Maybe if two AIs battled to a finish it would show whether the game can be won regardless, or that there are ways not yet learned by which one or the other can win. Is that the case, or, with the program that has all the answers, do you stand no chance of winning, unless it depends on who moves first?
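For what it’s worth, the “thinking ahead” described in this thread is just exhaustive game-tree search. A minimal, purely illustrative sketch follows, using a tiny take-1-or-2-stones game instead of chess so the whole tree fits in a few lines.

```python
# A minimal sketch of what a game-playing "AI" actually does: exhaustive
# game-tree search (minimax). Nothing here "thinks"; it simply evaluates
# every legal continuation. A tiny take-1-or-2-stones game stands in for
# chess purely to keep the tree small.

def minimax(stones, maximizing):
    """Return +1 if the side to move can force a win, -1 otherwise.
    Players alternately remove 1 or 2 stones; taking the last stone wins."""
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

print(minimax(3, True))   # -> -1: with 3 stones the first player always loses
print(minimax(4, True))   # ->  1: with 4 stones the first player can force a win
```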
I say I’m impressed, in a fashion, but maybe we should use Monopoly, as it seems like a game closer to the real world? I’ll be the banker, give amnesty to my friends and build hotels a mile high. If you pass GO you will land on Go To Jail. Thanks for the 200.
Oh, and if I screw up it’s your fault. Sorry, I may have veered off. The only AI worthwhile is what returns correct answers: if rule #1 then goto rule #2. Like jail: are you looking in or looking out, and at whose expense?
Like anything private or owned by someone else, you have no clue unless someone capable can see the code / algo, which is a privately owned control mechanism? Non-disclosure may make more income than the skills, if nefarious.
I don’t post often but have interest in this topic, and if you dream it, it can appear.
My mind says it’s 10 feet, but the yardstick knows better, as it shows its actual value in the visual. The AI yardstick theory needs to see the scale.
I write like a loose cannon but hope to express myself without anyone being offended. I hope I can make a few points to enlighten, yet also discover and share things, not really directed at anyone but only as a response, if it helps thought.
Seldom discussed is how this is based on unquestioned assumptions about human thought processes.
The either/or of Aristotelian logic has been dominant in the west since the Enlightenment. Politically, it manifests as with us or against us. Economically, it’s capitalism or communism. It’s obvious as the good or evil of religious fanaticism, but it’s also the fact or fantasy of scientific empiricism and the true or false of philosophic rationalism. Now stretched to its ultimate extreme as the binary code of AI. All of which are products of the left hemisphere of the human brain, which processes sequentially through abstractions, by breaking apart wholes into disconnected pieces. Most of all, it wants certainty and control.
Look up Dr. Iain McGilchrist, whose magnum opus is //The Matter with Things (Our Brains, Our Delusions, and the Unmaking of the World)// and who has written and spoken about this subject in great detail.
In contrast, the right hemisphere processes by means of gestalts, symbols, metaphors. The realm of the arts, mysticism, multiplicities, connections, and meaning. Where the universe, in addition to yes and no, contains a maybe. Not just the world of dreamy artists and shamans–it’s also what quantum physicists have been saying for over 100 years now. The Eastern world and Indigenous peoples have retained the older form of right brain first, then left when needed. The right is aware of the left, but the reverse is not so. Which explains much about why we’re stuck where we are. McGilchrist also points out that we cannot solve problems using the same processes that got us into them.
AI doesn’t seem quite human because it’s the electronic equivalent of highly processed white flour and sugar. No life giving nutrients and highly addictive. At best, it’s only half human–the part unaware of the symbioses of the natural Earth, of heartbeats, of what Paul Ricoeur called “a surplus of meaning.” The part where genocide is the abstract calculation of enemy kill count. The half-human is inhuman, isolated, and clinically insane.
Great article and comments, particularly this one.
The common knowledge of AI, and all empirical science, will always miss this essential aspect of humanity and the universe.
A criticism of that book is “a rather reductionist critique of reductionism,” which is a pithy quote, almost made for Wikipedia. Ironically, another rotten shield for those who require certainty and control. Thems The Rules.
AI is setting out its own path from lowest common denominator foundations. Which would be fine, if McGilchrist’s angle was given equal weight or an alternative route. But it isn’t.
Instead, all creativity has been scraped into AI’s gaping maw to enrich the corpopigs and further impoverish artists.
Efficiency is supposed to give us more time. But if the price for that is meaning, then what is that time worth?
Anyway, thanks for the book recommendation and to Consortium for their inspiring Humanity.
Excellent piece!
There are no quantum limits.
D-Wave told me so.
Does choosing to call machine learning A.I. enable an intentional movement to create cognitive dissonance regarding the true definition of artificial intelligence (provided by Ray McGovern here at CN)? “Cherry-picked data from hand-picked analyst.”
Is there any way possible to hold the creator of AI accountable? He knew where his invention was leading and yet…he continued with it. At what cost fame???
Hello Vera
The foundations of AI are old; one central component is the neural network. Were it not AI as the excuse for all the bad decisions, it would still be The Algorithm (which is semantically wrong but sounds good). I found this article from Bernhard to be one of the better explanations of what AI really is. So there is no — not even metaphorical — inventor.
hxxps://www.moonofalabama.org/2023/06/artificial-intelligence-is-mostly-pattern-recognition.html
@joe great article, though
Is there any way to hold J. Robert Oppenheimer accountable? (Oppenheimer was director of the Los Alamos Laboratory and responsible for the research and design of the atomic bomb.)
It wouldn’t help much if we could, right?
Very thought provoking, chillingly so.
Saw an interesting/relevant video by a person heavily involved with AI who demonstrated a flaw that he maintained is in most AI systems. If you ask the AI system to display an analog clock with its hands set to 7:30 or 3:15 or numerous other values, it will end up showing 10:10. It’s apparently due to the overwhelming number of images with that 10:10 setting loaded into their databases, due in turn to the ‘fact’ (belief?) that the image of 10:10 is the most pleasing to the human brain.
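A hypothetical, deliberately oversimplified sketch of that failure mode (the counts below are invented; real image generators are far more complex, but the skew toward the most common training example works the same way):

```python
# Hypothetical sketch of the failure mode described above: a "generator"
# that falls back on the most frequent pattern in its training data.

from collections import Counter

# Imagine the times shown on clock faces scraped from the web: stock photos
# overwhelmingly show 10:10 because it frames the brand nicely.
scraped_clock_images = ["10:10"] * 9_000 + ["3:15"] * 600 + ["7:30"] * 400

def generate_clock(requested_time):
    """A lazy generator that ignores the request and reproduces the single
    most common training example."""
    most_common_time, _ = Counter(scraped_clock_images).most_common(1)[0]
    return most_common_time

print(generate_clock("7:30"))   # -> "10:10"
print(generate_clock("3:15"))   # -> "10:10"
```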
The two eddies,
You left a space between the e & S in your name. AI can solve typos too.
OK, time now is 10:10 … this may be where another AI steps in. The returned value 10:10 is set to use an equation that is man-made? If ans=10:10, then you create the next statement subroutine to resolve it. Why not introduce the formulas necessary to resolve the answer? The moron AI now is a human expert with just a technical-college understanding of formulas that some bright mathee reduced to simpleton use, but it aids the craftsman. The carpenter uses formulas to find answers to problems beyond his ability.
I realize this can be solved if desired. The truth is AI is like the carpenter: no two of them do it the same way, but everyone is right, just ask them.
Sorry, I can’t help my sarcasm or attempt at humor. You may find a serious thought, but for sure the answer, hiding inside this bit of writing, if you ask your master..
joe Ell the 3rd
I suppose AI will run the courts eventually. Justice is blind, they say. Let’s get the data of names and any scraped data collected, even info put on the loose by a breach of personal data or any data system that stores it. Gather up places, names, times and any other cellphone data of them too. This gives the names of those inside the chosen box; set the distance within which anyone could have been culpable. Make them all suspect and create a complicity if found.
So as justice is blind, we only need the Legal Beagle AI to make determinations as to guilt. No need for faces and no preconceived notions based on looks, gender, color. I can see this future possibility. Maybe they can even do a draft using this knowledge?
My only question now, as I could go on and on with examples: whose names will be omitted, who is this Simon Sinister programming it, and is this type of stuff in use already?
Be nice to hear from a good programmer who has coded, or is capable of coding, such toys.
Now, a program that sends an alert when a heart stops. Gather all who may be inside the ring or chosen perimeter, and now we have data as to whom to look at if, when you arrive at the stopped heart, it turns out to be a crime scene. The plus: we may find the culprit quicker, and in more instances also. No hiding; maybe we live in this world already? Leaks, intrusions, failed safeguards of medical and bank records, these are in the wild now, I would suppose.
Can you imagine the loss if these things come to fruition: you’re locked up, dragged away, based on an anonymous source?
Reminds me of the Wizard of Oz.
I hate to say information is powerful; we come into this world hoping to make it a better place, and some get run over in traffic before they realize there’s a car barreling upon them.
Would you help develop a system like this, or would you like to think that what you are doing by creating this monster is against your better instincts?
Ah, compartmentalize: take AI and create its best version, and all who participated are none the wiser. Maybe the defense is this record, if complete, as courts like to omit ‘facts’? Or disallow evidence.
I could go on. Thanks for your ear. Sorry, I am not much of a writer and hate editing; I have a real fat thumb, lol.
At least AIs have the possibility of getting smarter. Humans are noticeably getting more stupid. Consider the state of nuclear détente and the presence of a hotline between nuclear powers to avoid mistakes; compare 1978 with 1991 with the current world. Definitely evidence of growing human stupidity.
The one danger of AI might be that humans could be too stupid to build nuclear bombs on their own after a few more generations of current trends, but AIs won’t ever forget. On the other hand, there is a chance that AIs might get smart enough to know not to use nukes. We now know for a fact that this is an achievement beyond human capability.
There is very little basis to claim that AI, however defined and as it currently exists, actually has the ability to get smarter. It’s not smart now; it can’t do many math tasks or follow basic rules of physics. It makes up citations and sources, and it has no concept of truth or reality; therefore, it cannot verify any of the claims it presents.
This is not smart, and shows no sign that getting smarter is in the works. Sam Altman may make whatever claim he likes to bolster his fundraising prospects, but reality does not seem to be one of the factors either Altman or AI is grounded in. I refer you to Gary Marcus’s Substack for further insights on the ‘smarts’ of so-called AI.