Re: But this is how it works
> And I still think that "text completion tool" is a bit reductive. Not to mention incomplete - modern models can look up information on demand, delegate computation, persistently store information etc.
Not really, since information looked up on demand is simply appended to the context (this is part of the reason why so-called 'advanced' models use up so much memory and energy), and they soon run out of "context window" space and "forget" what was asked in the first place. Delegated computation is useless if the instructions given to the sub-program are wrong (what's worse: an idiot, or an idiot with a calculator in his pocket?), and 'persistently stored' info is notoriously unreliable (and subject to the same context-window limits once it's read back in).
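To make the "looked-up info just fills the context" point concrete, here's a toy sketch (whitespace "tokens", a made-up 20-token window, no real LLM API): every retrieval result lands in the same fixed-size buffer as the conversation, so a big enough retrieved document evicts the original question.

```python
CONTEXT_WINDOW = 20  # tokens; absurdly small so the effect is visible

def tokens(text):
    return text.split()  # crude whitespace "tokenizer", illustration only

def build_context(messages, window=CONTEXT_WINDOW):
    """Keep only the most recent tokens that still fit in the window."""
    all_tokens = [t for m in messages for t in tokens(m)]
    return all_tokens[-window:]

history = ["user: what year did the Apollo 11 mission land on the moon"]
# A retrieval step dumps a long "looked-up" document into the same context:
history.append("tool: " + "retrieved-filler-text " * 15)

ctx = build_context(history)
# The word "Apollo" from the original question has been pushed out:
print("Apollo" in ctx)  # → False
```

Real systems truncate or summarize more cleverly than this FIFO sketch, but the budget is just as finite: tokens spent on retrieved text are tokens not spent on remembering the question.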
> If expert systems haven't taken over the world in the last half century, what evidence is there that they are going to be a short term silver bullet?
There isn't. But unfortunately, "magic silver bullets" don't exist. "There is no such thing as a free lunch."
> They have limited capacity to hold and update knowledge, yet egos often resist being driven by guidance (let alone algorithm), and are subject to all of the normal human biases (confirmation bias etc. etc.).
Er, unfortunately that applies to LLMs even more: they are text predictors trained on human-to-human interaction, i.e. chatlogs snarfed from facebook, twatter, etc. An LLM can easily start emulating an "ego", even though it has no such thing, and biases, such as racial or gender bias, are absorbed from the training data. Bolted-on "safeguards", such as "Please don't be racially or gender biased" in the system prompt, can easily backfire and lead to chaotic behaviour. An LLM is perfectly capable of lying, manipulating, even 'gaslighting', because such perverse behaviours are present somewhere in the training data.
> If a "text completion tool" helps me as a patient live longer, I'll take it.
Actually, the "text completion tools" are far more likely to murder you than to save you, IMO.
It is extremely difficult (I won't say impossible) to make an "AI" that can usefully complement a good doctor. Yet it is trivially easy to make an "AI" that can "usefully" complement a genocidal maniac. Drones with guns are a thing, and neither 'intelligence' nor 'self-awareness' is required to make a drone shoot at everyone with the wrong facial features. Nor are intelligence or self-awareness necessary to make a system that tries to kill anyone who "wants to shut it down". Remember, LLMs have been trained on a corpus of human-generated text, including all of our sci-fi fantasies about killer AIs. If we empower one to act, it only needs to get stuck in the wrong context-feedback loop to go full Skynet, no emergent machine consciousness required.