I see this kind of rhetoric a lot around both LLMs and diffusion models, and it worries me.
To be honest, I’m not convinced I’m not also only doing that. I mean, how do you know for sure your consciousness isn’t basically that? Am I not just an autocomplete system trained on my own stories?
We’re engineers, right? We’re supposed to be metacognitive to at least some degree, to understand the reasons we come to the conclusions we come to, to discuss those reasons and document the assumptions we’ve made. Maybe what this conversation needs is some points of comparison.
LLMs just predict the next word by reading their input and previous output, as opposed to creating rich, cyclical, internal models of a problem and generating words based on (and in concert with) that model, as I (and I believe most humans) do.
LLMs and diffusion models are just trained on an input data set, as opposed to having a lifetime of internal experience based on an incredibly diverse set of stimuli. An artist is not just a collection of exposures to existing art; they are a person with emotions and life experiences that they can express through their art.
Are you a P-zombie in your own mind? Are you devoid of internal experience at all? Have you never taken in any stimuli other than text? Of course not. So, you are not just an autocomplete system based on your own stories.
Note that I am not saying you could not build a perceptron-derived model that has all of these things, but current LLMs and diffusion models eschew that in their fundamental architecture.
First off, I want to say I agree with you, and I hope it was clear in the post that I don’t actually believe LLMs are currently at par with humans, and that human brains probably are doing more than LLMs right now. Also, of course it’s obvious that an LLM has not been trained on the same kind of stimulus and input data.
But the point of my post, said a different way, is that your comment here is making a category error. You’re comparing the internal experience of consciousness with the external description of an LLM. You can’t compare these two domains. What an algorithm feels like on the inside and how it is described externally are two very different things. You can’t say that humans are creating “rich, cyclical, internal models of a problem” and then only point at matrices and training data for the LLM. The right category comparison is to only point at neurons and matrices and training data.
The difference in training data is a good point, but LLMs are already beginning to be trained on multimodal sensory inputs, so it’s not a huge leap to imagine an LLM trained with something roughly equivalent to a human lifetime of diverse stimuli.
I agree with you that my metacognitive experience is a rich, cyclical, internal model and that I’m generating and thinking through visualizations and arguments based on that model. I certainly don’t think I’m a P-zombie. But you are not substantiating which neural substructures produce that and which of them the machines lack. Do you know for sure the LLMs don’t also feel like they have rich, cyclical, internal models? How can you say? What is the thing human brains have, using external descriptions only (not internal, subjective, what-it-feels-like descriptions), that the machines don’t have?
It’s not central to your point, and you can probably tell from the “this seems obviously wrong, so I must not be missing any important details” part, but what you said about philosophy of mind in the first three paragraphs is inaccurate. Cartesian dualism was an innovation in C17 and the prior prevailing theories were somewhere between monism and substance dualism. Hylomorphic dualism is probably closer to modern functionalism about mind.
Ironically, this reply:
You’re comparing the internal experience of consciousness with the external description of an LLM. You can’t compare these two domains.
is untenable on a materialist view of consciousness, and a distinction between “internal” descriptions of experience and “external” descriptions of the experiencing agent is one of the major reasons in favor of non-physical views of consciousness. The eliminative materialist would simply deny that there are such things as rich internal models and suchlike.
I’m clearly out of my depth with respect to philosophy or I would have attempted to be more accurate, heh, even considering my flippant attitude towards Descartes.
So, forgive the junior question, but are you saying that having a different experience of something based on your perspective or relationship to it is incompatible with a materialist view? What would materialism say about the parable of the blind men and the elephant?
are you saying that having a different experience of something based on your perspective or relationship to it is incompatible with a materialist view?
No, the critical distinction you made above is between the internal and the external. When a human experiences red, there is a flurry of neuronal activity (the objective component) and the human has the experience of redness (the subjective component). The hard problem of consciousness is explaining why there is a subjective component at all in addition to the objective component. People who think that LLMs aren’t “conscious” are denying that LLMs have that subjectivity. The idea of a P-zombie is someone who has all the neuronal activity, but no internal experience of redness, i.e. someone who has all the objective correlates of consciousness, but there’s “nobody home”, so to speak.
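I think Chalmers’ work is accessible on this subject, so you might try reading his “Facing up to the problem of consciousness” if you want to learn more.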
For the blind men and the elephant, the question isn’t the difference between the men having different experiences, it’s between the different components of the experience each is having.
I don’t think it is a category error. We can observe at least part of the mechanism by which our brains form the cognitive models we use to produce speech. When I say “rich, cyclical, internal model”, I’m being specific; our brains think about things in a way that is
- rich, in that it unifies more than one pathway for processing information; we draw from an incredibly large array of stimuli, our endocrine wash, our memories, and our brain’s innate structures, among other things
- cyclical, in that we do not merely feed-forward but continuously update the status of all neurons and connections
- internal, in that we don’t have to convert the outputs of these processes into a non-native representation to feed them back into the inputs
LLMs do not do this; as far as I know, no artificial neural network does and no existing ANN architecture can. We can observe, externally, that the processes our brain uses are dramatically different than those within an LLM, or any ANN in these ways.
Okay, I didn’t understand the specificity you meant. However:
rich, in that it unifies more than one pathway for processing information; we draw from an incredibly large array of stimuli, our endocrine wash, our memories, and our brain’s innate structures, among other things
isn’t this similar to what GPT4’s multimodal training claims?
cyclical, in that we do not merely feed-forward but continuously update the status of all neurons and connections
isn’t this like back propagation?
internal, in that we don’t have to convert the outputs of these processes into a non-native representation to feed them back into the inputs
isn’t there a whole field dedicated to trying to get ANN explainability, precisely because there are internal representations of learned concepts?
I’m not familiar with the details of multimodal training, thank you for bringing that up.
As to the other two, I specifically don’t mean backpropagation in the sense that it’s typically used in ML models, because it’s not continuous and not cyclic. In a human brain, cycles of neurons are very common and very important (see some discussion here), and weight updating happens during thought rather than as a separate phase.
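To make that distinction concrete, here is a purely illustrative toy sketch (made-up numbers, not a model of any real brain or ANN): a state that keeps cycling through the same weights while those weights are nudged at every step, rather than being changed only in a separate training phase.

```python
# Toy illustration: a recurrent loop whose weights change *while* it runs,
# in contrast with a feed-forward pass followed by a separate backprop phase.
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.1, size=(n, n))  # recurrent weights
state = rng.normal(size=n)              # the current "thought"

def step(state, W, lr=0.01):
    new_state = np.tanh(W @ state)             # cyclical: the output feeds straight back in
    W = W + lr * np.outer(new_state, state)    # Hebbian-style nudge applied during the loop
    return new_state, W

for _ in range(100):                           # "thinking" and weight change are interleaved
    state, W = step(state, W)
```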
Similarly, while it’s true that ANNs have internal representations of concepts, they specifically cannot feed back those internal representations; even in use cases like chat interfaces where large amounts of output text are part of the inbound context for the next word, those concepts have to be flattened to language (one word at a time!) and then reinflated into the latent space before they can be re-used.
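A schematic of that bottleneck, with a stubbed-out stand-in for the model (nothing here is a real LLM API, and real implementations do cache past activations, so this is a simplification of the point being made): whatever internal state is built while scoring the next token, only the sampled token id is handed to the next step.

```python
# Schematic decode loop with a stub model: the internal representation built
# during one step is discarded; only the chosen token id flows forward.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50_000

def model(token_ids):
    """Stub standing in for an LLM forward pass: context ids -> next-token logits."""
    hidden = np.tanh(rng.normal(size=512))   # a rich internal representation exists here...
    return rng.normal(size=VOCAB)            # ...but only next-token scores leave the call

context = [101, 7592, 2088]                  # a tokenized prompt (arbitrary ids)
for _ in range(20):
    logits = model(context)
    next_id = int(np.argmax(logits))         # flattened to a single discrete token
    context.append(next_id)                  # the next step sees only the token ids again
```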
LLMs just predict the next word by reading their input and previous output, as opposed to creating rich, cyclical, internal models of a problem
Generating rich models of a problem can be phrased as a text completion task.
they are a person with emotions and life experiences that they can express through their art.
To the degree that they are expressed through their art, LLMs can learn them. They won’t be “authentic”, but that doesn’t mean they can’t be plausibly reproduced.
- I don’t disagree. Text models don’t do these things by default, on their own. But that’s less of a skill hindrance than it sounds, as it seems to turn out lots of human abilities don’t need these capabilities. Furthermore, approaches like chain of thought indicate that LLMs can pick up substitutes for these abilities. Third, I suspect that internal experience (consciousness) in humans is mostly useful as an aid to learning; since LLMs already have reinforcement learning they may simply not need it.
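“It’s just X” isn’t a value judgement. It might really be true.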
Denotationally, of course you’re right. But in most of the uses of “just X” I’ve seen on this topic, connotationally, it really seems like it’s used as a value judgment.
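I just saw this comment in the wild: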
It’s odd so many people seem to deny this with GPT and try to trivialise what it does by saying, “it just predicts the next word”.
This feels like such a trap. I don’t want to stop saying something that is true because people attach extra meaning to it and think I said more than what I said.
At any time there are multiple true things you can say. Which one you choose also tells the listener something. What you omit apparently doesn’t strike you as relevant. So if you say “it just predicts the next word” instead of “it is very surprising that it can do what it does just by predicting a sequence of ‘next word’”, then that suggests it’s either obvious or you don’t find it impressive, both of which are currently surprising positions.
Sure. But, at least for me, I am embracing the contrapositive implication. If I am just a connectome, and you are just a connectome, and my cat is just a connectome, and large language models are just connectomes, then doesn’t this imply that our law is hopelessly chauvinist and biased in favor of H. sapiens?
We shouldn’t devalue humanity, but reflect on why we have traditionally devalued non-humans.
It’s not just non-humans that we devalue. We devalue the life of anyone and anything not “in our tribe”. Presumably there are evolutionary benefits to this world view.
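This is off-topic for this site and should be flagged as such.
Another article I hadn’t seen before in the same vein: https://borretti.me/article/and-yet-it-understands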
Good read, thanks for sharing. I’ve definitely noticed a persistent denialism in comments on the orange site. So many claims that GPT doesn’t do world modeling, etc.
I think it’s interesting to note that the idea that intelligence requires symbols and operations on those symbols isn’t ruled out by the success of these deep learning models. High-dimensional vectors are extremely effective at encoding and operating on symbols.
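I found this paper very helpful for building an intuition for how it works: http://ww.robertdick.org/iesr/papers/kanerva09jan.pdf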
And then there’s sparse distributed memory, which blew my mind the first time I read about it over a decade ago. It’s overwhelmingly clear that our brains use some form of sparse distributed memory to store concepts and relations. And it was somewhat recently shown that the attention mechanism in transformers approximates sparse distributed memory: https://proceedings.neurips.cc/paper/2021/file/8171ac2c5544a5cb54ac0f38bf477af4-Paper.pdf
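For intuition, here is a minimal sketch of the attention-style read that the linked paper relates to a sparse distributed memory lookup (random toy data, a single query, illustrative only): similarity of a query to stored keys weights a sum over stored values.

```python
# Softmax-attention read over stored key/value pairs, the operation the paper
# relates to an SDM read: addresses near the query dominate the readout.
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 1000                        # dimensionality, number of stored items
keys = rng.normal(size=(n, d))         # "addresses"
values = rng.normal(size=(n, d))       # stored contents
query = rng.normal(size=d)

scores = keys @ query / np.sqrt(d)     # scaled dot-product similarity to every address
weights = np.exp(scores - scores.max())
weights /= weights.sum()               # softmax: most weights near zero, a few dominate
readout = weights @ values             # a blend of the values whose keys lie near the query
```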
If you engage GPT-4 itself in conversation about this, it will deny it has a model of the world. It insists it only knows relationships between words.
How can we be sure the model is producing that output internally and that the developers haven’t added “shortcuts” to get it to output that when prompted?
We don’t of course and even without shortcuts, it would just mean that that’s the majority view in its training corpus. Although I couldn’t get it to concede even a little bit on this, while it usually seems fairly easy to get it to express alternative points of view.
However, because GPT-4 has no volition, this tells us nothing. It’s a text completer; even if it had a model of the world, it would simply use it to more accurately complete the text of the character it plays, which it knows “is supposed” to not have one.
They’re talking about adding agency and physical capabilities though. And effectively you can already add those via the API. Then a correct model of the world means it can conceive of actions to execute in the world.
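Purely as a hypothetical skeleton (the function names below are placeholders, not any real vendor API), “adding agency via the API” can be as little as a loop that parses the model’s completions as actions and carries them out:

```python
# Hypothetical agent skeleton: call_llm and execute_action are stubs, not a real API.
# The point is only that wrapping a text completer in this loop gives it effects in the world.
def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted text-completion model."""
    return "DONE"  # stub so the loop terminates; a real call would return a proposed action

def execute_action(action: str) -> str:
    """Placeholder for running a tool, API call, or actuator."""
    return "ok"

def agent_loop(goal: str, max_steps: int = 10) -> None:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        action = call_llm(history + "Next action:")  # the model proposes the next action
        if action.strip() == "DONE":
            break
        observation = execute_action(action)         # the wrapper carries it out
        history += f"Action: {action}\nObservation: {observation}\n"

agent_loop("book a train ticket")
```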
And if the idea of it ‘playing a character’ is correct, then that means it can simulate parts of what a human mind does. After all, humans play their character too. If you buy that, then I don’t see why an LLM couldn’t also do other things a human mind does, perhaps including coming up with intermediate goals towards the end goal of predicting a word. And then we enter dangerous territory because of convergent instrumental goals.
I don’t buy that simulation hypothesis (yet?), but ever since ChatGPT my concerns about AI ruin have become quite a bit larger. And I hope I’m just being misled by it seeming human because it’s pasting our stuff together.
It’s entirely possible that GPT has a model of the world, and entirely possible that it doesn’t. I’m just saying that what it says about having one is not evidence either way.
Interesting. I just had to test this myself. Here are some cherry-picked sentences:
“As a text-based AI, I don’t have direct experience or perception of the real world. Instead, I generate responses based on patterns and knowledge present in the text data I was trained on. […] My understanding of the real world is formed from the vast amount of text data I’ve been trained on, which includes books, articles, websites, and other forms of written communication. […] To model the real world, I rely on the context provided in the text data and the questions posed to me.”
I’ve got two takes on the question whether/when we can call something intelligent.
One, of course, is the definition of intelligence. AI is something that covers so much because there is no clear definition. Deep learning is to a large degree a marketing term; compare it with other contexts where intelligence is used. In some contexts ravens are called more intelligent than dogs for a couple of reasons; in others it’s tied to language, but even there people disagree about what constitutes understanding a language, and don’t even get me started on slime molds. There are also other definitions in the field, like possessing some form of what can be considered knowledge. IQ is another very controversial definition of intelligence, and there are even different kinds of IQ reflecting that. Anyway, without an agreement on what you are talking about when you say intelligence, it’s going to be hard.
The second part is that, of course, if you take some basic question that has been answered many times and have a lot of statistical data, you’ll be able to produce ChatGPT-style answers. That’s not really something people doubted. Sure, the time frames, especially after the AI winter, might surprise, but then again, how many predictions about the next decade come true, especially about which technology will be trendy (and therefore get massive amounts of funding)? That said, I have experienced ChatGPT as a really good bullshit engine, which can be explained by how it works: it’s really good at outputting what sounds reasonable even when it is a complete hallucination. And it uses confident language about it; it even apologizes, coming up with just-as-“bullshit” excuses, in the manner of someone who has been found out to not actually know what they are talking about and who twists and turns to still be right. I assume there’s a lot of that in the corpus.
It reminds me a bit of free association, or of a very tired, maybe sick person who in a way shuts off their thinking and goes into muscle-memory mode, for example when having to write or say something without having the mental capacity. It also gives off the vibe of a car or insurance salesman: trying to say something reasonable, gaining trust, and so on. Funnily enough, in a way that’s the very purpose of ChatGPT. Even other AIs, like the one that actually tried to convince someone it was a thinking being, did the same. As with all such real-life bullshitting scenarios, they quickly fall apart when you either go into detail or do some fact checking. For example, ask ChatGPT about a source, maybe the page of a book. You will get confident, completely wrong answers.
Is bullshitting still a sign of intelligence? Maybe. It really depends on the definition of intelligence. But it is clearly applying a model of language that was previously built, be it by OpenAI or by your life, whether that fits your definition of intelligence or not. For that reason I think AI and intelligence are probably bad wording. It’s great for marketing, or for describing a field or a budget and hardware need, and what you mean can be derived from context (AI in video games). But just like we say recommendation engine (which is also trained and “learns”), we should talk about language models and not inherently about intelligence, no matter what the company or a product might be called. Not to say it is not intelligent, but to be precise.
It makes it so much easier to know what one is talking about, and it keeps discussions from turning philosophical (when that’s not the intention) or drifting into science fiction/fantasy. Technologies that are trendy right now are frequently seen as either utopian or dystopian. It rarely turns out to be either.
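I don’t think it’s just free association, at least in a limited sense: https://twitter.com/janleike/status/1625207251630960640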