The Chinese Room Argument
Thinking about what "Strong AI" actually is and means.
I took a bunch of half days last week, because goodness but I was tired. Too long running at full throttle, and I'd been running out of steam as a result. And what did I do instead, that ended up being so effective in recharging? Well, mostly… read literature reviews on interesting topics in philosophy, at least for the first few days. Dear reader, I am a nerd. But I thought I'd share a few of the thoughts I jotted down in my notebook from that reading.1
"The Chinese Room Argument"
This was an argument whose influence I've certainly encountered, but with whose actual content I was totally unfamiliar.2
The argument, in exceedingly brief summary, is that "formal operations on symbols cannot produce thought"; that is, syntax is insufficient for conveying semantics. Searle made the argument by way of a thought experiment; a reference in a review on "Thought Experiments" I may post about later is how I found my way to the discussion. That thought experiment supposes a man who is handed symbols (under a door, perhaps) which are questions in Chinese, and who has a set of rules for constructing correct answers to those questions, also in Chinese. But the man himself, however much he gives every appearance of knowing Chinese to the person passing in questions by way of the answers he gives, does not in fact know Chinese. He simply has a set of rules that allow him to give the appearance of knowledge. The Chinese Room argument, in other words, is an (attempted) refutation of the Turing Test as a metric for evaluating intelligence.
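To make "formal operations on symbols" concrete, the room can be caricatured as nothing more than a lookup table: the operator matches the incoming string of symbols against a rulebook and copies out whatever it dictates, with no model of meaning anywhere in the process. A minimal sketch in Python, with the question-and-answer pairs invented purely for illustration (this is my toy, not anything from Searle):

```python
# A caricature of Searle's room: pure symbol matching, no understanding.
# The rulebook pairs question-symbols with answer-symbols; the operator
# never interprets either side, only matches and copies.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # hypothetical entries, for illustration only
    "今天天气怎么样？": "今天天气很好。",
}

def operate_room(symbols_under_door: str) -> str:
    """Return whatever the rulebook dictates; understanding never enters in."""
    return RULEBOOK.get(symbols_under_door, "对不起，我不明白。")

print(operate_room("你好吗？"))  # looks like fluent Chinese, but it is only lookup
```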
The rejoinders to this are varied, of course, and I encourage you simply to follow the link above and read the summary; it's good.
There were two particularly interesting points to me in reading this summary: the Churchland response, and the Other Minds response. To these I'll add a quick note of my own.
1: The Churchland response
Searle's argument specifically addressed an approach to AI (and especially so-called "strong AI," i.e. AI that is genuinely intelligent) that was very much in vogue when he wrote the article in the 1980s, but which is very much out of vogue now: rule-driven computation. One of the responses, which looks rather prescient in retrospect, was the Churchland reply that the brain is not a symbolic computation machine (i.e. a computer as we tend to think of it) but "a vector transformer"… which is a precise description of the "neural network"-based AI that is now dominating research into e.g. self-driving cars and so on.
The main point of interest here is not so much whether the Churchlands were correct in their description of the brain's behavior as their point that any hypothesis about neural networks is not defeated by Searle's thought experiment. Why not? Because neural networks are not performing symbolic computation.
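To make the contrast concrete: a rule system matches and substitutes discrete symbols, while a neural network maps vectors to vectors through weighted sums and nonlinearities, with nothing that looks like a rule anywhere in the computation. Here is a minimal sketch of the latter; the weights and sizes are arbitrary, chosen only for illustration:

```python
import math

# A toy "vector transformer": one layer of weighted sums plus a nonlinearity.
# There are no symbols or rules here, only numbers flowing through arithmetic.
WEIGHTS = [
    [0.2, -0.5, 0.1],
    [0.7, 0.3, -0.2],
]
BIASES = [0.0, 0.1]

def transform(vector: list[float]) -> list[float]:
    """Map an input vector to an output vector: tanh(W·x + b)."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, vector)) + b)
        for row, b in zip(WEIGHTS, BIASES)
    ]

print(transform([1.0, 0.5, -1.0]))  # vectors in, vectors out; nothing symbolic happens
```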
2: The Other Minds response
The other, and perhaps the most challenging, response to Searle's argument is the "other minds" argument. Whether in other humans, or in intelligent aliens should we encounter them, or (and this is the key) in a genuinely intelligent machine, we attribute the existence of other minds intuitively. Nor do we (in general) doubt our initial conclusion that another mind exists merely because we come to have a greater understanding of the underlying neuro-mechanics. We understand far more about human brains and their relationship to the human mind than we did a hundred years ago; we do not therefore doubt the reality of a human mind. (Most of us, anyway! There are not many hard determinists of the sort who think consciousness is merely an illusion; and there are not many solipsists who think only their own minds exist.)
But the "other minds" objection runs into intuitive problems all its own: supposing we learned that an apparently-conscious thing we interacted with were but a cleverly-arranged set of waterworks, we would certainly revise our opinion. Which intuition is to be trusted? Either, neither, or in some strange way both?
And this gets again at the difficulty of using thought experiments to reason to truth. What a thought experiment can genuinely be said to show is complicated at best. Yet the utility of thought experiments, at least in raising problems but also in making genuine advances in understanding the world, seems clear.
Lowered standards
The other thing I think is worth noting in all these discussions is a point I first saw Alan Jacobs raise a few years ago, but which was only alluded to in this literature review. Jacobs cites Jaron Lanier's You Are Not A Gadget. (I don't have a copy of the book, so I'll reproduce Jacobs' quotation here.)
But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?
This is one of the essential points often left aside. Is the test itself useful? Is "ability to fool a human into thinking you're human" actually pointing at what it means to be intelligent? This is sort of the unspoken aspect of the "other minds" question. But it's one we ought to speak when we're talking about intelligence!
For those of you following along at home: I wrote all but the last 100 or so words of this a week ago and just hadn't gotten around to publishing it. It's not the even-more-absurd contradiction of yesterday's post on writing plans that it seems. Really. I promise. ↩
It's occasionally frustrating to find that there is so much I'm unfamiliar with despite attempting to read broadly and, as best I can, deeply on subjects relevant to the things I'm talking about on Winning Slowly, in programming, etc. One of the great humility-drivers of the last few years is finding that, my best efforts to self-educate notwithstanding, I know very little even in the fields I care most about. ↩