
The Chinese Room Argument

Thinking about what “Strong AI” actually is and means.

May 19, 2018 · Filed under Tech: #ai, #ethics, #philosophy

I took a bunch of half days last week, because goodness but I was tired. Too long running at full-throttle, and I’d been running out of steam as a result. And what did I do instead, that ended up being so effective in recharging? Well, mostly… read literature reviews on interesting topics in philosophy, at least for the first few days. Dear reader, I am a nerd. But I thought I’d share a few of the thoughts I jotted down in my notebook from that reading.1


“The Chinese Room Argument”

This was an argument whose influence I’ve certainly encountered, but with the actual content of which I was totally unfamiliar.2

The argument, in exceedingly brief summary, is that “formal operations on symbols cannot produce thought”—that syntax is insufficient for conveying semantics. Searle made the argument by way of a thought experiment; I found my way to the discussion via a reference in a review on “Thought Experiments,” which I may post about later. The thought experiment supposes a man who is handed symbols (under a door, perhaps) which are questions in Chinese, and who has a set of rules for constructing correct answers to those questions, also in Chinese. However much the answers he passes back give the person outside every appearance that he knows Chinese, the man himself does not in fact know Chinese. He simply has a set of rules that allow him to give the appearance of knowledge. The Chinese Room argument, in other words, is a(n attempted) refutation of the Turing Test as a metric for evaluating intelligence.
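In programming terms (this gloss is mine, not Searle’s), the room amounts to nothing more than a lookup table. A minimal sketch, with made-up rules, just to show how purely formal the operation is:

```ts
// A hypothetical sketch of Searle's room as pure symbol manipulation.
// The "rules" are an opaque mapping from input symbols to output symbols;
// nothing in this program understands Chinese (or anything else).
type Symbols = string;

const rules = new Map<Symbols, Symbols>([
  ["你好吗？", "我很好，谢谢。"], // "How are you?" -> "I'm well, thanks."
  ["你会说中文吗？", "会，当然。"], // "Do you speak Chinese?" -> "Yes, of course."
]);

function room(question: Symbols): Symbols {
  // Purely formal: match the input's shape, emit the rule's output.
  return rules.get(question) ?? "请再说一遍。"; // "Please say that again."
}

console.log(room("你好吗？")); // gives every appearance of understanding
```

The function matches shapes and emits shapes; nowhere does meaning enter in. That is the whole of Searle’s point.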

The rejoinders to this are varied, of course, and I encourage you simply to follow the link above and read the summary—it’s good.

There were two particularly interesting points to me in reading this summary: the Churchland response, and the Other Minds response. To these I’ll add a quick note of my own.

1: The Churchland response

Searle’s argument specifically addressed an approach to AI (and especially so-called “strong AI,” i.e. AI that is genuinely intelligent) that was very much in vogue when he wrote the article in the 1980s, but which is very much out of vogue now: rule-driven computation. One of the responses, which looks rather prescient in retrospect, was the Churchlands’ reply that the brain is not a symbolic computation machine (i.e. a computer as we tend to think of it) but “a vector transformer”… which is a precise description of the “neural network”-based AI that now dominates research into self-driving cars and the like.

The main point of interest here is not so much whether the Churchlands were correct in their description of the brain’s behavior as their point that no hypothesis about neural networks is defeated by Searle’s thought experiment. Why not? Because neural networks are not performing symbolic computation.
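To make the contrast concrete (again my illustration, not the Churchlands’): where the room maps discrete symbols through explicit rules, a single layer of a neural network just transforms one vector of numbers into another. A sketch, with weights invented for illustration (a real network learns them):

```ts
// A minimal sketch of "vector transformation": one layer of a neural
// network. The weights and bias here are made up for illustration.
const weights = [
  [0.5, -0.2, 0.1],
  [0.3, 0.8, -0.5],
];
const bias = [0.1, -0.1];

// One layer: output[i] = max(0, sum_j(weights[i][j] * input[j]) + bias[i])
function layer(input: number[]): number[] {
  return weights.map((row, i) => {
    const sum = row.reduce((acc, w, j) => acc + w * input[j], bias[i]);
    return Math.max(0, sum); // ReLU nonlinearity
  });
}

console.log(layer([1.0, 0.5, -1.5])); // numbers in, numbers out: no symbols anywhere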

2: The Other Minds response

The other, and perhaps the most challenging, response to Searle’s argument is the “other minds” argument. Whether in other humans, or in intelligent aliens should we encounter them, or—and this is the key—in a genuinely intelligent machine, we attribute the existence of other minds intuitively. Nor do we (in general) doubt our initial conclusion that another mind exists merely because we come to have a greater understanding of the underlying neuro-mechanics. We understand far more about human brains and their relationship to the human mind than we did a hundred years ago; we do not therefore doubt the reality of a human mind. (Most of us, anyway! There are not many hard determinists of the sort who think consciousness is merely an illusion; and there are not many solipsists who think only their own minds exist.)

But the “other minds” objection runs into other intuitive problems all its own: supposing that we learned an apparently-conscious thing we interacted with were but a cleverly-arranged set of waterworks, we would certainly revise our opinion. Which intuition is to be trusted? Either, neither, or in some strange way both?

And this gets again at the difficulty of using thought experiments to reason to truth. What a thought experiment can genuinely be said to show is complicated at best. Yet their utility—at least in raising problems, but also in making genuine advances in understanding the world—seems clear.

Lowered standards

The other thing I think is worth noting in all these discussions is a point I first saw Alan Jacobs raise a few years ago, but which was only alluded to in this literature review. Jacobs cites Jaron Lanier’s You Are Not a Gadget. (I don’t have a copy of the book, so I’ll reproduce Jacobs’ quotation here.)

But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?

This is one of the essential points often left aside. Is the test itself useful? Is “ability to fool a human into thinking you’re human” actually pointing at what it means to be intelligent? This is sort of the unspoken aspect of the “other minds” question. But it’s one we ought to speak aloud when we’re talking about intelligence!


  1. For those of you following along at home: I wrote all but the last 100 or so words of this a week ago and just hadn’t gotten around to publishing it. It’s not an even more absurd contradiction of yesterday’s post on writing plans than it seems. Really. I promise.↩

  2. It’s occasionally frustrating to find that there is so much I’m unfamiliar with despite attempting to read broadly and, as best I can, deeply on subjects relevant to the things I’m talking about on Winning Slowly, in programming, etc. One of the great humility-drivers of the last few years is finding that, my best efforts to self-educate notwithstanding, I know very little even in the fields I care most about.↩