
Chinese room

From Wikipedia, the free encyclopedia


The Chinese Room argument is a thought experiment designed by John Searle (1980 [1]) as a counterargument to claims made by supporters of strong artificial intelligence (see also functionalism).

The argument holds that a computer cannot have understanding, because a human being who runs a computer program by hand does not thereby acquire understanding. Searle's argument is taken very seriously in the field of philosophy, but is regarded as obviously invalid by many scientists, even those outside the field of AI.

Searle's Philosophical Argument

Searle laid out the Chinese Room argument in his paper "Minds, Brains, and Programs," published in 1980. Ever since, it has been a recurring trope in the debate over whether computers can truly think and understand. Searle argues that computers cannot think, as follows:

Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion that proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn't, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
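To make concrete the kind of purely formal symbol manipulation the thought experiment describes, the rule-following can be sketched in a few lines of code. The sketch below is illustrative only; the rule table, its entries, and the function name are hypothetical placeholders, and, as the criticisms below emphasize, a program capable of actually passing the Turing test would be vastly more complicated than a lookup table.

    # Illustrative sketch: a purely syntactic "rule book" pairing input strings of
    # Chinese characters with output strings. Nothing in the process interprets the
    # characters; they are handled only as uninterpreted tokens.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "你叫什么名字？": "我没有名字。",
    }

    def chinese_room(input_symbols: str) -> str:
        """Return whatever output string the rule book pairs with the input."""
        # A default reply ("please say that again") covers inputs with no rule.
        return RULE_BOOK.get(input_symbols, "请再说一遍。")

    print(chinese_room("你好吗？"))  # prints the paired reply without "understanding" it

In the thought experiment, the man in the room plays the role of the machine running such a procedure by hand; the question is whether anything in the process understands the characters.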

Criticisms

Searle's argument is based on a fundamental misunderstanding of the nature of computers and of computer programs. The most important thing missing from his story is the memory of the computer: the place where the computer stores its internal state, which contains all the information it will need to reference. In the room, this corresponds to a huge library of papers that Searle would need to consult during his simulation, a labyrinth of symbols and arcana. He would have to rewrite numbers in millions of ledger books and do complicated sums over and over, all before he could even process the first symbol. Then, after millions of years of writing arcana in books, a few symbols would pop out.
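The role of this stored internal state can be illustrated by extending the lookup sketch above. Everything here is again a hypothetical placeholder; the point is only that a realistic program's reply depends on state accumulated over the whole exchange, not merely on the symbols that arrive at a given moment.

    # Illustrative sketch: unlike a bare lookup table, this toy "room" keeps internal
    # state (the "ledger books"), so earlier inputs influence later outputs.
    class RoomWithLedgers:
        def __init__(self):
            self.ledgers = []  # stands in for the vast stored state of a real program

        def step(self, symbols: str) -> str:
            # The reply may depend on everything recorded so far,
            # not only on the symbols received this turn.
            if self.ledgers and symbols == self.ledgers[-1]:
                reply = "你刚才已经说过了。"  # "you just said that"
            else:
                reply = "请继续。"  # "please go on"
            self.ledgers.append(symbols)
            return reply

    room = RoomWithLedgers()
    print(room.step("你好吗？"))
    print(room.step("你好吗？"))  # the same input now draws a different reply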

It is the system of Searle and the room, especially all those books, that is thinking, not Searle himself. The fact that Searle doesn't gain any knowledge of Chinese is no more persuasive than the fact that neuron number 190 of Searle's brain doesn't understand English.

Searle's argument, at best, shows that a simple computer program that manipulates syntax without storing any memory can never think or understand. This point is accepted by virtually everyone.

History

In 1980, John Searle published "Minds, Brains, and Programs" in the journal Behavioral and Brain Sciences. In this article, Searle sets out the argument and then replies to the half-dozen main objections that had been raised during his presentations at various university campuses (see the replies below). In addition, Searle's article in BBS was published along with comments and criticisms by 27 cognitive science researchers, followed by Searle's replies to his critics.

Over the last two decades of the 20th century, the Chinese Room argument was the subject of much discussion. In 1984, Searle presented the Chinese Room argument in a book, Minds, Brains and Science. In January 1990, Scientific American took the debate to a general scientific audience. Searle included the Chinese Room argument in his contribution, "Is the Brain's Mind a Computer Program?", which was followed by a responding article, "Could a Machine Think?", written by Paul and Patricia Churchland. Soon thereafter Searle had a published exchange about the Chinese Room with another leading philosopher, Jerry Fodor (in Rosenthal (ed.) 1991).

The heart of the argument is an imagined human simulation of a computer, similar to Turing's Paper Machine[2]. The human in the Chinese Room follows English instructions for manipulating Chinese characters, whereas a computer "follows" a program written in a programming language. The human produces the appearance of understanding Chinese by following the symbol-manipulating instructions, but does not thereby come to understand Chinese. Since a computer does just what the human does, manipulating symbols on the basis of their syntax alone, no computer, merely by following a program, comes to genuinely understand Chinese.

This argument, based closely on the Chinese Room scenario, is directed at a position Searle calls "Strong AI". Strong AI is the view that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to the humans whose abilities they mimic. According to Strong AI, a computer may play chess intelligently, make a clever move, or understand language. By contrast, "weak AI" is the view that computers are merely useful in psychology, linguistics, and other areas, in part because they can simulate mental abilities. But weak AI makes no claim that computers can actually understand or be intelligent. The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think — Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought.

We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a "program for L" is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.

  1. If Strong AI is true, then there is a program for L such that if any computing system runs that program, that system thereby comes to understand L.
  2. I could run a program for L without thereby coming to understand L.
  3. Therefore Strong AI is false.

The second premise is supported by the Chinese Room thought experiment. The conclusion of this argument is that running a program cannot create understanding. The wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation).
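One way to display the logical form of this narrow argument is the following reconstruction (it is not Searle's own notation). Write Run(s, p) for "system s runs program p" and Und(s, L) for "s understands L", let p range over programs for L, and read the second premise non-modally as holding for any program for L, which is what the thought experiment is meant to establish:

    \begin{align*}
    (1)\quad & \mathrm{StrongAI} \;\rightarrow\; \exists p\,\forall s\,\bigl(\mathrm{Run}(s,p) \rightarrow \mathrm{Und}(s,L)\bigr) \\
    (2)\quad & \forall p\,\exists s\,\bigl(\mathrm{Run}(s,p) \wedge \neg\,\mathrm{Und}(s,L)\bigr) \\
    (3)\quad & \therefore\; \neg\,\mathrm{StrongAI}
    \end{align*}

Given (2), whatever program the consequent of (1) might supply is run by some system that does not understand L, so that consequent is false; if Strong AI were true the consequent would have to be true, and hence Strong AI is false.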

The core of Searle's argument is the distinction between syntax and semantics. The room is able to shuffle characters according to the rule book. That is, the room’s behaviour can be described as following syntactical rules. But in Searle's account it does not know the meaning of what it has done; that is, it has no semantic content. The characters do not even count as symbols because they are not interpreted at any stage of the process.

Formal arguments

In 1984 Searle produced a more formal version of the argument of which the Chinese Room forms a part. He listed four premises:

  1. Brains cause minds.
  2. Syntax is not sufficient for semantics.
  3. Computer programs are entirely defined by their formal, or syntactical, structure.
  4. Minds have mental contents; specifically, they have semantic contents.

The second premise is supposedly supported by the Chinese Room argument, since Searle holds that the room follows only formal syntactical rules, and does not “understand” Chinese. Searle posits that these lead directly to four conclusions:

  1. No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.
  2. The way that the brain functions to cause minds cannot be solely in virtue of running a computer program.
  3. Anything else that caused minds would have to have causal powers at least equivalent to those of the brain.
  4. The procedures of a computer program would not by themselves be sufficient to grant an artifact possession of mental states equivalent to those of a human; the artifact would require the capabilities and powers of a brain.

Searle describes this version as "excessively crude." There has been considerable debate about whether this argument is indeed valid. These discussions center on the various ways in which the premises can be parsed. One can read premise 3 as saying that computer programs have syntactic but not semantic content, and so premises 2, 3 and 4 validly lead to conclusion 1. This leads to debate as to the origin of the semantic content of a computer program.
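On that reading, the route from premises 2, 3 and 4 to the first conclusion can be displayed schematically. The rendering below is a common reconstruction rather than Searle's own wording: for an arbitrary program p, let M_p say that running p is by itself sufficient for having a mind, let Σ_p say that running p is by itself sufficient for having semantic content, and let S say that syntax alone suffices for semantics.

    \begin{align*}
    & M_p \rightarrow \Sigma_p && \text{premise 4: minds have semantic contents} \\
    & \Sigma_p \rightarrow S && \text{premise 3: programs are defined entirely by their syntax} \\
    & \neg S && \text{premise 2: syntax is not sufficient for semantics} \\
    & \therefore\; \neg M_p && \text{conclusion 1, since } p \text{ was arbitrary}
    \end{align*}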

Replies

There are many criticisms of Searle’s argument. Most can be categorized as either systems replies or robot replies.

The systems reply

Although the individual in the Chinese room does not understand Chinese, perhaps the person and the room, including the rule book, considered together as a system, do.

Searle's reply to this is that someone might in principle memorize the rule book; they would then be able to interact as if they understood Chinese, but would still just be following a set of rules, with no understanding of the significance of the symbols they are manipulating. This leads to the interesting problem of a person being able to converse fluently in Chinese without "knowing" Chinese. Such a person would face the formidable task of learning when to say certain things (and learning a huge number of rules for "getting by" in a conversation) without understanding what the words mean. To Searle, following the rules and understanding Chinese remain clearly separate.

In Consciousness Explained, Daniel C. Dennett does not portray the two as separate. He offers an extension of the systems reply, arguing essentially that Searle's example misleads the person imagining it: we are asked to imagine a machine that would pass the Turing test simply by manipulating symbols in a look-up table, yet it is highly unlikely that such a crude system could pass the Turing test at all. Critics of Dennett have countered that a computer program is simply a list of instructions, which could be written out in a book and followed by a person just as a computer would follow them. So, if any computer program could pass the Turing test, then a person working through the same instructions could also "pass" the test, albeit much more slowly.

If the system were extended to include the various detection systems needed to produce consistently sensible responses, and were presumably rewritten as a massively parallel system rather than a serial von Neumann architecture, it becomes much less "obvious" that there is no conscious awareness going on. For the Chinese Room to pass the Turing test, either the operator would have to be supported by vast numbers of helpers, or the amount of time allowed to produce an answer to even the most basic question would have to be absolutely enormous: many millions or perhaps even billions of years.

The point made by Dennett is that by imagining "Yes, it's conceivable for someone to use a look-up table to take input and give output and pass the Turing test," we distort the complexities genuinely involved to such an extent that it does indeed seem "obvious" that this system would not be conscious. However, such a system is irrelevant. Any real system able to genuinely fulfill the necessary requirements would be so complex that it would not be at all "obvious" that it lacked a true understanding of Chinese. It would need to weigh up concepts and formulate possible answers, then prune its options, and so on, until its operation either looked like a slow and detailed analysis of the semantics of the input or simply behaved like any other speaker of Chinese. So, according to Dennett's version of the systems reply, unless we can prove that a billion Chinese speakers are themselves anything more than massively parallel networks simulating a von Neumann machine for output, we will have to accept that the Chinese Room is every bit as much a "true" Chinese speaker as any Chinese speaker alive.[1]

The robot reply

Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. Surely then it would be said to understand what it is doing? Searle’s reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he was receiving came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean.[2]

Suppose that the program instantiated in the rule book simulated in fine detail the interaction of the neurons in the brain of a Chinese speaker. Then surely the program must be said to understand Chinese? Searle replies that such a simulation will not have reproduced the important features of the brain — its causal and intentional states.[3]

But what if a brain simulation were connected to the world in such a way that it possessed the causal power of a real brain — perhaps linked to a robot of the type described above? Then surely it would be able to think. Searle agrees that it is in principle possible to create an artificial intelligence, but points out that such a machine would have to have the same causal powers as a brain. It would be more than just a computer program.[4]


References

  1. ^ Dennett, Daniel C. (1991). Consciousness Explained. Allen Lane, The Penguin Press. ISBN 0-7139-9037-6 (UK hardcover edition, 1992).
  2. ^ Searle, John R. (1990). Is the Brain's Mind a Computer Program? Scientific American, Jan. 1990, 26-31.
  3. ^ Searle, John R. (1990). Is the Brain's Mind a Computer Program? Scientific American, Jan. 1990, 26-31.
  4. ^ Searle, John R. (1990). Is the Brain's Mind a Computer Program? Scientific American, Jan. 1990, 26-31.