
Talk:Chinese room

From Wikipedia, the free encyclopedia


older stuff

OK, I'll bite... Since Hofstadter's reply was the one Searle called the Systems reply and since you edited it out for no apparent reason, I'll put it and the other responses here and then we'll talk. My paper was actually on the whole drawn-out debate between Searle & H/D. The next part of my original paper was on Hofstadter's reply, but I left that out, since it has little bearing on the Chinese Room article. --Eventi
I was only cleaning up the language of this article--if I removed something it was probably a mistake (something like a cut with intention to paste at another location, which was forgotten). Thanks for putting it back. I agree that responses to Searle ought to go into other article(s). I'll see if I can come up with a few. I admit that my totally unsupported comment is just that--it's on a Talk page, after all--and I may well choose to back it up if I can find some time, but the impression I get from those in the AI field I know well--Minsky, Kurzweil, and others with whom I have conversed--is that no one takes Searle seriously except Searle. --LDC

I think the essay was well written, though a little out of date. It's far from a neutral point of view, which would be the hardest part to fix. -LC

Thanks... What do you think is out of date? --Eventi

Replies to the Chinese Room

The first of the six replies is called the "Systems Reply", coming from Berkeley. Those who support this response claim that it is true that the individual in the room does not understand Chinese, but the system does. The operator of the room is merely a part of a system that understands Chinese.

Searle's reply to this is, in his words, "quite simple". If a person were to internalize the entire system, memorizing the rules for manipulating Chinese symbols and doing all the necessary manipulations in his head, he would still be unable to understand the Chinese questions, though he could give the correct answers using the system. "All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him". The user of the mental Chinese system still does not understand Chinese, so neither could the subsystem.
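For readers who prefer code, here is a minimal sketch (in Python, with rules and phrases invented purely for illustration) of what "purely syntactic" rule-following amounts to: the responder maps input symbol strings to output symbol strings by lookup alone, attaching no meaning to either side.

    # Hypothetical rulebook: pairs of symbol strings, nothing more.
    RULEBOOK = {
        "你叫什么名字？": "我没有名字。",
        "你喜欢米饭吗？": "是的，我喜欢。",
    }

    def chinese_room(input_symbols: str) -> str:
        """Return whatever the rulebook dictates; no meaning is consulted."""
        return RULEBOOK.get(input_symbols, "请再说一遍。")  # default: "please say it again"

    print(chinese_room("你喜欢米饭吗？"))  # prints the scripted reply

Whether the lookup table, the function, or the whole running system can be said to "understand" the exchange is exactly what the Systems Reply and Searle dispute.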

The second reply presented by Searle is called "The Robot Reply," and it comes from Yale. The reply supposes that the Chinese room could be attached to a robot, and that certain symbols coming into the room would be coming from "sensory organs" on the robot, such as cameras and microphones. Furthermore, some of the symbols passed out of the room would cause the robot to perform various activities, enabling it to interact with the world, much as a human does. According to supporters of this reply, such a machine would have genuine understanding.

Searle's reply to this suggestion is that the person inside the room still has no understanding of the input he receives or the output he gives. It makes no difference to the operator of the room whether the input symbols are coming from a video camera or an interviewer, or whether the output symbols are replies to a question or commands to the robot.

The third reply, coming from Berkeley and MIT, is called the "Brain Simulator" reply. Suppose that the program acting as a mind is not intended to simply answer questions at the level of human understanding, but instead is designed to simulate the functioning of the brain. Such a computer would duplicate the sequence of neuron firings at the synapses of the brain of a native Chinese speaker, massively in parallel, "in the manner that actual human brains presumably operate when they process natural language" (Searle 1980). With this intricate simulation of the brain, wouldn't the causal powers of the brain also be duplicated? Then wouldn't the machine truly understand?

Searle first points out that this reply is actually inconsistent with the strong AI belief that one does not have to know how the brain works to know how the mind works, since the mind is a program that runs on the brain's hardware. "On the assumption of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology (363)". Searle's answer to this suggestion is yet another mental experiment. Imagine that instead of using a computer to simulate the brain's operation, we use a complex system of water pipes and valves to simulate it. A person receives Chinese symbols as before, but instead of looking up rules and calculating the answer as before, he adjusts the valves in the plumbing. The water pipes return their output, and the person passes the correct symbol back out. The person still has no understanding.
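As a rough illustration (not taken from Searle's paper), a "brain simulator" in the sense used here only has to reproduce the formal pattern of firings. The toy network below, with invented weights, does nothing but push around 0s and 1s, which is the point Searle presses with the water pipes: the same formal steps could be run on silicon, on paper, or in plumbing.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(8, 8))             # invented "synaptic" strengths
    state = (rng.random(8) > 0.5).astype(float)   # which "neurons" are firing now

    def step(state, weights, threshold=0.0):
        """Advance the firing pattern one tick: fire if weighted input exceeds threshold."""
        return (weights @ state > threshold).astype(float)

    for _ in range(5):
        state = step(state, weights)
        print(state)  # just a pattern of 0s and 1s, whatever the substrate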

Searle foresees objections to his water pipe computer, namely the systems response: that it is the conjunction of the operator and the pipes that understands. Again he answers that, in principle, the operator could internalize the system as before, and there would still be no understanding. Searle says that this is because the system simulates the wrong things about the brain: only the formal description of the brain, and not its ability to produce "intentional states".

One reply which comes from Berkeley takes a different tack from the others, and really asks a different philosophical question. Searle calls it the "Other Minds" reply. How does anyone know that other people understand Chinese, or for that matter, anything at all? The only way to tell if anything has cognitive ability is by its behavior. Therefore, if a machine could pass the same behavioral tests as a person, you must attribute understanding to the machine.

Searle's response to this objection is that to study cognitive science, as proponents of strong AI claim to do, you must assume that cognition exists, just as in the physical sciences you assume that there are real, knowable physical objects (366). Cognitive states, such as we assume our own minds create, are not demonstrable by conventional empirical means, unlike the components of physical science. These cognitive states are only demonstrable by cognition; we only know we think because we know we think.

The last reply printed in The Mind's I is called the "Many Mansions" reply, and it also comes from Berkeley. The proponents of this reply state that Searle's argument presupposes that the hypothetical machine uses only the technology available today. They believe that there will some day be devices manufactured to duplicate the "causal" powers of the brain that Searle believes to be the real difference between machines and brains. Such machines would have artificial intelligence, and along with it cognition.

Searle writes that he has no objection to this claim, but argues that this is fundamentally different from the claim made by strong AI. Searle states that his interest was in challenging the thesis that "mental processes are computational processes over formally defined elements." Since this response redefines the goals of strong AI, it also trivializes the original claim. The new claim is that strong AI is "whatever artificially produces and explains cognition," and not that a formal program can produce and explain cognition.

Why aren't the criticisms a larger part of the article? I think they'd go better in this article than on their own -- "criticism of the Chinese Room" doesn't stand independently. No one brings up the opposition to the Chinese Room except when presented with the Chinese Room argument. Also, it's an important part of the presentation of the argument to note that not many people take the experiment seriously for these reasons (unless I'm mistaken...). I wouldn't support breaking the section off unless the main article got too long -- which seems unlikely. Is anyone putting this material in somewhere/are there significant objections to its insertion (duly NPOVed, etc.)? Mindspillage 14:47, 27 Dec 2004 (UTC)

What I'm not sure about, and which I'll ask Searle about or look up on the web, is whether what Searle is concerned about is parallel to consciousness, or what is called the experience of qualia. Hopefully, this would be easier to get agreement from AI scientists about: that the Chinese room system, or a series of pipes, whatever, wouldn't have consciousness like people do. AI theorists have to either propose a machine can have what we term consciousness, or that there is a mind-body dualism where the mind is either dispensable or epiphenomenal. What I guess Searle also wants to forward in addition to this, is that the understanding and Intentionality of the human mind is unable to be explained without consciousness, and can't be emulated/simulated without it. Aside from Searle's argument, this seems intuitive: because, if it wasn't necessary, why would we have it? Of course, we wouldn't experience existence if we didn't have it, but that's not the point. That it exists, and we exist, as being aware, suggests that it's necessary to thought. I should note that I have other concerns about consciousness; one is with epiphenomenalism and Wittgenstein's characterization of it (which I assume Searle rejects)--my other is substantially broader--I think that the issue of consciousness (how are we conscious) and the issue of existence (why does anything exist at all) have to be the same issue, and can't be separated, and ultimately we have to look at ontology. But I won't get into that here. Brianshapiro

I'm pretty sure Searle is critiquing functionalism's explanation of consciousness, not qualia. Why would you get agreement from functionalists? This is their hypothesis, that consciousness is not in principle human-specific. --snoyes 05:27, 30 Nov 2003 (UTC)

casual powers of the brain

From the perspective of somebody who has never heard this thought experiment before and therefore comes in with no pre-conceptions: The article makes sense, except that I am at a loss as to what the "casual powers of the brain" are. Perhaps this can be expanded upon, or referenced to another article?

You're quite right. The only trouble is that the 'causal powers of the brain' are left fairly vague by Searle. I think he says in effect that 'computational power alone is insufficient to produce mind, so whatever causes mind is not computational – so let's just call it that causal part of the mind'. But this is just my POV, so I am loath to include it. Banno 21:10, Jan 2, 2005 (UTC)
Yes, "Causal," not "Casual."  :) I believe the "causal powers of the brain" refers to the particular capacity of a brain to "cause" a mind, through neurological activity. Strong AI wants a computer to be able to create a mind by running software, without needing to simulate the neurological activity of a brain. Such a simulation would presumably require complete understanding of brain function.

False premise reply

I'm not sure if the false premise argument is present in the historical literature, but it is commonly heard in discussions of the Chinese room. The version I put in the article is not quoted from one particular source, but is a simplified version of the typical argument. Here is one example of the false premise argument from a philosophy discussion site:
"So, how do you decide whether or not a system understands semantics? The Turing Test is one possible test. Indeed, the only way to test for semantics is to test for understanding in general. There is no measurable difference between understanding in general and an understanding of semantics. Syntax is sufficient for semantics. Searle's distinction between purely syntactical systems and semantic systems is illogical. There is no observable difference. If a system passes the Turing Test, it has demonstrated an understanding of semantics."

Given that this argument is not necessarily historic, does it merit inclusion in the article? Kaldari 00:06, 24 Jun 2005 (UTC)

Well, let's pick at the argument and see how it stands up.
1. the only way to test for semantics is to test for understanding in general
2. There is no measurable difference between understanding in general and an understanding of semantics.
therefore,
3. Syntax is sufficient for semantics.
4. But this contradicts Searle's second premise;
5. So Searle's argument cannot stand.
Is this the correct interpretation? If this is what you are suggesting, then the argument is a reductio. As such one can conclude that either (Syntax is sufficient for semantics) xor (Syntax is insufficient for semantics). Given this, pedantically, the argument does not reach the conclusion it claims.
Remember that the Chinese room is an argument in support of (Syntax is insufficient for semantics). What argument is presented against it? (1), above, appears non-controversial; it is almost tautologous, given that semantics is understanding. (2) above is less clear - I can't quite see what it might mean. But certainly, (3) simply does not follow from (1) and (2). Syntax is not even mentioned in (1) and (2), it just appears in (3).
So I'd say no, the argument is neither valid nor cogent, nor does it have a place in the literature. So it should not be included in the article. Banno 22:16, Jun 24, 2005 (UTC)

second version

Hi Banno, the example I put on the talk page above was just a casual example of someone using the false premise argument as part of their criticism on a discussion board. The example was not especially well written or meant to be presented as a formal argument. Forget points 1 through 5. The false premise argument is actually extremely simple:
In the Chinese Room thought experiment, Searle asserts that purely syntactic rules (without semantics) are theoretically sufficient to pass the Turing test. He doesn't offer any justification for this assertion or its plausibility, he just says to "suppose" it happens. The false premise argument simply says that this assertion is wrong, i.e. it is impossible to pass the Turing test with purely syntactic rules, as the Turing test is essentially a test for semantics. Thus the conclusions drawn from this experiment are not justified since it is based on a false premise. Here is the relevant original material from Searle:
"Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from tile point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers."
The false premise reply simply rejects this supposition as impossible. This seems like a pretty straight-forward and valid argument. What do you think? Kaldari 22:42, 24 Jun 2005 (UTC)

From the article:
It would be impossible for the person in the room to pass the Turing test merely with a book of syntactical rules. The rules would have to include semantics in order to actually fool the Chinese-speaking interviewer. If the rules included semantics, the person in the room would actually gain some understanding of Chinese. In other words, the Turing test is essentially a test for semantics.
By definition, if it is in the book of rules, it is syntax, not semantics. The point of the Chinese Room is that the person in the room does not understand the symbols in the way a native speaker would; all they are doing is following the rules. So the idea that the rules include semantics does not make sense. Perhaps Kaldari, you could fill out the argument to overcome this? Banno 22:29, Jun 24, 2005 (UTC)

You misunderstand the argument. Again, please disregard the example I gave above. It was confusing. The false premise argument does not assert that the rules include semantics. It accepts Searle's scenario that the rules do not include semantics. It merely states that such rules would not be sufficient to pass the Turing test. Kaldari 22:46, 24 Jun 2005 (UTC)
Here is perhaps a better statement of the argument:
Searle's assumption (Banno's emphasis) that it is possible to pass the Turing test using purely syntactical rules (without semantics) is wrong, since the Turing test is essentially a test for semantics. Thus the conclusions that Searle makes are based on a false premise.
Kaldari 23:25, 24 Jun 2005 (UTC)
Here is a better citation of the false premise argument: [1]. The most relevant section is the one titled "Holes in the Chinese Room". Kaldari 23:55, 24 Jun 2005 (UTC)

But - as it says in the article - Searle does not just assume that syntax is insufficient for semantics - he argues for it by presenting the Chinese Room argument: "Searle holds that the room follows only formal syntactical rules, and does not “understand” Chinese". Neither Searle-in-the-room, nor the room as a whole, understands Chinese; all they do is follow a set of rules for moving symbols around. Remember that the point of the Chinese Room argument is to show that it is not true, as strong AI claims, that "if a machine were to pass a Turing test, then it can be regarded as 'thinking' in the same sense as human thought". Nothing in the room "thinks" like a human; yet the room, by supposition, passes the test. Therefore according to Searle the claim of strong AI is false. Banno

Can you find a better citation? An unidentified blogger might not be the most reliable source; furthermore, points one and two appear to misunderstand the hypothetical nature of the Chinese Room argument; it is a presupposition of the thought experiment that the room passes the Turing test, so it is pointless to say that it might not. Point three is correct, but misses the point, which is that: since the room passes the Turing Test and yet does not think like a human, it is wrong to suppose that a machine that passes the Turing Test ipso facto thinks like a human. Banno 07:48, Jun 25, 2005 (UTC)

monkeys and systems

I didn't write that Searle assumes that syntax is insufficient for semantics. I wrote that he assumes it is possible that the room passes the Turing test (without semantics). You write the same thing in your reply: "it is a presupposition of the thought experiment that the room passes the Turing test". And it is certainly not pointless to say that his presupposition is flawed! What if I made the following argument:
I have a monkey that is really smart. Suppose that I make it study day and night for five years so that it passes the SAT. The SAT is supposed to be a test to determine if someone is smart enough to get into college, but of course no matter how smart a monkey is, it would never belong in college. In this case, however, the monkey made it into college because of its great SAT score. Therefore the SAT is a flawed test.
The problem here is not in my logic, but in the assumption that if a monkey studies enough it would pass the SAT. Clearly, criticizing the presupposition is critical to debunking this ridiculous example. As the writer of the source I cited states: "for such an experiment to have any applicable results, the assumptions have to be possible at least."
As I have said before, please ignore that initial example I posted on the talk page, as it is far too scatterbrained to be a good example of the false premise argument. Kaldari 16:18, 25 Jun 2005 (UTC)

This is fun; we seem to be at cross-purposes here, and although there may be something hiding in your argument, it remains unclear to me what exactly that something is.

Remember that it is the proponents of strong AI that say that a machine that passes the Turing Test can think like a human, not Searle. Searle set the Chinese Room up as an example of a machine that does not think like a human, yet ex hypothesi it passes the test. Now, are you claiming that:

a) it is impossible for any machine, including the Chinese Room, to pass the Turing Test;

b) some machines might pass the Turing Test, but the Chinese Room is not one of them;

c) if the room passes the Turing Test, then ipso facto it is cognisant (it understands semantics)?

If (a), then you are simply denying the efficacy of the Turing Test, and not addressing the issues raised by Searle. If (b), then could you provide reasons for your claim? Banno

But I suspect that you wish to claim (c), which is the most interesting case. But this seems to me to be no more than a repeat of the systems reply. The systems reply says that the room indeed understands Chinese; Searle's reply is that, if it does, it does not do so in the same way that a human does. When Searle-in-the-room is talking about "rice", for instance, there is no understanding, no intentionality associated with "rice" - all that is done is the shuffling of symbols. Banno

So it seems to me that your reply is just a variation on the systems reply, and does not merit its own subheading. Banno 21:36, Jun 25, 2005 (UTC)

Banno, the false premise reply clearly claims B, as I have patiently tried to explain multiple times. If you need reasons to back up claim B, please refer to the citation in the article, particularly the section labeled "1". Basically, any set of purely syntactic rules is going to be limited in what questions it can answer in a meaningful or human-like way. As Chomsky points out, human language is "discrete but infinite". Thus in order for a set of syntactic rules to be able to pass the Turing test, the set of rules would also have to be infinite, which is impossible.
Also, I would appreciate it if you would remove your false characterization of the reply as representing claim C, as that claim is not central to the false premise argument. Thanks. Kaldari 28 June 2005 18:38 (UTC)

Style & POV

Just a note on style - your versions have asserted that Searle is wrong, rather than that if the reply is accepted, then Searle is wrong. So, for instance, you had: "Thus the conclusions that Searle makes are based on a false premise", which asserts that Searle is wrong: clearly POV. This needs to be couched in a conditional, as in my: "if Searle-in-the-room passes the Turing Test, then ipso facto Searle-in-the-room understands Chinese; and therefore his second assumption is incorrect".

Also, it is a good idea to leave the "dubious" in until this discussion reaches some conclusion, at which point I will remove it. Banno 21:36, Jun 25, 2005 (UTC)

Banno, I wrote the false premise reply in the same style as the other replies. They all very obviously represent POVs. That is why they are called criticisms. If you want to NPOV the false premise reply, why not NPOV the other replies as well? For example, "Houser contends that blah blah blah" rather than "blah, blah, blah"? Adding your own rebuttal is not NPOVing! Kaldari 28 June 2005 18:51 (UTC)
I've used Searle as the main reference, essentially for convenience, since it is Searle that brings the objections together and replies to them. If someone wants to add an additional criticism, then given the contentious nature of the topic, it is reasonable to ask for attribution and for clarity. So, what is it that you see as POV in those replies? One could not object to the POV, as long as it is attributed. Since what you have presented lacks clarity and is not present in the literature, the only reasonable thing to do is to remove it until such time as it can be presented more suitably. Banno June 29, 2005 08:09 (UTC)
The objections themselves are not attributed, they are simply given as if dictated from the gods. Who says "perhaps the person and the room considered together as a system" understands Chinese? Who says that if you place the room inside a robot surely it can be said to "understand what it is doing"? They are written in the same sourceless POV style as I used for the false premise reply. The only reason I wrote it in that style was to be consistent with the other sections. The only sources attributed in the other 2 replies are the replies to the replies: which of course is Searle. If you want to completely NPOV the criticisms section, go ahead, but don't single me out just because I tried to match the style of the other sections! If anything, the false premise section is the most NPOV as it actually gives a small bit of context now: "Another argument, possibly only found outside academic discussion, is:" Kaldari 29 June 2005 17:56 (UTC)

That's why the links are at the bottom - where they should be. I recommend [Minds, Brains, and Programs], as it has replies to another four objections, much better than the concatenation I constructed here. I wrote the criticism section as an introduction, not an exhaustive account. If you want to improve it, please do, but use something with a bit more grunt than one blogger's opinion.

Far too many folk read one account and decide that the Chinese Room must be wrong, usually because they do not see what it is the argument is saying. The argument does not claim that AI is impossible. Nor that the brain is not a machine. But undergrad computer scientists the world over, after a quick scan of some third party summary, decide it tries to do either or both. The argument claims that a machine's being able to pass the Turing test is not sufficient reason to suppose that the machine has a mind that is the same as a human mind; your "false premise" does no more than claim that a machine that passes the Turing test passes the Turing test. It does not even address the issue. The statement "Searle's assumption that it is possible to pass the Turing test using purely syntactical rules is wrong" is not even close, since that assumption is not made by Searle, but by the advocates of strong AI. If you think otherwise, then tell us what more is involved in the Chinese room than syntactic rules.

So far as I can see, the argument you present is without merit, and should be removed. There is plenty of far better material you could use to mount a case against Searle. Banno June 30, 2005 12:34 (UTC)

Banno, I am not "mounting a case against Searle". I added the false premise reply because it is a point that often comes up in discussions of the Chinese Room on philosophy discussion boards. I'm sorry if the best written version of it I could find was written by a blogger. It seems clear to me that you are rather obsessed with defending the Chinese Room argument, rather than writing a comprehensive article about it. It is apparent that you "own" the article and will only be satisfied once you have bullied me off with your nonsensical analyses of the "merits" of the argument, which, however simple, you seem completely incapable of grasping. Rather than try to explain for the nth time why your rebuttal is both absurd and original research, I'm going to stop wasting my time and leave the article to your disposal. I'm taking the article off my watchlist, so feel free to delete whatever you want and add pictures of pink bunnies or whatever suits your fancy. Kaldari 30 June 2005 14:56 (UTC)
That's unfair. The argument in "false premise" is inadequate; I had hoped that you would meet the challenge and produce a better argument. I think that the article as it stands is biased in favour of Searle, but I don't think that this should be fixed by simply adding any old junk that disagrees with him. I have been quite explicit, I think, in explaining the inadequacies of the section in question. If that peeves you, then tough, but don't blame me if you are unable to fix those inadequacies. Banno June 30, 2005 19:47 (UTC)

Argument's Failure

My comments in parentheses - Banno June 29, 2005 08:27 (UTC)

Searle's Chinese room argument's failing (clearly POV - Searle and others contend that the argument does not fail -Banno) stems from his inability to differentiate between the computational substrate and the computation itself. (given what is said below, it appears that the anonymous author of this section is not aware of Searle's substantial discussion of the Background, which corresponds reasonably well with "computational substrate", but is a much clearer concept -Banno)

For a Chinese individual the computational substrate is the physical world with all its rules for chemical reactions and atomic interactions, the program is his brain. His computationally active brain (program) understands Chinese (the equation of "brain" and "program" is problematic - "brain" and "mind" might work -Banno).

In the Chinese room, the individual is the computational substrate. (Why? why isn't the physical world outside the room the computational substrate, as it is for the individual above? -Banno) He knows nothing of Chinese, merely manipulating characters. The program is the rulebook. The computationally active rulebook (program) computed by the individual understands Chinese.

In the Chinese room example, Searle takes the individual and claims to debunk strong AI by stating that he doesn't really understand Chinese. What Searle is doing is taking the computational substrate and stating it doesn't understand. None of Searle's brain's atoms and chemical reactions understands English. His active brain however does. (This appears to be a variation of the systems reply - the parts do not understand Chinese, but the whole does -Banno)

[interjection] Similarly one could say that in the water-pipe example Searle misinterprets the function of the human delivering the results. If we imagine the water pipes as connections within the brain, the water as impulses, and the valves as synapses, we have (assuming equivalent structure and complexity) a snapshot of a Chinese speaker's brain, and therefore the system assumes any level of cognition that we could attribute to the Chinese speaker who served as a model for the system. The human in the room forms an input/output conduit and nothing more - transmitting stimuli/triggers to the brain and returning the appropriate responses dumbly as directed. A completely paralysed Chinese speaker may only have hearing for input and can only return sequences of blinks to signal his understanding; however, we would not say that this limitation was a result of his intelligence or understanding ... What Searle does here is to deny that any such understanding takes place because the eyelids and ears themselves do not understand Chinese. This seems rather absurd to me, however I appreciate that I may be reading his objection out of context.
I think, instead, a better argument would be that a Chinese speaker's brain was capable of configuring and adapting to ongoing stimuli to reach the point where it was able to understand Chinese. The water-pipe system, though complex enough to deliver snapshots of the Chinese speaker's current processes, is assumed incapable of such growth or self-adaptation. Surely the true test of any complex system's 'intelligence' is its ability to adapt to its environment ... one would require a system of pipes and plumbers which, from an initial configuration, could be taken anywhere in the world and would, over time, learn to provide 'appropriate' and 'self-determined' responses. To 'function' if you prefer.
Of course, I am a strong believer that there is nothing magical about the human brain or the thought processes. I believe that intelligence, free will, and determinism are largely illusory and result only from complexity at a level we will, as individuals, never understand. The facade is necessary for our evolutionary function and psychological wellbeing, so I think it is quite normal for many to find such a concept unbearable. Most of us would rather believe that there is a fundamental difference between brain and machine or between mind and program/data. I think we all need to get over our own sense of self-importance and accept ourselves as nothing more than complex pattern-matching systems which, in time, given the massive parallelism promised by quantum computing, will be not only emulated but inevitably superseded. From the humblest ant to a music-composing songbird, from algae to mankind ... we are all gods unto ourselves and dust unto the universe.
Of course, I have never composed a sonnet or symphony - so, in some people's evaluation, I may not be qualified to bring any meaningful ideas to this debate. GMC

Searle's counter argument that the individual could memorise the rulebook changes nothing. The substrate is still the individual's brain; the program is in his memory. When the individual runs the rulebook from memory, he doesn't understand Chinese but the active program does (This appears confused, since the active program would appear to be no more than the individual acting from the rule book...so how can the individual not understand Chinese when the active program does? - Banno)

To extend his counter argument, if a real Chinese person were to interact with the individual merely running the Chinese rule book from memory, we would have two entities that understand Chinese conversing with one another.

In a computer analogy, in the case of the real Chinese, the "Chinese" program, his brain, is run in "hardware" mode (what does "hardware mode" mean? -Banno). For the other individual, the "Chinese" program is run in "software" mode, the individual's brain's computational capabilities are used to run the "Chinese" program. In the end, there are two entities that understand Chinese conversing together.

The conclusion is one that has been known in computer science for a very long time. That computational substrate is irrelevant to the active computation. There is no such thing as fake computation. All computation is genuine. Computation is computation regardless of whether the substrate is physical or emulated. (? -Banno)

Mario eating magic mushrooms on a real Nintendo is the same Mario that eats magic mushrooms on a Nintendo emulated on my computer, and is the same Mario that eats magic mushrooms on a Nintendo emulated on a computer emulated on another computer.
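A toy sketch of that substrate-independence claim, in Python (the function and the layers are invented purely for illustration): the same little "program" gives identical results whether it is run directly or through additional interpretation layers.

    def program(x):                 # the "computation" itself
        return x * x + 1

    def emulator(prog, x):          # a trivial extra substrate layer
        return prog(x)

    def emulated_emulator(prog, x): # an emulator running on an emulator
        return emulator(prog, x)

    assert program(7) == emulator(program, 7) == emulated_emulator(program, 7)
    print(program(7))  # 50, regardless of how many layers it runs through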

There are indeed responses to Searle that rely on differentiating emulation from simulation. Perhaps the author had these in mind. A clearer exposition would be most welcome. Banno June 29, 2005 08:27 (UTC)
This is all Original Research. Where are the sources that are making these arguments/counterarguments? Kaldari 29 June 2005 18:09 (UTC)
Agreed; and therefore the section should not be included. Banno June 29, 2005 18:32 (UTC)

The short version

In this diagram, we show that "PROCESSOR + PROGRAM = PROCESS".

           |-----------------------------------------------------------------------|
           |   PROCESSOR    |      PROGRAM       |           PROCESS               |
|----------|----------------|--------------------|---------------------------------|
| Searle:  |Laws of physics | Brain              | Mind, no Chinese understanding  |
|----------|----------------|--------------------|---------------------------------|
| Chinese: |Laws of physics | Brain              | Mind with Chinese understanding |
|----------|----------------|--------------------|---------------------------------|
| Room:    |Mind            | Rulebook           | Chinese understanding           |
|----------|----------------|--------------------|---------------------------------|
| Memory:  |Mind            | Memorized rulebook | Chinese understanding           |
|----------------------------------------------------------------------------------|

Notice in the last two cases that the mind (PROCESSOR) is not the one that has Chinese understanding.
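As a rough sketch of the table's analogy (with an invented rulebook), the Python interpreter can play the PROCESSOR, an inert source string the PROGRAM, and the resulting running code the PROCESS; whatever "understanding" there is belongs to neither the processor nor the text alone, but to the process.

    # The program: an inert string of rules, understood by nothing until executed.
    program_text = """
    def reply(symbols):
        return {"ni hao": "ni hao!"}.get(symbols, "qing zai shuo yi bian")
    """

    namespace = {}
    exec(program_text, namespace)        # PROCESSOR + PROGRAM -> running PROCESS
    print(namespace["reply"]("ni hao"))  # the process produces the behaviour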

From Searle's 3 premise argument

  1. Programs are purely formal (syntactic). (That is true, all the programs in the "PROGRAM" column are purely syntactic.)
  2. Human minds have mental contents (semantics). (That is true, the mind is a PROCESS that has semantics.)
  3. Syntax by itself is neither constitutive of, nor sufficient for, semantic content. (That is true, the syntax (PROGRAM) is no closer to understanding Chinese than the PROCESSOR.)
  4. Therefore, programs by themselves are not constitutive of nor sufficient for minds. (That is true, the PROGRAM requires a PROCESSOR to create a PROCESS.)

We have to be careful not to compare apples and oranges. Although all of the above premises and the final deduction are true, the conclusion that Searle extrapolates from it, that computers (PROCESSOR) running human-made software (PROGRAM) cannot understand Chinese (PROCESS), is a step beyond his logical arguments.

This is not Searle's conclusion. Quite the opposite. He does claim in several places that a suitable arrangement of a processor and interface could understand Chinese in the way a human does; he must, since he thinks that the human mind is such an arrangement. What the Chinese Room shows is that satisfaction of the Turing test does not imply possession of such a mind. Banno June 29, 2005 18:41 (UTC)
Is that all it's meant to demonstrate? It's not "directed at" the more general claim that "the appropriately programmed computer literally has cognitive states"? If not -- if this is a misconception so widespread that Searle himself believes it -- then we should take pains in the article to correct that misconception, starting with radical changes to the introduction. --Echeneida
Why? Banno 21:31, August 14, 2005 (UTC)

Does Wikipedia Understand the Chinese Room Argument?

One would think that, since the Chinese Room argument is within Wikipedia, and I can ask Wikipedia about it, Wikipedia would thus understand the Chinese Room argument.

Ha! Too bad that's hardly a Turing test. --Echeneida
Too bad that Wikipedia has no processor and therefore we can't argue that it's a machine. Besides, this discussion has nothing to do with improving this article.--Sampi 23:34, 20 November 2005 (UTC)

Searle's responses

Searle's defense against the systems argument is a little flawed. He suggests that if the man in the room memorizes all of the rules and leaves the room, he's able to converse in Chinese without knowing the language - then we have a case where he doesn't understand Chinese yet is able to speak in it.

I would argue that we then have two intelligences in one brain. If you ask the guy a question in Chinese, he responds, intelligently, using his internalized rules. However, if you ask him in English what it was he just said, he won't know. This is the heart of Searle's defense - he doesn't understand Chinese. However, you can simply reverse the argument. Ask him a question in English - and then ask (in Chinese) what he just said - and his 'Chinese self' will be unable to answer.

There is complete symmetry here. You simply have two brains inside one head with no way (apart perhaps from speed of response and flaws in the Chinese rulebook) to know which is the 'real' man. That being the case, what reason have you to assume that either the Chinese brain or the English brain is the 'real' one? If you can't tell, what is to say that the Chinese half isn't intelligent?