
Talk:Chinese room: Difference between revisions

{{Archive box|[[/Archive 1|Archive 1 (Jan 2006 to Nov 2007)]]}}

Revision as of 13:04, 28 April 2009

This article is within the scope of WikiProject Philosophy (task forces: Logic, Philosophy of mind, Analytic philosophy, Contemporary philosophy). It has been rated B-class on Wikipedia's content assessment scale and Mid-importance on the project's importance scale.

Disclosure

I just did a little work on the Further Reading section, which links to the arXiv preprint "Demolishing Searle's Chinese Room", which was written by me. I'd like to point out that I did not place the link, and was in fact a bit surprised to discover it here.

Original Research

<Moved to the bottom>

Teacher look! Johnny is cheating on the (Turing) test!

I always thought that Turing anticipated many arguments against AI, including Searle-like arguments, when he created the test. The questioner may ask anything to determine whether the box is thinking, understanding, has Buddha nature, or whatever else they feel separates human thought from a machine's. One rule: no peeking.

There's the rub. Searle looks inside and says, "Hey, wait a minute! Clearly nothing is understanding Chinese because there are only syntactic rules, etc. So there is no understanding, and therefore there can't be strong AI".

But he misses the point of the Turing Test. He doesn't get to look inside to determine if there is "understanding"; he must determine it from the outside.

Can Searle determine, from outside the box, that the machine is not really "understanding" Chinese? By the description of the test, he cannot. So, the Turing Test has been passed and the machine is "thinking". The difference between syntax and semantics, the necessity of some component within the box to "understand", and the supposed distinction between strong and weak AI are either not well defined or red herrings.

Does this make sense or am I missing something?

Gwilson 15:53, 21 October 2007 (UTC)[reply]

You're right, Turing did anticipate Searle's argument. He called it "the argument from consciousness". He didn't answer it, he just dismissed it as being tangential to his main question "can machines think?" He wrote: "I do not wish to give the impression that I think there is no mystery about consciousness ... [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." (see philosophy of artificial intelligence). When it comes to consciousness, Turing recommended we follow a "polite convention" that if it acts like it thinks, we'll just go ahead and say "it thinks". But he's aware this is only a convention and that he hasn't solved the hard problem of consciousness.
Turing never intended for his test to be able to determine if a machine has consciousness or mental states like "understanding". Part of Searle's point here is to show that it fails to do so. Turing was too smart to fall into this trap. Searle's real targets aren't careful thinkers like Turing, but others, who are given to loose talk like "machines with minds" (John Haugeland) or who think the computational theory of mind solves the hard problem of consciousness, like Jerry Fodor, Steven Pinker or Daniel Dennett. ---- CharlesGillingham 17:03, 22 October 2007 (UTC)[reply]
It's a shame if Charles's statement above can't be sourced anywhere, because it is eminently interesting and really well suited to a spot on the article (if it can be sourced somehow). If not, good job Charles on this interesting bit of thought.
Et Amiti Gel (talk) 07:39, 14 January 2008 (UTC)[reply]
Turing's answer to the "argument from consciousness" is in his famous 1950 paper Computing Machinery and Intelligence. In the last chapter of Norvig & Russell's standard AI textbook, they equate Searle's argument with the one that Turing answers. Turing's reply is a version of the "Other Minds Reply", which is mentioned in this article. ---- CharlesGillingham (talk) 18:52, 15 January 2008 (UTC)[reply]

Rewrite of replies

I have rewritten the "replies" section. I wanted to include a number of very strong and interesting proposals (such as Churchland's luminous room and Dennett's argument from natural selection) that weren't in the previous version. I also wanted to organize the replies (as Cole 2004 does) by what they do and don't prove.

I am sorry to have deleted the previous versions of the System and Robot replies. These were quite well written and very clear. I tried to preserve as much of the text as I could. All of the points that were made are included in the new version. (Daniel Dennett's perspectives are amply represented, for example). However, because there were so many points to be made, some of this text had to be lost. My apologies to the authors of those sections. ---- CharlesGillingham 14:52, 9 November 2007 (UTC)[reply]

Searle falls into a trap

He set up a trap for himself and fell into it. If the computer can pass the Turing test, then it is irrelevant whether or not it "understands" Chinese. In order for it to be able to respond in a human manner, it would have to be able to simulate conversation. The answers have to come from somewhere, regardless of the language, if they are to seem natural. The thing is, Searle doesn't seem to realize that his argument is essentially equivalent to the normal definition of a Turing test. The human in his experiment is a manual Turing machine simulator. He basically tries to deny that a Turing machine can do something, but posits it as a premise in his argument. He presupposes his conclusion that a computer has no mind, and then uses an argument that has nothing to do with this conclusion at all. To sum up his argument: A computer can be built that easily passes a Turing test. A human can compute this program by hand. Therefore computers are stupid and smell bad. The only thing that the argument proves is that the human brain is at least Turing complete; I think everyone already knew that, Mr. Searle.--66.153.117.118 (talk) 20:44, 25 November 2007 (UTC)[reply]

This is encyclopedically irrelevant, and misses Searle's point that a TT passer can lack genuine semantics. 1Z (talk) 10:35, 26 November 2007 (UTC)[reply]
I'm glad that you seem to have gotten the point that the Chinese room is a universal Turing machine, and so anything a computer can do, the Chinese room can do. If a "mind" can "emerge" from any digital machine, of any architecture, it can "emerge" from the Chinese room. That's not Searle's main point (as 1Z points out), but it's essential to the argument. Searle's main point takes the form of an intuition: he cannot imagine that a mind (with "genuine semantics") could emerge from such a simple setup. Of course, a lot of the rest of us can. The power of the argument is the stark contrast between Searle's understanding and the room's understanding, and the way it forces AI believers to "put up or shut up". Searle is saying "there's another mind in the Chinese room? I don't think so. Why don't you prove it!" And of course, at the end of the day, we really can't. We can only make it seem more plausible. But we thought it was plausible to begin with, and nothing will convince Searle. ---- CharlesGillingham (talk) 21:01, 26 November 2007 (UTC)[reply]
The irony is that the Turing test is also a "put up or shut up" test. I imagine Turing would have said to Searle "if you think there is some difference in causality or understanding (or whatever ill-defined concept you posit is important) between the artificial and the human "mind", prove it. Show that you can determine, using the Test, which is which". Since the Test is passed in the Chinese Room Argument, we should conclude that "causality", "understanding" or "mind" are really just philosophical mumbo-jumbo and have nothing to do with the issue. I think Searle's "success" is that he sucked everyone into trying to "find the mind" in the CR (a task as impossible as "finding the mind" in a living, breathing human). The response should have been "Show me yours and then I'll show you mine". Gwilson 14:51, 30 November 2007 (UTC)[reply]
I think I am with Gwilson here. If the CR really does communicate in Chinese, and we accept that there is no "mind" or "understanding" in there, then it follows that a person who communicates in Chinese does not really require a mind or understanding either, whatever they mean. Myrvin (talk) 20:34, 6 April 2009 (UTC)[reply]
I wonder if the CR is really a Universal Turing Machine. I've always wondered what happens if the CR is asked "What is the time?". It would seem that the man in the room would have to understand the question in order to look at his watch. The CR could give an avoidance reply (I don't have a watch) but, if this happens often enough, I would become very suspicious. Similar difficult questions could include "How many question marks are there in this question???"; and "What is the third Chinese character in this question?" I can't help feeling that there are whole classes of questions that the CR could not answer, but a person or even a computer could. Myrvin (talk) 10:38, 30 March 2009 (UTC)[reply]
"What time is it?" is a tricky example, because Searle is billions of times slower than (we hope) a computer would be. First he would match up the new characters one at a time to the books and charts in the room, and this would give him the number of a file cabinet (one of the millions of file cabinets in the warehouse space of his room). He would open the drawer and pull out his next instruction, which would tell him to copy something he can't read and ask him to go to another file cabinet. He would putter around like this for awhile, and eventually he would find and instruction that said, "If the hour on the clock is ten then write a big X on the 46th piece of paper in the stack on your cart and goto file cabinet 34,599, drawer 2, file 168." Eventually all this puttering would lead to him pushing several dozen chinese characters under the door that would translate to, "I think it was around ten when you asked me. I don't have a watch obviously, because I'm actually just a disembodied mind without a body. You didn't know that?" Searle might have been puttering around for hours, or even years, before he put these characters under the door.
Searle, sitting in the room, has all the essential features of a UTM. He has paper, pencils and a program. This is all you need. Alan Turing showed that anything which uses just paper and pencils can simulate any conceivable process of computation (see Church-Turing thesis). Therefor, if any computer can do it, Searle can do it (given enough time and paper).
If there is some intelligent behavior that Searle can't do then there is a serious problem, not with Searle's argument, but with "strong AI". Strong AI claims that a machine can have mind, which implies that a machine can simulate any intelligent behaviour. (This weaker claim is Searle's "hypothetical premise", which he grants at the top of the argument.) Church-Turing implies that Searle can simulate anything the machine can do. If it turns out that there is some intelligent behaviour that Searle can't simulate, then, in a simple proof by contradiction, a machine can't simulate all intelligent behaviors, and strong AI is mistaken (about the weaker claim, not to mention the stronger claim). Do you see how this works? "A machine can simulate intelligence" implies that "the chinese room can simulate intelligence". If the room can't simulate intelligence, then no other machine can either. ---- CharlesGillingham (talk) 05:37, 6 April 2009 (UTC)[reply]
I think some of S's mind snuck into that explanation, because there is a place where S looks at the clock. The instructions might as well say "When you see these characters, look at the clock and write down the time. Then look up your time characters in some filing cabinet, write down what they match with there, and pass that out."
I think that the filing cabinets must be updated continuously for the room to work well. Otherwise any questions about events that happened after the CR was built would be difficult to answer. If this were so, then part of that updating could be a change to the matching characters for the "What is the time?" question. In effect, the filing cabinets would have a clock, unbeknown to the operator. I don't think this works for "How many question marks are there in this question???" Myrvin (talk) 20:34, 6 April 2009 (UTC)[reply]
Searle himself is updating the file cabinets, with precisely the same unreadable symbols that the computer program used to update its memory. Remember that Searle is simulating a computer program that, we assume, was capable of fully intelligent behavior. If he can't, or if the program can't exist, then the thought experiment is sort of pointless.
The original computer program could only answer the question about time if it had access to some kind of clock. To tell you the truth, I don't know how operating systems actually access their clocks. I assume it's some kind of special assembler statement. For the thought experiment to make sense, we have to assume that Searle has access to the same resources as the original program. In my example, I substituted "look at the clock" for whatever special hardware call a program needs to get at the host computer's clock. ---- CharlesGillingham (talk) 21:10, 6 April 2009 (UTC)[reply]
As I remember it, most computers keep their own clock by updating a field every millisecond or less. A Super Searle might do this by being instructed to change an entry very frequently, again using symbols he doesn't understand. However, the new information for updating the filing cabinets must come from the outside. S could receive cryptic symbols and follow instructions as to what to do with them. I am happy now with the "What is the time?" question, but am still iffy about the "How many question marks???" one. Myrvin (talk) 19:43, 7 April 2009 (UTC)[reply]
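A note on the mechanics discussed above: the following is a rough sketch (in Python, with invented opcode names; nothing like this appears in Searle's paper) of how a clock access can be just one more primitive instruction that the rule-follower executes without ever interpreting the symbols he shuffles.

  import datetime

  slots = {}  # the "file cabinets": numbered slots holding symbols the operator never reads

  def step(instr):
      """Execute one instruction without interpreting the symbols it moves around."""
      op = instr[0]
      if op == "READ_CLOCK":      # the one primitive that touches the host clock
          slots[instr[1]] = datetime.datetime.now().strftime("%H:%M")
      elif op == "COPY":          # blindly copy one slot into another
          slots[instr[2]] = slots[instr[1]]
      elif op == "OUTPUT":        # push the contents of a slot under the door
          return slots[instr[1]]
      return None

  program = [("READ_CLOCK", 7), ("COPY", 7, 12), ("OUTPUT", 12)]
  for instr in program:
      out = step(instr)
      if out is not None:
          print("passed under the door:", out)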

To understand or not understand...

I have a more fundamental question about this whole experiment: what does it mean "to understand"? We can make any claim we like about which humans do or don't understand, and whether the room does or not, but how do we determine that anything "understands"? In other words, we draw a distinction between syntax and semantics, but how do they differ? To me, these are the two extremes of a continuous attribute. Humans typically "categorize" everything and create artificial boundaries so that "logic" can be applied. Syntax is the "simple" side of thought, where grammar rules are applied, e.g. "The ball kicks the boy": grammatically correct, but semantically wrong (the other side of the spectrum), because rules of the world, in addition to grammar, tell us that whoever utters this does not "understand". In effect, we say that environmental information is also captured as rules that validate an utterance on top of grammar. To understand is to perceive meaning, which in turn implies that you are able to infer additional information from a predicate by applying generalized rules of the environment. Those rules are just as writable into this experiment's little black book as grammar is. For me, the categorization of rules, and the baptism of "to understand" as "founded in causal properties" (again undefined), creates a false thought milieu in which to stage this experiment. (To me, a better argument in the debate on AI vs. thought is that a single thought processes an infinite amount of data -- think chaos theory and analog processing -- whereas digital processes cannot. But this is probably more relevant elsewhere.) —Preceding unsigned comment added by 163.200.81.4 (talk) 05:35, 11 December 2007 (UTC)[reply]

I think Searle imagines that the program has syntactically defined the grammar to avoid this. Instead of something simple like <noun> <verb> <noun>, the grammar could be defined with rules like <animate object noun> <verb requiring animate object> <animate or inanimate noun>. So "kick" is a <verb requiring animate object>, "boy" is an <animate object noun> and "ball" is an <inanimate object noun>. The sentence "The ball kicks the boy" is then parsed as <inanimate object noun> <verb requiring animate object> <animate object noun>, which doesn't parse correctly. Therefore a computer program could recognize this statement as nonsense without any understanding of balls, boys or kicking. It just sorted the symbols into the categories to which they belonged and applied the rules.
This is a simple example and the actual rules would have to be very complex ("The ball is kicked by the boy" is meaningful, so obviously more rules are needed). I'm not sure if anyone has been able to define English syntax in such a way as to avoid these kinds of semantic errors (or Chinese for that matter). Additionally, it is unclear to me how a syntax could be defined which took into account the "semantics" of previous sentences. (For example, "A boy and his dog were playing with a ball. The boy kicked it over the house". What did he kick? Searle also cites a more complex example of a man whose hamburger order is burnt to a crisp. He stomps out of the restaurant without paying or leaving a tip. Did he eat the hamburger? Presumably not.) However, if we assume that some program can pass the Turing Test, then we must assume that it can process syntax in such a way.
I agree with you, however, that Searle fails to define what he means by some key terms like "understanding". He argues that a calculator clearly doesn't understand while a human mind does. This argument falls flat since the point in question is whether the Chinese Room is "understanding" or not. It also begs the question, if the Chinese Room (which has no understanding) cannot be differentiated from a human mind then how are we sure that understanding is important to "mind" or that a human mind really does have "understanding"? Gwilson (talk) 15:58, 5 January 2008 (UTC)[reply]
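For concreteness, here is a toy sketch (in Python; the category labels follow the discussion above, but the lexicon and patterns are invented for illustration) of the purely syntactic category check described in this thread: it rejects "The ball kicks the boy" using only category rules, with no model of boys, balls or kicking.

  # Category labels follow the thread above; the lexicon and patterns are invented.
  LEXICON = {
      "boy": "animate_object_noun",
      "dog": "animate_object_noun",
      "ball": "inanimate_object_noun",
      "kicks": "verb_requiring_animate_object",
  }

  VALID_PATTERNS = {
      ("animate_object_noun", "verb_requiring_animate_object", "animate_object_noun"),
      ("animate_object_noun", "verb_requiring_animate_object", "inanimate_object_noun"),
  }

  def parses(sentence):
      """True if the word categories match an allowed pattern; no semantics involved."""
      words = [w for w in sentence.lower().rstrip(".").split() if w != "the"]
      return tuple(LEXICON.get(w, "unknown") for w in words) in VALID_PATTERNS

  print(parses("The boy kicks the ball"))  # True
  print(parses("The ball kicks the boy"))  # False: rejected by category rules alone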

The Refactoring

The "blockhead" map which turns a simulation into a lookup table ( or "refactors" or whatever) requires bounded size input--- if the input can be arbitrarily long, you cannot refactor it as written. However, it is easy to get around a size limitation by doing this

"I was wondering, Mr. Putative Program, if you could comment on Shakespeare's monologue in Hamlet (to be continued)"

"Go on"

"Where hamlet says ..."

But then there's a "goto X" at each reply step, which effectively stores the information received in each chunk of data in the quantity X. If the chunks are of size N characters, the refactored program has to be immensely long, so that the jumps can go to 256^N different states at each reply, and that length must be multiplied by the number of mental states, which is enormous. So again, the argument is intentionally perversely misleading. The length of the program is so enormous that the mental state is entirely encoded in the "instruction pointer" of the computer, which tells you what line of code the program is executing. There is so much code that this pointer is equal in size to the number of bits in a human mind. Likebox (talk) 19:47, 5 February 2008 (UTC)[reply]
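A minimal sketch (toy scale; the table entries are invented) of the goto-table refactoring described above: each reply is a pure lookup on (current state, next input chunk), and all memory of the conversation lives in the single state index, the analogue of the enormous "instruction pointer" mentioned above. A full table would need on the order of 256^N entries per state for chunks of N characters.

  # (state, input chunk) -> (canned reply, next state). A full table would need an
  # entry for every possible chunk at every state, which is why it is astronomically large.
  TABLE = {
      (0, "Could you comment on Hamlet's monologue? (to be continued)"): ("Go on", 1),
      (1, "Where Hamlet says 'To be, or not to be'..."): ("He is weighing suicide, of course.", 2),
  }

  def blockhead_reply(state, chunk):
      # Pure lookup: no parsing, no understanding, just a jump to the next state.
      return TABLE.get((state, chunk), ("I don't follow.", state))

  state = 0
  for chunk in ["Could you comment on Hamlet's monologue? (to be continued)",
                "Where Hamlet says 'To be, or not to be'..."]:
      answer, state = blockhead_reply(state, chunk)
      print(answer)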

Your analysis of the blockhead argument is absolutely correct. Computationalism and strong AI assume that "mental states" can be represented as symbols, which in turn can be coded as extremely large numbers (represented as X in this example). "Thinking" or "conscious awareness" can be represented as a dynamic process of applying a function recursively to a very large number. Refactoring this function into a goto-table is of course possible, and requires the exponential expansion of memory that you calculated.
However, since this is only a thought experiment, the fact that no such table could ever be constructed is irrelevant. The blockhead example just drives the point home that we are talking about "lifeless" numbers here. The details of the size of the program are not really the issue—the issue is whether the mind, the self, consciousness can be encoded as numbers at all. Our intuitions about "mind" and "self" tend to slip away when faced with the utter coldness of numbers. The failure of our intuitions has to do with our inability to see that extremely large numbers are as complex and interesting as the human spirit itself. ---- CharlesGillingham (talk) 19:23, 7 February 2008 (UTC)[reply]
I'm not sure that this fact (the table could not be constructed) is irrelevant. If the table cannot be constructed, then it cannot be used as an argument in support of Searle's intuition. I might suggest that a Turing machine could not encode such a table even with an infinite tape because the number of entries in the table might be uncountably infinite (i.e. an infinite number of entries in an infinite number of combinations).
I wanted to bring forth another point, and this seems as good a place as any. What provision does Searle or the refactor algorithm make for words/characters which aren't in the lexicon, but which still make sense to those who "understand" the language? For one example, we've probably all seen the puzzles where one has to come up with the common phrase from entries like: |r|e|a|d|i|n|g| or cor|stuck|ner (reading between the lines and stuck in a corner) and we can decipher smilies like ;-> and :-O. To refactor, one must anticipate an infinite number of seemingly garbage/nonsense entries as well as those which are "in the dictionary". How would Searle process such a string of characters, or even a string of Chinese characters one of which was deliberately listed upside down or on its side? Gwilson (talk) 19:48, 21 February 2008 (UTC)[reply]
Well, such a table could be constructed in theory (by an alien race living in a much larger universe with god-like powers). I meant only that building such a table is impractical for human beings living on earth. (My rough upper bound on the table length is 2^(10^15) -- one entry for each possible configuration of the memory of a computer with human level intelligence.)
The size of the program or the complexity of its code is not the issue. The issue is whether a program, of any size or complexity, can actually have mind. A convincing refutation of Searle should apply to either program: the ludicrously simple but astronomically large "blockhead" version or the ludicrously complex but reasonably sized neuron-by-neuron brain simulation. You haven't refuted Searle until you prove that both cases could, in theory, have a mind.
On the issue of infinities: it doesn't really affect the argument significantly to assume that the machine's memory has some upper bound, or that the input comes in packets (as Likebox proposes above). In the real world, all computers have limits to the amount of input they can accept or memory they can hold, so we can safely assume that our "Chinese-speaking program" operates within some limits when it's running on its regular hardware. This implies that, for example, a Turing Machine implementation would only require a finite (but possibly very large) amount of tape and would have a finite number of states. Searle's argument (that he "still understands nothing") applies in this case just as easily as to a case with no limit on the memory, so the issue of infinities really does nothing to knock down Searle.
The answer to your second question is that, if the program can successfully pass the Turing test, then it should react to all those weird inputs exactly like a Chinese speaker would. Searle (in the room) is simply following the program, and the program should tell him what to do in these cases. Note that Searle's argument still works if he is only looking at a digitized representation of his input, i.e. he is only seeing cards that say "1" or "0". Searle "still understands nothing" which is all he thinks he needs to prove his point.
(And here is my usual disclaimer that just because I am defending Searle, it doesn't mean that I agree with Searle.) ---- CharlesGillingham (talk) 01:06, 22 February 2008 (UTC)[reply]
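For what it is worth, the arithmetic behind the 2^(10^15) bound quoted above can be sketched at toy scale (the 10^15-bit memory is an assumption from the comment above, not an established figure): an N-bit memory has 2^N possible configurations, and the refactored table needs one entry for each.

  # Each of N bits can independently be 0 or 1, so an N-bit memory has 2**N states.
  for n_bits in (10, 20, 30, 40):
      print(f"{n_bits} bits -> {2**n_bits:,} possible configurations")
  # At the assumed 10**15 bits, the count is 2**(10**15): a number with roughly
  # 3 * 10**14 decimal digits, hence "impractical for human beings living on earth".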
What I was hoping to show was that if you could drive the table size to infinity, then the algorithm could not be guaranteed to terminate and hence would not be guaranteed to pass the TT. The Blockhead argument only works if the program can pass TT, since everyone agrees that a program that fails TT does not have mind. I realize that this is pointless because Blockhead is really just copying its answers from the mind of a living human. Given two human interlocutors (A and B), one could easily program a pseudo-Blockhead program which will pass TT. The pseudo-Blockhead takes A's input and presents it to B. It copies B's response and presents it back to A, and so on. Provided A and B are unaware of each other, they will consider pseudo-Blockhead to have passed TT. The only difference between Blockhead and pseudo-Blockhead is that Blockhead asks B beforehand what his answer will be for every possible conversation with A. At the end of the day though, Blockhead is using the mind of B to answer A, the same as pseudo-Blockhead.
So, if Searle asks us to "Show me the mind" in Blockhead or pseudo-Blockhead, it's easy. It is B's mind which came up with the answers. I'm hoping this means that both Blockhead and pseudo-Blockhead do nothing to support Searle since they are in fact merely displaying the product of a mind.
Getting back to the smilies and such. I recall that back around the time Searle was writing his paper, one of the popular uses of computers was to produce "ASCII ART": pictures, some basic stick figures and others huge and detailed, printed on line printers or terminals using ASCII keyboard characters. These are instantly recognizable to anyone with "mind"; however, they do not follow any rules of syntax. In essence, they are all semantics and no syntax. Can Searle's argument, that the program is merely manipulating "symbols" according to syntactical rules without "understanding", apply when the input has no symbols and no syntax? Having those things in the input is, I think, somewhat crucial to Searle's argument. However, I find that Searle's argument is slippery: when cornered on one front, his argument seems to change. Does the CR not have mind because it processes only syntax without semantics, or because computers don't have causality? Are the symbols the 1's and 0's of the computer or the symbols of the language? I don't know. Gwilson (talk) 21:24, 23 February 2008 (UTC)[reply]
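The "pseudo-Blockhead" relay described above is simple enough to sketch directly (the stand-in for the human B is invented so the example runs): the program does no processing at all, so any apparent understanding in its answers is B's.

  def pseudo_blockhead(ask_b):
      """ask_b is whatever channel reaches the hidden human B."""
      def respond(question_from_a):
          return ask_b(question_from_a)  # pure relay: no lookup table, no rules
      return respond

  # Stand-in for the human B, just so the example runs:
  fake_b = lambda q: "Forty-two." if "meaning of life" in q else "Let me think about that."
  respond = pseudo_blockhead(fake_b)
  print(respond("What is the meaning of life?"))  # B's answer, relayed verbatim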
Well, no, the ASCII pictures are still made of symbols, and he's still manipulating them according to the syntactic rules in his program. So he's still just following syntactic rules on symbols. The semantics is the picture (as I recall, back when I was a kid, it was usually a Playboy centerfold). It's important for his argument that he can't figure out what the symbols mean, so it's important that he's never able to actually see the picture --- like if he gets the characters one at a time. He only manipulates them syntactically (i.e. meaninglessly, e.g. sorting them into piles, comparing them to tables, putting some of them into the filing cabinets, taking others out, comparing #20 to #401 to see if they're the same, counting the ones that match, writing that down and putting it in drawer #3421, going to filing cabinet #44539 and getting his next instruction, etc.), never noticing that all of these characters would make a picture if laid out on the floor in the right order. Eventually he gets to an instruction that says "Grab big squiggly boxy character #73 and roundish dotted character #23 (etc) and put them through the slot." And the guy outside the room reads: "Wow. Reminds me of my ex-wife. You got any more?" The virtual Chinese Mind saw the picture, but Searle didn't.
The point is, his program never lets him know what the input or output means, and that's the sense in which his actions are syntactic. It's syntax because the symbols don't mean anything to him. He doesn't know what the Chinese Mind is hearing (or seeing) and he doesn't know what the Chinese Mind is saying.
In answer to your question: the argument is supposed to go something like this. The only step in the argument that is controversial is marked with a "*".
  1. CR has syntax. CR doesn't have semantics.* Therefore syntax is insufficient for semantics.
  2. Brains cause minds, i.e. brains must have something that causes a mind to exist; we don't know what it is, but it's something. Let's call it "causal powers". Brains use causal powers to make a mind.
  3. Every mind has semantics. CR doesn't have semantics. Therefore CR doesn't have a mind. Therefore CR doesn't have causal powers.
  4. Computers only have syntax. Syntax is insufficient for semantics. Every mind has semantics. Therefore computers can't have a mind. Therefore computers don't have causal powers.
Again, the only real issue is "CR doesn't have semantics". Everything else should be pretty obvious.
"Has syntax" means "uses symbols as if they didn't stand for anything, as if they were just objects."
"Has semantics" means "uses symbols that mean something", or in the case of brains or minds, "has thoughts that mean something."
Does that help? ---- CharlesGillingham (talk) 09:06, 24 February 2008 (UTC)[reply]
Yes, thanks, it helps me understand Searle's argument better. Part of the reason Searle's description of the CR is so engaging (to me) is that he makes it easy to see where the "illusion of understanding" comes from. The syntax of the input language (Chinese in this case) allows the program to decode inputs like <man symbol> <bites symbol> <dog symbol> and produce output <dog symbol> <hurt symbol> <? symbol> without an "understanding" of what dogs are or biting is. When the input has no rules of syntax for the program to exploit, I can't imagine how the program parses it and produces the "illusion of understanding". Of course, this is unimportant to Searle's argument. The CR can only process using its syntax rules; it has no "understanding". It doesn't matter to Searle where the "illusion" comes from, it only matters that there is no "real" understanding, since the CR uses only syntax to arrive at a response.
I want to ponder on this thought: if the input contains no rules of syntax which encode/hide/embed a semantic then any apparent semantic produced must be "real semantic" and not "illusion of semantic". Once again, that will depend on what is meant by "semantic" and "understanding". Like a magician pulling a quarter out of your ear when beforehand we checked every possible place in the room and on the magician and on you (except your ear) and found no hidden quarters. If he could pull a quarter out of your ear, it's either really magic or the quarter was in your ear. Gwilson (talk) 15:33, 25 February 2008 (UTC)[reply]

(deindent) There is a presumption in the discussion here, and with Searle's argument in general, that it is relatively easy to imagine a machine which does not have "real semantics" and yet behaves as if it does, producing symbols like "dog hurt" from "man bites dog" in a sensible way without including data structures which correspond to any deep understanding of what the symbols mean.

This intuition is entirely false, and I don't think many people who have done serious programming believe it. If you actually sit down to try to write a computer program that tries to extract syntactical structure from text for the purpose of manipulating it into a reasonable answer, you will very quickly come to the conclusion that the depth of data structures that is required to make sense out of the written text is equal to the depth of the data structures in your own mind as you are making sense out of the text. If the sentence is about "dogs", the program must have an internal representation of a dog capable of producing facts about dogs, like the fact that they have legs, and bark, and that they live with people and are related to wolves. The "dog description" module must be so sophisticated that it must be able to answer any conceivable intuitive question about dogs that people are capable of producing without thinking. In fact, the amount of data is so large and so intricately structured that it is inconceivable that the answer could predictably come out with the right meaning without the program having enough data stored that the manipulations of the data include a complete understanding. Since the data structures in current computers are so limited and remote from the data structures of our minds, there is not a single program that comes close to being able to read and understand anything, not even "One fish two fish red fish blue fish".

This is known to all artificial intelligence people, and is the reason that they have not succeeded very well in doing intuitive human things like picture recognition. Searle rewords the central difficulty into a principle: "It is impossible to produce a computational description of meaning!" But if you are going to argue that the Turing test program is trivial, you should at least first show how to construct a reasonable example of a program that passes the Turing test, where by reasonable I only mean requiring resources that can fit in the observable universe. Likebox (talk) 17:40, 25 February 2008 (UTC)[reply]
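To make the point about the "dog description" module concrete, here is the kind of naive representation a program might start with (entirely invented for illustration); the argument above is that answering arbitrary intuitive questions would force this structure to grow until it amounts to a full understanding of what a dog is.

  # A naive "internal representation of a dog": a handful of stored facts.
  DOG = {
      "has_legs": 4,
      "can_bark": True,
      "lives_with_people": True,
      "related_to": "wolf",
  }

  def answer(question):
      # Keyword matching over the stored facts; anything else falls through.
      if "legs" in question:
          return f"A dog has {DOG['has_legs']} legs."
      if "bark" in question:
          return "Yes, dogs bark."
      return "No idea."  # the structure is far too shallow for open-ended questions

  print(answer("How many legs does a dog have?"))
  print(answer("Would a dog enjoy reading Hamlet?"))  # "No idea."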

You've just given the "contextualist" or "commonsense knowledge" reply, served, as is customary, with a liberal sprinkling of the "complexity" reply. (Which I find very convincing, by the way. And so do Daniel Dennett and Marvin Minsky. Your reply is very similar to Dennett's discussion in Consciousness Explained.) You're right that "the depth of data structures that is required ... is equal to the depth of the data structures in your own mind", as it must be. Unfortunately for defeating Searle, he doesn't agree there are 'data structures' in your mind at all. He argues that, whatever's in your head, it's not "data structures", it's not symbolic at all. It's something else. He argues that, whatever it is, it is far more complicated than any program that you can imagine, in fact, far more complicated than any possible program.
Note that, as the article discusses (based on Cole, Harnad and their primary sources), an argument that starts out "here's what an AI program would really be like" can, at best, only make it seem more plausible that there is a mind in the Chinese Room. At best, they can only function as "appeals to intuition". My intuition is satisfied. Searle's isn't. What else can be said? ---- CharlesGillingham (talk) 18:34, 25 February 2008 (UTC)[reply]

Forest & Trees

I've made a number of changes designed to address the concerns of an anonymous editor who felt the article contained too much "gossip". I assume the editor was talking about the material in the introduction that gave the Chinese Room's historical context and philosophical context. I agree that this material is less on-point than the thought experiment itself, so I moved the experiment up to the top and tucked this material away into sections lower in the article where hopefully it is set up properly. I put the context in context, so to speak. If these sections are inaccurate in any way (i.e., if there are reliable sources that have a different perspective) please feel free to improve them. ---- CharlesGillingham (talk) 19:23, 7 February 2008 (UTC)[reply]

Definition of Mind, Understanding etc

I think we've touched on this before in the talk section, but one of the things Searle doesn't do is define what he means by "has mind" or "understands". He says in his paper, "There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument." He makes this claim because we can all agree that, whatever "understanding" is, we know that Searle doesn't have it in regard to Chinese.

However, at the end of the day he has left a vital part of his argument undefined, and in doing so prevented people from discovering potential flaws in it. While Searle's argument has a certain mathematical "proofiness" about it, because he doesn't define key terms like "understanding" or "has mind", it isn't a real proof, only an interesting philosophical point of view.

What I'm wondering is, can we somehow get the fact that Searle doesn't define understanding into the first few paragraphs? Something like: "Searle does not attempt to define what is meant by understanding. He notes that "There are clear...." ".

The Turing test deliberately avoids defining terms like mind and understanding as well. So, I think we could follow those words with something like CharlesGillingham's earlier words here: "When it comes to consciousness, Turing recommended we follow a "polite convention" that if it acts like it thinks, we'll just go ahead and say "it thinks"."

Does anyone feel that would improve the article? —Preceding unsigned comment added by Gwilson (talkcontribs) 14:49, 28 February 2008 (UTC)[reply]

This could fit in nicely right after (or mixed in with) the paragraph where David Chalmers argues that Searle is talking about consciousness. I like the Searle quote. The truth is, defining "understanding" (or what philosophers call intentionality) is a major philosophical problem in its own right. Searle comes from the ordinary language philosophy tradition of Ludwig Wittgenstein, J. L. Austin, Gilbert Ryle and W. V. O. Quine. These philosophers insisted that we always use words in their normal, ordinary sense. They argue that "understanding" is defined as 'what you're doing when you would ordinarily say "Yes, I understand."' If you try to create some abstract definition based on first principles, you're going to leave something out, you're going to fool yourself, you're going to twist the meaning to suit your argument. That's what has usually happened in philosophy, and is the main reason that it consistently fails to get anywhere. You have to rely on people's common sense. Use words in their ordinary context -- don't push them beyond their normal limits. That's what Searle is doing here.
Turing could fit in two places. (1) In the paragraph that argues that the Chinese Room doesn't create any problems for AI research, because they only care about behavior, and Searle's argument explicitly doesn't care how the machine behaves. (2) In the "other minds" reply, which argues that behavior is what we use to judge the understanding of people. It's a little awkward because Turing is writing 30 years before Searle, and so isn't directly replying to the Chinese Room, and is actually talking about intelligence vs. consciousness, rather than acting intelligent vs. understanding. But, as I said above, I think that Turing's reply applies to Searle, and so do Norvig & Russell. ---- CharlesGillingham (talk) 10:00, 4 March 2008 (UTC)[reply]
Turing is in there now, under "Other minds". ---- CharlesGillingham (talk) 17:41, 2 April 2009 (UTC)[reply]

Footnote format

This is just an aesthetic choice, but, as for me, I don't care for long strings of footnotes like this.[1][2][3][4][5] It looks ugly and breaks up the text. (I use Apple's Safari web browser. Perhaps footnotes are less obtrusive in other browsers.) So I usually consolidate the references for a single logical point into a single footnote that lists all the sources.[6] This may mean that there is some overlap between footnotes and there are occasionally several footnotes that refer to the same page of the same source. I don't see this as a problem, since, even with the redundancy, it still performs the essential functions of citations, i.e. it verifies the text and provides access to further reading. Anyone else have an opinion? (I'm also posting this at Wikipedia talk:Footnotes, to see what they think.) ---- CharlesGillingham (talk) 17:30, 24 March 2008 (UTC)[reply]

I have recombined the footnotes by undoing the edit that split them up. I admire the effort of the anonymous editor who undertook this difficult and time-consuming task, but unfortunately I found mistakes. For example, a reference to Hearn, p. 44 was accidentally combined with a reference to Hearn, p. 47. Also, as I said above, I don't think the effort really improved the article for the reader. Sorry to undo so much work. ---- CharlesGillingham (talk) 06:17, 27 March 2008 (UTC)[reply]
This link no longer works (aol killed the site): Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences 3 (3): 417–457, http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html, retrieved on October 8 2008 . Myrvin (talk) 09:23, 9 April 2009 (UTC)[reply]

Empirical Chinese Rooms

I'm posting here in the discussion section because I have a conflict of interest, but I think this is the best place to bring my question....

Recently I self-published an article on Philica called The Real Chinese Room, in which I replicated Searle's experiment using a version of the ELIZA code. As I discovered later, Harre and Wang conducted a similar experiment in 1999, and published it in the Journal of Experimental & Theoretical Artificial Intelligence (11 #2, April). I haven't been able to find a copy, but from their very terse abstract it would appear that their experiment confirmed Searle's assumption. Mine did not.

It seems like at least Harre and Wang's work would be a useful contribution to the article if anyone could locate it. My own article has not been peer-reviewed to date, so it is not a reliable source (yet). But I would hope that the article could be expanded to look at the empirical work done on this problem. Ethan Mitchell (talk) 21:31, 8 May 2008 (UTC)[reply]

Removed text (could be re-added)

I removed this new paragraph because it was unencyclopedic and it had no sources.

But if the man has memorized all the rules, allowing him to produce Chinese using only his mind, has he then not learned Chinese? Is that not how learning a language goes, by memorizing larger and larger portions of grammar and thereby producing valid sentences?

However, this is a legitimate argument that has been made by some scholars. It could be used if it were written like this:

Gustav Boullet has argued that, to the contrary, the man who has memorized the rules does understand Chinese, although in an unusual way.[7]

That is, with an attribution to a scholar and footnote containing a reference to that scholar's work. ---- CharlesGillingham (talk) 15:50, 15 October 2008 (UTC)[reply]

Chinese Box is not another name for Chinese Room argument

This argument is not, to my knowledge, ever known as the "Chinese Box" argument, so I have removed this from the intro.

A "Chinese Box" is a box that contains other boxes. It's a metaphor for a complicated problem. Larry Hauser wrote a paper called "Searle's Chinese Box: Debunking the Chinese Room Argument," which is a kind of play on words, connected the "chinese box" metaphor with the the "chinese room" argument.

If you Google '"chinese box" Searle', I don't believe you will get anything substantial except Hauser's paper, citations of Hauser's paper, etc. Correct me if I'm wrong. ---- CharlesGillingham (talk) 05:15, 10 December 2008 (UTC)[reply]

Original research

Dears, I'd really like to delete all paragraphs titled "What they do and don't prove." Such personal opinions on the topics are not acceptable in an encyclopaedia as they represent original research, don't they? This discussion page is the right place for such discussion paragraphs. Or am I allowed to write down my personal opinion on the strength of some argument anywhere in any main article?

91.6.122.241 (talk) 10:01, 6 January 2009 (UTC) Marco P[reply]

This article presents a series of arguments and counter arguments and counter-counter arguments. Each of the arguments presented is based on reliable sources. (including those in the "what they do and don't prove" sections). None of them are based on personal speculation or original research. The ample footnotes should indicate exactly where this material is coming from. ---- CharlesGillingham (talk) 12:05, 9 January 2009 (UTC)[reply]
Oh, by the way, I dislike the title "What they do and don't prove", because it's too chatty for an encyclopedia. I just couldn't come up with a better title for these parts. (The article needs these parts because Searle makes similar counter-counter arguments to all the replies in each section. "What they do and don't prove" is a way to put these all in one place. Unless we want to repeat ourselves relentlessly in a twenty page article, we need to place Searle's reply-to-the-replies in one place.) ---- CharlesGillingham (talk) 17:52, 2 April 2009 (UTC)[reply]

Comment about a ref

I removed from the main article the following comment by an unregistered user "Sorry, I don't know where else to put this edit: note #58 references a Dennett work from 1997, but no such work appears in the References section. Maybe it is supposed to be his 1991 work." Kaarel (talk) 20:04, 21 January 2009 (UTC)[reply]

This is fixed. The anon is right: the quote comes from Dennett's Consciousness Explained, (1991) ---- CharlesGillingham (talk) 09:38, 22 January 2009 (UTC)[reply]

Why is this even here?

This argument is silly. If the machine were instead a human Chinese speaker, you could make the same claim. That is, the ignorant man could simply hand the symbols to the Chinese speaker and hand back the results. He could also memorize the symbols in this manner.

The attempted formal argument just begs the question. It assumes Strong AI is false by assuming that computers are only symbol-manipulating syntactic machines. In real life, computers can take images, sounds, etc. as input just as humans can.

His abstract claims regarding the computer not being a mind have to do with a bunch of metaphor-based reasoning involving things like the fact that the machine doesn't look like a human... —Preceding unsigned comment added by 96.32.188.25 (talk) 11:32, 16 April 2009 (UTC)[reply]

This whole article is just one big fallacy. —Preceding unsigned comment added by 96.32.188.25 (talk) 11:29, 16 April 2009 (UTC)[reply]

Most scholars agree that the argument is "obviously wrong", but in order to show this, you need to be able to tell Searle exactly what a "mind" is and how we would know that a machine has one. This is harder than it looks at first glance, and that is why the thought experiment is so notable.
By the way, the images and sounds that a computer uses are, in fact, composed of symbols: the 0s and 1s of computer memory. Searle would argue that the computer only sees these bits, and can never see the picture. ---- CharlesGillingham (talk) 19:02, 18 April 2009 (UTC)[reply]
  1. ^ source 1
  2. ^ source 2
  3. ^ source 3
  4. ^ source 4
  5. ^ source 5
  6. ^ source 1, source 2, source 3, source 4
  7. ^ Boullet 1984