Turing test

From Wikipedia, the free encyclopedia

The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. Described by Alan Turing in the 1950 paper "Computing Machinery and Intelligence", it proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen.[1]

The "standard interpretation" of the Turing Test, in which player C, the interrogator, is tasked with trying to determine which player - A or B - is a computer and which is a human. The interrogator is limited to using the responses to written questions in order to make the determination.

History

Philosophical background

Although the field of artificial intelligence research was founded in 1956,[2] its philosophical roots extend back considerably further. The question of whether or not it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. From the perspective of dualism, the mind is non-physical (or, at the very least, has non-physical properties[3]) and, therefore, cannot be explained in purely physical terms. The materialist perspective, on the other hand, argues that the mind can be explained physically, and thus leaves open the possibility of minds that are artificially produced.[4]

In 1936, philosopher Alfred Ayer considered the standard philosophical question of other minds: how do we know that other people have the same conscious experiences that we do? In his book Language, Truth and Logic Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: "The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined".[5] This suggestion is very similar to the Turing test, but it is not certain that Ayer's popular philosophical classic was familiar to Turing.

Alan Turing

Researchers in Britain had been exploring "machine intelligence" for up to ten years prior to 1956. It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing, after whom the test is named.[6]

Turing in particular had been tackling the notion of machine intelligence since at least 1941,[7] and one of the earliest-known mentions of "computer intelligence" was made by him in 1947.[8] In Turing's report, "Intelligent Machinery", he investigated "the question of whether or not it is possible for machinery to show intelligent behaviour"[9] and, as part of that investigation, proposed what may be considered the forerunner to his later tests:

It is not difficult to devise a paper machine which will play a not very bad game of chess.[10] Now get three men as subjects for the experiment. A, B and C. A and C are to be rather poor chess players, B is the operator who works the paper machine. [...] Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.

Thus, by the time Turing published "Computing Machinery and Intelligence", he had been considering the possibility of artificial intelligence for many years. This, however, was the first published paper[11] by Turing to focus exclusively on the notion.

Turing begins his 1950 paper with the claim "I propose to consider the question 'Can machines think?'"[12] As he highlights, the traditional approach to such a question is to start with definitions, defining both the terms "machine" and "intelligence". Turing, however, chooses not to do so; instead, he replaces the question with a new one, "which is closely related to it and is expressed in relatively unambiguous words".[12] In essence, he proposes to change the question from "Do machines think?" to "Can machines do what we (as thinking entities) can do?"[13] The advantage of the new question, Turing argues, is that it draws "a fairly sharp line between the physical and intellectual capacities of a man".[14]

To demonstrate this approach, Turing proposes a test inspired by a party game known as the "Imitation Game", in which a man and a woman go into separate rooms, and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game, both the man and the woman aim to convince the guests that they are the other. Turing proposes recreating the game as follows:

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"[15]

Later in the paper, Turing suggests an "equivalent" alternative formulation involving a judge conversing only with a computer and a man.[16] While neither of these formulations precisely matches the version of the Turing Test that is more generally known today, he proposed a third in 1952. In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer, and the role of the computer is to make a significant proportion of the jury believe that it is really a man.[17]

Turing's paper considered nine putative objections, which include all the major arguments against artificial intelligence that have been raised in the years since his paper was first published. (See Computing Machinery and Intelligence.)[18]

ELIZA and PARRY

Blay Whitby lists four major turning points in the history of the Turing Test — the publication of "Computing Machinery and Intelligence" in 1950, the announcement of Joseph Weizenbaum's ELIZA in 1966, Kenneth Colby's creation of PARRY, which was first described in 1972, and the Turing Colloquium in 1990.[19]

ELIZA works by examining a user's typed comments for keywords. If a keyword is found, a rule is applied which transforms the user's comments, and the resulting sentence is returned. If a keyword is not found, ELIZA responds either with a generic riposte or by repeating one of the earlier comments.[20] In addition, Weizenbaum developed ELIZA to replicate the behaviour of a Rogerian psychotherapist, allowing ELIZA to be "free to assume the pose of knowing almost nothing of the real world."[21] With these techniques, Weizenbaum's program was able to fool some people into believing that they were talking to a real person, with some subjects being "very hard to convince that ELIZA [...] is not human."[21] Thus, ELIZA is claimed by some to be one of the programs (perhaps the first) able to pass the Turing Test,[22][21] although this view is highly contentious, since the human "interrogators" had been primed to expect interaction with a human psychotherapist, and were initially unaware of the possibility that they were interacting with a computer.[citation needed]
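The mechanism is simple enough to sketch in a few lines of code. The following is a minimal Python illustration of the keyword-and-transformation approach described above; the rules and fallback phrases are invented for illustration and are far simpler than Weizenbaum's original script (which, among other things, also reflected pronouns such as "my" into "your").

    import random
    import re

    # Illustrative keyword rules: each pattern, when found in the user's
    # comment, is paired with reply templates that transform the match.
    RULES = [
        (re.compile(r"\bI need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bI am (.*)", re.I),
         ["How long have you been {0}?", "Why do you tell me you are {0}?"]),
    ]
    GENERIC = ["Please go on.", "I see.", "What does that suggest to you?"]

    def respond(comment, earlier_comments):
        # Scan the comment for a keyword; if one matches, apply its
        # transformation rule to produce the reply.
        for pattern, templates in RULES:
            match = pattern.search(comment)
            if match:
                return random.choice(templates).format(match.group(1).rstrip(".!?"))
        # No keyword found: reply with a generic riposte or repeat one of
        # the earlier comments, exactly as described above.
        if earlier_comments and random.random() < 0.5:
            return "Earlier you said: " + random.choice(earlier_comments)
        return random.choice(GENERIC)

    print(respond("I am unhappy.", []))  # e.g. "How long have you been unhappy?"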

Colby's PARRY has been described as "ELIZA with attitude":[23] it attempts to model the behaviour of a paranoid schizophrenic, using a similar (if more advanced) approach to that employed by Weizenbaum. In order to validate the work, PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teletype machines. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the "patients" were human and which were computer programs.[24] The psychiatrists were only able to make the correct identification 48 per cent of the time — a figure consistent with random guessing.[25] Note that these experiments were not Turing tests in the strict sense, since a Turing test requires that the interrogator be able to pose questions interactively, rather than judge from an offline transcript, in order to decide whether the subject is a human or a computer program.[citation needed]

The Chinese room

John Searle's 1980 paper Minds, Brains, and Programs proposed an argument against the Turing Test known as the "Chinese room" thought experiment. Searle argued that programs (such as ELIZA) could pass the Turing Test simply by manipulating symbols of which they had no understanding. Without understanding, they could not be described as "thinking" in the same sense people do. Therefore—Searle concludes—the Turing Test cannot prove that a machine can think, contrary to Turing's original proposal.[26]

Arguments such as that proposed by Searle and others working on the philosophy of mind sparked off a more intense debate about the nature of intelligence, the possibility of intelligent machines and the value of the Turing test that continued through the 1980s and 1990s.[27]

Turing Colloquium

1990 was the fortieth anniversary of the first publication of Turing's "Computing Machinery and Intelligence" paper, and thus saw renewed interest in the test. Two significant events occurred in that year: the first was the Turing Colloquium, which was held at the University of Sussex in April, and brought together academics and researchers from a wide variety of disciplines to discuss the Turing Test in terms of its past, present and future; the second was the formation of the annual Loebner Prize competition.

Loebner Prize

The Loebner Prize provides an annual platform for practical Turing Tests, with the first competition held in November 1991.[28] It is underwritten by Hugh Loebner; the Cambridge Center for Behavioral Studies in Massachusetts, United States, organised the Prizes up to and including the 2003 contest. As Loebner described it, the competition was created to advance the state of AI research, at least in part because "no one had taken steps to implement it."[29]

The silver (audio) and gold (audio and visual) prizes have never been won. However, the competition has awarded the bronze medal every year to the computer system that, in the judges' opinions, demonstrates the "most human" conversational behaviour among that year's entries. Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) has won the bronze award on three occasions (2000, 2001 and 2004). The learning AI Jabberwacky won in 2005 and 2006. Its creators have proposed a personalized variation: the ability to pass the imitation test while attempting specifically to imitate the human player, with whom the machine will have conversed at length before the test.[30]

The Loebner Prize tests conversational intelligence; winners are typically chatterbot programs, or Artificial Conversational Entities (ACEs). Early Loebner Prizes used restricted conversation: each entry and hidden human conversed on a single topic, so the interrogators were restricted to one line of questioning per entity interaction. The restricted-conversation rule was lifted for the 1995 Loebner Prize. The duration of interaction between judge and entity has varied over the years. In Loebner 2003, at the University of Surrey, each interrogator was allowed five minutes to interact with an entity, machine or hidden human. Between 2004 and 2007 the interaction time allowed in Loebner Prizes was more than twenty minutes. In 2008 the interrogation duration allowed was five minutes per pair, because the organiser, Kevin Warwick, and coordinator, Huma Shah, felt the artificial conversational entities were not yet technically advanced enough to converse for longer. Ironically, the 2008 winning entry, Elbot, does not mimic a human: its personality is that of a robot, yet during the human-parallel comparisons it deceived three human judges into believing it was the human.

The Loebner Prize led to renewed discussion of the viability of the Turing Test and the value of pursuing it. The Economist, in an article entitled "Artificial Stupidity", commented that the first Loebner winner won, at least in part, because it was able to "imitate human typing errors".[31] (Turing had suggested that programs add errors into their output, so as to be better "players" of the game.)[32] Others have argued that trying to pass the Turing Test is merely a distraction from more fruitful research.[33] A second issue was revealed in the early Prizes: the use of "unsophisticated" interrogators made it possible for entries to pass through cleverly crafted manipulation, rather than anything one could plausibly consider intelligence.[34] Since 2004, however, the Loebner Prizes have deployed philosophers, computer scientists and journalists among the interrogators.

2005 Colloquium on Conversational Systems

In November 2005, the University of Surrey hosted an inaugural one-day meeting of artificial conversational entity developers, attended by winners of practical Turing Tests in the Loebner Prize: Robby Garner, Richard Wallace and Rollo Carpenter. Invited speakers included David Hamill, Hugh Loebner (sponsor of the Loebner Prize) and Huma Shah.

AISB 2008 Symposium on the Turing Test

In parallel to the 2008 Loebner Prize, held at the University of Reading, the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB) hosted a one-day symposium to discuss the Turing Test, organised by John Barnden, Mark Bishop, Huma Shah and Kevin Warwick. Speakers included the Royal Institution's Director Baroness Susan Greenfield, Selmer Bringsjord, Turing's biographer Andrew Hodges and consciousness scientist Owen Holland. No agreement emerged on a canonical Turing Test; however, Bringsjord argued that a sizeable prize would result in the Turing Test being passed sooner.

Turing100 in 2012

A committee has been set up to organise events celebrating the 100th anniversary of Turing's birth in 2012, with the goal of taking Turing's idea of a thinking machine, as depicted in Hollywood films such as Blade Runner, to a wider audience, including children. Provisional members include Kevin Warwick (chair), Huma Shah (coordinator), Ian Bland, Chris Chapman, Marc Allen, Rory Dunlop, and Loebner winners Robby Garner and Fred Roberts. The committee is supported by Women in Technology and Daden Ltd.

Versions of the Turing test

The Imitation Game, as described by Alan Turing in "Computing Machinery and Intelligence". Player C, through a series of written questions, attempts to determine which of the other two players is a man, and which of the two is the woman. Player A, the man, tries to trick player C into making the wrong decision, while player B tries to help player C.

There are at least three primary versions of the Turing test, two of which are offered in "Computing Machinery and Intelligence" and one which Saul Traiger describes as the "Standard Interpretation".[35] While there is some debate as to whether or not the "Standard Interpretation" is that described by Turing or, instead, based on a misreading of his paper, these three versions are not regarded as equivalent,[35] and their strengths and weaknesses are distinct.

The Imitation Game

Turing, as we have seen, described a simple party game involving three players. Player A is a man, player B a woman and player C (who plays the role of the interrogator) of either gender. In the Imitation Game, player C is unable to see either player A or player B, and can only communicate with them through written notes. By asking questions of player A and player B, player C tries to determine which of the two is the man and which is the woman. Player A's role is to trick the interrogator into making the wrong decision, while player B attempts to assist the interrogator in making the right one.[36]

In what S. G. Sterrett refers to as the "Original Imitation Game Test",[37] Turing proposes that the role of player A be filled by a computer. The computer's task is thus to pretend to be a woman and attempt to trick the interrogator into making an incorrect evaluation. The success of the computer is determined by comparing the outcome of the game when player A is a computer against the outcome when player A is a man. If, as Turing puts it, "the interrogator decide[s] wrongly as often when the game is played [with the computer] as he does when the game is played between a man and a woman",[14] it may be argued that the computer is intelligent.

The Original Imitation Game Test, in which the player A is replaced with a computer. The computer is now charged with the role of the woman, while player B continues to attempt to assist the interrogator.

The second version appears later in Turing's 1950 paper. As with the Original Imitation Game Test, the role of player A is performed by a computer, the difference being that the role of player B is now to be performed by a man rather than a woman.

"Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?"

— Turing 1950, p. 442

In this version, both player A (the computer) and player B are trying to trick the interrogator into making an incorrect decision.[38]

The standard interpretation

Common understanding has it that the purpose of the Turing Test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a woman, but rather whether or not a computer could imitate a human.[38] While there is some dispute as to whether or not this interpretation was intended by Turing — Sterrett believes that it was[37] and thus conflates the second version with this one, while others, such as Traiger, do not[35] — this has nevertheless led to what can be viewed as the "standard interpretation". In this version, player A is a computer and player B a person of either gender. The role of the interrogator is not to determine which is male and which is female, but which is a computer and which is a human.[39]

Imitation Game vs. Standard Turing Test

There has arisen some controversy over which of the alternative formulations of the test Turing intended.[37] Sterrett argues that two distinct tests can be extracted from his 1950 paper and that, pace Turing's remark, they are not equivalent. The test that employs the party game and compares frequencies of success is referred to as the "Original Imitation Game Test", whereas the test consisting of a human judge conversing with a human and a machine is referred to as the "Standard Turing Test" (Sterrett equates the latter with the "standard interpretation" rather than with the second version of the imitation game). Sterrett agrees that the Standard Turing Test (STT) has the problems its critics cite, but argues that, in contrast, the Original Imitation Game Test (OIG Test) so defined is immune to many of them, owing to a crucial difference: unlike the STT, it does not make similarity to human performance the criterion, even though it employs human performance in setting a criterion for machine intelligence. A man can fail the OIG Test, but it is argued that this is a virtue of a test of intelligence: failure indicates a lack of resourcefulness, since the OIG Test requires the resourcefulness associated with intelligence and not merely "simulation of human conversational behaviour". The general structure of the OIG Test could even be used with non-verbal versions of imitation games.[40]

Still other writers[41] have interpreted Turing as proposing the imitation game itself as the test. These readings, however, do not explain how to square that view with Turing's statement that the test he proposed, using the party version of the imitation game, is based upon a criterion of comparative frequency of success in that game, rather than a capacity to succeed at a single round of it.

Should the interrogator know about the computer?

Turing never makes clear whether or not the interrogator in his tests is aware that one of the participants is a computer. To return to the Original Imitation Game, he states only that player A is to be replaced with a machine, not that player C is to be made aware of this replacement.[14] When Colby, F. D. Hilf, S. Weber and A. D. Kramer tested PARRY, they assumed that the interrogators did not need to know that one or more of those being interviewed was a computer during the interrogation.[42] As Ayse Saygin and others have highlighted, however, this makes a big difference to the implementation and outcome of the test.[43]

Strengths of the test

Breadth of subject matter

The power of the Turing test derives from the fact that it is possible to talk about anything. Turing wrote that "the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include."[44] John Haugeland adds that "understanding the words is not enough; you have to understand the topic as well."[45]

In order to pass a well-designed Turing test, the machine must use natural language, reason, have knowledge and learn. The test can be extended to include video input, as well as a "hatch" through which objects can be passed: this would force the machine to demonstrate the skill of vision and robotics as well. Together, these represent almost all of the major problems of artificial intelligence.[46]

Weaknesses of the test

For all its strengths and its fame, the test has been criticised on several grounds.

Human intelligence vs intelligence in general

The Turing Test is explicitly anthropomorphic, testing only whether or not the computer resembles a human being, not whether it is generally "intelligent" or "sentient". It fails to test for general intelligence in two ways:

  • Some human behaviour is unintelligent, but the Turing test requires that the machine be able to execute all human behaviours, regardless of whether or not they are intelligent. It even tests for behaviours that we may not consider intelligent at all, such as the susceptibility to insults, the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate human behaviour in detail, bad typing and all, it fails the test, regardless of how intelligent it may be.
  • Some intelligent behaviour is inhuman. The Turing test does not test for highly intelligent behaviours, such as the ability to solve difficult problems or come up with original insights. In fact, it practically requires deception on the part of the machine: if it quickly solves a computational problem that is impossible for a human to solve, it by definition fails the test.

Impracticality

Stuart J. Russell and Peter Norvig argue that the anthropomorphism of the test prevents it from being truly useful for the task of engineering intelligent machines. "Aeronautical engineering texts," they write by way of analogy, "do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"[47] Because of this impracticality, trying to pass the Turing test in its full generality is not, as of 2009, an active focus of much mainstream academic or commercial effort. Current research in AI-related fields is aimed at more modest and specific goals.

Russell and Norvig note that "AI researchers have devoted little attention to passing the Turing Test",[48] since there are easier ways to test their programs, as, for example, by giving them a task directly rather than through the roundabout method of first posing a question in a chat room populated by machines and people. Turing never intended his test to be used as a real, day-to-day measure of intelligence in AI programs; he wanted to provide a clear and understandable example in aid of the discussion of the philosophy of artificial intelligence.[49]

Real intelligence vs simulated intelligence

The test is also explicitly behaviourist or functionalist: it only tests how the subject acts. A machine passing the test may be able to simulate human conversational behaviour merely by following "mindless" mechanical rules. Two famous examples of this line of argument against the Turing test are John Searle's Chinese Room argument (Searle 1980) and Ned Block's Blockhead argument (Block 1981). The key issue, for Searle, is whether the machine is merely "simulating" thinking or is "actually" thinking. Even if the Turing test is a good operational definition of intelligence, Searle argues, it may not indicate that the machine has a mind, consciousness, the ability to "understand", or thoughts that "mean" anything (what philosophers call intentionality).

Turing responded to this line of criticism in his original paper, writing that:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

Predictions

Turing predicted that machines would eventually be able to pass the test; in fact, he estimated that by the year 2000, machines with 10^9 bits (about 119.2 MiB, or approximately 120 megabytes) of memory would be able to fool thirty per cent of human judges in a five-minute test. He also predicted that people would then no longer consider the phrase "thinking machine" contradictory. He further predicted that machine learning would be an important part of building powerful machines, a claim considered plausible by contemporary researchers in artificial intelligence.
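The conversion behind the quoted memory figure is simple arithmetic:

$$10^9\ \text{bits} = \frac{10^9}{8}\ \text{bytes} = 1.25 \times 10^{8}\ \text{bytes} \approx \frac{1.25 \times 10^{8}}{2^{20}}\ \text{MiB} \approx 119.2\ \text{MiB}.$$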

By extrapolating an exponential growth of technology over several decades, futurist Raymond Kurzweil predicted that Turing test-capable computers would be manufactured around the year 2020.[50] See the "Moore's Law" article and the references therein for discussions of the plausibility of this argument.

The Long Bet Project includes a $10,000 wager between Mitch Kapor (pessimist) and Kurzweil (optimist) over whether a computer will pass a Turing Test by the year 2029. The bet specifies the conditions in some detail.[51]

Variations of the Turing test

Numerous other versions of the Turing test, including those expounded above, have been mooted through the years.

Reverse Turing test and CAPTCHA

A modification of the Turing test wherein the objective, or one or more of the roles, is reversed between machines and humans is termed a reverse Turing test. An example is implied in the work of psychoanalyst Wilfred Bion,[52] who was particularly fascinated by the "storm" that resulted from the encounter of one mind by another. Carrying this idea forward, R. D. Hinshelwood[53] described the mind as a "mind recognizing apparatus", noting that this might be some sort of "supplement" to the Turing test. The challenge would be for the computer to determine if it were interacting with a human or another computer. This is an extension of the original question that Turing attempted to answer, but would, perhaps, offer a high enough standard to define a machine that could "think" in a way that we typically define as characteristically human.

CAPTCHA is a form of reverse Turing test. Before being allowed to perform some action on a website, the user is presented with alphanumeric characters in a distorted graphic image and asked to type them out. This is intended to prevent automated systems from abusing the site. The rationale is that software sufficiently sophisticated to read and reproduce the distorted image accurately does not exist (or is not available to the average user), so any system able to do so is likely to be a human. The implication would appear to be (although it need not be) that artificial intelligence has not yet been achieved.
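The underlying protocol is easy to sketch. The following minimal Python illustration stubs out the graphical distortion step (render_distorted_image is a hypothetical placeholder, not a real library call); a real deployment would render the answer into a distorted image before showing it to the user.

    import random
    import string

    def make_challenge(length=6):
        # Generate the alphanumeric answer that would be shown to the
        # user inside a distorted graphic image.
        answer = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
        # render_distorted_image(answer)  # hypothetical rendering step
        return answer

    def verify(answer, response):
        # The reverse-Turing judgment: a correct transcription of the
        # distorted text is taken as evidence the respondent is human.
        return response.strip().upper() == answer

    answer = make_challenge()
    print(verify(answer, answer.lower()))  # True: the "user" read it correctly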

Subject matter expert Turing test

Another variation is described as the subject matter expert Turing test, where a machine's response cannot be distinguished from an expert in a given field. As brain and body scanning techniques improve, it may also be possible to replicate the essential data elements of a person to a computer system.

Immortality test

The Immortality-test variation of the Turing test would determine if a person's essential character is reproduced with enough fidelity to make it impossible to distinguish a reproduction of a person from the original person.

Minimum Intelligent Signal Test

The Minimum Intelligent Signal Test, proposed by Chris McKinstry, is another variation of Turing's test, where only binary responses are permitted. It is typically used to gather statistical data against which the performance of artificial intelligence programs may be measured.
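The scoring idea can be sketched as follows, under the assumption that a test consists of yes/no propositions with agreed human answers; the probes below are invented examples, not McKinstry's actual corpus.

    # Invented example probes: each pairs a binary question with the
    # answer a typical human would give.
    PROBES = [
        ("Is the sun bigger than a house?", True),
        ("Can a cat fly?", False),
        ("Is water wet?", True),
        ("Is two plus two equal to five?", False),
    ]

    def mist_score(answer_fn):
        # Fraction of binary probes answered the way a human would.
        hits = sum(1 for question, human_answer in PROBES
                   if answer_fn(question) == human_answer)
        return hits / len(PROBES)

    # A baseline that always answers "yes" scores 0.5 on these probes.
    print(mist_score(lambda question: True))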

Meta Turing test

Yet another variation is the Meta Turing test, in which the subject being tested (say, a computer) is classified as intelligent if it has created something that it, in turn, wants to test for intelligence.

Hutter Prize

The organizers of the Hutter Prize believe that compressing natural language text is a hard AI problem, equivalent to passing the Turing test.

The data compression test has some advantages over most versions and variations of a Turing test, including:

  • It gives a single number that can be directly used to compare which of two machines is "more intelligent".
  • It doesn't require the computer to lie to the judge -- teaching computers to lie is widely regarded as a bad idea.[54]

The main disadvantages of using data compression as a test are:

  • It is not possible to test humans this way.
  • It is unknown what particular "score" on this test -- if any -- is equivalent to passing a human-level Turing test.
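As a concrete illustration of how compression yields such a single number, the sketch below scores a text with Python's zlib; the Hutter Prize itself uses far stronger compressors and a fixed Wikipedia corpus, so this is only a toy stand-in.

    import zlib

    def compression_ratio(text):
        # Smaller output means more of the text's structure was captured;
        # the ratio is a single number by which two compressors (or, on
        # the Hutter Prize's view, two models of the text) can be compared.
        raw = text.encode("utf-8")
        return len(zlib.compress(raw, 9)) / len(raw)

    sample = "the quick brown fox jumps over the lazy dog " * 50
    print(f"compression ratio: {compression_ratio(sample):.3f}")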

Other intelligence tests

There are a variety of intelligence tests used to test humans. It may be possible to use such tests to evaluate artificial intelligences as well. Some tests (such as the C-test) derived from Kolmogorov complexity have been used to evaluate both humans and computers.

See also

Notes

  1. ^ Turing originally suggested a teletype machine, one of the few text-only communication systems available in 1950.
  2. ^ Crevier 1993, pp. 47–49, Russell & Norvig 2003, p. 17 and Copeland 2003, p. 1
  3. ^ For an example of property dualism, see Qualia.
  4. ^ Noting that materialism does not necessitate the possibility of artificial minds (for example, Roger Penrose), any more than dualism necessarily precludes the possibility. (See, for example, Property dualism.)
  5. ^ Language, Truth and Logic (p. 140), Penguin 2001.
  6. ^ McCorduck 2004, p. 95
  7. ^ Copeland 2003, p. 1
  8. ^ Copeland 2003, p. 2
  9. ^ Turing 1948, p. 412
  10. ^ In 1948, working with his former undergraduate colleague, DG Champernowne, Turing began writing a chess program for a computer that did not yet exist and, in 1952, lacking a computer powerful enough to execute the program, played a game in which he simulated it, taking about half an hour over each move. The game was recorded, and the program lost to Turing's colleague Alick Glennie, although it is said that it won a game against Champernowne's wife.
  11. ^ "Intelligent Machinery" was not published by Turing, and did not see publication until 1968 in Evans, C. R. & Robertson, A. D. J. (1968) Cybernetics: Key Papers, University Park Press.
  12. ^ a b Turing 1950, p. 433
  13. ^ Harnad, p. 1
  14. ^ a b c Turing 1950, p. 434
  15. ^ Turing 1950, p. 434
  16. ^ Turing 1950, p. 446
  17. ^ Turing 1952, pp. 524–525. Turing does not seem to distinguish between "man" as a gender and "man" as a human. In the former case, this formulation would be closer to the Imitation Game, while in the latter it would be closer to current depictions of the test.
  18. ^ Turing 1950 and see Russell & Norvig 2003, p. 948, where they comment, "Turing examined a wide variety of possible objections to the possibility of intelligent machines, including virtually all of those that have been raised in the half century since his paper appeared."
  19. ^ Whitby 1996, p. 53
  20. ^ Weizenbaum 1966, p. 37
  21. ^ a b c Weizenbaum 1966, p. 42
  22. ^ Thomas 1995, p. 112
  23. ^ Bowden 2006, p. 370
  24. ^ Colby et al. 1972, p. 42
  25. ^ Saygin, Cicekli & Akman 2000, p. 501
  26. ^ Searle 1980
  27. ^ Saygin, Cicekli & Akman 2000, p. 479
  28. ^ Sundman 2003
  29. ^ Loebner 1994
  30. ^ See [1].
  31. ^ "Artificial Stupidity" 1992
  32. ^ (Turing 1950, p. 448)
  33. ^ Shieber 1994, p. 77
  34. ^ Shapiro 1992, p. 10-11 and Shieber 1994, amongst others.
  35. ^ a b c Traiger 2000
  36. ^ Turing 1950, p. 433-434
  37. ^ a b c Moor 2003
  38. ^ a b Saygin et al 2000, p. 252
  39. ^ Traiger 2000, p. 99
  40. ^ Sterrett 2000
  41. ^ Genova 1994, Hayes & Ford 1995, Heil 1998, Dreyfus 1979
  42. ^ Colby et al 1972
  43. ^ Saygin et al 2000, p. 60
  44. ^ Turing 1950 under "Critique of the New Problem"
  45. ^ Haugeland 1985, p. 8
  46. ^ "These six disciplines," write Stuart J. Russell and Peter Norvig, "represent most of AI". Russell & Norvig 2003, p. 3
  47. ^ Russell & Norvig 2003, p. 3
  48. ^ Russell & Norvig 2003, p. 3
  49. ^ Turing 1950, under the heading "The Imitation Game", where he writes, "Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."
  50. ^ Kurzweil 1990
  51. ^ Long Bets - By 2029 no computer - or "machine intelligence" - will have passed the Turing Test
  52. ^ Bion 1979
  53. ^ Hinshelwood 2001
  54. ^ "What if a Computer Lies?" Lakshminarayanan Subramanian

References

  • "Artificial Stupidity", The Economist, 324 (7770): 14, 1992-09-01{{citation}}: CS1 maint: date and year (link)
  • Bion, W.S. (1979), "Making the best of a bad job", Clinical Seminars and Four Papers, Abingdon: Fleetwood Press.
  • Bowden, Margaret A. (2006), Mind As Machine: A History of Cognitive Science, Oxford University Press, ISBN 9780199241446
  • Colby, K. M.; Hilf, F. D.; Weber, S.; Kraemer (1972), "Turing-like indistinguishability tests for the validation of a computer simulation of paranoid processes", Artificial Intelligence, 3: 199–221
  • Copeland, Jack (2003), Moor, James (ed.), "The Turing Test", The Turing Test: The Elusive Standard of Artificial Intelligence, Springer, ISBN 1-40-201205-5
  • Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3
  • Dreyfus, Hubert (1979), What Computers Still Can't Do, New York: MIT Press, ISBN 0-06-090613-8
  • Genova, J. (1994), "Turing's Sexual Guessing Game", Social Epistemology, 8 (4): 314–326, ISSN 0269-1728
  • Harnad, Stevan (2004), "The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence", in Epstein, Robert; Peters, Grace (eds.), The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer, Klewer
  • Haugeland, John (1985), Artificial Intelligence: The Very Idea, MIT Press.
  • Hayes, Patrick; Ford, Kenneth (1995), "Turing Test Considered Harmful", Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI95-1), Montreal, Quebec, Canada.: 972–997
  • Heil, John (1998), Philosophy of Mind: A Contemporary Introduction, London and New York: Routledge, ISBN 0-415-13060-3
  • Hinshelwood, R.D. (2001), Group Mentality and Having a Mind: Reflections on Bion's work on groups and on psychosis
  • Kurzweil, Ray (1990), The Age of Intelligent Machines, ISBN 0-262-61079-5
  • Loebner, Hugh Gene (1994), "In response", Communications of the ACM, 37 (6): 79–82, retrieved 2008-03-22
  • Moor, James, ed. (2003), The Turing Test: The Elusive Standard of Artificial Intelligence, ISBN 1-4020-1205-5
  • Penrose, Roger (1989), The Emperor's New Mind: Concerning Computers, Minds, and The Laws of Physics, Oxford University Press, ISBN 0-14-014534-6
  • Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-790395-2
  • Saygin, Ayse Pinar; Cicekli, Ilyas; Akman, Varol (2000), "Turing Test: 50 Years Later", in Moor, James, ed. (2003), The Turing Test: The Elusive Standard of Artificial Intelligence, Springer, ISBN 1-40-201205-5
  • Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences, 3 (3): 417–457. Page numbers above refer to a standard pdf print of the article. See also Searle's original draft.
  • Shapiro, Stuart C. (1992), "The Turing Test and the economist", ACM SIGART Bulletin, 3 (4): 10–11
  • Shieber, Stuart M. (1994), "Lessons from a Restricted Turing Test", Communications of the ACM, 37 (6): 70–78, retrieved 2008-03-25
  • Sterrett, S. G. (2000), "Turing's Two Test of Intelligence", Minds and Machines, 10 (4), ISSN 0924-6495 (reprinted in The Turing Test: The Elusive Standard of Artificial Intelligence edited by James H. Moor, Kluwer Academic 2003) ISBN 1-4020-1205-5
  • Sundman, John (February 26, 2003), "Artificial stupidity", Salon.com, retrieved 2008-03-22
  • Thomas, Peter J. (1995), The Social and Interactional Dimensions of Human-Computer Interfaces, Cambridge University Press, ISBN 052145302X
  • Traiger, Saul (2000), "Making the Right Identification in the Turing Test", Minds and Machines, 10 (4), ISSN 0924-6495 (reprinted in The Turing Test: The Elusive Standard of Artificial Intelligence edited by James H. Moor, Kluwer Academic 2003) ISBN 1-4020-1205-5
  • Turing, Alan (1948), "Machine Intelligence", in Copeland, B. Jack (ed.), The Essential Turing: The ideas that gave birth to the computer age, ISBN 0-19-825080-0
  • Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423
  • Turing, Alan (1952), "Can Automatic Calculating Machines be Said to Think?", in Copeland, B. Jack (ed.), The Essential Turing: The ideas that gave birth to the computer age, ISBN 0-19-825080-0
  • Zylberberg, A.; Calot, E. (2007), "Optimizing Lies in State Oriented Domains based on Genetic Algorithms", Proceedings VI Ibero-American Symposium on Software Engineering: 11–18, ISBN 978-9972-2885-1-7
  • Weizenbaum, Joseph (January 1966), "ELIZA - A Computer Program For the Study of Natural Language Communication Between Man And Machine", Communications of the ACM, 9 (1): 36–45
  • Whitby, Blay (1996), "The Turing Test: AI's Biggest Blind Alley?", in Millican, Peter; Clark, Andy (eds.), Machines and Thought: The Legacy of Alan Turing, vol. 1, Oxford University Press, pp. 53–62, ISBN 0-19-823876-2
  • Adams, Scott (2008), Dilbert

Further reading

  • B. Jack Copeland, ed., The Essential Turing: The ideas that gave birth to the computer age (2004). ISBN 0-19-825080-0
  • Larry Gonick, The Cartoon Guide to the Computer (1983, originally The Cartoon Guide to Computer Science). ISBN 0-06-273097-5.
  • S. G. Sterrett "Nested Algorithms and the 'Original Imitation Game Test'," Minds and Machines (2002). ISSN 0924-6495
  • A.P. Saygin, I. Cicekli, and V. Akman (2000), "Turing Test: 50 Years Later", Minds and Machines 10(4): 463–518. (Reprinted in The Turing Test: The Elusive Standard of Artificial Intelligence, edited by James H. Moor, Kluwer Academic 2003.) ISBN 1-4020-1205-5. (Thorough review.)
  • Saygin, A.P. & Cicekli, I. (2002), "Pragmatics in human-computer conversations", Journal of Pragmatics, Volume 34, Issue 3, March 2002, Pages 227–258.
  • Shah, H. (2006): "Chatterbox Challenge 2005: Geography of a Modern Eliza" Proceedings of 3rd International Workshop on Natural Language Understanding and Cognitive Science – NLUCS 2006 in conjunction with ICEIS 2006 Cyprus, Paphos, May 2006 ISBN: 972-8865-50-3 pp 133-138
  • Shah, H. (2005): "A.L.I.C.E.: an ACE in Digitaland", TripleC, Vol 4, No 2
  • Shah, H. & Henry, O. (2005): "Confederate Effect in Human-Machine Textual Interaction", Proceedings of 5th WSEAS Int. Conf. on Information Science, Communications and Applications (WSEAS ISCA), May 11–14, 2005, Cancun, Mexico, ISBN: 960-8457-22-X, pp 109-114

External links