Reverse Turing test
Conventionally, the Turing test is conceived as having a human judge and a computer subject which attempts to appear human. Critical to the concept is the parallel situation of a human judge and a human subject, who also attempts to appear human. The judge's task is to distinguish which of these two situations is actually occurring. It is presumed that a human subject will always be judged human, and a computer is then said to "pass the Turing test" if it too is judged human. Reversing any of these roles (the subject's objective, the roles of the real and control subjects, or the identity of the judge) yields a "reverse Turing test".
Reversal of objective
Arguably the standard form of the reverse Turing test is one in which the subjects attempt to appear to be a computer rather than a human.
A formal reverse Turing test follows the same format as a Turing test: human subjects attempt to imitate the conversational style of a conversation program such as ELIZA. Doing this well involves deliberately ignoring, to some degree, the meaning of the conversation that would be immediately apparent to a human, and simulating the kinds of errors that conversational programs typically make. Arguably unlike the conventional Turing test, this is most interesting when the judges are very familiar with conversation programs, and so could, in a regular Turing test, very rapidly tell a computer program from a human acting normally.
The humans who perform best (some would say worst) in the reverse Turing test are those who know computers best, and so know the types of errors that computers can be expected to make in conversation. There is much shared ground between the skill of the reverse Turing test and the skill of mentally simulating a program's operation in the course of computer programming, and especially debugging. As a result, programmers (especially hackers) will sometimes indulge in an informal reverse Turing test for recreation.
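The conversational style being imitated can be illustrated with a minimal sketch of an ELIZA-style keyword responder (an assumption-laden toy, not the original ELIZA): it matches surface keywords, reflects the user's own words back, and falls through to content-free prompts. These are exactly the tics a human subject would deliberately reproduce.

```python
import random
import re

# Keyword rules: each regex maps to canned templates that reflect the
# user's own words back, without any understanding of their meaning.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.*)", re.I), ["Tell me more about feeling {0}.",
                                          "Do you often feel {0}?"]),
    (re.compile(r"\bbecause (.*)", re.I), ["Is that the real reason?"]),
]
DEFAULTS = ["Please go on.", "Can you elaborate on that?", "I see."]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    # No keyword matched: fall back to a content-free prompt, the
    # signature move a human imitator would copy.
    return random.choice(DEFAULTS)

print(respond("I am sad about my code"))
```

A human playing the reverse Turing test well would, like this sketch, ignore the evident meaning of the input and answer purely from surface patterns.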
An informal reverse Turing test involves an attempt to simulate a computer without the formal structure of the Turing test. The judges of the test are typically not aware in advance that a reverse Turing test is occurring, and the test subject attempts to elicit from the 'judges' (who, correctly, think they are speaking to a human) a response along the lines of "is this really a human?". Describing such a situation as a "reverse Turing test" typically occurs retroactively.
There are also cases of accidental reverse Turing tests, occurring when a programmer is in a sufficiently non-human mood that their conversation unintentionally resembles that of a computer. In these cases the description is invariably retroactive and humorously intended. The subject may be described as having passed or failed a reverse Turing test, or as having failed a Turing test. The latter description is arguably more accurate in these cases; see also the next section.
Failure by control subjects
Since Turing test judges are sometimes presented with genuinely human subjects, as a control, it inevitably occurs that a small proportion of such control subjects are judged to be computers. This is considered humorous and often embarrassing for the subject.
This situation may be described literally as the human "failing the Turing test", since a computer (the intended subject of the test) achieving the same result would be described in the same terms. The same situation may also be described as the human "failing the reverse Turing test", because considering the human to be a subject of the test involves reversing the roles of the real and control subjects.
Reversal of real and control subjects
When a Turing test is applied to a mixed group of human and computer subjects, the computers are the nominal subjects. However, the humans are judged in exactly the same way, and so their Turing test scores can also be calculated.
This is another way of viewing the situation described in the previous section.
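The symmetry of the scoring can be sketched directly: if each judgment is recorded as (subject kind, judged-human verdict), then the same tally that scores the computer subjects also scores the human controls. The judgment records below are hypothetical, purely for illustration.

```python
# Hypothetical judging records: (subject kind, was the subject judged human?).
# Scoring the controls with the same tally as the computers is what makes
# the "reverse" reading of the test possible.
judgments = [
    ("computer", False), ("computer", True), ("computer", False),
    ("human", True), ("human", True), ("human", False),
]

def pass_rate(kind: str) -> float:
    verdicts = [judged_human for k, judged_human in judgments if k == kind]
    return sum(verdicts) / len(verdicts)

print(f"computer subjects judged human: {pass_rate('computer'):.0%}")
print(f"human controls judged human:    {pass_rate('human'):.0%}")
```

Nothing in `pass_rate` distinguishes real subjects from controls; the distinction lives only in which score the experimenters choose to report.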
Judgement by computer
The term "reverse Turing test" has also been applied to a Turing test (test of humanity) that is administered by a computer. In other words, a computer administers a test to determine if the subject is or is not human. As of 2004, such procedures, called CAPTCHAs, are used in some anti-spam systems to prevent automated bulk use of communications systems.
The use of CAPTCHAs is controversial: circumvention methods exist that reduce their effectiveness, and many implementations (particularly those designed to counter circumvention) are inaccessible to humans with disabilities, or are simply difficult for humans to pass.
Note that "CAPTCHA" is an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart", so its original designers regarded it, to some degree, as a Turing test.
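The challenge-and-verify procedure a CAPTCHA follows can be sketched as below. This is an illustrative toy, not a production scheme: a plain arithmetic question stands in for the hard-to-automate challenge (a real CAPTCHA would render distorted text or audio), and the server keeps the expected answer on its own side.

```python
import random
import secrets

_pending = {}  # challenge token -> expected answer, held server-side

def issue_challenge() -> tuple[str, str]:
    """Pose a challenge meant to be easy for humans, hard to automate."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    token = secrets.token_hex(8)
    _pending[token] = str(a + b)
    return token, f"What is {a} plus {b}?"

def verify(token: str, answer: str) -> bool:
    """Admit the reply only if it matches; each challenge is single-use."""
    expected = _pending.pop(token, None)
    return expected is not None and answer.strip() == expected

token, question = issue_challenge()
print(question)
```

The single-use token matters: without it, one solved challenge could be replayed to authorize bulk automated use, which is exactly what anti-spam deployments of CAPTCHAs aim to prevent.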
Judgement of sufficient input
An alternative conception of a reverse Turing test uses the test to determine whether sufficient information is being transmitted between the tester and the subject. For example, if the information sent by the tester is insufficient for a human doctor to diagnose accurately, then a medical diagnostic program cannot be blamed for also failing to diagnose accurately.
This formulation is of particular use in developing Artificial Intelligence programs, because it gives an indication of the input needed for a system that attempts to emulate human activities.