Artificial consciousness

From Wikipedia, the free encyclopedia
Revision as of 22:39, 13 March 2004

An artificial consciousness (AC) system is an artifact capable of achieving verifiable aspects of consciousness.

Consciousness is sometimes defined as self-awareness. Self-awareness is a subjective characteristic which may be difficult to test. Other measures may be easier. For example: recent work in measuring the consciousness of the fly has determined that it manifests aspects of attention which, at the neurological level, equate to those of a human, and, if attention is deemed a necessary prerequisite for consciousness, then the fly is claimed to have a lot going for it.

It is asserted that one necessary ability of consciousness is the ability to predict external events wherever this is possible for an average human, i.e. to anticipate events in order to be ready to respond to them when they occur.
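The anticipation ability described above can be sketched in code. The following is a minimal, illustrative sketch only — the article prescribes no algorithm, and the class name `EventAnticipator` and the frequency-count event model are assumptions made here, not anything from the AC literature. The agent records which events have followed which, and uses that record to predict the next event so a response could be prepared before it occurs.

```python
# Minimal sketch of "anticipating events in order to be ready to respond".
# The transition-counting model and all names here are illustrative
# assumptions, not a method from the article.
from collections import Counter, defaultdict

class EventAnticipator:
    def __init__(self):
        # transition counts: last observed event -> Counter of following events
        self.transitions = defaultdict(Counter)
        self.last = None

    def observe(self, event):
        """Record an event and update the transition model."""
        if self.last is not None:
            self.transitions[self.last][event] += 1
        self.last = event

    def anticipate(self):
        """Predict the most likely next event, or None if nothing is known."""
        if self.last is None or not self.transitions[self.last]:
            return None
        return self.transitions[self.last].most_common(1)[0][0]

agent = EventAnticipator()
for e in ["dark", "rain", "dark", "rain", "dark"]:
    agent.observe(e)
print(agent.anticipate())  # "rain" has always followed "dark" so far
```

Even this toy model shows the shape of the requirement: prediction is only expected where regularities exist that an average human could also exploit.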

The system must then fit that anticipation into an engine that factors it in with the other drivers of the artificially intelligent creature. Without telepathy, thought cannot be known to occur anywhere other than in your own head, and yet you can know that an entity you are observing is conscious. Therefore an artificially conscious creature needs none of the intelligence born of thought in order to be convincing, i.e. it can appear pretty dumb but still be considered conscious. When considering whether something qualifies to be called conscious, mere knowledge of its being a machine may, from a human perspective, disqualify it from being deemed conscious. Artificial consciousness proponents have therefore loosened this constraint and allow that a simulated depiction of a conscious machine, such as the robots in Star Wars, could count as an example of artificial consciousness.

AC must be capable of achieving all verifiable aspects of consciousness of the average human, but need not have all of them at any particular moment. Therefore AC always remains AC, and is only as close to consciousness as our objective knowledge of consciousness allows.

Another area of contention is which subset of possible aspects of consciousness must be verifiably present before a device would be deemed conscious. One view is that all aspects of consciousness (whatever they are) must be present before a device passes. An obvious problem with that point of view, which could nevertheless be correct, is that some functioning human beings might then not be judged conscious by the same comprehensive tests. Another view is that AC must merely be capable of achieving these aspects, so a test may fail only because the system is not yet developed to the necessary level. In the opinion of some, AC must be capable of achieving the same abilities as the average human, because consciousness is described in reference to human abilities.
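The two test views above differ only in what they quantify over, which a short sketch can make concrete. This is purely illustrative — the aspect names and both predicate functions are hypothetical placeholders invented here, since the article deliberately leaves the aspect set open ("whatever they are").

```python
# Illustrative contrast of the two views of an AC test described above.
# ASPECTS and both predicates are hypothetical placeholders, not a real test.
ASPECTS = ["attention", "anticipation", "self_awareness"]

def passes_strict(exhibited):
    # One view: every aspect must be verifiably present right now.
    return all(a in exhibited for a in ASPECTS)

def passes_capability(achievable):
    # Other view: the system need only be CAPABLE of achieving each aspect,
    # even if none is exhibited at this particular moment.
    return all(a in achievable for a in ASPECTS)

print(passes_strict({"attention"}))         # False: aspects missing now
print(passes_capability(set(ASPECTS)))      # True: all aspects achievable
```

The strict view risks failing an undeveloped but capable system (and, as the article notes, perhaps some functioning humans); the capability view shifts the burden onto verifying potential rather than present behaviour.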

As a field of study, artificial consciousness includes research aiming to create and study such systems in order to understand corresponding natural mechanisms.

Examples of artificial consciousness from literature and movies include the robots in Star Wars.

Professor Igor Aleksander of Imperial College, London, stated in his book Impossible Minds (IC Press, 1996) that the principles for creating a conscious machine already existed but that it would take forty years to train a machine to understand language. This is a controversial statement, given that artificial consciousness is thought by most observers to require strong AI. Some people deny the very possibility of strong AI; whether or not they are correct, certainly no artificial intelligence of this type has yet been created.