Artificial consciousness
Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness, is consciousness possessed by man-made devices. The idea is ancient, dating back to the Greek myth of Prometheus, in which conscious people were fashioned from clay, pottery being an advanced technology of the day. In science fiction, artificially conscious beings often take the form of robots or artificial intelligences. Artificial consciousness is an interesting philosophical problem because, with increased understanding of genetics, neuroscience and information processing, it may soon be possible to create a conscious entity.
It may be possible biologically to create a being by manufacturing a genome that has the genes necessary for a human brain, and injecting this into a suitable host germ cell. Such a creature, when implanted and born from a suitable womb, would very possibly be conscious and artificial. But what properties of this organism would be responsible for its consciousness? Could such a being be made from non-biological components? Can the techniques used in the design of computers be adapted to create a conscious entity? Would it ever be ethical to do such a thing?
Neuroscience hypothesizes that consciousness is the synergy generated by the interoperation of various parts of the brain, which have come to be called the neuronal correlates of consciousness, or NCC. The brain seems to do this whilst avoiding the problem described in the Homunculus fallacy and overcoming the problems described below in the section on the nature of consciousness. A quest for proponents of artificial consciousness is therefore to manufacture a machine to emulate this interoperation, which no one yet claims fully to understand.
The nature of consciousness
Consciousness is described at length in the consciousness article in Wikipedia. According to naïve realism and direct realism, we perceive things in the world directly and our brains perform processing. On the other hand, according to indirect realism and dualism, our brains contain data about the world, obtained by processing, but what we perceive is some sort of mental model or state that appears to overlay physical things as a result of projective geometry (such as the point of observation in René Descartes' dualism). Which of these general approaches to consciousness is correct has not been resolved and is the subject of fierce debate.
The theory of direct perception is problematic because it would seem to require some new physical theory that allows conscious experience to supervene directly on the world outside the brain. On the other hand, if we perceive things indirectly, via a model of the world in our brains, then some new physical phenomenon, other than the endless further flow of data, would be needed to explain how the model becomes experience.
If we perceive things directly, self-awareness is difficult to explain, because one of the principal reasons for proposing direct perception is to avoid Ryle's regress, where internal processing becomes an infinite loop or recursion. The belief in direct perception also demands that we cannot 'really' be aware of dreams, imagination, mental images or any inner life, because these would involve recursion.
Self-awareness is less problematic for entities that perceive indirectly because, by definition, they are perceiving their own state. However, as mentioned above, proponents of indirect perception must suggest some phenomenon, either physical or dualist, to prevent Ryle's regress. If we perceive things indirectly, then self-awareness might result from the extension of experience in time described by Immanuel Kant, William James and Descartes. Unfortunately, this extension in time may not be consistent with our current understanding of physics.
Information processing and consciousness
Information processing consists of encoding a state, such as the geometry of an image, on a carrier such as a stream of electrons, and then submitting this encoded state to a series of transformations specified by a set of instructions called a program. In principle the carrier could be anything, even steel balls or onions, and the machine that implements the instructions need not be electronic: it could be mechanical or fluidic.
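As an illustration of this definition only (a minimal Python sketch with invented state and transformations, and no claim that any such process is conscious), the program below encodes a state on a carrier and runs it through a series of transformations; nothing in it depends on what the carrier physically is:

    # A state is encoded on a carrier, then submitted to a series of
    # transformations (the "program"). The carrier here is a Python list
    # of integers, but nothing below depends on that choice: it could
    # equally describe positions of steel balls.
    def encode(state):
        """Encode a state (here, a piece of text) as a list of numbers."""
        return [ord(ch) for ch in state]

    def shift(carrier):
        """One transformation from the program: add 1 to each element."""
        return [x + 1 for x in carrier]

    def reverse(carrier):
        """Another transformation: reverse the encoded sequence."""
        return carrier[::-1]

    program = [shift, reverse]    # the set of instructions
    carrier = encode("state")     # the encoded state
    for instruction in program:
        carrier = instruction(carrier)
    print(carrier)                # the carrier after processing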
Digital computers implement information processing. From the earliest days of digital computers people have suggested that these devices may one day be conscious. One of the earliest workers to consider this idea seriously was Alan Turing. The Wikipedia article on Artificial Intelligence (AI) considers this problem in depth.
If technologists were limited to the use of the principles of digital computing when creating a conscious entity, they would face the problems associated with the philosophy of strong AI. The most serious is John Searle's Chinese room argument, which argues that the contents of an information processor have no intrinsic meaning: at any moment they are just a set of electrons, steel balls, or the like.
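Searle's point can be made concrete with a small sketch (illustrative only, and not part of Searle's own argument): the very same bit pattern counts as a floating-point number, an integer or text only relative to an outside interpretation, never in itself.

    # One fixed bit pattern, three "meanings", all supplied from outside.
    import struct

    raw = b'\x42\x48\x61\x6c'                    # four bytes, nothing more
    print(struct.unpack('>f', raw)[0])           # as a big-endian float: ~50.1
    print(int.from_bytes(raw, byteorder='big'))  # as an integer: 1112039788
    print(raw.decode('ascii'))                   # as text: 'BHal'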
Searle's objection does not convince those who believe in direct perception, because they would maintain that 'meaning' is only to be found in the objects of perception, which they believe are the world itself. The objection is also countered by the concept of emergentism, in which it is proposed that some unspecified new physical phenomenon arises in very complex processors as a result of their complexity.
The misnomer digital sentience is sometimes used in the context of artificial intelligence research. Sentience means the ability to feel or perceive in the absence of thoughts, especially inner speech; the term draws attention to the way that conscious experience is a state rather than a process that might occur in processors.
Artificial consciousness beyond information processing
The debate about whether a machine could be conscious under any circumstances is usually described as the conflict between physicalism and dualism. Dualists believe that there is something non-physical about consciousness whilst physicalists hold that all things are physical.
Those who believe that consciousness is physical are not limited to those who hold that consciousness is a property of encoded information on carrier signals. Several indirect realist philosophers and scientists have proposed that, although information processing might deliver the content of consciousness, the state that is consciousness is due to some other physical phenomenon. The eminent neurologist Wilder Penfield was of this opinion and scientists such as Arthur Stanley Eddington, Roger Penrose, Hermann Weyl, Karl Pribram and Henry Stapp, amongst many others, have also proposed that consciousness involves physical phenomena that are more subtle than simple information processing. Even some of the most ardent supporters of consciousness in information processors, such as Dennett, suggest that some new, emergent, scientific theory may be required to account for consciousness.
As was mentioned above, neither the ideas that involve direct perception nor those that involve models of the world in the brain seem to be compatible with current physical theory. It seems that new physical theory may be required and the possibility of dualism is not, as yet, ruled out.
Consciousness in digital computers
Some technologists working in the field of artificial consciousness are trying to create devices that appear conscious. These devices might either simulate consciousness or actually be conscious, but provided they appear conscious, the desired result has been achieved.
In computer science, the term digital sentience is used to describe the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, is likely to be a form of artificial intelligence. A generally accepted criterion for sentience is self-awareness and this is also one of the definitions of consciousness. To support the concept of self-awareness, a definition of conscious can be cited: "having an awareness of one's environment and one's own existence, sensations, and thoughts" (dictionary.com).
In more general terms, an AC system should be theoretically capable of achieving various (or, on a stricter view, all) verifiable, known, objective, and observable aspects of consciousness, so that the device appears conscious. Another definition of the word conscious is: "Possessing knowledge, whether by internal, conscious experience or by external observation; cognizant; aware; sensible" (public domain 1913 Webster's Dictionary).
Aspects of AC
Various aspects and abilities are generally considered necessary for an AC system, or at least learnable by it; these are useful as criteria for determining whether a given machine is artificially conscious. Only the most commonly cited are covered here; there are many others.
The ability to predict (or anticipate) foreseeable events is considered a highly desirable attribute of AC by Igor Aleksander: he writes in Artificial Neuroconsciousness: An Update: "Prediction is one of the key functions of consciousness. An organism that cannot predict would have a seriously hampered consciousness." The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment.
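As a loose illustration of draft selection (a toy sketch, not a rendering of Dennett's actual model), several candidate predictors below act as competing "drafts"; the one that best fits recent experience is selected to predict the next event:

    # Three rival "drafts", each a different guess about what comes next.
    def persist(history):
        return history[-1]                                # "same as last time"

    def trend(history):
        return history[-1] + (history[-1] - history[-2])  # continue the trend

    def average(history):
        return sum(history) / len(history)                # regress to the mean

    drafts = [persist, trend, average]

    def predict(history):
        """Select the draft that would have best predicted the most recent
        observation, then let that draft predict the next one."""
        def error(draft):
            return abs(draft(history[:-1]) - history[-1])
        return min(drafts, key=error)(history)

    print(predict([1, 2, 3, 4, 5]))   # the trend draft wins and predicts 6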
Consciousness is sometimes defined as self-awareness. While self-awareness is very important, it may be subjective and is generally difficult to test.
Another test of AC, in the opinion of some, should include a demonstration that a machine can learn the ability to filter out certain stimuli in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of AC; since we do not understand attentiveness in humans, we have no specific and known criteria by which to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an AC should have outputs that indicate where its attention is focused at any one time, at least during the aforementioned test. According to Antonio Chella of the University of Palermo [1], "The mapping between the conceptual and the linguistic areas gives the interpretation of linguistic symbols in terms of conceptual structures. It is achieved through a focus of attention mechanism implemented by means of suitable recurrent neural networks with internal states. A sequential attentive mechanism is hypothesized that suitably scans the conceptual representation and, according to the hypotheses generated on the basis of previous knowledge, it predicts and detects the interesting events occurring in the scene. Hence, starting from the incoming information, such a mechanism generates expectations and it makes contexts in which hypotheses may be verified and, if necessary, adjusted."
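The testable requirement suggested above, that an AC system's outputs should indicate where its attention is focused, can be sketched as follows (an illustrative toy with invented stimuli and salience values, not Chella's mechanism):

    def attend(stimuli, threshold=0.5):
        """stimuli maps stimulus name -> salience in [0, 1]. Stimuli below
        the threshold are filtered out; the most salient survivor wins."""
        relevant = {name: s for name, s in stimuli.items() if s >= threshold}
        if not relevant:
            return None                        # total inattentiveness
        focus = max(relevant, key=relevant.get)
        print("attending to:", focus)          # observable output of attention
        return focus

    attend({"background hum": 0.1, "moving object": 0.9, "light change": 0.6})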
Awareness could be another required aspect. However, again, there are problems with the exact definition of awareness. To illustrate this point the philosopher David Chalmers (1996) controversially puts forward the panpsychist argument that a thermostat could be considered conscious (pp. 283-299): it has states corresponding to too hot, too cold, or at the correct temperature. Results of neuroimaging experiments on monkeys suggest that a process, not a state or an object, activates neurons [2]. Reacting to such a process requires a model of it to be built from the information received through the senses; creating models in this way demands considerable flexibility and is also useful for making predictions.
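Chalmers' thermostat can be made concrete in a few lines (a sketch of the three states he describes; whether having such states amounts to awareness is precisely what the panpsychist argument disputes):

    def thermostat_state(temperature, target=20.0, tolerance=1.0):
        """Return the device's one internal state out of exactly three."""
        if temperature < target - tolerance:
            return "too cold"
        if temperature > target + tolerance:
            return "too hot"
        return "correct temperature"

    for t in (15.0, 20.5, 26.0):
        print(t, "->", thermostat_state(t))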
Personality is another characteristic that is generally considered vital for a machine to appear conscious. In the area of behavioral psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression such that human observers can interact with it in a meaningful way. However, this is often questioned by computer scientists; the Turing test, which probes a machine's ability to display a personality in conversation, is not generally considered useful any more.
Learning is also considered necessary for AC. According to "Engineering consciousness", a summary by Ron Chrisley of the University of Sussex [3], consciousness is or involves self, transparency, learning (of dynamics), planning, heterophenomenology, splitting of the attentional signal, action selection, attention, and timing management. Daniel Dennett said in his article "Consciousness in Human and Robot Minds" [4]: "It might be vastly easier to make an initially unconscious or nonconscious "infant" robot and let it "grow up" into consciousness, more or less the way we all do." He explained that the robot Cog, described there, "Will not be an adult at first, in spite of its adult size. It is being designed to pass through an extended period of artificial infancy, during which it will have to learn from experience, experience it will gain in the rough-and-tumble environment of the real world." And: "Nobody doubts that any agent capable of interacting intelligently with a human being on human terms must have access to literally millions if not billions of logically independent items of world knowledge. Either these must be hand-coded individually by human programmers--a tactic being pursued, notoriously, by Douglas Lenat and his CYC team in Dallas--or some way must be found for the artificial agent to learn its world knowledge from (real) interactions with the (real) world." An interesting article about learning is Implicit learning and consciousness [5] by Axel Cleeremans, University of Brussels, and Luis Jiménez, University of Santiago, where learning is defined as "a set of phylogenetically advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".
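The contrast Dennett draws, learning world knowledge from real interaction rather than hand-coding it, can be sketched in toy form (invented stimuli and outcomes; no claim that this constitutes learning in Cleeremans and Jiménez's sense):

    from collections import defaultdict

    class InfantAgent:
        """Starts with no hand-coded world knowledge at all."""
        def __init__(self):
            self.knowledge = defaultdict(list)

        def experience(self, stimulus, outcome):
            """Record one real interaction with the world."""
            self.knowledge[stimulus].append(outcome)

        def expect(self, stimulus):
            """Expect the most frequently experienced outcome, if any."""
            outcomes = self.knowledge[stimulus]
            return max(set(outcomes), key=outcomes.count) if outcomes else None

    agent = InfantAgent()
    agent.experience("flame", "pain")
    agent.experience("flame", "pain")
    agent.experience("ball", "bounce")
    print(agent.expect("flame"))   # "pain": learned, not programmed in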
Anticipation is the final characteristic that could possibly be used to make a machine appear conscious. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and not just in the past. In order to do this, the machine being tested must operate coherently in an unpredictable environment, to simulate the real world.
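A minimal sketch of such anticipation (illustrative only, with invented events): the agent maintains a table of observed transitions and readies its response to the expected event before the environment delivers the actual one, so its readiness can be checked in real time.

    transitions = {}                        # last event -> expected next event
    last = None
    for actual in ["ping", "pong", "ping", "pong", "ping"]:
        prepared = transitions.get(last)    # response readied in advance
        status = "ready" if prepared == actual else "surprised"
        print("expected", prepared, "got", actual, "->", status)
        if last is not None:
            transitions[last] = actual      # update the model of the world
        last = actual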
Schools of thought
There are several commonly stated views regarding the plausibility and capability of AC, and the likelihood that AC will ever be real consciousness. Note that the terms Genuine and Not-genuine refer not to the capability of the artificial consciousness but to its reality (how close it is to real consciousness). Believers in Genuine AC think that AC can (one day) be real; believers in Not-genuine AC think it never can be. For example, some believers in Genuine AC say the thermostat is really conscious, but they do not claim the thermostat is capable of an appreciation of music. In an interview [6] Chalmers called his statement that a thermostat is conscious "very speculative", and he is not a keen proponent of panpsychism (see "Whither panpsychism?", page 298 of Chalmers (1996)).
Objective less Genuine AC
By "less Genuine" we mean not as real as "Genuine" but more real than "Not-genuine". It is alternative view to "Genuine AC", by that view AC is less genuine only because of the requirement that AC study must be as objective as the scientific method demands, but by Thomas Nagel consciousness includes subjective experience that cannot be objectively observed. It does not intend to restrict AC in any other way.
An AC system that appears conscious must be theoretically capable of achieving all known objectively observable abilities of consciousness possessed by a capable human, even if it does not need to have all of them at any particular moment. Therefore AC is objective and always remains artificial, and is only as close to consciousness as our objective understanding of the subject allows. Because of the demand to be capable of achieving all these abilities, computers that appear conscious are a form of AC that may be considered strong artificial intelligence, though this also depends on how strong AI is defined.
Not-genuine AC
Artificial consciousness will never be real consciousness, but merely an approximation of it; it only mimics something that only humans (and some other sentient beings) can truly experience or manifest. Currently, this is the state of artificial intelligence, and holders of the Not-genuine AC hypothesis believe that this will always be the case. No computer has been able to pass the somewhat vague Turing test, which would be a first step toward an AI that contains a "personality"; this would perhaps be one path to a Genuine AC. On a stricter view, the subject matter of another field such as AI should not also be the subject matter of AC, so only studies that cannot be categorized anywhere else, such as artificial emotions, can be considered "Not-genuine AC".
Genuine AC
See Strong AI
Human-like AC
See Strong AI
Nihilistic view
It is impossible to test whether anything is conscious. To ask a thermostat to appreciate music is like asking a human to think in five dimensions. It is unnecessary for humans to think in five dimensions, just as it is irrelevant for thermostats to understand music. Consciousness is just a word attributed to things that appear to make their own choices, and perhaps to things that are too complex for our minds to comprehend. Things seem to be conscious, but that may be only because our moral sense tells us to believe it, or because of our feelings for other things. Consciousness is an illusion.
Alternative views
One alternative view states that it is possible for a human to deny its own existence and thereby, presumably, its own consciousness. That a machine might cogently discuss Descartes' argument "I think, therefore I am", would be some evidence in favour of the machine's consciousness. A conscious machine could even argue that because it is a machine, it cannot be conscious. Consciousness does not imply unfailing logical ability. If we look at the dictionary definition, we find that consciousness is self-awareness: a totality of thought and experience. The richness or completeness of consciousness, degrees of consciousness, and many other related topics are under discussion, and will be so for some time (possibly forever). That one entity's consciousness is less "advanced" than another's does not prevent each from considering its own consciousness rich and complete.
Today's computers are not generally considered conscious. A Unix (or derivative thereof) computer's response to the wc -w command, reporting the number of words in a text file, is not a particularly compelling manifestation of consciousness. However, the response to the top command, in which the computer reports in a real-time, continuous fashion each of the tasks it is or is not busy on, how much spare CPU power is available, and so on, is a particular, if very limited, manifestation of self-awareness (and therefore consciousness) by definition.
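In the same spirit as top, a program can report its own state. The sketch below uses only the Python standard library (Unix-only, since the resource module is unavailable on Windows, and the units of ru_maxrss vary by platform):

    import os
    import resource
    import time

    usage = resource.getrusage(resource.RUSAGE_SELF)
    print("pid", os.getpid(), "reporting at", time.ctime())
    print("user CPU time consumed:", usage.ru_utime, "seconds")
    print("peak memory used:", usage.ru_maxrss)   # kB on Linux, bytes on macOS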
Artificial consciousness as a field of study
Artificial consciousness includes research aiming to create and study artificially conscious systems in order to understand corresponding natural mechanisms.
The term "artificial consciousness" was used by several scientists including Professor Igor Aleksander, a faculty member at the Imperial College in London, England, who stated in his book Impossible Minds that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language. Understanding a language does not mean understand the language you are using. Dogs may understand up to 200 words, but may not be able to demonstrate to everyone that they can do so.
Digital sentience has so far been an elusive goal, and a vague and poorly understood one at that. Since the 1950s, computer scientists, mathematicians, philosophers, and science fiction authors have debated the meaning, possibilities and the question of what would constitute digital sentience.
At this time, analog holographic sentience modeled on humans is more likely to be a successful approach.
Practical approaches
AC research has moved beyond the realm of philosophy; several serious attempts are underway to instill consciousness in machines. Two of these are described below; others exist, and more will undoubtedly follow.
Franklin’s Intelligent Distribution Agent
Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars’ Global Workspace Theory (1988, 1997). His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA’s task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual’s skills and preferences with the Navy’s needs. IDA interacts with Navy databases and communicates with the sailors via natural language email dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996-2001 at Stan Franklin’s "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of [Java] code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA’s top-down architecture, high-level cognitive functions are explicitly modeled; see Franklin (1995, 2003) for details. While IDA is functionally conscious by definition, Franklin does “not attribute phenomenal consciousness to [his] own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that’s how I do it' while watching IDA’s internal and external actions as she performs her task."
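A heavily simplified sketch of the codelet idea (not Franklin's IDA; the codelet names and findings below are invented): small, relatively independent mini-agents, each running as a separate thread, post what they find to a shared "global workspace".

    import queue
    import threading
    import time

    workspace = queue.Queue()    # a stand-in for the global workspace

    def codelet(name, finding, delay):
        """A mini-agent: work briefly, then broadcast a finding."""
        time.sleep(delay)
        workspace.put((name, finding))

    threads = [
        threading.Thread(target=codelet, args=("perceiver", "new message", 0.02)),
        threading.Thread(target=codelet, args=("matcher", "skills fit job A", 0.01)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    while not workspace.empty():         # contents reaching the workspace
        print(workspace.get())           # are "broadcast" to the system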
Haikonen’s Cognitive Architecture
Pentti Haikonen (2003) considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these views are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity implementation of the architecture proposed by Haikonen (2004) was reportedly not capable of AC, but did exhibit emotions as expected.
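The elementary processing unit such a bottom-up architecture rests on can be sketched generically (this is a textbook threshold neuron, not Haikonen's actual design): the output is produced by weighted signals rather than by executing stored program instructions.

    def neuron(inputs, weights, threshold=1.0):
        """Fire (return 1) when the weighted sum of the input signals
        reaches the threshold; otherwise stay silent (return 0)."""
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # Two signal patterns on the same three input lines:
    print(neuron([1, 0, 1], [0.6, 0.3, 0.5]))   # 1.1 >= 1.0 -> fires
    print(neuron([0, 1, 0], [0.6, 0.3, 0.5]))   # 0.3 <  1.0 -> silent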
Testing for artificial consciousness
Unless artificial consciousness can be proven formally, judgments of the success of any implementation will depend on observation.
The Turing test is a proposal for identifying machine intelligence as determined by a machine's ability to interact with a person. In the Turing test one has to guess whether the entity one is interacting with is a machine or a human. An artificially conscious entity could only pass an equivalent test when it had itself passed beyond the imaginations of observers and entered into a meaningful relationship with them, and perhaps with fellow instances of itself.
A cat or dog would not be able to pass this test, yet consciousness is very likely not an exclusive property of humans. It is likewise likely that a machine could be conscious and yet be unable to pass the Turing test.
As mentioned above, the Chinese room argument attempts to debunk the validity of the Turing Test by showing that a machine can pass the test and yet not be conscious.
Since there is an enormous range of human behaviours, all of which are deemed to be conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness.
Indeed, for those who argue for indirect perception, no test of behaviour can prove or disprove the existence of consciousness, because a conscious entity can have dreams and other features of an inner life. This point is made forcibly by those who stress the subjective nature of conscious experience, such as Thomas Nagel, who, in his essay What is it like to be a bat?, argues that subjective experience cannot be reduced because it cannot be objectively observed, though subjective experience is not in contradiction with physicalism.
Although objective criteria are being proposed as prerequisites for testing the consciousness of a machine, the failure of any particular test would not disprove consciousness. Ultimately it will only be possible to assess whether a machine is conscious when a universally accepted understanding of consciousness is available.
The ethics of artificial consciousness
In the absence of a true physical understanding of consciousness, researchers do not even know why they would want to construct a machine that is conscious. If it were certain that a particular machine was conscious, it would probably need to be given rights under law and could not be used as a slave.
References
- Baars, Bernard (1988), A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
- Baars, Bernard (1997), In the Theater of Consciousness. New York, NY: Oxford University Press.
- Chalmers, David (1996), The Conscious Mind. Oxford University Press.
- Cotterill, Rodney (2003), 'Cyberchild: a Simulation Test-Bed for Consciousness Studies', in Machine Consciousness, ed. Owen Holland. Exeter, UK: Imprint Academic.
- Franklin, Stan (1995), Artificial Minds. Boston, MA: MIT Press.
- Franklin, Stan (2003), 'IDA: A Conscious Artefact?', in Machine Consciousness, ed. Owen Holland. Exeter, UK: Imprint Academic.
- Freeman, Walter (1999), How Brains Make Up Their Minds. London, UK: Phoenix.
- Haikonen, Pentti (2003), The Cognitive Approach to Conscious Machines. Exeter, UK: Imprint Academic.
- Haikonen, Pentti (2004), Conscious Machines and Machine Emotions, presented at the Workshop on Models for Machine Consciousness, Antwerp, BE, June 2004.
Artificial consciousness in literature and movies
Fictional instances of artificial consciousness:
- Vanamonde in Arthur C. Clarke's The City and the Stars
- Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind, and The Investment Counselor
- HAL in 2001: A Space Odyssey
- Data in Star Trek
- Robots in Isaac Asimov's Robot Series
- Andrew Martin in The Bicentennial Man
- Blade Runner
- The Matrix
External links
- Are People Computers? Strong AI, The Simulation Argument and Naive Realism
- http://www.ph.tn.tudelft.nl/~davidt/consciousness.html
- Artefactual consciousness depiction by Professor Igor Aleksander (requires Microsoft PowerPoint to view)
- Proposed mechanisms for AC implemented by computer program: absolutely dynamic systems
- http://www.consciousentities.com
- David Chalmers
- Consciousness in the Artificial Mind (non-mainstream)
- Course notes/slides on Neurophilosophy
- Models of Consciousness - ESF Exploratory Workshop - Scientific Report
- Anton P. Zeleznikar's Home Page