Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics whose aim is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).
Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation.
- 1 Philosophical views of artificial consciousness
- 2 Susan Blackmore's Memetic Theory
- 3 Ethics of artificial consciousness
- 4 Consciousness in digital computers
- 5 Symbolic or hybrid proposals
- 6 Neural network proposals
- 7 Testing for artificial consciousness
- 8 Artificial consciousness in literature, movies, and television
- 9 See also
- 10 References
- 11 Further reading
- 12 External links
Philosophical views of artificial consciousness
As there are many designations of consciousness, there are many potential types of AC. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that are amenable to a functional description, while phenomenal consciousness concerns those aspects of experience that seem to defy functional depiction, instead being characterized qualitatively in terms of “raw feels”, “what it is like” or qualia (Block, 1997).
- Giorgio Buttazzo
Giorgio Buttazzo of the University of Pavia contributes to the debate over whether artificial (or machine) consciousness is attainable. Of present-day computers he writes: "Working in a fully automated mode, they cannot exhibit creativity, emotions, or free will. A computer, like a washing machine, is a slave operated by its components." In his view, even machines that accurately simulate autonomy lack the intangible sensibilities that would give them a mental life and therefore artificial consciousness. Such machines are also, as Buttazzo notes, controlled by outside influences: their man-made internal workings and their code.
Buttazzo also discusses the philosophical issues that complicate any hope that artificial consciousness could one day be achieved. The mind/body problem, as famously pondered by the philosopher René Descartes, defines the mind (that with which we experience qualia) as an entity entirely separate from the body. This rift between the physical body and the intangible mind makes consciousness extremely difficult to study, and therefore even more difficult to replicate. Reductionism, on the other hand, does "not recognize the existence of mind as a subjective, private sense-data construct and consider[s] all mental activities as specific neural states of the brain" (Buttazzo, 2001, p. 4); that is, any activity that occurs in the brain is physically observable. Finally, a third school of thought, idealism, rejects the physical world as anything other than mental construction. On this view, even the things we call tangible are immeasurable, as they exist only in the individual's own perception of the things around him.
As technology has advanced, strict reductionism and idealism have lost ground. Today, "scientists and philosophers developed a new approach that considers the mind a form of computation emerging at a higher level of abstraction with respect to neural activity" (Buttazzo, 2001, p. 4). While a mere change in perspective does not necessarily alter the status quo, it does open the door to new ways of thinking creatively about the mind/body problem.
The debate over the plausibility of artificial consciousness
A view skeptical of AC is held by theorists (e.g., type-identity theorists) who hold that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution (Block, 1978; Bickle, 2003).
In his article "Artificial Consciousness: Utopia or Real Possibility" Giorgio Buttazzo says that despite our current technology's ability to simulate autonomy, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, emotions, or free will. A computer, like a washing machine, is a slave operated by its components." 
For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness (Putnam, 1967).
Functionalists in particular see no obstacle to artificial consciousness. Functionalism, according to the Stanford Encyclopedia of Philosophy, is the idea that "what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part" (Levin, 2013). As such, a functionalist would be inclined to argue that any machine able to perform the tasks, or functions, of consciousness must be doing so through a form of consciousness itself, since functionalists hold that what something is depends more on what it can do than on the sum of its parts or its composition. To functionalists, artificial consciousness is an entirely plausible feat, one to be achieved as soon as scientists can program machines able to perform the tasks of consciousness.
Alan Turing was a functionalist famous for developing, in 1950, a means of testing consciousness via machine intelligence (Harnad, 2008). His test, now known as the Turing Test, aimed to determine whether a computer could "think": a machine would be labeled "intelligent" if a human interrogator, conversing with it, was unable to tell it apart from another human (Harnad, 2008). That Turing devised a means of testing computers for "intelligence" (a function of consciousness) signifies that he believed artificial consciousness in machines to be not only plausible but imminent.
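Turing's protocol is simple enough to sketch in a few lines. The Python toy below is a hypothetical illustration, not Turing's own formulation: the canned replies and the coin-flip judge are stand-ins, and the only point it makes is that the machine "passes" when the judge's accuracy stays near chance.

```python
import random

# A deliberately crude sketch of Turing's imitation game (Turing, 1950).
# The questions, replies, and keyword-free judge are hypothetical;
# Turing specified only the protocol, not the players.

QUESTIONS = ["Do you like poetry?", "What is 7 x 8?", "Describe your childhood."]

def human_reply(question):
    return "Let me think... " + question.lower().replace("?", ".")

def machine_reply(question):
    # a perfect mimic by construction, so the transcripts are indistinguishable
    return "Let me think... " + question.lower().replace("?", ".")

def judge(transcript_a, transcript_b):
    """Guess which transcript came from the machine. Since the replies are
    identical here, the judge can do no better than a coin flip."""
    return random.choice([True, False])  # True means "A is the machine"

def imitation_game():
    machine_is_a = random.random() < 0.5
    replies = (machine_reply, human_reply) if machine_is_a else (human_reply, machine_reply)
    transcript_a = [(q, replies[0](q)) for q in QUESTIONS]
    transcript_b = [(q, replies[1](q)) for q in QUESTIONS]
    return judge(transcript_a, transcript_b) == machine_is_a

# Over many sessions, judge accuracy near 50% means the machine passes.
sessions = [imitation_game() for _ in range(1000)]
print("judge accuracy:", sum(sessions) / len(sessions))
```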
- Emergence Theorists
Emergence theorists have been known to argue against the plausibility of artificial machine consciousness. Emergence theorists believe that complex systems and patterns such as consciousness arise out of an assortment of relatively simple interactions, creating novel entities that are irreducible to those interactions (O'Connor, T., & Wong, H., 2012). They hold that the complex systems of the mind, and the consciousness they support, are consequences arising from biological neurophysiology, in the same way that the more complex system of chemistry arises from simple physics (Schlagel, 1999).
- The Role of Organic Material
There is debate over the importance that organic matter plays in the development of consciousness (Schlagel, 1999). Functionalists subscribe to the idea that if a machine can independently monitor and perform the necessary tasks of a sentient being, that machine must possess some level of consciousness, no matter its composition. Theorists of some other schools, however, tie consciousness to biology and neurology, claiming that consciousness is due in part to some essential property of organic material (Searle, 1980). On this view, the creation of artificial consciousness through purely mechanical manipulation is impossible. Some emergence theorists argue that human-like consciousness is not attainable by man-made hardware on its own, as its functions derive in part from the organic material of which humans are composed (Schlagel, 1999). Theorists of this party reason that some implicit property of organic material, sometimes called a causal power, is required for emergent consciousness to develop (Searle, 1980). John Searle explains the importance of these causal powers of organic material to the everyday functions of consciousness:
- “It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality, but, because I am a certain sort of organism with a certain biological (i.e., chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning and other intentional phenomena.” (Searle, 1980)
As Searle explains, organic material is, in his view, an essential quality that endows one with the causal powers from which consciousness emerges (Searle, 1980). Because present-day computers contain no such material, he concludes that they are unlikely to develop true consciousness any time soon.
- Different Means of Processing
Some theorists challenge the plausibility of artificial consciousness by underlining the contrasting ways in which computers and humans process information. Machines register environmental stimuli very differently from humans, not only experientially (due to lack of qualia) but quantitatively as well. According to Richard Schlagel of George Washington University, humans interpret the world around them in terms of symbolic meaning. When a human considers the world, stimuli are perceived and transformed into complex, often abstract, intrinsic conceptual representations. This internal awareness does not take the form of rote mathematical computation, as it does in a machine, but of a semantic understanding of forms. Such awareness enables not only an understanding of one's environment but also hypothetical, abstract, artistic, emotional, and philosophical contemplation beyond numerical formulas (Schlagel, 1999). It is in this symbolic awareness that some theorists locate the emergence of thought and of formal consciousness. By comparison, the fixed, definitional, algorithmic processing programmed into machines does not allow computers to perform the symbolic conceptual consideration that humans perform daily (Schlagel, 1999), and so does not amount to the emergence of the symbolic reflection necessary for true thought or consciousness to arise.
- Programming vs. Understanding
John Searle, a well-respected philosopher, argues that even if a computer were able to perform all the tasks of consciousness, including but not limited to perception, locomotion, and other formal operations, it would still not be conscious. Even if a computer program could produce answers or actions that mimicked a human's responses to prompting, it would still lack an essential understanding of the functions it was performing and of the world around it (Searle, 1980). Searle argues that all computers can do is apply strictly programmed instructions for manipulating formal symbols to produce outcomes (Searle, 1980). No matter how closely its answers approached a human's, the computer would still lack the necessary understanding: it would merely follow a complex coding of equations, exchanging symbols according to instruction with no proper grasp of the symbols or the exchanges themselves, and this bars it from ever properly reaching consciousness (Searle, 1980). According to Searle, no program can give a computer the understanding that would constitute consciousness or mind, no matter how "intelligently" or convincingly it may behave.
- The Chinese Room Argument
Searle’s Chinese Room argument illustrates this computational shortfall in machine understanding by equating the process to that of a person mastering the exchange of written symbols without understanding their meaning (Searle, 1980). The argument runs as follows. Suppose an English speaker with no knowledge of Chinese is locked in a room, alone. Notes written in Chinese are slipped into the room, and the English speaker is instructed to produce responses to send back out. To complete this task, the English speaker is supplied only with an English manual explaining which Chinese characters to produce in response to each incoming combination of characters; the manual includes no translations or intimations of word meanings. The manual is so thorough that those outside the room are convinced that the creator of the response notes understands Chinese, when in fact the person is merely manipulating symbols according to instruction, with no knowledge of what any of the notes mean. The scenario parallels a computer following its programming: even if computers were supplied with programming thorough enough to produce naturalistic language, they would merely be manipulating symbols, with no capability to understand the language (Searle, 1980). This lack of understanding of meaning, Searle argues, is a critical weakness that bars computers from reaching true artificial consciousness.
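Searle's room can be caricatured as a lookup table. The tiny "rule book" below is a hypothetical miniature of the manual: the program returns fluent-looking Chinese while storing no meanings at all.

```python
# A minimal sketch of Searle's Chinese Room: the "rule book" maps
# incoming symbol strings to outgoing symbol strings.  The entries
# below are hypothetical stand-ins for the manual's instructions;
# nowhere does the program represent what any symbol means.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",          # rule: on seeing these shapes, emit those shapes
    "今天天气怎么样": "今天天气很好",
}

def room(note: str) -> str:
    """Return the response the manual dictates.

    The function never parses, translates, or understands the input;
    it only matches shapes, exactly as the person in the room does.
    """
    return RULE_BOOK.get(note, "请再说一遍")  # default rule from the manual

print(room("你好吗"))  # fluent to an outside observer, meaningless inside
```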
- Inconsistencies in Philosophical Beliefs
An additional flaw in the case for artificial consciousness, as pointed out by Searle, stems from an inconsistency in how machine and human organization are treated (Searle, 1980). Extreme dualism is not widely accepted in the modern study of human consciousness, for most believe that mind and body are strongly intertwined. Yet to define computer processing as consciousness, one would have to ignore the fact that a computer's software computation is entirely divided from its mechanical hardware. As Searle explains, "unless you believe that the mind is separable from the brain both conceptually and empirically - dualism in a strong form - you cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains" (Searle, 1980). This is a clear inconsistency of philosophical definitions: one cannot accept holism in the case of human consciousness and then accept dualism in the case of machine consciousness.
Chalmers' argument for artificial consciousness
One of the most explicit arguments for the plausibility of AC comes from David Chalmers. His proposal, found within his manuscript A Computational Foundation for the Study of Cognition, is roughly that computers perform computations and the right kinds of computations are sufficient for the possession of a conscious mind. In outline, he defends his claim thus: Computers perform computations. Computations can capture other systems’ abstract causal organization. Mental properties are nothing over and above abstract causal organization. Therefore, computers running the right kind of computations will instantiate mental properties.
The most controversial part of Chalmers’ proposal is that mental properties are “organizationally invariant”, i.e., nothing over and above abstract causal organization. His rough argument for this claim is as follows. Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are “characterized by their causal role” within an overall causal system. He adverts to the work of Armstrong (1968) and Lewis (1972) in claiming that “[s]ystems with the same causal topology…will share their psychological properties.”
Phenomenological properties, on the other hand, are not prima facie definable in terms of their causal roles. Establishing that phenomenological properties are amenable to individuation by causal role therefore requires argument. Chalmers provides his “Dancing Qualia Argument” for this purpose.
Chalmers begins by assuming that agents with identical causal organizations could have different experiences in virtue of having different material constitutions (silicon vs. neurons, e.g.). He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. Ex hypothesi, the experience of the agent under transformation would change (as the parts were replaced), but there would be no change in causal topology and therefore no means whereby the agent could “notice” the shift in experience.
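The replacement scenario can be made concrete with a toy network. In the sketch below (a hypothetical illustration, not Chalmers' own formalism), the "neuron" and its "silicon" replacement compute the same input-output function, so swapping them one at a time leaves every downstream signal, and hence the causal topology, unchanged.

```python
# A toy version of Chalmers' gradual-replacement scenario.  Both unit
# types implement the same threshold function, standing in for a neuron
# and its functionally identical silicon replacement (hypothetical).

def neuron(x):          # "biological" unit
    return 1.0 if x > 0.5 else 0.0

def silicon_chip(x):    # functional duplicate of the neuron
    return 1.0 if x > 0.5 else 0.0

def run_network(units, stimulus):
    """Feed a signal through a chain of units; the trace of signals is
    the only thing any functional probe of the agent could inspect."""
    trace, signal = [], stimulus
    for unit in units:
        signal = unit(signal)
        trace.append(signal)
    return trace

original = [neuron] * 5
for k in range(6):  # replace 0, 1, ..., 5 units in turn
    hybrid = [silicon_chip] * k + [neuron] * (5 - k)
    assert run_network(hybrid, 0.7) == run_network(original, 0.7)

# Every intermediate agent produces an identical trace, so there is no
# causal difference by which the agent could "notice" a change in qualia.
print("all hybrids functionally indistinguishable")
```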
Critics of AC object that Chalmers begs the question in assuming that all mental properties are sufficiently captured by abstract causal organization.
Susan Blackmore's Memetic Theory
Susan Blackmore, who holds a doctorate in parapsychology, believes that those who strive to solve the mind/body problem in order to unlock the mystery of artificial consciousness are focusing on the wrong problem. Instead, she argues, they ought to acknowledge that it is impossible to create a test that will prove or disprove that a machine is experiencing consciousness: consciousness is assumed to be a subjective experience, so it would be impossible to design a test to objectively identify it in any machine (Blackmore, 2003). Blackmore holds that consciousness cannot simply be found and inserted into a machine, yet she also rejects the belief that machine consciousness is impossible. Rather, consciousness is an illusion in the sense that it is not necessarily what we think it is; we can be misled by it.
Blackmore builds on the theory of memetics, the idea that memes are bits of information transferred from person to person via imitation (Dawkins, 1976). These bits of imitated information are not copied perfectly from one person to the next; each imitation becomes its own version of the information, as every imitator translates and interprets differently (Blackmore, 2003). If human consciousness can be boiled down to the imitation of subjectively translated and experienced memes, then perhaps artificial consciousness is attainable by programming a machine to replicate memes as humans do. Humans possess a "memetic drive" that sets memes in competition to be replicated, and as humans create new memes out of old ones, human intelligence advances; as humans evolve in intelligence, their machines co-evolve with them (Blackmore, 2003).
Machines or robots built with the ability to imitate memes could internalize them as "thoughts" and "ideas" that expand their intellect as memetic drive takes hold. On Blackmore's theory, such machines would become increasingly sophisticated, eventually able to distinguish themselves as having subjective experiences, beliefs, and desires (Blackmore, 2003). And because their machinery is digitally advanced, they would not suffer the shortcomings of the rudimentary human process of imitation. Given the current state of technology, where an Internet connection allows two machines to share information, conscious machines would be able to exchange far larger memes at far greater speed.
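Blackmore's copy-with-variation process is easy to caricature in code. In this hedged sketch the alphabet, mutation rate, and "catchiness" score are all hypothetical parameters, not part of her theory; the point is only that imperfect imitation plus differential replication lets some meme variants spread.

```python
import random

# A toy memetic simulation: memes are strings, imitation copies them
# imperfectly, and more "catchy" variants get imitated more often.
# Alphabet, error rate, and the fitness rule are hypothetical.

ALPHABET = "abcdefgh"

def imitate(meme, error_rate=0.05):
    """Copy a meme with occasional transcription errors, as imitators do."""
    return "".join(random.choice(ALPHABET) if random.random() < error_rate else c
                   for c in meme)

def catchiness(meme):
    return meme.count("a")  # hypothetical fitness: 'a'-rich memes spread

population = ["hbgfedch"] * 50
for generation in range(100):
    # memetic drive: catchier memes win more chances to be imitated
    weights = [1 + catchiness(m) for m in population]
    population = [imitate(random.choices(population, weights)[0])
                  for _ in range(len(population))]

print(max(population, key=catchiness))  # population drifts toward 'a'-heavy variants
```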
Ethics of artificial consciousness
If it were certain that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g., what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer of a building or large machine presents a particular ambiguity. Should laws be made for such a case, consciousness would also require a legal definition (for example, a machine's ability to experience pleasure or pain, known as sentience). Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though the theme has often been explored in fiction (see below).
The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:
61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.
Consciousness in digital computers
There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious. A variety of functions in which consciousness plays a role were suggested by Bernard Baars (Baars 1988) and others: Definition and Context Setting, Adaptation and Learning, Editing, Flagging and Debugging, Recruiting and Control, Prioritizing and Access-Control, Decision-making or Executive Function, Analogy-forming, Metacognitive and Self-monitoring, and Autoprogramming and Self-maintenance. Igor Aleksander suggested 12 principles for artificial consciousness (Aleksander 1995): The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer; the list is not exhaustive.
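As a hint of what synthesizing one of these functions might look like, here is a minimal global-workspace loop in the spirit of Baars's theory. The specialist processes and their salience scores are hypothetical placeholders; real GWT implementations such as IDA (below) are vastly richer.

```python
from dataclasses import dataclass, field

# A minimal global-workspace cycle, loosely after Baars (1988):
# specialist processes bid for access with a salience score, and the
# winning content is broadcast back to every specialist.  The
# specialists and the salience rule are hypothetical placeholders.

@dataclass
class Specialist:
    name: str
    inbox: list = field(default_factory=list)

    def propose(self, stimulus):
        # hypothetical salience: crude character overlap with the stimulus
        salience = len(set(stimulus) & set(self.name))
        return salience, f"{self.name} noticed {stimulus!r}"

specialists = [Specialist("vision"), Specialist("audition"), Specialist("motor")]

def workspace_cycle(stimulus):
    bids = [s.propose(stimulus) for s in specialists]
    winner = max(bids)             # competition for workspace access
    for s in specialists:          # global broadcast to all specialists
        s.inbox.append(winner[1])
    return winner[1]

print(workspace_cycle("a loud noise"))  # the "audition" specialist tends to win
```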
Awareness could be one required aspect, but there are many problems with the exact definition of awareness. Neuroscanning experiments on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagined, and it is also useful for making predictions. Such modeling needs a lot of flexibility, and it includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.
There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, each of which may or may not be conscious. For example, in agency awareness you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.
Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.
Learning is also considered necessary for AC. According to Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events (Baars 1988). Axel Cleeremans and Luis Jiménez define learning as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments" (Cleeremans 2001).
The ability to predict (or anticipate) foreseeable events is considered important for AC by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly so as to be ready to respond to them when they occur, or to take preemptive action to avert them. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chess board, but also in novel environments that may change, executing them only when appropriate in order to simulate and control the real world.
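One computational reading of draft-based anticipation: keep several candidate world-models, score each against incoming observations, and anticipate with the best. Everything below (the three toy drafts, the error metric) is a hypothetical sketch, not Dennett's own formalism.

```python
# A hedged sketch of prediction by draft selection: several candidate
# models ("drafts") predict the next observation, and the one with the
# lowest running error is selected for anticipation.  The drafts are
# hypothetical toy models of a one-dimensional "world".

drafts = {
    "static": lambda history: history[-1],                   # tomorrow = today
    "trend":  lambda history: 2 * history[-1] - history[-2], # linear extrapolation
    "mean":   lambda history: sum(history) / len(history),   # long-run average
}

errors = {name: 0.0 for name in drafts}
history = [1.0, 2.0]

for observation in [3.0, 4.0, 5.0, 6.0]:  # a steadily changing environment
    for name, model in drafts.items():
        errors[name] += abs(model(history) - observation)
    history.append(observation)

best = min(errors, key=errors.get)        # select the most appropriate draft
print(best, "->", drafts[best](history))  # "trend" wins and predicts 7.0
```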
- Subjective experience
Subjective experience, or qualia, is widely considered to be the hard problem of consciousness; indeed, it is held to pose a challenge to physicalism, let alone computationalism. On the other hand, other fields of science face limits on what can be observed, such as the uncertainty principle in physics, and these limits have not made research in those fields impossible.
Domenico Parisi, a researcher at the Institute of Cognitive Science and Technologies in Italy, defines mental life as the cognitive or neurological ability "to have internal representation of sensory input in the absence of the input" (Parisi, 2007, p. 4). He believes that robots must have this mental life in order to achieve true artificial consciousness. For a robot to think for itself, it must not be a merely reactive robot, whose internal representations of reality are incited by sensory inputs from the world around it; rather, it must be able to organically and mentally generate representations without the need for external sensory input (Parisi, 2007, p. 4). Parisi defines two types of representation to underscore why reactive robots do not possess true artificial consciousness. "Internal representations" are the "activation patterns in a neural network's internal units that mediate between sensory input and motor output"; these are present in reactive robots, but only to the extent that they are created by external sensory stimuli. "Mental representations", on the other hand, are "internal representations caused by external input but are self-generated in the absence of the input" (Parisi, 2007, p. 6). True artificial consciousness must therefore be able to recreate, self-generated, those neural network patterns in the absence of the specific sensory stimuli that would normally activate them.
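Parisi's distinction can be sketched as two modes of one small network: a reactive mode in which internal units are driven by live sensory input, and a "mental" mode in which the same activation pattern is regenerated with the senses silent. The weights and patterns below are hypothetical.

```python
import math
import random

# A sketch of Parisi's reactive vs. mental representations.  One layer
# of internal units can be driven either by live sensory input
# (reactive mode) or by replaying a stored pattern with no input at all
# (mental mode).  Weights and inputs here are hypothetical.

random.seed(0)
WEIGHTS = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]

def activate(sensory_input):
    """Internal representation: the activation pattern of the internal
    units that mediates between sensory input and motor output."""
    return [math.tanh(sum(w * x for w, x in zip(row, sensory_input)))
            for row in WEIGHTS]

# Reactive mode: the representation exists only while the input does.
seen = activate([1.0, 0.0, 0.5])

memory = list(seen)  # the pattern is stored for later self-generation

def imagine():
    """Mental mode: re-activate the stored pattern in the absence of
    the sensory input that originally caused it."""
    return list(memory)

print(imagine() == seen)  # True: a representation without stimulation
```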
Within the past ten years, researchers have begun to explore more deeply how artificial consciousness might be engineered in robots. In 2013, the researchers Bermejo-Alonso, López, and Sanz wrote an article about designing and creating autonomous robot controllers and how the ASys Programme is being used to advance machines' ability to solve problems creatively and organically. The Programme has been designed to "pursue the identification of core architectural traits that enable a system to handle any kind of uncertainty, whether environmental or internal" (Bermejo-Alonso, López & Sanz, 2013). It has been in use and under development for nearly a decade, and it aims to help researchers bridge the gap between programmed autonomy in robots and artificial consciousness.
Symbolic or hybrid proposals
Franklin's Intelligent Distribution Agent
Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory (Baars 1988, 1997). His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural-language e-mail dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled (see Franklin 1995 and Franklin 2003 for details). While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task."
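The codelet idea, many small special-purpose routines each running on its own thread and posting to a shared structure, can be sketched briefly. The two codelets below are hypothetical stand-ins; IDA's actual quarter-million lines of Java are far richer.

```python
import queue
import threading

# A sketch of Franklin-style codelets: small, relatively independent
# routines, each on its own thread, posting results to a shared
# structure from which higher-level behavior is assembled.  The two
# codelet bodies below are hypothetical stand-ins for IDA's mini-agents.

blackboard = queue.Queue()

def skill_matcher(sailor):
    blackboard.put(("skills", f"{sailor} matches billet: sonar technician"))

def policy_checker(sailor):
    blackboard.put(("policy", f"{sailor} satisfies sea-duty rotation rules"))

sailor = "Smith"
threads = [threading.Thread(target=codelet, args=(sailor,))
           for codelet in (skill_matcher, policy_checker)]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not blackboard.empty():
    print(blackboard.get())  # contributions gathered from independent codelets
```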
Ron Sun's cognitive architecture CLARION
CLARION posits a two-level representation that explains the distinction between conscious and unconscious mental processes.
CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill learning tasks have been simulated using CLARION that span the spectrum ranging from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task (Sun 2002). Among them, SRT, AGL, and PC are typical implicit learning tasks, very much relevant to the issue of consciousness as they operationalized the notion of consciousness in the context of psychological experiments.
The simulations using CLARION provide detailed, process-based interpretations of experimental data related to consciousness, in the context of a broadly scoped cognitive architecture and a unified theory of cognition. Such interpretations are important for a precise, process-based understanding of consciousness and other aspects of cognition, leading up to better appreciations of the role of consciousness in human cognition (Sun 1999). CLARION also makes quantitative and qualitative predictions regarding cognition in the areas of memory, learning, motivation, meta-cognition, and so on. These predictions either have been experimentally tested already or are in the process of being tested.
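CLARION's two levels can be caricatured as an explicit rule store sitting above an implicit, similarity-based exemplar memory, with the bottom level answering whenever no rule fires. The rules and exemplars below are hypothetical, and the sketch omits CLARION's learning machinery entirely.

```python
# A toy two-level system in the spirit of CLARION: an explicit
# (symbolic, conscious-like) rule level and an implicit (associative,
# unconscious-like) level that answers when no rule applies.  The
# rules and stored exemplars are hypothetical.

explicit_rules = {("red", "octagon"): "stop"}   # top level: crisp rules

implicit_memory = [                             # bottom level: exemplars
    (("red", "circle"), "stop"),
    (("green", "circle"), "go"),
]

def similarity(a, b):
    return sum(x == y for x, y in zip(a, b))

def decide(stimulus):
    if stimulus in explicit_rules:              # explicit processing
        return explicit_rules[stimulus], "explicit"
    best = max(implicit_memory, key=lambda ex: similarity(ex[0], stimulus))
    return best[1], "implicit"                  # implicit processing

print(decide(("red", "octagon")))   # ('stop', 'explicit')
print(decide(("green", "square")))  # ('go', 'implicit'): mere association
```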
Ben Goertzel's OpenCog
Ben Goertzel is pursuing an embodied AGI through the open-source OpenCog project. Current code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, being done at the robotics lab of Hugo de Garis at Xiamen University.
Neural network proposals
Haikonen's cognitive architecture
Pentti Haikonen (2003) considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity implementation of the architecture proposed by Haikonen (2003) was reportedly not capable of AC, but did exhibit emotions as expected. See Doan (2009) for a comprehensive introduction to Haikonen's cognitive architecture. An updated account of Haikonen's architecture, along with a summary of his philosophical views, is given in Haikonen (2012).
Shanahan's cognitive architecture
Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination") (Shanahan 2006). For discussions of Shanahan's architecture, see (Gamez 2008) and (Reggia 2013) and Chapter 20 of (Haikonen 2012).
Takeno's self-awareness research
Self-awareness in robots is being investigated by Junichi Takeno at Meiji University in Japan. Takeno asserts that he has developed a robot capable of discriminating between a self-image in a mirror and any other robot having an identical image, and this claim has been reviewed (Takeno, Inaba & Suzuki 2005). Takeno asserts that he first contrived a computational module called a MoNAD, which has a self-aware function, and then constructed the artificial consciousness system by formulating the relationships between emotions, feelings, and reason, connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror-image cognition experiment using a robot equipped with the MoNAD system. He proposed the Self-Body Theory, which states that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." The most important step in developing artificial consciousness, or in clarifying human consciousness, is the development of a function of self-awareness, and Takeno claims to have demonstrated physical and mathematical evidence for this in his thesis (Takeno 2008). He has also demonstrated that robots can study episodes in memory where emotions were stimulated and use this experience to take predictive actions to prevent the recurrence of unpleasant emotions (Torigoe, Takeno 2009).
Aleksander's impossible mind
Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language. Whether this is true remains to be demonstrated, and the basic principle stated in Impossible Minds (that the brain is a neural state machine) is open to doubt.
Thaler's Creativity Machine Paradigm
Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, "Device for the Autonomous Generation of Useful Information", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories, or confabulations, that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain attach dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to a stream of consciousness.
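The mechanism lends itself to a small sketch: perturb a trained memory with internal noise and let a critic harvest the novel outputs. The associative memory, noise level, and critic rule below are hypothetical simplifications of Thaler's patented architecture, not a rendering of it.

```python
import random

# A toy rendition of noise-driven confabulation: a "trained" associative
# memory is internally perturbed so that it emits patterns it was never
# trained on, and a critic keeps the plausible novelties.  The memory,
# noise level, and critic rule are hypothetical simplifications.

random.seed(1)
memories = ["cup", "cap", "cat"]  # what the net "knows"

def recall_with_noise(noise=0.3):
    """Degrade a stored pattern: each character may flip at random,
    standing in for injected synaptic noise."""
    source = random.choice(memories)
    return "".join(random.choice("abcdefghijklmnopqrstuvwxyz")
                   if random.random() < noise else c
                   for c in source)

def critic(candidate):
    """Keep only novel-but-plausible confabulations (hypothetical rule)."""
    return candidate not in memories and candidate[0] == "c"

ideas = {w for w in (recall_with_noise() for _ in range(200)) if critic(w)}
print(sorted(ideas))  # e.g. 'cot', 'cut': potential "ideas" born of noise
```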
Testing for artificial consciousness
The best-known method for testing machine intelligence is the Turing test. But when interpreted as merely observational, this test contradicts the philosophy-of-science principle of the theory-dependence of observation. It has also been suggested that Alan Turing's recommendation to imitate not an adult human consciousness but a human child's consciousness should be taken seriously.
Other tests, such as ConsScale, test the presence of features inspired by biological systems, or measure the cognitive development of artificial systems.
Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Although various systems may display various signs of behavior correlated with functional consciousness, there is no conceivable way in which third-person tests can have access to first-person phenomenological features. Because of that, and because there is no empirical definition of consciousness, a test of presence of consciousness in AC may be impossible.
Artificial consciousness in literature, movies, and television
- Vanamonde in Arthur C. Clarke's The City and the Stars—an artificial being that was immensely powerful but entirely child-like.
- The Ship (the result of a large-scale AC experiment) in Frank Herbert's Destination: Void and sequels, despite past edicts warning against "Making a Machine in the Image of a Man's Mind."
- Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind, and Investment Counselor
- HAL 9000 in 2001: A Space Odyssey
- Robots in Isaac Asimov's Robot series
- The Minds in Iain M. Banks' Culture novels.
- Puppet Master in Ghost in the Shell manga and anime.
- Commander Data in Star Trek: The Next Generation
- The Geth in Mass Effect
- The Matrix films, in which machines came to life and subjugated humanity.
See also
- Artificial Intelligence
- Brain-computer interface
- Consciousness in animals
- Greedy reductionism
- Identity of indiscernibles
- Kismet (robot)
- Philosophy of mind
- Simulated reality
- Strong AI
- The Emperor's New Mind, a 1989 Roger Penrose book
- William Grey Walter
- Alan Rosen
References
- Buttazzo, G. (2001). Artificial consciousness: Utopia or real possibility? Computer, 34(7), 24–30.
- "Why not artificial consciousness or thought?", Schlagel, R. H., 1999, Minds and Machines, 9(1), 3-28
- "Minds, brains, and programs", Searle, J. R., 1980, Behavioral and brain sciences, 3(3), 417-457
- Levin, Janet (2013), "Functionalism", The Stanford Encyclopedia of Philosophy
- Harnad, Stevan (2008), http://eprints.soton.ac.uk/257741/
- O'Connor, Timothy; Wong, Hong Yu (2012), "Emergent Properties", The Stanford Encyclopedia of Philosophy
- Blackmore, S. (2003). Consciousness in meme machines. Journal of Consciousness Studies, 10(4-5), 4-5
- Loebner Prize Contest Official Rules — Version 2.0 The competition was directed by David Hamill and the rules were developed by members of the Robitron Yahoo group.
- Joëlle Proust in Neural Correlates of Consciousness, Thomas Metzinger, 2000, MIT, pages 307-324
- Christof Koch, The Quest for Consciousness, 2004, page 2 footnote 2
- Parisi, D. (2007). Mental robotics. Artificial consciousness, 191-211.
- Sanz, R., López, I., & Bermejo-Alonso, J. (2007). A rationale and vision for machine consciousness in complex controllers. Artificial Consciousness, 141-155.
- Aleksander I (1996) Impossible Minds: My neurons, My Consciousness, Imperial College Press ISBN 1-86094-036-6
- Wilson RJ (1998) review of Impossible Minds. Journal of Consciousness Studies 5(1), 115-6.
- Thaler, S.L., "Device for the autonomous generation of useful information," http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PTXT&s1=5659666.PN.&OS=PN/5659666&RS=PN/5659666
- Thaler, S. L. (2013)The Creativity Machine Paradigm, Encyclopedia of Creativity, Invention, Innovation, and Entrepreneurship, (ed.) E.G. Carayannis, Springer Science+Business Media, available at http://www.springerreference.com/docs/html/chapterdbid/358097.html
- Thaler, S. L. (2011), "The Creativity Machine: Withstanding the Argument from Consciousness," APA Newsletter on Philosophy and Computers
- Thaler, S. L. (1995). Death of a gedanken creature, Journal of Near-Death Studies, 13(3), Spring 1995
- Thaler, S. L. (1995). "Virtual Input Phenomena" Within the Death of a Simple Pattern Associator, Neural Networks, 8(1), 55–65
- Thaler, S. L. (1996). Is Neuronal Chaos the Source of Stream of Consciousness? In Proceedings of the World Congress on Neural Networks, (WCNN’96), Lawrence Erlbaum, Mawah, NJ.
- Mapping the Landscape of Human-Level Artificial General Intelligence
- "Consciousness". In Honderich T. The Oxford companion to philosophy. Oxford University Press. ISBN 978-0-19-926479-7
- Ericsson-Zenith, Steven (2010), Explaining Experience In Nature, Sunnyvale, CA: Institute for Advanced Science & Engineering
- Aleksander, Igor (1995), Artificial Neuroconsciousness: An Update, IWANN, archived from the original on 1997-03-02 BibTex Internet Archive
- Armstrong, David (1968), A Materialist Theory of Mind, Routledge
- Arrabales, Raul (2009), "Establishing a Roadmap and Metrics for Conscious Machines Development", Proceedings of the 8th IEEE International Conference on Cognitive Informatics (Hong Kong): 94–101
- Baars, Bernard (1988), A Cognitive Theory of Consciousness, Cambridge, MA: Cambridge University Press, ISBN 0-521-30133-5
- Baars, Bernard (1997), In the Theater of Consciousness, New York, NY: Oxford University Press, ISBN 0-19-510265-7
- Bickle, John (2003), Philosophy and Neuroscience: A Ruthless Reductive Account, New York, NY: Springer-Verlag
- Block, Ned (1978), "Troubles for Functionalism", Minnesota Studies in the Philosophy of Science 9: 261-325
- Block, Ned (1997), On a confusion about a function of consciousness in Block, Flanagan and Guzeldere (eds.) The Nature of Consciousness: Philosophical Debates, MIT Press
- Chalmers, David (1996), The Conscious Mind, Oxford University Press, ISBN 0-19-510553-2
- Cotterill, Rodney (2003), "Cyberchild: a Simulation Test-Bed for Consciousness Studies", in Holland, Owen, Machine Consciousness, Exeter, UK: Imprint Academic
- Doan, Trung (2009), Pentti Haikonen's architecture for conscious machines
- Franklin, Stan (1995), Artificial Minds, Boston, MA: MIT Press, ISBN 0-262-06178-3
- Franklin, Stan (2003), "IDA: A Conscious Artefact", in Holland, Owen, Machine Consciousness, Exeter, UK: Imprint Academic
- Freeman, Walter (1999), How Brains make up their Minds, London, UK: Phoenix, ISBN 0-231-12008-7
- Gamez, David (2008), "Progress in machine consciousness", Consciousness and Cognition 17: 887–910, doi:10.1016/j.concog.2007.04.005
- Haikonen, Pentti (2003), The Cognitive Approach to Conscious Machines, Exeter, UK: Imprint Academic, ISBN 0-907845-42-8
- Haikonen, Pentti (2012), Consciousness and Robot Sentience, Singapore: World Scientific, ISBN 978-981-4407-15-1
- Koch, Christof (2004), The Quest for Consciousness: A Neurobiological Approach, Pasadena, CA: Roberts & Company Publishers, ISBN 0-9747077-0-8
- Lewis, David (1972), "Psychophysical and theoretical identifications", Australasian Journal of Philosophy 50:249-258
- Putnam, Hilary (1967), The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion, University of Pittsburgh Press
- Reggia, James (2013), "The rise of machine consciousness: Studying consciousness with computational models", Neural Networks 44: 112–131, doi:10.1016/j.neunet.2013.03.011
- Sanz, Ricardo; López, I; Rodríguez, M; Hernández, C (2007), "Principles for consciousness in integrated cognitive control", Neural Networks 20 (9): 938–946, doi:10.1016/j.neunet.2007.09.012, PMID 17936581
- Searle, John (2004), Mind: A Brief Introduction, Oxford University Press
- Shanahan, Murray (2006), "A cognitive architecture that combines internal simulation with a global workspace", Consciousness and Cognition 15: 443–449, doi:10.1016/j.concog.2005.11.005
- Sun, Ron (December 1999), "Accounting for the computational basis of consciousness: A connectionist approach", Consciousness and Cognition 8 (4): 529–565, doi:10.1006/ccog.1999.0405, PMID 10600249
- Sun, Ron (2001), "Computation, reduction, and teleology of consciousness", Cognitive Systems Research 1 (4): 241–249, doi:10.1016/S1389-0417(00)00013-9
- Takeno, Junichi; Inaba, K; Suzuki, T (June 27–30, 2005), "Experiments and examination of mirror image cognition using a small robot", The 6th IEEE International Symposium on Computational Intelligence in Robotics and Automation (Espoo Finland: CIRA 2005): 493–498, doi:10.1109/CIRA.2005.1554325, ISBN 0-7803-9355-4
- Cleeremans, Axel (2001), Implicit learning and consciousness
Further reading
- Baars, Bernard and Franklin, Stan. 2003. How conscious experience and working memory interact. Trends in Cognitive Science 7: 166–172.
- Casti, John L. "The Cambridge Quintet: A Work of Scientific Speculation", Perseus Books Group, 1998
- Franklin, S, B J Baars, U Ramamurthy, and Matthew Ventura. 2005. The role of consciousness in memory. Brains, Minds and Media 1: 1–38, pdf.
- Haikonen, Pentti (2004), Conscious Machines and Machine Emotions, presented at Workshop on Models for Machine Consciousness, Antwerp, BE, June 2004.
- McCarthy, John (1971–1987), Generality in Artificial Intelligence. Stanford University.
- Sternberg, Eliezer J. (2007) Are You a Machine? The Brain, the Mind, and What It Means to Be Human. Amherst, NY: Prometheus Books.
- Suzuki T., Inaba K., Takeno, Junichi (2005), Conscious Robot That Distinguishes Between Self and Others and Implements Imitation Behavior, ( Best Paper of IEA/AIE2005), Innovations in Applied Artificial Intelligence, 18th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, pp. 101–110, IEA/AIE 2005, Bari, Italy, June 22–24, 2005.
- Takeno, Junichi (2006), The Self-Aware Robot -A Response to Reactions to Discovery News- , HRI Press, August 2006.
- Zagal, J.C., Lipson, H. (2009) "Self-Reflection in Evolutionary Robotics" , Proceedings of the Genetic and Evolutionary Computation Conference, pp 2179–2188, GECCO 2009.
External links
- Ron Sun's papers on consciousness
- Humanoid Robotics Ethical Considerations
- Scientific Ethics
- Artefactual consciousness depiction by Professor Igor Aleksander
- David Chalmers
- Online papers on the possible mechanisms of Higher-Order Thought
- ESF Models of Consciousness Workshop and its Scientific Report
- Machine Consciousness - Complexity Aspects Workshop
- Robot In Touch with Its Emotions 5-Sep-2005
- Robot Demonstrates Self-awareness 21-Dec-2005
- FOCS 2009: Manuel Blum - Can (Theoretical Computer) Science come to grips with Consciousness?
- www.Conscious-Robots.com, Machine Consciousness and Conscious Robots Portal.