Talk:Artificial consciousness/NPOV Version/discuss

From Wikipedia, the free encyclopedia

Is AC equivalent to consciousness or not

I consider it necessary to answer here a question asked at the beginning of the dispute over the "artificial consciousness" article, because amid the dispute my answer may have remained unclear.

"To say Artificial Consciousness is not Consciousness is simply to define Consciousness as being something human beings cannot build. If "it", whatever "it" is, is built by humans, then by definition it would not be conscious." (Paul Beardsell)

Yes, I said that artificial consciousness is not consciousness, and I insist on it, given that we define consciousness as the totality of a person's thoughts and feelings. This is the most general description of consciousness; narrower definitions are supposed to be used in specific contexts, but these may again be understood differently by different people. In that way we define consciousness through human abilities, as a totality of human abilities. And this may remain so, because we measure consciousness through our own abilities. As consciousness is subjective (Searle etc., feelings), we can never determine whether there is an equivalent to it. So if there is something similar to human consciousness in something other than a human, then we should call it by some other name, not simply consciousness. But there will always be many subjective abilities or aspects, such as certain feelings, if only because there will always be new conditions to which different people react differently, and they therefore understand the related ideas or experiences differently, which means that these ideas or experiences will be subjective. And we cannot build a machine to fully satisfy subjective concepts, at the very least because people can never determine whether such a machine finally does what it is supposed to do or not. Therefore a machine is something made by humans based on what they objectively know, not simply anything that can theoretically be emulated by an algorithm. So yes, by that reasoning, if a machine is built by humans, then by definition it cannot be conscious. I don't know of any AC effort to build a machine equivalent to a human, so it is not commonly considered that AC must be equivalent to consciousness. But although I don't agree with that view, I was not against including the opposite point of view in the article. Tkorrovi 20 Mar 2004

So is the human being only a machine or is it something more than a machine? Paul Beardsell 11:32, 22 Mar 2004 (UTC)

Paul, this is a very interesting philosophical problem, and this article may be worth having in Wikipedia even if only for mentioning it. It probably has its origins in the writings of Dennett. The main problem there is what a machine is. If we consider a machine to be anything that satisfies the Church-Turing thesis, then the logic is correct. But the question is: are humans capable of making all possible machines that satisfy the Church-Turing thesis? If for any reason they cannot, then the difference between human and machine is simply what we don't know yet, or cannot objectively model because it is subjective (understood differently by different people); it may be rational and satisfy the Church-Turing thesis (at least perhaps when applied to everything), and need not be a soul or a "magic spark". And although we learn more all the time, there always remain things we don't know, if only because people are constantly changing and are interdependent with their environment and each other. So the dilemma remains, and neither view is proved wrong. Tkorrovi 22 Mar 2004


By Occam's Razor, the simplest explanation consistent with the facts is likely to be the correct one; by the Copernican principle, no special or privileged position should unnecessarily be given to any part of the problem. On both counts, artificial consciousness will be real consciousness. The Church-Turing thesis says we would need new physics before two computing machines could be different; by Occam's Razor we should not posit new physics without good reason. By the Copernican principle we should claim no special position for human beings without good reason. The only good reasons we have are arrogant ones: humans are too complicated, too special, too something for their brains to be built or copied artificially. Surely, here you are correct: we have lots to learn, we learn more all the time, many things are possible, each POV must be in the article. But where you are wrong, if you hold this view, is in thinking that each POV has equal merit. No, the approach consistent with the scientific method says: artificial consciousness is likely to be real consciousness, by Occam's Razor and the Copernican principle. And that will remain the most likely POV until contradictory evidence is discovered. Paul Beardsell 16:43, 22 Mar 2004 (UTC)

If I remember correctly, Occam himself used this argument against Catholicism. The problem here is that the idea that artificial consciousness is equivalent to consciousness is also not the simplest solution: we don't know everything about consciousness, so making artificial consciousness in that way would be not only much more complicated, but an unfeasible task. Of these two, the second is the much simpler approach to building AC, and so perhaps also the only meaningful approach for AC in general. OK, at least neither view is proved wrong by this argument either. And it's simpler for us just to write down what the different views are. Tkorrovi 22 Mar 2004

Copernicus had to keep his head down when it came to the Church too. The Catholic Church's POV is, of course, that there is a magic spark. Paul Beardsell 17:21, 22 Mar 2004 (UTC)

I don't agree that "the approach consistent with the scientific method says: Artificial consciousness is likely to be real consciousness"; there is not only one approach within the scientific method either. What then about Chalmers, according to whom a simple awareness, like that of a thermostat, can be considered artificial consciousness? Why not just leave the different approaches in the article, without judging how "equal" they are? (BTW, I don't agree with Chalmers' version, or so-called "weak AC", either.) I think there is no "magic spark", just things we don't know yet, whatever they turn out to be. And your example of Copernicus reminded me of another example: what if Galileo had said that "the Earth orbits the Sun" and "the Sun orbits the Earth" are both true, instead of insisting that only the first is true? He might still have been right, because in accordance with general relativity we can look at things from any point and the equations still describe them correctly. Tkorrovi 22 Mar 2004

There is only one scientific method. Two scientists can take two different approaches to solving the same problem, and each approach can be consistent with the scientific method. It is a methodology, not a recipe.

Occam's Razor does not say there is only one correct way of explaining something, it says do not bother with the more complicated way when the simpler way accords with all the known facts. Of course, we now know that Galileo was wrong: The earth and the sun revolve around their common centre of gravity. That is the Einsteinian view also. Special relativity allows you any location and any linear velocity, but angular velocities (being acceleration) are NOT relative.

Paul Beardsell 18:04, 22 Mar 2004 (UTC)

OK, and it's not completely proved what is the simpler way for AC either. Tkorrovi 22 Mar 2004

Well, if strong AC is shown to be impossible, that will mean new physics (Penrose), or the existence of the magic spark (Catholic Church), or at least the Church-Turing thesis being shown wrong (OK, here you have a fighting chance, but don't bet on it). The simpler way has to be: no new science or religious revelation. Well, that is what Mr Occam says. Paul Beardsell 18:26, 22 Mar 2004 (UTC)

I think nothing is so dramatic: if strong AC is shown to be impossible, then there are just things we don't yet know, not necessarily even very different from what we do know. But why bother finding out whether strong AC is correct or not; just include it together with the other views. Still, this is an interesting philosophical problem, and most such problems are not solved; some are even in a sense "eternal". Penrose said consciousness is non-computable, so according to him there could be no AC, and almost no AI either. Tkorrovi 22 Mar 2004

Penrose proposes what physicists consider a REVOLUTION in physics to support his view. Penrose is a mathematician, not a physicist. The position is every bit as dramatic as I state. I agree, MAYBE strong AC is impossible, but IF SO then either (i) there will be new physics, (ii) there will be a metaphysical/religious revelation, or (iii) the Church-Turing thesis is wrong. This is NOT a matter of opinion, but of fact. We do not know about strong AC (as it does not yet verifiably exist), but if it is SHOWN TO BE IMPOSSIBLE, one of the three alternatives is required. Or (iv) possibly this might be one of those problems to which we will never know the truth, or (v) possibly consciousness does not exist at all, not even in humans, and we are unconsciously deluded. Paul Beardsell 19:00, 22 Mar 2004 (UTC)

Sorry, but doesn't Penrose's argument that consciousness is non-computable already say that strong AC is impossible? I don't agree with that argument, and I also don't see a need for anything non-computable (whether a soul or whatever else). But this is again a matter of views: some scientists agree with Penrose, some don't, but in an article about the matter all views must be included. Tkorrovi 22 Mar 2004

Background on Penrose: Most (practically all) scientists do not agree with Penrose, he is widely regarded as a crank in this field. He posits that human consciousness depends on quantum processes in microtubules in the neurons of the brain! Other scientists correctly ask: What is your evidence for that? Penrose ultimately has to resort to the circular argument that something special is required for consciousness. Paul Beardsell 08:58, 26 Mar 2004 (UTC)
NPOV: Wikipedia rules do not say that all views must be included. We can fairly leave the more cranky ones out. But, I agree, Penrose's views must be included. But it is neutral to expose his views to the fair criticism of others. And, before you complain, I think you are saying nothing different. Paul Beardsell 08:58, 26 Mar 2004 (UTC)

And then, wouldn't it be better to turn our attention from these tremendous philosophical and scientific problems to how to write the article: just include all the views there are, and that's it. I'm by no means against discussing, but we may not get much further that way. Tkorrovi 22 Mar 2004

Yes, but you started this section of this page to address the question "Is AC equivalent to consciousness or not?", so I am simply staying on topic. It seems to me that you must have a view: is the human being a machine or not? According to tkorrovi, what is correct? If you say yes, OK, we are just machines, then I ask what is so special about the type of machine that we are that other machines cannot be properly conscious. If you say no, we are obviously more than machines, then you are saying that true consciousness depends on some magic spark or, if you prefer, that it is a gift of God. And that would explain your belief that AC cannot be true consciousness. If you are undecided, then I suggest that your belief might be prematurely held. Paul Beardsell 09:33, 26 Mar 2004 (UTC)

Case to exclude 'thought' from definition of consciousness

I do not believe the definition of consciousness is yet right. No one has discussed how artificial consciousness is to embody thought. Artificial thought? Has anyone thought about this? Matt Stan 11:41, 22 Mar 2004 (UTC)

This is why there is the requirement to embody only what we objectively know; otherwise artificial consciousness would indeed be nonsense. Tkorrovi 22 Mar 2004
Sorry Matt, but we don't know a lot about thought, and by the definition (rather, description), AC is required to be capable of achieving only what we know, not exactly to "be capable of thought". Or did I misunderstand something? Would it not be enough to say that there are other definitions of consciousness in different dictionaries? (In fact yes, very different ones, but in almost every dictionary there is one possible interpretation of consciousness as more or less the totality of a person's thoughts, and often other abilities as well.) Would it not be right to include the alternatives as an alternative definition? Tkorrovi 22 Mar 2004
I have now changed it to say essentially that by the first definition some views would require AC to be capable of thought (which is an impossible requirement). I don't know how good the result is; change it if you like. But better to add comments to the views, not as a separate item, or maybe as a subparagraph under a certain view. Tkorrovi 22 Mar 2004


Perhaps general thought is the preserve of (artificial) intelligence: proving a theorem does not require consciousness (I suggest). Whereas reflexive thought, which humans, dogs and thermostats do all the time, is the preserve of (artificial) consciousness.

  • Humans: "I think therefore I am."
  • Dogs: "Why am I here again?"
  • Thermostats: "I'm hot."

Paul Beardsell 14:30, 22 Mar 2004 (UTC)
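Paul's thermostat quip can be read as the limiting case of a "reflexive" system: it reacts to its own state but does nothing else. A minimal sketch of that idea, with all names hypothetical and purely illustrative:

```python
# A deliberately trivial "reflexive" system: it reacts to its input,
# but performs no general reasoning. Names are illustrative only.
class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def reflex(self, temperature):
        # The thermostat's entire "inner life": a single comparison.
        if temperature > self.setpoint:
            return "I'm hot"   # i.e. switch cooling on
        return "I'm cold"      # i.e. switch heating on

t = Thermostat(setpoint=21.0)
print(t.reflex(25.0))  # → I'm hot
print(t.reflex(18.0))  # → I'm cold
```

Whether such a comparison deserves the word "consciousness" at all is, of course, exactly the point under dispute.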

It's good that you noticed that not everything requires consciousness. My opinion is therefore that consciousness is a totality of abilities: only all the mental abilities of an average human together give something that we call consciousness (and that feels like consciousness), except in some special cases in a restricted context (a patient is considered conscious when he blinks his eye). Tkorrovi 22 Mar 2004

Preparing for a merger

At the village pump tkorrovi asked what I thought about this version. And I replied I would comment here.

I think that a lot of work has gone into it and in some important ways it is better than the main article. I also think that the main article is better than this one in some important ways.

I can spot some obvious minor flaws (e.g. grammar, wording) here. I also can spot one or two larger mistakes made when tkorrovi made what is an obviously honest and well-meaning attempt to incorporate views that he himself does not hold. Experience tells me that correcting these errors here might be problematic.

I want to incorporate some of the main article's talk page into the article itself. Then the two pages need merging.

Paul Beardsell 14:23, 22 Mar 2004 (UTC)

OK then, thank you; prepare it here before merging. It surely needs work. I have some hope that we may agree. Not many people are interested in this article anyway, so if we, the only ones who talk about it, don't agree, that would be highly unreasonable -- we would create weakness where there could be strength. Tkorrovi 22 Mar 2004

I think a merger needs to be done quickly and might not be perfect. If it is not done quickly then for a time we still have multiple versions which then allows for further differences to occur. I think we must allow for temporary reductions of quality and even loss of some content. Sometimes going forward must allow for the occasional backward step. We will soon recover from any mistakes made. What I think you are suggesting is that there must be a consensus to have a new version, that you would still like a veto. Paul Beardsell 16:17, 22 Mar 2004 (UTC)

Yes, it's better to reach consensus in discussion, at least on the most important thing -- how to organise the article. I suggest the same way as in the NPOV version, with the comment at the beginning that views must be kept separate. Because it's clear that there are different views that remain opposed, like the "strong AC" and "weaker AC" schools of thought. Tkorrovi 22 Mar 2004

And instead of merging (or as a way of merging), I suggest adding everything that is considered to be missing into this version, and then simply replacing the main article with this version. I think it would be much easier to do it that way. What do you think? Tkorrovi 22 Mar 2004


If you bring everything across to this page and delete the current version, renaming this one, the edit history will be lost. That is not necessarily a bad thing, but it is not my preference. A way around this is not to rename, but to copy'n'paste back onto Artificial consciousness from here.

Can I also suggest, if you are going to take this huge task on, that you bring everything over to here, word for word, without editing, and only edit once it is here. Then the change log of the merge exercise will all be in one place.

I think that once it is all in one place some of the merging work can possibly be shared, if we are careful, but if you would prefer to have a go at it first that's fine by me, as long as we can tell from the log what has happened, and so that I can revert you no more than three times. (Joke!) Paul Beardsell 17:33, 22 Mar 2004 (UTC)

No, I thought exactly that: we make the changes here, and then copy and paste the entire version into the main article. The edit history would not be lost then. But what bringing over are you talking about? I brought over from the main article into the NPOV version everything I considered necessary; there was more, but if I didn't include it, it was just because I (and Matt Stan also) would like the article to be a little shorter. I don't want to bring over more, but feel free to do so if you consider it necessary. I made a few spelling and grammar corrections to what I included from the main article; please compare those paragraphs to the ones in the main article, and change them (or revert my changes) if they are not the way you like. In particular, I changed the wording of the "strong AC" argument a bit; do you agree that it is much more clearly put that way? Tkorrovi 22 Mar 2004

But a problem was caused. You brought things across but, in one or two areas, misinterpreted what someone else said. That may have been their fault, not yours, as they might not have been clear in what they wrote. The difficulty is that Wikipedia will not let you compare two different articles to find out what edit you made to the text while it was in transit from one to the other. Please bring it all over as is, possibly at the end of the article, and save that version BEFORE editing. Then cull, cut, and reinterpret, because the author of the corrupted paragraph can then see what has happened. No article should be longer than necessary. But it is sometimes necessary to get longer before getting shorter, as the bishop said to the actress. Paul Beardsell 18:12, 22 Mar 2004 (UTC)

OK, I may do so, but for this discussion it's important to know what exactly you consider that I misinterpreted. I just want to know; it may be important for the editing. I didn't want to misinterpret anything, but different people always understand things in slightly different ways, which is why it's better when several people look at the text: one notices what another doesn't. Tkorrovi 22 Mar 2004

The Issues

Would it be worth summarising what the issues are about this topic? I'll have a go, and perhaps we can reach some consensus:

1) The epistemological question of whether artificial consciousness is possible, or whether the term is an oxymoron, i.e. whether by definition consciousness cannot be artificial because it wouldn't then be consciousness at all. To get around this, we either have to remove the need for thought from the definition of consciousness or change the title of the piece to simulated consciousness.

2) The question of whether consciousness, or indeed artificial consciousness, necessarily requires a predictive capability, as suggested in the original article. Evidence from other sources should be provided to justify the original claim, and I have suggested that the alternative of anticipation should be included to cover this requirement.

3) The question of whether it is possible to define an average human for the purposes of setting criteria against which to measure the capabilities of an artificially conscious machine. No attempt has been made to indicate what this average is, and I have suggested that even a totally paralysed person or a highly mentally retarded person is still deemed to be conscious by humane people. I would add that a newborn baby and an Alzheimer's sufferer are also both conscious, although the latter probably in an impaired way.

4) The question of whether merely the ability to demonstrate consciousness of some phenomenon should be deemed consciousness (consciousness in the transitive sense), or whether consciousness is absolute and doesn't require its experiencer to be conscious of anything in particular in order to be conscious (consciousness in the intransitive sense). If we accept that any inanimate object that is used to engineer some outcome is itself conscious by virtue of its function, then there isn't really anything to artificial consciousness, and it could simply be defined as anything instrumental in achieving some end.

5) The question of reliable academic sources to back up claims made about a technical subject for the purposes of its entry in an encyclopedia; I haven't seen any evidence of such sources yet.

Matt Stan 18:21, 22 Mar 2004 (UTC)

That's a very good approach to take and it needs some conscious attention, it being 2:30AM here I will be back later. Paul Beardsell 18:37, 22 Mar 2004 (UTC)

[1] The term is bad, but it was coined by others like Igor Aleksander, and it is not for us to change it. You may start a "simulated consciousness" page of your own; this may even be a better term, but unfortunately it is not a widely accepted one. But a term is just a term; it must be defined, and the definition determines the meaning. It does not necessarily have to mean *artificial* *consciousness*; it may also mean the simulation of consciousness by artificial means, and that is not an oxymoron. Whether to remove thought is another question; we may also simulate everything that we can simulate about thought. But on some views there may be a need to exclude it; all these views must be included in the article.

[2] The article by Igor Aleksander, where predictive capability is considered one requirement for AC, is included in the NPOV version. For NPOV, all requirements that may be considered necessary should be listed, including anticipation, awareness etc., but in addition to predictive capability; we should not delete one requirement because another is included.

[3] What an average person means is more or less self-evident to most people. Paralysed people are considered conscious in another context (medical: whether the person can move his body or not). People usually don't say that a mentally retarded person has the consciousness of an average human. A newborn baby is another question; this is again a matter of views, but it is likely more than any artificial consciousness, in the sense that by learning it can mostly achieve all the abilities and aspects of consciousness of an average human.

[4] These are again definitions of the term "consciousness" intended for specific contexts. One view is to proceed from the most general definition, and this demands that almost all the mental abilities of an average person be present for something to qualify as having consciousness.

[5] Of course sources must be included, but as the term is in use, and is also in some sense important, it qualifies for entry into an encyclopedia more than many other subjects.

And maybe it's better to discuss a bit more slowly; otherwise the discussion will not have enough quality. There is nothing wrong in asking 5 questions at once, but it is not always best.

Tkorrovi 22 Mar 2004

A link, "lectures by Igor Aleksander", to show that the term "artificial consciousness" has been used in a scientific context: http://www.i-c-r.org.uk/lectures/spr2000/aleksander13may2000.htm Tkorrovi 22 Mar 2004

Also see http://www.ph.tn.tudelft.nl/People/bob/papers/conscious_99.html

Thank you indeed Matthew for the paper.

About the oxymoron. I have talked to several people, including some PhDs, about artificial consciousness, and not all consider artificial consciousness nonsense. And then again, some scientists consider, for example, consciousness studies (and everything related) nonsense as well; this is a matter of views again. So if you don't want anybody to consider what you do nonsense, then don't work on anything related to consciousness. At the same time, artificial consciousness is likely to be an important link between consciousness and AI. What most of the people I talked to say, though, is that the term "artificial consciousness" is somewhat misleading because of the words used. Without knowing any definition or anything else about it, the first association would be a human consciousness built artificially, or even a consciousness to replace natural consciousness (a very bad meaning). Many people don't like the idea that consciousness can be made by artificial means, and think that it must be some cranky effort to build an artificial human. Without a definition one cannot realize that what is meant is a mere simulation of conscious abilities, as close to the natural abilities as we can get based on our knowledge of the subject. Some efforts also involve artificially simulating certain feelings (emotions). Some are systems intended to be unrestricted enough to enable the development necessary to achieve certain abilities of consciousness, like prediction, or imagination, by enabling the creation of different alternatives in certain circumstances. But these are not immensely complicated systems (though often not easy ones either); at the very least they are very far from any artificial thinking at the level of the human. So yes, the term is bad and misleading; "simulated consciousness" or similar may be much better. But the term "artificial consciousness" came into use in scientific contexts, and my opinion is that it's not for us to change it.
But if you think otherwise, feel free to create a "simulated consciousness" article; the "artificial consciousness" article must remain, however, because this term is in use. Maybe it must be written that some people think it's nonsense, but then the same applies to AI, because some people think that it is nonsense as well. Maybe the AI article was once edited by people who thought it a failed field (I have that impression when I read the older entries), but later the people who remained to edit it were people who didn't think so. Compared to AI, AC is of course by far less significant. These were my somewhat random thoughts about the subject. Tkorrovi 22 Mar 2004

Matthew, notice that thoughts, and even feelings, were included in the definition of consciousness in the paper you presented. What I don't like, though, is the use of the word "soul". Even if it has a strictly defined and objective meaning, I think it's not right to use such a word in a scientific context, as it comes from religion or belief. There is such a variety of different ideas and interpretations concerning artificial consciousness, artefactual consciousness, simulated consciousness etc., that the only possibility is to write up the different views separately; there is no general consensus about this in science yet, but the research is still being done. Tkorrovi 22 Mar 2004

Prediction

I am puzzled about the idea of consciousness being associated with prediction. I thought that perhaps it meant anticipation in the short term, i.e. an immediate cogent reaction to imagined possible events (including internal events such as might emanate from thought processes). Can anyone explain, in relation to consciousness, what is being predicted and by whom, and why this is thought to be an essential component of consciousness? Matt Stan 08:49, 25 Mar 2004 (UTC)

According to my Concise Oxford Dictionary, "anticipate" in the wider sense means "foresee", "regard as probable" etc., so it means the same as "predict" ("foretell"). The difference with "anticipate" is that it has a narrower meaning, "deal with before the proper time". If you talk about an immediate reaction to imagined events, then you most likely mean that sense. No, "predict" is not used in that sense in AC. In the paper I added to the NPOV version, Igor Aleksander talks about the "Ability to predict changes that result from action depictively". It is also said in the paper by Rod Goodman and Owen Holland (www.rodgoodman.ws/pdf/DARPA.2.pdf) that "Good control requires the ability both to predict events, and to exploit those predictions". The reason we need to predict the changes that result from action is so that we can then compare them with the events that really happened, which enables us to control the environment and ourselves (i.e. act so that we can predict the results of our actions). This is also important for training AC: the system tries to predict an outside event, and if this event indeed happens, that gives it a positive signal. What is necessary for that is imagination, i.e. generating all the relevant possibilities for a certain case, for which the system must be very unrestricted. Also necessary is some sort of "natural selection", so that only those models (processes) survive that fit their environment. So the events are imagined not in order to react to them immediately; they are stored so that they can be exploited later, at the time the predicted outside event should occur. Tkorrovi 18:50, 25 Mar 2004 (UTC)
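The training scheme described above (imagine candidate events, reward models whose predictions come true, replace persistently wrong ones) can be sketched roughly as follows. All names, the events, and the scoring rule are hypothetical, chosen purely to illustrate the idea; this is not taken from any actual AC implementation:

```python
import random

# Hypothetical sketch: a population of trivial "models", each predicting
# the next outside event. Models that predict correctly gain fitness;
# persistently wrong ones are replaced ("natural selection" of processes).

EVENTS = ["hot", "cold", "quiet"]

class Model:
    def __init__(self):
        # Each model is just a fixed biased guess over possible events.
        self.bias = random.choice(EVENTS)
        self.fitness = 0

    def predict(self):
        return self.bias

def train(models, observed_events):
    for event in observed_events:
        for m in models:
            # Positive signal when the predicted event actually occurs.
            m.fitness += 1 if m.predict() == event else -1
        # Selection: replace the worst persistently-wrong model.
        worst = min(models, key=lambda m: m.fitness)
        if worst.fitness < -3:
            models[models.index(worst)] = Model()
    return models

random.seed(0)
population = train([Model() for _ in range(10)], ["hot"] * 20)
best = max(population, key=lambda m: m.fitness)
print(best.predict())  # prints the surviving best model's prediction
```

Real proposals use far richer models (neural networks, genetic algorithms), but the loop of predict, compare with the actual event, reinforce or discard is the part the quoted papers emphasise.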

I think that "Ability to predict changes that result from action depictively" is a bit too abstract for an encyclopedia, but your subsequent explanation gets the point across. I'm not sure, however, that the heuristic process that you describe above regarding learning is necessary for consciousness per se, although it is important for artificial intelligence, and I'm concerned that we shouldn't confuse the two - though they must indeed go together to some extent in any implementation. I think your point is perhaps important for a machine that has to learn, and therefore might be important in order to attain consciousness, i.e. part of the engineering/programming process aimed at achieving consciousness. However, once consciousness has been achieved, the ability to go on learning is not essential in order for consciousness to continue. I can remain conscious in a totally chaotic environment in which it is not possible to predict anything accurately. You seem to be saying that it is my constant and continuous attempts to predict what is going to happen next, moment by moment, every moment, that are a defining part of my consciousness. Whilst I am writing this, you might say that in the process of making my utterances I am having to predict what I am going to write next. But I would put that differently. I don't predict what I am going to do - I just do it, in what is called a stream of consciousness. Therefore I still question whether predictive capability has to be accepted into the definition of what constitutes AC. Matt Stan 19:39, 25 Mar 2004 (UTC)
Almost all "potential" AC systems (neural networks, genetic algorithms etc.) are trainable, i.e. the abilities are not necessarily attained during the engineering/programming process but during training, so the necessary condition for having a certain ability is being capable of achieving that ability. It's the same with humans: the only way to achieve certain abilities is to learn from childhood, so the abilities that enable us to learn are an important part of our consciousness, especially thinking. You said "I can remain conscious in a totally chaotic environment in which it is not possible to predict anything accurately" -- are you sure? Many people lose their minds during chaotic periods like wars. And you must think about what you write before you write it; maybe this is part of the stream of consciousness, but then it doesn't necessarily exclude prediction. That predictive capability is a necessary condition for AC is the view of several scientists, and therefore it must be included in the article. And the other views, which you support, must be included too. Tkorrovi 20:13, 25 Mar 2004 (UTC)
It's OK to cite authorities to back up one's own understanding. But it's no good citing authorities to back up one's lack of understanding or lack of capability to put across one's understanding. I keep trying to work back to a definition of consciousness which is exclusive of the things one associates with consciousness but which are not the same as consciousness. I disagree with Paul about inanimate passive objects possessing consciousness, e.g. a thermostat. I have to work from a human model, as this is the only real thing one can relate consciousness to. A newborn baby is either conscious - while it is awake, or unconscious, while it is asleep. Is that not a good starting point? Can anyone disagree with that and if so on what grounds? Now a newborn baby, with its mind half formed, has an enormous capacity for learning, but it doesn't become more conscious as it grows up; it stays conscious while it is awake and unconscious while it is asleep, right throughout its whole life. It learns to think, it learns to anticipate, but what it starts off with, which it has right from the start and never leaves it throughout its life, is the ability to be attentive, first to its parent who feeds it, to sounds, to colours, to movement. It does none of these things while it is asleep. Surely this is the basis upon which we can start to define simple consciousness. Never mind what abstract scientists may have said, which none of us can understand. We can't write about things we don't understand - only things that we do understand. Matt Stan 01:16, 26 Mar 2004 (UTC)
"A newborn baby is either conscious - while it is awake, or unconscious, while it is asleep. Is that not a good starting point?" Not at all. If we want to model the mental abilities of a functioning human being, then we must define consciousness in the widest possible sense, taking consciousness to be the totality of all mental abilities; otherwise the model will not be very good. This is why in many cases at least thinking is considered to be an aspect of consciousness. This is the view I support, but it doesn't mean that I consider the views of others a lack of understanding. There are different and often opposite views, and even in two opposite views, something may be right in both. Tkorrovi 12:21, 26 Mar 2004 (UTC)
I'm wondering whether Tkorrovi's rejection of small humans as a model of consciousness is indicative that Tkorrovi is actually a chat bot that has gone wrong, and hence Tkorrovi fails the Turing test, I'm afraid! :-) Matt Stan 14:30, 26 Mar 2004 (UTC)
But I did not reject small humans as a model of consciousness; I said here before that children are mostly capable of achieving all the abilities of consciousness of the average human, therefore they are very good "models" of consciousness, at least for AC. This idea is also reflected in science fiction: Vanamonde, for example, was an artificial being with immense potential but almost no maturity, built so that it would develop itself to the level necessary to defeat the Mad Mind. Tkorrovi 15:56, 26 Mar 2004 (UTC)

This hinges on the word "necessary". Anticipation is a very useful, desirable attribute for a conscious being to have. But that does not mean it is a necessary attribute of consciousness. I agree with Tkorrovi about all the advantages of anticipation, just not that it is necessary. That it is necessary has not been shown. Desirable, yes. Useful, yes. Necessary, no. Therefore that which is supposedly necessary does not merit a prominent, headline, first-definition position in the article. Paul Beardsell 09:13, 26 Mar 2004 (UTC)

Absolutely. I maintain that it is attentiveness that is the primary characteristic of consciousness, which any simulation must demonstrate. The point about prediction fails to indicate whether the prediction has to be a correct prediction. If we are to accept prediction/anticipation as an acceptance criterion then surely it must be qualified by whether the predictions are correct. Since I do not believe it is possible to predict the future with any degree of reliability at all, we are left with asking how reliable any set of predictions must be before they qualify. If the AC machine makes predictions that are always wrong then what value is that to the model? The AC machine can't make predictions that are always right, so should it be right 5% of the time, 15%, 30%, 50%, 60%, or what? Matt Stan 14:30, 26 Mar 2004 (UTC)
This is also something that fuzzy logic is meant to deal with. Tkorrovi 16:04, 26 Mar 2004 (UTC)
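The crisp-threshold problem Matt raises (must the machine be right 5%, 30%, 60% of the time?) and the fuzzy-logic reply can be illustrated with a toy sketch: instead of a hard pass/fail cutoff, a fuzzy membership function assigns a graded degree of "adequate prediction". The function name, its shape, and the breakpoints 0.3 and 0.7 are all invented for illustration, not anything proposed in the discussion.

```python
def prediction_adequacy(accuracy: float) -> float:
    """Toy fuzzy membership function for 'adequate prediction'.

    Maps a prediction accuracy in [0, 1] to a degree of membership in
    [0, 1], rather than a crisp pass/fail at an arbitrary threshold.
    The breakpoints (0.3 and 0.7) are illustrative assumptions.
    """
    if accuracy <= 0.3:
        return 0.0
    if accuracy >= 0.7:
        return 1.0
    # Linear ramp between the two breakpoints
    return (accuracy - 0.3) / (0.7 - 0.3)

# A predictor right half the time is 'somewhat adequate' (membership
# roughly 0.5), not simply disqualified:
print(prediction_adequacy(0.5))
```

On this view the question "how reliable must the predictions be?" dissolves into a graded judgement rather than a single magic percentage.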

Passive Inanimate Objects and Consciousness[edit]

If a human is passive then it can be conscious. So, "passive animate objects" can be conscious. If all "inanimate objects" cannot be conscious then the big question is answered and, in my view, we can go home. So, passiveness disqualifies an inanimate object from consciousness but not an animate one. Which is just too blatant an adoption of a privileged position to be allowed.

But only one of many. Luckily, for my argument, the thermostat is not passive.

Paul Beardsell 09:06, 26 Mar 2004 (UTC)

I think we get into a circular argument here, because what is animate? I don't think we can define animate without reference to consciousness itself. You perceptively picked up my difficulty when I put passive inanimate object. A thermostat is a passive device in my book, as a resistor is a passive device in a circuit. OK, a thermostat is active in the sense that it moves, if it's an electromechanical one, but it's passive in the sense that it plays no active role in setting itself. A thermostat that decided at what temperature to switch would be an active device, and such are used in so-called intelligent buildings. You might argue therefore that an intelligent building had a consciousness of sorts, but I think this is again confusing AI with AC, and it's important to maintain the distinction, as the two don't really have anything to do with one another, at least for the purposes of this discussion. Matt Stan 16:56, 26 Mar 2004 (UTC)

No. Nonsense. Any word now seems to mean consciousness. Or not. When the word is used in relation to a human then it means consciousness. When the same word is used in relation to a machine then it means not consciousness. Or the word is inadmissible because, why? It's a machine! Matthew, it is not an unreasonable prejudice to have, but it is unreasonable not to recognise it as a prejudice: You as good as define consciousness as something only a human (or possibly some higher animals) can have. I put you to the same test I put Tkorrovi: Magic spark or new physics? Paul Beardsell 17:15, 26 Mar 2004 (UTC)


My rationale is not that difficult. Once we have established a working definition of consciousness, a model, then we can specify the engineering required to simulate that. That will be artificial, or simulated, consciousness. When our processes for delivering such an artifact have become refined, then the final assessment might be that the product will be deemed to be conscious, as subjectively observed by other humans. Then, and only then, might it become true to state that consciousness had been obtained by artificial means. Suggesting that rocks and thermostats and blobs and bacteria and smoke and pieces of glass (and, of course, sealing wax) or whatever are already conscious is a trollish distraction and takes the discussion no further forward - in fact the opposite.

My method can be summarised as follows:

1. Define consciousness. In computing terms this equates to the business requirements. This definition becomes de facto the set of acceptance criteria against which an artifact might be deemed to possess consciousness. To make the task easier I am suggesting that consciousness should be defined in terms of its minimum requirements. If we can accept that a newborn baby is conscious then a machine that emulated the type of consciousness that a newborn baby has is going to be good enough. (We can't go down to the level of inanimate objects for the purpose of the definition of that which we are aiming to emulate, unless you are suggesting that I should try to make an artificial thermostat, in which case please define that!!! The term reductio ad absurdum comes to mind here.)
2. Analyse the components of consciousness and develop a model that could feasibly be implemented to deploy consciousness in a machine. These would be the system requirements.
3. Devise tests to assess whether the machine met the acceptance criteria for consciousness. This is, I suggest, a fairly trivial task (and one which a thermostat would fail). Incidentally, I'm happy to define the average type of human that might be a candidate tester (but I'll have a think about that one).
4. Design and build an artifact that implemented that model. This would be the candidate (artifically) conscious machine.
5. Run the tests and declare the result. It is likely that, as so often happens in software development, defects would become apparent that had not been anticipated in the design and that would therefore cause not all the acceptance criteria to be met.
6. Make improvements by repeating steps 2 onwards until the machine passed all the tests, i.e. met the acceptance criteria.

Is that such a controversial approach? We are still, unfortunately, bogged down at Step 1, or at least some of us are!
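Read as software engineering, the six steps above are an iterative acceptance-test loop. A minimal sketch, where the acceptance criteria and the candidate "machine" are placeholder stand-ins (a dictionary of toy flags), not a real consciousness test:

```python
# Hypothetical sketch of the six-step method as an acceptance-test loop.
# The criteria and the candidate 'machine' are invented placeholders.

def run_acceptance_tests(machine, criteria):
    """Step 5: run every test and return the list of failed criteria."""
    return [name for name, test in criteria.items() if not test(machine)]

def develop(machine, criteria, max_iterations=10):
    """Steps 2-6: improve the candidate until all acceptance criteria pass."""
    for iteration in range(max_iterations):
        failures = run_acceptance_tests(machine, criteria)
        if not failures:
            return machine, iteration   # all acceptance criteria met
        for name in failures:           # Step 6: repair defects and repeat
            machine[name] = True        # stand-in for a real improvement
    raise RuntimeError("acceptance criteria not met within iteration budget")

# Step 1: a toy 'definition of consciousness' as minimum requirements
criteria = {
    "attentive": lambda m: m.get("attentive", False),
    "responds_to_stimuli": lambda m: m.get("responds_to_stimuli", False),
}

machine, iterations = develop({}, criteria)
print(iterations)  # → 1: the empty candidate needs one round of repairs
```

The sketch makes the point of Step 1 concrete: until the criteria dictionary is filled in, the loop has nothing to test against, which is exactly where the discussion is stuck.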

Artificial or simulated?[edit]

Final point here: the use of artificial (or simulated, which effectively comes down to the same thing, from earlier discussion) merely denotes the idea of consciousness being made by means other than the natural means by which consciousness usually arises. Any attempt at artificial consciousness deployment that fails the tests will by definition preclude the artifact under test from being deemed conscious (or artifically conscious, if people wish to make that distinction), i.e. from possessing (artificial) consciousness.

How's that? Can we move forward now? Have you still got your Lego set [1], and the time and money to become the pioneer? I will award a prize of one anonymous Wikipedia login to the person who comes up with the first implementation. (See User talk:Paul Beardsell for decryption of the last sentence.) Matt Stan 19:01, 26 Mar 2004 (UTC)


Whether or not I am in a double-bind in another discussion has no impact here. Or should not! You are seemingly irritated with me pointing out contradictory use of language and asking you basic relevant questions which you do not address. I suggest this is because you are reluctant to challenge your own fundamental beliefs.  :-)

When you assert that simulated and artificial are the same you must recognise that this does firmly peg you into the "artificial consciousness can never be real consciousness" school. You insist on a recipe for consciousness which implies a set of values which makes consciousness human-like: So you are firmly pegged in that school too. These are perfectly reasonable if anthropomorphic views to hold. But you seem to deny the admissibility of other views.

What if (real) consciousness could be built, but not one that was sufficiently human-like to pass your tests? That would be a real tragedy: Refusing to recognise a possibly rich otherness.

I haven't written the rant page yet! Frustration, rather than irritation, I think. My narrowness is because I am concerning myself with one aspect. The fictitious implementations of AC in films and literature could probably stretch to a Wikipedia of their own. And of course if you stretch the definition of consciousness to include as yet unconceived-of forms of consciousness, that opens up interesting theories. I could conceive that the new Swiss Army penknife, which you can now get with a memory pod, might one day have successors which are conscious penknives, with a penknife's view of the world. Matt Stan 08:43, 27 Mar 2004 (UTC)

After that can I buy you a beer sometime between 9 April and 1 May? Of course!

Paul Beardsell 04:40, 27 Mar 2004 (UTC)

And it must have eyes that move. There have been non-anthropomorphic attempts. See Aibo. Matt Stan 08:46, 27 Mar 2004 (UTC)

See also: Microbotics, Domotics, Domobot, Digital pet, Tamagotchi, Neopets

And so, on to my next question. What will it be for, assuming we are talking about engineering an artifact, the exact requirements for which have not all been defined? It could end up an interesting curiosity, one that might even warrant putting a thermostat alongside it in the telling of the story of how it came about. Or what else could it be? I'm suggesting that if we met the initial criteria for passing the first test, then we would have achieved something which could be improved, and the notion of giving it heuristics of its own opens endless possibilities by which it might far exceed the constraints of mere human consciousness. As for being accused of anthropomorphism, I am not arguing necessarily that the model should be a baby, just that it should be considered, and alternatives proposed. And for us to be able to say that AC exists, rather than just being an idea on a discussion page, we need some artifact to mention. At the moment the only other candidate we have is a thermostat. I don't see why we should be averse to the idea of using ourselves as the model for the only thing we understand. If we are talking about a consciousness that is other, then I suggest we switch to entanglement theory and the idea that the future can alter the past, and see whether the patterns that are observable in the universe manifest what one might construe as a godly consciousness. But that would be neither simulated nor artificial. Please remind me of the qualities of this otherness. What should I read (or re-read) in order to gain an appreciation of it? Matt Stan 08:43, 27 Mar 2004 (UTC)

User: www.wikipedia.org
Wiki: Do you want to use the wikipedia conscious interface?
User: OK
Wiki: Verbose option?
User: OK
Wiki: I've forgotten your name. That securibot has probably been round again deleting your cookies. Last time I tried to change your settings, I got told off for generating unnecessary application event log messages and a flood of unfriendly port scans ensued. I think you're going to have to do something. But in the mean time, can you log in?
User: [logs in]
Wiki: Someone vandalised all your pages last night, and I've put out an alert to the sysops to get the culprits, all 143, dive
      • Woops, the network's going down. Back in a mo.
rted to our shadowpedia. Go make your coffee now, and I'll have a full report of everything that's happened about everything you are interested in by the time you get back."
User:OK
Note that I had the Use your name when talking to you option turned off in my Preferences, which the conscious interface manages for me.

Matt Stan 11:09, 27 Mar 2004 (UTC)



While I attempt to craft a more thoughtful response, this struck me after my last contribution:

One view is that AC will not be real because we are too dumb to build real C. This is a defeatist view which tempts us to give up before we start, but maybe it is a realistic view. AC which is really C might be so otherly (is that a word, I asked), othernessly (also no good). Otherworldly! There is my example. Should we be visited by aliens, how will we test that they are conscious? Easy! After testing their intelligence using the Turing test we will test their consciousness with the Stannard test. We will test these bilaterally symmetrical, bipedal, two-eared, swivelly-eyed aliens with our anthropomorphic (both meanings) tests!

If we assume aliens exist (at least for this argument) we have no good reason to expect aliens to be bipedal or even to breathe air. Yet we would expect (some) aliens to be conscious, I suggest. But that consciousness is less likely to be human-like, I suggest, than their locomotion is to be bipedal.

By this thought experiment I hope to have established that consciousness of a non-human type is possible or, depending on your cosmic view, likely.

Paul Beardsell 11:20, 27 Mar 2004 (UTC)

As to the quantum entanglement point: Certainly it is not me who seems to want to invoke new science or ignore old science: I have been pointing out that the existing science indicates there is no obstacle to AC being real. Paul Beardsell 11:20, 27 Mar 2004 (UTC)

I'm still not clear on the issue you take with my notions about artificial vs simulated. I had intended that they should refer to the same thing, but was pointing out that artificial consciousness is oxymoronic because once AC is achieved it ceases to be artificial and becomes real, whereas simulated consciousness can be as real-like as we make it and no semantic problems arise. Are you suggesting that simulated consciousness is actually something different, which I haven't taken into account?

I was also indicating that the test should be that humans should be the judge. I was not stipulating what the business requirements are, but putting forward a set that might meet the test requirement. You might come up with a philosophic argument that identifies an artifact as conscious, as per that philosophic argument, but that would not count in the popular view as consciousness. When the aliens come, will they be expected to give us logical proofs of their consciousness (to help us decide whether their consciousness, such as it is, is real or artificial?), or will it just remain a human perception as to whether they are or not? The less like us they are the more difficult it might be to judge, but I am maintaining that ultimately we can only judge by what we consider to be consciousness, based on our own experience. Therefore consciousness is necessarily anthropomorphically defined. And it has to interact with humans at some level in order to be tested. See Argument from ignorance and Anthropic principle. Matt Stan 11:56, 27 Mar 2004 (UTC)

Anthropic principle[edit]

You raise two points I can readily address: The definition of artificial and the anthropic principle.

Back when there was no assisted locomotion other than that provided by animals, had the concept of artificial locomotion been discussed, some might have held that it was impossible. That any locomotion so achieved would be simulated, not real. And that, therefore, simulated and artificial are synonyms. But they would have been wrong. I suggest that we stick to the dictionary definition: artificial - made by man or otherwise assembled, not arising naturally.

Formally: If A and B are both properties of X then it does not follow that A is B or that A is a subset of B or vice versa. A and B can be disjoint, distinct. Let X be "consciousness", let A be "simulated", and let B be "artificial".
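The formal claim can be checked mechanically with toy sets: two properties of the same thing need not contain one another, and can be disjoint. The property extensions below are invented placeholders purely for illustration.

```python
# Toy illustration of the formal point: "simulated" and "artificial" can
# both be properties attached to things without either being a subset of
# the other. The example members below are invented placeholders.

simulated = {"flight_simulator", "simulated_consciousness", "earthquake_ride"}
artificial = {"artificial_fabric", "artificial_consciousness", "motorcycle"}

# Neither set contains the other...
assert not simulated <= artificial
assert not artificial <= simulated

# ...and in this toy example they are in fact disjoint:
assert simulated.isdisjoint(artificial)
print("A and B are disjoint, distinct")
```

This shows only what the formal statement says: the possibility of distinctness, not that the real-world extensions of the two words actually are distinct.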

I don't think this proves anything other than to say that the possibility exists that simulated and artificial could be mutually exclusive, not that they necessarily are. Artificial implies an artifice, a cunning trick. That may well be so, but I thought the argument had reached the point where we were hypothesising that AC could become real, i.e. there was no deception about it, at which point it ceases to be artificial. Matt Stan 14:31, 27 Mar 2004 (UTC)
And that is my point precisely. You have been saying artificial and simulated are the same or that one is a subset of the other. Or, perhaps this more accurately characterises your argument: Artificial means not real which is the meaning of simulated. Paul Beardsell 04:37, 30 Mar 2004 (UTC)
It really is difficult not to be indignant, so take that as read. There is nothing derogatory about artificial. Artificial does NOT mean unreal. Consciousness could be real and entirely artificial. Your prejudices are showing. Paul Beardsell 13:38, 29 Mar 2004 (UTC)

And had the test of detecting something to be locomotion been that its propulsion must be similar to the locomotion then known, by legs, then the motorcycle would have failed the test. That consciousness must be tested against and by humans is your assertion. That it can be so tested, I agree. But a more objective test would be useful. You seem to use the fact that we are conscious as a handicap to us recognising consciousness elsewhere. I am not a snail yet I can recognise a snail. If I were a snail, recognising a snail surely would be easier, not more difficult? Let us decide on a simple, non-anthropomorphic (I believe I have shown this is necessary when dealing with aliens) definition of consciousness, and proceed from there.

A more objective test would verily be useful, but I have seen none proposed. Matt Stan 14:35, 27 Mar 2004 (UTC)
Compliance with the definitions of artificial and consciousness is my proposal. Paul Beardsell 13:38, 29 Mar 2004 (UTC)
That is a useful test requirement, but a test, please? Matt Stan 19:32, 29 Mar 2004 (UTC)

The anthropic principle is where you go when forced. It is not supposed to be your first refuge.

Paul Beardsell 12:27, 27 Mar 2004 (UTC)

I am saying I am forced to the anthropic principle, by the nature of that which we are discussing, as I would be forced into the hedgehog-otropic principle if we were discussing artificial hedgehogs. Matt Stan 14:35, 27 Mar 2004 (UTC)
We are not discussing artificial humans but artificial consciousness. It is you who say only the former can be the latter. Paul Beardsell 13:38, 29 Mar 2004 (UTC)
Take the set of all consciousnesses and make each type of consciousness a subset whose members are the attributes of that type of consciousness. The set of all consciousnesses contains human consciousness and all non-human consciousnesses. There may be intersections between human and some of the non-human consciousnesses. Now take artificial consciousness and draw it on the Venn diagram. Must it necessarily enclose all the attributes of at least one type of consciousness in order to qualify as consciousness? Let us assume that it does. So draw it anyway, and explain what that subset might contain, i.e. its attributes. Matt Stan 19:45, 29 Mar 2004 (UTC)
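The Venn-diagram exercise can be made concrete with toy attribute sets. All three consciousness types and every attribute name below are invented placeholders; only the set relationships matter.

```python
# Toy rendering of the Venn-diagram exercise: each type of consciousness
# is a set of attributes. The types and attributes are invented examples.

human = {"self_awareness", "attention", "language", "prediction"}
alien = {"self_awareness", "attention", "echolocation"}
snail = {"self_awareness", "attention"}
all_consciousnesses = [human, alien, snail]

# The attributes common to every type of consciousness:
common = set.intersection(*all_consciousnesses)

# On the assumption stated above, a candidate AC qualifies if it encloses
# all the attributes of at least one type of consciousness:
candidate_ac = {"self_awareness", "attention", "logging"}
qualifies = any(candidate_ac >= c for c in all_consciousnesses)

print(sorted(common), qualifies)  # → ['attention', 'self_awareness'] True
```

In this toy version the intersection of all the subsets is exactly the sort of attribute list the exercise asks for; what actually belongs in it is, of course, the open question.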
As possibly the only necessary attribute of consciousness is self-awareness, it will be an attribute which is an element of all your subsets. You won't allow the transitive definition of consciousness, conscious of, yet this would allow a discussion of attributes, which you suggest would be a good idea. I believe you prefer the intransitive consciousness, the internal-watcher idea (which is just the Cartesian way of saying self-awareness and which Dennett suggests is a consequence of recursion). I am not saying that self-awareness is not capable of analysis, nor am I saying that other attributes might not also be found common to all consciousnesses, but I am saying that many of the attributes discussed so far seem to me to be attributes of human-ness or usefulness: Not necessary attributes of consciousness. Paul Beardsell 05:10, 30 Mar 2004 (UTC)

Damn! I wish I had used flight not locomotion. Then I would characterise your argument as saying that for something to be flying it must have flapping wings. No, I would say, let's look at the definition of flight. Here, too, I say, let's look at the definition of consciousness. Is it self-aware? Then it is conscious. Is it man-made? Then it is artificial. Paul Beardsell 12:43, 27 Mar 2004 (UTC)

Flight and locomotion can be constrained by anthropomorphism as you say. But they can also easily be alternatively defined: locomotion as movement between points on the earth's surface; flight as movement between points on the earth's surface without touching any other points on the earth's surface in between. Therefore the means of achieving each of these need not be defined and there is no loss of understanding. Would that it were so easy for consciousness.
Why do you allow the dictionary definitions to prevail with flight but not with consciousness? Paul Beardsell 13:38, 29 Mar 2004 (UTC)
A flight simulator presents a different set of problems, just to confuse the issue. I went on the earthquake simulator at the Geology Museum (part of the Natural History Museum) a few years ago. The visual cues that it gives, in addition to its rumbling and motion, make one behave in a way that is consistent with experiencing a real earthquake, with people who aren't holding on falling over, even though the forces involved are not such that one can't easily resist falling over.
But it is not a real earthquake because it is not the earth which is quaking. Definition! Paul Beardsell 13:38, 29 Mar 2004 (UTC)
If I wanted to make an artificial hedgehog, but decided that to use a real hedgehog as a model was too hedgehog-ocentric, I don't reckon I'd make a very good artificial one. Similarly, I had inferred in this debate that the consciousness that we are talking about here is human consciousness, the only type of consciousness that I have direct experience of, both in myself and others. If we are defining artificial consciousness to mean consciousness that doesn't share any of the attributes of human consciousness, then that needs to be made explicit, and obviously a different set of tests would be required. These tests would have to conclude that because of the AC machine's dissimilarities from human consciousness the artifact passed a test for artificiality as well as some (as yet unspecified as far as I can see) test for consciousness of an other nature.
Matt Stan 13:43, 27 Mar 2004 (UTC)
This is dealt with. We are not discussing artificial humans. Paul Beardsell 13:38, 29 Mar 2004 (UTC)

I'm happy to stick to the dictionary definition: artificial - made by man or otherwise assembled, not arising naturally, though simulate is defined as: Imitate the conditions of (a situation or process); spec. produce a computer model of (a process). (SOED mid-20th Century usage). If the AC machine were to contribute to its own consciousness by virtue of its heuristics, would that instance of its consciousness be artificial, or could it be said to have arisen naturally as a result of its having received a wake-up call from an external source? I think simulated indicates a more robust approach, and of course is less anthropomorphic than artificial. Matt Stan 13:57, 27 Mar 2004 (UTC)

Simulate implies not real. Simulated consciousness has the meaning that your emotional overloading of the term artificial has when you say artificial consciousness which is why you think they are equivalent. Paul Beardsell 13:38, 29 Mar 2004 (UTC)

Self-awareness[edit]

Have we been round this loop yet? Self-awareness implies to me the notion of the self that is aware and that which it is aware of, and allows this for stimuli arising in its internal environment, but it leaves out the idea of the external environment, i.e. awareness that is not self-awareness. That it is aware of either is determined by its paying attention to one or the other (or both). Therefore, this self-awareness is subsumed within attentiveness - it is just one part of it. The notion of the self that is aware/attentive is an important prerequisite though. Perhaps self-awareness is the wrong term for the fundamental characteristic of consciousness and should be replaced with awareness of environment, where environment includes input from external sources via senses and input from internal resources such as memory.

Also see Consciousness-only.

About awareness http://tkorrovi.proboards16.com/index.cgi?board=general&action=display&num=1080491783 Tkorrovi 16:20, 28 Mar 2004 (UTC)

Mmm. I read the piece on the bulletin board at the URI above. It doesn't seem to bring any new material into the discussion. In my thesis for an implementation (which I realise is not the totality of all material that should be in an encyclopedia article on AC) presented during this discussion, I have merely been trying to test the ideas presented with a view to reaching a working and workable definition of AC. I am suggesting attentiveness as an alternative to self-awareness as a defining characteristic, on the basis of my contention that the latter is subsumed within the former (as outlined in my previous posting above). On the theme of verifiability, I maintain that it would be relatively easy to test whether an AC artifact was capable of manifesting attentiveness, by providing it with some unprompted input and observing how its operation was affected by that input. For example, if it had eyes, I would walk past it and see if its eyes followed me. This would be a simple test. (It wouldn't necessarily be conclusive, because the machine might be blind and still conscious, or so attentive to another (internal) activity that it did not see me. Nevertheless, if the test passed, I could weigh that in favour of a case to say that the machine was conscious.) I now ask, how would you test that it had self-awareness? We know we are conscious because we have self-awareness; that is a given. But we are here talking about a simulation of consciousness, so we can't use the a priori argument that a machine must have self-awareness. If we can't define a test for self-awareness, i.e. it is not testable, then it can't be a requirement. Matt Stan 18:08, 28 Mar 2004 (UTC)
Attention is defined in Wikipedia as "conscious concentration on something". It certainly is one of the abilities of consciousness, but I don't know why you consider it the most important. Perhaps it's a problem for some how the human mind can switch its attention from one subject to another, but it is not so big a problem if we consider that many processes run in the human mind simultaneously. Then it just means that more priority is given, for example, to processes more related to what happens in the environment at present; this is just one, and quite formal, aspect. "...if it had eyes, I would walk past it and see if its eyes followed me" -- such a machine has been made, by the Japanese if I'm not mistaken, and it is far from certain that this is AC, at least not commonly considered as such. But that self-awareness is any kind of defining ability of AC is not my view at all; this term was used in digital sentience, which is now merged, and they may tell more about where it comes from. But of course it can be considered one ability of consciousness necessary for AC, even if it can perhaps only be tested together with training the system to achieve other abilities. There need not be any single defining ability; this is why the system must be capable of achieving all known and objectively observable mental abilities of the average human -- it must just have the means that make it capable of this, as a small child has. I consider it logical that, even though self-awareness etc. is not a defining ability and maybe cannot be separately tested, it would not be a proper simulation of human consciousness if it could not develop any self-awareness. Tkorrovi 18:51, 28 Mar 2004 (UTC)
If self-awareness really isn't objectively observable, then it cannot be a requirement, but this is far from certain. Tkorrovi 19:09, 28 Mar 2004 (UTC)
My point about self-awareness not being a requirement of AC is solely that it is not testable. It isn't even really testable in humans. We only accept it as a defining characteristic of human consciousness because we all have it, and therefore no one can deny it. The AC machine may or may not have self-awareness as you or I understand that term, but neither of us could prove that it did. There is an interesting psychological argument that self-awareness is anyway only an illusion created by people to enable them to relate to other people. If self-awareness is therefore itself an illusion, it cannot be said to exist in any objective sense. The attributes of attention are what define our consciousness as observed by other people. Moving my eyes to watch something does not in itself prove my consciousness, and, as you say, Japanese robot dogs do this. But my not moving my eyes might be seen as evidence of my absence of consciousness. I am suggesting that if we simulate all the attributes of consciousness - as observable by a human - then the artifact that manifests all those characteristics would pass the AC test. Whether or not that artifact possessed the non-observable characteristics (such as self-awareness) then becomes irrelevant.
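The attentiveness test proposed in this thread (give the candidate an unprompted stimulus and observe whether its behaviour changes) can be sketched as a toy procedure. The `Machine` class, its `gaze` attribute, and the stimulus coordinates are all invented assumptions for illustration; no real test harness is implied.

```python
# Toy sketch of the attentiveness test described above: provide unprompted
# input and check whether the candidate's observable state tracks it.
# The Machine interface and 'gaze' attribute are invented for illustration.

class Machine:
    def __init__(self):
        self.gaze = (0, 0)            # where the machine is 'looking'

    def perceive(self, stimulus_position):
        # An attentive machine redirects its gaze toward the stimulus.
        self.gaze = stimulus_position

def attentiveness_test(machine, stimulus_position=(5, 3)):
    """Passes if observable behaviour changes to track an unprompted stimulus.

    As noted in the discussion, a pass is only weak evidence (a robot dog
    passes too) and a fail is not conclusive (the machine might be blind
    but conscious); the result is merely weighed in favour of a case.
    """
    before = machine.gaze
    machine.perceive(stimulus_position)
    return machine.gaze != before and machine.gaze == stimulus_position

print(attentiveness_test(Machine()))  # → True
```

The contrast with self-awareness is visible in the code: attentiveness is checked entirely from observable state, whereas no comparable assertion could be written against an unobservable inner property.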

Average Human[edit]

You are still using the term average human, but I maintain there is in any event no such thing, and that a human who manifests the minimum requirements of consciousness is nevertheless conscious. Therefore we should aim in the first instance that an AC implementation emulates the minimum requirements rather than any notional average. The problem is hard enough without making it more difficult unnecessarily. Matt Stan 21:35, 28 Mar 2004 (UTC)

I agree. Whereas Hawking is super-(average human) in some respects, he is sub-(average human) in others. This demonstrates that the concept of the average human is a difficult one, and even if it could be determined I deny the usefulness of the concept in respect of consciousness. I do not think consciousness is the preserve of humans, and I think the consciousness exhibited by them (speak for yourselves - mostly I do not identify with other humans, never mind the "average" one) is merely one example of consciousness. Paul Beardsell 14:17, 29 Mar 2004 (UTC)
You may be right as far as self-awareness is concerned. But concerning the average human: we define consciousness through human consciousness (most dictionaries do this), and we must have some measure. If you say minimum requirements, then it would be that of an idiot (not a disabled person who can only move his eyes, because that says nothing about his mental abilities). And I don't think that an artificial idiot would be a simulation of human consciousness; in fact even programs like Eliza may be considered to be one. And average is a well-determined statistical term; we can calculate an average from any objective test results. Tkorrovi 22:00, 28 Mar 2004 (UTC)
NO! If a dictionary does define consciousness thus (and I do not agree most do) then it is doing so merely because human consciousness is the only undisputed consciousness currently. The dictionary similarly defined fabric in terms consistent with natural fabric back when artificial fabric was unknown.
Average is OK if you have some metric. Do you have average fingers? You might have average-length fingers, or the average number of fingers (5 on each hand), but just average fingers? There is no such concept. Same with humans. A statistic has to have some measure, and there is no such measure of humanness as far as I am aware. I am saying that the first implementation of AC would be that of what you call an idiot, provided that people saw it as something conscious. It would be better if it had intelligence too, but that is not essential. There is still a difficulty with what characteristics it should have in order to pass the test, which is why I suggested earlier that there should be a defined model upon which to base the implementation. The best I could think of was C-3PO from Star Wars, who is an idiot! He has no brain at all! All the intelligence is provided by R2-D2. What C-3PO has, though, is a fairly good representation of a caring consciousness which, I suggest, if we could get a machine to emulate (for real, as it were), would pass the test of AC. Matt Stan 22:13, 28 Mar 2004 (UTC)
Hence my thermostat, or my artificially conscious penknife. The one that knows what is the most suitable tool from its proximity to other objects. The one that offers the short blade because it knows its long blade is blunt, the one that retracts the blade and extends the tweezers when its owner's eye is close. The man-made (artificial) self-aware (conscious) penknife. Paul Beardsell 14:17, 29 Mar 2004 (UTC)
We may imagine a human who has average results in all tests (IQ test, mathematics test, art test, history test, whatever). Maybe there is no such human, but we can mostly say who is closer to this and who has extraordinary abilities; we don't even have to define it very exactly, just so that it is commonly understood. But OK, then consider the "capable human"; "capable" is a well-defined legal term. But not an idiot who has nothing even close to normal human consciousness (in its widest sense), most importantly almost no thinking. I hope that's clear enough, but even that view is now included (i.e. excluding thought from the definition of consciousness), though I and likely many others don't agree with it. Tkorrovi 22:39, 28 Mar 2004 (UTC)
I am happy with using capable rather than average, but this still raises a question: if we are trying to make an AC machine with the minimum resources necessary to be convincing, then if we reduced any of its capabilities slightly, would it still be convincing? If so, then leaving that capability at its higher level is not necessary to demonstrate AC, and we should be able to go on reducing each of the machine's capabilities to a notional minimum level, which might be well below those of any model that we took in the first place to represent the capable human. Matt Stan 07:50, 29 Mar 2004 (UTC)
I am proposing that the benefit of the doubt be given to an object in respect of artificial consciousness. Being interested in justice, we allow the accused the presumption of innocence. I suggest that, being interested in consciousness, we allow the artificial object the benefit of the doubt when it comes to consciousness. Paul Beardsell 14:17, 29 Mar 2004 (UTC)
It is convincing when it is built so that it would theoretically be capable of achieving all objectively observable mental abilities of a capable human, so the question is about the principles by which it is built, not the exact capability of the trained system. Tkorrovi 11:51, 29 Mar 2004 (UTC)
That would be convincing but it sets the barrier too high. Paul Beardsell 14:17, 29 Mar 2004 (UTC)

A Body[edit]

I was interested to read in the articles starting with Artificial intelligence (which, incidentally, cover much of the ground we have been attempting to cover here) that one of the commentators had indicated that a body is an essential prerequisite for digital sentience. I need to go back to those articles to resolve what in effect are the intersections/distinctions between AC and other forms of artificial humanity (or whatever we want to call it), but I pose here the question as to whether the artifact that we are postulating for the purposes of proving the existence of AC must necessarily have some robotic element, i.e. whether it cannot be entirely absorbed in self-awareness, or, put rather more crudely, onanistic. For example, even if we decided not to build a mechanical robot to demonstrate AC, the representation of an image on a screen, coupled with a camera pointing at whoever was watching that screen, could help to give the AC machine the necessary response mechanisms to be verifiable. Without such, or similar, could it ever be convincing? I suppose I am specifying something very basic, i.e. that the thing needs outputs, in order that we can observe it, and inputs, in order that we can test it. These may not be prerequisites of AC per se, but for the purposes of testability I am suggesting that they are prerequisites of any verifiable implementation. Matt Stan 07:50, 29 Mar 2004 (UTC)

Stephen Hawking has very few ways to express himself, but he understands things much better than you. In a computer, pulses go in and pulses go out; we can interpret them as text, as images, as sounds, etc. The same happens in the brain. Tkorrovi 12:28, 29 Mar 2004 (UTC)

To avoid causing offense I think you should say that Hawking understands things better than "you or I". Paul Beardsell 13:47, 29 Mar 2004 (UTC)

I agree about reading the other Wikipedia articles: Consciousness needs tidying up but there is some good stuff there. Paul Beardsell 13:51, 29 Mar 2004 (UTC)

I reckon the body could be simulated but the consciousness be entirely real, even if artificial. The conscious entity would, in this example, be deluded about the existence of its body. Paul Beardsell 13:56, 29 Mar 2004 (UTC)


Necessary attributes of consciousness[edit]

In this section I suggest we list those attributes of consciousness which are necessary. I.e. If any one of the listed attributes is missing from an entity then the entity is not conscious. Having all these attributes does not necessarily make the entity conscious either!

Self-awareness[edit]

The conscious entity should know something about its own state. The thermostat knows if it is too cold or too hot.

It should know its own physical limits, its (real or simulated) body. Insects qualify here. Trivially: a tamper-resistant device could be said to have this.

It should understand something about its identity: that it is distinct from other, possibly similar, objects. Many vertebrates seem to get this right. Trivially: some devices (e.g. RFID tags) are acutely aware of their own serial number.

Paul Beardsell 05:10, 30 Mar 2004 (UTC)
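The three attributes listed above can each be given a deliberately trivial reading in code. The sketch below is only an illustration of those trivial readings (the class, its fields, and its thresholds are invented); whether such a device can be said to "know" anything is exactly the question taken up in the next section:

```python
class TrivialDevice:
    """A device manifesting minimal versions of the three attributes:
    own state (thermostat), physical limits, and identity (serial)."""

    def __init__(self, serial, min_temp, max_temp):
        self.serial = serial      # identity: its own serial number
        self.min_temp = min_temp  # physical operating limits
        self.max_temp = max_temp
        self.temp = 20.0          # current sensed temperature

    def own_state(self):
        # Self-awareness, thermostat-style: too hot, too cold, or OK.
        if self.temp > self.max_temp:
            return "too hot"
        if self.temp < self.min_temp:
            return "too cold"
        return "ok"

    def is_me(self, other_serial):
        # Identity: distinct from other, possibly similar, objects.
        return other_serial == self.serial

d = TrivialDevice("RFID-001", min_temp=15, max_temp=25)
d.temp = 30.0
print(d.own_state())        # -> too hot
print(d.is_me("RFID-002"))  # -> False
```

Having all three attributes in this trivial sense clearly does not make the device conscious, which is the caveat already stated above: the attributes are proposed as necessary, not sufficient.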

Epistemology[edit]

My problem here is with know. How do you know whether a thermostat knows anything, as distinct from how I know you know anything? In your case, I can ask you, 'How do you know you are too hot?' as opposed to just 'Are you too hot?'. Not so with a thermostat. There is a distinction between knowing and just being - it's an epistemological question that needs addressing in terms of AC entities. Matt Stan 08:08, 30 Mar 2004 (UTC)

When I say I know something you recognise that I am at least superficially similar to you, and that I might, therefore, mean something similar by that term as you do. You have said that you are forced into the anthropic principle in these circumstances when talking about consciousness because of problems like this. Interestingly, when I say I am too hot you know what I mean, but you might disagree and think it too cold! When I say it is hot I am not really making a comment only about the temperature: amongst other things, I am referring to how quickly I am gaining or losing heat. This is a complicated function of several factors: my current metabolic rate (itself a function of how recently I ate, recent exercise, etc.), the wind speed, the humidity, how I am dressed, whether the heat I receive is from radiation or conduction, and so on. When a thermostat "says" it "knows" it is too hot, it makes a reliable comment about the temperature. I do not. Yet you allow me "knowledge" about the temperature but you deny it of the thermostat. Essentially, once again, you reserve the word "know" for humans. Fine, say I: what word will you allow for thermostats? I will use that for humans too. Paul Beardsell 08:52, 30 Mar 2004 (UTC)

You know I am too hot because I turned the air conditioning on. You know the thermostat is too hot because it has turned the air conditioning on. Paul Beardsell 09:09, 30 Mar 2004 (UTC)

Discussion continues on AC talk page[edit]

This page was copied to Talk:artificial consciousness. As the NPOV version is merged, we should continue the discussion there. Tkorrovi 12:34, 26 Mar 2004 (UTC)

The discussion has continued here. Matt Stan 08:10, 27 Mar 2004 (UTC)