Talk:Artificial consciousness/Archive 1


AC as a field of study

At 19:05, 11 Aug 2003, 24.214.173.244 commented under the heading "Absurd Science", questioning whether this is a worthy topic of discussion, thus:

A totally and completely false assumption:

There is no accepted definition or understanding regarding real consciousness, yet there is a field of artificial consciousness? How absurd!

http://www.enticy.org

tkorrovi replied:

There was a passionate debate about the same question in ai-forum, which I won't repeat here, but the result was rather that it must be clearly stated that all the abilities of consciousness mentioned must be known and observable. AC is not consciousness.

http://www.ai-forum.org

Is artificial consciousness real consciousness?

To say Artificial Consciousness is not Consciousness is simply to define Consciousness as being something human beings cannot build. If "it", whatever "it" is, is built by humans, then by definition it would not be conscious. The Philosophical Criticisms section of artificial intelligence applies directly to this topic too.

What is the special thing about humans that allows them consciousness? Humans are either machines (in which case the Church-Turing thesis applies) or they are not (in which case there is some magic spark). You (whoever wrote what I am commenting on) have now to decide: which is it? For your view to be consistent, either you require a new computer science, possibly requiring new physics, or you have a soul. Speak up now. Paul Beardsell 01:40, 7 Mar 2004 (UTC)


Yes, this comment was written by me, and I meant that artificial consciousness and consciousness are different terms. That doesn't mean that artificial consciousness necessarily is not the same as consciousness, or that it must be the same as consciousness; simply because of the subjective nature of consciousness as a whole, we can never decide whether artificial consciousness is the same as consciousness or not. tkorrovi


A question you leave open is this: how similar is my consciousness to yours? Were I to build a machine which has the same characteristics as my brain - artificial neurons with the same latencies, triggering thresholds etc. - and were I to scan my brain, take a backup, and load it into the machine, might not that machine be artificially conscious, yet more similar to my consciousness than it would be to your consciousness? Paul Beardsell 13:24, 7 Mar 2004 (UTC)

The machine's AC would be more like my C than your C is to my C. Paul Beardsell 13:26, 7 Mar 2004 (UTC)


I think that the question of what differs between the consciousness of different people always remains open; also, what is in the brain depends a lot on everything outside it, and changes a lot. But then we may also look at the differences between human consciousness and systems which can never become conscious, like your text editor. tkorrovi

AC forum http://tkorrovi.proboards16.com/


The questions are difficult, but I do not think they will "always remain open". If I understand you, then you are saying that the difference between AC and C is simply terminological, i.e. Artificial Consciousness is Consciousness in all but name. But you started off with a remark that concludes that they are not the same: "AC is not consciousness."

Which is it? Where do you stand?

Paul Beardsell 06:11, 8 Mar 2004 (UTC)

external source

Paul, please discuss before you remove something. This definition was a collective effort, discussed before in different forums. In the definition, something similar to "the ability to predict the external events etc." was originally proposed for intelligence (that form was written by me; Rob Hoogers proposed "the ability to predict how the external processes will develop"). But then it was considered to overdefine intelligence: the ability to predict demands imagination, creativity etc., which may even require feelings, and so it doesn't fall entirely under intelligence. It was then replaced by a narrower definition, though that most likely didn't define intelligence entirely either. Because of that, the ability to predict was added as one ability of consciousness. What makes it strong is the requirement to be able to do it in any possible environment when possible; this is very demanding and so indeed makes AC a *strong* AI. This also corresponds to my program in terms of theory.

The first part was also discussed in ai-forum, and it was decided to include it exactly as it was, except for a later change by "195.218.198.164" removing "theoretically", which I agree with because the rest says the same. Your proposed definition, "An artificial consciousness (AC) is a man-made or otherwise constructed system which is conscious", simply is not proper because it doesn't define anything at all, as "conscious" isn't defined and *cannot* be defined, because consciousness is a subjective term. Please understand one thing -- a subjective term cannot be defined. BTW, sorry, I restored the definition. At least please discuss before you are going to remove anything. So if you have any questions concerning the definition etc., please ask and we will discuss it either here, or in the AC forum, or in a place you like.

AC and C are different terms, but the difference is not only terminological: AC is artificially created, while C is natural; and AC is objective, while C is subjective. Concerning what you said about Igor Aleksander, I may agree if you indeed have evidence that the term was used before Igor Aleksander used it. On that point my evidence is unfortunately confined to the Internet. tkorrovi


The external source for your definition is your artificial consciousness forum, which you dominate. That it represents a broad consensus I doubt. I had difficulty even parsing it. That an ability to tell the future is necessary for consciousness seems risible to me. The claim that AC is AI is also rubbish.

I would have thought that you would have read the literature widely before claiming that the term artificial consciousness was first used in 1996. If you had, you would know that claim is not true; you would also know that the term has been better defined by reputable computer scientists and philosophers than the definition you use. I refer you to the popular works of Daniel C Dennett, Douglas Hofstadter and Roger Penrose, for starters, which you have read, I presume.

Paul Beardsell 14:26, 8 Mar 2004 (UTC)

please discuss

Please discuss; let us please try to act reasonably. Defining artificial consciousness through consciousness is not circular, because these are different terms, and artificial consciousness is a subset of consciousness which comes as close to consciousness as what we objectively know about consciousness allows. Also, though consciousness as a whole cannot be defined, things can be defined through it, i.e. through those of its abilities which are known and objective and so can be determined. tkorrovi


I *am* discussing it. But I cannot allow a nonsense (i.e. does-not-make-sense) definition to survive.

Despite the verbiage, you are not discussing the issues. You make questionable assertions as fact without being prepared to back them up. Who says you cannot define something which is subjective? Who? You. Who says that consciousness is subjective? You. Who is it who insists on defining artificial consciousness? You. CONTRADICTION.

You say AC is not C because one is subjective and the other is not. Rubbish. Your C should be objectively discernible to me. As should the AC of a machine.

I carefully constructed an argument which shows that AC and C are the same. This you have ignored.

You are not discussing the issues. I have been, but you ignore what I have said. Instead we get meaningless nonsense like your latest para above.

Paul Beardsell 14:40, 8 Mar 2004 (UTC)


> The external source for your definition is your artificial consciousness forum, which you dominate. That it represents a broad consensus I doubt. I had difficulty even parsing it. That an ability to tell the future is necessary for consciousness seems risible to me. The claim that AC is AI is also rubbish.

Where did I say that, or where did you find that? I created a forum to discuss AC; I thought there is a lot to discuss, and that is a good thing. And I *don't* dominate it: I have never deleted a single post from that forum and likely never will; my only policy there is to delete posts which are obviously offensive, and there have been none such so far. I hope you don't call that "domination". Otherwise, I stated my opinion there, which everybody has a right to do. Some places where the definition was discussed were the ai-forum, the Hawking forum and the astronomy.net forum.

When you read the dictionaries, you see that there are almost always several explanations of consciousness; being self-aware is only one of these, another being "totality of thoughts and feelings" (Oxford dictionary), etc. The ability to predict was said to be one ability of consciousness; if you think that consciousness doesn't include that, then that too is widely disputed. (I don't know "risible" to be any English word, honestly; are you making jokes at my expense?) OK then: if you argue that defining through self-awareness is correct, then the only reasonable option is a compromise, and to include both possibilities.

I think the question of whether AC is part of AI or a separate field is open. Some AI people want AC to be part of AI; others maybe not. This again depends on how we define AI.

> If you had, you would know that claim is not true; you would also know that the term has been better defined by reputable computer scientists and philosophers than the definition you use. I refer you to the popular works of Daniel C Dennett, Douglas Hofstadter and Roger Penrose, for starters, which you have read, I presume.

OK, then why don't you include the facts you know?

tkorrovi


I will go with "totality of thoughts and feelings" but not with predicting the future.

I have found a posting on your AC forum where you (July of an unspecified year) state flatly that AC and AI are not the same. Here you hold the other view. Which is it? For the record I did not state in the article that I thought them different (it happens that I do) but that most consider AI a prerequisite for AC. That most do so is a fact.

You ask me to post what I know. This I did. You reverted it. A Google search for '"artificial consciousness" conference' finds one in 1995. There are earlier mentions of AC on the web.

Paul Beardsell 15:15, 8 Mar 2004 (UTC)

it's not "tell the future" but "anticipate"

I think this talk of being able to predict the future is a misinterpretation of what was originally written, which I had trolled, with my daft reference to psychics, in order to elicit some clarification. The light has now dawned: I think what he is talking about is anticipation, the ability to imagine a short way into the future - for the fly to be able to anticipate where your hand is going to go, based on where it has been and where it is now. Matt Stan 21:06, 12 Mar 2004 (UTC)
Matt, the fly is not able to predict; it just flies away (in the opposite direction) from every big, fast-moving object. If you try to catch one sitting on the table, you strike towards its nose. Then the fly does the same - it tries to fly away from your hand, but for that it must take to the air and turn around, which it has no time to do, and so it flies directly into your hand. If the fly could predict the movement of your hand, it could work out that your hand would not hit it, so it should not take to the air and would be saved. But the fly never learns this; no matter how many times you try it, the fly does the same. So it likely has no consciousness, and we can say that because it fails a *single* test. tkorrovi
What if the fly watches on helplessly, unable consciously to change its reflex action, and so always reacts the same way? Using tkorrovi's argument combined with tkorrovi's involuntary knee reflex, we would prove tkorrovi to be without consciousness. Indeed, tkorrovi could prove it about himself. It seems to me that failing *one* test proves nothing. Paul Beardsell 14:59, 13 Mar 2004 (UTC)
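
To make the contrast between reflex and anticipation concrete, here is a toy sketch in C (hypothetical throughout: a one-dimensional world, invented positions, invented function names). The reflex agent flees the hand's current position; the anticipating agent linearly extrapolates the hand's next position from its last two, as Matt describes, and flees that instead:

predictivefly.c

#include <stdio.h>

/* Reflex: move away from wherever the hand is right now. */
static int reflex(int fly, int hand_now) {
    return (hand_now < fly) ? fly + 1 : fly - 1;
}

/* Anticipation: extrapolate the hand's next position from its last
   two positions, then move away from the predicted position. */
static int anticipate(int fly, int hand_prev, int hand_now) {
    int hand_next = hand_now + (hand_now - hand_prev);
    return (hand_next < fly) ? fly + 1 : fly - 1;
}

int main(void) {
    int fly = 5, hand_prev = 8, hand_now = 6;  /* hand sweeping left */
    /* The reflex fly flees the hand's current position (6) and moves
       left to 4 - directly into the hand's path, as tkorrovi says. */
    printf("reflex fly moves to %d\n", reflex(fly, hand_now));
    /* The anticipating fly predicts the hand will be at 4 and moves
       right to 6, out of the way. */
    printf("anticipating fly moves to %d\n",
           anticipate(fly, hand_prev, hand_now));
    return 0;
}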

self-promotional link

> You ask me to post what I know. This I did. You reverted it.

I didn't revert what you changed concerning Igor Aleksander; I only repaired a semantic error.

> external link to crank article removed

This was offensive, and you didn't substantiate it in any way. I hope you understand that, if only because of that, our further conversation may not make sense. Maybe you should just take a rest awhile and think about it.

tkorrovi



I am going to delete the external link again because it is a link to a crank article. The article professes to discuss an existing program which is conscious. As such a program would be the biggest news in computer science in a decade, one of these things is true: either the article is being ignored by the computer science establishment, or the article is a hoax, or the article is by a crank. Paul Beardsell 15:28, 8 Mar 2004 (UTC)

Haha! The linked article is by tkorrovi! All becomes clear. Paul Beardsell 15:38, 8 Mar 2004 (UTC)


First, it is nowhere argued that it is consciousness; it is a *proposed* mechanism for artificial consciousness. Proposed means that it is not accepted, but also not rejected. If you like, add a comment that it is disputed, or whatever you consider proper, but this link is important for the theory of AC, a field only in its beginning now. You can remove it only if you prove that it is of no importance for AC or completely wrong. If you like and know of any, add other links. tkorrovi


Well, in my defence, it is difficult to work out exactly what the article is about. But Wikipedia is not for self-promotion in any event. I have deleted the link again. Paul Beardsell 15:57, 8 Mar 2004 (UTC)

And it seems the onus of proof is not on me but on you. Just say in which esteemed journal your article has been published, then perhaps we can have a link. Paul Beardsell 15:58, 8 Mar 2004 (UTC)

more predicting the future

Just give your source for your defining consciousness as predicting the future. Who says so? Paul Beardsell 16:01, 8 Mar 2004 (UTC)


Please substantiate that predicting the future is not, and is not considered by anybody to be, an ability of consciousness (included in consciousness). tkorrovi

No, I cannot demonstrate that what you say appears in no learned book or paper. You simply have to show one authoritative use, and I will back down as graciously as I can.

Paul Beardsell 16:15, 8 Mar 2004 (UTC)

must all tests be passed?

I am going to remove the "all" from the following sentence:

An artificial consciousness (AC) system is a man-made or otherwise constructed artifact capable of achieving all known objectively observable abilities of consciousness i.e. a totality of thoughts and feelings or self-awareness.

Imagine that we had a device which we were testing to see if it was (artificially) conscious. Imagine it passes all of our varied tests except one. Is it conscious? I think it might be. E.g. if one of our tests was to recognise oneself in a mirror (a reasonable test, I suggest) and that test were failed, but all the other tests passed (e.g. it could recognise when a joke was funny, it demonstrated sympathy when someone was hurt, it said it felt guilty when speeding), then I would say: OK, conscious. So the "all" is not a necessary condition.

Paul Beardsell 16:16, 8 Mar 2004 (UTC)


Paul Beardsell, you don't understand, and you start to change something you don't understand. Read the definition: it intentionally says "capable of achieving", i.e. the system need not already *have* all these abilities, it must be *capable* of achieving them, so failing one test still doesn't mean that it doesn't satisfy the criteria. tkorrovi

No. If it is not capable of recognising itself in a mirror and it is not capable of achieving that in the future it could still be conscious. Paul Beardsell 16:37, 8 Mar 2004 (UTC)


Time and time again I make a point here, I advance an argument supporting my edits, I point out contradictions in tkorrovi's posts. Much of this is ignored by him, and he insists on including a link to his own article which, being kind, is not hard science. He will not substantiate his points. He insists on reverting to versions which are flawed and which he will not support by cogent argument.

I have been typically combative but fair, I think. This article, as it stands, is not worthy of an encyclopedia. External review requested.

Paul Beardsell 16:38, 8 Mar 2004 (UTC)


> No. If it is not capable of recognising itself in a mirror and it is not capable of achieving that in the future it could still be conscious.

This is your opinion, Paul. Nobody forbids you to state your point of view or to write it in Wikipedia articles, but then you should not delete the point of view of others. I accept adding everybody's opinions, as I always have; the more people do it the better. In fact I didn't remove anything that you wanted to add, but concerning "all" it's more difficult: you should add your version of the definition there, which you could have done from the beginning instead of all this dispute. tkorrovi


I have provided an argument as to why "all" is wrong. You act as if the argument has no force. Instead you say that is my opinion. No. The argument is logically compelling. Refrain from your ad hominem attacks: Attack the argument. What is wrong with it?

Have you abandoned your "predicting the future" definition of consciousness? Where is but one authoritative source? Can't you find one? In which case it is just your opinion, and it must go. Or you must construct an argument as to why your opinion is correct.

Also, Wikipedia is not supposed to be a place where all opinions are aired. It is supposed to be an encyclopedia. If you cannot argue your points then give way.

There are Wikipedia articles about Wikipedia itself that make this plain. Also, self-promotion is not allowed. That is why the link has to go.

Paul Beardsell 17:11, 8 Mar 2004 (UTC)

Your argument may be logically compelling if you consider consciousness to be self-awareness. Does self-awareness include recognising yourself in the mirror or not? But self-awareness is not the only explanation of consciousness; it is the view of some, and most likely your view, but not a proved definition of consciousness. In this sense I said that it's your view - not only your personal view, but the view of many people with whom you agree. I honestly don't see why you took that as a personal attack; please, please refrain from becoming personal. But other people consider that, for example, an animal which is incapable of achieving the ability to recognise itself in the mirror has no consciousness.

And my link was added not as self-promotion. AC is still very much in its beginning and there is not much established material, so everything which fits in is necessary for the field, and for understanding the field; this is why AC is exceptional. The link was included as one of the few proposed AC programs, so far not proved to fail the AC conditions, not as one of many worthless and questionable AI programs. You certainly know that I'm not a bad or dishonest person, so I would not remove other links if they were added, in order for mine to prevail, as some no doubt would. You can say that it is self-promotion if you and others prove that it is worthless and doesn't belong where it is. And finally, making this program added to my experience in AC and helped me to understand it much better; what is so bad in my using this experience to help people put together more information about AC and understand it better?

So, as a conclusion: for such relatively new and often much-disputed fields as artificial consciousness, the only reasonable option would be for different points of view to be added to the same article, without removing one when adding another, or changing one to comply with another, giving the reader the opportunity to decide which approach he prefers.

tkorrovi


I never said that recognising oneself in a mirror was an essential test for consciousness. All I said was that there might be a set of tests; that the self-recognition one might be among them; that that test might be failed but all the other tests passed; and that the tested device might still be conscious. It was a thought experiment. I quote myself:

Imagine that we had a device which we were testing to see if it was (artificially) conscious. Imagine it passes all of our varied tests except one. Is it conscious? I think it might be. E.g. if one of our tests was to recognise oneself in a mirror (a reasonable test, I suggest) and that test were failed, but all the other tests passed (e.g. it could recognise when a joke was funny, it demonstrated sympathy when someone was hurt, it said it felt guilty when speeding), then I would say: OK, conscious. So the "all" is not a necessary condition.

All the argument was doing was arguing for the removal of the word "all". It did not set out the definitive list of tests for consciousness. tkorrovi cannot attack the argument by attacking the examples. He must demonstrate why all the tests, whatever they might be, must be passed. He must demonstrate it because he insists that the word *all* is important. And if he doesn't, I will remove it again.

Now, I could deal with each of tkorrovi's other points in turn, but I am not going to, for the by now obvious reason.

Paul Beardsell 01:36, 9 Mar 2004 (UTC)


It's necessary to be capable of achieving all the abilities of consciousness for something to be artificial consciousness; otherwise, if we are satisfied with only one ability (or aspect) of consciousness, we may as well say that the ability to calculate is one aspect of consciousness and declare a calculator to be an artificial consciousness. Paul, you have caused me pain without any right or need to do so. I'm very tired of it. tkorrovi


Logic 101: "Not all" is not the same as "one". How painful is that?

Let us say that one of the tests for consciousness was an ability to argue logically but that the entity being tested failed only that particular test. Would it be conscious? I suggest it would be, but frustratingly so.

Paul Beardsell 02:15, 9 Mar 2004 (UTC)


Pay attention! Neuroscience vs Computer Science

"To neuroscientists, attention is a profoundly interesting and important phenomenon. We are constantly bombarded by information - smells, sounds, sights - yet we attend to only the slenderest sliver of the whole; the rest we tune out, just as you tune out the rumble of passing traffic as you read. Exactly how the brain achieves this feat is one of neuroscience's biggest questions, and for good reason: attention is intimately associated with consciousness. What you pay attention to defines how you experience the world from moment to moment." quoted from New Scientist vol 181 issue 2434 - 14 February 2004, page 32

None of the stuff here seems to be informed by developments in neuroscience, which surely must be a better key to understanding consciousness and, by inference, artificial consciousness. If we get bogged down in epistemological arguments about whether even non-human creatures are capable of consciousness, and rely instead on introspection to grapple with the nature of consciousness, then I fear we will get nowhere, as appears to have been happening here! Matt Stan 19:29, 8 Mar 2004 (UTC)


I would be happy to get nowhere, as opposed to where tkorrovi would like us to go, which is into falsehood or at least speculation. But to claim that neuroscience is the key to understanding consciousness doesn't seem right to me. (I know I set myself up here against half of all AI and AC researchers.) Brain scientists have a (rather limited) understanding of the brain, and the human brain represents perhaps the only, but certainly one of the few, conscious devices we can study. But imagine we tried to kickstart the development of automobiles by sending a modern motor vehicle back into the 1800s, but with its bonnet welded shut. (That is where we are with the brain.) All it would do is act as inspiration: it would not help Benz (or whoever) to develop the internal combustion engine.

Currently a neuroscientist is to consciousness what a Mercedes mechanic is to engineering. A blind, quadriplegic Mercedes mechanic.

No, I think that developments will more likely come from computer science, with the likes of Conway's Game of Life (OK, simplistic) and proposals for the Gödel Machine (OK, not cogent). As Hofstadter seems to demonstrate, there is a need to swallow one's tail in all complex things. Getting a program to modify itself recursively and to select a better version of itself seems a good way to go about it. Such programs already exist and are fascinating: using toolsets developed by others, I have, for example, provided the starting conditions which allowed thousands of generations of food-hunting programs to develop their own efficient algorithm. The techniques, together with the excellent toolkit I used, are likely described here: genetic programming.
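
For flavour, the select-and-mutate loop at the heart of such experiments might be sketched as follows. This is a toy illustration only, not the toolkit I used: real genetic programming evolves program trees rather than bit strings, and the bit-counting fitness function here is an invented stand-in for "food found".

evolve.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POP   32    /* population size */
#define GENES 64    /* genome length in bits */
#define GENS  1000  /* generations to run */

/* Invented fitness: count of 1-bits in the genome. */
static int fitness(const char *g) {
    int f = 0;
    for (int i = 0; i < GENES; i++) f += g[i];
    return f;
}

int main(void) {
    char pop[POP][GENES];
    srand(1);
    /* Random starting population. */
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < GENES; j++)
            pop[i][j] = (char)(rand() % 2);

    for (int gen = 0; gen < GENS; gen++) {
        /* Select the fittest individual... */
        int best = 0;
        for (int i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(pop[best])) best = i;
        /* ...and replace everyone else with a mutated copy of it. */
        for (int i = 0; i < POP; i++) {
            if (i == best) continue;
            memcpy(pop[i], pop[best], GENES);
            pop[i][rand() % GENES] ^= 1;  /* one-bit mutation */
        }
        if (gen % 100 == 0)
            printf("generation %d, best fitness %d\n",
                   gen, fitness(pop[best]));
    }
    return 0;
}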

That is all Schmidhuber's proposed Gödel Machine is: genetic programming layered on top of a starting condition that defines consciousness. No big deal. It is the starting condition which is interesting, and to which he pays scant attention in his paper.

Paul Beardsell 02:11, 9 Mar 2004 (UTC)

Actually, to be fair, the Gödel Machine supposedly embodies an important advance. Schmidhuber claims he has found an optimal method of genetic programming, or evolutionary programming. These techniques are notoriously slow, unreliable and sub-optimal. It is this claim which he "proves" in Theorem 2.1.

Paul Beardsell 02:34, 9 Mar 2004 (UTC)

The Fly and Descartes

Descartes.c

#include <stdio.h>
int main(void) { while (1) printf("I think therefore I am.\n"); }

fly.c

/* lightonleft, turnleft, turnright are hypothetical sensor/actuator calls */
int main(void) { while (1) { if (lightonleft()) turnleft(); else turnright(); } }

Paul Beardsell 04:02, 9 Mar 2004 (UTC)

That's a fine program - now build a machine that can't be swatted easily. Get the thing to learn from its mistakes - or at least to build various replicas of itself that don't survive if they get swatted, by allowing variations to occur in some of the parameters, by ensuring that inheritors don't lose the memes that help survival, and by varying the methods of swatting (very important, that one). Ultimately, wouldn't you arrive at consciousness as we understand it?

I define consciousness solely as the ability to pay attention, moment to moment, to something. Consciousness doesn't exist when you are asleep (or unconscious).

sleepyfly.c
int main(void) { while (1) { while (1) { if (lightonleft()) turnleft(); else turnright(); } } }
Only one iteration of the outer loop is executed and so any decent optimising compiler would get rid of it. Paul Beardsell 15:07, 13 Mar 2004 (UTC)
Or does that contain a bug? You need a bootstrap, wakeupfly.c
Abstract: The relation between mind and matter is considered in terms of recent ideas from both phenomenology and brain science. Phenomenology is used to give clues to help bridge the brain–mind gap by providing constraints on any underlying neural architecture suggested from brain science. A tentative reduction of mind to matter is suggested and used to explain various features of phenomenological experience and of ownership of conscious experience. The crucial mechanism is the extended duration of the corollary discharge of attention movement, with its gating of activity for related content. Aspects of experience considered in terms of the model are the discontinuous nature of consciousness, immunity to error through misidentification, and the state of ‘pure’ consciousness as experienced through meditation. Corollary discharge of attention movement is proposed as the key idea bringing together basic features of meditation, consciousness and neuroscience, and helping to bridge the gap between mind and matter. [1]
meditation.c
int main(void) { while (1) ; /* attend to nothing */ }

Matt Stan 20:43, 12 Mar 2004 (UTC)


Aha! Prejudice. Why doesn't Descartes need a bootstrap? Paul Beardsell 15:07, 13 Mar 2004 (UTC)