Talk:Three Laws of Robotics

Three Laws of Robotics is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
This article appeared on Wikipedia's Main Page as Today's featured article on July 5, 2006.

Article milestones
Date | Process | Result
May 1, 2005 | Featured article candidate | Promoted
December 18, 2009 | Featured article review | Demoted
November 27, 2010 | Peer review | Reviewed
December 24, 2010 | Good article nominee | Not listed

Current status: Former featured article


The third law

What is the usefulness of this law? The first two laws make sense as crucial requirements, but what purpose does the third serve? 82.77.116.227 (talk) 09:59, 16 January 2011 (UTC)[reply]

So you buy a robot to protect your bank and one to help around your house. You come home and find the house robot lying in the garden, sparking all over, where it was hit by a falling tree while it was weeding. You pay £1000 to get it fixed, thinking "stupid robot - why did it just stand there while it got squashed?". A few days later a burglar comes to your home and tries to break in. The robot sees him and sits there while the burglar bashes its positronic brain in and then steals all your stuff. The next morning you go to your bank and find that the same burglar decided to break into the bank as well, where he bashed in the brains of all the robots and walked off with all your gold bullion. You say to yourself, "Wow - I wish the robots had tried to protect themselves by running away and alerting me, the police or someone". Chaosdruid (talk) 10:24, 16 January 2011 (UTC)[reply]
Also I remember reading (long ago so I have no source) that Asimov wanted exactly three laws in imitation of the Three laws of thermodynamics. As a biochemistry professor, he was very aware of thermodynamics. Dirac66 (talk) 12:28, 11 September 2012 (UTC)[reply]
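
The scenario above turns on the strict priority ordering of the Three Laws: self-preservation (Third) yields to obedience (Second), which yields to human safety (First). A minimal sketch of that ordering - hypothetical Python with made-up action flags, not anything from Asimov, the article, or a real robotics API - might look like this:

def choose_action(actions):
    # Pick an action under a toy Three Laws ordering. Every field name
    # here ("harms_human", "obeys_order", "protects_self") is an
    # illustrative assumption, not an established interface.
    def permitted(action):
        # First Law: forbid anything that harms a human outright.
        return not action["harms_human"]

    def priority(action):
        # Second Law outranks Third: Python compares tuples left to
        # right, so obeying orders beats self-preservation on ties.
        return (action["obeys_order"], action["protects_self"])

    candidates = [a for a in actions if permitted(a)]
    return max(candidates, key=priority) if candidates else None

# The weeding robot from the story: stepping aside breaks no order and
# harms no one, so the Third Law tells it to move rather than be crushed.
actions = [
    {"name": "stand still", "harms_human": False, "obeys_order": True, "protects_self": False},
    {"name": "step aside", "harms_human": False, "obeys_order": True, "protects_self": True},
]
print(choose_action(actions)["name"])  # -> step aside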

Three laws in real life

According to the back of The Complete Robot, the laws were once programmed into real computers at the Massachusetts Institute of Technology, with "interesting" results. What were these results, and are they notable? Dalek9 (talk) 16:23, 17 March 2011 (UTC)[reply]

LOL! As of 2014, there is no (known) robot capable of higher (abstract) thought. That is, there is no platform on which implementation of these "Laws" is possible. 173.189.78.173 (talk) 12:43, 4 September 2014 (UTC)[reply]

Original Research

(Moved from my talkpage:) When you reviewed the article there were several areas where you thought that OR was prevalent. Is there any chance you could take a quick look and tell me whether or not you think those areas have been addressed? My intention is to put it up for GAN again in the next 4 weeks, so I would appreciate your input in particular. Thanks, Chaosdruid (talk) 00:47, 17 June 2011 (UTC)[reply]

I'll respond on the article talkpage. SilkTork *Tea time 08:34, 17 June 2011 (UTC)[reply]
  • Having dipped into a few sections of the article, I found some statements which are not securely cited. I tidied a few statements, and have tagged a few others as examples. This was fairly random and should not be seen as comprehensive. Sorting out the tagged statements may not be enough, and it would be appropriate to go through the entire article carefully. I still feel that the more appropriate approach would be to scrap this article and start again from scratch. I think the framework and focus of the article are grounded in OR, and encourage editors to join in with opinion and speculation based on personal observation and knowledge. What is needed is significant discussion of the Three Laws in appropriate reliable sources. Comments based on editorial observation of various novels, and comparisons drawn between them, are OR and are to be avoided. SilkTork *Tea time 08:58, 17 June 2011 (UTC)[reply]
At the start of the section Three_Laws_of_Robotics#History_of_the_Three_Laws you have placed a {{cn}} on "Before Asimov began writing, the majority of artificial intelligence in fiction followed the Frankenstein pattern,[citation needed] one that Asimov found unbearably tedious:"
Can you tell me why you think the quote and ref which follow it are not good enough? Chaosdruid (talk) 13:00, 18 June 2011 (UTC)[reply]
@Chaosdruid: I agree with you, and I believe that this section adequately explains the Frankenstein reference. I'm going to remove the {{cn}} until a good explanation for it is provided. HappyGod

Roboassessment - B class

I have reinstated the Robotics project "B" class. Anyone not familiar with the Robotics assessment scale who checks it before deciding whether to reassess the article for us will note that the article clearly falls within the B-class parameters. If they feel those parameters need changing, there is a forum for discussion at the Robotics project. I did not make those guidelines, but I am following them. Chaosdruid (talk) 10:06, 10 February 2012 (UTC)[reply]

In Translation

Very good Wikipedia editorial cabal: You've replaced Asimov's own words with your own inaccurate paraphrase of them, in addition to removing the French words that provide the evidence for your claim. Well done. Just what I would expect.

See http://en.wikipedia.org/w/index.php?title=Three_Laws_of_Robotics&diff=next&oldid=607488250

Modern Understanding?

I have seen references to Asimov's Three Laws of Robotics in academic AI literature. My understanding is that the Laws have fundamental problems which should be addressed in this article. The most obvious, from a practical point of view, is that each of them requires a huge body of data and computation - effectively an infinite amount - which precludes any robot ever acting. Second is the implication that harm is black and white. The probability of harm is rarely either 100% or 0%. There is a huge body of relevant neuropsychological literature on moral judgements (the child on the train tracks, the fat man on the bridge, etc.); it turns out that our behavior is generally rationally justified AFTER the act, and is often NOT rational in itself.

There are several other problems (and this is OR, in the sense that I've not seen it in print, although it is extremely unlikely to actually be original!). What is meant by "harm"? What is the meaning of "human being"? What is the definition of "humanity"? E.g., would a robot force you to eat exactly what is "best" for you? Would it take away your car keys and make you walk to the store for exercise? (I forget who wrote the series about the robots occupying the galaxy(?) and becoming our caretakers until we 'matured' enough to make 'better' choices.) There is harm to our cells, harm to our organs, harm to our bodies, harm to our minds, harm to our careers... If you were tired, would a robot steal you some amphetamine? Would a robot tell a child that Santa Claus doesn't exist? (That your wife is cheating on you?) The idea of "harm" requires a clear model of what a "healthy" state requires, and it conflicts with free will. A person ages and develops - is this "harm"? Is education/learning potentially harmful? (imho, yes - as well as beneficial.) As for "humanity", would/should a robot prevent reproduction by people carrying the sickle cell gene? (Which is harmful in many circumstances but imparts protection from malaria.)

The same problems apply to a robot harming itself. For most machines, movement (as well as use of electronic circuitry) erodes, wears, and ages the mechanism, so how does a robot protect itself? And finally, how likely is it that "orders" can be comprehensive enough that compliance is feasible? It requires not just a theory of mind, but a very accurate model of what the person means. Just my two cents. 173.189.78.173 (talk) 13:26, 4 September 2014 (UTC)[reply]

Or just consider aliens different from humans. I think these rules are quite human-centric and apparently the robots are allowed to wipe out any other civilizations they might come across. 213.46.51.199 (talk) 08:13, 17 September 2015 (UTC)[reply]
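
The binary-harm objection above can be made concrete with a toy sketch: any implementation must collapse a graded probability of harm into a yes/no decision, and the threshold itself is an unprincipled choice. All numbers and names below are hypothetical, not drawn from any real system:

HARM_THRESHOLD = 0.05  # who chooses this value, and on what grounds?

def first_law_permits(harm_probability):
    # Permit an action only if its estimated probability of harm is "low".
    # Producing the estimate at all would require the effectively infinite
    # world model described above; here it is simply handed in.
    return harm_probability < HARM_THRESHOLD

# Telling a child Santa isn't real: mild psychological harm, say 10%.
print(first_law_permits(0.10))  # False - the robot must stay silent
# Serving dessert: small long-term health harm, say 1%.
print(first_law_permits(0.01))  # True - permitted, though harm is nonzero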

Merge

I think this should be merged with Laws of robotics. They seem to be the exact same topic. — Preceding unsigned comment added by 87.210.58.57 (talk) 01:54, 23 September 2014 (UTC)[reply]

No merger needed. https://en.wikipedia.org/w/index.php?title=Laws_of_Robotics&redirect=no is a WP:Redirect to this article, which is why they are identical and the exact same topic. — Lentower (talk) 02:49, 24 September 2014 (UTC)

Google's amendments

Google is recently reported to have made its "5 amendments" to the laws: http://www.cnet.com/news/google-goes-asimov-and-spells-out-concrete-ai-safety-concerns/ I think this should be included here, or elsewhere - or is it already included somewhere? I am not yet familiar enough with the topic, but I may try to add this later. --ssr (talk) 18:50, 28 June 2016 (UTC)[reply]

Is the Superman Robot Picture Applicable?

It's been a very long time since I saw that particular episode of Superman, but I'm about 99% certain it was *not* acting of its own volition, and was instead remotely controlled by a human villain. The screen capture is attached to a paragraph that talks about robots destroying their own creator(s), so that image does not seem to be a very good example of the kind of story Asimov was trying to avoid.

In fact, you can view the entire episode here, and it's much as I remembered it. The robots are semi-autonomous, but never turn on their creator or any similar trope. 162.252.201.32 (talk) 12:09, 25 September 2016 (UTC)[reply]

4th law

Robots cannot alter or delete the above laws. Filippos2 (talk) 07:01, 30 November 2016 (UTC)[reply]
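
As a sketch of what this proposal would mean in practice: the laws could be stored in an immutable structure so that ordinary code cannot rewrite them. The Python below is only an illustration of the idea (the wording of Laws 1-3 is Asimov's; the fourth is the proposal above), not a real safeguard - a program able to rewrite its own storage could rewrite a tuple too:

LAWS = (
    "1. A robot may not injure a human being or, through inaction, "
    "allow a human being to come to harm.",
    "2. A robot must obey the orders given it by human beings except "
    "where such orders would conflict with the First Law.",
    "3. A robot must protect its own existence as long as such "
    "protection does not conflict with the First or Second Law.",
    "4. A robot may not alter or delete the above laws.",
)

# Tuples are immutable: any attempt to modify an entry raises TypeError.
try:
    LAWS[0] = "(rewritten)"
except TypeError as error:
    print("Law edit rejected:", error)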