Laws of robotics

From Wikipedia, the free encyclopedia

Laws of Robotics are a set of laws, rules, or principles intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction and film, and are a topic of active research and development in the fields of robotics and artificial intelligence.

This includes the legal and legal-philosophical conditions surrounding the use and application of robots. The legal and technical terminology in today’s law is becoming ever more difficult to apply in practice as the field of robotics progresses. Robot law is a legal discussion on the assessment of the status of advanced machines that focuses on the ever-increasing autonomy (or quasi-autonomy) of technical systems equipped with artificial intelligence. This discussion leads to a reassessment of legal rights and obligations in addition to traditional legal categories (e.g. messenger, agent, and tool).

The best-known laws are those written by Isaac Asimov in the 1940s, or based upon them, but researchers have proposed other sets of laws in the decades since.

Isaac Asimov's "Three Laws of Robotics"

The best-known set of laws is Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In later books, a zeroth law was introduced:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Certain robots developed the zeroth law as the logical extension of the First Law, since robots are often faced with ethical dilemmas in which any outcome will harm at least some humans, and must act so as to harm as few as possible.[example needed] The classic example involves a robot that sees a runaway train which will kill ten humans trapped on the tracks, and whose only choice is to switch the train onto a track where it will kill just one human. Even so, the robots who theorized the zeroth law developed it only as a hypothetical, almost philosophical abstraction.

Some robots use the zeroth law as a license to try to conquer humanity for its own protection, but others are hesitant to implement it because, in practice, they are not certain what it means. Some robots are uncertain which course of action will prevent harm to the most humans in the long run, while others point out that "humanity" is such an abstract concept that they could not even tell whether they were harming it. A few also question what qualifies as "harm": whether the restriction prohibits only physical harm, or whether social harm is forbidden as well. In the latter case, conquering humanity in order to impose tyrannical controls preventing physical harm between humans (i.e. ending all human warfare) might nonetheless constitute a social harm to humanity as a whole.[examples needed][citation needed]
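The laws form a strict hierarchy: each law yields to the ones before it. Purely as an illustration (this sketch is hypothetical, drawn neither from Asimov's fiction nor from any real robotics system; the names `Action` and `permitted` are invented here), the ordering can be expressed as a sequence of vetoes checked in priority order:

```python
# Hypothetical sketch only: Asimov's Three Laws read as a strict priority
# ordering, where a lower-numbered law always overrides a higher-numbered one.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False              # would injure a human (First Law)
    allows_harm_by_inaction: bool = False  # failing to act lets a human come to harm
    disobeys_order: bool = False           # conflicts with a human order (Second Law)
    endangers_robot: bool = False          # risks the robot's own existence (Third Law)

def permitted(a: Action) -> bool:
    # First Law: an absolute veto on harming humans, by action or inaction.
    if a.harms_human or a.allows_harm_by_inaction:
        return False
    # Second Law: obey orders, checked only once the First Law is satisfied.
    if a.disobeys_order:
        return False
    # Third Law: self-preservation never vetoes the first two laws, so an
    # action that merely endangers the robot is still permitted.
    return True
```

For example, `permitted(Action(endangers_robot=True))` still returns `True`, reflecting that self-preservation under the Third Law is subordinate to the first two laws.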

Adaptations and extensions exist based upon this framework. As of 2011 they remain a "fictional device".[1]

EPSRC / AHRC principles of robotics

In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of the United Kingdom jointly published a set of five ethical "principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" intended to be conveyed, based on a September 2010 research workshop:[2][3][1]

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

The messages intended to be conveyed were:

  1. We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
  2. Bad practice hurts us all.
  3. Addressing obvious public concerns will help us all make progress.
  4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
  5. To understand the context and consequences of our research, we should work with experts from other disciplines, including: social sciences, law, philosophy and the arts.
  6. We should consider the ethics of transparency: are there limits to what should be openly available?
  7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.

AIonAI (artificial intelligence-on-artificial intelligence) Law

In 2013, Hutan Ashrafian of Imperial College London proposed an additional law that, for the first time, considered the role of artificial intelligence-on-artificial intelligence, that is, the relationship between robots themselves: the so-called AIonAI law.[4] This law states:

All robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood.


Judicial development

A comprehensive terminological codification for the legal assessment of technological developments in the robotics industry has already begun, mainly in Asian countries.[5] This progress represents a contemporary reinterpretation of the law (and ethics) in the field of robotics, an interpretation that assumes a rethinking of traditional legal constellations. These include, primarily, questions of legal liability in civil and criminal law.

Liability for robot actions

Within the common-law tradition, general liability standards are limited to damages arising from the use or application of robots; the person using or applying the robot bears general liability. This is due to a number of ambiguities surrounding the application of liability to something that is not a person under the law, and to the parties involved (e.g. developers, producers, distributors, and users). These ambiguities concern the concept of "third parties," the law's present causality and accountability structures, the concept of "due care" in negligence, and existing legal justifications. Progress in the law of robots seeks to resolve these ambiguities.

References

  1. ^ a b Stewart, Jon (2011-10-03). "Ready for the robot revolution?". BBC News. Retrieved 2011-10-03. 
  2. ^ "Principles of robotics: Regulating Robots in the Real World". Engineering and Physical Sciences Research Council. Retrieved 2011-10-03. 
  3. ^ Winfield, Alan. "Five roboethical principles – for humans". New Scientist. Retrieved 2011-10-03. 
  4. ^ Ashrafian, Hutan (2014). "AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics". Science and Engineering Ethics (Springer). doi:10.1007/s11948-013-9513-9. Retrieved 20 January 2014. 
  5. ^ "Robot age poses ethical dilemma". bbc.co.uk.