Laws of robotics
Laws of Robotics are a set of laws, rules, or principles intended as a fundamental framework to underpin the behavior of robots designed with a degree of autonomy. Robots of this complexity do not yet exist, but they have been widely anticipated in science fiction and film, and they are a topic of active research and development in the fields of robotics and artificial intelligence.
Isaac Asimov's "Three Laws of Robotics"
The best known set of laws are Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In later books, a Zeroth Law was introduced, taking precedence over the others: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Certain robots developed the Zeroth Law as the logical extension of the First Law: robots are often faced with ethical dilemmas in which any outcome harms at least some humans, and the least harmful choice may require harming a few in order to avoid harming more. The classic example involves a robot that sees a runaway train about to kill ten humans trapped on the tracks, and whose only choice is to switch the track so that the train kills just one human. Even so, the robots who theorized the Zeroth Law developed it only as a hypothetical, almost philosophical abstraction.

Some robots use the Zeroth Law as a license to try to conquer humanity for its own protection, but others are hesitant to implement it because, in practice, they are not certain what it means. Some robots are uncertain which course of action will prevent harm to the most humans in the long run, while others point out that "humanity" is such an abstract concept that they would not even know whether they were harming it. A few even question what qualifies as "harm": whether the restriction prohibits only physical harm, or whether social harm is also forbidden. In the latter case, conquering humanity in order to impose tyrannical controls that prevent physical harm between humans (i.e. ending all human warfare) might nonetheless constitute a social harm to humanity as a whole.
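Read as a decision rule, the four laws above form a strict priority ordering: an action that violates a higher-numbered-priority law is always worse than one that only violates a lower one. The following is a toy sketch of that ordering; the `Action` type, its fields, and the example scenario are invented for illustration and imply no real robotics API.

```python
# Toy sketch: Asimov's Zeroth through Third Laws as a strict
# (lexicographic) priority ordering over candidate actions.
# All names and predicates here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool   # violates the Zeroth Law
    harms_human: bool      # violates the First Law
    disobeys_order: bool   # violates the Second Law
    endangers_self: bool   # violates the Third Law

def law_violations(a: Action) -> tuple:
    # Python compares tuples element by element, so a violation of an
    # earlier (higher-priority) law dominates any later violations.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self)

def choose(actions: list[Action]) -> Action:
    # Pick the action with the lexicographically smallest violation profile.
    return min(actions, key=law_violations)

candidates = [
    Action("push bystander", harms_humanity=False, harms_human=True,
           disobeys_order=False, endangers_self=False),
    Action("shield bystander", harms_humanity=False, harms_human=False,
           disobeys_order=True, endangers_self=True),
]
print(choose(candidates).name)  # "shield bystander"
```

Note that this sketch captures only the ordering, not the hard part the robots in the stories struggle with: deciding whether a given action actually constitutes "harm" in the first place.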
EPSRC / AHRC principles of robotics
In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of the United Kingdom jointly published a set of five ethical "principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" intended to be conveyed, based on a September 2010 research workshop:
- Robots should not be designed solely or primarily to kill or harm humans.
- Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
- Robots should be designed in ways that assure their safety and security.
- Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
- It should always be possible to find out who is legally responsible for a robot.
The messages intended to be conveyed were:
- We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
- Bad practice hurts us all.
- Addressing obvious public concerns will help us all make progress.
- It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
- To understand the context and consequences of our research, we should work with experts from other disciplines, including the social sciences, law, philosophy and the arts.
- We should consider the ethics of transparency: are there limits to what should be openly available?
- When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.
AIonAI (artificial intelligence-on-artificial intelligence) Law
In 2013, Hutan Ashrafian of Imperial College London proposed an additional law that addresses the relationship between robots themselves (artificial intelligence-on-artificial intelligence), the so-called AIonAI law. This law states:
All robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood.
Separately, a comprehensive terminological codification for the legal assessment of technological developments in the robotics industry has begun, mainly in Asian countries. This work represents a contemporary reinterpretation of law (and ethics) in the field of robotics, one that requires rethinking traditional legal frameworks, primarily questions of legal liability in civil and criminal law.
References
- Stewart, Jon (2011-10-03). "Ready for the robot revolution?". BBC News. Retrieved 2011-10-03.
- "Principles of robotics: Regulating Robots in the Real World". Engineering and Physical Sciences Research Council. Retrieved 2011-10-03.
- Winfield, Alan. "Five roboethical principles – for humans". New Scientist. Retrieved 2011-10-03.
- Ashrafian, Hutan (2014). "AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics". Science and Engineering Ethics (Springer). doi:10.1007/s11948-013-9513-9. Retrieved 20 January 2014.
- "Robot age poses ethical dilemma". BBC News (bbc.co.uk).