Machine ethics

From Wikipedia, the free encyclopedia

Machine ethics (or machine morality) is the part of the ethics of artificial intelligence concerned with the moral behavior of artificial moral agents (AMAs), such as robots and other artificially intelligent beings. It contrasts with roboethics, which is concerned with the moral behavior of humans as they design, construct, use and treat such beings. Machine ethics is sometimes also referred to as computational ethics or computational morality.

History

In 2009, Oxford University Press published Moral Machines: Teaching Robots Right from Wrong, which it advertised as "the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics." It cited some 450 sources, about 100 of which addressed major questions of machine ethics; few were written before the 21st century.[1]

In 2011, Cambridge University Press published a collection of essays about machine ethics edited by Michael and Susan Leigh Anderson,[2] who also edited a special issue of IEEE Intelligent Systems on the topic in 2006.[3]

Articles about machine ethics appear regularly in the journal Ethics and Information Technology.[4]

Major questions

Is this study urgent (or even non-fiction)?

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, including the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might acquire any level of autonomy, and the degree to which they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They concluded that self-awareness as depicted in science fiction is unlikely, but that there are other potential hazards and pitfalls.[5]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous function.[6] The US Navy has funded a report which indicates that, as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions.[7][8] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study of this issue;[9] it points to programs like the Language Acquisition Device, which can emulate human interaction.

Which specific learning algorithms should be used?

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms, on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis).[10] Chris Santos-Lang has argued in the opposite direction, on the grounds that the norms of any age must be allowed to change, and that a natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable than machines to criminal "hackers".[11] He has advocated for evaluative diversity among machines to match the diversity found in effective human teams.[12]
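
The transparency argument can be made concrete with a small illustration. The sketch below is purely illustrative (the dataset, feature names and "policy" are assumptions, not drawn from the cited sources): it trains a decision tree on a toy permission problem and prints the learned policy as explicit if/then rules, the kind of audit trail that opaque models such as neural networks do not naturally provide.

```python
# Minimal sketch (an assumption for illustration, not code from the cited sources):
# a decision tree trained on a toy "should the machine act?" dataset can be printed
# as explicit if/then rules, the transparency property the argument appeals to.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [human_consents, risk_of_harm, action_is_reversible]
X = [
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
]
# Hypothetical labels: 1 = act, 0 = refrain (toy policy, for illustration only)
y = [1, 1, 0, 0, 0, 0]

# Entropy-based splitting, similar in spirit to ID3.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, y)

# Unlike a trained neural network's weights, the learned policy can be audited directly.
print(export_text(
    tree,
    feature_names=["human_consents", "risk_of_harm", "action_is_reversible"],
))
```

On this toy data the printed rules amount to something like "act only when the human consents and the risk of harm is low", whereas a neural network trained on the same examples would offer no comparably inspectable justification.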

How should AMAs be trained (e.g. instructed/rewarded/punished)?

In 2009, in an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other in searching out a beneficial resource and avoiding a poisonous one eventually learned to lie to each other in an attempt to hoard the beneficial resource.[13] One problem in this case may have been that the goals were "terminal" (whereas ultimate human motives typically have a quality of requiring never-ending learning).[11]
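
The "terminal goal" point can be illustrated with a toy model. The following sketch is a simplified assumption about the selection pressure involved, not the EPFL experiment's actual setup: when fitness rewards only an agent's own food intake, evolution drives the signalling rate towards zero, i.e. the agents come to conceal the resource from one another.

```python
# Toy evolutionary model (an illustrative assumption, not the EPFL robots' actual
# controller): each genome is a single number, the probability of signalling when
# food is found. Fitness counts only the agent's own expected share of the food
# (a "terminal" goal), so staying silent (concealing the find) is selected for.
import random

random.seed(0)
POP_SIZE, GENERATIONS = 50, 100
COMPETITORS_IF_SIGNAL = 4  # assumed number of robots a signal attracts

def expected_share(signal_prob):
    shared = 1.0 / (1 + COMPETITORS_IF_SIGNAL)  # food is split if others are attracted
    return signal_prob * shared + (1.0 - signal_prob) * 1.0

population = [random.random() for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    ranked = sorted(population, key=expected_share, reverse=True)
    parents = ranked[: POP_SIZE // 2]  # keep the half with the highest food share
    population = [
        min(1.0, max(0.0, random.choice(parents) + random.gauss(0.0, 0.05)))
        for _ in range(POP_SIZE)
    ]

print(f"mean signalling probability after evolution: {sum(population) / POP_SIZE:.2f}")
# Typically close to 0: agents evolve to stop telling each other where the food is.
```

Under these assumptions the mean signalling probability collapses towards zero within a few generations; nothing in the fitness function places any value on honest communication, which mirrors the concern about terminal goals.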

What implications (if any) do answers to the above have for human ethics?

In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.[1]

Machine ethics in fiction

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[14]

Notes

  1. ^ a b Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. USA: Oxford University Press. ISBN 978-0-19-537404-9.
  2. ^ Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press. ISBN 978-0-521-11235-2.
  3. ^ Anderson, Michael; Anderson, Susan Leigh, eds. (July–August 2006). "Special Issue on Machine Ethics". IEEE Intelligent Systems. 21 (4): 10–63. ISSN 1541-1672.
  4. ^ van den Hoven, M. J.; Manders-Huits, N. (eds.). Ethics and Information Technology. ISSN 1572-8439. http://www.springer.com/computer/swe/journal/10676.
  5. ^ "Scientists Worry Machines May Outsmart Man", by John Markoff, The New York Times, July 26, 2009.
  6. ^ "Call for debate on killer robots", by Jason Palmer, science and technology reporter, BBC News, August 3, 2009.
  7. ^ "New Navy-funded Report Warns of War Robots Going 'Terminator'", by Jason Mick (blog), dailytech.com, February 17, 2009.
  8. ^ "Navy report warns of robot uprising, suggests a strong moral compass", by Joseph L. Flatley, engadget.com, February 18, 2009.
  9. ^ AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study, Association for the Advancement of Artificial Intelligence. Accessed July 26, 2009.
  10. ^ Bostrom, Nick; Yudkowsky, Eliezer (2011). "The Ethics of Artificial Intelligence" (PDF). Cambridge Handbook of Artificial Intelligence. Cambridge University Press.
  11. ^ a b Santos-Lang, Chris (2002). "Ethics for Artificial Intelligences".
  12. ^ Santos-Lang, Christopher (2014). "Chapter 6: Moral Ecology Approaches". In van Rysewyk, Simon; Pontier, Matthijs (eds.). Machine Medical Ethics (PDF). New York: Springer. pp. 74–96.
  13. ^ "Evolving Robots Learn To Lie To Each Other", Popular Science, August 18, 2009.
  14. ^ Asimov, Isaac (2008). I, Robot. New York: Bantam. ISBN 0-553-38256-X.

References

  • Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. USA: Oxford University Press.
  • Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press.
  • Storrs Hall, J. (May 30, 2007). Beyond AI: Creating the Conscience of the Machine. Prometheus Books.

External links

  • Machine Ethics, University of Hartford: links to conferences, articles and researchers.