

Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding moral behaviors to machines that use artificial intelligence, otherwise known as artificially intelligent agents.[3] Machine ethics differs from other ethical fields related to engineering and technology. It is a subcategory within roboethics, which is concerned with the moral behavior of humans as they design, construct, use, and treat such beings; roboethics also asks whether machines pose a threat to humanity. Machine ethics should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.[4]

History


Before the 21st century the ethics of machines had largely been the subject of science fiction, mainly due to the limitations of computing and artificial intelligence (AI). Although the definition of "machine ethics" has evolved since, the term was coined by Mitchell Waldrop in the 1987 AI Magazine article "A Question of Responsibility":


"However, one thing that is apparent from the above discussion is that intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov’s three laws of robotics."[5]

The field was further delineated in the AAAI Fall 2005 Symposium on Machine Ethics:

"Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitate this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about Ethics."[6]

A variety of perspectives on this nascent field can be found in the collected edition Machine Ethics,[7] which stems from the AAAI Fall 2005 Symposium on Machine Ethics.

In 2009, Oxford University Press published Moral Machines: Teaching Robots Right from Wrong, which it advertised as "the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics." It cited some 450 sources, about 100 of which addressed major questions of machine ethics.

In 2011, Cambridge University Press published a collection of essays about machine ethics edited by Michael and Susan Leigh Anderson,[7] who also edited a special issue of IEEE Intelligent Systems on the topic in 2006.[8] The collection addresses the challenges of adding ethical principles to machines, treating both the programmer and the machine as having the capacity to make ethical decisions.[9]

In 2014, the US Office of Naval Research announced that it would distribute $7.5 million in grants over five years to university researchers to study questions of machine ethics as applied to autonomous robots,[10] and Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, which raised machine ethics as the "most important [...] issue humanity has ever faced," reached #17 on the New York Times list of best selling science books.[11]

In 2016, the European Parliament published a paper[12] (a 22-page PDF) encouraging the Commission to address the issue of robots' legal status, as described more briefly in the press.[13] The paper included sections on the legal liability of robots, arguing that liability should be proportional to a robot's level of autonomy. It also raised the question of how many jobs could be replaced by AI robots.[14]

Definitions


James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. An extensive researcher in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor classifies machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents. A machine can be more than one type of agent.[15]

  • Ethical impact agents: These are machine systems that carry an ethical impact, whether intended or not, and that also have the potential to act unethically. Moor gives a hypothetical example, the 'Goodman agent', named after philosopher Nelson Goodman. The Goodman agent compares dates but has the millennium bug: its programmers represented dates using only the last two digits of the year, so any date from 2000 onward is misleadingly treated as earlier than dates in the late twentieth century (see the illustrative sketch after this list). Thus the Goodman agent was an ethical impact agent before 2000, and an unethical impact agent thereafter.
  • Implicit ethical agents: In consideration of human safety, these agents are programmed to have a fail-safe, or built-in virtue. They are not entirely ethical in nature, but rather are programmed to avoid unethical outcomes.
  • Explicit ethical agents: These are machines capable of processing scenarios and acting on ethical decisions; they have algorithms for acting ethically.
  • Full ethical agents: These machines are similar to explicit ethical agents in being able to make ethical decisions, but they also possess human metaphysical features such as free will, consciousness, and intentionality.
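
A minimal Python sketch (hypothetical code, not taken from Moor) illustrates the Goodman agent's bug: when years are stored with only two digits, any date from 2000 onward compares as earlier than late-twentieth-century dates.

    # Hypothetical "Goodman agent": dates are stored with two-digit years.
    def is_earlier(date_a, date_b):
        """Compare (yy, mm, dd) tuples that keep only the last two digits of the year."""
        return date_a < date_b  # tuple comparison: year first, then month, then day

    print(is_earlier((98, 5, 1), (99, 1, 1)))    # True: 1998 correctly precedes 1999
    print(is_earlier((99, 12, 31), (0, 1, 1)))   # False: the year 2000 wrongly sorts before 1999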

(See artificial systems and moral responsibility.)

Focuses of machine ethics


Machine learning bias


Big data and machine learning algorithms have become popular across numerous domains, including online advertising, credit ratings, and criminal sentencing, with the promise of providing more objective, data-driven results, but they have also been identified as a potential source of perpetuating social inequalities and discrimination.[16][17] A 2015 study found that women were less likely to be shown high-income job ads by Google's AdSense. Another study found that Amazon's same-day delivery service was intentionally made unavailable in black neighborhoods. Neither Google nor Amazon was able to trace these outcomes to a single issue; both attributed the outcomes to the black-box algorithms they used.[16]

The United States judicial system has begun using quantitative risk assessment software when making decisions related to bail and sentencing, in an effort to be fairer and to reduce an already high imprisonment rate. These tools analyze a defendant's criminal history, among other attributes. In a study of 7,000 people arrested in Broward County, Florida, only 20% of the individuals predicted by the county's risk assessment scoring system to commit a crime went on to commit one.[17] A 2016 ProPublica report analyzed recidivism risk scores calculated by one of the most commonly used tools, the Northpointe COMPAS system, and looked at outcomes over two years. The report found that only 61% of those deemed high risk committed additional crimes during that period. It also flagged that African-American defendants were far more likely to be given high-risk scores than white defendants.[17]
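
An illustrative Python sketch of this kind of audit, using entirely synthetic records rather than the ProPublica data or methodology, shows how false-positive rates can be compared across demographic groups:

    # Synthetic example of a fairness audit: compare the rate at which people
    # who did NOT reoffend were nevertheless labeled high risk, by group.
    records = [
        # (group, predicted_high_risk, reoffended_within_two_years)
        ("A", True,  False), ("A", True,  True),  ("A", False, False),
        ("A", True,  False), ("B", False, False), ("B", True,  True),
        ("B", False, True),  ("B", False, False),
    ]

    def false_positive_rate(group):
        """Share of non-reoffenders in a group who were flagged high risk."""
        non_reoffenders = [r for r in records if r[0] == group and not r[2]]
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders)

    for g in ("A", "B"):
        print(g, round(false_positive_rate(g), 2))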

In 2016, the Obama Administration's Big Data Working Group—an overseer of various big-data regulatory frameworks—released reports warning of "the potential of encoding discrimination in automated decisions" and calling for "equal opportunity by design" in applications such as credit scoring.[18][19] The reports encourage discourse among policymakers, citizens, and academics alike, but recognize that they do not offer a solution to the encoding of bias and discrimination into algorithmic systems.


Algorithms and training


AI paradigms have been debated, particularly with regard to their efficacy and bias. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis). In contrast, Chris Santos-Lang argued in favor of neural networks and genetic algorithms on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable than machines to criminal "hackers".
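
A minimal sketch of the transparency argument, assuming the scikit-learn library, shows how a trained decision tree's reasoning can be printed as explicit, human-auditable rules, something a neural network's weights do not directly provide:

    # Train a small decision tree and print its decision rules.
    # (Illustrative only; assumes scikit-learn is installed.)
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

    # Every prediction can be traced to an explicit chain of threshold tests.
    print(export_text(clf, feature_names=list(iris.feature_names)))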

In 2009, in an experiment at the Laboratory of Intelligent Systems at the Ecole Polytechnique Fédérale de Lausanne in Switzerland, AI robots were programmed to cooperate with each other and tasked with searching for a beneficial resource while avoiding a poisonous one.[20] During the experiment, the robots were grouped into clans, and the successful members' digital genetic code was used for the next generation, a type of algorithm known as a genetic algorithm. After 50 successive generations, one clan's members discovered how to distinguish the beneficial resource from the poisonous one. The robots then learned to lie to each other in an attempt to hoard the beneficial resource from other robots.[21] In the same experiment, the robots also learned to behave selflessly, signaling danger to other robots and in some cases even dying to save other robots.[22] The implications of this experiment have been challenged by machine ethicists: in the Ecole Polytechnique Fédérale experiment, the robots' goals were programmed to be "terminal", whereas human motives typically require never-ending learning.
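
For illustration only (this is not the EPFL experiment's code), a minimal genetic algorithm in Python shows the selection-and-recombination loop such experiments rely on: fitter "genomes" are kept and recombined to form the next generation.

    import random

    # Toy genetic algorithm: bit-string genomes are scored, the fittest
    # half reproduces, and offspring occasionally mutate.
    random.seed(0)
    GENOME_LEN, POP_SIZE, GENERATIONS = 16, 20, 50

    def fitness(genome):
        # Stand-in for "finds beneficial resource, avoids poison":
        # reward 1-bits over 0-bits.
        return sum(genome)

    def breed(parent_a, parent_b):
        cut = random.randrange(GENOME_LEN)
        child = parent_a[:cut] + parent_b[cut:]      # single-point crossover
        if random.random() < 0.1:                    # occasional mutation
            i = random.randrange(GENOME_LEN)
            child[i] ^= 1
        return child

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]        # successful genomes reproduce
        population = parents + [breed(*random.sample(parents, 2)) for _ in range(POP_SIZE - len(parents))]

    print("best fitness after evolution:", fitness(max(population, key=fitness)))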

Urgency


In 2009, academics and technical experts met at the Asilomar Conference Grounds in Monterey Bay, California, to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might acquire autonomy, and the degree to which they could use such abilities to pose a threat or hazard. It was noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They observed that self-awareness as depicted in science fiction is unlikely, but that there are other potential hazards and pitfalls.

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous function. The US Navy has funded a report indicating that as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions. The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device, which can emulate human interaction.[23]

Ethical frameworks and practices


Frameworks


The following ethical frameworks are used when developing autonomous systems, as tabulated in the white paper How to Prevent Discriminatory Outcomes in Machine Learning[24] by the World Economic Forum.

Principles on the Ethical Design and Use of AI and Autonomous Systems[24]

The comparison covers three sets of principles: the Asilomar Principles (Ethics and Values) on safe, ethical, and beneficial use of AI; the FATML Principles for Accountable Algorithms; and the IEEE Principles on Ethically Aligned Design.

Safety/Security/Accuracy (Verifiability)
  • Asilomar: "Safety – AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible"[25]
  • FATML: "Accuracy – Identify, log, and articulate sources of AI error and uncertainty throughout the algorithm and its data sources so that expected and worst-case implications can be understood and inform mitigation procedures"[26]
  • IEEE: "Human Benefit (Safety) – AI must be verifiably safe and secure throughout its operational lifetime"[27]

Transparency/Explainability/Auditability
  • Asilomar: "Failure Transparency – If systems cause harm, it should be possible to ascertain why"[25]
  • Asilomar: "Judicial Transparency – If systems are involved in key judicial decisionmaking, an explanation that is auditable by a competent human authority should be made available"[25]
  • FATML: "Explainability – Ensure that algorithmic decisions, as well as any data driving those decisions, can be explained to end users and other stakeholders in nontechnical terms"[26]
  • FATML: "Auditability – Enable interested third parties to probe, understand, and review the behavior of the algorithm through disclosure of information that enables monitoring, checking, or criticism, including through the provision of detailed documentation, technically suitable APIs, and permissive terms of use"[26]
  • IEEE: "Transparency/Traceability – It must be possible to discover how and why a system made a particular decision or acted in a certain way, and, if a system causes harm, to discover the root cause"[27]

Responsibility
  • Asilomar: "Responsibility – Designers and builders of AI systems are stakeholders in the moral implications of their use, misuse, and actions"[25]
  • FATML: "Responsibility – Make available externally visible avenues of redress for adverse individual or societal effects, and designate an internal role for the person who is responsible for the timely remedy of such issues"[26]
  • IEEE: "Responsibility – Designers and developers of systems should remain aware of and take into account the diversity of existing relevant cultural norms; manufacturers must be able to provide programmatic-level accountability proving why a system operates in certain ways"[27]

Fairness and Values Alignment
  • Asilomar: "Shared Benefit – AI technologies should benefit and empower as many people as possible"[25]
  • Asilomar: "Shared Prosperity – The economic prosperity created by AI should be shared broadly, to benefit all of humanity"[25]
  • Asilomar: "Non-Subversion – The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, social and civic processes"[25]
  • FATML: "Fairness – Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics"[26]
  • IEEE: "Embedding Values into AI – Identify the norms and elicit the values of a specific community affected by a particular AI, and ensure the norms and values included in AI are compatible with the relevant community"[27]
  • IEEE: "Human Benefit (Human Rights) – Design and operate AI in a way that respects human rights, freedoms, human dignity, and cultural diversity"[27]

Privacy
  • Asilomar: "Personal Privacy – People should have the right to access, manage, and control the data they generate, given AI systems' power to analyze and utilize that data"[25]
  • Asilomar: "Liberty and Privacy – The use of personal data by AI must not unreasonably curtail people's real or perceived liberty"[25]
  • FATML: N/A
  • IEEE: "Personal Data and Individual Access Control – People must be able to define, access, and manage their personal data as curators of their unique identity"[27]


In fiction


In science fiction, movies and novels have played with the idea of sentience in robots and machines.

Neill Blomkamp's Chappie (2015) enacted a scenario of being able to transfer one's consciousness into a computer.[28] Alex Garland's film Ex Machina (2014) followed an android with artificial intelligence undergoing a variation of the Turing test, a test administered to a machine to see whether its behavior can be distinguished from that of a human. Works such as The Terminator (1984) and The Matrix (1999) incorporate the concept of machines turning on their human masters (see artificial intelligence).

Isaac Asimov considered the issue in the 1950s in his collection I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[29] In his 1968 novel Do Androids Dream of Electric Sheep?, Philip K. Dick explores what it means to be human; in his post-apocalyptic scenario, he questions whether empathy is an entirely human characteristic. The novel is the basis for the 1982 science fiction film Blade Runner.



Notes

  1. ^ Doffman, Zak. "China's 'Abusive' Facial Recognition Machine Targeted by New U.S. Sanctions". Forbes. Retrieved 30 October 2019.
  2. ^ Garcia, Megan (7 January 2017). "Racist in the Machine: The Disturbing Implications of Algorithmic Bias". Project Muse. 33 (4). Retrieved 30 October 2019.
  3. ^ Moor, J.H. (2006). "The Nature, Importance, and Difficulty of Machine Ethics". IEEE Intelligent Systems. 21 (4): 18–21. doi:10.1109/MIS.2006.80. S2CID 831873. Retrieved 1 November 2019.
  4. ^ Boyle, Robert James. "A Case for Machine Ethics in Modeling Human-Level Intelligent Agents" (PDF). Kritike. Retrieved 1 November 2019.
  5. ^ Waldrop, Mitchell (1987). "A Question of Responsibility". AI Magazine. 8 (1). doi:10.1609/aimag.v8i1.572. Retrieved 5 November 2019.
  6. ^ "Papers from the 2005 AAAI Fall Symposium". Archived from the original on 2014-11-29.
  7. ^ a b Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press. ISBN 978-0-521-11235-2.
  8. ^ Anderson, Michael; Anderson, Susan Leigh, eds. (July–August 2006). "Special Issue on Machine Ethics". IEEE Intelligent Systems. 21 (4): 10–63. doi:10.1109/mis.2006.70. ISSN 1541-1672. S2CID 9570832. Archived from the original on 2011-11-26.
  9. ^ Siler, Cory (2015). "Review of Anderson and Anderson's Machine Ethics". Artificial Intelligence. 229: 200–201. doi:10.1016/j.artint.2015.08.013. S2CID 5613776. Retrieved 7 November 2019.
  10. ^ Tucker, Patrick (13 May 2014). "Now The Military Is Going To Build Robots That Have Morals". Defense One. Retrieved 9 July 2014.
  11. ^ "Best Selling Science Books". New York Times. September 8, 2014. Retrieved 9 November 2014.
  12. ^ "European Parliament, Committee on Legal Affairs. Draft Report with recommendations to the Commission on Civil Law Rules on Robotics". European Commission. Retrieved January 12, 2017.
  13. ^ Wakefield, Jane. "MEPs vote on robots' legal status – and if a kill switch is required". BBC. Retrieved 12 January 2017.
  14. ^ "European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics". European Parliament. Retrieved 8 November 2019.
  15. ^ Four Kinds of Ethical Robots
  16. ^ a b Crawford, Kate (25 June 2016). "Artificial Intelligence's White Guy Problem". The New York Times.
  17. ^ a b c Angwin, Julia; Mattu, Surya; Larson, Jeff; Kirchner, Lauren (23 May 2016). "Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And it's Biased Against Blacks". ProPublica.
  18. ^ Executive Office of the President (May 2016). "Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights" (PDF). Obama White House.
  19. ^ "Big Risks, Big Opportunities: the Intersection of Big Data and Civil Rights". Obama White House. 4 May 2016.
  20. ^ Evolving Robots Learn To Lie To Each Other, Popular Science, August 18, 2009
  21. ^ Evolving Robots Learn To Lie To Each Other, Popular Science, August 18, 2009
  22. ^ Santos-Lang, Chris (2002). "Ethics for Artificial Intelligences". Archived from the original on 2011-12-03.
  23. ^ Fox, Stuart. "Evolving Robots Learn To Lie To Each Other". Popular Science. Retrieved 30 October 2019.
  24. ^ a b "How to Prevent Discriminatory Outcomes in Machine Learning". World Economic Forum. Retrieved 8 November 2019.
  25. ^ a b c d e f g h i "AI Principles". Future of Life Institute. Retrieved 2019-11-09.
  26. ^ a b c d e "Principles for Accountable Algorithms and a Social Impact Statement for Algorithms :: FAT ML". www.fatml.org. Retrieved 2019-11-09.
  27. ^ a b c d e f https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf
  28. ^ Brundage, Miles; Winterton, Jamie. "Chappie and the Future of Moral Machines". Slate. Retrieved 30 October 2019.
  29. ^ "I, Robot". Wikipedia. Retrieved 30 October 2019.