Ethics of artificial intelligence

From Wikipedia, the free encyclopedia

The ethics of artificial intelligence is the part of the ethics of technology specific to artificially intelligent systems. It is sometimes[1] divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

Robot ethics[edit]

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.[2] It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.

Robot rights[edit]

"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights.[3] It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot's duty to serve humans, by analogy with linking human rights to human duties before society.[4] These could include the right to life and liberty, freedom of thought and expression, and equality before the law.[5] The issue has been considered by the Institute for the Future[6] and by the U.K. Department of Trade and Industry.[7]

Experts disagree on whether specific and detailed laws will be required soon or can safely wait for the distant future.[7] Glenn McGee reports that sufficiently humanoid robots may appear by 2020.[8] Ray Kurzweil sets the date at 2029.[9] Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.[10]

The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:

61. If in any given year, a publicly available open-source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[11]

In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition.[12] Some saw this gesture as openly denigrating of human rights and the rule of law.[13]

The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.

Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, as a burden both to the AI agents and to human society.[14]

Threat to human dignity[edit]

Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:

  • A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
  • A therapist (as was proposed by Kenneth Colby in the 1970s)
  • A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
  • A soldier
  • A judge
  • A police officer

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[15]

Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[15] However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines. Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and ingrained, which makes them even more difficult to spot and fight against.[16] AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique: "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.

Bill Hibbard[17] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

Transparency, accountability, and open source[edit]

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[18] Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.[19] OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity.[20] There are numerous other open-source AI developments.

Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE has a standardisation effort on AI transparency.[21] The IEEE effort identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organizations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog post on this topic, asking for government regulation to help determine the right thing to do.[22]

Not only companies, but many other researchers and citizen advocates, recommend government regulation as a means of ensuring transparency and, through it, human accountability. An updated list of AI ethics guidelines is maintained by AlgorithmWatch. This strategy has proven controversial, as some worry that it will slow the rate of innovation; others argue that regulation leads to systemic stability better able to support innovation in the long term.[23] The OECD, UN, EU, and many countries are presently working on strategies for regulating AI and finding appropriate legal frameworks.[24][25][26]

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its “Policy and investment recommendations for trustworthy Artificial Intelligence”.[27] This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.[28]

Biases in AI systems[edit]

AI has become increasingly integral to facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. Moreover, the data used to train these AI systems can itself be biased.[29][30][31][32] For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender;[33] these AI systems detected the gender of white men more accurately than that of darker-skinned men. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.[34] Similarly, Amazon's termination of its AI hiring and recruitment tool is another example showing that AI systems are not guaranteed to be fair: the algorithm preferred male candidates over female ones, because it was trained on data collected over a 10-year period that came mostly from male applicants.[35]

Bias can creep into algorithms in many ways. For example, Friedman and Nissenbaum identify three categories of bias in computer systems: preexisting bias, technical bias, and emergent bias.[36] In a highly influential branch of AI known as "natural language processing," problems can arise from the "text corpus"—the source material the algorithm uses to learn about the relationships between different words.[37]
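The text-corpus problem can be illustrated with a toy sketch: any statistic a learning algorithm derives from word co-occurrences inherits the skews of the corpus it reads. The corpus and the counting scheme below are invented for illustration, not drawn from any cited study.

```python
from collections import Counter
from itertools import combinations

# Toy corpus; a real NLP system would learn from billions of words.
corpus = [
    "the doctor said he would operate today",
    "the nurse said she would assist today",
    "the doctor said he reviewed the chart",
    "the nurse said she reviewed the chart",
]

# Count how often each pair of distinct words appears in the same sentence.
cooccur = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooccur[frozenset((a, b))] += 1

def assoc(word, pronoun):
    """Raw co-occurrence count between a word and a pronoun."""
    return cooccur[frozenset((word, pronoun))]

# The corpus pairs "doctor" with "he" and "nurse" with "she", so any
# association statistic derived from it inherits that skew.
print(assoc("doctor", "he"), assoc("doctor", "she"))  # 2 0
print(assoc("nurse", "she"), assoc("nurse", "he"))    # 2 0
```

Real systems use far more sophisticated statistics (word embeddings rather than raw counts), but the mechanism is the same: the learned associations can only reflect the text they were derived from.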

Large companies such as IBM and Google have started researching and addressing bias.[38][39][40] One solution for addressing bias is to create documentation for the data used to train AI systems.[41][42]
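One hedged sketch of what such data documentation might look like in code: a minimal record type capturing the provenance and known skews of a training set. The field names and example values here are hypothetical, not the official schema of the "data statements" or "datasheets for datasets" proposals cited above.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Hypothetical minimal documentation record for a training dataset."""
    name: str
    motivation: str                 # why the dataset was created
    collection_period: str          # when the data was gathered
    known_skews: list = field(default_factory=list)  # documented biases

    def summary(self) -> str:
        skews = "; ".join(self.known_skews) or "none documented"
        return f"{self.name} ({self.collection_period}): known skews: {skews}"

# Invented example echoing the hiring-tool case described earlier.
sheet = Datasheet(
    name="resume-screening-v1",
    motivation="rank job applicants",
    collection_period="2004-2014",
    known_skews=["applicant pool is predominantly male"],
)
print(sheet.summary())
```

The point of such documentation is that a consumer of the dataset sees the skew before training on it, rather than discovering it in a deployed system's behavior.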

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it.[43]

Liability for self-driving cars[edit]

Widespread use of partially to fully autonomous cars appears imminent. But fully autonomous technologies present new issues and challenges.[44][45] Recently, a debate has arisen over legal liability: who is the responsible party when these cars are involved in accidents?[46][47] In one report,[48] a driverless car hit a pedestrian, raising the dilemma of whom to blame for the accident. Even though the driver was inside the car during the accident, the controls were fully in the hands of the computer.

In one case, which took place on March 19, 2018, a self-driving Uber car killed a pedestrian in Arizona (the death of Elaine Herzberg). Beyond the investigation of how the pedestrian came to be struck and killed, the incident prompts a reconsideration of liability not only for partially or fully automated cars themselves, but also for the stakeholders who should be held responsible in such situations. The automated car could detect nearby cars and objects in order to drive itself, but it lacked the ability to react to a pedestrian within its original design, because under ordinary assumptions people would not appear in the roadway. This raises the issue of whether the driver, the pedestrian, the car company, or the government should be responsible in such a case.[citation needed]

According to one article,[49] current partially or fully automated driving functions remain immature and still require the driver to pay attention and retain full control of the vehicle; these features are meant to support the driver and reduce fatigue, not to replace the driver entirely. Thus, governments bear the greatest responsibility for the current situation: they should regulate both the car companies and the drivers who over-rely on self-driving features, and educate the public that these technologies bring convenience but are not a shortcut around attentive driving. Before autonomous cars become widely used, these issues need to be tackled through new policies.[50][51][52]

Weaponization of artificial intelligence[edit]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[53][54] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.[55] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[56][57] One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.[citation needed]

Within the last decade, there has been intensive research into autonomous robots with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[58] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, which is why there should be a set moral framework that the AI cannot override.[59]

There has been a recent outcry with regard to the engineering of artificial intelligence weapons, including fears of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition[60] to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[61]

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[62]

Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology." These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.[61]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that researchers investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[63]

Machine ethics[edit]

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[64][65][66][67] To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[68]

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[69] More recently, academics and many governments have challenged the idea that AI can itself be held accountable.[70] A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.[71]

In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[72]

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study of this issue.[75] It points to programs like the Language Acquisition Device, which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity."[76] He suggests that it may be somewhat or possibly very dangerous for humans.[77] This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[78]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.[76]

However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[79] Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, and of whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, and hesitation.

In Moral Machines: Teaching Robots Right from Wrong,[80] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis),[81] while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[82]
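The transparency argument above can be made concrete with a toy sketch: a decision tree's entire decision procedure can be printed as a human-readable rule and audited line by line, whereas a neural network's behavior is buried in numeric weights. The single-split "stump", the feature name, and the threshold below are invented for illustration.

```python
class DecisionStump:
    """A one-split decision tree; the simplest auditable classifier."""

    def __init__(self, feature, threshold, label_if_true, label_if_false):
        self.feature = feature
        self.threshold = threshold
        self.label_if_true = label_if_true
        self.label_if_false = label_if_false

    def predict(self, example: dict) -> str:
        if example[self.feature] >= self.threshold:
            return self.label_if_true
        return self.label_if_false

    def explain(self) -> str:
        # The entire decision procedure, stated as one readable rule.
        return (f"IF {self.feature} >= {self.threshold} "
                f"THEN {self.label_if_true} ELSE {self.label_if_false}")

# Invented example in the spirit of the court-ruling discussion above.
stump = DecisionStump("prior_offenses", 3, "detain", "release")
print(stump.explain())
print(stump.predict({"prior_offenses": 1}))  # release
```

Of course, transparency of the rule does not guarantee fairness of the rule: if `prior_offenses` itself reflects biased policing, the stump is auditable yet still biased, which is one reason Santos-Lang argues the norms themselves must remain revisable.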

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller.[83]


Many researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.[84] In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[85]

However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that superintelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.[86]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[84][85] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have an adaptation such as common sense.[87] AI researchers such as Stuart J. Russell[88]:173 and Bill Hibbard[17] have proposed design strategies for developing beneficial machines.
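A toy sketch of the utility-function problem described above, with invented actions and payoffs: an agent that maximizes only a proxy objective can select an action that satisfies the stated objective while violating an unstated common-sense constraint.

```python
# Hypothetical action space for a message-delivery agent. The designer
# rewards "messages delivered" and forgets to encode the unstated
# common-sense constraint "don't wake the user".
actions = {
    "deliver_at_9am":       {"delivered": 10, "wakes_user": False},
    "deliver_at_3am":       {"delivered": 12, "wakes_user": True},
    "deliver_every_minute": {"delivered": 99, "wakes_user": True},
}

def utility(outcome):
    # Only the proxy objective is rewarded; side effects cost nothing.
    return outcome["delivered"]

best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # deliver_every_minute: maximal proxy, violates common sense

def utility_patched(outcome):
    # Making the constraint explicit changes the chosen action.
    return outcome["delivered"] - (1000 if outcome["wakes_user"] else 0)

best_patched = max(actions, key=lambda a: utility_patched(actions[a]))
print(best_patched)  # deliver_at_9am
```

The sketch trivializes the problem deliberately: with three actions the missing constraint is obvious, but the argument in the literature is that for a system choosing among vastly many actions, enumerating every common-sense constraint in advance is infeasible.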

AI ethics organisations[edit]

There are many organisations concerned with AI ethics and policy, public and governmental as well as corporate and societal.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[89]

The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization.

Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organizations to ensure AI is ethically applied.

In fiction[edit]

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction is between sentient and non-sentient software. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.[citation needed]

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games.[citation needed] It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Over time, debates have tended to focus less and less on possibility and more on desirability,[citation needed] as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, seeks to build more intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.[94]

See also[edit]


  1. ^ Müller, Vincent C. (30 April 2020). "Ethics of Artificial Intelligence and Robotics". Stanford Encyclopedia of Philosophy. Retrieved 26 September 2020.
  2. ^ Veruggio, Gianmarco (2007). "The Roboethics Roadmap". Scuola di Robotica: 2.
  3. ^ Evans, Woody (2015). "Posthuman Rights: Dimensions of Transhuman Worlds". Teknokultura. 12 (2). doi:10.5209/rev_TK.2015.v12.n2.49072.
  4. ^ Sheliazhenko, Yurii (2017). "Artificial Personal Autonomy and Concept of Robot Rights". European Journal of Law and Political Sciences. Retrieved 10 May 2017.
  5. ^ The American Heritage Dictionary of the English Language, Fourth Edition
  6. ^ "Robots could demand legal rights". BBC News. December 21, 2006. Retrieved January 3, 2010.
  7. ^ a b Henderson, Mark (April 24, 2007). "Human rights for robots? We're getting carried away". The Times Online. The Times of London. Retrieved May 2, 2010.
  8. ^ McGee, Glenn. "A Robot Code of Ethics". The Scientist.
  9. ^ Kurzweil, Ray (2005). The Singularity is Near. Penguin Books. ISBN 978-0-670-03384-3.
  10. ^ "The Big Question: Should the human race be worried by the rise of robots?". The Independent.
  11. ^ Loebner Prize Contest Official Rules — Version 2.0 The competition was directed by David Hamill and the rules were developed by members of the Robitron Yahoo group.
  12. ^ Saudi Arabia bestows citizenship on a robot named Sophia
  13. ^ Vincent, James (30 October 2017). "Pretending to give a robot citizenship helps no one". The Verge.
  14. ^ Wilks, Yorick, ed. (2010). Close engagements with artificial companions: key social, psychological, ethical and design issues. Amsterdam: John Benjamins Pub. Co. ISBN 978-9027249944. OCLC 642206106.
  15. ^ a b Joseph Weizenbaum, quoted in McCorduck 2004, pp. 356, 374–376
  16. ^ Kaplan, Andreas; Haenlein, Michael (January 2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62 (1): 15–25. doi:10.1016/j.bushor.2018.08.004.
  17. ^ a b Hibbard, Bill (17 November 2015). "Ethical Artificial Intelligence". arXiv:1411.1373.
  18. ^ Open Source AI. Bill Hibbard. 2008 proceedings of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan Franklin.
  19. ^ OpenCog: A Software Framework for Integrative Artificial General Intelligence. David Hart and Ben Goertzel. 2008 proceedings of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan Franklin.
  20. ^ Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free Cade Metz, Wired 27 April 2016.
  21. ^ "P7001 - Transparency of Autonomous Systems". IEEE. Retrieved 10 January 2019.
  23. ^ Bastin, Roland; Wantz, Georges (June 2017). "The General Data Protection Regulation Cross-industry innovation" (PDF). Inside magazine. Deloitte.
  24. ^ "UN artificial intelligence summit aims to tackle poverty, humanity's 'grand challenges'". UN News. 2017-06-07. Retrieved 2019-07-26.
  25. ^ "Artificial intelligence - Organisation for Economic Co-operation and Development". Retrieved 2019-07-26.
  26. ^ Anonymous (2018-06-14). "The European AI Alliance". Digital Single Market - European Commission. Retrieved 2019-07-26.
  27. ^ European Commission High-Level Expert Group on AI (2019-06-26). "Policy and investment recommendations for trustworthy Artificial Intelligence". Shaping Europe’s digital future - European Commission. Retrieved 2020-03-16.
  28. ^ "EU Tech Policy Brief: July 2019 Recap". Center for Democracy & Technology. Retrieved 2019-08-09.
  29. ^ Society, DeepMind Ethics & (2018-03-14). "The case for fairer algorithms - DeepMind Ethics & Society". Medium. Retrieved 2019-07-22.
  30. ^ "5 unexpected sources of bias in artificial intelligence". TechCrunch. Retrieved 2019-07-22.
  31. ^ Knight, Will. "Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead". MIT Technology Review. Retrieved 2019-07-22.
  32. ^ Villasenor, John (2019-01-03). "Artificial intelligence and bias: Four key challenges". Brookings. Retrieved 2019-07-22.
  33. ^ Lohr, Steve (9 February 2018). "Facial Recognition Is Accurate, if You're a White Guy". The New York Times.
  34. ^ Koenecke, Allison; Nam, Andrew; Lake, Emily; Nudell, Joe; Quartey, Minnie; Mengesha, Zion; Toups, Connor; Rickford, John R.; Jurafsky, Dan; Goel, Sharad (7 April 2020). "Racial disparities in automated speech recognition". Proceedings of the National Academy of Sciences. 117 (14): 7684–7689. doi:10.1073/pnas.1915768117. PMC 7149386. PMID 32205437.
  35. ^ "Amazon scraps secret AI recruiting tool that showed bias against women". Reuters. 2018-10-10. Retrieved 2019-05-29.
  36. ^ Friedman, Batya; Nissenbaum, Helen (July 1996). "Bias in computer systems". ACM Transactions on Information Systems (TOIS). 14 (3): 330–347. doi:10.1145/230538.230561. S2CID 207195759.
  37. ^ "Eliminating bias in AI". Retrieved 2019-07-26.
  38. ^ Olson, Parmy. "Google's DeepMind Has An Idea For Stopping Biased AI". Forbes. Retrieved 2019-07-26.
  39. ^ "Machine Learning Fairness | ML Fairness". Google Developers. Retrieved 2019-07-26.
  40. ^ "AI and bias - IBM Research - US". Retrieved 2019-07-26.
  41. ^ Bender, Emily M.; Friedman, Batya (December 2018). "Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science". Transactions of the Association for Computational Linguistics. 6: 587–604. doi:10.1162/tacl_a_00041.
  42. ^ Gebru, Timnit; Morgenstern, Jamie; Vecchione, Briana; Vaughan, Jennifer Wortman; Wallach, Hanna; Daumé III, Hal; Crawford, Kate (2018). "Datasheets for Datasets". arXiv:1803.09010.
  43. ^ Knight, Will. "Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead". MIT Technology Review. Retrieved 2019-07-26.
  44. ^ Davies, Alex (29 February 2016). "Google's Self-Driving Car Caused Its First Crash". Wired.
  45. ^ Levin, Sam; Wong, Julia Carrie (19 March 2018). "Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian". The Guardian.
  46. ^ "Who is responsible when a self-driving car has an accident?". Futurism. Retrieved 2019-07-26.
  47. ^ "Autonomous Car Crashes: Who - or What - Is to Blame?". Knowledge@Wharton. Retrieved 2019-07-26.
  48. ^ Delbridge, Emily. "Driverless Cars Gone Wild". The Balance. Retrieved 2019-05-29.
  49. ^ Maxmen, Amy (October 2018). "Self-driving car dilemmas reveal that moral choices are not universal". Nature. 562 (7728): 469–470. Bibcode:2018Natur.562..469M. doi:10.1038/d41586-018-07135-0. PMID 30356197.
  50. ^ "Regulations for driverless cars". GOV.UK. Retrieved 2019-07-26.
  51. ^ "Automated Driving: Legislative and Regulatory Action - CyberWiki". Retrieved 2019-07-26.
  52. ^ "Autonomous Vehicles | Self-Driving Vehicles Enacted Legislation". Retrieved 2019-07-26.
  53. ^ a b Call for debate on killer robots, By Jason Palmer, Science and technology reporter, BBC News, 3 August 2009.
  54. ^ Robot Three-Way Portends Autonomous Future, By David Axe, August 13, 2009.
  55. ^ United States. Defense Innovation Board. AI principles : recommendations on the ethical use of artificial intelligence by the Department of Defense. OCLC 1126650738.
  56. ^ New Navy-funded Report Warns of War Robots Going "Terminator" Archived 2009-07-28 at the Wayback Machine, by Jason Mick (Blog), February 17, 2009.
  57. ^ Navy report warns of robot uprising, suggests a strong moral compass, by Joseph L. Flatley, 18 February 2009.
  58. ^ Hellström, Thomas (June 2013). "On the moral responsibility of military robots". Ethics and Information Technology. 15 (2): 99–107. doi:10.1007/s10676-012-9301-2. S2CID 15205810. ProQuest 1372020233.
  59. ^ Mitra, Ambarish. "We can train AI to identify good and evil, and then use it to teach us morality". Quartz. Retrieved 2019-07-26.
  60. ^ "AI Principles". Future of Life Institute. Retrieved 2019-07-26.
  61. ^ a b Zach Musgrave and Bryan W. Roberts (2015-08-14). "Why Artificial Intelligence Can Too Easily Be Weaponized - The Atlantic". The Atlantic.
  62. ^ Cat Zakrzewski (2015-07-27). "Musk, Hawking Warn of Artificial Intelligence Weapons". WSJ.
  63. ^ GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved 11 October 2015.
  64. ^ Anderson. "Machine Ethics". Retrieved 27 June 2011.
  65. ^ Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press. ISBN 978-0-521-11235-2.
  66. ^ Anderson, M.; Anderson, S.L. (July 2006). "Guest Editors' Introduction: Machine Ethics". IEEE Intelligent Systems. 21 (4): 10–11. doi:10.1109/mis.2006.70. S2CID 9570832.
  67. ^ Anderson, Michael; Anderson, Susan Leigh (15 December 2007). "Machine Ethics: Creating an Ethical Intelligent Agent". AI Magazine. 28 (4): 15. doi:10.1609/aimag.v28i4.2065.
  68. ^ Boyles, Robert James M. (2017). "Philosophical Signposts for Artificial Moral Agent Frameworks". Suri. 6 (2): 92–109.
  69. ^ Asimov, Isaac (2008). I, Robot. New York: Bantam. ISBN 978-0-553-38256-3.
  70. ^ Bryson, Joanna; Diamantis, Mihailis; Grant, Thomas (September 2017). "Of, for, and by the people: the legal lacuna of synthetic persons". Artificial Intelligence and Law. 25 (3): 273–291. doi:10.1007/s10506-017-9214-9.
  71. ^ "Principles of robotics". UK's EPSRC. September 2010. Retrieved 10 January 2019.
  72. ^ Evolving Robots Learn To Lie To Each Other, Popular Science, August 18, 2009
  73. ^ New Navy-funded Report Warns of War Robots Going "Terminator" Archived 2009-07-28 at the Wayback Machine, by Jason Mick (Blog), February 17, 2009.
  74. ^ Navy report warns of robot uprising, suggests a strong moral compass, by Joseph L. Flatley, 18 February 2009.
  75. ^ AAAI Presidential Panel on Long-Term AI Futures 2008-2009 Study, Association for the Advancement of Artificial Intelligence, Accessed 26 July 2009.
  76. ^ a b Markoff, John (25 July 2009). "Scientists Worry Machines May Outsmart Man". The New York Times.
  77. ^ The Coming Technological Singularity: How to Survive in the Post-Human Era, by Vernor Vinge, Department of Mathematical Sciences, San Diego State University, (c) 1993 by Vernor Vinge.
  78. ^ Article at Archived May 24, 2012, at the Wayback Machine, July 2004, accessed 27 July 2009.
  79. ^ Al-Rodhan, Nayef (7 December 2015). "The Moral Code".
  80. ^ Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. USA: Oxford University Press. ISBN 978-0-19-537404-9.
  81. ^ Bostrom, Nick; Yudkowsky, Eliezer (2011). "The Ethics of Artificial Intelligence" (PDF). Cambridge Handbook of Artificial Intelligence. Cambridge Press.
  82. ^ Santos-Lang, Chris (2002). "Ethics for Artificial Intelligences".
  83. ^ Howard, Ayanna. "The Regulation of AI – Should Organizations Be Worried? | Ayanna Howard". MIT Sloan Management Review. Retrieved 2019-08-14.
  84. ^ a b Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics". In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
  85. ^ a b Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence". In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
  86. ^ "Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem". WIRED. 2014-12-04. Retrieved 2015-11-04.
  87. ^ Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI". In Schmidhuber, Thórisson, and Looks 2011, 388–393.
  88. ^ Russell, Stuart (October 8, 2019). Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
  89. ^ Fiegerman, Seth (28 September 2016). "Facebook, Google, Amazon create group to ease AI concerns". CNNMoney.
  90. ^ "Ethics guidelines for trustworthy AI". Shaping Europe’s digital future - European Commission. European Commission. 2019-04-08. Retrieved 2020-02-20.
  91. ^ White Paper on Artificial Intelligence: a European approach to excellence and trust. Brussels: European Commission. 2020.
  92. ^ "CCC Offers Draft 20-Year AI Roadmap; Seeks Comments". HPCwire. 2019-05-14. Retrieved 2019-07-22.
  93. ^ "Request Comments on Draft: A 20-Year Community Roadmap for AI Research in the US » CCC Blog". Retrieved 2019-07-22.
  94. ^ Cave, Stephen; Dihal, Kanta (6 August 2020). "The Whiteness of AI". Philosophy & Technology. doi:10.1007/s13347-020-00415-6.

External links[edit]