Regulation of algorithms: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 13:08, 28 March 2020

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules, and public sector policies for the promotion and regulation of algorithms, particularly artificial intelligence and its subfield of machine learning.[1][2][3] For the subset of AI algorithms, the term regulation of artificial intelligence is used. Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but it is also considered challenging.[4]

The motivation for regulating algorithms is the apprehension of losing control over algorithms whose impact on human life is increasing. Multiple countries have already introduced regulations for automated credit score calculation; for such algorithms, a right to explanation is mandatory.[5][6]
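As an illustration of what a right to explanation can mean in practice for automated credit scoring, the sketch below shows one common approach: reporting the features that pushed the score down the most as the principal reasons for an adverse decision, in the spirit of the adverse-action notices required by rules such as §1002.9(b)(2). All feature names, weights, and reason texts here are hypothetical, not drawn from any actual scoring system.

```python
# Hypothetical linear scorecard weights (illustrative only).
WEIGHTS = {
    "payment_history": 0.35,
    "credit_utilization": -0.30,
    "account_age_years": 0.15,
    "recent_inquiries": -0.20,
}

# Hypothetical adverse-action reason texts, keyed by feature.
REASON_TEXT = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "recent_inquiries": "Too many recent credit inquiries",
    "payment_history": "Insufficient payment history",
    "account_age_years": "Length of credit history is too short",
}

def score_with_reasons(applicant, threshold=0.5, max_reasons=2):
    """Return (approved, score, principal reasons for a denial)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= threshold
    reasons = []
    if not approved:
        # The most negative contributions become the disclosed reasons.
        worst = sorted(contributions, key=contributions.get)[:max_reasons]
        reasons = [REASON_TEXT[f] for f in worst]
    return approved, round(score, 3), reasons

applicant = {"payment_history": 0.4, "credit_utilization": 0.9,
             "account_age_years": 0.2, "recent_inquiries": 3}
approved, score, reasons = score_with_reasons(applicant)
```

For the sample applicant the score falls below the threshold, and the two largest negative contributions (recent inquiries, credit utilization) are returned as the explanation accompanying the denial.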

Regulation of Artificial Intelligence

Public discussion

In 2017 Elon Musk advocated regulation of algorithms in the context of the existential risk from artificial general intelligence.[7][8][9] According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation."[7]

In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[10] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that artificial intelligence is in its infancy and that it is too early to regulate the technology.[11] Instead of trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.[12] One suggestion has been the development of a global governance board to regulate AI development.[13] In 2020, the European Union published its draft strategy paper for promoting and regulating AI.[14]

Implementation

AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.[4] The development of public sector strategies for management and regulation of AI has been increasingly deemed necessary at the local, national,[15] and international levels[14] and in a variety of fields, from public service management[16] to law enforcement,[14] the financial sector,[15] robotics,[17] the military,[18] and international law.[19][20]

In the United States, on January 7, 2020, following the Executive Order on 'Maintaining American Leadership in Artificial Intelligence', the White House’s Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI.[21][22] In response, the National Institute of Standards and Technology has released a position paper,[23] the National Security Commission on Artificial Intelligence has published an interim report,[24] and the Defense Innovation Board has issued recommendations on the ethical use of AI.[25]

In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue,[19] and leading to proposals for global regulation.[26] In the United States, guidance on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence.[27]

Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards and encouraging research into safe AI, and the possibility of differential technological development (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control. Proposed regulation of conscious AGI systems focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights.[28]

Regulation of Blockchain & Cryptocurrency

Regulation of blockchain algorithms is often discussed alongside regulation of AI algorithms.[29] Blockchain systems provide transparent and immutable records of transactions, and thereby conflict with requirements of the European GDPR, such as the erasure of personal data.[30][31]
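The tension with data-protection law comes from how blockchains achieve immutability: each block commits to the hash of the previous one, so editing or deleting an earlier record invalidates every later block. The toy hash-chained ledger below (an illustrative sketch, not any real blockchain implementation) demonstrates this.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash of a block: commits to both its data and its predecessor."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    """Build a chain of blocks, each linked to the previous block's hash."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Check every block's link and hash; any edit breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2", "carol->dave:1"])
assert verify(chain)

# "Erasing" personal data from an early record, as a GDPR erasure
# request might require, fails the chain's integrity check:
chain[0]["data"] = "alice->bob:[REDACTED]"
assert not verify(chain)
```

Honoring an erasure request would thus require rewriting and re-hashing every subsequent block, which is exactly what a distributed ledger is designed to prevent.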

In popular culture

In 1942, author Isaac Asimov addressed regulation of algorithms by introducing the fictional Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[32]


References

  1. ^ "Algorithms have gotten out of control. It's time to regulate them". theweek.com. 3 April 2019. Retrieved 22 March 2020.
  2. ^ Martini, Mario. "FUNDAMENTALS OF A REGULATORY SYSTEM FOR ALGORITHM-BASED PROCESSES" (PDF). Retrieved 22 March 2020.
  3. ^ "Rise and Regulation of Algorithms". Berkeley Global Society. Retrieved 22 March 2020.
  4. ^ a b Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (2018-07-24). "Artificial Intelligence and the Public Sector—Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692.
  5. ^ Consumer Financial Protection Bureau, §1002.9(b)(2)
  6. ^ Edwards, Lilian; Veale, Michael (2018). "Enslaving the Algorithm: From a 'Right to an Explanation' to a 'Right to Better Decisions'?". IEEE Security & Privacy. SSRN 3052831.
  7. ^ a b "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR.org. Retrieved 27 November 2017.
  8. ^ Gibbs, Samuel (17 July 2017). "Elon Musk: regulate AI to combat 'existential threat' before it's too late". The Guardian. Retrieved 27 November 2017.
  9. ^ Kharpal, Arjun (7 November 2017). "A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says". CNBC. Retrieved 27 November 2017.
  10. ^ Gibbs, Samuel (17 July 2017). "Elon Musk: regulate AI to combat 'existential threat' before it's too late". The Guardian. Retrieved 27 November 2017.
  11. ^ Kharpal, Arjun (7 November 2017). "A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says". CNBC. Retrieved 27 November 2017.
  12. ^ Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004.
  13. ^ Boyd, Matthew; Wilson, Nick (2017-11-01). "Rapid developments in Artificial Intelligence: how might the New Zealand government respond?". Policy Quarterly. 13 (4). doi:10.26686/pq.v13i4.4619. ISSN 2324-1101.
  14. ^ a b c White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1.
  15. ^ a b Bredt, Stephan (2019-10-04). "Artificial Intelligence (AI) in the Financial Sector—Potential and Public Strategies". Frontiers in Artificial Intelligence. 2. doi:10.3389/frai.2019.00016. ISSN 2624-8212.
  16. ^ Wirtz, Bernd W.; Müller, Wilhelm M. (2018-12-03). "An integrated artificial intelligence framework for public management". Public Management Review. 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268. ISSN 1471-9037.
  17. ^ Iphofen, Ron; Kritikos, Mihalis (2019-01-03). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science: 1–15. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041.
  18. ^ United States. Defense Innovation Board. AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. OCLC 1126650738.
  19. ^ a b "Robots with Guns: The Rise of Autonomous Weapons Systems". Snopes.com. 21 April 2017. Retrieved 24 December 2017.
  20. ^ Bento, Lucas (2017). "No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law". Harvard Scholarship Depository. Retrieved 2019-09-14.
  21. ^ "AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation". Inside Tech Media. 2020-01-14. Retrieved 2020-03-25.
  22. ^ Memorandum for the Heads of Executive Departments and Agencies (PDF). Washington, D.C.: White House Office of Science and Technology Policy. 2020.
  23. ^ U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (PDF). National Institute of Science and Technology. 2019.
  24. ^ NSCAI Interim Report for Congress. The National Security Commission on Artificial Intelligence. 2019.
  25. ^ AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense (PDF). Washington, DC: Defense Innovation Board. 2020.
  26. ^ Baum, Seth (2018-09-30). "Countering Superintelligence Misinformation". Information. 9 (10): 244. doi:10.3390/info9100244. ISSN 2078-2489.
  27. ^ Stefanik, Elise M. (2018-05-22). "H.R.5356 – 115th Congress (2017–2018): National Security Commission Artificial Intelligence Act of 2018". www.congress.gov. Retrieved 2020-03-13.
  28. ^ Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949.
  29. ^ Fitsilis, Fotios (2019). Imposing Regulation on Advanced Algorithms. Springer International Publishing. ISBN 978-3-030-27978-3.
  30. ^ "A recent report issued by the Blockchain Association of Ireland has found there are many more questions than answers when it comes to GDPR". siliconrepublic.com. Archived from the original on 5 March 2018. Retrieved 5 March 2018.
  31. ^ "Blockchain and the General Data Protection Regulation - Think Tank". www.europarl.europa.eu (in German). Retrieved 28 March 2020.
  32. ^ Asimov, Isaac (1950). "Runaround". I, Robot (The Isaac Asimov Collection ed.). New York City: Doubleday. p. 40. ISBN 978-0-385-42304-5. This is an exact transcription of the laws. They also appear in the front of the book, and in both places there is no "to" in the 2nd law.