Automated decision-making

From Wikipedia, the free encyclopedia

Revision as of 04:31, 5 October 2021

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data sourced from databases, text, social media, sensors, images or speech that is processed using a range of technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society, requiring consideration of the technical, legal, ethical, societal, educational and economic consequences.[1][2]

Overview

While some definitions of ADM suggest it involves decisions made through purely technological means,[3] in reality ADM can take many forms ranging from decision-support systems that make recommendations for human decision-makers to act on, sometimes known as augmented intelligence[4] or 'shared decision-making'[1], to fully automated decision-making processes that make decisions on behalf of institutions or organizations without human involvement.[5] Models used in automated decision-making systems can be as simple as checklists and decision trees through to artificial intelligence and deep neural networks (DNN).
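The spectrum from simple rule-based models to learned ones can be illustrated with a toy hand-written decision tree. This is a minimal sketch: the decision, attributes and thresholds below are invented for illustration and are not drawn from any deployed system.

```python
# Illustrative only: a hand-written decision tree for a hypothetical
# loan pre-screening decision. Every rule and threshold is made up.

def loan_prescreen(income: float, debt_ratio: float, has_defaults: bool) -> str:
    """Return a recommendation using fixed, human-readable rules."""
    if has_defaults:
        # Ambiguous cases are routed to a human, i.e. decision support
        # rather than fully automated decision-making.
        return "refer to human reviewer"
    if debt_ratio > 0.4:
        return "decline"
    if income >= 30000:
        return "approve"
    return "refer to human reviewer"

print(loan_prescreen(income=45000, debt_ratio=0.2, has_defaults=False))  # approve
```

A system like this is fully transparent and contestable; the concerns discussed later in the article arise largely when such explicit rules are replaced by learned models whose internal logic is harder to inspect.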

Since the 1950s, computers have gone from being able to do only basic processing to having the capacity to undertake complex, ambiguous and highly skilled tasks such as image and speech recognition, game play, scientific and medical analysis and inferencing across multiple data sources. ADM is now being increasingly deployed across all sectors of society and many diverse domains from entertainment to transport.

Data and technologies

Automated decision-making uses a range of data sources and technologies to make decisions that drive the behaviour of complex systems in many different contexts including self-driving cars, robotics, security systems, public administration, health, law and commerce.

Data quality

The quality of the data available for use in ADM systems is fundamental to the outcomes and is often highly problematic. Datasets are frequently incomplete, biased, highly variable, limited in time or coverage, controlled by corporations or governments, restricted for privacy or security reasons, or inconsistent in how terms are measured and described.

For machines to learn from data, large corpora are often required. These can be difficult to obtain or compute, but where available they have enabled significant breakthroughs, for example in diagnosing chest x-rays.[6]

Machine learning

Machine learning (ML) involves training computer programs through exposure to large data sets and examples to learn from experience and solve problems.[1] Machine learning can be used to generate and analyse data as well as make algorithmic calculations and has been applied to image and speech recognition, translations, text, data and simulations. While machine learning has been around for some time, it is becoming increasingly powerful due to recent breakthroughs in training deep neural networks (DNNs), and dramatic increases in data storage capacity and computational power with GPU coprocessors and cloud computing.[1]
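The core idea of "learning from examples" can be sketched in a few lines. The following is a deliberately minimal illustration, a perceptron fitted to a tiny invented dataset; real ADM systems train far larger models (such as the deep neural networks mentioned above) on far larger data, but the principle of adjusting parameters to reduce errors on training examples is the same.

```python
# Minimal sketch of supervised learning: a perceptron trained on a toy,
# linearly separable dataset. All data points are invented.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Adjust weights and bias whenever a training example is misclassified."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # 0 when correct; +1/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy data: points are labelled 1 roughly when x1 + x2 is large.
data = [((0.0, 0.0), 0), ((1.0, 1.0), 1), ((0.2, 0.1), 0), ((0.9, 0.8), 1)]
w, b = train_perceptron(data)
print(predict(w, b, (1.0, 0.9)))  # classifies an unseen point: 1
```

Deep neural networks generalise this scheme to millions of parameters arranged in many layers, which is what the recent increases in data storage, GPU coprocessors and cloud computing have made practical.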

Applications

ADM is being used to replace or augment administrative decision-making by both public and private-sector organisations for a range of reasons including to help increase consistency, improve efficiency, and enable new solutions to complex problems.[7]

In legal systems around the world, algorithmic tools such as risk assessment instruments (RAIs) are being used to supplement or replace the human judgment of judges, civil servants and police officers in many contexts.[8] In the United States, RAIs are being used to generate scores to predict the risk of recidivism in pre-trial detention and sentencing decisions,[9] to evaluate parole for prisoners, and to predict “hot spots” for future crime.[10][11][12] These scores may have automatic effects or may be used to inform decisions made by officials within the justice system.[8] In Canada, ADM has been used since 2014 to automate certain activities conducted by immigration officials and to support the evaluation of some immigrant and visitor applications.[13]

Digital information and entertainment platforms increasingly provide automated recommendations (recommender systems) to users based on demographic information, previous selections, collaborative filtering or content-based filtering. This includes music and video platforms, academic publishing, health advice, product databases and search engines. Many recommender systems provide autonomy to users in accepting recommendations and incorporate data-driven algorithmic feedback loops based on the actions of the system user.[5]
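The collaborative-filtering approach mentioned above can be sketched minimally: recommend items that similar users rated highly. The ratings matrix, user names and items below are entirely invented for illustration; production recommender systems use much richer data and models.

```python
# Toy user-based collaborative filtering over a hypothetical ratings matrix.
from math import sqrt

ratings = {  # invented data: user -> {item: rating out of 5}
    "alice": {"song_a": 5, "song_b": 3, "song_c": 4},
    "bob":   {"song_a": 4, "song_b": 2, "song_c": 5, "song_d": 5},
    "carol": {"song_b": 5, "song_d": 1},
}

def cosine_sim(u, v):
    """Cosine similarity of two users' rating vectors over shared items."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sqrt(sum(r * r for r in u.values()))
    nv = sqrt(sum(r * r for r in v.values()))
    return dot / (nu * nv)

def recommend(user):
    """Rank items the user has not seen, weighted by rater similarity."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine_sim(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['song_d']
```

The feedback loop described in the text arises because each accepted or rejected recommendation becomes new data in the ratings matrix, shifting what the system suggests next.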

Autonomous vehicles such as self-driving cars and other forms of transport are another area where automated decision-making systems are being used to replace various aspects of human control, ranging from level 0 (complete human driving) to level 5 (completely autonomous).[1] At level 5 the machine is able to make decisions to control the vehicle based on data models, geospatial mapping, and real-time sensing and processing of the environment. Cars at levels 1 to 3 are already available on the market. Self-driving cars raise many questions of liability and ethical decision-making in the case of accidents, as well as privacy issues. In 2016 the German government established an ‘Ethics Commission on Automated and Connected Driving’, which presented a report with 20 ethical rules for the adaptation of automated and connected driving.
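The level 0 to 5 scale above can be summarised as a simple lookup of how much control remains with the human. The one-line descriptions are paraphrased summaries for illustration, not the official SAE J3016 wording.

```python
# Paraphrased summary of driving-automation levels (illustrative wording).
AUTOMATION_LEVELS = {
    0: "no automation: human performs all driving",
    1: "driver assistance: system helps with steering or speed",
    2: "partial automation: combined steering and speed, human monitors",
    3: "conditional automation: system drives, human takes over on request",
    4: "high automation: no human needed within a defined operating domain",
    5: "full automation: system drives in all conditions",
}

def requires_human_driver(level: int) -> bool:
    """At levels 0-2 a human must continuously monitor the environment."""
    return level <= 2

print(requires_human_driver(2), requires_human_driver(5))  # True False
```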

Ethical and legal issues

There are many social, ethical and legal implications of automated decision-making systems. Concerns raised include lack of transparency and contestability of decisions, incursions on privacy and surveillance, exacerbating systemic bias and inequality due to data and algorithmic bias, intellectual property rights, the spread of misinformation via media platforms, administrative discrimination, risk and responsibility, unemployment and many others.[14] As ADMS become more ubiquitous there is greater need to address the ethical challenges to ensure good governance in information societies.[15]

ADM systems are often based on machine learning and algorithms that cannot easily be viewed or analysed, leading to concerns that they are 'black box' systems which are not transparent or accountable.[1]

A report from Citizen Lab in Canada argues for a critical human rights analysis of the application of ADM in various areas to ensure the use of automated decision-making does not result in infringements on rights, including the rights to equality and non-discrimination; freedom of movement, expression, religion, and association; privacy rights; and the rights to life, liberty, and security of the person.[13]

Legislative responses to ADM include:

  • The European General Data Protection Regulation (GDPR), introduced in 2016, is a regulation in EU law on data protection and privacy in the European Union (EU). Article 22(1) enshrines the right of data subjects not to be subject to a decision based solely on automated processing where it produces legal or similarly significant effects.[16][17]

See also

References

  1. ^ a b c d e f Larus, James; Hankin, Chris; Carson, Siri Granum; Christen, Markus; Crafa, Silvia; Grau, Oliver; Kirchner, Claude; Knowles, Bran; McGettrick, Andrew; Tamburri, Damian Andrew; Werthner, Hannes (2018). "When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making". New York, NY, USA: Association for Computing Machinery. doi:10.1145/3185595.
  2. ^ Mökander, Jakob; Morley, Jessica; Taddeo, Mariarosaria; Floridi, Luciano (2021-07-06). "Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations". Science and Engineering Ethics. 27 (4): 44. doi:10.1007/s11948-021-00319-4. ISSN 1471-5546. PMC 8260507. PMID 34231029.
  3. ^ UK Information Commissioner's Office (2021-09-24). "Guide to the UK General Data Protection Regulation (UK GDPR)". Information Commissioner's Office UK. Retrieved 2021-10-05.
  4. ^ "Making Policy on Augmented Intelligence in Health Care". AMA Journal of Ethics. 21 (2): E188–191. 2019-02-01. doi:10.1001/amajethics.2019.188. ISSN 2376-6980.
  5. ^ a b Araujo, Theo; Helberger, Natali; Kruikemeier, Sanne; de Vreese, Claes H. (2020-09-01). "In AI we trust? Perceptions about automated decision-making by artificial intelligence". AI & SOCIETY. 35 (3): 611–623. doi:10.1007/s00146-019-00931-w. ISSN 1435-5655.
  6. ^ Seah, Jarrel C Y; Tang, Cyril H M; Buchlak, Quinlan D; Holt, Xavier G; Wardman, Jeffrey B; Aimoldin, Anuar; Esmaili, Nazanin; Ahmad, Hassan; Pham, Hung; Lambert, John F; Hachey, Ben (August 2021). "Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study". The Lancet Digital Health. 3 (8): e496–e506. doi:10.1016/s2589-7500(21)00106-0. ISSN 2589-7500.
  7. ^ Taddeo, Mariarosaria; Floridi, Luciano (2018-08-24). "How AI can be a force for good". Science. 361 (6404): 751–752. Bibcode:2018Sci...361..751T. doi:10.1126/science.aat5991. ISSN 0036-8075. PMID 30139858. S2CID 52075037.
  8. ^ a b Chohlas-Wood, Alex (2020). Understanding risk assessment instruments in criminal justice. Brookings Institute.
  9. ^ Angwin, Julia; Larson, Jeff; Mattu, Surya (23 May 2016). "Machine Bias". ProPublica. Retrieved 2021-10-04.
  10. ^ Nissan, Ephraim (2017-08-01). "Digital technologies and artificial intelligence's present and foreseeable impact on lawyering, judging, policing and law enforcement". AI & SOCIETY. 32 (3): 441–464. doi:10.1007/s00146-015-0596-5. ISSN 1435-5655.
  11. ^ Dressel, Julia; Farid, Hany. "The accuracy, fairness, and limits of predicting recidivism". Science Advances. 4 (1): eaao5580. doi:10.1126/sciadv.aao5580. PMC 5777393. PMID 29376122.
  12. ^ Ferguson, Andrew Guthrie (2017). The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. New York: NYU Press. ISBN 9781479869978.
  13. ^ a b Molnar, Petra; Gill, Lex (2018). "Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada's Immigration and Refugee System". Citizen Lab and International Human Rights Program (Faculty of Law, University of Toronto).
  14. ^ Eubanks, Virginia (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (First ed.). New York, NY. ISBN 978-1-250-07431-7. OCLC 1013516195.
  15. ^ Cath, Corinne (2018-11-28). "Governing artificial intelligence: ethical, legal and technical opportunities and challenges". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 376 (2133): 20180080. Bibcode:2018RSPTA.37680080C. doi:10.1098/rsta.2018.0080. PMC 6191666. PMID 30322996.
  16. ^ "EUR-Lex - 32016R0679 - EN - EUR-Lex". eur-lex.europa.eu. Retrieved 2021-09-13.
  17. ^ Brkan, Maja (2017-06-12). "AI-supported decision-making under the general data protection regulation". Proceedings of the 16th Edition of the International Conference on Articial Intelligence and Law. ICAIL '17. London, United Kingdom: Association for Computing Machinery: 3–8. doi:10.1145/3086512.3086513. ISBN 978-1-4503-4891-1. S2CID 23933541.