Algorithmic Justice League
| Abbreviation | AJL |
|---|---|
| Formation | 2016 |
| Founder | Joy Buolamwini |
| Purpose | AI activism |
| Location | Cambridge, Massachusetts, United States |
| Website | https://www.ajlunited.org/ |
The Algorithmic Justice League (AJL) is a digital advocacy organization based in Cambridge, Massachusetts. Founded by computer scientist Joy Buolamwini in 2016, AJL aims to raise awareness of the social implications of artificial intelligence through art and research.[1] It was featured in the 2020 documentary Coded Bias.[2]
History
Buolamwini founded the Algorithmic Justice League in 2016 after a personal encounter with biased facial detection software, which could not detect her "highly melanated" face until she donned a white mask. AJL was formed to expose the ubiquity of such bias in artificial intelligence and the threat it poses to civil rights.[3] Early AJL campaigns focused primarily on face recognition software, while more recent campaigns have dealt more broadly with questions of equity and accountability in AI, including algorithmic bias, algorithmic decision-making, algorithmic governance, and algorithmic auditing.
Activities
Face recognition
In 2018, Buolamwini collaborated with AI ethicist Timnit Gebru to release a landmark study on racial and gender bias in facial recognition algorithms. Their research, titled Gender Shades, found that facial analysis software released by IBM and Microsoft was less accurate when analyzing dark-skinned and feminine faces than when analyzing light-skinned and masculine faces.[4][5][6] The work has been widely cited by advocates and researchers since its publication, accumulating over 1,000 academic citations as of December 2020. The Gender Shades project and subsequent advocacy by AJL and similar groups led multiple technology companies, including Amazon and IBM, to address bias in the development of their algorithms and, in 2020, to temporarily suspend police use of their facial recognition products.[7][8]
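The "intersectional accuracy disparities" measured in Gender Shades can be understood as a comparison of classifier accuracy across subgroups defined by the intersection of skin type and gender. The sketch below is a minimal, hypothetical illustration of that idea in Python; it uses invented records rather than the paper's benchmark data or code, and the field names are assumptions made for the example.

```python
# Illustrative sketch only (not the Gender Shades methodology or dataset):
# compare classifier accuracy across intersectional subgroups.
from collections import defaultdict

# Each hypothetical record: (predicted_gender, true_gender, skin_type_group)
records = [
    ("male", "male", "lighter"),
    ("female", "female", "lighter"),
    ("male", "female", "darker"),    # misclassification
    ("female", "female", "darker"),
    ("male", "male", "darker"),
    ("female", "male", "darker"),    # misclassification
]

totals = defaultdict(int)
correct = defaultdict(int)
for predicted, actual, skin in records:
    group = (skin, actual)           # intersectional subgroup, e.g. darker-skinned female
    totals[group] += 1
    correct[group] += int(predicted == actual)

# Report per-subgroup accuracy; gaps between rows are the disparities of interest.
for group in sorted(totals):
    accuracy = correct[group] / totals[group]
    print(f"{group[0]:>7}-skinned {group[1]:<6} accuracy: {accuracy:.0%}")
```

In the study itself, the audited commercial classifiers performed worst on darker-skinned, feminine faces, which is exactly the kind of gap a per-subgroup comparison like this is designed to surface.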
A research collaboration involving AJL led to the release of a white paper in May 2020 calling for the creation of a federal office to regulate government use of face recognition.[9] In July 2020, AJL joined the ACLU and the Georgetown University Law Center in calling for a federal moratorium on face recognition technology.[10]
Speech recognition
In March 2020, AJL released a spoken word project, titled Voicing Erasure, that addresses racial bias in speech recognition algorithms. The piece was performed by numerous female activists and researchers in the field, including Ruha Benjamin, Sasha Costanza-Chock, Safiya Noble, and Kimberlé Crenshaw.[11]
Algorithmic governance
In 2019, Buolamwini represented AJL at a congressional hearing of the US House Committee on Science, Space, and Technology, discussing the societal and ethical implications of AI.[12][13]
References
- ^ "Learn More - The Algorithmic Justice League". www.ajl.org. Retrieved 12 December 2020.
- ^ Lee, Jennifer (8 February 2020). "When Bias Is Coded Into Our Technology". NPR.org. Retrieved 12 December 2020.
- ^ Trahan, Erin (18 November 2020). "Documentary 'Coded Bias' Unmasks The Racism Of Artificial Intelligence". www.wbur.org. Retrieved 12 December 2020.
- ^ Buolamwini, Joy; Gebru, Timnit (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification" (PDF). Proceedings of the 1st Conference on Fairness, Accountability and Transparency. 81: 77–91. Retrieved 12 December 2020.
- ^ "Gender Shades". gendershades.org. Retrieved 12 December 2020.
- ^ Buell, Spencer (23 February 2018). "MIT Researcher: AI Has a Race Problem, and We Need to Fix It". Boston Magazine. Retrieved 12 December 2020.
- ^ Hao, Karen (12 June 2020). "The two-year fight to stop Amazon from selling face recognition to the police". MIT Technology Review. Retrieved 12 December 2020.
- ^ Meyer, David (9 June 2020). "IBM pulls out of facial recognition, fearing racial profiling and mass surveillance". Fortune. Retrieved 12 December 2020.
- ^ Burt, Chris (8 June 2020). "Biometrics experts call for creation of FDA-style government body to regulate facial recognition". Biometric Update. Retrieved 12 December 2020.
- ^ Rodrigo, Chris Mills (2 July 2020). "Dozens of advocacy groups push for Congress to ban facial recognition technology". TheHill. Retrieved 12 December 2020.
- ^ Johnson, Khari (1 April 2020). "Algorithmic Justice League protests bias in voice AI and media coverage". VentureBeat. Retrieved 11 December 2020.
- ^ Quach, Katyanna (22 May 2019). "We listened to more than 3 hours of US Congress testimony on facial recognition so you didn't have to go through it". www.theregister.com. Retrieved 12 December 2020.
- ^ "Artificial Intelligence: Societal and Ethical Implications | House Committee on Science, Space and Technology". science.house.gov. Retrieved 12 December 2020.