Timnit Gebru

From Wikipedia, the free encyclopedia
Timnit Gebru
Gebru in 2018
Born: 1982/1983 (age 38–39)[1]
Alma mater: Stanford University
Known for: Algorithmic bias; fairness in machine learning
Fields: Computer science
Institutions: Microsoft Research; Google; Apple
Doctoral advisor: Fei-Fei Li

Timnit Gebru (Amharic: ትምኒት ገብሩ; born 1982/1983)[1] is an American[3] computer scientist who works on algorithmic bias and data mining. She is an advocate for diversity in technology and co-founder of Black in AI, a community of Black researchers working in artificial intelligence (AI). She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR), which will work with AI researchers around the world, with a focus on Africa and African immigrants in the United States, to examine the outcomes of deploying the technology.[4] In 2021, Fortune named Gebru one of the world's 50 greatest leaders.[5]

In December 2020, Gebru became the center of a public controversy over her abrupt and contentious departure from Google, where she was technical co-lead of the Ethical Artificial Intelligence Team. Higher management had asked her either to withdraw an as-yet-unpublished paper detailing multiple risks and biases of large language models, or to remove the names of all Google coauthors, claiming that the paper ignored recent research showing methods of mitigating bias in those systems.[6][16] She requested insight into the decision and said that if it was not provided she would negotiate a last date of employment.[1][17][18] Google terminated her employment immediately, saying it was accepting her resignation.[4]

Education

In 2001, Gebru was accepted to Stanford University.[2][19] There she earned her Bachelor of Science and Master of Science degrees in electrical engineering,[20] and her doctorate in 2017.[21]

Gebru presented her doctoral research at the 2017 LDV Capital Vision Summit competition, where computer vision scientists present their work to members of industry and venture capitalists.[22] Gebru won the competition, starting a series of collaborations with other entrepreneurs and investors.[22]

During her PhD program in 2016, and again in 2018, Gebru returned to Ethiopia to teach with Jelani Nelson's programming camp AddisCoder.[23][24]

Career and research

Gebru discussing her findings that one can predict, with some reliability, the way an American will vote from the type of vehicle they drive.

Apple (2004–2013)

Gebru joined Apple as an intern in 2004, working in its hardware division making circuitry for audio components, and was offered a full-time position the following year. Of her work as an audio engineer, her manager told Wired she was "fearless" and well liked by her colleagues. During her tenure at Apple, Gebru became more interested in building software, particularly computer vision that could detect human figures.[19] She went on to develop signal-processing algorithms for the first iPad.[25] At the time, she said, she did not consider the potential use for surveillance: "I just found it technically interesting."[19]

Years after leaving the company, during the #AppleToo movement in the summer of 2021, started by Apple employees including Cher Scarlett, who consulted with Gebru,[26][27] Gebru revealed she had experienced "so many egregious things" there and had "always wondered how they manage[d] to get out of the spotlight." She said that accountability at Apple was long overdue and warned that the company could not continue to fly under the radar for much longer.[28][29] Gebru criticized the way the media cover Apple and other tech giants, saying that the press helps shield the companies from public scrutiny.[27]

2013–2017

In 2013, Gebru joined Fei-Fei Li's lab at Stanford, where she worked on data mining of publicly available images.[21] She was interested in the amount of money that governmental and non-governmental organizations spent trying to collect information about communities.[30] To investigate alternatives, Gebru combined deep learning with Google Street View to estimate the demographics of United States neighborhoods, showing that socioeconomic attributes such as voting patterns, income, race, and education can be inferred from observations of cars.[20] For example, if pickup trucks outnumbered sedans, a community was more likely to vote for the Republican party.[31] The team analyzed over 15 million images from the 200 most populous US cities.[32] The work was extensively covered in the media, being picked up by BBC News, Newsweek, The Economist, and The New York Times.[33][34][35]

In 2015, Gebru attended the field's top conference, Neural Information Processing Systems (NIPS), in Montreal, Canada. Out of 3,700 attendees, she noted, she was one of only a few Black researchers.[36] When she attended again the following year, she kept a tally and noted that there were only five Black men, and that she was the only Black woman, among 8,500 delegates.[19] Together with her colleague Dr. Rediet Abebe, Gebru founded Black in AI, a community of Black researchers working in artificial intelligence.[37]

In the summer of 2017, Gebru joined Microsoft as a postdoctoral researcher in the Fairness, Accountability, Transparency and Ethics in AI (FATE) lab.[19][32][38] That year she spoke at the Fairness and Transparency conference, where MIT Technology Review interviewed her about biases in AI systems and how diversity in AI teams could address them. Asked by interviewer Jackie Snow, "How does the lack of diversity distort artificial intelligence and specifically computer vision?", Gebru pointed to the biases held by the software developers who build those systems.[39] While at Microsoft, Gebru co-authored a research paper called Gender Shades,[4] which became the namesake of a broader Massachusetts Institute of Technology project led by co-author Dr. Joy Buolamwini. The pair investigated facial recognition software, finding that Black women were 35% less likely to be recognized than white men.[40]

Google (2018–2020)

Gebru joined Google in 2018, where she co-led a team on the ethics of artificial intelligence with Dr. Margaret Mitchell. She studied the implications of artificial intelligence, looking to improve the ability of technology to do social good.[41]

In 2019, Gebru and other artificial intelligence researchers "signed a letter calling on Amazon to stop selling its facial-recognition technology to law enforcement agencies because it is biased against women and people of color", citing a study conducted by MIT researchers showing that Amazon's facial recognition system had more trouble identifying darker-skinned women than any other technology company's facial recognition software.[42] In a New York Times interview, Gebru said she believes facial recognition is currently too dangerous to be used for law enforcement and security purposes.[43]

Exit from Google

By the end of her time at Google in 2020, Gebru had determined that publishing research papers was more effective at bringing about the ethical change she sought than pressing her superiors within the company. She and five coauthors wrote a research paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", covering the risks of very large language models: their environmental and financial costs, their inscrutability, which can conceal dangerous biases, their inability to understand the concepts underlying what they learn, and their potential for use in deceiving people.[44]

In December 2020, her employment with Google ended after higher Google managers asked her either to withdraw the as-yet-unpublished paper or to remove the names of all the Google employees from it[1] (that is, five of the six coauthors, leaving only Emily M. Bender[44]). In an email sent to an internal collaboration list, Gebru described how she was summoned to a meeting at short notice and asked to withdraw the paper; she requested the names of everyone involved in that decision and their reasons, saying that otherwise she would work with Google on an employment end date after an appropriate amount of time.[1][4] Google did not meet her request and terminated her employment immediately, declaring that it had accepted her resignation. Dr. Jeff Dean, Google's head of AI research, replied with an email saying the decision was made because the paper ignored too much relevant recent research on ways to mitigate some of the problems it described concerning the environmental impact and biases of these models.[6][4]

Dean went on to publish his internal email about Gebru's departure and his thoughts on the matter, defending Google's research paper process as a way to "tackle ambitious problems, but to do so responsibly." Gebru and others blamed this publication, and Dean's subsequent silence on the matter, for catalyzing and enabling the harassment that followed. Gebru was serially harassed by sock puppet accounts and internet trolls on Twitter making racist and obscene comments. Gebru and her supporters alleged that some of the harassment came from machine learning researcher Pedro Domingos and businessman Michael Lissack, who had said that her work was "advocacy disguised as science."[17][45] Of Domingos, Gebru said he "hide[s] behind civility and enable[s] the trolls." Lissack also allegedly harassed Mitchell and Bender, along with other colleagues on Gebru's former team. Twitter permanently suspended Lissack's account on 1 February 2021.[46]

Gebru has repeatedly maintained that she was fired, and close to 2,700 Google employees and more than 4,300 academics and civil society supporters signed a letter condemning her alleged firing.[47][48] Nine members of Congress sent a letter to Google asking it to clarify the circumstances of her exit.[49] Gebru's former team demanded that Vice President Megan Kacholia, who had allegedly fired Gebru without first notifying Gebru's direct manager, Dr. Samy Bengio,[51] be removed from the team's management chain,[50] and that Kacholia and Dean apologize for how Gebru was treated.[52] Mitchell took to Twitter to criticize Google's treatment of employees working to eliminate bias and toxicity in AI, including its alleged dismissal of Gebru; Mitchell was later terminated.[53][54]

Following the negative publicity over the circumstances of her exit, Sundar Pichai, CEO of Google's parent company Alphabet, publicly apologized on Twitter without clarifying whether Gebru was terminated or resigned,[55] and initiated a months-long investigation into the incident.[53][56] Upon conclusion of the review, Dean announced that Google would change its "approach for handling how certain employees leave the company," but still did not clarify whether Gebru's departure was voluntary.[53] Dean also said that research papers on "sensitive" topics would be reviewed differently, and that diversity, equity, and inclusion goals would be reported to Alphabet's board of directors quarterly. Gebru wrote on Twitter that she "expected nothing more" from Google, pointing out that the changes addressed the very requests she was allegedly terminated for making, yet no one had been held accountable.[57] In the aftermath, two Google employees resigned from the company.[58]

Following her departure, Google held a forum to discuss experiences of racism at the company, and employees reported to NBC News that half of it was spent discrediting Gebru, which they took as the company making an example of her for speaking out. The town hall was followed by a group psychotherapy session for Google's Black employees with a licensed therapist, which the employees said was dismissive of the harm they felt Gebru's alleged termination had caused.[59]

Post-Google

In November 2021, the Nathan Cummings Foundation, partnered with Open MIC and endorsed by Color of Change,[60] filed a shareholder proposal calling for a "racial equity audit" to analyze Google's "adverse impact" on "Black, Indigenous and People of Color (BIPOC) communities." The proposal also requested an investigation into whether Google retaliated against minority employees who raised concerns of discrimination,[61] citing Gebru's firing, her earlier calls for Google to hire more BIPOC employees, and her research into racial biases in Google's technology.[62][63] The proposal followed a less formal request earlier that year from a group of Senate Democratic Caucus members led by Cory Booker, which also cited Gebru's separation from the company and her work.[64]

In December 2021, Reuters reported that Google was under investigation by the California Department of Fair Employment and Housing (DFEH) for its treatment of Black women,[65] after numerous formal complaints of discrimination and harassment by current and former workers.[63][66] The probe came after Gebru and other BIPOC employees reported that when they brought their experiences of racism and sexism to Human Resources, they were advised to take medical leave and seek therapy through the company's Employee Assistance Program (EAP).[59] Gebru and others believe her alleged dismissal was retaliatory and evidence that Google is institutionally racist.[67] Google said that it "continue[s] to focus on this important work and thoroughly investigate[s] any concerns, to make sure [Google] is representative and equitable."[65]

2021–present

In June 2021, Gebru announced that she was raising money to "launch an independent research institute modeled on her work on Google’s Ethical AI team and her experience in Black in AI".[19]

On 2 December 2021 she launched DAIR, which is expected to document the effect of artificial intelligence on marginalized groups, with a focus on Africa and African immigrants in the United States.[68][69] One of the organization's initial projects plans to analyze satellite imagery of townships in South Africa with AI to better understand legacies of apartheid.[4]

Awards and recognition

Gebru, Buolamwini, and Inioluwa Deborah Raji won VentureBeat's 2019 AI Innovations Award in the category AI for Good for their research highlighting the problem of algorithmic bias in facial recognition.[70][71] Fortune named Gebru one of the world's 50 greatest leaders in 2021.[5] The scientific journal Nature included Gebru in its list of ten people who helped shape science in 2021.[72]

Selected publications

  • Gebru, Timnit; Krause, Jonathan; Wang, Yilun; Chen, Duyun; Deng, Jia; Aiden, Erez Lieberman; Fei-Fei, Li (12 December 2017). "Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States". Proceedings of the National Academy of Sciences. 114 (50): 13108–13113. doi:10.1073/pnas.1700035114. ISSN 0027-8424. PMC 5740675. PMID 29183967.
  • Buolamwini, Joy; Gebru, Timnit (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification". Proceedings of Machine Learning Research. 81: 1–15. ISSN 1938-7288.
  • Gebru, Timnit (9 July 2020). "Race and Gender". In Dubber, Markus D.; Pasquale, Frank; Das, Sunit (eds.). The Oxford Handbook of Ethics of AI. Oxford University Press. pp. 251–269. doi:10.1093/oxfordhb/9780190067397.013.16. ISBN 978-0-19-006739-7.
  • Gebru, Timnit (1 August 2017). Visual computational sociology: computer vision methods and challenges (PDF) (Thesis).

Personal life

Gebru was born and raised in Addis Ababa, Ethiopia.[2] Her father died when she was five years old and she was raised by her mother.[73] Both her parents are from Eritrea. She eventually received political asylum in the United States,[74] an experience she described as "miserable." Gebru settled in Massachusetts to attend high school, where she says she immediately began to experience racial discrimination: despite being a high achiever, some teachers refused to allow her to take certain Advanced Placement courses.[4]

After she completed high school, an encounter with the police set Gebru on a course toward a focus on ethics in technology. A friend of hers, a Black woman, was assaulted in a bar, and Gebru called the police to report it. She says that instead of filing the assault report, the police arrested her friend and held her in a cell. Gebru called it a pivotal moment and a "blatant example of systemic racism."[4]

While working on her PhD, Gebru authored a never-published paper about her concerns over the future of AI. She wrote of the dangers of the field's lack of diversity, drawing on her experiences with the police, the field's demographics, and a ProPublica investigation into predictive policing that revealed how human biases are projected into machine learning.[4] In the paper, she also excoriated the "boy's club culture," reflecting on her experiences of drunken male attendees sexually harassing her at conference gatherings, and criticized the hero worship of the field's celebrities.[19]

During the 2008 United States presidential election, Gebru canvassed in support of Barack Obama.[19]

References

  1. ^ a b c d e f Metz, Cade; Wakabayashi, Daisuke (3 December 2020). "Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I." The New York Times. Archived from the original on 11 December 2020. Retrieved 12 December 2020. After she and the other researchers submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper from the conference or remove her name and the names of the other Google employees. She refused to do so without further discussion and, in the email sent Tuesday evening, said she would resign after an appropriate amount of time if the company could not explain why it wanted her to retract the paper and answer other concerns. The company responded to her email, she said, by saying it could not meet her demands and that her resignation was accepted immediately. Her access to company email and other services was immediately revoked. In his note to employees, Mr. Dean said Google respected “her decision to resign.” Mr. Dean also said that the paper did not acknowledge recent research showing ways of mitigating bias in such systems.
  2. ^ a b c Lahde, Lisa. "AI Innovators: How One Woman Followed Her Passion and Brought Diversity to AI". Forbes. Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  3. ^ Bass, Dina (2 December 2021). "Timnit Gebru's New Research Group Wants to Free AI From Its Corporate Ills". Bloomberg. Retrieved 30 December 2021. Gebru, who is Eritrean and fled Ethiopia in her teens during a war between the two countries, wants to research the impact of social media companies on regions where she feels not enough effort is being placed on preventing and removing dangerous content.
  4. ^ a b c d e f g h i Perrigo, Billy (18 January 2022). "Why Timnit Gebru Isn't Waiting for Big Tech to Fix AI's Problems". Time. Retrieved 19 January 2022.
  5. ^ a b "Timnit Gebru". Fortune. Retrieved 16 May 2021.
  6. ^ a b Fried, Ina. "Google CEO pledges to investigate exit of top AI ethicist". Axios. Archived from the original on 10 December 2020. Retrieved 10 December 2020.
  7. ^ Hao, Karen (4 December 2020). "We read the paper that forced Timnit Gebru out of Google. Here's what it says". MIT Technology Review. Archived from the original on 25 December 2020. Retrieved 5 December 2020.
  8. ^ Dastin, Paresh Dave (4 December 2020). "Top AI ethics researcher says Google fired her; company denies it". Reuters. Archived from the original on 5 December 2020. Retrieved 4 December 2020.
  9. ^ Ghaffary, Shirin (4 December 2020). "The controversy behind a star Google AI researcher's departure". Vox. Archived from the original on 6 December 2020. Retrieved 6 December 2020.
  10. ^ "Timnit Gebru: Google staff rally behind fired AI researcher". BBC News. 4 December 2020. Archived from the original on 4 December 2020. Retrieved 4 December 2020.
  11. ^ "Google fires prominent AI ethicist Timnit Gebru". The Verge. 3 December 2020. Archived from the original on 3 December 2020. Retrieved 4 December 2020.
  12. ^ "Google's co-lead of Ethical AI team says she was fired for sending an email". TechCrunch. Archived from the original on 4 December 2020. Retrieved 4 December 2020.
  13. ^ "Renowned AI researcher says Google abruptly fired her, spurring industrywide criticism of the company". CNBC. 3 December 2020. Archived from the original on 4 December 2020. Retrieved 4 December 2020.
  14. ^ Langley, Hugh (3 December 2020). "One of Google's leading AI researchers says she's been fired in retaliation for an email to other employees". Business Insider. Archived from the original on 5 December 2020. Retrieved 3 December 2020.
  15. ^ "Timnit Gebru: Google staff rally behind fired AI researcher". MIT Technology Review. Archived from the original on 25 December 2020. Retrieved 5 December 2020.
  16. ^ On the name of the paper;[7] On it still being planned for publication;[8] Higher managers;[9] Dispute escalating[10][11][12][13][14][15]
  17. ^ a b Goforth, Claire (3 February 2021). "Men in tech are harassing Black female computer scientist after her Google ouster". Daily Dot. Gebru’s departure from Google, where she was co-lead of the ethical artificial intelligence (AI) team, was precipitated by a paper she co-authored. ... At the last minute, Google refused to allow the paper to be published with its name on it. Gebru drew a line in the sand, saying she’d only comply if certain conditions were met; otherwise, she’d resign, Technology Review reports. The company swiftly responded by saying it accepted her resignation.
  18. ^ Allyn, Bobby (17 December 2020). "Ousted Black Google Researcher: 'They Wanted To Have My Presence, But Not Me Exactly'". NPR. Archived from the original on 18 December 2020. Retrieved 19 December 2020. After Google demanded that Gebru retract the paper for not meeting the company's bar for publication, Gebru asked that the process be explained to her, including a list of everyone who was part of the decision. If Google refused, Gebru said she would talk to her manager about "a last date." Google took that to mean Gebru offered to resign immediately, and Google leadership say they accepted, but Gebru herself said no such offer was ever extended, only threatened.
  19. ^ a b c d e f g h Simonite, Tom (June 8, 2021). "What Really Happened When Google Ousted Timnit Gebru". Wired (in American English). ISSN 1059-1028.
  20. ^ a b AI, People In (16 September 2017). "Timnit Gebru honored as an Alicorn of Artificial Intelligence by People in AI". Selfpreneur. Archived from the original on 5 December 2020. Retrieved 9 January 2019.
  21. ^ a b "Understanding the Limits of AI: When Algorithms Fail". MIT Tech Review. Archived from the original on 5 December 2020. Retrieved 9 January 2019.
  22. ^ a b "Timnit Gebru Wins 2017 ECVC: Leveraging Computer Vision to Predict Race, Education and Income via Google Streetview Images". LDV Capital (in American English). Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  23. ^ Magazine, Tadias. "Timnit Gebru: Among Incredible Women Advancing A.I. Research at Tadias Magazine". Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  24. ^ "History | AddisCoder". www.addiscoder.com. Archived from the original on 21 March 2018. Retrieved 10 January 2019.
  25. ^ "Timnit Gebru". Databricks (in American English). Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  26. ^ Coulter, Martin; Langley, Hugh (18 October 2021). "Meet 18 Big Tech workers turned activists forcing scrutiny of everything from NDAs to military contracts". Business Insider (in American English). Retrieved 20 January 2022.
  27. ^ a b Albergotti, Reed (14 October 2021). "She pulled herself from addiction by learning to code. Now she's leading a worker uprising at Apple". The Washington Post. Retrieved 20 January 2022.
  28. ^ Schiffer, Zoe (23 August 2021). "Apple employees are organizing, now under the banner #AppleToo". The Verge. Retrieved 20 January 2022.
  29. ^ Anguino, Dani (3 September 2021). "#AppleToo: employees organize and allege harassment and discrimination". The Guardian. Retrieved 20 January 2022.
  30. ^ LDV Capital (1 August 2017), Timnit Gebru - 2017 Entrepreneurial Computer Vision Challenge Finalist Presentations, archived from the original on 5 December 2020, retrieved 9 January 2019
  31. ^ Fei-Fei, Li; Aiden, Erez Lieberman; Deng, Jia; Chen, Duyun; Wang, Yilun; Krause, Jonathan; Gebru, Timnit (12 December 2017). "Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States". Proceedings of the National Academy of Sciences. 114 (50): 13108–13113. doi:10.1073/pnas.1700035114. ISSN 1091-6490. PMC 5740675. PMID 29183967.
  32. ^ a b "Timnit Gebru". Design Better. Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  33. ^ Lufkin, Bryan (6 January 2018). "What Google Street View tells us about income". Worklife. BBC. Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  34. ^ "A machine-learning census of America's cities". The Economist. 2 March 2017. ISSN 0013-0613. Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  35. ^ Lohr, Steve (31 December 2017). "How Do You Vote? 50 Million Google Images Give a Clue". The New York Times (in American English). ISSN 0362-4331. Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  36. ^ Hao, Karen (14 June 2021). "Inside the fight to reclaim AI from Big Tech's control". MIT Technology Review. Retrieved 22 January 2022.
  37. ^ Bass, Dina (20 September 2021). "Google's Former AI Ethics Chief Has a Plan to Rethink Big Tech". Bloomberg. Retrieved 19 January 2022.
  38. ^ "Timnit Gebru". World Science Festival (in American English). Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  39. ^ Mitchell, Margaret; Wu, Simone; Zaldivar, Andrew; Barnes, Parker; Vasserman, Lucy; Hutchinson, Ben; Spitzer, Elena; Raji, Inioluwa Deborah; Gebru, Timnit (2019). "Model Cards for Model Reporting". Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* '19. New York, New York, USA: ACM Press: 220–229. arXiv:1810.03993. doi:10.1145/3287560.3287596. ISBN 978-1-4503-6125-5. S2CID 52946140.
  40. ^ Lohr, Steve (9 February 2018). "Facial Recognition Is Accurate, if You're a White Guy". The New York Times (in American English). ISSN 0362-4331. Archived from the original on 9 January 2019. Retrieved 9 January 2019.
  41. ^ Office of Web Communications, Cornell University. "Digital Life Seminar | Timnit Gebru". Cornell. Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  42. ^ Mitchell, Andrea (April 2019). "A.I. Experts Question Amazons Facial-Recognition Technology". ICT Monitor Worldwide; Amman. Archived from the original on 26 November 2019.
  43. ^ Ovide, Shira (9 June 2020). "A Case for Banning Facial Recognition". The New York Times (in American English). ISSN 0362-4331. Archived from the original on 17 July 2020. Retrieved 12 July 2020.
  44. ^ a b Hao, Karen (4 December 2020). "We read the paper that forced Timnit Gebru out of Google. Here's what it says". MIT Technology Review. Retrieved 19 January 2022.
  45. ^ "Retired UW computer science professor embroiled in Twitter spat over AI ethics and 'cancel culture'". GeekWire (in American English). 16 December 2020. Retrieved 13 November 2021.
  46. ^ Schiffer, Zoe (5 March 2021). "Timnit Gebru was fired from Google — then the harassers arrived". The Verge. Retrieved 13 November 2021.
  47. ^ Wong, Julia Carrie (4 December 2020). "More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru". The Guardian. Archived from the original on 15 December 2020. Retrieved 15 December 2020.
  48. ^ Grant, Nico; Bass, Dina (20 January 2021). "Google Sidelines Second Artificial Intelligence Researcher". Bloomberg.
  49. ^ Hao, Karen (17 December 2020). "Congress wants answers from Google about Timnit Gebru's firing". MIT Technology Review. Archived from the original on 25 December 2020. Retrieved 17 December 2020.
  50. ^ Metz, Rachel (17 December 2020). "Tensions in Google's ethical AI group increase as it sends demands to CEO". CNN. Archived from the original on 17 December 2020. Retrieved 17 December 2020.
  51. ^ Dastin, Jeffrey; Dave, Paresh (17 December 2020). "Google staff demand exec step aside after ethicist's firing - document". Reuters. Archived from the original on 17 December 2020. Retrieved 17 December 2020.
  52. ^ Eidelson, Josh; Bergen, Mark (17 December 2020). "Google AI Researchers Lay Out Demands, Escalating Internal Fight". Bloomberg. Retrieved 17 December 2020.
  53. ^ a b c Metz, Rachel (19 February 2021). "Google is trying to end the controversy over its Ethical AI team. It's not going well". CNN. Retrieved 19 January 2022.
  54. ^ Metz, Cade (20 February 2021). "A second Google A.I. researcher says the company fired her". The New York Times (in American English). ISSN 0362-4331. Retrieved 19 January 2022.
  55. ^ Wakabayashi, Daisuke (9 December 2020). "Google Chief Apologizes for A.I. Researcher's Dismissal". The New York Times (in American English). ISSN 0362-4331. Archived from the original on 10 December 2020. Retrieved 10 December 2020.
  56. ^ Metz, Rachel (9 December 2020). "Google to examine the departure of a leading AI ethics researcher". CNN. Retrieved 19 January 2022.
  57. ^ Kramer, Anna (19 February 2021). "Google vows to do better on DEI. Timnit Gebru is not impressed". Protocol. Retrieved 20 January 2022.
  58. ^ Dastin, Jeffrey; Paresh, Dave (4 February 2021). "Two Google engineers resign over firing of AI ethics researcher Timnit Gebru". Reuters.
  59. ^ a b Glaser, April; Adams, Char (7 March 2021). "Complaints to Google about racism and sexism met with therapy referrals". NBC News. Retrieved 20 January 2022.
  60. ^ "Alphabet Shareholders File Proposal for Racial Equity Audit, Activists and Employees Endorse". Open MIC (in American English). Retrieved 20 January 2022.
  61. ^ Langley, Hugh. "A Google shareholder is pressing the company to investigate whether its products harm racial minorities". Business Insider (in American English). Retrieved 20 January 2022.
  62. ^ Herrera, Sonya (12 November 2021). "A foundation that promotes social justice is calling on Alphabet to commission a racial equity audit". Biz Journals. Retrieved 20 January 2022.
  63. ^ a b Herrera, Sonya (20 December 2021). "Google is under investigation over its treatment of Black female workers". Biz Journals. Retrieved 20 January 2022.
  64. ^ Lima, Cristiano (2 June 2021). "Senate Democrats call on Google to conduct racial equity audit". POLITICO. Retrieved 20 January 2022.
  65. ^ a b Dave, Paresh (17 December 2021). "EXCLUSIVE-California probes Google's treatment of Black female workers". Reuters. Retrieved 20 January 2022.
  66. ^ Clark, Mitchell (17 December 2021). "Google reportedly under investigation for how it treats Black female workers". The Verge. Retrieved 20 January 2022.
  67. ^ Clayton, James (14 December 2020). "Timnit Gebru: Google and big tech are 'institutionally racist'". BBC News (in British English). Retrieved 20 January 2022.
  68. ^ Tiku, Nitasha (2 December 2021). "Google fired its star AI researcher one year ago. Now she's launching her own institute". The Washington Post.
  69. ^ Coldewey, Devin (2 December 2021). "After being pushed out of Google, Timnit Gebru forms her own AI research institute: DAIR". TechCrunch (in American English).
  70. ^ "AI innovation winners announced in San Francisco". Innovation Matrix (in American English). 12 July 2019. Archived from the original on 12 July 2020. Retrieved 12 July 2020.
  71. ^ Burt, Chris (18 July 2019). "Buolamwini, Gebru and Raji win AI Innovation Award for research into biometric bias". Biometric Update (in American English). Archived from the original on 12 July 2020. Retrieved 12 July 2020.
  72. ^ "Nature's 10: Ten people who helped shape science in 2021". Nature. Retrieved 19 December 2021.
  73. ^ Chisling, Ava (24 July 2017). "Excuse me, sir, but where are all the women?". ROSS Intelligence. Archived from the original on 10 January 2019. Retrieved 9 January 2019.
  74. ^ "Timnit Gebru". Archived from the original on 7 December 2020. Retrieved 14 December 2020.
