| Timnit Gebru | |
|---|---|
| Timnit Gebru in 2018 | |
| Born | Timnit W. Gebru; 1982 or 1983 (age 42–43)[2] |
| Alma mater | Stanford University |
| Known for | Algorithmic bias; Stochastic parrots[4] |
| Scientific career | |
| Fields | Fairness in machine learning[1] |
| Institutions | |
| Thesis | Visual computational sociology: computer vision methods and challenges (2017) |
| Doctoral advisor | Fei-Fei Li |
| Website | ai |
Timnit W. Gebru (Amharic and Tigrinya: ትምኒት ገብሩ; born 1982/1983) is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias and data mining.[5] She is a co-founder of Black in AI, an advocacy group that has pushed for greater Black representation in AI development and research.[5] She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).
In December 2020, public controversy erupted over the circumstances surrounding Gebru's departure from Google, where she was technical co-lead of the Ethical Artificial Intelligence Team. Gebru had coauthored a paper[4] on the risks of large language models (LLMs) acting as stochastic parrots, and submitted it for publication. According to Jeff Dean, the paper was submitted without waiting for Google's internal review, which subsequently found that it ignored too much relevant research. Google management asked Gebru either to withdraw the paper or to remove the names of all the authors employed by Google. Gebru requested the identity and feedback of every reviewer, and stated that if Google refused she would talk to her manager about "a last date". Google terminated her employment immediately, stating that it was accepting her resignation. Gebru maintained that she had not formally offered to resign, only threatened to.[6]
Gebru has been recognized widely for her expertise in the ethics of artificial intelligence. She was named one of the World's 50 Greatest Leaders by Fortune and one of Nature's ten people who shaped science in 2021, and in 2022, one of Time's most influential people.
Gebru was raised in Addis Ababa, Ethiopia.[7] Her father, an electrical engineer with a Doctor of Philosophy (PhD), died when she was five years old, and she was raised by her mother, an economist.[8][9] Both her parents are from Eritrea. When Gebru was 15, during the Eritrean–Ethiopian War, she fled Ethiopia after some of her family were deported to Eritrea and compelled to fight in the war. She was initially denied a U.S. visa and briefly lived in Ireland, but she eventually received political asylum in the U.S.,[9][10] an experience she said was "miserable". Gebru settled in Somerville, Massachusetts, to attend high school, where she says she immediately began to experience racial discrimination: despite her being a high achiever, some teachers refused to allow her to take certain Advanced Placement courses.[11][9]
After she completed high school, an encounter with the police set Gebru on a course toward a focus on ethics in technology. A friend of hers, a Black woman, was assaulted in a bar, and Gebru called the police to report it. She says that instead of filing the assault report, the police arrested her friend and remanded her to a cell. Gebru called it a pivotal moment and a "blatant example of systemic racism."[11]
In 2001, Gebru was accepted at Stanford University.[3][9] There she earned her Bachelor of Science and Master of Science degrees in electrical engineering[12] and her PhD in computer vision[13] in 2017.[14] Gebru was advised during her PhD program by Fei-Fei Li.[14][15]
During the 2008 United States presidential election, Gebru canvassed in support of Barack Obama.[9]
Gebru presented her doctoral research at the 2017 LDV Capital Vision Summit competition, where computer vision scientists present their work to members of industry and venture capitalists. Gebru won the competition, starting a series of collaborations with other entrepreneurs and investors.[16][17]
In 2016, during her PhD program, and again in 2018, Gebru returned to Ethiopia with Jelani Nelson's programming camp AddisCoder.[18][19]
While working on her PhD, Gebru authored a paper, never published, about her concerns over the future of AI. She wrote of the dangers of the lack of diversity in the field, drawing on her experiences with the police and on a ProPublica investigation into predictive policing, which revealed how human biases are projected into machine learning. In the paper, she excoriated the field's "boy's club culture", reflecting on her experiences at conference gatherings of drunken male attendees sexually harassing her, and criticized the hero worship of the field's celebrities.[9]

Gebru joined Apple as an intern while at Stanford, working in their hardware division making circuitry for audio components, and was offered a full-time position the following year. Of her work as an audio engineer, her manager told Wired she was "fearless" and well-liked by her colleagues. During her tenure at Apple, Gebru became more interested in building software, namely computer vision that could detect human figures.[9] She went on to develop signal processing algorithms for the first iPad.[20] At the time, she said she did not consider the potential use for surveillance, saying "I just found it technically interesting."[9]
Long after leaving the company, during the #AppleToo movement in the summer of 2021, which was led by Apple engineer Cher Scarlett, who consulted with Gebru,[21][22][23] Gebru revealed that she had experienced "so many egregious things" there and had "always wondered how they manage[d] to get out of the spotlight."[24] She said that accountability at Apple was long overdue and warned that the company could not continue to fly under the radar for much longer.[25] Gebru also criticized the way the media covers Apple and other tech giants, saying that the press helps shield such companies from public scrutiny.[23]
In 2013, Gebru joined Fei-Fei Li's lab at Stanford, where she combined deep learning with Google Street View to estimate the demographics of United States neighbourhoods, showing that socioeconomic attributes such as voting patterns, income, race, and education can be inferred from observations of cars.[12][26][27][28]
In 2015, Gebru attended the field's top conference, Neural Information Processing Systems (NIPS), in Montreal, Canada. Out of 3,700 attendees, she noted she was one of only a few Black researchers.[29] When she attended again the following year, she kept a tally and noted that there were only five Black men and that she was the only Black woman out of 8,500 delegates.[9] Together with her colleague Rediet Abebe, Gebru founded Black in AI, a community of Black researchers working in artificial intelligence that aims to increase the presence, visibility, and well-being of Black professionals and leaders within the field.[30][31]
In the summer of 2017, Gebru joined Microsoft as a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) lab.[9][32][33] That year, Gebru spoke at the Fairness and Transparency conference, where MIT Technology Review interviewed her about biases in AI systems and how diversity in AI teams can mitigate them. Asked by interviewer Jackie Snow, "How does the lack of diversity distort artificial intelligence and specifically computer vision?", Gebru pointed out that software developers' own biases find their way into the systems they build.[34] While at Microsoft, Gebru co-authored a research paper called Gender Shades,[11] which became the namesake of a broader Massachusetts Institute of Technology project led by co-author Joy Buolamwini. The pair investigated facial recognition software, finding that in one particular implementation Black women were 35% less likely to be recognized than White men.[35]
Gebru joined Google in 2018, where she co-led a team on the ethics of artificial intelligence with Margaret Mitchell. She studied the implications of artificial intelligence, looking to improve the ability of technology to do social good.[36]
In 2019, Gebru and other artificial intelligence researchers "signed a letter calling on Amazon to stop selling its facial-recognition technology to law enforcement agencies because it is biased against women and people of color", citing a study conducted by MIT researchers showing that Amazon's facial recognition system had more trouble identifying darker-skinned women than any other technology company's facial recognition software.[37] In a New York Times interview, Gebru further said that she believes facial recognition is too dangerous to be used for law enforcement and security purposes at present.[38]
In 2020, Gebru and five co-authors wrote a paper titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜".[4] The paper examined the risks of very large language models, including their environmental footprint, financial costs, the inscrutability of large models, the potential for LLMs to display prejudice against certain groups, the inability of LLMs to understand the language they process, and the use of LLMs to spread disinformation.[39]
In December 2020, her employment with Google ended after Google management asked her to either withdraw the paper before publication or remove the names of all the Google employees from it.[2] Of the six authors, only Emily M. Bender was not at the time employed at Google.[39] In response, Gebru sent an email offering to remove herself from the paper if Google provided an account of who had reviewed the work and how, and established a more transparent review process for future research. She continued that if her conditions were not met, she would work with Google on an employment end date after an appropriate amount of time.[2][11][9] Gebru also sent a second email to an email list for women who worked in Google Brain, accusing the company of "silencing marginalized voices" and dismissing Google's internal diversity programs as a waste of time.[9] Google did not meet her request and terminated her employment immediately, declaring that it accepted her resignation.[40] Gebru has maintained that she was fired.
In the aftermath, Jeff Dean, Google's head of AI research, sent an email to Google staff addressing Gebru's departure. In the email, he wrote that the paper did not meet Google's standards for publication because it ignored too much relevant recent research on ways to mitigate some of the problems of bias and energy usage it described.[39][2][6] He also defended Google's research paper process as aiming to "tackle ambitious problems, but to do so responsibly."[11] Roughly 2,700 Google employees and more than 4,300 academics and civil society supporters signed a letter condemning Gebru's alleged firing.[41][42] Nine members of Congress sent a letter to Google asking it to clarify the circumstances around Gebru's exit.[43] Mitchell took to Twitter to criticize Google's treatment of employees working to eliminate bias and toxicity in AI, including its alleged dismissal of Gebru. Mitchell was later terminated after allegedly creating automated scripts to crawl Google's internal servers for evidence of Gebru's mistreatment.[44][45]
Following the negative publicity over the circumstances of her exit, Sundar Pichai, CEO of Alphabet, Google's parent company, initiated a months-long investigation into the incident.[44][46] Upon conclusion of the review, Dean announced Google would be changing its "approach for handling how certain employees leave the company."[44] Additionally, Dean said there would be changes to how research papers with "sensitive" topics would be reviewed, and diversity, equity, and inclusion goals would be reported to Alphabet's board of directors quarterly.[47] Google also held a forum for Black employees to discuss experiences with racism at the company, followed by a group psychotherapy session with a licensed therapist. Some employees said the therapy referrals were dismissive of the harm they felt Gebru's alleged termination had caused.[48]
In November 2021, the Nathan Cummings Foundation, in partnership with Open MIC and with the endorsement of Color of Change,[49] filed a shareholder proposal calling for a "racial equity audit" to analyze Alphabet's "adverse impact" on "Black, Indigenous and People of Color (BIPOC) communities". The proposal also requested an investigation into whether Google retaliated against minority employees who raised concerns of discrimination,[50] citing Gebru's firing, her earlier calls for Google to hire more BIPOC employees, and her research into racial biases in Google's technology.[51][52] The proposal followed a less formal request from a group of Senate Democratic Caucus members led by Cory Booker earlier that year, which also cited Gebru's separation from the company and her work.[53]
In December 2021, Reuters reported that Google was under investigation by the California Department of Fair Employment and Housing (DFEH) for its treatment of Black women,[54] after numerous formal complaints of discrimination and harassment by current and former workers.[52][55] The probe came after Gebru and other BIPOC employees reported that, when they brought up their experiences of racism and sexism to Human Resources, they were advised to take medical leave and seek therapy through the company's Employee Assistance Program (EAP).[48] Gebru and others believe that her alleged dismissal was retaliatory and evidence that Google is institutionally racist.[56] Google said that it "continue[s] to focus on this important work and thoroughly investigate[s] any concerns, to make sure [Google] is representative and equitable."[54]
In June 2021, Gebru announced that she was raising money to "launch an independent research institute modeled on her work on Google's Ethical AI team and her experience in Black in AI."[9] On 2 December 2021, she launched the Distributed Artificial Intelligence Research Institute (DAIR), which is expected to document the effect of artificial intelligence on marginalized groups, with a focus on Africa and African immigrants in the United States.[57][58] One of the organization's initial projects plans to analyze satellite imagery of townships in South Africa with AI to better understand legacies of apartheid.[11]
Gebru and Émile P. Torres coined the acronym TESCREAL to criticize what they see as a group of overlapping futurist philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. Gebru considers these to be a right-leaning influence in Big Tech and compares their proponents to "the eugenicists of the 20th century" in their production of harmful projects they portray as "benefiting humanity".[59][60] Gebru has criticized research into artificial general intelligence (AGI) as being rooted in eugenics, and argues that focus should shift away from AGI because trying to build it is an inherently unsafe practice.[61]
Gebru, Buolamwini, and Inioluwa Deborah Raji won VentureBeat's 2019 AI Innovations Award in the category AI for Good for their research highlighting the significant problem of algorithmic bias in facial recognition.[62][63] Gebru was named one of the world's 50 greatest leaders by Fortune in 2021.[64] She was also included in a list, compiled by the scientific journal Nature, of ten scientists who played important roles in scientific developments in 2021.[65]
Gebru was named one of Time's most influential people of 2022.[66]
In 2023, Gebru was named by the Carnegie Corporation of New York as an honoree of the Great Immigrants Awards.[67] The award recognized her significant contributions to the field of ethical artificial intelligence.[68]
In November 2023, she was named to the BBC's 100 Women list as one of the world's inspiring and influential women.[69]