CNN Business
How one employee’s exit shook Google and the AI industry
In September, Timnit Gebru, then co-leader of the ethical AI team at Google, sent a private message on Twitter to Emily Bender, a computational linguistics professor at the University of Washington.
“Hi Emily, I’m wondering if you’ve written something regarding ethical considerations of large language models or something you could recommend from others?” she asked, referring to a buzzy kind of artificial intelligence software trained on text from an enormous number of webpages.
The question may sound unassuming, but it touched on something central to the future of Google’s foundational product: search. This kind of AI has become increasingly capable and popular in the last couple of years, driven largely by language models from Google and research lab OpenAI. Such AI can generate text, mimicking everything from news articles and recipes to poetry, and it has quickly become key to Google Search, which the company said responds to trillions of queries each year. In late 2019, the company started relying on such AI to help answer one in 10 English-language queries from US users; nearly a year later, the company said the technology was handling nearly all English queries and was also being used to answer queries in dozens of other languages.
“Sorry, I haven’t!” Bender quickly replied to Gebru, according to messages viewed by CNN Business. But Bender, who at the time mostly knew Gebru from her presence on Twitter, was intrigued by the question. Within minutes she fired back several ideas about the ethical implications of such state-of-the-art AI models, including the “Carbon cost of creating the damn things” and “AI hype/people claiming it’s understanding when it isn’t,” and cited some relevant academic papers.
Gebru, a prominent Black woman in AI — a field that’s largely White and male — is known for her research into bias and inequality in AI. It’s a relatively new area of study that explores how the technology, which is made by humans, soaks up our biases. The research scientist is also cofounder of Black in AI, a group focused on getting more Black people into the field. She responded to Bender that she was trying to get Google to consider the ethical implications of large language models.
Bender suggested co-authoring an academic paper looking at these AI models and related ethical pitfalls. Within two days, Bender sent Gebru an outline for a paper. A month later, the women had written that paper (helped by other coauthors, including Gebru’s co-team leader at Google, Margaret Mitchell) and submitted it to the ACM Conference on Fairness, Accountability, and Transparency, or FAccT. The paper’s title was “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” and it included a tiny parrot emoji after the question mark. (The phrase “stochastic parrots” refers to the idea that these enormous AI models are pulling together words without truly understanding what they mean, similar to how a parrot learns to repeat things it hears.)
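To see why the metaphor is apt, consider a minimal sketch, written in Python purely for illustration and not drawn from the paper or from Google’s systems: even a toy model that merely records which word follows which in a tiny, made-up corpus can string together fluent-sounding text while understanding none of it. Real language models are vastly larger neural networks, but the critique targets the same gap between fluency and comprehension.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot" (a hypothetical bigram model, far simpler
# than the systems the paper discusses). It learns nothing about
# meaning: it only counts which word followed which in its training text.
corpus = (
    "the parrot repeats what it hears and the parrot sounds fluent "
    "but the parrot does not understand what it says"
).split()

# Record, for every word, the words observed to follow it.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

# "Generate" text by repeatedly sampling a statistically plausible
# next word -- the parroting the paper's title alludes to.
random.seed(0)
word = "the"
output = [word]
for _ in range(12):
    if word not in followers:  # dead end: no observed continuation
        break
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))  # fluent-looking, meaning-free word salad
```

The output reads like English, but nothing in the program represents what any of the words mean.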
The paper considers the risks of building ever-larger AI language models trained on huge swaths of the internet, such as the environmental costs and the perpetuation of biases, as well as what can be done to diminish those risks. It turned out to be a much bigger deal than Gebru or Bender could have anticipated.
Timnit Gebru said she was fired by Google after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems.
Before the authors were even notified in December about whether the paper had been accepted by the conference, Gebru abruptly left Google. On Wednesday, December 2, she tweeted that she had been “immediately fired” for an email she sent to an internal mailing list. In the email, she expressed dismay over the ongoing lack of diversity at the company and frustration over an internal process related to the review of that not-yet-public research paper. (Google said it had accepted Gebru’s resignation over a list of demands she had sent via email that needed to be met for her to continue working at the company.)
Gebru’s exit from Google’s ethical AI team kick-started a months-long crisis for the tech giant’s AI division, including employee departures, a leadership shuffle, and widening distrust of the company’s historically well-regarded scholarship in the larger AI community. The conflict quickly escalated to the top of Google’s leadership, forcing CEO Sundar Pichai to announce that the company would investigate what happened and to apologize for how the circumstances of Gebru’s departure caused some employees to question their place at the company. The company concluded that review in February.
But her ousting, and the fallout from it, reignited concerns about an issue with implications beyond Google: how tech companies attempt to police themselves. With very few laws regulating AI in the United States, companies and academic institutions often make their own rules about what is and isn’t okay when developing increasingly powerful software. Ethical AI teams, such as the one Gebru co-led at Google, can help provide that accountability. But the crisis at Google shows the tensions that can arise when academic research is conducted within a company whose future depends on the very technology under examination.
“Academics should be able to critique these companies without repercussion,” Gebru told CNN Business.
Google declined to make anyone available for an interview for this piece. In a statement, Google said it has hundreds of people working on responsible AI, and has produced more than 200 publications related to building responsible AI in the past year. “This research is incredibly important and we’re continuing to expand our work in this area in keeping with our AI Principles,” a company spokesperson said.
“A constant battle from day one”
Gebru joined Google in September 2018, at Mitchell’s urging, as the co-leader of the Ethical AI team. According to those who have worked on it, the team was a small, diverse group of about a dozen employees, including research scientists, social scientists, and software engineers, first brought together by Mitchell about three years ago. It researches the ethical repercussions of AI and advises the company on AI policies and products.
Gebru, who earned her doctorate in computer vision at Stanford and held a postdoctoral position at Microsoft Research, said she was initially unsure about joining the company.