This year has held many things, among them bold claims of artificial intelligence breakthroughs. Industry commentators speculated that the language-generation model GPT-3 may have achieved “artificial general intelligence,” while others lauded Alphabet subsidiary DeepMind’s protein-folding algorithm—AlphaFold—and its capacity to “transform biology.” Though the basis for such claims is thinner than the effusive headlines suggest, they have done little to dampen enthusiasm across the industry, whose profits and prestige depend on AI’s proliferation.
It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this work and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, even though she had not resigned. (Google declined to comment for this story.)
Google’s appalling treatment of Gebru exposes a dual crisis in AI research. The field is dominated by an elite, primarily white male workforce, and it is controlled and funded primarily by large industry players—Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru’s firing, the civility politics that held together the young effort to construct the necessary guardrails around AI have been torn apart, bringing questions about the racial homogeneity of the AI workforce and the inefficacy of corporate diversity programs to the center of the discourse. But this situation has also made clear that—however sincere a company like Google’s promises may seem—corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.
This should concern us all. With the proliferation of AI into domains such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, at the same time that they are embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those designing and using them, while obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound, and the incentives are decidedly perverse.
The current crisis exposes the structural barriers limiting our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor—those who’ve borne the brunt of structural discrimination. Here we have a clear racialized divide between those benefiting—the corporations and the primarily white male researchers and developers—and those most likely to be harmed.
Take facial-recognition technologies, for instance, which have been shown to misidentify darker-skinned people far more often than lighter-skinned people. This alone is alarming. But these racialized “errors” aren’t the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while cities that have had success in banning and pushing back against facial recognition’s use are predominantly white.
Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and contest the overhyped claims made by industry is significantly hampered. Google’s treatment of Gebru makes increasingly clear where the company’s priorities seem to lie when critical work pushes back on its business incentives. This makes it almost impossible to ensure that AI systems are accountable to the people most vulnerable to their damage.
Checks on the industry are further compromised by the close ties between tech companies and ostensibly independent academic institutions. Researchers from corporations and academia publish papers together and rub elbows at the same conferences, with some researchers even holding concurrent positions at tech companies and universities. This blurs the boundary between academic and corporate research and obscures the incentives underwriting such work. It also means that the two groups look awfully similar—AI research in academia suffers from the same pernicious racial and gender homogeneity issues as its corporate counterparts. Moreover, the top computer science departments accept copious amounts of Big Tech research funding. We have only to look to Big Tobacco and Big Oil for troubling templates that expose just how much influence over the public understanding of complex scientific issues large companies can exert when knowledge creation is left in their hands.