Smoke screen

One of the questions I always ask of stories is how they work. Who do they serve? Who benefits? Who, if anyone, is burdened or harmed by them? Who is uplifted? What modes or methods or structures do they employ? Stories—and metaphors, which are often just stories in miniature—are never neutral actors. They always seek some change, whether through resistance or encouragement or both.

We are surrounded by illustrative examples. The phrase “office politics” frames the critical work of negotiating information, power, and agency within an organization as mere gossip, thereby serving to uphold existing hierarchical structures and to prevent (or arrest) structural change. Similarly, the “cloud” obscures the undersea cables and data centers and massive carbon costs of digital experiences behind the kind of fluffy, ephemeral, happy image that Bob Ross delighted in painting. Where “office politics” cloaks structural inequities in a sheen of disgrace, the “cloud” hides the very real and visceral harms of digital technologies behind a friendly facade.

But there’s a different story I want to talk about today, and it’s a timely one: this story says that a certain kind of technology is different from all other technologies by virtue of its wit. Where other machines merely follow the instructions given them, this new kind of machine learns and discovers and creates novel and surprising results. This tech is so smart that there’s in fact a risk that it becomes too smart and gains sentience—a possibility so dangerous it requires that we rapidly expand the capability of this technology so that we have a chance to stay a few steps ahead of it, so that we have a chance to make certain it serves us instead of itself.

I’m talking about machine learning. Which is itself one kind of story—one in which machines do something like “learn,” but which really means to memorize or put into storage, and includes nothing so pedestrian as understanding or interpreting. But the more common parlance—“artificial intelligence”—expands on that story to suggest that not only are the machines learning, but they have acquired the ability to think, or to intellectualize, implying that they have desires and personalities and behaviors. One way this story works is that by ascribing “thinking” to the machines, it triggers associations many people have with “higher” beings—whether species that are smarter than others, or people that are. (I’ll come back to this hierarchical notion of intelligence in a moment.)

Stories about machines that learn or achieve something like intelligence serve to dress up what the machines can do, to make something as basic as a very expensive autocomplete seem like a toddler preordained to become a god or dictator. So one way this story works is to inflate the value of the technology, something investors and technocrats have long been skilled at and are obviously incentivized towards. But there are other ways this story works too: fears about so-called AIs eventually exceeding their creators’ abilities and taking over the world function to obfuscate the very real harm these machines are doing right now, to people who are alive today. We already have ample evidence of the ways AI and adjacent methods are being used to issue automatic and capricious denials of medical care; to target people for surveillance or arrest; to create content that is racist, sexist, ableist, and so on—not to mention that their penchant for bullshit makes them a highly scalable tool for generating and disseminating disinformation. All of which is to say, so-called AI is yet another tool for accelerating the already-happening efforts of precaritization, austerity, and inequality. But it’s difficult to locate those concerns—or address them—within the tale of a new intelligence coming into being.

Another way this story works is that it embeds a notion of a hierarchy of intelligence within it. The risk isn’t that AI is smart—it’s right there in the name, we made it to be like that—the risk is that it becomes too smart, so smart that it is either capable of destroying humankind or becomes so uncaring it sees us as mere ants in its path (or both). This presumes, first, that intelligence is quantifiable and, second, that more of it is better. Leaving aside whether or not either of those things is true,1 the question then becomes how we would measure that intelligence accurately, and through what political lens. By way of illustration, consider that our typical methods for measuring intelligence—IQ tests and various university-style examinations—rarely if ever consider someone’s ability to, say, effectively deescalate a violent encounter, or interpret body language within and across cultures, or sit meditatively without looking at one’s phone every ten seconds. Those skills are positioned, at best, as supplementary to actual intelligence (that is, logic and rationality) and are often invoked dismissively to characterize the skills of women and people of color. That right there is a tell: embedded within the dominant notion of intelligence is the assertion that certain kinds of intelligence are gendered and racialized, and therefore inferior. The tests, of course, serve to prove this fact a priori: that they reproduce well-known biases (in the form of test scores that can be correlated to race and gender) is taken not as an indictment of the tests but as confirmation that intelligence is not equally distributed.

The fact is that if you scratch the surface of any notion of intelligence, you run headlong into a belief system that renders some people more intelligent—and therefore more valuable, more worthy of attention or care—than others. Here’s Dan McQuillan, writing in Resisting AI:

The general assumption of AGI [artificial general intelligence] believers is that mind is the same as intelligence, which is itself understood as logic and rationality. A commitment to AGI and the associated reification of rationalism often comes with social imaginaries and biologized superiority. It’s at this point that a belief in AGI starts to evoke deeper historical notions of hierarchies of being. If intelligence is something that can be ranked and is taken as a marker of worth, then that is presumably something that also applies to people. The hierarchy of intelligence, which comes automatically with the concept of AGI, merges with the idea that such a hierarchy already exists in humans. This belief is shared by those AI experts who welcome AGI and those who rail against it in apocalyptic terms; the latter simply fear that they will lose their superior status to a machine. On a pragmatic level, the notion of a natural hierarchy of intelligence isn’t a problem for engineering and business elites as it provides a rationale for their privilege, but the historical significance of this perspective is the way it has been deployed to legitimize oppressive social and political orders. In particular, it is a racialized and gendered concept that has been widely applied to justify the domination of one group of people over another, especially under colonialism.

McQuillan, Resisting AI, page 90

(Emphasis mine.) To expand on this: the fear that some people may lose their superior status to a machine is the same fear that they may lose it to people they already deem inferior. It’s part and parcel of the blowback against human rights being extended to Black people, to women, to trans folks, to the disabled, to everyone they long assumed was deservedly less worthy (of money, care, attention, or respect) than themselves.

The story that “artificial intelligence” tells is a smoke screen. But smoke offers only temporary cover. It fades if it isn’t replenished. We have the power to tell different stories, to counter the narrative of “artificial intelligence” with one that is rooted in democracy and equality, in a vision of a living world in which life is not ranked according to perceived value under capitalism but in which care is extended to all. But—and here’s the trick of it—in order to do that we have to let go of the notion that any one of us is worth more than any other. Countering the story of so-called AI demands that we relinquish the habit of presuming that some people are deserving of care and some are not, that some people are more intelligent than others, that some kinds of intelligences are superior. This is a challenge, but it is also a gift. The question is, are we ready to accept it?

  1. They are not.

Related books

Resisting AI

Dan McQuillan

“AI presents a technological shift in the framework of society that will amplify austerity while enabling authoritarian politics.”