Anti-anthropomorphism has deep roots. In the 20th century, scientists sallied forth on a quixotic quest to see animals objectively. To do it, they tried to strip away human assumptions about biology, social structure, animal behavior, and more. Eventually, this ideal became a dominant ideology, says ecologist Carl Safina. At one point, anthropomorphism was called the “worst of ethological sins” and a danger to the animal world. But the next generation of field ecologists, including Jane Goodall and Frans de Waal, pushed back, infusing their observations with empathy. “I don’t know people anymore who study animals and insist that anthropomorphism is out of bounds,” Safina says. Still, anthropomorphism is a tool like any other, used to better and worse ends in humanity’s endless pursuit of understanding a complicated world. Figuring out when and how to apply such a tool is more urgent than ever, as mass extinction snuffs out nonhuman intelligence and new artificial systems come online every day. How we interact with these entities, both animal and artificial, is fast becoming one of the defining challenges of this century.

At its most basic, anthropomorphism is a form of metaphorical thinking that enables us to draw comparisons between ourselves and the world around us. It can also be understood as one of countless byproducts of what neuroscientists call theory of mind: the ability to distinguish one’s mind from the minds of others, and then infer what those others are thinking or feeling.

Theory of mind underpins all kinds of human social interaction, from empathy to deception. Even so, it remains an imperfect instrument. “The easiest access we have is to ourselves,” says Heather Roff, a researcher focused on the ethics of emerging technology. “I have a theory of mind because I know me, and you are sufficiently like me.” But an n of 1 is a fragile thing, and anyone can find themselves stumped by an individual they deem “unreadable” or by the “shock” of a culture very different from their own.

Despite these challenges, humans appear to be driven to see others as minded (or, put another way, to perceive persons). We seem to reflexively believe that other entities have their own thoughts and emotions. At the same time, many people internalize beliefs that override this capacity, routinely denying the mindedness of children, women, people of color, people with mental illness or developmental disability, and nonhuman animals.

Machine intelligence complicates this call to see personhood in the world around us. Despite claims that Google’s LaMDA is not just sentient but has a soul, most theorists believe that these and other hallmarks of consciousness (or something like it) remain decades away, at best. As it stands, existing AI is actually pretty stupid, and entirely dependent on humans for further development. It may excel in a specific domain, but we have nothing near generalized, let alone super, intelligence. Even within a given domain, the limitations are profound; ChatGPT may spit out convincing text, but it doesn’t understand a word it has said.

Most of AI’s shortcomings, and its strengths, are poorly understood by the general public (and sometimes even by the supposed experts). At times, AI’s capacities even appear to be intentionally dramatized.
And many projects are explicitly modeled on human cognition and designed to mimic human behaviors, making it hard to dismiss the like-mindedness one might sense in a social media algorithm or a Google search recommendation, even if that feeling is ultimately undeserved. The end result is that many people are eager to ascribe mindedness to pieces of machinery and bits of code.

There are real reasons to resist this impulse. AI’s ethical problems currently reside in how humans use these technologies against other humans, not in the legal or moral “rights” of the AI itself. We don’t need to worry about “AI killer robots” nearly as much as we need to worry about humans using robots to kill. And while AI might effectively imitate aspects of human intelligence, it operates in meaningfully different ways. DALL-E has no hands to grasp a paintbrush, let alone an artistic vision of its own to execute; it’s a statistical model trained to emulate human artists. That’s a fundamentally different way of “creating,” with ramifications all its own.

We probably won’t want to build AI that copies us for much longer, either. “If I’m optimizing for something, I want it to be better than my own senses,” Roff says. The AI of the future should be more like the dolphins she trained to use echolocation to detect land mines for the US military: “They don’t perceive like us,” she says, and that’s the point.

The cultural fixation on anthropomorphism has allowed people to overlook an altogether more threatening bias: anthropofabulation. The clunky term, developed by philosopher Cameron Buckner, describes the tendency to use an inflated sense of human potential as the ruler by which we measure all other forms of intelligence. In this framework, humans underestimate dolphin minds and overstate artificial intelligence for the same reason: When we see ourselves as the best, we think whatever is more like us is better. The approaches that avoid both traps are rooted in empathy, and also in a kind of objectivity that flows from a commitment to witness both similarity and difference. “If you observe other animals, and you conclude that they have thoughts and emotions, then that’s not projecting,” Safina says, “that’s observing.”

AI will require a more subtle application of these principles. By and large, anthropomorphism and anthropofabulation distract us from seeing AI as it actually is. As AI grows more intelligent, and as our understanding of it deepens, our relationship to it will necessarily change. By 2050, the world may need a Jane Goodall for robots. But for now, projecting humanity onto technology obscures more than it reveals.
