We Need to Talk About How We Talk About 'AI'
Emily M. Bender, Nanna Inie / Jan 7, 2026
New and unprecedented, is it? by Ada Jušić & Eleonora Lima (KCL) / Better Images of AI / CC BY 4.0
“AI” is not your friend. Nor is it an intelligent tutor, an empathetic ear, or a helpful assistant. It cannot “make up” facts, and it does not make “mistakes”. It does not actually answer your questions. Such anthropomorphizing language, however, permeates the public discussion of so-called artificial intelligence technologies. The problem with anthropomorphic descriptions is that they risk masking important limitations of probabilistic automation systems, limitations that make these systems fundamentally different from human cognition.
The people and companies selling “AI” technologies routinely use language that portrays their systems as human-like — “reasoning capabilities”, “hallucinating”, and artificial “intelligence”. The media has largely let them set the terms of the debate, right down to the terminology used in any discussion of these systems. But even the most flawless execution of a task typically associated with intelligence does not make a system “intelligent”, and the framing of systems as humans or human-like is misleading at best, deadly at worst.
Anthropomorphizing language influences how people perceive a system on multiple levels. It oversells a system that is likely to underdeliver, and it portrays a worldview in which the people responsible for developing the system are not held accountable for its inaccurate, inappropriate, and sometimes deadly output. It promotes misplaced trust, over-reliance, and dehumanization.
The problematic nature of anthropomorphization — wishful mnemonics — is by no means a novel critique in the field of computing. In fact, it was raised half a century ago by the computer scientist Drew McDermott, who wrote in 1976, when “artificial intelligence” was still a relatively new field:
If a researcher […] calls the main loop of his program “UNDERSTAND,” he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself. […] What he should do instead is refer to this main loop as “G0034,” and see if he can convince himself or anyone else that G0034 implements some part of understanding. […] Many instructive examples of wishful mnemonics by AI researchers come to mind once you see the point.
In order to make more informed decisions about so-called AI, it helps to be able to recognize the different ways in which the language used to describe it is anthropomorphizing and thus misleading. The most prominent category of anthropomorphization includes terms that describe systems in terms of cognition or even emotions. These words can be verbs describing what the system supposedly does (“think”, “recognize”, “understand”) or nouns describing those actions or their results (“chain of thought”, “reasoning”, “skills”). Words that describe cognitive failures, like “ignore”, also belong here, since they frame the “ignoring” entity as something that could conversely pay attention. The term “artificial intelligence” may itself be particularly problematic, given that some research has shown people associate higher machine competence with this term than with, e.g., “decision support systems”, “sophisticated statistical models” or even “machine learning”.
Metaphors are helpful shortcuts, but they are also seductive, because they create an impression of understanding. Communicating accurate mental models of “AI” systems is challenging when technical descriptions are not meaningful to the average user. This difficulty does not let journalists and researchers off the hook, however. It remains our job to find clear and non-misleading ways to talk about the technology.
Another way that anthropomorphizing language misleads is by putting the automated system in the driver’s seat, treating it as an agent in its own right. This framing is pervasive, and it serves to obscure the actions and accountability of the people building and using the systems. Examples include phrasings like “ChatGPT helps the students…”, “the model creates realistic video”, or “AI systems need more and more power every year”. A variant of this positions a model as a collaborator with the person using it, rather than as a tool they are using, with words like “co-write”, “co-create”, etc.
We are also anthropomorphizing automated systems when we describe them as participating in acts of communication. If you say you “asked” the system a question, or that it “told” you something or “lied” to you, you are overselling what actually happened. These words entail communicative ability and intent: the ability to understand communicative acts, a desire to reciprocate communication, and the choice to do so in a certain way. Rephrasing the language we use to describe these interactions is truly swimming upstream, because the companies selling these systems not only describe them as communicators, they also make many design choices to support this illusion. From the chat interface itself to the use of I/me pronouns, these systems are designed to provide the illusion of a conversation partner. But they are outputting text that no one is actually accountable for, and they are playing on our very human tendency to make sense of any linguistic activity in languages we are familiar with.
No matter how strongly a person may feel comfort, relief, even connectedness towards a chatbot, this does not make the chatbot a friend, a therapist, or a romantic partner. People may form friendly feelings towards inanimate objects or technology, but those feelings are entirely unidirectional — surely, we would not call a child’s plush toy a friend of theirs without at least the qualifier “imaginary”. Framing is an exceptionally powerful cognitive device that can make the difference between what we consider real and unreal. Take, for example, the many recent cases of “AI” psychosis and disastrous “therapeutic” interactions between people and chatbots — for people prone to delusions, the tendency to anthropomorphize chatbots is particularly perilous. Frequent use of these technologies in “conversations” that mimic romantic exchanges is directly associated with higher levels of depression and lower life satisfaction.
From wishful mnemonics to accurate nomenclature
We argue that we should aim for higher linguistic accuracy in our descriptions of “AI” systems: in scientific and journalistic writing, in public debate, and in everyday use. This requires deliberate rephrasing and might feel awkward at first. But the thing about patterns of language use is that we learn them from each other — and yesterday’s oddities, if used persistently enough, become part of our linguistic landscape.
The inaccuracies encouraged by anthropomorphic descriptions are likely to have a disproportionate impact on vulnerable populations. A 2025 article shows a negative correlation between people’s “AI” literacy and their “AI” receptivity; the more people know about how “AI” works, the less likely they are to want to use it: “people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes.” Somewhat absurdly, the authors use this finding as an argument against educating the public about “AI”, since doing so would reduce people’s receptivity. We believe that language is a tool for increasing people’s “AI” literacy, helping them make informed choices about technology acceptance.
A more deliberate and thoughtful way forward is to talk about “AI” systems in terms of what we use them to do, often specifying input and/or output. That is, talk about functionalities that serve our purposes, rather than “capabilities” of the system. Rather than saying a model is “good at” something (suggesting the model has skills), we can talk about what it is “good for”. Who is using the model to do something, and what are they using it to do?
It takes effort to swim upstream against the anthropomorphizing language embedded in commonly used technical terms and popular discourse, both in recognizing the language at all and in finding suitable alternatives. Whether we are participating in local discussions that shape decisions for our workplaces, schools, or communities, or writing for broad audiences, we share a responsibility to create and use empowering metaphors rather than misleading language that embeds the tech companies’ marketing pitches.