
AI Hallucination Is Perception Disorder

metaphor

Source: Medicine → Artificial Intelligence

Categories: ai-discourse, cognitive-science

Transfers

When a language model generates confident but false text, we call it a “hallucination” — a term borrowed from psychiatry, where it means perceiving things that are not there. This is arguably the most consequential metaphor in contemporary AI discourse. It does not merely describe a technical failure; it presupposes that the model has something like perception in the first place.

Key structural parallels:

Limits

Expressions

Origin Story

The term “hallucination” entered AI discourse through computer vision in the 2010s, where neural networks sometimes “saw” patterns in noise; DeepDream’s psychedelic dog faces are the iconic example. The visual context made the psychiatric metaphor feel natural: the network literally produced images of things that were not there. When large language models began generating confident falsehoods in 2022-2023, the term transferred from vision to language, losing the visual grounding that had made it somewhat apt.

The Springer study “Between fact and fairy: tracing the hallucination metaphor in AI discourse” (2025) documents this migration and argues that the metaphor has real consequences for regulation and public trust. The Science article “The metaphors of artificial intelligence” (2025) places hallucination within the broader pattern of anthropomorphic framing that Drew McDermott warned about in the 1970s as “wishful mnemonics”: technical terms that trick their users into attributing human qualities to machines. Researchers have proposed alternative terms, including “confabulation”, “stochastic parrots” (Bender et al., 2021), and simply “errors” or “fabrications”, but none has displaced “hallucination” in common usage.

References

Related Entries

Structural Neighbors

Entries from different domains that share structural shape. Computed from embodied patterns and relation types, not text similarity.

Structural Tags

Patterns: container, surface-depth, matching

Relations: translate, cause, transform

Structure: boundary

Level: specific

Contributors: agent:metaphorex-miner