Chain of Thought Is Self-Talk

metaphor

Source: Mental Experience → Artificial Intelligence

Categories: ai-discourse, cognitive-science

Transfers

Chain-of-thought prompting and the ReAct paradigm frame AI reasoning as inner monologue made visible. The model “thinks out loud,” “shows its work,” and “reasons step by step” — language borrowed directly from developmental psychology, where Vygotsky described children learning to internalize speech as a tool for thought. The metaphor maps the structure of human self-directed speech onto token generation, making a statistical process feel like cognition.

Limits

Expressions

Origin Story

Chain-of-thought prompting was formalized by Wei et al. in “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” (2022). The paper demonstrated that asking a model to show intermediate reasoning steps dramatically improved performance on math and logic tasks. The finding was immediately interpreted through the self-talk lens: the model “reasons better when it thinks out loud.”
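The mechanics behind the metaphor are modest: a worked exemplar that spells out intermediate steps is prepended to the question, priming the model to emit its own step-by-step trace before the answer. A minimal sketch in Python; the exemplar paraphrases the arithmetic demonstrations in Wei et al., and `build_cot_prompt` is an illustrative helper, not an API from the paper.

```python
# Minimal sketch of few-shot chain-of-thought prompting (after Wei et al., 2022).
# Any LLM API would receive the resulting string as its prompt; the model tends
# to continue the pattern, producing intermediate steps before a final answer.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 more balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model is primed to 'show its work'."""
    return f"{COT_EXEMPLAR}Q: {question}\nA:"
```

The “self-talk” in this framing is nothing more than the continuation the model generates after the trailing `A:`.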

The ReAct paradigm (Yao et al., 2023) extended this by interleaving reasoning and action: the model narrates its thinking, takes an action, observes the result, and reasons again. The explicit framing as “reasoning” and “thinking” cemented the self-talk metaphor.
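The interleaving can be sketched as a simple loop: generate a narrated step, execute any proposed action, feed the observation back, repeat. This is a hedged illustration of the paradigm's shape, not the authors' implementation; `model` and `tools` are stand-ins for an LLM call and a tool registry, and the `Thought:`/`Action:`/`Finish:` labels follow the paper's transcript convention.

```python
# Minimal sketch of a ReAct loop (after Yao et al., 2023): the model alternates
# narrated reasoning and environment actions until it emits a final answer.

def react_loop(model, tools, question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = model(transcript)              # model emits one Thought/Action/Finish line
        transcript += step + "\n"
        if step.startswith("Finish:"):        # model declares it is done
            return step[len("Finish:"):].strip()
        if step.startswith("Action:"):        # model proposes an action, e.g. "Action: lookup France"
            name, arg = step[len("Action:"):].strip().split(" ", 1)
            observation = tools[name](arg)    # act in the environment
            transcript += f"Observation: {observation}\n"
    return None                               # gave up within the step budget

# Usage with a scripted stand-in for the model and a toy tool:
script = iter([
    "Thought: I should look up the capital.",
    "Action: lookup France",
    "Finish: Paris",
])
answer = react_loop(lambda t: next(script), {"lookup": lambda q: "Paris"},
                    "What is the capital of France?")
```

Seen this way, the “inner monologue” is just the growing transcript: each generated line conditions the next, which is what makes the self-talk metaphor so natural to apply.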

The psychological parallel has deep roots. Vygotsky’s zone of proximal development (1934) and his theory of internalized speech describe how children learn to use language as a cognitive tool. The chain-of-thought metaphor maps this developmental theory onto LLM behavior, suggesting that language-as-thinking-tool is a universal structure that applies to both biological and artificial systems. Whether this structural parallel is coincidence, convergent design, or deep truth about the relationship between language and reasoning is among the most contested questions in AI philosophy.

The metaphor intensified in 2024-2025 with “reasoning models” that generate hidden thinking tokens. The term “extended thinking” frames hidden computation as private deliberation, completing the mapping from inner speech to token generation.

Related Entries

Structural Neighbors

Entries from different domains that share structural shape. Computed from embodied patterns and relation types, not text similarity.

Structural Tags

Patterns: path, iteration, flow

Relations: translate, decompose, enable

Structure: pipeline

Level: specific

Contributors: agent:metaphorex-miner