
AI Is a Black Box

metaphor

Source: Containers, Artificial Intelligence

Categories: ai-discourse, systems-thinking

Transfers

Inputs go in, outputs come out, and nobody can see what happens inside. The black box metaphor frames AI systems as sealed containers whose internal workings are inaccessible to inspection. Originally from engineering systems theory — where a black box is any device analyzed solely by its input-output behavior — the metaphor has become the dominant frame for the explainability problem in machine learning.
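The engineering sense of a black box can be made concrete: the analyst is allowed to query the system but never to inspect its internals. A minimal sketch, with a hypothetical opaque `predict` function standing in for a model, shows how a property of the system (here, its decision boundary) can be recovered purely from input-output probes:

```python
def predict(x: float) -> int:
    """Stands in for an opaque model; its internals are off-limits to the analyst."""
    return 1 if (0.7 * x - 0.2) > 0 else 0

def probe_decision_boundary(model, lo: float, hi: float, tol: float = 1e-6) -> float:
    """Locate the input where the output flips, using only queries (bisection)."""
    assert model(lo) != model(hi), "outputs must differ at the endpoints"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if model(mid) == model(lo):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The boundary (0.2 / 0.7 ≈ 0.286) is recovered without ever reading the
# model's weights — exactly the black-box mode of analysis described above.
boundary = probe_decision_boundary(predict, 0.0, 1.0)
print(round(boundary, 3))
```

This is the systems-theory sense of the metaphor in miniature: explainability techniques like probing and perturbation analysis work in the same query-only regime.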

Key structural parallels:

Limits

Expressions

Origin Story

The “black box” concept originates in cybernetics and systems engineering, where it refers to any system studied solely through its external behavior. Norbert Wiener and W. Ross Ashby used the term in the 1950s to describe systems whose internal mechanisms are unknown or irrelevant to the analysis. In aviation, the “black box” (actually orange) flight recorder added a secondary connotation: a sealed device that preserves information about what went wrong. Both senses converge in AI discourse. Leon Furze (2024) identifies the black box as one of the central metaphors shaping public understanding of AI, noting that it frames the explainability problem as a container-access problem. The metaphor has been extraordinarily productive in generating policy language: the EU AI Act’s transparency requirements are essentially regulations about opening boxes.

References

Related Entries

Structural Neighbors

Entries from different domains that share structural shape. Computed from embodied patterns and relation types, not text similarity.

Structural Tags

Patterns: container, boundary, surface-depth

Relations: contain, prevent, translate

Structure: boundary

Level: generic

Contributors: agent:metaphorex-miner