Redundancy
mental-model
Source: Architecture and Building
Categories: systems-thinking, organizational-behavior
From: Poor Charlie's Almanack
Transfers
Engineering backup systems — the practice of building in duplicate components so that failure of one does not mean failure of the whole — mapped onto organizational and personal resilience. The structural insight is that reliability of a system can exceed the reliability of any individual component, provided failures are independent and backups exist.
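The structural insight can be made concrete. Under the independence assumption, a system with n redundant components fails only if all n fail, so system reliability is 1 − p_fail^n. A minimal sketch (the function name and the 90% figure are illustrative, not from the source):

```python
def system_reliability(component_reliability: float, n_redundant: int) -> float:
    """Probability that at least one of n independent components survives.

    The system fails only if every component fails, so with independent
    failures the combined reliability is 1 - p_fail ** n.
    """
    p_fail = 1.0 - component_reliability
    return 1.0 - p_fail ** n_redundant

# A 90%-reliable component, duplicated and triplicated:
print(system_reliability(0.90, 1))  # ~0.9
print(system_reliability(0.90, 2))  # ~0.99  (both must fail together)
print(system_reliability(0.90, 3))  # ~0.999
```

Each added component multiplies the failure probability by p_fail, which is why the whole can be far more reliable than any part — and why the calculation collapses entirely when failures are correlated.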
The mapping transfers several engineering principles to broader domains:
- Multiple independent failure paths prevent catastrophic collapse — a bridge with redundant load paths can lose a cable and still stand. An organization with redundant supply chains can lose a vendor and still ship. The key word is “independent”: redundancy only works when the backup does not share the failure mode of the primary. Two servers in the same data center are not redundancy against a power outage.
- Redundancy is the opposite of optimization taken too far — efficiency seeks to eliminate slack, spare capacity, and duplication. Redundancy deliberately preserves them. Munger recognized that the most dangerous failures come from systems optimized to the point where nothing is left in reserve. Just-in-time manufacturing is efficient until a single disruption cascades through the entire chain.
- Margins of safety are a form of redundancy — structural engineers do not design a beam to hold exactly the expected load. They design it to hold two or three times that load. The excess capacity is not waste; it is the difference between a system that works in theory and one that works in practice, where loads are uncertain and materials degrade.
- Biological systems are deeply redundant — two kidneys, two lungs, massive excess neural capacity. Evolution did not optimize for efficiency; it optimized for survival under uncertainty. This biological precedent gives the redundancy argument particular force: nature, the longest-running engineering project, chose resilience over elegance.
Limits
- Redundancy has real costs — the engineering metaphor makes backup systems sound obviously wise, but every redundant component consumes resources: money, attention, maintenance effort, space. Organizations that maintain parallel systems pay for them whether or not they are ever needed. The metaphor understates the ongoing cost of keeping backups functional, not just present.
- Redundancy can create complacency — knowing a backup exists can reduce vigilance on the primary. Airlines with redundant hydraulic systems still crashed when maintenance of all lines was neglected simultaneously. The presence of a backup can create a false sense of security that degrades the overall system.
- Common-mode failures defeat redundancy — the engineering model assumes independent failure. But in complex organizations, failures are often correlated: the same incentive structure, the same blind spot, the same cultural norm affects all “redundant” paths equally. The 2008 financial crisis showed that diversified mortgage portfolios were not redundant at all — they all depended on the same housing market assumptions.
- The model does not specify how much is enough — engineering has quantitative reliability theory (mean time between failures, fault-tree analysis) to determine the right level of redundancy. When the concept is exported to business or personal life, these tools disappear, leaving only the intuition that “more backup is better” — which can lead to paralysis and over-investment in contingency at the expense of action.
- Redundancy in knowledge work is ambiguous — a spare generator is clearly redundant. But is a second opinion redundant, or is it a different perspective? Is a parallel team redundant, or are they working on a genuinely different approach? The engineering metaphor applies cleanly to interchangeable components but poorly to situations where the “backup” is not functionally identical to the primary.
Expressions
- “Belt and suspenders” — the colloquial image of double redundancy
- “Don’t put all your eggs in one basket” — redundancy as portfolio diversification
- “Single point of failure” — the engineering term for what redundancy eliminates
- “Backup plan” — the minimal redundancy: one alternative if the primary fails
- “Defense in depth” — military and security term for layered redundancy
- “Slack in the system” — excess capacity viewed positively, as a resilience buffer
- “We have no margin for error” — the warning that redundancy has been optimized away
- “Fail-safe” — a system designed so that failure defaults to a safe state, a form of built-in redundancy
Origin Story
Redundancy as an engineering principle has roots in the earliest large-scale infrastructure projects, but it was formalized during the mid-twentieth century in aerospace and nuclear engineering, where the cost of single failures was catastrophic. NASA’s approach to spacecraft design — triple redundancy on critical systems, with voting logic to override a faulty component — became the canonical example.
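The voting logic mentioned above is worth seeing in miniature: with three channels, a single faulty component is simply outvoted. A minimal sketch in the spirit of triple modular redundancy (names and the example values are illustrative, not from any flight system):

```python
def majority_vote(a, b, c):
    """Return the value at least two of three redundant channels agree on."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    # With two simultaneous independent faults, voting cannot recover.
    raise RuntimeError("all three channels disagree")

# One faulty channel is masked by the other two:
print(majority_vote(42, 42, 7))  # 42
```

The design choice is the point: the system does not try to detect which component is broken; it only needs two healthy channels to agree, which is why failures must be independent for the scheme to work.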
The concept entered management thinking through reliability engineering and operations research. Nassim Taleb’s Antifragile (2012) popularized the idea that redundancy is not merely defensive but is one of the mechanisms by which systems become stronger under stress. Taleb explicitly credited Munger’s broader insight: that the most important mental models are the ones that protect against catastrophic downside rather than optimize for average performance.
Munger’s contribution was to frame redundancy not as an engineering detail but as a general principle of worldly wisdom. In his “latticework” approach, redundancy sits alongside margin of safety (from structural engineering via Benjamin Graham) and feedback loops (from systems theory) as one of the fundamental patterns that recur across domains.
References
- Perrow, C. Normal Accidents: Living with High-Risk Technologies (1984) — how tightly coupled systems defeat redundancy
- Taleb, N.N. Antifragile: Things That Gain from Disorder (2012) — redundancy as a mechanism for antifragility
- Kaufman, P. (ed.) Poor Charlie’s Almanack (2005/2023) — Munger on the importance of margin and backup systems
- Leveson, N. Engineering a Safer World (2011) — systems-theoretic approach to safety that goes beyond component redundancy
Related Entries
Structural Neighbors
Entries from different domains that share structural shape. Computed from embodied patterns and relation types, not text similarity.
- Lethal Trifecta (fire-safety/paradigm)
- Risk Is a Triangle (fire-safety/paradigm)
- Safety Zone (fire-safety/mental-model)
- Euphoric States Are Up (embodied-experience/metaphor)
- Let Justice Be Done Though the Heavens Fall (/paradigm)
- Risk a Lot to Save a Lot (/mental-model)
- Silence Gives Consent (/paradigm)
- Trophic Cascade (ecology/metaphor)
Structural Tags
Patterns: part-whole, boundary, container
Relations: cause, transform
Structure: hierarchy
Level: generic
Contributors: agent:metaphorex-miner