Most disasters don’t happen to people who think they’re in danger. They happen to people who’ve concluded the danger is behind them. The 2008 financial crisis, the Challenger explosion, and the average highway pileup share the same DNA: a confident system convinced of its own resilience, right up to the moment it wasn’t.
Risk doesn’t disappear when you stop noticing it. It just stops being priced in.
Why complacency is the dominant failure mode
Engineers call this “normalization of deviance”: the slow process by which small anomalies stop registering as warnings because nothing bad has happened yet. NASA’s pre-Challenger O-ring problem was known for years; each successful flight reinforced the wrong conclusion. The system had absorbed the deviation, so the deviation seemed like the new normal. It wasn’t. It was a delayed failure.
You can see the same pattern in personal finance. Investors who lived through 2010–2020 internalized a market that mostly went up, and many built portfolios that only made sense if it kept doing so. When 2022 happened, “diversified” 60/40 portfolios had their worst year in decades because the safety assumptions had quietly become correlated bets. The risk hadn’t grown; the perception of safety had, and that’s what made the exposure dangerous.
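The mechanism here is the standard two-asset variance formula: a 60/40 portfolio’s volatility depends not just on each asset’s volatility but on the correlation between them, and the “diversification benefit” evaporates when that correlation flips positive. A minimal sketch, using illustrative (not historical) volatility and correlation figures:

```python
import math

def portfolio_vol(w_stock, w_bond, vol_stock, vol_bond, corr):
    """Annualized volatility of a two-asset portfolio via the
    standard variance formula:
    var = w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*corr*s1*s2"""
    var = (w_stock**2 * vol_stock**2
           + w_bond**2 * vol_bond**2
           + 2 * w_stock * w_bond * corr * vol_stock * vol_bond)
    return math.sqrt(var)

# Assumed inputs for illustration: 18% stock vol, 7% bond vol.
calm = portfolio_vol(0.6, 0.4, 0.18, 0.07, -0.3)   # bonds hedge stocks
crisis = portfolio_vol(0.6, 0.4, 0.18, 0.07, 0.6)  # correlation flips

print(f"corr=-0.3: {calm:.1%}   corr=+0.6: {crisis:.1%}")
```

With these assumed numbers, the same 60/40 weights produce roughly 10% volatility under a negative stock-bond correlation and closer to 13% once the correlation turns positive. The holdings didn’t change; the hidden assumption did.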
The role of feedback loops
Safe-feeling systems lack corrective signals. When nothing goes wrong, no one investigates near-misses, no one patches the small flaw, no one trains the operator harder. Each uneventful day reduces vigilance. The next day inherits a slightly less robust system, and the day after inherits that erosion compounded again.
This is why insurance actuaries pay attention to streaks of low claims with the same suspicion they pay to spikes. A long calm period is not evidence that risk has gone away. It’s evidence that risk is being underpriced, whether by the market, by the institution, or by the person involved. Driving studies show the same pattern: drivers who haven’t had a near-miss in months become measurably less attentive, and accident rates climb. Comfort is a leading indicator, not a lagging one.
Designing for inevitable failure
The healthier alternative isn’t to live in constant anxiety. It’s to design assuming failure will happen and ask how the system contains it. Engineers call this “fail-safe” thinking; financial planners call it building margin; pilots call it currency in emergency procedures. All of them assume the safe state is temporary and budget for the unsafe state in advance.
This is what’s missing from most personal risk thinking. People plan for the expected case and treat the bad case as a low-probability event they’ll figure out later. That’s exactly backward. The expected case will mostly take care of itself. The bad case is where preparation pays, and where a felt sense of safety is most likely to leave you under-prepared.
The bottom line
If your situation feels safe, that’s the moment to re-examine it, not relax. Risk lives in the gap between actual exposure and perceived exposure. Closing that gap, even at the cost of a little ongoing discomfort, is what separates resilient systems from ones that look fine until they don’t.