Engineers love adding safety features. Layers of sensors, redundant systems, automated overrides: the more, the better, the thinking goes. But a growing body of accident analysis suggests something uncomfortable: each new layer brings its own failure modes, and at some point complexity becomes its own hazard. The Boeing 737 MAX didn’t crash because it lacked automation. It crashed partly because of it.
This isn’t an argument against safety engineering. It’s an argument for taking complexity seriously as a risk in itself, the way we already treat fatigue, weather, or human error.
Redundancy isn’t free
The classic safety move is redundancy: two sensors instead of one, two engines instead of one, a backup system that kicks in if the primary fails. In theory, independent failures multiply away: two sensors that each fail one flight in a hundred should fail together only one flight in ten thousand. In practice, redundant systems share components, software, and assumptions, which means a single bad input can defeat the whole stack. The 2009 Air France 447 crash involved iced pitot tubes feeding bad airspeed data to multiple flight computers simultaneously. The redundancy didn’t help because every redundant channel was trusting the same corrupted source. Designers call this common-cause failure, and it’s everywhere, from nuclear plants to medical devices. The more interconnected the redundancy, the more likely it is to fail together.
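To see how much a shared failure mode costs, it helps to run the numbers. Here is a minimal sketch; all probabilities are hypothetical, chosen only to show the shape of the effect. On paper, doubling up a one-in-a-hundred sensor buys a hundredfold improvement, but even a small common-cause term claws most of it back.

```python
# Back-of-the-envelope: how a common cause erodes redundancy.
# All probabilities are hypothetical, chosen only to show the shape of the effect.

p_single = 0.01      # assumed chance one sensor fails on a given flight
p_common = 0.001     # assumed chance a shared cause (icing, firmware) kills both

# Idealized duo: failures are independent, so the probabilities multiply.
p_ideal = p_single ** 2                             # 0.0001 -> 100x better

# Realistic duo: a common cause, or both failing independently anyway.
p_real = p_common + (1 - p_common) * p_single ** 2  # ~0.0011 -> only ~9x better

print(f"single sensor:     {p_single:.4f}")
print(f"ideal redundancy:  {p_ideal:.4f}")
print(f"with common cause: {p_real:.4f}")
```

Even with a shared failure mode just one tenth as likely as a single sensor fault, the common-cause term dominates and eats most of the theoretical gain.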
Automation creates new error modes
When a system handles routine cases automatically, operators lose practice handling the edge cases. This is the irony of automation, documented since Lisanne Bainbridge’s 1983 paper “Ironies of Automation.” Pilots whose autopilots fly almost the entire trip become rusty at hand-flying when something goes wrong. Drivers using lane-keeping assist disengage their attention until the moment they need to take over, which is exactly when their reaction times are worst. Automated braking, blind-spot warnings, and stability control demonstrably reduce some crashes, but they shift others into a category where humans are slower to notice problems because they’ve outsourced noticing. The net safety benefit is usually positive, but it’s smaller than the marketing suggests, and concentrated in specific scenarios.
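A rough expected-value sketch shows the structure of that claim. Every figure below is hypothetical; the only point is that a system which prevents a share of old crashes while adding a new takeover failure mode nets less than its headline number.

```python
# Structural sketch: automation prevents some crashes but adds a takeover
# failure mode. Every number is hypothetical; only the shape matters.

baseline = 100.0          # assumed crashes per billion miles without the system
prevented_share = 0.30    # assumed share of those crashes the system avoids
takeover_crashes = 12.0   # assumed new crashes caused by slow human takeover

net = baseline * (1 - prevented_share) + takeover_crashes

print(f"baseline rate:   {baseline:.0f}")
print(f"with automation: {net:.0f}")  # 82: an 18% net cut, not the 30% headline
```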
Simpler can be more reliable
Engineers in fields like aviation and nuclear power increasingly talk about resilience rather than redundancy: building systems that fail gracefully and remain understandable to operators. A mechanical valve that fails closed under pressure is more predictable than a software-controlled valve with seventeen sensor inputs and a firmware update schedule. Hospitals have found that simple checklists reduce infection rates more reliably than expensive automated tracking systems, partly because the checklist is something a tired nurse can actually follow at 3 a.m. The lesson isn’t that low tech is better; it’s that the safety calculation has to include whether humans can actually use the system under stress, with imperfect information.
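Here is what fail-closed looks like as a control policy, sketched in Python with hypothetical names (Valve and read_pressure are stand-ins, not any real API): every uncertain path drops the system into the one state operators can predict.

```python
# Sketch of a fail-closed policy: on any doubt, command the predictable state.
# Valve and read_pressure are hypothetical stand-ins, not a real API.

PLAUSIBLE_PSI = (0.0, 150.0)  # assumed sane pressure range for this line

def command_valve(valve, read_pressure):
    """Open only on a trustworthy reading; treat anything else as a fault."""
    try:
        psi = read_pressure()
    except Exception:
        valve.close()   # sensor dead or unreachable: fail to the known state
        return
    if not (PLAUSIBLE_PSI[0] <= psi <= PLAUSIBLE_PSI[1]):
        valve.close()   # implausible number: a bad reading is a fault, not data
        return
    valve.open()        # the only path that requires everything to check out
```

The design choice worth noticing: “closed” is the default that every error path reaches, while “open” is the single branch that has to earn its way.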
Bottom line
Adding a safety feature feels like progress. Sometimes it is. But every layer of complexity is also a new place for failure to hide, and the most expensive accidents tend to involve systems that were designed to be foolproof. Good safety engineering means knowing when to stop adding, when to subtract, and when to admit that the human at the controls is still the most important component in the loop.