If you investigate enough disasters, an uncomfortable pattern emerges. The headline almost always blames a person. The deeper analysis almost always blames a system that depended on that person not making a mistake. Aviation, medicine, nuclear power, and finance have all spent decades learning that the question is not whether humans will err. They will. The question is whether the system that surrounds them assumes they will or pretends they won’t.
The aviation revolution started with this insight
Commercial aviation in the 1970s had a problem nobody could solve by hiring better pilots. Crashes kept happening, and the post-incident analyses kept finding that the crew had made mistakes a more careful pilot would not have made. The industry had two choices: keep demanding more careful pilots, or redesign the system to assume that even excellent pilots would have bad days. They chose the second, and the result is one of the most successful safety transformations in industrial history. Crew Resource Management trained pilots to expect their own errors and check each other openly. Standardized checklists removed reliance on memory for routine tasks. Cockpit design was rebuilt to make certain mistakes physically harder to make. The result is that commercial aviation is now safer per mile than walking, and most of that gain came from accepting human fallibility rather than fighting it.
Medicine is in the middle of the same conversation
Healthcare adopted the lessons more slowly, partly because the culture of medicine has historically treated errors as individual failures of competence or character. The work of researchers like Atul Gawande and Lucian Leape pushed the field toward checklists, structured handoffs, and surgical timeouts, and the data on these interventions is unambiguous. Hospitals that adopted surgical safety checklists saw measurable drops in mortality and complications. The interventions are simple enough that they sound almost insulting to skilled clinicians, which is part of why adoption took so long. The point of a checklist is not that the surgeon does not know the steps. The point is that even experts forget steps under pressure, and a piece of paper does not.
The other domains are catching up unevenly
Software engineering has internalized a version of this through code review, automated testing, and incident postmortems that emphasize systems over individuals. Finance has done it partially, with controls like position limits and risk dashboards, though the 2008 crisis revealed how much the industry was still relying on individual judgment under conditions designed to overwhelm it. Consumer product design lags behind, partly because there is no equivalent of an FAA forcing accountability for predictable misuse. The gap between domains that have absorbed the lesson and domains that have not is one of the more reliable predictors of where the next preventable disaster will come from.
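To make the software version of the idea concrete, here is a minimal, purely illustrative sketch of a release checklist encoded as automated checks, so that shipping does not depend on anyone remembering the steps. It is not from any particular team's tooling; the individual checks and their names are hypothetical placeholders, and only the test-runner invocation assumes a real tool (pytest).

```python
# Illustrative sketch: a "checklist as code" that a CI pipeline could run
# before a deploy. The point mirrors the surgical checklist: the checks are
# trivial, but they do not forget under pressure.

import subprocess
import sys


def tests_pass() -> bool:
    """Run the test suite; treat a missing test runner as a failure too."""
    try:
        return subprocess.run(["pytest", "-q"]).returncode == 0
    except FileNotFoundError:
        return False


def migrations_applied() -> bool:
    """Hypothetical placeholder: verify schema migrations have run."""
    return True


def feature_flags_reviewed() -> bool:
    """Hypothetical placeholder: confirm risky changes sit behind flags."""
    return True


CHECKLIST = [
    ("test suite passes", tests_pass),
    ("database migrations applied", migrations_applied),
    ("feature flags reviewed", feature_flags_reviewed),
]


def preflight() -> bool:
    """Run every item and report all failures, rather than stopping at the first."""
    failures = [name for name, check in CHECKLIST if not check()]
    for name in failures:
        print(f"BLOCKED: {name}")
    return not failures


if __name__ == "__main__":
    # Exit nonzero so the surrounding pipeline refuses to deploy on any failure.
    sys.exit(0 if preflight() else 1)
```

The design choice that matters is the same one aviation made: the checks are owned by the system, not by the most conscientious person in the room.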
The takeaway
The adult position on human error is that it is a constant, like gravity or weather. Systems that ignore it eventually pay a large bill. Systems that design around it pay a small bill continuously and avoid the large one. The question is not whether you can stop making mistakes. It is whether the next one will be allowed to matter.