When a public-facing system delivers bad outcomes, the standard response is to call it broken. But systems rarely break in the ways people assume; they usually deliver the outcomes they were optimized for. The problem isn’t malfunction; it’s that fairness was never the metric being maximized. Recognizing the actual objective is the first step toward not being blindsided by it.
Efficiency beats accuracy in mass systems
Court systems, social benefits, and insurance claims all operate at scale, which means they’re optimized for throughput. A judge who spends an hour on every traffic case clears no docket. A benefits adjudicator who carefully evaluates each application slows the queue. Built-in heuristics (standard offers, default denials, automated screening) exist because the alternative is collapse. The cost of this efficiency is a rate of incorrect outcomes that the system treats as acceptable, so long as the average case moves. If you’re the wrongly denied applicant or the wrongly convicted defendant, the average doesn’t help you. Appeals exist precisely because errors are baked in, but appeals require time, money, and knowledge that the people most affected often lack.
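The throughput tradeoff is simple arithmetic. A minimal sketch, with every number invented for illustration: careful review keeps the error rate low but can't keep up with arrivals, while heuristic screening clears the docket at the price of wrong outcomes every day.

```python
# Toy model of a mass adjudication system. All figures are hypothetical,
# chosen only to show the shape of the tradeoff.

CASELOAD = 1000  # cases arriving per day

def daily_outcomes(cases_per_staff_day, error_rate, staff=50):
    capacity = cases_per_staff_day * staff
    processed = min(CASELOAD, capacity)     # can't process more than arrives
    backlog_growth = CASELOAD - processed   # unprocessed cases pile up
    wrong = processed * error_rate          # errors among processed cases
    return processed, wrong, backlog_growth

# Careful review: 8 cases per adjudicator per day, 1% error rate.
careful = daily_outcomes(cases_per_staff_day=8, error_rate=0.01)
# Heuristic screening: 40 cases per day, 8% error rate.
heuristic = daily_outcomes(cases_per_staff_day=40, error_rate=0.08)

print("careful:  ", careful)    # processes 400/day; backlog grows by 600/day
print("heuristic:", heuristic)  # clears all 1000/day, but ~80 are wrong
```

Under these made-up numbers, the careful system drowns (600 new backlogged cases per day) and the heuristic system survives by accepting 80 wrong outcomes per day; the institution's choice is rarely a mystery.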
Risk reduction favors the risk-averse
Lending, hiring, insurance underwriting, and licensing all reward institutions that minimize their downside. That’s reasonable in isolation, but the cumulative effect tilts the playing field. Someone with a thin credit file isn’t necessarily a worse borrower; they’re an unknown borrower, and unknowns get priced higher or rejected. The system isn’t lying when it says its decisions are “based on data.” The data just happens to encode whatever historical patterns existed, including patterns shaped by exclusion. Fairness would require accepting more risk, which means accepting more losses, which means somebody’s quarterly numbers take a hit. Until that tradeoff gets made explicitly, the defaults will keep producing the same results.
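The thin-file dynamic also comes down to arithmetic. A minimal sketch, with the decision rule and all figures invented for illustration: two applicants share the same best-estimate default risk, but a risk-averse lender prices the uncertainty itself, so the unknown borrower is rejected.

```python
# Toy lending decision (hypothetical numbers and rule): approve only if
# worst-case expected profit is positive. A thin credit file doesn't mean
# a worse estimate, just a wider error bar, and the rule below treats
# that error bar as if it were risk.

def decide(default_estimate, uncertainty, loan=10_000, interest=0.10):
    # Risk-averse rule: assume the worst plausible default rate.
    worst_case_default = default_estimate + uncertainty
    expected_profit = ((1 - worst_case_default) * loan * interest
                       - worst_case_default * loan)
    return "approve" if expected_profit > 0 else "reject"

# Thick file: well-estimated 5% default risk, tight error bar.
print(decide(default_estimate=0.05, uncertainty=0.01))  # approve
# Thin file: the SAME 5% best estimate, but a wide error bar.
print(decide(default_estimate=0.05, uncertainty=0.06))  # reject
```

Both applicants look identical on the central estimate; only the confidence interval differs. Accepting the thin-file applicant means accepting occasional realized losses, which is exactly the tradeoff the text says nobody wants to make explicit.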
Procedural fairness isn’t outcome fairness
Many systems are scrupulously fair in their procedures (everyone gets the same form, the same timeline, the same hearing) and still produce wildly unequal outcomes. That’s because procedural fairness assumes equal capacity to navigate the procedure. Someone with a lawyer, a flexible job, and a printer at home experiences “fill out this form within 30 days” very differently than someone working two shifts without internet. The forms don’t discriminate; the prerequisites do. Reform efforts that focus only on procedure miss this entirely, which is why decades of process tweaks often leave outcomes unchanged. Substantive fairness requires looking at who actually wins and loses, not just whether the rules were followed.
Bottom line
When a system feels rigged, the productive question isn’t “why is it broken?” but “what is it actually trying to optimize?” The answer is usually some combination of speed, cost, and institutional risk, not the welfare of the people moving through it. That’s not a moral failure of any individual operator; it’s a design choice, often made decades ago by people who weren’t thinking about you. Changing the outcome means changing the objective function, which is harder than complaining about the symptom but more likely to produce something different.
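The objective-function point can be made literal with a toy score. All weights and policy numbers below are invented for illustration; the only claim is structural: a term that isn't in the objective cannot influence which policy wins.

```python
# Two candidate policies for the same hypothetical system.
policies = {
    "fast_default_deny": {"cleared": 1000, "cost": 10, "risk": 1, "welfare": 20},
    "careful_review":    {"cleared": 400,  "cost": 40, "risk": 2, "welfare": 90},
}

def score(p, w_welfare=0.0):
    # Institutional objective: reward speed, penalize cost and risk.
    # Applicant welfare counts only if someone explicitly weights it.
    return p["cleared"] - 5 * p["cost"] - 50 * p["risk"] + w_welfare * p["welfare"]

# Default objective (welfare weight = 0): the fast, deny-by-default policy wins.
best_default = max(policies, key=lambda k: score(policies[k]))
# Same system, objective changed to include welfare: careful review wins.
best_with_welfare = max(policies, key=lambda k: score(policies[k], w_welfare=12.0))

print(best_default)        # fast_default_deny
print(best_with_welfare)   # careful_review
```

Nothing about the operators changed between the two runs; only the objective did. That is the sense in which "changing the outcome means changing the objective function."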