The default assumption in most modern organizations is that better data produces better decisions, and therefore more data must produce even better ones. This is a comforting belief and a profitable one for the consulting and software firms that sell dashboards. It’s also, in many measurable ways, wrong. Beyond a surprisingly modest threshold, additional information degrades decision quality, and the people most confident they’re being data-driven are often the ones most affected.
The information-paralysis curve
Decision quality, plotted against information available, doesn’t rise indefinitely. It rises sharply with the first useful inputs, plateaus, and then falls as additional data introduces noise, contradiction, and analysis cost. Herbert Simon described the underlying limit in the 1950s as “bounded rationality,” and the effect has been replicated across domains. In a classic 1970s study, psychologist Paul Slovic gave horse race handicappers five pieces of information about each race, then forty: their predictions got no more accurate, but their reported confidence climbed sharply. Confidence and accuracy decoupled. That’s the dangerous shape of the curve: people don’t just get worse with more data; they get worse and more certain. Modern dashboards, by surfacing forty metrics where five would do, push organizations into precisely this zone: high confidence, mediocre judgment.
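The decoupling is easy to see on synthetic data. The sketch below is a toy illustration, not a reconstruction of Slovic’s study: it invents a dataset in which only five of forty features matter, fits an ordinary logistic regression on progressively more of them, and prints held-out accuracy next to a crude confidence measure (how far predictions sit from a coin flip). The sample sizes, noise levels, and feature counts are arbitrary assumptions.

```python
# Toy simulation (illustrative assumptions throughout, not Slovic's data):
# accuracy vs. reported confidence as more features are added.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, total_features, informative = 2000, 40, 5

X = rng.normal(size=(n, total_features))
# Only the first five columns actually drive the outcome; the rest are noise.
true_weights = rng.normal(1.0, 0.3, size=informative)
logits = X[:, :informative] @ true_weights
y = (logits + rng.normal(scale=2.0, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (5, 10, 20, 40):
    model = LogisticRegression(max_iter=1000).fit(X_train[:, :k], y_train)
    proba = model.predict_proba(X_test[:, :k])[:, 1]
    accuracy = ((proba > 0.5).astype(int) == y_test).mean()
    confidence = np.abs(proba - 0.5).mean() * 2  # 0 = coin flip, 1 = certainty
    print(f"{k:>2} features: accuracy={accuracy:.3f}  mean confidence={confidence:.3f}")
```

The exact figures vary with the random seed; the shape, not the values, is the point: accuracy stops improving once the informative inputs are in, and whatever movement the confidence number shows after that comes from fitting noise.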
False precision is the tax on big data
Granular data feels rigorous because it carries decimal places. The decimal places are often manufactured. A revenue forecast accurate to the dollar is a hallucination wearing a tie. Nate Silver, Philip Tetlock, and others who’ve spent careers studying forecasting consistently note that prediction skill at fine granularity is often statistically indistinguishable from chance, even when the precision suggests otherwise. Large language models have made this worse, not better, because they generate plausible numbers without underlying epistemic warrant. The organizational problem is that decisions made on false-precision inputs are very hard to overturn: the data has more rhetorical weight than the seasoned judgment of someone saying “this seems off.” The dashboard wins arguments it shouldn’t.
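One way to make the tax visible is to rebuild a to-the-dollar forecast from the uncertainty of its own inputs. The sketch below uses made-up numbers (the customer count, conversion rate, and contract value are assumptions, not data from any real forecast) and simply propagates their ranges through a Monte Carlo simulation.

```python
# Toy illustration of false precision (assumed numbers, not from any real forecast):
# a "to the dollar" revenue figure built from inputs only known to ~10-15%.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Each driver is quoted as a single number but is really a range.
customers = rng.normal(12_000, 1_500, n_sims)       # qualified leads
conversion = rng.normal(0.031, 0.004, n_sims)       # lead-to-deal rate
avg_contract = rng.normal(8_400.0, 900.0, n_sims)   # dollars per deal

revenue = customers * conversion * avg_contract

point_estimate = 12_000 * 0.031 * 8_400.0
low, high = np.percentile(revenue, [10, 90])

print(f"dashboard number: ${point_estimate:,.2f}")
print(f"80% of simulations fall between ${low:,.0f} and ${high:,.0f}")
```

With these assumed inputs, the dashboard shows cents while the simulated range is on the order of a million dollars wide. The quoted precision is a property of the formatting, not the forecast.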
What actually correlates with good decisions
Empirically, the best-performing decision processes share a few features that have nothing to do with data volume. They involve small numbers of high-quality inputs rather than many low-quality ones. They include explicit handling of uncertainty (confidence intervals, base rates, prior probabilities) rather than point estimates. They preserve the ability to be surprised, which usually means a human or small group reviewing the output and asking whether it matches reality. And they shorten feedback loops so decisions get tested against outcomes quickly, allowing miscalibration to be detected. None of this requires more data. Most of it requires better questions and faster cycles. The Apollo program, the Manhattan Project, and most successful product companies were not characterized by abundant data; they were characterized by crisp problem definitions and tight iteration.
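As a concrete (and entirely assumed) example of what explicit handling of uncertainty can look like: treat the base rate as a prior, fold in the new evidence, and report an interval instead of a point estimate. The sketch below does this with a standard Beta-Binomial update for a conversion rate; the base rate, prior strength, and observed counts are illustrative, not drawn from any real dataset.

```python
# Minimal sketch (assumed scenario): a decision input expressed as a base rate
# plus an interval rather than a point estimate, via a Beta-Binomial update.
from scipy import stats

base_rate = 0.04        # historical conversion rate (the base rate / prior)
prior_strength = 200    # how many past observations the prior is "worth"
alpha0 = base_rate * prior_strength
beta0 = (1 - base_rate) * prior_strength

conversions, trials = 12, 150   # this quarter's small, noisy sample

posterior = stats.beta(alpha0 + conversions, beta0 + trials - conversions)
low, high = posterior.ppf([0.05, 0.95])

print(f"naive point estimate:  {conversions / trials:.3f}")
print(f"posterior mean:        {posterior.mean():.3f}")
print(f"90% credible interval: [{low:.3f}, {high:.3f}]")
```

With these made-up numbers, the naive estimate doubles the base rate, while the interval still straddles much of the gap between the two. The point is not the particular prior; it is that the decision input carries its own uncertainty instead of hiding it.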
Bottom line
The phrase “data-driven” became a virtue signal somewhere along the way, and it’s now hard to argue against without sounding like you don’t believe in evidence. But there’s a real difference between using data to inform judgment and outsourcing judgment to data. The first is what good decisions look like. The second is what dashboards quietly substitute when no one is watching.