The supplement industry generates roughly $50 billion in annual US sales on a foundation that should embarrass it: most products tested in well-designed randomized trials show no meaningful effect on the outcomes consumers buy them for. Yet customers report feeling better, repurchase enthusiastically, and tell friends. The machinery that produces this disconnect is well-mapped in psychology and statistics, and it explains why bad products survive in a market that nominally rewards results.
The pills aren’t lying. The feedback system is.
Why “I feel better” isn’t evidence
Three forces conspire here. First, regression to the mean: people start supplements when they feel worst, and most symptoms drift back toward baseline regardless of treatment. The supplement gets credit for the recovery the body would have produced anyway. Second, placebo effects are real and substantial, especially for symptoms with subjective components: fatigue, mood, sleep quality, joint pain, focus. Trials routinely find placebo response rates of 30% or more, and the active product has to beat that, not zero. Third, confirmation bias filters memory: good days after taking the supplement get encoded as confirming evidence, bad days get attributed to other causes. Add the sunk-cost commitment of having paid for a 90-day bottle, and the perceptual machinery is fully loaded toward feeling improvement whether or not improvement exists.
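The regression-to-the-mean trap is easy to see in a simulation (a hypothetical sketch, not data from any trial; all numbers here are made up): model daily symptom severity as a stable baseline plus random noise, have each simulated person "start the supplement" on their worst-feeling day, and the average person still appears to improve afterward, even though the pill does literally nothing.

```python
import random

random.seed(0)

def simulate(n_people=10_000, baseline=5.0, noise=2.0, days=30):
    """Severity = fixed baseline + daily noise; the 'supplement' has zero effect."""
    gains = []
    for _ in range(n_people):
        scores = [baseline + random.gauss(0, noise) for _ in range(days)]
        # People start the pill when they feel worst (worst day in the
        # first half, so there are days left to observe afterward).
        start = max(range(days // 2), key=lambda d: scores[d])
        after = scores[start + 1:]
        # "Improvement" as a consumer would measure it: starting-day
        # severity minus average severity while "on" the pill.
        gains.append(scores[start] - sum(after) / len(after))
    return sum(gains) / len(gains)

avg = simulate()
print(f"Average apparent improvement from a do-nothing pill: {avg:.2f} points")
```

With these made-up parameters the apparent improvement is several points on a ten-point scale, purely because "worst day" is partly noise that will not repeat. Set `noise=0.0` and the artifact vanishes.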
What the trials actually show
The Cochrane Collaboration and large independent meta-analyses have evaluated most popular supplement categories, and the results are humbling. Multivitamins do not reduce cardiovascular events, cancer, or all-cause mortality in well-nourished populations. Vitamin C does not prevent or shorten common colds in most adults. Glucosamine and chondroitin perform comparably to placebo for most knee osteoarthritis outcomes. Most antioxidants show no benefit and some show harm at high doses. Omega-3 supplementation has mixed evidence with shrinking effect sizes as trials get larger. Specific exceptions exist (vitamin D and B12 for documented deficiency, folate during pregnancy, iron for diagnosed anemia), but these are deficiency corrections, not the broad-spectrum optimization the marketing promises. Branded “proprietary blends” almost never come with trial data on the actual blend.
The regulatory frame is part of the problem
The 1994 Dietary Supplement Health and Education Act treats supplements as foods rather than drugs, which means manufacturers don’t have to prove safety or efficacy before sale. The FDA can act against products only after harm is documented, and even then enforcement is slow. Structure-function claims (“supports immune health,” “promotes joint comfort”) are permitted without trial evidence as long as a small disclaimer appears. Third-party testing programs like USP and NSF verify that what’s on the label is in the bottle, but they don’t validate that what’s in the bottle does anything. The consumer is left to evaluate evidence in a market designed to make evaluation hard.
The takeaway
If you take a supplement, periodically stop it for two months and see if anything genuinely changes. If your blood work shows a deficiency, supplement that specific deficiency under medical guidance. Otherwise, the money is mostly buying placebo, and there are cheaper ways to feel better.
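The stop-and-observe test can be made slightly more rigorous as an informal n-of-1 comparison (a sketch with invented numbers, not medical advice): log a daily symptom score during "on" and "off" periods, then compare the period averages against ordinary day-to-day variability, since a difference smaller than the daily noise is exactly the kind of signal the biases above manufacture for free.

```python
import statistics

def compare_periods(on_scores, off_scores):
    """Mean symptom difference (off minus on) and a crude effect size:
    the difference scaled by pooled day-to-day variability."""
    diff = statistics.mean(off_scores) - statistics.mean(on_scores)
    pooled_sd = statistics.pstdev(on_scores + off_scores)
    return diff, (diff / pooled_sd if pooled_sd else 0.0)

# Hypothetical daily fatigue scores (0 = none, 10 = worst)
on_supplement = [4, 5, 3, 4, 5, 4, 3, 4]
off_supplement = [4, 4, 5, 3, 4, 5, 4, 4]

diff, effect = compare_periods(on_supplement, off_supplement)
print(f"Mean difference: {diff:+.2f} points, effect size: {effect:+.2f}")
```

In this invented log the difference is a small fraction of the daily noise, which is what "no real effect" looks like; a supplement earning its price should show a gap that dwarfs the scatter.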