The five-star rating system trained an entire generation to outsource purchasing decisions to crowdsourced wisdom. In aggregate, that wisdom is sometimes useful and sometimes weaponized, and the gap between the two has widened as the incentives to manipulate reviews have grown. Reading reviews well requires treating them as evidence rather than verdicts, and recognizing that the platforms hosting them have their own incentives that don’t always align with yours.
Selection bias quietly distorts everything
People review products at the extreme ends of their experience. The customer who got exactly what they expected often doesn’t bother. The delighted customer and the furious customer both write paragraphs. The middle, where most actual experiences live, is underrepresented in the data. This produces a bimodal distribution that gets averaged into a misleading single number. A 4.2-star product might be 75 percent five-star reviews from genuinely happy users plus 15 percent one-star reviews from people whose package was lost in shipping, with very little signal about typical performance. Reading the three-star reviews specifically, the ones from people who tried to be balanced, often reveals more useful information than the average. The platforms know about this distortion and rarely surface it, because the average sells better than the truth.
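The arithmetic of that collapse is worth seeing once. A minimal sketch, with hypothetical proportions chosen so the bimodal split lands exactly on 4.2:

```python
# How a bimodal rating distribution collapses into one misleading number.
# The proportions below are illustrative, not real data: 75% five-star,
# 15% one-star, and only 10% in the middle still averages to 4.2.
def weighted_mean(distribution):
    """distribution maps star value -> fraction of all reviews."""
    return sum(stars * frac for stars, frac in distribution.items())

bimodal = {5: 0.75, 1: 0.15, 3: 0.10}
print(round(weighted_mean(bimodal), 2))  # 4.2
```

The single number hides that a sixth of buyers had a one-star experience, which is exactly the signal the average was supposed to carry.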
Fake reviews are an industrialized economy
Review fraud is now a multi-million-dollar global business. Networks of paid reviewers, incentivized refunds, and AI-generated text production have made review manipulation cheap and scalable. The FTC has fined a handful of companies, and Amazon has filed lawsuits against fake-review brokers, but the supply continues to outpace enforcement. Detection tools like Fakespot and ReviewMeta exist to flag suspicious patterns, such as sudden bursts of reviews, identical phrasing across products, and reviewers who only review one brand, and they catch a meaningful share of obvious fraud. They miss subtler manipulation, particularly when established sellers seed early reviews with friends and family. The takeaway isn’t that reviews are useless; it’s that high ratings on a brand-new product with no track record deserve heavy skepticism.
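Two of those patterns are simple enough to sketch. This is a toy heuristic in the spirit of what detection tools check, not any real tool’s method; the field names (`reviewer`, `brand`, `text`) are hypothetical:

```python
# Toy fraud heuristics: flag duplicate phrasing across reviewers and
# reviewers whose whole history covers a single brand. Both are crude --
# a single-brand history is common for casual reviewers -- so real tools
# combine many weak signals rather than trusting any one flag.
from collections import Counter

def flag_suspicious(reviews):
    """reviews: list of dicts with 'reviewer', 'brand', 'text' keys.
    Returns a list of (reviewer, [reasons]) pairs."""
    brands_by_reviewer = {}
    for r in reviews:
        brands_by_reviewer.setdefault(r["reviewer"], set()).add(r["brand"])
    single_brand = {who for who, b in brands_by_reviewer.items() if len(b) == 1}

    text_counts = Counter(r["text"] for r in reviews)
    duplicated = {t for t, n in text_counts.items() if n > 1}

    flags = []
    for r in reviews:
        reasons = []
        if r["reviewer"] in single_brand:
            reasons.append("single-brand reviewer")
        if r["text"] in duplicated:
            reasons.append("duplicate phrasing")
        if reasons:
            flags.append((r["reviewer"], reasons))
    return flags
```

The burst-of-reviews pattern needs timestamps and is sketched separately below the bottom-line checklist.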
Platforms are not neutral
Yelp, Google, Amazon, and TripAdvisor each have their own algorithms determining which reviews appear, which get filtered, and which get prioritized. These algorithms aren’t transparent and don’t optimize for consumer truth; they optimize for platform engagement, advertiser relationships, and legal liability. Yelp has been accused for years of disproportionately filtering reviews from non-advertisers; Yelp denies the connection and the academic evidence is mixed. Amazon’s “verified purchase” filter helps but doesn’t eliminate manipulation because verified purchasers can still be incentivized through outside-platform refund schemes. The reviews you see are a curated subset of the reviews submitted, and the curation method is a black box.
Bottom line
Reviews are still useful, especially in aggregate and especially when you read the middle of the distribution rather than the average. A handful of approaches help: filter for verified purchases, sort by recent rather than highest-rated, look for specific operational details that fake reviewers don’t bother including, and discount any product whose reviews all arrived in a one-month window. Treat the star rating as one signal among several, not as a verdict, and you’ll lose less money to review manipulation.
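The checklist above can be sketched as code. This is a minimal reader-side filter under assumed data: hypothetical `verified` and `date` fields, and an arbitrary 90 percent threshold for the one-month-window test:

```python
# Reader-side screening sketch: keep verified purchases, sort newest first,
# and flag products whose reviews cluster into a single 30-day window.
# Fields ('verified', 'date') and the 0.9 threshold are illustrative.
from datetime import date, timedelta

def burst_share(review_dates, window_days=30):
    """Largest fraction of reviews falling inside any single window."""
    ds = sorted(review_dates)
    best = 0
    for i, start in enumerate(ds):
        end = start + timedelta(days=window_days)
        best = max(best, sum(1 for d in ds[i:] if d <= end))
    return best / len(ds)

def screen(reviews):
    """Returns (verified reviews newest-first, bursty flag)."""
    verified = [r for r in reviews if r["verified"]]
    verified.sort(key=lambda r: r["date"], reverse=True)
    if not verified:
        return [], False
    bursty = burst_share([r["date"] for r in verified]) > 0.9
    return verified, bursty
```

A `bursty` result doesn’t prove fraud; an organic launch spike looks the same. It is one more signal to weigh, which is the whole point of the section above.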