The conventional wisdom on the SAT and ACT calcified during the pandemic: the tests are biased, score gaps reflect privilege, and going test-optional is the equity-minded move. It’s a tidy narrative, but large parts of it don’t survive contact with the data. Recent research from Opportunity Insights, MIT, Dartmouth, and Yale all points the same direction: standardized tests predict college performance better than grades, especially for disadvantaged students, and removing them often makes admissions less fair, not more.
The pushback against these findings has been emotional rather than empirical. The numbers are what they are.
What the data actually shows
Opportunity Insights’ analysis of admitted students at highly selective colleges found SAT and ACT scores predicted academic success, including grades, graduation, and post-college earnings, more strongly than high school GPA. The advantage of test scores was largest among students from disadvantaged backgrounds, the very group test-optional policies were supposedly designed to help. MIT’s reinstatement of testing in 2022 explicitly cited evidence that going test-optional made it harder, not easier, to identify capable applicants from under-resourced high schools. Dartmouth and Yale followed in 2024 with similar reasoning. The mechanism is straightforward: a strong SAT score from an unknown rural school carries information that a 3.9 GPA from that same school can’t, because grade inflation and curriculum quality vary wildly across high schools while a test score is a common yardstick.
Why other admissions inputs are worse
Recommendations, essays, extracurriculars, and interviews are all dramatically more correlated with family income than test scores are. Wealthy applicants hire essay coaches, fund summer research at universities, and attend high schools with counselors who write polished letters and shape narrative arcs. Sociologist Jerome Karabel and others have documented how holistic admissions emerged historically as a tool to exclude high-scoring Jewish applicants from elite colleges in the 1920s, precisely because pure tests would have admitted them in numbers institutions found inconvenient. Holistic review can be valuable, but its history and its measurable income correlations make it hard to defend as the “fairer” alternative to a standardized test.
The score gap question
Test score gaps by race and income are real, and they don’t reflect test bias in the technical sense. They reflect structural inequalities in K-12 education that the test then accurately measures. Eliminating the test doesn’t eliminate the inequality; it just hides it from the admissions decision while the same inequality continues to shape every other input, including grades, course rigor, and access to AP and IB programs. Targeted investments in K-12 quality, free test prep, and contextual interpretation of scores against school characteristics address the underlying problem. Killing the messenger does not.
The bottom line
Standardized tests have real flaws, and selective colleges should weight them in context, alongside a candidate’s school environment and resources. But the framing that they’re the bias-laden part of admissions is backwards. They’re the most consistent, hardest-to-game, lowest-cost-to-prepare-for measure in the file. The components that look fairer often fail when you actually run the regressions. The evidence has shifted. The conversation should too.