Television has done forensic science a generational disservice. Decades of crime dramas have trained jurors and prosecutors alike to treat forensic testimony as essentially infallible. The reality, documented in a 2009 National Academy of Sciences report and a 2016 PCAST review, is that several forensic disciplines have weak scientific foundations, and even the strong ones can produce wrong answers when handled badly.
The disciplines with serious problems
Bite-mark analysis is the most embarrassing example. The PCAST report found “no scientific basis” for matching a bite mark to a specific person. People have spent decades in prison on bite-mark testimony, and the Innocence Project has documented multiple exonerations where bite-mark evidence was the central piece of the case. The discipline persists in some courtrooms anyway because expert witnesses still testify and judges still admit them.
Microscopic hair comparison fared similarly. The FBI itself acknowledged in 2015 that examiners had given erroneous testimony in 96% of trial cases reviewed at that point. Tool-mark analysis, footprint matching, and aspects of bullet-lead comparison have all faced similar critiques. None of this means the underlying observations are useless, but the leap from "this could be consistent with" to "this is a match" was rarely justified by the science.
Even the strong disciplines have failure modes
DNA analysis is the gold standard, and rightly so. But DNA evidence isn't immune to problems. Contamination at the crime scene, in collection, or in the lab can produce false matches. Touch DNA, the small amounts of biological material left by incidental contact, can place innocent people at scenes they were never near. Mixture interpretation, where multiple people's DNA appears together, is mathematically harder than juries are usually told.
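To see why mixture interpretation gets hard, consider a toy enumeration. The sketch below (the function name is illustrative, not from any forensic software) counts how many distinct contributor-genotype combinations can explain the alleles observed at a single locus, assuming exactly two contributors and ignoring real-world complications like allele dropout, stutter, and uncertainty about the number of contributors, all of which add further ambiguity.

```python
from itertools import combinations_with_replacement

def consistent_combos(observed_alleles, n_contributors=2):
    """Enumerate sets of contributor genotypes whose alleles together
    account for exactly the observed alleles at one locus."""
    # Every possible genotype (pair of alleles) drawn from what was observed.
    genotypes = list(combinations_with_replacement(observed_alleles, 2))
    combos = []
    for combo in combinations_with_replacement(genotypes, n_contributors):
        # Keep the combination if its pooled alleles match the observation.
        if {a for g in combo for a in g} == set(observed_alleles):
            combos.append(combo)
    return combos

# A clean four-allele mixture at one locus, two contributors:
print(len(consistent_combos(["a", "b", "c", "d"])))  # 3 distinct pairings

# Allow a possible third contributor and the ambiguity balloons:
print(len(consistent_combos(["a", "b", "c", "d"], n_contributors=3)))
```

Even in this idealized case the evidence is consistent with several different pairs of people, the ambiguity multiplies across the roughly twenty loci in a profile, and it grows sharply once the number of contributors is itself uncertain.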
Fingerprint analysis, long treated as definitive, has documented error rates. The FBI's misidentification of Brandon Mayfield in the 2004 Madrid train bombing investigation is the canonical example: three examiners independently "matched" his print to one from the scene. He was nowhere near Spain. Fingerprints are useful evidence, but presenting a fingerprint comparison as a 100% match is sloppy science dressed up as certainty.
The courtroom problem
Courts have been slow to discipline weak forensics. Daubert and Frye standards, which govern expert testimony admissibility, are applied unevenly. Defense attorneys often lack resources to retain rebuttal experts. Prosecutors lean on credentialed examiners whose testimony juries find authoritative. Judges, mostly not scientists, defer to convention.
The result is a system where outdated forensic methods continue to influence verdicts, and where exonerations, when they happen, typically take decades. The Innocence Project has documented hundreds of cases involving flawed forensic testimony. Each one represents a system that treated certainty as available when the underlying science offered, at best, probability with wide confidence intervals.
The takeaway
Forensic evidence isn’t worthless. DNA, when handled cleanly, is genuinely powerful. But the broader category of “forensic evidence” includes disciplines ranging from rigorous to discredited, and the courtroom rarely communicates that range well. Defense lawyers, journalists, and citizens evaluating cases are right to ask which forensic discipline is in play, what its error rate is, and what the testimony is actually claiming versus what it sounds like it’s claiming.