Can You Trust the Test? The Science of Diagnosing Trial Fairness

In science, medicine, and law, a basic principle holds true: a good test doesn’t always say “yes.” For a test to be useful, it must sometimes confirm a claim and sometimes reject it. A thermometer that always reads 98.6°F isn’t telling you who has a fever. A smoke detector that always goes off isn’t telling you when there’s a real fire. Likewise, a diagnostic test that always signals “unfair trial” tells us nothing meaningful about whether a trial was fair or not.

This logic is one reason DNA testing has become so powerful in the courtroom. We trust DNA tests to convict the guilty because we also trust them to exonerate the innocent. They’re not designed to always point in one direction—they’re designed to distinguish. And they’ve proven themselves over and over again. Dozens of people have been released from prison thanks to DNA evidence, including individuals who served decades for crimes they didn’t commit. These exonerations have only been possible because the same DNA technology was previously used to support convictions. Its power lies in its discrimination—its ability to say “yes” when the evidence warrants it, and “no” when it does not.

This same principle guides the work of Fair Trial Analysis.

We are developing tools to diagnose whether a trial was fair or unfair. That’s not an easy question, but it’s a critical one—especially in the post-conviction process. If we’re going to claim a trial was unfair, we need a way to back that up scientifically. And for our analysis to be taken seriously, it has to get it right most of the time.

Type I and Type II Errors

All diagnostic tests face the possibility of getting it wrong. Statisticians classify these mistakes into two categories:

  • Type I error (false positive): Saying a trial was unfair when it was actually fair. This can lead to unnecessary reversals and a loss of finality.
  • Type II error (false negative): Saying a trial was fair when it was actually unfair. This is the more dangerous error in post-conviction review, because it allows an injustice to stand.

We can visualize these possibilities with a simple classification table:

                          Fair Trial                        Unfair Trial
  Test Says “Fair”        ✅ Correct                        ❌ Type II Error (missed injustice)
  Test Says “Unfair”      ❌ Type I Error (false alarm)     ✅ Correct
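
To make the table concrete, here is a minimal sketch in Python that tallies the four outcomes for a handful of trials. The labels and verdicts are entirely hypothetical, invented only to illustrate the bookkeeping; they do not come from any real case.

    # Hypothetical ground truth and test verdicts for six trials.
    # (All data invented for illustration; not drawn from real cases.)
    truth    = ["fair", "unfair", "fair", "fair", "unfair", "unfair"]
    verdicts = ["fair", "unfair", "unfair", "fair", "fair", "unfair"]

    counts = {
        "correct_fair": 0,    # test says "fair" and the trial was fair
        "correct_unfair": 0,  # test says "unfair" and the trial was unfair
        "type_i": 0,          # false alarm: test says "unfair", trial was fair
        "type_ii": 0,         # missed injustice: test says "fair", trial was unfair
    }

    for actual, verdict in zip(truth, verdicts):
        if actual == "fair" and verdict == "fair":
            counts["correct_fair"] += 1
        elif actual == "unfair" and verdict == "unfair":
            counts["correct_unfair"] += 1
        elif actual == "fair" and verdict == "unfair":
            counts["type_i"] += 1
        else:
            counts["type_ii"] += 1

    print(counts)
    # {'correct_fair': 2, 'correct_unfair': 2, 'type_i': 1, 'type_ii': 1}

Each trial lands in exactly one of the four cells, which is all the table above is saying.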

Why We Can’t Just Assume All Trials Are Unfair

One way to eliminate Type II errors would be to declare every trial unfair. That might sound appealing if your goal is to avoid overlooking injustice. But this approach maximizes Type I errors: every genuinely fair trial is wrongly rejected, as the sketch below illustrates. If we can’t identify which trials were actually fair, then every conviction is in doubt, cases are retried indefinitely, and the justice system loses its ability to distinguish valid outcomes from flawed ones.
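
A second sketch, again with invented labels, shows what the “declare everything unfair” shortcut buys and what it costs: no missed injustices, but every fair trial condemned.

    # A degenerate "test" that calls every trial unfair (hypothetical data).
    truth = ["fair", "fair", "fair", "unfair"]   # invented ground truth
    verdicts = ["unfair"] * len(truth)           # this test never says "fair"

    type_ii = sum(1 for t, v in zip(truth, verdicts) if t == "unfair" and v == "fair")
    type_i  = sum(1 for t, v in zip(truth, verdicts) if t == "fair" and v == "unfair")

    print(type_ii)  # 0 -> no missed injustices, by construction
    print(type_i)   # 3 -> every fair trial is wrongly rejected

The zero in the first line is not evidence of a good method; it is the automatic by-product of a test that cannot say “fair” at all.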

Instead, we need a diagnostic approach that gets both things right—recognizing true injustices and affirming truly fair trials.

What Makes a Good Diagnostic Test?

A good diagnostic tool doesn’t just flag problems. It distinguishes clearly and consistently. It has:

  • Low Type I error: It rarely flags fair trials as unfair.
  • Low Type II error: It rarely misses genuinely unfair trials. (Both rates are made concrete in the sketch after this list.)
  • Reproducibility: Others can use the method and reach the same result.
  • Credibility: Courts and the public can trust the outcome.
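
The first two criteria can be stated as plain numbers. The sketch below, using counts invented solely for illustration, shows the two rates a method has to keep low: the share of fair trials it wrongly flags and the share of unfair trials it misses.

    # Hypothetical results from checking a method against trials whose
    # fairness is already known (all counts invented for illustration).
    fair_trials_flagged_unfair = 2    # Type I errors (false alarms)
    fair_trials_total = 40
    unfair_trials_called_fair = 3     # Type II errors (missed injustices)
    unfair_trials_total = 30

    type_i_rate = fair_trials_flagged_unfair / fair_trials_total     # 0.05
    type_ii_rate = unfair_trials_called_fair / unfair_trials_total   # 0.10

    print(f"Type I rate:  {type_i_rate:.0%}")    # 5% of fair trials wrongly rejected
    print(f"Type II rate: {type_ii_rate:.0%}")   # 10% of unfair trials missed

Reproducibility, in this framing, simply means that anyone who reruns the same method on the same record gets the same two numbers.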

At Fair Trial Analysis, our mission is to build these tools. We apply transparent, replicable methods—grounded in social science and statistics—to evaluate trial processes. Like DNA testing, our goal is not to replace the courts, but to give them better information: data that helps identify when a person’s right to a fair trial was violated.

We are committed to building tools that distinguish, not just accuse. Our methods are grounded in statistics, open to scrutiny, and designed to support justice, not undermine it. Like DNA evidence, our approach is valuable because it can cut both ways: it can reveal serious trial errors, and it can affirm when the process was sound. That balance is essential, because in the pursuit of justice, real change depends not on louder claims but on better evidence.
