The Supreme Court's Voter Turnout Claim Fails the "Compared to What?" Test


The Headline Number

"Black voters now participate in elections at similar rates as the rest of the electorate, even turning out at higher rates than white voters in two of the five most recent Presidential elections nationwide and in Louisiana." — Justice Samuel Alito, majority opinion, Louisiana v. Callais (2026)

The Audit

The number didn't come from Alito's clerks. It came almost verbatim from a Department of Justice amicus brief, which means the first question isn't whether Alito got it right — it's whether the DoJ's underlying methodology was sound.

It wasn't.

The Guardian's analysis found that the DoJ calculated Black and white voter turnout in Louisiana as a share of each group's total population over 18 — not as a share of the citizen voting-age population (CVAP) or the voting-eligible population (VEP). The latter two measures are the standard tools for this kind of analysis because they exclude non-citizens, people with felony convictions, and others legally barred from voting.

The denominator choice is everything here. When you include people who cannot legally vote in your denominator, you mechanically suppress the turnout rate for any group with a higher share of ineligible residents. Louisiana's Black population has a higher rate of felony disenfranchisement than its white population — a direct legacy of the discriminatory criminal justice policies the Voting Rights Act was designed to address. Using the raw over-18 population as the base doesn't just introduce noise. It bakes in a structural bias that flatters the conclusion Alito needed.
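To see the mechanics, here is a minimal sketch with hypothetical numbers (not Louisiana's actual figures). Two groups turn out at nearly identical rates among their eligible voters, but the group with more legally ineligible residents looks several points worse when the raw over-18 population is the base:

```python
# Hypothetical numbers for illustration only -- not Louisiana's actual figures.
# Group B has a larger share of residents who are legally barred from voting
# (e.g. due to felony disenfranchisement), so its CVAP is smaller.

def turnout(votes, base):
    """Turnout rate as a share of the chosen base population."""
    return votes / base

groups = {
    # name: (votes cast, population 18+, citizen voting-age population)
    "A": (600_000, 1_000_000, 950_000),
    "B": (540_000, 1_000_000, 850_000),
}

for name, (votes, over18, cvap) in groups.items():
    print(f"group {name}: over-18 base = {turnout(votes, over18):.1%}, "
          f"CVAP base = {turnout(votes, cvap):.1%}")

# Over-18 base:  A = 60.0%, B = 54.0%  -> A appears well ahead.
# CVAP base:     A = 63.2%, B = 63.5%  -> B actually edges out A.
# Identical vote counts; the comparison flips on the denominator alone.
```

The vote counts never change. Only the choice of base does, and that choice alone reverses which group "wins" the comparison.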

The verdict flips when you use the right denominator. The Guardian's reanalysis using CVAP found that Black voter turnout in Louisiana exceeded white voter turnout in exactly one of the five most recent presidential elections — 2012 — not two. The entire "two elections" claim, the one Alito used to argue that the discrimination requiring the VRA no longer exists, dissolves under standard methodology.

This is the denominator problem in its most consequential form. The number isn't fabricated. The underlying data is real. But the framing — choosing a denominator that produces the desired comparison — is the move. It's the same trick as the Biden-era immigration chart that counted CBP "encounters" (which can log the same person multiple times) as individual people, then expressed the result as a percentage of each country's total population. Real data. Misleading denominator. Viral conclusion.

What makes the Alito case more serious than a viral chart is the institutional weight behind it. A Supreme Court majority opinion is not a tweet. The DoJ brief that supplied the methodology carried the authority of the federal government. And the claim wasn't incidental — it was load-bearing. Alito used it to establish that "vast social change" had rendered Section 2 protections obsolete.

Verdict: Misleading. The claim is technically defensible only under a non-standard methodology that experts advise against for precisely this kind of analysis. Under the standard approach, the factual predicate for Alito's argument weakens materially. That's not a rounding error. That's a denominator chosen to support a conclusion.


By the Numbers

95% → 25%. The "95% of AI pilots are failing" stat — covered in the May 4 issue — keeps circulating. The actual figure from the underlying report: among companies that did pilot a custom AI tool, roughly 25% reached production deployment. The 95% figure counted companies that never ran a pilot as "failures." Per 80,000 Hours' analysis, that's like calling 95% of Tinder users failed marriages because 80% never went on a date.
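A toy calculation shows how the two framings diverge. The counts below are hypothetical, chosen only to reproduce the headline percentages:

```python
# Hypothetical counts for illustration -- not the report's actual sample,
# just numbers that reproduce the 95% / 25% headline figures.
companies = 1000
ran_pilot = 200          # companies that actually piloted a custom AI tool
reached_production = 50  # pilots that reached production deployment

# Misleading framing: treat every company without a production deployment
# as a "failed pilot" -- including the 800 that never ran one.
misleading_failure_rate = 1 - reached_production / companies

# Standard framing: success among companies that actually ran a pilot.
pilot_success_rate = reached_production / ran_pilot

print(f"{misleading_failure_rate:.0%} 'failure' vs "
      f"{pilot_success_rate:.0%} success among actual pilots")
```

Same dataset, same arithmetic; the only difference is whether the denominator includes companies that never attempted a pilot at all.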

"8% of Nicaragua's population." A viral chart claimed 8% of Nicaragua's population entered the U.S. illegally under Biden. Snopes traced it to CBP "encounters" data — which counts interactions, not individuals, meaning one person crossing multiple times counts multiple times. The chart's creator later deleted it and acknowledged an error. The chart kept circulating anyway.

Community notes work — sometimes. A Nature Communications study found community-based fact-checking on X reduces the spread of flagged misleading posts. The available summary doesn't specify effect size or sample construction, so treat the directional finding as signal, not settled science. The methodology matters — and the abstract doesn't show it.


The through-line this week: bad denominators don't need to be invented. They just need to be chosen carefully. The data is real. The math is real. The conclusion is the part that requires scrutiny.