On the contrary, PolitiFact's founding editor, Bill Adair, said decisions about the "Truth-O-Meter" ratings are "entirely subjective." And current editor Angie Drobnic Holan in 2014 explained the difference between the "False" and "Pants on Fire" ratings by saying "the line between 'False' and 'Pants on Fire' is just, you know, sometimes we decide one way and sometimes decide the other."
Given that the difference between "False" and "Pants on Fire" rests on subjective grounds, we have conducted ongoing research into the chance that a claim PolitiFact considers false will receive the "Pants on Fire" designation.
Our research suggests at least two things.
First, PolitiFact National is biased against Republicans.
Second, the statement selection process renders "Truth-O-Meter" ratings an entirely unreliable guide to candidate truthfulness even assuming the subjective ratings are objectively accurate(!).
Without further ado, an updated chart for both political parties showing the percentage of false ratings given the "Pants on Fire" rating:
We'll address one potential criticism right off the bat.
We should expect a higher percentage for the party that lies more!
We would agree with that criticism if the PolitiFact data stemmed from objective considerations in the fact checks. We have no evidence to support that and considerable evidence to counter it (see above). All the evidence suggests the "Pants on Fire" rating is a purely subjective judgment.
Subjective judgment is incompatible with neutrality.
A Review of the Findings

False statements from Democrats were rated "Pants on Fire" just 9.09 percent of the time in 2019, tying the record low set in 2009. The Republican percentage stayed very close to its historic baseline, which cumulatively stands at 27.21 percent. The long-term average for Democrats dropped slightly to 17.41 percent. Over PolitiFact National's entire history, Republicans are about 60 percent more likely to receive the subjective "Pants on Fire" rating.
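The "about 60 percent more likely" figure follows directly from the two cumulative percentages above. A minimal sketch of that arithmetic, using the numbers reported in this post:

```python
# Cumulative share of false-rated claims that drew "Pants on Fire",
# as reported in the post (percent of each party's false ratings).
rep_pof_share = 27.21 / 100  # Republicans
dem_pof_share = 17.41 / 100  # Democrats

# How much more often a false Republican claim draws "Pants on Fire"
# than a false Democratic claim, expressed as a relative increase.
relative_increase = rep_pof_share / dem_pof_share - 1
print(f"{relative_increase:.0%}")  # roughly 56%, rounded in the post to "about 60 percent"
```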
PolitiFact's wildly unscientific selection process

The Trump presidency ought to end permanently any supposition that PolitiFact's story selection process approximates random (scientific) representative selection in any way.
Of the 14 "Pants on Fire" ratings given to Republicans in our 2019 data, 13 went to President Trump. The other one went to Mr. Trump's son-in-law, Jared Kushner.
Of the 39 "False" ratings given to Republicans in our 2019 data, 29 went to Mr. Trump.
Combined, then, 42 of the 53 false "Truth-O-Meter" ratings Republicans received went to Mr. Trump.
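The tally above is simple addition over the counts already given. A quick check, using the post's own 2019 figures:

```python
# 2019 counts from the post: ratings given to Republicans.
pof_total, pof_trump = 14, 13        # "Pants on Fire" ratings; 13 to Trump, 1 to Kushner
false_total, false_trump = 39, 29    # "False" ratings; 29 to Trump

combined_total = pof_total + false_total    # all false-side ratings
combined_trump = pof_trump + false_trump    # those that went to Mr. Trump
print(f"{combined_trump} of {combined_total} went to Mr. Trump")
```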
For comparison, in 2011 PolitiFact rated 88 Republican claims false, none of them from Mr. Trump. From 88 down to 10? Is the Republican Party, aside from Trump, that much more honest with the passage of time? Nonsense. That hypothesis is completely implausible on its face.
The explanation is painfully simple: As PolitiFact admits, its editors choose the claims PolitiFact rates. They use editorial judgment, not scientific selection, to choose stories. The editors are, in their own words, "not social scientists."
If anything counts as proof that the supposedly unbiased fact checkers at PolitiFact are deliberately pulling the wool over the eyes of their readers, it is PolitiFact's unrepentant use of its aggregated ratings as voter guides.
They have to know better.
They do it anyway. And the practice was part of PolitiFact's aim from the start, which helps explain why it won't go away.