2. We follow the facts, not fact-check count formulas. We let the factual chips fall where they may. This is not bias; this is sticking to our mission of correcting falsehoods as we find them.

As with PolitiFact's first supposed evidence, this one does not work without first assuming PolitiFact lacks bias. Letting the factual chips fall where they may is unbiased only if the evaluation of the facts was unbiased. If the evaluation of the facts was biased, then letting the chips fall where they may is biased as well.
So far, this item offers us no solid reason for concluding PolitiFact lacks bias.
Our Little League Doesn't Keep Score!

Holan continues:
We don't worry about who got the last False rating or how long since some group got a True rating. We look at each statement and each set of evidence separately and give it a rating that stands on its own.

Concerning the first sentence, our data hint that PolitiFact does consider the proportion of "Pants on Fire" ratings it gives relative to false ratings overall. Note that the pattern under Holan (2014 onward) exhibits much more stability than PolitiFact's record under her predecessor, Bill Adair. Note also that we started publishing our data midway through 2011. PolitiFact editors may have seen the data in 2011 or later and acted on it, which would explain the reduced variation.
Bringing the chart up to date as of today would raise the blue 11.11 percent for 2018 to 20 percent, thanks to a "Pants on Fire" rating given to the North Dakota Democratic-Nonpartisan League Party (who?). Is there even a PolitiFact North Dakota? No. The other "Pants on Fire" rating given to a Democrat this year went to Alexandria Ocasio-Cortez, who would have easily won her race in New York even with 3,000 "Pants on Fire" ratings.
Only once during Holan's tenure (omitting 2013, which she shared with Adair) has either party's percentage fallen outside the 20-to-30 percent range. Under Adair it happened seven times (again omitting 2013).
Holan's assurances ring hollow because PolitiFact's Truth-O-Meter ratings are chosen by a fairly consistent group of editors. They know what ratings they are giving even without looking at a scoreboard, just as players in Little Leagues that spare feelings by not keeping score still know who's winning.
On top of that, PolitiFact constantly encourages readers to view candidate "report cards" that show all the "Truth-O-Meter" ratings PolitiFact has meted out to a given candidate.
Does this look like PolitiFact isn't keeping score?
But Let's Assume PolitiFact Does Not Keep Score

Even assuming Holan is right that PolitiFact does not keep score with its "Truth-O-Meter" ratings, that offers no assurance that PolitiFact lacks bias. Think of an umpire in one of those "no score" Little League games. Would not keeping a tally of the runs scored prevent the umpire from calling a bigger strike zone for one team than the other? We don't see what would prevent it.
Tweaking the Little League Analogy: Yes, We Keep Score, But It Does Not Make Us Biased

The Little League analogy breaks down in the end because PolitiFact does keep score, as Holan acknowledges:
Our database of fact-checks make it easy to see the ratings people or parties have received over the years. Our readers tell us they like seeing these summaries and find them easy to browse. But we are not driven by those numbers; they have no bearing on how we rate the next statement we choose to fact-check.

So Holan is saying: yes, we keep score, but we don't let the score bias our decisions, therefore we are not biased. That's circular reasoning again. Where's the evidence?
How does Holan know PolitiFact does not let the score affect its work? What is PolitiFact's secret for exterminating normal human bias? Wouldn't we all like to know?
We're not going to know from Holan's explanation, that's for sure.
There's nothing in this section of Holan's article that offers any kind of legitimate assurance that PolitiFact filters bias from its work.