An interview with new editor Angie Drobnic Holan, published Feb. 27, gives us a hint that some things won't be changing. Here's an item of special note to us here at PolitiFact Bias (bold emphasis added):
"We try to fact-check approximately the same number of Democrats and Republicans but we don’t keep hard-and-fast count, and one thing that we don’t do is try to balance the ratings. We don’t think about if we get a false on one side, we want to go and get a false on the other side. We do not do that."

We've pointed out before that trying to rate about the same number of statements by party serves to skew the sample of statements PolitiFact rates.
Suppose PolitiFact's editorial ears perk up over 10 suspicious-sounding Republican statements but only four from Democrats. Trying to keep the numbers approximately even creates pressure to change the editorial criteria in order to move the numbers closer together.
What does this mean in practical terms? It's unwise to collect PolitiFact data and use it as though it's a scientific sample. It's not a scientific sample.
And we can't complete our post on the Holan interview without noting how she blatantly/wisely dodged one of the questions:

4. You are working with partners across the country, how does everybody draw consistent conclusions? Can this be entirely objective?

No, it's not entirely objective, and PolitiFact doesn't draw consistent conclusions. Read the article to see how Holan skirts the question. Adair used to do the same thing.
Oh, and another thing ...
We don't know when this interview took place, but Holan cites 10 PolitiFact state affiliates and one international affiliate, even though the loss of PolitiFact Ohio apparently brings the number of state affiliates down to nine, and PolitiFact has taken down its link to PolitiFact Australia, which has gone on hiatus. Perhaps somebody should have fact-checked her claims.