Monday, March 3, 2014

Meet the New Boss Same as the Old Boss ...

We've wondered since PolitiFact's founding editor Bill Adair moved on to the realm of academia in 2013 how things would shake out at PolitiFact.

An interview with new editor Angie Drobnic Holan published on Feb. 27 gives us a hint that some things won't be changing.  Here's an item of special note to us here at PolitiFact Bias (bold emphasis added):
"We try to fact-check approximately the same number of Democrats and Republicans but we don’t keep hard-and-fast count, and one thing that we don’t do is try to balance the ratings. We don’t think about if we get a false on one side, we want to go and get a false on the other side. We do not do that."
We've pointed out before that trying to rate about the same number of statements by party serves to skew the sample of statements PolitiFact rates.

Suppose PolitiFact's editorial ears perk up over 10 suspicious-sounding Republican statements but only four for Democrats.  Trying to keep the numbers approximately even creates pressure to change the editorial criteria in order to move the numbers closer together.

What does this mean in practical terms?  It's unwise to collect PolitiFact data and use it as though it's a scientific sample.  It's not a scientific sample.

And we can't complete our post on the Holan interview without noting how she blatantly/wisely dodged one of the questions:

4. You are working with partners across the country, how does everybody draw consistent conclusions? Can this be entirely objective?

No, it's not entirely objective, and PolitiFact doesn't draw consistent conclusions.  Read the article to see how Holan skirts the question.  Adair used to do the same thing.

Oh, and another thing ...

We don't know when this interview took place, but Holan cites 10 PolitiFact state affiliates and one international affiliate, even though the loss of PolitiFact Ohio apparently brings the number of state affiliates down to nine, and PolitiFact has taken down its link to PolitiFact Australia, which has gone on hiatus.  Perhaps somebody should have fact-checked her claims.


  1. I get it. It's not a scientific sample.
    Who said it was? Nobody.

    Keep grinding that axe.

  2. Former PolitiFact editor Bill Adair has explicitly told readers that PolitiFact's work can be used to make informed judgments about a politician's overall record of honesty:

    "Collectively, those fact-checks have formed report cards for each candidate that reveal patterns and trends about their truth-telling."

    "The PolitiFact report cards represent the candidates' career statistics, like the back of their baseball cards."

    "The tallies are not scientific, but they provide interesting insights into a candidate's overall record for accuracy."

  3. As I wrote in June of last year:

    "Adair is pulling a sneaky trick. In one instance he's claiming a tally of PolitiFact's ratings reveals nothing, yet on multiple occasions he's implicitly telling readers the opposite and claims tallying up their ratings is a valuable tool in order to determine a politician's honesty. Adair cannot simultaneously use that method as both a shield and a sword."

  4. Augmenting what Jeff wrote, we've been urging PolitiFact to regularly remind readers that its ratings do not represent a scientific sample. If they fail to do that while promoting their report cards, they're assuredly misleading their readers (have you even looked at the comments on their Facebook page?).

    PolitiFact doesn't regularly remind its readers that it doesn't use a scientific sample. It misleads its readers. Is that what we want from a fact checker?


Thanks to commenters who refuse to honor various requests from the blog administrators, all comments are now moderated. Pseudonymous commenters who do not choose distinctive pseudonyms will not be published, period. No "Anonymous." No "Unknown." Etc.