Monday, April 15, 2019

PolitiFact Bias Fails to Win a Pulitzer Prize for Its Eighth Straight Year

Sad news: PolitiFact Bias failed to win a Pulitzer Prize in 2019. That makes eight years in a row PolitiFact Bias has failed to win a Pulitzer.

But there's an upside.

Pulitzer Prize-winning PolitiFact has failed to win a Pulitzer for 10 straight years, beating our streak by two years.

We track these numbers, by the way, because PolitiFact tries to use its Pulitzer Prize from 2009 as a type of mark of excellence endorsing the quality of its fact-checking.

We call that a crock. We've documented that Pulitzer juries do not fact-check entries submitted for Pulitzer Prize consideration. And PolitiFact's set of entries in 2009 included its preposterous "Mostly True" rating for the claim that Barack Obama's uncle helped liberate Auschwitz.

We created this video a few years ago to commemorate PolitiFact's long-running failure to repeat its Pulitzer Prize success from 2009.

We still think it's funny. It's funnier every year, in fact.

Tuesday, April 9, 2019

PolitiFact: 'Tweets' is to blame!


As if we needed a new reason to condemn the utility of PolitiFact's "report card" featurettes.

Once a candidate has accumulated 10 or more "Truth-O-Meter" ratings, PolitiFact features a page showing a graphic display of the distribution of those ratings. Here's a typical one:


We've always objected to these "report cards" because PolitiFact allows selection bias to serve as the basis for its datasets. In other words, the set of statements making up the basis for the graph is not representative. PolitiFact makes no effort to ensure that it is representative.

Today we ran across a fresh reason for regarding the report cards as unrepresentative. A PolitiFact fact check found that a charge against President Trump, that he was calling illegal immigrants "animals," was "False." PolitiFact's fact check notes that Democratic presidential candidates Kirsten Gillibrand and Pete Buttigieg retweeted the falsehood along with comments condemning Trump's supposed choice of words. Trump, it turned out, was referring to gang members and not ordinary illegal immigrants.

By blaming the falsehood on "Tweets," PolitiFact need not sully the report cards of Gillibrand and Buttigieg.



Bad, naughty "Tweets"!

Good, virtuous Gillibrand. Just look at that report card! No "False" and no "Pants on Fire."


The likewise good and virtuous Buttigieg does not yet have enough Truth-O-Meter ratings to qualify for a report card. He has just one "True" rating and one "Half True" rating. And you can rest assured that when Buttigieg does have a report card graphic that his retweet of a falsehood about Trump will not appear on his record. "Tweets" gets the blame instead.

With just a glance at "Tweets'" report card, one can tell "Tweets" is less virtuous than either Gillibrand or Buttigieg.


Except we're kidding because comparing the report cards is a worthless exercise.

We've repeatedly called on PolitiFact to add disclaimers to its "report cards" informing readers that the report cards serve as no useful guide in deciding which candidate to support.

We believe PolitiFact resists that suggestion because it wants its worthless report cards to influence voters. Don't vote for naughty "Tweets." Vote for virtuous Kirsten Gillibrand. Or virtuous Pete Buttigieg. It's from PolitiFact. And it's science-ish.

Thursday, April 4, 2019

The Worst of PolitiFact's April 2, 2019 Reddit AMA

As we mentioned in a Feb. 2, 2019 post, we love it when PolitiFact folks do interviews. It's a near guarantee of generating material worth posting. In celebration of "International Fact-Checking Day," PolitiFact Director Aaron Sharockman and PolitiFact Editor Angie Drobnic Holan conducted a Reddit "Ask Me Anything" event.

I asked PolitiFact to describe why it advocates transparency while keeping the identities and votes of its "Star Chamber" secret. PolitiFact's "Star Chamber" votes on each "Truth-O-Meter" rating. The majority vote rules, though PolitiFact claims it achieves unanimity for most votes. My question wasn't answered (no great surprise there).

Most of the interactions were boilerplate answers to boilerplate questions. But there were a few items of special interest.


Observing the PolitiFact Code?


Though Holan flatly said "Everything that gets a correction or an update gets tagged (see all tagged items)," we were ready with two recent cases contradicting her claim. And we let that cat out of the bag.


PolitiFact makes statements giving readers the impression that it scrupulously follows its code of principles. In fact, PolitiFact loosely follows its code of principles, as in this example. How do Holan and Sharockman not know this?

One of the examples we used was corrected on approximately March 16, 2019. Most of the uncertainty about the correction date comes from PolitiFact Virginia's decision not to mark the date of the correction. As of April 3, 2019, PolitiFact Virginia had not added the "Corrections or Updates" tag and the story did not appear on PolitiFact's supposed list of all of its corrected or updated stories.


Mythical Truth-O-Meter Consistency

One participant asked a question suggesting PolitiFact does not rate statements consistently, arguing that contemporary ratings of Trump make past ratings look far too harsh. Sharockman implied PolitiFact has kept its system consistent over the years:
But beyond the sheer volume [of Trump ratings--ed.], the standards we use to use [sic] to issue our ratings really hasn't [sic] evolved in the 11 years we've been doing this. In that sense, a Pants on Fire in 2009 should still be a Pants on Fire claim today, and vice versa.
There are two big problems with Sharockman's claim. First, PolitiFact itself announced a change to its rating methodology back in 2012.

Second, PolitiFact has admitted that its ratings are pretty much subjective. Sharockman's chosen example, the dividing line between "False" and "Pants on Fire," is perhaps the most sensational example of that subjectivity. How does Sharockman not know that?


The Vast Right-Wing Conspiracy Against PolitiFact?

Someone (not me!) asked "Who fact checks you [PolitiFact--ed.]?"

We found Sharockman's response fascinating and very probably false:


Sharockman's answer paints PolitiFact as the focus of a concentrated group of hostile editors. With all those people combing PolitiFact material for hours on end, it's amazing that PolitiFact makes mistakes so rarely.

Right?

But is there any evidence at all supporting Sharockman's supposition that "a lot of people are reading everything we write looking for mistakes"? We at PolitiFact Bias announced long ago that we could not do a thorough job vetting PolitiFact's body of work:
As PolitiFact expands its state operations, the number of stories it produces far exceeds our capacity to review and correct even just the most egregious examples of journalistic error or bias.  We aim to encourage an army of Davids to counteract the mistakes and bias in PolitiFact's stories.
Who else could Sharockman have had in mind? Media Matters For America? The (defunct) Weekly Standard?

(We asked Sharockman via Twitter the other day whom he had in mind but received no immediate reply.)

We suspect Sharockman of Trumpian exaggeration. He knows at least some people look at some of PolitiFact's work for errors. So to convey his point he turns that into "a lot of people" combing "everything" PolitiFact publishes in search of errors. It's likely the only organization combing over PolitiFact's entire body of work looking for errors is PolitiFact itself.

And look how many times it fails at that, even without swallowing the fiction that its tagged corrections represent the entire number of its errors.

***

Holan and Sharockman are politicians advocating for PolitiFact. It appears we cannot trust PolitiFact to hold its own to account.