Friday, December 23, 2016

Evidence of PolitiFact's bias? The Paradox Project I

Matt Shapiro, a data analyst, started publishing a series of PolitiFact evaluations on Dec. 16, 2016. The series appears at the Paradox Project website as well as at the Federalist.

Given our deep and abiding interest in evidence of PolitiFact's liberal bias, we cannot resist reviewing Shapiro's approach to the subject.

Shapiro's first installment focuses on truth averages and disparities in the lengths of fact checks.

Truth Averages

Shapiro looks at how various politicians compare using averaged "Truth-O-Meter" ratings:
We decided to start by ranking truth values to see how PolitiFact rates different individuals and aggregate groups on a truth scale. PolitiFact has 6 ratings: True, Mostly True, Half-True, Mostly False, False, and Pants on Fire. Giving each of these a value from 0 to 5, we can find an “average ruling” for each person and for groups of people.
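The "average ruling" method Shapiro describes can be sketched in a few lines. The direction of the scale is an assumption on our part (the quote gives only the listing order, so we map "True" to 0 and "Pants on Fire" to 5), and the sample ratings below are invented for illustration only.

```python
# Sketch of Shapiro's "average ruling" method: map PolitiFact's six
# ratings to 0-5 and average them per person or group.
# Scale direction (True=0 ... Pants on Fire=5) is an assumption;
# the sample ratings are invented for illustration.

RATING_VALUES = {
    "True": 0,
    "Mostly True": 1,
    "Half-True": 2,
    "Mostly False": 3,
    "False": 4,
    "Pants on Fire": 5,
}

def average_ruling(ratings):
    """Return the mean Truth-O-Meter value for a list of rating labels."""
    return sum(RATING_VALUES[r] for r in ratings) / len(ratings)

# Hypothetical example: four ratings for one politician.
sample = ["True", "Half-True", "Mostly False", "False"]
print(average_ruling(sample))  # (0 + 2 + 3 + 4) / 4 = 2.25
```

On this assumed scale, a higher average means PolitiFact rated the person as less truthful overall.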
Unlike many (not all) past attempts to produce "Truth-O-Meter" averages for politicians, Shapiro uses his averages to gain insight into PolitiFact:
Using averages alone, we already start to see some interesting patterns in the data. PolitiFact is much more likely to rate Republicans as their worst of the worst “Pants on Fire” rating, usually only reserved for when they feel a candidate is not only wrong, but lying in an aggressive and malicious way.
Using 2012 Republican presidential candidate Mitt Romney as his example, Shapiro suggests bias serves as the most reasonable explanation of the wide disparities.

We agree, noting that Shapiro's insight stems from the same type of inference we used in our ongoing study of PolitiFact's application of its "Pants on Fire" rating. But Shapiro disappointed us by defining the "Pants on Fire" rating differently than PolitiFact defines it. PolitiFact does not define a "Pants on Fire" statement as an aggressive or malicious lie. It defines it this way: "The statement is not accurate and makes a ridiculous claim."

As our study argued, the focus on the "Pants on Fire" rating serves as a useful measure of PolitiFact's bias given that PolitiFact offers nothing at all in its definitions to allow an objective distinction between "False" and "Pants on Fire." Indeed, PolitiFact's principals have on occasion confirmed that the distinction between the two is arbitrary.

Shapiro's first line of evidence is reasonably good, at least as an inference to the best explanation. But it has been done before, and with greater rigor.

Word Count

Shapiro says disparities in the word counts for PolitiFact fact checks offer an indication of PolitiFact's bias:
The most interesting metric we found when examining PolitiFact articles was word count. We found that word count was indicative of how much explaining a given article has to do in order to justify its rating.
While Shapiro offered plenty of evidence showing PolitiFact devotes more words to ratings of Republicans than to its ratings of Democrats, he gave little explanation supporting the inference that the disparities show an ideological bias.

While it makes intuitive sense that selection bias could lead to spending more words on fact checks of Republicans, as when the fact checker gives greater scrutiny to a Republican's compound statement than to a Democrat's (recent example), we think Shapiro ought to build a stronger case if he intends to change any minds with his research.
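At bottom, Shapiro's word-count comparison amounts to grouping fact checks by the subject's party and comparing mean article lengths. A minimal sketch of that comparison follows; all the figures in it are invented for illustration and do not come from Shapiro's data.

```python
# Sketch of the word-count comparison: group fact checks by the
# subject's party and compare mean article lengths.
# All data below is invented for illustration.

articles = [
    {"party": "R", "words": 1200},
    {"party": "R", "words": 950},
    {"party": "D", "words": 700},
    {"party": "D", "words": 820},
]

def mean_words(articles, party):
    """Average word count for fact checks of one party's subjects."""
    counts = [a["words"] for a in articles if a["party"] == party]
    return sum(counts) / len(counts)

print(mean_words(articles, "R"))  # 1075.0
print(mean_words(articles, "D"))  # 760.0
```

A gap like this one shows only that the group means differ; by itself it does not say why they differ, which is precisely the inferential step we think Shapiro leaves unsupported.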


Summary

Shapiro's analysis based on rating averages suffers from the same type of problem we think we addressed with our "Pants on Fire" study: poor averages for Republicans make a weak argument unless the analysis defuses the excuse that Republicans simply lie more.

As for Shapiro's examination of word counts, we certainly agree that the differences are so significant that they mean something. But Shapiro needs a stronger argument to convince skeptics that greater word length for fact checks of Republicans shows liberal bias.


Update Dec. 23, 2016: Made a few tweaks to the formatting and punctuation, as well as adding links to Shapiro's article at the Paradox Project and the Federalist (-bww).


Jeff adds:

I fail to see how Shapiro contributes anything worthwhile to the conversation, and he certainly doesn't offer anything new. Every criticism of PolitiFact in his piece has been written about in depth before and, in my view, written much better.

Shapiro's description of PolitiFact's "Pants on Fire" rating is flatly wrong. The definition is published at PolitiFact, had he cared to look it up. Shapiro asserts that a "Pants on Fire" rating "requires the statement to be both false and malicious" and is "usually only reserved for when they feel a candidate is not only wrong, but aggressively and maliciously lying." This is pure fiction. Whether this indicates sloppiness or laziness I'm not sure, but in any event mischaracterizing PolitiFact's ratings only gives fuel to PolitiFact's defenders. Shapiro's error at the very least shows an unfamiliarity with his subject.

Shapiro continues the terribly flawed tradition of some conservative outlets, including the Federalist, where his article was published, of trying to find clear evidence of bias simply by adding up PolitiFact's ratings. Someone with Shapiro's skills should know this is a dubious method.

In fact, he does know it:
This method assumes this or that article might have a problem, but you have to look at the “big picture” of dozens of fact-checks, which inevitably means glossing over the fact that biased details do not add up to an unbiased whole.
That's all well and good, but Shapiro then asks his readers to accept that exact same method for his own study. He even produced his own chart that simply mirrors the same dishonest charts PolitiFact pushes.

At first blush, his "word count" theory seems novel, but does it prove anything? If it is evidence of something, Shapiro failed to convince me. And I'm eager to believe it.

Unfortunately, it seems Shapiro assumes what his word count data is supposed to prove. Higher word counts do not necessarily show political bias. It's entirely plausible those extra words were the result of PolitiFact giving someone the benefit of the doubt, or granting extra space for a subject to explain themselves. Shapiro makes his assertion without offering evidence. It's true that he offered a few examples, but unless he systematically surveyed the thousands of articles and confirmed the additional words are directly tied to justifying the rating, he could reasonably be accused of cherry-picking.

“When you’re explaining, you’re losing,” may well be a rock solid tenet of lawyers and politicians, but as data-based analysis it is unpersuasive.

We founded this website to promote and share the best criticisms of PolitiFact. While we doubt it matters to him or the Federalist, Shapiro's work fails to meet that standard.

Shapiro offers nothing new and nothing better. This is a shame because, judging from his Twitter feed and previous writings, Shapiro is a very bright, thoughtful and clever person. We hope his next installments in this series do a better job of exposing PolitiFact's bias.

We've been criticizing and compiling quality critiques of PolitiFact for six years now. Documenting PolitiFact's bias is the main reason for this site's existence. We're exceedingly predisposed to accept and promote good evidence of PolitiFact's flaws.

If your data project can't convince two guys who started a website called PolitiFact Bias and who devote countless hours of their free time to preaching that PolitiFact is biased, then perhaps your data project isn't very convincing.
