Thursday, January 5, 2017

Evidence of PolitiFact's bias? The Paradox Project II

On Dec. 23, 2016, we published our review of the first part of Matthew Shapiro's evaluation of PolitiFact. This post will cover Shapiro's second installment in that series.

The second part of Shapiro's series showed little reliance on hard data in any of its three main sections.

Top Five Lies? Really?

Shapiro's first section identifies the top five lies, respectively, for Trump and Clinton and looks at how PolitiFact handles his list. Where does the list of top lies come from? Shapiro evidently chose them. And Shapiro admits his process was subjective (bold emphasis added):

It is extremely hard to pin down exactly which facts PolitiFact declines to check. We could argue all day about individual articles, but how do you show bias in which statements they choose to evaluate? How do you look at the facts that weren’t checked?

Our first stab at this question came from asking which lies each candidate was famous for and checking to see how PolitiFact evaluated them. These are necessarily going to be somewhat subjective, but even so the results were instructive.

It seems to us that Shapiro leads off his second installment with facepalm material.

Is an analysis data-driven if you're looking only at data sifted through a subjective lens? No. Such an analysis gets its impetus from the view through the subjective lens, which leads to cherry-picked data. Shapiro's approach to the data in this case wallows in the same mud in which PolitiFact basks with its ubiquitous "report card" graphs. PolitiFact gives essentially the same excuse for its subjective approach that we see from Shapiro: Sure, it's not scientific, but we can still see something important in these numbers!

Shapiro offers his readers nothing to serve as a solid basis for accepting his conclusions based on the Trump and Clinton "top five lies."

Putting the best face on Shapiro's evidence, yes, PolitiFact skews its story selection. And the most obvious problem from the skewing stems from PolitiFact generally ignoring the skew when it publishes its "report cards" and other presentations of its "Truth-O-Meter" data. Using PolitiFact's own bad approach against it might carry some poetic justice, but shouldn't we prefer solid reasoning in making our criticisms of PolitiFact?

The Rubio-Reid comparison

In Shapiro's second major section, he highlights the jaw-dropping disparity between PolitiFact's focus on Marco Rubio, starting with Rubio's 2010 candidacy for the Senate, compared with that of Sen. Harry Reid, long-time senator as well as majority leader and minority leader during PolitiFact's foray into political fact-checking.

Shapiro offers his readers no hint regarding the existence of PolitiFact Florida, the PolitiFact state franchise that accounts in large measure--if not entirely--for PolitiFact's disproportional focus on Rubio. Was Shapiro aware of the different state franchises and how their existence (or non-existence) might skew his comparison?

We are left with an unfortunate dilemma: Either Shapiro knew of PolitiFact Florida and decided not to mention it to his readers, or else he failed to account for its existence in his analysis.

The Trump-Pence-Cruz muddle

Shapiro spends plenty of words and uses two pretty graphs in his third major section to tell us about something that he says seems important:

One thing you may have noticed through this series is that the charts and data we’ve culled show a stark delineation between how PolitiFact treats Republicans versus Democrats. The major exceptions to the rules we’ve identified in PolitiFact ratings and analytics have been Trump and Vice President-elect Mike Pence. These exceptions seem important. After all, who could more exemplify the Republican Party than the incoming president and vice president elect?

Shapiro refers to his observation that PolitiFact tends to use more words when grading the statements of Republicans. Except that PolitiFact uses words economically for Trump and Pence.

What does it mean?

Shapiro concludes PolitiFact treats Trump like a Democrat. What does that mean, in turn, other than that PolitiFact does not use more words than average to justify its ratings of Trump (yes, we are emphasizing the circularity)?

Shapiro, so far as we can tell, does not offer up much of an answer. Note the conclusion of the third section, which also concludes Shapiro's second installment of his series:

In this context, PolitiFact’s analysis of Trump reinforces the idea that the media has [sic] called Republicans liars for so long and with such frequency the charge has lost it [sic] sting. PolitiFact treated Mitt Romney as a serial liar, fraud, and cheat. They attacked Rubio, Cruz, and Ryan frequently and often unfairly.

But they treated Trump like they do Democrats: their fact-checking was short, clean, and to the point. It dealt only with the facts at hand and sourced those facts as simply as possible. In short, they treated him like a Democrat who isn’t very careful with the truth.

The big takeaway is that PolitiFact's charge that Republicans are big fat liars doesn't carry the zing it once carried? But how would cutting down on the number of words restore the missing sting? Or are PolitiFact writers bowing to the inevitable? Why waste extra words making Trump look like a liar, when it's not going to work?

We just do not see anything in Shapiro's data that particularly recommends his hypothesis about the "crying wolf" syndrome.

An alternative hypothesis

We would suggest two factors that better explain PolitiFact's economy of words in rating Trump.

First, as Shapiro pointed out earlier in his analysis, PolitiFact fact-checked many of Trump's claims multiple times. Is it necessary to go to the same great lengths every time when one is writing essentially the same story? No. The writer has the option of referring the reader to the earlier fact checks for the detailed explanation.

Second, PolitiFact plays to narratives. PolitiFact's reporters allow narrative to drive their thinking, including the idea that their audience shares their view of the narrative. Once PolitiFact has established its narrative identifying a Michele Bachmann, a Sarah Palin or a Donald Trump as a stranger to the truth, the writers excuse themselves from spending words to establish the narrative from the ground up.

Maddeningly thin

Is it just us, or is Shapiro's glorious multi-part data extravaganza short on substance?

Let's hope future installments lead to something more substantial than what he has offered so far.
