Monday, June 3, 2013

Elspeth Reeve's "tea party" hypothesis

We've written a good amount recently about the George Mason University study of PolitiFact's ratings.  GMU's Center for Media and Public Affairs looked at PolitiFact's ratings from President Obama's second term (the term plagued by numerous instances of dubious congressional testimony) and found PolitiFact rating Republicans more harshly by a 3-to-1 margin.

A number of media outlets, including Slate, misinterpreted the study as saying that Republicans tell more untruths than Democrats.  Slower-reacting outlets tended to get the story right.  Elspeth Reeve, writing for the Atlantic Wire, gets the point of the study right but offers a silly explanation for the numbers:
This month, 60 percent of Republican claims have been rated as lies, while 29 percent of Democratic claims have been.

Why is that? It's possible the fact-checkers are intentionally or unintentionally letting some bias show through. Whether or not that's true, the state of each party right now most certainly plays a role. A lot of very conservative Republicans got elected in 2010, and the Tea Party got a lot of attention, and some Tea Party Republicans have had a tendency to say inflammatory things. Like, say, Michele Bachmann.
The tea party received a large amount of media attention.  And that's supposed to mitigate the appearance of liberal bias by media fact checkers?  Media fact checkers are part of the media, so of course they follow the stories that get media attention.  The mainstream media have roughly the same liberal bias as PolitiFact.  Reeve's excuse explains away the appearance of media bias by positing an alternative cause that itself amounts to media bias.

It's easy to pop this air-filled idea.  Ask Reeve how she knows people like Michele Bachmann say such outrageous things.  She'll cite personally observed anecdotes (mostly from the media) and fact checks like the ones from PolitiFact.  Anecdotal evidence is weak, of course.  But PolitiFact, now there's a source we can trust.

Oh, wait, PolitiFact is precisely the entity that has drawn scrutiny for its potential bias.  If we use PolitiFact's ratings to justify PolitiFact's ratings we're fallaciously arguing in a circle.

Has Reeve got anything other than her own personal observations about the outrageous comments from tea party folks?  If so, she should remember to include that information in support of her hypothesis.

Until Reeve provides some sort of real evidence in favor of this hypothesis, hers is just the latest ad hoc excuse for PolitiFact's appearance of bias.  Spare us the excuses.

Sunday, June 2, 2013

PolitiFact's Paradoxical Positions: The Conflicting Claims of Bill Adair

A press release last week for a study by George Mason University's Center for Media and Public Affairs (CMPA) garnered wide attention, though, hilariously, for the wrong reasons. In a case of comically misinterpreted data, the supposedly fact-based left took to the Twitterverse and other media outlets to trumpet what they thought the study said about Republican honesty, when in reality the study was attempting to quantify PolitiFact's liberal tilt. (We don't think it succeeded, but more on that later.)

While distorted headlines may comfort some liberals in the warm blanket of confirmation bias, a study highlighting PolitiFact's harsher treatment of the GOP doesn't fit with PolitiFact's self-proclaimed non-partisanship. With headlines like "Study: Republicans are 'the less credible party'" and tweets claiming "Republicans lie 3x more often than Democrats, says @politifact" quickly being repeated, PolitiFact founder Bill Adair felt compelled to issue a response:
Actually, PolitiFact rates the factual accuracy of specific claims; we do not seek to measure which party tells more falsehoods.

The authors of the press release seem to have counted up a small number of our Truth-O-Meter ratings over a few months, and then drew their own conclusions.
We actually agree with Adair on a few points in his response, but there are still problems with it. Mark Hemingway was quick to spot one of them:
Adair's statement is lawyerly and bordering on dishonest. CMPA did not draw their own conclusions—they simply tallied up all of PolitiFact's ratings during a specific time period to get a representative sample. All the CMPA did was present relevant data, they most certainly did not "draw their own conclusions."
Hemingway is right. Apparently all CMPA did was tally up PolitiFact's ratings and report the results. On the one hand, that's a much less dubious version of fact-checking than anything Adair's ever done. On the other hand, it's also one of the reasons we don't think the tally works as a tool to measure PolitiFact's bias.

We've written before about why that method is so flawed. Simply adding up the ratings yields data with too many plausible explanations to support any single conclusion. The most obvious problem is selection bias, which, by the way, Adair openly admits to in his response:
We are journalists, not social scientists. We select statements to fact-check based on our news judgment -- whether a statement is timely, provocative, whether it's been repeated and whether readers would wonder if it is true.
It's good to see Adair finally admit to selection bias, but we've spoken at length about the numerous other problems with this methodology. The bottom line is that a tally of PolitiFact's ratings falls well short of proving ideological bias. A lopsided tally may be consistent with political bias, but in and of itself it isn't unassailable evidence.
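To make the selection-bias point concrete, here's a minimal simulation sketch of our own. The party labels, false-statement rates and selection rates are all hypothetical assumptions for illustration, not anything from PolitiFact or the CMPA study. Both simulated parties make false statements at exactly the same underlying rate; the only difference is how often the fact-checker picks each party's false claims to rate. The published tally still comes out lopsided:

```python
import random

# Hypothetical illustration (not PolitiFact's or CMPA's data): both parties
# make false statements at the same underlying rate, but the fact-checker is
# more likely to select one party's dubious-sounding claims for a rating.
random.seed(0)

FALSE_RATE = 0.40               # assumed share of false statements for BOTH parties
PICK_FALSE = {"Party A": 0.60,  # assumed chance a false claim gets selected for a rating
              "Party B": 0.20}
PICK_TRUE = 0.10                # true claims are rarely "provocative" enough to get checked

def published_tally(party, n_statements=1000):
    """Simulate which statements the fact-checker ends up rating and publishing."""
    false_rated = true_rated = 0
    for _ in range(n_statements):
        is_false = random.random() < FALSE_RATE
        pick_prob = PICK_FALSE[party] if is_false else PICK_TRUE
        if random.random() < pick_prob:      # the editorial selection step
            if is_false:
                false_rated += 1
            else:
                true_rated += 1
    return false_rated, true_rated

for party in ("Party A", "Party B"):
    f, t = published_tally(party)
    print(f"{party}: {f} False vs {t} True ratings "
          f"-> {f / (f + t):.0%} of the published tally is False")
```

The lopsidedness here comes entirely from the selection step, which is why a raw tally can't by itself separate "one party lies more" from "the fact-checker looks harder at one party."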

So we agree with Adair that a tally of ratings is a poor measure of which party tells more falsehoods. Of course, that means we also disagree with him.

What Adair forgot to mention in his response to the CMPA study is that Adair himself uses the exact same method to promote PolitiFact's ratings:
Instead of traditional articles, our Truth-O-Meter fact-checks are a new form that allows you to see a politician’s report card, to see all fact-checks on a subject or see all the Pants on Fire ratings. We can make larger journalistic points through the automatic tallies and summaries of our work.
You see, when PolitiFact tallies up its ratings, the tallies make "larger journalistic points" about its work. When academics tally those same ratings and highlight PolitiFact's harsher treatment of the GOP, hey, PolitiFact only checks specific claims, and a tally doesn't tell you anything.
Granted, it's possible Adair misspoke that one time. Except that back in 2008 he wrote an article giving readers "tips and tricks on how to find what you want on PolitiFact":
•  Check a candidate's report card  — Our candidate pages...provide a helpful overview. Each one includes a running tally of their Truth-O-Meter ratings, the most recent claims and attacks they've made, attacks against them and their Flip-O-Meter ratings.
Helpful overview? OK. Well, at least Adair knows that a tally of PolitiFact's ratings can't reveal patterns and trends about a candidate's truth-telling. Right?
Collectively, those fact-checks have formed report cards for each candidate that reveal patterns and trends about their truth-telling.
Wait. What?
The PolitiFact report cards represent the candidates' career statistics, like the back of their baseball cards.
But, but, I thought PolitiFact only checked specific claims and that tallies don't offer insight into an overall record?
The tallies are not scientific, but they provide interesting insights into a candidate's overall record for accuracy.
Adair is pulling a sneaky trick. In one breath he claims a tally of PolitiFact's ratings reveals nothing, yet on multiple occasions he has told readers the opposite, pitching a tally of the ratings as a valuable tool for gauging a politician's honesty. Adair cannot simultaneously use that method as both a shield and a sword.

Adair's response to the CMPA study doesn't pass the sniff test. His dismissal of CMPA's method contradicts years' worth of his own lauding of that exact practice. When Adair stops pimping PolitiFact's report cards as if they're some kind of objective referendum on a politician's honesty, we'll be happy to give him kudos. Until then, he's little more than a disingenuous hack trying to have it both ways.



Afters:


At the very least I'll give Adair points for hypocritical chutzpah. The very same day he published his response whining that the study seemed "to have counted up a small number of our Truth-O-Meter ratings" and then "drew their own conclusions", PolitiFact published a recap of Michele Bachmann's record with this admonition:
"At this point, Bachmann's record on the Truth-O-Meter skews toward the red [False]." 

If 100 ratings is a "small number" to draw from for your own conclusions about PolitiFact, then how can 59 ratings tell you anything about Bachmann?
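As a rough back-of-the-envelope illustration of why counts this small say so little either way, here's a margin-of-error sketch for a proportion. It's our own illustration, and it assumes the ratings were a simple random sample of each subject's statements, which they aren't; that selection problem is the bigger issue.

```python
import math

def moe_95(p, n):
    """Approximate 95% margin of error for an observed proportion p over n ratings."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Illustrative only: a hypothetical "60% rated False" share over 100 vs 59 ratings.
for n in (100, 59):
    p = 0.60
    print(f"n={n}: {p:.0%} +/- {moe_95(p, n):.0%} (95% interval, if this were a random sample)")
```

Either interval is wide, and since the statements were never randomly sampled in the first place, even that overstates what a report-card tally can tell you about anyone's overall honesty.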

Additional Reading:

-Check out Bryan's piece over at Zebra Fact Check, "PolitiFact's Artful Dodger," for a more straightforward analysis of both the CMPA press release and Adair's response.

-We always think Hemingway is worth reading and his take on Adair's response to the study is no exception. He has several good points we didn't mention here.

-You can review our empirical evidence highlighting PolitiFact's liberal bias, gathered with a method that avoids the flaws we've seen in so many other studies. You can find it here.