Wednesday, September 5, 2012

PFB Smackdown: Dylan Otto Krider vs. Jon Cassidy (Updated)

Dylan Otto Krider's orbit around truth-hustler Chris Mooney helped bring him into direct conflict with Ohio Watchdog writer Jon Cassidy this week.  We've featured some of Cassidy's PolitiFact critiques here at PolitiFact Bias, and our review of Cassidy's longer article for Human Events is pending.

Krider, like Mooney, believes that PolitiFact statistics indicating that Republicans receive harsher ratings than Democrats help show that Republicans simply have a more cavalier attitude toward telling the truth.  Cassidy's Human Events story challenged that interpretation and prompted a story from Krider in reply.

Krider's central point carries partial merit.  He challenges Cassidy's headline with his own:  "Does PolitiFact say Republicans lie nine times more? Really?"

The answer to that question is "no," but Krider used specious reasoning to reach that conclusion.  Examples follow.
1)  Krider criticizes Cassidy for relying on a study I conducted for publication through PolitiFact Bias.  Krider's most damning criticism of the study is that it was not peer-reviewed.  While true, that criticism draws whatever force it has from a fallacious appeal to ignorance:  because academics have not reviewed and approved the content, we supposedly cannot know that the content is reliable, and because we cannot know that the content is reliable, the content must not be reliable.  For the record, the version of the study published so far is deliberately written to make the data accessible to readers of average or slightly above average education.

2)  Krider claims that Cassidy's numbers "don't pass the smell test."  But Krider flubbed almost all of the math in making that judgment.  He tried comparing Cassidy's claims and the PolitiFact Bias study with a study Eric Ostermeier conducted at the University of Minnesota.

Ostermeier (formatting from the original):
In total, 74 of the 98 statements by political figures judged "false" or "pants on fire" over the last 13 months were given to Republicans, or 76 percent, compared to just 22 statements for Democrats (22 percent). 
Krider wonders how Cassidy's numbers could vary so dramatically from Ostermeier's.  The answer is simple:  Krider is comparing apples and oranges.  Cassidy compared the raw numbers of "Pants on Fire" ratings.  Ostermeier's 76 percent figure, as one can see in his words above, was derived by combining the "False" and "Pants on Fire" ratings.  In addition, Cassidy used a different mathematical comparison than Ostermeier did.  Cassidy gave the numbers in comparison to each other:  the number of times one side was rated "Pants on Fire" divided by the number of times the other side was.  Ostermeier's 76 percent figure, in contrast, comes from combining the numbers for both parties into one total and then figuring each party's percentage of that total.  It's a different mathematical operation.

If we apply Cassidy's math to Ostermeier's 2010 data, the result isn't far off from the number Cassidy used.  Republicans received 23 "Pants on Fire" ratings.  Democrats received four.  Republicans received the "Pants on Fire" rating 5.75 times as often as Democrats did.


Table from Smart Politics
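
To make the difference concrete, here's a minimal sketch in Python of the two operations, run on the 2010 "Pants on Fire" counts cited above (23 Republican, 4 Democratic).  One caveat:  applying Ostermeier's share-of-the-total operation to the "Pants on Fire" counts alone is my illustration; his published 76 percent combined the "False" and "Pants on Fire" ratings.

```python
# Two different operations on the same 2010 "Pants on Fire" counts
# (23 Republican, 4 Democratic), per the Smart Politics data above.
rep_pof, dem_pof = 23, 4

# Cassidy-style comparison: one party's count divided by the other's
ratio = rep_pof / dem_pof                    # 23 / 4 = 5.75

# Ostermeier-style comparison: each party's share of the combined total
rep_share = rep_pof / (rep_pof + dem_pof)    # 23 / 27, about 85 percent

print(f"Ratio: {ratio:.2f}x; Republican share of the total: {rep_share:.0%}")
```

The two results come from the same data, yet nobody should expect them to agree.  They answer different questions.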

Krider did a sensational job of obscuring the similarities in the numbers.

3)  Later in his post, Krider badly mischaracterizes both his quotation of me and the data I shared with him (bold emphasis added):
(N)owhere in the (PolitiFact Bias) study could I find where PolitiFact finds Republicans wrong nine times more often. I asked White if he had any idea where the number comes from. He writes:
That 9x figure had me puzzled for a bit as well, since it is not a figure anywhere emphasized in my study. But I think I have the answer you're looking for. Cassidy looked purely at the disparity between the total number of supposed "Pants on Fire" statements in my "C" group since the end of the partnership with Congressional Quarterly… That was the group I hypothesized probably represented an exaggerated measure of bias since it includes email claims (predominantly from conservatives and predominantly far-fetched). I think the best way to look at the disparities is to examine them as a percentage of the total number of false statements by party. If you read the study, as you appear to have done, you'll know why.
White sent me his data, and the numbers do appear to match with the numbers including chain emails.
That's how the story reads now.  Originally, Krider followed the quotation of my email message with this (bold emphasis added):
White sent me his data, and the numbers do appear to match with the numbers for chain emails alone.
I alerted Krider to the inaccuracy via email.  He elected to correct the error without appending a correction notice.  Nobody needs to know he bungled the description, apparently.  A Google phrase search and a partial screen capture confirmed the original wording.
Krider told me the error was unintentional, and I believe him. But it's the sort of error one hopes is absent in the work of somebody trying to verify facts.


4)  Krider delivers yet another sloppy error:

Cassidy was happy to let his readers assume he was referring to PolitiFact findings as a whole, when in fact White found PolitiFact issued “Pants on Fire” ratings for conservative falsehoods 74% more often. In other words, 2% below the UM study (White thinks it’s a coincidence because they measure different things: “Perhaps the numbers match because the degree of bias is similar, but it's probably a stretch to credit me with accuracy based on the similarity of the numbers.”)
As explained above, my study measures something entirely different from Ostermeier's.  His 76 percent figure represents the Republican share of the total "False" plus "Pants on Fire" ratings for 2010.  My 74 percent figure represents how much more likely a false statement (one rated either "False" or "Pants on Fire") from a Republican is to receive the harsher "Pants on Fire" rating.  The 74 percent figure sets aside the fact that Republicans receive more false ratings overall and focuses on how PolitiFact applies the "Pants on Fire" rating.  It just isn't reasonable to take the similarity of the numbers as anything but coincidence.  One is left to conclude that Krider simply did not understand the non-scholarly PolitiFact Bias study despite its simplified language and approach.
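
The operation is easy to show.  Here's a minimal sketch in Python; the counts are hypothetical placeholders chosen only to illustrate the computation, not the study's real tallies:

```python
# Hypothetical counts of statements rated "False" or "Pants on Fire."
# These placeholders only illustrate the operation; the study's actual
# tallies appear in the study itself.
rep_false, rep_pof = 100, 60    # hypothetical Republican counts
dem_false, dem_pof = 100, 35    # hypothetical Democratic counts

# Share of each party's false statements that drew "Pants on Fire"
rep_rate = rep_pof / (rep_false + rep_pof)   # 60 / 160 = 0.375
dem_rate = dem_pof / (dem_false + dem_pof)   # 35 / 135, about 0.259

# How much more often a Republican falsehood draws "Pants on Fire"
print(f"{rep_rate / dem_rate - 1:.0%} more often")   # about 45% with these numbers
```

Ostermeier's 76 percent, by contrast, is a share of one combined total.  The closeness of 74 and 76 says nothing about whether the two operations measure the same thing.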

5)  Krider gives us a rigged and inapt comparison:
When Cassidy and I write about PolitiFact’s skew, I chose to reference the scholarly study. Cassidy preferred the study of a blog with an agenda similar to his own, and then misled his readers as to what the study actually said.
Somewhat to my chagrin, Cassidy left the central thesis of the PFB study untouched and unmentioned.  Instead, the material he took from my study consisted of the easily verified raw numbers.  A careful comparison of my numbers with Ostermeier's shows little difference where our studies overlap, despite some differences in the way we separated the data.  In short, Cassidy didn't use anything from my study that should require rigorous peer review.  As I wrote to Krider, virtually anyone can duplicate my numbers.  Krider's dispute of the study's findings, as Cassidy used them, isn't reasonable.  The PFB study is both more current and more complete in what it measures than Ostermeier's.

Summary:

So what's left?  Krider failed with all of his key criticisms save one:  Cassidy made the group C findings (119 Republican "Pants on Fire" ratings against 13 for Democrats, a ratio of roughly 9.2) the headline instead of group A.  That decision produced a moderate exaggeration of the study's findings, but nothing nearly as severe as what Krider charges.

Be careful whom you trust for fact checking.


Afters

We'll have our own review of Cassidy's “PolitiFact bias: Does the GOP tell nine times more lies than left? Really?” at a later time. I've offered Cassidy the opportunity to explain the decision about the title in an addendum to this post.

Cassidy responds:

Jon Cassidy on the Human Events headline:

"The 9x figure is clearly supported -- 119-13. "GOP" is an approximation of "claims by Republicans and conservatives" that fits into a headline. It's explained in detail in the text."
--Jon Cassidy, via email


Additional note about Dylan Otto Krider

Krider and I ended up having an extensive email exchange concerning his story and methods.  I've asked Krider twice for permission to quote material from his emails to highlight the flaws in his thinking and methods.  So far, he has not responded to either request.

I could probably justify using his emails without his explicit permission, but I'm trying to offer him a sterling example of journalism ethics.


Update 9/6/2012, 4 p.m.: 

Dylan Otto Krider tweets:

@PolitiFactBias Did you really do an entire critique of my post without linking back to it so people can see for themselves?

The answer is yes, at least temporarily.  That's fixed with this update, with my apologies.  The last thing I want is for people not to see Krider's mistakes for themselves.  We invite Krider to point out anything we got wrong now that people can take a look for themselves.
