
Wednesday, May 31, 2017

What does the "Partisan Selective Sharing" study say about PolitiFact?

A recent study called "Partisan Selective Sharing" (hereafter PSS) noted that Twitter users were more likely to share fact checks that aided their own side of the political aisle.

Duh?

On the other hand, the paper came up in a search we did of scholarly works mentioning "PolitiFact."

The search preview mentioned the University of Minnesota's Eric Ostermeier. So we couldn't resist taking a peek to see how the paper handled the data hinting at PolitiFact's selection bias problem.

The mention of Ostermeier's work was effectively neutral, we're happy to say. And the paper had some surprising value to it.

PSS coded tweets from the "elite three" fact checkers, FactCheck.org, PolitiFact and the Washington Post Fact Checker, classifying them as neutral, beneficial to Republicans or beneficial to Democrats.

In our opinion, that's where the study proved briefly interesting:
Preliminary analysis
Fact-checking tweets
42.3% of the 194 fact-check (n=82) tweets posted by the three accounts in October 2012 contained rulings that were advantageous to the Democratic Party (i.e., either positive to Obama or negative to Romney), while 23.7% of them (n=46) were advantageous to the Republican Party (i.e., either positive to Romney or negative to Obama). The remaining 34% (n=66) were neutral, as their statements contained either a contextualized analysis or a neutral anchor.

In addition to the relative advantage of the fact checks, the valence of the fact-checking tweet toward each candidate was also analyzed. Of the 194 fact checks, 34.5% (n=67) were positive toward Obama, 46.9% (n=91) were neutral toward Obama, and 18.6% (n=36) were negative toward Obama. On the other hand, 14.9% (n=29) of the 194 fact checks contained positive valence toward Romney, 53.6% (n=104) were neutral toward Romney, and 31.4% (n=61) were negative valence toward Romney.
Of course, many have no problem interpreting results like these as a strong indication that Republicans lie more than Democrats. And we cheerfully admit the data show consistency with the assumption that Republicans lie more.

Still, if one has some interest in applying the methods of science, on what do we base the hypothesis that Republicans lie more? We cannot base that hypothesis on these data without ruling out the idea that fact-checking journalists lean to the left. And unfortunately for the "Republicans lie more" hypothesis, we have some pretty good data showing that American journalists tend to lean to the left.

Until we have some reasonable argument why left-leaning journalists do not allow their bias to affect their work, the results of studies like PSS give us more evidence that the media (and the mainstream media subset "fact checkers") lean left while they're working.

The "liberal bias" explanation has better evidence than the "Republicans lie more" hypothesis. As PolitiFact tweeted 126 of the total 194 fact check tweets, a healthy share of the blame likely falls on PolitiFact.


We wish the authors of the study, Jieun Shin and Kjerstin Thorson, had separated the three fact checkers in their results.

Friday, May 19, 2017

What "Checking How Fact Checkers Check" says about PolitiFact

A study by doctoral student Chloe Lim (Political Science) of Stanford University gained some attention this week, inspiring some unflattering headlines like this one from Vocativ: "Great, Even Fact Checkers Can’t Agree On What Is True."

Katie Eddy and Natasha Elsner explain inter-rater reliability

Lim's research approach somewhat resembled research by Michelle A. Amazeen of Rider University. Amazeen and Lim both used tests of coding consistency to assess the accuracy of fact checkers, but the two reached roughly opposite conclusions. Amazeen concluded that consistent results helped strengthen the inference that fact-checkers fact-check accurately. Lim concluded that inconsistent fact-checker ratings may undermine the public impact of fact-checking.

Key differences in the research procedure help explain why Amazeen and Lim reached differing conclusions.

Data Classification

Lim used two different methods for classifying data from PolitiFact and the Washington Post Fact Checker. She converted PolitiFact ratings to a five-point scale corresponding to the Washington Post Fact Checker's "Pinocchio" ratings, and she also divided the ratings into "True" and "False" groups, treating the line between "Mostly False" and "Half True" as the boundary between true and false statements.

Amazeen opted for a different approach. She did not try to reconcile the two different rating systems at PolitiFact and the Fact Checker, electing instead to use a binary system that counted every statement rated anything other than "True" or "Geppetto check mark" as false.

Amazeen's method essentially guaranteed high inter-rater reliability, because "True" judgments from the fact checkers are rare. Imagine comparing movie reviewers who use a five-point scale but with their data divided into great movies and not-great movies. A one-star rating of "Ishtar" by one reviewer would show agreement with a four-star rating of the same movie by another reviewer. Disagreement occurs only when one reviewer gives five stars while the other gives a lower rating.

Professor Joseph Uscinski's reply to Amazeen's research, published in Critical Review, put it succinctly:
Amazeen’s analysis sets the bar for agreement so low that it cannot be taken seriously.
Amazeen found high agreement among fact checkers because her method guaranteed that outcome.
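
To illustrate the point, here is a minimal sketch (our own, in Python, using made-up ratings rather than either researcher's data) of how collapsing a five-point scale into "True versus everything else" manufactures agreement:

```python
# Hypothetical ratings for the same ten claims by two imaginary raters,
# on a 0-4 scale (0 = "Pants on Fire" ... 4 = "True"). Not real data.
rater_a = [0, 1, 2, 1, 3, 0, 2, 1, 4, 2]
rater_b = [2, 3, 1, 2, 1, 1, 3, 0, 4, 0]

def percent_agreement(a, b):
    """Share of items on which the two raters assign identical codes."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Raw five-point agreement: the raters match on only one claim in ten.
print(percent_agreement(rater_a, rater_b))  # 0.1

# Amazeen-style collapse: anything short of the top rating counts as false.
binarize = lambda ratings: [1 if r == 4 else 0 for r in ratings]
print(percent_agreement(binarize(rater_a), binarize(rater_b)))  # 1.0
```

Any pair of raters who seldom award the top rating will look nearly unanimous once the scale is collapsed, no matter how much they disagree across the other categories.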

Lim's methods provide for more varied and robust data sets, though Lim ran into the same problem Amazeen encountered: two different fact-checking organizations only rarely check the same claims. Both researchers used relatively small data sets.

The meaning of Lim's study

In our view, Lim's study rushes to its conclusion that fact-checkers disagree without giving proper attention to the most obvious explanation for the disagreement she measures.

The rating systems the fact checkers use lend themselves to subjective evaluations. We should expect that condition to lead to inconsistent ratings. When I reviewed Amazeen's method at Zebra Fact Check, I criticized it for applying inter-coder reliability standards to a process much less rigorous than social science coding.

Klaus Krippendorff, creator of the K-alpha measure Amazeen used in her research, explained the importance of giving coders good instructions to follow:
The key to reliable content analyses is reproducible coding instructions. All phenomena afford multiple interpretations. Texts typically support alternative interpretations or readings. Content analysts, however, tend to be interested in only a few, not all. When several coders are employed in generating comparable data, especially large volumes and/or over some time, they need to focus their attention on what is to be studied. Coding instructions are intended to do just this. They must delineate the phenomena of interest and define the recording units to be described in analyzable terms, a common data language, the categories relevant to the research project, and their organization into a system of separate variables.
The rating systems of PolitiFact and the Washington Post Fact Checker are gimmicks, not coding instructions. The definitions mean next to nothing, and PolitiFact's creator, Bill Adair, has called PolitiFact's determination of Truth-O-Meter ratings "entirely subjective."

Lim's conclusion is right. The fact checkers are inconsistent. But Lim's use of coder reliability ratings is, in our view, a little like using a plumb line to measure whether a building has collapsed due to an earthquake. The tool is too sophisticated for the job. The "Truth-O-Meter" and "Pinocchio" rating systems as described and used by the fact checkers do not qualify as adequate sets of coding instructions.

We've belabored the point about PolitiFact's rating system for years. It's a gimmick that tends to mislead people. And the fact-checking organizations that do not use a rating system avoid it for precisely that reason.

Lucas Graves' history of the modern fact-checking movement, "Deciding What's True: The Rise of Political Fact-Checking in American Journalism," (Page 41) offers an example of the dispute:
The tradeoffs of rating systems became a central theme of the 2014 Global Summit of fact-checkers. Reprising a debate from an earlier journalism conference, Bill Adair staged a "steel-cage death match" with the director of Full Fact, a London-based fact-checking outlet that abandoned its own five-point rating scheme (indicated by a magnifying lens) as lacking precision and rigor. Will Moy explained that Full Fact decided to forgo "higher attention" in favor of "long-term reputation," adding that "a dodgy rating system--and I'm afraid they are inherently dodgy--doesn't help us with that."
Coding instructions should provide coders with clear guidelines that prevent most or all debate when deciding between two rating categories.

Lim's study in its present form does its best work in raising questions about fact checkers' use of rating systems.

Thursday, January 5, 2017

Evidence of PolitiFact's bias? The Paradox Project II

On Dec. 23, 2016, we published our review of the first part of Matthew Shapiro's evaluation of PolitiFact. This post will cover Shapiro's second installment in that series.

The second part of Shapiro's series showed little reliance on hard data in any of its three main sections.

Top Five Lies? Really?

Shapiro's first section identifies the top five lies for Trump and for Clinton and looks at how PolitiFact handled each list. Where do the lists of top lies come from? Shapiro evidently chose them himself. And Shapiro admits his process was subjective (bold emphasis added):

It is extremely hard to pin down exactly which facts PolitiFact declines to check. We could argue all day about individual articles, but how do you show bias in which statements they choose to evaluate? How do you look at the facts that weren’t checked?

Our first stab at this question came from asking which lies each candidate was famous for and checking to see how PolitiFact evaluated them. These are necessarily going to be somewhat subjective, but even so the results were instructive.

It seems to us that Shapiro leads off his second installment with facepalm material.

Is an analysis data-driven if you're looking only at data sifted through a subjective lens? No. Such an analysis gets its impetus from the view through the subjective lens, which leads to cherry-picked data. Shapiro's approach to the data in this case wallows in the same mud in which PolitiFact basks with its ubiquitous "report card" graphs. PolitiFact gives essentially the same excuse for its subjective approach that we see from Shapiro: Sure, it's not scientific, but we can still see something important in these numbers!

Shapiro offers his readers nothing to serve as a solid basis for accepting his conclusions based on the Trump and Clinton "top five lies."

Putting the best face on Shapiro's evidence, yes PolitiFact skews its story selection. And the most obvious problem from the skewing stems from PolitiFact generally ignoring the skew when it publishes its "report cards" and other presentations of its "Truth-O-Meter" data. Using PolitiFact's own bad approach against it might carry some poetic justice, but shouldn't we prefer solid reasoning in making our criticisms of PolitiFact?

The Rubio-Reid comparison

In Shapiro's second major section, he highlights the jaw-dropping disparity between PolitiFact's focus on Marco Rubio, starting with Rubio's 2010 candidacy for the Senate, and its focus on Sen. Harry Reid, the long-time senator who served as both majority leader and minority leader during PolitiFact's foray into political fact-checking.

Shapiro offers his readers no hint regarding the existence of PolitiFact Florida, the PolitiFact state franchise that accounts in large measure--if not entirely--for PolitiFact's disproportionate focus on Rubio. Was Shapiro aware of the different state franchises and how their existence (or non-existence) might skew his comparison?

We are left with an unfortunate dilemma: Either Shapiro knew of PolitiFact Florida and decided not to mention it to his readers, or else he failed to account for its existence in his analysis.


The Trump-Pence-Cruz muddle

Shapiro spends plenty of words and uses two pretty graphs in his third major section to tell us about something that he says seems important:
One thing you may have noticed through this series is that the charts and data we’ve culled show a stark delineation between how PolitiFact treats Republicans versus Democrats. The major exceptions to the rules we’ve identified in PolitiFact ratings and analytics have been Trump and Vice President-elect Mike Pence. These exceptions seem important. After all, who could more exemplify the Republican Party than the incoming president and vice president elect?
Shapiro refers to his observation that PolitiFact tends to use more words when grading the statements of Republicans. Except PolitiFact uses words economically for Trump and Pence.

What does it mean?

Shapiro concludes PolitiFact treats Trump like a Democrat. What does that mean, in turn, other than that PolitiFact does not use more words than average to justify its ratings of Trump (yes, we are emphasizing the circularity)?

Shapiro, so far as we can tell, does not offer up much of an answer. Note the conclusion of the third section, which also concludes Shapiro's second installment of his series:
In this context, PolitiFact’s analysis of Trump reinforces the idea that the media has [sic] called Republicans liars for so long and with such frequency the charge has lost it sting. PolitiFact treated Mitt Romney as a serial liar, fraud, and cheat. They attacked Rubio, Cruz, and Ryan frequently and often unfairly.

But they treated Trump like they do Democrats: their fact-checking was short, clean, and to the point. It dealt only with the facts at hand and sourced those facts as simply as possible. In short, they treated him like a Democrat who isn’t very careful with the truth.
The big takeaway is that PolitiFact's charge that Republicans are big fat liars doesn't carry the zing it once carried? But how would cutting down on the number of words restore the missing sting? Or are PolitiFact writers bowing to the inevitable? Why waste extra words making Trump look like a liar, when it's not going to work?

We just do not see anything in Shapiro's data that particularly recommends his hypothesis about the "crying wolf" syndrome.

An alternative hypothesis

We would suggest two factors that better explain PolitiFact's economy of words in rating Trump.

First, as Shapiro pointed out earlier in his analysis, PolitiFact fact-checked many of Trump's claims multiple times. Is it necessary to go to the same great lengths every time one is writing essentially the same story? No. The writer has the option of referring the reader to the earlier fact checks for the detailed explanation.

Second, PolitiFact plays to narratives. PolitiFact's reporters allow narrative to drive their thinking, including the idea that their audience shares their view of the narrative. Once PolitiFact has established its narrative identifying a Michele Bachmann, Sarah Palin or a Donald Trump as a stranger to the truth, the writers excuse themselves from spending words to establish the narrative from the ground up.

Maddeningly thin

Is it just us, or is Shapiro's glorious multi-part data extravaganza short on substance?

Let's hope future installments lead to something more substantial than what he has offered so far.

Friday, December 23, 2016

Evidence of PolitiFact's bias? The Paradox Project I

Matt Shapiro, a data analyst, started publishing a series of PolitiFact evaluations on Dec. 16, 2016. It appears at the Paradox Project website as well as at the Federalist.

Given our deep and abiding interest in the evidence showing PolitiFact's liberal bias, we cannot resist reviewing Shapiro's approach to the subject.

Shapiro's first installment focuses on truth averages and disparities in the lengths of fact checks.

Truth Averages

Shapiro looks at how various politicians compare using averaged "Truth-O-Meter" ratings:
We decided to start by ranking truth values to see how PolitiFact rates different individuals and aggregate groups on a truth scale. PolitiFact has 6 ratings: True, Mostly True, Half-True, Mostly False, False, and Pants on Fire. Giving each of these a value from 0 to 5, we can find an “average ruling” for each person and for groups of people.
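To make the arithmetic concrete, here is a minimal sketch of the kind of averaging Shapiro describes. This is our own illustration, not Shapiro's code; the direction of the 0-to-5 mapping and the sample ratings are assumptions for demonstration only.

```python
# Assumed mapping: higher = more truthful. Shapiro's excerpt assigns the
# six ratings values from 0 to 5 but does not say which end is which.
SCALE = {
    "Pants on Fire": 0, "False": 1, "Mostly False": 2,
    "Half True": 3, "Mostly True": 4, "True": 5,
}

def average_ruling(ratings):
    """Average Truth-O-Meter value for one person or group (hypothetical input)."""
    return sum(SCALE[r] for r in ratings) / len(ratings)

sample = ["False", "Half True", "Mostly True", "Pants on Fire", "Half True"]
print(round(average_ruling(sample), 2))  # 2.2 on the 0-5 scale
```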
Unlike many (not all) past attempts to produce "Truth-O-Meter" averages for politicians, Shapiro uses his averages to gain insight into PolitiFact:
Using averages alone, we already start to see some interesting patterns in the data. PolitiFact is much more likely to rate Republicans as their worst of the worst “Pants on Fire” rating, usually only reserved for when they feel a candidate is not only wrong, but lying in an aggressive and malicious way.
Using 2012 Republican presidential candidate Mitt Romney as his example, Shapiro suggests bias serves as the most reasonable explanation of the wide disparities.

We agree, noting that Shapiro's insight stems from the same type of inference we used in our ongoing study of PolitiFact's application of its "Pants on Fire" rating. But Shapiro disappointed us by defining the "Pants on Fire" rating differently than PolitiFact defines it. PolitiFact does not define a "Pants on Fire" statement as an aggressive or malicious lie. It is defined as "The statement is not accurate and makes a ridiculous claim."

As our study argued, the focus on the "Pants on Fire" rating serves as a useful measure of PolitiFact's bias given that PolitiFact offers nothing at all in its definitions to allow an objective distinction between "False" and "Pants on Fire." On the contrary, PolitiFact's principals on occasion confirm the arbitrary distinction between the two.

Shapiro's first evidence is pretty good, at least as an inference toward the best explanation. But it's been done before and with greater rigor.

Word Count

Shapiro says disparities in the word counts for PolitiFact fact checks offer an indication of PolitiFact's bias:
The most interesting metric we found when examining PolitiFact articles was word count. We found that word count was indicative of how much explaining a given article has to do in order to justify its rating.
While Shapiro offered plenty of evidence showing PolitiFact devotes more words to ratings of Republicans than to its ratings of Democrats, he gave little explanation supporting the inference that the disparities show an ideological bias.

While it makes intuitive sense that selection bias could lead toward spending more words on fact checks of Republicans, as when the fact checker gives greater scrutiny to a Republican's compound statement than to a Democrat's (recent example), we think Shapiro ought to craft a stronger case if he intends to change any minds with his research.


Summary

Shapiro's analysis based on rating averages suffers from the same types of problems we think we addressed with our "Pants on Fire" study: Poor averages for Republicans make a weak argument unless the analysis defuses the excuse that Republicans simply lie more.

As for Shapiro's examination of word counts, we certainly agree that the differences are so significant that they mean something. But Shapiro needs a stronger argument to convince skeptics that greater word length for fact checks of Republicans shows liberal bias.


Update Dec. 23, 2016: Made a few tweaks to the formatting and punctuation, as well as adding links to Shapiro's article at the Paradox Project and the Federalist (-bww).


Jeff adds:

I fail to see how Shapiro contributes anything worthwhile to the conversation, and he certainly doesn't offer anything new. Every criticism of PolitiFact in his piece has been written about in depth before and, in my view, written much better.

Shapiro's description of PolitiFact's "Pants on Fire" rating is flatly wrong. The definition is published at PolitiFact for anyone with an interest in looking it up. Shapiro asserts that a "Pants on Fire" rating "requires the statement to be both false and malicious" and is "usually only reserved for when they feel a candidate is not only wrong, but aggressively and maliciously lying." This is pure fiction. Whether this indicates sloppiness or laziness I'm not sure, but in any event mischaracterizing PolitiFact's ratings only gives fuel to PolitiFact's defenders. Shapiro's error at the very least shows an unfamiliarity with his subject.

Shapiro continues the terribly flawed tradition of some conservative outlets, including the Federalist, where his article was published, of trying to find clear evidence of bias simply by adding up PolitiFact's ratings. Someone with Shapiro's skills should know this is a dubious method.

In fact, he does know it:
This method assumes this or that article might have a problem, but you have to look at the “big picture” of dozens of fact-checks, which inevitably means glossing over the fact that biased details do not add up to an unbiased whole.
That's all well and good, but then Shapiro goes on to ask his readers to accept that exact same method for his own study. He even came up with his own chart that simply mirrors the same dishonest charts PolitiFact pushes.

At first blush, his "word count" theory seems novel and unique, but does it prove anything? If it is evidence of something, Shapiro failed to convince me. And I'm eager to believe it.

Unfortunately, it seems Shapiro assumes what his word count data is supposed to prove. Higher word counts do not necessarily show political bias. It's entirely plausible those extra words were the result of PolitiFact giving someone the benefit of the doubt, or granting extra space for a subject to explain themselves. Shapiro is making his assertion without offering evidence. It's true that he offered a few examples, but unless he scientifically surveyed the thousands of articles and confirmed the additional words are directly tied to justifying the rating, he could reasonably be accused of cherry-picking.

“When you’re explaining, you’re losing,” may well be a rock solid tenet of lawyers and politicians, but as data-based analysis it is unpersuasive.

We founded this website to promote and share the best criticisms of PolitiFact. While we doubt it matters to him or the Federalist, Shapiro's work fails to meet that standard. 

Shapiro offers nothing new and nothing better. This is a shame because, judging from his Twitter feed and previous writings, Shapiro is a very bright, thoughtful and clever person. We hope his next installments in this series do a better job of exposing PolitiFact's bias.

We've been criticizing and compiling quality critiques of PolitiFact for six years now. Documenting PolitiFact's bias is the main reason for this site's existence. We're exceedingly predisposed to accept and promote good evidence of PolitiFact's flaws.

If your data project can't convince two guys who started a website called PolitiFact Bias and who devote countless hours of their free time preaching to people that PolitiFact is biased, then perhaps your data project isn't very convincing.

Friday, January 2, 2015

PolitiFact still biased after all these years

It's time for the annual update to our ongoing research project measuring PolitiFact's bias in its application of "Pants on Fire" ratings.

In spite of its frequent publication and pimping of "report cards" showing how various persons and organizations rate on its trademarked "Truth-O-Meter," PolitiFact openly admits that its process for choosing which claims to rate is not scientific. PolitiFact maintains it is making no effort to figure out which persons or groups lie more, while at the same time publishing its stories in a way that encourages readers to draw such conclusions based on unscientific data.

We at PolitiFact Bias have helped pioneer the practice of using the fact checkers' rating systems to measure ideological bias in journalism. The "Pants on Fire" bias study was the first we published. It examines differences in how PolitiFact applies its "Pants on Fire" rating compared to its other rating for false statements, "False."

PolitiFact's liberal defenders have a ready defense when it turns out that PolitiFact shows much more enthusiasm for giving "Pants on Fire" ratings to Republicans than it does for Democrats. "Republicans simply tell the biggest lies," they say, or some such.

Not so fast, we say.

One insider critic of PolitiFact called its ratings "coin flips." Current PolitiFact editor Angie Drobnic Holan recently described the difference between a "False" rating and a "Pants on Fire" rating:
"We have definitions for all of our ratings. The definition for "False" is the statement is not accurate. The definition for "Pants on Fire" is the statement is not accurate and makes a ridiculous claim. So, we have a vote by the editors and the line between "False" and "Pants on Fire" is just, you know, sometimes we decide one way and sometimes decide the other."
If awarding a claim a "Pants on Fire" rating were based on something objective, we suggest that somebody at PolitiFact could describe how that objective process operates. We've yet to see such a description in over seven years of observation.

Given PolitiFact's disinclination or inability to reveal any objective basis for the distinction between "False" and "Pants on Fire" statements, we conclude that the difference between the two is substantially or perhaps entirely subjective. So we compare the percentage of all false statements ("False" plus "Pants on Fire") rated "Pants on Fire" by party. It reveals that PolitiFact, after 2007, has consistently given false statements by Republicans "Pants on Fire" ratings at a much higher rate than those from Democrats.

And that, we claim, is an objective measure of ideological bias. The "Pants on Fire" rating amounts to an opinion poll of PolitiFact journalists as to which false ratings are ridiculous. PolitiFact subjectively finds the false claims of Republicans more ridiculous than those of Democrats.

It's not the only measure of ideological bias, of course, and it is subject to some limitations that we've described in earlier publications.

The new data for 2014

Our findings for 2014 proved very consistent with PolitiFact's performance from 2008 through 2013.

The PoF bias number for 2014 was 1.95, meaning a Republican's false statement, as designated by PolitiFact, was 95 percent more likely to receive a "Pants on Fire" rating than a false statement from a Democrat. That's nearly twice as likely.

The selection bias number for 2014 was 2.56, meaning the total number of false statements, "False" and "Pants on Fire" ratings combined, was 156 percent higher for Republicans than for Democrats.

Overall, that means PolitiFact rates more Republican statements false and rates Republican statements "Pants on Fire" at a much higher rate than for Democrats.
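
For readers who want to see how those two numbers are computed, here is a minimal sketch based on our reading of the method described above. The Democratic counts match the 2014 figures given later in this post; the Republican counts are hypothetical placeholders chosen only so the arithmetic reproduces the published ratios.

```python
# Sketch of the two ratios as defined above. The GOP counts are hypothetical,
# chosen only to reproduce the published 2014 figures (PoF bias ~1.95,
# selection bias ~2.56); the Democratic counts (14 "False" + 2 "Pants on Fire")
# match the 16 false statements reported below.
def pof_bias(gop_false, gop_pof, dem_false, dem_pof):
    """Ratio of the GOP's "Pants on Fire" share of false ratings to the Democrats' share."""
    gop_share = gop_pof / (gop_false + gop_pof)
    dem_share = dem_pof / (dem_false + dem_pof)
    return gop_share / dem_share

def selection_bias(gop_false, gop_pof, dem_false, dem_pof):
    """Ratio of total false ratings ("False" plus "Pants on Fire"), GOP to Democrats."""
    return (gop_false + gop_pof) / (dem_false + dem_pof)

print(round(pof_bias(31, 10, 14, 2), 2))        # 1.95
print(round(selection_bias(31, 10, 14, 2), 2))  # 2.56
```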


PolitiFact continued its recent trend of finding Democrats ever more truthful. Democrats tied their all-time record with only 16 statements found false, and improved on their performance in 2013 with only two of those 16 rated "Pants on Fire."

Republicans had also shown a recent trend toward fewer false ratings since a high mark in 2011, but that reversed in 2014. Still, Republicans posted their lowest percentage of "Pants on Fire" ratings since 2008, the year for which PolitiFact won its 2009 Pulitzer Prize.

The 1.95 PoF Bias number for 2014 boosted PolitiFact's cumulative figure a few hundredths to 1.74. Since its inception, PolitiFact has been 74 percent more likely to rate a false statement as "Pants on Fire" if the claim comes from a Republican instead of a Democrat.

Note

The above figures include only elected or appointed persons or party organizations such as the Democratic National Committee. We call these "Group A" figures and consider them the most reliable group in this study for showing partisan bias at PolitiFact.

We'll soon publish our findings for PunditFact, PolitiFact's effort focused on pundits, which represents "Group B" data for purposes of our research.

Sunday, June 2, 2013

PolitiFact's Paradoxical Positions: The Conflicting Claims of Bill Adair

A press release for a study from George Mason University's Center for Media and Public Affairs (CMPA) last week garnered wide attention, though, hilariously, for the wrong reasons. In a case of comically misinterpreted data, the supposedly fact-based left took to the Twitterverse and other media outlets to trumpet what they thought the study said about Republican honesty, when in reality the study was attempting to quantify PolitiFact's liberal tilt. (We don't think it was successful, but more on that later.)

While distorted headlines may comfort some liberals in the warm blanket of confirmation bias, a study highlighting PolitiFact's harsher treatment of the GOP doesn't fit with PolitiFact's self-proclaimed non-partisanship. With headlines like "Study: Republicans are 'the less credible party'" and tweets claiming "Republicans lie 3x more often than Democrats, says @politifact" quickly being repeated, PolitiFact founder Bill Adair felt compelled to issue a response:
Actually, PolitiFact rates the factual accuracy of specific claims; we do not seek to measure which party tells more falsehoods.

The authors of the press release seem to have counted up a small number of our Truth-O-Meter ratings over a few months, and then drew their own conclusions.
We actually agree with Adair on a few points in his response, but there are still problems with it. Mark Hemingway was quick to spot one of them:
Adair's statement is lawyerly and bordering on dishonest. CMPA did not draw their own conclusions—they simply tallied up all of PolitiFact's ratings during a specific time period to get a representative sample. All the CMPA did was present relevant data, they most certainly did not "draw their own conclusions."
Hemingway is right. Apparently all CMPA did was tally up PolitiFact's ratings and report the results. On the one hand, that's a much less dubious version of fact-checking than anything Adair's ever done. But that's also one of the reasons we don't think it's valuable as a tool to measure PolitiFact's bias.

We've written before about why that method is so flawed. Simply adding up the ratings provides data that has too many plausible explanations to reach a conclusion. The most obvious problem is selection bias, which, by the way, Adair openly admits to in his response:
We are journalists, not social scientists. We select statements to fact-check based on our news judgment -- whether a statement is timely, provocative, whether it's been repeated and whether readers would wonder if it is true.
It's good to see Adair finally admit to selection bias, but we've spoken at length about the numerous other problems with this methodology. The bottom line is a tally of PolitiFact's ratings falls well short of proving ideological bias. A lopsided tally may be consistent with a political bias, but in and of itself it isn't unassailable evidence.

So we agree with Adair that a tally of ratings is a poor measure of which party tells more falsehoods. Of course, that means we also disagree with him.

What Adair forgot to mention in his response to the CMPA study is that Adair himself uses the exact same method to promote PolitiFact's ratings:
Instead of traditional articles, our Truth-O-Meter fact-checks are a new form that allows you to see a politician’s report card, to see all fact-checks on a subject or see all the Pants on Fire ratings. We can make larger journalistic points through the automatic tallies and summaries of our work.
You see, when PolitiFact tallies up its ratings, it provides "larger journalistic points" about its work. When academics do it in order to highlight PolitiFact's harsher treatment of the GOP, hey, PolitiFact is just checking specific claims--a tally doesn't tell you anything.
Granted, it's possible Adair misspoke that one time. It's also possible that in 2008 he wrote an article giving readers "tips and tricks on how to find what you want on PolitiFact":
•  Check a candidate's report card  — Our candidate pages...provide a helpful overview. Each one includes a running tally of their Truth-O-Meter ratings, the most recent claims and attacks they've made, attacks against them and their Flip-O-Meter ratings.
Helpful overview? OK. Well, at least Adair knows that a tally of PolitiFact's ratings can't reveal patterns and trends about a candidate's truth-telling. Right?
Collectively, those fact-checks have formed report cards for each candidate that reveal patterns and trends about their truth-telling.
Wait. What?
The PolitiFact report cards represent the candidates' career statistics, like the back of their baseball cards.
But, but, I thought PolitiFact only checked specific claims and tallies don't show insight about an overall record?
The tallies are not scientific, but they provide interesting insights into a candidate's overall record for accuracy.
Adair is pulling a sneaky trick. In one instance he claims a tally of PolitiFact's ratings reveals nothing, yet on multiple occasions he has implicitly told readers the opposite, claiming that tallying up the ratings is a valuable tool for determining a politician's honesty. Adair cannot simultaneously use that method as both a shield and a sword.

Adair's response to the CMPA study doesn't pass the sniff test. His dismissal of CMPA's method contradicts years' worth of his own lauding of that exact practice. When Adair stops pimping PolitiFact's report cards as if they're some kind of objective referendum on a politician's honesty, we'll be happy to give him kudos. Until then, he's little more than a disingenuous hack trying to have it both ways.



Afters:


At the very least I'll give Adair points for hypocritical chutzpah. The very same day he published his response whining that the study seemed "to have counted up a small number of our Truth-O-Meter ratings" and then "drew their own conclusions", PolitiFact published a recap of Michele Bachmann's record with this admonition:
"At this point, Bachmann's record on the Truth-O-Meter skews toward the red [False]." 

If 100 ratings is a "small number" to draw from for your own conclusions about PolitiFact, then how can 59 ratings tell you anything about Bachmann?

Additional Reading:

-Check out Bryan's piece over at Zebra Fact Check, PolitiFact's Artful Dodger, for a more straightforward analysis of both the CMPA press release and Adair's response.

-We always think Hemingway is worth reading and his take on Adair's response to the study is no exception. He has several good points we didn't mention here.

-You can review our empirical evidence highlighting PolitiFact's liberal bias. Our method avoids the flaws we've seen in so many other studies. You can find it here.

Wednesday, May 29, 2013

About that George Mason University study showing PolitiFact rates Republicans as less truthful ...

Just about every media outlet has flubbed the reporting on that study from George Mason University that says PolitiFact finds Republicans less truthful.

Most media outlets lean (or fall) toward the view that the study is saying something about the veracity of Republicans.  That's not the point of the study.  It's a media study.  It's studying PolitiFact, not politicians.   Conclusions from the study apply to PolitiFact, not to politicians.

What's the value of this study?  Not much at all.  It proves nothing, as John Sides points out, because so many different explanations fit the facts.  This study simply records what PolitiFact did with its ratings over a given time period.  So as much as we might like to see a study that quantifies PolitiFact's selection bias or outright spin in writing stories, this isn't it.  Our study probably remains the best of the lot when it comes to showing PolitiFact's bias.

We've run across a couple of media reports that get things mostly right:  Peter Roff at U.S. News & World Report and John Sides of Washington Monthly and "The Monkey Cage."

Roff doesn't clearly describe the point of the study except in terms of his own view (bold emphasis added):
The fact that, as the Lichter study shows, "A majority of Democratic statements (54 percent) were rated as mostly or entirely true, compared to only 18 percent of Republican statements," probably has more to do with how the statements were picked and the subjective bias of the fact checker involved than anything remotely empirical. Likewise, the fact that "a majority of Republican statements (52 percent) were rated as mostly or entirely false, compared to only 24 percent of Democratic statements" probably has more to do with spinning stories than it does with evaluating statements.
It's likely Roff is describing the purpose of the study.  He's not explaining anything new to the researchers (nor to Sides at The Monkey Cage). 

But, hilariously, the media have largely interpreted the GMU press release in terms of liberal orthodoxy.

The Poynter Institute, owner of the Tampa Bay Times and PolitiFact, ran the ambiguous headline "Study: PolitiFact finds Republicans ‘less trustworthy than Democrats’" and published comments from long-time PolitiFact editor Bill Adair to the effect that PolitiFact doesn't try to measure "which party tells more falsehoods."  Newsflash, Bill Adair:  That's not the point of the study.

Typically the media published semi-accurate accounts like the one at Poynter.  But a few others flatly interpreted the study as saying Republicans tell more falsehoods.

Ambiguous

The Huffington Post
Mediaite 



Evidence Republicans tell more falsehoods

The Raw Story
Talking Points Memo
Salon
PoliticsUSA


The two in the "ambiguous" category should write clarifications.  The four in the latter category should write corrections.

Wednesday, September 5, 2012

PFB Smackdown: Dylan Otto Krider vs. Jon Cassidy (Updated)

Dylan Otto Krider's orbit around truth-hustler Chris Mooney helped bring him into direct conflict with Ohio Watchdog writer Jon Cassidy this week.  We've featured some of Cassidy's PolitiFact critiques here at PolitiFact Bias, and our review of Cassidy's longer article for Human Events is pending.

Krider, like Mooney, believes that statistics from PolitiFact indicating that Republicans receive harsher ratings than Democrats help show that Republicans simply have a more cavalier attitude toward telling the truth.  Cassidy's Human Events story challenged that interpretation and prompted a story in reply from Krider.

Krider's central point carries partial merit.  He challenges Cassidy's headline with his own:  "Does PolitiFact say Republicans lie nine times more? Really?"

The answer to that question is "no," but Krider used specious reasoning to reach the conclusion.  Examples follow.

Wednesday, August 1, 2012

PFB Research: Yes, Virginia, PolitiFact is biased against Republicans

It was late during PolitiFact's first year, 2007, when I started to recognize its Achilles' heel.

PolitiFact was putting a rating on its fact checks.  PolitiFact was giving researchers a tool for measuring differing treatment of its subjects according to their politics.

I thought "They have to realize what they're doing.  Don't they?"

PolitiFact's been with us for over five years now, and the national operation appears to lack any inkling as to how its rulings can serve to expose journalistic bias.

It took me a few months after the first realization to develop a means of measuring PolitiFact's bias.  During the first few years I did a number of word studies that helped confirm an anti-conservative bias at PolitiFact.  But there was a problem with those studies.  PolitiFact had existed for such a short span that I had to work with a very limited pool of data.  The small pool of data limited the usefulness of the studies.  So I sat on them, waiting for PolitiFact to expand the database and dreaming up new research methods.

Early last year I realized that PolitiFact's own rating system created a natural opinion poll for PolitiFact journalists.  PolitiFact distinguishes between its "False" and "Pants on Fire" claims according to a single criterion:
FALSE – The statement is not accurate.

PANTS ON FIRE – The statement is not accurate and makes a ridiculous claim.

"False" and "Pants on Fire" claims are not accurate.  "Pants on Fire" claims are, in addition, ridiculous.  Ridiculous means subject to ridicule.  Ridicule is, of course, the specialty of objective journalists.

Eventually I moved the "Pants on Fire" study to the highest priority on the research list.  I did so for the simple reason that PolitiFact has provided a good amount of data for this study.  And those data, now that the study is finished, tell a story of a strong anti-GOP bias at PolitiFact's national operation.

We've posted a fairly detailed account of the study at Google Docs.  Anyone with a Google account can log on with Google Docs and read the detailed version.  With this post I'll offer a brief summary.

Republicans are about 74 percent more likely than Democrats to receive a "Pants on Fire" rating--74 percent more likely to speak not just the false but the ridiculous in the eyes of PolitiFact's national operation.  It's not an easy statistic to explain without liberal bias.

Republicans lie more?  That gets us to the conclusion that Republicans will receive more combined "False" and "Pants on Fire" ratings by percentage as well as in comparison to Democrats.  It doesn't explain "ridiculous" without some sort of objective measure.  So what's that measure?  I looked for it in the texts of the stories based on the idea that PolitiFact has the criteria but doesn't reveal them on its "Principles" page.  I couldn't find the criteria in the texts.

So, yes, PolitiFact national is biased against Republicans in its use of the "Pants on Fire" ratings.  That doesn't rule out other forms of bias, either.  We're still working on those, but we've got the data for "Pants on Fire" comparisons on the states lined up except for PolitiFact New Hampshire, which has a pretty thin record so far.


More to come on the research front.  Check our PFB research page.

Monday, July 23, 2012

Ohio Watchdog: "PolitiFact slams GOP spokeswoman for ‘literally true’ statement"

Jon Cassidy and Ohio Watchdog give us an eighth installment in their series on PolitiFact Ohio, this time examining PolitiFact's rating of the Ohio Republican Party and spokeswoman Izzy Santa (bold emphasis in the original):
Joe Guillen, the Cleveland Plain Dealer reporter writing for PolitiFact Ohio, was determined to find fault.

“The claim is literally true because it includes both Brown and his allies,” Guillen wrote, and he should have stopped right there. If it’s literally true, are we supposed to worry it might be figuratively untrue? It’s a number, not a simile.

It turns out that Guillen’s beef is that Santa’s declaration changed the subject.
It turns out that we have to rely on Guillen alone for the context of Santa's remarks.  Guillen insists that the context indicates Santa was talking about "outside money."  Part of Guillen's evidence for the PolitiFact story is Guillen's July 10 story for the Plain Dealer that likewise insists--based on a partial quotation and Guillen's paraphrase--that Santa was talking about "outside money":
Izzy Santa, a spokeswoman for the Ohio Republican Party, said Redfern’s criticisms are not credible because special interest groups supporting Brown “are plotting to spend over $13 million.”
Was Guillen's paraphrase justified?

Cassidy apparently has the text of the email Santa sent to reporters (reformatted quotation):
After Redfern’s July 10 press conference, she sent out an email to reporters:
Redfern is the least credible person to be commenting on outside spending when it comes to Ohio’s U.S. Senate race. Sherrod Brown and his special interest allies in Washington are plotting to spend over $13 million, with no end in sight. It’s clear that Brown and his supporters are having to spend this type of money because Brown’s out-of-touch record has exposed him to Ohioans as a 38-year politician and Washington insider who puts politics over people.

If the above represents the full context of Santa's response, then Guillen has misrepresented her.  Santa specifically wrote "Brown and his special interest allies," and Guillen sentence-shopped that into "special interest groups" minus Brown.  Guillen's paraphrase, in other words, changed Santa's meaning.  And Guillen proceeds to fact-check his paraphrase and blame it on Santa.

Guillen probably shouldn't expect more than a lump of coal for Christmas this year.

Rather than interpreting "Sherrod Brown and his special interest allies" contrary to its literal meaning, he should have inquired further as to how Santa justified calling Redfern "the least credible person" to comment on outside spending.

Visit Ohio Watchdog to read the whole of Cassidy's report.

Friday, July 20, 2012

The Republican Party of Virginia vs. PolitiFact Virginia

We promised to take a closer look at the Republican Party of Virginia's challenge to PolitiFact Virginia's objectivity.

The document works on some levels and not on others.  The best evidence it contains of PolitiFact Virginia's lack of objectivity is anecdotal and circumstantial.

Open Letter

The "open letter" section comes across well, but almost immediately afterward the document suffers from accuracy issues. 

Overall Proportions

The graph of rulings by number and by party is off, as pointed out with semi-daily clockwork accuracy by Karen Street: the "False" column for Democrats is too short.  The document uses the correct figure for "False" rulings in determining the proportion of "False" statements attributed to Republicans but incorrectly asserts that PolitiFact "ruled disproportionately against Republicans" in that category.  The 40 percent figure used in the comparison is disproportionately low compared to the 48 percent baseline derived from the listed numbers.

Individual Proportions

The criticism based on the individual breakdown mostly rings true.  Virginia has two Democrats in the U.S. Senate.  How does ex-senator George Allen warrant more fact checks than both combined?  Complaints about the attention on House Majority Leader Eric Cantor don't carry much weight.  Cantor serves as a major voice for congressional Republicans.

The Weekend Dump

While it served as an intriguing idea to criticize PolitiFact Virginia for the timing of its stories, we were instantly skeptical of this claim.  News dumps by the government are fundamentally different from the news reporting cycle, yet the GOP document relies on the comparison.  Here's the problem:  Dumping stories over the weekend can put them in the Sunday newspaper, which is often the most widely read edition of a major newspaper.  No case is made for the significance of a weekend dump for either a daily paper or an Internet news site.  If the Richmond Times-Dispatch literally publishes its stories most favorable to Republicans in its least popular editions, then the Virginia GOP may have a legitimate gripe, but that evidence does not appear in this document.

Case Studies 1 & 2

The case studies hit the mark more often than not, pointing out a good number of times where PolitiFact Virginia used stilted reasoning to reach conclusions unfavorable to Republicans.

Comparative Case Study


The argument from the case study makes PolitiFact Virginia's actions look fishy, but it's far from conclusive without better evidence.  It does contribute to the stated aim of the letter, however.  This section is at its best when criticizing individual rulings from PolitiFact Virginia.

Appendix starting on Page 54

From Page 54 through the end of the 86-page document, the Appendix simply gives a rundown of PolitiFact Virginia's ratings without any commentary or criticism.  It's hard to see the point, other than to help produce stories about an 86-page criticism of PolitiFact Virginia.  If that was the case, the mission was accomplished.

In summary, the report scores with the anecdotes and not much else.  The presentation softens the potential impact.

Wednesday, July 4, 2012

Ohio Watchdog: the "PolitiFact or Fiction" series

The Franklin Center for Government and Public Integrity is onto PolitiFact, in the form of Watchdog Ohio and its seven-part (so far?) series "PolitiFact or Fiction."

Each of the seven parts reviews a questionable ruling by PolitiFact Ohio, with the focus falling on the U.S. Senate contest between incumbent Democrat Sherrod Brown and Republican challenger Josh Mandel.

Part 1

The opening salvo by Jon Cassidy blasts PolitiFact Ohio for rating two nearly identical claims from Mandel differently.  One version received a "Half True" while the other garnered a "Mostly True."  Cassidy argues that both versions were true and explains the flaw in the reasoning PolitiFact used to justify its "Half True" rating in one case.

Part 2

This installment, again by Cassidy, criticizes PolitiFact's use of softball ratings in the context of its candidate report cards. The report cards graphically total PolitiFact's ratings for a given candidate and PolitiFact encourages readers to compare the report cards when deciding for whom to vote.

Cassidy:
Here’s Democrat Brown’s claim, which got a rating of “true”:
Rooting for the Red Sox is like rooting for the drug companies. I mean it’s like they have so much money, they buy championships against the working-class, middle-America Cleveland Indians. It’s just the way you are.
Yes, it's questionable whether Brown's statement is even worthy of a fact check.  Cassidy goes further, showing that PolitiFact's rating doesn't make any sense given Brown's failure to draw an apt analogy:
Brown didn’t pick a dominant pharmaceutical company for his comparison. He picked all pharmaceutical companies, as though we should root against an entire industry because of its size.
Touché.

Part 3

With the third installment Cassidy absolutely clobbers PolitiFact for a botched rating of Brown's claim regarding average student loan debt for Ohio graduates.  PolitiFact originally gave Brown a "True" rating but changed it to "Half True" after readers pointed out problems with the rating.  Brown claimed the average graduate owed about $27,000 on student loans, but that number applies only to students who had taken out student loans.  Cassidy did the calculations including the students without student loans:
Since 32 percent of Ohio graduates have no student debt at all, Brown overstated the average debt by half.
And that warrants a "Half True" on PolitiFact's "Truth-O-Meter."  Supposedly.

Part 4

As with Part 3, Part 4 hits PolitiFact Ohio for choosing a softball issue on which to grade Brown while also giving him an inflated grade.  Cassidy points out how PolitiFact's use of equivocal language gets Brown off the hook for using a very misleading statistic.  Brown ends up with a "Mostly True" from PolitiFact.

Part 5

The fifth installment adds another example in the same vein as the previous two.

PolitiFact uses equivocal language--well beyond using mere charitable interpretation--to help defend another of Brown's dubious claims.

Cassidy:
“You’d think it would be as easy as comparing the value of goods and services exported from the United States with those imported from other countries,” [PolitiFact Ohio's Stephen] Koff writes.

Note to Koff: it’s exactly that easy.
 Cassidy could have shown PolitiFact's spin more graphically than he did.

PolitiFact:
For January through September 2010, the most recent measurement available, the trade balance was a negative $379.1 billion. Assuming the monthly trends hold through December, this year’s annual trade deficit should reach $500 billion.

Divide that by the days of the year and you’d have a daily trade deficit of $1.37 billion a day. That’s 32 percent lower than Brown’s claim of $2 billion a day.
The accurate figure should always serve as the baseline.  PolitiFact uses Brown's number as the baseline instead, finding the real figure lower by 32 percent.  A 32 percent error doesn't sound so bad.  Use $1.37 billion as the baseline and it turns out that Brown's number represents an inflation of 46 percent.  In PolitiFact terms, that's "Mostly True."  PolitiFact tried to justify the rating based on higher trade deficits from prior to 2009.
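A quick calculation with the figures quoted above shows why the choice of baseline matters; this is our own sketch, not Cassidy's or PolitiFact's arithmetic:

```python
claimed = 2.0    # Brown's figure: $2 billion a day
actual = 1.37    # the figure PolitiFact derived: $1.37 billion a day

# PolitiFact's framing: how far the actual figure falls below Brown's claim.
shortfall = (claimed - actual) / claimed
print(f"{shortfall:.0%}")   # 32% -- the "32 percent lower" in the quote above

# Framing with the accurate figure as the baseline: how much Brown inflated it.
inflation = (claimed - actual) / actual
print(f"{inflation:.0%}")   # 46% -- Brown's number overstates the real figure
```

The same baseline error shows up again in the Part 7 example below, where the gap between 25 percent and 35 percent is measured against Brown's inflated figure instead of the correct one.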

Cassidy's right again that Brown benefited from grade inflation.

Part 6

With Part 6, Cassidy offers an example of PolitiFact Ohio nitpicking Mandel down to "Mostly True" for a plainly true statement.

Mandel claimed his election to the office of state treasurer came from a constituency where Democrats outnumber Republicans 2 to 1.  People would understand that to mean a count of voter registration records.

PolitiFact justified its ruling according to an expert's claim that voter registration numbers aren't "the best litmus test."  But think how much more context was missing from Sherrod Brown's statement in Part 5.  There's no comparison.

Part 7

In Part 7 Cassidy switches gears and defends one of Brown's statements from the truth-torturers at PolitiFact, but uses the minor slight as a contrast to yet another example of grade inflation.

Cassidy:
When Brown said “we buy 35 percent of all Chinese exports” and the actual number turned out to be 25 percent, they gave him a “half-true.”

We’re not sure which half. If you take out the middle four words, “we buy… Chinese imports” is true. You can argue Brown’s claim is close enough, or that it’s way off the mark, but whatever you call it, it isn’t half-true.
Looking at the original story we find PolitiFact again favoring Brown by using the errant figure as the baseline for comparison:
But while PolitiFact Ohio isn’t looking to play Gotcha!, a key tenet is that words matter. In this case, Brown’s number is nearly 30 percent greater than the correct figure.
Yes, words matter.  PolitiFact uses words that suggest the accurate figure was used as the baseline.  But the math tells a different story.  The 10 percentage point difference between 25 percent and 35 percent is nearly 30 percent of Brown's incorrect figure--the wrong one to use as the baseline.  In fact, Brown's number is 40 percent greater than the correct figure.

Summary

Overall, Cassidy did a fine job of assembling a set of PolitiFact Ohio's miscues and explaining where the ratings went wrong.  When PolitiFact botches the math on percentages, as we point out, it helps out Sherrod Brown all the more.

We appreciate Ohio Watchdog's contribution toward holding PolitiFact to account.

Wednesday, November 16, 2011

Media Trackers' PolitiFact series

Recently the media watchdog Media Trackers published a five-part series on PolitiFact.

Intro: Media Trackers Announces Series on PolitiFact
Part 1: PolitiFact and the Political Parties
Part 2: PolitiFact and Third-Party Organizations
Part 3: PolitiFact and Talk Radio
Part 4: PolitiFact and Governor Scott Walker
Part 5: Conclusion on PolitiFact 

We were unimpressed with the start of the series, but by the conclusion Media Trackers reached solid ground.


Part 1

Concern over the direction of the series started early:
On the whole, PolitiFact can’t be called completely biased towards conservatives or liberals. By Media Trackers count, PolitiFact has devoted nearly equal ink to conservative/Republican statements as to liberal/Democrat.
Comparing the number of stories devoted to each party tells nothing of ideological slant.  PolitiFact, if it was so inclined, could set a quota of 50 Republican stories and 50 Democrat stories and then proceed to write every single one of them with a liberal bias.

The remainder of Part 1 built a comparison between PolitiFact Wisconsin's treatment of state Republican Party statements and its treatment of statements from the state Democratic Party.  The number of statements involved was very small (11 combined), but the data suggested that PolitiFact fixed its editorial focus more on the Democratic Party and doled out harsher ratings.

Part 2

The second installment focused on the treatment of what Media Trackers calls "third party" organizations.  That is, political action groups not directly associated with the political parties.

Media Trackers noted a trend opposite the one from Part 1, though the two mini-studies share the problem of small sample size.  The conclusion of the second part found Media Trackers on top of a live spoor:
(D)oes PolitiFact lead readers to believe that conservative third-party organizations are less likely to tell the truth? How come the organization that spent the most on negative advertising in the recall elections had just one statement reviewed? Why more scrutiny to Pro-Life groups than Pro-Choice? Why were One Wisconsin Now’s statements reviewed four times more than the MacIver Institute? And what about statements on critical stories such as the denial by Citizen Action of Wisconsin of a connection to Wisconsin Jobs Now!? Why did PolitiFact choose not to tackle that statement?

No one expects PolitiFact to be the “be all end all” of watchdog journalism. But when they set themselves up as the judge and jury for all political statements in the state, one has to question how they select stories and why certain groups receive far and away more scrutiny than others.
In other words, the selection bias problem at PolitiFact is pretty obvious.

Part 3

Part three looked at PolitiFact Wisconsin's treatment of local radio personalities and established Media Trackers' modern day record for small sample size.  Conservative Charlie Sykes received two ratings while fellow conservative Mark Belling received one.  All three ratings were of the "Pants on Fire" variety.  Again, it smells like selection bias.

Part 4

The fourth installment examined PolitiFact's treatment of Republican governor Scott Walker.

Media Trackers forgave PolitiFact for rating a high number of Walker's statements because of his position of power.  Time will reveal the reliability of that measure.

The Media Trackers analysis noted that PolitiFact appeared to go a bit hard on Walker:
It seems that PolitiFact’s burden for truth is a bit higher for Governor Walker than it is for others. Given the “lightening rod” status of Walker, it certainly seems a bit disingenuous to call the Governor’s claim that Wisconsin is “broke” a false claim because he could just layoff workers and raise taxes to fix the deficit. And to say that Walker did not campaign on the reforms found in the Budget Repair Bill is also disingenuous given that the Governor spoke on a number of the reforms he sought, even though he did not spell out the eventual changes to collective bargaining.
The anecdotes can add up.

Part 5

Media Trackers seized on the common thread in its conclusion:
As Media Trackers has shown with this series, PolitiFact arbitrarily applies its scrutiny. Statements from the Democratic Party of Wisconsin have been evaluated seven times to the Republican’s two. Conservative Club For Growth have been examined seven times (three during the recall elections) while We Are Wisconsin was examined just once. Pro-Life groups have been scrutinized twice and never a Pro-Choice group.

Each of these political groups and officials are putting out an equal number of statements on a myriad of issues every day. If PolitiFact intends to claim the mantle of watchdog journalism by “calling balls and strikes” in the name of “public service,” PolitiFact needs more transparency about how they select their stories and a review of why certain groups and individuals receive more scrutiny than others.
Sample sizes aside, Media Trackers settles on a conclusion well supported by a huge mass of anecdotal material collected by others.  The final installment also refers to Eric Ostermeier's study pointing out PolitiFact's selection bias problem (highlighted at PolitiFact Bias here).

Though the Media Trackers conclusion about PolitiFact isn't exactly groundbreaking, the outfit deserves credit for overcoming its initial stumble, independently examining its local version of PolitiFact and supporting its conclusion with its own data.



Jeff adds: It's worth mentioning that PolitiFact Wisconsin is by far the most frequent target of accusations of right-wing bias. We've never found anything that sufficiently corroborates those claims and Media Trackers seems to do a capable job of dispelling that myth.

Thursday, February 10, 2011

Smart Politics on selection bias at PolitiFact

Research associate Eric Ostermeier of "Smart Politics" at the University of Minnesota has published a study of PolitiFact's content and finds a disturbing vein of partisanship in PolitiFact's selection bias:
Assuming for the purposes of this report that the grades assigned by PolitiFact are fair (though some would challenge this assumption), there has nonetheless been a great discrepancy regarding which political parties' officials and officeholders receive the top ratings and those that are accused of not telling the truth.
Ostermeier goes on to note PolitiFact's focus on untrue statements by the party out of power--perhaps unusual considering the press considers itself the watchdog of government.

Ostermeier's approach is exactly right in taking PolitiFact's grade groupings as an indication of PolitiFact's selection bias rather than as a measure of candidate truthfulness.

A key chart from the report:


As Hot Air's Ed Morrissey points out, it remains possible to interpret the data to mean that Republicans simply utter more false and "Pants on Fire" statements than Democrats.

But is that the best explanation in terms of the empirical data?

I would again call attention to the divide between the "False" and "Pants on Fire" ratings.  PolitiFact to date has offered no metric for objectively distinguishing one from the other.  This in turn strongly suggests that subjective judgment serves as the ultimate criterion for grading a statement as "Pants on Fire."

 ***

Importantly, Ostermeier also points out that PolitiFact's presentation encourages readers to use the ratings to draw conclusions about the subjects receiving the ratings:
The question is not whether PolitiFact will ultimately convert skeptics on the right that they do not have ulterior motives in the selection of what statements are rated, but whether the organization can give a convincing argument that either a) Republicans in fact do lie much more than Democrats, or b) if they do not, that it is immaterial that PolitiFact covers political discourse with a frame that suggests this is the case.



Feb 11, 2011: Corrected typo affecting the word "Minnesota." My apologies to that fine state.
June 14, 2018: Clarified the third paragraph (first after the quotation) by removing a "that" and an -ed suffix.