What tale does the "Truth-O-Meter" tell?
For years, we at PolitiFact Bias have argued that PolitiFact's "Truth-O-Meter" ratings serve poorly to tell us about the people and organizations PolitiFact rates on the meter. But the ratings may tell us quite a bit about the people who run PolitiFact.
To put this notion into practice, we devised a simple examination of the line of demarcation between two ratings, "False" and "Pants on Fire." PolitiFact offers no objective means of distinguishing between a "False" rating and a "Pants on Fire" rating. In fact, PolitiFact's founding editor, Bill Adair (now on staff at Duke University), described the decision about the ratings as "entirely subjective."
Angie Drobnic Holan, who took over for Adair in 2013 after Adair took the position at Duke, said "the line between 'False' and 'Pants on Fire' is just, you know, sometimes we decide one way and sometimes decide the other."
After searching in vain for dependable objective markers distinguishing the "Pants on Fire" rating from the "False" rating, we took PolitiFact at its word and assumed the difference between the two is subjective. We researched the way PolitiFact applied the two ratings as an expression of PolitiFact's opinion, reasoning that we could use the opinions to potentially detect PolitiFact's bias (details of how we sorted the data here).
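To make the statistic concrete before the year-by-year results, here is a minimal sketch of the calculation in Python. This is our own illustration, not code from the research itself, and the counts in the example are hypothetical:

```python
def pof_share(pof_count, false_count):
    """Share of a party's false claims ("False" plus "Pants on Fire")
    that received the harsher "Pants on Fire" rating."""
    return pof_count / (pof_count + false_count)

def more_likely(share_a, share_b):
    """How much more likely, in percent, party A's false claims are
    to be rated "Pants on Fire" than party B's."""
    return (share_a / share_b - 1) * 100

# Hypothetical counts: 20 GOP "Pants on Fire" out of 80 total false
# claims, and 10 Democratic "Pants on Fire" out of 60 total false claims.
gop = pof_share(20, 60)   # 20 / 80 = 0.25
dem = pof_share(10, 50)   # 10 / 60 = about 0.167
print(f"{more_likely(gop, dem):.0f} percent more likely")  # -> 50 percent
```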
Our earliest research showed that, after PolitiFact's first year, Republicans were much more likely than Democrats to have a false claim rated "Pants on Fire" instead of merely "False."
Adair has said that the "Pants on Fire" rating was treated as a lighthearted joke at first--see this rating of a claim by Democrat Joe Biden as an example--and that probably accounts for the unusual results from 2007.
In 2007, the lighthearted joke year, Democrats were 150 percent more likely to receive a "Pants on Fire" rating for a false statement.
In 2008, Republicans were 31 percent more likely to receive a "Pants on Fire" rating for a false statement.
In 2009, Republicans were 214 percent more likely to receive a "Pants on Fire" rating for a false statement (not a typo).
In 2010, Republicans were 175 percent more likely to receive a "Pants on Fire" rating for a false statement (again, not a typo).
We published our first version of this research in August 2011, based on PolitiFact's first four years of operation.
In 2011, Republicans were 57 percent more likely to receive a "Pants on Fire" rating for a false statement.
In 2012, Republicans were 125 percent more likely to receive a "Pants on Fire" rating for a false statement.
Early in 2013, PolitiFact announced Adair would leave the project that summer to take on his new job at Duke. Deputy editor Angie Drobnic Holan was named as Adair's replacement on Oct. 2, 2013.
In 2013, the transition year, Republicans were 24 percent more likely to receive a "Pants on Fire" rating for a false statement.
Had Republicans started to curb their appetite for telling outrageous falsehoods?
In 2014, Republicans were 95 percent more likely to receive a "Pants on Fire" rating for a false statement.
In 2015, Republicans were 2 percent (not a typo) more likely to receive a "Pants on Fire" rating for a false statement.
In 2016, Republicans were 17 percent more likely to receive a "Pants on Fire" rating for a false statement.
In 2017, Democrats were 13 percent more likely to receive a "Pants on Fire" rating for a false statement.
Had Republicans gotten better than Democrats at reining in their impulse to utter their false statements in a ridiculous form?
We suggest that our data through 2017 help confirm our hypothesis that the ratings tell us more about PolitiFact than they do about the politicians and organizations receiving the ratings.
Do the data give us trends in political lying, or separate journalistic trends for Adair and Holan?
We never made any attempt to keep our research secret from PolitiFact. From the first, we recognized that PolitiFact might encounter our work and change its practices to decrease or eliminate the appearance of bias from its application of the "Pants on Fire" rating. We did not worry about it, knowing that if PolitiFact corrected the problem, the correction itself would help confirm the problem had existed, regardless of what fixed it.
Has PolitiFact moderated or fixed the problem? Let's look at more numbers.
The "Pants on Fire" bias
From 2007 through 2012, PolitiFact under Adair graded 29.2 percent of its false claims from the GOP "Pants on Fire." For Democrats the figure was 16.1 percent.
From 2014 through 2017, PolitiFact under Holan graded 26 percent of its false claims from the GOP "Pants on Fire" and 21.9 percent for Democrats.
It follows that under Adair PolitiFact was 81.4 percent more likely to give a "Pants on Fire" rating to a false GOP statement than to one from a Democrat. That includes the anomalous 2007 data showing a strong "Pants on Fire" bias against Democrats.
Under Holan, PolitiFact was just 18.7 percent more likely to give a "Pants on Fire" rating to a false GOP statement than to one from a Democrat.
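For readers checking the math, those likelihood figures follow directly from the percentages above:

$$\frac{29.2}{16.1} - 1 \approx 0.814 \;(\text{Adair: } 81.4\%), \qquad \frac{26}{21.9} - 1 \approx 0.187 \;(\text{Holan: } 18.7\%)$$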
Story selection bias
While tracking the percentage of false ratings given a "Pants on Fire" rating, we naturally tracked the sheer number of times PolitiFact issued false ratings (either "False" or "Pants on Fire"). That figure speaks to PolitiFact's story selection.
From 2007 through 2012, PolitiFact under Adair found an average of 55.3 false claims per year from Republicans and 25.8 false claims per year from Democrats. That includes 2007, when PolitiFact was only active for part of the year.
From 2014 through 2017, PolitiFact under Holan found an average of 81 false claims per year from Republicans and 16 false claims per year from Democrats.
Under Holan, the annual finding of false claims from Republicans increased by about 46 percent. At the same time, PolitiFact's annual finding of false claims from Democrats fell by 38 percent.
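The same arithmetic, applied to the per-year averages above:

$$\frac{81}{55.3} - 1 \approx 0.46, \qquad \frac{16}{25.8} - 1 \approx -0.38$$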
Update Jan. 1, 2018: GOP false claims reached 90 by year's end.
One might excuse the increase for the GOP by pointing to staff increases. But the same reasoning serves poorly to explain the decrease for the Democrats. Likewise, increased lying by Republicans does not automatically mean Democrats decreased their lying.
Did the Democrats as a party tend strongly toward greater truth-telling, with the notable blemish of a greater tendency to go "Pants on Fire" when relating a falsehood?
Conclusion
We suggest that changes in PolitiFact's practices more easily make sense of these data than do substantial changes in the truth-telling patterns of the two major U.S. political parties. When Adair stepped down as PolitiFact's editor, a different person started running the "star chamber" meetings that decide the "Truth-O-Meter" ratings and a different set of editors voted on the outcomes.
Changing the group of people who decide subjective ratings will obviously have a substantial potential effect on the ratings.
We suggest that these results support the hypothesis that subjectivity plays a large role in PolitiFact's rating process. That conclusion should not surprise anyone who has paid attention to the way PolitiFact describes its rating process.
Has Holan cured PolitiFact of liberal bias?
We recognized from the first that the "Pants on Fire" bias served as only one measure of PolitiFact's ideological bias, and one that PolitiFact might address. Under Holan, the "Pants on Fire" bias serves poorly to demonstrate a clear ideological bias at PolitiFact.
On the other hand, PolitiFact continues to churn out anecdotal examples of biased work, and the difficulty Holan's PolitiFact has in finding false statements from Democrats compared to Adair's PolitiFact suggests our data simply show something of a trade-off.
When we started evaluating PolitiFact's state operations, such as PolitiFact Georgia, we noticed that lopsided numbers of false statements were often accompanied by a higher percentage of "Pants on Fire" statements from the party receiving many fewer false ratings. We hypothesized a compensatory bias might produce that effect when the fact checkers, consciously or unconsciously, encourage the appearance of fairness.
PolitiFact, after all, hardly needs to grade false Republican statements more harshly to support the narrative that Republicans lie more when it is finding, on average, five times more false statements from Republicans than Democrats.
We do not doubt that defenders of PolitiFact can dream up some manner of excusing PolitiFact based on the "fact" that Republicans lie more. But we deeply doubt that any such approach can find a basis in empirical evidence. Subjective rating systems do not count as empirical evidence of the rate of lying.
In addition to empirically justifying the increase in GOP falsehoods, defenders will need to explain the decrease in Democratic Party falsehoods implied in PolitiFact's ratings. Why, with a bigger staff, is PolitiFact having a more difficult time finding false statements from Democrats than it did when Adair was steering the ship?
Even if Truth-O-Meter data were presented as objective, the differing trends we see for PolitiFact under Adair and Holan would give reason to question the data's reliability.
Given PolitiFact's admissions that its story selection and ratings are substantially subjective, it makes sense for the objective researcher to first look to the most obvious explanation: PolitiFact bias.
Notes on the research method
Our research on the "Pants on Fire" bias looks at partisan elected officials or officeholders as well as candidates and campaign officials (including family members participating in the campaign). We exclude PolitiFact ratings where a Republican attacked a Republican or a Democrat attacked a Democrat, reasoning that such cases may muddy the waters in terms of ideological preference. The party-on-party exclusions occur rarely, however, and are unlikely to affect the overall picture much at all.
In the research, we use the term "false claims" to refer to claims PolitiFact rated either "False" or "Pants on Fire." We do not assume PolitiFact correctly judged the claims false.
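For concreteness, here is a sketch of the kind of filter the method implies, written in Python with the pandas library. The file name and column names are hypothetical; see the linked spreadsheet for the actual layout:

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
ratings = pd.read_csv("politifact_ratings.csv")

# Keep only "False" and "Pants on Fire" ratings of partisan figures:
# officeholders, candidates, and campaign officials.
false_claims = ratings[
    ratings["rating"].isin(["False", "Pants on Fire"])
    & ratings["speaker_role"].isin(["officeholder", "candidate", "campaign"])
]

# Exclude party-on-party attacks (Republican attacking Republican,
# Democrat attacking Democrat); claims with no partisan target are kept.
false_claims = false_claims[
    false_claims["speaker_party"] != false_claims["target_party"]
]

# "Pants on Fire" share of each party's false claims.
pof_share = false_claims.groupby("speaker_party")["rating"].apply(
    lambda r: (r == "Pants on Fire").mean()
)
print(pof_share)
```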
Find the data spreadsheet here.
Afters
We have completed alternative versions of our charts with the data for Donald Trump removed, and we'll publish those separately from this article at a later time. The number of false claims from Republicans went down from 2015 through 2017, though PolitiFact still issued far more false ratings to Republicans. The "Pants on Fire" percentages were almost identical except for 2016. With Trump removed from the data, the Republicans would have set an all-time record for either party for the lowest percentage of "Pants on Fire" claims.
These results remain consistent with our hypothesis that PolitiFact's "False" and "Pants on Fire" ratings reflect a high degree of subjectivity (with the former perhaps largely influenced by story selection bias).
Update Dec. 19, 2017: Added intended hyperlink to explanations of the research and the notable Biden "Pants on Fire."
Update Dec. 21, 2017: Corrected date of previous update (incorrectly said Dec. 12), and updated some numbers to reflect new PolitiFact ratings of Donald Trump through Dec. 21, 2017: "13 percent"=>"10 percent", "87.3 claims per year"=>"80.5 claims per year", "23.8"=>"26.1" and "8.7"=>"19.2". The original 87.3 and 23.8 figures were wrong for reasons apart from the new data. We will update the charts once the calendar year finishes out. Likewise the 8.7 figure derived in part from the incorrect 23.8.
Update Jan. 1, 2018: Changed "10 percent" back to "13 percent" to reflect updated data for the whole year. "80.5 claims per year" updated to "81 claims per year." We also changed "26.1" to "26" and "8.7" to "18.7." The latter change shows that we neglected to make the "8.7" to "19.2" change we announced in the description of the Dec. 21, 2017 update, for which we apologize.