Thursday, March 17, 2022

PolitiFact's "Pants on Fire" bias in 2021

As we noted in our post about the "Pants on Fire" research for 2020, we have changed the way we do the research.

PolitiFact revamped its website in 2020, and the update made it next to impossible to reliably identify which of PolitiFact's various franchises was responsible for a given fact check. Instead of focusing on PolitiFact National, it now makes more sense to lump all of PolitiFact together. But the new approach has a drawback: the new evaluations represent an apples-to-oranges comparison with the old ones.

To deal with that problem, we went back and redid the research for PolitiFact's entire history, from 2007 on, using the new method.

With the research updated, we can now compare the results of the new method with those of the old.

Spoiler: Using the new method, PolitiFact in 2021 was 2.66 times more likely to give a claim it viewed as false a "Pants on Fire" rating when the claim came from a Republican than when it came from a Democrat. That's PolitiFact's third-highest bias figure of all time, though PolitiFact National, considered separately, has exceeded that figure at least three times.

 

Method Comparison: New vs. Old 

Our new graph shows the old method, running from 2007 through 2019, along with the new method graphed from 2007 through 2021.


The black line represents the old method. The red line represents the new.

The numbers represent what we term the "PoF bias number," an expression of how much more likely it is that PolitiFact will give a claim it regards as false a "Pants on Fire" rating for a Republican than for a Democrat. So, for 2009 under the old method (black line), the GOP was 3.14 times more likely to have one of its supposedly false statements rated "Pants on Fire."
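
For readers who want the formula spelled out, here is a minimal sketch in Python of how a number like that can be computed. The counts are invented for illustration, and the tallying reflects our reading of the method, not any code of PolitiFact's:

    # Sketch of the "PoF bias number" (illustrative counts only).
    # Of each party's claims rated False or "Pants on Fire," what share
    # got "Pants on Fire"? The bias number is the Republican share
    # divided by the Democratic share.
    def pof_bias(pof_r, false_r, pof_d, false_d):
        share_r = pof_r / (pof_r + false_r)
        share_d = pof_d / (pof_d + false_d)
        return share_r / share_d

    # Hypothetical year: 30 of 100 GOP false claims rated "Pants on Fire"
    # versus 10 of 100 for Democrats -> bias number 3.0
    print(pof_bias(30, 70, 10, 90))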

As our research has documented, PolitiFact has never offered an objective means of determining the ridiculousness of a claim it views as false. The "Pants on Fire" rating, to all appearances, has to qualify as a subjective judgment. In other words, the rating represents PolitiFact's opinion.

In 2017, under the old method, the bias number dropped to 0.89, meaning Democrats were somewhat more likely than Republicans to receive the harshest rating for a false claim--a bias against Democrats for that year at PolitiFact National. On average over time, of course, Republicans were significantly more likely to have their false claims regarded as "ridiculous" by PolitiFact.

Notably, the new method (red line) shows a moderating effect on PolitiFact's "Pants on Fire" bias from 2008 through 2014. The red line hovers near 1.00 for much of that stretch. After 2015 the red line tends to run higher than the black line, with the notable exception of 2019.

Explaining the Numbers?

We found two correlations that might help explain the patterns we see in the graphs.

PolitiFact changed over time. From 2007 through 2009, PolitiFact National did nearly every rating. Accordingly, the red and black lines track very closely for those years. But in 2010 PolitiFact added several franchises in addition to PolitiFact Florida. Those franchises served to moderate the PoF bias number until 2015, when we measured hardly any bias at all in the application of PolitiFact's harshest rating.

After 2015, a number of franchises cut way back on their contributions to the PolitiFact "database" and a number ceased operations altogether, such as PolitiFact New Jersey and PolitiFact Tennessee. And in 2016 PolitiFact added eight new state franchises (in alphabetical order): Arizona, Colorado, Illinois, Nevada, New York, North Carolina, Vermont and West Virginia.

The Franchise Shift

We made graphs to help illustrate the franchise shift. PolitiFact has had over 20 franchises over its history, so we divided the data into two time segments to aid the visualization.

First, the franchises from 2010 through 2015 (click for larger view):

We see Florida, Texas, Rhode Island and Wisconsin established as consistent contributors. Tennessee lasts one year. Ohio drops after four years. Oregon drops after five and New Jersey after three.

Next, the franchises from 2016 through 2022 (click for larger view):


We omitted minor contributions from PolitiFact Georgia in 2016 (12 fact checks) and 2017 (2). The orange bar near the top of 2016 represents six states combined (the individual contributions are hard to make out in the columns after 2016).

Note that the contributions are skinny, except for the one from Wisconsin. But even Wisconsin cut its output compared to the previous graph. That gives us one correlation suggesting that the mix of participating state franchises affected the bias measure.

But there's another correlation.

Republicans Lie More! Democrats Lie Less!

Liberals like to explain PolitiFact ratings that look bad for Republicans by saying that Republicans lie more. Seriously, they do that. But we found that spikes in the "Pants on Fire" bias measure--especially recent ones--were influenced by spikes in PolitiFact's reluctance to give Democrats a "Pants on Fire" rating.

That correlation popped out when we created a graph showing the percentage of false statements given the "Pants on Fire" rating by party. The graph for Republicans stays pretty steady between 20 and 30 percent. The graph for Democrats fluctuates wildly, and the recent spikes in the bias measure correlate with very low percentages of "Pants on Fire" ratings for Democrats.
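
For those who want the tally behind that graph, here is a minimal sketch, with made-up records standing in for the real rating data:

    # Per-party share of false claims rated "Pants on Fire," by year.
    # The records are invented for illustration.
    from collections import Counter

    records = [  # (year, party, rating)
        (2021, "R", "pants-fire"), (2021, "R", "false"), (2021, "R", "false"),
        (2021, "D", "pants-fire"), (2021, "D", "false"), (2021, "D", "false"),
        (2021, "D", "false"),
    ]

    counts = Counter(records)
    for year, party in sorted({(y, p) for y, p, _ in records}):
        pof = counts[(year, party, "pants-fire")]
        false = counts[(year, party, "false")]
        print(year, party, f"{100 * pof / (pof + false):.0f}% rated Pants on Fire")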


As is always the case, our findings support the hypothesis that PolitiFact applies its "Pants on Fire" rating subjectively, with Republicans receiving the bulk of the unfair harm. In this case, that harm comes through PolitiFact's avoidance of rating Democrat claims "Pants on Fire."

Do Democrats lie less? We don't really know. We suspect not, given the number of Democrat whoppers PolitiFact allows to escape its notice (such as this recent gem--transcript). We think PolitiFact's bias explains the numbers better than the idea that Democrats lie less.



Notes on the PolitiFact franchise numbers: As we noted at the outset, PolitiFact's revamped website made it all but impossible to identify which franchise was responsible for which fact check. So how did we get our numbers?

We mostly ignored tags such as "Texas" or "Wisconsin" and instead looked for the names of staffers connected to the partnered newsroom. This method is fallible because the new-look website departs from PolitiFact's old practice of listing every staffer who helped write or research an article; the new site lists only the first staffer from the old lists. And it has long been the case that staffers from PolitiFact National would publish fact checks under franchise banners. So our franchise fact check numbers are best taken as estimates.
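
To make the approach concrete, here is a minimal sketch of the byline matching we mean. The rosters are hypothetical placeholders, not real staff lists, and, as noted, matching only the first byline can misattribute work done by PolitiFact National staffers:

    # Attribute a fact check to a franchise via its first-listed byline.
    # Rosters here are hypothetical placeholders, not real staff lists.
    FRANCHISE_STAFF = {
        "PolitiFact Wisconsin": {"Example Reporter A", "Example Editor B"},
        "PolitiFact Texas": {"Example Reporter C"},
    }

    def attribute(first_byline):
        for franchise, staff in FRANCHISE_STAFF.items():
            if first_byline in staff:
                return franchise
        return "PolitiFact National (or unattributed)"

    print(attribute("Example Reporter C"))  # -> PolitiFact Texas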

Saturday, March 12, 2022

Yes, Virginia, state franchise "star chambers" are still a thing

As I noted over at Zebra Fact Check, PolitiFact is saying the people who decide a "Truth-O-Meter" rating have years of PolitiFact experience.

That doesn't appear to be true. In the past, PolitiFact admitted that state franchises were expected to supply their own board of editors to determine ratings, with PolitiFact supplying additional editors as needed.

It seems that's still the case. But where are the years of experience supposed to come from?



Thursday, March 3, 2022

A handful of baloney from PolitiFact

"At PolitiFact, we wrote "Principles of the Truth-O-Meter" to help guide our work. Words matter was the first principle."

--Neil Brown, Poynter Institute President 



"PolitiFact, thy name is Hypocrisy."

--PolitiFact Bias, longtime PolitiFact critics


What is a "handful"?

What is a "handful"? We could go to a dictionary for a definition. Or we could go to a higher source, such as the fact checkers at PolitiFact.

PolitiFact does the Youngkin handful

"Vaxxed and Relaxed" (@PorterPints) on March 1, 2022 highlighted a PolitiFact fact check of a "handful" claim made by Gov. Glenn Youngkin (R-Va.). Youngkin said Virginia is one of "a handful" of states that taxes veterans' retirement benefits.

In the text of the fact check, PolitiFact informs us that 15 out of 50 states is certainly more than a handful:

Virginia is one of three states that fully tax military pensions. Twelve more states tax the pensions at reduced rates, which is what Youngkin wants to do in Virginia.

All told, 15 states tax military pensions. That’s a minority, but certainly more than the "handful" Youngkin describes.

We rate Youngkin's claim Half True.

So, thanks to PolitiFact we know that the upper boundary for a "handful" is 14 or fewer, or perhaps 28% or less of a total if we use percentages.

PolitiFact does the Summers handful

Not long after "Vaxxed and Relaxed" tweeted about the Youngkin "handful," we found another PolitiFact fact check of a "handful" claim, this one from Democrat Paul Summers.

In this fact check, PolitiFact taught us that 34 out of 66, or perhaps a 51 percent majority, still qualifies as a "handful" (bold emphasis added):
Early in that year, two of the five incumbent Supreme Court justices stepped aside, reportedly after failing to gather enough political support among party activists on the Democratic Executive Committee. The Democratic nominees wound up being the only candidates on the ballot and were elected to full eight-year terms.

That was clearly a case where, as Summers states, a majority of the committee – 34 of the 66 members, or a "handful of party officials" if you will – was able to choose Supreme Court justices.

PolitiFact, then, has determined that a "handful" has an upper boundary of 14 or fewer and also an upper boundary of no less than 34. Or, by percentage, an upper boundary of 28% and an upper boundary of no less than 51%.
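
The arithmetic behind those percentages is simple enough to verify:

    # Implied "handful" boundaries from the two fact checks.
    # Youngkin: 15 of 50 states is "more than a handful" -> at most 14 of 50.
    # Summers: 34 of 66 members still counts as "a handful."
    youngkin_upper = 14 / 50    # 0.28 -> 28%
    summers_floor = 34 / 66     # ~0.515 -> at least 51%
    print(f"{youngkin_upper:.0%} upper bound vs. {summers_floor:.1%} still a 'handful'")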

In short, PolitiFact hilariously contradicted itself regarding the matter of the word "handful."

But just out of idle curiosity, what does the dictionary say?

Huh.

Moral of the story

It's folly for a fact checker to try to place definite numerical boundaries around indefinite terms. Claims that include such terms serve as poor fact check fodder.

Pre-publication update:

We note that Matt Palumbo somewhat pre-empted us on this story with a March 2, 2022 item. We will publish our version anyway, as the research locating the Summers "handful" fact check was original with us. We're entitled to publish on the website the same comparison we made on Twitter on March 1, 2022.