Thursday, March 17, 2022

PolitiFact's "Pants on Fire" bias in 2021

As we noted in our post about the "Pants on Fire" research for 2020, we have changed the way we do the research.

PolitiFact revamped its website in 2020, and the update made it next to impossible to reliably identify which of PolitiFact's various franchises was responsible for a fact check. Instead of focusing on PolitiFact National, it makes more sense to lump all of PolitiFact together. But the new approach has a drawback: the new evaluations represent an apples-to-oranges comparison with the old ones.

To deal with that problem, we went back and redid the research on PolitiFact's entire history, from 2007 onward, using the new method.

With the research updated, we can now compare the results of the new method against those of the old one.

Spoiler: Using the new method, PolitiFact was 2.66 times more likely to rate a claim it viewed as false "Pants on Fire" when it came from a Republican than when it came from a Democrat. That's PolitiFact's third-highest bias figure of all time, though PolitiFact National, considered separately, has exceeded that figure at least three times.

 

Method Comparison: New vs. Old 

Our new graph shows the old method, running from 2007 through 2019, along with the new method graphed from 2007 through 2021.


The black line represents the old method. The red line represents the new.

The numbers represent what we term the "PoF bias number," an expression of how much more likely it is that PolitiFact will give a claim it regards as false a "Pants on Fire" rating for a Republican than for a Democrat. So, for 2009 under the old method (black line), the GOP was 3.14 times more likely to have one of its supposedly false statements rated "Pants on Fire."
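For readers who want the arithmetic spelled out, here is a minimal sketch of how a ratio like that can be computed. The counts below are hypothetical placeholders, not our actual data:

```python
def pof_bias(rep_pof, rep_false, dem_pof, dem_false):
    """Ratio of the GOP 'Pants on Fire' rate to the Democratic rate,
    computed among claims PolitiFact rated as false.

    rep_pof / rep_false = share of false GOP claims rated 'Pants on Fire'
    dem_pof / dem_false = the same share for Democrats
    """
    return (rep_pof / rep_false) / (dem_pof / dem_false)

# Hypothetical counts for one year: 30 of 120 false GOP claims and
# 8 of 85 false Democratic claims received the "Pants on Fire" rating.
print(round(pof_bias(30, 120, 8, 85), 2))  # -> 2.66
```

A value above 1.00 means Republicans were the more likely to draw PolitiFact's harshest rating; a value below 1.00 means Democrats were.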

As our research has documented, PolitiFact has never offered an objective means of determining the ridiculousness of a claim viewed as false. The "Pants on Fire" rating, to all appearances, has to qualify as a subjective judgment. In other words, the rating represents PolitiFact's opinion.

In 2017, under the old method, the bias number dropped to 0.89, showing a bias against Democrats for that year at PolitiFact National. On average over time, of course, Republicans were significantly more likely to have their false claims regarded as "ridiculous" by PolitiFact.

Notably, the new method (red line) shows a moderating effect on PolitiFact's "Pants on Fire" bias from 2008 through 2014. The red line hovers near 1.00 for much of that stretch. After 2015 the red line tends to run higher than the black line, with the notable exception of 2019.

Explaining the Numbers?

We found two correlations that might help explain the patterns we see in the graphs.

PolitiFact has changed over time. From 2007 through 2009, PolitiFact National did nearly every rating. Accordingly, the red and black lines track very closely for those years. But in 2010 PolitiFact added several franchises in addition to PolitiFact Florida. Those franchises served to moderate the PoF bias number through 2015, when we measured hardly any bias at all in the application of PolitiFact's harshest rating.

After 2015, a number of franchises cut way back on their contributions to the PolitiFact "database" and a number ceased operations altogether, such as PolitiFact New Jersey and PolitiFact Tennessee. And in 2016 PolitiFact added eight new state franchises (in alphabetical order): Arizona, Colorado, Illinois, Nevada, New York, North Carolina, Vermont and West Virginia.

The Franchise Shift

We made graphs to help illustrate the franchise shift. PolitiFact has had more than 20 franchises over its history, so we'll divide the graph into two time segments to aid the visualization.

First, the franchises from 2010 through 2015 (click for larger view):

We see Florida, Texas, Rhode Island and Wisconsin established as consistent contributors. Tennessee lasts one year. Ohio drops after four years. Oregon drops after five and New Jersey after three.

Next, the franchises from 2016 through 2022 (click for larger view):


I omitted minor contributions from PolitiFact Georgia in 2016 (12) and 2017 (2). The orange bar near the top of 2016 is six states combined (hard to make out in the columns after 2016).

Note that the contributions are skinny, except for the one from Wisconsin. But even Wisconsin cut its output compared to the previous graph. We have a correlation suggesting that the participation of different state franchises impacted the bias measure.

But there's another correlation.

Republicans Lie More! Democrats Lie Less!

Liberals like to explain PolitiFact ratings that look bad for Republicans by saying that Republicans lie more. Seriously, they do that. But we found that spikes in the "Pants on Fire" bias measure, especially recent ones, were driven by spikes in PolitiFact's reluctance to give Democrats a "Pants on Fire" rating.

That correlation popped out when we created a graph showing the percentage of false statements given the "Pants on Fire" rating, by party. The line for Republicans stays pretty steady between 20 and 30 percent. The line for Democrats fluctuates wildly, and the recent spikes in the bias measure correlate with very low percentages of "Pants on Fire" ratings for Democrats.


As is always the case, our findings support the hypothesis that PolitiFact applies its "Pants on Fire" rating subjectively, with Republicans receiving the bulk of the unfair harm. And in this case Republicans receive the bulk of the unfair harm through PolitiFact's avoidance of rating Democrat claims "Pants on Fire."

Do Democrats lie less? We don't really know. We suspect not, given the number of Democrat whoppers PolitiFact allows to escape its notice (such as this recent gem--transcript). We think PolitiFact's bias explains the numbers better than the idea Democrats lie less.



Notes on the PolitiFact franchise numbers: As we noted from the outset, PolitiFact's revamped website made it all but impossible to identify which franchise was responsible for which fact check. So how did we get our numbers?

We mostly ignored tags such as "Texas" or "Wisconsin" and looked for the names of staffers connected to the partnered newsroom. This was a fallible method because the new-look website departs from PolitiFact's old practice of listing every staffer who helped write or research an article. The new site lists only the first name from the old lists. And it has long been the case that staffers from PolitiFact National would publish fact checks under franchise banners. So our franchise fact check numbers are best taken as estimates.
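As an illustration only, the attribution logic amounts to something like the sketch below. The staffer names and the roster mapping are hypothetical, not PolitiFact's actual staff list:

```python
# Hypothetical roster mapping bylined staffers to partner newsrooms.
# The real research checked bylines against franchise newsroom rosters.
STAFF_TO_FRANCHISE = {
    "Jane Doe": "PolitiFact Wisconsin",  # hypothetical staffer
    "John Roe": "PolitiFact Texas",      # hypothetical staffer
}

def attribute(byline: str) -> str:
    """Attribute a fact check to a franchise via its first listed author.

    Fallible by design: the revamped site shows only the first author,
    and National staffers sometimes published under franchise banners.
    """
    first_author = byline.split(",")[0].strip()
    return STAFF_TO_FRANCHISE.get(first_author, "unattributed / National?")
```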

Saturday, March 12, 2022

Yes, Virginia, state franchise "star chambers" are still a thing

As I noted over at Zebra Fact Check, PolitiFact is saying the people who decide a "Truth-O-Meter" rating have years of PolitiFact experience.

That doesn't appear to be true. In the past, PolitiFact admitted that state franchises were expected to supply their own board of editors to determine ratings, with PolitiFact supplying additional editors as needed.

It seems that's still the case. But where are the years of experience supposed to come from?



Thursday, March 3, 2022

A handful of baloney from PolitiFact

"At PolitiFact, we wrote "Principles of the Truth-O-Meter" to help guide our work. Words matter was the first principle."

--Neil Brown, Poynter Institute President 



"PolitiFact, thy name is Hypocrisy."

--PolitiFact Bias, longtime PolitiFact critics


What is a "handful"?

What is a "handful"? We could go to a dictionary for a definition. Or we could go to a higher source, such as the fact checkers at PolitiFact.

PolitiFact does the Youngkin handful

"Vaxxed and Relaxed" (@PorterPints) on March 1, 2022 highlighted a PolitiFact fact check of a "handful" claim made by Gov. Glenn Youngkin (R-Va.). Youngkin said Virginia is one of "a handful" of states that taxes veterans' retirement benefits.

In the text of the fact check, PolitiFact informs us that 15 out of 50 states is certainly more than a handful:

Virginia is one of three states that fully tax military pensions. Twelve more states tax the pensions at reduced rates, which is what Youngkin wants to do in Virginia.

All told, 15 states tax military pensions. That’s a minority, but certainly more than the "handful" Youngkin describes.

We rate Youngkin's claim Half True.

So, thanks to PolitiFact we know that the upper boundary for a "handful" is 14 or fewer, or perhaps 28% or less of a total if we use percentages.

PolitiFact does the Summers handful

Not long after "Vaxxed and Relaxed" tweeted about the Youngkin "handful," we found another PolitiFact fact check of a "handful" claim, this one coming from Democrat Paul Summers.

In this fact check, PolitiFact taught us that 34 out of 66, a 51 percent majority, clearly falls within the boundaries of a "handful" (bold emphasis added):

Early in that year, two of the five incumbent Supreme Court justices stepped aside, reportedly after failing to gather enough political support among party activists on the Democratic Executive Committee. The Democratic nominees wound up being the only candidates on the ballot and were elected to full eight-year terms.

That was clearly a case where, as Summers states, a majority of the committee – 34 of the 66 members, or a "handful of party officials" if you will – was able to choose Supreme Court justices.

PolitiFact, then, has determined that a "handful" has an upper boundary of 14 or fewer and also an upper boundary of no less than 34. Or, by percentage, an upper boundary of 28% and an upper boundary of no less than 51%.
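The contradiction is plain from the numbers in the two rulings quoted above. A quick check of the percentages:

```python
# Youngkin: PolitiFact said 15 of 50 states is "certainly more than" a handful,
# putting the upper boundary at 14 states at most.
youngkin_cutoff = 14 / 50   # 0.28 -> a "handful" tops out at 28%

# Summers: PolitiFact accepted 34 of 66 committee members as "a handful."
summers_handful = 34 / 66   # ~0.515 -> a "handful" reaches 51.5%

print(f"{youngkin_cutoff:.0%} vs {summers_handful:.1%}")  # 28% vs 51.5%
```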

In short, PolitiFact hilariously contradicted itself regarding the matter of the word "handful."

But just out of idle curiosity, what does the dictionary say?

Huh.

Moral of the story

It's folly for a fact checker to try to place definite numerical boundaries around indefinite terms. Claims that include such terms serve as poor fact check fodder.

Pre-publication update:

We note that Matt Palumbo somewhat pre-empted us on this story with a March 2, 2022 item. We will publish our version anyway, as the research locating the Summers "handful" fact check was original with us. We're entitled to publish on the website the same comparison we made on Twitter on March 1, 2022.

Monday, January 31, 2022

PolitiFact doesn't know the meaning of 'and'?

 Well, well, well.

I've had to focus on things other than PolitiFact Bias posts lately, but PolitiFact and its owner, the Poynter Institute, have pulled me out of semi-retirement with an extraordinary clunker of a fact check.

Behold:


PolitiFact's fact check defies logic and establishes an early leader in the "Worst Fact Check of the Year" contest.

The "False" conclusion fails because it rests on a failure to understand the simple logic of "and." The conclusion would work for the logic of "or." But Hannity said "and" not "or."

If Bill says the square is green or red, then the square confirms Bill's statement if it is red. Likewise, the square confirms Bill's statement if it is green. Bill would be right either way.

But if Bill says the square is green and red, then it's a different ballgame.

In the second case, the square confirms Bill's statement only if it is both green and red. So a square that is half green and half red could confirm Bill's statement. A square that is simply green would contradict Bill's statement. The same goes for a square that is red and not green.
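A minimal sketch makes the point mechanically; the squares here are hypothetical stand-ins for the nominations at issue in the fact check:

```python
# Three hypothetical squares.
squares = [
    {"green": True,  "red": False},  # simply green
    {"green": False, "red": True},   # simply red
    {"green": True,  "red": True},   # half green, half red
]

for sq in squares:
    says_or  = sq["green"] or sq["red"]    # "green or red": satisfied by all three
    says_and = sq["green"] and sq["red"]   # "green and red": satisfied only by the last
    print(sq, "| or:", says_or, "| and:", says_and)
```

Only the two-color square makes the "and" claim true, which is exactly the distinction the fact check missed.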

This is extraordinarily basic logic and PolitiFact doesn't get it. 

 Observe how PolitiFact looks to prove Hannity false:

The claim ignored that both Trump and Reagan made similar vows to nominate women to the Supreme Court, then followed through on those promises. Other presidents in history have also considered race and religion as they have made their picks.

We rate Hannity’s claim False.

So, by analogy, PolitiFact says if Trump and Reagan both nominated green squares then both Trump and Reagan each nominated squares that were both green and red.

That's 2+2=5 territory.

Making this even more hilarious, PolitiFact's parent company, the esteemed Poynter Institute, chose to highlight this fact check at the main site. In the title, Poynter's headline writer substituted a comma for the "and," masking the error of logic for those who do not read the "fact check."


Sunday, December 19, 2021

BizPacReview: PolitiFact's 2021 LOTY 'just a liberal talking point all dressed up for prom'

Sierra Marlee, in an article published at BizPacReview, fairly summed up PolitiFact's ho-hum "Lie of the Year" for 2021.

PolitiFact, a fact-checking site that purports to be an unbiased source of information, has chosen its 2021 “Lie of the Year” and it’s a doozy.

The organization picked the one topic that could be found on every Democratic talking points sheet from January until December: The Capitol Hill riot. Specifically, they decided to select all of the claims and statements “downplaying the realities and significance of the Capitol insurrection.”

That opening description ended up under the title "PolitiFact’s lie of the year is just a liberal talking point all dressed up for prom."

That sounds about right. 

Marlee lets tweets from PolitiFact and its critics tell most of the story. Apparently that's a new journalistic genre.

Wednesday, December 15, 2021

Lindsey Graham out of context

Here we go again. PolitiFact has had quite a run in 2021 when it comes to taking Republicans' claims out of context.

This latest one forced me to set aside other projects that have crowded out PolitiFact Bias posts.


Did Sen. Graham say the CBO says the "Build Back Better" Act would amount to $3 trillion in deficit spending?

He did say that, but PolitiFact took it out of context.

PolitiFact explained to its readers that Graham was talking about a modified version of the "Build Back Better" Act (bold emphasis added):

Graham said the CBO predicted the Build Back Better Act would add $3 trillion to deficits over 10 years.

He’s referring to a bill that’s not the Build Back Better Act. At Graham’s request, the CBO looked at the impact of extending the temporary programs in the bill for a full 10 years. That is an assessment of a hypothetical situation, not the bill at hand. 

We rate this claim False.

What's the problem with PolitiFact's reasoning?

It was clear in context that Graham was talking about the CBO's scoring of permanent versions of the bill's temporary provisions. The Fox News interviewer, Chris Wallace, made that clear at the outset of the interview (bold for the portion PolitiFact may have relied on for its quotation of Graham):

WALLACE: You commissioned the Congressional Budget Office to project how much Build Back Better will cost over the 10 years, assuming that the programs that are in it, the spending programs that are in it, go on for 10 years and are not as in the case with child care just for one year.

GRAHAM: Right.

WALLACE: The CBO found, instead of adding $200 billion to the deficit, it will add $3 trillion to the deficit. But, Senator, the White House says that that's fake because if the programs are extended, they'll find ways to pay for them.

GRAHAM: Well, give me a plan to pay for them then. President Biden said the bill was fully -- fully paid for. Vice President Harris said it was paid for. Schumer, Pelosi, Secretary of Treasury Yellen. The CBO says it's not paid for. It's $3 trillion of deficit spending. It's not $1.75 trillion over 10 years, it's $4.9 trillion.

We doubt PolitiFact's headline version of Graham's statement qualifies as a proper application of AP style for quotations. But the main point is that, in context, Graham would be understood to be talking about the added cost of making the temporary measures permanent. And PolitiFact affirms what Graham says about that CBO projection.

So how does Graham warrant a "False" rating if he wasn't trying to fool people into thinking the new CBO scoring was for the version of the bill with the temporary provisions?

PolitiFact's Twist on the Committee for a Responsible Federal Budget

Also of note, PolitiFact's fact check takes the Committee for a Responsible Federal Budget out of context, using part of one of its articles to make Graham look out of line for citing the CBO's scoring of the bill with the temporary provisions made permanent:

Modified means the CBO scored a bill that’s different from the one on the table.

"These estimates do not reflect what is actually written in the Build Back Better Act nor its official cost for scorekeeping purposes," the deficit hawk group Committee for a Responsible Federal Budget wrote. "Lawmakers may choose to allow some provisions to expire, to extend some as written, and to modify some."

That's exactly what the Committee said, but it said it in the context of explaining the CBO's alternative scoring and comparing that scoring to the Committee's own alternative scoring of "Build Back Better" with its temporary provisions made permanent (highlights mark the portion PolitiFact cherry-picked):

Importantly, these estimates do not reflect what is actually written in the Build Back Better Act nor its official cost for scorekeeping purposes. Lawmakers may choose to allow some provisions to expire, to extend some as written, and to modify some. To offset the cost of extending these provisions as President Biden has committed, they would need to more than double current offsets in the bill. Extending programs without these offsets would substantially increase in the debt. $3 trillion of new debt would increase debt to over 116 percent of Gross Domestic Product in 2031, up from 107.5 percent under current law.

The Build Back Better Act relies on a substantial amount of short-term policies and arbitrary sunsets to reduce its cost, raising the possibility of deficit-financed extensions in future years. A more robust and fiscally responsible package would not rely on these gimmicks to achieve deficit neutrality.

The second paragraph in particular aligns well with Sen. Graham's criticism of "Build Back Better."

PolitiFact also hid that from its readers, along with the fact that Graham was obviously talking about the CBO's scoring of the temporary provisions made permanent.

Such fact-checking is no better than lying.

Thursday, September 2, 2021

MetaFact Group: PolitiFact “fact-checks” accurate reporting about study showing vaccines provide less immunity than prior infections

A relative newcomer to the fact-checking-the-fact-checkers club, MetaFact Group, today published an on-target item showing yet another example of a misleading PolitiFact technique.

It's PolitiFact's method of putting words, or at least an implied argument, into the mouth of another.

PolitiFact has rated as “half true” a headline by the Gateway Pundit that accurately summarizes the findings of a study by Maccabi Healthcare and Tel Aviv University, showing those vaccinated against COVID-19 were 13 times more likely to still be infected than those not vaccinated (but recovered from covid--Ed.). The study states “SARS-CoV-2-naïve vaccinees had a 13.06-fold (95% CI, 8.08 to 21.11) increased risk for breakthrough infection with the Delta variant compared to those previously infected.”

PolitiFact said the headline was misleading because the study had not yet passed peer review and the headline also supposedly implied that it was a good idea not to receive the vaccine (bold emphasis added):

The headline accurately reflects some of the study’s findings but ignores the study’s limitations, including that only one vaccine was tested, and that other studies have found that COVID-19 poses much greater danger to people who have not been vaccinated.

Without that context, the headline leaves the impression that it’s safer to get COVID-19 and hope to recover than to try to avoid it by getting vaccinated. That’s not true.

This is the same PolitiFact that recently told us that fossil fuel power plants kill millions of birds annually without informing its readers that the estimate was based almost entirely on predictions of how many birds climate change might kill in the future. The research paper averaged predicted future bird deaths out over a 40-year period. Because science. See more details at Bryan's Zebra Fact Check site.

It's okay for PolitiFact fact checkers to skimp on context. But it's not okay for you, me, or Gateway Pundit.

MetaFact Group also made a critical point about the legitimacy of the Gateway Pundit article:

(K)nowing that natural immunity maybe [sic] superior to vaccine-based immunity is a relevant point of discussion for a university considering whether it can mandate its students, faculty and staff take the COVID-19 vaccines.

Read the article at MetaFact Group and bookmark the site.




Tuesday, August 31, 2021

Search engine update!

Occasionally we get curious about how search engines are treating the PolitiFact Bias site. We have a number of SEO advantages, perhaps the strongest being the lack of advertising.

One of our advantages has been using the (still free!) Google Blogger platform. Once upon a time, using that platform gained an SEO advantage from the Google search engine. But times change, and Google's algorithms change with them.

Results are mine. Thanks to algorithms, your results may vary:

DuckDuckGo: No. 1, if we don't count the sponsored link. Otherwise No. 2.

Bing: No. 1.

Google: Our Twitter account is No. 4, thanks to Jeff's fine work.

Our Facebook page ends up at the bottom of the second page of hits.

This website comes in as the fourth hit on Google's third page of results.

We count this as a result of Google's successful effort to elevate "reliable" websites and downgrade dubious ones in its search results.

Even when the dubious ones are right and the "reliable" ones are wrong.