Thursday, December 31, 2020

PolitiFact North Carolina struggles with Twitter context

PolitiFact North Carolina's "Pants on Fire" rating awarded to the North Carolina Republican Party on Dec. 29, 2020 likely caps the data for our study of PolitiFact's "Pants on Fire" bias. And it gives us the opportunity to show again how PolitiFact struggles to properly apply interpretive principles when looking at Republican claims.

"Democrat Governor @RoyCooperNC has not left the Governor's Mansion since the start of the #COVID19 crisis," the party tweeted on Dec. 27.

Compared to similar claims and barbs, this particular tweet stood out.

That's PolitiFact's presentation of the GOP tweet. The only elaboration occurs in the summary section ("If Your Time Is Short") and later in the story when addressing the explanation from North Carolina GOP spokesperson Tim Wigginton.

Here's that section of the story (bold emphasis added):

Party spokesman Tim Wigginton told PolitiFact NC that the tweet is not meant to be taken literally.

"The tweet is meant metaphorically," Wigginton said, adding that it’s meant to critique the frequency of Cooper’s visits with business owners. He accused Cooper of living "in a bubble … instead of meeting with people devastated by his orders." 

The NC GOP’s tweet gave no indication that the party was calling on Cooper to meet with business owners.

The part in bold, the final line of that excerpt, is the type of line that attracts a fact checker of fact checkers.

Was there really no indication that the tweet was not intended literally?

It turns out that finding something that warranted widening the investigation took nothing more than clicking the link to the GOP tweet.

I don't see how to embed the tweet, but here's the image accompanying the tweet:


It should strike anyone, even a left-biased fact checker, that the comical "Where's Cooper" graphic makes a strange pairing with the claim that Cooper hasn't left the governor's mansion.

That alone isn't enough to take Wigginton at his word, perhaps, but as we noted it does point toward a need for more investigation.

It turns out that the NCRP tweeted out the image repeatedly in late December, accompanied by a number of statements.





Twitter counts as a new literary animal. Individual tweets are necessarily short on context. Twitter users may provide context in a number of ways, such as creating a thread of linked tweets or tweeting periodically on a theme. The NCRP "Where's Cooper?" series seems to qualify as the latter. The tweets are tied together contextually by the "Where's Cooper" image, which provides a comical and mocking approach to the series of tweets.

In short, it looks like Wigginton has support for his explanation, and the PolitiFact fact checker, Paul Specht, either didn't notice or did not think the context was important enough to share with his readers.

It's okay for PolitiFact to nitpick whether Cooper had truly refrained from leaving the governor's mansion. The GOP tweets may have left a false impression on that point. But PolitiFact just as surely left a false impression that Wigginton's explanation had no grounding in fact. Specht didn't even mention the Waldo parody image.

"Pants on Fire"?

Hyperbole. Does PolitiFact have a license for hyperbole?



Monday, December 21, 2020

PolitiFact's "Pants on Fire" bias in 2020 [Updated Dec. 31, 2020]

Readers, please do not neglect the Dec. 31, 2020 update near the bottom of the post. Thanks!

 

As we noted in a post one week ago, we changed how we're conducting the "Pants on Fire" bias study to include all of PolitiFact and not just PolitiFact National.

In the past, that might have meant a softening of the liberal bias we find at PolitiFact. But the data appear to show that the state franchises have shifted left to more closely match the leftward lean at PolitiFact National.

The study looks at the percentage of false statements (that's "False" plus "Pants on Fire") that PolitiFact labels "Pants on Fire" for Republicans and Democrats. We count candidates, partisan elected officials or partisan political appointees. Attorney General William Barr, for example, would count as a Republican while holding the AG office under a Republican administration but not as a civilian outside the government.

In 2020, through today, PolitiFact was 4.61 times more likely to rate a claim it regarded as false as "Pants on Fire" if it came from a Republican instead of a Democrat. [Note: 12-31-2020: These numbers may reflect a minor transcription error for the number of "Pants on Fire" claims and probably slightly exaggerate PolitiFact's left-leaning bias. See update below]

Some may think to ask, "Why would that mean PolitiFact is biased? Maybe Republicans just lie more."

It counts as bias because all the evidence shows PolitiFact's "Pants on Fire" rating is a subjective judgment. PolitiFact has never offered a justification for the distinction between the two ratings that runs any deeper than "because we felt like it."

PolitiFact defines a "False" rating as "The statement is not accurate."

PolitiFact defines a "Pants on Fire" rating as "The statement is not accurate and makes a ridiculous claim."

We posit that "ridiculous" is not an objective measure unless somebody makes the effort to define it in objective terms. We find an absence of that effort over the whole of PolitiFact's history.

We also note that PolitiFact's founding editor Bill Adair and current Editor-in-Chief Angie Drobnic Holan have made statements that appear to admit the "Pants on Fire" rating is subjective.

Unless an objective basis exists for the "Pants on Fire" rating, the "Republicans lie more" premise does not help explain it except in terms of confirmation bias.


Inside the Numbers

Republican claims PolitiFact regarded as false were rated "Pants on Fire" 28.8 percent of the time. That's only very slightly above the average PolitiFact National established between 2007 and 2019. So why was the bias measurement so much higher this year?

This: PolitiFact was extremely reluctant to give a "Pants on Fire" rating to a Democrat.

PolitiFact only issued three "Pants on Fire" ratings to Democrats in 2020. That's out of 48 claims it regarded as false, resulting in a figure of 6.25 percent. That's far below the PolitiFact National average for Democrats between 2007 and 2019 (about 17 percent).
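
For readers who want to check the arithmetic, here is a minimal sketch of how the comparison works, using only the figures reported in this post (we treat the Republican share as the reported 28.8 percent rather than recomputing it from raw counts):

```python
# Minimal sketch of the "Pants on Fire" (PoF) disparity calculation described above.
# Figures come from this post: Democrats drew 3 PoF ratings out of 48 claims
# PolitiFact regarded as false; the Republican share is the reported 28.8 percent.

dem_pof, dem_false_total = 3, 48       # PoF ratings / all claims regarded as false
dem_share = dem_pof / dem_false_total  # 0.0625, i.e. 6.25 percent

rep_share = 0.288                      # 28.8 percent, as reported above

ratio = rep_share / dem_share
print(f"Democratic PoF share: {dem_share:.2%}")   # 6.25%
print(f"Republican PoF share: {rep_share:.1%}")   # 28.8%
print(f"Ratio (R/D): {ratio:.2f}")                # roughly 4.6
```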


This Was Predictable

When PolitiFact unveiled and subsequently developed its "Truth-O-Meter" rating system, we saw the day coming when the ratings would clearly reveal bias. We expected to see similar claims receiving different ratings. And once we identified the very likely subjective nature of the "Pants on Fire" rating, we anticipated we would be able to produce solid evidence of bias using analysis of PolitiFact's ratings.

The problem for PolitiFact stems from the fact that it is impossible to assign ratings objectively and subjectively at the same time and in the same sense.

We have sent our 2020 findings to the International Fact-Checking Network, along with the suggestion that it examine the compatibility between subjective fact checker rating systems and its stipulation (2.1) requiring fact checkers to rate claims by the same standard no matter the source of the claim.

We think either the IFCN must erase that stipulation from its requirements or else fact checkers seeking IFCN verification need to abandon the use of subjective rating systems.


Update Dec. 22, 2020

A new "Pants on Fire" rating for Claudia Tenney (R-NY) brings the PoF percentage for Republicans up to 29.1 percent from the 28.8 percent reported above.

Here's a chart based on the updated data:



Update Dec. 31, 2020

PolitiFact North Carolina came through with a late "Pants on Fire" rating that changes our numbers yet again, and helped bring to our attention a transcription error that probably affected the percentages reported above. We added notes in the above text to highlight that imprecision.

Still, the late "Pants on Fire" rating to the North Carolina Republican Party, which will soon get its own write-up here at PFB, boosts the GOP percentage above what we have on our graph, to 29.2 percent. That means claims PolitiFact regarded as false were 4.58 times more likely to receive a "Pants on Fire" rating coming from a Republican than from a Democrat.


PolitiFact botches one in Marco Rubio's favor

Though PolitiFact Bias finds PolitiFact biased to the left, we also find that PolitiFact simply stinks at fact-checking. PolitiFact stinketh so much that its mistakes sometimes run against its biased tendencies to unfairly harm Democrats or unfairly help Republicans.

We ran across a clear case of the latter this week while putting together a spreadsheet collection of PolitiFact's "True" ratings. Sen. Marco Rubio (R-Fla.) received a "True" for a significantly flawed claim about Social Security:

Image capture from PolitiFact.com


Rubio was right that Social Security had to draw down the Trust Fund balance to pay benefits. But PolitiFact simply didn't bother to look at whether it was happening "for the first time."

It wasn't happening for the first time. It happened often during the 1970s. And in the 1970s Social Security was on-budget. That means that when people claim that Social Security has never contributed to the federal deficit they are quite clearly wrong as a matter of fact.

PolitiFact only looked at one government source in fact-checking Rubio. That source had nothing about whether the Trust Fund drawdown was happening for the first time.

A chart from the Committee for a Responsible Federal Budget makes the shortfall from the 1970s clear:

It's unlikely PolitiFact was trying to do Rubio a favor. Rather, the staff at PolitiFact probably thought they had a solid grasp of Social Security's financial history and simply did not question Rubio's claim because it affirmed that expectation.

We'll attach the "Left Jab" tag to this item even though it did not come from a left-leaning critic of PolitiFact.

Tuesday, December 15, 2020

Shark-jumping in 2020: YouTube age-restricts PolitiFact Bias' Hitler parody video

Years ago, we joined the trend of creating a parody video using a segment of the movie "Downfall." We dubbed our version "Hitler Finds Out No Pulitzer for PolitiFact in 2014."

Early this morning YouTube sent us an email informing us that it has slapped age restrictions on the video. Supposedly it fails some unnamed aspect of YouTube's community guidelines.

We wanted to let you know that our team has reviewed your content and we don't think it's in line with our Community Guidelines. As a result, we've age-restricted the following content:

Video: Hitler Learns PolitiFact has Failed to Win a Pulitzer Prize for the Fifth Straight Year

We haven't applied a strike to your channel, and your content is still live for some users on YouTube.

We're mystified as to what community guideline the video might transgress, unless there's something like "Thou Shalt Not Mock PolitiFact" in there somewhere.  Or maybe "shiznit" is on a list of forbidden words.

We still think it's funny.


Monday, December 14, 2020

Changes to the "Pants on Fire Bias" study

Back in 2011 PolitiFact Bias started a study of one measure of PolitiFact's bias. Our study took PolitiFact's "Truth-O-Meter" ratings and looked at the differential between its application of the "False" rating and the "Pants on Fire" rating.

We had noted that the only difference PolitiFact advertised between a "False" rating and a "Pants on Fire" rating is that PolitiFact considers the latter "ridiculous" but not the former. So a "False" statement is a false statement and a "Pants on Fire" statement is a statement that is both false and ridiculous.

Save us your comments like "Well, to me a 'Pants on Fire' means PolitiFact believes it was an intentional lie." That's not how PolitiFact has ever defined it.

We kept the study updated for every year from 2007 through 2019. For PolitiFact National, claims viewed as false by PolitiFact (rated either "False" or "Pants on Fire") were over 50 percent more likely to receive a "Pants on Fire" rating when coming from a Republican than from a Democrat.

Because "ridiculous" seems like a subjective measure and we could find no unspoken or hidden objective rationale justifying a "Pants on Fire" rating, we judged it was likely a subjective measure. So PolitiFact's preference for doling out the "Pants on Fire" rating to Repubicans instead of the "False" rating, compared to the same measure for Democrats, we argue counts as a legitimate measure of political bias.

PolitiFact's various state operations have varied considerably from PolitiFact National in terms of their "Pants on Fire" bias measure. We say the variations between them support the hypothesis that the ratings are subjective.

Change Time

In 2020, PolitiFact revamped its website. Instead of publishing material from the state franchises to a special domain for each state, PolitiFact changed to a tagging system.

That was not good for our study.

PolitiFact did not, for example, reserve the tag "Wisconsin" for fact checks stemming from the staff at PolitiFact Wisconsin. Any post or article with content dealing with Wisconsin might receive that tag. That means categorizing the data to match what we did from 2007 through 2019 would be a giant headache.

To keep things simple, from 2020 onward we'll be lumping all of PolitiFact into one hopper. We will no longer track the state operations separately. That makes things easy. We just have to review whether the claimant counts as a Republican or Democrat and log the rating appropriately.

We don't think the trends will change much for all of PolitiFact compared to what we measured for PolitiFact National over the years. The operations that were kinder to Republicans have either shifted left (bubble effect?) or dropped out. We expect Republicans to be about 50 percent more likely than Democrats to receive a "Pants on Fire" rating for a statement deemed false. But we'll see.

Future PoF bias charts will not be directly comparable to those from the past, like this one:

 

That will be the final series graph from the first run of the study.

Afters:

It's worth noting again, we suppose, that "Republicans lie more!" does not help explain the disparity in the graph if our hypothesis about the subjectivity of the ratings is correct. We have no good reason for thinking it is incorrect.

The graph measures percentages, not raw numbers of ratings. Democrats making one "Pants on Fire" statement to go with nine "False" statements end up at the same percentage as Republicans making 100 "Pants on Fire" statements to go with 900 "False" statements.
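
To make that concrete, here is a tiny sketch using the hypothetical counts above (illustrative numbers only, not actual PolitiFact data):

```python
# Hypothetical counts from the example above, not actual PolitiFact data.
dem = {"pants_on_fire": 1, "false": 9}
rep = {"pants_on_fire": 100, "false": 900}

for label, counts in (("Democrats", dem), ("Republicans", rep)):
    total_false = counts["pants_on_fire"] + counts["false"]
    share = counts["pants_on_fire"] / total_false
    print(f"{label}: {share:.0%} of false-rated claims drew 'Pants on Fire'")

# Both lines print 10%, despite the 100-fold difference in raw counts.
```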

Wednesday, December 9, 2020

Does PolitiFact use consistent standards? No.

PolitiFact misleads when it tells its readers "we are applying the same standards to both sides." PolitiFact's methodology leaves open myriad ways to put fingers on the scale. The scale has fingerprints all over it.

In this article we'll focus on yet another example of uneven application of standards. We'll look at two PolitiFact fact checks in the category of health care, one from a Republican and one from a Democrat.


The Republican



On Nov. 30, 2020 PolitiFact published a fact check of Sen. Kelly Loeffler (R-Ga.) looking at her claim that her healthcare plan would protect Americans with preexisting conditions. PolitiFact issued a "False" judgment on Loeffler's claim.

Why the "False" rating?

PolitiFact's subheading suggested a lack of proof led to the rating: "No proof that Kelly Loeffler will ensure protections for preexisting conditions." 

Aside from the lack of proof, PolitiFact noted that Loeffler's plan proposed using something like high risk pools to help people get their preexisting conditions covered. PolitiFact's "If Your Time is Short" story summary gave Loeffler credit for protections that fall short of those offered by the Affordable Care Act (second bullet):

If Your Time is short

  • The GOP Georgia senator’s new plan offers no details on how protections for people with preexisting health conditions would be ensured.

  • Two provisions in the plan indicate protections will be less than those provided by the Affordable Care Act, experts say.

 

Why did the protections in Loeffler's plan count for nothing on PolitiFact's "Truth-O-Meter"? The special insurance groups designed for those with preexisting conditions couldn't even budge the rating up to "Mostly False"? Did PolitiFact assume that when Loeffler said "Americans" she meant "all Americans"? If so, that rationale failed to find its way into the fact check.

The Democrat

People these days tend to know (using that term advisedly) that President Obama's "You can keep your plan" pledge received PolitiFact's "Lie of the Year" in 2013. They've tended to forget, with help from PolitiFact, that the claim never received a Truth-O-Meter rating below "Half True." PolitiFact rated Obama's claim twice, in 2009 and in 2012. Both times it received a "Half True" rating. 

We'll use the 2012 rating to see how PolitiFact's application of standards compared to the ones it used for Loeffler.


PolitiFact's summary paragraphs encapsulate its reasoning:

Obama has a reasonable point: His health care law does take pains to allow Americans to keep their health plan if they want to remain on it. But Obama suggests that keeping the insurance you like is guaranteed.

In reality, Americans are not simply able to keep their insurance through thick and thin. Even before the law has taken effect, the rate of forced plan-switching among policyholders every year is substantial, and the CBO figures suggest that the law could increase that rate, at least modestly, even if Americans on balance benefit from the law’s provisions. We rate Obama’s claim Half True.

PolitiFact says Obama has a reasonable point. PolitiFact made no effort in its fact check of Loeffler to detect whether she had a reasonable point in saying her health care plan offered protections for preexisting conditions. Is that the same standard?

PolitiFact says Obama "suggested" that keeping one's preferred insurance is guaranteed. That might parallel the assumption that Loeffler was saying her plan guarantees coverage for preexisting conditions. PolitiFact's ruling suggests it made that assumption, though the fact check does not say so specifically. But if Obama was similarly making a guarantee, how did he skate with a "Half True" instead of the "False" rating Loeffler's claim received? Is that the same standard?

And speaking of guarantees, remember that PolitiFact docked Loeffler for not having proof that her plan would cover (all?) those with preexisting conditions. What proof did Obama's plan offer? Apparently none, as PolitiFact noted a Congressional Budget Office assessment saying the ACA would accelerate forced churn of insurance plans. Is that the same standard?

We say the same standard did not apply to both. If Loeffler's "False" stems from her leading people to falsely believe her plan guarantees coverage for preexisting conditions then Obama's similar misleading would seem to equally earn a "False" rating. Or, both Loeffler and Obama could receive a "Half True" rating.

That they received quite different ratings shows the application of differing standards.

PFB predicts PolitiFact's "Lie of the Year" for 2020

Around this time of year PolitiFact typically publishes a story announcing candidates for its pick for the 2020 "Lie of the Year." That's always been an exercise in editorializing, and we've had fun trying to predict the winner.

Last year we didn't do so well!

This time we're not even going to let PolitiFact distract us with its list of candidates. Pretty often none of the candidates end up winning anyway.

I (Bryan) predict PolitiFact will choose "Falsehoods about voter fraud" or the like as its "Lie of the Year." Why? Because PolitiFact has bent over backward to try to whack every election fraud mole on the noggin. And while PolitiFact also focused on coronavirus whack-a-mole (I'd make that the second most likely outcome), the election fraud claims give it a chance to issue a clear parting shot at President Trump. They may even make the "Lie of the Year" specific to Trump or his campaign surrogates. But this year I suspect PolitiFact will want to go with the big tent and cover all sorts of voter fraud claims under its "Lie of the Year."

 

Jeff adds:

That's a solid pick.

Though I'd argue they'll hang that around Trump's neck (e.g. "Trump Campaign's false claims of election fraud").

Bonus points for mentioning how it was an attack on Democracy to question the integrity of our elections.




Thursday, December 3, 2020

TDS symptom: PolitiFact fact checks jokes

 Sure, President Trump says plenty of false things. He truly does.

But that's actually a trap for left-leaning fact checkers who pretend to be nonpartisan. They have a hard time judging when they go too far. Like when they fact check jokes:

PolitiFact's Nov. 1, 2020 item fact-checking President Trump found Trump's claim that his supporters were protecting challenger Joe Biden's campaign bus to be "Pants on Fire."

How do we know it was a joke?

We watched video of the Trump appearance where he made the statement. The claim comes in the midst of a segment of a speech done in the style of a classic stand-up comedy routine. Certainly Trump mixed in serious political claims, but many of the lines were intended to provoke laughter, and the one about protecting Biden's bus unquestionably drew laughter. In context, Trump was making a point about the enthusiasm of his supporters, and his story about cars and trucks surrounding the bus emphasized the number of vehicles involved.

PolitiFact played it completely straight:

"You see the way our people, they, you know, they were protecting his bus yesterday," Trump said Nov. 1 during a rally in Michigan. "Because they are nice. They had hundreds of cars."

The FBI’s San Antonio office said Nov. 1 that it is "aware of the incident and investigating."

Trump’s benevolent explanation lacks evidence.

How does a fact checker overlook/omit those contextual clues?

The left-leaning Huffington Post figured it out (bold emphasis added):

President Donald Trump on Sunday mockingly claimed that his supporters were “protecting” a campaign bus belonging to Democratic presidential nominee Joe Biden when a caravan of vehicles dangerously surrounded it on a Texas highway, leading to a vehicular collision.

“They were protecting their bus yesterday because they’re nice,” Trump said at a rally in Michigan to cheers, laughter and applause.

If PolitiFact noticed the audience laughing and intentionally suppressed evidence Trump was joking, then PolitiFact deceived its audience by omission.

When PolitiFact catches politicians doing that sort of thing, a "Half True" rating often results.

PolitiFact does not hold itself to the same standard it applies to Republican politicians.


Correction 12/13/2020: We misspelled "PolitiFact" on the title line, omitting the first of two i's.