Wednesday, August 16, 2017

Speaking hearsay to power: Joy Reid & PolitiFact

Sometimes PolitiFact publishes fact-checking so irresponsible that we find it hard to believe that unconscious bias serves as an adequate explanation.

On the Aug. 13, 2017, edition of NBC's "Meet the Press," pundit Joy-Ann Reid directly implied that the Trump White House contains white nationalists. On Aug. 15, 2017, PolitiFact published a fact-check-style article without a "Truth-O-Meter" rating but with a "Share the Facts"/Google label conclusion judging her words "a bit too strong."

A reasonable person might translate "a bit too strong" into "Mostly True" or "Half True," but probably not "Mostly False," "False" or "Pants on Fire."

Hold on--Something's not quite alt-right

If the evidence supported something akin to a "Half True" or "Mostly True" rating, then we would not have much to complain about. But the ruling-not-ruling flies in the face of the evidence PolitiFact collected.

PolitiFact went to liberal experts (?) like the Southern Poverty Law Center and could not get a single one of them to declare evidence that one or more white nationalists populate the White House. The article was filled with things like this:
When we asked this question of several independent experts, they all agreed that none of the four were white nationalists themselves. However, several said that they had placed themselves uncomfortably close to white nationalists.
Are we to infer from PolitiFact's "a bit too strong" rating that guilt-by-association is fair game in fact-checking?

More to the point, is it okay to publicly accuse others of racism using guilt-by-association? That is what Reid did, and PolitiFact gave her the equivalent of a "Mostly True" rating.

PolitiFact even tried to downplay its own implicit interpretation ("Are there white nationalists in the White House?") of Reid's claim.

Hey! Let's fact check something Reid supposedly did not say!

PolitiFact flip-flops on whether Reid said there were white nationalists in the White House. PolitiFact's introductory paragraphs paint Reid as having "crystallized" the issue of white nationalists in the White House:
The "Unite the Right" march in Charlottesville has brought the issue of white nationalism to the top of the nation’s agenda -- specifically, whether white nationalists are part of the White House staff.

Remarks by liberal commentator Joy-Ann Reid on the Aug. 13 edition of NBC’s Meet the Press crystallized these questions.
Just a few paragraphs later, Reid's crystal has turned to ash (bold emphasis added):
It’s important to note that Reid did not explicitly accuse any of the four individuals she named of being white nationalists or alt-right members per se. But she suggested that the four were sympathetic to people who do fall into that category.

PolitiFact contradicts its own quotation of Reid (bold emphasis added):
"Who's writing the talking points that he was looking down and reading from? He has people like Stephen Miller, claimed as a mentee by Richard Spencer, who is an avowed open white nationalist. He has Steve Bannon, who's been sort of allowed to … meld into … the normalcy of a governmental employee, but who ran, which I reread today, the post that's still on their website, where they self-describe as the home of the alt-right.

What is the alt-right? It is a dressed-up term for white nationalism. They call themselves white identitarianism. They say that the tribalism that's sort of inherent in the human spirit ought to be also applied to white people.

That is who is in his government. Sebastian Gorka, who wore the medal of Vitézi Rend, a Nazi organization, being paid by the taxpayer, in the government of Donald Trump. The former Publius Decius blogger Michael Anton in the government.

He is surrounded by these people. It isn't both sides. He's in the White House -- they're in the White House with him."

We can't even imagine the level of expertise in mental gymnastics needed to deny the fact that Reid is saying the alt-right is a white nationalist group and is represented in the White House by the people she named. Nothing occurs in the context to diminish Reid's clear implication.

Shame on you, Joy Reid. Shame on you, PolitiFact.

Editor's note: It appears we published while attempting to preview this post. We're not aware of any significant change, other than adding an embedded URL, to the content since the original publication. Most or all the changes only affect HTML formatting.

Friday, August 11, 2017

National Review: "PolitiFact, Wrong Again on Health Care"

We've noted with interest Avik Roy's articles observing that the CBO's assessments of insurance losses under GOP health care reform bills attribute much of the projected loss to repeal of the individual mandate.

We anticipated this research would impact PolitiFact's fact-checking of GOP reform efforts, and National Review's Ramesh Ponnuru delivers the expected assessment in "PolitiFact, Wrong Again on Health Care."

When House Speaker Paul Ryan (R-Wis.) said most of those losing insurance under a GOP proposal were choosing not to buy something they did not want instead of having something taken away, PolitiFact rated his statement "Mostly False."

Ponnuru explains:
The root problem is that (PolitiFact's Jon) Greenberg assumed that the fines on people without insurance—Obamacare’s “individual mandate”—operate only in the market for individually purchased health insurance and that getting rid of them has no effect on Medicaid enrollment. So he thinks that all of the decline in Medicaid enrollment that CBO projects are the result of reforms to Medicaid that would have kept people who want it from getting it, and Ryan is exaggerating the effect of the fines.
Here's how Greenberg explained it in PolitiFact's fact check (bold emphasis added):
The biggest single chunk of savings under the Senate bill comes out of Medicaid. The CBO said that compared with the laws in place today, 15 million fewer people who need insurance would be able to get it through Medicaid or anywhere else.

Ryan’s answer flipped the CBO presentation. According to the CBO, the Senate bill’s impact on people who would get coverage through Medicaid is double that of people who buy on the insurance market. That’s where people make the kind of choices Ryan was talking about.
It looks like Ponnuru has Greenberg dead to rights.

We made the same assumption as Greenberg, though not in a published fact check, and it left us puzzling over how to reconcile the individual mandate's high impact on the CBO's predicted insurance losses for 2018 with its apparently shrinking impact by 2025.

Ponnuru's article helps explain the discrepancy, and his explanation exposes one of PolitiFact's claims as false: "The CBO said that compared with the laws in place today, 15 million fewer people who need insurance would be able to get it through Medicaid or anywhere else."

A decent slice of that 15 million, about 7 million by Ponnuru's estimate, will still maintain Medicaid eligibility. They simply won't sign up if not threatened with a fine.  But they can sign up after they fall ill and obtain retroactive coverage for up to three months. If that segment of the population needs Medicaid insurance, it can get Medicaid insurance, contrary to what PolitiFact claimed.

Yes, PolitiFact was wrong again.


Considering PolitiFact's penchant for declining to change its stories even after critics point out flaws, we wonder if PolitiFact will update its stories affected by the truths Ponnuru mentions.

Friday, August 4, 2017

PolitiFact editor: Principles developed "through sheer repetition"

PolitiFact editor Angie Drobnic Holan this week published her ruminations on PolitiFact's first 10 years of fact-checking.

Her section on the development of PolitiFact's principles drew our attention (bold emphasis added):
We also have made big strides in improving methodology, the system we use for researching, writing and editing thousands of fact-checks, more than 13,000 and counting at

Through sheer repetition, we’ve developed principles and checklists for fact-checking. PolitiFact’s Principles of the Truth-O-Meter describes in detail our approach to ensuring fairness and thoroughness. Our checklist includes contacting the person we’re fact-checking, searching fact-check archives, doing Google and database searches, consulting experts and authors, and then asking ourselves one more time what we’re missing.
The line to which we added bold emphasis doesn't really make any sense. One develops principles and checklists by experience and adaptation, not by "sheer repetition." Sheer repetition results in repeating exactly the procedures one started out with.

PolitiFact's definitions for its "Truth-O-Meter" ratings appear on the earliest Internet Archive page we could load: September 21, 2007, featuring the original definition of "Half True" that PolitiFact not-so-smoothly dumped around 2011. So the definitions do not appear to have resulted from "sheer repetition."

The likely truth is that PolitiFact developed an original set of principles based on what probably felt like careful consideration at the time. And as the organization encountered difficulties it tweaked its process.

Does the contemporary process count as "big strides" in improving PolitiFact's methodology?

We're not really seeing it.

When PolitiFact won its 2008 Pulitzer Prize for National Reporting, one of the stories among the 13 submitted was a "Mostly True" rating for Barack Obama's claim that his uncle had helped liberate Auschwitz. Auschwitz was liberated by the Soviet army. Mr. Obama's uncle was not part of the Soviet army. A false claim received a "Mostly True" rating.

This week, PolitiFact California issued a "Mostly True" rating for the claim a National Academy of Sciences study found undocumented immigrants commit fewer crimes than native-born Americans. If PolitiFact had looked at the claim in terms of raw numbers, it would likely prove true. Native-born Americans, after all, substantially outnumber undocumented immigrants. Such a comparison means very little, of course.

PolitiFact California simply overlooked the fact that the study looked at immigrants generally, not undocumented immigrants. We wish we were kidding. We are not kidding:
We started by checking out the 2015 National Academy of Sciences study Villaraigosa cited. It found: "Immigrants are in fact much less likely to commit crime than natives, and the presence of large numbers of immigrants seems to lower crime rates." The study added that "This disparity also holds for young men most likely to be undocumented immigrants: Mexican, Salvadoran, and Guatemalan men."
While the latter half of the paragraph hints at data specific to undocumented immigrants, we should note two important facts. First, measuring crime rates for Guatemalan immigrants in general serves as a poor method for gauging the criminality of undocumented Guatemalan immigrants. The same goes for immigrants from other nations. Second, PolitiFact California presents this information as though it came from the NAS study. In fact, the NAS study was summarizing the findings of a different study.

Neither study reached conclusions specific to undocumented immigrants, for neither used data permitting such conclusions.

Yet PolitiFact California found the following statement "Mostly True" (bold emphasis added):
"But going after the undocumented is not a crime strategy, when you look at the fact that the National Academy of Sciences in, I think it was November of 2015, the undocumented immigrants commit less crimes than the native born. That’s just a fact."
False statement, "Mostly True" rating.

If PolitiFact has learned anything over the past 10 years, it is that it can largely get away with passing incompetent fact-checking and subjective ratings on to its loyal readers.

Thursday, August 3, 2017

Newsbusters: "PolitiFact's Pretzel Twist for Democrat Gwen Moore: 'Mostly True,' But Not 'Literally Speaking'"

Tim Graham of Newsbusters scores a hit on PolitiFact Wisconsin with his Aug. 2, 2017, item on one of its ratings.

Rep. Gwen Moore (D-Wis.) said, according to PolitiFact, "If you’re killed at 31 years old like Dontre Hamilton, who was shot 14 times by police for resting on a park bench in Milwaukee, nursing home care is not your priority."

PolitiFact Wisconsin admitted Moore's statement was not literally true:
Literally speaking, Hamilton was not killed simply for resting on a bench. He was shot after striking an officer with the officer’s baton.
PolitiFact Wisconsin rated the false statement "Mostly True."

In PolitiFact Wisconsin's defense, it imagined into being a way of viewing Moore's statement as true:
But in making a rhetorical point, Moore is correct that Hamilton had done nothing to attract the attention of police but fall asleep in a park.
Bless PolitiFact's heart for relieving Moore of the responsibility for using appropriate words to make her supposed rhetorical point. Moore did not talk at all about simply "drawing the attention of police." She talked specifically about Hamilton being shot (14 times) "for resting on a park bench."

This case helps illustrate how PolitiFact's "star chamber" feels little constraint from its stated definitions for its "Truth-O-Meter" ratings. PolitiFact defines "Mostly True" as "The statement is accurate but needs clarification or additional information."

In what manner was Moore's statement accurate without PolitiFact rewriting it to focus on the way Hamilton drew the attention of police?

If PolitiFact's "Truth-O-Meter" definitions were worth anything, then no false statement like Moore's would ever receive a rating of "Mostly True" or better. But it happens often.

Is it any wonder that people do not trust mainstream media fact checkers like PolitiFact?

Wednesday, August 2, 2017

Attack of the PolitiFact Twitterbots?

Eagle-eyed PolitiFact Bias co-editor Jeff D. spotted an interesting pattern of Twitter support for PolitiFact's new PolitiFact Illinois franchise.
The pattern consists of tweets identical to the one above made from what appear to be Twitterbots. The Fiona Madura Twitter account doesn't look like the account of a real person. For example, Fiona has tweeted out non-original content nearly every hour of the past 24 as of this writing. And she appears to truly enjoy sharing a credit counseling advertisement.

What's a Twitterbot?

The New York Times offers an explanation:
Bots are small programs that typically perform repetitive, scripted tasks. On Twitter, they are used for a variety of purposes, including for help and harassment.
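The "repetitive, scripted task" the Times describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern, not the code behind the accounts Jeff spotted: the `post_next` function stands in for an authenticated Twitter API call, and the canned message is invented for the example.

```python
# Minimal sketch of a repetitive, scripted "bot" pattern: replay the same
# canned messages on a fixed cycle. A real Twitterbot would make an
# authenticated API call where this sketch merely appends to a log.

def make_bot(canned_messages):
    """Return a posting function that cycles through canned messages, plus its log."""
    state = {"i": 0}
    log = []

    def post_next():
        msg = canned_messages[state["i"] % len(canned_messages)]
        state["i"] += 1
        log.append(msg)  # a real bot would call the tweet-creation API here
        return msg

    return post_next, log

# A 'bot fed a single promotional line produces exactly the
# identical-tweet pattern described above.
post_next, log = make_bot(["Check out PolitiFact Illinois!"])
for _ in range(3):
    post_next()
```

Running the loop three times leaves three identical entries in the log, which is why a dozen accounts emitting the same tweet at once is such a strong 'bot signature.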
PolitiFact Illinois appears very popular with such apparent Twitterbot accounts.

@qodupoClar lists a location in Columbia, but has no followers, follows no one, and typically posts about Illinois.

Jeff screen captured approximately a dozen of these 'bot accounts tweeting out the same tweet about PolitiFact Illinois.

We're supposing that PolitiFact's partner for its PolitiFact Illinois project, The Better Government Association, has an established relationship with a Twitterbot opinion leader. Indeed, the IL Advocacy Network (@iladvnetwork), which looks like the corporate version of an empty suit, has tweeted about the BGA and PolitiFact Illinois in the same canned style as the 'bots mentioned above.

We're intrigued by the possibility that journalists--PolitiFact journalists!--are using 'bots to get their message out. Most observers regard the technique as at least somewhat deceptive.

Sunday, July 30, 2017

"Not a lot of reader confusion" IV

PolitiFact editor Angie Drobnic Holan has famously defended PolitiFact's various "report card" graphs by declaring she does not observe much reader confusion. Readers, Holan believes, realize that PolitiFact fact checkers are not social scientists. Equipped with that understanding, people presumably only draw reasonable conclusions from the graphed results of PolitiFact's "entirely subjective" trademarked "Truth-O-Meter" ratings.

What planet do PolitiFact fact checkers live on, we wonder?

We routinely see people using PolitiFact data as though it were derived scientifically. Jeff spotted a sensational example on Twitter.
Here's an enlarged view of the chart to which Jeff objected:

How did the chart measure the "actual honesty" of the four presidential primary candidates? Just in case it's hard to read, we'll tilt it 90 degrees and zoom in:

That's right. The chart uses PolitiFact's subjective ratings, despite the even more obvious problem of selection bias, to measure the candidates' "actual honesty."

The guy to whom Jeff replied, T. R. Ramachandran, runs a newsletter that gives us terrific (eh) information on politics. Comprehensive insights & stuff:

It's not plausible that the people who run PolitiFact do not realize that people use their offal (sic) data this way. The fact that PolitiFact resists adding a disclaimer to its ratings and charts leads us inexorably toward the conclusion that PolitiFact really doesn't mind misleading people. At least not to the point of adding the disclaimer that would fix the bulk of the problem.

Why not give this a try, PolitiFact? Hopefully it's not too truthful for you.

Tuesday, July 25, 2017

When PolitiFacts contradict

In PolitiFact's zeal to defend the Affordable Care Act from criticism, it contradicts itself.

In declaring it "False" that the ACA has entered a death spiral, PolitiFact Wisconsin identifies three components of a death spiral, one of which is rising premiums. PolitiFact affirms that premiums are rising, yet then states that none of the three conditions that make up a death spiral has occurred. We must conclude, via PolitiFact, that premiums are increasing and that premiums are not increasing.

In PolitiFact Wisconsin's own words (bold emphasis added):
Our rating

A death spiral is a health industry term for a cycle with three components — shrinking enrollment, healthy people leaving the system and rising premiums.

The latest data shows enrollment is increasing slightly and younger (typically healthier) people are signing up at the same rate as last year. And while premiums are increasing, that isn’t affecting the cost to most consumers due to built-in subsidies.

So none of the three criteria are met, much less all three.
It's not hard to fix. PolitiFact Wisconsin could alter its fact check to note that only one of the conditions of a death spiral is occurring across the board, but that subsidies insulate many customers from the effects of rising premiums.

Subsidizing the cost of buying insurance does not make the cost of the premiums shrink, exactly. Instead, it places part of the responsibility for paying on somebody else. When somebody else foots the bill, higher prices do not drive off consumers nearly as effectively.
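The arithmetic behind that point is simple. In an ACA-style subsidy, the enrollee's payment for the benchmark plan is capped at a share of income, so premium growth lands on the subsidy rather than the consumer. The dollar figures and cap percentage below are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical illustration of how an income-capped subsidy insulates a
# consumer from premium increases: the enrollee pays at most a fixed share
# of income, and the subsidy absorbs everything above that cap.

def consumer_cost(premium, income, cap_share):
    """Return (enrollee payment, subsidy) for an income-capped premium subsidy."""
    capped = income * cap_share
    subsidy = max(0.0, premium - capped)
    return premium - subsidy, subsidy

# Premium jumps 25 percent, from $4,800 to $6,000, but for an enrollee
# earning $30,000 with a 9.5 percent cap, the out-of-pocket cost is flat.
before = consumer_cost(4800, 30000, 0.095)  # pays 2850.0, subsidy 1950.0
after = consumer_cost(6000, 30000, 0.095)   # pays 2850.0, subsidy 3150.0
```

The enrollee's bill never moves; the subsidy swells from $1,950 to $3,150. That is why rising premiums fail to drive off subsidized customers the way they drive off those paying full price.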

We're still waiting for PolitiFact to recognize that the insurance market is not monolithic. When the rules of the ACA leave individual markets without any insurers because adverse selection has driven them out, the conditions of a death spiral have obtained in that market.

We also note, in the context of the ACA, that when the only people who elect to pay for insurance are those who are receiving subsidies, it is fair to say the share of the market that pays full price encountered a death spiral.