Friday, September 8, 2017

PolitiFact's hypocrisy

PolitiFact manifests many examples of hypocrisy. This post will focus on just one.

On August 21, 2017, Speaker of the House Paul Ryan (R-Wis.) said America has dozens of counties with zero insurers. Ryan was talking about insurers committed to the exchanges serving individual market customers.

On August 24, 2017, PolitiFact published a fact check rating Ryan's claim "Pants on Fire." PolitiFact noted that Ryan had relied on outdated information to back his claim. PolitiFact said only one county was expected to risk having no insurer, and Ryan should have been aware of it:
Now technically, that report wasn’t published until two days after Ryan spoke. But the government had the information, and a day before Ryan spoke, Politico reported that just one county remained without a potential insurance carrier in 2018. The Kaiser Family Foundation published the same information the day of Ryan’s CNN town hall.

And a week earlier, the government said there were only two counties at risk of having no participating insurer. Ryan was way off no matter what.
Fast forward to Sept. 7, 2017. PolitiFact elects to republicize its fact check of Ryan, reinforcing its message that only one county remains at risk of not having any insurance provider available through the exchange. PolitiFact publicized it on Twitter:
And PolitiFact publicized it on Facebook as well.

The problem? On Sept. 6, 2017, the Kaiser Family Foundation updated its information to show 63 counties at risk of having no insurer on the exchange. The information in the story PolitiFact shared was outdated.

Paul Ryan got a "Pants on Fire" for peddling outdated information.

What does PolitiFact get for doing the same thing?

Another Pulitzer Prize?

Thursday, September 7, 2017

"Not a lot of reader confusion" V

When will PolitiFact give up its absurd notion that its graphs and tables do not mislead large numbers of people?

Joy Behar of ABC's "The View" recently challenged White House spokesperson Sarah Sanders on the basis that PolitiFact says 95 percent of President Donald Trump's statements are untrue:
Joy Behar asked Sanders about a PolitiFact report that found 95 percent of the president's statements were less than completely true.

"The problem with that, Joy, is that you are doing exactly what we're talking about," Sanders responded. "Pushing a false narrative."
Apparently Sanders was the only person on the set who challenged the false narrative Behar was peddling.

For those PolitiFact continues to mislead, we repeat: unless PolitiFact uses a representative sample of statements for its graphs and charts, the percentages tell you the opinions of PolitiFact editors about a select set of statements, not the percentage chance that a typical Trump statement is untrue.
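For readers who want to see the arithmetic behind that point, here is a minimal sketch of selection bias. Every number below is hypothetical, chosen only to illustrate the mechanism: if editors are more likely to pick false-sounding statements for checking, the "false" percentage in their sample tells you about their selection habits, not about the speaker.

```python
# Hypothetical illustration of selection bias in a fact-checker's "report card."
import random

random.seed(42)

# Suppose 40 percent of a speaker's statements are actually false.
population = [False] * 4000 + [True] * 6000  # True = accurate statement

# An editor three times as likely to select a false statement for checking
# produces a sample that looks nothing like the population.
sample = [s for s in population if random.random() < (0.30 if not s else 0.10)]

pop_false_rate = population.count(False) / len(population)
sample_false_rate = sample.count(False) / len(sample)

print(f"share false in population: {pop_false_rate:.0%}")
print(f"share false in sample:     {sample_false_rate:.0%}")
```

With these made-up inputs the sampled "false" rate lands well above the population's 40 percent, which is the whole objection to treating the report cards as measurements.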

(fingers crossed that the ABC embed works)

Not a lot of reader confusion? Seriously?

Give us a break, PolitiFact.

Wednesday, September 6, 2017

PolitiFact & Roy Moore: A smorgasbord of problems

When PolitiFact unpublished its Sept. 1, 2017 fact check of a claim attacking Alabama Republican Roy Moore, we had our red flag to look into the story. Taking down a published story itself runs against the current of journalistic ethics, so we decided to keep an eye on things to see what else might come of it.

We were rewarded with a smorgasbord of questionable actions by PolitiFact.

Publication and Unpublication

PolitiFact's Sept. 1, 2017 fact check found it "Mostly False" that Republican Roy Moore had taken $1 million from a charity he ran to supplement his pay as Chief Justice of the Supreme Court of Alabama.

We have yet to read the original fact check, but we know the summary thanks to PolitiFact's Twitter confession issued later on Sept. 1, 2017:

We tweeted criticism of PolitiFact for not making an archived version of the fact check immediately available and for not providing an explanation for readers who went looking for the story only to hit a 404 page-not-found error. We think readers should not have to rely on Twitter to know what is going on with the PolitiFact website.

John Kruzel takes tens of thousands of dollars from PolitiFact

(a brief lesson in misleading communications)

The way editors word a story's title, or even a subheading like the one above, makes a difference.

What business does John Kruzel have "taking" tens of thousands of dollars from PolitiFact? The answer is easy: Kruzel is an employee of PolitiFact, and PolitiFact pays Kruzel for his work. But we can make that perfectly ordinary and non-controversial relationship look suspicious with a subheading like the one above.

We have a parallel in the fact check of Roy Moore. Moore worked for the charity he ran and was paid for it. Note the title PolitiFact chose for its fact check:

Did Alabama Senate candidate Roy Moore take $1 million from a charity he ran?

"Mostly True." Hmmm.

Kruzel wrote the fact check we're discussing. He did not necessarily compose the title.

We think it's a bad idea for fact-checkers to engage in the same misleading modes of communication they ought to criticize and hold to account.

Semi-transparent Transparency

For an organization that advocates transparency, PolitiFact sure relishes its semi-transparency. On Sept. 5, 2017, PolitiFact published an explanation of its correction but rationed specifics (bold emphasis added in the second instance):
Correction: When we originally reported this fact-check on Sept. 1, we were unable to determine how the Senate Leadership Fund arrived at its figure of "over $1 million," and the group didn’t respond to our query. The evidence seemed to show a total of under $1 million for salary and other benefits. After publication, a spokesman for the group provided additional evidence showing Moore received compensation as a consultant and through an amended filing, bringing the total to more than $1 million. We have corrected our report, and we have changed the rating from Mostly False to Mostly True.
PolitiFact included a table in its fact check showing relevant information gleaned from tax documents. Two of the entries were marked as for consulting and as an amended filing, which we highlighted for our readers:

Combining the two totals gives us $177,500. Subtracting that figure from the total PolitiFact used in its corrected fact check, we end up with $853,375.

The Senate Leadership Fund PAC (Republican) was off by a measly 14.7 percent and got a "Mostly False" in PolitiFact's original fact check? PolitiFact often barely blinks over much larger errors than that.
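The arithmetic above can be restated in a few lines. The $1,030,875 total is our back-calculation from the post's own figures ($853,375 plus the $177,500 in highlighted entries); it is a reconstruction, not a number we pulled from PolitiFact directly.

```python
# Recomputing the error margin described above from the post's figures.
consulting_and_amended = 177_500   # the two highlighted table entries
corrected_total = 1_030_875        # back-calculated: 853,375 + 177,500
original_evidence = corrected_total - consulting_and_amended

claim = 1_000_000                  # the ad's "over $1 million"
error = (claim - original_evidence) / claim

print(f"evidence available at first publication: ${original_evidence:,}")
print(f"shortfall versus the $1 million claim:   {error:.1%}")
```

That shortfall works out to the 14.7 percent figure cited above.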

Take a claim by Sen. Brad Schneider (D-Ill.) from April 2017, for example. The fact check was published under the "PolitiFact Illinois" banner, but PolitiFact veterans Louis Jacobson and Angie Drobnic Holan did the writing and editing, respectively.

Schneider said that the solar industry accounts for three times the jobs of the entire coal mining industry. PolitiFact said the best data showed solar with a 2.3-to-1 job advantage over coal, terming 2.3 "just short of three-to-one" and rating Schneider's claim "Mostly True."

Schneider's claim was off by over 7 percent even if we credit 2.5 as 3 by rounding up.
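The generosity of that allowance is easy to check: even if we let any ratio of 2.5-to-1 or better count as "three times" by rounding up, the actual 2.3-to-1 ratio still falls short.

```python
# The rounding-generosity check described above.
actual = 2.3               # best-data solar-to-coal job ratio per PolitiFact
generous_threshold = 2.5   # lowest ratio that rounds up to "three times"

shortfall = (generous_threshold - actual) / generous_threshold
print(f"shortfall even after rounding up: {shortfall:.0%}")
```

The shortfall is 8 percent, consistent with the "over 7 percent" figure above.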

How could an error of under 15 percent have dropped the rating for the Senate Leadership Fund's claim all the way down to "Mostly False"?

We examine that issue next.

Compound Claim, Or Not?

PolitiFact recognizes in its statement of principles that sometimes claims have more than one part:
We sometimes rate compound statements that contain two or more factual assertions. In these cases, we rate the overall accuracy after looking at the individual pieces.
We note that if PolitiFact does not weight the individual pieces equally, we have yet another area where subjective judgment might color "Truth-O-Meter" ratings.

Perhaps this case qualifies as one of those subjectively skewed cases.

The ad attacking Moore looks like a clear compound claim. As PolitiFact puts it (bold emphasis added): "In addition to his compensation as a judge, 'Roy Moore and his wife (paid themselves) over $1 million from a charity they ran.'"

PolitiFact found the first part of the claim flatly false (bold emphasis added):
He began to draw a salary from the foundation in 2005, two years after his dismissal from the bench, according to the foundation’s IRS filings. So the suggestion he drew the two salaries concurrently is wrong.
Without the damning double dipping, the attack ad is a classic deluxe nothingburger with nothingfries and a super-sized nothingsoda.

Moore was ousted as Chief Justice of the Alabama Supreme Court, where he could have expected a salary of up to $196,183 per year by 2008. After that ouster, Moore was paid a little over $1 million over a nine-year period, counting his wife's salary for one year, averaging well under $150,000 per year. On what planet is that not a pay cut? With the facts exposed, the attack ad loses all coherence. Where is the "more" that serves as the theme of the ad?
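The pay-cut claim checks out on the numbers given. We use our reconstructed $1,030,875 total from the corrected fact check; any figure "a little over $1 million" yields the same conclusion.

```python
# Average annual pay implied by the figures above.
total_paid = 1_030_875     # over nine years, incl. one year of wife's salary
years = 9
judicial_salary = 196_183  # expected Chief Justice pay by 2008

average = total_paid / years
print(f"average charity pay: ${average:,.0f} per year")
assert average < 150_000 < judicial_salary  # well under the judicial salary
```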

We think the fact checkers lost track of the point of the ad somewhere along the line. If the ad was just about what Moore was paid for running his charity while not doing a different job at the same time, it's more neutral biography than attack ad. The main point of the attack ad was Moore supplementing his generous salary with money from running a charitable (not-for-profit) organization. Without that main point, virtually nothing remains.

PolitiFact covers itself with shame by failing to see the obvious. The original "Mostly False" rating fit the ad pretty well regardless of whether the ad correctly reported the amount of money Moore was paid for working at a not-for-profit organization.

Assuming PolitiFact did not confuse itself?

If PolitiFact denies making a mistake by losing track of the point of the ad, we have another case that helps amplify the point we made with our post on Sept. 1, 2017. In that post, we noted that PolitiFact graded one of Trump's claims as "False" based on not giving Trump credit for his underlying point.

PolitiFact does not address the "underlying point" of claims in a consistent manner.

In our current example, the attack ad on Roy Moore gets PolitiFact's seal of "Mostly" approval only by ignoring its underlying point. The ad actually misled in two ways, first by saying Moore was supplementing his income as a judge with income from his charity when the two sources of income were not concurrent, and second by reporting the charity income while downplaying the period of time over which it was spread. Despite the dual deceit, PolitiFact graded the claim "Mostly True."

"The decision about a Truth-O-Meter rating is entirely subjective"

Cases like this support our argument that PolitiFact tends to base its ratings on subjective judgments. This case also highlights a systemic failure of transparency at PolitiFact.

We will update this item if PolitiFact surprises us by running a second correction.


On top of the problems we described above, PolitiFact neglected to tag its revised/republished story with the "Corrections and Updates" tag it says it uses for all corrected or updated stories.

PolitiFact has a poor record of following this part of its corrections policy.

We note, however, that after we pointed out the problem via Twitter and email, PolitiFact fixed it without a long delay.

Friday, September 1, 2017

PolitiFact disallows Trump's underlying point?

PolitiFact's defenders sometimes opine that PolitiFact always justifies its rulings.

We accept that PolitiFact typically includes words in its fact checks intended to justify its rulings. But we detect bias in PolitiFact's inconsistent application of principles when it tries to justify its ratings.

Our example this time comes from an Aug. 31, 2017 fact check of President Donald Trump's claim that illegal border crossings have slowed by 78 percent.

Zebra Fact Check on Aug. 30, 2017 published criticisms of the way PolitiFact and the Washington Post Fact Checker handled this claim. PolitiFact's latest version corrects none of the specified problems, including the failure to attempt a reasonable fact check of how much of a drop in illegal Southwest border crossings Trump can claim.

As with its earlier fact check, PolitiFact offers examples of various cherry-picked statistics, implicitly demonstrating that cherry-picking leads to a divergent set of outcomes:
Here’s how the number of apprehensions have changed:
• From July 2016 to July 2017, down 46 percent;
• From June 2017 to July 2017, up 13 percent;
• From November 2016 to July 2017, down 61 percent.
As I explained over at Zebra Fact Check, a serious attempt to measure a drop in border crossings explains the use of a proxy measure (border apprehensions) and then picks a representative baseline against which to measure the change.

Zebra Fact Check calculated a 56 percent change. The Washington Post Fact Checker calculated 58 percent.

PolitiFact has yet to make a reasonable attempt to establish a representative baseline. The best attempt in the cherry-picked set we quoted was the comparison of July 2016 to July 2017, showing a 46 percent change. That comparison suffers from offering a narrow picture, made up of individual months separated by a year in time. Also, 2016 was not a typical year for border apprehensions under the Obama administration. But at least it compared Trump to Obama in an apples-to-apples sense.

PolitiFact's 46 percent figure ends up in the ballpark with the 58 percent figure.
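To make the baseline problem concrete, here is a small sketch. The monthly apprehension counts below are invented round numbers chosen only to roughly reproduce the percentage changes PolitiFact quoted; they are not actual CBP data.

```python
# Illustration of how the choice of baseline month drives the result.
# Counts are purely illustrative, NOT actual CBP apprehension data.
apprehensions = {
    "Jul 2016": 33_000,
    "Nov 2016": 46_000,
    "Jun 2017": 16_000,
    "Jul 2017": 18_000,
}

def pct_change(baseline: str, current: str) -> float:
    """Fractional change in apprehensions from the baseline month."""
    b, c = apprehensions[baseline], apprehensions[current]
    return (c - b) / b

for baseline in ("Jul 2016", "Jun 2017", "Nov 2016"):
    print(f"{baseline} -> Jul 2017: {pct_change(baseline, 'Jul 2017'):+.0%}")
```

Same data, three baselines, three wildly different answers: that is why picking a representative baseline is the whole job, and why quoting a grab bag of comparisons settles nothing.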

That's where the problem comes in.

Where is Trump's underlying point?

Back in 2008, PolitiFact Editor Bill Adair published an article trying to explain how PolitiFact treats numbers claims.
To assess the truth for a numbers claim, the biggest factor is the underlying message.
Adair used as one of his examples a claim by then-presidential candidate Barack Obama that his mixed-marriage birth was illegal in 12 states when he was born. PolitiFact found it was illegal in 22 states but rated the claim "Mostly True." That is the potential power of the underlying argument. Recast Obama's claim as "My mixed-marriage birth was illegal in many states when I was born" and it's essentially accurate, with the drop to merely "Mostly True" accounting for his underestimating the number of states by 10.

Does Trump have an underlying point that illegal Southwest border crossings have decreased under his watch?

It appears that he does, and it appears that PolitiFact offers him no credit for it. PolitiFact's fact check shows no interest at all in Trump's underlying point.


Correction Sept. 1, 2017: Swapped out "Did" for "Does" in the third-to-last paragraph.

Wednesday, August 30, 2017

PolitiFact: "There are no sharks swimming in the streets of Houston or anywhere else"

We were amused when we noticed PolitiFact inquiring about the faked image of a shark swimming on a Houston freeway thanks to Hurricane Harvey.

"Is it true PolitiFact wonders if that's true?" we wondered.

Our amusement multiplied when we saw the headline over PolitiFact's story, albeit not a scoop, exposing the fakery:

There are no sharks swimming in the streets of Houston or anywhere else

No, seriously. That is how PolitiFact titled its story.

No sharks swimming in the Mediterranean?

No sharks swimming in the Indian Ocean?

No sharks swimming in the Atlantic Ocean?

No sharks swimming in the Pacific Ocean?

Are these questions silly? Of course, until we consider that PolitiFact is the fact checker that will check a statement based on what it can be construed to mean, relying on the ability of some readers to construe creatively.

If we have "the streets of Houston" and "anywhere else," we don't see why we can't construe that to mean the Pacific Ocean, or even the shark exhibit at Sea World.

Still, we believe in charitable interpretation. What if PolitiFact was just trying to say that there were no sharks swimming "in the streets" in Houston or anywhere else?

Well, immediately we put that together with PolitiFact's "Half True" ruling on President Obama's claim that fish swim in the streets of Miami at high tide.

If fish can swim in the streets of Miami at high tide, then what about a little ol' bonnethead shark? Couldn't a Miami-area bonnethead put the lie to PolitiFact's claim that no sharks swim in any streets anywhere? And what about submerged cities such as Port Royal? What keeps the sharks away from those streets?

PolitiFact lives in a glass house, throwing stones.

Wednesday, August 16, 2017

Speaking hearsay to power: Joy Reid & PolitiFact

Sometimes PolitiFact publishes fact-checking so irresponsible that we find it hard to believe that unconscious bias serves as an adequate explanation.

On the Aug. 13, 2017 edition of NBC's "Meet the Press," pundit Joy-Ann Reid directly implied that the Trump White House contains white nationalists. On Aug. 15, 2017, PolitiFact published a fact-check style article without a "Truth-O-Meter" rating but with a "Share the Facts"/Google label conclusion judging her words "a bit too strong."

A reasonable person might translate "a bit too strong" into "Mostly True" or "Half True," but probably not "Mostly False," "False" or "Pants on Fire."

Hold on--Something's not quite alt-right

If the evidence supported something akin to a "Half True" or "Mostly True" rating, then we would not have much to complain about. But the ruling-not-ruling flies in the face of the evidence PolitiFact collected.

PolitiFact went to liberal experts (?) like the Southern Poverty Law Center and could not get a single one of them to declare evidence that one or more white nationalists populate the White House. The article was filled with things like this:
When we asked this question of several independent experts, they all agreed that none of the four were white nationalists themselves. However, several said that they had placed themselves uncomfortably close to white nationalists.
Are we to infer from PolitiFact's "a bit too strong" rating that guilt-by-association is fair game in fact-checking?

More to the point, is it okay to publicly accuse others of racism using guilt-by-association? That is what Reid did, and PolitiFact gave her the equivalent of a "Mostly True" rating.

PolitiFact even tried to downplay its own implicit interpretation ("Are there white nationalists in the White House?") of Reid's claim.

Hey! Let's fact check something Reid supposedly did not say!

PolitiFact flip-flops on whether Reid said there were white nationalists in the White House. PolitiFact's introductory paragraphs paint Reid as having "crystallized" the issue of White Nationalists in the White House:
The "Unite the Right" march in Charlottesville has brought the issue of white nationalism to the top of the nation’s agenda -- specifically, whether white nationalists are part of the White House staff.

Remarks by liberal commentator Joy-Ann Reid on the Aug. 13 edition of NBC’s Meet the Press crystallized these questions.

Just a few paragraphs later, Reid's crystal has turned to ash (bold emphasis added):
It’s important to note that Reid did not explicitly accuse any of the four individuals she named of being white nationalists or alt-right members per se. But she suggested that the four were sympathetic to people who do fall into that category.

PolitiFact contradicts its own quotation of Reid (bold emphasis added):
"Who's writing the talking points that he was looking down and reading from? He has people like Stephen Miller, claimed as a mentee by Richard Spencer, who is an avowed open white nationalist. He has Steve Bannon, who's been sort of allowed to … meld into … the normalcy of a governmental employee, but who ran, which I reread today, the post that's still on their website, where they self-describe as the home of the alt-right.

What is the alt-right? It is a dressed-up term for white nationalism. They call themselves white identitarianism. They say that the tribalism that's sort of inherent in the human spirit ought to be also applied to white people.

That is who is in his government. Sebastian Gorka, who wore the medal of Vitézi Rend, a Nazi organization, being paid by the taxpayer, in the government of Donald Trump. The former Publius Decius blogger Michael Anton in the government.

He is surrounded by these people. It isn't both sides. He's in the White House -- they're in the White House with him."

We can't even imagine the level of expertise in mental gymnastics needed to deny the fact that Reid is saying the alt-right is a white nationalist group and is represented in the White House by the people she named. Nothing occurs in the context to diminish Reid's clear implication.

Shame on you, Joy Reid. Shame on you, PolitiFact.

Editor's note: It appears we published while attempting to preview this post. We're not aware of any significant change, other than adding an embedded URL, to the content since the original publication. Most or all the changes only affect HTML formatting.

Update Aug. 23, 2017: Fixed formatting to make clear the "crystallized" line was a quotation from PolitiFact

Friday, August 11, 2017

National Review: "PolitiFact, Wrong Again on Health Care"

We've noted with interest Avik Roy's articles noting that the CBO's assessments of insurance loss from GOP health care reform bills place much of the responsibility on repeal of the individual mandate.

We anticipated this research would impact PolitiFact's fact-checking of GOP reform efforts, and National Review's Ramesh Ponnuru delivers the expected assessment in "PolitiFact, Wrong Again on Health Care."

When House Speaker Paul Ryan (R-Wis.) said most of those losing insurance under a GOP proposal were choosing not to buy something they did not want instead of having something taken away, PolitiFact rated his statement "Mostly False."

Ponnuru explains:
The root problem is that (PolitiFact's Jon) Greenberg assumed that the fines on people without insurance—Obamacare’s “individual mandate”—operate only in the market for individually purchased health insurance and that getting rid of them has no effect on Medicaid enrollment. So he thinks that all of the decline in Medicaid enrollment that CBO projects are the result of reforms to Medicaid that would have kept people who want it from getting it, and Ryan is exaggerating the effect of the fines.
Here's how Greenberg explained it in PolitiFact's fact check (bold emphasis added):
The biggest single chunk of savings under the Senate bill comes out of Medicaid. The CBO said that compared with the laws in place today, 15 million fewer people who need insurance would be able to get it through Medicaid or anywhere else.

Ryan’s answer flipped the CBO presentation. According to the CBO, the Senate bill’s impact on people who would get coverage through Medicaid is double that of people who buy on the insurance market. That’s where people make the kind of choices Ryan was talking about.
It looks like Ponnuru has Greenberg dead to rights.

We made the same assumption as Greenberg, though we never published it in a fact check, and it left us puzzling over how to reconcile the individual mandate's high impact on the CBO's predicted insurance loss for 2018 with its apparently shrinking impact in 2025.

Ponnuru's article helps explain the discrepancy, and his explanation exposes one of PolitiFact's claims as false: "The CBO said that compared with the laws in place today, 15 million fewer people who need insurance would be able to get it through Medicaid or anywhere else."

A decent slice of that 15 million, about 7 million by Ponnuru's estimate, will still maintain Medicaid eligibility. They simply won't sign up if not threatened with a fine.  But they can sign up after they fall ill and obtain retroactive coverage for up to three months. If that segment of the population needs Medicaid insurance, it can get Medicaid insurance, contrary to what PolitiFact claimed.

Yes, PolitiFact was wrong again.


Considering PolitiFact's penchant for declining to change its stories even after critics point out flaws, we wonder if PolitiFact will update its stories affected by the truths Ponnuru mentions.

Friday, August 4, 2017

PolitiFact editor: Principles developed "through sheer repetition"

PolitiFact editor Angie Drobnic Holan this week published her ruminations on PolitiFact's first 10 years of fact-checking.

Her section on the development of PolitiFact's principles drew our attention (bold emphasis added):
We also have made big strides in improving methodology, the system we use for researching, writing and editing thousands of fact-checks, more than 13,000 and counting at

Through sheer repetition, we’ve developed principles and checklists for fact-checking. PolitiFact’s Principles of the Truth-O-Meter describes in detail our approach to ensuring fairness and thoroughness. Our checklist includes contacting the person we’re fact-checking, searching fact-check archives, doing Google and database searches, consulting experts and authors, and then asking ourselves one more time what we’re missing.
The line to which we added bold emphasis doesn't really make any sense. One develops principles and checklists by experience and adaptation, not by "sheer repetition." Sheer repetition results in repeating exactly the procedures one started out with.

PolitiFact's definitions for its "Truth-O-Meter" ratings appear on the earliest Internet Archive page we could load: September 21, 2007, featuring the original definition of "Half True" that PolitiFact not-so-smoothly dumped around 2011. So the definitions do not appear to have resulted from "sheer repetition."

The likely truth is that PolitiFact developed an original set of principles based on what probably felt like careful consideration at the time. And as the organization encountered difficulties it tweaked its process.

Does the contemporary process count as "big strides" in improving PolitiFact's methodology?

We're not really seeing it.

When PolitiFact won its 2008 Pulitzer Prize for National Reporting, one of the stories among the 13 submitted was a "Mostly True" rating for Barack Obama's claim that his uncle had helped liberate Auschwitz. Auschwitz was liberated by the Soviet army. Mr. Obama's uncle was not part of the Soviet army. A false claim received a "Mostly True" rating.

This week, PolitiFact California issued a "Mostly True" rating for the claim a National Academy of Sciences study found undocumented immigrants commit fewer crimes than native-born Americans. If PolitiFact had looked at the claim in terms of raw numbers, it would likely prove true. Native-born Americans, after all, substantially outnumber undocumented immigrants. Such a comparison means very little, of course.
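The rates-versus-raw-numbers distinction is worth spelling out. The populations and crime counts below are invented purely for illustration; the point is that a much larger group can commit more crimes in total while committing fewer per capita.

```python
# Why raw counts mislead: invented illustrative figures, not real statistics.
native_born_pop, native_born_crimes = 270_000_000, 2_700_000
undocumented_pop, undocumented_crimes = 11_000_000, 77_000

native_rate = native_born_crimes / native_born_pop
undocumented_rate = undocumented_crimes / undocumented_pop

# Raw numbers: the smaller group trivially commits "fewer crimes."
assert undocumented_crimes < native_born_crimes
# Rates are the comparison that actually means something.
print(f"native-born rate:   {native_rate:.1%}")
print(f"undocumented rate:  {undocumented_rate:.1%}")
```

In this made-up example the smaller group has both fewer total crimes and a lower rate, but only the rate comparison supports any conclusion about relative criminality.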

PolitiFact California simply overlooked the fact that the study looked at immigrants generally, not undocumented immigrants. We wish we were kidding. We are not kidding:
We started by checking out the 2015 National Academy of Sciences study Villaraigosa cited. It found: "Immigrants are in fact much less likely to commit crime than natives, and the presence of large numbers of immigrants seems to lower crime rates." The study added that "This disparity also holds for young men most likely to be undocumented immigrants: Mexican, Salvadoran, and Guatemalan men."
While the latter half of the paragraph hints at data specific to undocumented immigrants, we should note two important facts. First, measuring crime rates for Guatemalan immigrants in general serves as a poor method for gauging the criminality of undocumented Guatemalan immigrants. The same goes for immigrants from other nations. Second, PolitiFact California presents this information as though it came from the NAS study. In fact, the NAS study was summarizing the findings of a different study.

Neither study reached conclusions specific to undocumented immigrants, for neither used data permitting such conclusions.

Yet PolitiFact California found the following statement "Mostly True" (bold emphasis added):
"But going after the undocumented is not a crime strategy, when you look at the fact that the National Academy of Sciences in, I think it was November of 2015, the undocumented immigrants commit less crimes than the native born. That’s just a fact."
False statement, "Mostly True" rating.

If PolitiFact has learned anything over the past 10 years, it is that it can largely get away with passing incompetent fact-checking and subjective ratings on to its loyal readers.

Thursday, August 3, 2017

Newsbusters: "PolitiFact's Pretzel Twist for Democrat Gwen Moore: 'Mostly True,' But Not 'Literally Speaking'"

Tim Graham of Newsbusters scores a hit on PolitiFact Wisconsin with his Aug. 2, 2017 item on a rating from PolitiFact Wisconsin.

Rep. Gwen Moore (D-Wis.) said, according to PolitiFact, "If you’re killed at 31 years old like Dontre Hamilton, who was shot 14 times by police for resting on a park bench in Milwaukee, nursing home care is not your priority."

PolitiFact Wisconsin admitted Moore's statement was not literally true:
Literally speaking, Hamilton was not killed simply for resting on a bench. He was shot after striking an officer with the officer’s baton.
PolitiFact Wisconsin rated the false statement "Mostly True."

In PolitiFact Wisconsin's defense, it imagined into being a way of viewing Moore's statement as true:
But in making a rhetorical point, Moore is correct that Hamilton had done nothing to attract the attention of police but fall asleep in a park.
Bless PolitiFact's heart for relieving Moore of the responsibility for using appropriate words to make her supposed rhetorical point. Moore did not talk at all about simply "drawing the attention of police." She talked specifically about Hamilton being shot (14 times) "for resting on a park bench."

This case helps illustrate how PolitiFact's "star chamber" feels little constraint from its stated definitions for its "Truth-O-Meter" ratings. PolitiFact defines "Mostly True" as "The statement is accurate but needs clarification or additional information."

In what manner was Moore's statement accurate without PolitiFact rewriting it to focus on the way Hamilton drew the attention of police?

If PolitiFact's "Truth-O-Meter" definitions were worth anything, then no false statement like Moore's would ever receive a rating of "Mostly True" or better. But it happens often.

Is it any wonder that people do not trust mainstream media fact checkers like PolitiFact?

Wednesday, August 2, 2017

Attack of the PolitiFact Twitterbots?

Eagle-eyed PolitiFact Bias co-editor Jeff D. spotted an interesting pattern of Twitter support for PolitiFact's new PolitiFact Illinois franchise.
The pattern consists of tweets identical to the one above made from what appear to be Twitterbots. The Fiona Madura Twitter account doesn't look like the account of a real person. For example, Fiona has tweeted out non-original content nearly every hour over the past 24 as of this writing. And she appears to truly enjoy sharing a credit counseling advertisement.

What's a Twitterbot?

The New York Times offers an explanation:
Bots are small programs that typically perform repetitive, scripted tasks. On Twitter, they are used for a variety of purposes, including for help and harassment.
PolitiFact Illinois appears very popular with such apparent Twitterbot accounts.

@qodupoClar lists a location in Columbia, but has no followers, follows no one, and typically posts about Illinois.

Jeff screen captured approximately a dozen of these 'bot accounts tweeting out the same tweet about PolitiFact Illinois.

We're supposing that PolitiFact's partner for its PolitiFact Illinois project, The Better Government Association, has an established relationship with a Twitterbot opinion leader. Indeed, the IL Advocacy Network (@iladvnetwork), which looks like the corporate version of an empty suit, has tweeted about the BGA and PolitiFact Illinois in the same canned style as the 'bots mentioned above.

We're intrigued by the possibility that journalists--PolitiFact journalists!--are using 'bots to get their message out. Most observers regard the technique as deceptive at least on some level.

Sunday, July 30, 2017

"Not a lot of reader confusion" IV

PolitiFact editor Angie Drobnic Holan has famously defended PolitiFact's various "report card" graphs by declaring she does not observe much reader confusion. Readers, Holan believes, realize that PolitiFact fact checkers are not social scientists. Equipped with that understanding, people presumably only draw reasonable conclusions from the graphed results of PolitiFact's "entirely subjective" trademarked "Truth-O-Meter" ratings.

What planet do PolitiFact fact checkers live on, we wonder?

We routinely see people using PolitiFact data as though it were derived scientifically. Jeff spotted a sensational example on Twitter.
Here's an enlarged view of the chart to which Jeff objected:

How did the chart measure the "actual honesty" of the four presidential primary candidates? Just in case it's hard to read, we'll tilt it 90 degrees and zoom in:

That's right. The chart uses PolitiFact's subjective ratings, despite the even more obvious problem of selection bias, to measure candidates "actual honesty."

The guy to whom Jeff replied, T. R. Ramachandran, runs a newsletter that gives us terrific (eh) information on politics. Comprehensive insights & stuff:

It's not plausible that the people who run PolitiFact do not realize that people use their offal (sic) data this way. The fact that PolitiFact resists adding a disclaimer to its ratings and charts leads us inexorably toward the conclusion that PolitiFact really doesn't mind misleading people. At least not to the point of adding the disclaimer that would fix the bulk of the problem.

Why not give this a try, PolitiFact? Hopefully it's not too truthful for you.

Tuesday, July 25, 2017

When PolitiFacts contradict

In PolitiFact's zeal to defend the Affordable Care Act from criticism, it contradicts itself.

In declaring it "False" that the ACA has entered a death spiral, PolitiFact Wisconsin affirms three aspects of a death spiral, one being rising premiums. PolitiFact affirms that premiums are rising. Then, PolitiFact states that none of the three conditions that make up a death spiral have occurred. We must conclude, via PolitiFact, that premiums are increasing and that premiums are not increasing.

In PolitiFact Wisconsin's own words (bold emphasis added):
Our rating

A death spiral is a health industry term for a cycle with three components — shrinking enrollment, healthy people leaving the system and rising premiums.

The latest data shows enrollment is increasing slightly and younger (typically healthier) people are signing up at the same rate as last year. And while premiums are increasing, that isn’t affecting the cost to most consumers due to built-in subsidies.

So none of the three criteria are met, much less all three.
It's not hard to fix. PolitiFact Wisconsin could alter its fact check to note that only one of the conditions of a death spiral is occurring across the board, but that subsidies insulate many customers from the effects of rising premiums.

Subsidizing the cost of buying insurance does not make the cost of the premiums shrink, exactly. Instead, it places part of the responsibility for paying on somebody else. When somebody else foots the bill, higher prices do not drive off consumers nearly as effectively.
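The dynamic can be shown with a toy calculation (all numbers invented for illustration; ACA-style subsidies roughly cap a subsidized consumer's payment at a share of income, with the subsidy absorbing the rest):

```python
income = 30_000
consumer_cap = income * 0.08          # assumed cap: 8% of income = $2,400/yr

for premium in (4_000, 5_000, 6_000): # the sticker price keeps rising
    subsidy = max(premium - consumer_cap, 0)
    consumer_pays = premium - subsidy
    # The consumer's bill never moves; somebody else absorbs the increase.
    print(premium, int(subsidy), int(consumer_pays))
```

In this sketch the premium rises 50 percent while the subsidized consumer pays the same $2,400 every year, which is why rising premiums do not drive off subsidized customers the way they drive off full-price customers.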

We're still waiting for PolitiFact to recognize that the insurance market is not monolithic. When the rules of the ACA leave individual markets without any insurers because adverse selection has driven them out, the conditions of a death spiral have obtained in that market.

We also note, in the context of the ACA, that when the only people who elect to pay for insurance are those who are receiving subsidies, it is fair to say the share of the market that pays full price encountered a death spiral.

Sunday, July 23, 2017

PolitiFact's facts outpace the truth

"Falsehood flies, and the Truth comes limping after it"

With the speed of the Interwebs at its disposal, PolitiFact on July 22, 2017 declared that no evidence exists to show Senator Bill Nelson (D-Fla.) favors a single-payer health care system for the United States of America.

PolitiFact based its proclamation loosely on its July 19, 2017 fact check of the National Republican Senatorial Committee's ad painting Nelson as a potential supporter of a universal single-payer plan.

We detected signs of very poor reporting from PolitiFact Florida, which will likely receive a closer examination at Zebra Fact Check.

Though PolitiFact reported that Nelson's office declined to give a statement on his support for a single-payer plan, PolitiFact ignored the resulting implied portrait of Nelson: He does not want to go unequivocally on the record supporting single-payer because it would hurt his re-election chances.

PolitiFact relied on a paraphrase of Nelson from the Tampa Tribune (since swallowed by the Tampa Bay Times, which in turn runs PolitiFact) to claim Nelson has said he does not favor a single-payer plan (bold emphasis added):
The ad suggests that Nelson supports Warren on most things, including a single-payer health care system. Actually, Nelson has said he doesn’t support single payer and wants to focus on preserving current law. His voting record is similar to Warren’s, but members of the same party increasingly vote alike due to a lack of bipartisan votes in the Senate.
There's one redeeming feature in PolitiFact Florida's summary. Using the voting agreement between two candidates to predict how they'll vote on one particular issue makes little sense unless the past votes cover that issue. If Nelson and Senator Elizabeth Warren (D-Mass.) had voted together in support of a single-payer plan, then okay.

But PolitiFact downplayed the ad's valid point about the possibility Nelson would support a single-payer plan. And PolitiFact made the mistake of exaggerating its survey of the evidence. In declaring that evidence does not exist, PolitiFact produced the impression that it searched very thoroughly and appropriately for that evidence and could not find it because it does not exist.

In other words, PolitiFact produced a false impression.

"Another major step forward"

We tried two strategies for finding evidence Nelson likes the idea of a single-payer plan. The first strategy failed. But the second strategy quickly produced a hit that sinks PolitiFact's claim that no evidence exists of Nelson favoring a single-payer plan.

A Tampa Bay market television station, WTSP, aired an interview with Nelson earlier in July 2017. The interviewer asked Nelson if he would be willing to join with Democrats who support a single-payer plan.

Nelson replied (bold emphasis added):
Well, I've got enough trouble just trying to fix the Affordable Care Act. I mean, you're talking about another major step forward, and we're not ready for that now.
The quotation supports the view that Nelson is playing the long game on single-payer. He won't jeopardize his political career by unequivocally supporting it until he thinks it's a political winner.

PolitiFact's fact check uncovered part of that evidence by asking Nelson's office to say whether he supports single payer. The office declined to provide a statement, and that pretty much says it all. If Nelson does not support single-payer and does not believe that going on the record would hurt his chances in the election, then nothing should stop him from making his position clear.

PolitiFact will not jeopardize Nelson's political career by finding the evidence that the NRSC has a point. Instead, it will report that the evidence it failed to find does not exist.

It will present this twisted approach to reporting as non-partisan fact-checking.


We let PolitiFact know about the evidence it missed (using email and Twitter). Now we wait for the evidence of PolitiFact's integrity and transparency.

Saturday, July 22, 2017

Video embed, Sen. Bill Nelson

I'm not a fan of Sen. Bill Nelson (D-Fla.). I just need to post this as part of an effort to save it for posterity.

Friday, July 21, 2017

The ongoing stupidity of PolitiFact California

PolitiFact on the whole stinks at fact-checking, but PolitiFact California is special. We don't use the word "stupid" lightly, but PolitiFact California has pried it from our lips multiple times already over its comparatively short lifespan.

PolitiFact's latest affront to reason comes from the following PolitiFact California (@CAPolitiFact) tweet:
The original fact check was stupid enough, but PolitiFact California's tweet twists that train wreck into an unrecognizable heap of metal.

  • The fact check discusses the (per year) odds of falling victim to a fatal terror attack committed by a refugee.
  • The tweet trumpets the (per year) odds of a fatal attack occurring.

The different claims require totally different calculations, and the fact that the tweet confused one claim for the other helps show how stupid it was to take the original fact-checked claim seriously in the first place.

The original claim said "The chances of being killed by a refugee committing a terrorist act is 1 in 3.6 billion." PolitiFact forgave the speaker for omitting "American" and "per year." Whatever.

But using the same data used to justify that claim, the per-year chances of a fatal attack on an American (national risk, not personal) by a refugee occurring are 1 in 13.3. That figure comes from taking the number of fatal attacks by refugees (3) and dividing by the number of years (40) covered by the data. Population numbers do not figure in the second calculation, unlike the first.
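The gap between the two calculations can be sketched with a few lines of arithmetic (a minimal sketch: the 270 million average-population figure is an illustrative assumption chosen to reproduce the "1 in 3.6 billion" result, and the one-death-per-attack figure is likewise assumed):

```python
# Figures from the post: 3 fatal attacks by refugees over 40 years of data.
fatal_attacks = 3
years = 40

# National risk: chance that some fatal attack occurs in a given year.
# Population never enters this calculation.
national_chance = fatal_attacks / years            # 0.075
print(f"1 in {1 / national_chance:.1f}")           # 1 in 13.3

# Personal risk: chance that a given American is killed in a given year.
# Here population dominates the denominator, which is why the two
# numbers differ by eight orders of magnitude.
deaths = 3                                         # assumed: one death per attack
population = 270_000_000                           # assumed average population
personal_chance = deaths / (population * years)
print(f"1 in {population * years / deaths:,.0f}")  # 1 in 3,600,000,000
```

Swapping one calculation for the other, as the tweet did, changes the answer by a factor of hundreds of millions.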

That outcome does a great deal to show the silliness of the original claim, which a responsible fact checker would have pointed out by giving more emphasis to the sensible expert from the Center for Immigration Studies:
Mark Krikorian, executive director of the Center for Immigration Studies, a think tank that favors stricter immigration policies, said the "one in 3.6 billion" statistic from the Cato study includes too many qualifiers. Notably, he said, it excludes terrorist attacks by refugees that did not kill anyone and those "we’ll never know about" foiled by law enforcement.

"It’s not that it’s wrong," Krikorian said of the Cato study, but its author "is doing everything he can to shrink the problem."
Krikorian stated the essence of what the fact check should have found if PolitiFact California wasn't stupid.

Correction July 21, 2017: Fixed typo where "bit" was substituted for "but" in the opening paragraph.

Clarification July 21, 2017: Added "(national risk, not personal)" to the eighth paragraph to enhance the clarity of the argument.

Tuesday, July 18, 2017

A "Half True" update

Years ago, I pointed out to PolitiFact that it had offered readers two different definitions of "Half True." In November 2011, I posted to note PolitiFact's apparent acknowledgment of the problem, evidenced by its effort to resolve the discrepant definitions.

It's over five years later. But PolitiFact Florida (archived version, just in case PolitiFact Florida notices something amiss) either did not get the memo or failed to fully implement the change.
We then decide which of our six rulings should apply:

TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
HALF TRUE – The statement is accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains some element of truth but ignores critical facts that would give a different impression.
FALSE – The statement is not accurate.
PANTS ON FIRE – The statement is not accurate and makes a ridiculous claim.
PolitiFact Florida still publishes what was apparently the original standard PolitiFact definition of "Half True." PolitiFact National revised its definition in 2011, adding "partially" to the definition so it read "The statement is partially accurate but leaves out important details or takes things out of context."

PolitiFact Florida uses the updated definition on its main page, and directs readers to PolitiFact's main "principles" page for more information.

It's not even clear if PolitiFact Florida's main page links to PolitiFact Florida's "About" page. It may be a vestigial limb of sorts, helping us trace PolitiFact's evolution.

In one sense, the inconsistency means relatively little. After all, PolitiFact's founder, Bill Adair, has himself said that the "Truth-O-Meter" ratings are "entirely subjective." That being the case, it matters little whether "partially" occurs in the definition of "Half True."

The main problem from the changing definition comes from PolitiFact's irresponsible publication of candidate "report cards" that supposedly help voters decide which candidate they ought to support.

Why should subjective report cards make any difference in preferring one candidate over another?

The changing definition creates one other concern--one that I've written about before: Academic researchers (who really ought to know better) keep trying to use PolitiFact's ratings as though they represent reliable truth measurements. That by itself is a preposterous idea, given the level of subjectivity inherent in PolitiFact's system. But the inconsistency of the definition of "Half True" makes it even sillier.

PolitiFact's repeated failure to fix the problems we point out helps keep us convinced that PolitiFact checks facts poorly. We think a left-leaning ideological bubble combined with the Dunning-Kruger effect best explains PolitiFact's behavior in these cases.

Reminder: PolitiFact made a big to-do about changing the label "Barely True" to "Mostly False," but shifted the definition of "Half True" without letting the public in on the long-running discrepancy.

Too much transparency?

Clarification July 18, 2017: Changed "PolitiFact" to "PolitiFact Florida" in the second paragraph after the block quotation.

This post also appears at the blog "Sublime Bloviations"

Monday, July 17, 2017

PolitiFact Georgia's kid gloves for Democratic candidate

Did Democratic Party candidate for Georgia governor Stacey Evans help win a Medicare fraud lawsuit, as she claimed? PolitiFact Georgia says there's no question about it:

PolitiFact defines its "True" rating as "The statement is accurate and there’s nothing significant missing."

Evans' statement misses quite a bit, so we will use this as an example of PolitiFact going easy on a Democrat. It's very likely that PolitiFact would have thought of the things we'll point out if it had been rating a Republican candidate. Republicans rarely get the kid gloves treatment from PolitiFact. But it's pretty common for Democrats.

The central problem in the fact check stems from a fallacy of equivocation. In PolitiFact's view, a win is a win, even if Evans implied a win in court covering the issue of fraud when in fact the win was an out-of-court settlement that stopped short of proving the existence of Medicare fraud.

Overlooking that considerable difference in the two kinds of wins counts as the type of error we should expect a partisan fact checker to make. A truly neutral fact-checker would not likely make the mistake.

Evans' claim vs. the facts


Evans: "I helped win one of the biggest private lawsuits against Medicare fraud in history."

Fact: Evans helped with a private lawsuit alleging Medicare fraud

Fact: The case was not decided in court, so none of the plaintiff's attorneys can rightly claim to have won the lawsuit. The lawsuit was made moot by an out-of-court settlement. As part of the settlement, the defendant admitted no wrongdoing (that is, no fraud).

Evans' statement leads her audience toward two false conclusions. First, that her side of the lawsuit won in court. It did not. Second, that the case proved the company (DaVita) was guilty of Medicare fraud. It did not.

How does a fact checker miss something this obvious?

It was plain in the facts as PolitiFact reported them that the court did not decide the case. It was therefore likewise obvious that no lawyer could claim an unambiguous lawsuit victory.

Yet PolitiFact found absolutely no problem with Evans' claim on its "Truth-O-Meter":
Evans said that she "helped win one of the biggest private lawsuits against Medicare fraud in history." The lead counsel on the case corroborated her role in it, and the Justice Department confirmed its historic importance.

Her claim that they recovered $324 million for taxpayers also checks out.

We rate this statement True.
Indeed, PolitiFact's summary reads like a textbook example of confirmation bias, emphasizing what confirms the claim and ignoring whatever does not.
There is an obvious difference between impartially evaluating evidence in order to come to an unbiased conclusion and building a case to justify a conclusion already drawn. In the first instance one seeks evidence on all sides of a question, evaluates it as objectively as one can, and draws the conclusion that the evidence, in the aggregate, seems to dictate. In the second, one selectively gathers, or gives undue weight to, evidence that supports one's position while neglecting to gather, or discounting, evidence that would tell against it.
Evans can only qualify for the "True" rating if PolitiFact's definition of "True" means nothing and the rating is entirely subjective.

Correction July 17, 2017: Changed "out-court settlement" to "out-of-court settlement." Also made some minor changes to the formatting.

Thursday, July 13, 2017

PolitiFact avoids snarky commentary?

PolitiFact, as part of a statement on avoiding political involvement that it developed in order to obtain its status as a "verified signatory" of the International Fact-Checking Network's statement of principles, says it avoids snarky commentary.

Despite that, we got this on Twitter today:

Did PolitiFact investigate to see whether Trump was right that a lot of people do not know that France is the oldest U.S. ally? Apparently not.

Trump is probably right, especially considering that he did not specify any particular group of people. Is it common knowledge in China or India, for example, that France is the oldest U.S. ally?

So, politically neutral PolitiFact, which avoids snarky commentary, is snarking it up in response to a statement from Trump that is very likely true--even if the population he was talking about was the United States, France, or both.

Here's how PolitiFact's statement of principle reads (bold emphasis added):
We don’t lay out our personal political views on social media. We do share news stories and other journalism (especially our colleagues’ work), but we take care not to be seen as endorsing or opposing a political figure or position. We avoid snarky commentary.

(Note that PolitiFact Bias has no policy prohibiting snarky commentary)

Tuesday, July 11, 2017

PolitiFact helps Bernie Sanders with tweezers and imbalance

Our posts carrying the "tweezers or tongs" tag look at how PolitiFact skews its ratings by shifting its story focus.

Today we'll look at PolitiFact's June 27, 2017 fact check of Senator Bernie Sanders (I-Vt.):

Where Sen. Sanders mentions 23 million thrown off of health insurance, PolitiFact treats his statement like a random hypothetical. But the context shows Sanders was not speaking hypothetically (bold emphasis added):
"What the Republican proposal (in the House) does is throw 23 million Americans off of health insurance," Sanders told host Chuck Todd. "What a part of Harvard University -- the scientists there -- determine is when you throw 23 million people off of health insurance, people with cancer, people with heart disease, people with diabetes, thousands of people will die."
The House health care bill does not throw 23 million Americans off of health insurance. The CBO did predict that at the end of 10 years 23 million fewer Americans would have health insurance compared to the current law (Obamacare) projection. There's a huge difference between those two ideas, and PolitiFact may never get around to explaining it.

PolitiFact, despite fact-checkers' admitted preference for checking false statements, overlooks the low-hanging fruit in favor of Sanders' claim that thousands will die.

Is Sanders engaging in fearmongering? Sure. But PolitiFact doesn't care.

Instead, PolitiFact focused on Sanders' claim that study after study supports his point that thousands will die if 23 million people get thrown off of insurance.

PolitiFact verified his claim in hilariously one-sided fashion. One would never know from PolitiFact's fact check that the research findings are disputed, as here.

This is the type of research PolitiFact omitted (bold emphasis added) from its fact check:
After determining the characteristics of the uninsured and discovering that being uninsured does not necessarily mean an individual has no access to health services, the authors turn to the question of mortality. A lack of care is particularly troubling if it leads to differences in mortality based on insurance status. Using data from the Health and Retirement Survey, the authors estimate differences in mortality rates for individuals based on whether they are privately insured, voluntarily uninsured, or involuntarily uninsured. Overall, they find that a lack of health insurance is not likely to be the major factor causing higher mortality rates among the uninsured. The uninsured—particularly the involuntarily uninsured—have multiple disadvantages that are associated with poor health.
So PolitiFact cherry-picked Sanders' claim with tweezers, then did a one-sided fact-check of that cherry-picked part of the claim. Sanders ended up with a "Mostly True" rating next to his false claims.

Does anybody do more to erode trust in fact-checking than PolitiFact?

It's worth noting this stinker was crafted by the veteran fact-checking team of Louis Jacobson and Angie Drobnic Holan.

Correction July 11, 2017: In the fourth paragraph after our quotation of PolitiFact, we had "23,000" instead of the correct figure of "23 million." Thanks to YuriG in the comments section for catching our mistake.

Saturday, July 8, 2017

PolitiFact California: Watching Democrats like a hawk

Is PolitiFact California's Chris Nichols the worst fact checker of all time? His body of evidence continues to grow, thanks to this port-tilted gem from July 7, 2017 (bold emphasis added):
We started with a bold claim by Sen. Harris that the GOP plan "effectively will end Medicaid."

Harris said she based that claim on estimates from the Congressional Budget Office. It projects the Senate bill would cut $772 billion dollars in funding to Medicaid over 10 years. But the CBO score didn’t predict the wholesale demise of Medicaid. Rather, it estimated that the program would operate at a significantly lower budget than if President Obama’s Affordable Care Act (ACA) were to remain in place.

Yearly federal spending on Medicaid would decrease about 26 percent by 2026 as a result of cuts to the program, according to the CBO analysis. At the request of Senate Democrats, the CBO made another somewhat more tentative estimate that Medicaid spending could be cut 35 percent in two decades.

Harris’ statement could be construed as saying Medicaid, as it now exists, would essentially end.
You think?

How else could it be construed, Chris Nichols? Inquiring minds want to know!

PolitiFact California declined to pass judgment on the California Democrats who made the claim about the effective end of Medicaid.


"Wouldn't end the program for good"? So the cuts just temporarily end the program?

Or have we misconstrued Nichols' meaning?

Friday, July 7, 2017

PolitiFact, Lauren Carroll, pathetic CYA

With a post on July 1, 2017, we noted PolitiFact's absurdity in keeping the "True" rating on Hillary Clinton's claim that 17 U.S. intelligence agencies "all concluded" that Russia intervened in the U.S. presidential election.

PolitiFact has noticed that not enough people accept 2+2=5, however, so departing PolitiFact writer Lauren Carroll returned this week with a pathetic attempt to justify her earlier fact check.

This is unbelievable.

Carroll's setup:
Back in October 2016, we rated this statement by then-candidate Hillary Clinton as True: "We have 17 intelligence agencies, civilian and military, who have all concluded that these espionage attacks, these cyberattacks, come from the highest levels of the Kremlin, and they are designed to influence our election."

Many readers have asked us about this rating since the New York Times and Associated Press issued their corrections.
Carroll then repeats PolitiFact's original excuse that since the Director of National Intelligence speaks for all 17 agencies, it somehow follows that 17 agencies "all concluded" that Russia interfered with the U.S. election.

And the punchline (bold emphasis added):
We asked experts again this week if Clinton’s claim was correct or not.

"In the context of a national debate, her answer was a reasonable inference from the DNI statement," Cordero said, emphasizing that the statement said, "The U.S. Intelligence Community (USIC) is confident" in its assessment.

Aftergood said it’s fair to say the Director of National Intelligence speaks for the intelligence community, but that doesn’t always mean there is unamity across the community, and it’s possible that some organizations disagree.

But in the case of the Russia investigation, there is no evidence of disagreement among members of the intelligence community.
Put simply, either the people who work at PolitiFact are stupid, or else they think you're stupid.

PolitiFact claims it asked its cited experts whether Clinton's claim was correct.

PolitiFact then shares with its readers responses that do not tell them whether the experts think Clinton's claim was correct.

1) "In the context of a national debate, her answer was a reasonable inference from the DNI statement" 

It's one thing to make a reasonable inference. It's another thing whether the inference was true. If a person shows up at your home soaking wet, it may be a reasonable inference that it's raining outside. The inference isn't necessarily correct.

The quotation of Carrie Cordero does not answer whether Clinton's claim was correct.

How does a fact checker not know that?

 2) PolitiFact paraphrases expert Steven Aftergood: "Aftergood said it’s fair to say the Director of National Intelligence speaks for the intelligence community, but that doesn’t always mean there is unamity [sic] across the community, and it’s possible that some organizations disagree."

The paraphrase of Aftergood appears to make our point. Even if the Director of National Intelligence speaks for all 17 agencies it does not follow that all 17 agencies agreed with the finding. Put another way, even if Clinton's inference was reasonable the more recent reports show that it was wrong. The 17 agencies did not all reach the same conclusion independently, contrary to what Clinton implied.

And that's it.

Seriously, that's it.

PolitiFact trots out this absolutely pathetic CYA attempt and expects people to take it seriously?

May it never be.

The evidence from the experts does not support PolitiFact's judgment, yet PolitiFact uses that evidence to support its judgment.



Maybe they'll be able to teach Carroll some logic at UC Berkeley School of Law.

Correction July 7, 2017: Removed an extraneous "the" preceding "PolitiFact" in our first paragraph following our first quotation of PolitiFact.

Thursday, July 6, 2017

Transparency, Facebook and PolitiFact

We've never been impressed with Facebook as a vehicle for commenting on articles. We at PolitiFact Bias allow it as an alternative to posting comments directly on the site, mostly to let people have their say aside from our judgment of whether they've stayed on topic and achieved a baseline level of decorum.

PolitiFact uses Facebook as its more-or-less exclusive forum for article commentary.

It's a great system for PolitiFact, for it allows its left-leaning readership to make things unpleasant for those who criticize PolitiFact, and the "top posts" feature allows the left-leaning mob to keep popular left-leaning comments in the most prominent places.

What it isn't is a place for PolitiFact to entertain, consider and address criticism of its work.

Today I noticed that some of my Facebook comments were not appearing on my Facebook "Activity Log."

A Facebook commenter mocked another person's reference to this website, PolitiFact Bias, facetiously stating that PolitiFact Bias isn't biased at all. It's a classic attack based on the genetic fallacy. We created a post a while back to address attacks of that type, so I posted the URL. The comment seemed to publish normally, but then did not appear on my Activity Log and I could not find it on PolitiFact's Facebook page.

So it was time for some experimentation.

I replied again to the same comment, this time with the text of the PolitiFact Bias post instead of the URL, reasoning that perhaps I had been tagged as a spammer.

No go.

As before, the comment seemed to publish, but when I tried to edit to clarify some of the formatting, I got a message saying the post did not exist:

We imagine that Facebook may have some plausible reason for this behavior. But regardless of that, incidents like this show yet another lack of transparency for PolitiFact's version of the fact-checking game.

PolitiFact champions democracy, supposedly, but prefers a commenting system that buries or silences critical voices.

PolitiFact Texas uses tongs (2016)

Our "tweezers or tongs" tag applies to cases where PolitiFact had a choice of a narrow focus on one part of a claim or a wider focus on a claim with more than one part.

The tweezers or tongs option allows a fact-check to exercise bias by using the true part of a statement to boost the rating, or by ignoring the true part of the statement to drop the rating.

In this case, from 2016, a Democrat got the benefit of PolitiFact Texas' tongs treatment:

So, it was true that Texas law requires every high school to have a voter registrar.

But it was false that the law requires the registrar to get the children to vote once they're eligible.

PolitiFact averages it out:
Saldaña said a Texas law requires every high school to have a voter registrar "and part of their responsibility is to make sure that when children become 18 and become eligible to vote, that they vote."

A 1983 law requires every high school to have a deputy voter registrar tasked with giving eligible students voter registration applications. Each registrar also must make sure submitted applications are appropriately handled.

However, the law doesn’t require registrars to make every eligible student register; it's up to each student to act or not. Also, as Saldaña acknowledged, registrars aren’t required to ensure that students vote.

We rate this statement Half True.
There are dozens of examples where PolitiFact ignored what was true in favor of emphasizing the false. It's just one more way the PolitiFact system allows bias to creep in.

Here's one for which PolitiFact Pennsylvania breaks out the tweezers:

Sen. Toomey (R-Penn.) correctly says the ACA created a new category of eligibility. That part of his claim does not figure in the "Half True" rating.

We doubt that PolitiFact has ever created an ethical, principled and objective means for deciding when to ignore parts of compound claims.

Certainly we see no evidence of such means in PolitiFact's work.

Tuesday, July 4, 2017

PolitiFact revises its own history

We remember.

We remember when PolitiFact openly admitted that when Republicans charged that Obamacare cut Medicare it tended to rate such claims either "Half True" or "Mostly False."
PolitiFact has looked askance at bare statements that Obamacare cuts Medicare, rating them either Half True or Mostly False depending on how they are worded.
Nowadays, PolitiFact has reconsidered. It now says it generally rated claims that Obamacare cut Medicare as "Half True."
We did a series of fact-checks about the back-and-forth and generally rated the Republican attacks Half True.
PolitiFact doubtless took the latter position in response to criticism of its claims about Republican "cuts" to Medicaid. PolitiFact has flatly said Trump's budget cuts Medicaid (no caveats) despite the fact that outlays for Medicaid rise every year under the Trump budget. PolitiFact has also joined the mainstream media in attacking the Trump administration for denying it cuts Medicaid, rating those statements "Mostly False" or worse.

Given the context, PolitiFact's fact check of Kellyanne Conway looks like a retrospective attempt to excuse PolitiFact's inconsistency.

Unfortunately for PolitiFact writer Jon Greenberg, the facts do not support his defense narrative. The "Half True" ratings he cites tended to come from joint claims: 1) that Obamacare cut Medicare and 2) that the savings went to pay for the Affordable Care Act. It's completely true that the ACA used Medicare savings to cut the price tag for the legislation. Every version of the CBO's assessment of the Democrats' health care reform bill will confirm it.

When PolitiFact rated isolated GOP claims that Democrats' health care reform cut Medicare, the verdict tended to come in as "Mostly False." It wasn't even close. Keep reading below the fold to see the proof.

So, PolitiFact writer Jon Greenberg either isn't so great at checking his facts, or else he is deliberately misleading his audience. The same goes for the editor, Angie Drobnic Holan.

Monday, July 3, 2017

PolitiFact's top 16 myths about Obamacare skips its 2013 "Lie of the Year"

In late 2013 PolitiFact announced its 2013 "Lie of the Year," supposedly President Barack Obama's promise that Americans could keep their plan (and their doctor) under the Affordable Care Act.

We noted at the time (even before the winner was announced) that PolitiFact was forced into its choice by a broad public narrative:
With a hat tip to Vicini, it's inconceivable that PolitiFact will choose a claim other than "If you like it" as the "Lie of the Year" from its list of nominees.  Having gone out of the way to nominate a claim from years past made relevant by the events of 2013, PolitiFact must choose it or lose credibility.
Today on Facebook, PolitiFact highlighted its 2013 list of the top 16 myths about Obamacare.

Its "Lie of the Year" for 2013 did not make the list. It wasn't even mentioned in the article.

One item that did make the list, however, was Marco Rubio's "Mostly False" claim that patients won't be able to keep their doctors under the ACA.

Seriously. That one made PolitiFact's list.

If anyone needed proof that PolitiFact reluctantly pinned the 2013 "Lie of the Year" on Obama, PolitiFact has provided it.

Sunday, July 2, 2017

How to fact check like a partisan, featuring PolitiFact

First, find a politician who has made a conditional statement, like this one from Marco Rubio (R-Fla.):
"As long as Florida keeps the same amount of funding or gets an increase, which is what we are working on, per patient being rewarded for having done the right thing -- there is no reason for anybody to be losing any of their current benefits under Medicaid. None," he said in a Facebook Live on June 28.
Rubio starts his statement with the conditional: "As long as Florida keeps the same amount of funding or gets an increase ..." Logic demands that the latter part of Rubio's statement receive its interpretation under the assumption the condition is true.

A partisan fact checker can make a politician look bad by ignoring the condition and taking the remainder of the statement out of context. Like this:

As the partisan fact checker will want its work to pass as a fact check, at least to like-minded partisans and unsuspecting moderates, it should then proceed to check the out-of-context portion of the subject's statement.

For example, if the condition of the statement is the same or increased funding, look for ways the funding might decrease and use those findings as evidence the politician spoke falsely. For a statement like Rubio's, one might cite a left-leaning think tank like the Urban Institute, with a finding that predicts lower funding for Medicaid:
The Urban Institute estimated the decline in federal dollars and enrollment for the states.

It found for Florida, that federal funding for Medicaid under ACA would be $16.8 billion in 2022. Under the Senate legislation, it would fall to about $14.6 billion, or a cut of about 13 percent (see table 6). The Urban Institute projects 353,000 fewer people on Medicaid or CHIP in Florida.
Easy-peasy, right?

Then use the rest of the fact check to show that Florida will not be likely to make up the gap predicted by the Urban Institute. That will prove, in a certain misleading and dishonest way, that Rubio's conditional statement was wrong.

The summary of such a partisan fact check might look like this:
Rubio said, "There is no reason for anybody to be losing any of their current benefits under Medicaid."

Rubio is wrong to state that benefit cuts are off the table.

There are reasons that Medicaid recipients could lose benefits if the Senate bill becomes law. The bill curbs the rate of spending by the federal government over the next decade and caps dollar amounts and ultimately reduces the inflation factor. Those changes will put pressure on states to make difficult choices including the possibility of cutting services.

We rate this claim Mostly False.
Ignoring the conditional part of the claim results in the fallacy of denying the antecedent. The partisan fact checker can usually rely on its highly partisan audience not noticing such fallacies.

Any questions?

Correction: July 2, 2017: In the next-to-last paragraph changed "to notice" to "noticing" for the sake of clarity.

Saturday, July 1, 2017

PolitiFact absurdly keeps "True" rating on false statement from Hillary Clinton

Today we were alerted to a story from earlier this week detailing a New York Times correction, from June 30, 2017:
On June 29 The New York Times issued a retraction to an article published on Monday, which originally stated that all 17 intelligence organizations had agreed that Russia orchestrated the hacking. The retraction reads, in part:
The assessment was made by four intelligence agencies — the Office of the Director of National Intelligence, the Central Intelligence Agency, the Federal Bureau of Investigation and the National Security Agency. The assessment was not approved by all 17 organizations in the American intelligence community.”
It should be noted that the four intelligence agencies are not retracting their statements about Russia involvement. But all 17 did not individually come to the assessment, despite what so many people insisted back in October.
The same article went on to point out that PolitiFact had rated "True" Hillary Rodham Clinton's claim that 17 U.S. intelligence agencies found Russia responsible for hacking. That despite acknowledging it had no evidence backing the idea that each agency had reached the conclusion based on its own investigation:
Politifact concluded that 17 agencies had, indeed, agreed on this because “the U.S. Intelligence Community is made up of 17 agencies.” However, the 17 agencies had not independently made the assessment, as many believed. Politifact mentioned this in the story, but still said the statement was correct.
We looked up the PolitiFact story in question. The description above presents PolitiFact's reasoning accurately.

It makes for a great example of horrible fact-checking.

Clinton's statement implied each of the 17 agencies made its own finding:
"We have 17 intelligence agencies, civilian and military, who have all concluded that these espionage attacks, these cyberattacks, come from the highest levels of the Kremlin, and they are designed to influence our election."
It's very easy to avoid making that implication: "Our intelligence agencies have concluded ..." Such a phrasing fairly represents a finding backed by a figure representing all 17 agencies. But when Clinton emphasized that the 17 agencies "all" reached the same conclusion, it implied independent investigations.

PolitiFact ignored that false implication in its original rating and in a June 2017 update to the article in response to information from former Director of National Intelligence James Clapper's testimony earlier in the year:
The January report presented its findings by saying "we assess," with "we" meaning "an assessment by all three agencies."

The October statement, on the other hand, said "The U.S. Intelligence Community (USIC) is confident" in its assessment. As we noted in the article, the 17 separate agencies did not independently come to this conclusion, but as the head of the intelligence community, the Office of the Director of National Intelligence speaks on behalf of the group.

We stand by our rating.
PolitiFact's rating was and is preposterous. Note how PolitiFact defines its "True" and "Mostly True" ratings:
TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
It doesn't pass the sniff test to assert that Clinton's claim about "17 agencies" needs no clarification or additional information. We suppose that only a left-leaning and/or unserious fact-checking organization would conclude otherwise.