Tuesday, July 25, 2017

When PolitiFacts contradict

In PolitiFact's zeal to defend the Affordable Care Act from criticism, it contradicts itself.

In declaring it "False" that the ACA has entered a death spiral, PolitiFact Wisconsin affirms three aspects of a death spiral, one being rising premiums. PolitiFact affirms that premiums are rising. Then, PolitiFact states that none of the three conditions that make up a death spiral have occurred. We must conclude, via PolitiFact, that premiums are increasing and that premiums are not increasing.

In PolitiFact Wisconsin's own words (bold emphasis added):
Our rating

A death spiral is a health industry term for a cycle with three components — shrinking enrollment, healthy people leaving the system and rising premiums.

The latest data shows enrollment is increasing slightly and younger (typically healthier) people are signing up at the same rate as last year. And while premiums are increasing, that isn’t affecting the cost to most consumers due to built-in subsidies.

So none of the three criteria are met, much less all three.
It's not hard to fix. PolitiFact Wisconsin could alter its fact check to note that only one of the conditions of a death spiral is occurring across the board, but that subsidies insulate many customers from the effects of rising premiums.

Subsidizing the cost of buying insurance does not make the cost of the premiums shrink, exactly. Instead, it places part of the responsibility for paying on somebody else. When somebody else foots the bill, higher prices do not drive off consumers nearly as effectively.

We're still waiting for PolitiFact to recognize that the insurance market is not monolithic. When the rules of the ACA leave individual markets without any insurers because adverse selection has driven them out, the conditions of a death spiral have obtained in that market.

We also note, in the context of the ACA, that when the only people who elect to pay for insurance are those who are receiving subsidies, it is fair to say the share of the market that pays full price encountered a death spiral.

Sunday, July 23, 2017

PolitiFact's facts outpace the truth

"Falsehood flies, and the Truth comes limping after it"

With the speed of the Interwebs at its disposal, PolitiFact on July 22, 2017 declared that no evidence exists to show Senator Bill Nelson (D-Fla.) favors a single-payer health care system for the United States of America.

PolitiFact based its proclamation loosely on its July 19, 2017 fact check of the National Republican Senatorial Committee's ad painting Nelson as a potential supporter of a universal single-payer plan.

We detected signs of very poor reporting from PolitiFact Florida, which will likely receive a closer examination at Zebra Fact Check.

Though PolitiFact reported that Nelson's office declined to give a statement on his support for a single-payer plan, PolitiFact ignored the resulting implied portrait of Nelson: He does not want to go unequivocally on the record supporting single-payer because it would hurt his re-election chances.

PolitiFact relied on a paraphrase of Nelson from the Tampa Tribune (since swallowed by the Tampa Bay Times, which in turn runs PolitiFact) to claim Nelson has said he does not favor a single-payer plan (bold emphasis added):
The ad suggests that Nelson supports Warren on most things, including a single-payer health care system. Actually, Nelson has said he doesn’t support single payer and wants to focus on preserving current law. His voting record is similar to Warren’s, but members of the same party increasingly vote alike due to a lack of bipartisan votes in the Senate.
There's one redeeming feature in PolitiFact Florida's summary. Using the voting agreement between two candidates to predict how they'll vote on one particular issue makes little sense unless the past votes cover that issue. If Nelson and Senator Elizabeth Warren (D-Mass.) had voted together in support of a single-payer plan, then okay.

But PolitiFact downplayed the ad's valid point about the possibility Nelson would support a single-payer plan. And PolitiFact made the mistake of exaggerating its survey of the evidence. In declaring that evidence does not exist, PolitiFact produced the impression that it searched very thoroughly and appropriately for that evidence and could not find it because it does not exist.

In other words, PolitiFact produced a false impression.

"Another major step forward"

We tried two strategies for finding evidence Nelson likes the idea of a single-payer plan. The first strategy failed. But the second strategy quickly produced a hit that sinks PolitiFact's claim that no evidence exists of Nelson favoring a single-payer plan.

A Tampa Bay market television station, WTSP, aired an interview with Nelson earlier in July 2017. The interviewer asked Nelson if he would be willing to join with Democrats who support a single-payer plan.

Nelson replied (bold emphasis added):
Well, I've got enough trouble just trying to fix the Affordable Care Act. I mean, you're talking about another major step forward, and we're not ready for that now.
The quotation supports the view that Nelson is playing the long game on single-payer. He won't jeopardize his political career by unequivocally supporting it until he thinks it's a political winner.

PolitiFact's fact check uncovered part of that evidence by asking Nelson's office to say whether he supports single payer. The office declined to provide a statement, and that pretty much says it all. If Nelson does not support single-payer and does not believe that going on the record would hurt his chances in the election, then nothing should stop him from making his position clear.

PolitiFact will not jeopardize Nelson's political career by finding the evidence that the NRSC has a point. Instead, it will report that the evidence it failed to find does not exist.

It will present this twisted approach to reporting as non-partisan fact-checking.


We let PolitiFact know about the evidence it missed (using email and Twitter). Now we wait for the evidence of PolitiFact's integrity and transparency.

Saturday, July 22, 2017

Video embed, Sen. Bill Nelson

I'm not a fan of Sen. Bill Nelson (D-Fla.). I just need to post this as part of an effort to save it for posterity.

Friday, July 21, 2017

The ongoing stupidity of PolitiFact California

PolitiFact on the whole stinks at fact-checking, but PolitiFact California is special. We don't use the word "stupid" lightly, but PolitiFact California has pried it from our lips multiple times already over its comparatively short lifespan.

PolitiFact's latest affront to reason comes from the following PolitiFact California (@CAPolitiFact) tweet:
The original fact check was stupid enough, but PolitiFact California's tweet twists that train wreck into an unrecognizable heap of metal.

  • The fact check discusses the (per year) odds of falling victim to a fatal terror attack committed by a refugee.
  • The tweet trumpets the (per year) odds of a fatal attack occurring.

The different claims require totally different calculations, and the fact that the tweet mistook one claim for the other helps show how stupid it was to take the original fact-checked claim seriously in the first place.

The original claim said "The chances of being killed by a refugee committing a terrorist act is 1 in 3.6 billion." PolitiFact forgave the speaker for omitting "American" and "per year." Whatever.

But using the same data used to justify that claim, the per-year chances of a fatal attack on an American (national risk, not personal) by a refugee occurring are 1 in 13.3. That figure comes from taking the number of fatal attacks by refugees (3) and dividing it by the number of years (40) covered by the data. Population numbers do not figure in the second calculation, unlike the first.
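To make the distinction concrete, here is a minimal sketch of the two calculations in Python. The attack and year counts come from the figures cited above; the average-population figure is a hypothetical round number for illustration, not the Cato study's own person-year data.

```python
# Figures cited above: 3 fatal terror attacks by refugees
# over the 40 years covered by the data.
fatal_attacks = 3
years = 40

# National risk: odds that *some* fatal attack occurs in a given year.
# Population plays no role in this calculation.
attacks_per_year = fatal_attacks / years      # 0.075
national_odds = 1 / attacks_per_year          # "1 in 13.3"

# Personal risk: odds that a given American is killed in a given year.
# The average population is an assumed round figure for illustration;
# the study's own person-year data yields its "1 in 3.6 billion".
avg_population = 270_000_000  # hypothetical
deaths_per_person_year = fatal_attacks / (years * avg_population)
personal_odds = 1 / deaths_per_person_year    # "1 in 3.6 billion"

print(f"national risk: 1 in {national_odds:.1f} per year")
print(f"personal risk: 1 in {personal_odds:.2e} per year")
```

The point of the sketch is simply that the two odds differ by a factor of the population, so confusing one for the other is not a rounding error.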

That outcome does a great deal to show the silliness of the original claim, which a responsible fact checker would have pointed out by giving more emphasis to the sensible expert from the Center on Immigration Studies:
Mark Krikorian, executive director of the Center for Immigration Studies, a think tank that favors stricter immigration policies, said the "one in 3.6 billion" statistic from the Cato study includes too many qualifiers. Notably, he said, it excludes terrorist attacks by refugees that did not kill anyone and those "we’ll never know about" foiled by law enforcement.

"It’s not that it’s wrong," Krikorian said of the Cato study, but its author "is doing everything he can to shrink the problem."
Krikorian stated the essence of what the fact check should have found if PolitiFact California wasn't stupid.

Correction July 21, 2017: Fixed typo where "bit" was substituted for "but" in the opening paragraph.

Clarification July 21, 2017: Added "(national risk, not personal)" to the eighth paragraph to enhance the clarity of the argument.

Tuesday, July 18, 2017

A "Half True" update

Years ago, I pointed out to PolitiFact that it had offered readers two different definitions of "Half True." In November 2011, I posted to note PolitiFact's apparent acknowledgment of the problem, evidenced by its effort to resolve the discrepant definitions.

It's over five years later. But PolitiFact Florida (archived version, just in case PolitiFact Florida notices something amiss) either did not get the memo or failed to fully implement the change.
We then decide which of our six rulings should apply:

TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
HALF TRUE – The statement is accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains some element of truth but ignores critical facts that would give a different impression.
FALSE – The statement is not accurate.
PANTS ON FIRE – The statement is not accurate and makes a ridiculous claim.
PolitiFact Florida still publishes what was apparently the original standard PolitiFact definition of "Half True." PolitiFact National revised its definition in 2011, adding "partially" to the definition so it read "The statement is partially accurate but leaves out important details or takes things out of context."

PolitiFact Florida uses the updated definition on its main page, and directs readers to PolitiFact's main "principles" page for more information.

It's not even clear if PolitiFact Florida's main page links to PolitiFact Florida's "About" page. It may be a vestigial limb of sorts, helping us trace PolitiFact's evolution.

In one sense, the inconsistency means relatively little. After all, PolitiFact's founder, Bill Adair, has himself said that the "Truth-O-Meter" ratings are "entirely subjective." That being the case, it matters little whether "partially" occurs in the definition of "Half True."

The main problem with the changing definition stems from PolitiFact's irresponsible publication of candidate "report cards" that supposedly help voters decide which candidate they ought to support.

Why should subjective report cards make any difference in preferring one candidate over another?

The changing definition creates one other concern--one that I've written about before: Academic researchers (who really ought to know better) keep trying to use PolitiFact's ratings as though they represent reliable truth measurements. That by itself is a preposterous idea, given the level of subjectivity inherent in PolitiFact's system. But the inconsistency of the definition of "Half True" makes it even sillier.

PolitiFact's repeated failure to fix the problems we point out helps keep us convinced that PolitiFact checks facts poorly. We think a left-leaning ideological bubble combined with the Dunning-Kruger effect best explains PolitiFact's behavior in these cases.

Reminder: PolitiFact made a big to-do about changing the label "Barely True" to "Mostly False," but shifted the definition of "Half True" without letting the public in on the long-running discrepancy.

Too much transparency?

Clarification July 18, 2017: Changed "PolitiFact" to "PolitiFact Florida" in the second paragraph after the block quotation.

This post also appears at the blog "Sublime Bloviations"

Monday, July 17, 2017

PolitiFact Georgia's kid gloves for Democratic candidate

Did Democratic Party candidate for Georgia governor Shelley Evans help win a Medicare fraud lawsuit, as she claimed? PolitiFact Georgia says there's no question about it:

PolitiFact defines its "True" rating as "The statement is accurate and there’s nothing significant missing."

Evans' statement misses quite a bit, so we will use this as an example of PolitiFact going easy on a Democrat. It's very likely that PolitiFact would have thought of the things we'll point out if it had been rating a Republican candidate. Republicans rarely get the kid gloves treatment from PolitiFact. But it's pretty common for Democrats.

The central problem in the fact check stems from a fallacy of equivocation. In PolitiFact's view, a win is a win, even if Evans implied a win in court covering the issue of fraud when in fact the win was an out-of-court settlement that stopped short of proving the existence of Medicare fraud.

Overlooking that considerable difference in the two kinds of wins counts as the type of error we should expect a partisan fact checker to make. A truly neutral fact-checker would not likely make the mistake.

Evans' claim vs. the facts


Evans: "I helped win one of the biggest private lawsuits against Medicare fraud in history."

Fact: Evans helped with a private lawsuit alleging Medicare fraud

Fact: The case was not decided in court, so none of the plaintiff's attorneys can rightly claim to have won the lawsuit. The lawsuit was made moot by an out-of-court settlement. As part of the settlement, the defendant admitted no wrongdoing (that is, no fraud).

Evans' statement leads her audience toward two false conclusions. First, that her side of the lawsuit won in court. It did not. Second, that the case proved the company (DaVita) was guilty of Medicare fraud. It did not.

How does a fact checker miss something this obvious?

It was plain in the facts as PolitiFact reported them that the court did not decide the case. It was therefore likewise obvious that no lawyer could claim an unambiguous lawsuit victory.

Yet PolitiFact found absolutely no problem with Evans' claim on its "Truth-O-Meter":
Evans said that she "helped win one of the biggest private lawsuits against Medicare fraud in history." The lead counsel on the case corroborated her role in it, and the Justice Department confirmed its historic importance.

Her claim that they recovered $324 million for taxpayers also checks out.

We rate this statement True.
Indeed, PolitiFact's summary reads like a textbook example of confirmation bias, emphasizing what confirms the claim and ignoring whatever does not.
There is an obvious difference between impartially evaluating evidence in order to come to an unbiased conclusion and building a case to justify a conclusion already drawn. In the first instance one seeks evidence on all sides of a question, evaluates it as objectively as one can, and draws the conclusion that the evidence, in the aggregate, seems to dictate. In the second, one selectively gathers, or gives undue weight to, evidence that supports one's position while neglecting to gather, or discounting, evidence that would tell against it.
Evans can only qualify for the "True" rating if PolitiFact's definition of "True" means nothing and the rating is entirely subjective.

Correction July 17, 2017: Changed "out-court settlement" to "out-of-court settlement." Also made some minor changes to the formatting.

Thursday, July 13, 2017

PolitiFact avoids snarky commentary?

PolitiFact, as part of a statement on avoiding political involvement that it developed in order to obtain its status as a "verified signatory" of the International Fact-Checking Network's statement of principles, says it avoids snarky commentary.

Despite that, we got this on Twitter today:

Did PolitiFact investigate to see whether Trump was right that a lot of people do not know that France is the oldest U.S. ally? Apparently not.

Trump is probably right, especially considering that he did not specify any particular group of people. Is it common knowledge in China or India, for example, that France is the oldest U.S. ally?

So, politically neutral PolitiFact, which avoids snarky commentary, is snarking it up in response to a statement from Trump that is very likely true--even if the population he was talking about was the United States, France, or both.

Here's how PolitiFact's statement of principle reads (bold emphasis added):
We don’t lay out our personal political views on social media. We do share news stories and other journalism (especially our colleagues’ work), but we take care not to be seen as endorsing or opposing a political figure or position. We avoid snarky commentary.

(Note that PolitiFact Bias has no policy prohibiting snarky commentary)

Tuesday, July 11, 2017

PolitiFact helps Bernie Sanders with tweezers and imbalance

Our posts carrying the "tweezers or tongs" tag look at how PolitiFact skews its ratings by shifting its story focus.

Today we'll look at PolitiFact's June 27, 2017 fact check of Senator Bernie Sanders (I-Vt.):

Where Sen. Sanders mentions 23 million thrown off of health insurance, PolitiFact treats his statement like a random hypothetical. But the context shows Sanders was not speaking hypothetically (bold emphasis added):
"What the Republican proposal (in the House) does is throw 23 million Americans off of health insurance," Sanders told host Chuck Todd. "What a part of Harvard University -- the scientists there -- determine is when you throw 23 million people off of health insurance, people with cancer, people with heart disease, people with diabetes, thousands of people will die."
The House health care bill does not throw 23 million Americans off of health insurance. The CBO did predict that at the end of 10 years 23 million fewer Americans would have health insurance compared to the current law (Obamacare) projection. There's a huge difference between those two ideas, and PolitiFact may never get around to explaining it.

PolitiFact, despite fact-checkers' admitted preference for checking false statements, overlooks the low-hanging fruit in favor of Sanders' claim that thousands will die.

Is Sanders engaging in fearmongering? Sure. But PolitiFact doesn't care.

Instead, PolitiFact focused on Sanders' claim that study after study supports his point that thousands will die if 23 million people get thrown off of insurance.

PolitiFact verified his claim in hilariously one-sided fashion. One would never know from PolitiFact's fact check that the research findings are disputed, as here.

This is the type of research PolitiFact omitted (bold emphasis added) from its fact check:
After determining the characteristics of the uninsured and discovering that being  uninsured does not necessarily mean an individual has no access to health services, the authors turn to the question of mortality. A lack of care is particularly troubling if it leads to differences in mortality based on insurance status. Using data from the Health and Retirement Survey, the authors estimate differences in mortality rates for individuals based on whether they are privately insured, voluntarily uninsured, or involuntarily uninsured. Overall, they find that a lack of health insurance is not likely to be the major factor causing higher mortality rates among the uninsured. The uninsured—particularly the involuntarily uninsured—have multiple disadvantages that are associated with poor health.
So PolitiFact cherry-picked Sanders' claim with tweezers, then did a one-sided fact-check of that cherry-picked part of the claim. Sanders ended up with a "Mostly True" rating next to his false claims.

Does anybody do more to erode trust in fact-checking than PolitiFact?

It's worth noting this stinker was crafted by the veteran fact-checking team of Louis Jacobson and Angie Drobnic Holan.

Correction July 11, 2017: In the fourth paragraph after our quotation of PolitiFact, we had "23,000" instead of the correct figure of "23 million." Thanks to YuriG in the comments section for catching our mistake.

Saturday, July 8, 2017

PolitiFact California: Watching Democrats like a hawk

Is PolitiFact California's Chris Nichols the worst fact checker of all time? His body of evidence continues to grow, thanks to this port-tilted gem from July 7, 2017 (bold emphasis added):
We started with a bold claim by Sen. Harris that the GOP plan "effectively will end Medicaid."

Harris said she based that claim on estimates from the Congressional Budget Office. It projects the Senate bill would cut $772 billion dollars in funding to Medicaid over 10 years. But the CBO score didn’t predict the wholesale demise of Medicaid. Rather, it estimated that the program would operate at a significantly lower budget than if President Obama’s Affordable Care Act (ACA) were to remain in place.

Yearly federal spending on Medicaid would decrease about 26 percent by 2026 as a result of cuts to the program, according to the CBO analysis. At the request of Senate Democrats, the CBO made another somewhat more tentative estimate that Medicaid spending could be cut 35 percent in two decades.

Harris’ statement could be construed as saying Medicaid, as it now exists, would essentially end.
You think?

How else could it be construed, Chris Nichols? Inquiring minds want to know!

PolitiFact California declined to pass judgment on the California Democrats who made the claim about the effective end of Medicaid.


"Wouldn't end the program for good"? So the cuts just temporarily end the program?

Or have we misconstrued Nichols' meaning?

Friday, July 7, 2017

PolitiFact, Lauren Carroll, pathetic CYA

With a post on July 1, 2017, we noted PolitiFact's absurdity in keeping the "True" rating on Hillary Clinton's claim that 17 U.S. intelligence agencies "all concluded" that Russia intervened in the U.S. presidential election.

PolitiFact has noticed that not enough people accept 2+2=5, however, so departing PolitiFact writer Lauren Carroll returned this week with a pathetic attempt to justify her earlier fact check.

This is unbelievable.

Carroll's setup:
Back in October 2016, we rated this statement by then-candidate Hillary Clinton as True: "We have 17 intelligence agencies, civilian and military, who have all concluded that these espionage attacks, these cyberattacks, come from the highest levels of the Kremlin, and they are designed to influence our election."

Many readers have asked us about this rating since the New York Times and Associated Press issued their corrections.
Carroll then repeats PolitiFact's original excuse that since the Director of National Intelligence speaks for all 17 agencies, it somehow follows that 17 agencies "all concluded" that Russia interfered with the U.S. election.

And the punchline (bold emphasis added):
We asked experts again this week if Clinton’s claim was correct or not.

"In the context of a national debate, her answer was a reasonable inference from the DNI statement," Cordero said, emphasizing that the statement said, "The U.S. Intelligence Community (USIC) is confident" in its assessment.

Aftergood said it’s fair to say the Director of National Intelligence speaks for the intelligence community, but that doesn’t always mean there is unamity across the community, and it’s possible that some organizations disagree.

But in the case of the Russia investigation, there is no evidence of disagreement among members of the intelligence community.
Put simply, either the people who work at PolitiFact are stupid, or else they think you're stupid.

PolitiFact claims it asked its cited experts whether Clinton's claim was correct.

PolitiFact then shares with its readers responses that do not tell them whether the experts think Clinton's claim was correct.

1) "In the context of a national debate, her answer was a reasonable inference from the DNI statement" 

It's one thing to make a reasonable inference. It's another thing whether the inference was true. If a person shows up at your home soaking wet, it may be a reasonable inference that it's raining outside. The inference isn't necessarily correct.

The quotation of Carrie Cordero does not answer whether Clinton's claim was correct.

How does a fact checker not know that?

 2) PolitiFact paraphrases expert Steven Aftergood: "Aftergood said it’s fair to say the Director of National Intelligence speaks for the intelligence community, but that doesn’t always mean there is unamity [sic] across the community, and it’s possible that some organizations disagree."

The paraphrase of Aftergood appears to make our point. Even if the Director of National Intelligence speaks for all 17 agencies it does not follow that all 17 agencies agreed with the finding. Put another way, even if Clinton's inference was reasonable the more recent reports show that it was wrong. The 17 agencies did not all reach the same conclusion independently, contrary to what Clinton implied.

And that's it.

Seriously, that's it.

PolitiFact trots out this absolutely pathetic CYA attempt and expects people to take it seriously?

May it never be.

The evidence from the experts does not support PolitiFact's judgment, yet PolitiFact uses that evidence to support its judgment.



Maybe they'll be able to teach Carroll some logic at UC Berkeley School of Law.

Correction July 7, 2017: Removed an extraneous "the" preceding "PolitiFact" in our first paragraph following our first quotation of PolitiFact.

Thursday, July 6, 2017

Transparency, Facebook and PolitiFact

We've never been impressed with Facebook as a vehicle for commenting on articles. We at PolitiFact Bias allow it as an alternative to posting comments directly on the site, mostly to let people have their say aside from our judgment of whether they've stayed on topic and achieved a baseline level of decorum.

PolitiFact uses Facebook as its more-or-less exclusive forum for article commentary.

It's a great system for PolitiFact, for it allows its left-leaning readership to make things unpleasant for those who criticize PolitiFact, and the "top posts" feature allows the left-leaning mob to keep popular left-leaning comments in the most prominent places.

What it isn't is a place for PolitiFact to entertain, consider and address criticism of its work.

Today I noticed that some of my Facebook comments were not appearing on my Facebook "Activity Log."

A Facebook commenter mocked another person's reference to this website, PolitiFact Bias, facetiously stating that PolitiFact Bias isn't biased at all. It's a classic attack based on the genetic fallacy. We created a post a while back to address attacks of that type, so I posted the URL. The comment seemed to publish normally, but then did not appear on my Activity Log and I could not find it on PolitiFact's Facebook page.

So it was time for some experimentation.

I replied again to the same comment, this time with the text of the PolitiFact Bias post instead of the URL, reasoning that perhaps I had been tagged as a spammer.

No go.

As before, the comment seemed to publish, but when I tried to edit to clarify some of the formatting, I got a message saying the post did not exist:

We imagine that Facebook may have some plausible reason for this behavior. But regardless of that, incidents like this show yet another lack of transparency for PolitiFact's version of the fact-checking game.

PolitiFact champions democracy, supposedly, but prefers a commenting system that buries or silences critical voices.

PolitiFact Texas uses tongs (2016)

Our "tweezers or tongs" tag applies to cases where PolitiFact had a choice of a narrow focus on one part of a claim or a wider focus on a claim with more than one part.

The tweezers or tongs option allows a fact-check to exercise bias by using the true part of a statement to boost the rating, or by ignoring the true part of the statement to drop the rating.

In this case, from 2016, a Democrat got the benefit of PolitiFact Texas' tongs treatment:

So, it was true that Texas law requires every high school to have a voter registrar.

But it was false that the law requires the registrar to get the children to vote once they're eligible.

PolitiFact averages it out:
Saldaña said a Texas law requires every high school to have a voter registrar "and part of their responsibility is to make sure that when children become 18 and become eligible to vote, that they vote."

A 1983 law requires every high school to have a deputy voter registrar tasked with giving eligible students voter registration applications. Each registrar also must make sure submitted applications are appropriately handled.

However, the law doesn’t require registrars to make every eligible student register; it's up to each student to act or not. Also, as Saldaña acknowledged, registrars aren’t required to ensure that students vote.

We rate this statement Half True.
There are dozens of examples where PolitiFact ignored what was true in favor of emphasizing the false. It's just one more way the PolitiFact system allows bias to creep in.

Here's one for which PolitiFact Pennsylvania breaks out the tweezers:

Sen. Toomey (R-Penn.) correctly says the ACA created a new category of eligibility. That part of his claim does not figure in the "Half True" rating.

We doubt that PolitiFact has ever created an ethical, principled and objective means for deciding when to ignore parts of compound claims.

Certainly we see no evidence of such means in PolitiFact's work.

Tuesday, July 4, 2017

PolitiFact revises its own history

We remember.

We remember when PolitiFact openly admitted that when Republicans charged that Obamacare cut Medicare it tended to rate such claims either "Half True" or "Mostly False."
PolitiFact has looked askance at bare statements that Obamacare cuts Medicare, rating them either Half True or Mostly False depending on how they are worded.
Nowadays, PolitiFact has reconsidered. It now says it generally rated claims that Obamacare cut Medicare as "Half True."
We did a series of fact-checks about the back-and-forth and generally rated the Republican attacks Half True.
PolitiFact doubtless took the latter position in response to criticism of its claims about Republican "cuts" to Medicaid. PolitiFact has flatly said Trump's budget cuts Medicaid (no caveats) despite the fact that outlays for Medicaid rise every year under the Trump budget. PolitiFact has also joined the mainstream media in attacking the Trump administration for denying it cuts Medicaid, rating those statements "Mostly False" or worse.

Given the context, PolitiFact's fact check of Kellyanne Conway looks like a retrospective attempt to excuse PolitiFact's inconsistency.

Unfortunately for PolitiFact writer Jon Greenberg, the facts do not support his defense narrative. The "Half True" ratings he cites tended to come from joint claims: that 1) Obamacare cut Medicare and 2) the savings went to pay for the Affordable Care Act. It's completely true that the ACA used Medicare savings to cut the price tag for the legislation. Every version of the CBO's assessment of the Democrats' health care reform bill will confirm it.

When PolitiFact rated isolated GOP claims that Democrats' health care reform cut Medicare, the verdict tended to come in as "Mostly False." It wasn't even close. Keep reading below the fold to see the proof.

So, PolitiFact writer Jon Greenberg either isn't so great at checking his facts, or else he is deliberately misleading his audience. The same goes for the editor, Angie Drobnic Holan.

Monday, July 3, 2017

PolitiFact's top 16 myths about Obamacare skips its 2013 "Lie of the Year"

In late 2013 PolitiFact announced its 2013 "Lie of the Year," supposedly President Barack Obama's promise that Americans could keep their plan (and their doctor) under the Affordable Care Act.

We noted at the time (even before the winner was announced) that PolitiFact was forced into its choice by a broad public narrative.
With a hat tip to Vicini, it's inconceivable that PolitiFact will choose a claim other than "If you like it" as the "Lie of the Year" from its list of nominees.  Having gone out of the way to nominate a claim from years past made relevant by the events of 2013, PolitiFact must choose it or lose credibility.
Today on Facebook, PolitiFact highlighted its 2013 list of the top 16 myths about Obamacare.

Its "Lie of the Year" for 2013 did not make the list. It wasn't even mentioned in the article.

One item that did make the list, however, was Marco Rubio's "Mostly False" claim that patients won't be able to keep their doctors under the ACA.

Seriously. That one made PolitiFact's list.

If anyone needed proof that PolitiFact reluctantly pinned the 2013 "Lie of the Year" on Obama, PolitiFact has provided it.

Sunday, July 2, 2017

How to fact check like a partisan, featuring PolitiFact

First, find a politician who has made a conditional statement, like this one from Marco Rubio (R-Fla.):
"As long as Florida keeps the same amount of funding or gets an increase, which is what we are working on, per patient being rewarded for having done the right thing -- there is no reason for anybody to be losing any of their current benefits under Medicaid. None," he said in a Facebook Live on June 28.
Rubio starts his statement with the conditional: "As long as Florida keeps the same amount of funding or gets an increase ..." Logic demands that the latter part of Rubio's statement receive its interpretation under the assumption the condition is true.

A partisan fact checker can make a politician look bad by ignoring the condition and taking the remainder of the statement out of context. Like this:

As the partisan fact checker will want its work to pass as a fact check, at least to like-minded partisans and unsuspecting moderates, it should then proceed to check the out-of-context portion of the subject's statement.

For example, if the condition of the statement is the same or increased funding, look for ways the funding might decrease and use those findings as evidence the politician spoke falsely. For a statement like Rubio's one might cite a left-leaning think tank like the Urban Institute, with a finding that predicts lower funding for Medicaid:
The Urban Institute estimated the decline in federal dollars and enrollment for the states.

It found for Florida, that federal funding for Medicaid under ACA would be $16.8 billion in 2022. Under the Senate legislation, it would fall to about $14.6 billion, or a cut of about 13 percent (see table 6). The Urban Institute projects 353,000 fewer people on Medicaid or CHIP in Florida.
Easy-peasy, right?

Then use the rest of the fact check to show that Florida will not be likely to make up the gap predicted by the Urban Institute. That will prove, in a certain misleading and dishonest way, that Rubio's conditional statement was wrong.

The summary of such a partisan fact check might look like this:
Rubio said, "There is no reason for anybody to be losing any of their current benefits under Medicaid."

Rubio is wrong to state that benefit cuts are off the table.

There are reasons that Medicaid recipients could lose benefits if the Senate bill becomes law. The bill curbs the rate of spending by the federal government over the next decade and caps dollar amounts and ultimately reduces the inflation factor. Those changes will put pressure on states to make difficult choices including the possibility of cutting services.

We rate this claim Mostly False.
Ignoring the conditional part of the claim results in the fallacy of denying the antecedent. The partisan fact checker can usually rely on its highly partisan audience not noticing such fallacies.

Any questions?

Correction: July 2, 2017: In the next-to-last paragraph changed "to notice" to "noticing" for the sake of clarity.

Saturday, July 1, 2017

PolitiFact absurdly keeps "True" rating on false statement from Hillary Clinton

Today we were alerted about a story from earlier this week detailing a New York Times correction.

Heavy.com, from June 30, 2017:
On June 29 The New York Times issued a retraction to an article published on Monday, which originally stated that all 17 intelligence organizations had agreed that Russia orchestrated the hacking. The retraction reads, in part:
The assessment was made by four intelligence agencies — the Office of the Director of National Intelligence, the Central Intelligence Agency, the Federal Bureau of Investigation and the National Security Agency. The assessment was not approved by all 17 organizations in the American intelligence community.
It should be noted that the four intelligence agencies are not retracting their statements about Russia involvement. But all 17 did not individually come to the assessment, despite what so many people insisted back in October.
The same article went on to point out that PolitiFact had rated "True" Hillary Rodham Clinton's claim that 17 U.S. intelligence agencies found Russia responsible for hacking. That despite acknowledging it had no evidence backing the idea that each agency had reached the conclusion based on its own investigation:
Politifact concluded that 17 agencies had, indeed, agreed on this because “the U.S. Intelligence Community is made up of 17 agencies.” However, the 17 agencies had not independently made the assessment, as many believed. Politifact mentioned this in the story, but still said the statement was correct.
We looked up the PolitiFact story in question. Heavy.com presents PolitiFact's reasoning accurately.

It makes for a great example of horrible fact-checking.

Clinton's statement implied each of the 17 agencies made its own finding:
"We have 17 intelligence agencies, civilian and military, who have all concluded that these espionage attacks, these cyberattacks, come from the highest levels of the Kremlin, and they are designed to influence our election."
It's very easy to avoid making that implication: "Our intelligence agencies have concluded ..." Such a phrasing fairly represents a finding backed by a figure representing all 17 agencies. But when Clinton emphasized the 17 agencies "all" reached the same conclusion it implied independent investigations.

PolitiFact ignored that false implication in its original rating and in a June 2017 update to the article, issued in response to testimony earlier in the year from former Director of National Intelligence James Clapper:
The January report presented its findings by saying "we assess," with "we" meaning "an assessment by all three agencies."

The October statement, on the other hand, said "The U.S. Intelligence Community (USIC) is confident" in its assessment. As we noted in the article, the 17 separate agencies did not independently come to this conclusion, but as the head of the intelligence community, the Office of the Director of National Intelligence speaks on behalf of the group.

We stand by our rating.
PolitiFact's rating was and is preposterous. Note how PolitiFact defines its "True" and "Mostly True" ratings:
TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
It doesn't pass the sniff test to assert that Clinton's claim about "17 agencies" needs no clarification or additional information. We suppose that only a left-leaning and/or unserious fact-checking organization would conclude otherwise.

Wednesday, June 28, 2017

Ben Shapiro on Trump and fewer insurance options

Last week PolitiFact issued a completely ridiculous "Mostly False" rating on President Trump's tweet claiming Obamacare had resulted in millions of Americans having fewer health insurance options.

Many, including Jeff and me, pointed out the absurdity on Twitter. But we did not get around to writing it up for the blog.

Ben Shapiro saves us the trouble by summarizing PolitiFact's failure neatly in a short YouTube video.


Good work, Ben Shapiro.

We expected PolitiFact to walk this one back or at least try to answer the wave of criticism the ruling received. PolitiFact elected to do neither, apparently convinced that its airtight reasoning successfully weathered all the criticism it received.

In other words, industrial strength bubble.

Sunday, June 25, 2017

The unquotable Judith Curry

Judith Curry's Twitter avatar.
A reader tipped us to the fact that climate research expert Judith Curry has posted interview questions she received from PolitiFact's John Kruzel, along with her responses to those questions.

Kruzel solicited Curry's views in the context of fact-checking a statement about carbon dioxide's role in climate change. Kruzel's fact check lists his email interview of Curry in its source list, but the fact check does not quote, paraphrase or summarize Curry's views.

In accordance with its Creative Commons licensing, we present Curry's account of her PolitiFact interview, following the format she used at the blog she hosts, Climate Etc. (we added a bracketed explanation of the IPCC acronym):
On June 20, John Kruzel, the author of the Politifact article, sent me an email:

We’re looking into Energy Secretary Rick Perry’s recent claim that the main cause of climate change is most likely “the ocean waters and this environment that we live in.” We’ve asked the Department of Energy why Perry disagrees with the IPCC  [Intergovernmental Panel on Climate Change] that human activity is the main cause of climate change; we’ve received no response so far.

I’d be grateful if you’d consider the following questions:

Questions from Politifact to JC, and JC’s responses:

Do you consider the IPCC the world’s leading authority on climate change and why?
The IPCC is driven by the interests of policy makers, and the IPCC’s conclusions represent a negotiated consensus.  I don’t regard the IPCC framework to be helpful for promoting free and open inquiry and debate about the science of climate change.
Do you agree with the IPCC that effects of man-made greenhouse gas emissions “are extremely likely to have been the dominant cause of the observed warming since the mid-20th century.”
It is possible that humans have been the dominant cause of the recent warming, but we don’t really know how to separate out human causes from natural variability.  The ‘extremely likely’ confidence level is wholly unjustified in my opinion.
How solid is the science behind the conclusion that human activity is the main cause of climate change?
Not very solid, in my opinion.  Until we have a better understanding of long term oscillations in the ocean and indirect solar effects, we can’t draw definitive conclusions about the causes of recent warming.
What is your response to Perry’s statement?
I don’t have a problem with Perry’s statement.  There is no reason for him to be set up as an arbiter of climate science.  He seems clearly committed to a clean environment and research to developing new energy technologies, which is  his job as Secretary of Energy.

JC question:  So what are we to conclude from PolitiFact’s failure to even mention or consider my responses, after explicitly asking for them?
We suggest it's safe to conclude Kruzel had his mind made up on this fact check before contacting his expert sources. Asking experts if they agree the IPCC is the leading authority on climate change qualifies as a classic leading question, and offers a strong indication that the IPCC's leading role in the story was central before Kruzel contacted Curry. The second question counts as another leading question, set up by the first leading question.

It looks like Kruzel was trying to lead the experts toward giving quotations to back what he had already decided to write.

Kruzel's third and fourth questions are fine. A serious fact check could have worked based on those questions alone, dropping the leading questions and Kruzel's/PolitiFact's confident proclamation regarding the IPCC (bold emphasis added):
The world’s leading authority on climate change, the Intergovernmental Panel on Climate Change, has concluded that human activity is "extremely likely" to be the main driver of warming since the mid 20th century.

While it’s still possible to find dissenters, scientists around the globe generally agree with this conclusion.
Kruzel might have added: "We actually found one such dissenter without even really trying!"  But since PolitiFact does not publish its email interviews (unlike one transparent fact checker we know of), there's no telling whether PolitiFact found more than one such dissenter in its small pool of expert sources.

Seriously, what is the basis in fact for calling the IPCC "the world's leading authority on climate change"? Such designations stem from popular or expert opinions, don't they? Objective reporting makes such distinctions clear. What Kruzel did was not objective reporting.

Correction June 28, 2017: Belatedly added a hyperlink to the PolitiFact fact check that cites Curry without quoting Curry.

Saturday, June 24, 2017

The collapse of PolitiFact's favorite claim to neutrality

"What's funny is sometimes I'll get an email that'll say 'You guys are so biased.'  But I won't know who we're supposed to be biased in favor of, because we get criticized a lot by both sides."
--Bill Adair, from a 2012 NPR interview

For years, PolitiFact creator Bill Adair has excused PolitiFact from charges of bias by saying it receives criticism from both sides.

The line received such customary use that it found its way into Lucas Graves' account of the rise of the fact-checking movement, "Deciding What's True: The Rise of Political Fact-Checking in American Journalism" (numbered citation omitted):
Fact-checkers anticipate political criticism and develop reflexes for trying to defuse it. "We're going to make the best calls we can, in a pretty gutsy form of journalism," Bill Adair told NPR. "And when we do, I think it's natural for people on one side or the other of this very partisan world we live in are going to be unhappy." One strategy is responding only minimally or in carefully chosen venues, and always asserting their balance, often by showing the criticism they receive from the other side of the spectrum. Fact-checkers make this point constantly.
The point of this strategy is obvious. The fact checkers imply that getting criticized from both sides indicates they are neutral--a form of the middle ground fallacy.

But this month Adair, now ensconced in academia at Duke University helping run the Duke Reporters' Lab, published research suggesting that the criticism fact checkers receive comes predominantly from conservatives (reviewed here).

Color us shocked?

We find it disingenuous for Adair to use the "we get criticized from both sides" argument to emphasize PolitiFact's neutrality and then fail to question PolitiFact's neutrality after admitting the criticism from both sides is mostly from one side.

Tuesday, June 13, 2017

PolitiFact: Gays (and lesbians!) most frequent victims of hate crimes

Isn't it clear that PolitiFact's behavior is most likely the result of liberal bias?

PolitiFact Bias co-editor Jeff D. caught PolitiFact pimping a flubbed fact check on Twitter, attaching it to the anniversary of the Orlando gay nightclub shooting.
The problem? It's not true.

As we pointed out when PolitiFact first ran its fact check, there's a big difference between claiming a group is the most frequent target of hate crimes and claiming a group is at the greatest risk (on a per-person basis) of hate crimes.

Blacks as a group experience the most targeted hate crimes (about 30 percent of the total), according to the imperfect FBI data. That makes blacks the most frequent targets of hate crimes, not gays and lesbians.

Perhaps LGBT people as a group experience a greater individual risk of falling victim to a hate crime, but we do not trust the research on which PolitiFact based its ruling. We doubt the researchers properly considered the bias against various small groups, such as the Sikhs. Don't take our word for it. Use the hyperlinks.

There is reason to suspect the research was politicized. We recommend not drawing any conclusion until the question is adequately researched.

What would we do without fact checkers?

Clarification June 13, 2017: Added "(on a per person basis)" to accentuate the intended distinction. Also changed "greater risk" to "greater individual risk" for the same reason.

Sunday, June 11, 2017

PolitiFact New York: Facts be damned, what we think the Democrat was trying to say was true

Liberals like to treat the tendency of fact checkers to rate conservatives more harshly than liberals as fairly solid evidence that Republicans lie more. After all, as we are often reminded, "truth has a liberal bias." But the way fact checkers pick which stories to tell and which facts to check has a fundamental impact on how fact checkers rate claims by political party.

Take a June 9, 2017 fact check from PolitiFact New York, for example.

Image from PolitiFact.com

Lt. Gov. Kathy Hochul (D) of New York proclaimed that the state of New York has achieved pay equity.

Hochul also proclaimed women are paid 90 cents on the dollar compared to men.

Hochul's first claim seems flatly false, if we count women getting paid $1 for every $1 earned by a man as "pay equity."

Her second claim, putting an accurate number on the raw gender wage gap, typically rates either "Half True" or "Mostly True" according to PolitiFact. PolitiFact tends to overlook the fact that the statistic is almost invariably used in the context of gender discrimination (see "Afters" section below).

In fact, the PolitiFact New York fact check focuses exclusively on the second claim of fact and takes a pass on evaluating the first claim of fact. PolitiFact New York justified its rating by saying Hochul's point was on target (bold emphasis added):
Hochul's numbers are slightly off. The data reveals a gender pay gap, but her point that New York state has a significantly smaller gap compared with the national average is correct. We rate her claim Mostly True.
At PolitiFact Bias, we class these cases under the tag "tweezers or tongs." PolitiFact might focus on one part of a claim, or focus on what it has interpreted as the point the politician was trying to make. Or, PolitiFact might look at multiple parts of a claim and produce a rating of the claim's truthiness "on balance."

PolitiFact in this case appears to use tweezers to remove "We have pay equity" from consideration. That saves the Democrat, Hochul, from an ugly blemish on her PolitiFact report card.

The fact checkers have at least one widely recognized bias: They tend to look for problematic statements. When a fact checker ignores a likely problem statement like "We have pay equity" in favor of something more mundane in the same immediate context, it suggests a different bias affected the decision.

The beneficiary of PolitiFact's adjusted focus once again: a Democrat.

When this happens over and over again, as it does, it calls into question whether PolitiFact's candidate "report cards" or comparisons of "Truth-O-Meter" ratings by party carry any scientific validity at all.


Afters

Did Hochul make her gender wage gap claim in the context of gender pay discrimination?

Our best clue on that issue comes from Hochul's statement, just outside the context quoted by PolitiFact New York, that "Now, it's got to get to 100 [cents on the dollar]."
We draw from that part of her statement that Hochul was very probably pushing the typical Democratic talking point that the raw wage gap results from gender discrimination, which is false. Interpreting her otherwise makes it hard to see the importance of pay equity regardless of the jobs men and women do. We doubt that pay equity regardless of the job performed would prove popular, even in the state of New York.

The acid test:
Will women's groups react with horror if women achieve an advantage in terms of the raw wage gap? When men make only 83 cents on the dollar compared to women? Or will they assure us that the differences in pay are okay as it is the result of the job choices people make? We'll find out in time.

Thursday, June 8, 2017

Incompetent PolitiFact engages in Facebook mission creep, false reporting

The liberal bloggers/mainstream fact checkers at PolitiFact are expanding their "fake news" police mission at Facebook. While they're at it, they're publishing misleading reports.

Facebook Mission Creep

Remember the pushback when Facebook announced that fact checkers would help it flag "fake news"? PolitiFact Editor Angie Drobnic Holan made the rounds to offer reassurance:
[STELTER:] Angie, there has been a lot of blowback already to this Facebook experiment. Some on the right are very skeptical, even mocking this. Why is it a worthwhile idea? Why are you helping Facebook try to fact-check these fake stories?

HOLAN: Go to Facebook, and they are going about their day looking to connect with friends and family. And then they see these headlines that are super dramatic and they wonder if they're right or not. And when they're wrong, sometimes they are really wrong. They're entirely made up.

It is not trying to censor anything. It is just trying to flag these reports that are fabricated out of thin air.
Fact check journalists spent their energy insisting that "fake news" was just made-up "news" items produced purely to mislead people.

Welcome to PolitiFact's version of Facebook mission creep. Sarah Palin posted a meme criticizing the Paris climate accord. The meme showed a picture of Florida legislators celebrating, communicating the attitude of those who support the Paris climate agreement:

The meme does not try to capture the appearance of a regular news story. It is primarily offering commentary, not communicating the idea that Florida legislators supported the Paris climate agreement. As such, it simply does not fit the definition of "fake news" that PolitiFact has insisted would guide the policing effort on Facebook.

Yet PolitiFact posted this in its fact check of Palin:
PolitiFact fact-checked Palin’s photo as part of our effort to debunk fake news on Facebook.
Fail. It's as though PolitiFact expects meme-makers to subscribe to the same sets of principles for using images that bind professional journalists (oops):

Maybe PolitiFact should flag itself as "fake news"?

Communicating Fact Checks Using Half Truths

Over and over we point out that PolitiFact uses the same varieties of deception that politicians use to sway voters. This fact check of Palin gives us yet another outstanding example.  What did Palin do wrong, in PolitiFact's eyes?
Says an Internet meme shows people rejoicing over the Paris agreement
PolitiFact provided no good evidence Palin said any such thing. The truth is that Palin posted an Internet meme (we don't know who created it) that used an image that did not match the story.

PolitiFact has posted images that do not match its reporting. We provided an example above, from a PolitiFact video about President Clinton's role in signing the North American Free Trade Agreement.

If we reported "PolitiFact said George W. and Jeb Bush Negotiated NAFTA," we would be giving a misleading report at best. At worst we'd be flatly lying. We apply the same standard to PolitiFact that we would apply to ourselves.


We sent a message to the writer and editor at PolitiFact Florida responsible for this fact check. We sent it before starting on the text of our post, but we're not waiting for a response from PolitiFact because PolitiFact usually fails to respond to substantive criticism. If we receive any substantive reply from PolitiFact, we will append it to this message and amend the title of the post to reflect the presence of an update (no, we won't hold our breath).

Dear Amy Sherman, Katie Sanders,

Your fact check of Sarah Palin's Paris climate accord meme is disgraceful for two big reasons.

First, you describe the fact check as part of the Facebook effort to combat "fake news." After laboring to insist to everyone that "fake news" is an intentionally false news item intended to mislead people, it looks like you've moved toward Donald Trump's definition of "fake news." The use of a photograph that does not match the story is bad and unethical practice in professional journalism. But it's pretty common in the production of political memes. Do you really want to expand your definition of "fake news" like that, after trying to reassure people that the Facebook initiative was not about limiting political expression? Would you want your PolitiFact video identifying George W. Bush/Jeb Bush as George H. W. Bush classified as "fake news" based on your use of an unrelated photograph?

Second, your fact check flatly states that Palin identified the Florida lawmakers as celebrants of the Paris climate accord. But that obviously is not the case. The fact check notes, in fact, that the post does not identify the people in the photo. All the meme does is make it easy to jump to the conclusion that the people in the photo were celebrating the Paris agreement. As such, it's a loose implication. But your fact check states the misdirection is explicit:
Palin posted a viral image that purportedly shows a group of people clapping as a result of the Paris agreement, presumably about the billions they will earn.
Purported by whom? It's implied, not stated.

Do you seriously think the purpose of the post was to convey to the audience that Florida legislators were either responsible for the Paris agreement or celebrating it? That would truly be fake news as PolitiFact has tried to define it. But that's not what this meme does, is it?

You're telling the type of half-truth you claim to expose.

Stop it.

Edit 6/9/2017: Added link to CNN interview in second graph-Jeff 

Wednesday, June 7, 2017

PolitiLies at PolitiFact Wisconsin II (Updated: PolitiFact amends)

In part one of "PolitiLies at PolitiFact Wisconsin," we shared our experience questioning PolitiFact's reporting from a fact check of U.S. Rep. Glenn Grothman (R-Wis.).

In part two, we will look at PolitiFact Wisconsin's response to having a clear error pointed out in one of its stories.

On May 11, 2017, PolitiFact Wisconsin published a "Pants on Fire" rating of U.S. Rep. Paul Ryan's claim that "Air Force pilots were going to museums to find spare parts over the last eight years."

PolitiFact issued the "Pants on Fire" ruling despite a Fox News report which featured an Air Force captain, identified by name, who said the Air Force had on seven occasions obtained parts for B-1 bombers from museums.

PolitiFact Wisconsin objected to the thin evidence, apparently including the failure of the report to identify any of the museums that allegedly served as parts repositories (bold emphasis added):
The only example Ryan’s office cited was a May 2016 Fox News article in which an Air Force captain said spare parts needed for a B-1 bomber at a base in South Dakota were taken from seven "museum aircraft" from around the country. The museums weren’t identified and no other details were provided.
Yet when we attempted to verify PolitiFact Wisconsin's reporting, we found the text version of the story said Capt. Travis Lytton (no other details were provided?) showed the Fox reporters a museum aircraft from which a part was stripped. Lytton also described the function of the part in the story (no other details were provided?).

The accompanying video showed a B-1 bomber situated next to the name of the museum: South Dakota Air and Space Museum.

If one of the seven museums was not the South Dakota Air and Space Museum, then the Fox News video was highly misleading. The viewer would conclude the South Dakota Air and Space Museum was one of the seven museums.

How did PolitiFact Wisconsin miss this information? And why, when Lytton was plainly identified in the Fox News report, did PolitiFact Wisconsin not try to contact Lytton to find out the names of the other museums?

"Readers who see an error should contact the writer or editor"

We like to contact the writer and the editor when we see an error.

In this case, we contacted writer Tom Kertscher and editor Greg Borowski (May 31, 2017):
Dear Tom Kertscher, Greg Borowski,
Your rating of Speaker Ryan's claim about the Air Force pulling parts from museum planes falsely claims that none of the seven museums were identified.

Yet the Fox News report said the Air Force officer showed reporters the museum plane from which a part was taken. And if you bothered to watch the video associated with the story, the name of the museum appears very plainly in front of the B-1 bomber the officer identified.


And if the names of the museums were a point worth mentioning, then why not contact the officer (identified by name in the Fox News report) and ask him? If he identified one of the museums, would he not identify the others?
After nearly a week, we have received no reply to our message and the PolitiFact Wisconsin fact check still features the same false information about the Fox News report.



Update June 10, 2017: In June 2017 we received a message from PolitiFact Wisconsin editor Greg Borowski. Borowski said he had not received our email message (we do not know if writer Tom Kertscher, to whom it was also sent, had the same experience). Borowski said after finding out about the criticism PolitiFact Wisconsin "added a note to the item."

PolitiFact Wisconsin removed two false statements from its fact check, one stating that the Fox News report identified none of the museums from which airplane parts were taken, and one stating that the report featured no other details beyond those mentioned in the fact check.

This editor's note was added at the end of the fact check:
Editor's note: This item was updated on June 9, 2017 to say that the Fox News report did identify one museum. That information does not change the rating.
As with the other correction we helped prompt this week, we are impressed by PolitiFact Wisconsin's ability to commit an error and then fix the error without admitting any mistake. The editor's note says the fact check was changed "to say the Fox News report did identify one museum." Why was that change made? The editor's note doesn't say. The truth is the change was made because PolitiFact Wisconsin made a mistake.

It's appropriate for journalists to admit to making mistakes when they make them.  We do not care for the spin we see in PolitiFact Wisconsin's update notices.

Are we being too tough on PolitiFact Wisconsin? We think noted journalist Craig Silverman would agree with us.
Rather than destroying trust, corrections are a powerful tool to reinforce how accountable and transparent we are.

“If you’re willing to admit you’re wrong, people will trust you more,” said Mathew Ingram of Gigaom. “If I said to someone ‘You know, I’m never wrong’ they would think I was a psychopath or a liar, so they would trust me less. That’s versus if I said ‘I screw up all the time.’ They trust you more because you’re more human.”

That’s the paradox of trust: admitting our mistakes and failings make us more deserving of trust.

Correction June 14, 2017: Commenter Vinni BoD noticed our update was dated Sept. 2017. The month was actually June, which was the correct month in two spots where we (inexplicably) had "Sept." instead.

PolitiLies at PolitiFact Wisconsin I (Updated: PolitiFact amends)

Back on May 15, 2017 we noticed a suspicious factoid in PolitiFact Wisconsin's fact check of congressman Glenn Grothman (R-Wis.) (bold emphasis added):
Grothman’s quick response: "Planned Parenthood is the biggest abortion provider in the country."

He added that the group is an outspoken advocate for what he termed "controversial" services such as birth control.
The notion that birth control services count as controversial looked suspiciously like the result of a liberal press filter. Curious whether the context of Grothman's statement supported PolitiFact Wisconsin's telling, we had a look at the context (17:55 through 20:55).

The crosstalk made it a bit hard for us to follow the conversation, but a partial transcript from an article by Jen Hayden at the left-leaning Daily Kos seemed reasonably accurate to us. Note the site also features a trimmed video of the same exchange.

It looked to us as though Grothman mentioned the "controversial programs" without naming them, instead moving on to talk about why his constituents can do without Planned Parenthood's role in providing contraceptive services. Just before Grothman started talking about alternatives to Planned Parenthood's contraceptive services, an audience member called out asking Grothman for examples of the "controversial programs." That question may have led to the assumption that Grothman was naming contraceptive services as an example of "controversial programs."

In short, we could not see any solid justification for PolitiFact Wisconsin's reporting. So we emailed PolitiFact Wisconsin (writer Dave Umhoefer and editor Greg Borowski) to ask whether its evidence was better than it appeared:
Upon reading your recent fact check of Republican Glen Grothman, I was curious about the line claiming Grothman called birth control a "controversial" service.

He added that the group is an outspoken advocate for what he termed "controversial" services such as birth control.

I watched the video and had trouble hearing the audio (I've found transcripts that seem pretty much correct, however). It sounded like Grothman mentioned Planned Parenthood's support for some controversial services, then went on to talk about the ease with which people might obtain birth control. Was there some particular part of the event that you might transcribe in clear support of your summary?

From what I can tell, the context does not support your account. If people can easily obtain birth control without Planned Parenthood's help, how would that make the service "controversial"? It would make the service less necessary, not controversial, right?

I urge you to either make clear the portion of the event that supports your interpretation, or else alter the interpretation to square with the facts of the event. By that I mean not guessing what Grothman meant when he referred to "controversial programs." If Grothman did not make clear what he was talking about, your account should not suggest otherwise.

If you asked Grothman what he was talking about and he made clear he believes birth control is a controversial service, likewise make that clear to your readers.
The replies we received offered no evidence in support of PolitiFact Wisconsin's reporting. In fact, the reply we received on May 18 from Borowski suggested that Umhoefer had (belatedly?) reached out to Grothman's office for clarification:
Dave has reached out to Grothman's office. So, you;ll [sic] have to be patient.
By June 4, 2017 we had yet to receive any further message with evidence backing the claim from the article. We sent a reminder message that day; it has likewise failed to draw a reply.

[Update June 8, 2017: PolitiFact Wisconsin editor Greg Borowski alerted us that the fact check of Grothman was updated. We have reproduced the PolitiFact Wisconsin "Editor's note" at the end of this post]

What does it mean?

It looks like PolitiFact Wisconsin did careless reporting on the Grothman story. The story very likely misrepresented Grothman's view of the "controversial programs" he spoke about.

Grothman's government website offers a more reliable account of what Grothman views as Planned Parenthood's "controversial" programs.

It appears PolitiFact Wisconsin is aware it published something as fact without adequate backing information, and intends to keep its flawed article as-is so long as it anticipates no significant consequences will follow.



Also see PolitiLies at PolitiFact Wisconsin II, published the same day as this part.

Update June 8, 2017: PolitiFact removed "such as birth control" from its summary of Grothman's statement about "controversial services." PolitiFact Wisconsin appended the following editor's note to the story:
(Editor's note, June 7, 2017: An earlier version of this item quoted Grothman as saying that Planned Parenthood is an outspoken advocate for "controversial" services such as birth control. A spokesperson for his office said on June 7, 2017 that the video, in which Grothman's voice is hard to hear at times, may have led people to that conclusion, but that Grothman does not believe birth control is a controversial service. The birth control quote had no bearing on the congressman’s statement about Planned Parenthood and its role in abortions, so the rating of True is unchanged.)
We are impressed by PolitiFact Wisconsin's ability to run a correction while offering the appearance that it committed no error. Saying the original item "quoted Grothman" gives the reader the impression that Grothman must have misspoken. But benevolent PolitiFact Wisconsin covered for Grothman's mistake after his office clarified what he meant to say.

It's hardly a model of transparency, and it offers Grothman no apology for misrepresenting his views.

We stick with our assessment that PolitiFact Wisconsin reported carelessly. And we suggest that PolitiFact Wisconsin's error was the type of error that occurs when journalists think they know how conservatives think when in reality the journalists do not know how conservatives think (ideological bias).

On the bright side, the portion of the fact check that we criticized now reads as it should have read from the start. We credit PolitiFact Wisconsin for making that change. That fixes the main issue, for there's nothing wrong with having a bias if it doesn't show up in the reporting.

Of secondary importance, we judge that the editor's note was subtly misleading and lacking in transparency.

We also note with sadness that the changes to PolitiFact Wisconsin's story do not count as either corrections or updates. We know this because PolitiFact Wisconsin added no "corrections and updates" tag to the story. Adding that tag would make a fact check appear on PolitiFact's page of stories that have been corrected or updated.

Correction June 9, 2017: Removed a redundant "because" from the final paragraph of the update.