Friday, July 21, 2017

The ongoing stupidity of PolitiFact California

PolitiFact on the whole stinks at fact-checking, but PolitiFact California is special. We don't use the word "stupid" lightly, but PolitiFact California has pried it from our lips multiple times already over its comparatively short lifespan.

PolitiFact's latest affront to reason comes from the following PolitiFact California (@CAPolitiFact) tweet:
The original fact check was stupid enough, but PolitiFact California's tweet twists that train wreck into an unrecognizable heap of metal.

  • The fact check discusses the (per year) odds of falling victim to a fatal terror attack committed by a refugee.
  • The tweet trumpets the (per year) odds of a fatal attack occurring.

The different claims require totally different calculations, and the fact that the tweet confused one claim for the other helps show how stupid it was to take the original fact-checked claim seriously in the first place.

The original claim said "The chances of being killed by a refugee committing a terrorist act is 1 in 3.6 billion." PolitiFact forgave the speaker for omitting "American" and "per year." Whatever.

But using the same data used to justify that claim, the per-year chance that a refugee commits a fatal terror attack on an American is about 1 in 13.3. That figure comes from taking the number of fatal attacks by refugees (3) and dividing by the number of years (40) covered by the data. Unlike the first calculation, the second does not involve population figures at all.
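To make the contrast concrete, here is a minimal sketch of the two calculations. The attack count (3) and the 40-year span come from the data cited above; the average-population figure is our own rough assumption, included only to show how a "1 in 3.6 billion"-style number gets built:

    # A rough sketch (Python) of the two different calculations.
    fatal_attacks = 3        # fatal terror attacks attributed to refugees in the data
    years = 40               # years covered by the data
    avg_population = 270e6   # assumed average U.S. population over that span (illustrative)

    # The claim PolitiFact rated: an individual American's per-year odds of being killed
    person_years = years * avg_population
    per_person_per_year = fatal_attacks / person_years
    print("Odds per American per year: 1 in {:,.0f}".format(1 / per_person_per_year))
    # -> roughly 1 in 3.6 billion

    # The claim the tweet actually made: the per-year odds that a fatal attack occurs at all
    attacks_per_year = fatal_attacks / years
    print("Odds of a fatal attack in a given year: 1 in {:.1f}".format(1 / attacks_per_year))
    # -> roughly 1 in 13.3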

That outcome does a great deal to show the silliness of the original claim, which a responsible fact checker would have pointed out by giving more emphasis to the sensible expert from the Center for Immigration Studies:
Mark Krikorian, executive director of the Center for Immigration Studies, a think tank that favors stricter immigration policies, said the "one in 3.6 billion" statistic from the Cato study includes too many qualifiers. Notably, he said, it excludes terrorist attacks by refugees that did not kill anyone and those "we’ll never know about" foiled by law enforcement.

"It’s not that it’s wrong," Krikorian said of the Cato study, but its author "is doing everything he can to shrink the problem."
Krikorian stated the essence of what the fact check should have found if PolitiFact California wasn't stupid.



Correction July 21, 2017: Fixed typo where "bit" was substituted for "but" in the opening paragraph.

Tuesday, July 18, 2017

A "Half True" update

Years ago, I pointed out to PolitiFact that it had offered readers two different definitions of "Half True." In November 2011, I posted to note PolitiFact's apparent acknowledgment of the problem, evidenced by its effort to resolve the discrepant definitions.

It's over five years later. But PolitiFact Florida (archived version, just in case PolitiFact Florida notices something amiss) either did not get the memo or failed to fully implement the change.
We then decide which of our six rulings should apply:

TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
HALF TRUE – The statement is accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains some element of truth but ignores critical facts that would give a different impression.
FALSE – The statement is not accurate.
PANTS ON FIRE – The statement is not accurate and makes a ridiculous claim.
PolitiFact Florida still publishes what was apparently the original standard PolitiFact definition of "Half True." PolitiFact National revised its definition in 2011, adding "partially" to the definition so it read "The statement is partially accurate but leaves out important details or takes things out of context."

PolitiFact Florida uses the updated definition on its main page, and directs readers to PolitiFact's main "principles" page for more information.

It's not even clear if PolitiFact Florida's main page links to PolitiFact Florida's "About" page. It may be a vestigial limb of sorts, helping us trace PolitiFact's evolution.

In one sense, the inconsistency means relatively little. After all, PolitiFact's founder, Bill Adair, has himself said that the "Truth-O-Meter" ratings are "entirely subjective." That being the case, it matters little whether "partially" occurs in the definition of "Half True."

The main problem with the changing definition stems from PolitiFact's irresponsible publication of candidate "report cards" that supposedly help voters decide which candidate they ought to support.

Why should subjective report cards make any difference in preferring one candidate over another?

The changing definition creates one other concern--one that I've written about before: Academic researchers (who really ought to know better) keep trying to use PolitiFact's ratings as though they represent reliable truth measurements. That by itself is a preposterous idea, given the level of subjectivity inherent in PolitiFact's system. But the inconsistency of the definition of "Half True" makes it even sillier.

PolitiFact's repeated failure to fix the problems we point out helps keep us convinced that PolitiFact checks facts poorly. We think a left-leaning ideological bubble combined with the Dunning-Kruger effect best explains PolitiFact's behavior in these cases.

Reminder: PolitiFact made a big to-do about changing the label "Barely True" to "Mostly False," but shifted the definition of "Half True" without letting the public in on the long-running discrepancy.

Too much transparency?



Clarification July 18, 2017: Changed "PolitiFact" to "PolitiFact Florida" in the second paragraph after the block quotation.

This post also appears at the blog "Sublime Bloviations"

Monday, July 17, 2017

PolitiFact Georgia's kid gloves for Democratic candidate

Did Democratic Party candidate for Georgia governor Stacey Evans help win a Medicare fraud lawsuit, as she claimed? PolitiFact Georgia says there's no question about it:


PolitiFact defines its "True" rating as "The statement is accurate and there’s nothing significant missing."

Evans' statement misses quite a bit, so we will use this as an example of PolitiFact going easy on a Democrat. It's very likely that PolitiFact would have thought of the things we'll point out if it had been rating a Republican candidate. Republicans rarely get the kid gloves treatment from PolitiFact. But it's pretty common for Democrats.

The central problem in the fact check stems from a fallacy of equivocation. In PolitiFact's view, a win is a win, even though Evans implied a courtroom victory on the question of fraud when in fact the "win" was an out-of-court settlement that stopped short of proving any Medicare fraud occurred.

Overlooking that considerable difference between the two kinds of wins counts as the type of error we should expect a partisan fact checker to make. A truly neutral fact checker would not likely make the mistake.

Evans' claim vs. the facts

 

Evans: "I helped win one of the biggest private lawsuits against Medicare fraud in history."

Fact: Evans helped with a private lawsuit alleging Medicare fraud.

Fact: The case was not decided in court, so none of the plaintiff's attorneys can rightly claim to have won the lawsuit. The lawsuit was made moot by an out-of-court settlement. As part of the settlement, the defendant admitted no wrongdoing (that is, no fraud).

Evans' statement leads her audience toward two false conclusions. First, that her side of the lawsuit won in court. It did not. Second, that the case proved the company (DaVita) was guilty of Medicare fraud. It did not.

How does a fact checker miss something this obvious?

It was plain in the facts as PolitiFact reported them that the court did not decide the case. It was therefore likewise obvious that no lawyer could claim an unambiguous lawsuit victory.

Yet PolitiFact found absolutely no problem with Evans' claim on its "Truth-O-Meter":
Evans said that she "helped win one of the biggest private lawsuits against Medicare fraud in history." The lead counsel on the case corroborated her role in it, and the Justice Department confirmed its historic importance.

Her claim that they recovered $324 million for taxpayers also checks out.

We rate this statement True.
Indeed, PolitiFact's summary reads like a textbook example of confirmation bias, emphasizing what confirms the claim and ignoring whatever does not.
There is an obvious difference between impartially evaluating evidence in order to come to an unbiased conclusion and building a case to justify a conclusion already drawn. In the first instance one seeks evidence on all sides of a question, evaluates it as objectively as one can, and draws the conclusion that the evidence, in the aggregate, seems to dictate. In the second, one selectively gathers, or gives undue weight to, evidence that supports one's position while neglecting to gather, or discounting, evidence that would tell against it.
Evans can only qualify for the "True" rating if PolitiFact's definition of "True" means nothing and the rating is entirely subjective.



Correction July 17, 2017: Changed "out-court settlement" to "out-of-court settlement." Also made some minor changes to the formatting.

Thursday, July 13, 2017

PolitiFact avoids snarky commentary?

PolitiFact, as part of a statement on avoiding political involvement that it developed in order to obtain its status as a "verified signatory" of the International Fact-Checking Network's statement of principles, says it avoids snarky commentary.

Despite that, we got this on Twitter today:



Did PolitiFact investigate to see whether Trump was right that a lot of people do not know that France is the oldest U.S. ally? Apparently not.

Trump is probably right, especially considering that he did not specify any particular group of people. Is it common knowledge in China or India, for example, that France is the oldest U.S. ally?

So, politically neutral PolitiFact, which avoids snarky commentary, is snarking it up in response to a statement from Trump that is very likely true--even if the population he was talking about was the United States, France, or both.

Here's how PolitiFact's statement of principle reads (bold emphasis added):
We don’t lay out our personal political views on social media. We do share news stories and other journalism (especially our colleagues’ work), but we take care not to be seen as endorsing or opposing a political figure or position. We avoid snarky commentary.
 ¯\_(ツ)_/¯

(Note that PolitiFact Bias has no policy prohibiting snarky commentary)

Tuesday, July 11, 2017

PolitiFact helps Bernie Sanders with tweezers and imbalance

Our posts carrying the "tweezers or tongs" tag look at how PolitiFact skews its ratings by shifting its story focus.

Today we'll look at PolitiFact's June 27, 2017 fact check of Senator Bernie Sanders (I-Vt.):


Where Sen. Sanders mentions 23 million thrown off of health insurance, PolitiFact treats his statement like a random hypothetical. But the context shows Sanders was not speaking hypothetically (bold emphasis added):
"What the Republican proposal (in the House) does is throw 23 million Americans off of health insurance," Sanders told host Chuck Todd. "What a part of Harvard University -- the scientists there -- determine is when you throw 23 million people off of health insurance, people with cancer, people with heart disease, people with diabetes, thousands of people will die."
The House health care bill does not throw 23 million Americans off of health insurance. The CBO did predict that at the end of 10 years, 23 million fewer Americans would have health insurance than under the current law (Obamacare) projection. Part of that projected gap, for example, consists of people the CBO expects would voluntarily drop coverage once the individual mandate penalty goes away. There's a huge difference between those two ideas, and PolitiFact may never get around to explaining it.

PolitiFact, despite fact-checkers' admitted preference for checking false statements, overlooks the low-hanging fruit in favor of Sanders' claim that thousands will die.

Is Sanders engaging in fearmongering? Sure. But PolitiFact doesn't care.

Instead, PolitiFact focused on Sanders' claim that study after study supports his point that thousands will die if 23 million people get thrown off of insurance.

PolitiFact verified his claim in hilariously one-sided fashion. One would never know from PolitiFact's fact check that the research findings are disputed, as here.

This is the type of research PolitiFact omitted (bold emphasis added) from its fact check:
After determining the characteristics of the uninsured and discovering that being  uninsured does not necessarily mean an individual has no access to health services, the authors turn to the question of mortality. A lack of care is particularly troubling if it leads to differences in mortality based on insurance status. Using data from the Health and Retirement Survey, the authors estimate differences in mortality rates for individuals based on whether they are privately insured, voluntarily uninsured, or involuntarily uninsured. Overall, they find that a lack of health insurance is not likely to be the major factor causing higher mortality rates among the uninsured. The uninsured—particularly the involuntarily uninsured—have multiple disadvantages that are associated with poor health.
So PolitiFact used tweezers to cherry-pick part of Sanders' claim, then gave that cherry-picked part a one-sided fact check. Sanders ended up with a "Mostly True" rating next to his false claims.

Does anybody do more to erode trust in fact-checking than PolitiFact?

It's worth noting this stinker was crafted by the veteran fact-checking team of Louis Jacobson and Angie Drobnic Holan.



Correction July 11, 2017: In the fourth paragraph after our quotation of PolitiFact, we had "23,000" instead of the correct figure of "23 million." Thanks to YuriG in the comments section for catching our mistake.

Saturday, July 8, 2017

PolitiFact California: Watching Democrats like a hawk

Is PolitiFact California's Chris Nichols the worst fact checker of all time? His body of evidence continues to grow, thanks to this port-tilted gem from July 7, 2017 (bold emphasis added):
We started with a bold claim by Sen. Harris that the GOP plan "effectively will end Medicaid."

Harris said she based that claim on estimates from the Congressional Budget Office. It projects the Senate bill would cut $772 billion dollars in funding to Medicaid over 10 years. But the CBO score didn’t predict the wholesale demise of Medicaid. Rather, it estimated that the program would operate at a significantly lower budget than if President Obama’s Affordable Care Act (ACA) were to remain in place.

Yearly federal spending on Medicaid would decrease about 26 percent by 2026 as a result of cuts to the program, according to the CBO analysis. At the request of Senate Democrats, the CBO made another somewhat more tentative estimate that Medicaid spending could be cut 35 percent in two decades.

Harris’ statement could be construed as saying Medicaid, as it now exists, would essentially end.
You think?

How else could it be construed, Chris Nichols? Inquiring minds want to know!

PolitiFact California declined to pass judgment on the California Democrats who made the claim about the effective end of Medicaid.



Afters:

"Wouldn't end the program for good"? So the cuts just temporarily end the program?

Or have we misconstrued Nichols' meaning?