Sunday, July 30, 2017

"Not a lot of reader confusion" IV

PolitiFact editor Angie Drobnic Holan has famously defended PolitiFact's various "report card" graphs by declaring she does not observe much reader confusion. Readers, Holan believes, realize that PolitiFact fact checkers are not social scientists. Equipped with that understanding, people presumably only draw reasonable conclusions from the graphed results of PolitiFact's "entirely subjective" trademarked "Truth-O-Meter" ratings.

What planet do PolitiFact fact checkers live on, we wonder?

We routinely see people using PolitiFact data as though it were derived scientifically. Jeff spotted a sensational example on Twitter.
Here's an enlarged view of the chart to which Jeff objected:


How did the chart measure the "actual honesty" of the four presidential primary candidates? Just in case it's hard to read, we'll tilt it 90 degrees and zoom in:


That's right. The chart uses PolitiFact's subjective ratings, despite the even more obvious problem of selection bias, to measure candidates' "actual honesty."

The guy to whom Jeff replied, T. R. Ramachandran, runs a newsletter that gives us terrific (eh) information on politics. Comprehensive insights & stuff:

It's not plausible that the people who run PolitiFact do not realize that people use their offal (sic) data this way. The fact that PolitiFact resists adding a disclaimer to its ratings and charts leads us inexorably toward the conclusion that PolitiFact really doesn't mind misleading people. At least not to the point of adding the disclaimer that would fix the bulk of the problem.

Why not give this a try, PolitiFact? Hopefully it's not too truthful for you.




Tuesday, July 25, 2017

When PolitiFacts contradict

In PolitiFact's zeal to defend the Affordable Care Act from criticism, it contradicts itself.

In declaring it "False" that the ACA has entered a death spiral, PolitiFact Wisconsin identifies three components of a death spiral, one being rising premiums. PolitiFact affirms that premiums are rising. Then PolitiFact states that none of the three conditions that make up a death spiral has occurred. We must conclude, via PolitiFact, that premiums are increasing and that premiums are not increasing.

In PolitiFact Wisconsin's own words (bold emphasis added):
Our rating

A death spiral is a health industry term for a cycle with three components — shrinking enrollment, healthy people leaving the system and rising premiums.

The latest data shows enrollment is increasing slightly and younger (typically healthier) people are signing up at the same rate as last year. And while premiums are increasing, that isn’t affecting the cost to most consumers due to built-in subsidies.

So none of the three criteria are met, much less all three.
It's not hard to fix. PolitiFact Wisconsin could alter its fact check to note that only one of the conditions of a death spiral is occurring across the board, but that subsidies insulate many customers from the effects of rising premiums.

Subsidizing the cost of buying insurance does not make the cost of the premiums shrink, exactly. Instead, it places part of the responsibility for paying on somebody else. When somebody else foots the bill, higher prices do not drive off consumers nearly as effectively.
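
To make the arithmetic concrete, here is a minimal sketch using made-up round numbers of our own (the dollar figures are assumptions for illustration, not data from the fact check): the full premium rises, the consumer's own payment stays flat, and the increase lands on the subsidy.

```python
# Hypothetical round numbers (ours, for illustration only).
# Under the ACA, the subsidy is pegged to the consumer's income, so when
# the premium rises, the subsidy absorbs the increase.
premium_last_year = 500   # monthly sticker price, last year
premium_this_year = 600   # monthly sticker price, this year (a 20% jump)
consumer_share = 100      # what the income formula says the consumer pays

subsidy_last_year = premium_last_year - consumer_share   # 400
subsidy_this_year = premium_this_year - consumer_share   # 500

print(f"Premium rose by {premium_this_year - premium_last_year}, "
      f"yet the consumer still pays {consumer_share}; "
      f"somebody else's share grew from {subsidy_last_year} "
      f"to {subsidy_this_year}.")
```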

We're still waiting for PolitiFact to recognize that the insurance market is not monolithic. When the rules of the ACA leave individual markets without any insurers because adverse selection has driven them out, the conditions of a death spiral have obtained in that market.

We also note, in the context of the ACA, that when the only people who elect to pay for insurance are those who are receiving subsidies, it is fair to say the share of the market that pays full price encountered a death spiral.

Sunday, July 23, 2017

PolitiFact's facts outpace the truth

"Falsehood flies, and the Truth comes limping after it"

With the speed of the Interwebs at its disposal, PolitiFact on July 22, 2017 declared that no evidence exists to show Senator Bill Nelson (D-Fla.) favors a single-payer health care system for the United States of America.


PolitiFact based its proclamation loosely on its July 19, 2017 fact check of the National Republican Senatorial Committee's ad painting Nelson as a potential supporter of a universal single-payer plan.

We detected signs of very poor reporting from PolitiFact Florida, which will likely receive a closer examination at Zebra Fact Check.

Though PolitiFact reported that Nelson's office declined to give a statement on his support for a single-payer plan, PolitiFact ignored the resulting implied portrait of Nelson: He does not want to go unequivocally on the record supporting single-payer because it would hurt his re-election chances.

PolitiFact relied on a paraphrase of Nelson from the Tampa Tribune (since swallowed by the Tampa Bay Times, which in turn runs PolitiFact) to claim Nelson has said he does not favor a single-payer plan (bold emphasis added):
The ad suggests that Nelson supports Warren on most things, including a single-payer health care system. Actually, Nelson has said he doesn’t support single payer and wants to focus on preserving current law. His voting record is similar to Warren’s, but members of the same party increasingly vote alike due to a lack of bipartisan votes in the Senate.
There's one redeeming feature in PolitiFact Florida's summary. Using the voting agreement between two candidates to predict how they'll vote on one particular issue makes little sense unless the past votes cover that issue. If Nelson and Senator Elizabeth Warren (D-Mass.) had voted together in support of a single-payer plan, then okay.

But PolitiFact downplayed the ad's valid point about the possibility Nelson would support a single-payer plan. And PolitiFact made the mistake of overstating its survey of the evidence. By declaring that evidence does not exist, PolitiFact created the impression that it had searched thoroughly and appropriately for that evidence and could not find it because there was nothing to find.

In other words, PolitiFact produced a false impression.

"Another major step forward"


We tried two strategies for finding evidence Nelson likes the idea of a single-payer plan. The first strategy failed. But the second strategy quickly produced a hit that sinks PolitiFact's claim that no evidence exists of Nelson favoring a single-payer plan.

A Tampa Bay market television station, WTSP, aired an interview with Nelson earlier in July 2017. The interviewer asked Nelson if he would be willing to join with Democrats who support a single-payer plan.

Nelson replied (bold emphasis added):
Well, I've got enough trouble just trying to fix the Affordable Care Act. I mean, you're talking about another major step forward, and we're not ready for that now.
The quotation supports the view that Nelson is playing the long game on single-payer. He won't jeopardize his political career by unequivocally supporting it until he thinks it's a political winner.

PolitiFact's fact check uncovered part of that evidence by asking Nelson's office to say whether he supports single payer. The office declined to provide a statement, and that pretty much says it all. If Nelson does not support single-payer and does not believe that going on the record would hurt his chances in the election, then nothing should stop him from making his position clear.

PolitiFact will not jeopardize Nelson's political career by finding the evidence that the NRSC has a point. Instead, it will report that the evidence it failed to find does not exist.

It will present this twisted approach to reporting as non-partisan fact-checking.


Afters:

We let PolitiFact know about the evidence it missed (using email and Twitter). Now we wait for the evidence of PolitiFact's integrity and transparency.

Saturday, July 22, 2017

Video embed, Sen. Bill Nelson

I'm not a fan of Sen. Bill Nelson (D-Fla.). I just need to post this as part of an effort to save it for posterity.


Friday, July 21, 2017

The ongoing stupidity of PolitiFact California

PolitiFact on the whole stinks at fact-checking, but PolitiFact California is special. We don't use the word "stupid" lightly, but PolitiFact California has pried it from our lips multiple times already over its comparatively short lifespan.

PolitiFact's latest affront to reason comes from the following PolitiFact California (@CAPolitiFact) tweet:
The original fact check was stupid enough, but PolitiFact California's tweet twists that train wreck into an unrecognizable heap of metal.

  • The fact check discusses the (per year) odds of falling victim to a fatal terror attack committed by a refugee.
  • The tweet trumpets the (per year) odds of a fatal attack occurring.

The different claims require totally different calculations, and the fact that the tweet confused one claim for the other helps show how stupid it was to take the original fact-checked claim seriously in the first place.

The original claim said "The chances of being killed by a refugee committing a terrorist act is 1 in 3.6 billion." PolitiFact forgave the speaker for omitting "American" and "per year." Whatever.

But using the same data used to justify that claim, the per-year chance of a fatal attack on an American (national risk, not personal) by a refugee occurring is 1 in 13.3. That figure comes from taking the number of fatal attacks by refugees (3) and dividing it by the number of years (40) covered by the data. Population numbers do not figure in the second calculation, unlike the first.
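
A minimal sketch of the two calculations, assuming the figures cited above (3 fatal attacks, with 3 deaths, over 40 years); the average-population figure is a round assumption of ours, included only to reproduce the ballpark of the Cato-derived number:

```python
deaths = 3                     # deaths from terror attacks by refugees
fatal_attacks = 3              # fatal attacks over the period
years = 40                     # years covered by the data
avg_population = 270_000_000   # rough average U.S. population (assumption)

# Personal risk (the original claim): the per-year chance that a given
# American is killed in such an attack. Population matters here.
personal_risk = deaths / (years * avg_population)
print(f"Personal risk: 1 in {1 / personal_risk:,.0f} per year")
# -> roughly 1 in 3.6 billion

# National risk (the tweet's claim): the per-year chance that a fatal
# attack occurs at all. Population never enters this calculation.
national_risk = fatal_attacks / years
print(f"National risk: 1 in {1 / national_risk:.1f} per year")
# -> 1 in 13.3
```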

That outcome does a great deal to show the silliness of the original claim, which a responsible fact checker would have pointed out by giving more emphasis to the sensible expert from the Center for Immigration Studies:
Mark Krikorian, executive director of the Center for Immigration Studies, a think tank that favors stricter immigration policies, said the "one in 3.6 billion" statistic from the Cato study includes too many qualifiers. Notably, he said, it excludes terrorist attacks by refugees that did not kill anyone and those "we’ll never know about" foiled by law enforcement.

"It’s not that it’s wrong," Krikorian said of the Cato study, but its author "is doing everything he can to shrink the problem."
Krikorian stated the essence of what the fact check should have found if PolitiFact California wasn't stupid.



Correction July 21, 2017: Fixed typo where "bit" was substituted for "but" in the opening paragraph.

Clarification July 21, 2017: Added "(national risk, not personal)" to the eighth paragraph to enhance the clarity of the argument.

Tuesday, July 18, 2017

A "Half True" update

Years ago, I pointed out to PolitiFact that it had offered readers two different definitions of "Half True." In November 2011, I posted to note PolitiFact's apparent acknowledgment of the problem, evidenced by its effort to resolve the discrepant definitions.

It's over five years later. But PolitiFact Florida (archived version, just in case PolitiFact Florida notices something amiss) either did not get the memo or failed to fully implement the change.
We then decide which of our six rulings should apply:

TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
HALF TRUE – The statement is accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains some element of truth but ignores critical facts that would give a different impression.
FALSE – The statement is not accurate.
PANTS ON FIRE – The statement is not accurate and makes a ridiculous claim.
PolitiFact Florida still publishes what was apparently the original standard PolitiFact definition of "Half True." PolitiFact National revised its definition in 2011, adding "partially" to the definition so it read "The statement is partially accurate but leaves out important details or takes things out of context."

PolitiFact Florida uses the updated definition on its main page, and directs readers to PolitiFact's main "principles" page for more information.

It's not even clear if PolitiFact Florida's main page links to PolitiFact Florida's "About" page. It may be a vestigial limb of sorts, helping us trace PolitiFact's evolution.

In one sense, the inconsistency means relatively little. After all, PolitiFact's founder, Bill Adair, has himself said that the "Truth-O-Meter" ratings are "entirely subjective." That being the case, it matters little whether "partially" occurs in the definition of "Half True."

The main problem with the changing definition comes from PolitiFact's irresponsible publication of candidate "report cards" that supposedly help voters decide which candidate they ought to support.

Why should subjective report cards make any difference in preferring one candidate over another?

The changing definition creates one other concern--one that I've written about before: Academic researchers (who really ought to know better) keep trying to use PolitiFact's ratings as though they represent reliable truth measurements. That by itself is a preposterous idea, given the level of subjectivity inherent in PolitiFact's system. But the inconsistency of the definition of "Half True" makes it even sillier.

PolitiFact's repeated failure to fix the problems we point out helps keep us convinced that PolitiFact checks facts poorly. We think a left-leaning ideological bubble combined with the Dunning-Kruger effect best explains PolitiFact's behavior in these cases.

Reminder: PolitiFact made a big to-do about changing the label "Barely True" to "Mostly False," but shifted the definition of "Half True" without letting the public in on the long-running discrepancy.

Too much transparency?



Clarification July 18, 2017: Changed "PolitiFact" to "PolitiFact Florida" in the second paragraph after the block quotation.

This post also appears at the blog "Sublime Bloviations"

Monday, July 17, 2017

PolitiFact Georgia's kid gloves for Democratic candidate

Did Democratic Party candidate for Georgia governor Stacey Evans help win a Medicare fraud lawsuit, as she claimed? PolitiFact Georgia says there's no question about it:


PolitiFact defines its "True" rating as "The statement is accurate and there’s nothing significant missing."

Evans' statement misses quite a bit, so we will use this as an example of PolitiFact going easy on a Democrat. It's very likely that PolitiFact would have thought of the things we'll point out if it had been rating a Republican candidate. Republicans rarely get the kid gloves treatment from PolitiFact. But it's pretty common for Democrats.

The central problem in the fact check stems from a fallacy of equivocation. In PolitiFact's view, a win is a win, even though Evans implied a courtroom win on the issue of fraud when in fact the win was an out-of-court settlement that stopped short of proving the existence of Medicare fraud.

Overlooking that considerable difference in the two kinds of wins counts as the type of error we should expect a partisan fact checker to make. A truly neutral fact-checker would not likely make the mistake.

Evans' claim vs. the facts

 

Evans: "I helped win one of the biggest private lawsuits against Medicare fraud in history."

Fact: Evans helped with a private lawsuit alleging Medicare fraud.

Fact: The case was not decided in court, so none of the plaintiff's attorneys can rightly claim to have won the lawsuit. The lawsuit was made moot by an out-of-court settlement. As part of the settlement, the defendant admitted no wrongdoing (that is, no fraud).

Evans' statement leads her audience toward two false conclusions. First, that her side of the lawsuit won in court. It did not. Second, that the case proved the company (DaVita) was guilty of Medicare fraud. It did not.

How does a fact checker miss something this obvious?

It was plain in the facts as PolitiFact reported them that the court did not decide the case. It was therefore likewise obvious that no lawyer could claim an unambiguous lawsuit victory.

Yet PolitiFact found absolutely no problem with Evans' claim on its "Truth-O-Meter":
Evans said that she "helped win one of the biggest private lawsuits against Medicare fraud in history." The lead counsel on the case corroborated her role in it, and the Justice Department confirmed its historic importance.

Her claim that they recovered $324 million for taxpayers also checks out.

We rate this statement True.
Indeed, PolitiFact's summary reads like a textbook example of confirmation bias, emphasizing what confirms the claim and ignoring whatever does not.
There is an obvious difference between impartially evaluating evidence in order to come to an unbiased conclusion and building a case to justify a conclusion already drawn. In the first instance one seeks evidence on all sides of a question, evaluates it as objectively as one can, and draws the conclusion that the evidence, in the aggregate, seems to dictate. In the second, one selectively gathers, or gives undue weight to, evidence that supports one's position while neglecting to gather, or discounting, evidence that would tell against it.
Evans can only qualify for the "True" rating if PolitiFact's definition of "True" means nothing and the rating is entirely subjective.



Correction July 17, 2017: Changed "out-court settlement" to "out-of-court settlement." Also made some minor changes to the formatting.
Correction Oct. 8, 2017: Changed "Shelley Evans" to "Stacey Evans" in the opening paragraph. Our apologies to Stacey Evans for that mistake.

Thursday, July 13, 2017

PolitiFact avoids snarky commentary?

PolitiFact, as part of a statement on avoiding political involvement that it developed in order to obtain its status as a "verified signatory" of the International Fact-Checking Network's statement of principles, says it avoids snarky commentary.

Despite that, we got this on Twitter today:



Did PolitiFact investigate to see whether Trump was right that a lot of people do not know that France is the oldest U.S. ally? Apparently not.

Trump is probably right, especially considering that he did not specify any particular group of people. Is it common knowledge in China or India, for example, that France is the oldest U.S. ally?

So, politically neutral PolitiFact, which avoids snarky commentary, is snarking it up in response to a statement from Trump that is very likely true--even if the population he was talking about was the United States, France, or both.

Here's how PolitiFact's statement of principle reads (bold emphasis added):
We don’t lay out our personal political views on social media. We do share news stories and other journalism (especially our colleagues’ work), but we take care not to be seen as endorsing or opposing a political figure or position. We avoid snarky commentary.
 ¯\_(ツ)_/¯

(Note that PolitiFact Bias has no policy prohibiting snarky commentary)

Tuesday, July 11, 2017

PolitiFact helps Bernie Sanders with tweezers and imbalance

Our posts carrying the "tweezers or tongs" tag look at how PolitiFact skews its ratings by shifting its story focus.

Today we'll look at PolitiFact's June 27, 2017 fact check of Senator Bernie Sanders (I-Vt.):


Where Sen. Sanders mentions 23 million thrown off of health insurance, PolitiFact treats his statement like a random hypothetical. But the context shows Sanders was not speaking hypothetically (bold emphasis added):
"What the Republican proposal (in the House) does is throw 23 million Americans off of health insurance," Sanders told host Chuck Todd. "What a part of Harvard University -- the scientists there -- determine is when you throw 23 million people off of health insurance, people with cancer, people with heart disease, people with diabetes, thousands of people will die."
The House health care bill does not throw 23 million Americans off of health insurance. The CBO did predict that at the end of 10 years 23 million fewer Americans would have health insurance compared to the current law (Obamacare) projection. There's a huge difference between those two ideas, and PolitiFact may never get around to explaining it.

PolitiFact, despite fact-checkers' admitted preference for checking false statements, overlooks the low-hanging fruit in favor of Sanders' claim that thousands will die.

Is Sanders engaging in fearmongering? Sure. But PolitiFact doesn't care.

Instead, PolitiFact focused on Sanders' claim that study after study supports his point that thousands will die if 23 million people get thrown off of insurance.

PolitiFact verified his claim in hilariously one-sided fashion. One would never know from PolitiFact's fact check that the research findings are disputed, as here.

This is the type of research PolitiFact omitted (bold emphasis added) from its fact check:
After determining the characteristics of the uninsured and discovering that being uninsured does not necessarily mean an individual has no access to health services, the authors turn to the question of mortality. A lack of care is particularly troubling if it leads to differences in mortality based on insurance status. Using data from the Health and Retirement Survey, the authors estimate differences in mortality rates for individuals based on whether they are privately insured, voluntarily uninsured, or involuntarily uninsured. Overall, they find that a lack of health insurance is not likely to be the major factor causing higher mortality rates among the uninsured. The uninsured—particularly the involuntarily uninsured—have multiple disadvantages that are associated with poor health.
So PolitiFact cherry-picked Sanders' claim with tweezers, then did a one-sided fact-check of that cherry-picked part of the claim. Sanders ended up with a "Mostly True" rating next to his false claims.

Does anybody do more to erode trust in fact-checking than PolitiFact?

It's worth noting this stinker was crafted by the veteran fact-checking team of Louis Jacobson and Angie Drobnic Holan.



Correction July 11, 2017: In the fourth paragraph after our quotation of PolitiFact, we had "23,000" instead of the correct figure of "23 million." Thanks to YuriG in the comments section for catching our mistake.

Saturday, July 8, 2017

PolitiFact California: Watching Democrats like a hawk

Is PolitiFact California's Chris Nichols the worst fact checker of all time? His body of evidence continues to grow, thanks to this port-tilted gem from July 7, 2017 (bold emphasis added):
We started with a bold claim by Sen. Harris that the GOP plan "effectively will end Medicaid."

Harris said she based that claim on estimates from the Congressional Budget Office. It projects the Senate bill would cut $772 billion dollars in funding to Medicaid over 10 years. But the CBO score didn’t predict the wholesale demise of Medicaid. Rather, it estimated that the program would operate at a significantly lower budget than if President Obama’s Affordable Care Act (ACA) were to remain in place.

Yearly federal spending on Medicaid would decrease about 26 percent by 2026 as a result of cuts to the program, according to the CBO analysis. At the request of Senate Democrats, the CBO made another somewhat more tentative estimate that Medicaid spending could be cut 35 percent in two decades.

Harris’ statement could be construed as saying Medicaid, as it now exists, would essentially end.
You think?

How else could it be construed, Chris Nichols? Inquiring minds want to know!

PolitiFact California declined to pass judgment on the California Democrats who made the claim about the effective end of Medicaid.



Afters:

"Wouldn't end the program for good"? So the cuts just temporarily end the program?

Or have we misconstrued Nichols' meaning?

Friday, July 7, 2017

PolitiFact, Lauren Carroll, pathetic CYA

With a post on July 1, 2017, we noted PolitiFact's absurdity in keeping the "True" rating on Hillary Clinton's claim that 17 U.S. intelligence agencies "all concluded" that Russia intervened in the U.S. presidential election.

PolitiFact has noticed that not enough people accept 2+2=5, however, so departing PolitiFact writer Lauren Carroll returned within a week with a pathetic attempt to justify her earlier fact check.

This is unbelievable.

Carroll's setup:
Back in October 2016, we rated this statement by then-candidate Hillary Clinton as True: "We have 17 intelligence agencies, civilian and military, who have all concluded that these espionage attacks, these cyberattacks, come from the highest levels of the Kremlin, and they are designed to influence our election."

Many readers have asked us about this rating since the New York Times and Associated Press issued their corrections.
Carroll then repeats PolitiFact's original excuse that since the Director of National Intelligence speaks for all 17 agencies, it somehow follows that 17 agencies "all concluded" that Russia interfered with the U.S. election.

And the punchline (bold emphasis added):
We asked experts again this week if Clinton’s claim was correct or not.

"In the context of a national debate, her answer was a reasonable inference from the DNI statement," Cordero said, emphasizing that the statement said, "The U.S. Intelligence Community (USIC) is confident" in its assessment.

Aftergood said it’s fair to say the Director of National Intelligence speaks for the intelligence community, but that doesn’t always mean there is unamity across the community, and it’s possible that some organizations disagree.

But in the case of the Russia investigation, there is no evidence of disagreement among members of the intelligence community.
Put simply, either the people who work at PolitiFact are stupid, or else they think you're stupid.

PolitiFact claims it asked its cited experts whether Clinton's claim was correct.

PolitiFact then shares with its readers responses that do not tell them whether the experts think Clinton's claim was correct.

1) "In the context of a national debate, her answer was a reasonable inference from the DNI statement" 

It's one thing to make a reasonable inference. It's another thing whether the inference was true. If a person shows up at your home soaking wet, it may be a reasonable inference that it's raining outside. The inference isn't necessarily correct.

The quotation of Carrie Cordero does not answer whether Clinton's claim was correct.

How does a fact checker not know that?

2) PolitiFact paraphrases expert Steven Aftergood: "Aftergood said it’s fair to say the Director of National Intelligence speaks for the intelligence community, but that doesn’t always mean there is unamity [sic] across the community, and it’s possible that some organizations disagree."

The paraphrase of Aftergood appears to make our point. Even if the Director of National Intelligence speaks for all 17 agencies, it does not follow that all 17 agencies agreed with the finding. Put another way, even if Clinton's inference was reasonable, the more recent reports show that it was wrong. The 17 agencies did not all reach the same conclusion independently, contrary to what Clinton implied.

And that's it.

Seriously, that's it.

PolitiFact trots out this absolutely pathetic CYA attempt and expects people to take it seriously?

May it never be.

The evidence from the experts does not support PolitiFact's judgment, yet PolitiFact uses that evidence to support its judgment.

Ridiculous.



Afters

Maybe they'll be able to teach Carroll some logic at UC Berkeley School of Law.



Correction July 7, 2017: Removed an extraneous "the" preceding "PolitiFact" in our first paragraph following our first quotation of PolitiFact.

Thursday, July 6, 2017

Transparency, Facebook and PolitiFact

We've never been impressed with Facebook as a vehicle for commenting on articles. We at PolitiFact Bias allow it as an alternative to posting comments directly on the site, mostly to let people have their say aside from our judgment of whether they've stayed on topic and achieved a baseline level of decorum.

PolitiFact uses Facebook as its more-or-less exclusive forum for article commentary.

It's a great system for PolitiFact, for it allows its left-leaning readership to make things unpleasant for those who criticize PolitiFact, and the "top posts" feature allows the left-leaning mob to keep popular left-leaning comments in the most prominent places.

What it isn't is a place for PolitiFact to entertain, consider and address criticism of its work.

Today I noticed that some of my Facebook comments were not appearing on my Facebook "Activity Log."

A Facebook commenter mocked another person's reference to this website, PolitiFact Bias, facetiously stating that PolitiFact Bias isn't biased at all. It's a classic attack based on the genetic fallacy. We created a post a while back to address attacks of that type, so I posted the URL. The comment seemed to publish normally, but then did not appear on my Activity Log and I could not find it on PolitiFact's Facebook page.

So it was time for some experimentation.

I replied again to the same comment, this time with the text of the PolitiFact Bias post instead of the URL, reasoning that perhaps I had been tagged as a spammer.

No go.

As before, the comment seemed to publish, but when I tried to edit to clarify some of the formatting, I got a message saying the post did not exist:


We imagine that Facebook may have some plausible reason for this behavior. But regardless of that, incidents like this show yet another lack of transparency for PolitiFact's version of the fact-checking game.

PolitiFact champions democracy, supposedly, but prefers a commenting system that buries or silences critical voices.

PolitiFact Texas uses tongs (2016)

Our "tweezers or tongs" tag applies to cases where PolitiFact had a choice of a narrow focus on one part of a claim or a wider focus on a claim with more than one part.

The tweezers or tongs option allows a fact-check to exercise bias by using the true part of a statement to boost the rating, or by ignoring the true part of the statement to drop the rating.

In this case, from 2016, a Democrat got the benefit of PolitiFact Texas' tongs treatment:

So, it was true that Texas law requires every high school to have a voter registrar.

But it was false that the law requires the registrar to get the children to vote once they're eligible.

PolitiFact averages it out:
Saldaña said a Texas law requires every high school to have a voter registrar "and part of their responsibility is to make sure that when children become 18 and become eligible to vote, that they vote."

A 1983 law requires every high school to have a deputy voter registrar tasked with giving eligible students voter registration applications. Each registrar also must make sure submitted applications are appropriately handled.

However, the law doesn’t require registrars to make every eligible student register; it's up to each student to act or not. Also, as Saldaña acknowledged, registrars aren’t required to ensure that students vote.

We rate this statement Half True.
There are dozens of examples where PolitiFact ignored what was true in favor of emphasizing the false. It's just one more way the PolitiFact system allows bias to creep in.

Here's one for which PolitiFact Pennsylvania breaks out the tweezers:


Sen. Toomey (R-Penn.) correctly says the ACA created a new category of eligibility. That part of his claim does not figure in the "Half True" rating.

We doubt that PolitiFact has ever created an ethical, principled and objective means for deciding when to ignore parts of compound claims.

Certainly we see no evidence of such means in PolitiFact's work.

Tuesday, July 4, 2017

PolitiFact revises its own history

We remember.

We remember when PolitiFact openly admitted that when Republicans charged that Obamacare cut Medicare it tended to rate such claims either "Half True" or "Mostly False."
PolitiFact has looked askance at bare statements that Obamacare cuts Medicare, rating them either Half True or Mostly False depending on how they are worded.
Nowadays, PolitiFact has reconsidered. It now says it generally rated claims that Obamacare cut Medicare as "Half True."
We did a series of fact-checks about the back-and-forth and generally rated the Republican attacks Half True.
PolitiFact doubtless took the latter position in response to criticism of its claims about Republican "cuts" to Medicaid. PolitiFact has flatly said Trump's budget cuts Medicaid (no caveats) despite the fact that outlays for Medicaid rise every year under the Trump budget. PolitiFact has also joined the mainstream media in attacking the Trump administration for denying it cuts Medicaid, rating those statements "Mostly False" or worse.
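
The "cut" in question is a cut relative to the projected baseline, not a year-over-year reduction. A minimal sketch with made-up round numbers (ours, purely for illustration) shows how outlays can rise every year and still come in below the current-law projection:

```python
# Hypothetical outlays in billions (ours, for illustration only).
baseline = [400, 430, 460, 490, 520]   # projected under current law
proposal = [400, 410, 420, 430, 440]   # under the budget proposal

for year in range(1, len(proposal)):
    change = proposal[year] - proposal[year - 1]   # year-over-year change
    gap = baseline[year] - proposal[year]          # shortfall vs. baseline
    print(f"Year {year + 1}: outlays up {change}, "
          f"yet {gap} below the baseline ('a cut')")
```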

Given the context, PolitiFact's fact check of Kellyanne Conway looks like a retrospective attempt to excuse PolitiFact's inconsistency.

Unfortunately for PolitiFact writer Jon Greenberg, the facts do not support his defense narrative. The "Half True" ratings he cites tended to come from joint claims: that 1) Obamacare cut Medicare and 2) the savings went to pay for the Affordable Care Act. It's completely true that the ACA used Medicare savings to cut the price tag for the legislation. Every version of the CBO's assessment of the Democrats' health care reform bill will confirm it.

When PolitiFact rated isolated GOP claims that Democrats' health care reform cut Medicare, the verdict tended to come in as "Mostly False." It wasn't even close. Keep reading below the fold to see the proof.

So, PolitiFact writer Jon Greenberg either isn't so great at checking his facts, or else he is deliberately misleading his audience. The same goes for the editor, Angie Drobnic Holan.

Monday, July 3, 2017

PolitiFact's top 16 myths about Obamacare skips its 2013 "Lie of the Year"

In late 2013 PolitiFact announced its 2013 "Lie of the Year," supposedly President Barack Obama's promise that Americans could keep their plan (and their doctor) under the Affordable Care Act.

We noted at the time (even before the winner was announced) that PolitiFact was forced into its choice by a broad public narrative.
With a hat tip to Vicini, it's inconceivable that PolitiFact will choose a claim other than "If you like it" as the "Lie of the Year" from its list of nominees.  Having gone out of the way to nominate a claim from years past made relevant by the events of 2013, PolitiFact must choose it or lose credibility.
Today on Facebook, PolitiFact highlighted its 2013 list of the top 16 myths about Obamacare.

It's "Lie of the Year" for 2013 did not make the list. It wasn't even mentioned in the article.

One item that did make the list, however, was Marco Rubio's "Mostly False" claim that patients won't be able to keep their doctors under the ACA.


Seriously. That one made PolitiFact's list.

If anyone needed proof that PolitiFact reluctantly pinned the 2013 "Lie of the Year" on Obama, PolitiFact has provided it.

Sunday, July 2, 2017

How to fact check like a partisan, featuring PolitiFact

First, find a politician who has made a conditional statement, like this one from Marco Rubio (R-Fla.):
"As long as Florida keeps the same amount of funding or gets an increase, which is what we are working on, per patient being rewarded for having done the right thing -- there is no reason for anybody to be losing any of their current benefits under Medicaid. None," he said in a Facebook Live on June 28."
Rubio starts his statement with the conditional: "As long as Florida keeps the same amount of funding or gets an increase ..." Logic demands that the latter part of Rubio's statement receive its interpretation under the assumption the condition is true.

A partisan fact checker can make a politician look bad by ignoring the condition and taking the remainder of the statement out of context. Like this:


As the partisan fact checker will want its work to pass as a fact check, at least to like-minded partisans and unsuspecting moderates, it should then proceed to check the out-of-context portion of the subject's statement.

For example, if the condition of the statement is the same or increased funding, look for ways the funding might decrease and use those findings as evidence the politician spoke falsely. For a statement like Rubio's, one might cite a left-leaning think tank like the Urban Institute, with a finding that predicts lower funding for Medicaid:
The Urban Institute estimated the decline in federal dollars and enrollment for the states.

It found for Florida, that federal funding for Medicaid under ACA would be $16.8 billion in 2022. Under the Senate legislation, it would fall to about $14.6 billion, or a cut of about 13 percent (see table 6). The Urban Institute projects 353,000 fewer people on Medicaid or CHIP in Florida.
Easy-peasy, right?

Then use the rest of the fact check to show that Florida will not be likely to make up the gap predicted by the Urban Institute. That will prove, in a certain misleading and dishonest way, that Rubio's conditional statement was wrong.

The summary of such a partisan fact check might look like this:
Rubio said, "There is no reason for anybody to be losing any of their current benefits under Medicaid."

Rubio is wrong to state that benefit cuts are off the table.

There are reasons that Medicaid recipients could lose benefits if the Senate bill becomes law. The bill curbs the rate of spending by the federal government over the next decade and caps dollar amounts and ultimately reduces the inflation factor. Those changes will put pressure on states to make difficult choices including the possibility of cutting services.

We rate this claim Mostly False.
Ignoring the conditional part of the claim results in the fallacy of denying the antecedent. The partisan fact checker can usually rely on its highly partisan audience not noticing such fallacies.
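
For readers who want the fallacy spelled out, here is a minimal sketch (ours) that brute-forces the truth table. Let P stand for "funding stays the same or increases" and Q for "nobody loses benefits." The assignment P = false, Q = true makes both premises of the argument true and its conclusion false, which is exactly what makes denying the antecedent invalid:

```python
from itertools import product

# Denying the antecedent: from (P -> Q) and (not P), conclude (not Q).
# Search all truth assignments for a counterexample: premises true,
# conclusion false.
for p, q in product([True, False], repeat=2):
    conditional = (not p) or q          # material conditional P -> Q
    premises = conditional and (not p)  # both premises hold
    conclusion = not q                  # the fallacious conclusion
    if premises and not conclusion:
        print(f"Counterexample: P={p}, Q={q}")   # prints P=False, Q=True
```

Mapped back to the fact check: showing that funding falls (not P) licenses no conclusion at all about whether anyone loses benefits (Q).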

Any questions?


Correction: July 2, 2017: In the next-to-last paragraph changed "to notice" to "noticing" for the sake of clarity.

Saturday, July 1, 2017

PolitiFact absurdly keeps "True" rating on false statement from Hillary Clinton

Today we were alerted to a story from earlier this week detailing a New York Times correction.

Heavy.com, from June 30, 2017:
On June 29 The New York Times issued a retraction to an article published on Monday, which originally stated that all 17 intelligence organizations had agreed that Russia orchestrated the hacking. The retraction reads, in part:
The assessment was made by four intelligence agencies — the Office of the Director of National Intelligence, the Central Intelligence Agency, the Federal Bureau of Investigation and the National Security Agency. The assessment was not approved by all 17 organizations in the American intelligence community.
It should be noted that the four intelligence agencies are not retracting their statements about Russia involvement. But all 17 did not individually come to the assessment, despite what so many people insisted back in October.
The same article went on to point out that PolitiFact had rated "True" Hillary Rodham Clinton's claim that 17 U.S. intelligence agencies found Russia responsible for hacking. That despite acknowledging it had no evidence backing the idea that each agency had reached the conclusion based on its own investigation:
Politifact concluded that 17 agencies had, indeed, agreed on this because “the U.S. Intelligence Community is made up of 17 agencies.” However, the 17 agencies had not independently made the assessment, as many believed. Politifact mentioned this in the story, but still said the statement was correct.
We looked up the PolitiFact story in question. Heavy.com presents PolitiFact's reasoning accurately.

It makes for a great example of horrible fact-checking.

Clinton's statement implied each of the 17 agencies made its own finding:
"We have 17 intelligence agencies, civilian and military, who have all concluded that these espionage attacks, these cyberattacks, come from the highest levels of the Kremlin, and they are designed to influence our election."
It's very easy to avoid making that implication: "Our intelligence agencies have concluded ..." Such a phrasing fairly represents a finding backed by a figure representing all 17 agencies. But when Clinton emphasized that the 17 agencies "all" reached the same conclusion, she implied independent investigations.

PolitiFact ignored that false implication in its original rating and in a June 2017 update to the article in response to information from former Director of National Intelligence James Clapper's testimony earlier in the year:
The January report presented its findings by saying "we assess," with "we" meaning "an assessment by all three agencies."

The October statement, on the other hand, said "The U.S. Intelligence Community (USIC) is confident" in its assessment. As we noted in the article, the 17 separate agencies did not independently come to this conclusion, but as the head of the intelligence community, the Office of the Director of National Intelligence speaks on behalf of the group.

We stand by our rating.
PolitiFact's rating was and is preposterous. Note how PolitiFact defines its "True" and "Mostly True" ratings:
TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
It doesn't pass the sniff test to assert that Clinton's claim about "17 agencies" needs no clarification or additional information. We suppose that only a left-leaning and/or unserious fact-checking organization would conclude otherwise.