Wednesday, June 29, 2016

Offsite Criticism of the Week

While PolitiFact continues its pattern of ducking public criticism, we will continue to model the opposite approach by dealing with a typical offsite criticism we run across (real example from real discussion board):

First, we'll note that citing PolitiFact Bias is generally the wrong way to defuse the criticism that Trump doesn't tell the truth. There are good reasons to dismiss the PolitiFact statistics some use to "prove" one person lies more than another. The liberal use of selection bias lays a poor foundation for reliable statistics. We've been pointing that out for years.

As for the criticism:
The first article in the first link says PolitiFact is wrong because it only used the data that was available to make its ruling. Oh the humanity!

We believe the comment was directed at our evaluation of the Loretta Lynch fact check where Lynch said the LGBT community endures most of the statistically recorded hate crimes. It's true, as Dustin Siggins pointed out, that PolitiFact relied on incomplete statistics. In keeping with PolitiFact's common practice, that should have prevented Lynch from getting a rating any better than "Mostly True." Lynch received a "True" rating.

But our central criticism was quite different from what the writer said.

We noted that Lynch said nothing, in context, about using the per capita measure. If that measure isn't used, then PolitiFact's statistics say blacks experience most of the hate crimes recorded in the statistics. So what Lynch said was false. PolitiFact pretended otherwise to give her the "True" rating.

This is typical of the criticism we receive. We're dismissed based on bias (ad hominem fallacy) and on misrepresentations of our work (straw man fallacy).

We'd love to do "Offsite Criticism of the Week" for a criticism that is not fallacious. Comments are open here and on Facebook if anybody knows of one.

Poynter hides criticism of PolitiFact's transparency?

The Poynter Institute, a journalism school in St. Petersburg, Florida, owns our reason for existence, PolitiFact. I sometimes read articles posted at Poynter, particularly those concerned with fact-checking.

Up through today, I believe Poynter has published every comment I have made.

Poynter re-published the guidelines for fact-checking written by PolitiFact editor Angie Drobnic Holan. I had responded to Holan's PolitiFact article with an article of my own published to Medium. At the Poynter website I commented on one of my central points of rebuttal: PolitiFact's claims of accountability to the public can't be taken seriously so long as a PolitiFact Missouri story from earlier this year sustains its train-wreck status (and it does).

I captured the above image right after pushing the button to publish my comment. Note the message that the comment is pending approval. When I visited the page later, I was still able to see evidence I had commented and the comment was pending approval. I was also able to review the content of the yet-to-be-published comment.

Now, when I return to the page, no evidence remains of my comment activity.

Is it possible that the comment remains in moderation and Poynter will publish it later? Sure, that's possible. We will certainly update this item if Poynter publishes the comment.

Meanwhile, here is our message to PolitiFact and the Poynter Institute:

Hiding a problem of transparency does not address that problem of transparency.

Update June 30, 2016:


Pending for 11 hours. Shall we go for 36?

Sunday, June 26, 2016

More PolitiFingers on the scale: Loretta Lynch and LGBT hate crimes

Hat tip to journalist Dustin Siggins for bringing this item to our attention.

U.S. Attorney General Loretta Lynch received a "True" rating in a June 23, 2016 fact check by PolitiFact Florida. Lynch said members of the LGBT community are more often the victims of hate crimes than members of any other recognized community.

The "Truth-O-Meter" rating counts as an out-and-out gift to Lynch.

Our tipster, Dustin Siggins, pointed out that PolitiFact admittedly could only verify Lynch's claim according to a very incomplete data set:
An important caveat: There are several holes in the reporting of hate crimes to the FBI. Local law enforcement agencies voluntarily report their data to a state agency that compiles the information for the FBI. Some local agencies report no hate crimes or don’t submit a report.

A study by the federal Bureau of Justice Statistics found that 60 percent of violent hate crime victimizations were not reported to police in 2012.

A 2016 Associated Press investigation found that more than 2,700 city police and county sheriff's departments have not submitted a single hate crime report to the FBI during the past six years — about 17 percent of all city and county law enforcement agencies nationwide.
The caveat obviously wasn't important enough to drop Lynch's rating below "True."

So Siggins made a solid point about the quality of the data.

We were even more concerned about PolitiFact placing its focus on the "per capita" measurement of hate crimes.

There's simply nothing in Lynch's statement to suggest she was talking about a per capita measurement. PolitiFact reported that Lynch said she relied on a story in The New York Times for her information. And it's true that the Times' story focuses on per capita rates of hate crime.

But how does that excuse Lynch for the words she used?

We don't see how it could, and this type of imprecision frequently counts against the accuracy of a claim.

Compare a recent "False" rating PolitiFact gave to Republican presidential candidate Donald Trump. Trump said Americans pay the highest taxes in the world. He may well have been talking about the U.S. corporate tax rate, which PolitiFact said in 2014 was one of the highest in the world. Trump did not explicitly say he was talking about the corporate tax rate, however. Would it, or should it, make a difference if his campaign had responded to PolitiFact saying he was talking about the corporate tax rate?

We think statements get their degree of truth from their immediate context. Later explanations may help point to a different dimension of that context, even potentially revealing a hidden angle to a fact checker. But the later explanation per se is irrelevant to the truth of the claim.

If Not Per Capita Then Not True

According to the words she used and according to the statistics PolitiFact used, the LGBT community is not the group most victimized by hate crimes. Blacks are more often victimized by hate crimes, as PolitiFact plainly admitted (bold emphasis added):
Another way to look at the data about hate crimes rather than per capita is the sheer number of crimes. By that measure, there were more hate crimes against African-Americans than LGBT residents. But that’s not surprising since African-Americans represent about 13.2 percent of the population according to the U.S. Census, while the LGBT community represents about 2.3 percent of the population, according to a survey done by the Centers for Disease Control (other surveys found a slightly higher rate.)
PolitiFact ignores its own evidence showing Lynch was wrong, implicitly modifies Lynch's comment to refer to the per capita measurement, and awards Lynch a "True" rating.
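The difference the choice of measure makes is easy to show with back-of-the-envelope arithmetic. In the sketch below, the population shares come from PolitiFact's own figures; the raw incident counts are hypothetical round numbers chosen only to illustrate how raw totals and per capita rates can point in opposite directions.

```python
# Population shares from PolitiFact's figures; the incident counts
# are hypothetical round numbers for illustration only.
US_POPULATION = 320_000_000

groups = {
    # group: (share of population, hypothetical hate-crime count)
    "African-American": (0.132, 2_000),
    "LGBT":             (0.023, 1_300),
}

for name, (share, incidents) in groups.items():
    per_100k = incidents / (US_POPULATION * share) * 100_000
    print(f"{name}: {incidents} incidents, {per_100k:.1f} per 100,000")

# Raw totals put African-Americans first; per capita rates put the
# LGBT community first. Lynch's words invoked the first measure;
# PolitiFact graded her on the second.
```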

That's fact-checking?

Tuesday, June 21, 2016

Power Line: "On crime, Trump's right and PolitiFact is wrong"

We profusely thank Power Line writer Paul Mirengoff for linking to PolitiFact Bias prominently in his post.

Power Line blog slammed PolitiFact today over its slipshod fact check of Donald Trump's claim that crime is increasing. PolitiFact absurdly rated Trump's claim "Pants on Fire" despite not considering crime data more recent than 2014:
How did Politifact err on such a basic question? It erred by looking at no data past 2014. Sean Kennedy at AEI Ideas blows the whistle.

Trump made his statement on June 7, 2016. Thus, his claim that crime is rising can only be fact-checked by analyzing current data. By failing to do so, Politifact confirmed that it is either incompetent, hopelessly biased, or both.
While it's true PolitiFact relied at least partly on a pair of experts it interviewed, Mirengoff and Kennedy make a great point. Where was PolitiFact in January 2016 when the Washington Post was claiming an increase in violent crime for early 2015 compared to early 2014?
The number of violent crimes committed across the country was up in the first half of 2015 compared with the same period a year earlier, with increases seen across the country and spanning different types of crimes, federal authorities said Tuesday.

The numbers of murders, rapes, assaults and robberies were all up over the first six months of 2015. Overall violent crime was up 1.7 percent, an increase that followed two consecutive years of declines, according to the FBI.
"No truth for you!" say the Truth Nazis at PolitiFact.

What About PolitiFact's Neutral Experts?


Sometimes we survey PolitiFact's list of experts to see if they have any obvious political leanings.

What have we here?

James Alan Fox: Just one FEC individual donation listed, $800 to Elizabeth Warren. The donation may not come from PolitiFact's expert if there is another James Fox at Northeastern University. Warren's a Democrat.

Raymond Paternoster: The background information on Paternoster was equivocal. Paternoster has authored studies on race and the application of the death penalty. As with what he told PolitiFact, it's hard to confidently pin down his stance (bold emphasis added):
"Mr. Trump is wrong if he is talking about overall crime and even violent crime," agreed University of Maryland criminologist Raymond Paternoster. Any possible upward swing in the past year or so wouldn’t show up in the data currently available, he said.
Paternoster's admission in bold makes us very curious about the context of PolitiFact's interview. What question was Paternoster asked when he answered Trump is wrong? How could Paternoster agree that Trump is wrong without recent data to back the assessment?

More News Reports


The Associated Press:
CHICAGO (AP) — Violent crimes — from homicides and rapes to robberies — have been on the rise in many major U.S. cities, yet experts can't point to a single reason why and the jump isn't enough to suggest there's a trend.

Still, it is stumping law enforcement officials, who are seeking a way to combat the problem.

"It's being reported on at local levels, but in my view, it's not getting the attention at the national level it deserves," FBI Director James Comey said recently. "I don't know what the answer is, but holy cow, do we have a problem."
A bunch of liars?

KUTV (Utah) cited a study by the left-leaning Brennan Center for Justice:
A new study of crime statistics from major cities across the country reveals a rising number of murders in 2015, with violence in three cities fueling half of that increase.

Crime data for the 30 largest cities in the U.S. released by the Brennan Center for Justice indicates a 13.3 percent rise in murders in 2015, but analysts say it is too soon to determine whether this reflects a broader trend.
Note the Brennan Center study involves a comparison between 2015 and 2014. Violent crime in 2016 has thus far built on the violent crime rate in 2015.

As Power Line noted, a fact checker should check the facts before ruling on the facts.

Edit 6/22/2016: Added link to WaPo story in relevant graph. Changed "in" to "an" in same graph. -Jeff 0857 PST

Sunday, June 19, 2016

AllSides: "PolitiFact is a bit left"

We preface this post with the disclaimer that the editors of PolitiFact Bias tend not to trust crowdsourced methods of epistemology.

Back in 2012, we noted the emergence of AllSides, a new project intended to help sift through and identify media bias so news consumers could take it into account.

AllSides now has a preliminary assessment of PolitiFact posted to its site. The evaluation could change as more information comes in. But for now, AllSides says PolitiFact leans slightly left:
In comparison with other fact checking sites, it appears that PolitiFact is a bit left of FactCheck.org (bias rating "Center") and the Washington Post's Blog "The Fact Checker" (bias either "Center" or "Lean Left" - borderline).
Expect PolitiFact to pretend this evaluation does not exist, and expect journalists to refrain from asking PolitiFact's representatives about it.

'Cause that would be rude, right?

Visit AllSides to learn more about how it develops its bias ratings.

Wednesday, June 15, 2016

More word games at PolitiFact: voucher edition

On June 5, 2016, we reviewed PolitiFact's position, contrary to what economists write in published journals, that Social Security's "pay-as-you-go" financing is not a Ponzi game.

With this post, we'll see that PolitiFact adopts a different approach to the use of words when the term is "voucher."

Surprise! The inconsistency works against Republicans in both cases.

PolitiFact Wisconsin fact-checked a claim by the Democratic Party of Wisconsin that Republican Senator Ron Johnson voted to turn Medicare into a voucher program. PolitiFact Wisconsin rated the claim "Mostly True," ignoring the fact that the plans Johnson voted for involved no vouchers. Vouchers are pieces of paper representing value that will be covered by the government.

PolitiFact Wisconsin's explanation is priceless:
Our colleagues concluded that, although there are technical differences between voucher and premium supports that may matter to health policy professionals, the two definitions have become almost indistinguishable and voucher program is a fair description for what Ryan proposed.
It makes perfect sense to us that left-leaning PolitiFact staffers would accept that "voucher" is "almost indistinguishable" from "premium support." After all, who cares what health policy professionals think? What matters is what PolitiFact thinks.

So we move to the natural question: Did PolitiFact conclude that the terms were "almost indistinguishable" based on sound evidence? PolitiFact Wisconsin provided a source list featuring a number of PolitiFact stories, so we looked there for the evidence PolitiFact Wisconsin neglected to include in its story. We will review them for their evidence in chronological order.

PolitiFact National, Aug. 16, 2012

The differences between vouchers and premium support may matter to health-policy professionals, but not necessarily to a general audience. And while the 1995 Aaron-Reischauer paper may have offered a detailed definition for "premium support," language tends to evolve over nearly two decades.
The above essentially repeats the assertion that the terms have converged in meaning over time for the general audience. The fact check does elaborate on the argument. That elaboration takes the form of finding similarities between vouchers and premium supports, followed by having a pair of experts say it's reasonable to use "voucher" for "premium support." That is an approach PolitiFact could have applied to Ponzi schemes, but did not.

In fact, the fact check overlooks an obvious underlying Democratic argument. The fact check acknowledges that Republicans don't like the term "voucher" applied to the Medicare reform plan. But if the terms are interchangeable to the general audience then why would they care? Why would Democrats care, for that matter? We think people associate "voucher" with receiving a piece of paper and then having to go through the trouble of redeeming it. The perceived inconvenience accounts for the negative connotation the Democrats wish to pin on the Republicans' attempts at Medicare reform.

The Democrats are manipulating words to exaggerate inconveniences from a premium support system.

PolitiFact New Jersey, Sept. 10, 2012

PolitiFact New Jersey accepts and repeats the original argument from PolitiFact National:
As our PolitiFact colleagues noted, there are distinctions between the two terms, dealing with the type of inflation adjustment used and the degree of marketplace regulation imposed. Ryan’s most recent plan more closely reflects a pure premium support, but substantively, it’s somewhere between the two approaches. 

Henry Aaron, a senior fellow of economic studies at the Brookings Institution, a policy think tank, told PolitiFact National "that premium support is a type of voucher."
PolitiFact New Jersey adds nothing substantial to the argument.

PolitiFact National, Oct. 3, 2012

Less than a month later, PolitiFact National again recycled its earlier argument:
In the past, PolitiFact has found Obama’s "voucher" characterization reasonable, though as Obama noted, Republicans prefer "premium support."

Merriam-Webster defines a voucher as "a written affidavit or authorization … a form or check indicating a credit against future purchases or expenditures; a coupon issued by government to a parent or guardian to be used to fund a child's education in either a public or private school."

The plan pushed by Romney’s running mate, U.S. Rep. Paul Ryan, isn’t exactly a coupon, but it’s not so far off.
May it never be that "Social Security isn't exactly a Ponzi scheme, but it's not so far off."

PolitiFact New Jersey, Oct. 5, 2012

PolitiFact New Jersey stuck with the same mantra two days after PolitiFact National's "Mostly True" for President Obama:
Menendez’s claim is mostly accurate.

Ryan has proposed providing "premium support" payments to future Medicare beneficiaries to purchase health insurance. There are some distinctions between the two terms, but the word "voucher" generally describes this approach.

Again, the fact check contains no new reporting. The source list simply features the interview from which PolitiFact New Jersey pulled the claim it checked along with two earlier PolitiFact fact checks. That's it.

PolitiFact National, Nov. 19, 2013

In 2013, PolitiFact National finally added some new reporting. It's worth noting that the new reporting included another example of the "Ignore the Conservative Expert" game PolitiFact plays from time to time (bold emphasis added):
Yuval Levin, a health policy expert at the conservative Ethics and Public Policy Center told us he wouldn’t consider the proposal a voucher system at all.

There’s definitely debate over semantics, but to the average voter (if not the average policy wonk), it seems like the word "voucher" would accurately describe the basics of Ryan’s proposal (which, by the way, doesn’t sound all that different from the marketplaces for the uninsured). Calling programs like this "voucher systems" has been common in the field for years without negative connotations, Van de Water said.
The conservative expert disagrees with the others (liberals?). The solution is simple: Ignore the conservative. The Democrat gets a "Mostly True" rating because "'voucher system' is the colloquial way to refer to a program that gives people credit to purchase something."

No matter how many economists refer to pay-as-you-go financing as a "Ponzi game," PolitiFact will not acknowledge that the characterization is anything better than "Mostly False" (thanks for nothing, PolitiFact Wisconsin).

It's just one more example of PolitiFact's inconsistency favoring the liberal point of view.


PolitiFact Wisconsin deserves special recognition for dinging Sen. Johnson with a "Mostly False" for comparing Social Security to a Ponzi scheme and giving an attack ad against Johnson a "Mostly True" since vouchers are (allegedly) pretty much the same as premium support.

PolitiFact's inconsistent approach to the proper use of terms is a disgrace to fact-checking.

PolitiFact again goofy on guns (Updated)

There PolitiFact goes again.

PolitiFact Florida did a fact check on a gun-related topic. "Orlando area" Rep. Alan Grayson claimed Orlando shooter Omar Mateen's rifle can fire 700 rounds per minute, directly contributing to the high death toll from the tragedy.

The good news is that PolitiFact Florida did a decent-enough job on reporting the gun facts. Sure, "Glock" is a name brand and should have been capitalized. But PolitiFact found expert sources and appeared to mostly relay their statements accurately.

The fact is the rifle Mateen used cannot fire 700 rounds per minute.

So why is it "Mostly False" instead of "False"?

Let's go to PolitiFact's summary section (bold emphasis added):
Grayson said that the rifle Mateen used "shoots off 700 rounds in a minute."

On CNN, he includes this claim without any clarification. In other forums, he noted that his claim is only true for the hypothetical semiautomatic rifle converted to an automatic weapon.

Even then, however, experts say the 700-round-per-minute figure is not an accurate portrayal of rounds fired. This is true for many reasons, they said, including reloading time and the potential of overheating the gun.
That's right, ladies and gentlemen. If Mateen had an automatic version of the rifle he used, which he did not, he still would not have been able to fire off 700 rounds in a minute. But he would have been able to fire off quite a few rounds.

It looks to us as though PolitiFact is saying it is "Mostly False" that an automatic AR-type rifle can fire 700 rounds per minute. PolitiFact then gives that "Mostly False" rating to Grayson for claiming an automatic version of the rifle can fire 700 rounds per minute, and places the rating next to Grayson's separate (false) claim that Mateen's semiautomatic version of the rifle can fire 700 rounds per minute.

But we don't know. Maybe PolitiFact Florida thinks it's "Half True" that the automatic AR-type rifle can fire 700 rounds per minute, even though it cannot. And PolitiFact averages that out with a "False" rating for Grayson's claim about the semi-automatic version to get "Mostly False." And after that PolitiFact places the "Mostly False" rating next to Grayson's "False" claim that Mateen's rifle could have fired 700 rounds in a minute.

Any way you slice it, it comes out goofy.

Would anyone like to wager that it would have received a "False" rating if Donald Trump had said it?

Update June 20, 2016

Subsequent reading (and arguments) on the subject of firing rate inspires an update.

Is PolitiFact's "Mostly False" justified simply based on the fact that Mateen's gun is rated by the manufacturer to have 700 rounds per minute capability in short bursts, even if Mateen could never pull the trigger quickly enough to achieve that rate of fire?

We reject that argument as absurd.

We find it absurd because the same is true of non-automatic weapons. A semiautomatic weapon's rate of fire is limited only by how fast the shooter pulls the trigger, and a revolver shares that same limitation. The absurdity trebles upon considering part of Rep. Grayson's argument that PolitiFact Florida allowed to pass. Grayson said the high rate of fire for the AR-15 style weapon resulted in a much higher death toll than if Mateen had used his Glock pistol.

The quotation comes from PolitiFact Florida's fact check:
"If somebody like him had nothing worse to deal with than a glock [sic] pistol which was his other weapon today, he might have killed three or four people and not 50," Grayson said.
But the Glock 17/18 pistol has a similar--actually higher--rating for its potential rate of fire.

Grayson did not know what he was talking about. If the cyclic rate is the relevant factor (it isn't) then the Glock pistol is the more dangerous weapon. Did anybody learn that from PolitiFact's fact check?

The fact goes whoosh past PolitiFact Florida. No worries, Rep. Grayson. PolitiFact Florida's got your back.
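For what it's worth, the gap between a cyclic rate and anything a shooter could actually achieve is simple arithmetic. The sketch below takes the 700 rounds-per-minute cyclic rate as given and adds time for magazine changes; the magazine capacity and reload time are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope sketch: rounds fired in one minute at a 700 rpm
# cyclic rate, once magazine changes are counted. Magazine capacity and
# reload time are assumed figures for illustration only.
CYCLIC_RPM = 700          # manufacturer's mechanical (cyclic) rate
MAG_CAPACITY = 30         # rounds per magazine (assumed)
RELOAD_SECONDS = 3.0      # time for a magazine change (assumed)

seconds_per_round = 60.0 / CYCLIC_RPM                       # ~0.086 s
seconds_per_mag = MAG_CAPACITY * seconds_per_round + RELOAD_SECONDS

full_cycles = int(60.0 / seconds_per_mag)                   # fire + reload
time_left = 60.0 - full_cycles * seconds_per_mag
extra = min(MAG_CAPACITY, int(time_left / seconds_per_round))

total = full_cycles * MAG_CAPACITY + extra
print(f"Rounds fired in one minute: {total}")               # 330, not 700
```

Even before accounting for overheating or human trigger speed, the reload time alone cuts the theoretical 700 rounds down to well under half that.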

Correction 6/15/2016: The first paragraph originally had "700 rounds per second" where "minute" was intended. The wording now matches the intent.
Clarification 6/20/2016: Belatedly included a link to PolitiFact Florida's fact check of Grayson.

Friday, June 10, 2016

WUWT: "Note to Politifact: Obama DID say there is No Greater Threat than Climate Change"

The climate skeptic site "Watts Up With That" posted an item critical of PolitiFact on June 6, 2016. Contributor Eric Worrall begged to differ with the "Mostly False" rating PolitiFact Arizona gave to Pinal County Sheriff Paul Babeu. Babeu said President Obama has said climate change is the No. 1 security threat facing the United States.

Worrall responded with his post "Note to PolitiFact: Obama DID say there is No Greater Threat than Climate Change."

We think Worrall ends up striking a glancing blow by not addressing Babeu's use of the term "security," but even the glancing blow does a good bit of damage (bold emphasis added):
President Obama may have made other statements which contradict some of his statements on Climate Change – he is after all a politician. But Politifact’s assertion that it is a “mostly false” exaggeration, to say that President Obama thinks Climate Change is the greatest threat to national security, is clearly unreasonable – unless you think that suggesting Climate is the “greatest threat” to future generations, suggesting climate, unlike terrorism, might be an “existential threat” to the entire world, suggesting “we need to act now”, could not reasonably be interpreted as being a suggestion that climate is the nation’s number one priority.
The point in bold could have used more emphasis in Worrall's critique. PolitiFact literally used Obama's claim of prioritizing the fight against terrorists to pooh-pooh Babeu's claim (bold emphasis added):
Obama continues to cite climate change as a great threat to the world, but framing the issue as the country’s top national security threat is an exaggeration. Obama has said fighting terrorism is his most urgent priority.

The Arizona sheriff ignores important context, so we rate his claim as Mostly False.
The truth is that if Obama has said climate change is the top national security priority, he cannot undo the statement by claiming a different top priority.

PolitiFact Arizona's fact check shows its bias by failing to provide the most obvious counterbalance to its key evidence against Babeu.

Contrary to Babeu’s claim, the president’s top national security threat appears to be terrorism.

Out of context

In March, after the terrorist attacks in Brussels, Obama said, "I’ve got a lot of things of my plate, but my top priority is to defeat ISIL."
The president's remarks in The Atlantic downplaying the threat of ISIS compared to the threat of climate change failed to find their way into PolitiFact's fact check (we note that Worrall used the quotation to good effect):
“ISIS is not an existential threat to the United States,” he told me in one of these conversations. “Climate change is a potential existential threat to the entire world if we don’t do something about it.”
Though it is common sense to suppose that a president will respond to the greatest existential threat by making it the highest priority, it does not follow as a matter of logic that the greatest threat is always the highest priority. In his interview with The Atlantic, Obama went on at length about the political difficulty of addressing climate change.

Maybe that is a big part of the reason he does not call it his top priority?

Once again, PolitiFact substitutes opinion journalism for fact-checking. Babeu did not claim he was giving Obama's exact words. So PolitiFact Arizona arrogates to itself the privilege of cooking up, on the spot, a set of standards that result in the "Mostly False" rating.

There is no solid epistemological backing for the rating. In PolitiFact Arizona's opinion, what Babeu said was "Mostly False."

The truth is that President Obama has said climate change is an immediate and growing threat to U.S. national security. And his administration acts as though border security counts very low in terms of national security.

And wasn't that last point really Babeu's point?

The difference between a true underlying point that matters and a true underlying point that doesn't matter, we suppose, is that sometimes it matters and sometimes it doesn't.

Sunday, June 5, 2016

Zebra Fact Check: "Torture narrative trumps facts at PolitiFact"

Our previous post highlights a Flopping Aces critique of a PolitiFact Florida ruling on whether waterboarding works. Back on May 10, 2016, we posted "What is 'empirical evidence' to PolitiFact?" to highlight PolitiFact's mishandling of the evidence in an earlier waterboarding fact check of Hillary Clinton. At my fact check site, Zebra Fact Check, I sifted both PolitiFact fact checks for the evidence they used to find that waterboarding does not work.

That examination led to an article titled "Torture narrative trumps facts at PolitiFact" and its conclusion:
We don’t have the evidence to know whether waterboarding works. PolitiFact had no business issuing “True” and “False” ratings of statements it cannot verify as true or false. At least not while pretending to serve as a neutral fact checker.

Bias serves as the best explanation for PolitiFact’s treating the two claims according to different standards and justifying its ratings fallaciously.
See the article at Zebra Fact Check for a detailed examination of PolitiFact's evidence.

Find some of my older writing on waterboarding here.

Flopping Aces: "PolitiFact is PolitiWrong on Waterboarding"

I've been slow to post about my Zebra Fact Check critique of PolitiFact's reporting on waterboarding. I'll post that to PolitiFact Bias soon, but it comes to my attention that the conservative blog Flopping Aces has a post, "PolitiFact is PolitiWrong on Waterboarding," that overlaps mine in quite a few ways.

Epic fail and lazy research on the part of PolitiFact for not going beyond mainstream media’s superficial reporting which basically accepted and parroted the bullet points given out from the Feinstein Report.

They did consult Reed College political science professor Darius Rejali. But while an expert in what he knows, what he knows also reveals what he doesn’t know: Basically, that he’s ignorant of the arguments made from experts on the other side of the coin. He simply knocks down the strawman claims, hawked around ad nauseam by the critics for years now.
Wordsmith has the details to back up his assertions.

Visit Flopping Aces and read the whole thing.

PolitiFact recycles garbage: "False" that Social Security is a Ponzi scheme

Are the fact checkers at PolitiFact inept? Intentionally deceitful? Both?

Jeff and I chat about that issue from time to time. We try to give people the benefit of the doubt, but PolitiFact makes this one a tough call.

The latest challenge to our attempt to see the best in the people at PolitiFact? They're misleading their readers with their recycled garbage about Social Security's financing structure. On Facebook, PolitiFact reminded readers they should remember and believe PolitiFact's flawed fact checks from the past:

What are the "several reasons" PolitiFact uses to deny the similarity?

PolitiFact uses Mitchell Zuckoff, a journalist who wrote about Charles Ponzi's original ripoff, as its go-to expert. Let's examine the features supposedly distinguishing Social Security from a Ponzi scheme.
"First, in the case of Social Security, no one is being misled," Zuckoff wrote in a January 2009 article in Fortune. "Social Security is exactly what it claims to be: A mandatory transfer payment system under which current workers are taxed on their incomes to pay benefits, with no promises of huge returns."
People are being misled into believing that Social Security's financing is solid, so we challenge Zuckoff's claim that no one is being misled. But, more to the point, Ponzi himself would have had no need to mislead his victims if he shared the government's power to force participation. As Zuckoff notes, Social Security is a mandatory transfer payment system. This can't count as a critical distinguishing difference, for Ponzi would have leaped at the chance to force participation in his scheme.
Second, he wrote, "A Ponzi scheme is unsustainable because the number of potential investors is eventually exhausted." While Social Security faces a huge burden due to retiring Baby Boomers, it can be and has been tweaked, and "the government could change benefit formulas or take other steps, like increasing taxes, to keep the system from failing."
Here, Zuckoff is flatly wrong. A Ponzi scheme can have the same number of potential investors as Social Security; the trick is turning potential investors into actual investors. On examination, Zuckoff's second difference-maker is the same as the first: Ponzi had to lure new investors with false claims. He would have preferred the convenience of Social Security's mandatory participation.
Third, Zuckoff wrote, "Social Security is morally the polar opposite of a Ponzi scheme. ... At the height of the Great Depression, our society (see 'Social') resolved to create a safety net (see 'Security') in the form of a social insurance policy that would pay modest benefits to retirees, the disabled and the survivors of deceased workers. By design, that means a certain amount of wealth transfer, with richer workers subsidizing poorer ones. That might rankle, but it's not fraud."
The third difference, Zuckoff says, is that Social Security is good while Ponzi schemes are bad. We see two foundations for Zuckoff's claim. The first is ideological. Zuckoff takes it as axiomatic that the morality behind Social Security is superior to the morality behind Ponzi schemes. We think it's hard to base an objective finding of fact on the presumption of moral superiority. The second foundation of Zuckoff's argument comes again from his first point: Social Security, unlike a Ponzi scheme, is not fraud. As we pointed out above, Charles Ponzi would have loved to make mandatory participation a feature of his scheme.

That's it. Those three reasons (a bit short of two after winnowing) supposedly make it "False" that Social Security is a Ponzi scheme.

PolitiFact includes comments from the Cato Institute's Michael Tanner. Tanner affirms a similarity between Social Security and a Ponzi scheme while noting some differences. PolitiFact makes those comments a footnote, so the similarities are de-emphasized in the story and ignored in the final ruling.

Ultimately, the thing that keeps Social Security from being a Ponzi scheme in PolitiFact's eyes is a feature that Ponzi would have loved to include: the power to keep new money coming in through mandatory participation. If Ponzi had that power and were alive today, he could still be making money off of new investors.

PolitiFact shows its bias by ignoring the similarities Tanner points out and by making the power to force participation the line of demarcation between Social Security and Ponzi schemes.

There's also the issue of ignored evidence. Professional peer-reviewed journals routinely refer to Social Security's "pay-as-you-go" financing as a Ponzi game or Ponzi scheme. We offer one example among many:
In an unguarded moment 30 years ago, Nobel Laureate Paul Samuelson captured the reasoning undergirding this approach to public finance:
The beauty about social insurance is that it is actuarially un-sound. Everyone who reaches retirement age is given benefit privileges that far exceed anything he has paid in.... Social Security is squarely based on what has been called the eighth wonder of the world--compound interest. A growing nation is the greatest Ponzi game ever contrived. And that is a fact, not a paradox.
With below-replacement fertility and increasing longevity, however, the arithmetic of pay-as-you-go retirement programs changes unforgivingly. As the ratio of employees to retirees falls, a universal pay-as-you-go retirement system has only three options for preventing bankruptcy: reduce pension benefits; raise taxes; restrict eligibility. There are no alternatives.
Should you trust a fact checker that overlooks this part of the story and denies it by omission? How does a competent fact checker fail to find Social Security's pay-as-you-go financing described abundantly in the professional literature as a Ponzi game?

We don't know the answer to that question. Maybe PolitiFact knows.


The similarities Cato's Michael Tanner pointed out between Social Security and a Ponzi scheme amounted to a "False" rating. But PolitiFact found it "Half True" that President Barack Obama's signature health care reform, the so-called Affordable Care Act, was the Republican health care proposal from 1993.

The Republican bill must have been much more similar to Obamacare than Social Security is to a Ponzi scheme, right? Judge for yourself.

Our take? PolitiFact is magnificently inconsistent. And the inconsistency tends to favor the political left, as in this case.

Friday, June 3, 2016

Mark of the Least: PolitiFact Avoids Hillary's Most Damaging Lies

Pictures of last night ended up online, I'm screwed. OH WELL!
Yeah I think we broke the law, always say we're gonna stop...
This Friday night do it all again. 

This Tuesday, PolitiBlogger Lauren Carroll did me the favor of making my prediction come true, thus cementing my status as the world's least daring fortune teller.

Carroll's post helped PolitiFact readers sort out the truth of a mystery that CNN, Reuters, NBC, AP, New York Times, Wall Street Journal, and Politico had already confirmed and reported on extensively a week before PolitiFact even touched it. 

Carroll offered up this lame excuse:
We haven’t yet put the issue on the Truth-O-Meter because there were too many unknowns.
Carroll explains that the IG report was the smoking gun they finally needed to put Hillary to the scientific rigors of the Truth-O-Meter.

Carroll failed at "sorting out the truth" of the very story she purported to sort out.

Readers didn't benefit in any way from PolitiFact's delayed press time. Carroll's post didn't include any exclusive or developing details that weren't already reported by much more popular journalism outlets the week before. Carroll merely regurgitated a widely known story, included a weak defense for ignoring it earlier, and slapped a gimmicky "Truth-O-Meter" graphic on it.

Carroll continued:
But the inspector general’s report has clarified some of those unknowns and demonstrated that Clinton’s exclusive use of personal email was, in fact, not allowed.
We've known for years that Clinton's exclusive use of a personal email account violated State Department policy. Who does Carroll think she's fooling? Here's the source:
First of all, the State Department’s policy as of 2005 (Clinton joined in 2009) is that all day-to-day operations are to be conducted on the official State Department information channel. Clinton never once used this State Department email system.
The quotation comes from Carroll herself. As far back as March 2013 it was reported that Clinton was using her personal email account exclusively for government business, contrary to State Department policy. PolitiFact is just figuring this out now? The IG report didn't confirm Clinton's email impermissibility so much as reiterate it.

Carroll's excuse that there wasn't enough information doesn't pass the laugh test.

And what spared Clinton from the dreaded "Pants on Fire" rating? On Twitter, I asked both PolitiFact and Carroll herself what objective criteria they used to determine Clinton's claim was false, but not ridiculously false. (That's the only difference between a "False" and a "Pants on Fire" rating, as Bryan explains here.)

Neither responded, so we're left to assume Clinton's repeated, years-long, blatant lie wasn't too offensive to the political sensibilities of PolitiFact's staff. It's false, they admit, but not ridiculous.

Finally, we have the issue of selection bias. Of all the sordid details of Clinton's private email practices, her lie that it was "allowed" is arguably the least politically damaging to her. Breaking a few rules for convenience is hardly something most Americans would become outraged over, especially when so many are dealing with voluminous and complicated email rules at work themselves. A partisan might even paint Clinton in a sympathetic light if all she did was use the wrong email.

Rating Clinton's "allowed" lie is almost as helpful as that other time PolitiFact dipped a tepid toe into the Clinton email scandal. In that case they informed readers that Hillary Clinton was not under investigation. Instead, the FBI was investigating her email server, an inanimate object.

Why not check this Clinton whopper?
“There is no classified marked information on those emails, sent or received by me." 
No need for an IG report here. That claim was demonstrably false when she made it. (Nor has PolitiFact provided so much as an "In Context" article to explain that it's irrelevant whether Top Secret information was marked.)

Oddly enough PolitiFact editors don't see any news value in determining if Clinton put American lives at risk by failing to protect our most valuable secrets. That's why you won't see it rated on the Truth-O-Meter.

Besides, it seems that Carroll has stayed busy trying to sort out the truth of much more important things.

PolitiFact isn't holding anyone accountable. They're pushing narratives based on their own political inclinations. And politicians that lie will do it all again.

0638 PST 6/3/2016: Fixed various text formatting errors. Deleted duplicate final sentence in antepenultimate paragraph. -Jeff
0840 PST 6/4/2016 While fixing formatting errors the text "Oddly enough PolitiFact editors don't see any news value in determining if" was inadvertently deleted in antepenultimate paragraph. It has been restored. -Jeff  

Thursday, June 2, 2016

PolitiFact is California dreamin'

Hans Bader of the Competitive Enterprise Institute helpfully drew our attention to a recent PolitiFact Florida item showing PolitiFact's inconsistency. PolitiFact Florida affixed the "Mostly False" label to Enterprise Florida's claim that California's minimum wage law would cost that state 700,000 jobs.

What's wrong with PolitiFact Florida's verdict?

PolitiFact justified its ruling by claiming the ad suggested that the 700,000 lost jobs would mean 700,000 fewer California jobs than when the hike went into effect:
A radio ad by Enterprise Florida said, "Seven hundred thousand. That’s how many California jobs will be lost thanks to the politicians raising the minimum wage….Now Florida is adding 1 million jobs, not losing them."

This is misleading. The 700,000 figure refers to the number of jobs California could have added by 2026 if it didn’t increase the minimum wage, not a decline in net employment.
We don't think people would be misled by the ad. People would tend to understand the loss as compared to how the economy would perform without the hike.
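The baseline-versus-net distinction is easy to show with arithmetic. In this sketch, only the 700,000 figure comes from the fact check; every other number is invented purely for illustration:

```python
# Hypothetical illustration of "jobs lost" measured against a projected baseline.
# Only the 700,000 figure comes from the ad; the other numbers are invented.
baseline_2026 = 18_000_000                 # projected CA jobs in 2026, no hike (hypothetical)
with_hike_2026 = baseline_2026 - 700_000   # projection with the minimum wage hike
jobs_at_start = 16_500_000                 # jobs when the hike took effect (hypothetical)

# Net employment can still grow even as 700,000 jobs are "lost" vs. the baseline.
net_change = with_hike_2026 - jobs_at_start
loss_vs_baseline = baseline_2026 - with_hike_2026
print(net_change)        # 800000
print(loss_vs_baseline)  # 700000
```

On these invented numbers, employment still grows by 800,000 on net even though the state ends up 700,000 jobs short of its no-hike baseline. That baseline sense is the natural reading of a projected job loss.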

Back in 2014, when PolitiFact Florida looked at Gov. Scott's claim that the Congressional Budget Office projected a 500,000 job loss from a federal minimum wage hike, the fact checkers had no trouble at all figuring out the 500,000 loss was from a projected baseline.

What's the difference in this case?

Enterprise Florida, an arm of Florida's state government, contrasted California's projected job loss with Florida's gain of 1 million jobs. The changes in the states' respective job numbers can't come from the same cause. Only California is giving its minimum wage a big hike. So if Enterprise Florida was trying to directly compare the job figures, the comparison is apples-to-oranges. But PolitiFact Florida's analysis overlooked the context the ad supplied (bold emphasis added):
"Seven hundred thousand. That’s how many California jobs will be lost thanks to the politicians raising the minimum wage," the ad says, as the Miami Herald reports. "Ready to leave California? Go to Florida instead — no state income tax, and Gov. Scott has cut regulations. Now Florida is adding 1 million jobs, not losing them."
PolitiFact Florida's fact check doesn't lift a finger to examine the effects of relaxed state regulations.

Incredibly, PolitiFact Florida ignores the tense and timing of the job gains Scott lauds ("Now Florida is adding") and insists on comparing future projections of raw job growth for California and Florida, as though California's size advantage doesn't make that an apples-to-oranges comparison.

We think Enterprise Florida muddles its message with its claim Florida is adding 1 million jobs. People hearing the ad likely lack the context needed to understand the message, which we suspect is the dubious idea that Scott's cutting of regulations accounts for Florida adding 1 million jobs.

But PolitiFact Florida oversteps its role as a fact checker by assuming Scott was talking about California losing 700,000 jobs while Florida would gain 1 million at the same time and in the same sense. The ad does not explicitly compare the two figures. And it provides context cluing listeners that the numbers are not directly comparable.

PolitiFact Florida's error, in detail

We'll illustrate PolitiFact's presumption with the classic illustration of ambiguity (image omitted).

Is it a chalice? Is it two people facing one another?

The problem with ambiguity is we don't know which it is. And the Enterprise Florida ad contains an ambiguity: Those hearing the ad do not know how they are supposed to compare California's loss of 700,000 jobs with Florida's gain of 1 million jobs. We pointed out contextual clues that might help listeners figure it out, but those clues do not entirely clear up the ambiguity.

PolitiFact's problem is its failure to acknowledge the ambiguity. PolitiFact has no doubt it is seeing two people facing one another, and evaluates the ad based on its own assumptions.

The ad should have received consideration as a chalice: California's 700,000 job loss represents a poor job climate caused by hiking the minimum wage while Florida's 1 million job gain represents an employment-friendly environment thanks to no state income tax and relaxed state regulations.


PolitiFact Florida succeeded in obscuring quite a bit of truth in Enterprise Florida's ad.

Update: Adding Insult to Injury

As we moved to finish our article pointing out PolitiFact Florida's unfair interpretation of Enterprise Florida's ad, PolitiFact California published its defense of California Governor Jerry Brown's reply to Enterprise Florida:
There’s a lot to unpack there. So we focused just on Brown’s statement about California adding twice as many jobs as Florida, and whether there was any context missing. It turns out California’s job picture is not really brighter than Florida’s, at least not during the period Brown described.
Why do we call it a "defense" instead of a "fact check"?

That's easy. The statement PolitiFact California examined was a classic bit of political deception: Say something true and imply that it means something false. For some politicians, typically liberals, PolitiFact will dutifully split the difference between the trivially true factoid and the false conclusion, ending up with a fairly respectable "Half True." Yes, PolitiFact California gave Brown a "Half True" rating for his claim.

Brown tried to make California's job picture look better than Florida's using a statistic that could not support his claim.

Was Brown's claim more true than Enterprise Florida's ad? We're not seeing it. But it's pretty easy to see that PolitiFact gave Brown more favorable treatment with its "Truth-O-Meter" ratings.

Note: This item was inadvertently published with a time backdated by hours--the scheduled date was wrong. We reverted the post to draft form, added this note, and scheduled it to publish at the originally planned time.

Wednesday, June 1, 2016

Hot Air: "Even lefty PolitiFact calls Hillary a liar over email scam"

Great minds think alike?

Hot Air's Larry O'Connor served up a healthy dollop of scathing PolitiFact criticism earlier today, treading much of the same territory the PolitiFact Bias Twitter account (Jeff's baby) has covered recently: PolitiFact is behind the curve in detecting Hillary Clinton's email falsehoods.

O'Connor's best punches shadow those of our own Twitter pugilist, Jeff:
Why is PolitiFact “fact-checking” Clinton’s “It was allowed” statement from 5 days ago? It makes it sound like she just made this remark and PolitiFact is “Johnny on the spot” with the truth.

Oh please.
Jeff's been pummeling PolitiFact over its foot-dragging for weeks. Months, even.
(O)nly now does PolitiFact decide to sneak a “fact-check” out to attempt to maintain a tad bit of credibility.

Don’t fall for it.

Visit Hot Air and read the whole thing. And/or follow @PolitiFactBias on Twitter.

Edits 6/1/2016 1306 PST: Fixed link in first paragraph. Added links to tweets in 5th graph. -Jeff

Something rotten in PolitiFact Missouri

It's not the bad reporting, it's the cover up.

Okay, it's both.

On May 18, 2016, PolitiFact Missouri published a fact check of the gender pay gap issue. Democratic gubernatorial candidate Chris Koster said closing Missouri's gender pay gap would gain $9 billion for Missouri women. PolitiFact Missouri botched the fact check, calculating that Missouri women would only gain $7.5 billion, not $9 billion. Koster received a "Mostly True" rating.

It read like this at the time:
The bias related portion of the gap could be as much as $7.5 billion. That’s a lot of money, but it isn’t the $9 billion Koster claimed. When we’ve rated such claims before, statements that speak broadly about a wage gap, regardless of the underlying factors, get some benefit of the doubt.

Koster’s claim greatly oversimplifies a very complex situation, but the size of the gap is real. We rate this claim Mostly True.

Now it reads like this (bold emphasis added):
The bias related portion of the gap could be as much as $1.7 billion. That’s a lot of money, but it isn’t the $9 billion Koster claimed. When we’ve rated such claims before, statements that speak broadly about a wage gap, regardless of the underlying factors, get some benefit of the doubt.

Koster’s claim greatly oversimplifies a very complex situation, but the size of the gap is real. We rate this claim Mostly True.

A key figure in the story changed from $7.5 billion to $1.7 billion. Koster's exaggeration, by percentage, went from 20 percent to 429 percent.  The new version of the story carries no correction notice, and the rating remains "Mostly True."
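Those exaggeration percentages follow directly from the figures above; a quick sketch of the arithmetic:

```python
# Koster's claimed gain from closing Missouri's gender pay gap, in billions.
claimed = 9.0

def exaggeration_pct(claimed, benchmark):
    """Percent by which the claim exceeds the benchmark figure."""
    return (claimed - benchmark) / benchmark * 100

# Against PolitiFact Missouri's original $7.5 billion calculation:
print(round(exaggeration_pct(claimed, 7.5)))  # 20

# Against the silently revised $1.7 billion figure:
print(round(exaggeration_pct(claimed, 1.7)))  # 429
```

Either way, a claim that overshoots its benchmark more than fourfold got the same "Mostly True" rating as one that overshot by a fifth.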

How did we get here? What went wrong at PolitiFact Missouri?

Spoiler: The present version of PolitiFact Missouri's fact check remains far from accurate.