Tuesday, March 20, 2018

PolitiFact's apples and oranges make Zinke a liar?

Fact checkers often mete out harsh ratings to politicians who employ apples-to-oranges comparisons to make their point.

Mainstream media fact checkers, however, often exempt themselves from the principles they use to find fault with others.

Consider PolitiFact's March 19, 2018 fact check of Interior Secretary Ryan Zinke.

Zinke said a Trump administration proposal "is the largest investment in our public lands infrastructure in our nation's history."

PolitiFact found that the Civilian Conservation Corps program under President Franklin Roosevelt would far exceed the proposed Trump administration spending if adjusted for inflation:
The CCC’s director wrote in 1939 that it had cost $2 billion; that was two-thirds of the way through the program’s life. And according to a Park Service study, the annual cost per CCC enrollee was $1,004. If you assume that the average tenure of the CCC’s 3.5 million workers was about a year, that would produce a cumulative cost around $3 billion.

Such calculations "sound right — millions of young men, camps to house them, food and uniforms, and they were paid," said Steven Stoll, an environmental historian at Fordham University.

Once you factor in inflation, $3 billion spent in the 1930s would be the equivalent of about $53 billion today — about three times bigger than even the fully funded Trump proposal.
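PolitiFact's arithmetic, as quoted, is easy to sanity-check. This sketch uses PolitiFact's own inputs (the per-enrollee cost, the enrollment figure, and the implied inflation conversion); none of them are independently verified here:

```python
# Sanity check of the CCC figures quoted above.
# All inputs are PolitiFact's, not independently verified.
cost_per_enrollee = 1004        # dollars per enrollee per year (Park Service estimate)
enrollees = 3_500_000           # CCC workers, average tenure assumed ~1 year
nominal_total = cost_per_enrollee * enrollees
print(f"Nominal 1930s total: ${nominal_total / 1e9:.2f} billion")  # ~$3.5 billion

implied_multiplier = 53 / 3     # implied by the $3B -> ~$53B conversion
print(f"Implied inflation multiplier: {implied_multiplier:.1f}x")  # ~17.7x
```

The rounding is loose ($3.5 billion quoted as "around $3 billion"), but the basic multiplication checks out.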
When a spokesperson for the Trump administration pointed out that the CCC included lands controlled at the state and local level, PolitiFact brushed the objection aside (bold emphasis added):
Interior Department spokeswoman Heather Swift pointed out that the CCC "also incorporated state and local land."

It’s true that the CCC created more than 700 state parks and upgraded many others, in addition to its efforts on federally owned land. Ultimately, though, the point is moot: Zinke didn’t say the proposal is the largest investment in federal lands infrastructure. He said "public lands infrastructure," and state and local parks count as "public lands."
The key to the "False" rating PolitiFact gave Zinke comes entirely from its insistence that Zinke's statement covers all public lands.

But the context, which PolitiFact reported but ignored, clearly shows Zinke was talking specifically about spending on federal lands (bold emphasis added):
"The president is a builder and the son of a plumber, as I am," Zinke told the Senate Energy & Natural Resources Committee. "I look forward to working with the president on restoring America's greatness through a historic investment of our public lands infrastructure. This is the largest investment in our public lands infrastructure in our nation's history. Let me repeat that, this is the largest investment in our public lands infrastructure in the history of this country."

Zinke specified that he was referring to the president's budget proposal, which would create a fund to provide "up to $18 billion over 10 years for maintenance and improvements in our national parks, our national wildlife refuges, and Bureau of Indian Education funds."
We note a pair of irreconcilable problems with PolitiFact's reasoning.

If Zinke had claimed the CCC spending was greater than the spending proposed by the Trump administration, he would be guilty of using an apples-to-oranges comparison. Why? Because the scope of the two spending programs varies at a fundamental level.

Any would-be comparison between spending on federal lands only and spending on federal, state and local lands qualifies as an apples-to-oranges comparison.

If Zinke's statement were interpreted in keeping with his comments on the scope of the spending--kept to "federal lands"--then PolitiFact simply elected to avoid doing the appropriate fact check: measuring the CCC spending on federal lands against the proposed Trump administration spending on federal lands. Apples-to-apples.

PolitiFact bases its fact check on the apples-to-oranges comparison: CCC spending on federal, state and local parks against proposed Trump administration spending on federal lands only.

Is that a fair comparison?

Not hardly.

Afters: Multiple flaky layers

In its fact check, PolitiFact stresses the scale of CCC spending under Roosevelt by expressing it as a percentage of the federal budget, then compares that to the tiny percentage of the total budget taken up by Trump's proposed spending.


Has the federal budget increased over time (try as a percentage of GDP)? Medicare? Medicaid? Hello?

PolitiFact loves it some apples and oranges.

Not a Lot of Reader Confusion IX

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

In a March 19, 2018 column published by The Hill, liberal radio talk show host Bill Press declared that we can't believe what President Donald Trump says.

What evidence did Press have to support his claim?


Press (bold emphasis added):
Trump tells so many lies, so often, that not even Politifact, the Pulitzer Prize-winning, nonpartisan fact-checking website can keep score. But they do the most thorough job of anybody. Since he launched his 2016 campaign, Politifact has evaluated more than 500 assertions made by candidate and president Donald Trump, and they’ve rated an astounding 69 percent of them as “false,” “mostly false,” or, the worst category, “liar, liar, pants on fire.” Think about that. On any given day, you know that seven out of ten things Donald Trump says are not true!
We at PolitiFact Bias have endlessly pointed out that, even if one assumes PolitiFact did its fact checks without mistakes, selection bias and the subjective application of its rating system make claims like the one in bold ridiculous.

Claims mirroring the one above happen routinely, yet PolitiFact denies the evidence ("not a lot of reader confusion") and continues publishing its misleading charts and graphs with no explanatory disclaimer.

We can think of two primary potential explanations.
  • PolitiFact truly doesn't see the abundance of evidence even though we jump up and down calling attention to it
  • PolitiFact is deliberately misleading people
 We invite readers to provide alternative possibilities in the comments section.

Sunday, March 18, 2018

PolitiFact: It's 'Half True' and 'Mostly True' that President Obama doubled the debt

Twitterer Ely Brit (@RealElyBritt) tweeted out this comparison of past PolitiFact ratings on March 17, 2018:

For the Trump fact check, PolitiFact came down hard on The Donald for placing blame too squarely on President Obama when Congress controls the federal government's purse strings.

On the other hand, it's hard to see how Sen. Paul eases up on placing the blame, unless he gets a bipartisan pass for blaming President Bush for the earlier debt increase.

Apart from that, neither Trump nor Paul received a "True" rating because Congress shares the blame for government spending.

Right, PolitiFact?

O-kay, then.

Correction 3/18/2018: Corrected transposed misspellings of Ely Brit's name. Our apologies to Brit. 
Correction 3/18/2018: Commenter YuriG pointed out that I (Bryan) used "deficit" in the headline, conflicting with the content of the post. Changed "deficit" to "debt" in the headline. Our thanks to YuriG for taking the time to point out the problem.

Monday, March 12, 2018

PolitiFact's Jolly problem

PolitiFact hired David Jolly at some point in February, with the hire date depending on whether his job title was "reader representative" or "Republican guest columnist."

PolitiFact said the hire was intended to build trust in PolitiFact across party lines. We've viewed the experiment with justified skepticism. And Jolly's work so far as the "Republican guest columnist" only solidifies our skepticism.

Jolly's first guest column was published March 2, 2018. We noted that Jolly used that column to address a subject tied to his own advocacy of "common sense gun control." We doubted many PolitiSkeptical conservatives would hear their voices in that column. We judged that Jolly was using his position at PolitiFact to essentially write an op-ed about one of his pet political issues.

Jolly's March 7, 2018 column followed that pattern.

Instead of critiquing PolitiFact, Jolly used his column to attack the target of a PolitiFact fact check. The target of that fact check? President Donald Trump.

Jolly (bold emphasis added):
As the nation continues to debate which gun policies might provide for the safety of our schools and communities, PolitiFact demonstrated in a single column the critical importance fact-checkers serve in both informing the American public as well as holding politicians and advocates on both sides of the debate accountable for their assertions.
Jolly played his gun control theme song in the background again, and again we ask: How does Jolly's approach to his columns build trust among conservatives skeptical of PolitiFact? Does he think that just having a Republican say something like "PolitiFact is right" will budge the needle of partisan mistrust?

We'll go out on a limb and predict that approach has a snowball's chance in hell of working.

Conservative mistrust in PolitiFact stems primarily from two factors:
  • Conservatives see PolitiFact turning a blind eye to conservative arguments
  • PolitiFact commonly makes errors of fact and logic damaging to conservatives
Jolly's column serves as an example of both problems, despite his willingness to identify as a Republican.

Jolly reinforces PolitiFact's left-leaning bias

Jolly lauded PolitiFact for rating "False" the claim that an armed civilian might have stopped the "Pulse" nightclub shooting in Orlando (bold emphasis added):
"You take Pulse nightclub," Trump said. "If you had one person in that room that could carry a gun and knew how to use it, it wouldn’t have happened, or certainly to the extent that it did."
The problem for both the president and his theory is that an armed officer and 15-year veteran of the Orlando Police Department, Adam Gruler, was actually working security at Pulse that fateful night, and indeed engaged the shooter directly with gunfire. Forty-nine people still lost their lives.
PolitiFact rightly rated as False the president's statement that an armed security guard could have saved those 49 victims.
A conservative would read the PolitiFact fact check looking for evidence of fair treatment of the mainstream conservative point of view. That view is absent, and when Jolly fails to notice its absence and celebrates PolitiFact's fact check anyway, the conservative cannot take Jolly's columns any more seriously than the fact check itself.

The problem with the fact check

What's the problem with PolitiFact's fact check?

The fact check pretends that the armed guard working security in the Pulse parking lot counts as "one person in that room that could carry a gun and knew how to use it."

Is the Pulse parking lot inside the Pulse nightclub?

Shouldn't that make a difference for non-partisan fact checkers and GOP columnists alike?

Note PolitiFact's description of armed guard Adam Gruler's involvement in the Pulse incident (bold emphasis added):
The Justice Department in 2017 released a nearly 200-page report detailing the Orlando police response to the shooting. Here’s the report’s account of Gruler’s initial confrontation with Mateen:
"Outside, in the Pulse parking lot, (Gruler), who was working extra duty at the club — to provide outside security and to provide assistance to security personnel inside the club if needed — heard the shots that were being fired; at 2:02:17 a.m., he broadcast over the radio, 'Shots fired, shots fired, shots fired,' and requested additional officers to respond.

"The detective told the assessment team that he immediately recognized that his Sig Sauer P226 9mm handgun was no match for the .223 caliber rifle being fired inside the club and moved to a position that afforded him more cover in the parking lot. Two patrons attempted to flee through an emergency exit on the south side of the club. When the detective saw the suspect shoot them, he fired at the suspect."
According to an Orlando Police Department report, additional officers arrived on the scene about a minute after Gruler’s call for backup was broadcast. A second backup officer arrived about a minute after that.
 PolitiFact's conclusion, sadly, serves as an adequate summary of its argument:
Talking about the Pulse nightclub shooting, Trump said, "If you had one person in that room that could carry a gun and knew how to use it, it wouldn’t have happened, or certainly to the extent that it did."

An armed, off-duty police officer in uniform was at the club during the shooting, and exchanged gunfire with the shooter, who managed to kill 49 people.

We rate this False.
Notice that PolitiFact says Gruler was "at the club," not "outside the club."

The "False" rating gets its premise from the fiction that Gruler was in the same room with Mateen while the latter was murdering club patrons. Jolly, PolitiFact's voice of the GOP, signs on with that falsehood.

In this case the deception comes from PolitiFact and Jolly. Not from Trump.

PolitiFact demonstrated nothing false in Trump's statement, yet pinned a "False" rating on his statement.

Jolly cheered PolitiFact's work, missing the key discrepancy in its fact check.

That's all kinds of wrong.

Saturday, March 3, 2018

The first critique from PolitiFact's Republican "guest columnist" David Jolly (Updated)

In early February, PolitiFact tabbed two former Washington D.C. politicians as Democrat and Republican "guest columnists." PolitiFact said the hires were intended to raise its credibility across partisan lines.

We looked at the first column from the Democrat back on Feb. 12 and judged it pointless crap.

Republican David Jolly published his first critique on March 2, 2018. And though we were aware of charges that Jolly counts as an "MSNBC Republican," we were still a bit surprised at first blush when Jolly used his column to apparently suggest PolitiFact had rendered a too-harsh judgment on a Democrat's claim about gun violence.

It's worth noting that Jolly has been busy lately promoting "common sense" gun control legislation, so maybe it was just too tempting to resist tying that political cause to his role at PolitiFact.

PolitiFact gave a gun violence statistic from Rep. Ted Deutch (D-Fla.) a rating of "Mostly False." Jolly appeared to think Deutch deserved a better fate (bold emphasis added):
Rep. Deutch is said to have relied on statistics provided by The Century Foundation, a left-leaning think tank, which plainly assessed there had been "an increase of over 200%" in mass shootings since the expiration of the assault weapons ban. PolitiFact, however, rated Deutch’s statement Mostly False, largely on the basis that the commentary upon which the congressman relied was substantively challenged by other experts in the field. Presumably, had Deutch said in the town hall, "According to The Century Foundation, mass shootings went up 200% in the decade after the assault weapons ban expired," PolitiFact would have found the Congressman’s statement True on its face, while ruling the findings of The Century Foundation Mostly False.

Moreover, had PolitiFact evaluated Deutch’s statement simply on the numbers, there is ample evidence in the PolitiFact article to support a ruling of Mostly True.
Jolly's critique hits at PolitiFact's standard operating procedure. Yes, of course PolitiFact's ratings oversimplify complexities. Will writing a column pointing that out for the umpteenth time change anything?

Is that the extent of Jolly's point?  That's what his conclusion suggests:
In this case, a congressman’s statement seems to have been ruled Mostly False on two primary factors — his citing a credible think tank’s commentary on gun violence statistics, and a drawn inference by fact-checkers that may or may not have been intended in the congressman’s statement. Neither makes PolitiFact’s ruling right or wrong, but it reflects the enormous challenge faced by politicians, fact-checkers and ultimately voters in today’s political environment.
Newsflash for David Jolly: The fact checkers do not see the subjectivity of their ratings as a problem. The problem, in their eyes, is that readers do not place enough trust in fact checkers. And they recruited their "guest columnists" to help build bipartisan trust in their subjective judgments.

Jolly's critique is about as deep as Barbie's "Math is hard" critique of high school and as such works better as a soft-sell version of his gun control op-ed that we linked above. But we like it for the fact that its premise arguably strikes against the fundamental subjectivity of the PolitiFact approach.

Expect a short run for this PolitiFact experiment if Jolly's future work similarly attacks the PolitiFact premise.


Afters

We learned about Jolly's column through PolitiFact's Facebook page, which featured a link to it.

PolitiFact's main page at PolitiFact.com does a pretty awesome job of burying these columns. The reader would have to scroll near the bottom of the page and see the "Inside the Meters" section in the right sidebar.

It's literally the last element on the right sidebar, and there's no way to directly reach the "Inside the Meters" posts using the navigation buttons at the top of the PolitiFact website.

Afters 2

Unsurprisingly, PolitiFact uses an "even the Republican thinks we were too tough on the Democrat" frame in its Facebook announcement of his column:

See why he said "there is ample evidence ... to support a ruling of 'Mostly True.'"
PolitiFact left out enough context of Jolly's column to earn its own "Half True" (bold emphasis added):
Moreover, had PolitiFact evaluated Deutch’s statement simply on the numbers, there is ample evidence in the PolitiFact article to support a ruling of Mostly True.
Sometimes PolitiFact judges simply on the numbers, sometimes it doesn't. No objective, nonpartisan principle appears to guide the decision.

We suggest PolitiFact's manipulation of Jolly's statement misleads its audience about Jolly's point. Jolly's overall point, albeit delivered weakly, was the difficulty of making the ratings (thanks to subjectivity). PolitiFact spins that into the Republican saying PolitiFact was too hard on the Democrat.

PolitiFact's spin helps stuff Jolly's column into a frame that assists PolitiFact's purpose of fluffing up its own credibility: They're so nonpartisan! They gave a Democrat a "Mostly False" when a Republican said he should have had a "Mostly True!!"

Afters 3

Exit question for David Jolly:

PolitiFact sees your role as a guest columnist as a way for it to help build its own credibility. It has said as much in public statements. Describe the difference between the way you see your role as a guest columnist and the way PolitiFact sees it.

Update: Jeff adds

Cherry-picking the source of a claim

One might assume PolitiFact's newest critic would know something about PolitiFact, at least until one reads his newest critique.

Jolly's gripe, and his solution to it, betray a gross ignorance of the way his employer works.
"PolitiFact would have found the Congressman’s statement True on its face, while ruling the findings of The Century Foundation Mostly False."
Jolly finds fault with PolitiFact for giving the "Mostly False" rating to Rep. Deutch, when Deutch was simply repeating a claim made by a (Jolly's words) "credible think tank." He argues the rating should have gone to the think tank.

That's nice, but it's also a problem we've spelled out on these pages for nearly a decade. One of the ways PolitiFact shows its liberal lean is its choice of the source to credit for a claim. We've called that "attribution bias."

In 2011, when The New York Times, ABC, and NPR all repeated a bogus number published in an official Inspector General report regarding $16 muffins, PolitiFact added a "False" rating to conservative Bill O'Reilly's "report card." 

In 2012, when Time, CNN, and the New York Post published a faulty stat from a dubious research firm, it was conservative media group American Crossroads that earned the demerit on its PolitiFact scorecard.

More recently, in 2017, PolitiFact ignored a widely publicized liberal talking point espoused by numerous talking heads in the national media for months, then quickly jumped in to deem it "False" within a day of Donald Trump repeating it.

Though in Deutch's case PolitiFact gave the rating to a Democrat instead of the think tank responsible for the stat, it still exemplifies PolitiFact's subjective, cherry-picking process for assigning its ratings. That Jolly seems to think his observation is novel only highlights his own naivety about PolitiFact.

The critique overall?

Jolly's overall critique is about as compelling as warm beer. It's almost as if he quickly cobbled together the column immediately after being criticized for not producing a critique during the first month of PolitiFact's new "reader advocates" feature.

We're not sure what value Jolly adds to the operation here, unless his goal is to be duped into serving as a real-life version of PolitiFact's lame "we get it from both sides!" defense. He hasn't provided any insight or commentary that hasn't been offered in better form on this blog over the last several years. We think having an in-house, right-leaning critic at PolitiFact is a good idea and think it could improve its work immensely. So far it appears Jolly is not the vehicle for that improvement.

In any event, congratulations to Jolly for finding a way to get his first critique of PolitiFact from the right to dovetail so nicely with his conservative efforts to support ... gun control.

Thursday, March 1, 2018

Top five reasons why David Jolly did not critique PolitiFact in February (Updated)

Update March 2, 2018

We've been curious, especially in light of the gun debate's current popularity, what has happened to PolitiFact's experiment involving "reader advocates." In particular, we noted the absence of Republican David Jolly. We published a post yesterday [March 1--bww] addressing this curiosity and included a healthy dose of snark.

Mr. Jolly has since responded via Twitter that his silence was the result of personal matters, including a death in the family and threats he's been receiving.

Our sarcasm was intended to mock the same problems we typically do with PolitiFact. While there's nothing offensive in anything we wrote, we've decided to remove the post in deference to Mr. Jolly's personal circumstances.

We'll continue to use snark to highlight PolitiFact's dubious operation, but in this instance we don't see the value in unnecessary jokes in light of Mr. Jolly's response.
(above by Jeff D.)
Update continued:

Instead of deleting the post or preserving it offsite via the Internet Archive, we're putting it below a page break and changing the text to white (on a white background) to make it mostly invisible to all except the insatiably curious.

Some blue highlights from embedded URLs will remain visible.

Sunday, February 18, 2018

PolitiFact partially unveils spectacularly transparent description of its fact-checking process

"The Week in Fact-Checking," an update on the latest fact-checking news posted at the Poynter website, alerted us to the fact that PolitiFact has updated its statement of principles:
PolitiFact made their methodology more transparent, in keeping with other fact-checkers around the world. (And ICYMI,  PolitiFact has moved its headquarters to Poynter, earning a not-for-profit designation.)
We were surprised we had missed PolitiFact's welcome improvement to its methodological transparency. So we visited PolitiFact.com to check it out.

So ... where is it?

PolitiFact created multiple pages of transparent new content and apparently neglected to equip its website with internal links leading readers to the new content.

Clicking "About Us>>Our Process" on the main menu takes the reader to PolitiFact's 2013 statement of principles.

Clicking "Our Process" on the footer takes the reader to PolitiFact's 2013 statement of principles.

There's no apparent way to use PolitiFact's main page to find the new even-more-transparent(!) statement of principles.

But people can see PolitiFact's latest extreme transparency through the Poynter.org website. Or maybe via links posted to Twitter. We haven't noticed any yet, but it's possible.

So there's that.

The new material was published on Feb. 12, 2018. As of Feb. 18, 2018, PolitiFact.com still funneled readers to its 2013 statement of principles.

We see that as illustrative of the PolitiFact bubble. PolitiFact judges its transparency according to its belief it has published a new statement of principles. Those outside the PolitiFact bubble, unaware of the new statement of principles thanks to PolitiFact's oversight, do not likely take the same view of PolitiFact's transparency.

Why are those outside the bubble so ignorant of PolitiFact's extreme transparency?

Monday, February 12, 2018

Guest columnist (Democrat) critiques PolitiFact

We covered PolitiFact's announcement it had hired Democratic and Republican "reader advocates" to help establish its trustworthiness. And we covered how PolitiFact unpublished that announcement when its choice of Alan Grayson, former Democratic congressman from Florida, blew up in its face.

Another announcement followed on Feb. 9, 2018, naming Republican David Jolly and Democrat Jason Altmire as "guest columnists."

The guest columns appear on PolitiFact's blog page, "Inside the Meters," which should prove sufficient to bury the columns beyond the notice of anybody who doesn't get Twitter or email alerts directly from PolitiFact.

Altmire was the first to have a critique published.

We think it's pointless crap.

Altmire says PolitiFact "generously" rated a Republican "Half True." Then, later in the column, he says the "Half True" rating is the correct rating.

No, seriously. That's what Altmire does.

In the lead paragraph, Altmire says PolitiFact gave a "generous" Half True rating (bold emphasis added):
PolitiFact generously rates Congressman Mullin’s Facebook post "Half True." He got the numbers right, but failed to inform readers of the context. In evaluating claims involving the selective use of statistics, PolitiFact must consider whether the omission was accidental or meant to deceive. Mullin’s omission appears to have been purposeful, because he knows an evaluation of Obama’s entire economic record would present a completely different picture than the one the congressman was trying to paint. Is "Half True" an accurate rating in this case?
 And in his concluding paragraph, Altmire says PolitiFact got the rating right (bold emphasis added):
PolitiFact gave the correct rating; Mullin’s post was indeed "Half True." But from now on, when readers consider a statement that has been rated "Half True" based upon the misuse of statistics, I hope they will remember the less-than-complimentary implication of that rating.
("from now on"?????)

Altmire's point, cleverly disguised in the midst of his self-contradiction, was that Congressman Mullin was lying, and Altmire wishes PolitiFact had been more clear about it.

As critiques go, that's plenty lame. Media Matters could have come up with that one with no problem.

If these columnists don't pick up their game immediately, PolitiFact ought to waste no time at all pulling the plug on this experiment.

Saturday, February 10, 2018

How we made our meme mocking PolitiFact

Earlier this week we noticed PolitiFact making yet another hypocritical declaration. PolitiFact has ruled it misleading to use "cuts" to refer to reductions to a future projected spending baseline. In many cases a budget might increase year by year but the legislature "cuts" spending by slowing its increase.

In the past, we've pointed out how PolitiFact tended to rate Republicans "Mostly False" for claiming the Affordable Care Act cut Medicare by hundreds of billions of dollars. When President Donald J. Trump and the Republican Congress tried the same thing with Medicaid in 2017, PolitiFact discovered that the claim was "Half True" on the few occasion(s?) it noticed the Democrats' ubiquitous claim and then quickly lost interest.

Fast forward to 2018, and PolitiFact published a fact check of a Trump statement about protests over the United Kingdom's National Health Service, its universal care program. PolitiFact treated Trump unfairly by rating him on something he did not say, but what really knocked our socks off was a sentence PolitiFact reeled off in its summary:
While the NHS has lost funding over the years, the march that took place was not in opposition to the service, but a call to increase funding and stop austerity cuts towards health and social care.
The problem? You guessed it! Spending has gone up for the NHS pretty consistently. The fact checkers at Britain's Full Fact even did a fact check in January 2018 relating to NHS funding. It only reported spending going up.
Spending on the NHS in England has increased in real terms by an average of around 1% a year since 2010. Since the NHS was established spending increases have averaged 4% per year.
So the NHS hasn't "lost funding" except against a projected future baseline. The austerity "cuts" PolitiFact reports are decreases in the rate of future spending growth.
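The distinction here can be shown with a toy calculation. The numbers below are invented for illustration (they are not NHS or Medicare figures): spending rises every year, yet each year shows a "cut" relative to the projected baseline.

```python
# Toy illustration of a "cut" against a projected baseline.
# Numbers are invented, not actual budget figures.
baseline = [100, 104.0, 108.2]   # projected spending at ~4% annual growth
enacted  = [100, 101.0, 102.0]   # enacted spending, growing ~1% a year

for year, (b, e) in enumerate(zip(baseline, enacted)):
    # Spending goes up each year, but the gap vs. the baseline widens.
    print(f"Year {year}: spending rises to {e}, 'cut' vs baseline = {b - e:.1f}")
```

Calling the widening gap a "cut" is exactly the usage PolitiFact has penalized in the past.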

PolitiFact is making a claim it has rated "Half True" and worse in the past.

We don't appreciate that type of hypocrisy from a supposedly non-partisan and objective fact checker. So we went to work on a meme.

First, we looked at PolitiFact's list of stories with the "Medicare" tag. We knew we'd find stories reporting on budget cuts to a baseline. And from those stories we looked for one with a summary that would fit the present case. It didn't take long. We found a "Half True" rating from PolitiFact Ohio that fit the bill:

"So-called cut reflects savings from slowing growth in spending." Doesn't that sound much better than "cutting Medicare"? Hurrah! It's savings!

Our next step was to replace the text and image to the left of the "Truth-O-Meter" graphic. We decided to pin the blame on PolitiFact Editor Angie Drobnic Holan instead of on the intern who wrote and researched the fact check. Holan had good reason to know PolitiFact's history on rating cuts to a future baseline.

We took Holan's image from her Twitter account.

We replaced the text with Holan's name and the outrageous quotation from the Trump fact check.

We credited our faux fact check to "PolitiFact National" on the day the Trump fact check came out. We skipped the em-dash this time since it takes a few extra steps.

And we put a big "PARODY" watermark on the whole thing to make clear we're not trying to trick anybody. The point is to mock PolitiFact for its inconsistency.

Our finished product:

Seriously: It's ridiculous for a national fact-checking service to do such a poor job of reporting consistently. Holan is the chief editor, and she doesn't notice this clear problem? She let the intern down by not catching it. And how long will it take to correct the problem? Eternity?

PolitiFact's past work on budget cuts is already so chaotic that one more miss hardly matters. We don't expect anything to change. PolitiFact will go right on giving readers a slanted view of budget cuts.

For that matter, we expect the other two of America's "elite three" fact checkers to independently follow the same misleading pattern PolitiFact uses. That's what happens when all three lean left.

Thursday, February 8, 2018

Are fact checkers fact-checking opinions more? Blame Trump! (Updated)

The fact checkers at PolitiFact apparently can't keep themselves from allowing their opinions to seep into their work.

Fortunately, we can all blame President Trump. That way, the fact checkers need not acknowledge any error.

A Feb. 6, 2018 PolitiFact fact check took as an assertion of fact Trump's apparent opinion that the word "treason" might apply to Democrats who failed to applaud good news about the United States during Trump's State of the Union Address.

PolitiFact, in classic straw man fashion, insisted that "treason" had to refer to the type codified in law, and so rated Trump's claim "Pants on Fire" (bold emphasis added):
Trump said that at the State of the Union address, Democrats, "even on positive news … were like death and un-American. Un-American. Somebody said, ‘treasonous.’ I mean, yeah, I guess, why not? Can we call that treason? Why not?"

There’s a good reason why not: Declining to applaud the president doesn’t come anywhere near meeting the constitutionally defined threshold of treason, which in any case can’t occur except in wartime. Rather, legal experts agree that it is a clear case of constitutionally protected free speech. We rate the statement Pants on Fire.
In fact, "treason" has a broader definition than PolitiFact allowed:

  1. the offense of acting to overthrow one's government or to harm or kill its sovereign.
  2. a violation of allegiance to one's sovereign or to one's state.
  3. the betrayal of a trust or confidence; breach of faith; treachery. 
Failing to applaud good news about one's state would, in a sense, violate allegiance to one's state. And, more to the point, one can define words as one likes. One could, for example, choose to define the word "Rump" to refer exclusively to President Trump. One can do such things because words are ultimately just symbols representing ideas, and people can choose what idea to associate with what symbol.

Is it a good idea to use words in ways that run against their commonly understood meanings? That's a different issue.

Trump afforded his critics another marvelous opportunity to criticize his temperament and wisdom, but that criticism belongs in op-eds, not fact checks.

The dastardly Trump forced helpless journalists to abandon their objectivity.

How dare he.

Update Feb. 8, 2018

We weren't going to make a big deal of PolitiFact saying that Trump was suggesting that not applauding for him (Trump) might qualify as treason.

But then PolitiFact started emphasizing that misleading headline on Twitter:
That's just bad reporting, and it's a classic example of a biased headline. Trump says failing to applaud good news about the United States might pass as treason, not the failure to applaud President Trump.

Nonpartisan and objective journalists should be able to distinguish between the two.

Tuesday, February 6, 2018

PolitiFact: One standard for me, and another for thee

On Feb. 5, 2018, PolitiFact published an article on cherry picking from one of its veteran writers, Louis Jacobson. Titled "The Age of Cherry-picking," it led with a claim of fact as its main hook:
These days, it isn’t just that Republicans are from Mars and Democrats are from Venus. Increasingly, politicians on either side are cherry-picking evidence to support their version of reality.
With cherry-picking on the increase, and with both sides using it more, certainly readers would want to see what PolitiFact has to say about it.

But is it true? Is cherry-picking on the increase?

One had to read far down the column to reach Jacobson's evidence (bold emphasis added):
So is there more cherry-picking today in political rhetoric than in the past? That’s hard to say -- we couldn’t find anyone who measures it. But several political scientists and historians said that even if it’s not more common, the use of the tactic may have turned a corner.

If a writer tries to hook me into reading a story based on the claim that cherry-picking is on the increase, then takes over 20 paragraphs before getting around to telling me that no good evidence supports the claim, I want my money back.

This isn't hard, fact checkers. If it's hard to say if there is more cherry-picking today in political rhetoric than in the past, don't say "Increasingly, politicians on either side are cherry-picking evidence to support their version of reality."

Don't do it.

Even a Democrat probably couldn't entirely get away with a claim so poorly supported by the evidence, thanks to PolitiFact's occasionally applied principle of the burden of proof:
Burden of proof – People who make factual claims are accountable for their words and should be able to provide evidence to back them up. We will try to verify their statements, but we believe the burden of proof is on the person making the statement.
We used Twitter to needle PolitiFact over this issue, surprisingly drawing some response (nothing of substance). But the exchange ended up productive when co-editor Jeff D, who runs the PFB Twitter account, contributed this summary:
That about sums it up. One standard for me, and another for thee.

Update Feb. 7, 2018: Supplied URL to PolitiFact's article on cherry picking, added tag labels.

Monday, February 5, 2018

Does "lowest" mean something different in Georgia than it does in Texas?

Today PolitiFact National, posing as PolitiFact Georgia, called it "Mostly True" that Georgia has the lowest minimum wage in the United States.

Georgia law sets the minimum wage at $5.15 per hour, the same rate Wyoming uses, and the federal minimum wage of $7.25 applies to all but a very few Georgians. PolitiFact National (styled as PolitiFact Georgia) hit Democrat Stacey Evans with a paltry "Mostly True" rating:
Evans said Georgia "has the lowest minimum wage in the country."

Georgia’s minimum wage of $5.15 per hour is the lowest in the nation, but Wyoming also has the same minimum wage.

Also, most of Georgia’s workers paid hourly rates earn the federal minimum of $7.25.

Evans’ statement is accurate but needs clarification or additional information. We rate it Mostly True.
Sounds good. No problem. Right?

Eh. Not so fast.

Why is it okay in Georgia for "lowest" to reasonably reflect a two-way tie with Wyoming, but in Texas using "lowest" where there's a three-way tie earns the speaker a "False" rating?

How did PolitiFact Texas justify the "False" rating it gave the Republican governor (bold emphasis added)?
Abbott tweeted: "The Texas unemployment rate is now the lowest it’s been in 40 years & Texas led the nation last month in new job creation."

The latest unemployment data posted when Abbott spoke showed Texas with a 4 percent unemployment rate in September 2017 though that didn't set a 40-year record. Rather, it tied the previous 40-year low set in two months of 2000.

Abbott didn’t provide nor did we find data showing jobs created in each state in October 2017.

Federal data otherwise indicate that Texas experienced a slight decrease in jobs from August to September 2017 though the state also was home to more jobs than a year earlier.

We rate this claim False.
A tie goes to the Democrat, apparently.

We do not understand why it is not universally recognized that PolitiFact leans left.

Correction/clarification Feb. 5, 2018:
Removed unneeded "to" from the second paragraph. And added a needed "to" to the next-to-last sentence.

Thursday, February 1, 2018

The bird's-eye lowdown on PolitiFact's partisan reader advocates (Updated)

Today PolitiFact announced it will publish content from two reader advocates at its PolitiFact.com website.

The announcement didn't go so well. Shortly after making the announcement PolitiFact nixed Democrat Alan Grayson's planned involvement.

The Hill reports:
Fact-checking website PolitiFact on Thursday announced that it had hired former Florida Reps. David Jolly (R) and Alan Grayson (D) as “reader advocates” before hours later nixing Grayson's hire after fierce backlash.
The dumping of Grayson aside, does this represent a sincere effort from PolitiFact to help improve its product?

Probably not, despite the we're-so-humble sales job from PolitiFact Executive Director Aaron Sharockman.

Huh. Well, we intended to link directly to PolitiFact's announcement about its new reader representatives, but it looks like PolitiFact unpublished it. We'll go with the reporting from The Hill instead, which has coincidentally also been altered since its publication (an earlier version carried a hyperlink to PolitiFact's announcement):
The two former lawmakers had been set to critique the website’s fact-checks and provide additional insight on political issues as part of a pilot program that will run through the end of April, according to a Thursday PolitiFact post announcing the hires. The post has since been deleted.

“David and Alan are both particularly qualified, we think, to critique the work of PolitiFact, because they’ve been subject to our fact-checks as members of Congress,” PolitiFact Executive Director Aaron Sharockman wrote in his initial statement.

Well-qualified to critique the work of PolitiFact! That implies that the critiques may prove valid, right?

PolitiFact said it would learn from Jolly and Grayson, though it's hard for us to prove that now that PolitiFact is disposing of the evidence. Like a good fact-checker should, I guess.

We think PolitiFact's flirtation with reader advocates fits well with its old narrative about how it receives criticism from both sides. Getting criticized from both sides, using mumbo-jumbo logic, shows the reliability of the entity getting criticized. That's a superb narrative for PolitiFact to promote. It's certainly far better than the alternative narrative, that getting criticized by both sides means you're probably doing something wrong.

PolitiFact's founding editor, Bill Adair, did research suggesting that PolitiFact receives most of its substantive criticism from the right. Will PolitiFact allow such research to impact its choice of which narrative to promote? We doubt it.

We captured images from Sharockman's Twitter feed--ones he had so far elected not to delete. The one at the top pretty much shows PolitiFact's thinking behind this experiment. It's not about improving the product. It's about encouraging the public to trust in the existing product.

Note that PolitiFact's experiment was only slated to run through April 2018. Three months. If PolitiFact detects signs that people are trusting it less during the experiment, it will terminate the experiment.

Does that sound like a sincere effort to improve the product? Or more like a cynical ploy intended to trick people into trusting PolitiFact?

How many times do we have to say it? One gains trust by proving trustworthy. One proves one's trustworthiness through accuracy and transparency.

Making published works entirely disappear is not transparency.

Jeff adds:

We wonder what it says about PolitiFact that they considered Grayson representative of a conventional Democrat voice.

We'd also like to congratulate the fact checkers on selecting conservative powerhouse David ... uh ... *checks notes* ... Jolly.

We're not surprised that PolitiFact's ill-advised attempt to gain credibility resulted in more of the same deception we've come to expect. For example, un-publishing articles is somehow a sign of trustworthiness?

For years we've said PolitiFact would benefit from an inside critic of their work and suggested most of their obvious blunders would have been prevented with a heterodox voice on staff. On its face the notion of "reader advocates" seems like a step in that direction, until you realize it's just another click-seeking gimmick (Is Grayson really the top pick for any serious endeavor?).

If PolitiFact were actually sincere about gaining reader trust, there are more effective ways than adding sideshow acts performed by clowns and cranks. For instance, they could unequivocally disavow their longtime use of stealth edits. That might help bring them into compliance with the International Fact Checking Network code of principles that they currently violate.

But the most obvious thing they could do to improve their image is to credibly rebut the volumes of legitimate, earnest criticism of their work. So far, PolitiFact's response to charges of bias has been to call critics "mental" or to ignore them altogether. For some reason PolitiFact does not view an honest defense of their work as a viable remedy for reader distrust.

PolitiFact's incompetent and biased editorializing has never earned credibility. Adding more clowns to the car won't change that. This "readers advocate" stunt only shows how unserious PolitiFact is about providing readers the truth.

Edit 0835PST 2/2/2018: Added word "PolitiFact" in penultimate graph -Jeff

Imperious PolitiFact

PolitiFact decides who built what, and it's ridiculous

Back in 2016 we reviewed the "True" rating PolitiFact awarded First Lady Michelle Obama for her claim that the White House was built by slaves.

Slaves definitely helped with the labor of constructing the White House, but to an unknown degree. Regardless of that, PolitiFact awarded Obama a "True" rating on its subjective "Truth-O-Meter."

We thought PolitiFact went too easy on the claim, given that one could use the same standard to claim it "True" that European immigrants built the White House. Including a word like "helped" allows either claim to rise to credibility.

Fast forward to 2018 and the State of the Union Address response from U.S. Rep. Joe Kennedy III (D-Mass.).

Kennedy said immigrants built Fall River, Massachusetts.

Breitbart, a right-leaning news outlet, judged Kennedy's statement "Mostly False," reasoning that the establishment of Fall River by native-born descendants of English settlers made it reasonable to say the city was built, at least in part, by those native-born people. Breitbart added that the native-born population has always outnumbered immigrants in the county that contains Fall River.

PolitiFact apparently doesn't care for sharing credit. If one group helped build something, then that group gets the credit and other groups that helped do not.

PolitiFact rated Breitbart's claim "False." Yes, that implies that immigrants who helped "build" Fall River by coming to work at factories established by the native residents were the ones who exclusively built Fall River.

Call us radical right-wingers if you like, but we think if the facts show that credit for building something should be shared, then a fact checker should acknowledge shared credit in its ratings.

Breitbart's "Mostly False" rating hints at an ability to make that type of acknowledgement.

PolitiFact's "True" and "False" ratings make it look more partisan than Breitbart.

Tuesday, January 30, 2018

PolitiFact editor: "Tell me where the fact-check is wrong"

Ever notice how PolitiFact likes to paint its critics as folks who carp about whether the (subjective) Truth-O-Meter rating was correct?

PolitiFact Editor Angie Drobnic Holan gave us another stanza of that song-and-dance in a Jan. 26, 2018 interview with Digital Charlotte. Digital Charlotte's Stephanie Bunao asked Holan whether she sees a partisan difference in the email and commentary PolitiFact receives from readers.

Holan's response (bold emphasis added):
Well, we get, you know, nobody likes it when their team is being criticized, so we get mail a lot of different ways. I think obviously there's a kind of repeated slogan from the conservative side that when they see media reports they don't like, that it's liberal media or fake news. On the left, the criticism is a little different – like they accuse us of having false balance. You know, when we say the Democrats are wrong, they say, ‘Oh, you're only doing that to try to show that you're independent.’ I mean it gets really like a little bit mental, when people say why we're wrong. If they're not dealing with the evidence, my response is like, ‘Well you can say that we're biased all you want, but tell me where the fact-check is wrong. Tell me what evidence we got wrong. Tell me where our logic went wrong. Because I think that's a useful conversation to have about the actual report itself.
Let us count the ways Holan achieves disingenuousness, starting with the big one at the end:

1) "Tell me where the fact-check is wrong"

We've been doing that for years here at PolitiFact Bias, making our point in blog posts, emails and tweets. Our question for Holan? If you think that's a useful conversation to have then why do you avoid having the conversation? On Jan. 25, 2018, we sent Holan an email pointing out a factual problem with one of its fact checks. We received no reply. And on Jan. 26 she tells an interviewer that the conversation she won't have is a useful one?

2) "Every year in December we look at all the things that we fact-check, and we say, ‘What is the most significant lie we fact-checked this year’"

Huh? In 2013, PolitiFact worked hard to make the public believe it had chosen the president's Affordable Care Act promise that people would be able to keep plans they liked under the new health care law as its "Lie of the Year." But PolitiFact did not fact check the claim in 2013. PolitiFact Bias and others exposed PolitiFact's deception at the time, but PolitiFact keeps repeating it.

3) PolitiFact's "extreme transparency"

Asked how the media can regain public trust, Holan mentioned the use of transparency. We agree with her that far. But she used PolitiFact as an example of providing readers "extreme transparency."

That's a laugh.

Perhaps PolitiFact provides more transparency than the average mainstream media outlet, but does that equal "extreme transparency"? We say no. Extreme transparency is admitting your politics (PolitiFail), publishing the texts of expert interviews (PolitiFail, except for PolitiFact Texas), revealing the "Truth-O-Meter" votes of its editorial "star chamber" (PolitiFail) and more.

PolitiFact practices above-average transparency, not "extreme transparency." And the media tend to deliver a poor degree of transparency.

We remain prepared to have that "useful conversation" about PolitiFact's errors of fact and research.

You let us know when you're ready, Angie Drobnic Holan.

Monday, January 29, 2018

PolitiFact masters the non sequitur

A non sequitur occurs when one idea does not follow from another.

A Jan. 23, 2018 fact check by PolitiFact's Miriam Valverde offers ample evidence that PolitiFact has mastered the non sequitur.

Valverde's fact check concerned a claim from a White House infographic*:

PolitiFact looked at whether it was true that immigrants cost U.S. taxpayers $300 billion annually. The careful reader will have noticed that the White House infographic did not claim that immigrants cost U.S. taxpayers $300 billion annually. It made two distinct claims, first that unskilled immigrants create a net fiscal deficit and second that current immigration policy puts U.S. taxpayers on the hook for as much as $300 billion.

Isn't it wonderful when supposedly non-partisan fact checkers create straw men?

As for what the White House actually claimed, yes the Washington Times reported there was one study--a thorough one--that said current immigration policy costs U.S. taxpayers as much as $296 billion annually.

We looked for the precise origin of that figure in the study but could not find it. Apparently PolitiFact also failed to find it, and after mentioning the Times' report it proceeded to use the study's figure of $279 billion for 2013. That figure was for the first of eight scenarios.

Was the $296 billion number an inflation adjustment? A population increase adjustment? A mistake? A figure representing one of the other groups? We don't know. But if the correct figure is $279 billion, then $300 billion represents a liberal-but-common method of rounding. It could also qualify as an exaggeration of roughly 8 percent.
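As a quick illustrative check of that rounding gap (the dollar figures are the ones cited above; the variable names are ours):

```python
# Back-of-the-envelope check of the rounding gap (illustrative; figures as cited in the text).
study_figure = 279.0  # $ billions: first-scenario net fiscal cost for 2013, per the NAS study
infographic = 300.0   # $ billions: the White House infographic's "as much as" figure
pct_gap = (infographic - study_figure) / study_figure * 100
print(f"{pct_gap:.1f}% above the study's figure")  # about 7.5%
```

So "8 percent" is, if anything, a slightly generous description of the gap.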

What problem does PolitiFact find with the infographic (bold emphasis added)?
A consultant who contributed to the report told us that in 2013 the total fiscal burden -- average outlays minus average receipts multipled [sic] by 55.5 million individuals -- was $279 billion for the first generation of immigrants. But making a conclusion on that one figure is a mighty case of cherry-picking.

What conclusion did the infographic draw that represents cherry picking?

That line from Valverde does not belong in a fact check without a clear example of the faulty cherry picking. And in this case there isn't anything. The fact check provides more information about the report, including some positives regarding immigrant populations (especially second-generation immigrants), but ultimately finds no concrete fault with the infographic.

PolitiFact's charge of cherry picking doesn't follow.

And PolitiFact's conclusion?
Our ruling

The White House claimed that "current immigration policy imposes as much as $300 billion annually in net fiscal costs on U.S. taxpayers."

A study from the National Academies of Sciences, Engineering, and Medicine analyzed the fiscal impact of immigration under different scenarios. Under some assumptions, the fiscal burden was $279 billion, but $43 billion in other scenarios.

The report also found that U.S.-born children with at least one foreign-born parent are among the strongest economic and fiscal contributors, thanks in part to the spending by local governments on their education.

The statement is partially accurate but leaves out important details. We rate it Half True.
In the second paragraph PolitiFact says the fiscal burden amounted to $43 billion "in other scenarios." Put correctly, one scenario put the figure at $279 billion and two scenarios may have put the figure at $43 billion because the scenarios were nearly identical. The study looked at a total of eight scenarios, found here. It appears to us that scenario four may serve as the source of the $296 billion figure reported in the Washington Times.

Our conclusion? PolitiFact's fact check provides a left-leaning picture of the context of the Trump White House infographic. The infographic is accurate. It plainly states that it is picking out a high-end figure. It states it relies on one study for the figure.

The infographic, in short, alerts readers to the potential problems with the figure it uses.

That said, the $300 billion figure serves as a pretty good example of appealing to the audience's anchoring bias. Mentioning "$300 billion" predisposes the audience toward believing a similarly high figure regardless of other evidence. That's a legitimate gripe about the infographic, though one PolitiFact neglected to point out while fabricating its charge of "cherry-picking."


*I noticed ages ago that the Obama administration produced a huge number of misleading infographics. Maybe PolitiFact fact checked one of them?

Correction Jan. 31, 2018: Inserted the needed word "check" between "fact" and "provides" in the fourth paragraph from the end.

Thursday, January 25, 2018

PolitiFact rubberstamps a claim from Nancy Pelosi

We say PolitiFact leans left and stinks at fact-checking.

We support our point with examples.

Here's another.

We admit at the outset that if House Minority Leader Nancy Pelosi's statement is true and leaves out nothing significant then it follows that our example does not show that PolitiFact leans left and stinks at fact-checking.

And we assert that our argument will permit no reasonable person to believe that Pelosi left out nothing of significance.

The key to PolitiFact's fact check comes straight from the Congressional Budget Office:
Why does CHIP save the government money? In short, it’s because the alternatives cost more.

According to CBO, "extending funding for CHIP for 10 years yields net savings to the federal government because the federal costs of the alternatives to providing coverage through CHIP (primarily Medicaid, subsidized coverage in the marketplaces, and employment-based insurance) are larger than the costs of providing coverage through CHIP during that period."
PolitiFact has CBO on its side. Game over? PolitiFact wins?

Here's the problem: The quotation of the CBO report is itself at odds with the CBO report.

On one hand, CBO says "federal costs" of CHIP alternatives come out higher than providing coverage through CHIP.

But CBO's explanatory chart tells a different story. It says that costs for CHIP alternatives through Medicaid and subsidized individual market coverage go down by $72.4 billion (red ovals). Over the same 10-year period, CHIP costs go up by $78.9 billion (black oval).

On the expense side, CHIP reauthorization increases costs by $6.9 billion.

The expense side isn't the only side for the CHIP bill.

"Employment-based insurance" accounts for $11.2 billion (black rectangle) in revenue over 10 years. That plus another $1.6 billion from the ACA marketplace brings the total revenue increase from CHIP reauthorization to $12.9 billion. The chart lists $4.6 billion as "off-budget," suggesting to us that the revenue may come from the off-budget Social Security program.

The $11.2 billion in revenue accounts directly for the $6 billion "savings" Pelosi touted (red circle), after accounting for the increased outlays.

Apparently the "savings" do not come from lower expenses at all. The "savings" come from taking $12.9 billion more for CHIP from taxpayers.
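The arithmetic above can be reconstructed in a few lines (illustrative only; the variable names are ours, and the dollar figures are the ones we read off the CBO chart):

```python
# Reconstructing the CHIP "savings" (illustrative; figures as cited from the CBO chart).
outlay_increase = 6.9     # $ billions: net 10-year increase in CHIP outlays
total_new_revenue = 12.9  # $ billions: employment-based insurance ($11.2B) plus ACA marketplace ($1.6B)
net_savings = total_new_revenue - outlay_increase
print(round(net_savings, 1))  # 6.0 -- the "savings" come from added revenue, not reduced spending
```

On these figures, the touted $6 billion falls out only once the added revenue is counted; the outlay side alone shows a cost increase.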

And Pelosi's statement told the whole truth with nothing significant left out? We don't buy it.

We think any competent journalist should have noticed this discrepancy and addressed it in the fact check.


We're not sure whether it counts as any kind of defense for PolitiFact fact checker Louis Jacobson, but his fact check did not directly cite the very clear CBO chart we used above. Jacobson cited a more complex chart (Table 3, Page 4) showing many billions in lost revenue from the suspension of a handful of ACA taxes, such as the medical device tax. Even so, how does a fact-checker miss the relevance of CBO's plain identification of increased revenue from CHIP reauthorization?

After Afters

Zebra Fact Check picks up the ball PolitiFact dropped by asking CBO to explain to the public the origins of CHIP revenue.

Tuesday, January 23, 2018

PolitiFact vs Ted Cruz

PolitiFact demonstrates the wrong way to fact check

When we criticize PolitiFact's subjective rating system, we often see responses like "I don't pay attention to the ratings."

We tend to respond that PolitiFact reasons poorly, offering fact check consumers yet another reason to avoid PolitiFact. PolitiFact's January 22, 2018 fact check of Sen. Ted Cruz (R-Texas) helps illustrate the point. The fact checkers use equivocal language and straw man argumentation to support their conclusion.

Cruz claimed he has consistently opposed government shutdowns, which PolitiFact contrasted to "popular belief":
With an end to the federal government shutdown in sight, Sen. Ted Cruz, R-Texas, tried to argue that, contrary to popular belief, he was not the driving force behind the previous government shutdown in 2013.
Should we excuse PolitiFact from supporting its claim that most think Cruz was the driving force behind the 2013 shutdown?

The 2018 shutdown originated in the Senate, which had a funding bill but no attempt to force cloture before the funding deadline. A cloture vote would have reportedly failed, meaning the Democrats had a modern filibuster going. The 2013 shutdown stemmed from disagreement between the GOP-controlled House and the Democratic-controlled Senate.

PolitiFact tells part of the story, sending a misleading message in the process (bold emphasis added):
Back in 2013, Cruz -- then a junior member of the Senate’s minority party -- had tried to end funding for the Affordable Care Act. He pushed for language to defund Obamacare in spending bills, which would have forced then-President Barack Obama to choose between keeping the government open and crippling his signature legislative achievement.

As the high-stakes legislative game played out, Obama and his fellow Democrats refused to agree to gut the law, and the Republicans, as a minority party, didn’t have the numbers to force their will. Following a 16-day shutdown, lawmakers voted to fund both the government and the Affordable Care Act.

Cruz was widely identified at the time as the leader of the defunding effort.
We have two types of defunding going on in PolitiFact's explanation. First, we have Cruz's effort to defund the ACA. Then we have general defunding of the government.

See what PolitiFact did there? PolitiFact asserts that most believe Cruz led the effort to defund the government, and slips in the line "Cruz was widely identified at the time as the leader of the defunding effort." Yes, Cruz was the leader, in the Senate, of the attempt to defund the ACA. But defunding the ACA is not the same thing as defunding the government.

PolitiFact then included a little tidbit about a Cruz speech on the Senate floor using portions of Dr. Seuss' "Green Eggs & Ham." So it was a Cruz filibuster? Maybe PolitiFact wants its readers to think it was a Cruz filibuster. But it wasn't.
“This is not a filibuster. This is an agreement that he and I made that he could talk,” (Senate Majority Leader Harry) Reid said Wednesday.
Is there any good excuse for a journalist to offer such a sketchy account of history?

What PolitiFact got right

PolitiFact was right when it reported that Cruz's proposal to defund the ACA would have forced President Obama to choose between signing a bill that undercut the ACA and allowing a government shutdown. It follows that Cruz was playing the politics of government shutdown, though his method placed the onus on Mr. Obama, and of the two options Cruz would plainly prefer defunding the ACA to defunding the entire government.

So even though Cruz's effort to defund the ACA turned out a dismal failure, the effort carried a silver lining for Cruz: Cruz never needed to advocate or support shutting down the government.

... and what PolitiFact got wrong

Given that Cruz voted against the funding bill that eventually ended the 2013 shutdown, PolitiFact had what it needed to show Cruz supporting a government shutdown at least in some form.

Instead, PolitiFact opted for a hilarious overreach comparable to Cruz's failed plan to defund the ACA.

PolitiFact took the route of trying to show Cruz supported the shutdown according to his own standard:
However, even if, for the sake of argument, you accept Cruz’s line of thinking, his hallway comments offered a very specific definition of determining whether a lawmaker had "consistently opposed shutdowns."

In fact, Cruz offered a very specific definition of something else, as we see when PolitiFact picks up its narrative (bold emphasis added):

Specifically, Cruz said that "only one thing causes a shutdown: when you have senators vote to deny cloture on a funding bill." Cloture refers to a Senate vote to cut off debate and proceed to a bill; it’s a prerequisite for considering a bill, and these days, it typically takes 60 votes.
PolitiFact's notion that Cruz defined whether a lawmaker has consistently opposed government shutdowns counts as a fantasy, not a fact check. But for the sake of argument, let us accept PolitiFact's line of thinking.

Cruz said a shutdown only occurs when senators vote to deny cloture on a funding bill. In context, his statement obviously means enough senators opposed cloture for the cloture motion to fail. Why? Because over 30 senators can vote to deny cloture and find themselves overruled by the others. And in that case, no shutdown results.

But understanding what Cruz said in context will not allow PolitiFact's argument to succeed. PolitiFact can only stick the hypocrisy tag on Cruz if voting against cloture on a funding bill counts as causing a shutdown regardless of the outcome of the vote.

That's crazy. But that's PolitiFact's argument:
So did Cruz ever "vote to deny cloture on a funding bill"?

He did.

It came on the legislation to end the 16-day shutdown -- a bill that didn’t include the Obamacare defunding language that he had been seeking. If this spending bill didn’t pass, the government wouldn’t be funded and would have to remain closed. As it happened, the bill passed by a large bipartisan majority, but Cruz was one of 16 senators to vote against cloture. He was also one of 18 to vote against the bill itself.
Regardless of whether Cruz ever supported a government shutdown, taking Cruz's statement out of context is not the way to make the argument. It's simply a fact that one can vote against cloture on principle apart from a filibuster strategy. Cruz has plausible deniability going for him.

Cruz was one of only 16 senators voting in opposition to cloture, and it could not be more obvious that such a vote does not meet Cruz's definition of what causes a shutdown. Sixteen senators voting against cloture cannot start a shutdown. Nor can they sustain a shutdown, as PolitiFact's own example resoundingly illustrates.
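The arithmetic behind that point is simple enough to sketch. The following is a minimal illustration (the helper function is hypothetical, assuming the current 100-seat Senate and the 60-vote cloture threshold the fact check itself cites):

```python
# Cloture in the 100-seat Senate currently requires 60 "yes" votes.
SENATE_SEATS = 100
CLOTURE_THRESHOLD = 60

def cloture_blocked(no_votes, seats=SENATE_SEATS, threshold=CLOTURE_THRESHOLD):
    """Return True only if the 'no' bloc is large enough to deny
    cloture the 60 yeas it needs, even if everyone else votes yes."""
    max_possible_yes = seats - no_votes
    return max_possible_yes < threshold

# Cruz's bloc of 16 could not block cloture: 84 senators remained
# to supply the needed 60 yeas.
print(cloture_blocked(16))  # False

# It takes at least 41 no-votes to leave fewer than 60 possible yeas.
print(cloture_blocked(41))  # True
```

By this arithmetic, a 16-vote bloc falls far short of the 41 votes needed to deny cloture, which is the blog's point: those 16 votes could neither cause nor prolong a shutdown under Cruz's stated definition.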

PolitiFact altered Cruz's argument in its fact-checking process.

These fact checkers stink at fact-checking.

Tuesday, January 16, 2018

PolitiFact goes partisan on the "deciding vote"

When does a politician cast the "deciding vote"?

PolitiFact apparently delivered the definitive statement on the issue on Oct. 6, 2010 with an article specifically titled "What makes a vote 'the deciding vote'?"

Every example of a "deciding vote" in that article received a rating of "Barely True" or worse (PolitiFact now calls "Barely True" by the name "Mostly False"). And each of the claims came from Republicans.

What happens when a similar claim comes from a Democrat? Now we know:

Okay, okay, okay. We have to consider the traditional defense: This case was different!

But before we start, we remind our readers that cases may prove trivially different from one another. It's not okay, for example, if the difference is that this time the claim came from a woman, or this time the case is from Florida rather than Georgia. Using trivial differences to justify a ruling represents the fallacy of special pleading.

No. We need a principled difference to justify the ruling. Not a trivial difference.

We'll need to look at the way PolitiFact justified its rulings.

First, the "Half True" for Democrat Gwen Graham:
Graham said DeSantis cast the "deciding vote against" the state's right to protect Florida waters from drilling.

There’s no question that DeSantis’ vote on an amendment to the Offshore Energy and Jobs Act was crucial, but saying DeSantis was the deciding vote goes too far. Technically, any of the 209 other people who voted against the bill could be considered the "deciding vote."

Furthermore, the significance of Grayson’s amendment is a subject of debate. Democrats saw it as securing Florida’s right to protect Florida waters, whereas Republicans say the amendment wouldn’t have changed the powers of the state.

With everything considered, we rate this claim Half True.
Second, the "Mostly False" for the National Republican Senatorial Committee (bold emphasis added):
The NRSC ad would have been quite justified in describing Bennet's vote for either bill as "crucial" or "necessary" to passage of either bill, or even as "a deciding vote." But we can't find any rationale for singling Bennet out as "the deciding vote" in either case. He made his support for the stimulus bill known early on and was not a holdout on either bill. To ignore that and the fact that other senators played a key role in completing the needed vote total for the health care bill, leaves out critical facts that would give a different impression from message conveyed by the ad. As a result, we rate the statement Barely True.
Third, the "False" for Republican Scott Bruun:
(W)e’ll be ridiculously lenient here and say that because the difference between the two sides was just one vote, any of the members voting to adjourn could be said to have cast the deciding vote.
The Bruun case doesn't help us much. PolitiFact said Bruun's charge about the "deciding" vote was true but only because its judgment was "ridiculously lenient." And the ridiculous lenience failed to get Bruun's rating higher than "False." So much for PolitiFact's principle of rating two parts of a claim separately and averaging the results.

Fourth, we look at the "Mostly False" rating for Republican Ron Johnson:
In a campaign mailer and other venues, Ron Johnson says Feingold supported a measure that cut more than $500 billion from Medicare. That makes it sound like money out of the Medicare budget today, when Medicare spending will actually increase over the next 10 years. What Johnson labels a cut is an attempt to slow the projected increase in spending by $500 billion. Under the plan, guaranteed benefits are not cut. In fact, some benefits are increased. Johnson can say Feingold was the deciding vote -- but so could 59 other people running against incumbents now or in the future.

We rate Johnson’s claim Barely True.
We know from earlier research that PolitiFact usually rated claims about the ACA cutting Medicare as "Mostly False." So this case doesn't tell us much, either. The final rating for the combined claims could end up "Mostly False" if PolitiFact considered the "deciding vote" portion "False" or "Half True." It would all depend on subjective rounding, we suppose.

Note that PolitiFact Florida cited "What makes a vote 'the deciding vote'?" for its rating of Gwen Graham. How does a non-partisan fact checker square Graham's "Half True" rating with the ratings given to Republicans? Why does the fact check not clearly describe the principle that made the difference for Graham's more favorable rating?

As far as we can tell, the key difference comes from party affiliation, once again suggesting that PolitiFact leans left.

After the page break we looked for other cases of the "deciding vote."