Tuesday, January 30, 2018

PolitiFact editor: "Tell me where the fact-check is wrong"

Ever notice how PolitiFact likes to paint its critics as folks who carp about whether the (subjective) Truth-O-Meter rating was correct?

PolitiFact Editor Angie Drobnic Holan gave us another stanza of that song-and-dance in a Jan. 26, 2018 interview with Digital Charlotte. Digital Charlotte's Stephanie Bunao asked Holan whether she sees a partisan difference in the email and commentary PolitiFact receives from readers.

Holan's response (bold emphasis added):
Well, we get, you know, nobody likes it when their team is being criticized, so we get mail a lot of different ways. I think obviously there's a kind of repeated slogan from the conservative side that when they see media reports they don't like, that it's liberal media or fake news. On the left, the criticism is a little different – like they accuse us of having false balance. You know, when we say the Democrats are wrong, they say, ‘Oh, you're only doing that to try to show that you're independent.’ I mean it gets really like a little bit mental, when people say why we're wrong. If they're not dealing with the evidence, my response is like, ‘Well you can say that we're biased all you want, but tell me where the fact-check is wrong. Tell me what evidence we got wrong. Tell me where our logic went wrong. Because I think that's a useful conversation to have about the actual report itself.
Let us count the ways Holan achieves disingenuousness, starting with the big one at the end:

1) "Tell me where the fact-check is wrong"

We've been doing that for years here at PolitiFact Bias, making our point in blog posts, emails and tweets. Our question for Holan? If you think that's a useful conversation to have, then why do you avoid having it? On Jan. 25, 2018, we sent Holan an email pointing out a factual problem with one of PolitiFact's fact checks. We received no reply. And on Jan. 26 she tells an interviewer that the conversation she won't have is a useful one?

2) "Every year in December we look at all the things that we fact-check, and we say, ‘What is the most significant lie we fact-checked this year’"

Huh? In 2013, PolitiFact worked hard to make the public believe it had chosen the president's Affordable Care Act promise that people would be able to keep plans they liked under the new health care law as its "Lie of the Year." But PolitiFact did not fact check the claim in 2013. PolitiFact Bias and others exposed PolitiFact's deception at the time, but PolitiFact keeps repeating it.

3) PolitiFact's "extreme transparency"

Asked how the media can regain public trust, Holan mentioned the use of transparency. We agree with her that far. But she used PolitiFact as an example of providing readers "extreme transparency."

That's a laugh.

Perhaps PolitiFact provides more transparency than the average mainstream media outlet, but does that equal "extreme transparency"? We say no. Extreme transparency means admitting one's politics (PolitiFail), publishing the texts of expert interviews (PolitiFail, except for PolitiFact Texas), revealing the "Truth-O-Meter" votes of the editorial "star chamber" (PolitiFail) and more.

PolitiFact practices above-average transparency, not "extreme transparency." And the media tend to deliver a poor degree of transparency.

We remain prepared to have that "useful conversation" about PolitiFact's errors of fact and research.

You let us know when you're ready, Angie Drobnic Holan.

Monday, January 29, 2018

PolitiFact masters the non sequitur

A non sequitur occurs when a conclusion does not follow from the premises offered to support it.

A Jan. 23, 2018 fact check by PolitiFact's Miriam Valverde offers ample evidence that PolitiFact has mastered the non sequitur.


Valverde's fact check concerned a claim from a White House infographic*:


PolitiFact looked at whether it was true that immigrants cost U.S. taxpayers $300 billion annually. The careful reader will have noticed that the White House infographic did not claim that immigrants cost U.S. taxpayers $300 billion annually. It made two distinct claims, first that unskilled immigrants create a net fiscal deficit and second that current immigration policy puts U.S. taxpayers on the hook for as much as $300 billion.

Isn't it wonderful when supposedly non-partisan fact checkers create straw men?

As for what the White House actually claimed, yes, the Washington Times reported there was one study--a thorough one--that said current immigration policy costs U.S. taxpayers as much as $296 billion annually.

After looking for the figure in the study, we do not know its precise origin. Apparently PolitiFact also failed to find it and, after mentioning the Times' report, proceeded to use the study's figure of $279 billion for 2013. That figure was for the first of eight scenarios.

Was the $296 billion number an inflation adjustment? A population increase adjustment? A mistake? A figure representing one of the other groups? We don't know. But if the correct figure is $279 billion, $300 billion represents a liberal-but-common method of rounding. It could also qualify as an exaggeration of 8 percent.
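
For readers who want to check that arithmetic, here is a minimal sketch in Python using only the figures discussed above (dollar amounts in billions; the percentages are our calculations, not PolitiFact's):

# Figures discussed above, in billions of dollars
study_figure = 279        # scenario figure PolitiFact used for 2013
times_figure = 296        # figure reported by the Washington Times
infographic_figure = 300  # "as much as" figure on the White House infographic

def exaggeration(claimed, base):
    # Percentage by which the claimed figure exceeds the base figure
    return (claimed - base) / base * 100

print(round(exaggeration(infographic_figure, study_figure), 1))  # 7.5 -- rounds to the 8 percent noted above
print(round(exaggeration(infographic_figure, times_figure), 1))  # 1.4 -- trivial if the Times figure is the source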

What problem does PolitiFact find with the infographic (bold emphasis added)?
A consultant who contributed to the report told us that in 2013 the total fiscal burden -- average outlays minus average receipts multipled [sic] by 55.5 million individuals -- was $279 billion for the first generation of immigrants. But making a conclusion on that one figure is a mighty case of cherry-picking.
What?

What conclusion did the infographic draw that represents cherry picking?

That line from Valverde does not belong in a fact check without a clear example of the faulty cherry picking. And in this case there isn't one. The fact check provides more information about the report, including some positives regarding immigrant populations (especially second-generation immigrants), but ultimately finds no concrete fault with the infographic.

PolitiFact's charge of cherry picking doesn't follow.

And PolitiFact's conclusion?
Our ruling

The White House claimed that "current immigration policy imposes as much as $300 billion annually in net fiscal costs on U.S. taxpayers."

A study from the National Academies of Sciences, Engineering, and Medicine analyzed the fiscal impact of immigration under different scenarios. Under some assumptions, the fiscal burden was $279 billion, but $43 billion in other scenarios.

The report also found that U.S.-born children with at least one foreign-born parent are among the strongest economic and fiscal contributors, thanks in part to the spending by local governments on their education.

The statement is partially accurate but leaves out important details. We rate it Half True.
In the second paragraph PolitiFact says the fiscal burden amounted to $43 billion "in other scenarios." Put correctly, one scenario put the figure at $279 billion and two scenarios may have put the figure at $43 billion because the scenarios were nearly identical. The study looked at a total of eight scenarios, found here. It appears to us that scenario four may serve as the source of the $296 billion figure reported in the Washington Times.

Our conclusion? PolitiFact's fact check provides a left-leaning picture of the context of the Trump White House infographic. The infographic is accurate. It plainly states that it is picking out a high-end figure. It states it relies on one study for the figure.

The infographic, in short, alerts readers to the potential problems with the figure it uses.

That said, the $300 billion figure serves as a pretty good example of appealing to the audience's anchoring bias. Mentioning "$300 billion" predisposes the audience toward believing a similarly high figure regardless of other evidence. That's a legitimate gripe about the infographic, though one PolitiFact neglected to point out while fabricating its charge of "cherry-picking."


Afters

*I noticed ages ago that the Obama administration produced a huge number of misleading infographics. Maybe PolitiFact fact checked one of them?



Correction Jan. 31, 2018: Inserted the needed word "check" between "fact" and "provides" in the fourth paragraph from the end.

Thursday, January 25, 2018

PolitiFact rubberstamps a claim from Nancy Pelosi

We say PolitiFact leans left and stinks at fact-checking.

We support our point with examples.

Here's another.


We admit at the outset that if House Minority Leader Nancy Pelosi's statement is true and leaves out nothing significant then it follows that our example does not show that PolitiFact leans left and stinks at fact-checking.

And we assert that our argument will permit no reasonable person to believe that Pelosi left out nothing of significance.

The key to PolitiFact's fact check comes straight from the Congressional Budget Office:
Why does CHIP save the government money? In short, it’s because the alternatives cost more.

According to CBO, "extending funding for CHIP for 10 years yields net savings to the federal government because the federal costs of the alternatives to providing coverage through CHIP (primarily Medicaid, subsidized coverage in the marketplaces, and employment-based insurance) are larger than the costs of providing coverage through CHIP during that period."
 PolitiFact has CBO on its side. Game over? PolitiFact wins?

Here's the problem: The quotation of the CBO report is itself at odds with the CBO report.

On one hand, CBO says "federal costs" of CHIP alternatives come out higher than providing coverage through CHIP.

But CBO's explanatory chart tells a different story. It says that costs for CHIP alternatives through Medicaid and subsidized individual market coverage go down by $72.4 billion (red ovals). Over the same 10-year period, CHIP costs go up by $78.9 billion (black oval).

On the expense side, CHIP reauthorization increases costs by $6.9 billion.


The expense side isn't the only side for the CHIP bill.

"Employment-based insurance" accounts for $11.2 billion (black rectangle) in revenue over 10 years. That plus another $1.6 billion from the ACA marketplace brings the total revenue increase from CHIP reauthorization to $12.9 billion. The chart lists $4.6 billion as "off-budget," suggesting to us that the revenue may come from the off-budget Social Security program.

The $12.9 billion in added revenue, less the $6.9 billion in increased outlays, accounts almost exactly for the $6 billion in "savings" Pelosi touted (red circle).

Apparently the "savings" do not come from lower expenses at all. The "savings" come from taking $12.9 billion more for CHIP from taxpayers.
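
Here is a minimal sketch, in Python, of the arithmetic behind that conclusion, using only the CBO chart figures cited above (all amounts in billions over 10 years):

# CBO chart figures cited above, in billions over 10 years
chip_outlay_increase = 78.9          # CHIP spending goes up
alternatives_outlay_decrease = 72.4  # Medicaid/marketplace spending goes down
net_outlay_increase = 6.9            # net increase on the expense side, per the chart

employment_based_revenue = 11.2      # revenue tied to employment-based insurance
marketplace_revenue = 1.6            # revenue tied to the ACA marketplace
total_revenue_increase = 12.9        # total revenue increase, per the chart

net_savings = total_revenue_increase - net_outlay_increase
print(round(net_savings, 1))  # 6.0 -- the "savings" come from added revenue, not lower spending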

And Pelosi's statement told the whole truth with nothing significant left out? We don't buy it.

We think any competent journalist should have noticed this discrepancy and addressed it in the fact check.


Afters

We're not sure whether it counts as any kind of defense for PolitiFact fact checker Louis Jacobson, but his fact check did not directly cite the very clear CBO chart we used above. Jacobson cited a more complex chart (Table 3, Page 4) showing many billions in lost revenue from the suspension of a handful of ACA taxes such as the medical device tax. Even so, how does a fact-checker miss the relevance of CBO's plain identification of increased revenue from CHIP reauthorization?

After Afters

Zebra Fact Check picks up the ball PolitiFact dropped by asking CBO to explain to the public the origins of CHIP revenue.


Tuesday, January 23, 2018

PolitiFact vs Ted Cruz

PolitiFact demonstrates the wrong way to fact check


When we criticize PolitiFact's subjective rating system, we often see responses like "I don't pay attention to the ratings."

We tend to respond that PolitiFact reasons poorly, offering fact check consumers yet another reason to avoid PolitiFact. PolitiFact's January 22, 2018 fact check of Sen. Ted Cruz (R-Texas) helps illustrate the point. The fact checkers use equivocal language and straw man argumentation to support their conclusion.


Cruz claimed he has consistently opposed government shutdowns, which PolitiFact contrasted to "popular belief":
With an end to the federal government shutdown in sight, Sen. Ted Cruz, R-Texas, tried to argue that, contrary to popular belief, he was not the driving force behind the previous government shutdown in 2013.
Should we excuse PolitiFact from supporting its claim that most think Cruz was the driving force behind the 2013 shutdown?

The 2018 shutdown originated in the Senate, which had a funding bill but no attempt to force cloture before the funding deadline. A cloture vote would have reportedly failed, meaning the Democrats had a modern filibuster going. The 2013 shutdown stemmed from disagreement between the GOP-controlled House and the Democratic-controlled Senate.

PolitiFact tells part of the story, sending a misleading message in the process (bold emphasis added):
Back in 2013, Cruz -- then a junior member of the Senate’s minority party -- had tried to end funding for the Affordable Care Act. He pushed for language to defund Obamacare in spending bills, which would have forced then-President Barack Obama to choose between keeping the government open and crippling his signature legislative achievement.

As the high-stakes legislative game played out, Obama and his fellow Democrats refused to agree to gut the law, and the Republicans, as a minority party, didn’t have the numbers to force their will. Following a 16-day shutdown, lawmakers voted to fund both the government and the Affordable Care Act.

Cruz was widely identified at the time as the leader of the defunding effort.
We have two types of defunding going on in PolitiFact's explanation. First, we have Cruz's effort to defund the ACA. Then we have general defunding of the government.

See what PolitiFact did there? PolitiFact asserts that most believe Cruz led the effort to defund the government, and slips in the line "Cruz was widely identified at the time as the leader of the defunding effort." Yes, Cruz was the leader, in the Senate, of the attempt to defund the ACA. But defunding the ACA is not the same thing as defunding the government.

PolitiFact then included a little tidbit about a Cruz speech on the Senate floor using portions of Dr. Seuss' "Green Eggs & Ham." So it was a Cruz filibuster? Maybe PolitiFact wants its readers to think it was a Cruz filibuster. But it wasn't.
“This is not a filibuster. This is an agreement that he and I made that he could talk,” (Senate Majority Leader Harry) Reid said Wednesday.
Is there any good excuse for a journalist to offer such a sketchy account of history?

What PolitiFact got right

PolitiFact was right when it reported that Cruz's proposal to defund the ACA would have forced President Obama to choose between signing a bill that undercut the ACA and allowing a government shutdown. It follows that Cruz was playing the politics of government shutdown, though his method placed the onus on Mr. Obama, and of the two options Cruz would plainly prefer defunding the ACA to defunding the entire government.

So even though Cruz's effort to defund the ACA turned out a dismal failure, the effort carried a silver lining for him: Cruz never needed to advocate or support shutting down the government.

... and what PolitiFact got wrong

Given that Cruz voted against the funding bill that eventually ended the 2013 shutdown, PolitiFact had what it needed to show Cruz supporting a government shutdown at least in some form.

Instead, PolitiFact opted for a hilarious overreach comparable to Cruz's failed plan to defund the ACA.

PolitiFact took the route of trying to show Cruz supported the shutdown according to his own standard:
However, even if, for the sake of argument, you accept Cruz’s line of thinking, his hallway comments offered a very specific definition of determining whether a lawmaker had "consistently opposed shutdowns."

In fact, Cruz offered a very specific definition of something else, as we see when PolitiFact picks up its narrative (bold emphasis added):

Specifically, Cruz said that "only one thing causes a shutdown: when you have senators vote to deny cloture on a funding bill." Cloture refers to a Senate vote to cut off debate and proceed to a bill; it’s a prerequisite for considering a bill, and these days, it typically takes 60 votes.
PolitiFact's notion that Cruz offered a definition of what it means for a lawmaker to have consistently opposed government shutdowns counts as a fantasy, not a fact check. But for the sake of argument, let us accept PolitiFact's line of thinking.

Cruz said a shutdown only occurs when senators vote to deny cloture on a funding bill. In context, his statement obviously means that enough senators opposed cloture for the cloture motion to fail. Why? Because with the usual 60-vote threshold, up to 40 of 100 senators can vote to deny cloture and still find themselves overruled by the others. And in that case, no shutdown results.

But understanding what Cruz said in context will not allow PolitiFact's argument to succeed. PolitiFact can only stick the hypocrisy tag on Cruz if voting against cloture on a funding bill counts as causing a shutdown regardless of the outcome of the vote.

That's crazy. But that's PolitiFact's argument:
So did Cruz ever "vote to deny cloture on a funding bill"?

He did.

It came on the legislation to end the 16-day shutdown -- a bill that didn’t include the Obamacare defunding language that he had been seeking. If this spending bill didn’t pass, the government wouldn’t be funded and would have to remain closed. As it happened, the bill passed by a large bipartisan majority, but Cruz was one of 16 senators to vote against cloture. He was also one of 18 to vote against the bill itself.
Regardless of whether Cruz ever supported a government shutdown, taking Cruz's statement out of context is not the way to make the argument. It's simply a fact that one can vote against cloture on principle apart from a filibuster strategy. Cruz has plausible deniability going for him.

Cruz was one of only 16 senators voting against cloture, and it could not be more obvious that such a vote does not meet Cruz's definition of what causes a shutdown. Sixteen senators voting against cloture cannot start a shutdown. Nor can they sustain one, as PolitiFact's example resoundingly illustrates.
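
To make the vote arithmetic explicit, here is a minimal sketch in Python. It assumes the 60-vote cloture threshold mentioned in PolitiFact's own fact check and, for simplicity, that all 100 senators vote:

CLOTURE_THRESHOLD = 60  # votes needed to invoke cloture on the bill
SENATORS_VOTING = 100   # simplifying assumption: no absences or vacancies

def cloture_passes(no_votes):
    # Cloture succeeds unless enough senators vote no to drop the "yes" count below 60
    return SENATORS_VOTING - no_votes >= CLOTURE_THRESHOLD

print(cloture_passes(16))  # True -- 16 "no" votes cannot block cloture, let alone cause a shutdown
print(cloture_passes(41))  # False -- it takes at least 41 "no" votes to deny cloture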

PolitiFact altered Cruz's argument in its fact-checking process.

These fact checkers stink at fact-checking.

Tuesday, January 16, 2018

PolitiFact goes partisan on the "deciding vote"

When does a politician cast the "deciding vote"?

PolitiFact apparently delivered the definitive statement on the issue on Oct. 6, 2010 with an article specifically titled "What makes a vote 'the deciding vote'?"

Every example of a "deciding vote" in that article received a rating of "Barely True" or worse (PolitiFact now calls "Barely True" by the name "Mostly False"). And each of the claims came from Republicans.

What happens when a similar claim comes from a Democrat? Now we know:


Okay, okay, okay. We have to consider the traditional defense: This case was different!

But before we start, we remind our readers that cases may prove trivially different from one another. It's not okay, for example, if the difference is that this time the claim comes from a woman, or this time the case is from Florida, not Georgia. Using trivial differences to justify the ruling represents the fallacy of special pleading.

No. We need a principled difference to justify the ruling. Not a trivial difference.

We'll need to look at the way PolitiFact justified its rulings.

First, the "Half True" for Democrat Gwen Graham:
Graham said DeSantis casted the "deciding vote against" the state's right to protect Florida waters from drilling.

There’s no question that DeSantis’ vote on an amendment to the Offshore Energy and Jobs Act was crucial, but saying DeSantis was the deciding vote goes too far. Technically, any of the 209 other people who voted against the bill could be considered the "deciding vote."

Furthermore, the significance of Grayson’s amendment is a subject of debate. Democrats saw it as securing Florida’s right to protect Florida waters, whereas Republicans say the amendment wouldn’t have changed the powers of the state.

With everything considered, we rate this claim Half True.
Second, the "Mostly False" for the National Republican Senatorial Committee (bold emphasis added):
The NRSC ad would have been quite justified in describing Bennet's vote for either bill as "crucial" or "necessary" to passage of either bill, or even as "a deciding vote." But we can't find any rationale for singling Bennet out as "the deciding vote" in either case. He made his support for the stimulus bill known early on and was not a holdout on either bill. To ignore that and the fact that other senators played a key role in completing the needed vote total for the health care bill, leaves out critical facts that would give a different impression from message conveyed by the ad. As a result, we rate the statement Barely True.
Third, the "False" for Republican Scott Bruun:
(W)e’ll be ridiculously lenient here and say that because the difference between the two sides was just one vote, any of the members voting to adjourn could be said to have cast the deciding vote.
The Bruun case doesn't help us much. PolitiFact said Bruun's charge about the "deciding" vote was true but only because its judgment was "ridiculously lenient." And the ridiculous lenience failed to get Bruun's rating higher than "False."  So much for PolitiFact's principle of rating two parts of a claim separately and averaging the results.

Fourth, we look at the "Mostly False" rating for Republican Ron Johnson:
In a campaign mailer and other venues, Ron Johnson says Feingold supported a measure that cut more than $500 billion from Medicare. That makes it sound like money out of the Medicare budget today, when Medicare spending will actually increase over the next 10 years. What Johnson labels a cut is an attempt to slow the projected increase in spending by $500 billion. Under the plan, guaranteed benefits are not cut. In fact, some benefits are increased. Johnson can say Feingold was the deciding vote -- but so could 59 other people running against incumbents now or in the future.

We rate Johnson’s claim Barely True.
We know from earlier research that PolitiFact usually rated claims about the ACA cutting Medicare as "Mostly False." So this case doesn't tell us much, either. The final rating for the combined claims could end up "Mostly False" if PolitiFact considered the "deciding vote" portion "False" or "Half True." It would all depend on subjective rounding, we suppose.

Note that PolitiFact Florida cited "What makes a vote 'the deciding vote'?" for its rating of Gwen Graham. How does a non-partisan fact checker square Graham's "Half True" rating with the ratings given to Republicans? Why does the fact check not clearly describe the principle that made the difference for Graham's more favorable rating?

As far as we can tell, the key difference comes from party affiliation, once again suggesting that PolitiFact leans left.


After the page break we looked for other cases of the "deciding vote."

Thursday, January 11, 2018

PolitiFact Texas transparency update

In a post we titled "PolitiFact Texas unpublished" we described how PolitiFact Texas published an article declaring Gov. Greg Abbott broke his promise to provide leadership training for principals of Texas schools. PolitiFact Texas published that article on Jan. 4, 2018. On Jan. 9, 2018, PolitiFact Texas announced it had taken down (unpublished) the article.


We checked and the article was gone from the PolitiFact Texas website. It was still available, and remains available, at the website of the Austin American-Statesman, one of PolitiFact's partners for PolitiFact Texas.

And on Jan. 11, 2018, PolitiFact republished the old version of the story along with an updated version on the same page. PolitiFact Texas' actions make the temporarily unpublished version of the story look like a normal update, though the careful reader may notice that the second update followed just one week after the first.

We think that's goofy.

Why do we think it's goofy? If the unpublished version of the update was just a normal update later updated with new information, then why unpublish it in the first place? It makes no sense.

In its current form, the story offers readers no hint at all that PolitiFact Texas unpublished the story for two days. Instead, PolitiFact Texas offers its readers this explanation:
Back when he was campaigning to be governor of Texas, Greg Abbott called for training school principals to be better leaders.

Legislative proposals to get such trainings off the ground floundered, however, leading us to rate this Abbott vow a Promise Broken.

But that was before the Texas Education Agency alerted us to other efforts focused on bolstering school leaders.

We decided to look afresh at progress on this promise.
Does PolitiFact Texas downplay the timing of its two updates or what? PolitiFact Texas does mention the date in an editor's note (not labeled as an editor's note) as part of an introduction to the earlier update:
The Abbott-O-Meter update below was posted Jan. 4, 2018. It's eclipsed by the update above:
If it's in italics maybe it automatically counts as an editor's note?

The finished product shows PolitiFact Texas rating a fulfilled promise as broken about a week before reversing itself to declare the promise fulfilled, with no admission of any error and an incomplete description of what occurred.

PolitiFact thought it had erred, but was mistaken to think so?

We don't quite buy that.

Tuesday, January 9, 2018

PolitiFact Texas unpublished (Updated)

It seems like only yesterday we were praising PolitiFact Texas and W. Gardner Selby for taking an important step toward full transparency by making their interviews of experts available to readers.

Now it has come to our attention that PolitiFact Texas has followed PolitiFact National's lead in unpublishing stories when it decides they are defective.

A publisher may have legitimate reasons for unpublishing a story. But in the interest of transparency organizations should not totally remove the defective work from public view. Organizations should archive the story and keep it available before and after the organization puts the needed changes into effect.

Lately PolitiFact disappears the entire story and only posts a link to the archived version after republishing a reworked version.

If there's a good excuse for that doughnut hole in transparency we are not aware of it.


We also disapprove of PolitiFact only communicating its decision to unpublish the item on Twitter. That's transparency only for the Twitterverse. Readers deserve better than that.



Update Jan. 10, 2018

We found what is apparently the original version of the Abbott-O-Meter ruling and archived it at the Internet Archive.

Facebook comments show the dire need for the PolitiFact Bias website

Over the past few days, we received a number of comments on our Facebook page that help show the dire need for our work.

We are not identifying the person by name, though we expect it's easy to find on our page. It's a public page and the comments were posted in response to our public posts on our page. In short, it's public.

We discourage any attempt to harass this person or make contact with them against their wishes.

Beyond that, we offer thanks for the comments because we can use them to help educate others. We're using quotation marks but correcting errors without making them obvious. So the quotations are not always verbatim.

We're spotlighting these comments because they are so typical of our critics.


Saturday, January 6, 2018

More "True But False" fact-checking from PolitiFact

PolitiFact has always had a tendency to skew its fact-checking in favor of liberals and Democrats. But with speak-from-the-hip President Trump in the White House, PolitiFact has let its true blue colors show like perhaps never before.

A Jan. 5, 2018 fact check from PolitiFact's John Kruzel rates two true statements from President Trump "False." Why would a fact checker rate two true statements "False"? That's a good question. And it's one the fact check doesn't really answer. But it's worth fisking the fact check for whatever nods it makes toward justifying its conclusions.

Framing the fact check

 

President Trump tweeted that he had not authorized any White House access for Michael Wolff, the author of the book "Fire and Fury," and that he had not spoken to Wolff for the book.


Right off the bat, PolitiFact frames Trump's claim as a denial that Wolff had access to the White House. With the right evidence, PolitiFact might have a case for interpreting Trump's statement that way. But pending that justification, PolitiFact leads with a red flag hinting that it is more interested in its own interpretation of Trump's words than in the words Trump used.

If Trump had meant to indicate Wolff had no access at all to the White House, he could have tweeted that in under 140 characters. Like so:
Wolff had Zero access to White House. I never spoke to him. Liar! Sad!!!!!!!!!!!!!!!!
See? Under 90 characters, including the multiple exclamation points.

Most people understand that when a writer or speaker burdens a potentially simple statement with more words, the extra words are supposed to mean something. For example, if somebody says "I never spoke to Wolff for book" and not "I never spoke to Wolff," it strongly hints that the speaker spoke to Wolff but not for the book.

Can PolitiFact explain away the importance of all those words Trump used?

Leading with the fact checker's opinion

From the first, we have said PolitiFact mixes its opinion in with its supposedly objective reporting. PolitiFact and Kruzel put opinion high in the mix in the introduction to the story (bold emphasis added):
The Trump administration has scrambled to control damaging headlines based on Michael Wolff’s Fire and Fury: Inside the Trump White House, which was rushed to shelves Jan. 5 over threats from President Donald Trump’s attorneys.

For his part, Trump sought to undermine Wolff’s credibility by calling into question the author’s access to the administration’s highest levels.
Is Kruzel an objective reporter or a prosecuting attorney telling the jury that the accused has a motive?

Kruzel dedicates his first two paragraphs to the creation of a narrative based on Trump's desire to attack Wolff's credibility. As we proceed, we should stay alert for cues Kruzel might offer the reader about Wolff's credibility. Will Kruzel allow any indication that Wolff deserves skepticism? Or perhaps present Wolff as credible by default?

Dissecting Trump's tweet or ignoring what it says?

Trump's tweet:
Kruzel comments:
We decided to dissect Trump’s tweet by sifting through what’s known about Wolff’s White House access. We can’t know everything that goes on behind the scenes, but even the public record shows that Trump’s statement is inaccurate.
This had better be good, given that the headline offers a skewed impression of Trump's tweet.

Kruzel defeats a straw man

PolitiFact and Kruzel deal first with the issue of White House access. Whereas Trump said he authorized no access for Wolff, PolitiFact creates a straw contradiction by pointing out some might believe Trump was saying Wolff had no access to the White House at all.

How we wish we were kidding (though this is by no means a first for PolitiFact):
Wolff’s access to the White House

Trump’s tweet could give the impression that Wolff was denied access to the White House entirely. But as Trump’s own press secretary has acknowledged, the author had more than a dozen interactions with administration officials at 1600 Pennsylvania Avenue.
What if, instead of fact-checking people's false impressions, fact checkers explained to people the correct impression? But that's not PolitiFact's way. PolitiFact dedicates its fact check to showing that a misinterpreted version of the claim is false, and treats that as showing Trump wasn't telling the truth.

Kruzel concludes the first section:
So, while it may be the case that Trump did not personally grant Wolff access, his own press secretary says the author had access to administration officials at the White House.
Our summary so far:
  1. PolitiFact finds "it may be the case" that Trump did not authorize Wolff's access to the White House (as Trump said)
  2. No indication from PolitiFact that Wolff should be regarded as anything other than reliable
  3. Proof that the misinterpreted version of Trump's statement is false (straw man defeated)

Kruzel defeats another straw man

With the first straw man defeated, PolitiFact and Kruzel deal with the burning question of whether Trump spoke to Wolff at all.

Yes, you read that correctly. The fact check focuses on whether Trump spoke to Wolff, not on whether Trump spoke to Wolff "for book."
Did Wolff and Trump talk?

To the casual reader, Trump’s tweet could give the impression that he and Wolff never spoke — but that’s far from the case.
Never fear, casual reader! PolitiFact is here for you as it is for no other type of reader. And if PolitiFact has to create and destroy a straw man or two to keep from helping you improve your reading comprehension, then so be it.

Kruzel follows immediately with his conclusion (explaining the details behind the defeat of the straw man afterward):
While it may be the case that Trump never talked to Wolff with the express understanding that their discussion would later be incorporated into a book, the two men certainly spoke, though the length and nature of their conversations is not entirely clear.
And we review and add to our summary:
  1. PolitiFact finds "it may be the case" that Trump did not authorize Wolff's access to the White House (as Trump said)
  2. Still no indication from PolitiFact that Wolff should be regarded as anything other than reliable
  3. PolitiFact proves that the misinterpreted version of Trump's first claim is false (first straw man defeated)
  4. PolitiFact finds "it may be the case" that Trump did not talk to Wolff for the book (as Trump said)
  5. PolitiFact proves that the misinterpreted version of Trump's second claim is false (second straw man defeated)

Whatever one thinks of Trump, that's awful fact-checking

Trump made two claims that were apparently true according to PolitiFact's investigation, but because casual readers might think Trump meant something other than what he plainly said, PolitiFact rated the statements "False."


That approach to fact-checking could make virtually any statement false.

Is Wolff reliable? Who cares? PolitiFact is interested in Trump's supposed unreliability.

This PolitiFact fact check ought to serve as a classic example of what to avoid in fact-checking. Instead, PolitiFact's chief editor Angie Drobnic Holan edited the piece. And a PolitiFact "star chamber" of at least three editors reviewed the story and decided on the rating without seeing anything amiss with what they were doing.

Welcome to the "True but False" genre of fact-checking.

You can't trust these fact checkers.

Thursday, January 4, 2018

No Underlying Point For You!

PolitiFact grants Trump no underlying point on his claim about the GOP lock on a senate seat



The NBC sitcom "Seinfeld" featured an episode focused in part on the "Soup Nazi." The "Soup Nazi" was the proprietor of a neighborhood soup shop who would refuse service in response to minor breaches of etiquette, often with a shouted "No soup for you!"

PolitiFact's occasional refusal to allow for the validity of an underlying point reminds us of the "Soup Nazi," and it gives rise to our new series of posts recognizing PolitiFact's failures to credit underlying points.

PolitiFact's statement of principles assures readers that it takes a speaker's underlying point into account (bold emphasis added):
We examine the claim in the full context, the comments made before and after it, the question that prompted it, and the point the person was trying to make.
We see credit for the speaker's underlying point on full display in this Feb. 14, 2017 rating of Sen. Bernie Sanders, the former candidate for the Democratic presidential nomination (bold emphasis added):
Sanders said, "Before the Affordable Care Act, (West Virginia’s) uninsured rate for people 64 to 19 was 29 percent. Today, it is 9 percent."

Sanders pointed to one federal measurement, though it has methodological problems when drilling down to the statistics for smaller states. A more reliable data set for West Virginia’s case showed a decline from 21 percent to 9 percent. The decline was not as dramatic as he’d indicated, but it was still a significant one.

We rate the statement Mostly True.
Sanders' point was the decline in the uninsured rate owing to the Affordable Care Act, and we see two ways to measure the degree of his error. Sanders used the wrong baseline for his calculation, 29 percent instead of 21 percent. That represents a 38 percent exaggeration. Or we can look at the difference in the change from that baseline to reach Sanders' (accurate) 9 percent figure. That calculation results in a percentage error of 67 percent.

PolitiFact, despite an error of at least 38 percent, gave Sanders a "Mostly True" rating because Sanders was right that a decline took place.

For comparison, Donald Trump tweeted that former associate Steve Bannon helped lose a senate seat Republicans had held for over 30 years. In fact, the GOP had held the seat for a mere 21 years. Using 31 years as a minimal reading of "more than 30 years," Trump exaggerated by about 48 percent. And PolitiFact rated his claim "False":
Trump said the Senate seat won by Jones had been "held for more than thirty years by Republicans." It hasn’t been that long. It’s been 21 years since Democrat Howell Heflin retired, paving the way for his successor, Sessions, and Sessions’ elected successor, Jones. We rate the statement False.
Can the roughly 10 percentage point difference by itself move the needle from "Mostly True" to "False"?

Was Trump making the point that the GOP had controlled that senate seat for a long time? That seems undeniable. Is 21 years a long time to control a senate seat? That likewise appears undeniable. Yet Trump's underlying point, in contrast to Sanders', was apparently a complete non-factor when PolitiFact chose its rating.
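
Here is a minimal sketch, in Python, of the two exaggeration calculations described above, applying the same method to Sanders and to Trump:

def exaggeration(claimed, actual):
    # Percentage by which the claimed number exceeds the actual number
    return (claimed - actual) / actual * 100

# Sanders: claimed a 29 percent baseline; the more reliable figure was 21 percent
print(round(exaggeration(29, 21)))          # 38 -- exaggeration of the baseline
print(round(exaggeration(29 - 9, 21 - 9)))  # 67 -- exaggeration of the claimed decline

# Trump: "more than thirty years" (read as at least 31) versus the actual 21 years
print(round(exaggeration(31, 21)))          # 48 -- exaggeration of the seat's GOP tenure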

We say that inconsistency is a bad look for a non-partisan fact checker.

On the other hand, we might predict this type of inconsistency from a partisan fact checker.

Wednesday, January 3, 2018

'(PolitiFact's) rulings are based on when a statement was made and on the information available at that time'

PolitiFact Texas issued a "False" rating to Gov. Greg Abbott on Nov. 16, 2017, finding it "False" that Texas had experienced its lowest unemployment rate in 40 years.

PolitiFact Texas was also rating Abbott's claim that Texas led the nation last month (September?) in job creation. But we will focus on the first part of the claim, for that rating centers on PolitiFact's principle that it bases its rulings on the timing of a statement:
Timing – Our rulings are based on when a statement was made and on the information available at that time.
Our interest in this item was piqued when we found it linked at PolitiFact's "Corrections and updates" page. We went looking for the correction and found this:
UPDATE, Nov. 17, 2017: Two days after Abbott tweeted his claim about the Texas jobless rate, the federal government reported that the state had a 41-year record low 3.9 percent jobless rate in October 2017.
The release of government statistics confirmed the accuracy of Abbott's claim if he was talking about October.

PolitiFact Texas' update reminded us of a PolitiFact rating from March 18, 2011. Actor Anne Hathaway said the majority of Americans support gay marriage. PolitiFact rated her claim "Mostly True" based on polling released after Hathaway made her claim. Note how PolitiFact foreshadowed its unprincipled decision (bold emphasis added):
(P)ublic opinion on gay marriage is shifting quickly. How quickly? Let's just say we're glad we waited a day to publish our item.
I covered PolitiFact's failure to follow its principles back when the incident happened. But in this case PolitiFact was consistent with its principles.

Or was it?

What information was available at the time?

When Hathaway made her claim, no poll unequivocally supported it, and we had no reason to think the actor was in any position to have insider pre-publication information about new polling. But upon reading PolitiFact Texas' fact check of Abbott, we were left wondering whether Abbott might have known the government numbers before they were released to the public.

PolitiFact Texas did not address that issue, noting simply that the unemployment rates for October had not yet been released. We infer that PolitiFact Texas presumed the BLS statistics were not available to government leaders in Texas. As for us, we had no idea whether the BLS made statistics available to state governments but thought the question worth exploring.

Our search quickly led us to a Nov. 17, 2017 article at the Austin American-Statesman. That's the same Austin American-Statesman that has long partnered with PolitiFact to publish content for PolitiFact Texas.

The article, by Dan Zehr, answered our question:
It’s common and appropriate for state workforce commissions to share “pre-release” data with governors’ offices and other officials, said Cheryl Abbot, regional economist at the Bureau of Labor Statistics Southwest regional office. However, she said, the bureau considers the data confidential until their official release.
Zehr's article focused on a dilemma: Was Abbott talking about the October numbers (making him guilty of breaching confidentiality), or was he just wrong based on the number from September 2017? Zehr reported the governor's office denied that Abbott was privy to the October numbers before their official release.

We think Zehr did work that PolitiFact Texas should have either duplicated or referenced. PolitiFact Texas apparently failed to rule out the possibility that Abbott referred to the official October numbers based on the routine early sharing of such information with state government officials.

For the sake of argument, let's assume Abbott's office told Zehr the truth

PolitiFact Texas' fact check based its rating on the assumption Abbott referred to unemployment numbers for September 2017. That agrees with Zehr's reporting on what the governor's office said it was talking about.

If Abbott was talking about the September 2017 numbers, was his statement false, as PolitiFact Texas declared?

Let's review what Abbott said.

PolitiFact (bold emphasis added):
It’s commonplace for a governor to tout a state’s economy. Still, Greg Abbott of Texas made us wonder when he tweeted in mid-November 2017: "The Texas unemployment rate is now the lowest it’s been in 40 years & Texas led the nation last month in new job creation."
And let's review what PolitiFact found from the Bureau of Labor Statistics:
(W)e fetched Bureau of Labor Statistics figures showing that the state’s impressive 4 percent jobless rate for September 2017 tied the previous record low since 1976. According to the bureau, the state similarly had a 4 percent unemployment rate in November and December 2000, 17 years ago. The state jobless rate in fall 1977, 40 years ago, hovered at 5.2 percent.
According to PolitiFact's research, what is the lowest unemployment rate in Texas over the past 40 years? The answer is 4 percent. That percentage occurred three times over the 40-year span, including September 2017. But by PolitiFact Texas' reasoning (and Zehr's reasoning, too), it is false for Abbott to claim September 2017 as the lowest in the past 40 years.

We say PolitiFact Texas (and Zehr) were wrong to suggest Abbott was simply wrong about the unemployment rate in Texas.

Ambiguous isn't the same as wrong

Certainly Gov. Abbott might have expressed himself more clearly. Abbott had the option of saying "The Texas unemployment rate is lower now than it has been in 40 years" if he believed that was the case. Such phrasing would tell his audience that no matter what the unemployment rate over the past 40 years, the current rate is lower.

Alternatively, Abbott might have said "The Texas unemployment rate is as low now as it has been in 40 years." That phrasing would clue his audience that the present low unemployment rate was achieved during the past 40 years at least twice.

Abbott's phrasing was somewhere in between the two alternatives we created. What he said hinted that the September 2017 rate was lower than it had been in 40 years but did not say so outright. His words were compatible with the September 2017 rate equaling the lowest in the past 40 years, but fell short of telling the entire story.

Kind of like PolitiFact Texas fell short of telling the entire story.

Though we took note of it on Twitter, we will again take the opportunity to recognize PolitiFact Texas and W. Gardner Selby as PolitiFact's best exemplars of transparency with respect to expert interviews. PolitiFact Texas posted the relevant portions (so far as we can tell!) of its interview of Cheryl Abbot. PolitiFact Texas has done similarly in the past, and we have encouraged PolitiFact (and other fact checkers) to make it standard practice.

Selby's interview shows him asking Cheryl Abbot to confirm his reading of the unemployment statistics. Selby's question was mildly leading, keeping Abbot to the topic of whether the low September 2017 unemployment rate had been equaled twice in the past 40 years. A different approach might have clued Selby in to the same valuable information Dan Zehr reported: Gov. Abbott may have had access to the confidential October figures, and his statement may prove correct for that month once the BLS releases the numbers.

It's notable that in the interview Abbot said that the numbers from September 2017 were the "lowest" in 40 years (bold emphasis added):
(T)he seasonally adjusted unemployment rate for Texas in September 2017 matched those of November and December 2000, all being the lowest rates recorded in the State since 1976.
Selby did not use the above quotation from Abbot. Perhaps he did not want his audience confused by the fact Abbot used the same term Abbott used.

In our view, Gov. Abbott was at least partially correct if he was talking about September 2017 and correct if he was talking about October 2017.

PolitiFact Texas should have covered both options more thoroughly.

Monday, January 1, 2018

PolitiFact's worst 17 fact checks of 2017

PolitiFact had a terrific year churning out horrible fact-checking content. In 2016 we published a list of PolitiFact's 11 worst fact checks. This year we're expanding that number to 17. Some of the selections show off PolitiFact's tendency to check statements of opinion. Some of them focus on other methodological blunders. And some make our list based on their potential impact on public debate.

Let's get to it.

PolitiFact's worst 17 fact checks of 2017


Note: With this post we are experimenting for the first time with multiple pages for a single post. We anticipate having to update the post a number of times to ensure proper formatting. Once that is done, we will remove this message and consider the post completed.

Or, we'll decide the errors require too much work to correct and use a format like our summary post from last year.