Friday, August 26, 2016

Literally false, but "Mostly True"

If you're a Republican who makes a literally false claim with a true underlying point, you deserve what you get from PolitiFact ("False," "Pants on Fire," whatever).

However, sometimes the underlying point takes clear precedence. Most likely if you're a Democrat:

With its April 28, 2016 update, PolitiFact rendered Democrat Rep. Alan Grayson's claim literally false. But Grayson received no downgrade from his original "Mostly True" rating.

Should it matter with respect to the "Truth-O-Meter" rating whether a statement is literally true?

PolitiFact's statement of principles suggests that literal truth counts. Examples like this one help show that PolitiFact's principles are made of Play-Doh.

Update/Correction Aug. 27, 2016: Belatedly added the link to the PolitiFact Florida fact check of Grayson.


Sometimes we spot-check to see if PolitiFact is listing a corrected item on its page of corrected or updated items. Somehow this one did not make the list (archived here). The moral of the story: If you're looking for a measure of how often PolitiFact makes a correction or update, you can't rely on PolitiFact's list.

A partisan coin flip for PolitiFact Florida?

A former staffer at the Cleveland Plain Dealer (the one-time PolitiFact Ohio affiliate) said the difference between one rating and another often amounted to a coin flip.

Such coin flips provide a golden opportunity for bias to swing the vote in one of PolitiFact's "star chamber" judgment sessions.

PolitiFact Florida gives us a wonderfully illustrative pair of examples:

Alan Grayson (D) vs Patrick Murphy (D), May 16, 2016

Democrat Alan Grayson charged fellow Democrat and Senate primary opponent Patrick Murphy with voting in favor of the House Benghazi committee. PolitiFact found Grayson's claim "Mostly True." It would have been simply "True" except that Grayson neglected to mention that Murphy defended his vote by saying he wanted the committee to clear Hillary Clinton's name.

Patrick Murphy (D) vs Marco Rubio (R), August 24, 2016

Democrat Patrick Murphy (the same Patrick Murphy from the example above) charged the Republican incumbent, Sen. Marco Rubio, with voting against the Violence Against Women Act. PolitiFact Florida rated Murphy's claim "True," while pointing out that Rubio objected to changes made to the Act when it was submitted for reauthorization. Rubio said he favored the original wording.

But ... but ... but ...

We often encounter one knee-jerk defense when we compare two different and inconsistent ratings from PolitiFact: The circumstances were different!

Yes, the circumstances are always at least somewhat different when comparing two fact checks; the difference is what makes them two different fact checks in the first place. But a difference in circumstances defends against the charge of inconsistency only if it also explains the difference in ratings.

Grayson made a compound charge against Murphy, while Murphy made a simple charge against Rubio. That's a difference, but it only makes the different ratings harder to explain. PolitiFact Florida found no fault with Grayson's charge that Murphy was in a small group of Democrats voting for the Benghazi committee. If averaging that "true" part with the less-true "voted for" part yields "Mostly True," then the "voted for" part rates even lower when considered by itself.

The difference in this case relies entirely on whatever criteria PolitiFact Florida used to figure out when it is okay to leave out context.

Coin flip?


PolitiFact has published definitions of its ratings. Two definitions are relevant to our comparison:
TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
From the two definitions above, it follows that Grayson's statement about Murphy needed clarification or additional information. Murphy's statement about Rubio, in contrast, had nothing significant missing.

It was significant that Murphy claimed his vote was intended to defend Clinton.

It was insignificant that Rubio claimed he supported the Violence Against Women Act minus the objectionable amendments.

Was this a pair of coin flips both won by Murphy?

The call for transparency


The Poynter Institute, which owns PolitiFact, supports the idea that fact checkers ought to exhibit transparency. In that spirit of transparency, we contacted the writer and editor of both fact checks to ask how they objectively determined when it was okay to leave out context.
The two of you collaborated on a parallel pair of fact checks dealing with charges that a candidate voted for a certain bill.

In one case Alan Grayson charged that Patrick Murphy voted to establish the House Benghazi committee. Your fact check found Murphy had made the vote but said he did it to clear Clinton. The fact check said Grayson should have included that context and dropped Grayson's rating down to "Mostly True."

In the second case the same Murphy said Marco Rubio voted against the Violence Against Women Act. Your fact check found Rubio had opposed the Act with his vote but he said he supported the original version of the bill (without amendments added for its re-authorization). The fact check rated Murphy's statement "True," implying that Murphy was at no fault for omitting Rubio's support for the original version of the VAWA.

In the interest of journalistic transparency (which I know PolitiFact publicly champions):

How does an objective and nonpartisan fact checker make the critical distinction between context that is properly omitted and context that should have been included?
We will transparently update this item if we receive any reply from PolitiFact Florida or its parent organizations.

Meanwhile, heads Murphy wins, tails Rubio loses. Coin flip.

Thursday, August 25, 2016

Jon Feere: "Does Hillary support a border wall? Someone should ask"

Jon Feere of the Center for Immigration Studies was recently cited as an expert by PolitiFact Wisconsin.

Like a number of experts in the past, Feere came away unimpressed with the way PolitiFact made use of his expertise. Feere wrote an article questioning PolitiFact's finding that it is "Half True" that Hillary Clinton once favored building a wall on the Mexican border.

We love articles like Feere's that reveal the correspondence between PolitiFact and the experts it cites in its stories. People like Feere give us the type of transparency PolitiFact should provide in the first place.

We recommend reading the whole thing for the detailed analysis Feere offers of PolitiFact's approach to the subject.

Here's a two-paragraph teaser:
Had Trump said, "By the way, Hillary's 700 miles is 300 miles shorter than the 1,000 miles I'm supporting" PolitiFact would have had to give the statement a "True" rating which means it probably would never have been picked up by PolitiFact in the first place.

Bottom line: It's totally true that Clinton once supported a double-wall border fence that never got constructed, and it's totally true that Trump's proposed wall is 300 miles longer.
Also check the tag "Biased interviews of experts" for another example of an expert with this type of PolitiFact experience. More if we get around to tagging them.

Wednesday, August 24, 2016

Making stuff up the PolitiFact way (Updated)

Why is it so easy to find PolitiFact publishing falsehoods?

Early on Aug. 24, 2016 I was reviewing a "False" rating given to Donald Trump for claiming FBI Director James Comey said Hillary Clinton's email actions amounted to misconduct that was a disgrace and embarrassment to our country.

PolitiFact doled out the "False" rating based on its finding that Trump's paraphrase of Comey was far too creative.

But was Trump paraphrasing Comey? PolitiFact's reasoning on that key point raised a question or two:
The speech transcript specifically indicates, via the dash, that Trump was explaining the remarks of the FBI director, not just giving his own opinion. Trump's delivery on video makes it sound that way, too, and that's what caught our attention.
I found the audiovisual evidence from Trump's speech ambiguous. But what was this about a dash indicating that Trump was explaining Comey's remarks?

I dashed off a message to the writer and editor:
A little while ago I was reading your July 15, 2016 fact check of Donald Trump regarding his supposed paraphrase of FBI Director James Comey.

I found this piece of evidence striking:
The speech transcript specifically indicates, via the dash, that Trump was explaining the remarks of the FBI director, not just giving his own opinion.
I confess I had never before heard that the use of a dash serves to reliably indicate that what follows the dash shows an attempt to explain the remarks of another. But I was slightly more comfortable with my ignorance after failing to find evidence supporting the idea via Internet research and by consulting an English major.

Can you offer any support for the idea that a dash serves the purpose identified for it in your fact check? Thanks very much in advance.
(If we receive any reply from the PolitiFact team, we will certainly update this item.)

Compounding the problem for PolitiFact, Trump's speechwriters used dashes liberally in the speech. The speech includes 16 dashes of the same type, and so far as we can tell, no pattern exists of the dash serving the purpose PolitiFact assigned to it.

Further, Jeff pointed out this article he ran across about Trump's speaking style. It has an entire section dedicated to how transcriptionists rely heavily on the em dash for transcriptions of Trump's speeches:
Trump's crimes against clarity are multifarious: He often speaks in long, run-on sentences, with frequent asides. He pauses after subordinate clauses. He frequently quotes people saying things that aren't actual quotes. And he repeats words and phrases, sometimes with slight variations, in the same sentence.

To untangle the jumble, his stenographers are increasingly reliant on a punctuation known as the "em dash" (—), which are used to separate parentheticals within the same sentence. Philip Rucker, The Washington Post's national political correspondent, said that among reporters covering Trump, he has become known as the "em-dash candidate."
Do we expect a response from PolitiFact? No. We think PolitiFact invented its key justification for claiming Trump was paraphrasing what Comey said. We expect PolitiFact to defend its crime against fact-checking with silence and inaction.

PolitiFact offered no solid justification for interpreting Trump's words as a paraphrase of Comey, but performed the fact check as though its interpretation was solidly justified.

That is deception.

Update 8/24/2016: Thanks to CSPAN, we have a short and sweet version of the video/audio of the key part of Trump's speech.

We are not impressed by those who would claim Trump's meaning is clear where the claim is unaccompanied by a convincing explanation of why it is supposedly clear.

Correction/Update Aug. 26, 2016: Belatedly included the embedded URL of the PolitiFact fact check.

Update 8/24/2016: For the sake of thoroughness, I suppose I should note the reply I received from editor Angie Drobnic Holan's email bot:
I am out of the office until Aug. 29. I won't be checking email until I return. If you have a question about PolitiFact, please contact Aaron Sharockman or Katie Sanders.
Sharockman occasionally responds to email messages, so I forwarded my message to him. No reply so far aside from Holan's 'bot.

Update Aug. 25, 2016:

I tried needling Aaron Sharockman on Twitter to draw attention to this issue (including the email messages sent to PolitiFact). It kinda-sorta worked.

Via Twitter, I reminded Sharockman of the importance PolitiFact's parent organization, Poynter Institute, places on journalistic transparency. Specifically, “show how the reporting was done and why people should believe it.”

Odds are we'll be back to the silent treatment from the exemplars of transparency at PolitiFact.

Tuesday, August 23, 2016

The epistemology of Bill Adair's birthday sermon

This post echoes the title and content of Joseph E. Uscinski and Ryden W. Butler's "The Epistemology of Fact Checking" in pointing to the poor approach fact checkers continue to take toward epistemology.

As I read the birthday wishes Duke University Professor Bill Adair wrote for PolitiFact, the fact-checking enterprise he helped create, I was floored by its spurious reasoning. Adair looked to validate his creation by telling readers how it has met its primary goal:
The mission of fact-checking has always been to inform, not to change behavior. And by that measurement, fact-checking this year is a smashing success. There is more fact-checking than ever — more than 100 sites around the world, according to the latest tally by the Duke Reporters’ Lab. That’s up more than 60 percent in the last year.

Indeed, the fact that we know the politicians are using so many falsehoods is actually proof that fact-checking is working. In debates and interviews, the candidates are being questioned about the claims the fact-checkers have found were untrue.
What's wrong with Adair's reasoning? I shall explain in a moment, but whether I end up informing anybody about Adair's mistake will depend on my audience.

Consider that foreshadowing.

When Adair considers the purpose of fact-checking (informing readers), he chooses a poor means of measuring whether readers are informed. Adair proposes measuring the degree to which readers are informed based on the number of fact-checking websites.

After telling readers that the aim of fact-checking is not merely to change behavior, Adair offered readers a metric that amounts to a change in behavior: journalists in the media produced 60 percent more fact-checking sites this year than last, and interviewers asked politicians about fact-checked claims.

Were people successfully informed through that increase in fact-checking sites? Maybe. But confirming that would take, for example, a scientific survey of readers that would measure the effects of fact-checking. It does not automatically follow that an increase in the number of fact-checking sites leads to a more informed public. And it should concern us that a principal figure in the fact-checking movement makes an argument that misses that fact.

Adair segues from this stinker of an argument to a second one: He says "The fact that we know politicians are using so many falsehoods is actually proof that fact-checking is working." Adair has simply assumed what he wants to prove. He offers not a shred of evidence that "we know politicians are using so many falsehoods." If Adair tried to offer evidence in support of his claim, it would probably rely on the findings of fact-checkers. Using the findings of fact-checkers to confirm that people informed by the fact-checkers know the truth assumes what Adair seeks to prove.

If readers learn the truth from fact checks, let us see that outcome in the form of hard data. Instead, Adair commits the fallacy of begging the question.

For fact checkers to successfully inform their readers of the truth, the fact checkers have to have a message capable of communicating the truth (unproven) as well as an audience that successfully receives that message (also unproven).

Yet here we have Bill Adair, fact checker, telling people, despite the lack of evidence, that fact-checking is working to inform people.

Adair's argument is naive, and that naive approach to knowledge broadly infects the fact-checking movement he helped create.

Here's one more plug for Uscinski & Butler's "The Epistemology of Fact Checking," which makes the case that fact-checkers take a naive approach to epistemology.

Update Aug. 23, 2016:

PolitiFact highlighted Adair's birthday gloat, using one of the statements we criticized as the pull quote.

Self-validation the easy way!

Sunday, August 21, 2016

Hoystory: "PolitiFact California is Stupid ..."

Reformed California journalist Matthew Hoy chimes in agreeing with our "Is PolitiFact California stupid?" post before sharing his experience with PolitiFact that expands on the point.

Hoy notes that PolitiFact California is stupid "…and dishonest. And not transparent. And thin-skinned."

It's another account of PolitiFact resisting correction and hiding or downplaying evidence of its misdeeds. Hoy was trying to hold PolitiFact to account for its support of a false claim spread by Democrat Gavin Newsom. Newsom tweeted that it's easier to buy a gun in California than a Happy Meal. PolitiFact retweeted it. Hoy wrote about it.

Read both of Hoy's posts to get a picture of how the top-flight journalists at PolitiFact take the low road.

Correction Aug. 23, 2016: Our title omitted mention of "California," amounting to a misquote of the title of Hoy's article. We apologize for the mistake.

Universal health plans that aren't

From time to time we remind our readers that only the lack of time keeps us from finding many more examples of PolitiFact's bias and incompetence.

Here's one we missed from 2012, involving then-San Antonio Mayor Julian Castro:

As the captured image shows, Castro said seven U.S. presidents had tried to expand health care to all Americans. PolitiFact rated the claim "Mostly True," but based it on very questionable evidence.

Two U.S. presidents tried to provide health care for all Americans. A few others tried to expand the provision of health care to more Americans. In fact, PolitiFact used something like the latter wording to paraphrase Castro, perhaps reasoning that changing what Castro said would make the evidence stack up better in his favor (bold emphasis added):
President Barack Obama’s health care law has been one of the most polarizing aspects of his presidency, with Republicans criticizing it at every turn. But the keynote speaker at the Democratic National Convention in Charlotte, N.C., San Antonio Mayor Julian Castro, didn’t run from it. He applauded Obama for pursuing expanded health care -- and succeeding where his predecessors had failed.
Castro mentioned expanding health care to "all Americans," not the bar-lowering "expanded health care" offered by PolitiFact. By replacing Castro's actual words, PolitiFact avoided the embarrassment of admitting that Castro was wrong when he went on to say Obama succeeded in providing health care to "all Americans." The Affordable Care Act succeeded in growing the number of Americans who have some type of insurance--often Medicaid--but the ACA did not achieve universal coverage.

It takes universal coverage to bring health care to "all Americans."

Fact-checking is great, right?

The Ultimate List of non-Universal Universal Health Care Plans