Thursday, May 26, 2016

Surreal: PolitiFact gives Clinton "Mostly True" rating for deliberate deceit

Though we're used to seeing PolitiFact publish some of the worst and most biased fact checks of all time, PolitiFact's May 26, 2016 fact check of Democratic presidential candidate Hillary Rodham Clinton still counts as surreal.

It's surreal because we can hardly imagine anybody outside of the PolitiFact offices not seeing that it's ridiculously biased.

Here's what the top of it looks like:


This almost makes for a classic tweezers or tongs case. Should PolitiFact pick out the true nugget from the false boulder? Or focus on the boulder while giving some credit for the true nugget?

That subjective choice is, of course, one of the things that makes PolitiFact's claims of nonpartisanship ring resoundingly hollow. But in this case PolitiFact went way beyond its normal exercise of subjective discretion. Read on.

Note at the top of the image we clipped that PolitiFact uses a quotation from Clinton: "In 2006, Donald Trump was hoping for a real estate crash." If that were all there was to it, then Clinton could earn a "Mostly True." Donald Trump did hope the real estate bubble would burst (which ordinarily just means that overpriced real estate receives a correction). So penalize Clinton a little for inflating Trump's hope that the bubble would burst into a hope for a "crash," and okay. Maybe it's a little easy on Clinton, but okay.

But that isn't what PolitiFact did at all. No sirree.

PolitiFact gave Clinton the grade that might have been justified after tweezing out the nugget of truth from her ad. But PolitiFact associated the positive rating for the true-ish nugget with what can only be called a deliberate deception. A lie, if you will. A statement made with the intent to deceive.

Yes, it's right there in the same image capture up above: "Hillary Clinton faults Donald Trump for hoping for real estate crash that led to the Great Recession." Up and to the right of the headline a smidgen, the "Truth-O-Meter" blares its helpful "Mostly True" next to Clinton's gross untruth.

In context, the ad is even more blatant in sending the false message that Trump hoped for the Great Recession. PolitiFact recognizes it, even though the blatant deception fails to figure in the final rating:
He said on more than one occasion that he welcomed a downturn in the real estate market because it would give him a chance to buy properties at a bargain and sell them at a higher price later. That's the essence of profitable investing.

What's far less clear is whether Trump was rooting for something on the scale of the Great Recession, a suggestion made in the Clinton ad.
Got that? It's not completely clear to PolitiFact that Trump wasn't hoping for a crash along the lines of the Great Recession. But Clinton's ad suggested as much. So what can you do? You just have to give Clinton the benefit of the doubt, right? It's mostly true that Trump was hoping for the Great Recession. It's a mostly true fact. It's right there in the name PolitiFact.

It's not at all plausible that Trump was hoping for the Great Recession, and Clinton knows it. There's no real evidence to support it except for a quotation taken out of context. The quotation is out of context because there was no Great Recession when he made the statement, and Trump doubted the housing bubble would burst. Should Trump have nonetheless predicted the depth of the recession along with the sluggish recovery engineered by the Obama administration?

Clinton's ad tells a lie, and PolitiFact grants its seal of mostly approval. Disgusting.

PolitiFact has turned out a ton of stinkers in its history. This one reeks with the worst of them.

Monday, May 23, 2016

Gender pay gap follies with PolitiFact Missouri

I covered the poor job mainstream fact checkers do on Democrats' gender pay gap claims at Zebra Fact Check back in 2014. PolitiFact remains a basket case example among the big three fact checkers. A May 18, 2016 fact check from PolitiFact Missouri perhaps establishes a new low point in fact-checking the gender pay gap.

Our review of this case will look at the major errors first, but this post will go in-depth on the evidence because this case looks very bad for PolitiFact. The facts demand we consider the possibility that PolitiFact Missouri chose and executed this fact check to deliberately favor Democratic gubernatorial hopeful Chris Koster.

The Gender Pay Gap in Missouri

PolitiFact Missouri's fact check focuses on a claim made on Twitter:
Koster tweeted, "The #WageGap isn’t about isn’t about [sic] a few cents — it can mean EVERYTHING to a working woman trying to provide for her family."

His tweet contained a photo with a captain [sic] that stated, "Closing our state’s wage gap would make a $9 billion difference to Missouri women."
Democratic politicians often blur the line between the raw gender pay gap and a gap caused by gender discrimination in the workplace.

The raw gender pay gap in this case represents the difference in pay between men and women regardless of the job and without accounting for differences in hours worked by full-time employees.

The unexplained gender pay gap is the difference in pay between men and women after explaining part of the gap. The unexplained gap varies depending on what the researchers try to explain, so measurement of that gap may vary.


Some portion of the unexplained gap is perhaps explained by gender discrimination. But the research does not pin down that percentage. Studies that consider more potential explanations tend to show smaller unexplained gaps.

PolitiFact Missouri acknowledged that Koster blurred that line between the raw pay gap and any percentage of that gap caused by gender discrimination. But PolitiFact Missouri's generous calculation of wages potentially lost to gender discrimination was close to Koster's $9 billion figure. So Koster eventually skated with a "Mostly True" rating (bold emphasis added):
A study from the Institute for the Study of Labor, an economic research institute based in Bonn, Germany, shows the unexplained wage gap in the United States falls somewhere between 8 percent and 18 percent of the total earnings difference, if the figures are adjusted for additional factors. Using the high-end estimate, the $9.5 billion figure falls to about $7.5 billion.
At first, we thought PolitiFact simply got the math wrong. If "18 percent of the total earnings difference" may stem from gender discrimination, then taking 18 percent of the total earnings difference ($9.5 billion, according to PolitiFact Missouri) should yield a figure of $1.71 billion. Yet PolitiFact Missouri calculated a figure of $7.5 billion. Our first messages to PolitiFact Missouri about its fact check questioned its math.
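The arithmetic behind our initial objection is simple enough to sketch (in Python, using only the two figures PolitiFact Missouri itself reported):

```python
# Figures as reported in PolitiFact Missouri's fact check
raw_gap = 9.5e9   # raw gender pay gap for Missouri, in dollars
high_end = 0.18   # high-end "unexplained" share from the cited study

# Reading "18 percent of the total earnings difference" literally:
unexplained = raw_gap * high_end
print(f"${unexplained / 1e9:.2f} billion")  # $1.71 billion
```

On that literal reading, PolitiFact's $7.5 billion figure does not reproduce.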

However, we did not do a good enough job verifying that PolitiFact's description of the report was correct. The report estimates an unexplained gender pay gap between 8 and 18 percent for 2010. So 18 percent represents the total unexplained earnings difference the researchers estimated, not a percentage of the raw gender wage gap. PolitiFact Missouri described the report's results poorly. PolitiFact Missouri also relied on a study that skimped on gap explanations compared to other studies found through its source list.

Using its preferred study, PolitiFact calculated that as much as $7.5 billion of the raw $9.5 billion gap may represent gender discrimination. The math is right. But is PolitiFact Missouri justified in using the 18 percent figure for its calculation?

We think using the high-end estimate from the report containing the highest estimates amounts to cherry-picking. We expect neutral fact checkers to avoid cherry-picking.

After the page break, we'll describe the findings of the reports PolitiFact Missouri cited but ignored for purposes of its calculations. Beyond that, we'll note how PolitiFact passed up an easier fact check of Koster--one with a cut-and-dried poor result for Koster. That other fact check also came from Koster's Twitter feed, so we see no obvious reason why PolitiFact Missouri would have missed it.

Thursday, May 19, 2016

NTSH: PolitiFact twice misrepresented

This "Nothing To See Here" post looks at two media figures who publicly misrepresented PolitiFact.

We think PolitiFact may be expected to protect its reputation from misrepresentations coming from mainstream sources.

We have two recent examples of such misrepresentation.

Thomas Baekdal


Thomas Baekdal bills himself as a new media analyst. Apparently some people take him seriously. But Baekdal tried to support his views on public misinformation using data from PolitiFact, data that Baekdal massaged with his own formula, as he explains:
I came up with another system. It's based on a logarithmic scale which works like this:
  • We give 'half-true' a value of 1 and center it on the graph.
  • We give 'Mostly True' a value of 2, and 'True' a value of 5. The idea here is that we reward not just that something is true, but also that it provides us with the complete picture (or close to it).
  • Similarly, we punish falsehoods. So, 'Mostly False' is given a value of -2, and 'False' a value of -5.
  • Finally, we have intentional falsehoods, the 'Pants on Fire', which we punish by giving it a value of -10.
Sound reasonable?
No, it doesn't sound reasonable.

It's not reasonable because, regarding the final point, PolitiFact does not define the "Pants on Fire" rating as Baekdal does, and in fact does not take deceitful intent into account in doling out its ratings.
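To make concrete what Baekdal's scheme does, here is a minimal sketch of the weighting he quotes. The per-politician tallies are hypothetical, and the averaging step is our assumption; Baekdal does not publish his aggregation code:

```python
# Baekdal's published weights for PolitiFact's six ratings (as quoted above)
WEIGHTS = {
    "True": 5,
    "Mostly True": 2,
    "Half True": 1,
    "Mostly False": -2,
    "False": -5,
    "Pants on Fire": -10,  # weighted as if it marked *intentional* deceit
}

def baekdal_score(tallies):
    """Average Baekdal-weighted score over a politician's rating tallies."""
    total = sum(tallies.values())
    return sum(WEIGHTS[rating] * count for rating, count in tallies.items()) / total

# Hypothetical tallies: five "Pants on Fire" ratings drag the average
# down twice as hard as five "False" ratings would, by construction.
print(baekdal_score({"True": 10, "Half True": 10, "Pants on Fire": 5}))  # 0.4
print(baekdal_score({"True": 10, "Half True": 10, "False": 5}))          # 1.4
```

The extra penalty at the bottom of the scale is precisely the problem: it punishes an intent finding that PolitiFact never makes.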

So Baekdal is misinforming people by misrepresenting PolitiFact's rating scale and purpose. Nothing to see here? Should PolitiFact just look the other way? There's an incentive to do that: Baekdal claims the data show PolitiFact's impartiality. PolitiFact might like to preserve that fiction.

Doyle McManus


Doyle McManus works in Washington D.C. as an L.A. Times political columnist. McManus wrote a thinly-veiled "Hillary for President" column published on May 18, 2016. The column, "How brazenly can a candidate lie and get away with it? We're going to find out with Donald Trump," like Baekdal's article, misrepresented PolitiFact:
Trump fibs so often that the fact-checking website Politifact awarded him its 2015 “Lie of the Year” award for his entire body of work, a lifetime achievement award for prevarication.
It's very easy to check PolitiFact's reason for awarding its 2015 "Lie of the Year" to "the campaign misstatements of Donald Trump."
In considering our annual Lie of the Year, we found our only real contenders were Trump’s -- his various statements also led our Readers’ Poll. But it was hard to single one out from the others. So we have rolled them into one big trophy.

To the candidate who says he’s all about winning, PolitiFact designates the many campaign misstatements of Donald Trump as our 2015 Lie of the Year.
The question is, how did McManus and the Times manage to get the facts wrong?

Should PolitiFact let this misinformation stand, or call on its PunditFact unit to fact check McManus?

Is there nothing to see here?


Afters:

It's worth noting that we have taken action to try to correct both errors, contacting Baekdal via Twitter and email and McManus with a comment on his article as well as Twitter. Neither man seems so far inclined to fix his errors. We will update on this point if we discover either has corrected the record.

Wednesday, May 18, 2016

Debra J. Saunders: "PolitiFact or PolitiFable?"

The San Francisco Chronicle's conservative columnist Debra J. Saunders published a nice blasting of PolitiFact on May 17, 2016.

In "PolitiFact or PolitiFable?" Saunders notes that PolitiFact California gave Pat Buchanan a "Mostly False" rating for claiming that half of California households speak a language other than English at home.

Saunders:
PolitiFact’s Chris Nelson did some checking and found out that — drum roll — what Buchanan said was pretty accurate. According to the U.S. Census 44 percent of Californians spoke a language other than English at home in 2014. Steven Camarota of the Center for Immigration Studies, crunched the data and concluded 48 percent of Cali households spoke a language other than English at home.

So how did the statement get a “Mostly False” rating? While Nelson noted Buchanan was right on the numbers, he nonetheless concluded Buchanan “wrongly implies that half the state does not speak English.” That is so bogus.
We think Saunders makes a great point. It's not easy to see any such implication from Buchanan.

PolitiFact California:
We checked the second part of Buchanan’s statement, about the percentage of Californians who speak a foreign language at home.

It’s a claim that was close to correct on the numbers, but wrongly implies that half the state does not speak English.
PolitiFact California reasons speciously. If half of Californians did not speak English then it would not matter whether they are at home when they speak a language other than English. If people do not speak English then they do not speak English, regardless of location.

We give the last word to Saunders:
Note to PolitiFact: When you have to make up something you think someone implied, then you should get your facts straight.

Saturday, May 14, 2016

PolitiFact's throne of lies



PolitiFact editor Angie Drobnic Holan told some whoppers on May 13, 2016.

We take a dim view here of fact checkers making stuff up. Here's what Holan had to say in her opinion piece about the 2016 election:
Our reporting is not "opinion journalism," because our sole focus is on facts and evidence. We lay out the documents and sources we find; we name the people we interviewed. The weight of evidence allows us to draw conclusions on our Truth-O-Meter that give people a sense of relative accuracy. The ratings are True, Mostly True, Half True, Mostly False, False and Pants on Fire. Readers may not agree with every rating, but there should be no confusion as to why we rated the statement the way we did.
Holan's spouting hogwash.

"Our reporting is not 'opinion journalism,' because our sole focus is on facts and evidence." Rubbish. PolitiFact establishes the direction of its fact checks based on its interpretation of a subject's claim. PolitiFact tends to have a tin ear for hyperbole and jest, and that's just the tip of PolitiFact's iceberg of subjectivity. The "Truth-O-Meter" rating system is unavoidably subjective. Perhaps Holan's qualifier "Our reporting" serves as weasel words to dodge that fact. Is the "Truth-O-Meter" rating part of PolitiFact's "reporting"? If not, Holan weasels her way to some wiggle room.

"We lay out the documents and sources we find; we name the people we interviewed." Okay, fine, but what if PolitiFact's uncovering of documents and sources was biased? What if PolitiFact conveniently finds a professional "consensus" on a key issue based on a handful of experts who lean left? Is that an objective process? Isn't that process easily led by subjective opinions? What if PolitiFact unexpectedly overlooks/ignores a report from the Congressional Budget Office when it typically counts CBO reports as the gold standard?

"The weight of evidence allows us to draw conclusions on our Truth-O-Meter that give people a sense of relative accuracy." As we pointed out above, the "Truth-O-Meter" ratings are impossible to apply objectively. For example, PolitiFact judges between "needs clarification," "leaves out important details" and "ignores critical facts." Those criteria do not at all lend themselves to objective demarcation. Where does one of them end and the next one begin? Republican presidential candidate Mitt Romney's 2012 Jeep ad certainly had more than "an element of truth" to it, yet PolitiFact gave the ad its harshest rating, "Pants on Fire," supposedly reserved, if the definitions mean anything, for statements that do not possess an element of truth. The "weight of evidence" is decided by the subjective impressions of PolitiFact editors, not by meeting objective criteria.

"The ratings are True, Mostly True, Half True, Mostly False, False and Pants on Fire." Yes, those are the ratings. If we focus just on this statement we could give Holan a "True" rating. That reminds us of another subjective aspect of PolitiFact's process. PolitiFact decides whether to consider a whole statement or just part of a statement when it applies its ratings. Is that decision based solely on facts and evidence? Of course not. It's just another part of PolitiFact's subjective judgment.

"Readers may not agree with every rating, but there should be no confusion as to why we rated the statement the way we did." From a paragraph full of howlers, this may be the biggest howler. PolitiFact editors say they're often divided on what rating to give. They say it's one of the hardest parts of the job. John Kroll, who worked at the Cleveland Plain Dealer, PolitiFact's original partner for PolitiFact Ohio, said the decision between one rating and another often amounted to a coin flip. It's outrageous for Holan to sell the narrative that the ratings are driven only by the facts.

Holan smells of beef and cheese.

Friday, May 13, 2016

Thomas Baekdal: 'These graphs also illustrate how impartial PolitiFact is' (Updated)

Danish blogger Thomas Baekdal wrote up a lengthy piece on public misinformation, "The increasing problem with the misinformed," published on March 7, 2016. The piece caught our interest because Baekdal used graphs of PolitiFact data and made some intriguing assertions about PolitiFact, particularly the one quoted in our title. Baekdal said his graphs show PolitiFact's impartiality.

We couldn't detect any argument (premises leading via logical inference to a conclusion) in the article supporting Baekdal's claim, so we wrote to him asking for the explanation.

Then a funny thing happened.

We couldn't get him to explain it, not counting "the data speaks for itself."

So, instead of a post dealing with Baekdal's explanation of his assertion that his graphs show PolitiFact's impartiality, we'll go over a few points that cause us to doubt Baekdal and his conclusion.

1) Wrong definition
Jeff was the first to respond to Baekdal's article, correctly noting on Twitter that Baekdal misrepresented PolitiFact's "Pants on Fire" rating. Baekdal wrote "(T)hey also have a 6th level for statements where a politician is not just making a false statement, but is so out there that it seems to be intentionally misleading." But PolitiFact never explicitly describes its "Pants on Fire" rating as indicative of deliberate deceit. The closest PolitiFact comes to that is calling the rating "Pants on Fire." That's a point we've criticized PolitiFact over repeatedly. It protests that it doesn't call people liars, but its lowest rating makes that accusation implicitly via association with the popular rhyme "Liar, liar, pants on fire!" Readers sometimes take the framing to heart. Perhaps Baekdal was one of those.

2) An unsupported assertion

Baekdal asserted, as a foundation for his article, "(W)e have always had a problem with the misinformed, but it has never been as widespread as it is today." But he provided no evidence supporting the assertion. Should we risk aggravating our misinformed state by accepting his claim without evidence? Or maybe extend a license for hyperbole?

3) Unscientific approach

Baekdal prepares his readers for his PolitiFact graph presentations by noting potential problems with sample size, but never talks about how using a non-representative sample will undercut generalizations from his data. We think a competent analyst would address this problem.

4) Unscientific approach (2)

Baekdal uses an algorithm to score his PolitiFact data, statistically punishing politicians for telling intentional falsehoods if they received a "Pants on Fire" rating. But PolitiFact never provides affirmative evidence in its fact checks that a falsehood was intentional. The raw data do not show the wrong Baekdal claims to punish with his algorithm. Baekdal punishes others for his own misinterpretation of the data.

5) Unscientific approach (3)

Jeff also pointed out via Twitter that Baekdal accepts (without question) the dependability of PolitiFact's ratings. Baekdal offers no evidence that he considered PolitiFact might have a poor record of accuracy.

Conclusion

What point was Baekdal trying to make with his PolitiFact stats? It looks like he was trying to show the unreliability of politicians and pundits. Why? To highlight concern that people feel more mistrust for the press than for politicians. Baekdal lays a big share of the blame on the press, but apparently fails to realize PolitiFact is guilty of many of the problems he describes in his criticism of the press, such as misleading headlines.

We see no reason to trust Baekdal's assessment of PolitiFact's impartiality, or his assessment of anything else for that matter. His research approach is not scientific, failing to account for reliability of data, the reality of selection bias or alternative explanations of the data. His unwillingness to justify his claims via email did nothing to change our minds.

We continue to extend our invitation to Baekdal to explain how his graphs support PolitiFact's impartiality.

Hat tip to Twitterer @SatoshiKsutra for bringing Baekdal's article to our attention.


Update May 16, 2016:

Baekdal responds:


And this after we did him the favor of not publicly parsing his email responses.

Baekdal has something in common with some of the likewise thick-skinned folks at PolitiFact.

Tuesday, May 10, 2016

What is 'empirical evidence' to PolitiFact?

At Stanford University, Democratic presidential candidate Hillary Rodham Clinton claimed some things aren't working in the war on terror. Clinton named torture among them:

Enter PolitiFact:

PolitiFact: "Clinton claims empirical evidence shows torture does not work"

PolitiFact thus endorses Clinton's claim that empirical evidence shows torture does not work.

Do the people at PolitiFact understand the concept of "empirical evidence"?

It's striking that PolitiFact finds Clinton's statement "True" despite supplying not a single shred of evidence supporting Clinton's claim.

PolitiFact correctly notes the difficulty with the scientific study of torture:
Because nobody's going to volunteer to be part of a scientific study where you might get tortured — ethics review boards might be apoplectic about such a proposal — the only way to examine the issue is through case studies.
Game over. Case studies provide a type of empirical evidence, but anecdotal evidence does not lend itself to generalization. Therefore, case studies do not empirically support the broad generalization that torture does not work. That goes double for case studies not specifically designed to answer the question scientifically.

The entire exercise, parading under the banner of "fact-checking," at best substitutes the opinions of experts for empirical data. And it is worth emphasizing that the experts' opinions have no firm basis in empirical data--only case studies.

At worst, the fact check treats congressional reports as proof.

PolitiFact's summary, even as it gives Clinton a "True" rating, implicitly confesses its failure:
Clinton said that when it comes to fighting terrorism, "Another thing we know that does not work, based on lots of empirical evidence, is torture."

When it comes to the real goal of getting useful intelligence, the preponderance of the evidence shows that the details interrogators will get from a detainee can typically be acquired without torture. When torture is used, the "information" extracted is likely to be fiction created by a prisoner who will say anything to get the punishment to stop.

All ethical issues aside, the experts say, it doesn't work because it is extremely inefficient and, in many ways, counterproductive.
PolitiFact says "the preponderance of the evidence" shows information obtained through torture might be obtained without torture. But Clinton said we know, thanks to empirical evidence, that torture simply does not work. She calls it a "fact." The second paragraph in PolitiFact's summary does not support Clinton's claim, despite PolitiFact using it as justification. Compare PolitiFact's approach to the claim it is "clear" Clinton broke the law with her handling of top secret information. It's a different standard.

PolitiFact's experts say that torture does not work because it is extremely inefficient. But a thing that works inefficiently still works, albeit inefficiently. It clouds the issue to claim something doesn't work because it works inefficiently.

PolitiFact's fact check clouds the issue on torture. We do not possess enough empirical data to know torture does not work. Giving Clinton a "True" rating makes a total mockery of journalistic objectivity.