Tuesday, May 31, 2016

'Just the facts' from PolitiFact? Think again

We've pointed out before (including recently) that subjectivity affects the interpretation PolitiFact brings to the statements it rates on its trademarked "Truth-O-Meter." A May 31, 2016 fact check of Republican presidential candidate Donald Trump offers a great example of PolitiFact applying spin to a politician's statement.

We'll start with the version of the statement PolitiFact cited in its fact check, from the Washington Post:
Wind is very expensive. I mean, wind, without subsidy, wind doesn't work. You need massive subsidies for wind. There are places maybe for wind. But if you go to various places in California, wind is killing all of the eagles.

You know if you shoot an eagle, kill an eagle, they want to put you in jail for five years. Yet the windmills are killing hundreds and hundreds of eagles. One of the most beautiful, one of the most treasured birds — and they're killing them by the hundreds and nothing happens. So wind is, you know, it is a problem.
A fact checker needs to answer a basic question before proceeding with a fact check of the number of eagles killed by windmills: Which windmills is Trump talking about? Is it just windmills in "various places in California"? Windmills in the entire state of California? Or was California just an example of a place where windmills are killing eagles and the "hundreds and hundreds" mentioned later may come from locations across the United States?

We do not detect any strong argument for limiting Trump's claim about the number of eagles killed just to California. If Trump had said "Yet the windmills in California are killing hundreds and hundreds of eagles" then, sure, go for it. He's talking about California. But Trump was speaking in South Dakota about national energy policy. He had reason not to limit his comments about the wind energy industry's effects to California.

PolitiFact, surprising us not in the slightest, offers no good reason for concluding Trump was saying California windmills kill hundreds of eagles.

PolitiFact (bold emphasis added):
Setting aside Trump’s exaggeration about killing all of the eagles, we wondered, are wind turbines in California killing "hundreds and hundreds," as he said?
The fact check assumes without justification that Trump was talking just about California. That assumption leads directly to the "Mostly False" rating PolitiFact pins on Trump's claim.

But what if Trump was talking about the whole of the United States? The words Trump used fit that interpretation at least as well as the interpretation PolitiFact used.

Adjusting the math

PolitiFact accepted an estimate of about 100 eagles killed per year by California wind turbines. PolitiFact accepted that wind turbines rarely kill Bald Eagles. Assuming Golden Eagles, the variety making up the majority of those killed in California, are the type most vulnerable to wind turbine deaths, we can very roughly estimate national eagle deaths by looking at the Golden Eagle's U.S. range.

Golden Eagles occur year-round in six of the top 15 states for generating wind energy. They occur in all 15 states for at least part of the year. California comes in at No. 2, after Texas. Golden Eagles range into the part of Texas known for generating wind energy.

It seems reasonable to estimate that wind turbines kill over 200 eagles per year in the United States.
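A back-of-the-envelope sketch of that extrapolation, using the figures above. To be clear, the assumption that the rest of the Golden Eagle's U.S. range adds at least one more California's worth of deaths is ours, and deliberately conservative:

```python
# Rough sketch of the extrapolation described above.
# Every figure is an estimate from the discussion, not hard data.

ca_deaths_per_year = 100  # estimate PolitiFact accepted for California

# Golden Eagles occur year-round in 6 of the top 15 wind-energy states
# (California included) and part-year in all 15. Assume, conservatively,
# that the rest of that range adds at least one more California's worth:
rest_of_us_deaths = ca_deaths_per_year

national_estimate = ca_deaths_per_year + rest_of_us_deaths
print(national_estimate)  # 200 -- the floor behind "over 200 per year"
```

However crude, the sketch shows the national reading of Trump's claim clears "hundreds and hundreds" without any heroic assumptions.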

That interpretation of Trump's comment makes his claim true, though based on an estimate, and supportive of his underlying point that wind turbines kill many eagles otherwise protected by federal law.

PolitiFact's interpretation lacks clear justification in the context of Trump's remarks, but fits PolitiFact's narrative about Trump.

A politician's lack of clarity does not give fact checkers justification for interpreting statements as they wish. The neutral fact checker notes for readers the lack of clarity and then examines the possible interpretations that are at the same time plausible. The neutral fact checker applies the same standard of charitable interpretation to all, regardless of popular public narratives.

The point? PolitiFact's rating of Trump was not based simply on the facts. It was based on an unjustified (and unjustifiable?) interpretation of what Trump said.

Jeff Adds (June 2, 2016): 
I'll give Greenberg a hat tip for applying PolitiFact's on-again, off-again policy on rating hyperbole for ignoring Trump's "wind is killing all of the eagles" blurb, but my praise ends there.

Bryan is right that Greenberg offers no reason to limit the death count to those killed in California. Likewise, Greenberg offers no justification for limiting Trump's claim to eagle deaths per year. Greenberg's own expert acknowledged "probably about 2,000" eagles had been killed in California alone since the early '80s. It is generally understood that thousands and thousands are more than hundreds and hundreds.

Greenberg's article is also an example of PolitiFact disregarding an underlying point in favor of the raw numbers. We think raw facts are the only objective thing "fact-checkers" can check, and evaluating underlying points is best counted as analysis or editorial. But PolitiFact has applied its methods so inconsistently we've likened it to playing the game of Plinko.

When PolitiFact fails to employ a consistent method for rating claims, it can't be taken seriously as neutral or objective journalism.

Sunday, May 29, 2016

What??? No 2016 Pulitzer for PolitiFact?

We're a bit overdue bragging about our prediction last year that PolitiFact would continue its years-long streak of failing to win a Pulitzer Prize.

PolitiFact won a Pulitzer in 2009 for 13 stories relating to the 2008 election. We've been telling anybody who will listen that the win came in large part thanks to technological aspects of PolitiFact. PolitiFact was an online information source that put its stories in a searchable database. That somehow counted as an important advance in journalism at the time.

Since 2009, PolitiFact has gone 0-for-7 on Pulitzers. But not to worry. PolitiFact continues to use its Pulitzer Prize win in 2009 to communicate to readers that it is a reliable source of information. PolitiFact reminds readers it is "Winner of the Pulitzer Prize." Check it out via this fresh screen capture:

Subtle, right?

It was pretty much the same story in 2015:

And in 2014:

And in 2013:

And in 2012:

And in 2011 (I wonder why they no longer identify "2009" as the year PolitiFact won?):

And in 2010:

The truth is, if winning a Pulitzer Prize means anything at all about a media source's reliability, it's right next to nothing. We believe we were the first to point out the irony that The Wall Street Journal's Joseph Rago won a Pulitzer Prize in 2011 for a series of editorials that included a pointed criticism of PolitiFact.

In 2014 we created a video of Hitler finding out PolitiFact failed to win a Pulitzer Prize.

We still think it's funny.

Thursday, May 26, 2016

Surreal: PolitiFact gives Clinton "Mostly True" rating for deliberate deceit

Though we're used to seeing PolitiFact publish some of the worst and most biased fact checks of all time, PolitiFact's May 26, 2016 fact check of Democratic presidential candidate Hillary Rodham Clinton still counts as surreal.

It's surreal because we can hardly imagine anybody outside of the PolitiFact offices not seeing that it's ridiculously biased.

Here's what the top of it looks like:

This almost makes for a classic tweezers or tongs case. Should PolitiFact pick out the true nugget from the false boulder? Or focus on the boulder while giving some credit for the true nugget?

That subjective choice is, of course, one of the things that makes PolitiFact's claims of nonpartisanship ring resoundingly hollow. But in this case PolitiFact went way beyond its normal exercise of subjective discretion. Read on.

Note at the top of the image we clipped that PolitiFact uses a quotation from Clinton: "In 2006, Donald Trump was hoping for a real estate crash." If that was all there was to it, then Clinton could earn a "Mostly True." Donald Trump did hope the real estate bubble would burst (which ordinarily just means that overpriced real estate receives a correction). So penalize Clinton a little for inflating Trump's hope that the bubble would burst into a hope for a "crash," and okay. Maybe it's a little easy on Clinton, but okay.

But that isn't what PolitiFact did at all. No sirree.

PolitiFact gave Clinton the grade that might have been justified after tweezing out the nugget of truth from her ad. But PolitiFact associated the positive rating for the true-ish nugget with what can only be called a deliberate deception. A lie, if you will. A statement made with the intent to deceive.

Yes, it's right there in the same image capture up above: "Hillary Clinton faults Donald Trump for hoping for real estate crash that led to the Great Recession." Up from the headline and to the right a smidgen and the "Truth-O-Meter" blares its helpful "Mostly True" next to Clinton's gross untruth.

In context, the ad is even more blatant in sending the false message that Trump hoped for the Great Recession. PolitiFact recognizes it, even though the blatant deception fails to figure in the final rating:
He said on more than one occasion that he welcomed a downturn in the real estate market because it would give him a chance to buy properties at a bargain and sell them at a higher price later. That's the essence of profitable investing.

What's far less clear is whether Trump was rooting for something on the scale of the Great Recession, a suggestion made in the Clinton ad.
Got that? It's not completely clear to PolitiFact that Trump wasn't hoping for a crash along the lines of the Great Recession. But Clinton's ad suggested as much. So what can you do? You just have to give Clinton the benefit of the doubt, right? It's mostly true that Trump was hoping for the Great Recession. It's a mostly true fact. It's right there in the name PolitiFact.

It's not at all plausible that Trump was hoping for the Great Recession, and Clinton knows it. There's no real evidence to support it except for a quotation taken out of context. The quotation is out of context because there was no Great Recession when he made the statement, and Trump doubted the housing bubble would burst. Should Trump have nonetheless predicted the depth of the recession along with the sluggish recovery engineered by the Obama administration?

Clinton's ad tells a lie, and PolitiFact grants its seal of mostly approval. Disgusting.

PolitiFact has turned out a ton of stinkers in its history. This one reeks with the worst of them.

Monday, May 23, 2016

Gender pay gap follies with PolitiFact Missouri (Updated)

I covered the poor job mainstream fact checkers do on Democrats' gender pay gap claims at Zebra Fact Check back in 2014. PolitiFact remains a basket-case example among the big three fact checkers. A May 18, 2016 fact check from PolitiFact Missouri perhaps establishes a new low point in fact-checking the gender pay gap.

Our review of this case will look at the major errors first, but this post will go in-depth on the evidence because this case looks very bad for PolitiFact. The facts demand we consider the possibility that PolitiFact Missouri chose and executed this fact check to deliberately favor Democratic gubernatorial hopeful Chris Koster.

The Gender Pay Gap in Missouri

PolitiFact Missouri's fact check focuses on a claim made on Twitter:
Koster tweeted, "The #WageGap isn’t about isn’t about [sic] a few cents — it can mean EVERYTHING to a working woman trying to provide for her family."

His tweet contained a photo with a captain [sic] that stated, "Closing our state’s wage gap would make a $9 billion difference to Missouri women."
Democratic politicians often blur the line between the raw gender pay gap and a gap caused by gender discrimination in the workplace.

The raw gender pay gap in this case represents the difference in pay between men and women regardless of the job and without accounting for differences in hours worked by full-time employees.

The unexplained gender pay gap is the difference in pay between men and women after explaining part of the gap. The unexplained gap varies depending on what the researchers try to explain, so measurement of that gap may vary.

Some portion of the unexplained gap is perhaps explained by gender discrimination. But the research does not pin down that percentage. Studies that consider more potential explanations tend to show smaller unexplained gaps.

PolitiFact Missouri acknowledged that Koster blurred that line between the raw pay gap and any percentage of that gap caused by gender discrimination. But PolitiFact Missouri's generous calculation of wages potentially lost to gender discrimination came close to Koster's $9 billion figure. So Koster eventually skated with a "Mostly True" rating (bold emphasis added):
A study from the Institute for the Study of Labor, an economic research institute based in Bonn, Germany, shows the unexplained wage gap in the United States falls somewhere between 8 percent and 18 percent of the total earnings difference, if the figures are adjusted for additional factors. Using the high-end estimate, the $9.5 billion figure falls to about $7.5 billion.
At first, we thought PolitiFact used the wrong equation. If "18 percent of the total earnings difference" may stem from gender discrimination, then taking 18 percent of the total earnings difference ($9.5 billion, according to PolitiFact Missouri) should yield a figure of $1.71 billion. Yet PolitiFact Missouri calculated a figure of $7.5 billion. Our first messages to PolitiFact Missouri about its fact check questioned that equation.

However, we did not do a good enough job verifying that PolitiFact's description of the report was correct. The report estimates an unexplained gender pay gap between 8 and 18 percent [see embedded Update for clarification] for 2010, not 2014. So 18 percent represents the total unexplained earnings difference the researchers estimated, not a percentage of the raw gender wage gap. PolitiFact Missouri described the report's results poorly. PolitiFact Missouri also relied on a study that skimped on gap explanations compared to other studies found through its source list.

Using its preferred study, PolitiFact calculated as much as $7.5 billion of the raw $9.5 billion gap may represent gender discrimination. The math is right. But is PolitiFact Missouri justified in using the 18 percent figure for its calculation?
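The arithmetic behind the two competing readings can be laid out in a few lines. The dollar figures are PolitiFact Missouri's; this is only a sketch of the two interpretations, not new data:

```python
# The two readings of the study's 18 percent figure, in plain arithmetic.
# Dollar amounts come from the PolitiFact Missouri fact check.

raw_gap = 9.5e9  # raw Missouri gender pay gap, per the fact check

# Our initial reading: 18 percent is a share OF the total earnings
# difference, leaving a much smaller discrimination-linked figure:
share_reading = 0.18 * raw_gap  # about $1.71 billion

# PolitiFact Missouri's reading: 18 percent is the unexplained gap
# itself, so most of the dollar figure survives the adjustment:
politifact_figure = 7.5e9  # the number the fact check reported

print(share_reading / 1e9)          # roughly 1.71
print(politifact_figure / raw_gap)  # roughly 0.79 of the raw gap
```

The gulf between $1.71 billion and $7.5 billion shows how much rides on which reading of "18 percent" a fact checker adopts.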

We think using the high-end estimate from the report containing the highest estimates amounts to cherry-picking. We expect neutral fact checkers to avoid cherry-picking.

After the page break, we'll describe the findings of the reports PolitiFact Missouri cited but ignored for purposes of its math equations. Beyond that, we'll note how PolitiFact passed up an easier fact check of Koster--one with a cut-and-dried poor result for Koster. That other fact check was also from Koster's Twitter feed, so we see no obvious reason why PolitiFact Missouri would have missed it.

Thursday, May 19, 2016

NTSH: PolitiFact twice misrepresented

This "Nothing To See Here" post looks at two media sources who publicly misrepresented PolitiFact.

We think PolitiFact may be expected to protect its reputation from misrepresentations coming from mainstream sources.

We have two recent examples of such misrepresentation.

Thomas Baekdal

Thomas Baekdal bills himself as a new media analyst. Apparently some people take him seriously. But Baekdal tried to support his views on public misinformation using data from PolitiFact, data that Baekdal massaged with his own formula, as he explains:
I came up with another system. It's based on a logarithmic scale which works like this:
  • We give 'half-true' a value of 1 and center it on the graph.
  • We give 'Mostly True' a value of 2, and 'True' a value of 5. The idea here is that we reward not just that something is true, but also that it provides us with the complete picture (or close to it).
  • Similarly, we punish falsehoods. So, 'Mostly False' is given a value of -2, and 'False' a value of -5.
  • Finally, we have intentional falsehoods, the 'Pants on Fire', which we punish by giving it a value of -10.
Sound reasonable?
No, it doesn't sound reasonable.

It's not reasonable because, regarding the final point, PolitiFact does not define the "Pants on Fire" rating as Baekdal does, and in fact does not take deceitful intent into account in doling out its ratings.
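For concreteness, here is a minimal sketch of the weighting Baekdal describes. The point values come straight from his bullet list; the averaging step is our assumption about how individual ratings get aggregated into his graphs:

```python
# A sketch of Baekdal's weighting scheme as he describes it.
# The aggregation (a simple average) is our assumption.

BAEKDAL_SCALE = {
    "True": 5,
    "Mostly True": 2,
    "Half True": 1,      # centered on his graph
    "Mostly False": -2,
    "False": -5,
    "Pants on Fire": -10,  # scored as an INTENTIONAL falsehood, a
                           # definition PolitiFact itself never gives
}

def baekdal_score(ratings):
    """Average Baekdal-style score for a list of Truth-O-Meter ratings."""
    return sum(BAEKDAL_SCALE[r] for r in ratings) / len(ratings)

# Under this weighting, one "Pants on Fire" cancels two "True" ratings,
# even though PolitiFact never established intent for any of them:
print(baekdal_score(["True", "True", "Pants on Fire"]))  # 0.0
```

That -10 penalty is where the distortion enters: it encodes an accusation of deliberate deceit that the underlying ratings never make.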

So Baekdal is misinforming people by misrepresenting PolitiFact's rating scale and purpose. Nothing to see here? Should PolitiFact just look the other way? There's an incentive to do that: Baekdal claims the data show PolitiFact's impartiality. PolitiFact might like to preserve that fiction.

Doyle McManus

Doyle McManus works in Washington D.C. as an L.A. Times political columnist. McManus wrote a thinly-veiled "Hillary for President" column published on May 18, 2016. The column, "How brazenly can a candidate lie and get away with it? We're going to find out with Donald Trump," like Baekdal's article, misrepresented PolitiFact:
Trump fibs so often that the fact-checking website Politifact awarded him its 2015 “Lie of the Year” award for his entire body of work, a lifetime achievement award for prevarication.
It's very easy to check PolitiFact's reason for awarding its 2015 "Lie of the Year" to "the campaign misstatements of Donald Trump."
In considering our annual Lie of the Year, we found our only real contenders were Trump’s -- his various statements also led our Readers’ Poll. But it was hard to single one out from the others. So we have rolled them into one big trophy.

To the candidate who says he’s all about winning, PolitiFact designates the many campaign misstatements of Donald Trump as our 2015 Lie of the Year.
The question is, how did McManus and the Times manage to get the facts wrong?

Should PolitiFact let this misinformation stand, or call on its PunditFact unit to fact check McManus?

Is there nothing to see here?


It's worth noting that we have taken action to try to correct both errors, contacting Baekdal via Twitter and email and McManus with a comment on his article as well as Twitter. Neither man seems so far inclined to fix his errors. We will update on this point if we discover either has corrected the record.

Wednesday, May 18, 2016

Debra J. Saunders: "PolitiFact or PolitiFable?"

The San Francisco Chronicle's conservative columnist Debra J. Saunders published a nice blasting of PolitiFact on May 17, 2016.

In "PolitiFact or PolitiFable?" Saunders notes that PolitiFact California gave Pat Buchanan a "Mostly False" rating for claiming that half of California households speak a language other than English in the home.

PolitiFact’s Chris Nelson did some checking and found out that — drum roll — what Buchanan said was pretty accurate. According to the U.S. Census 44 percent of Californians spoke a language other than English at home in 2014. Steven Camarota of the Center for Immigration Studies, crunched the data and concluded 48 percent of Cali households spoke a language other than English at home.

So how did the statement get a “Mostly False” rating? While Nelson noted Buchanan was right on the numbers, he nonetheless concluded Buchanan “wrongly implies that half the state does not speak English.” That is so bogus.
We think Saunders makes a great point. It's not easy to see any such implication from Buchanan.

PolitiFact California:
We checked the second part of Buchanan’s statement, about the percentage of Californians who speak a foreign language at home.

It’s a claim that was close to correct on the numbers, but wrongly implies that half the state does not speak English.
PolitiFact California reasons speciously. If half of Californians did not speak English then it would not matter whether they are at home when they speak a language other than English. If people do not speak English then they do not speak English, regardless of location.

We give the last word to Saunders:
Note to PolitiFact: When you have to make up something you think someone implied, then you should get your facts straight.

Saturday, May 14, 2016

PolitiFact's throne of lies

PolitiFact editor Angie Drobnic Holan told some whoppers on May 13, 2016.

We take a dim view here of fact checkers making stuff up. Here's what Holan had to say in her opinion piece about the 2016 election:
Our reporting is not "opinion journalism," because our sole focus is on facts and evidence. We lay out the documents and sources we find; we name the people we interviewed. The weight of evidence allows us to draw conclusions on our Truth-O-Meter that give people a sense of relative accuracy. The ratings are True, Mostly True, Half True, Mostly False, False and Pants on Fire. Readers may not agree with every rating, but there should be no confusion as to why we rated the statement the way we did.
Holan's spouting hogwash.

"Our reporting is not 'opinion journalism,' because our sole focus is on facts and evidence." Rubbish. PolitiFact establishes the direction of its fact checks based on its interpretation of a subject's claim. PolitiFact tends to have a tin ear for hyperbole and jest, and that's just the tip of PolitiFact's iceberg of subjectivity. The "Truth-O-Meter" rating system is unavoidably subjective. Perhaps Holan's qualifier "Our reporting" is intended as weasel-words avoiding that fact. Is the "Truth-O-Meter" rating PolitiFact's "reporting"? If not, Holan weasels her way to some wiggle room.

"We lay out the documents and sources we find; we name the people we interviewed." Okay, fine, but what if PolitiFact's uncovering of documents and sources was biased? What if PolitiFact conveniently finds a professional "consensus" on a key issue based on a handful of experts who lean left? Is that an objective process? Isn't that process easily led by subjective opinions? What if PolitiFact unexpectedly overlooks/ignores a report from the Congressional Budget Office when it typically counts CBO reports as the gold standard?

"The weight of evidence allows us to draw conclusions on our Truth-O-Meter that give people a sense of relative accuracy." As we pointed out above, the "Truth-O-Meter" ratings are impossible to apply objectively. For example, PolitiFact judges between "needs clarification," "leaves out important details" and "ignores critical facts." Those criteria do not at all lend themselves to objective demarcation. Where does one of them end and the next one begin? Republican presidential candidate Mitt Romney's 2012 Jeep ad certainly had more than "an element of truth" to it, yet PolitiFact gave the ad its harshest rating, "Pants on Fire," supposedly reserved, if the definitions mean anything, for statements that do not possess an element of truth. The "weight of evidence" is decided by the subjective impressions of PolitiFact editors, not by meeting objective criteria.

"The ratings are True, Mostly True, Half True, Mostly False, False and Pants on Fire." Yes, those are the ratings. If we focus just on this statement we could give Holan a "True" rating. That reminds us of another subjective aspect of PolitiFact's process. PolitiFact decides whether to consider a whole statement or just part of a statement when it applies its ratings. Is that decision based solely on facts and evidence? Of course not. It's just another part of PolitiFact's subjective judgment.

"Readers may not agree with every rating, but there should be no confusion as to why we rated the statement the way we did." From a paragraph full of howlers, this may be the biggest howler. PolitiFact editors say they're often divided on what rating to give. They say it's one of the hardest parts of the job. John Kroll, who worked at the Cleveland Plain Dealer, PolitiFact's original partner for PolitiFact Ohio, said the decision between one rating and another often amounted to a coin flip. It's outrageous for Holan to sell the narrative that the ratings are driven only by the facts.

Holan smells of beef and cheese.

Friday, May 13, 2016

Thomas Baekdal: 'These graphs also illustrate how impartial PolitiFact is' (Updated)

Danish blogger Thomas Baekdal wrote up a lengthy piece on public misinformation, "The increasing problem with the misinformed," published on March 7, 2016. The piece caught our interest because Baekdal used graphs of PolitiFact data and made some intriguing assertions about PolitiFact, particularly the one quoted in our title. Baekdal said his graphs show PolitiFact's impartiality.

We couldn't detect any argument (premises leading via logical inference to a conclusion) in the article supporting Baekdal's claim, so we wrote to him asking for the explanation.

Then a funny thing happened.

We couldn't get him to explain it, not counting "the data speaks for itself."

So, instead of a post dealing with Baekdal's explanation of his assertion, that his graphs show PolitiFact's impartiality, we'll go over a few points that cause us to doubt Baekdal and his conclusion.

1) Wrong definition
Jeff was the first to respond to Baekdal's article, correctly noting on Twitter that Baekdal misrepresented PolitiFact's "Pants on Fire" rating. Baekdal wrote "(T)hey also have a 6th level for statements where a politician is not just making a false statement, but is so out there that it seems to be intentionally misleading." But PolitiFact never explicitly describes its "Pants on Fire" rating as indicative of deliberate deceit. The closest PolitiFact comes to that is calling the rating "Pants on Fire." That's a point we've criticized PolitiFact over repeatedly. It protests that it doesn't call people liars, but its lowest rating makes that accusation implicitly via association with the popular rhyme "Liar, liar, pants on fire!" Readers sometimes take the framing to heart. Perhaps Baekdal was one of those.

2) An unsupported assertion

Baekdal asserted, as a foundation for his article, "(W)e have always had a problem with the misinformed, but it has never been as widespread as it is today." But he provided no evidence supporting the assertion. Should we risk aggravating our misinformed state by accepting his claim without evidence? Or maybe extend him a license for hyperbole?

3) Unscientific approach

Baekdal prepares his readers for his PolitiFact graph presentations by noting potential problems with sample size, but never talks about how using a non-representative sample will undercut generalizations from his data. We think a competent analyst would address this problem.

4) Unscientific approach (2)

Baekdal uses an algorithm to score his PolitiFact data, statistically punishing politicians for telling intentional falsehoods if they received a "Pants on Fire" rating. But PolitiFact never provides affirmative evidence in its fact checks that a falsehood was intentional. The raw data do not show the wrong Baekdal claims to punish with his algorithm. Baekdal punishes others for his own misinterpretation of the data.

5) Unscientific approach (3)

Jeff also pointed out via Twitter that Baekdal accepts, without question, the dependability of PolitiFact's ratings. Baekdal offers no evidence that he considered whether PolitiFact might have a poor record of accuracy.


What point was Baekdal trying to make with his PolitiFact stats? It looks like he was trying to show the unreliability of politicians and pundits. Why? To highlight concern that people feel more mistrust for the press than for politicians. Baekdal lays a big share of the blame on the press, but apparently fails to realize PolitiFact is guilty of many of the problems he describes in his criticism of the press, such as misleading headlines.

We see no reason to trust Baekdal's assessment of PolitiFact's impartiality, or his assessment of anything else for that matter. His research approach is not scientific, failing to account for reliability of data, the reality of selection bias or alternative explanations of the data. His unwillingness to justify his claims via email did nothing to change our minds.

We continue to extend our invitation to Baekdal to explain how his graphs support PolitiFact's impartiality.

Hat tip to Twitterer @SatoshiKsutra for bringing Baekdal's article to our attention.

Update May 16, 2016:

Baekdal responds:

And this after we did him the favor of not publicly parsing his email responses.

Baekdal has something in common with some of the likewise thick-skinned folks at PolitiFact.

Tuesday, May 10, 2016

What is 'empirical evidence' to PolitiFact?

At Stanford University, Democratic presidential candidate Hillary Rodham Clinton claimed some things aren't working in the war on terror. Clinton named torture among them:

Enter PolitiFact:

PolitiFact: Clinton claims empirical evidence shows torture does not work

PolitiFact thus endorses Clinton's claim that empirical evidence shows torture does not work.

Do the people at PolitiFact understand the concept of "empirical evidence"?

It's striking that PolitiFact finds Clinton's statement "True" despite not supplying a single shred of evidence supporting Clinton's claim.

PolitiFact correctly notes the difficulty with the scientific study of torture:
Because nobody's going to volunteer to be part of a scientific study where you might get tortured — ethics review boards might be apoplectic about such a proposal — the only way to examine the issue is through case studies.
Game over. Case studies provide a type of empirical evidence, but anecdotal evidence, being anecdotal, does not lend itself to generalizations. Therefore, case studies do not empirically support the broad generalization that torture does not work. That goes double for case studies not specifically geared toward answering the issue scientifically.

The entire exercise, parading under the banner of "fact-checking," at best substitutes the opinions of experts for empirical data. And it is worth emphasizing that the experts' opinions have no firm basis in empirical data--only case studies.

At worst, the fact check treats congressional reports as proof.

PolitiFact's summary, even as it gives Clinton a "True" rating, implicitly confesses its failure:
Clinton said that when it comes to fighting terrorism, "Another thing we know that does not work, based on lots of empirical evidence, is torture."

When it comes to the real goal of getting useful intelligence, the preponderance of the evidence shows that the details interrogators will get from a detainee can typically be acquired without torture. When torture is used, the "information" extracted is likely to be fiction created by a prisoner who will say anything to get the punishment to stop.

All ethical issues aside, the experts say, it doesn't work because it is extremely inefficient and, in many ways, counterproductive.
PolitiFact says "the preponderance of the evidence" shows information obtained through torture might be obtained without torture. But Clinton said we know, thanks to empirical evidence, that torture simply does not work. She calls it a "fact." The second paragraph in PolitiFact's summary does not support Clinton's claim, despite PolitiFact using it as justification. Compare PolitiFact's approach to the claim it is "clear" Clinton broke the law with her handling of top secret information. It's a different standard.

PolitiFact's experts say that torture does not work because it is extremely inefficient. But a thing that works inefficiently works, albeit inefficiently. It clouds the issue to claim something doesn't work because it works inefficiently.

PolitiFact's fact check clouds the issue on torture. We do not possess enough empirical data to know torture does not work. Giving Clinton a "True" rating makes a total mockery of journalistic objectivity.

      Left Jab: "Politifact Is All Wrong About Clooney-Clinton Fundraiser"

      Liam Miller, blogging for the Huffington Post, lets PolitiFact have it for slanting its fact checks in favor of Hillary Rodham Clinton. Miller sees this bias as harmful to Clinton's rival for the Democratic presidential nomination, Vermont's Sen. Bernie Sanders.

      In his article "PolitiFact is All Wrong About Clooney-Clinton Fundraiser," Miller gives two examples of the bias harming Sanders.

      First, Miller questions PolitiFact's revised "Half True" rating (down from its original "Mostly True" rating) of George Clooney's claim that a fundraiser he attended would mostly help down-ballot Democrats:
      I suppose we could really use a logical approach to understanding the rating system. What parts of Clooney’s statement is true?

      Clearly not “overwhelming amount”, since only 1% went to the states. That qualifier on the word “money” was the point of his statement, though; it isn’t even correct if you change it from “overwhelming amount” to “most of”; or even “a lot of”. It’s only accurate if you change it to “a little bit”. His whole statement only has persuasive power if the vast majority of the money is going to downballot races, and it’s clear that that is not true.
      Miller goes on to conclude that both main elements of Clooney's statement were wrong, and that the "Half True" rating remains way too generous to Clooney.

      Next, Miller hits PolitiFact for its rating of Sanders' claims that national polls show him performing better than Clinton does against Donald Trump.

      PolitiFact rated Sanders' claim "Mostly True" after finding Sanders' representation of the poll results was accurate.

      Miller (bold emphasis Miller's):
      Apart from the disingenuousness of saying “yes it’s true but we don’t think that’s what matters”, everything in their “additional context” itself requires a fact-check. Politifact didn’t bother to cite or quote their “polling experts”, or say what they mean by “well before”. Sanders does a lot better than Clinton against Trump right now, and that is relevant to November. Their justification is loaded with opinion and presumption, and is far from the simple checking of the numbers that it could and should be.
      Both of Miller's criticisms point out real problems with PolitiFact. But Miller apparently doesn't realize he's criticizing PolitiFact's standard operating procedure. PolitiFact's history of ratings is littered with examples of false-but-true and true-but-false ratings. PolitiFact's justifications are often "loaded with opinion and presumption."

      And does it really pass the sniff test that PolitiFact could be biased against Sanders to Clinton's benefit but not show a similar bias against Clinton's GOP opponent in the general election?

      Miller makes the point that PolitiFact's parent paper, the Tampa Bay Times, endorsed Clinton.
      Politifact’s parent paper, the Tampa Bay Times, endorsed Clinton. Others have shown their unfairness toward Sanders. It’s laborious to make each case, since their ratings system must involve some subjectivity, and you have to do a lot of work to fact check anything (hence the value in having a neutral fact-checker, like Politifact aspires to be, and most of the time is).
      It's true the Times endorsed Clinton. But the Times also endorsed Jeb Bush in the Republican primary. Is that a good supporting argument for the idea that PolitiFact's coverage is weighted in favor of Bush against the other Republican candidates?

      What would Miller say if he knew that the Times, in over 100 years of publishing, has never endorsed a Republican presidential candidate in the general election?

      Most of the time a neutral fact-checker?

      Monday, May 2, 2016

      A "key bit of data" sometimes

      We say PolitiFact uses inconsistent methods in fact-checking political claims. An April 27, 2016 fact check of the "New Day for America" Super PAC serves as a recent example, paired with a comparable case from the 2012 presidential election.

      The 2016 claim from "New Day for America" charged Democratic presidential candidate Hillary Rodham Clinton with promising to raise taxes by $1 trillion. PolitiFact's conclusion said the number was about right but charged the Super PAC with ignoring the 10-year time frame over which the tax increase would produce its revenue. PolitiFact added that Clinton would not even be in office for the entire 10 years. PolitiFact also claimed it was important who paid the taxes, calling that "another key bit of data." "New Day for America" ignored two key bits of data, so "Half True":
      [Clinton's] plan does, in fact, call for raising a trillion dollars, but it would do so over 10 years — longer than she could serve as president, even if she were re-elected. So if she brought in roughly $100 billion per year, even a two-term Clinton administration couldn't fulfill a promise to bring the total to $1 trillion.

      Also, the statement ignores another key bit of data — that the money would be raised by tax changes targeted to the richest Americans, a group that has seen its top tax rate drop dramatically since the 1950s and early 1960s, when the marginal tax rate was over 90 percent.

      Because the statement is partially accurate but leaves out important details or takes things out of context, we rate it Half True.
      We couldn't remember PolitiFact using this approach to a tax proposal before, so we looked for a comparable example from PolitiFact's past. It turned out President Obama's re-election campaign back in 2012 said challenger Mitt Romney's tax plan would "add trillions to the deficit." PolitiFact examined that claim along with the associated claim that President Obama's tax plan would cut deficits by $4 trillion. In both cases PolitiFact found that the numbers were inflated (cherry-picking high estimates). The details of Romney's tax plan were unknown, so the accusations about his tax plan had little foundation in fact.

      More importantly, the effects of the tax plans were estimated over a 10-year period in both cases. Neither Romney nor Obama would serve as president over the full 10 years. The Obama ad said Romney would cut taxes for millionaires, but ignored other tax cuts and tax increases in the plan. Good enough for PolitiFact? The Obama ad received a "Half True" rating, the same as "New Day For America," even though the Obama ad committed the same errors and more.

      We encourage readers to look at PolitiFact's rating of the Obama campaign, especially the summary paragraphs, to see how PolitiFact largely forgave the fundamental inaccuracies in the ad. Compare it to the summary PolitiFact offered for the ad coming from "New Day For America."

      PolitiFact simply does not use the same standards for the two ads.

      In Fact-Checking, This Is a Big Deal

      Why do we nitpick PolitiFact over this kind of thing? Both "Half True" ratings are justified on reasonable grounds, aren't they?

      No. Full stop. Good fact-checking requires a consistent approach to the issues. PolitiFact repeatedly fails to achieve consistency.

      PolitiFact's rating system has always been a sham, because PolitiFact follows no rigid definition for its ratings. Sure, PolitiFact usually attempts justifications, but they are all over the map. For example, PolitiFact once gave Mitt Romney a "Half True" rating because the problems with his claim matched PolitiFact's definition of "Mostly True." Seriously, that's how PolitiFact justified its rating of Romney. PolitiFact has let the error stand for years.

      Critics left and right have panned PolitiFact's rating system. Defenders often claim that the ratings aren't important. The important thing, they say, is the detailed information we get in the fact check. But if PolitiFact varies in its approach, finding a problem in one instance and overlooking that same problem in another instance, it gives its readers poor fact-checking.

      Of course the problem is worse when the inconsistencies favor one political leaning over another.

      In defending Clinton from the "New Day For America" ad, PolitiFact pointed out that her tax plan tries to increase taxes on the rich, ostensibly to help restore the balance in effect in the 1950s:
      (T)he statement ignores another key bit of data — that the money would be raised by tax changes targeted to the richest Americans, a group that has seen its top tax rate drop dramatically since the 1950s and early 1960s, when the marginal tax rate was over 90 percent.
      PolitiFact evidently did not bother to check its facts on this point. PolitiFact writer C. Eugene Emery Jr. supported his statement by providing links to raw data showing top marginal income tax rates. But changes to tax law affecting deductions have kept the effective tax rates on rich people from changing much. Hardly anybody actually paid the top marginal income tax rate in the 1950s. Arpit Gupta, in a paper for the Manhattan Institute, pointed out that popular liberal economists Thomas Piketty and Emmanuel Saez admitted as much:
      The reduction in top marginal individual income tax rates has contributed only marginally to the decline of progressivity of the federal tax system, because with various deductions and exemptions, along with favored treatment for capital gains, the average tax rate paid by those with very high income levels has changed much less over time than the top marginal rates.
      Why is it okay to call out others for leaving out key information while at the same time omitting key information in one's own reporting? Ask PolitiFact. But don't expect an answer.