Saturday, January 30, 2016

PolitiFact's second-most popular response to criticism

We all know PolitiFact's most popular response to criticism: Ignore it.

Today we reveal PolitiFact's second-most popular response to criticism, via example. The example comes from PolitiFact's Mailbag feature, Jan. 29, 2016:
One reader criticized our story, "The presidential scorecards so far," which listed the summaries of our fact-check ratings for the 2016 presidential field, candidate by candidate.

Such a comparison, the reader wrote, "is absolutely meaningless, because the statements selected for Truth-O-Meter ratings for each candidate were neither scientifically nor randomly selected. In previous responses to my emails, Truth-O-Meter personnel have stated that the criteria used for selection of statements is documented and thus is sufficient for journalistic integrity. I have no problem with what statements you select for fact-checking, or the process used in the selection. However, trying to lump all such statements together into a scorecard, and then comparing scorecards for different people, when different statements were rated, is scientifically meaningless, and, in my opinion, advocacy journalism rather than fact-based journalism."

***
And there you have it.

PolitiFact's second-most popular response to criticism involves acknowledging the existence of the criticism and offering no response to it.

Friday, January 29, 2016

Response to G, of the G'nO

Our post on the worst of PolitiFact's Reddit AMA drew a lengthy response from "G, of the G'nO." Interested readers can find that comment through the link in uninterrupted form. Here, we'll respond point by point in conversational fashion, breaking G's comment up into separate sections as we offer comment.
Wow. I was actually excited to see this site but, it looks so far like I might be the only one who has -- but, I'll step up and comment on this anyway. Overall, I think the questions you posed in this post are valid, on track, and worthwhile asking. Unfortunately, I also feel your interpretation of the answers to the questions posed completely nullifies them, takes them way off base, and makes it all a virtual waste of time.
We can't wait to read the specifics.
Your first two-part question:
Part 1: I would have to say that I believe (but, of course cannot state with certainty) that Sharockman most likely knows or, at least, has a large degree of insight into the ideological make up of his staff. My gut agrees with you in saying that he dodged that question.
Okay, on this one it sounds like we agree. Just to be clear, however, the question was not originally posed by us. It is "ours" in the sense that we used it as a heading in our post.
Part 2: From this point you state "it gets worse" but what actually happens is you just get a bit silly. Sharockman gave a reasonable answer -- you just didn't like it. That said, your idea that they could have "more balance" could certainly hold water. But then it just begs the question as to who wants to expend the energy schlepping that bucket of water around. After all, this portion of the two part question primarily seems to address the possible bias around the rating system itself. If I make the claim "that purple ball is blue" we would be able to read it a multitude of ways: false, mostly false, half true, mostly true. But, regardless of its ultimate rating, the fact still remains that the ball is purple and everyone except the colorblind should be able to get the point.
While we did not write anything about not liking Sharockman's answer, we did offer a substantive criticism of his answer that you do not appear to acknowledge. If Sharockman doesn't know the ideologies of his staff, he has no reason to have any sort of confidence that three or four people in PolitiFact's "star chamber" will produce judgments based on ideological balance. Without that assurance, Sharockman's reassuring words are empty.
Your follow-up question about the predominance of conservatives being fact checked -- I agree: great question.
Again, it wasn't our question. And we did not offer the opinion that it was a great question. We thought Sharockman could have offered a good answer but did not.
But, here again, Sharockman actually provided a good answer and, while it seems like you might be on the scent of making a good point, you never really make it -- and your response is fatally flawed from multiple angles:
** First, your statement "The only way the number of fact checks of Obama carries relevance is if that number is greater than the number of ratings of conservatives." That's just an absolutely silly assertion. Why would Obama (a single, individual liberal) need a greater number of ratings than all conservatives combined to carry relevance? In the vein of your flimflammery verbiage, I say hogwash!
Quite simply, touting the large number of ratings given to a special case (the only U.S. president PolitiFact has ever really bothered to rate, as well as a two-time presidential candidate) does nothing to address the imbalance charged by the reader. It's a meaningless metric in answer to the question, and any fact checker should know better. It's like saying more Raiders than Texans make the Pro Bowl and somebody objecting by saying "But the Texans' Joe Smith has been named to the Pro Bowl three years straight." It doesn't address the issue.

Combining the Obama rating with the ratings of other Democrats could mean something. But the Obama ratings by themselves mean nothing. It's a nutty case of cherry-picking.
** From there you say you found 68 ratings for Clinton since 2010. OK, good. You then make the point that a Clinton statement in 2008 is not relevant to 2016. Agreed -- but, the statement from 2008 lands outside of the range of those 68 ratings you found and doesn't really seem to apply to anything. However, since you made the point, that point can actually be used to poke holes in your next comment -- you found 86 ratings for Trump since 2011. So what? Sharockman's claim that, [of the 2016 candidates, Clinton was fact checked the most] may have actually been made within the context of relevance to the 2016 campaign -- which would pretty much make everything before 2015 moot. How many times has Clinton been rated since the beginning of the 2016 campaign? I don't know. I haven't checked. Like I said, you may be on the scent of making a good point -- but you have yet to make it.
Don't neglect our point, which is that Sharockman isn't making any sort of reasonable point. If you look at the ratings, both Trump and Clinton are arguably putting themselves forward (as candidates--ed.) at the beginning of the timeline we identified. We didn't go into detail because we're not fact-checking Sharockman. We're just poking holes in his argument. Compare Trump to Clinton however you wish as a 2016 candidate. Chances are exceedingly high you'll need to cheat to put Clinton in the lead on the number of ratings.

You don't appear to have made any sort of argument that rescues Sharockman from the charges we've made against him.
As to the difference between "false" and "pants on fire" -- I actually like this question but, really, who cares?
We do. And so should you. One of the main features of our site is a research section. The most developed research project we've published so far looks at PolitiFact's bias in applying its "Pants on Fire" ratings. The key premise of that research is the lack of any objective means of distinguishing between "False" ratings and "Pants on Fire" ratings. We're always amused when figures from PolitiFact address the issue in a way that supports the premise of our research.
There are obviously five general ratings which are all fairly defined with the criteria for each. False is false -- and, you know what -- pants on fire is also false. It just has a little flair attached for style and enjoyment of the readers -- it's childlike and funny. It points out things of a ridiculous nature -- like the post to which I write this reply. Is that subjective? Sure, I guess it is…
The other ratings are better defined than the giant blur that divides "False" from "Pants on Fire." But there's no good evidence PolitiFact pays particular attention to the definitions it gives for its ratings. A literally true statement can receive any rating. We pick on the "Pants on Fire" rating because the definition offers no real guidance at all in applying any objective criterion. And you appear to at least lean toward our view that the rating is essentially a subjective measure.

So, let's see what we've got:

  • You more-or-less agreed with us twice.
  • You said Sharockman's reasoning was good but didn't support your statement.
  • You said our view of the importance of the Obama ratings was hogwash but didn't say why.
  • You think maybe Clinton was rated more times than Trump (do maybes support Sharockman?).
  • You don't know why the line between "False" and "Pants on Fire" is important. Hopefully we cleared that up for you.

You really have to marvel over how much better we are than PolitiFact at responding to critics.

The worst of PolitiFact's Reddit AMA

Friday, Jan. 29, 2016, PolitiFact conducted an "Ask Me Anything" session on Reddit.

And here's the Worst Of.

What is the ideological makeup of the Politifact team? How do you ensure there is balance in this respect?

Great question, right? Recently promoted PolitiFact director Aaron Sharockman double-fumbled this one. Part one:

Sharockman:

Honestly, I have no idea people's party affiliations. I'm a registered NPA, though I've been registered as a Democrat and a Republican in the past.
It's plausible that Sharockman doesn't know the party affiliations on the PolitiFact team (we happen to know the affiliations of a few of them, but we don't tell because disclosure could affect their job prospects). What's less plausible is the idea that he has no insight into the ideologies of the team. The question was about ideologies, not political affiliations. Sharockman dodged the first part of the question.

It gets worse:
The meat of your question is how to do we ensure balance. On that, I can offer a better answer. The writer who writes a fact-check proposes a rating (True, False, Pants on Fire, etc.), but it's actually a panel of three judges (editors) who decide the rating that gets published. So in reality, four people have a vote in every fact-check. I think that makes us sort of unique in the fact-checking game.

The point of having three editors involved is so that different people can offer their viewpoints, analysis to best inform the fact-check. And to make sure balance does exist.
If Sharockman doesn't know the ideologies of the staff, then what guarantee can he offer that having three editors consider the issue serves to make sure balance exists? Answer: He can't offer any such guarantee. It's just words.

I reached the discussion in time to offer a follow up question:
Isn't a voting process like that primarily a guarantee that the majority ideology controls outcomes?
Sharockman responded:
We're not the Supreme Court. We haven't been appointed R's or D's. And I'd say, we often strive for a unanimous decision. So in the event of a 2-1 vote, we'll often ask for more reporting, or clarification on a point to try and get to a unanimous verdict (so to speak).
Indeed, aren't all PolitiFact staffers hired by the editorially liberal (consistently liberal) Tampa Bay Times? PolitiFact's "star chamber" would likely have more balance if it were constructed like the Supreme Court.

Asking for more reporting if there's a holdout does offer some promise for giving greater voice to dissent, but to what extent if all the judges trend left? The voice that's absent from the table will not receive a hearing.

PolitiFact's still flubbing this question just as it has for years.


Follow up: why is it predominantly conservatives who are fact checked and not liberals?


There are reasonable answers to this question, we think. PolitiFact opted for something else.

Sharockman:
Disagree. We've fact-checked President Barack Obama more than any other person. http://www.politifact.com/personalities/barack-obama/

Of the 2016 candidates, who have we fact-checked the most? Hillary Clinton http://www.politifact.com/personalities/hillary-clinton/
If PolitiFact has done more fact checks of conservatives than liberals then of what relevance is the number of times PolitiFact has fact checked Barack Obama? The only way the number of fact checks of Obama carries relevance is if that number is greater than the number of ratings of conservatives.

As for the second part of Sharockman's answer, we found 68 ratings of Clinton since 2010, including at least one flip-flop rating. We don't think Clinton's statements about John McCain in 2008 count as statements by a 2016 presidential candidate. Not in any relevant sense, anyway.

Since 2011, PolitiFact has done 86 fact checks of Donald Trump. Eighty-six is greater than 68.

We don't keep track of how many more stories PolitiFact does about conservatives compared to liberals. But we know flim-flam when we see it, and that's what Sharockman offered in answer to this question.

What is the difference between "False" and "Pants on Fire?"

Jeff said he was planning on asking this one. But somebody else beat us to it.

PolitiFact editor Angie Drobnic Holan provided the type of answer we're used to seeing from PolitiFact:

We actually have definitions for all of our ratings. False means the statement is not accurate. Pants on Fire means the statement is not accurate and makes a ridiculous claim. Three editors vote on every rating.
Yes, that's the difference between the two according to PolitiFact's definitions. But what's the real difference between the two? My follow-up question still hangs:
Is there any objective difference between the two ratings? An objective measure of "ridiculous"?
Holan's answer from December 2014 still can't be beat:
So, we have a vote by the editors and the line between "False" and "Pants on Fire" is just, you know, sometimes we decide one way and sometimes decide the other.
She'd go into the science involved, but y'all wouldn't understand.

What about that website "PolitiFact Bias"?

Somebody brought up our website and Sharockman offered a comment. We think Sharockman's comment was deleted, but we found it on Sharockman's post history page.

A website devoted to saying another website is 100 percent biased seems seem objective to you? 


I asked Sharockman where he got his "100 percent" figure. His description doesn't comport with the way we describe PolitiFact's bias. Sharockman made it up on the spot, like a politician.

Here's looking forward to the next PolitiFact AMA.

More Clintonian PolitiMath

With our PolitiMath posts we look for correlations between numerical errors and PolitiFact's "Truth-O-Meter" ratings. Today's item looks at PolitiFact's rating of Democratic presidential candidate Hillary Rodham Clinton, who said she is the only candidate with a specific plan to fight ISIS.

PolitiFact said there were at least seven such plans among presidential candidates and gave Clinton a "False" rating (bold emphasis added):
While Clinton’s plan is more detailed, by some measurements, than those of other candidates, at least seven other candidates in both parties have released multi-point plans for taking on ISIS. Some plans -- such as those from Bush and Rubio -- approach Clinton’s in either length or degree of detail. In fact, there’s a significant degree of overlap between the agenda items in Clinton’s plan and in plans released by other candidates.

We don’t see strong evidence for Clinton’s claim that she’s the only member of the 2016 field with a "specific plan." We rate the claim False.
Clinton, using PolitiFact's estimates as a basis, underestimated the number of specific plans for defeating ISIS by 86 percent.
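For readers who want to check the arithmetic, here is a minimal sketch of how we figure a percentage underestimation in these PolitiMath posts, on the assumption that Clinton's "only candidate" claim counts as one plan measured against the at-least-seven plans PolitiFact identified (the counts are our reading of the fact check, not PolitiFact's own math):

# Minimal sketch: underestimation as (actual - claimed) / actual.
claimed_plans = 1    # Clinton: she is the "only" candidate with a specific plan
actual_plans = 7     # PolitiFact: at least seven candidates released multi-point plans

underestimation = (actual_plans - claimed_plans) / actual_plans
print(f"Underestimation: {underestimation:.0%}")   # prints "Underestimation: 86%"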

We found a close parallel to this case involving Republican candidate Rick Santorum (an 83 percent underestimation). PolitiFact rated Santorum "False" also.

PolitiFact trumps the truth

No, this is not one of those classic cases where PolitiFact announces that a claim is true but rates it "False" anyway. This is just a case where PolitiFact's research backed the claim and PolitiFact rated the claim "False" in spite of the evidence.

The victim this time? Donald Trump.

PolitiFact sets the stage with a quotation from Trump's appearance on the CNN show Anderson Cooper 360. But PolitiFact does not include the question Trump was asked. From Trump's response, it looks like Trump was asked why he chose not to attend a presidential primary debate on Fox News:
He (Trump) says a biting Fox News release is why he pulled the plug.

"Well, I’m not a person that respects Megyn Kelly very much. I think she’s highly overrated. Other than that, I don’t care," he told CNN an hour before the debate. "I never once asked that she be removed. I don’t care about her being removed. What I didn’t like was that public relations statement where they were sort of taunting. I didn’t think it was appropriate. I didn’t think it was nice."

His assertion that he "never once" asked for Kelly’s removal piqued our interest.
Was Trump saying he never set Kelly's removal as a precondition for attending the debate? That seems possible, but without the full transcript we can't say.

Do we trust PolitiFact to adequately assess the context and provide that information? No, of course not. PolitiFact makes too many simple errors to garner that type of trust.

After PolitiFact questions whether Trump asked for Kelly's removal, it provides a ton of evidence supposedly supporting the "False" rating it eventually gave to Trump. Except it doesn't add up that way:
"Based on @MegynKelly's conflict of interest and bias she should not be allowed to be a moderator of the next debate," Trump tweeted Jan. 23 and made similar comments in a campaign rally in Iowa the same day
Saying Kelly should not be allowed to be a moderator is not the same as asking for Kelly's removal as moderator. It's certainly not a "full-throated" request for Kelly's removal.
Trump repeated his position that Kelly "should recuse herself from the upcoming Fox News debate," according to Boston Globe reporter James Pindell.
Likewise, saying Kelly should recuse herself from the debate is not the same as asking for her removal.
According to New York, Trump began to threaten a boycott a day later and toy with the idea of holding his own event.
If Trump threatens not to attend the debate and starts talking about holding his own event in the same time slot, is he asking for Kelly's removal? If so, the request is indirect.
Two days before the debate, Trump polled his Twitter followers, asking,"Should I do the #GOPDebate?" (Of over 150,000 responses, 56 percent were "Yes.")

In the tweet, Trump posted a link to an Instagram video, in which he said, "Megyn Kelly is really biased against me. She knows that, I know that, everybody knows. Do you really think she can be unbiased in a debate?"
We see that PolitiFact strikes the same note over and over. If Trump threatens to skip the debate and offers Kelly's involvement as a reason, PolitiFact reasons, then Trump is asking for Kelly's exclusion.

But that's not a case of black-and-white truth, is it?

If Trump never literally asked for Kelly's exclusion, then it's literally true that Trump never asked for Kelly's exclusion. At most, he's implying that he might participate in the debate if Fox removed Kelly from her role in moderating the debate.

Let's look at how PolitiFact justifies its rating of Trump in the conclusion (bold emphasis added):
Trump said, "I never once asked that (Megyn Kelly) be removed" as a debate moderator.

This statement greatly downplays Trump’s comments ahead of the debate, even if his absence really had more to do with a mocking Fox News release in the end.
In the same context where Trump said "I never once asked that she be removed," he repeated his criticisms of Kelly ("I’m not a person that respects Megyn Kelly very much. I think she’s highly overrated"). And if Trump's main reason for not attending the debate was the nature of the Fox communications in response to his complaints, then downplaying those comments is appropriate.

PolitiFact never ruled out the possibility that Trump stayed away from the debate for the reason he claimed (see bold emphasis above).

PolitiFact's evidence does not support a "False" rating for Trump. That would require an unambiguous example of Trump asking for Kelly's removal. There's nothing like that in PolitiFact's fact check. PolitiFact also failed to provide enough context from the Anderson Cooper interview with Trump to allow readers to verify PolitiFact's judgment.

This is "gotcha" journalism designed to continue feeding a narrative PolitiFact is peddling about Trump. The evidence says Trump was literally correct in saying he did not ask for Kelly's removal. The missing context might further vindicate Trump.


Update Jan. 30, 2016: Clarified the third paragraph to make more clear the context of Trump's appearance on Anderson Cooper 360. 
Update Feb. 29, 2016: Fixed formatting to make clear "His assertion that he "never once" asked for Kelly’s removal piqued our interest" was part of a quotation of PolitiFact. Also added a link to the original PolitiFact article--we apologize for the delay, for we take it as standard practice to link to all our sources.

Wednesday, January 27, 2016

Martin O'Malley, PolitiFact, inconsistency and PolitiMath

One of the things that makes PolitiFact's "Truth-O-Meter" so subjective is the lack of any apparent standard for weighting the literal truth of a claim against its underlying point.

PolitiFact gives us a fresh example with the "Mostly True" rating it bestowed on Democratic presidential candidate Martin O'Malley.

O'Malley said that in 1965 the average GM employee could pay for a year's college tuition on just two weeks' wages.

Have a gander at PolitiFact's summary conclusion:
O’Malley said, "Fifty years ago, the average GM employee could pay for a year of a son or daughter’s college tuition on just two weeks [sic] wages."

That’s not quite right -- it would have taken about four weeks of work at GM, not two, to pay for a year at the average four-year college in 1965, and more than that if you take account of taxes. Still, O’Malley has a point that the situation in 1965 was quite a deal compared to today, when a typical auto worker would have to work for 10 weeks in order to pay for a year of tuition at the average four-year college. We rate the statement Mostly True.
Note that even with PolitiFact cutting O'Malley a break on payroll taxes, his estimate still understated the amount of work needed by half.

That's like saying the Empire State Building is 800 feet tall but getting a "Mostly True" because, hey, the underlying point is that it's a tall building.

PolitiMath

With PolitiMath we're looking for correlations between numerical errors and PolitiFact's ratings.

This O'Malley case caught our eye right away:



The story summary makes clear right away that O'Malley was way off with his figure. Despite that, he received the "Mostly True" rating.

What was O'Malley's percentage error?

Using PolitiFact's figures for tuition at a four-year college in 1965 ($607) and for two weeks of gross GM wages ($297.60), and giving O'Malley a pass on payroll taxes, O'Malley underestimated the cost by 51 percent. He cut it roughly in half and received a "Mostly True" rating.
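Here is the arithmetic as a minimal sketch, with the $297.60 figure taken (on our reading of the fact check) as two weeks of gross GM wages:

# Minimal sketch: underestimation as (actual cost - implied cost) / actual cost.
tuition_1965 = 607.00       # PolitiFact's figure for a year at an average four-year college
two_weeks_wages = 297.60    # two weeks of gross GM wages, ignoring payroll taxes

underestimation = (tuition_1965 - two_weeks_wages) / tuition_1965
print(f"Underestimation: {underestimation:.0%}")   # prints "Underestimation: 51%"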

We're not aware of any other percentage error of this magnitude receiving a rating of "Mostly True" from PolitiFact. This might be the record.

Tuesday, January 26, 2016

PolitiFact's partisan correlation correlation

The co-founder and co-editor of PolitiFact Bias, Jeff D, noted this outstanding example of PolitiFact's inconsistency. Jeff's a bit too busy to write up the example and so granted me that honor.

The problem Jeff noted has to do with PolitiFact's treatment of factual correlations. A correlation occurs when two things happen near the same time. When the correlation occurs regularly, it is often taken as a sign of causation. We infer that one of the things causes the other in some way.

Correlation, however, is not a proof of causation. PolitiFact recognizes that fact, as we can see from the explanation offered in a fact check of Lieutenant Governor Dan Patrick (R-Texas):
Lott’s study shows a 25 percent decrease in murder and violent crime across the country from 2007 to 2014, as well as a 178 percent rise in the number of concealed-carry permits. Those two trends may be correlated, but experts say there’s no evidence showing causation. Further, gun laws may have little to nothing to do with rates of falling crime.
PolitiFact ruled Patrick's statement "Mostly False," perhaps partly because Patrick emphasized open carry while Lott's research dealt with concealed-carry.

PolitiFact also noted that correlation does not equal causation while evaluating a claim from Democratic presidential candidate Hillary Clinton. Clinton said recessions happen much more often under Republican presidents:
The numbers back up Clinton’s claim since World War II: Of the 49 quarters in recession since 1947, eight occurred under Democrats, while 41 occurred under Republicans.

It’s important to note, however, that many factors contribute to general well-being of the economy, so one shouldn’t treat Clinton’s implication -- that Democratic presidencies are better for the economy -- with irrational exuberance.
Okay, maybe PolitiFact was a little stronger with its warning that correlation does not necessarily indicate causation while dealing with the Republican. But that doesn't necessarily mean that PolitiFact gave Clinton a better rating than Patrick.

Clinton's claim received a "Mostly True" rating, by the way.

Was Patrick's potentially faulty emphasis on open carry the reason he fared worse with his rating? We can't rule it out as a contributing factor, though PolitiFact wasn't quite crystal clear in communicating how it justified the rating. On the other hand, Clinton left out details from the research supporting her claim, such as the fact that the claim applied to the period since 1947. We see no evidence PolitiFact counted that against her.

Perhaps this comparison is best explained via biased coin tosses.


Post-publication note: We'll be looking at PolitiFact's stories on causation narratives to see if there's a partisan pattern in their ratings.

 

Friday, January 22, 2016

The 2015 "Pants on Fire" bias for PunditFact and the PolitiFact states

Earlier this week we published our 2015 update to our study of PolitiFact's bias in applying its "Pants on Fire" rating.

The premise of the research, briefly, is that no objective criterion distinguishes between a "False" rating and a "Pants on Fire" rating. If the ratings are subjective then a "Pants on Fire" rating provides a measure of opinion and nothing more.

In 2015 the states provided comparatively little data. State franchises, with a few exceptions, seem to have a tough time giving false ratings. The state PolitiFact operations also tend to vary widely in the measurement of the "Pants on Fire" bias. PolitiFact Wisconsin's "Pants on Fire" ratings proportionally treat Democrats more harshly than Republicans, for example.


PolitiFact Florida: PolitiFact Florida's data roughly matched those from PolitiFact National and PunditFact. Those three franchises are the most closely associated with one another since all are based in Florida and tend to share writing and editorial staff. In 2015 the PoF bias number was within a range of five hundredths for each. That's so close that it's suspicious on its face. All three gave Republicans more false ratings than Democrats (4.83, 3.50, 2.43).

PolitiFact Georgia: Though PolitiFact Georgia has operated for a good number of years and has in the past provided us with useful data, that wasn't the case in 2015. PolitiFact Georgia's false ratings went to apparently non-partisan claims.

PolitiFact New Hampshire: PolitiFact New Hampshire historically provides virtually nothing helpful in terms of the PoF bias number. But false ratings for Democrats outnumbered false ratings for Republicans (blue numerals indicate that bias).

PolitiFact Rhode Island: PF Rhode Island rated two statements from Democrats "False."

PolitiFact Texas: PF Texas gave Republicans false ratings an astonishing eight times as often as Democrats. But at the same time, PF Texas produced a PoF bias number harming Democrats. The key to both figures? PF Texas doled out only two false ratings to partisan Democrats, and both were "Pants on Fire" ratings. An entire year with no "False" ratings for Democrats? Texas' previous annual low for "False" ratings given to Democrats was four (a mark it hit twice).

PolitiFact Virginia: PF Virginia achieved perfect neutrality in terms of our PoF bias number. That's the meaning of a 1.00 score. The "Pants on Fire" claims as a percentage of all false claims was equal for Republicans and Democrats.

PolitiFact Wisconsin: PF Wisconsin continued its trend of giving Democrats the short end of the PoF bias measure. That's despite giving Republicans a bigger share than usual of the total false ratings. The 5.00 selection bias number was easily the all-time high for PF Wisconsin, besting the old mark of 1.57 back in 2011.

PunditFact: PunditFact, we should note, produces data we class in our "Group B." PunditFact tends not to rate partisan candidates, officeholders, partisan appointees or party organizations. It focuses more on pundits, as the name implies. We consider group B data less reliable as a measure of partisan bias than the group A data. But we do find it interesting that PunditFact's data profile lines up pretty closely with the most closely associated PolitiFact entities, as noted above. That finding proves consistent with the idea that PolitiFact ratings say something about the viewpoint of the ones giving the ratings.

Thursday, January 21, 2016

Hillary Clinton & PolitiMath

Our PolitiMath series of posts looks for correlations between numerical errors and PolitiFact's ratings.

The "False" rating PolitiFact gave to Democratic presidential candidate Hillary Clinton on Jan. 20, 2015 allows us to further expand our data set. Clinton said nearly all of the bills she presented as a senator from New York had Republican co-sponsors.

PolitiFact said her numbers were off.
We found at least one Republican co-sponsor in 4 of 7 resolutions or continuing resolutions (57 percent) but only 9 of 37 bills (24 percent).

Overall, that's 13 out of 44, or just under 30 percent.

Focusing on the 18 bills that Clinton sponsored and brought to the Senate floor for consideration, four had at least one Republican co-sponsor (22 percent) ...
Note we are dealing with a slightly mushy comparison. What is "nearly all"? We think setting a fairly low bar gives us the most useful comparison to other ratings.

Let's say 80 percent of her bills would count as "nearly all." We think that's a fairly low bar to clear.

Using the best figure PolitiFact produced on Clinton's behalf, she exaggerated her claim by 167 percent. Using the figure reflecting a more literal interpretation of her words (the 22-percent figure), Clinton exaggerated by 264 percent.
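Here is the arithmetic as a minimal sketch, treating our 80 percent threshold for "nearly all" as the figure Clinton's claim implies (the threshold is our assumption, described above, not PolitiFact's):

# Minimal sketch: exaggeration as (implied figure - PolitiFact's figure) / PolitiFact's figure.
implied = 80.0        # our low-bar reading of "nearly all" of her bills, in percent
best_case = 30.0      # PolitiFact: 13 of 44 measures, just under 30 percent
literal_case = 22.0   # PolitiFact: 4 of the 18 bills brought to the floor, about 22 percent

print(f"Best-case exaggeration: {(implied - best_case) / best_case:.0%}")             # prints "167%"
print(f"Literal-reading exaggeration: {(implied - literal_case) / literal_case:.0%}")  # prints "264%"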

PolitiFact said Clinton's claim "isn't even close" to the truth. But we know it wasn't ridiculously far off, otherwise PolitiFact would have awarded Clinton a "Pants on Fire" rating.

Right?

Wednesday, January 20, 2016

Left Jab: "Politifact's Fuzzy Case Against Bernie Sanders on Pentagon Spending"

An article from the Huffington Post gets some good pokes in against PolitiFact's less-than-clear reasoning in a fact check of Democratic presidential candidate Bernie Sanders.

We're not convinced of the author's main point. He says the mainstream media have no interest in real discussions of the military budget.

But he makes a pretty good case that the fact check of Sanders is short on facts:
What is the core of PolitiFact's argument? Sanders' claim is "mostly false" not because we have good reason to believe that some very different number is much more likely to be correct - like 20%, or 50%, or 90% - but because the share of the Pentagon budget which goes into fighting ISIS and international terrorism is a fundamentally unknowable fact, like how God passed the time before creating the world. The question is intrinsically outside the scope of human knowledge.
And what does PolitiFact do with questions outside the scope of human knowledge? Why, fact check them, of course!

These types of fact checks occur often on PolitiFact's pages, and Republicans are more often the victims of this shoddy approach to fact-checking. But we don't shy away from sharing good examples of PolitiFact treating Democrats or progressives unfairly.




Update 1/20/16 2129PST: Added links to Huffington Post and PolitiFact articles in first paragraph- Jeff

PolitiFact's "Pants on Fire" bias, 2015 update

It's time again for our annual update to our research on PolitiFact's bias in applying its "Pants on Fire" rating. And the results for 2015 show a surprisingly low bias against Republicans. Read on.

PolitiFact's "Pants on Fire" bias

As we have noted, PolitiFact has never provided any objective means for distinguishing between its "False" and "Pants on Fire" ratings on its trademarked "Truth-O-Meter." The only difference between the two ratings, by PolitiFact's telling, is that "Pants on Fire" statements are ridiculous as well as false.

We did an extensive survey of the reasons PolitiFact has given in its stories for its ratings and failed to even find an informal criterion that might pass as objective.

This research project does not focus on whether Republicans simply receive more "Pants on Fire" ratings than Democrats. We look at proportions, not raw numbers. For each party, we take its total number of false ratings ("False" plus "Pants on Fire") and divide its number of "Pants on Fire" ratings by that total. Comparing the resulting percentages for the two parties then gives us a number we call the "PoF bias number." And what is the significance of that number?

The PoF bias number shows which political party's false statements are more likely to receive a "Pants on Fire" rating. If the difference between "False" and "Pants on Fire" is subjective, as PolitiFact's definitions and our research appear to indicate, then that number measures which party is more likely to receive the subjective "Pants on Fire" rating.
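To make the arithmetic concrete, here is a minimal sketch of the calculation, using made-up rating counts (the counts below are hypothetical, purely for illustration; they are not drawn from our data set):

# Minimal sketch of the PoF bias number, using hypothetical counts.
# For each party: "Pants on Fire" share = PoF / (PoF + False).
gop_false, gop_pof = 40, 20    # hypothetical Republican counts
dem_false, dem_pof = 25, 8     # hypothetical Democratic counts

gop_share = gop_pof / (gop_pof + gop_false)   # share of GOP false ratings judged "ridiculous"
dem_share = dem_pof / (dem_pof + dem_false)   # share of Democratic false ratings judged "ridiculous"

pof_bias = gop_share / dem_share
print(f"PoF bias number: {pof_bias:.2f}")
# A number above 1.00 means Republicans' false statements were more likely
# than Democrats' to draw the subjective "Pants on Fire" label.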

In 2011, the first year we did the study, Republican statements were 57 percent more likely to receive a "Pants on Fire" rating than statements from Democrats. As the chart below shows, 1.57 serves as the corresponding PoF bias number.

In PolitiFact's very first year, Democrats received a higher percentage of "Pants on Fire" ratings. PolitiFact's founding editor, Bill Adair, has said the "Pants on Fire" rating started out as a label for "light-hearted" fact checks. Ever since "Pants on Fire" stopped being a joke at PolitiFact, Republicans have been more likely to have their false statements ruled "Pants on Fire."

Republicans' false statements were far more likely than Democrats' to receive a "Pants on Fire" rating from 2009 through 2012.



YEAR | PoF Bias (GOP) | PoF Bias (Dem) | Sel. Bias (GOP) | Sel. Bias (Dem)
2007 |       --       |      2.50      |      1.00       |      1.00
2008 |      1.31      |       --       |      1.53       |       --
2009 |      3.14      |       --       |      1.48       |       --
2010 |      2.75      |       --       |      3.15       |       --
2011 |      1.57      |       --       |      3.83       |       --
2012 |      2.25      |       --       |      2.52       |       --
2013 |      1.24      |       --       |      1.81       |       --
2014 |      1.95      |       --       |      2.56       |       --
2015 |      1.10      |       --       |      4.83       |       --

Chart notes: Columns 2 and 3 show the PoF bias number. Columns 4 and 5 show the "selection bias number," which compares the total number of false claims for Republicans with the total for Democrats. Each year's number appears in the column of the party that fares worse by that measure.


Our results for 2015 proved intriguing.

Republicans' false statements were only 10 percent more likely than Democrats' false statements to receive the "Pants on Fire" rating. In terms of historical patterns, the bias against Republicans in 2015 was minimal.

But given the current mainstream media narrative that Republicans lie wantonly, perhaps more than any party in history, how should we interpret the data?

The Matter of Interpretation

We've had critics insist that a high PoF bias number against Republicans is best interpreted as a sign that Republicans simply lie more. For those critics, this year's findings present a problem. Is the supposed trend of Republicans lying tailing off?

We'd love to see our critics try to make that argument.

We would suggest two hypotheses to help explain the numbers.

First, we would suggest the numbers do not necessarily need much explanation. Simple regression to the mean may explain the surprisingly low measurement of anti-Republican bias in applying the "Pants on Fire" rating. The PoF bias number for the past three years, after all, falls very close to the average for the preceding years. Even so, we looked for an alternative explanation, because if PolitiFact journalists allow ideological bias to affect their ratings, the "Republicans lie more" narrative should have made the recent numbers look worse than usual for Republicans, not better.

The Secondary Hypothesis

Our secondary hypothesis came from our past analysis of the PoF bias number for PolitiFact's state operations.

Some of the PolitiFact states had PoF bias numbers favoring Republicans instead of Democrats despite paying relatively little attention to statements from Democrats. We hypothesized some PolitiFact states might be applying a compensatory bias: If a PolitiFact state was grading many Republican statements as "False" compared to very few statements from Democrats, the state PolitiFact might rate the few Democrat statements more harshly to look more fair.

So, we hypothesized that PolitiFact states might go harder on false statements from Democrats to make the large number of false statements from Republicans look less like a sign of journalistic bias.

As a corollary to the secondary hypothesis, we noted with our first publication of our research that PolitiFact staffers could read our research and act to change future outcomes. That possibility does not concern us much, for if the bias in applying the "Pants on Fire" rating stabilizes near a more equitable level, it tends to support our interpretation of the data for PolitiFact's earlier years.

Are Democrats Telling the Truth More and More?

Though our research project focuses on the percentages of "Pants on Fire" ratings compared to the total number of statements PolitiFact views as false (making the number totals unimportant to the PoF bias research), we end up collecting data on the total number of false statements as a matter of course.

Those data do appear to tell an interesting story: PolitiFact National seems to have increasing difficulty in rating Democrats' statements false.

Note that we sift the PolitiFact ratings to obtain the data most likely to accurately measure bias. We're talking about the "Group A" data described in our first research paper, which consists of ratings of party candidates, elected officeholders, partisan appointees or party organizations. We exclude statements obviously attacking members of one's own party. Still, the shrinking number of false statements for Democrats stands out:

(Numbers exclude party-on-party claims, as where a Democrat attacks another Democrat)

PolitiFact had little difficulty finding false statements from Democrats over its first three years. The low mark during those first three years was 18, when PolitiFact only operated for about half the year. Since that first year the number has declined or stayed even every year save for one. The number of false statements from Democrats skyrocketed from 20 all the way up to 23 in 2011 before resuming its decline.

Why is PolitiFact only finding half the false statements per year it found over the first three years? What explains that decline? Are Democrats edging closer to the truth on the whole? Just once per month, on average, PolitiFact was able to catch a full-fledged partisan Democrat telling a falsehood in 2015?

We doubt Democrats have significantly changed their approach to political speech. We think the explanation lies with the fact checkers buying the idea that treating the parties evenhandedly produces a "false equivalence" in fact-checking; they do not see the resulting tilt as bias. PolitiFact produces results consistent with what we should expect from left-leaning fact checkers.

Tuesday, January 19, 2016

It's a fact! It's from a Republican! It's "Pants on Fire"!

Computer access problems stopped us from joining the vanguard in trashing PolitiFact's recent "Pants on Fire" rating, given to Republican presidential candidate Marco Rubio for saying that Reagan's strength, compared with Carter's weakness, led to the release of the hostages on the day Reagan was inaugurated in early 1981.

PolitiFact:


Fortunately, Power Line and Red State admirably filled the breach.

Power Line faults PolitiFact for citing the controversial Gary Sick as an expert historical source, then skewers PolitiFact for simply ignoring the evidence supporting Rubio:
Anyway, Politifact continues:
Instead, the Iranians had tired of holding the hostages, and that the administration of Jimmy Carter did the legwork to get the hostages released.
They got tired of it, you see. Riiiight. Okay, if you’re done being convulsed with laughter on the floor, let’s recall what the Washington Post editorial page (!!) had to say about the matter on January 21, 1981:
“Who doubts that among Iran’s reasons for coming to terms now was a desire to beat [Reagan] to town?”
And who doubts that Politifact and other “fact checkers” are too clueless to grasp Rubio’s argument that your reputation in the world counts for something—especially with your enemies.
Red State noted:
The release did coincide with Reagan’s inauguration. Any critique of Rubio’s statement must include an very solid bit of proof that the two events were disconnected. As a matter of fact, the negotiations that led to the release of the hostages were not even signed until January 19, 1981. If as Gary Sick states, it was that the Iranians were afraid of having to start all over again with Reagan then why was the release not effected earlier. While the Reagan administration, rightfully, had nothing to do with the negotiations it is utter lunacy to assert that Reagan’s election did not have a demonstrable effect.
Power Line and Red State do a nice job in pointing out the holes in PolitiFact's version of the events surrounding the hostage crisis. But we would add to their criticisms the point that PolitiFact also pulled its all-too-typical creative straw man technique on Rubio.

Where Rubio staked out the very defensible position that Iran cut its deal with the president from whom it thought it would get the better deal, PolitiFact implies Rubio claimed that Reagan's inauguration caused the release of the hostages:
We flagged Rubio’s comment as a misleading framing of history. Reagan’s inauguration in 1981 may have coincided with the release of the hostages, but historians say it did not cause it.
Is that how PolitiFact framed the issue when it contacted its select panel of experts?

Regardless, this looks like a case of PolitiFact non-transparently interviewing a half-dozen experts and then declaring an expert consensus where no such consensus exists in reality.

Via the United States Institute of Peace (bold emphasis added):
Ronald Reagan was sworn into office on January 20, 1981, just as Iran released 52 Americans held hostage at the U.S. Embassy in Tehran for 444 days. The timing was deliberate. The young revolutionary regime did not want the hostages freed until after Jimmy Carter, who had supported the shah and allowed him into the United States, left office. At the same time, Tehran wanted to clear the slate in the face of a new Republican administration that had vowed to take a tougher stand on terrorism and hostage-taking.
So totally nothing to do with Reagan.

Whatever.

Here's predicting PolitiFact will do what it usually does when confronted with a strong critique from the right: Nothing.

Thursday, January 14, 2016

PolitiFact's Policy Plinko: What Rules Get Applied Today?

Yesterday Bryan wrote a piece noting that PolitiFact ignores its own policies in favor of subjective whim, and it's easy to find evidence supporting him. PolitiFact's application of standards resembles the game of Plinko, wherein they start off at one point but can bounce around before reaching a final ruling. The notable difference between the two is that Plinko is much less predictable.

In 2012, former editor Bill Adair announced a new policy at PolitiFact: it would begin taking into account a person's underlying argument when determining a rating for a numbers claim. That new policy turned out to be bad news for Mitt Romney:





In Romney's case, PolitiFact says we need to look beyond the numbers and observe the historical context to find the truth:
The numbers are accurate but quite misleading....It's a historical pattern...not an effect of Obama's policies.

There is a small amount of truth to the claim, but it ignores critical facts that would give a different impression. We rate it Mostly False.
Romney's numbers are accurate, but, gee golly, PolitiFact needs to investigate in order to find out the meaning behind them so no one gets the wrong impression.

Thankfully for Democrats, just a few months later PolitiFact was back to dismissing the underlying argument and was simply performing a check of the numbers:




Some underlying arguments are more equal than others. In contrast with the Romney rating, PolitiFact chose to ignore the implication of the claim:
Our ruling

Clinton’s figures check out, and they also mirror the broader results we came up with two years ago. Partisans are free to interpret these findings as they wish, but on the numbers, Clinton’s right. We rate his claim True.
PolitiFact suddenly has no interest in whether or not the statistics are misleading. They're just here to make sure the numbers check out and all you partisans can decide what they mean.

Sometimes...



Look, kids! Wheel-O-Standards has come all the way back around! And just in time to hit the conservative group the Alliance Defending Freedom:
The organization does not provide mammograms at any of its health centers...

So Mattox is correct, by Planned Parenthood’s own acknowledgement, that the organization does not provide mammograms...

Federal data and Planned Parenthood’s own documents back up the claim from the Alliance Defending Freedom.

That puts the claim in the realm that won’t make either side happy: partially accurate but misleading without additional details. We rate the claim Half True.
We're back to numbers being accurate but misleading! In this rating, PolitiFact finds that the number of Planned Parenthood facilities licensed to perform mammograms (zero) is accurate, but after editorially judging that the statistic gives the wrong impression, PolitiFact issues a rating based on the underlying argument. Because "fact checker" or something.

Why muck up such a great narrative just for the sake of applying consistent standards?

Wednesday, January 13, 2016

PolitiFact: Point? What underlying point?

In January 2012 PolitiFact's founding editor Bill Adair famously--at least we've tried to make it famous--assured PolitiFact's readers that the most important aspect of a numerical claim is the underlying point:
About a year ago, we realized we were ducking the underlying point of blame or credit, which was the crucial message. So we began rating those types of claims as compound statements. We not only checked whether the numbers were accurate, we checked whether economists believed an office holder's policies were much of a factor in the increase or decrease.
Today PolitiFact rated a State of the Union address claim from President Obama that the private sector created jobs every month since the Affordable Care Act, otherwise known as "Obamacare," went into effect.

PolitiFact's rating? "True."

And what about that underlying argument, eh?
There’s room for argument over what the growth would have looked like absent the health care law, but Obama’s statistic is on target. We rate this claim True.
So ... these days it's like "Why mess up Obama's "True" rating by unnecessarily complicating things? We have a deadline!"

We don't mean to imply that any meaningful policy change occurred when PolitiFact supposedly altered its policy under Adair. So far as we can tell, PolitiFact employs a subjective set of rules in reaching its "Truth-O-Meter" ratings. That subjectivity will always take precedence over whatever changes PolitiFact makes to its stated policies.


Note (Jan. 14, 2016): Be sure to check out Jeff D's follow up to this post, PolitiFact's Policy Plinko: What Rules Get Applied Today? Jeff gives some examples of the inconsistency talked about in the above post.

Tuesday, January 12, 2016

True statements ruled "Mostly False"

What happens when PolitiFact finds that a statement is literally true?

That issue was brought up indirectly when Jeff retweeted economist Howard Wall. Wall had tweeted:
We looked up the story where Wall was quoted as an expert. It was a fact check of Mitt Romney from the 2012 presidential election. The Romney campaign said women had suffered the greatest job losses under Obama, implying Obama's leadership had been bad for women.

PolitiFact ruled the claim "Mostly False."

The Romney campaign pushed back. PolitiFact looked at the issue again and ruled the claim "Mostly False." But at the same time, PolitiFact said "The numbers are accurate but quite misleading."

Don't blame the Romney campaign. It probably operated under the assumption that PolitiFact's definitions for its "Truth-O-Meter" ratings mean something.

Taking PolitiFact's definitions literally, the lowest rating one should receive for a literally true claim is "Mostly True." Once below that level, the definitions start talking about "partially true" statements that give a misleading impression ("Half True") and "The statement contains some element of truth" but ignores facts that could give a different impression ("Mostly False").

What's our point? We've always said PolitiFact's ratings reveal more about PolitiFact than they do about the entities receiving the ratings. It's a scandal that social scientists keep their eyes closed to that. Search PolitiFact's ratings for claims it says are literally true. Note the rating given to the claim. Then take a look at the ideology of the entity making the claim.

There's your evidence of journalistic bias by fact checkers.

This is an important issue. If social scientists aren't looking at it, it suggests they don't care.

Why wouldn't they care?



Jeff Adds: We highlighted a Mark Hemingway critique of PolitiFact's Romney claim back in 2012 that is still worth a read. It would seem little has changed at PolitiFact since then.



Update 0956PST 1/12/2016: Added "Jeff Adds" portion - Jeff


Wednesday, January 6, 2016

PolitiFact dances the "can"-"can" on Obama's gun speech

PolitiFact's recent fact check of a statement from President Obama's January 5, 2016 gun control speech ought to give us a week's worth of PolitiFact Bias material.

PolitiFact looked at the president's claim that violent felons can buy guns through the Internet without a background check, ruling it "Mostly True."

We'll kick things off with perhaps the most obvious error:
Some readers seemed to think Obama was suggesting such transactions were legal. We don’t see that in Obama’s comments. (The grammarians at PolitiFact would note that Obama said "can," not "may.") To be clear, such a transaction would be illegal. What Obama said is that such transactions are possible.
We're champions of charitable interpretation, but this one doesn't fly. Why not? Because it leaves no reason for Mr. Obama to single out the Internet. If the issue is simply whether it's possible to buy guns illegally, then yes, it's possible over the Internet, in person or in any imaginable scenario. The interpretation PolitiFact chose is effectively meaningless. Is Obama going to issue an executive order that criminals cannot potentially disobey? No. That's ridiculous. No matter what Obama orders, under PolitiFact's interpretation criminals "can" make gun purchases that bypass the order.

Violent criminals and others can even buy guns across state lines though PolitiFact explicitly says they cannot. That's the type of absurdity to which PolitiFact's wordplay leads.

It's a little early in the year for a worst fact check of 2016, but we've got an early contender with this one.

It's very likely we'll have more to say about this one.

Sunday, January 3, 2016

"Not a lot of reader confusion" in San Luis Obispo

The Tribune of San Luis Obispo, California, published a reader's letter touching on our favorite topic, PolitiFact. The letter provides yet more evidence of reader confusion about PolitiFact's candidate report cards. PolitiFact editor Angie Drobnic Holan insists that little reader confusion exists about the report cards.
A Dec. 13 New York Times column (“All politicians lie. Some lie more than others”) indicated that the percentages of claims in these categories for candidates Ben Carson, Donald Trump and Ted Cruz are 84 percent, 76 percent and 66 percent, respectively.
The reader doesn't bother to say whether he believes those percentages reasonably extrapolate to generalizations about the candidates. But there was hardly a need for that, thanks to the headline provided by the Tribune:

GOP presidential front-runners lie the most

Do they?

Well, the editors of the Tribune read it in The New York Times via the editor of Pulitzer Prize-winning PolitiFact, so it must be true, right?

That's PolitiFact and the mainstream press running their own deceptive political messages while flying the banner of truth overhead.