Friday, January 29, 2016

The worst of PolitiFact's Reddit AMA

On Friday, Jan. 29, 2016, PolitiFact conducted an "Ask Me Anything" session on Reddit.

And here's the Worst Of.

What is the ideological makeup of the Politifact team? How do you ensure there is balance in this respect?

Great question, right? Recently promoted PolitiFact director Aaron Sharockman double-fumbled this one. Part one:


Honestly, I have no idea people's party affiliations. I'm a registered NPA, though I've been registered as a Democrat and a Republican in the past.
It's plausible that Sharockman doesn't know the party affiliations of the PolitiFact team (we happen to know the affiliations of a few of them, but we don't reveal them because doing so could affect their job prospects). What's less plausible is the idea that he has no insight into the ideologies of the team. The question was about ideologies, not party affiliations. Sharockman dodged the first part of the question.

It gets worse:
The meat of your question is how to do we ensure balance. On that, I can offer a better answer. The writer who writes a fact-check proposes a rating (True, False, Pants on Fire, etc.), but it's actually a panel of three judges (editors) who decide the rating that gets published. So in reality, four people have a vote in every fact-check. I think that makes us sort of unique in the fact-checking game.

The point of having three editors involved is so that different people can offer their viewpoints, analysis to best inform the fact-check. And to make sure balance does exist.
If Sharockman doesn't know the ideologies of the staff, then what guarantee can he offer that having three editors consider the issue serves to make sure balance exists? Answer: He can't offer any such guarantee. It's just words.

I reached the discussion in time to offer a follow-up question:
Isn't a voting process like that primarily a guarantee that the majority ideology controls outcomes?
Sharockman responded:
We're not the Supreme Court. We haven't been appointed R's or D's. And I'd say, we often strive for a unanimous decision. So in the event of a 2-1 vote, we'll often ask for more reporting, or clarification on a point to try and get to a unanimous verdict (so to speak).
Indeed, aren't all PolitiFact staffers hired by the editorially liberal (consistently liberal) Tampa Bay Times? PolitiFact's "star chamber" would likely have more balance if it were constructed like the Supreme Court.

Asking for more reporting if there's a holdout does offer some promise for giving greater voice to dissent, but to what extent if all the judges trend left? The voice that's absent from the table will not receive a hearing.

PolitiFact's still flubbing this question just as it has for years.

Follow up: why is it predominantly conservatives who are fact checked and not liberals?

There are reasonable answers to this question, we think. PolitiFact opted for something else.

Disagree. We've fact-checked President Barack Obama more than any other person.

Of the 2016 candidates, who have we fact-checked the most? Hillary Clinton
If PolitiFact has done more fact checks of conservatives than liberals, then of what relevance is the number of times PolitiFact has fact-checked Barack Obama? The only way the number of fact checks of Obama carries relevance is if that number is greater than the number of ratings of conservatives.

As for the second part of Sharockman's answer, we found 68 ratings of Clinton since 2010, including at least one flip-flop rating. We don't think Clinton's statements about John McCain in 2008 count as statements by a 2016 presidential candidate. Not in any relevant sense, anyway.

Since 2011, PolitiFact has done 86 fact checks of Donald Trump. Eighty-six is greater than 68.

We don't keep track of how many more stories PolitiFact does about conservatives compared to liberals. But we know flim-flam when we see it, and that's what Sharockman offered in answer to this question.

What is the difference between "False" and "Pants on Fire?"

Jeff said he was planning on asking this one. But somebody else beat us to it.

PolitiFact editor Angie Drobnic Holan provided the type of answer we're used to seeing from PolitiFact:

We actually have definitions for all of our ratings. False means the statement is not accurate. Pants on Fire means the statement is not accurate and makes a ridiculous claim. Three editors vote on every rating.
Yes, that's the difference between the two according to PolitiFact's definitions. But what's the real difference between the two? My follow-up question still hangs:
Is there any objective difference between the two ratings? An objective measure of "ridiculous"?
Holan's answer from December 2014 still can't be beat:
So, we have a vote by the editors and the line between "False" and "Pants on Fire" is just, you know, sometimes we decide one way and sometimes decide the other.
She'd go into the science involved, but y'all wouldn't understand.

What about that website "PolitiFact Bias"? Somebody brought up our website and Sharockman offered a comment. We think Sharockman's comment was deleted, but we found it on Sharockman's post history page.

A website devoted to saying another website is 100 percent biased seems objective to you?

I asked Sharockman where he got his "100 percent" figure. His description doesn't comport with the way we describe PolitiFact's bias. Sharockman made it up on the spot, like a politician.

Here's looking forward to the next PolitiFact AMA.


  1. This comment has been removed by the author.

  2. Sorry -- had to delete my first posting to correct a couple problematic typos due to voice non-recognition software -- otherwise, all the same:

    Wow. I was actually excited to see this site but, it looks so far like I might be the only one who has -- but, I'll step up and comment on this anyway. Overall, I think the questions you posed in this post are valid, on track, and worthwhile asking. Unfortunately, I also feel your interpretation of the answers to the questions posed completely nullifies them, takes them way off base, and makes it all a virtual waste of time.

    Your first two-part question:
    Part 1: I would have to say that I believe (but, of course cannot state with certainty) that Sharockman most likely knows or, at least, has a large degree of insight into the ideological make up of his staff. My gut agrees with you in saying that he dodged that question.
    Part 2: From this point you state "it gets worse" but what actually happens is you just get a bit silly. Sharockman gave a reasonable answer -- you just didn't like it. That said, your idea that they could have "more balance" could certainly hold water. But then it just begs the question as to who wants to expend the energy schlepping that bucket of water around. After all, this portion of the two part question primarily seems to address the possible bias around the rating system itself. If I make the claim "that purple ball is blue" we would be able to read it a multitude of ways: false, mostly false, half true, mostly true. But, regardless of its ultimate rating, the fact still remains that the ball is purple and everyone except the colorblind should be able to get the point.

    Your follow-up question about the predominance of conservatives being fact checked -- I agree: great question. But, here again, Sharockman actually provided a good answer and, while it seems like you might be on the scent of making a good point, you never really make it -- and your response is fatally flawed from multiple angles:
    ** First, your statement "The only way the number of fact checks of Obama carries relevance is if that number is greater than the number of ratings of conservatives." That's just an absolutely silly assertion. Why would Obama (a single, individual liberal) need a greater number of ratings than all conservatives combined to carry relevance? In the vein of your flimflammery verbiage, I say hogwash!
    ** From there you say you found 68 ratings for Clinton since 2010. OK, good. You then make the point that a Clinton statement in 2008 is not relevant to 2016. Agreed -- but, the statement from 2008 lands outside of the range of those 68 ratings you found and doesn't really seem to apply to anything. However, since you made the point, that point can actually be used to poke holes in your next comment -- you found 86 ratings for Trump since 2011. So what? Sharockman's claim that, [of the 2016 candidates, Clinton was fact checked the most] may have actually been made within the context of relevance to the 2016 campaign -- which would pretty much make everything before 2015 moot. How many times has Clinton been rated since the beginning of the 2016 campaign? I don't know. I haven't checked. Like I said, you may be on the scent of making a good point -- but you have yet to make it.

    As to the difference between "false" and "pants on fire" -- I actually like this question but, really, who cares? There are obviously five general ratings which are all fairly defined with the criteria for each. False is false -- and, you know what -- pants on fire is also false. It just has a little flair attached for style and enjoyment of the readers -- it's childlike and funny. It points out things of a ridiculous nature -- like the post to which I write this reply. Is that subjective? Sure, I guess it is…

    1. Thanks for commenting, G.

      We like to show off how much better we are at responding to criticism than is PolitiFact, so I'll be dedicating a post in reply to your comment. And you're welcome to comment in reply to that post, of course.



Thanks to commenters who refuse to honor various requests from the blog administrators, all comments are now moderated. Pseudonymous commenters who do not choose distinctive pseudonyms will not be published, period. No "Anonymous." No "Unknown." Etc.