Showing posts with label The Ratings Game.

Friday, September 22, 2017

Joy Behar lies 100 percent of the time. It's from PolitiFact.

Of course the title of this post is intended solely to draw attention to its content. We do not think Joy Behar lies 100 percent of the time, no matter what PolitiFact or Politico say.

For the record, Behar's PolitiFact file as of Sept. 19, 2017:


As we have noted over the years, many people mistakenly believe  PolitiFact scorecards reasonably allow one to judge the veracity of politicians and pundits. We posted about Behar on Sept. 7, 2017, noting that she apparently shared that mistaken view.

PolitiFact surprised us by fact-checking Behar's statement. The fact check gave PolitiFact the opportunity to correct Behar's core misperception.

Unfortunately, PolitiFact and writer Joshua Gillin blew the opportunity.

A representative selection of statements?


Critics of PolitiFact, including PolitiFact Bias, have for years pointed out the obvious problems with treating PolitiFact's report cards as a means of judging general truthfulness. PolitiFact does not choose its statements in a way that would ensure a representative sample, and an abundance of doubt surrounds the accuracy of the admittedly subjective ratings.

Gillin's fact check rates Behar's conclusion about Trump's percentage of lies "False," but he succeeds in tap-dancing around each of the obvious problems.

Let Fred Astaire stand aside in awe (bold emphasis added):
It appeared that Behar was referring to Trump’s PolitiFact file, which tracks every statement we’ve rated on the Truth-O-Meter. We compile the results of a person's most interesting or provocative statements in their file to provide a broad overview of the kinds of statements they tend to make.
Focusing on a person's most interesting or provocative statements will never provide a broad overview of the kinds of statements they tend to make. Instead, that focus will provide a collection of the most interesting or provocative statements the person makes, from the point of view of the ones picking the statements. Gillin's statement is pure nonsense, like proposing that sawing segments from a two-by-four will tend to help lengthen the two-by-four. In neither case can the method allow one to reach the goal.

Gillin's nonsense fits with a pattern we see from PolitiFact. Those in charge of PolitiFact will occasionally admit to the problems the critics point out, but PolitiFact's daily presentation obscures those same problems.

Gillin sustains the pattern as his fact check proceeds.

When is a subjective lie an objective lie?


In real life, the act of lying typically involves an intent to deceive. In PolitiFact's better moments, it admits the problem with appearing to accuse people of lying. In a nutshell, it's very dicey to state as fact that a person was lying unless one is able to read minds. But PolitiFact apparently cannot resist the temptation of judging lies, or at least the temptation of appearing to make those judgments.

Gillin (bold emphasis added):
Behar said PolitiFact reported that "95 percent of what (Trump) says is a lie."

That’s a misreading of Trump’s file, which notes that of the 446 statements we’ve examined, only 5 percent earned a True rating. We’ve rated Trump’s statements False or Pants On Fire a total of 48 percent of the time.

The definitions of our Truth-O-Meter ratings make it difficult to call the bulk of Trump’s statements outright lies. The files we keep for people's statements act as a scorecard of the veracity of their most interesting claims.
Is Gillin able to read minds?

PolitiFact's fact checks, in fact, describe no reasoning that would allow PolitiFact to judge whether a person used intentionally deceptive speech.

PolitiFact's report cards tell readers only how PolitiFact rated the claims it chose to rate, and as PolitiFact's definitions do not mention the term "lie" in the sense of willful deception, PolitiFact ought to stick with calling low ratings "falsehoods" rather than "lies."

Of course Gillin fails to make the distinction clear.

We are not mind readers. However ...

Though we have warned about the difficulty of stating as fact that a person has engaged in deliberate deception, there are ways one may reasonably suggest it has occurred.

If good evidence exists that a party is aware of information contradicting that party's message and the party continues to send that same message anyway, it is reasonable to conclude that the party is (probably) lying. That is, the party likely engages in willful deception.

The judgment should not count as a matter of fact. It is the product of analysis and may be correct or incorrect.

Interviews with PolitiFact's principal figures often make clear that judging willful deception is not part of their fact-checking process. Yet PolitiFact has a 10-year history of blurring the lines around its judgments, ranging from the "Pants on Fire" rating ("Liar, liar, pants on fire!") for "ridiculous" claims, to articles like Gillin's that skip opportunities to achieve message clarity in favor of billows of smoke.

In between the two, PolitiFact has steadfastly avoided establishing a habit of attaching appropriate disclaimers to its charts and graphs. Why not continually remind people that the graphs only cover what PolitiFact has rated after judging it interesting or provocative?

We conclude that PolitiFact wants to imply that some politicians habitually tell intentional falsehoods while maintaining its own plausible deniability. In other words, the fact checkers want to judge people as liars under the deceptive label of nonpartisan "fact-checking," but with enough wiggle room to shield PolitiFact from criticism.

We think that is likely an intentional deception. And if it is intentional, then PolitiFact is lying.

Why would PolitiFact engage in that deception?

Perhaps it likes the influence it wields on some voters through the deception. Maybe it's just hungry for click$. We're open to other explanations that might make sense of PolitiFact's behavior.

Wednesday, October 26, 2016

Adding an annotation to PolitiFact's annotation of the third 2016 presidential debate

Is it news that fact-checkers are far from perfect?

Behold, a screen capture from PolitiFact's annotated version of the third presidential debate, hosted at Medium. PolitiFact says you can't see it unless you follow PolitiFact on Medium. If our readers can't see it without following PolitiFact, then maybe PolitiFact is right about that (we have our doubts about that, too):



PolitiFact highlights Trump's claim that Clinton wants open borders. Hovering over an asterisk in the sidebar brings up a window showing PolitiFact's comment. PolitiFact says it rated Trump's claim that Clinton wants open borders "False."

Click on the link and you eventually end up at PolitiFact's fact check of Trump's claim that Clinton wants open borders, where the claim is rated "Mostly False."


There's no editor's note announcing a change in the rating, so we assume that no issue of timing excuses PolitiFact for falsely reporting its own finding.

PolitiFact. The best of the best. Right?

Wednesday, October 12, 2016

'Stronger'='Better'? Pretty much, says PolitiFact

PolitiFact continues to determinedly destroy whatever credibility it has outside its group of left-wing devotees.

Our latest example consists of PolitiFact's fact check of Tim Kaine from Oct. 5, 2016. Kaine said his vice-presidential debate opponent, Mike Pence, had said Vladimir Putin was a better leader than President Obama.

PolitiFact's cutesy-and-misleading video short captures the moment. Well, one of the moments:


Kaine used the same line twice. PolitiFact did well to report both instances, along with providing the broader context of the first instance:
At one point, Kaine said, "Hillary also has the ability to stand up to Russia in a way that this ticket does not. Donald Trump, again and again, has praised Vladimir Putin. … Gov. Pence made the odd claim — he said, inarguably, Vladimir Putin is a better leader than President Obama. Vladimir Putin has run his economy into the ground. He persecutes LGBT folks and journalists. If you don't know the difference between dictatorship and leadership, then you got to go back to a fifth-grade civics class."

Kaine hammered the point again later in the debate.

"Well, this is one where we can just kind of go to the tape on it. But Gov. Pence said, inarguably, Vladimir Putin is a better leader than President Obama."
It turned out Pence had said "stronger," not "better." Aside from that key word, Kaine had the wording of the quotation down precisely, making sure both times he misquoted Pence that he got the use of "inarguably" right.

Is it a big deal to get the key term wrong? Not so much, according to PolitiFact. PolitiFact rated Kaine's claim "Mostly True":
Pence did say something very similar -- but not exactly as Kaine said. Pence had said that Putin "has been a stronger leader in his country than Barack Obama has been in this country." However, "stronger" is not identical to "better." We rate the statement Mostly True.
Not identical? Certainly not. And certainly not in the context in which Kaine presented the claim. Remember the examples Kaine gave to show the oddness of Pence's claim?

Putin ran the Russian economy into the ground.

It's not "better" to run an economy into the ground, is it? But bucking the West and annexing Crimea despite Western sanctions takes strong leadership. Strong, yes. Better, no.

Persecutes LGBT folks and journalists

It's not "better" to persecute LGBT folks and journalists. But doing so while maintaining high public approval ratings (over 84 percent) shows strength of leadership. Strong, yes. Better, no.

PolitiFact's ratings game

Kaine fully exploited the difference between "stronger" and "better" the way a skilled liar would. But PolitiFact drops his rating only to "Mostly True" because he literally changed the word Pence had used.

Kaine would have been taking Pence out of context even if he had quoted Pence correctly. His examples saw to that.

Eeny-Meeny-Miny-Moe (red emphasis added):
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.
Is it a critical fact that "stronger" and "better" have different meanings?

Did Kaine provide a context for "better" that differed from the context for "stronger" offered by Pence?

Is the statement "accurate" if the key word is the wrong word?

Do PolitiFact's definitions for its "Truth-O-Meter" ratings mean anything at all?

Make-Believe: A world where PolitiFact's definitions mean what they say

If PolitiFact's definition of "Mostly True" were taken literally, then Kaine's statement could not receive a "Mostly True" rating. Kaine's version of what Pence said used a different word than what Pence had said. That makes Kaine's version inaccurate. If accurate and inaccurate do not mean the same thing, then "Mostly True" cannot fit Kaine's claim.

How about "Half True"? Kaine's paraphrase of Pence was not wildly off. "Stronger" and "Better" have some overlap in meaning, and otherwise Kaine got the words right. Kaine's statement could pass as "partially accurate." Kaine also took things out of context, which fits the description of "Half True."

And what about "Mostly False"? Kaine's statement could also qualify as having an element of truth. Most of the words he attributed to Pence were right, though he switched out "stronger" for "better." Was that change a critical fact, given how the terms differ in meaning? Arguably so. Kaine's statement thus also fits the definition of "Mostly False." Which rating fits better amounts to a subjective judgment. The definitions overlap, like the definitions of "stronger" and "better."

If PolitiFact's definitions were taken literally, Kaine's rating would be a subjective coin flip between "Half True" and "Mostly False." That PolitiFact can bend its definitions to apply the rating for accurate statements to inaccurate statements, like Kaine's, shows that PolitiFact puts even more subjectivity in its ratings than its fuzzy definitions demand.

Coincidentally, the Democrat gained the benefit this time. It's a pattern.

Thursday, September 8, 2016

More unprincipled principles from PolitiFact (Updated)

When PolitiFact released its politisplainer video on its fact-checking process, "The PolitiFact Process," we responded with an annotated version of that video. We made one of our key points by contradicting Editor Angie Drobnic Holan's claim that "These ratings are not arbitrary. Each one has a specific definition." We pointed out that PolitiFact's "Truth-O-Meter" definitions are ambiguous, making it impossible for Holan to support her denial that the ratings are arbitrary.

With hardly a week having passed, PolitiFact serves up an example proving our point.

PolitiFact's fact check titled "Hillary Clinton says none of her emails had classification headers" makes a number of its typical mistakes (such as ignoring PolitiFact's "Burden of Proof" principle), but we would draw attention to the conclusion of the piece (bold emphasis added):
Clinton’s carefully worded statement is partially accurate but leaves out important context. For that, we rate her claim Mostly True.
Is the problem obvious?

Let's review PolitiFact's definition of "Mostly True":
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
To us, that does not seem like a perfect match for the "Mostly True" rating Clinton received.

What about the definition of "Half True," then?
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
This one is closer, but still not quite a perfect match. Sure, we've got "partially accurate," "details," and "context" mentioned, but PolitiFact's specific definition mentions "leaves out important details or takes things out of context," not the mere absence of "important context."

This 'tweener language underscores a point my co-editor Jeff D. made on Twitter earlier this week:



Fact checks like this one help illustrate Jeff's point: There is no objective difference between "needs additional information" and "leaves out important details."

PolitiFact can take exactly the same story detail and write the conclusion with a "Mostly True" or "Half True" ending.

In this case, it looks like writer Lauren Carroll (bless her heart) may have recommended a "Half True" rating for Clinton before the story went before PolitiFact's exalted "star chamber" (bless its heart) for a final determination of its "Truth-O-Meter" rating. The group of editors may have wanted a softer rating for Clinton, doubtless in devotion to objectivity and non-partisanship, and so decided on the "Mostly True" rating. Then Carroll presumably did an incomplete revision of the concluding paragraph.

We could be wrong in our hypothesis, of course, but there's little doubt the conclusion Carroll attached jibes better with a "Half True" rating than the "Mostly True" rating Clinton gets on her PolitiFact report card (bless its heart).

In the end, we get a timely example supporting our point that Angie Drobnic Holan spoke falsely when she claimed PolitiFact's ratings are not arbitrary.




Afters:

While it should not surprise us at all if the wrong rating description stays in the story (as it did for PolitiFact's fact check of Mitt Romney), it's possible PolitiFact will "fix" this problem by changing the description of the rating to match the definition of "Mostly True."

That change would not truly fix the problem nor blunt our point.

Why?

Because unless the facts of the story change no justification exists for changing the rating or its description. Changing the wording of the rating description does not alter the facts of the story.



Correction Sept. 8, 2016 6:30 p.m. EDT:
In the first paragraph of the "Afters" section, changed "Half True" to "Mostly True" to match the intent of the sentence.


 
Update Sept. 8, 2016 (6:30 p.m. EDT):

Jeff hinted to PolitiFact's Katie Sanders, the editor of the story, that something was amiss:

So far as I can tell, we received no clarification why Clinton's claim received a "Mostly True" rating with the "Half True" definition. We see no other difference in the story, and PolitiFact mentions no other changes in its editor's note.

How did the definition of "Half True" get into a fact check making the finding of "Mostly True"? That's the kind of transparency you don't normally get from PolitiFact.

Wednesday, December 2, 2015

Just wondering

A Dec. 2, 2015 fact check from the national PolitiFact outfit looks at Democratic Party presidential candidate Hillary Clinton's claim that Republican Sen. Ted Cruz has tried to ban contraception five times.

PolitiFact researched the issue and concluded Cruz had never tried to ban contraception, but at most might ban some types of birth control or make it more difficult in some cases to access birth control.

PolitiFact:
The strongest conclusion about Cruz’s views that one could draw from these examples is that he might support a ban on certain types of contraception (but not all) through his support for a personhood amendment. The other examples are even more limited and deal with what employers would be required to pay for, for instance, or whether a major birth control provider should continue to receive federal funding.

The statement contains some element of truth but ignores critical facts that would give a different impression, so we rate it Mostly False.
 The "Mostly False" ruling set us to wondering: If  PolitiFact can give a "Mostly False rating when none of the five examples from the Clinton ad featured Cruz banning birth control, what would it take to get a "Half True" rating?

What if Cruz had tried to ban all birth control in one of the five examples? Mostly False? Half True?

What if Cruz had tried to ban all birth control in two of the five examples? Half True? Mostly True?

What if Cruz had tried to ban all birth control in three of the five examples? Half True? Mostly True? Just plain True?

We're just wondering.

Thursday, January 30, 2014

PolitiFact: half mostly false, therefore false

It's PolitiFact potpourri!

We've had something like this happen at least once before, with Mitt Romney the focus of the fact check.  This time, PolitiFact was testing a viral claim making the rounds on Facebook:
The post has an element of truth but takes information out of context and requires a good deal of clarification. We rate this claim False.
The problem's so obvious that one would think layers of editors would be all over it.

Here's how PolitiFact defines "False":
FALSE – The statement is not accurate.
Here's how PolitiFact defines "Mostly False"(bold emphasis added):
MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.
This is how PolitiFact defines "Half True" (bold emphasis added):
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
And here's how PolitiFact defines "Mostly True" (bold emphasis added):
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
PolitiFact rates a statement "False" with a description that matches parts of PolitiFact's definitions of "Mostly True," "Half True" and "Mostly False."  And that kind of sums up the problem with PolitiFact.

Rocket science it's not.  Subjective it is.

Tuesday, October 29, 2013

More PolitiPundit

We've written before about (former PolitiFact editor) Bill Adair's desire to have it both ways with regard to PolitiFact's ratings. When cornered by skeptics, Adair usually defended himself by saying "PolitiFact rates the factual accuracy of specific claims; we do not seek to measure which party tells more falsehoods." However, when preaching to his flock he would proclaim PolitiFact's ratings create "report cards for each candidate that reveal patterns and trends about their truth-telling," and the tallies of those ratings "provide interesting insights into a candidate's overall record for accuracy."

Either the ratings are scientific measurements or they're not, and they're either revealing patterns or they're not. PolitiFact cannot promote the cumulative results of its ratings as indicative of a person's honesty while simultaneously hiding behind a mask of random curiosity.

Apparently new editor Holan has bought into this contradiction with her eyes wide shut. It also appears Holan has convinced herself and her staff that they can don a magical cloak of objectivity when checking pundits, just as they do with politicians. PolitiFact's selection bias is poised to be even more evident when checking pundits than it is with public servants. Pundits, by definition, deal in nuance and opinion.

But the real howler with this PolitiPundit announcement was the apparent lack of self-awareness in this line (emphasis added):
Although PolitiFact has done occasional fact-checks of pundits and talk show hosts, the new venture will mark the first time that staffers have been dedicated to checking media figures.
I bet the best part of being the Unquestionable Arbiter of Facts is you get to decide what words like "occasional" mean. Are Rush Limbaugh's 17 ratings an occasional event? What about Glenn Beck's 23 ratings, or Rachel Maddow's 16? I suppose Sean Hannity's eight ratings or Bill O'Reilly's 10 count as rare?

It's been common practice for PolitiFact to rate pundits and commentators since its inception. The only thing new here is the devotion of additional resources to its fact checking farce.  I'll go out on a limb and predict PolitiPundit will be an even bigger embarrassment than their flagship site.


Bryan adds: 

Some new readers might wonder:  What's the big deal with PolitiFact being a bit imprecise?  "Occasional" covers a good bit of ground, so what's the big deal?

PolitiFact has often downgraded political figures and pundits for rhetorical imprecision.  It's hypocritical.  To mimic PolitiFact's typical judgmental tone:

PolitiFact left a misleading impression by saying it "occasionally" rates pundits.  The facts show otherwise, so the statement tells a partial truth but leaves out important details.  That meets our definition of "Half True."

Friday, April 12, 2013

The Houston Chronicle: "Half of Ted Cruz’s political claims are false, PolitiFact reports" and "Ted Cruz v. PolitiFact: Whose pants are on fire?"

Joanna Raines of the Houston Chronicle wrote a story on March 28 attempting to report PolitiFact's assessment of Sen. Ted Cruz (R-Texas).

The headline and text of Raines' story hinted at fallacious reasoning.

Raines' story was titled "Half of Ted Cruz’s political claims are false, PolitiFact reports."  It's somewhat natural to take PolitiFact's report card-ish stories along those lines, but the fact is that PolitiFact doesn't rate enough statements to draw general conclusions about political figures.  Compounding the problem, PolitiFact offers no hint at all that its system includes controls for selection bias.  Compounding the problem further, PolitiFact often reaches questionable conclusions with its ratings.

Raines published a follow-up story on March 29, titled "Ted Cruz v. PolitiFact: Whose pants are on fire?"  That story implicitly acknowledged the last problem on the preceding list.  Raines offered a brief and reasonably fair account of the substance of Cruz's disagreements with PolitiFact.

Unfortunately, the second story contains a strong hint that Raines continues to reason fallaciously:
To navigate these muddy waters, we’re asking for you to weigh in. Cruz’s office has defended itself against PolitiFact’s claims, and we’re going to let you determine who deserves the “pants on fire” rating, Cruz or PolitiFact?
Ugh.

A survey of public opinion serves very poorly as a measure of truth in most instances.  The flawed reasoning goes by the Latin name argumentum ad populum, also known as the fallacious appeal to the people or the appeal to popularity.

Our guide to the truth should remain the state of the evidence, supplemented by consideration of the arguments pro and con.

Raines' first story helped underscore the truth of one of our longstanding criticisms of PolitiFact:  Publish "report cards" for candidates and many readers will assume that the results say something about the candidates unless the fact checkers explain the error behind that assumption.

Sunday, January 20, 2013

PolitiFact Georgia publishes inaccurate stats for 2012?

These are definitely the people we want doing our fact checks.

PolitiFact Georgia published a statistical breakdown of its fact checks for 2012.

There's just one problem.  The statistics don't appear to have any solid relationship to reality.

Let's see how PolitiFact Georgia editor Jim Tharpe tells it:

Most ratings, as in 2011, fell between the extreme ratings of True and Pants On Fire.

Ratings for the GOP/conservative fact checks broke down like this: 26 True, 22 Mostly True, 32 Half True, 14 Mostly False, 16 False and eight Pants On Fire.

Ratings for the Democratic/liberal fact checks broke down like this: 10 True, 14 Mostly True, 23 Half True, 10 Mostly False, 15 False and four Pants On Fire.

For fact checks for groups and individuals we labeled as "other," the ratings broke this way: seven True, 14 Mostly True, eight Half True, eight Mostly False, eight False and three Pants On Fire.

It's gotta be the glasses.

Going by the numbers appearing on its website, PolitiFact Georgia did fewer than 160 fact checks in 2012.  Tharpe gives the number 242.  Looking at all the "Truth-O-Meter" ratings, the site gives 20 per page in reverse chronological order.  On the eighth page of results we reach 2011.  Toss in 36 promises from Gov. Nathan Deal and we get to 196, not that PolitiFact Georgia has actually done that many ratings of Deal's promises.

Where do Tharpe's numbers come from?  Is there a secret list of unpublished "Truth-O-Meter" ratings from PolitiFact Georgia?  Does PolitiFact Georgia count PolitiFact National articles published in the Atlanta Journal-Constitution?  Should PolitiFact Georgia take credit for those stories if that's how Tharpe arrived at his total?

Tharpe's numbers aren't from outer space simply with respect to the total number of stories.  None of it adds up, so far as I can tell.  Add Tharpe's total stories for Democrats and Republicans and we end up with a total of 194 stories--just two short of the inflated total we get by rounding up to 160 and adding in Deal's promises.  And that doesn't count an additional 48 stories from Tharpe's "other" category.

If there's one saving grace, at least the total of all those stories does agree with Tharpe's overall total of 242 stories.  It's just hard to tell from which planet those stories originated.

Here's the tale of the numbers directly from the PolitiFact Georgia website for 2012:

PolitiFact Georgia published 148 "Truth-O-Meter" fact check stories.  Of that total, 30 were "True" ratings, 33 were "Mostly True" ratings, 35 were "Half True" ratings, 20 were "Mostly False" ratings, 22 were "False" ratings and 8 were "Pants on Fire" ratings.
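For readers who want to check the arithmetic themselves, here is a minimal sketch (Python) using the figures exactly as quoted above; nothing in it comes from PolitiFact beyond those published numbers:

# Tharpe's breakdowns as quoted in his article
gop   = [26, 22, 32, 14, 16, 8]   # True, Mostly True, Half True, Mostly False, False, Pants On Fire
dem   = [10, 14, 23, 10, 15, 4]
other = [7, 14, 8, 8, 8, 3]
print(sum(gop), sum(dem), sum(other))    # 118 76 48
print(sum(gop) + sum(dem) + sum(other))  # 242 -- matches Tharpe's overall total

# The tally taken from the PolitiFact Georgia website
website = [30, 33, 35, 20, 22, 8]
print(sum(website))                      # 148 -- the site's own "Truth-O-Meter" stories
print(sum(website) + 36)                 # 184 -- still well short of 242, even counting all 36 Deal promises
print(242 - sum(website))                # 94 -- PolitiFact National stories needed to reach 242 with no Deal promises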

I emailed Tharpe asking about the discrepancy.  He answered that PolitiFact Georgia keeps "a very accurate count" and confirmed that PF Georgia counted stories from PolitiFact national as well as those from the "Deal-O-Meter."  We've already touched on the problems with those methods.

If any "Deal-O-Meter" ratings count in the totals then the breakdown I show shouldn't add up.  If we don't have any "Deal-O-Meter" ratings in the total then it takes 94 stories from PF national to get to Tharpe's total of 242 stories.

On Jan. 3 I wrote again to Tharpe:
When I added together your three breakdowns of stories in the classes "Republican," "Democrat" and "other" the totals agreed with your overall total.  But all three classes are broken down into "Truth-O-Meter" ratings like "True" and "Mostly False."  There are no ratings like "Promise Kept" in those groups. Could you explain how the "Deal-O-Meter" claims fit with the total of 242 fact checks?

You affirm that your totals include some stories from PolitiFact's national operation.  At the same time, your article reports "a year of more than 200 fact checks by your local team of truth-seekers, collectively known as PolitiFact Georgia."  How does that claim jibe with using stories from PolitiFact's national operation in the totals?

When Tharpe first replied he said I could phone him if I needed more help with the numbers. Perhaps it is because I contacted him again by email instead of by phone that the message remains unanswered.

Sunday, September 2, 2012

Disconnect at PolitiFact Ohio

Nothing's better than getting PolitiFact editors on the record about PolitiFact.  Their statements probably do more than anything else to show that the PolitiFact system encourages bias and the people who created it either don't realize it or couldn't care less.

Statements from editors at PolitiFact's associated newspapers come in a close second.

Ted Diadiun, reader representative for the Cleveland Plain Dealer (host of PolitiFact Ohio), answering a reader's question:
In July you printed a chart with two years of PolitiFact Ohio results. It showed Democrats with 42 ratings of Mostly False, False or Pants on Fire, while the Republicans had a total of 88 in those categories. Doesn't that prove you guys are biased?

Well, it doesn't necessarily prove that. It might prove instead that in the statements our PolitiFact team chose to check out, Republicans tended to be more reckless with the truth than Democrats.
Diadiun apparently doesn't realize that if PolitiFact Ohio chooses more Republican statements to treat harshly, that is a likely sign of institutional selection bias--unless PolitiFact Ohio either randomizes its story selection (highly unlikely) or coincidentally chose a representative sample.  How would we ever know that the sample is representative unless somebody runs a controlled study?  Great question.  It's such a good question that it is reasonable to presume that a disparity in PolitiFact's treatment of the respective parties results from an ideologically influenced selection bias.  That was the point of Eric Ostermeier's study of PolitiFact's 2011 "Truth-O-Meter" results.

Diadiun, continuing his answer to the same question:
Or, it might prove only that there are a lot more Republicans who have been elected to the major offices that provide most of the fodder for fact-checking.
It would prove nothing of the kind.  PolitiFact has one state operation in a state that is firmly controlled by Democrats:  PolitiFact Oregon.  PolitiFact Oregon, despite a state political climate dominated by Democrats, rates the parties about evenly in its bottom three "Truth-O-Meter" categories (Republicans fare slightly worse).

Diadiun:
It is also a fact that Republicans had a few more statements rated True than the Democrats did, but the Truth-O-Meter was indeed a bit tougher overall on Republicans. You can find the report here

Does that show bias? I've said it before, and I'll say it again here: The PolitiFact Truth-O-Meter is an arbitrary rating that has the often impossible job of summing up an arduously reported, complicated and nuanced issue in one or two words.
Diadiun goes on to tell his readers to "ignore" the Truth-O-Meter.

That's quite the recommendation for PolitiFact's signature gimmick.

He's partly right.  PolitiFact ratings cram complicated issues into narrow and ill-defined categories.  The ratings almost inevitably distort whatever truth ends up in the reporting.  So shouldn't we ask why a fact checker steadfastly continues to use a device that distorts the truth?

The answer is pretty plain:  The "Truth-O-Meter" gimmick is about money.  PolitiFact's creators think it helps market the fact checking service.  And doubtless they're right about that.

There is a drawback to selling out accuracy for 30 pieces of silver:  Contrary to Diadiun's half-hearted reassurances, the "Truth-O-Meter" numbers do tell a story of selection and ideological bias.  Readers should not ignore that story. 


Jeff adds:

The hubris on display from Diadiun could fill gallon-sized buckets. Notice that he completely absolves PolitiFact of the role it plays in selecting which statements to rate, and immediately implies that "Republicans tended to be more reckless with the truth than Democrats." Incompetence or deceit are the only reasonable explanations for such an empty claim.

For the benefit of our new readers, I'd like to provide an exaggerated example of selection bias: Let's say I'm going to call myself an unbiased fact checker. Let's say I'm going to check four statements that interest me (as opposed to a random sample of claims). I'll check Obama's claim that he would close Guantanamo Bay, and his claim that he "didn't raise taxes once." I find he's telling falsehoods on both counts.

Next, I'll check Rush Limbaugh's statement that he's happy to be back in the studio after a long weekend. I'll also check his claim that he hosts one of the most popular radio shows in the nation. Of course, these claims are true.

What can we learn from this? According to PolitiFact's metrics, Rush Limbaugh is a bastion of honesty while Barack Obama is a serial liar. I'll even put out "report cards" that "reveal patterns and trends about their truth-telling." I'll admit the "tallies are not scientific, but they provide interesting insights into a candidate's overall record for accuracy." It's unbiased because, as the popular defense goes, I checked both sides. The reality that I checked statements that interested me supposedly has no influence on the credibility of the overall ratings. If you don't like the results, it's because you're biased!

See how that trick works?

It's something to keep in mind the next time you see Obama earning a Promise Kept for getting his daughters a puppy or a True for his claim that the White Sox can still make the playoffs. A cumulative total of PolitiFact's ratings serves the purpose of telling readers about the bias of PolitiFact's editors and what claims are interesting to them. It's a worthless measure of anything else.

Tuesday, August 14, 2012

Tommy Christopher: "My conclusion isn't all that different from yours"

We panned Mediaite's Tommy Christopher over his critique of PolitiFact earlier this week.  Christopher has responded via Twitter, resulting in a correction and update of our original item.

Christopher responded again via Twitter not long ago, in response to our tweet about the update:

Gasp! @tommyxtopher responds by pointing out...a typo. Will he address actual flaws in his analysis? Here's our update: politifactbias.blogspot.com/2012/08/pfb-sm…

@PolitiFactBias Well, you don't seem to have read it. My conclusion isn't all that different from yours.
Christopher accurately notes that PolitiFact's ratings make for an inconsistent and unsatisfactory whole.  But our conclusions based on that common observation are fundamentally dissimilar.

Christopher (bold emphasis added):
Fact-checkers like Politifact are tremendously valuable for the research that they aggregate and conduct themselves, but inconsistent, contradictory, and capricious rulings badly undercut that value, especially when those are what politicians and media outlets pay the most attention to. Either a more consistent ratings scale is needed, or they ought to scrap them entirely, and let each fact-check stand on its own merits.

Until then, though, these are the numbers we have to work with, so if these presidential campaigns are going to rely on Politifact when it’s convenient, then they ought to live with these results, and media organizations who constantly quote Politifact should report them.
In our original review of Christopher's piece, we noted the following:
In short, contrary to Christopher's suggestion, aggregating PolitiFact's ratings is a useless exercise for purposes other than evaluating PolitiFact.
Christopher says, despite the problems with PolitiFact's inconsistency, that the media should report the aggregated "Truth-O-Meter" results as if they tell us something valuable about the candidates, hence his headline about Romney's supposed dominance of Obama in the lying department.

We say that the flaws in PolitiFact's process preclude useful comparison of the aggregated results except as a means of evaluating PolitiFact.

We say our conclusion is quite different from Christopher's.  It's irresponsible for the media, including PolitiFact, to slap together the results in a way that suggests to readers something about the tendency of candidates to lie. 

Christopher, despite providing some legitimate caveats, places himself in the irresponsible camp.

Sunday, August 12, 2012

PFB Smackdown: Mediaite's Tommy Christopher (Updated/Corrected)

We use the PFB Smackdown feature to critique the worst of the left's best critiques of PolitiFact.


I get another excuse to say the political Left's critiques of PolitiFact are generally poor, thanks to Mediaite's political editor and White House correspondent Tommy Christopher.

Take it away, Christopher:
Political ads have been an especially hot topic this week, with surrogates from both presidential campaigns alternately citing, and arguing with, vaunted fact-checking outfits like the Pulitzer Prize-winning Politifact. Although controversial rulings have eroded the magic of such efforts, it is worth noting that, by Politifact’s numbers, Republican presidential candidate Mitt Romney is 58% more likely to lie than President Obama.
What magic?

PolitiFact's ratings have always drawn well-deserved criticism, from Bill Adair's brain-dead analysis of Joe Biden's hyperbole through last week's continuation of PolitiFact's series of misdirections about effective tax rates.  Why is it worth noting PolitiFact's comparison of Romney to Obama after we add the problem of selection bias to PolitiFact's inability to apply consistent standards or even achieve a reasonable minimum standard of quality?

Other than the fact that it might serve Christopher's politics, that is?  It's hard not to notice that both of Christopher's examples of supposed "controversial rulings" allegedly caused unfair harm to the Left.

It doesn't take many blown calls to produce a 58 percent difference between two individuals' "Truth-O-Meter" report cards, nor does it take much selection bias to produce that type of difference.

In short, contrary to Christopher's suggestion, aggregating PolitiFact's ratings is a useless exercise for purposes other than evaluating PolitiFact.

We have at least two examples of the latter so far:


"Selection Bias? PolitiFact Rates Republican Statements as False at 3 Times the Rate of Democrats"
"Bias in PolitiFact’s ratings: Pants on Fire vs. False"

The utility of its "report cards" stands as one of PolitiFact's most spectacular lies.  Don't buy it.


Update Aug. 14, 2012:  

Tommy Christopher tweets in response:
@PolitiFactBias Before you "smack" anyone "down," you ought to learn some math. It was a 17-point difference, not 8. Romney 46% Obama 29%
Christopher has a point in that the numbers I used were incorrect.  The passage was intended from the first to read "a 58 percent difference," and with this update that reading shows above.  My primary mistake was failing to recognize a typographical error, rather than a math error, when I did yesterday's correction.
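For clarity, here is the arithmetic behind the two figures, sketched in Python with the numbers from Christopher's tweet (Romney 46 percent, Obama 29 percent); reading "58 percent" as a relative difference is our own interpretation:

romney, obama = 46, 29                    # percent of each candidate's ratings in the bottom categories, per Christopher
print(romney - obama)                     # 17 -> Christopher's 17 percentage-point gap
print(f"{(romney - obama) / obama:.1%}")  # 58.6% -> the roughly 58 percent relative difference intended here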

The change in percentage does not significantly affect the thrust of the criticism of Christopher's claim.  Without a control on selection bias--and there is no good evidence of any such control--using PolitiFact's ratings other than to find out things about PolitiFact just doesn't make sense.

Will Christopher address that point or allow the typographical error to serve as a red herring covering for his mushy thinking?


Correction Aug. 13, 2012:  Math mistake:  Was:  "an 8 percent difference."  Is:  "an 8 percentage point difference."  Apologies for the error.
Correction correction Aug. 14, 2012:  See update above. 
Corrected update Aug. 14, 2012:  Thanks to Jeff Dyberg for pointing out that I had incorrectly posted the original intent as "a 58 percentage point difference."  Rather, the intended figure was, as Christopher expressed it, a 58 percent difference.

 

Tuesday, August 7, 2012

Virginia Watchdog: "State GOP criticizes PolitiFact for bias"

Virginia Watchdog takes note of the Republican Party of Virginia's critique of PolitiFact Virginia.  We at PolitiFact Bias reviewed the strengths and weaknesses of the GOP effort last month.

The story by Jon Cassidy and Kenric Ward counts as the most complete reporting yet regarding the GOP's lengthy jab at PolitiFact Virginia.  We found comments by Rick Edmonds of PolitiFact's owner, the Poynter Institute, of particular interest:
Edmonds speculated that Republicans come in for more criticism because they are the party in control of state government — not because of any built-in political bias by PolitiFact.

“Naturally, the rulings focus on the majority party,” he reasoned. “It’s also possible that one side is making more outrageous or newsworthy claims that attract attention.”
If the party in control of the government receives the most ratings then shouldn't PolitiFact National show a marked lean toward Democratic Party claims for 2009?  Democrats controlled both houses of Congress as well as the presidency. 

Edmonds' claim resists clear falsification at the state level mainly because PolitiFact mostly runs state operations in states controlled by Republicans.  And not all state operations freely allow the numbers to give the impression of bias, as we discovered from a figure associated with PolitiFact Georgia.

One wonders why Edmonds fails to mention the possibility of selection bias.


Afters:

PolitiFact Bias has improved the case against PolitiFact's impartiality with the publication of our first research project earlier this month.

Sunday, May 6, 2012

Balancing act at PolitiFact Ohio?

Since its 2011 "Lie of the Year," the claim from Democrats that Republicans voted to end Medicare, PolitiFact has found itself trying to answer the criticism that it tries to achieve a false balance in its fact checking operations.

Past questions about the dangers of selection bias should have amply prepared PolitiFact editors to answer this sort of question.

Should have.

PolitiFact Ohio editor Robert Higgs of the Cleveland Plain Dealer took a shot at addressing the issue of balance in comments to the Plain Dealer's reader representative, Ted Diadiun:
I asked Bob Higgs, the editor who oversees the PolitiFact Ohio operation, if he deliberately tries for balance:

"The belief is that if we apply the same constructive standards to all claims, we'll end up treating all sides fairly," he said. "Some of the state operations (there are 10 in the PolitiFact organization), as well as the national operation, do not tally the rankings at all."

Higgs admits that he does tally up the results by party (which shows them remarkably even), "but only to see after the fact how we've done."
Even if PolitiFact Ohio applies its evaluation standards consistently to all its stories, balanced treatment need not result. In fact, it probably won't result.

It won't result because selection bias will occur without active steps taken to avoid the problem.

Selecting nine stories likely to make Democrats look bad while selecting one likely to make Republicans look good will not achieve balance, regardless of applying identical standards in the evaluation--not that we at PolitiFact Bias believe PolitiFact applies its standards consistently.

Newsflash for Higgs:  Every writer and editor at PolitiFact likely has a sense of how the stories break down by party.  The "remarkably even" count that results at PolitiFact Ohio helps prove the point.

PolitiFact markets its stats as candidate report cards and the like, but the real value of PolitiFact's numbers comes from the insights we obtain into PolitiFact's behavior--not the behavior of those featured in the stories.


Correction:  Changed the first of two consecutive instances of "the" to a "from" in the concluding paragraph.  Hat tip to Jeff Dyberg for catching the error.



Tuesday, April 3, 2012

James Wigderson: "Looking for the 'Pants on Fire' rating"

James Wigderson of the Wigderson Library & Pub offered a critique of PolitiFact late last year that we failed to highlight (we'll excuse ourselves based on the huge amount of PolitiFact-related material published in December and January). 

Visit the Library & Pub for the details of the relatively gentle takedown; our takeaway was the final paragraph:
We also give the Milwaukee Journal Sentinel’s rating system a “pants on fire” rating for failing to have any standards for the public to use to judge whether the Journal Sentinel’s ratings have any meaning.
A most palpable hit.

One may pretend that some of the definitions PolitiFact gives for its ratings allow for objective categorization, but PolitiFact applies its ratings in a manner that seems to defy systematization.  Though some of PolitiFact's writers wisely make an attempt to correlate the fact check's findings to the appropriate definition, those attempts often (if not always) seem subjectively tinged.  The PFB team excepted, who even noticed when PolitiFact changed its definition of "Half True"?

By now so many have leveled the criticism that its novelty has faded well into the past, but it bears repeating:  PolitiFact's rating system is a flop.  It's not worthy of association with fact checking.  It is way too subjective for that.