Tuesday, January 31, 2012

Ranting and Rating: Why PolitiFact's Numbers Don't Add Up

Crossposted from Bewz, Newz, 'n' Vewz.


"I've given you a decision to make,
Things to lose, things to take,
Just as she's about ready to cut it up,
She says:
  "Wait a minute, honey, I'm gonna add it up."

One of the more common methods of using PolitiFact's findings is to add up total ratings and form a conclusion based on the data. In its simplest form, this is when someone looks at, for example, 100 PolitiFact ratings, 50 from Republicans and 50 from Democrats, then adds up who received more trues and who had more falses, and concludes from that total who is more credible. The reality is that a collection of PolitiFact's ratings provides far more information about the ideological bias of PolitiFact's editors than it does about the people they check.
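
A toy simulation illustrates the problem (a hypothetical sketch of our own, not a measurement of anything PolitiFact has done): give two parties identical honesty, then let the story-selection step pass over true statements from one side slightly more often. The resulting "report cards" diverge even though the underlying truthfulness is identical.

```python
import random

random.seed(42)

TRUE_RATE = 0.5    # both parties' statements are accurate half the time (assumption)
N_RATINGS = 50     # published ratings per party, mirroring the 50/50 example above

def statement_pool(n):
    # True = the statement is accurate
    return [random.random() < TRUE_RATE for _ in range(n)]

def select_for_rating(pool, skip_true_prob):
    """Walk the pool picking statements to rate; skip_true_prob is the chance
    an editor passes over a true statement while hunting for a false one."""
    picked = []
    for is_true in pool:
        if is_true and random.random() < skip_true_prob:
            continue  # editor isn't "curious" about this true statement
        picked.append(is_true)
        if len(picked) == N_RATINGS:
            break
    return picked

card_a = select_for_rating(statement_pool(10_000), skip_true_prob=0.0)  # neutral editor
card_b = select_for_rating(statement_pool(10_000), skip_true_prob=0.4)  # mildly biased editor

for name, card in (("Party A", card_a), ("Party B", card_b)):
    print(f"{name}: {sum(card)} rated True, {len(card) - sum(card)} rated False")
# Typical run: Party A splits about 25/25 while Party B lands near 19 True
# and 31 False, even though both parties told the truth at the same rate.
```

Nothing in the final tally reveals that the gap came from the selection step rather than from honesty, which is precisely the point.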

One of the reasons this flawed method is so popular is that PolitiFact frequently promotes it as part of its shtick. Whether it's the ubiquitous report cards, or the iPhone app with its absurd Truth Index (described as a "Dow Jones Industrial Average of truth"), PolitiFact implicitly tells readers they can simply click a link to find out the credibility of a particular politician. Like most diet pills and get-rich-quick schemes, it's snake oil science and complete junk.

The most obvious flaw with this method is selection bias. There's simply no way for PolitiFact, or anyone for that matter, to check every statement by every politician. This means PolitiFact needs to have some sort of random selection process in order to ensure their sample reflects the wide variety of political statements being made, as well as the politicians making them. Without a random process, editors and writers might investigate statements that pique their own ideological interests. And how does PolitiFact choose its subjects?
"We choose to check things we are curious about. If we look at something and we think that an elected official or talk show host is wrong, then we will fact-check it."
Ruh-roh.

This "things we're curious about" method may explain why Senate hopeful Christine O'Donnell (Rep-RI) garnered four PolitiFact ratings, while the no less comical Senate hopeful Alvin Greene (Dem-SC) received none

Officially, PolitiFact only checks claims that are the "most newsworthy and significant." (Unless it's about Obama getting his daughters a puppy. Or baseball.) PolitiFact also has a penchant for accepting reader suggestions. Anyone visiting PolitiFact's Facebook page is aware that its fans overwhelmingly reside on the left side of the political spectrum. If PolitiFact asks 50,000 liberals what statements to check, guess what? Statements about Fast and Furious won't be as popular as, say, statements about Sarah Palin.

It's also important to consider the source of the statement being rated. For example, when Barack Obama claimed that preventative health care is an overall cost saver, and Republican David Brooks wrote a column explaining why Obama was wrong, PolitiFact gave the True to Brooks. This spared Obama a demerit on his report card* while granting Republicans an easy True. Another example of this source selection is evident in the rating about $16 muffins at a Department of Justice meeting. Despite the claim being made in an official Inspector General report and being repeated by several media outlets, including the New York Times and PolitiFact partners NPR and ABC News, PolitiFact hung a False rating around Bill O'Reilly's neck. PolitiFact refrained from judging the nominally liberal media outlets--and the source of the claim--all while burdening O'Reilly with a negative mark in his file.

One of the most overlooked problems with analyzing a tally of the ratings is PolitiFact's inconsistent application of standards across different fact checks. Even if one were to assume PolitiFact used a random selection process and assigned its ratings to the appropriate source, we still have a problem when subjects aren't checked according to the same set of standards. For example, PolitiFact rated Obama "Half True" on a claim about the rates certain taxpayers pay, and his claim earned even that rating only because PolitiFact counted the amount employers contribute toward the employees' tax burden. Almost simultaneously, PolitiFact labeled Herman Cain "Mostly False" for a similar claim specifically because he used the same formula. A cursory analysis of total ratings fails to detect this disparate treatment. Given such flexible guidelines, the "report cards" don't seem like such a credible evaluation.
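
The arithmetic shows why the choice of formula matters so much. Here's a sketch with illustrative figures (the 7.65% FICA split is the standard statutory rate; the wage and income-tax numbers are hypothetical, drawn from neither fact check): counting the employer's matching payroll tax moves the same worker's computed rate by several percentage points, easily enough to move a claim a notch on the Truth-O-Meter.

```python
# Illustrative arithmetic only. The 7.65% FICA rate (6.2% Social Security +
# 1.45% Medicare, matched by the employer) is the statutory split; the wage
# and income-tax figures below are hypothetical.

WAGES = 50_000.00
INCOME_TAX = 4_000.00            # hypothetical income tax owed
EMPLOYEE_FICA = WAGES * 0.0765   # withheld from the worker's paycheck
EMPLOYER_FICA = WAGES * 0.0765   # paid by the employer on the worker's behalf

# Formula 1: count only what appears on the worker's pay stub.
narrow = (INCOME_TAX + EMPLOYEE_FICA) / WAGES

# Formula 2: treat the employer's matching share as part of the worker's
# burden (and of the worker's total compensation), as some economists do.
broad = (INCOME_TAX + EMPLOYEE_FICA + EMPLOYER_FICA) / (WAGES + EMPLOYER_FICA)

print(f"Employee-only formula:      {narrow:.1%}")  # about 15.7%
print(f"Employer-inclusive formula: {broad:.1%}")   # about 21.6%
```

Neither formula is dishonest on its own. The problem arises when a fact checker credits one speaker for using the broader formula and penalizes another for doing the same thing.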

Ultimately, the sum of PolitiFact's ratings tells us far more about what interests PolitiFact's editors and readers than it does about the credibility of any individual politician. With so many flaws in their process, and such a minute sample size in a vast ocean of statements, conclusions about a subject's overall honesty should be considered dubious. We recognize that this flawed process will undoubtedly affect liberal politicians as well. However, it's our contention that the personal bias of the editors and writers will harm those on the right side of the aisle more often and more dramatically than those on the left.

Adding up PolitiFact's ratings in an attempt to analyze a person's or party's credibility produces misleading results. Until PolitiFact includes a check for selection bias and establishes and adheres to transparent and objective standards, an analysis based on their cumulative work is ill-founded at best, and grossly inaccurate at worst.




*PolitiFact did eventually give Obama a False after he repeatedly made the claim, but it still spared several high-profile Democrats for the same statement. Unlike perennial fact-check favorites such as the jobs created by the stimulus or Obama's birth certificate, this issue apparently isn't worth revisiting in PolitiFact's eyes.


Update
(4/8/2012)-Shortly after (because?) we published this post, PolitiFact did revisit Obama's "Preventative Care Saves Money" claim and gave him another False. Fair enough. But there are still plenty of other claims that give Obama a pass while simultaneously earning PolitiFact its "We give good grades to the GOP too!" chits, so the original point still stands.-Jeff



(2/1/2012): Corrected Alvin Greene's state and name.

The Weekly Standard: PolitiFact Can’t Get Its Story Straight on Romneycare and Abortion

Does the truth have a shelf-life?

Jeffrey H. Anderson, writing at the Weekly Standard, takes the fact-finding DeLorean all the way back to 2007 to highlight PolitiFact's conflicting ratings on RomneyCare's coverage of abortions. Before you read Anderson's article, check out the graphics for the remarkably dissimilar PolitiFact articles:

In 2007, PolitiFact says RomneyCare covers abortions:

Image clipped from PolitiFact.com

In 2012, the issue isn't so clear:

Image clipped from PolitiFact.com

Anderson notes:
[The Gingrich rating] sounds reasonable enough — except that the 2007 PolitiFact verdict directly refutes it. 
At first glance the statements have just enough wiggle room between them to possibly warrant different ratings. But Anderson's article explains there's not enough to justify different ratings.

Also observe how PolitiFact presented Newt's statement. Newt claimed "Romney signed government-mandated health care with taxpayer-funded abortions." Notice PolitiFact's first question: "Did Mitt Romney make taxpayer funded abortion the law of the land?" The difference matters: whether abortion was already a covered procedure before Romney enacted the legislation is independent of Gingrich's claim. The context of Gingrich's ad was that Romney was sympathetic to abortion rights. Whether or not abortion was covered by taxpayers under existing Massachusetts law is irrelevant to Gingrich's point. The fact that Romney helped perpetuate the taxpayer funding is enough to make Gingrich's underlying argument accurate.

We'd also like to point out that for a Republican plan, RomneyCare has been the subject of several favorable articles, and even a ridiculous push poll, at PolitiFact. The cynical reader might surmise the kid glove treatment has something to do with RomneyCare's similarity to ObamaCare. Nah, that couldn't be it.




(1/31/2012) Corrected quote of Gingrich's PF statement. No change in context-Jeff

Sunday, January 29, 2012

PFB Smackdown x2: Daily (Kos) double

Unfortunately it remains far easier to locate poor criticism of PolitiFact than it is to locate good criticism.

First up for PFB Smackdown, "Hunter" from the Daily Kos. Hunter thinks that PolitiFact blew its rating of Mitt Romney's claim that he never voted for a Democrat if a Republican was on the ballot.

Hunter:
(P)eople had a wee bit of a problem with this, because the context was Romney's vote for Democrat Paul Tsongas in the 1992 Democratic presidential primary. The Republicans certainly were having a primary that day as well: The incumbent president, George H.W. Bush, was running against Pat Buchanan. Now we can all look back now and have a good laugh at permanent cable news fixture Pat Buchanan taking on the incumbent president, but they both were certainly "on the ballot" in the 1992 primaries. So Mitt's completely making stuff up on this one—his critics have got him dead to rights.
One should not ignore the fact that the Democratic and Republican presidential primaries are two different elections.  There was no Republican on the ballot in the Democratic presidential primary.  Given the context, it is overpoweringly obvious that Romney was saying he votes for Republicans whenever Republicans are on the ballot against Democrats, as in a general election.  The critics have to ignore one of the most salient facts to have Romney "dead to rights."  We agree that PolitiFact blew the rating, though for different reasons.  The context makes his statement true, assuming the Tsongas case is the worst would-be exception.

Next up, "dcg2," also writing for the Daily Kos, noticed a supposed trend with PolitiFact's front page material:
I took a quick look at Politifact's home page, and -- surprise, surprise... found two more quick examples of their front page of taking statements from Democrats that they admit are true, but calling them half-true anyway. Just a quick look at the Politifact's front page shows even more outrages -- all against Democrats ...
The author included no list of stories in his post, so his claim is hard to verify, but it seems likely dcg2 was somehow able to ignore PolitiFact's flubs of Obama's milk regulations claim, Romney's voting claim and Mitch Daniels' claim about the percentage of those under 30 years of age not going to work for the day ("Because many of them were in school"!).

PolitiFact announced in the summer of 2011 that it would start grading statistical statements in light of arguments regarding cause and effect.  Critics like these two Kossacks simply ignore relevant data.  And that makes for poor critiques.

Friday, January 27, 2012

Props to Micah Cohen

Micah Cohen, in a piece appearing at Nate Silver's portion of the New York Times, provides an excellent example for PolitiFact to follow in presenting its candidate report cards.

Cohen's summary of PolitiFact data on the Republican field of candidates very prominently featured the following (from a post Cohen wrote back in September of 2011):
PolitiFact only looks at statements that pique its interest. Here’s how Bill Adair, the editor of PolitiFact, described the process: “We select statements that we think readers will be curious about. If someone hears a claim and wonders, ‘Is that true?’ then it’s something that we’ll check.”

In other words, if Mitt Romney says the sky is blue, PolitiFact doesn’t bother grading the statement as true. So there is a sampling bias at play here. Accordingly, the following numbers should be interpreted with caution. They aren’t perfect indicators of the honesty of each candidate, and conclusions like “Candidate X lies the most” or “Candidate Y is the most truthful” should probably not be drawn from the data.
Perfect.

PolitiFact, take note.



(1/31/2012) Jeff adds: A commenter to this post brings up an excellent point, and it's one I happen to agree with. Cohen deserves credit for at least recognizing some of the flaws, and including a disclaimer. The fact that it happened in the NYT is also worthy of note. In my view, however, it's unfortunate that Cohen immediately disregards his own reservations about the legitimacy of the report cards and goes on, in detail, to provide an analysis of them anyway. A colored chart with percentages provides an undeserved air of scientific authenticity. I'm not impressed.

The flaws with PolitiFact's report cards go deeper than Cohen implied. We've published a new post that explains these problems in detail. You can read it here.

Liberals late to the party on PolitiFact

As expected, PolitiFact's 2011 "Lie of the Year" selection did a good bit of damage to PolitiFact's reputation on the left.  President Obama's 2012 State of the Union speech produced a claim that again has some liberals crying foul.  The Daily Kos and the Huffington Post both published entries condemning PolitiFact's "Half True" ruling on Obama's claim that private-sector jobs increased by 3 million in 22 months.

Jared Bernstein:
I ask you, why do they go where they go? Because of this:
In his remarks, Obama described the damage to the economy, including losing millions of jobs "before our policies were in full effect." Then he describe [sic!] the subsequent job increases, essentially taking credit for the job growth. But labor economists tell us that no mayor or governor or president deserves all the claim or all the credit for changes in employment.
Really? That's it? That makes the fact not a fact? I've seen some very useful work by these folks, but between this and this, Politifact just can't be trusted. Full stop.
(what's with the exclamation point after the "sic," Bernstein?)

Was PolitiFact blatantly unfair to Obama?

Not necessarily. PolitiFact pledged in July of 2011 to take credit and blame more fully into account for statistical claims.  PolitiFact, in the segment Bernstein quoted, made a decent case that Obama was giving credit to his policies.

Fortunately for the crybabies of the left, PolitiFact promptly caved on this one, revising the ruling to "Mostly True."  The rationale for the change is weaker than the justification for the original ruling:
EDITOR’S NOTE: Our original Half True rating was based on an interpretation that Obama was crediting his policies for the jobs increase. But we've concluded that he was not making that linkage as strongly as we initially believed and have decided to change the ruling to Mostly True.
That editor's note doesn't give readers any concrete information at all justifying the new ruling.  It doesn't take Obama's phrasing into account in any new way, doesn't acknowledge any misinterpretation of Obama's words and doesn't reveal new information unavailable for the earlier ruling.  In short, it looks like a judgment call all the way, where PolitiFact arbitrarily (if we don't count the criticism from the left) decided to give Obama the benefit of the doubt.

The critics on the left, meanwhile, remain apparently oblivious to another ruling from the State of the Union speech where Obama received an undeserved "True" rating.

And where were they when Sarah Palin could have used their defense for her true claim about defense spending as a percentage of GDP?

We have a PFB research project planned to address this general issue of technically true claims.


Addendum:

PolitiFact editor Bill Adair has once again come forth to explain PolitiFact's ruling and change of mind:
Lou, deputy editor Martha Hamilton and I had several conversations about the rating. We wrestled with whether it deserved a Half True or a Mostly True and could not reach a conclusion. We decided that it would depend on how directly Obama linked the jobs numbers to his policies.
What criteria were used to determine how directly Obama linked the jobs numbers to his policies?

Adair:
Lou, Martha and I had another conversation about the rating and whether it should be Half or Mostly True. At various points, each of us switched between Half and Mostly True. Each of us felt it was right on the line between the two ratings (unfortunately, we do not have a rating for 5/8ths True!).

We brought another editor, deputy government & politics editor Aaron Sharockman, into the conversation and he too was on the fence. Finally, we decided on Half True because we thought Obama was implicitly crediting his own policies for the gains.
How was Obama's statement "right on the line"?  What criteria placed it there?  What criteria might have moved it one way or the other?

An item like this from Adair is precisely where we should expect a detailed explanation if there is any detailed explanation.

There's essentially nothing.

We get the report of disagreement and vacillation and none of the specific reasons in favor of one rating over the other, except for the implied admission that at least one person making the determination had a change of heart leading to a reversal of the rating.

If that sounds subjective on PolitiFact's part, it probably is.

Wednesday, January 25, 2012

Sublime Bloviations: "PolitiFlub: Udderly confused by EPA milk regs"

 Crossposted from Sublime Bloviations.


This.
just.
doesn't.
look.
very.
easy.
to.
reconcile:


(link to story at PolitiFact.com)

That's President Obama from yesterday's State of the Union Address.



(clipped from PolitiFact.com)

The latter rating came from PolitiFact Virginia almost a full year ago.

On the face of it, one can imagine a reconciliation of the two rulings.  But that reconciliation is hard to sustain once you've looked at the ruling for Obama after evaluating the one for Griffith.

The best part of it is that PolitiFact may have flubbed both rulings.  The EPA was leaning toward an exemption for homogenized milk.  But the exemption would not have covered raw milk, which should have left Griffith's claim at least partly true.  And if raw milk received no exemption from the EPA then Obama's claim is approximately half true as well.  A raw milk spill would still be treated just like an oil spill even after President Obama supposedly eliminated the rule that required a milk spill to be treated like an oil spill.

Welcome to the wonderful world of PolitiFact fact checking.


Update:  Looks like the EPA did get around to exempting all milk products from the rule. Griffith was still partially correct at worst, and President Obama did not eliminate an EPA rule.  Rather, Obama's EPA exempted milk from a rule that remains in effect.  And Obama's statement obscures the administration's vacillating actions on the issue:
"The Obama Administration pulled back the rule in January of 2009, then reissued it in November, and to large degree it was the same rule," Schlegel said.

Jeff adds (1/26/2012): This is more accurately a "Bryan adds" because I simply wanted to highlight something Bryan wrote on PolitiFact's Facebook page regarding this ruling. I think it exemplifies the "gimme" Trues that PolitiFact grants to Obama:

"[T]he Bush administration had created an exemption for milk as it was leaving office. That exemption was one of those that Obama pulled back for re-examination when he took office. So the only reason he had the chance to save us from the EPA's application of oil spill rules to milk was by preventing Bush from fixing it first."

Sunday, January 22, 2012

Bill Adair describes selection bias at PolitiFact

We've noted before, along with Eric Ostermeier, that PolitiFact's self-described methods of choosing stories amount to a recipe for selection bias.  The Parker Report recently published an interview with PolitiFact editor Bill Adair that describes more of the same:
How do you decide which reporters take on which claims? How many people check PolitiFact’s facts before publishing? In other words, what is the process? 

Here’s how it works: Every day, our interns look through transcripts, campaign videos, news coverage and interviews for factual claims. The editors review them and choose the claims to fact-check based on whether they are timely, newsworthy and whether people will wonder if the claim is true. That is our biggest criteria for selecting a claim to check — to satisfy people’s curiosity.
 
Most of the time, our reporters choose the claims they want to check. They do the research and write the article, which typically takes a day.

The articles, which include a recommended Truth-O-Meter rating, are then edited by one of our editors and then reviewed by a three-editor panel. The panel makes the final decision on the rating.

Though Adair does not mention it in this interview, PolitiFact also solicits fact check ideas from readers (see image below).

(clipped from http://www.politifact.com/)

If those who send in story suggestions are predominantly liberal or predominantly conservative, this feature will tend to skew story selection toward one political pole or the other.

Likewise, in Adair's description we have two factors that will help introduce partisan bias.  First, if the interns' political biases influence their choices of stories to send to the editors, then the system again will tend to skew story selection toward one political pole or the other.  Second, if the editors use their own sense of curiosity in choosing which statements to rate we have another case where ideology may influence the selection process left or right.

The three-editor panel that chooses the final "Truth-O-Meter" rating will by its nature tend to preserve a majoritarian ideological bias in PolitiFact's ratings.  Suppose the panel has two conservatives and one liberal.  In any rating where ideology serves as an influence, the two conservatives could consistently outvote the liberal.

Interviewer Erik Parker did not ask Adair any particularly tough questions (at least none appear on the record). Maybe next time.

Saturday, January 21, 2012

Spreading the Word

Big thanks to Right Wing News and Ace of Spades for the links this week.

John Hawkins provided a link to us on his site Right Wing News, and Ace retweeted a mention from one Wayne Austin.

Highlighting the bias and flawed standards at PolitiFact hasn't always been as popular as it is now. We started PFB in order to offer readers a collection point for disconnected and hard-to-find criticisms of the supposedly objective fact checkers.

Slowly but steadily the evidence is piling up, and PolitiFact is no longer as trusted a source as it once was. We'd like to think we had something to do with encouraging the skepticism, but we wouldn't have been able to do it without our fans promoting our blog.

Thanks to people like Austin, and outlets like Right Wing News, Ace, Legal Insurrection and Big Hollywood, the word is spreading.

Thanks for your support. We look forward to remaining your source for evidence of, and links to evidence of, PolitiFact's liberal bias and journalistic failings.

The PolitiFact response to the Romney story pushback

PolitiFact doesn't just entertain with its incompetent and unfair fact check stories.  It also entertains with its response to criticism. 

The most popular response of all is the "turtle."  Just don't respond and wait for the criticism to die off.

The second method involves acknowledging the criticism, followed by ignoring it as though it makes no difference.

PolitiFact editor Bill Adair uses a variant of the second method in responding to criticism of a recent fact check of Republican presidential candidate Mitt Romney:  Seize on some minor point in the criticism and act like the minor point nullifies the criticism (second method plus the minor point, in other words).

Hilariously, PolitiFact takes its critic, Dylan Byers, out of context for purposes of its response.  Note the relevant portion of Byers' critique:
I've reached out to Jacobson to see how many experts PolitiFact spoke with, and to ask for his reaction to Bruscino's post. There's a fair chance that PolitiFact spoke to other experts who saw things differently, but its hard to imagine how Bruscino, with all that detailed analysis, could be relegated to a minority view.
Bruscino was one of the experts PolitiFact cited in the Romney story.  He objected to PolitiFact's methods and conclusion in a blog post earlier this week at Big Tent.

Check out the headline over PolitiFact's response:

'There's a fair chance PolitiFact spoke to other experts' -- yes, 13 others
Yes, ladies and gentlemen, our esteemed fact checkers lead off with a fine example of quote mining.

Commentary: "PolitiFact's Pants on Fire"

Another PolitiFact-cited expert has joined Tom Bruscino in dumping on PolitiFact over a recent fact check on Republican presidential candidate Mitt Romney.

This time it's Ted R. Bromund, writing in Commentary magazine.  Like Bruscino, Bromund pans PolitiFact for using leading questions and for issuing a ruling with which he cannot agree.  But Bromund goes further in noting that PolitiFact performed its fact check based on a straw man version of Romney's point (bold emphasis added):
Jacobson sums up Romney’s contention as being: “The U.S. military has been seriously weakened compared to what it was 50 and 100 years ago.” Since the Truman Doctrine of 1947 is as good a marker as any of the moment when the U.S. assumed the global security responsibilities that formerly belonged to Britain, the fact that our Air Force is smaller and older than it has been at any point since that date might give immediately give cause for concern.

But the obvious point of Romney’s statement was not that the U.S. military of today would lose a war to the U.S. military of 1917 or 1947. It was that the margin of U.S. “military superiority” – i.e., its relative strength versus potential and actual adversaries – is at risk if defense spending declines, as President Obama plans for it to do. The question is not whether the U.S.’s “military posture is in any way similar to that of its predecessors in 1917 or 1947”: it is whether the U.S.’s margin of superiority over other actors, taking contextual factors into account, is better or worse than it was in previous eras.
Bromund weakens his case somewhat with his characterization of PolitiFact's argument in the story.  The argument seems to have been more that the armed forces, even in the state Romney described, remain the world's superior military force, though PolitiFact does use some language easily taken to support Bromund's interpretation.  But Bromund scores a huge hit in noting that PolitiFact stuffed Romney's argument full of straw.  Just have a look at the manipulated quotation PolitiFact used in its deck material:
The U.S. military is at risk of losing its "military superiority" because "our Navy is smaller than it's been since 1917. Our Air Force is smaller and older than any time since 1947." 
PolitiFact uses the above to represent Romney's argument. True, the story goes on to quote Romney accurately, including his argument that cuts in defense spending might lead to the U.S. losing military superiority.  But the story treats Romney's argument in accordance with the inaccurate hybrid quotation/paraphrase:
(A) wide range of experts told us it’s wrong to assume that a decline in the number of ships or aircraft automatically means a weaker military. Quite the contrary: The United States is the world’s unquestioned military leader today, not just because of the number of ships and aircraft in its arsenal but also because each is stocked with top-of-the-line technology and highly trained personnel.

Thanks to the development of everything from nuclear weapons to drones, comparing today’s military to that of 60 to 100 years ago presents an egregious comparison of apples and oranges.
It's remarkable that PolitiFact's summary omits Romney's mention of cuts in defense spending.


PolitiFact has stepped in it again, and has again reacted by publishing a pedantic rebuttal along the lines of the one it used in response to criticism of its 2011 Lie of the Year.  That rebuttal will receive a review here in due time. (Update 1/22/12 See review here).

Additional reading:

Power Line stays on the story, providing additional valuable material by publishing the text of Bromund's reply to PolitiFact writer Louis Jacobson after Jacobson's initial inquiry about the Romney issue.

Politico took note of the story with a piece by Dylan Byers.  Byers's story drew most of PolitiFact's ire in the rebuttal mentioned above, so reading it provides excellent background material.

The Huffington Post even gets in on the act with a solid article.  The story provides the context of Romney's remarks with a partial debate transcript.


Also see my latest post in the "Piquing PolitiFact" category at Sublime Bloviations, which features the text of an email message I sent to PolitiFact asking it to fact check one of its own claims.



Update (1/22/2012): Added link to Bruscino article, link to PF response review, and minor spelling corrections-Jeff

Thursday, January 19, 2012

Future of Capitalism: "Two More Obama Switcheroos"

I'm a regular reader of Ira Stoll's Future of Capitalism, though it's not where I typically troll for PolitiFact rebukes. So I was surprised last week to catch a brief mention of our factastic friends in an otherwise non-PolitiFact-related piece.

At issue is the announcement that the Obama administration seeks to reduce the Army to 490,000 troops over the next decade. Stoll happens to be keeping his own mini-ObamaMeter and points out that this reduction contradicts an Obama campaign promise that he "supports plans to increase the size of the Army by 65,000 troops and the Marines by 27,000 troops."

It takes Stoll a whopping two sentences to expose PolitiFact's sophistry:
The Politifact Web site absurdly rates this as a "promise kept," explaining, "Obama said nothing about keeping the higher levels indefinitely." How cynical can you get?
Not only is it cynical, it's also an example of PolitiFact's alternating standards. Note how PolitiFact dealt with Rand Paul's claim that the "average federal employee makes $120,000 a year. The average private employee makes $60,000 a year." Paul received a False rating because, as PolitiFact explained:

"Most people hearing that would assume he was talking about salary alone, but  he was talking about total compensation, including benefits such as retirement pay and paid holidays."

In Obama's case, PolitiFact gives him a Promise Kept because of the literal (and cynical) interpretation of his words. Paul, on the other hand, is deemed false not because of what he said, but because of what PolitiFact thinks people would assume he meant.

If that's not good enough, check out this recent rating on Mitt Romney's claim that "More Americans have lost their jobs under Barack Obama than any president in modern history..."

Romney’s claim is accurate if you count from every president’s first day in office to his final day -- by those standards, Obama is indeed the only president since World War II to have presided over a net job loss.

Don't worry. Despite finding the literal claim true, PolitiFact invented arbitrary standards to justify finding Romney's claim Mostly False. No "said nothing about" treatment for poor ol' Mitt.

One more?

You can read Bryan's in-depth takedown of this rating on Sarah Palin, but I'll give you a hint on how it ends:

Although she's technically correct, the numbers are wildly skewed by tiny, non-industrialized countries. We find her claim Barely True.

The examples are, if I can avoid being taken literally, endless. PolitiFact's vacillating guidelines on when to interpret something literally provide solid evidence of bias. And, to be fair, PolitiFact doesn't always take Obama literally:

So overall, the poll numbers support Obama’s general point, but they don’t fully justify his claim that "the American people for the most part think it’s a bad idea." Actually, in most of the polls just a plurality says that. On balance, we rate his statement Mostly True.

Doh!

In case readers infer that the above conflicting standards have been cherry-picked, it's important to note that every one of the examples provided, including the Promise Kept cited by Stoll, was written by PolitiFacter Louis Jacobson. So we can rule out different approaches from different authors.

Until PolitiFact develops objective standards and adheres to them, the political bias of the writers and editors will be exposed with each contradictory rating.

Although Future of Capitalism isn't a usual destination for PolitiFact articles, I encourage readers to read the whole thing and enjoy Stoll's brilliant writing.


Bryan adds:

Bill Adair offered this in mid-2011:
"You're right that we have not always been consistent on our ratings for these types of claims. We've developed a new principle that is reflected in the Axelrod ruling and should be our policy from now on. The principle is that statistical claims that include blame or credit like this one will be treated as compound statements, so our rating will reflect 1) the relative accuracy of the numbers and 2) whether the person is truly responsible for the statistic.)"
That's probably good news for "Bush's fault" in the past and bad news for "Obama's fault" moving forward.


Jeff adds: Bryan is correct that PolitiFact has "updated" their principles. But we've shown, repeatedly, that PolitiFact hasn't changed in practice since Adair's new and improved standard was published. And Adair's update still fails to provide a transparent, let alone objective, method for weighing the various components of a compound statement.

Big Tent: "A PolitiFact Example"

Blogger and PolitiFact-cited expert Tom Bruscino supplies a partial insider's look at the PolitiFact process along with a critique of the finished work of which he was a part in his post "A PolitiFact Example."

PolitiFact writer Louis Jacobson asked Bruscino for his assessment of Mitt Romney's claim that the U.S. Navy is at its smallest since 1947.

Bruscino found Jacobson's questions leading:
Jacobson did a remarkable bit of research in a very short period of time. However, I did think his questions to me were leading. Remember, Mr. Jacobson asked "(2) What context does this ignore (changing/more lethal technology, changed geopolitical needs, etc)?," which both assumes and implies to the interviewees that Romney ignored those specific contexts.
And after registering some surprise at Jacobson's use of apparently non-objective descriptors of Romney, Bruscino demurs from PolitiFact's "Pants on Fire" ruling:
My opinion, for what it is worth, is that since Romney's base statement was factually accurate when it came to most numerical metrics, it would seem that he could be given credit for a half-truth, even if the context complicates the matter.
Do read Bruscino's entire post, which is particularly valuable since it provides yet another look at the style of inquiry used by PolitiFact journalists.  The commentary thread is also well worth reading.

Hat tip to Power Line blog.  Visit Power Line also for a parallel review I'd have been better off copying rather than writing up my own.



Jeff adds: I first saw this rating yesterday, and couldn't help but notice it provided another example of PolitiFact's alternating standards. Check out how PolitiFact presented this article on their Facebook page:

Image from http://www.facebook.com/politifact

Notice that Romney is spreading ridiculous falsehoods because he "ignores quantum leaps in technology and training."

Poor Mitt. If only he had made this statement back in 2009 when PolitiFact's standards were much different:

We agree that the two cars are totally different. But Obama was careful in the way he phrased his statement: "The 1908 Model T earned better gas mileage than a typical SUV sold in 2008."  As long as you don't consider any factors other than mileage, he's right. We rate his statement Mostly True.

You see, Obama is rated only on his literal statement, while PolitiFact ignores the quantum leaps in technology that make the Model T "totally different." Romney suffers from additional qualifiers that PolitiFact throws into the mix.

The similarities between the two ratings don't end there. Here's a bit from the Obama/Model T rating:

So technically Obama is right.


But his implication is that we haven't gotten more fuel efficient in 100 years. And that's a reach.
...

...Model Ts reached top speeds of only 40 miles an hour. They guzzled motor oil, about a quart a month. The original tops were made of canvas, and they had no heating or cooling systems. They also had none of the safety features of modern cars: no bumpers, no air bags, no seat belts, no antilock breaks [sic].

The cars had large, skinny wheels to more easily clear the obstacles on rocky, rutted roads. Corner them too fast and they could tip over. And if you crashed, the windshield would usually shatter into sharp, jagged pieces that could slice you to ribbons.

"The government would not allow anyone to sell Model Ts today because they're so unsafe," Casey said. "It's a car that no one would use on a regular basis today. It's not a fair comparison."

Here's similar text from the Romney rating:

This is a great example of a politician using more or less accurate statistics to make a meaningless claim. Judging by the numbers alone, Romney was close to accurate.

...

Thanks to the development of everything from nuclear weapons to drones, comparing today’s military to that of 60 to 100 years ago presents an egregious comparison of apples and oranges. Today’s military and political leaders face real challenges in determining the right mix of assets to deal with current and future threats, but Romney’s glib suggestion that today’s military posture is in any way similar to that of its predecessors in 1917 or 1947 is preposterous.

Obama: Technically correct, as long as you don't consider any other factors, but a reach. Mostly True.

Romney: Close to accurate, meaningless, egregious, glib, preposterous. Pants on Fire.

Bruscino is right to point out that the terms used to describe Romney's statement are more appropriate to the editorial page than to an objective determination of facts. And once again, we're left to wonder why different guidelines are used for different people.

Update (1/19/2012 1921 pst) Jeff adds: Speaking of glib and preposterous, this part of the rating just caught my eye:

A wide range of experts told us it’s wrong to assume that a decline in the number of ships or aircraft automatically means a weaker military. Quite the contrary: The United States is the world’s unquestioned military leader today, not just because of the number of ships and aircraft in its arsenal but also because each is stocked with top-of-the-line technology and highly trained personnel.

The first problem is obvious. Romney never claimed that a reduction in the number of ships or aircraft automatically meant a weaker military.  Actually, Romney was citing examples in support of his overall claim (that continued cuts in defense spending will eventually lead to a weaker force). Jacobson's second sentence is a howler. "Quite the contrary" to what? The fact that the U.S. is the world's supreme military force is totally irrelevant to whether or not it's on the path to becoming weaker. If Warren Buffett loses a million dollars on a bad deal, the fact that he's still the richest guy in the room does not negate the fact that he's also a million dollars poorer. And just like Romney claimed in his statement, Buffett simply cannot keep making bad deals if he is going to remain the richest guy in the room.

Thursday, January 12, 2012

Anchor Rising: "Cicilline Gifted Another Mostly True From Politifact -- Seriously?"

Patrick Laverty and the Anchor Rising blog give us yet another quality criticism of PolitiFact.

PolitiFact just makes it way too easy.
Ok sure, the funding level was "close". It was in the 90s and it was more than the previous administration. However, as Politifact themselves often say, that isn't what Cicilline said. He said it was at 100% all but two years. It was there for all but six years. That's a big difference.

So the issue really speaks to Politifact's credibility, if they have much left. They are, at best, inconsistent with their rulings especially when it comes to Congressman Cicilline. This is the same newspaper that int 2010 endorsed Cicilline for Congress, in part due to his fiscal management of Providence. 
 PolitiFact's justification for the ruling?
Cicilline, in his off-the-cuff statement, mixed up where the years with the lowest contribution fell. But he made it clear a few times that he was citing figures from memory, so we’ll give him some leeway and rule Mostly True.
It's good to know that one can obtain some leeway if one is citing figures from memory.  At least when PolitiFact decides to grant such leeway (I'm not finding other examples of this kind of treatment).  If you're working from memory then the same set of facts can become more true than otherwise.  Like magic.

Looks like we can add a new wrinkle to the Principles of PolitiFact.

Wednesday, January 11, 2012

Mark Hemingway and Glenn Kessler on NPR

Mark Hemingway, who wrote a key critique of modern fact-checking operations back in December, appeared with the Washington Post's fact checker, Glenn Kessler, for a radio interview on NPR.  It's worth either listening to it or reading the transcript, but one particular section deserves special attention:
CONAN: Here's an email from Noreen(ph). I don't understand that the - that since - excuse me. I don't understand the idea that since PolitiFact demonstrates that Republicans lie three times as often as Democrats mean it's biased. Maybe Republicans actually do lie that much more. The idea that you have to have an even number of lies reported for Democrats and Republicans in order to be considered not biased is ridiculous. One side could lie way more than the other. And by trying to make them even, you are distorting fact. Is simple numerical balance an indication of nonpartisanship?

KESSLER: No. I don't look at them that way, and, as I said, I don't really keep track of, you know, how many Democrats or how many Republicans I'm looking at until, you know, at the end of the year, I count it up. My own experience from 30 years covering Washington and international diplomacy and that sort of thing is there's - both Democrats and Republicans will twist the truth as they wish if it somehow will further their aims. I mean, no one is pure as a driven snow here. And I've often joked that if I ever write an autobiography, I'm going to title it "Waiting for People to Lie to Me."

(SOUNDBITE OF LAUGHTER)

CONAN: That's something reporters do a lot. Mark Hemingway?

HEMINGWAY: Why - I think I said when I even brought this up. I mean, you know, I don't think that, you know, that, you know, numerical selection is indicative of, you know, bias per se. I just think that it's highly suspicious. When it's three to one, you know, if it were 60-40, you know, whatever, yeah, sure, you know? But when it's three to one, you start getting things where, you know, you start wondering about, you know, why the selection bias.

Hemingway's December article was quite valuable, but he missed an opportunity to explain an important aspect of Eric Ostermeier's examination of PolitiFact's story selection.

PolitiFact rated about the same number of politicians from each party.  Yet one party received significantly worse "Truth-O-Meter" ratings.  The key inference behind Ostermeier's study was that a party-blind editorial selection process should choose the same types of stories for both parties.  If Republicans really did lie more, the results would then show approximately the same distribution of ratings for each party, but with the more dishonest party more heavily represented in the total number of stories.  The approximately even number of stories for each group throws a monkey wrench into Noreen's reasoning.
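
For readers who like to see the inference spelled out, here's a minimal simulation of the expectation Ostermeier describes (the lie rates and selection probabilities are our assumptions for illustration, not his data): if one party really lied three times as often and editors selected claims party-blind, favoring dubious-sounding claims, the sample should contain far more stories about the more dishonest party.

```python
import random

random.seed(7)

# Sketch assumptions (ours, not Ostermeier's): Republicans lie three times
# as often, per "Noreen," and editors select claims party-blind, preferring
# claims that sound dubious.
LIE_RATE = {"Dem": 0.1, "GOP": 0.3}
STATEMENTS_PER_PARTY = 20_000
P_SELECT_FALSE = 0.9   # a false claim usually looks check-worthy
P_SELECT_TRUE = 0.2    # a true claim rarely does

sample = []
for party, lie_rate in LIE_RATE.items():
    for _ in range(STATEMENTS_PER_PARTY):
        is_lie = random.random() < lie_rate
        if random.random() < (P_SELECT_FALSE if is_lie else P_SELECT_TRUE):
            sample.append((party, is_lie))

for party in LIE_RATE:
    ratings = [is_lie for p, is_lie in sample if p == party]
    share = sum(ratings) / len(ratings)
    print(f"{party}: {len(ratings)} stories, {share:.0%} rated false")
# Typical run: Dem ~5,400 stories (~33% false) vs. GOP ~8,200 stories
# (~66% false). Party-blind selection of a more-dishonest party should
# yield MORE stories about that party -- not the roughly even story
# counts Ostermeier actually observed alongside lopsided ratings.
```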

It would have been good if Hemingway had explained that during the broadcast.

As a side note, it's interesting that Kessler likewise ends up writing approximately as many stories about Democrats as about Republicans.  Run the numbers for Kessler as Ostermeier did for PolitiFact and perhaps the tendencies look alike. The obvious reason for focusing on PolitiFact instead of Kessler is PolitiFact's far greater volume of material.



(1/12/12) Jeff adds: There's a flaw that is often overlooked when discussing the "add 'em up" style of interpreting PolitiFact's ratings, and that's the issue of the quality of the fact checks themselves. Assuming PolitiFact actually adheres to an objective formula for avoiding selection bias, and then rates 50 statements from the left and 50 from the right, it still wouldn't disprove an ideological lean.

Take for example the different standards used when PolitiFact rated similar statements from Herman Cain and Barack Obama. Both included the employer's portion of payroll taxes in their respective calculations, but in Cain's case PolitiFact downgraded him for it, while in Obama's case it pushed him higher up the rating scale. And this still doesn't take into account the dishonest tactic of inventing statements out of thin air.

It may be interesting to review the tallies of who gets what ratings and discuss the merits of the numbers. Ultimately, though, it's the alternating standards that will offer the best evidence of PolitiFact's liberal bias.


(1/19/2012) Jeff adds: An additional flaw with adding up PolitiFact's ratings is that PolitiFact chooses whom to give the rating to.

When Obama claimed that preventative health care "saves money," and David Brooks said he was wrong, PolitiFact gave the True to Brooks. This serves the dual purpose of sparing Obama a False on the "report card" that PolitiFact likes to shill so often, while also providing cover in the "we give Republicans Trues too!" sense.

When PolitiFact rated the oft-repeated, and false, claim about $16 muffins at a DOJ event, PolitiFact could have given the rating to NPR, the New York Times, or even (gasp!) PolitiFact partner ABC News. Instead, they chose to burden Bill O'Reilly with the falsehood, despite the original claim coming from a government report.

It's these types of shenanigans that will always distort a ratings tally.

Update/clarification (1/14/2012):

Added "for a radio interview on NPR" to the first sentence.

Wednesday, January 4, 2012

James Taranto: "Bad-Faith Journalism"

The Wall Street Journal once again weighs in on PolitiFact following its "Lie of the Year" selection.

This cycle, James Taranto hits PolitiFact for its wrongheaded approach to fact checking:
We're not as troubled as Ponnuru is by the effect of PolitiFact, and the "fact checking" genre it exemplifies, on politics. We'd argue instead that it has a baneful effect on journalism.
Taranto's complaint about PolitiFact has much to do with the rationale behind Michael F. Cannon's decision to withhold from PolitiFact the expert opinions it solicits from time to time.

Though Taranto goes a bit easy on PolitiFact for its past "Lie of the Year" failures, it's worth reading every word.

Sunday, January 1, 2012

PolitiFact 2011: A review

Crossposted from Sublime Bloviations.

PolitiFact Bias has now spent approximately a full year highlighting criticisms of the PolitiFact fact checking brand.

Our hopes that PolitiFact would improve its performance in light of outside criticism have gone largely unfulfilled.  Perhaps the biggest improvement was the reconciliation of two differing definitions of the "Half True" rating, but that modest accomplishment occurred without any announcement or acknowledgment at all from PolitiFact.  By contrast, PolitiFact wrote extensively about its momentous change in calling its fourth rating from the top "Mostly False" rather than "Barely True," even though the definition remained the same.

Here's a rundown of the issues that should keep discerning readers from trusting PolitiFact:

1)  PolitiFact persistently ignores the effects of selection bias.  It simply isn't plausible that editors who are very probably predominantly liberal will choose stories of interest on a neutral basis without some systematic check on ideological bias.  PolitiFact, for example, continues to publish candidate report cards as though selection bias has no effect on the report card data.

2)  PolitiFact continues to publish obviously non-objective stories without employing the journalistic custom of using labels like "commentary," "opinion" or even "news analysis."  Readers are implicitly led to believe that stories like an editorial "Lie of the Year" selection are objective news stories.

3)  PolitiFact continues to routinely apply its principles of analysis unevenly, as with its interpretation of job creation claims (are the claims assumed to refer to gross job creation or net job creation?).

4)  PolitiFact has yet to shake its penchant for snark.  Snark has no place in objective reporting (see #2 above).  Unfortunately, PolitiFact treats it like a selling point instead of a weakness, and PolitiFact's intentional use of it has apparently influenced Annenberg Fact Check to follow suit.

There is a silver lining.  PolitiFact's methods produce perhaps the best opportunity yet to objectively measure mainstream media bias.  Some of those projects will be published at PolitiFact Bias over the coming year, with the study specifics available through Google Docs.