Wednesday, May 29, 2013

Flub & Scrub: Mitch McConnell edition, Part II

On May 28 PolitiFact published a new version of its May 24 fact check of Sen. Mitch McConnell (R-Ky.).

It's mostly bad, but we'll start with the good.

The good

PolitiFact republished its first version of the story and archived it while publishing a new version.  PolitiFact improved its reporting in the new version by responding to criticism from healthcare expert Michael F. Cannon of the Cato Institute.  Cannon noted that HHS Secretary Kathleen Sebelius sent a potentially intimidating letter to America's Health Insurance Plans:
An HHS spokesman told PolitiFact that the letter was in response to insurance companies using Obamacare as an excuse to raise premiums, and that the law gave the agency the authority to scrutinize excessive premium increases and require justifications from insurance companies. But we find Cannon's interpretation more accurate. The letter chastised the insurers for anti-Obamacare messages and threatened them with regulatory action.
Based largely on Cannon's argument and the AHIP letter, PolitiFact upgraded its ruling from "Mostly False" to "Mostly True."

The bad

In its "Editor's note" preceding the new version of its McConnell fact check, PolitiFact blamed its failure on McConnell's office (bold emphasis added):
Editor's note: This item was initially published May 24, 2013, as a Mostly False because of the limited supporting information we received [from] Sen. Mitch McConnell's office. The office cited a letter sent to Humana, a government contractor for Medicare Advantage. We found that letter provided relatively little support for the senator's claim and rated it Mostly False.
PolitiFact's failure was not the fault of McConnell's office.  We pointed out in our previous post that PolitiFact omitted key information from its original fact check--information readily available from a Government Accountability Office report that PolitiFact itself cited in the original reporting.  We failed to credit PolitiFact with its fractional disclosure of the key information:
[T]he Humana mailing prompted CMS to send a memo to all other Medicare Advantage and Part D contractors, warning them "to suspend potentially misleading mailings to beneficiaries about health care and insurance reform."
That doesn't sound so much like a "gag order," does it?  But the GAO report related a different account (bold emphasis added):
Although CMS's actions generally conformed to its policies and procedures, the September 21, 2009, memorandum instructing all MA organizations to discontinue communications on pending legislation while CMS conducted its investigation was unusual.
Note the difference between suspending "potentially misleading mailings" and suspending "communications on pending legislation."  The GAO uses language like the latter twice in its report.

But what about the version PolitiFact quotes?  All the CMS did was ask MA insurers to stop potentially misleading mailings.  Right?

Most likely the GAO has it right.  PolitiFact quoted from a CMS press release announcing the Sept. 21, 2009 memo, not the memo itself.  The press release does not appear to use language directly from the memo.  PolitiFact presents the press release quotation as though it comes from the memo.  The CMS press release does not express the type of policy the GAO highlighted in its report.  PolitiFact blew the reporting.

The new version of the McConnell fact check doesn't even mention the CMS memo or the GAO report (though the latter remains among the sources listed in the sidebar).  The CMS memo, with the GAO report detailing its instructions, serves as the best evidence supporting McConnell's claim.

We applaud PolitiFact for keeping the original version of the article available to the public, even if its posting followed a disappointing delay.  It should have been a simple matter to archive the post as soon as the decision was made to replace it.  The original page URL could then link to the archived version, explain why it was archived and assure readers that a new version was in the works.

We condemn PolitiFact for blaming its poor reporting on McConnell's office and for publishing a new version that's almost as defective as the first version.  Certainly the new "Mostly True" rating appropriately gives McConnell more credit, but it's inexcusable for journalists to simply leave out easily accessed information that supports McConnell.  That's just poor journalism.

Correction May 29, 2013:  Fixed title to agree with original title (added "Mitch" and reversed "Scrub" with "Flub") aside from the "Part II."

About that George Mason University study showing PolitiFact rates Republicans as less truthful ...

Just about every media outlet has flubbed the reporting on that study from George Mason University that says PolitiFact finds Republicans less truthful.

Most media outlets lean (or fall) toward the view that the study says something about the veracity of Republicans.  That's not the point of the study.  It's a media study:  it examines PolitiFact, not politicians, and its conclusions apply to PolitiFact, not to politicians.

What's the value of this study?  Not much at all.  It proves nothing, as John Sides points out, because too many different explanations could account for the facts.  The study simply records what PolitiFact did with its ratings over a given time period.  So as much as we might like to see a study that quantifies PolitiFact's selection bias or outright spin in writing stories, this isn't it.  Our study probably remains the best of the lot when it comes to showing PolitiFact's bias.

We've run across a couple of media reports that get things mostly right:  Peter Roff at U.S. News & World Report and John Sides of Washington Monthly and "The Monkey Cage."

Roff doesn't clearly describe the point of the study except in terms of his own view (bold emphasis added):
The fact that, as the Lichter study shows, "A majority of Democratic statements (54 percent) were rated as mostly or entirely true, compared to only 18 percent of Republican statements," probably has more to do with how the statements were picked and the subjective bias of the fact checker involved than anything remotely empirical. Likewise, the fact that "a majority of Republican statements (52 percent) were rated as mostly or entirely false, compared to only 24 percent of Democratic statements" probably has more to do with spinning stories than it does with evaluating statements.
Still, Roff likely captures the actual purpose of the study.  He's not explaining anything new to the researchers (nor to Sides at The Monkey Cage).

But, hilariously, the media have largely interpreted the GMU press release in terms of liberal orthodoxy.

The Poynter Institute, owner of the Tampa Bay Times and PolitiFact, ran the ambiguous headline "Study: PolitiFact finds Republicans ‘less trustworthy than Democrats’" and published comments from long-time PolitiFact editor Bill Adair to the effect that PolitiFact doesn't try to measure "which party tells more falsehoods."  Newsflash, Bill Adair:  That's not the point of the study.

Typically the media published semi-accurate accounts like the one at Poynter.  But a few others flatly interpreted the study as saying Republicans tell more falsehoods.


The Huffington Post

Evidence Republicans tell more falsehoods

The Raw Story
Talking Points Memo

The two in the "ambiguous" category should write clarifications.  The four in the latter category should write corrections.

Tuesday, May 28, 2013

PolitiFact is "Not Far" from "Large-Scale" Inconsistencies

Just over a week ago we highlighted PolitiFact's dubious rating of radio host John DePetro. The problem with that rating was simple: DePetro said the Boston Bomber was buried "not far" from John F. Kennedy. PolitiFact did a search on Google Maps to find the linear distance between the two graves, and said DePetro's claim was "mildly ridiculous." Somehow, PolitiFact Rhode Island was able to determine the specific distance of "not far."

PolitiFact explained their scientific conclusion:
Saying that Kennedy is buried "not far" from Tsarnaev is like saying Newport is not far from the eastern tip of Cape Cod, that Rhode Island's State House is not far from Derry, N.H., or that the site of the Boston Marathon bombing is "not far" from the southernmost tip of Narragansett, R.I.

And to say that such a distance should somehow spark offense strikes us as mildly ridiculous, so we rate his statement Pants on Fire!
Last Thursday, after hearing President Obama claim "there have been no large scale attacks on the United States" during his presidency, I fired off an email to my co-editor Bryan White: "What are the odds PF gives Obama leeway because 'large scale' is too vague?"

As it turns out, the Predictability Gods were listening to my prayers:
Indeed, the definitions of "large-scale" are sufficiently vague that there’s a lot of room for Obama... 
That's right. PolitiFact can determine the linear distance of "not far," but the definition of "large scale" is beyond their ability to comprehend.

Their final ruling is so pathetically protective of Obama it's actually insulting (emphasis added):
Obama said that since he has taken office, "there have been no large-scale attacks on the United States."

Two attacks on Obama’s watch that might qualify as "large scale" -- the Fort Hood shootings and the Boston Marathon bombing. They caused substantially fewer deaths than the biggest terrorist attacks of recent years, and they are believed to have been carried out by "lone wolf" attackers with limited connections to large-scale terrorist networks. But where to draw the line between small, medium and large attacks is open to interpretation. Obama's formulation is plausible, but not the only one. We rate it Half True.
There's a lot wrong with this paragraph. What qualities are necessary for an attack to be deemed "biggest"? Loss of life? Property damage? Number of terrorists involved? How far back do "recent" years go? Notice that "small," "medium" and "large" are "open to interpretation," but "not far" can be quantified by a Google search.

The reality is that reasonable people are entitled to their own opinions about what constitutes a "large scale" attack, or about how close is too close to bury a terrorist near a fallen president.  Either way, it amounts to an opinion, and opinion is rightly beyond the bounds of the clinical world of fact checking.

PolitiFact does decent work as an editorial page, and they tend to provide valid arguments in favor of their opinions. But it's dishonest for them to label themselves "fact checkers," let alone pretend to be unbiased. This latest Obama rating is not a legitimate uncovering of facts. It's a defense of a spin of a denial. PolitiFact may as well have their own podium next to Jay Carney.

Bryan adds:

The way fact checkers rule on ambiguous claims over time reveals much about their ideology.  If one side gets a statistically significant advantage over the other then we have a strong indication of ideological bias.
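Bryan's test can be sketched concretely. Here's a minimal version using a two-proportion z-test on hypothetical rating counts (the 54-vs.-18 split echoes the percentages quoted from the Lichter study above, treated as counts out of 100 purely for illustration, not as real tallies):

```python
from math import sqrt

def two_proportion_z(fav_a, n_a, fav_b, n_b):
    """Z-score for the difference between two proportions of favorable rulings."""
    p_a, p_b = fav_a / n_a, fav_b / n_b
    p_pool = (fav_a + fav_b) / (n_a + n_b)  # pooled proportion under "no bias"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts, NOT real PolitiFact tallies: 54 of 100 Democratic
# statements rated favorably vs. 18 of 100 Republican statements.
z = two_proportion_z(54, 100, 18, 100)
print(abs(z) > 1.96)  # True: a gap this large is unlikely under even-handed rating
```

A significant z-score alone doesn't prove ideological bias, of course; as the Sides critique above notes, story selection and the underlying truthfulness of the statements are competing explanations. But this is the kind of statistic such a comparison would rest on.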

HuffPo commenter: PFB is "a MADE UP FAKE site"

And who needs evidence when everyone knows that truth has a liberal bias?

Since we're made up, J.D. will change his name to "Rumpelstiltskin" while I will take the name of "Pinocchio."

But seriously, this isn't a fake site and the HuffPo story has the reporting all wrong on the George Mason U study.  More on that later.

Our thanks to Bart DePalma for linking us in the comments.

HuffPo visitors are welcome to comment on or criticize our content.  But please do not misrepresent us as a fake or "made up" site.

Saturday, May 25, 2013

Flub & Scrub: Mitch McConnell edition

[Also see "Flub & Scrub:  Mitch McConnell edition Part II"]

On Friday May 24 PolitiFact posted a fact check of Sen. Mitch McConnell (R-Ky.).  I won't quote from it because PolitiFact, perhaps only temporarily, has scrubbed the story from its website with apparently only an announcement on Twitter in explanation:
We're revising our fact-check on Sen. McConnell's claim on health insurers. New item will be posted later today.
PolitiFact later tweeted that it would publish the revised item next week.  We'll see.

So what happened?

PolitiFact rated "Mostly False" a claim from McConnell that HHS Secretary Kathleen Sebelius had forbidden insurance companies from sending information to policyholders about the effects of Obamacare.  Then Michael F. Cannon happened.  Cannon sent out a tweet of his own, linking his 2010 article "Secretary Sebelius Slips on the Brass Knuckles," in which he detailed Sebelius' demands that insurers not communicate to policyholders information about the Affordable Care Act she considered false or misleading.

I ran across the McConnell rating on PolitiFact's Facebook page that afternoon, well after Cannon had tweeted about it.  I spot-checked the story as I sometimes do.  The story claimed a GAO report said the HHS office "in general" acted properly in responding to what it regarded as a misleading taxpayer-funded message the insurer Humana sent out to Medicare Advantage policyholders.

I looked at the report.  I confirmed that PolitiFact reported more or less accurately that the report "in general" found nothing amiss in the handling of the situation.

But I also found a significant caveat PolitiFact failed to mention (bold emphasis added):
Although CMS’s actions generally conformed to its policies and procedures, the September 21, 2009, memorandum instructing all MA organizations to discontinue communications on pending legislation while CMS conducted its investigation was unusual. Officials from the MA organizations and CMS regional offices that we interviewed told us they were unaware of CMS ever directing all MA organizations to immediately stop an activity before CMS had determined whether that activity violated federal laws, regulations, or MA program guidance. When asked about this directive, officials from CMS’s central office stated that, given the degree of potential harm to beneficiaries, the action was appropriate for the circumstances.
A source PolitiFact cited provided information supporting McConnell's claim.  And PolitiFact didn't mention it.

I posted this evidence to PolitiFact's Facebook page, arguing that PolitiFact's story was more misleading than McConnell's statement.

As it happens, PolitiFact scrubbed its Facebook page of the link to the scrubbed article at its main website.  So those Facebook comments are gone along with PolitiFact's wall post about its McConnell article.  Mostly gone, that is.  We have a few screen captures.

I argue that a reporter's background knowledge and ideology predispose the reporter to make mistakes of this type--missing obvious evidence mentioned in a source document--in cases where that ideology is threatened or attacked.  This applies even if journalists otherwise keep their ideology secret.

This was a big mistake.  Journalistic organizations do not lightly pull entire articles from a website.  Count it as yet another piece of evidence helping to show PolitiFact's liberal bias.

And now we'll get another opportunity to gauge PolitiFact's transparency as it deals with a big mistake.  The past record isn't particularly good.

Addendum 5/27/2013

J.D. points out something that helps cinch the stink factor on PolitiFact's move to unpublish.

The article I provided about unpublishing stories includes quoted material from Craig Silverman.  Silverman works for the Poynter Institute, which owns the Tampa Bay Times and PolitiFact.

Here's Silverman in his Poynter Institute "Regret the Error" column dealing with the issue of pulling a story offline (leading off with a quotation from the Washington Post):

* We should never “unpublish” stories from the Web. Once a story is up, however, the content can be removed with the approval of a senior editor. In those rare cases when we remove the content of a story from the page, it must be replaced with an editor’s note explaining the reason for the deletion. For example if an embargo has been broken, the note would read: “Editors’ Note: This article was published inadvertently and has been removed.”
This is the right standard when it comes to unpublishing. First of all, you try to never unpublish. But if you do have to remove content, be sure to publish an explanation/apology at the same URL.
Why doesn't PolitiFact follow Poynter Institute-recommended standards for unpublishing a story?

Here's what I don't want to see:  "We did follow our guidelines.  We published an explanation/apology to the same URL.  A few days after we unpublished."


Edit 5/28/13: Corrected "Friday May 25" to "Friday May 24"-Jeff

Saturday, May 18, 2013

I Need A Facts 'Cuz I'm Goin' Down

You tell lies thinking I can't see, You can't cry 'cuz you're laughing at me
I'm down (I'm really down)
-The Beatles

It's been more than a decade since Bill Clinton put a face on the concept of obfuscation when he uttered the now-infamous words "It depends on what the meaning of the word 'is' is." Thankfully, PolitiFact has resurrected the "is" defense in their ongoing protection of ObamaCare.


PolitiFact put their Pulitzer-winning skills to the test while grappling with Nancy Pelosi's confusing, ambiguous statement. Is the ACA bringing the cost of health care down? Heroically, PolitiFact pores over the numbers and sorts out the truth:
It depends partly on what you mean by "down."

Ah, yes, that most complicated and mysterious of all adverbs: "down." What does it mean?

Apparently to the Fact Mongers at PolitiFact, it means up, but not as up as before. Or something:
Pelosi said "the Affordable Care Act is bringing the cost of health care in our country down." But it’s the rate of growth that’s dropped, not the actual cost of care — which is still rising.
Something going up is generally considered to be the exact opposite of something going down.

This Half True rating is pure editorial spin. Pelosi is flatly wrong. PolitiFact acknowledges that costs are rising. And even if we accept at face value their argument that costs are rising slower than they were before, there's hardly an objective way to determine the ACA's influence on that.

When your fact check stumbles over what the definition of the word "down" is, you have to wonder if you're in the right line of work.

Bryan adds:

A "Half True" is almost defensible if Pelosi truly meant to refer to health care costs rising more slowly than they would have in the absence of the ACA.

The context of her statement, however, makes that interpretation implausible (bold emphasis added):
"Many of the initiatives that he passed are what are coming to bear now, including the Affordable Care Act. The Affordable Care Act is bringing the cost of health care in our country down in both the public and private sector.

"And that is what is largely responsible for the deficit coming down."
Slowing the growth of health care spending cannot bear responsibility for "the deficit coming down." PolitiFact's evaluation of Pelosi's statement involves giving her the benefit of the doubt twice:  When she says health care costs are going down she means growing more slowly, and when she says the deficit is coming down she means it's growing more slowly.

The deficit is coming down in 2013, not growing more slowly.  The CBO released statements to that effect in February and May of this year.  It therefore makes no sense to think Pelosi was saying the deficit is growing more slowly.

It's another Olympian flub by PolitiFact. Rachel Maddow's going to explode over this incompetence.  Any day now...

A Fact Too Far

PolitiFact Rhode Island published an article yesterday that highlights the distance between fact checking and what PolitiFact actually does.


At issue are radio host John DePetro's comments regarding the current resting place of the deceased Boston bomber: "You know, in a way, think of who else is there. That is, President Kennedy is buried not far from there, in Virginia."

PolitiFact's findings?
We used the Google Maps Distance Calculator to find the actual span between Kennedy's grave at Arlington and the Al-Barzakh Cemetery on Sadie Lane in Doswell, Virginia.

Driving distance: 87 miles.
Bee-line distance: 74 miles.
That's about 55,817 casket lengths.

When we informed DePetro of the distance and asked if he was still bothered, he wrote in an e-mail, "Yes. Insult to bury him so close to JFK. Johnston landfill was my choice or out to sea."
Notice anything missing?  PolitiFact failed to provide a standardized measurement for the linear distance of "not far." Probably because no such definition exists. It's an opinion, and one DePetro articulated quite effectively.
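For what it's worth, PolitiFact's "55,817 casket lengths" figure only reproduces if you assume a casket length the article never states — roughly seven feet, which is our inference, not PolitiFact's. A quick sketch of the arithmetic:

```python
# Reproducing the "casket lengths" conversion from the quoted article.
# The 74-mile bee-line distance is PolitiFact's; the 7-foot casket is OUR
# assumption -- the article gives no standard for it, which is the point.
FEET_PER_MILE = 5280
bee_line_miles = 74
assumed_casket_feet = 7

casket_lengths = bee_line_miles * FEET_PER_MILE / assumed_casket_feet
print(round(casket_lengths))  # 55817
```

The precision is spurious: change the assumed casket length by half a foot and the figure shifts by thousands, while "not far" remains exactly as subjective as before.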

Any guess on PolitiFact's rating?
[T]o say that such a distance should somehow spark offense strikes us as mildly ridiculous, so we rate his statement Pants on Fire!
This is a wholly inappropriate sentence to include in a supposed fact check. We've long argued that there is simply no objective definition of what makes a claim "ridiculous." It's a subjective term determined only by the personal inclinations of PolitiFact's editors. Compounding that subjectivity, PolitiFact finds DePetro's claim only mildly ridiculous in this case. So now not only is the Pants on Fire rating based on an opinion, it's also subject to a sliding scale, the standards for which have yet to be published. Is it a Truth-O-Meter or a mood ring?

It would be interesting to learn when PolitiFact acquired the magical gift of objectively defining what should or should not cause offense. The fact that PolitiFact Rhode Island isn't offended does not make something inoffensive. That's a personal judgment that has no place in a dispassionate determination of fact.

This article is yet another example of how PolitiFact operates as an editorial site sheathed in a false blanket of objectivity. There is simply no way for them to measure the accuracy of DePetro's opinion, still less to place a factual determination on what is or isn't offensive.

This is an opinion piece. It's not a fact check. It is dishonest for PolitiFact to suggest otherwise.

Wednesday, May 8, 2013

Got 29 States but a Fact Ain't One

On Tuesday PolitiFact published a rating of Martina Navratilova that drew the ire of liberal bloggers:

PolitiFact gave Navratilova the dreaded Half-True rating, and this upset Wonkette writer "DOKTOR ZOOM," who complained:
Politifact, which now apparently is fact-checking retired pro athletes, to contribute to serious political discourse, checked into Navratilova’s claim, determined that employers in 29 states can indeed fire people for being gay, and rated Navratilova’s statement as “half true,” because it turns out that there are a few exceptions.
It doesn't happen often, but we're inclined to agree with ZOOM on this one.  We think it's a legitimate gripe. Of course, that's probably because it's a gripe we've been making for years, but we won't hold it against the suddenly enlightened left that it has (probably temporarily) noticed how arbitrary PolitiFact's ratings system really is.

The Wonkette article highlights PolitiFact's lame logic:
But what are these exceptions? First off, the Politifrackers acknowledge that 21 states and the District of Columbia “explicitly prohibit employment discrimination based on sexual orientation,” and that in the 29 states that do not have such laws,
“employees in these states who believe they are discriminated against would not have grounds to win a lawsuit alleging discrimination.”
OK, so Navratilova was right, and her statement is true, right? Well, no, you see, because what she said took a single sentence, and there are paragraph-length exceptions
The gist of it is that Navratilova is correct that in 29 states there is no statewide protection for gay and lesbian employees from being fired for their sexuality. PolitiFact knocks the tennis star down a few notches because, gee golly, some companies have policies against discrimination, and some employees are protected by federal statutes. That's a bogus argument, and it's not fact checking. The fact that some people in specific employment situations are protected does not negate the fact that other people are not protected.

Navratilova is right, and PolitiFact is playing its usual word games.  

Our regular readers might wonder why we chose to highlight this as an example of PolitiFact's liberal bias. Since we started this site we've acknowledged that PolitiFact's arbitrary standards will eventually harm both the left and the right. This rating doesn't change that. PolitiFact simply doesn't offer quality fact checking, and it will inevitably flub ratings both ways. As we've documented, though, PolitiFact's inadequacy harms those on the right far more often than those on the left. This rating provides an example of how flawed their system is.

Any reputation PolitiFact has as a dispassionate arbiter of facts is completely undeserved. For all the bluster, they're a run-of-the-mill commentary site. Wonkette is correct to point out the subjective nature of this rating, but it's nothing out of the ordinary for PolitiFact. Navratilova is simply collateral damage in PolitiFact's inept carpet bombing of reality.

Our purpose is to highlight PolitiFact's liberal bent. But PolitiFact puts out shoddy work and opinionated claptrap that often distorts the truth instead of clarifying it. Eventually both sides of the aisle will take a hit.

The reality is no one should trust them.

Bryan adds:

It's worth emphasizing just how normal it is for PolitiFact to rule "Half True" for a claim that is true.  As Rachel Maddow notes, facts are either true or false.  One look at PolitiFact's list of "Half-True" rulings shows a great set of recent examples like the one Maddow complained of, including specific ratings of Sen. Marco Rubio and Sen. Jeff Sessions.

Rubio said the Gang of Eight immigration bill isn't amnesty.  PolitiFact said that it depends on how one defines "amnesty."  Yet Rubio used the normal, commonly understood definition.  "Half True," said PolitiFact.  Maddow went ballistic.  Just kidding.  She was able to contain herself until the Navratilova rating served as the last straw.

Sessions said prosecutions for failing gun background checks were down every year under Obama.  That's true for every year for which records have been published, and PolitiFact claims to rule according to information available when a claim is made.  "Half True," said PolitiFact, reasoning that Sessions kind of implied a trend continuing through the current year, which can't be confirmed yet.  PolitiFact also reasoned (!) that since prosecutions were also low under Bush, prosecutions under Obama "didn't nosedive."  The left-wing blogosphere yawned if it noticed at all, as if "Half True" is the best we should expect from those lyin' Republicans.

Edit 5/8/13: Originally this post inadvertently included a draft paragraph at the end that was not intended for publication. It has been removed-Jeff

Edit: 5/9/13: Added "We think" to third paragraph-Jeff

PolitiFact and the 77-cent solution (Updated)

Is there gender discrimination in wages?

PolitiFact, a project of the Tampa Bay Times supposedly designed to help you find the truth in politics, has the answer.  In fact, PolitiFact does even better than giving us an answer.  It gives us two different answers to the same question.

Is it true that "women earn 77 cents for every dollar earned by a man"?

It's "Mostly True," says PolitiFact.  It's "Half True," says PolitiFact.

You'd think they might be able to settle on "Mostly Half True."

How is it that PolitiFact can reach two different conclusions about the same claim, know that it has reached two different conclusions regarding the same claim and yet fail to resolve the discrepancy?

This is supposed to be fact checking, not "Wheel of Fortune."

We've said for years that PolitiFact's rating system by its nature forces reporters and editors into making subjective judgment calls.  This case serves as yet another example supporting that claim.

Could some difference in the claims or the context of the claims justify a different rating?  PolitiFact mentions no such differences.  Yet PolitiFact has terrific motivation for explaining the different ratings.  In its recent fact check of Rep. Marcia Fudge's "77 cent" claim, PolitiFact Ohio cited other PolitiFact ratings of similar statements:
PolitiFact has made several examinations of the claim that women earn 76 to 77 percent as much as men, and found that they lacked context because they failed to account for factors like education, type of job, age of employee and experience level.
The hotlink associated with "several examinations" leads to a "Half True" rating from PolitiFact Georgia for a claim effectively identical to Fudge's.  Fudge received a "Mostly True" rating.

The writers and editors at PolitiFact apparently don't realize that linking to a closely parallel fact check with a different rating exposes a problem of inconsistency.

Inconsistency isn't bias!

By itself, inconsistency is not bias.  But patterns of inconsistency may provide evidence of bias.  We have that sort of pattern in PolitiFact's ratings of differences in pay by gender.

We can measure bias by tracking the frequency with which stories either favor one political party over another or harm one party more often than the other.  My co-editor at PFB, Jeff D, points out that Republican presidential candidate Mitt Romney made a claim about differences in pay by gender during the 2012 election.  Romney noted that the equal-pay candidate, President Obama, was paying male White House employees more than female employees.  PolitiFact found that Romney was right.  And rated the claim "Half True":
In the broadest sense, the Romney campaign is on solid ground when it says that "women in Barack Obama's White House are earning less than men." But the closer you look at the data, the less striking this conclusion becomes.
...The statement is accurate but needs clarification or additional information, so we rate it Half True.
"The statement is accurate but needs clarification or additional information," so PolitiFact rates it "Half True."  There's just one problem.  That's the definition PolitiFact gives for "Mostly True":

MOSTLY TRUE – The statement is accurate but needs clarification or additional information.

Even aside from that PolitiFact blunder that somehow escaped the notice of layers of editors, we see a pattern of partisan inconsistency.

Romney's statement, as Jeff points out, avoids false precision.  Romney simply says men get paid more than the women at the White House.  It's very hard to argue that Romney's statement is in any way more misleading than any of the "77 cent" claims.  Indeed, it's hard to argue that Romney misled any more than did the National Women's Law Center with its claim that every state has a gender wage gap.  PolitiFact Georgia rated that claim "True."  PolitiFact simply doesn't provide reasoning that would distinguish one rating from another in this similar set of claims.

Whether the correct rating is "Mostly True" or "Half True," the Republicans draw the short straw with PolitiFact in comparison to Democrats.


Here's the list of similar gender gap stories, followed by two stories where claimants used the 77 cent figure claiming it's the difference where men and women do the same work.

Diana DeGette says women earn 77 cents for every dollar earned by a man
"Mostly True"

R.I. Treasurer Gina Raimondo repeats oft-quoted, but misleading, statistic in equal pay debate
"Half True"

Rep. Marcia Fudge cites wage gap between Ohio women and men
"Mostly True"

Gender wage gap claim needs more context
"Half True"

Tim Kaine says Virginia women earn 79 cents to every $1 made by men
"Mostly True"

[National Women's Law Center] Is there a gender wage gap in every state?

Mitt Romney says women White House employees earn less than men under Barack Obama
"Half True"

Same job, same work

U.S. Rep. David Cicilline says women earn only 77 percent of what men earn in the same job
"Mostly False"

Barack Obama ad says women are paid "77 cents on the dollar for doing the same work as men"
"Mostly False"

Update July 10, 2013

A reader alerted us to another PolitiFact rating that fits with this group.  Former U.S. president Jimmy Carter lowers the bar for "Mostly False" by making the same job, same work claim while naming the wrong percentage.  Carter said the wage gap for the same job and same work averaged 70 cents on the dollar.  "Mostly False," said PolitiFact Georgia.

We wonder how low one could go with the percentage and still rate higher than "False."