
Tuesday, October 1, 2013

The partisan view of bipartisanship

PolitiFact offers us implicit instruction in the partisan view of bipartisanship:

Image from PolitiFact.com

PolitiFact says "bipartisan" doesn't mean just one or two votes from one partisan group.  What does the dictionary say?
representing, characterized by, or including members from two parties or factions: Government leaders hope to achieve a bipartisan foreign policy.
Cruz is correct under the loosest definition of "bipartisan."  Why would PolitiFact fail to recognize that in its rating?  The rating arbitrarily discounts a clear element of truth in Cruz's statement.

In contrast, when President Obama claims Democrats and Republicans voted to keep the government open, PolitiFact finds that no Republicans supported the final bill that would have prevented a partial government shutdown.

Is Cruz objectively misleading his audience more than Obama misleads his?  Look in vain for the evidence in the fact checks of Cruz and Obama.


Wednesday, May 8, 2013

PolitiFact and the 77-cent solution (Updated)

Is there gender discrimination in wages?

PolitiFact, a project of the Tampa Bay Times supposedly designed to help you find the truth in politics, has the answer.  In fact, PolitiFact does even better than giving us an answer.  It gives us two different answers to the same question.

Is it true that "women earn 77 cents for every dollar earned by a man"?

It's "Mostly True," says PolitiFact.  It's "Half True," says PolitiFact.

You'd think they might be able to settle on "Mostly Half True."

How is it that PolitiFact can reach two different conclusions about the same claim, know that it has reached two different conclusions regarding the same claim and yet fail to resolve the discrepancy?

This is supposed to be fact checking, not "Wheel of Fortune."

We've said for years that PolitiFact's rating system by its nature forces reporters and editors into making subjective judgment calls.  This case serves as yet another example supporting that claim.

Could some difference in the claims or the context of the claims justify a different rating?  PolitiFact mentions no such differences.  Yet PolitiFact has terrific motivation for explaining the different ratings.  In its recent fact check of Rep. Marcia Fudge's "77 cent" claim, PolitiFact Ohio cited other PolitiFact ratings of similar statements:
PolitiFact has made several examinations of the claim that women earn 76 to 77 percent as much as men, and found that they lacked context because they failed to account for factors like education, type of job, age of employee and experience level.
The hotlink associated with "several examinations" leads to a "Half True" rating from PolitiFact Georgia for a claim effectively identical to Fudge's.  Fudge received a "Mostly True" rating.

The writers and editors at PolitiFact apparently don't realize that linking to a closely parallel fact check with a different rating exposes a problem of inconsistency.

Inconsistency isn't bias!

By itself, inconsistency is not bias.  But patterns of inconsistency may provide evidence of bias.  We have that sort of pattern in PolitiFact's ratings of differences in pay by gender.

We can measure that by tracking how often stories favor one political party over the other, or harm one party more often than the other.  My co-editor at PFB, Jeff D, points out that Republican presidential candidate Mitt Romney made a claim about differences in pay by gender during the 2012 election.  Romney noted that the equal-pay candidate, President Obama, was paying male White House employees more than female employees.  PolitiFact found that Romney was right.  And rated the claim "Half True":
In the broadest sense, the Romney campaign is on solid ground when it says that "women in Barack Obama's White House are earning less than men." But the closer you look at the data, the less striking this conclusion becomes.
 
...The statement is accurate but needs clarification or additional information, so we rate it Half True.
"The statement is accurate but needs clarification or additional information," so PolitiFact rates it "Half True."  There's just one problem.  That's the definition PolitiFact gives for "Mostly True":

MOSTLY TRUE – The statement is accurate but needs clarification or additional information.

Even aside from that PolitiFact blunder that somehow escaped the notice of layers of editors, we see a pattern of partisan inconsistency.

Romney's statement, as Jeff points out, avoids false precision.  Romney simply says men get paid more than the women at the White House.  It's very hard to argue that Romney's statement is in any way more misleading than any of the "77 cent" claims.  Indeed, it's hard to argue that Romney misled any more than did the National Women's Law Center with its claim that every state has a gender wage gap.  PolitiFact Georgia rated that claim "True."  PolitiFact simply doesn't provide reasoning that would distinguish one rating from another in this similar set of claims.

Whether the correct rating is "Mostly True" or "Half True," the Republicans draw the short straw with PolitiFact in comparison to Democrats.



Afters

Here's the list of similar gender gap stories, followed by two stories where claimants used the 77-cent figure while claiming it measures the gap between men and women doing the same work.

Diana DeGette says women earn 77 cents for every dollar earned by a man
"Mostly True"

R.I. Treasurer Gina Raimondo repeats oft-quoted, but misleading, statistic in equal pay debate
"Half True"

Rep. Marcia Fudge cites wage gap between Ohio women and men
"Mostly True"

Gender wage gap claim needs more context
"Half True"

Tim Kaine says Virginia women earn 79 cents to every $1 made by men
"Mostly True"

[National Women's Law Center] Is there a gender wage gap in every state?
"True"

Mitt Romney says women White House employees earn less than men under Barack Obama
"Half True"

Same job, same work

U.S. Rep. David Cicilline says women earn only 77 percent of what men earn in the same job
"Mostly False"

Barack Obama ad says women are paid "77 cents on the dollar for doing the same work as men"
"Mostly False"

Update July 10, 2013

A reader alerted us to another PolitiFact rating that fits with this group.  Former U.S. president Jimmy Carter lowers the bar for "Mostly False" by making the same job, same work claim while naming the wrong percentage.  Carter said the wage gap for the same job and same work averaged 70 cents on the dollar.  "Mostly False," said PolitiFact Georgia.

We wonder how low one could go with the percentage and still rate higher than "False."

Friday, April 26, 2013

PolitiFact in Mathmagic Land

A reader pointed us to yet another marvelous example that helps show how PolitiFact applies an irregular set of standards.

PolitiFact Georgia investigated Democrat state senator Vincent Fort's charge that Gov. Nathan Deal, a Republican, has appointed blacks to government positions less than 3 percent of the time.  PolitiFact said the actual number was a little over 7 percent.  So the claim was only "Half True":
The senator’s overarching claim that Deal has appointed a relatively low percentage of minorities has merit. But he was wrong by a handful of percentage points. It was based on an incomplete sampling of Deal’s total appointees.

We rate the claim Half True.
Fort received a "Half True" because of the truth of his underlying point, namely that the percentage of black appointees was low, and because he was wrong by only "a handful of percentage points."

Compare the treatment Fort received to Grover Norquist's fate at the hands of PolitiFact Virginia back in February.  Norquist said Virginia uses less than 1 percent of its budget surplus on roads.  The actual number was about 7 percent, so Norquist was likewise off by a handful of percentage points, a relative error of about 86 percent next to Fort's roughly 57 percent.  Norquist's rating from PolitiFact Virginia?  "False."  Norquist received no credit at all for an accurate underlying argument that Virginia doesn't spend much of its budget surplus on roads.
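
For the curious, here's the back-of-the-envelope arithmetic behind those relative-error figures.  It's a rough sketch in Python that treats PolitiFact's "about 7 percent" and "a little over 7 percent" as simply 7; the exact percentages shift a bit with the precise actual values.

# relative error = |claimed - actual| / actual
norquist_error = abs(1 - 7) / 7   # about 0.86, i.e. off by roughly 86 percent
fort_error = abs(3 - 7) / 7       # about 0.57, i.e. off by roughly 57 percent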

Perhaps Fort could claim Deal has appointed zero blacks and still get a "Half True" from PolitiFact because of the accuracy of his same underlying point.  There's no way to know based on PolitiFact's grading system.  It's all up to the subjective impressions of PolitiFact's star chambers.

It's a crazy way to fact check.  Matt Bryant scores a touchdown!  No, it was just a field goal. Half True.


Jeff Adds:

It's hard to avoid the reality that PolitiFact is an editorial site when it publishes articles like this. The final rating rests on the editors' arbitrary standard of how much counts as a "handful." Not to mention that Fort named a specific figure, 3 percent, that was unarguably wrong. Whether or not Fort's underlying argument is a good one is clearly the stuff of an editorial. His figures were incorrect, and a fact check, by definition, should be limited to that.

Then again, maybe they cut Fort some slack because they assumed he was citing figures from memory?

Sunday, February 10, 2013

PolitiFact's defense spending shenanigans

Remember when PolitiFact believed that the best way to measure defense spending was as a percentage of GDP among OECD nations or world powers?

No worries. PolitiFact doesn't remember it either.

Today PolitiFact New Jersey ruled "True" a claim from Newark's Mayor Cory Booker that U.S. defense spending exceeds defense spending for "the next 10, 11, 12 countries combined."

Back in July of 2010, PolitiFact ruled "Mostly False" a claim from conservative Sarah Palin that in terms of defense spending as a percentage of GDP, the U.S. ranks No. 25 in the world.

The two rulings make up yet another classic comparison illustrating PolitiFact's liberal bias.

In Booker's case, the facts are simple.  If the U.S. spends more, as Booker said, then his statement is true.  It doesn't even matter that nobody really knows what China and Russia spend on defense, and it doesn't matter that the various nations use no standard method of defining defense spending.  We might spend more on salaries and veterans' benefits than Canada spends on guns, tanks, ships and planes.  Can we defend ourselves effectively with veterans' benefits?  Probably not.  But none of that matters to PolitiFact New Jersey.  Booker is right, so just deal with it.

In Palin's case, facts are complex.  Palin is technically right about U.S. defense spending as a percentage of GDP.
We quickly tracked down the chart from which we suspect she pulled her factoid. (Her staff didn't return our e-mail query.) It's a credible source -- the CIA World Factbook -- and, as Palin said, the U.S. does rank 25th in the world, spending an estimated 4.06 percent of GDP on defense in 2005.

Case closed? Not really.
Palin was mostly wrong because it's not fair to compare the U.S. to tiny non-industrialized nations.  In Booker's case, of course, it's fair to compare our defense spending to that of nations that deliberately under-report their defense spending.

Stepping back from agenda journalism toward reality, both Booker and Palin have a legitimate point, with Palin probably better grounded in the facts since her claim uses a figure for the U.S. that is inflated relative to nations that do not fully reveal their defense spending.  In effect the failure of certain nations to fully report their defense spending understates Palin's point while overstating Booker's.

There's no objective justification for this disparity in ratings about defense spending.  PolitiFact New Jersey liked the point Booker was making and subjected it to less scrutiny than Palin received for her claim.  That's how mainstream media fact checking works.  It's not objective.  And PolitiFact is the worst of the lot when it comes to this type of thing.

Note:
I fisked the whole of the Palin ruling way back when at Sublime Bloviations.

Monday, November 5, 2012

Equal Ratings for Equal Claims!

Note: Bryan and I appreciate the tips and suggestions we receive from our readers. Encouraging people to do their own skeptical research of PolitiFact was a primary goal of starting PFB in the first place. But rarely do readers send us ready-made blog posts. One such reader (who has asked to remain anonymous) recently did just that. We've made some formatting changes, but the following post is largely unedited.

Do women get paid less than men? It depends on who's getting rated.
Politifact showed its bias recently through the glaringly obvious inconsistency between the 26 Oct ruling on Mitt Romney's claim of White House women earning less than their male counterparts, and the 5 Sept ruling on Diana DeGette's statement of effectively the same thing.

Romney's statement received a Half-True rating:


DeGette's scored a Mostly True:



Each person basically said that women earn less than men, though DeGette was referring to the U.S. as a whole, while Romney was referring to the Obama White House. What was the difference that allowed DeGette to earn a Mostly True versus Romney's Half-True?

It's worth noting that both claims were judged in light of factors such as gender differences in occupation, education, experience and hours worked.
 
This is an outstanding example of Politifact's arbitrary ratings system because it's so easy to see the issue in a non-partisan light. Readers must ask themselves whether partisanship is the only reason for the disparity. Romney and DeGette made very similar statements. Whatever ratings they receive, the two should at least be identical.

What's most absurd is that the Romney story contained a reference to the DeGette story, and actually used the fact that the White House pay gap is smaller than the overall gap to knock Romney's claim down a rung. The Politifact staff had this mistake staring them in the face, and they still didn't see it. Were they blinded by their bias or an agenda?


Jeff adds:

Our alert reader hits some solid points. To point out DeGette's rating in the Romney article and cite it in order to lower Romney's rating is baffling. What's even more absurd is DeGette came up with a specific figure of $0.77, while Romney used the more general term "less than." You'd think a broader margin of error would help Romney. The discrepancies should have been picked up by Angie Drobnic-Holan, as she was the editor on both rulings.

Many thanks to our anonymous reader! We encourage everyone to send in your tips and analysis.

When is a tax that isn't a tax a tax?

On Oct. 5, 2012 PolitiFact published a fact check examining Vice President Joe Biden's claim that the Ryan budget would result in a $460 annual tax on Social Security recipients.

Biden's claim was based on filling in the blanks of Romney's budget plan with extrapolations.  PolitiFact ruled the claim "Mostly False."

On Oct. 26, 2012 PolitiFact published a fact check examining a Romney campaign claim that President Obama's policies place a $4000 tax hike on middle class families.

The Romney campaign's claim was based on translating the accumulated interest from Mr. Obama's spending programs into a tax collected over a 10-year period.  In other words, the Romney campaign extrapolated from the ongoing costs to place a number on a tax that Mr. Obama has not proposed.  PolitiFact ruled the claim "Pants on Fire."

We think the comparison helps illustrate the arbitrary nature of PolitiFact's rating system.  In both cases, the claim is based on the made-up existence of a proposed tax.


Update Nov. 6, 2012:  The image did not behave as hoped when clicked upon, so it's been re-sized and embedded at its actual size to make it easier to read.  The page may load more slowly than average as a result.



Wednesday, July 18, 2012

Relevant: Jay Cost with "Bain Capital and Media Bias"

The Weekly Standard's Jay Cost provides a timely reminder of yet another subtle form of media bias:
Most journalists will swear that, despite the fact they vote Democratic, they treat both sides fairly. Indeed, it is a rare event to read a news article that directly attacks the Republican party or one that praises the Democratic party.

But that does not mean media bias does not exist. It does – its exercise is just subtler than this. And the last two weeks have been a great example of how it operates.
Read Cost's entire article for his excellent descriptions of the way the mainstream media can lend aid to its ideological favorites through story selection.

And now let's have a look at the ten most recent stories at PolitiFact:

Main headline today at PolitiFact's main page:  "Checking the facts about Romney and Bain Capital."

(Barack Obama) Says Mitt Romney’s carried interest income was a tax "trick."

(Mary Matalin) Says Debbie Wasserman Schultz "has these offshore accounts" like Mitt Romney.

(Mitt Romney)  "When I was governor, not only did test scores improve – we also narrowed the achievement gap."


(Barack Obama) Mitt Romney "says the Arizona immigration law should be a model for the nation." 

(Barack Obama) Says Mitt Romney had millions in the Cayman Islands, a tax haven.

(chain email)  The media won’t publish a real photo of Trayvon Martin with tattoos on his face.

(Barack ObamaSays Mitt Romney "had millions in a Swiss bank account."

(Steve Doocy)  "If you make more than $250,000 a year … you only really take home about $125,000."

(Marco Rubio)  The health care law "adds around $800 billion of taxes on the American people. It does not discriminate between rich and poor." 

("Obameter" promise item indicating compromise)

The Obama campaign probably can't complain about having the featured article plus four of the ten featured fact checks surrounding its intended campaign narrative.  See if you can locate the Romney campaign's narrative anywhere on the above list.

Is this typical?

Hopefully this example serves to show PolitiFact at its best in favoring the Obama campaign narrative.  But the chances are that Democrats have the advantage most of the time.

It's the nature of the beast.

Thursday, May 24, 2012

Power Line: "Barack Obama, Fiscal Conservative!"

The latest smackdown of PolitiFact's unbelievably inept attempt to present Obama as a budget miser comes from John Hinderaker over at Power Line.

Hinderaker first delves into the problems with Rex Nutting's flawed analysis that started this meme in the first place:
It started with the ridiculous column by one Rex Nutting that I dismantled last night. Nutting claims that the “Obama spending binge never happened.” He says Obama has presided over the slowest growth in federal spending in modern history. Nutting achieves this counter-intuitive feat by simply omitting the first year of the Obama administration, FY 2009, when federal spending jumped $535 billion, a massive increase that has been sustained and built upon in the succeeding years. Nutting blithely attributes this FY 2009 spending to President Bush, even though 1) Obama was president for more than two-thirds of FY 2009; 2) the Democratic Congress never submitted a budget to President Bush for FY 2009, instead waiting until after Obama was inaugurated; 3) Obama signed the FY 2009 budget in March of that year; 4) Obama and the Democratic Congress spent more than $400 billion more in FY 2009 than Bush had requested in his budget proposal, which was submitted in early 2008; and 5) the stimulus bill, which ballooned FY 2009 spending, was, as we all know, enacted by the Democratic Congress and signed into law by President Obama. So for Nutting to use FY 2010 as the first year of the Obama administration for fiscal purposes was absurd.
Hinderaker goes on to list several of Obama's big spending, deficit-boosting credentials before getting to PolitiFact. Hinderaker has some choice words for PolitiFact's determination that Obama is St. Skinflint, but more importantly notes a discrepancy with a past fact check:
PolitiFact arrived at this conclusion by swallowing the claim that President Bush is somehow responsible for the spending that Obama and the Democrats did in 2009 after he left office. This is doubly amusing because it contradicts the approach PolitiFact took when the shoe was on the other foot. In January 2010, PolitiFact purported to evaluate David Axelrod’s claim that “The day the Bush administration took over from President Bill Clinton in 2001, America enjoyed a $236 billion budget surplus….” PolitiFact found that claim to be true by referring to the FY 2000 budget:
When we asked for his sources, the White House pointed us to several documents. The first was a 2002 report from the Congressional Budget Office, an independent agency, that reported the 2000 federal budget ended with a $236 billion surplus. So Axelrod was right on that point.
So at that time, PolitiFact was clear: the Clinton administration’s responsibility ended in FY 2000, the year before President Bush took office. But, now that the partisan position is reversed, PolitiFact says the opposite. Obama isn’t responsible for anything until he had been in office for eight-plus months, even though, in that time, he had signed nine spending bills plus the stimulus.
PolitiFact's assertion that "Obama has indeed presided over the slowest growth in spending of any president" is absurd. The sheer level of incompetence demanded for a rating like this makes it easy to believe that PolitiFact overlooked the problems deliberately. It's simply implausible that PolitiFact overlooked such obvious flaws accidentally.

It takes a special kind of hubris to call yourself non-partisan when dispensing this type of deceitful gimmickry.

Hinderaker's article goes into more detail pointing out the problems from Nutting and PolitiFact.  Do visit Power Line and read the whole thing.

Thursday, January 19, 2012

Big Tent: "A PolitiFact Example"

Blogger and PolitiFact-cited expert Tom Bruscino supplies a partial insider's look at the PolitiFact process along with a critique of the finished work of which he was a part in his post "A PolitiFact Example."

PolitiFact writer Louis Jacobson asked Bruscino for his assessment of Mitt Romney's claim that the U.S. Navy is at its smallest since 1947.

Bruscino found Jacobson's questions leading:
Jacobson did a remarkable bit of research in a very short period of time. However, I did think his questions to me were leading. Remember, Mr. Jacobson asked "(2) What context does this ignore (changing/more lethal technology, changed geopolitical needs, etc)?," which both assumes and implies to the interviewees that Romney ignored those specific contexts.
And after registering some surprise at Jacobson's use of apparently non-objective descriptors of Romney, Bruscino demurs from PolitiFact's "Pants on Fire" ruling:
My opinion, for what it is worth, is that since Romney's base statement was factually accurate when it came to most numerical metrics, it would seem that he could be given credit for a half-truth, even if the context complicates the matter.
Do read Bruscino's entire post, which is particularly valuable since it provides yet another look at the style of inquiry used by PolitiFact journalists.  The commentary thread is also well worth reading.

Hat tip to Power Line blog.  Visit Power Line also for a parallel review I'd have been better off copying rather than writing up my own.



Jeff adds: I first saw this rating yesterday, and couldn't help but notice it provided another example of PolitiFact's alternating standards. Check out how PolitiFact presented this article on their Facebook page:

Image from http://www.facebook.com/politifact

Notice that Romney is spreading ridiculous falsehoods because he "ignores quantum leaps in technology and training."

Poor Mitt. If only he had made this statement back in 2009 when PolitiFact's standards were much different:

We agree that the two cars are totally different. But Obama was careful in the way he phrased his statement: "The 1908 Model T earned better gas mileage than a typical SUV sold in 2008."  As long as you don't consider any factors other than mileage, he's right. We rate his statement Mostly True.

You see, Obama is rated only for his literal statement, while ignoring quantum leaps in technology that make the Model T "totally different." Romney suffers from the additional qualifiers that PolitiFact throws into the mix.

The similarities between the two ratings don't end there. Here's a bit from the Obama/Model T rating:

So technically Obama is right.


But his implication is that we haven't gotten more fuel efficient in 100 years. And that's a reach.
...

...Model Ts reached top speeds of only 40 miles an hour. They guzzled motor oil, about a quart a month. The original tops were made of canvas, and they had no heating or cooling systems. They also had none of the safety features of modern cars: no bumpers, no air bags, no seat belts, no antilock breaks [sic].

The cars had large, skinny wheels to more easily clear the obstacles on rocky, rutted roads. Corner them too fast and they could tip over. And if you crashed, the windshield would usually shatter into sharp, jagged pieces that could slice you to ribbons.

"The government would not allow anyone to sell Model Ts today because they're so unsafe," Casey said. "It's a car that no one would use on a regular basis today. It's not a fair comparison."

Here's similar text from the Romney rating:

This is a great example of a politician using more or less accurate statistics to make a meaningless claim. Judging by the numbers alone, Romney was close to accurate.

...

Thanks to the development of everything from nuclear weapons to drones, comparing today’s military to that of 60 to 100 years ago presents an egregious comparison of apples and oranges. Today’s military and political leaders face real challenges in determining the right mix of assets to deal with current and future threats, but Romney’s glib suggestion that today’s military posture is in any way similar to that of its predecessors in 1917 or 1947 is preposterous.

Obama: Technically correct, as long as you don't consider any other factors, but a reach. Mostly True.

Romney: Close to accurate, meaningless, egregious, glib, preposterous. Pants on Fire.

Bruscino is right to point out that the terms used to describe Romney's statement are more appropriate for the editorial page than for an objective determination of facts. And once again, we're left to wonder why different guidelines are used for different people.

Update (1/19/2012 1921 pst) Jeff adds: Speaking of glib and preposterous, this part of the rating just caught my eye:

A wide range of experts told us it’s wrong to assume that a decline in the number of ships or aircraft automatically means a weaker military. Quite the contrary: The United States is the world’s unquestioned military leader today, not just because of the number of ships and aircraft in its arsenal but also because each is stocked with top-of-the-line technology and highly trained personnel.

The first problem is obvious. Romney never claimed that a reduction in the number of ships or aircraft automatically meant a weaker military.  Actually, Romney was citing examples in support of his overall claim (that continued cuts in defense spending will eventually lead to a weaker force). Jacobson's second sentence is a howler. "Quite the contrary" to what? The fact that the U.S. is the world's supreme military force is totally irrelevant to whether or not it's on the path to becoming weaker. If Warren Buffett loses a million dollars on a bad deal, the fact that he's still the richest guy in the room does not negate the fact that he's also a million dollars poorer. And just like Romney claimed in his statement, Buffett simply cannot continue to cut bad deals if he is going to remain the richest guy in the room.

Thursday, December 8, 2011

WaPo Fact Checker: "Revisiting Romney’s ‘deceitful, dishonest’ ad about Obama"

Back in late October, PolitiFact was publicly wringing its hands over a story it published that was out of step with fact checks of the same material by Annenberg Fact Check and the Washington Post's "The Fact Checker" column by Glenn Kessler.

It's hand-wringing time again as Kessler writes about a Mitt Romney ad that PolitiFact found outrageous ("Pants on Fire") while Kessler and the Annenberg folks found the ad more middle-of-the-road misleading:
(T)here are three reasons why we have trouble being outraged.


 First, the ad makes clear that Obama is speaking in 2008.
(...)
 Second, Obama’s statement was actually a misleading quote itself.
(...)
 Finally, the Romney campaign made it very clear that it had truncated the quote.
Two out of three of Kessler's points appeared in our own analysis of Romney's claim in our review of the PolitiFact fact check.

Though Kessler doesn't mention our central point about the ad, that its message doesn't change significantly whether or not the context is included, he does note PolitiFact's out-of-step fact check response:
(Fact Checkers can disagree: PolitiFact labeled it “Pants on Fire.” But Factcheck.org reached a conclusion similar to ours, saying the health-care line actually posed a “more serious problem.”)
Kessler treats PolitiFact very kindly.  The fact is that PolitiFact failed to make any mention of Kessler's three points.  In baseball terms, they whiffed on all three.

And Annenberg Fact Check?  The quotation issue was a sideshow so far as they were concerned:
What the Obama campaign chose to take issue with was how the then-candidate’s words were edited in a section where he is heard to say, “If we keep talking about the economy, we’re going to lose.” Obama was actually quoting his Republican opponent. The full quote is: “Senator McCain’s campaign actually said, and I quote, if we keep talking about the economy, we’re going to lose.”

Is that “deceitful and dishonest,” as Obama campaign spokesman Ben LaBolt quickly claimed? Or “blatantly dishonest,” as the liberal group ThinkProgress described it? It is possible that a viewer might be misled into thinking that Obama said this about his own campaign in 2011, since the quote comes 23 seconds after a graphic cites Obama’s comments as being uttered in 2008. But we’ll leave that for our readers to determine.
PolitiFact is, uh, bolder than that.  That's why PolitiFact is closer to Media Matters than to the other major fact check services.  They have the chutzpah to let their subjective judgments determine the position of the misnamed "Truth-O-Meter" and serve it up to their readers as though it were objective journalism.



Jeff adds: When I first read the original PolitiFact piece I was reminded of a rating they gave former congressman Alan Grayson (D-FL). Grayson ran an ad that referred to his opponent, Daniel Webster, as "Taliban Dan." In the ad, Grayson edited a video of Webster to distort Webster's words into the opposite of what he said. Check out PolitiFact's summary in that ruling (bold emphasis added):
The Grayson ad clearly suggests that Webster thinks wives should submit to their husbands, and the repeated refrain of "Submit to me," is an effort to scare off potential female voters. But the lines in the video are clearly taken out of context thanks to some heavy-handed editing. The actual point of Webster's 2009 speech was that husbands should love their wives.

We rate Grayson's claim False.
Now read PolitiFact's treatment of Romney's ad (emphasis added):
We certainly think it’s fair for Romney to attack Obama for his response to the economy. And the Romney camp can argue that Obama’s situation in 2011 is ironic considering the comments he made in 2008. But those points could have been made without distorting Obama’s words, which have been taken out of context in a ridiculously misleading way. We rate the Romney ad’s portrayal of Obama’s 2008 comments Pants on Fire.
As Bryan noted, including the context wouldn't have changed the point of Romney's ad. Yet in Grayson's ad he not only took Webster out of context, he distorted (removed) Webster's words in order to make it appear Webster said something contrary to what he actually said (to say nothing of associating his opponent with a terrorist group). What exactly is more ridiculous about Romney's editing than Grayson's? What standard is PolitiFact using to make these determinations?

Until PolitiFact comes up with a way to objectively quantify a statement's ridiculousness, the ratings will continue to be plagued by the editors' personal biases.

Edit 12/11/11: Added link to the original WaPo article. -Jeff

Wednesday, July 20, 2011

RedState: "Politifact’s Review of Josh Trevino: Mostly Hackery"

Red State, thanks to Leon H. Wolf, has another excellent criticism of a PolitiFact fact check.

Wolf takes PolitiFact Texas to task over its rating of RedState Co-Founder Josh Treviño. Treviño cited a poll on an MSNBC program. PolitiFact subjected (pun intended) the statement to its Truth-O-Meter.

And that gets Wolf to wondering:
Politifact was forced to concede that Trevino’s characterization of the poll showing a plurality opposed to raising the debt ceiling was 100% correct and accurate. So what caused them to rate Trevino’s remarks as “mostly true” instead of “completely and entirely true”?
I'll supply a bit more context than did Wolf in summing up PolitiFact's complaint (he quotes only the latter paragraph):
Treviño’s other point — that Americans favor mostly budget cuts to deal with the deficit — didn’t poll as neatly as his recap suggests.

Asked how they’d prefer members of Congress to address the deficit, 20 percent said only by cutting spending and another 30 percent said mostly with spending cuts. Four percent favored solely tax increases, while 7 percent said they’d prefer to tackle the deficit mostly by tax hikes.
Wolf notes that the 20 percent and 30 percent figures add up to exactly the plurality Treviño describes.  So Treviño's numbers and underlying argument both stand as "True."  Yet contrast the rating of Treviño with a "Mostly True" PolitiFact rating of President Obama using a similar set of figures:
Getting back to Obama's statement, he said, "You have 80 percent of the American people who support a balanced approach. Eighty percent of the American people support an approach that includes revenues and includes cuts." Even the best poll doesn't show support quite that high -- he would more accurately have accounted for the small numbers that support only tax increases or were unsure, putting the number at 70 percent. But his overall point is correct that polls show most Americans support a balanced approach when given a choice between cutting spending or raising taxes. So we rate his statement Mostly True.
The president, using the most favorable numbers, therefore inflates his figure by 14 percent (10 percentage points).  And the president leaves at least as much context unstated as did Treviño.  Treviño arguably left out nothing of importance.
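
For the curious, the arithmetic behind that sentence, using the figures PolitiFact itself cites, works out as a quick Python sketch (the poll numbers are approximate, so these are ballpark results):

claimed, supported = 80, 70                  # Obama's figure vs. the figure the best poll supports
point_gap = claimed - supported              # 10 percentage points
relative_inflation = point_gap / supported   # about 0.14, i.e. roughly 14 percent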

Wolf (bold emphasis added):
Memo to Politifact: the fact that a poll contains additional information that Trevino did not discuss does not make his statement less than entirely truthful. For example: if Trevino had been discussing the latest poll of the Republican caucus in Iowa and had claimed (correctly) that “Bachmann leads Romney 32%-29%,” his statement would not be rated merely “mostly true” because he did not disclose that Pawlenty was at 7%, Santorum at 6%, etc. Trevino by his own statement wasnt’ (sic) discussing the people who wanted the deficit solution split roughly down the middle, he was discussing people who favored “mostly cuts” versus “mostly taxes,” and his statement was (and should have been scored) completely correct.
Treviño used the poll data responsibly and accurately.  The president didn't.  If Treviño is at fault for failing to point out that a plurality are open to additional revenue/tax increases, then isn't the president at fault for failing to mention the plurality who favor more reliance on budget cuts than on tax increases?  Yet PolitiFact mentions only Treviño's supposed omission.  The president gets a pass.

Do both men deserve the same grade, PolitiFact?  Seriously?