Showing posts with label Statement Distortion.

Tuesday, February 3, 2015

PunditFact amends pundit's claim about amendments

We've pointed out before how PolitiFact will fault statements made on Twitter for lacking context despite the 140-character limit Twitter imposes.

This week PunditFact played that game with a tweet from conservative pundit Phil Kerpen.
PunditFact found that the new Republican-controlled Senate has already voted on more amendments in 2015 than Reid allowed in the Democrat-controlled Senate for all of 2014: "On the numbers, that is right."

But PunditFact went on to find fault with Kerpen for leaving out needed context:
On the numbers, that is right. But experts cautioned us that the claim falls more in the interesting factoid category than a sign of a different or more cooperative Senate leadership.

The statement is accurate but needs clarification and additional information. That meets our definition of Mostly True.
We'll spell out the obvious problem with PunditFact's rating: Kerpen's tweet doesn't say anything about different or more cooperative Senate leadership. If Kerpen's not making that argument (we found no evidence he was), then it makes no sense at all to charge him with leaving out information. In effect, PunditFact is amending Kerpen's tweet, giving it context that doesn't exist in the original. Kerpen's statement doesn't need clarification or additional information to qualify as simply "True."

PunditFact's rating offers us a perfect opportunity to point out that if Kerpen's statement isn't simply "True" then there's probably no political claim anywhere that's immune to the type of objection PunditFact used to justify its "Mostly True" rating of Kerpen. A politician could claim the sky is blue and the fact checker could reply that yes, the sky is blue but no thanks to the policies of that politician's party! There are endless ways to rationalize withholding a "True" rating.

This rating convinces us that it would be productive to examine the breakdown between "True" and "Mostly True" ratings for signs of partisan bias. Since there's always context missing from political claims, drawing the line between "True" and "Mostly True" may prove no more objective than drawing the line between "False" and "Pants on Fire."

Saturday, April 13, 2013

PolitiFact's comedic tilt

Here are two more examples of PolitiFact's hilarious inability to reach rational conclusions as it goes about its business.

Fred Thompson and the A-Ford-able Care Act

On April 3, a PolitiFact item appeared with the following headline:  "Ex-Sen. Fred Thompson says Obamacare could raise premiums enough to pay for a new Ford Explorer."

That sounds serious.  Ford Explorers are expensive.

But reading just a little deeper into the story, it turns out that Thompson's claim wasn't serious at all.

Here's how PolitiFact reports what Thompson literally said, instead of the attempted paraphrase:
As District Attorney Arthur Branch on Law & Order, former Sen. Fred Thompson was always ready with a wisecrack. In a recent Twitter post, the Tennessee Republican offered a pointed barb about President Barack Obama’s health care law:

"Report: Obamacare could raise ins premiums by 200%. It's the ‘A-Ford-able Care Act’ -- your insurance costs as much as a new Explorer."
First clue:  It's from Twitter, renowned for its 140-character limit.  Second clue:  "Ford" set apart by hyphens in "Affordable."  It's a joke, and PolitiFact hints that it's a joke by referencing the wisecracks Thompson offered while appearing as Arthur Branch on "Law & Order."

Why fact check a joke?

It makes sense to fact check a joke if the joke is supposed to convey a particular truth worth fact checking.  Was Thompson communicating that insurance premiums will cost the same as a new Explorer?  Isn't that line obviously part of the joke?

Thompson's Twitter feed is substantially made up of similar jokes.  He mentions a fact, then riffs on it for laughs.  Have a look at the tweet PolitiFact graded along with the two before and the two after.


It's a pattern with Thompson.  Offer a factoid, crack a joke.  The only facts worth checking are those Thompson expects readers to take seriously, which in this set means the factoid in the first line of each tweet.  PolitiFact, instead of looking at whether insurance premiums could rise as much as 200 percent, looked at the joke end of the statement.  Thompson gets a "False" instead of a "True."  Go figure.

We note that Thompson has written about PolitiFact's focus on his joke.

Dick Cheney and a wobbly President George H. W. Bush?

On April 10, PolitiFact put some bizarre spin on former vice president Dick Cheney's weekend appearance on Fox News, where he offered remembrance of the late Margaret Thatcher.  PolitiFact begins its quotations with the following:
Van Susteren: "But there's that famous quote where, apparently, she told President Bush 41 not to go wobbly."

Cheney: "That's not true."
When I hear the exchange it never occurs to me that Cheney is saying Thatcher never said "Don't go wobbly."  It seems to me that Cheney was telling Van Susteren that President Bush never went wobbly and people should not interpret Thatcher's comment that way.  As Cheney said immediately after:
There was never any doubt about what the president was doing. He didn't need any b[u]cking up.
If Cheney were trying to make the point that Thatcher never said the words attributed to her, then he would more likely have said something like "Thatcher never said it.  It's an old wives' tale."

PolitiFact assumes Cheney claimed Thatcher never said the words and rates his supposed claim "False."


These types of PolitiFlubs are not rare.



Jeff adds:

Check out how PolitiFact completely invented a claim that Thompson never made:


Image from PolitiFact

Even if we disregard Thompson's comical context, he never claimed that your premiums would rise by $30,000. That's a total distortion on the part of our Pulitzer-winning pals. Thompson's claim, even if taken seriously, was that your insurance policy in total would equal the value of an Explorer, not that its price would rise that much.

And once again, we see PolitiFact playing games with quotation marks:




Here's what Thompson tweeted: "It's the ‘A-Ford-able Care Act’ -- your insurance costs as much as a new Explorer."

Note that PolitiFact added the word "premiums," making Thompson's claim seem more outlandish. This, by the way, is exactly what they did to Mitt Romney in their bogus Lie of the Year rating:




PolitiFact took the Romney ad's claim, "sold Chrysler to Italians who are going to build Jeeps in China," which was and is entirely accurate, then added the words "at the cost of American jobs," something the Romney ad never claimed, then slapped a Lie of the Year award on their own invention.

Whether it's shoddy work or intentional partisan spin, this type of nonsense should prevent PolitiFact from being taken seriously.

 

Wednesday, December 12, 2012

Media Trackers (Florida): "PolitiFact Florida Dishonestly Smears Pam Bondi on Obamacare"

Media Trackers of Florida continues to assail the purulent pronouncements of PolitiFact, this time over PolitiFact Florida's "False" ruling for Attorney General Pam Bondi for a statement regarding ObamaCare's effect on business.

Media Trackers:
The numbers cited by Bondi are verifiable and accurate. The Mercer survey found that 61 percent of employers expect costs to rise as a result of Obamacare. As PolitiFact Florida itself noted, “Bondi is correct on the specific numbers she cited.”

Nevertheless, PolitiFact Florida ruled that Bondi’s statement was “false.” How could this be?
It's a good question.  This case involving Bondi is such a good example of poor journalism that Media Trackers' abundance of sensationalistic rhetoric (the quotation above excepted) probably distracts readers from appreciating its problems.

It's hard to see how PolitiFact justifies the ruling in light of its own descriptions.  Take the conclusion, for example:
We don’t doubt there’s anxiety among some businesses over what’s to come under the health care law, and maybe some are talking about whether they’ll have to raise prices or cut jobs. But Bondi didn’t talk about planning, she talked about what’s occuring right now, and we find no studies already showing the negative effects or evidence that businesses are cutting jobs or raising prices now. We rate Bondi’s statement False.
PedantiFact is more like it. 

We often see PolitiFact applying unnecessarily uncharitable interpretations to politicians' statements, with conservatives receiving the greater harm.  Bondi made two main points, that multiple studies showed damage to businesses from ObamaCare and that businesses were responding by cutting hours or laying off workers.  Bondi did not state that studies showed businesses were cutting hours or laying off workers.  PolitiFact drew that inference and graded Bondi in part on that claim.

Given normal charitable interpretation, Bondi was correct in that Mercer conducted more than one study indicating economic damage to business as reflected in employer expectations.  Bondi was likewise correct, based on anecdotal evidence, that businesses are reacting by cutting hours or laying off workers.  The statement of intent is enough to justify Bondi's use of tense.

Here's an analogy:  Suppose a baseball team ended the previous season without hitting a home run.  At the winter meetings the team acquires renowned sluggers Jeff Smith and Alex Weston.  The GM announces the team is solving its power woes with Smith and Weston.

But wait!  The season hasn't started yet, so the team isn't solving anything yet.  Right, PolitiFact?  Smith and Weston might suffer season-ending injuries on their plane ride to join the team.

This type of language is common in English.  A high school senior in California announces she's going to college at Yale.  So what's she still doing in a California high school?  PolitiFact rates the scholarly senior "False."


We applaud Media Trackers for highlighting yet another PolitiGaffe fact check.

We are concerned that some of Media Trackers' assertions are vulnerable to challenge, such as the charge that PolitiFact did its reporting "dishonestly."  Likewise, saying that PolitiFact smears Republicans while "bolstering" Democrats oversimplifies a complex record of unfairness to both parties that happens to harm Republicans more than it does Democrats.  Toning down the condemnation would allow such reports to reach and influence a wider audience.

Friday, November 30, 2012

Michael F. Cannon: "I Have Been False*"

Health care policy expert Michael F. Cannon of the Cato Institute brings us yet another astonishing display of fact-check incompetence from PolitiFact.

PolitiFact Georgia is the culprit this time.
 
Cannon:
In an unconscious parody of everything that’s wrong with the “fact-checker” movement in journalism, PolitiFact Georgia (a project of the Atlanta Journal-Constitution) has rated false my claim that operating an ObamaCare Exchange would violate Georgia law.
Cannon offers a devastating and conclusive rebuke of PolitiFact Georgia, and it's so elegant that putting the argument in our own words is pointless.  But to sum up, PolitiFact committed one of its traditional sins by incomprehensibly misinterpreting what the subject was saying.  PolitiFact charges Cannon with claiming that it is illegal for anyone to operate an insurance exchange in Georgia.  Cannon was talking specifically about the states setting up their own exchanges.

Here's the original context, for comparison (bold emphasis added):
State-created exchanges mean higher taxes, fewer jobs, and less protection of religious freedom. States are better off defaulting to a federal exchange. The Medicaid expansion is likewise too costly and risky a proposition. Republican Governors Association chairman Bob McDonnell (R.,Va.) agrees, and has announced that Virginia will implement neither provision.

There are many arguments against creating exchanges.
Could the context make it any clearer that Cannon refers to state-created exchanges with the arguments that follow?  The subsequent arguments augment the clarity.

PolitiFact (bold emphasis added):
[Cannon] wrote a claim we hadn’t heard before.

"[O]perating an Obamacare exchange would be illegal in 14 states," he wrote. "Alabama, Arizona, Georgia, Idaho, Indiana, Kansas, Louisiana, Missouri, Montana, Ohio, Oklahoma, Tennessee, Utah, and Virginia have enacted either statutes or constitutional amendments (or both) forbidding state employees to participate in an essential exchange function: implementing Obamacare’s individual and employer mandates."

Is that correct? PolitiFact Georgia decided to conduct an examination of the claim.
Does the federal government propose to operate a federal exchange in Georgia using Georgia state employees?  How is that supposed to work?

Don't let our brief summary prevent you from reading Cannon's whole response.

It's just another example of amazing incompetence from PolitiFact.  Props to Cannon for standing up to this form of media tyranny.


Jeff adds:
Despite Bill Adair's assurance that PolitiFact "publishes a list of sources with every Truth-O-Meter item" in order "to help readers judge for themselves whether they agree with the ruling," Cannon notes that the context of his original article "was lost on PolitiFact readers, because PolitiFact provided neither a citation nor a link to the opinion piece it was fact-checking."

As of the time we write this, there is in fact a link to Cannon's National Review article posted on the PF Georgia source list. This means either Cannon was wrong, or PF Georgia amended its article without adding an update, correction, or even an editor's note to document the change.

Considering PolitiFact's long history of inconsistent application of their corrections policy, we're inclined to take Cannon's word for it.


Tuesday, September 4, 2012

Another dimension to the Janesville GM plant fact check

Hot Air's Ed Morrissey performs an important service by emphasizing that Paul Ryan's RNC speech mention of GM's Janesville plant was entirely accurate.

Morrissey:
Clearly, the job of “fact checker” in the mainstream media must not involve research skills.  Nor does it take much in comprehension, because these supposed fact checks started with a misrepresentation of what Ryan actually said.  Here are his actual words, emphasis mine:
President Barack Obama came to office during an economic crisis, as he has reminded us a time or two. Those were very tough days, and any fair measure of his record has to take that into account. My home state voted for President Obama. When he talked about change, many people liked the sound of it, especially in Janesville, where we were about to lose a major factory.

A lot of guys I went to high school with worked at that GM plant. Right there at that plant, candidate Obama said: “I believe that if our government is there to support you … this plant will be here for another hundred years.” That’s what he said in 2008.

Well, as it turned out, that plant didn’t last another year. It is locked up and empty to this day. And that’s how it is in so many towns today, where the recovery that was promised is nowhere in sight.

Morrissey points out in his post that a number of mainstream media fact checkers published stories claiming Ryan was inaccurate in talking about the Janesville plant.

Here's how PolitiFact continues to play it:
Here is a look at the accuracy of various statements Ryan made Wednesday night, based on past PolitiFact rulings and other sources as noted:

Closed GM plant: PolitiFact Wisconsin evaluated Ryan’s statement — made both before the convention and in his speech — that Obama broke his promise to keep the Janesville auto factory from closing.

The claim was rated False due to the lack of evidence Obama explicitly made such a promise and the fact the Janesville plant shut down before he took office.
During his convention speech, Ryan referred to Obama's promise of economic recovery, not a promise to keep the Janesville plant open.  Ryan quoted Obama accurately about the Janesville plant, in fact.  Given that Ryan said nothing about an Obama promise to keep the plant open, it is irrelevant when the plant closed.  And as to the timing of the closure, Ryan provides what seems like adequate context.  He says the plant was at risk when Obama made his speech.  He points out the plant didn't last another year.

Ryan's convention speech was too accurate for PolitiFact.  So PolitiFact did not fact check Ryan's convention speech regarding the Janesville plant.  It fact checked a different statement Ryan made earlier in 2012 and claimed it was fact checking Ryan's convention speech.
Ryan stirred memories of the factory on Aug. 16, 2012, attacking President Barack Obama during a campaign speech in Ohio.

"I remember President Obama visiting it when he was first running, saying he'll keep that plant open. One more broken promise," Ryan said.

He made the same point Aug. 29 during his speech to the Republican National Convention in Tampa.
The ensuing fact check emphasizes things only specifically found in the Aug. 16 speech.

These types of errors appear to go beyond mere incompetence.  They look like designed misdirection.  When a fact check claims to check something said at a convention it should stick with what was said at the convention.  That's something no competent reporter should bungle.

Are we to believe the team at PolitiFact responsible for this fact check was collectively that incompetent?


Afters: 

Morrissey updated his story with a new post including video and additional comments.  And it's worth reminding readers that even though PolitiFact checked a statement other than the one Ryan made at the convention it still blew the call.

Saturday, September 1, 2012

PolitiFlub: PolitiFact grades Callista Gingrich by the wrong measure

Crossposted from Sublime Bloviations


Words matter -- We pay close attention to the specific wording of a claim. Is it a precise statement? Does it contain mitigating words or phrases?
--Principles of PolitiFact and the "Truth-O-Meter"

It's a testament to PolitiFact's warped self-image that it continues churning out journalistic offal even while enduring a wave of substantive criticism.

Our latest example comes again from the Republican National Convention, where Callista Gingrich claimed that the Obama administration's foreign policy has led to decreased respect for the United States.

A legitimate fact-checking enterprise immediately suspects that Gingrich referred to respect from foreign governments in terms of recognizing the U.S. as a power to which deference yields the most beneficial results.  In other words, other nations fear the United States to the degree that they operate contrary to our policy designs.  Based on that premise, the legitimate fact checker asks Gingrich to clarify her intent and tries to find a verifiable statistic that measures her accuracy.

That's not PolitiFact:
While surveys are currently being undertaken in 20 nations, only 14 of those have been done for long enough to shed light on Callista Gingrich’s claim.

The question asked is, "Please tell me if you have a very favorable, somewhat favorable, somewhat unfavorable or very unfavorable opinion of ... the United States." While favorability isn’t exactly identical to respect, we think it’s very close and a good approximation.
Seriously?

No doubt PolitiFact used the opinions of foreign policy experts to determine that the Pew data were an appropriate measure.

Or maybe not:


Seriously?  No expert sources?  Not one?

That's not a responsible fact check.  The global standing of the United States does not depend on the popular view among the world's peoples.  It comes directly from the way the world's leaders view the United States and whether they believe they can flaunt their power contrary to U.S. interests.

PolitiFact chose the wrong measure.

Why does anyone take PolitiFact seriously?



Jeff adds (9-2-12): 

If there's any doubt that PolitiFact is peddling editorial pieces as objective reporting, check out this Bret Stephens op-ed in the Wall Street Journal last week discussing the same topic and using the same sources:
In June, the Pew Research Center released one of its periodic surveys of global opinion. It found that since 2009, favorable attitudes toward the U.S. had slipped nearly everywhere in the world except Russia and, go figure, Japan. George W. Bush was more popular in Egypt in the last year of his presidency than Mr. Obama is today.

It's true that these surveys need to be taken with a grain of salt: efficacy, not popularity, is the right measure by which to judge an administration's foreign policy. But that makes it more noteworthy that this administration should fail so conspicuously on its own terms. Mr. Obama has become the Ruben Studdard of the world stage: the American Idol who never quite made it in the real world.
Is PolitiFact accusing Mr. Stephens of lying? Inaccuracy? Or is the reality that the world's opinion of America is beyond the scope of objective, measurable standards? How could two reputable outfits come up with such contradictory interpretations of the same facts? What is the measuring stick that makes Louis Jacobson and the Truth-O-Meter the final arbiter of truth on one end and Bret Stephens a dishonest, partisan dolt on the other?

Callista Gingrich made a perfectly reasonable, if politically rhetorical, statement about Obama's influence on the world's impression of our country. She offered an opinion that has solid, if not conclusive, support. PolitiFact's biggest lie is their claim that they can fit opinions onto a ratings scale and objectively disprove them with opinions of their own.

The reality is PolitiFact often publishes opinion pieces instead of fact checks. And if it expects to maintain whatever shred of credibility it has left, it should take a lesson from Mr. Stephens' employer, and publish its articles on the editorial page.


(Earlier today I explained even more problems with PolitiFact's treatment of Gingrich's claim in the comments section below, so I won't repeat them here.)

Thursday, August 30, 2012

PolitiFlub: PolitiFact Wisconsin, the Obama promise and the Janesville GM plant (Updated)

Crossposted from Sublime Bloviations


PolitiFact has earned its status as the least dependable in the stable of left-leaning fact-checking organizations.  PolitiFact Wisconsin gives us one more sparkling example supporting that judgment with a fact check of Republican vice presidential candidate Paul Ryan.

Ryan said President Obama broke a campaign promise to keep the Janesville (Wisc.) plant open.   PolitiFact Wisconsin detected no such promise from Mr. Obama.

Here's what then-candidate Obama said in February 2008 (bold emphasis added) during a speech in Janesville:
This can be America’s future. I know that General Motors received some bad news yesterday, and I know how hard your Governor has fought to keep jobs in this plant. But I also know how much progress you’ve made – how many hybrids and fuel-efficient vehicles you’re churning out. And I believe that if our government is there to support you, and give you the assistance you need to re-tool and make this transition, that this plant will be here for another hundred years. The question is not whether a clean energy economy is in our future, it’s where it will thrive. I want it to thrive right here in the United States of America; right here in Wisconsin; and that’s the future I’ll fight for as your President.
Importantly, Obama opened his speech with references to the plant.  He then sketched his vision of America before mentioning how the Janesville plant could stay open if the government provided support.  In that context, Obama pledged to provide that support.  Does Mr. Obama use the specific term "promise" in his statement?  No, certainly not.  Does he guarantee the plant will remain open?  Again, no.  However, there is little doubt that every person in Janesville listening to his speech took it as a pledge from the candidate to work, as president, to enact policies to keep the plant open. Mr. Obama did, in fact, pledge to do just that.

PolitiFact Wisconsin located no such pledge.

But it gets worse.  Much worse. PolitiFact builds its conclusion primarily on its claim that the Janesville plant closed before Mr. Obama took office (bold emphasis added):
Ryan said Obama broke his promise to keep a Wisconsin GM plant from closing. But we don't see evidence he explicitly made such a promise -- and more importantly, the Janesville plant shut down before he took office.

We rate Ryan's statement False.
GM announced the likely permanent closure of the Janesville plant in June of 2008, less than four months after Mr. Obama pledged to work toward an agenda that would keep the plant open for "another hundred years."

So, when is the plant closed?  When it closes for the last time?  When it produces its last GM vehicle?  When the company announces its permanent closure on a particular date?

When President Bush left office, he had provided Chrysler and GM loans to keep them going until the automakers could present restructuring plans to the Obama administration in April.

GM announced the final closing of the Janesville plant in April of 2008, and the final Chevy Tahoe came off the line in December 2008, before Obama took office as president.  On the other hand, the plant stayed open so that GM could build trucks for Isuzu:
The company stopped building SUVs at the plant just before Christmas.

That decision left about 1,200 workers unemployed. At the time GM said a crew would remain to complete an order for Isuzu.
But by June of 2009, while the Obama administration was still negotiating GM's fate and after the plant had completed the work for Isuzu, Janesville continued to maintain hope that its plant might reopen:
JANESVILLE (WKOW) -- There is a lot of optimism in Janesville today, after receiving word GM could reopen one of its idle plants to produce new fuel efficient cars, according to the Wisconsin Department of Workforce development.
If the GM restructuring deal brokered by the Obama administration resulted in continued production at GM's plant in Janesville, is there any doubt at all that Obama would receive credit for delivering on a promise?  Especially if the work involved hybrid vehicles?  The opportunity was there for the taking.

Why is so much of this information missing from a fact check?


Update 8/30/2012, 4:15 p.m.:

NPR fills in some of the information PolitiFact omitted.


Correction 8/31/2012:  Original version had wrong date for Obama's Janesville speech on first reference:  "Here's what President-elect Obama said in December 2008 (bold emphasis added) during a speech in Janesville:"  That sentence has been made accurate.

Friday, August 3, 2012

Americans for Tax Reform: "Responding to Politifact on Olympics and Taxes"

Do PolitiFact staffers actually read the statements they rate? It's stuff like this that implies they don't.

At issue is PolitiFact's rating of a recent Americans for Tax Reform claim that set the Internet abuzz:

Image from PolitiFact.com

PolitiFact found the claim to be Mostly False. Americans for Tax Reform responded and made quick work of PolitiFact's sophomoric attempt at fact checking (emphasis in the original):
ATR's primary claim is that the prizes are taxable, not that all medalists will necessarily owe $9,000 in taxes.  Poltifact admits that after they checked with their own experts, it was confirmed that prizes awarded would be taxable.  On this finding alone, the verdict should have been "mostly true," at least.
They're right. The original ATR article that PolitiFact rated emphasized the onerous tax policies of the U.S., not the specific cost.  By focusing on the $9,000 figure, PolitiFact is able to fish out a kernel of ambiguity from an otherwise factually solid article. But even with their myopic focus, PolitiFact still manages to flub this check:
ATR consistently said that prizes were taxable "up to" a 35% marginal tax rate.  We deliberately used this language because we know that Olympians will pay taxes at whatever marginal tax rate they happen to find themselves in this year.
Remember back in the olden days when PolitiFact passed out Mostly True ratings for demonstrably false numbers as long as the claimant was "citing figures from memory"?  Apparently some qualifiers are more equal than others.

ATR goes on to explain in detail how the $9,000 figure itself is a perfectly accurate example. Make sure to read the entire post to see the step-by-step take-down of PolitiFact's bupkis.

The story here is PolitiFact found a solid, honest criticism of the U.S. tax code and had to resort to distortion and gimmicks to cast a pall over the entire article. ATR presented a legitimate example that illustrated its political position. PolitiFact found that claim accurate, and then editorialized to brand ATR with the mark of dishonesty. 

That's not what a non-partisan fact checker is supposed to do.

Monday, July 23, 2012

Ohio Watchdog: "PolitiFact slams GOP spokeswoman for ‘literally true’ statement"

Jon Cassidy and Ohio Watchdog give us the eighth installment in their series on PolitiFact Ohio, this time examining PolitiFact's rating of the Ohio Republican Party and spokeswoman Izzy Santa (bold emphasis in the original):
Joe Guillen, the Cleveland Plain Dealer reporter writing for PolitiFact Ohio, was determined to find fault.

“The claim is literally true because it includes both Brown and his allies,” Guillen wrote, and he should have stopped right there. If it’s literally true, are we supposed to worry it might be figuratively untrue? It’s a number, not a simile.

It turns out that Guillen’s beef is that Santa’s declaration changed the subject.
It turns out that we have to rely on Guillen alone for the context of Santa's remarks.  Guillen insists that the context indicates Santa was talking about "outside money."  Part of Guillen's evidence for the PolitiFact story is Guillen's July 10 story for the Plain Dealer that likewise insists--based on a partial quotation and Guillen's paraphrase--that Santa was talking about "outside money":
Izzy Santa, a spokeswoman for the Ohio Republican Party, said Redfern’s criticisms are not credible because special interest groups supporting Brown “are plotting to spend over $13 million.”
Was Guillen's paraphrase justified?

Cassidy apparently has the text of the email Santa sent to reporters (reformatted quotation):
After Redfern’s July 10 press conference, she sent out an email to reporters:
Redfern is the least credible person to be commenting on outside spending when it comes to Ohio’s U.S. Senate race. Sherrod Brown and his special interest allies in Washington are plotting to spend over $13 million, with no end in sight. It’s clear that Brown and his supporters are having to spend this type of money because Brown’s out-of-touch record has exposed him to Ohioans as a 38-year politician and Washington insider who puts politics over people.

If the above represents the full context of Santa's response, then Guillen has misrepresented her.  Santa specifically wrote "Brown and his special interest allies" and Guillen sentence-shopped that into "special interest groups" minus Brown.  Guillen's paraphrase, in other words, changed Santa's meaning.  And Guillen proceeds to fact check his paraphrase and blame it on Santa.

Guillen probably shouldn't expect more than a lump of coal for Christmas this year.

Rather than interpreting "Sherrod Brown and his special interest allies" contrary to its literal meaning, he should have inquired further as to how Santa justified calling Redfern "the least credible person" to comment on outside spending.

Visit Ohio Watchdog to read the whole of Cassidy's report.

counterirritant: "Holy Misrepresentation, Batman!"

Alternate title:  "The unBainable lightness of Bane"

The intermittent "counterirritant" returns with another critique of PolitiFact, this time skewering Poynter's Pedants for ignoring the context of Rush Limbaugh's comments regarding the Bain-Bane connection Limbaugh mentioned on his radio program.

counterirritant (bold emphasis added):
The transcript PolitiFact used as a source offers more support for the radio host than it does the fact checkers. Only two sentences discuss whether the name Bane was selected because of Romney. The first is a question: “Do you think that it is accidental that the name of the really vicious fire breathing four eyed whatever it is villain in this movie is named Bane?” After asking that question, Limbaugh was interrupted and went off on a tangent. When he returned to the subject, he continued: “So, anyway, this evil villain in the new Batman movie is named Bane.  And there’s now a discussion out there as to whether or not this is purposeful and whether or not it will influence voters.” After noting it, Limbaugh never takes a side in the discussion of purposefulness.
The point is inarguable:  PolitiFact fact checked Limbaugh on a statement he never made, unless one insists without reasonable supporting evidence that his question was rhetorical.  PolitiFact didn't bother making the argument and moreover decided against dealing with this issue through its "In Context" feature.  "In Context" functions for PolitiFact pretty much like a "nothing to see here" tag.

The critique by counterirritant serves as a fine bookend for my "PolitiFlub" critique of the same PolitiFact item.  I took a different tack, showing that even if Limbaugh was making the claim PolitiFact imagined, the fact checkers went about their business in entirely the wrong way.


Jeff adds (07/24/12): Lest there be any confusion about what PolitiFact accused Rush of saying, here's a screenshot of their Twitter feed:

Thursday, June 7, 2012

(for Glenn Kessler and PolitiFact) How to fact check the job recovery numbers

Originally posted on May 14, 2012 at Sublime Bloviations

A valuable media watchdog watchdog post at the new blog "counterirritant" pointed out a problem with Glenn Kessler's fact check of a claim from Republican presidential candidate Mitt Romney.

Kessler writes the fact checker feature for the Washington Post.  Romney claimed that in a normal recovery from recession the country should be adding something like 500,000 jobs per month.

counterirritant:
Kessler decided that the best way to “check” this was determine how frequently 500,000 jobs were created in a month in the last 65 year.
The post goes on to very effectively criticize Kessler's methodology throughout.

By a funny coincidence (general leftward lean of the mainstream media, maybe?), PolitiFact used very similar reasoning on the same claim:
Is 500,000 jobs created per month normal for a recovery?

The short answer is "no."

We arrived at this conclusion by looking at the net monthly change in jobs all the way back to 1970. Since Romney was referring to total jobs, rather than private-sector jobs only, we used total jobs as our measurement. And since Romney was talking about job creation patterns during a recovery, we looked only at job creation figures for non-recessionary periods, as defined by the National Bureau of Economic Research. Finally, we excluded the current recovery.
The Kessler/PolitiFact method is entirely wrongheaded.

Romney didn't claim that 500,000 jobs created per month was a normal figure during a recovery.  I can imagine the furrowed brows of Kessler, Louis Jacobson and other mainstream media fact checkers.  Aren't they the experts?  What am I talking about?

It's actually pretty simple.

The size of the economy changes.  If country A enters a recession losing 1 million jobs  then it takes two months to regain the lost jobs at a rate of 500,000 per month.  If country B experiences a recession losing 10 million jobs then it takes 20 months to regain them at a rate of 500,000 per month.

Not only does the size of the economy vary, but so does the depth of the recession.  The rate of recovery for lost jobs needs  to account for both factors.  Neither Kessler nor PolitiFact gave any apparent consideration to those critical criteria.  It's like comparing prices between now and the 1950s without adjusting for inflation.

The Romney campaign has made a number of statements like the 500,000 jobs claim, and they probably relate to the following chart or one like it from the Bureau of Labor Statistics:

Clipped from nytimes.com

Romney's claim almost certainly derives from the fact that post-war recoveries usually replace lost jobs much faster than the present recovery. 

So how does one check that claim?  It's not that hard.  Take the bottom point of employment, then count the number of months it takes to get employment back up to the pre-recession peak.  Divide the number of jobs lost by the number of months it took to return to that peak; the result is a recovery rate in jobs per month.  Do the same for each of the post-war recessions, then average those rates.  After all of that, divide the number of jobs lost in the 2007 recession by the averaged recovery rate to see how long a typical recovery from a recession of that depth would take.
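Here's a minimal sketch of that calculation in Python.  The figures for the earlier recessions are hypothetical placeholders (a real check would pull peak-to-trough job losses and recovery durations from the BLS payroll series), and the roughly 8.7 million jobs lost in the 2007 recession is our approximation, not a figure from Kessler or PolitiFact:

# Sketch of the method described above, with placeholder numbers.
# Each entry: (jobs lost from peak to trough, months from trough back to the prior peak).
past_recessions = [
    (2_800_000, 20),  # hypothetical post-war recession
    (1_600_000, 10),  # hypothetical
    (3_100_000, 25),  # hypothetical
]

# Recovery rate (jobs per month) for each past recession, then the average.
rates = [jobs_lost / months for jobs_lost, months in past_recessions]
average_rate = sum(rates) / len(rates)

# Apply the average historical rate to the 2007 recession's losses
# (roughly 8.7 million -- an approximation, not a number from the fact checks).
jobs_lost_2007 = 8_700_000
expected_months = jobs_lost_2007 / average_rate
print(f"At a historically typical pace, recovery would take about {expected_months:.0f} months.")

Comparing that result against the actual pace of the current recovery is the apples-to-apples test the 500,000-jobs-per-month claim calls for.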

Why didn't the Washington Post or PolitiFact do anything remotely resembling the fact check I just described?  It could be gross incompetence.  It could be ideological bias.  Or it could be both.


Update 6/7/2012:  Added post title

Sunday, June 3, 2012

Cover your PolitifArse! PolitiFact goes shameless

PolitiFact has egg on its face worthy of the Great Elephant Bird of Madagascar.

On May 23, PolitiFact published an embarrassingly shallow and flawed fact check of two related claims from a viral Facebook post.  One claim held false Mitt Romney's assertion that President Obama has presided over an acceleration of government spending unprecedented in recent history.  The second claim, quoted from Rex Nutting of MarketWatch, held that "Government spending under Obama, including his signature stimulus bill, is rising at a 1.4 percent annualized pace — slower than at any time in nearly 60 years."

PolitiFact issued a "Mostly True" rating to these claims, saying its own math confirmed select portions of Nutting's math. The Associated Press and Glenn Kessler of the Washington Post, among others, gave Nutting's calculations very unfavorable reviews.

PolitiFact responded with an article titled "Lots of heat and some light," quoting some of the criticisms without comment other than to insist that they did not justify any change in the original "Mostly True" rating.  PolitiFact claimed its rating was defensible since it only incorporated part of Nutting's article.
(O)ur item was not actually a fact-check of Nutting's entire column. Instead, we rated two elements of the Facebook post together -- one statement drawn from Nutting’s column, and the quote from Romney.
I noted at that point that we could look forward to the day when PolitiFact would have to reveal its confusion in future treatments of the claim.

We didn't have to wait too long.

On May 31, last Thursday, PolitiFact gave us an addendum to its original story.  It's an embarrassment.

PolitiFact gives some background for the criticisms it received over its rating.  There's plenty to criticize there, but let's focus on the central issue:  Was PolitiFact's "Mostly True" ruling defensible?  Does this defense succeed?

The biggest reason this CYA fails

PolitiFact keeps excusing its rating by claiming it focuses on the Facebook post by "Groobiecat", rather than Nutting's article, and only fact checks the one line from Nutting included in the Facebook graphic.

Here's the line again:
Government spending under Obama, including his signature stimulus bill, is rising at a 1.4 percent annualized pace — slower than at any time in nearly 60 years.
This claim figured prominently in the AP and Washington Post fact checks mentioned above.  The rating for the other half of the Facebook post (on Romney's claim) relies on this one.

PolitiFact tries to tell us, in essence, that Nutting was right on this point despite other flaws in his argument (such as the erroneous 1.4 percent figure embedded right in the middle), at least sufficiently to show that Romney was wrong.

A fact check of the Facebook graphic should have looked at Obama's spending from the time he took office until Romney spoke.  CBO projections should have nothing to do with it.  The fact check should attempt to pin down the term "recent history" without arbitrarily deciding its meaning. 

The two claims should have received their own fact checks without combining them into a confused and misleading whole.  In any case, PolitiFact flubbed the fact check as well as the follow up.

Spanners in the works

As noted above, PolitiFact simply ignores most of the criticisms Nutting received.  Let's follow along with the excuses.

PolitiFact:
Using and slightly tweaking Nutting’s methodology, we recalculated spending increases under each president back to Dwight Eisenhower and produced tables ranking the presidents from highest spenders to lowest spenders. By contrast, both the Fact Checker and the AP zeroed in on one narrower (and admittedly crucial) data point -- how to divide the responsibility between George W. Bush and Obama for the spending that occurred in fiscal year 2009, when spending rose fastest.
Stay on the lookout for specifics about the "tweaking."

Graphic image from Groobiecat.blogspot.com

I'm still wondering why PolitiFact ignored the poor foundation for the 1.4 percent average annual increase figure the graphic quotes from Nutting.  But no matter.  Even if we let PolitiFact ignore it in favor of "slower than at any time in nearly 60 years," the explanation for their rating is doomed.

PolitiFact:
(C)ombining the fiscal 2009 costs for programs that are either clearly or arguably Obama’s -- the stimulus, the CHIP expansion, the incremental increase in appropriations over Bush’s level and TARP -- produces a shift from Bush to Obama of between $307 billion and $456 billion, based on the most reasonable estimates we’ve seen critics offer.
The fiscal year 2009 spending figure from the Office of Management and Budget was $3,517,677,000,000.  That means that $307 billion (there's a tweak!) is 8.7 percent of the 2009 total spending.  And it means that, before Obama even starts getting blamed for any of his own budgets, his share of the FY 2009 increase over the 2008 baseline is already larger than the share left to President Bush.  I still don't find it clear where PolitiFact puts that spending on Obama's account.
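A quick check of that arithmetic; note that the FY 2008 outlay figure of roughly $2.98 trillion comes from OMB's historical tables and is our addition, not a number from PolitiFact's item:

fy2008 = 2_982.5    # billions of dollars; OMB historical tables (our figure, not PolitiFact's)
fy2009 = 3_517.677  # billions of dollars; the OMB figure quoted above
shift = 307.0       # billions shifted from Bush to Obama, per PolitiFact's low-end estimate

increase = fy2009 - fy2008     # total FY 2009 increase over the FY 2008 baseline (~$535 billion)
bush_share = increase - shift  # what remains on Bush's account (~$228 billion)

print(round(shift / fy2009 * 100, 1))             # ~8.7 percent of FY 2009 spending
print(round(increase), round(shift), round(bush_share))

Run the same numbers with PolitiFact's higher $456 billion shift and Bush's share shrinks to roughly $79 billion, which only strengthens the point.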
(B)y our calculations, it would only raise Obama’s average annual spending increase from 1.4 percent to somewhere between 3.4 percent and 4.9 percent. That would place Obama either second from the bottom or third from the bottom out of the 10 presidents we rated, rather than last.
PolitiFact appears to say its calculations suggest that accepting the critics' points makes little difference.  We'll see that isn't the case while also discovering a key criticism of the "annual spending increase" metric.

Reviewing PolitiFact's calculations from earlier in its original story, we see that PolitiFact averages Obama's spending using fiscal years 2010 through 2013.  However, in this update PolitiFact apparently does not consider another key criticism of Nutting's method:  He cherry picked future projections.  Subtract $307 billion from the FY 2009 spending and the increase in FY 2010 ends up at 7.98 percent.  And where then do we credit the $307 billion?

An honest accounting requires finding a proper representation of Obama's share of FY 2009 spending.  Nutting provides no such accounting:
If we attribute that $140 billion in stimulus to Obama and not to Bush, we find that spending under Obama grew by about $200 billion over four years, amounting to a 1.4% annualized increase.
Neither does PolitiFact:
(C)ombining the fiscal 2009 costs for programs that are either clearly or arguably Obama’s -- the stimulus, the CHIP expansion, the incremental increase in appropriations over Bush’s level and TARP -- produces a shift from Bush to Obama of between $307 billion and $456 billion, based on the most reasonable estimates we’ve seen critics offer.

That’s quite a bit larger than Nutting’s $140 billion, but by our calculations, it would only raise Obama’s average annual spending increase from 1.4 percent to somewhere between 3.4 percent and 4.9 percent.
But where does the spending go once it is shifted? Obama's 2010?  It makes a difference.

"Lies, damned lies, and statistics":  PolitiFact, Nutting and the improper metric

Click image for larger view
The graphic embedded to the right helps illustrate the distortion one can create using the average increase in spending as a key statistic.  Nutting probably sought this type of distortion deliberately, and it's shameful for PolitiFact to overlook it.

Using an annual average increase for spending allows one to make much higher total spending not look so bad.  Have a look at the graphic to the right just to see what it's about, then come back and pick up the reading.  We'll wait.

Boost spending 80 percent in your first year (A) and keep it steady thereafter and you'll average 20 percent over four years. Alternatively, boost spending 80 percent just in your final year (B) and you'll also average 20 percent per year. But in the first case you'll have spent far more money--$2,400 more over the course of four years.

It's very easy to obscure the amount of money spent by using a four-year average.  In case A spending increased by a total of $3,200 over the baseline total.  That's almost $800 more than the total derived from simply increasing spending 20 percent each year (C).

Note that in the chart each scenario features the same initial baseline (green bar), the same yearly average increase (red star), and widely differing total spending over the baseline (blue triangle).
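For readers who want to verify the arithmetic behind the chart, here's a minimal sketch assuming a $1,000 baseline (the assumption consistent with the $3,200 and $2,400 figures above):

# The three spending paths from the chart, each starting from a $1,000 baseline.
baseline = 1000
paths = {
    "A": [1800, 1800, 1800, 1800],    # 80 percent jump in year one, then flat
    "B": [1000, 1000, 1000, 1800],    # flat, then an 80 percent jump in year four
    "C": [1200, 1440, 1728, 2073.6],  # 20 percent increase every year
}

for name, years in paths.items():
    increases = []
    prev = baseline
    for spending in years:
        increases.append((spending - prev) / prev * 100)
        prev = spending
    avg_increase = sum(increases) / len(increases)    # the Nutting-style metric
    over_baseline = sum(s - baseline for s in years)  # total spending above the baseline
    print(f"{name}: average increase {avg_increase:.0f}%, spending over baseline ${over_baseline:,.0f}")

All three paths show the same 20 percent average annual increase, yet A spends $2,400 more than B and nearly $800 more than C, exactly the distortion the chart illustrates.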

Some of Nutting's conservative critics used combined spending over four-year periods to help refute his point.  Given the potential for distortion in the average annual increase, it's very easy to understand why.  Comparing four-year spending totals smooths out the misleading effects highlighted in the graphic.

We have no evidence that PolitiFact noted any of this potential for distorting the picture.  The average percentage increase should work just fine, and it's simply coincidence that the same average increase can conceal considerably higher total spending when the largest jump happens at the beginning (example A) rather than at the end (example B).

Shenanigan review:
  • Yearly average change metric masks early increases in spending
  • No mention of the effects of TARP negative spending
  • Improperly considers Obama's spending using future projections
  • Future projections were cherry-picked
The shift of FY 2009 spending from TARP, the stimulus and other initiatives may also belong on the above list, depending on where PolitiFact put the spending.

I have yet to finish my own evaluation of the spending comparisons, but what I have completed so far makes it appear that Romney may well be right about Obama accelerating spending faster than any president in recent history (at least back through Reagan).  Looking just at percentages on a year-by-year basis instead of averaging them shows Obama's first two years allow him to challenge Reagan or George W. Bush as the biggest accelerator of federal spending in recent history.  And that's using PolitiFact's $307 billion figure instead of the higher $456 billion one.

So much for PolitiFact helping us find the truth in politics.

Note:

I have a spreadsheet on which I am performing calculations to help clarify the issues surrounding federal spending and the Nutting/PolitiFact interpretations.  I hope to produce an explanatory graphic or two in the near future based on the eventual numbers.  Don't expect all the embedded comments on the sheet to make sense until I finalize it (taking down the "work in progress" portion of the title).



Jeff adds:

It's not often PolitiFact admits to the subjective nature of their system, but here we have a clear case of editorial judgement influencing the outcome of the "fact" check:
Our extensive consultations with budget analysts since our item was published convinces us that there’s no single "correct" way to divvy up fiscal 2009 spending, only a variety of plausible calculations.
This tells us that PolitiFact arbitrarily chose a "plausible calculation" that was very favorable to Obama in its original version of the story. Had it used other, equally plausible methods, the rating would have gone down. By presenting this interpretation of the calculations as objective fact, PolitiFact misleads its readers into believing the debate is settled.

This update also contradicts PolitiFact's reasons for the "Mostly True" rating:
So the second portion of the Facebook claim -- that Obama’s spending has risen "slower than at any time in nearly 60 years" -- strikes us as Half True. Meanwhile, we would’ve given a True rating to the Facebook claim that Romney is wrong to say that spending under Obama has "accelerated at a pace without precedent in recent history." Even using the higher of the alternative measurements, at least seven presidents had higher average annual increases in spending. That balances out to our final rating of Mostly True.
In the update, they're telling readers a portion of the Facebook post is Half-True, while the other portion is True, which balances out to the final Mostly True rating. But that's not what they said in the first rating (bold emphasis added):
The only significant shortcoming of the graphic is that it fails to note that some of the restraint in spending was fueled by demands from congressional Republicans. On balance, we rate the claim Mostly True.
In the first rating, it's knocked down because it doesn't give enough credit to the GOP for restraining Obama. In the updated version of the "facts", it's knocked down because of a "balance" between two portions that are Half-True and completely True. There's no mention of how the GOP's efforts affected the rating in the update.

Their attempts to distance themselves from Nutting's widely debunked article are also comically dishonest:
The Facebook post does rely partly on Nutting’s work, and our item addresses that, but we did not simply give our seal of approval to everything Nutting wrote.
That's what PolitiFact is saying now. But in the original article PolitiFact was much more approving:
The math simultaneously backs up Nutting’s calculations and demolishes Romney’s contention.
 And finally, we still have no explanation for the grossly misleading headline graphic, first pointed out by Andrew Stiles:

Image clipped from PolitiFact.com
Neither Nutting nor the original Groobiecat post claims Obama had the "lowest spending record". Both focused on the growth rate of spending. This spending record claim is PolitiFact's invention, one the fact check does not address. But it sure looks nice right next to the "Mostly True" graphic, doesn't it? Sorting out the truth, indeed.

The bottom line is that PolitiFact's CYA is hopelessly flawed, and offensive to anyone who is sincerely concerned with the truth. A fact checker's job is to illuminate the facts. PolitiFact's efforts here only obfuscate them.


Bryan adds:

Great points by Jeff across the board.  The original fact check was indefensible, and the other fact checks of Nutting by the mainstream media probably did not go far enough in calling Nutting on the carpet.  PolitiFact's attempts to glamorize this pig are deeply shameful.


Update:  Added background color to embedded chart to improve visibility with enlarged view.



Correction 6/4/2012:  Corrected one instance in which PolitiFact's $307 billion figure was incorrectly given as $317 billion.  Also changed the wording in a couple of spots to eliminate redundancy and improve clarity, respectively.

Thursday, May 31, 2012

Big Journalism: "PolitiFact Bases Entire Fact Check on Author's Intuition"

John Sexton of Big Journalism (and Verum Serum fame) joins Matthew Hoy in slamming PolitiFact's rating of a recent Crossroads GPS ad.

Sexton notes that the ad says one thing and PolitiFact claims the ad says something else:
The ad is clearly about the President's promise that you could keep your insurance, not some insurance. Instead of staying on that point, PolitiFact's introduces a novel new interpretation of the ad's meaning. Suddenly, it's not about the President's promise at all, rather " Its point seems to be simply that a lot of people will lose coverage." Really? Where does it say that?
Sexton draws attention to a recurrent problem at PolitiFact.  Statements that fail to accord with the views inside the left-skewed journalistic bubble often receive an uncharitable interpretation that the original speaker would scarcely recognize.  PolitiFact ends up appearing either unable or unwilling to understand the readily apparent meaning.

Sexton makes other good points as well, so visit Big Journalism and read it through from start to finish.  Sexton gets Bill Adair on the record defending PolitiFact's journalistic malpractice, and that's always worth seeing even if it draws from one of Adair's two favorite cliches:  People won't always agree with PolitiFact's ratings, and PolitiFact gets criticized by conservatives and liberals (PolitiFact, therefore, is fair).

Wednesday, May 16, 2012

Sublime Bloviations: Grading PolitiFact (Florida): Is U.S. Chamber of Commerce's Bill Nelson ad accurate?

PFB editor Bryan White takes an in-depth look at a recent PolitiFact rating that is a good example of the Pulitzer winners' habit of inventing a claim to check. It's a tad too lengthy to crosspost here, but Bryan's post is well worth the read.

The issue is a U.S. Chamber of Commerce ad critical of Democratic Senator Bill Nelson. The ad highlights Nelson's support for the Affordable Care Act (ACA) and reminds viewers of the CBO estimate that 20 million people could lose their current coverage:
PolitiFact focuses on a would-be broader context where the ad supposedly implies that 20 million Medicare beneficiaries will lose their current insurance:
Here, we’re checking whether "20 million people could lose their current coverage," and whether those people are older Americans on Medicare as the ad strongly suggests.
Don't hold your breath waiting for PolitiFact to substantiate its claim that the ad "strongly suggests" that 20 million Medicare beneficiaries will lose their current coverage. It never happens.
Bryan goes on to list four "shenanigans" PolitiFact employs in order to end up with the rating they want. Here's my favorite:
Shenanigan C:
Second, some portion of that (20 million) number are people voluntarily switching to other, better coverage -- not being forced out of coverage against their will.
Ah, the old "conjecture as evidence" ploy. "Are" suggests a fact in evidence. But the consequences of the law foretold in the CBO report are not yet in evidence. As chronicled in an earlier "Grading PolitiFact" entry, PolitiFact invented its evidence on this point. Is it possible that a person will voluntarily leave employer-provided coverage for coverage under an exchange? Sure, barely. But subsidized exchange coverage under the health care reform act is not available to those forsaking employer-offered coverage.
Bryan also highlights yet another example of PolitiFact asking its sources leading questions. That's a problem we've pointed out before.

Bryan spares little in his critique. It's difficult to believe PolitiFact is this inept at following basic journalistic guidelines. The more likely explanation for these failures is a political bias that goes unchecked by the editors. Bryan lays out his case in detail, and this short review does not give readers the full depth of PolitiFact's flaws. Do yourself a favor and read the whole thing.

Tuesday, February 7, 2012

Hoystory: "Obama’s War on Religion and Conscience"

Matthew Hoy is back at it with his usual biting commentary on PolitiFact. This time he shares his thoughts on the current debate about the effect of PPACA mandates on institutions of the Roman Catholic Church.

Hoy deals broadly with the controversy, but we'll highlight his mention of PolitiFact. At issue is PolitiFact's treatment of Newt Gingrich's statement that the PPACA requires religious institutions to provide insurance coverage for contraceptives:
After honestly analyzing the rule and the law, Politifraud labels Gingrich’s charge “mostly false” as they engage in an amount of hand-waving that would enable human flight without the aid of wings, engines or the other commonly required tools.
Still, if you consider a Catholic church to be a "Catholic institution," or a synagogue to be a "Jewish institution," Gingrich isn’t correct that the recent federal rule on contraceptives applies. Those nonprofit religious employers could choose whether or not they covered contraceptive services.
It’s pretty clear that Gingrich chose his words carefully here and Politifraud is muddying the waters. When I hear the words “Catholic institution” I think of everything Catholic that isn’t the church. I think of hospitals, soup kitchens, homeless shelters, adoption services, the Knights of Columbus, etc. Maybe it’s just because I’m likely more familiar with religious terminology than the (snark on) godless heathens (snark off) who populate many newsrooms, that I interpret it this way. But if the difference between a “True” or “Mostly True” ruling and a “Mostly False” ruling is over whether the word “institution” includes the church or not, then there’s way too much parsing going on.
Parsing words is nothing new for PolitiFact. But that's not the biggest flub Hoy spots:
In the video Politifact links to of Gingrich’s statement (provided by none other than Think Progress), Gingrich makes it clear that he is talking about the rule issued “last week.” The rule issued last week was the one regarding religious employers covering contraceptives in their health plans. Politifraud dishonestly expands that specific criticism of that specific rule into states can set their own benchmarks. No, they can’t. Not when it comes to the rule that came down “last week.” That rule says they MUST cover contraceptives.
Once again Hoy is spot on, though as usual our brief review doesn't do his work justice. Head over to Hoystory and read the whole thing.

Wednesday, November 9, 2011

Sublime Bloviations: "Grading PolitiFact: Alan Hays, proof of citizenship and voting"

Could the cure for world hunger be as simple as picking the low-hanging fruit from PolitiFact? Sometimes it seems that way.

PFB editor Bryan White was quick to spot the latest gaffe from our facticious friends. Check out PolitiFact Florida's rating of state Senator Alan Hays (R-Umatilla):

Image from PolitiFact.com

Now check out what Hays actually said:
"...I'm not aware of any proof of citizenship necessary before you register to vote."
Bryan notes:
If words matter then we should expect PolitiFact to note the difference between saying one does not know of a requirement and saying that no requirement exists.
If PolitiFact were just your average bucket of hackery, there wouldn't be much more to say other than that they distorted Hays' quote. But our site wasn't created because PolitiFact is average. They take distortion to new heights.

Bryan goes on to expose the flim-flammery of how they eventually found Hays Mostly False for something he didn't say (which seems to be a common theme for them). It's impressive to witness the amount of work it takes to get something so wrong.

And for those of you keeping track, it includes yet another example of PolitiFact citing non-partisan, objective Democrats as experts.

So make sure to head over to Sublime Bloviations, because this one is a must read.


Bryan adds: Not only did PolitiFact rate Hays on a statement he did not make, but its rating of what he didn't say is also wrong. PolitiFact continues to amaze.

Tuesday, September 27, 2011

Peach Pundit: "Former Senator Dan Moody Responds To PolitiFact"

Georgia's Peach Pundit, from Jan. 27:
On Tuesday, PolitiFact weighed in on a statement made by House Ethics Committee Chairman Joe Wilkinson. Politifact declared Wilkinson’s statement that Georgia’s Ethics laws are among the toughest in the nation is “false.”
Peach Pundit went on to publish an answering message from former Georgia state senator Dan Moody.  Moody makes a great point that PolitiFact's grading of Wilkinson falls into the realm of editorial judgment, deciding what criteria qualify as the proper ones to rank the strength of state ethics laws.

Wilkinson and Moody argued that disclosure laws serve as the foundation of state ethics law.  PolitiFact disagreed and pinned the "False" on Moody.

Hilariously, PolitiFact didn't even bother to quote Moody in its fact check.  It graded Moody based on a paraphrase appearing in the Atlanta Journal-Constitution, the PolitiFact affiliate in Georgia.

Didn't the reporter retain notes that would have allowed readers access to Moody's actual statement?

Unbelievable.
We always try to get the original statement in its full context rather than an edited form that appeared in news stories.
--"About PolitiFact"
Visit Peach Pundit to read Moody's riposte in full.


Edit 11/13/11: While doing some formatting work on the site after midnight, I accidentally spilled some water on Gizmo, and somehow this review re-posted with a new date. I "corrected" the date to 9-27-11, the day prior to when this article's "tweet" was sent, which is standard for us. Sorry for the confusion. -Jeff

Tuesday, August 16, 2011

The Weekly Standard: "PolitiFact’s Problem with Long Division"

Jeffrey H. Anderson may have a Ph.D., but it's not in mathematics. So when he's faced with the daunting task of taking one number and dividing by another number, he should just leave it to the rocket surgeons over at PolitiFact. This is especially important if old math doesn't produce PolitiFact's desired result.

Anderson sums up the numerical details while answering a PolitiFact analysis:
Last month, I wrote that President Obama’s own handpicked Council of Economic Advisors had released an estimate that the president’s economic “stimulus” had added or saved just one job for every $278,000 of taxpayer money spent. Obama’s economists said the “stimulus” had cost $666 billion to date and had added or saved 2.4 million jobs. $666 billion divided by 2.4 million is $278,000. Yet when Speaker John Boehner tweeted, “POTUS’ economists: ‘Stimulus’ Has Cost $278,000 per job,” PolitiFact Ohio rated [*] his tweet as “False.” PolitiFact Texas and PolitiFact Wisconsin have chimed in with identical scoring of similar statements.

So, what does PolitiFact have against long division?
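Nothing, apparently, since the figure is exactly what it looks like: one number divided by another. For anyone who wants to check it, here's a minimal back-of-the-envelope sketch, using only the CEA numbers quoted above (this is our illustration, not anyone's published calculation):

```python
# Reproducing Anderson's per-job figure from the numbers quoted above:
# $666 billion spent to date, 2.4 million jobs "added or saved" (low-end estimate).
stimulus_spent = 666_000_000_000       # dollars, per the CEA report
jobs_added_or_saved = 2_400_000        # low-end jobs estimate from the same report

cost_per_job = stimulus_spent / jobs_added_or_saved
print(f"${cost_per_job:,.0f} per job")  # prints $277,500 -- roughly $278,000
```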

Had Anderson been a reader of this blog, he would know that when numbers act in defiance of predetermined talking points, PolitiFact simply invents new standards to measure them against. And when it comes to inventing new standards, PolitiFact Ohio gets its cue straight from the top:

After Republicans began to circulate the blog item, White House spokesman Jay Carney said its conclusions were "based on partial information and simply false analysis." White House spokeswoman Liz Oxhorn issued a statement that noted the Recovery Act bolstered infrastructure, education, and industries "that are critical to America’s long-term success and an investment in the economic future of America’s working families."

The White House points out that Recovery Act dollars didn’t just fund salaries - as the blog item implies - it also funded numerous capital improvements and infrastructure projects.

Lumping all costs together and classifying it as salaries produces an inflated figure.

Of course, PolitiFact fails to offer evidence that Anderson classified all of the stimulus spending as salaries.

Here's where PolitiFact Ohio tags out, and PolitiFact Texas brings some new moves to the ring:

The White House points out that Recovery Act dollars didn’t just fund salaries — as the blog item implies. Lumping all stimulus costs together and classifying the total as salaries produces an inflated figure.

Oops! PF Ohio already said that. Let's try again:
We checked the White House report, and of the $666 billion stimulus total, 43 percent was spent on tax cuts for individuals and businesses; 19 percent went to state governments, primarily for education and Medicaid; and 13 percent paid for government benefits to individuals such as unemployment and food stamps.

The remainder, about 24 percent, was spent on projects such as infrastructure improvement, health information technology and research on renewable energy.

How would Anderson respond to this arithmetical assault?

There are a number of problems with these claims.

First, I never said that the $278,000 per job was all spent on salaries or wages. I would never attribute anything close to that degree of efficiency to the federal government.

I'm really starting to like this Anderson guy.

He continues:

As I wrote in my response to the White House, “This much is clear: Based on an estimate by Obama’s own economists, for every $278,000 in taxpayer-funded “stimulus” money that the Obama administration has spent — whatever it may have spent it on — the “stimulus” has added or saved just one job.” That remains an undeniable fact.

Anderson's article takes up another issue with the stimulus: not only is it incredibly expensive per job created, but, Anderson contends, it is actually causing jobs to be lost. PolitiFact, unsurprisingly, took issue with that claim as well. But Anderson effortlessly debunks PolitiFact's debunkery:

The entire response on this point from PolitiFact (both the Ohio and Texas versions) is to cite Moody’s chief economist Mark Zandi, who told the left-leaning website TPMDC that “the Weekly Standard misinterpreted that data.” That was good enough for PolitiFact. Never mind that Zandi is a Keynesian economist whose estimates of the stimulus’s likely effects were cited (see table 4) by Christina Romer, the first head of Obama’s Council of Economic Advisors, before the “stimulus” was even passed. In other words, Zandi said it would work, and now he says it worked.

In the end we're left with yet more examples of PolitiFact taking an objective, verifiable statement, constructing a straw man, and quickly demolishing its own creation. Anderson never implied that the $278,000 figure represented salaries per job. He was making a point about the overall expense of the stimulus and putting it into a context readers could easily digest. It's impressive (and disturbing) the lengths PolitiFact went to in order to distort and discard Anderson's valid premise.

Our goal at PolitiFact Bias is to consolidate and condense the best critiques of PolitiFact and to provide a collection point for those criticisms. It is neither our prerogative nor our desire to reprint full articles. This brief review doesn't do justice to Anderson's excellent and thorough work. As always, we encourage you to go to the source and read the whole thing.

On a side note, I'd like to add something I found amusing, repeated verbatim in both the PF Ohio and Texas editions:
Furthermore, the publication created its statistic with the report's low-end jobs estimate. Had it gone with the 3.6 million job figure at the top end of the range, it would have come up with a smaller $185,000 per job figure.
Are we to assume $185,000 per job created would bump the stimulus into the "successful" category?
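For what it's worth, that alternate figure is just the same long division run with the high-end jobs estimate. A quick sketch along the same lines as above, again using only the numbers PolitiFact cites:

```python
# The same division using the top-of-range jobs estimate PolitiFact prefers.
stimulus_spent = 666_000_000_000   # dollars
jobs_high_end = 3_600_000          # high-end jobs estimate

print(f"${stimulus_spent / jobs_high_end:,.0f} per job")  # prints $185,000
```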



Bryan adds:

I find it amusing that PolitiFact accepts Mark Zandi's opinions without comment yet spends much of another recent fact check attacking Florida governor Rick Scott's source because of its supposed partiality.

PolitiFact's work generally leaves the impression that it favors liberal sources in terms of both numbers and reputation.  One might say PolitiFact represents Groseclose liberal media syndrome on steroids.

Pending a rigorous evaluation, of course.


*In Anderson's article, the original "PolitiFact Ohio rated" hyperlink linked back to Anderson's own piece. I changed it to link to the PF Ohio rating of Boehner's tweet that was described. -Jeff