Wednesday, December 24, 2014

PolitiFact editor explains the difference between "False" and "Pants on Fire"

During an interview for a "DeCodeDC" podcast, PolitiFact editor Angie Drobnic Holan explained to listeners the difference between the Truth-O-Meter ratings "False" and "Pants on Fire":

Our transcript of the relevant portion of the podcast follows, picking up with the host asking why President Barack Obama's denial of a change of position on immigration wasn't rated more harshly (bold emphasis added):
Why wouldn't that be "Pants on Fire," for example?

You know, that's an interesting question.

We have definitions for all of our ratings. The definition for "False" is the statement is not accurate. The definition for "Pants on Fire" is the statement is not accurate and makes a ridiculous claim. So, we have a vote by the editors and the line between "False" and "Pants on Fire" is just, you know, sometimes we decide one way and sometimes decide the other. And we totally understand when readers might disagree and say "You rated that 'Pants on Fire.' It should only be 'False.'" Or "You rated that 'False.' Why isn't it 'Pants on Fire'?" Those are the kinds of discussions we have every day ...
One branch of our research examines how PolitiFact differentially applies its "Pants on Fire" definition to false statements by the ideology of the subject. Holan's description accords with other statements from PolitiFact regarding the criteria used to distinguish between "False" and "Pants on Fire."

Taking PolitiFact at its word, we concluded that the line of demarcation between the two ratings is essentially subjective. Our data show that PolitiFact National is over 70 percent more likely to give a Republican's false statement a "Pants on Fire" rating than a Democrat's false statement.

We don't necessarily agree with PolitiFact's determinations of what is true or false, of course. What's important to our research is that the PolitiFact editors doing the voting believe it.

Holan's statement helps further confirm our hypothesis regarding the subjective line of demarcation between "False" and "Pants on Fire."

We'll soon publish an update of our research, covering 2014 and updating cumulative totals.

Monday, December 22, 2014

Mailbag meets windbag

PolitiFact published a "Lie of the Year" edition of its "Mailbag" feature on Dec. 22, 2014. Criticism by Hot Air's Noah Rothman drew immediate mention:
Noah C. Rothman at the conservative blog Hot Air took issue with our Lie of the Year choice.

"Some of these assertions (that collectively earned the Lie of the Year) were misleading, but PolitiFact’s central thesis – ‘when combined, the claims edged the nation toward panic’ – is unfalsifiable. In the absence of any questioning of the federal response to the Ebola epidemic, an unlikely prospect given the government’s poor performance, PolitiFact cannot prove there would have been no broader apprehension about the deadly African hemorrhagic fever. In fact, to make that claim would be laughable.

"In response to Ebola, Sierra Leone literally canceled Christmas. In Britain, returning health care workers who may have had contact with an Ebola patient will have a lonely holiday as well. They will be forced by government mandate to isolate themselves for the duration of the 21-day incubation period, despite the protestations of health care workers. If Ebola ‘panic’ exists, it is certainly not limited to America and is not the fault of exclusively conservative lawmakers. … PolitiFact embarrassed itself again today, but I guess that’s hardly news."
Rothman's main criticism targeted PolitiFact's ridiculous primary focus on George Will's true claim that Ebola could be transmitted through the air by a sneeze or a cough.

PolitiFact's guidelines:
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
Left out: the main part of Rothman's criticism.


Wednesday, December 17, 2014

Commentary: 'PolitiFact’s Ebola Distortions'

Seth Mandel has another deft dissection of PolitiFact's 2014 "Lie of the Year" up at Commentary:
Different statements being grouped together into one “lie”–especially when they’re not lies, even if they’re mistaken–will not do wonders for PolitiFact’s already rock-bottom credibility. But in fact it’s really worse than that. Here’s PolitiFact’s explanation for their choice of “Lie of the Year,” demonstrating beyond any semblance of a doubt that those who run PolitiFact don’t understand the concept around which they’ve supposedly built their business model:
Yet fear of the disease stretched to every corner of America this fall, stoked by exaggerated claims from politicians and pundits. They said Ebola was easy to catch, that illegal immigrants may be carrying the virus across the southern border, that it was all part of a government or corporate conspiracy.
The claims — all wrong — distorted the debate about a serious public health issue. Together, they earn our Lie of the Year for 2014.
You’ll notice right there that PolitiFact engages in its own bit of shameless dishonesty.
Mandel makes a great point about PolitiFact's careless reporting of its "Lie of the Year" selection, a point we're also poised to make by using even more blatant examples from Aaron Sharockman, the editor of PolitiFact's "PunditFact" venture.

It's bad enough to botch the fact-checking end of things. Telling people about the botched fact checks using a new layer of falsehoods and distortions intensifies the deceptive effects.

This is nothing out of the ordinary for PolitiFact.

We'll once again emphasize the point we made in our post yesterday: Naming more than one "Lie of the Year" has some utility when it comes to deflecting criticism. Even Mandel mumbled something about being fair to PolitiFact owing to the multiple winners before he eviscerated their inclusion of Will's claim about the airborne spread of Ebola.

Tuesday, December 16, 2014

Lost letters: Apology Tour edition

We love getting reader feedback on our posts. And we're much more likely to respond to criticism than is PolitiFact. We found this bit of feedback very recently posted to a message board after somebody mentioned how PolitiFact found claims about an Obama "apology tour" false:
How cute, some nut's blog claiming that conservative-leaning Politifact is biased in favor of Obama. And claims that Obama had an "apology tour" because...far-right nutcase Nile Gardiner of the Heritage Foundation says so
How odd for "conservative-leaning PolitiFact" to dismiss the one conservative view among the four experts whose views it solicited! Especially when Gardiner stood alone among the experts with his experience in international relations.

And, of course, our argument wasn't based solely on PolitiFact arbitrarily ignoring an expert opinion it solicited. We found descriptions of apologies in professional journals and applied the definitions we found to the idea of an Obama "apology tour." Because that's what nutcases do.

Why didn't PolitiFact bother reviewing professional literature in its effort to settle the fact of the matter? Probably the same reason it ignored Gardiner's professional opinion: PolitiFact is conservative-leaning.

It makes complete sense if reality is liberally biased.

Find the full explanation of why it's reasonable to call Obama's world tour an "apology tour" at Zebra Fact Check.


Jeff thinks it may not be obvious to every visitor that I'm kidding about PolitiFact's conservative bias. This note is intended as a corrective for any who might fall in that group.

2014: Another year, another laughable Lie of the Year

It's time for our annual criticism of PolitiFact's "Lie of the Year" award!

Leading off in a bipartisan spirit, let's note that every single one of PolitiFact's "Lie of the Year" award winners has contained some nugget of truth. This year, PolitiFact decisively elected to give the award to a collection of quite different claims, each having something to do with the Ebola virus.

There's nothing like the meat tenderizer approach when wielding the scalpel of truth.

My handicapping job on the Lie of the Year award was pretty close. But PolitiFact threw us another curve this year by choosing two entries from its list of candidates and then throwing a bunch of other somewhat related claims in for good measure.

No, we're not even kidding.

Let's let PolitiFact's editor, Angie Drobnic Holan, tell the story:
[F]ear of the disease stretched to every corner of America this fall, stoked by exaggerated claims from politicians and pundits. They said Ebola was easy to catch, that illegal immigrants may be carrying the virus across the southern border, that it was all part of a government or corporate conspiracy.

The claims -- all wrong -- distorted the debate about a serious public health issue. Together, they earn our Lie of the Year for 2014.
PolitiFact's lead example, that Ebola is easy to catch, matches closely with the entry I marked as the most likely candidate. It's also the candidate that Hot Air's Noah Rothman identified as the worst candidate:
[T]he most undeserving of entries upon which PolitiFact has asked their audience to vote is a claim attributed to the syndicated columnist George Will. That claim stems from an October 18 appearance on Fox News Sunday in which Will criticized the members of the Obama administration for their hubristic early statements assuring the country that the Ebola outbreak in Africa was contained to that continent.

“The problem is the original assumption, said with great certitude if not certainty, was that you need to have direct contact, meaning with bodily fluids from someone because it’s not airborne,” Will said of the deadly African hemorrhagic fever. “There are doctors who are saying that in a sneeze or some cough, some of the airborne particles can be infectious.”
Rothman's post at Hot Air makes essentially the same points we posted to PolitiFact's Facebook page back in October:

PolitiFact's ruling was an exercise in pedantry, extolling the epidemiological understanding of "airborne" over the common understanding. Perhaps Will's statement implicitly exaggerated the risk of contracting Ebola via airborne droplets, but his statement was literally true.

What else went into the winning "Lie of the Year" grab-bag?
  • Rand Paul's claim that Ebola is "incredibly contagious" (not a candidate)
  • Internet users claiming Obama would detain persons showing Ebola symptoms (not a candidate)
  • Bloggers claiming the virus was cooked up in a bioweapons lab (not a candidate)
  • Rep. Paul Broun's claim he'd encountered reports of Ebola carriers crossing the U.S.-Mexico border (a candidate!)
  • Sen. John McCain's claim that the Obama administration said there would be no U.S. outbreak of Ebola (not a candidate).
PolitiFact tosses in a few more claims later on, but you get the idea. PolitiFact crowned every blue-eyed girl Homecoming Queen in 2014, after naming only two statements "Lie of the Year" in 2013.

Why so many lies of the year in 2014?

Sunday, December 14, 2014

PolitiFact poised to pick 2014 "Lie of the Year"

It's time for PolitiFact's "Lie of the Year" nonsense again, where the supposedly nonpartisan fact checkers set aside objectivity even more blatantly than usual to offer their opinion on the year's most significant political falsehood.

We'll first note a change from years past, as PolitiFact abandons its traditional presentation of the candidates accompanied by their corresponding "Truth-O-Meter" graphic. Does that have something to do with criticisms over last year's deceitful presentation? One can only hope, but we're inclined to call it coincidence.

And now a bit of handicapping, using a 0-10 scale to rate the strength of the candidate:

Tuesday, December 9, 2014

PolitiFact's coin flips

We've often highlighted the apparent non-objective standards PolitiFact uses to justify its "Truth-O-Meter" ratings. John Kroll, a former staffer at the Cleveland Plain Dealer, PolitiFact's former partner on PolitiFact Ohio, said the choice between one rating and another was often difficult, with the decisions amounting to "coin flips" much of the time.

Heads the liberal wins, tails the Republican loses, at least in the following comparison of PolitiFact's ratings of Stephen Carter (liberal) and Ted Cruz (Republican).

I'll simply reproduce the email PolitiFact Bias editor Jeff D. sent me, reformatted to our standard PFB presentation:
Read the last three paragraphs of each one (emphasis mine):
Carter said that more than 70 percent of American adults have committed a crime that could lead to imprisonment. Based on a strictly technical reading of existing laws, the consensus among the legal experts we reached is that the number is reasonable. Way more than a majority of Americans have done something in their lives that runs afoul of some law that includes jail or prison time as a potential punishment.

That said, experts acknowledged that the likelihood of arrest, prosecution or imprisonment is exceedingly low for many of Americans’ "crimes." 

As such, we rate the claim Mostly True.

Cruz said that "Lorne Michaels could be put in jail under this amendment for making fun of any politician."

Most experts we talked to agreed that the proposed amendment’s language left open the door to that possibility. But many of those same experts emphasized that prosecuting, much less imprisoning, a comedian for purely political speech would run counter to centuries of American tradition, and would face many obstacles at a variety of government levels and run headlong into popular sentiment.

In the big picture, Cruz makes a persuasive case that it’s not a good idea to mess with the First Amendment. Still, his SNL scenario is far-fetched. The claim is partially accurate but leaves out important details, so we rate it Half True.

One wonders if PolitiFact sought the consensus of experts while considering whether blacks were convicted at a higher rate than whites in a recent fact check. Rudy Giuliani received a "False" rating since PolitiFact could locate no official statistics backing his claim. Looks like official statistics aren't really needed if experts think a claim seems reasonable.

Jeff Adds: 

Though former Cleveland Plain Dealer (PolitiFact Ohio) editor John Kroll admits PolitiFact's ratings often amount to coin flips, their other journalistic standards are applied with the same consistency. Take for instance their Dec. 2 dodge of the claim that Obama's executive order on immigration would create a $3,000 incentive to hire undocumented workers:
The claim isn’t so much inaccurate as it is speculative. For that reason, we won’t put this on our Truth-O-Meter.
Was there an unannounced policy change at PolitiFact? Aaron Sharockman was editor on both the Cruz and Carter checks. An unnamed editor signed off on the incentive claim, adding flip-flops to coin flips.

Here's a timeline:
  • On Sept. 11, 2014, there was enough established, tangible evidence for something that may or may not happen in the future to say Ted Cruz's prediction was half wrong.
  • On Dec. 2, 2014, PolitiFact suddenly had a policy against checking speculative claims, but felt compelled to spend an entire article Voxsplaining its work to readers.
  • On Dec. 8, 2014, PolitiFact was back in the future-checking business and found enough proof of something that hasn't actually happened yet to definitively determine a liberal's claim was Mostly True.
Remember also that Mitt Romney won the Lie of the Year award for a TV ad that implied Chrysler would be moving Jeep production to China. So in 2012, PolitiFact's most notable falsehood of the year was a campaign ad implying something would happen in the future.

But does Obama's executive order offer a certain economic incentive, as in the Dec. 2 article? Sorry, PolitiFact says it doesn't rate speculative claims.

Friday, December 5, 2014

Legal Insurrection: 'PolitiFact debunks obviously 'shopped photo (that no one believed was real)'

The title pretty much says it all.

We had plans to write this example, but we're pleased as punch to find our effort pre-empted by the good folks at Legal Insurrection:
Two different photographs, one very clearly photoshopped to provide some social commentary on a quickly spiraling situation. Predictably, the photograph enraged some, delighted others, but no one with two brain cells to rub together believed that the “rob a store” version of the photograph was real.

Politifact, however, dove in headfirst to provide us with an analysis no one asked for[.]
Visit Legal Insurrection for the entire article and to experience the visuals.

Friday, November 28, 2014

Thursday, November 27, 2014

Different strokes for different folks

Folk A: President Barack Obama

Obama claimed border crossings are at the lowest level since the 1970s.
We cannot directly check Obama's literal claim -- which would include the number of people who failed and succeeded in crossing the border -- because those statistics are not maintained by the federal government.
Truth-O-Meter rating: "Half True"

Folk B: Rudy Giuliani

Giuliani said blacks and whites are acquitted of murder at about the same rate.
We couldn't find any statistical evidence to support Giuliani’s claim, and experts said they weren't aware of any, either. We found some related data, but that data only serves to highlight some of the racial disproportion in the justice system.
We found "related data" PolitiFact apparently couldn't find:
Blacks charged with murder, rape and other major crimes are more likely to be acquitted by juries or freed because of a dismissal than white defendants, according to an analysis of Justice Department statistics.
Truth-O-Meter rating: "False"

Different strokes for different folks.

Wednesday, November 26, 2014

PolitiFact and the Forbidden Fact Check

Democrats have thrown around plenty of accusations of racism over the years. So why haven't fact checkers like PolitiFact stuck their non-partisan fact-checking noses into those claims (Zebra Fact Check has done it)?

Buzzfeed's Andrew Kaczynski gives us the latest example spilling from the lips of Rep. Bennie Thompson (D-Miss.). Thompson was defending President Obama from criticism of his executive action on immigration:
“He’s not doing anything that the Bushes, the Reagans, the Clintons, and other presidents all the way back to Eisenhower, as it addressed immigration. So but again, this is just a reaction in Bennie Thompson’s words to a person of color being in the White House.”
Opposition to Obama's action on immigration is just a reaction to a person of color being in the White House, Thompson says.



Nothing to see here?

Thursday, November 20, 2014

Lost letters to PolitiFact Bias

We discovered a lost letter of sorts intended in response to our recent post "Fact-checking while blind, with PolitiMath."

Jon Honeycutt, posting to PolitiFact's Facebook page, wrote that he posted a comment to this site but it never appeared. I posted to Facebook in response to Honeycutt, including the quotation of his criticism in my reply:
Jon Honeycutt (addressing "PolitiFact Bias") wrote:
Hmm, just looked into 'politifact bias', the very first article I read Claimed that politifact found a 20% difference in the congressional approval rating but still found the meme mostly true. But when you read the actual article they link to, politifact found about a 3% difference. Then when I tried to comment to correct it, my comment never appeared.
Jon, I'm responsible for the article you're talking about. You found no mistake. As I wrote, "percentage error calculations ours." That means PolitiFact didn't bother calculating the error by percentage. The 3 percent difference you're talking about is a difference in terms of percentage *points*. It's two different things. We at PolitiFact Bias are much better at those types of calculations than is PolitiFact. You were a bit careless with your interpretation. I have detected no sign of any attempt to comment on that article. Registration is required or else we get anonymous nonsense. I'd have been quite delighted to defend the article against your complaint.
To illustrate the point, consider a factual figure of 10 percent and a mistaken estimate of 15 percent. The difference between the two is 5 percentage points. But the percentage error is 50 percent. That's because the estimate exceeds the true figure by that percentage (15-10=5, 5/10=.5).
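The distinction can be sketched in a few lines of Python, using the hypothetical 10 percent and 15 percent figures from the example above:

```python
def point_difference(actual, estimate):
    """Difference between two percentages, in percentage points."""
    return estimate - actual

def percentage_error(actual, estimate):
    """Error of the estimate expressed as a percentage of the actual figure."""
    return (estimate - actual) / actual * 100

# Hypothetical figures: actual 10 percent, mistaken estimate 15 percent.
print(point_difference(10, 15))   # 5 percentage points
print(percentage_error(10, 15))   # 50.0 percent error
```

Same two numbers, two very different-sounding results, which is exactly the confusion at issue.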

Don't be shy, would-be critics! We're no less than 10 times better than is PolitiFact at responding to criticism, based on past performance. The comments section is open to those who register, and anyone who is a member of Facebook can post to our Facebook page.

Tuesday, November 11, 2014

Fact-checking while blind, with PolitiMath

One of the things we would predict from biased journalists is a forgiving eye for claims for which the journalist sympathizes.

Case in point?

A Nov. 11, 2014 fact check from PolitiFact's Louis Jacobson and intern Nai Issa gives a "True" rating to a Facebook meme claiming Congress has 11 percent approval while in 2014 96.4 percent of incumbents successfully defended their seats.

PolitiFact found the claim about congressional approval was off by about 20 percent and the one about the percentage of incumbents was off by a maximum of 1.5 percent (percentage error calculations ours). So, in terms of PolitiMath, the average error for the two claims was 10.75 percent, yet PolitiFact ruled the claim "True." The ruling means the roughly 11 percent average error is insignificant in PolitiFact's sight.
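A quick sketch of the arithmetic behind the 10.75 percent figure, using the two percentage errors stated above (our calculations, not PolitiFact's):

```python
# Percentage errors for the meme's two claims (percentage error calculations ours):
approval_error = 20.0    # congressional approval claim, off by about 20 percent
incumbency_error = 1.5   # incumbent-retention claim, off by at most 1.5 percent

# Simple average of the two errors -- the "PolitiMath" figure.
average_error = (approval_error + incumbency_error) / 2
print(average_error)  # 10.75
```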

Aside from the PolitiMath angle, we were intrigued by the precision of the Facebook meme. Why 96.4 percent and not an approximate number like 96 or 97? And why, given that PolitiFact often excoriates its subjects for faulty methods, wasn't PolitiFact curious about the fake precision of the meme?

Even if PolitiFact wasn't curious, we were. We looked at the picture conveying the meme and saw the explanation in the lower right-hand corner.

Red highlights scrawled by the PolitiFact Bias team. Image from

It reads: "Based on 420 incumbents who ran, 405 of which kept their seats in Congress."

PolitiFact counted 415 House and Senate incumbents, counting three who lost primary elections. Not counting undecided races involving Democrats Mark Begich and Mary Landrieu, incumbents held 396 seats.

So the numbers are wrong, using PolitiFact's count as the standard of accuracy, but PolitiFact says the meme is true.

It was fact-checked, after all.

Nothing To See Here: Krugman plays lawyer

With a hat tip to Power Line blog and John Hinderaker, we present our latest "Nothing To See Here" moment where we highlight a fact check that PolitiFact may or may not notice.

Nobel Prize-winning economist and partisan hack Paul Krugman krugsplains the latest legal challenge to the Affordable Care Act and tells his readers why the challenge is ridiculous:
(N)ot only is it clear from everything else in the act that there was no intention to set such limits, you can ask the people who drafted the law what they intended, and it wasn’t what the plaintiffs claim.
We're not offering any hints why Krugman's claim interests conservatives.

Krugman's talking about the Halbig case, where a D.C. Circuit panel ruled the language of the ACA specifies that state-established exchanges could receive federal subsidies but made no such provision for exchanges set up by the federal government. The en banc D.C. court, not-at-all-packed-with-three-unfilibusterable-Obama-appointed-liberal-judges, later reversed the panel's ruling.

Nothing to see here?

Thursday, November 6, 2014

PunditFact PolitiFail on Ben Shapiro, with PolitiMath

On Nov. 6, 2014 PunditFact provided yet another example of why the various iterations of PolitiFact do not deserve serious consideration as fact checkers (we'll refer to PolitiFact writers as bloggers and the "fact check" stories as blogs from here on out as a considered display of disrespect).

PunditFact reviewed a claim by Truth Revolt's Ben Shapiro that a majority of Muslims are radical. PunditFact ruled Shapiro's claim "False" based on the idea that Shapiro's definition of "radical" and the numbers used to justify his claim were, according to PunditFact, "almost meaningless."

Lost on PunditFact was the inherent difficulty of ruling "False" something that's almost meaningless. Definite meanings lend themselves to verification or falsification. Fuzzy meanings defy those tests.

PunditFact's blog was literally filled with laughable errors, but we'll just focus on three for the sake of brevity.

First, PunditFact faults Shapiro for his broad definition of "radical," but Shapiro explains very clearly what he's up to in the video where he made the claim. There's no attempt to mislead the viewer and no excuse to misinterpret Shapiro's purpose.

Second, PunditFact engages in its own misdirection of its readers. In PunditFact's blog, it reports how Muslims "favor sharia." Pew Research explains clearly what that means: Favoring sharia means favoring sharia as official state law. PunditFact never mentions what Pew Research means by "favor sharia."

Do liberals think marrying church and state is radical? You betcha. Was PunditFact deliberately trying to downplay that angle? Or was the reporting just that bad? Either way, PunditFact provides a disservice to its readers.

Third, PunditFact fails to note that Shapiro could easily have increased the number of radicalized Muslims in his count. He drew his totals from a limited set of nations for which Pew Research had collected data. Shapiro points this out near the end of the video, but PunditFact either didn't notice or else determined its readers did not need to know.


PunditFact used what it calls a "reasonable" method of counting radical Muslims to supposedly show how Shapiro engaged in cherry-picking. We've pointed out at least two ways PunditFact erred in its methods, but for the sake of PolitiMath we'll assume PunditFact created an apt comparison between its "reasonable" method and Shapiro's alleged cherry-picking.

Shapiro counted 680 million radical Muslims. PunditFact counted 181.8 million. We rounded both numbers off slightly.

Taking PunditFact's 181.8 million as the baseline, Shapiro exaggerated the number of radical Muslims by 274 percent. That may seem like a big enough exaggeration to warrant a "False" rating. But it's easy to forget that the bloggers at PunditFact gave Cokie Roberts a "Half True" for a claim exaggerated by about 9,000 percent. PunditFact detected a valid underlying argument from Roberts. Apparently Ben Shapiro has no valid underlying argument that there are plenty of Muslims around who hold religious views that meet a broad definition of "radical."
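The 274 percent figure follows from the standard percentage-error calculation, taking PunditFact's count as the baseline (both counts rounded, as noted above):

```python
shapiro_count = 680_000_000      # Shapiro's count of radical Muslims (rounded)
punditfact_count = 181_800_000   # PunditFact's "reasonable" count (rounded)

# Exaggeration as a percentage of PunditFact's baseline count.
exaggeration = (shapiro_count - punditfact_count) / punditfact_count * 100
print(round(exaggeration))  # 274
```

Swap in a claim exaggerated by about 9,000 percent, like the Cokie Roberts example, and the same formula makes the "Half True"/"False" contrast stark.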


Liberal bias is as likely an explanation as any.


Shapiro makes some of the same points we make with his own response to PunditFact.

Friday, October 31, 2014

Update on Florida, shark attacks and voter fraud

Back in 2012, PolitiFact Florida created a hilariously unbalanced fact check of the claim that shark attacks are more common than cases of voter fraud in Florida.

The key to the fact check was PolitiFact Florida's decision to only consider a "case" of voter fraud that was literally a legal "case" deemed worthy of prosecution by the Florida Department of Law Enforcement. No, we're not kidding. That's actually what PolitiFact Florida did (and we highlighted the hilarity once already).

It recently came to our attention that researchers have looked into the question of whether illegal immigrants vote (illegally) in U.S. elections.

The Washington Post published a column by the researchers on Oct. 24. They said, in part:
How many non-citizens participate in U.S. elections? More than 14 percent of non-citizens in both the 2008 and 2010 samples indicated that they were registered to vote. Furthermore, some of these non-citizens voted. Our best guess, based upon extrapolations from the portion of the sample with a verified vote, is that 6.4 percent of non-citizens voted in 2008 and 2.2 percent of non-citizens voted in 2010.
We decided to develop a conservative version of this estimate and apply it to Florida.

As of 2010 an estimated 825,000 illegal immigrants lived in Florida. That was down from about 1.1 million in 2007, probably owing to the weak economy. We'll conservatively estimate that 600,000 illegal immigrants continue to live in Florida.

The researchers, as noted above, estimated that 2.2 percent of non-citizens voted in 2010. Again, that represented a decline from the estimate from 2008. We'll assume for our estimate that only 1 percent of non-citizens will vote in the 2014 election.

We have our numbers. We multiply 600,000 by 1 percent (0.01). The result is 6,000.
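The estimate reduces to a single multiplication; here it is as a short Python sketch, using the deliberately conservative assumptions stated above:

```python
# Conservative assumptions from the post:
illegal_immigrants_in_fl = 600_000  # down from the 825,000 estimated in 2010
voting_rate = 0.01                  # 1 percent, below the 2.2 percent 2010 estimate

# Estimated non-citizen votes in the 2014 Florida election.
estimated_votes = illegal_immigrants_in_fl * voting_rate
print(int(estimated_votes))  # 6000
```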

PolitiFact counted 72 shark attacks in Florida from 2008 through 2011 (correction Nov. 1, 2014) and called it "Mostly True" that shark attacks outnumber cases of voter fraud.

If PolitiFact Florida is correct that shark attacks outnumber cases of voter fraud, we have a recommendation.

Stay out of the water.

Thursday, October 30, 2014

Larry Elder: 'PunditFact Lies Again'

Conservative radio show host Larry Elder has an Oct. 30 criticism of PunditFact posted online. It's definitely worth a read, and here's one of our favorite bits:
Since PunditFact kicks me for not using purchasing power parity, surely PunditFact's parent, Tampa Times, follows its own advice when writing about the size of a country's economy? Wrong.

A Tampa Times' 2012 story headlined "With Slow Growth, China Can't Prop Up the World Economy" called China "the world's second-largest economy," with not one word about per capita GDP or purchasing power parity. It also reprinted articles from other papers that discuss a country's gross GDP with no reference to purchasing power parity or per capita income.
Elder does a nice job of highlighting PolitiFact's consistency problem. PolitiFact often abandons normal standards of interpretation in its fact check work. Such fact checks amount to pedantry rather than journalistic research.

A liberal may trot out a misleading statistic and it will get a "Half True" or higher. A figure like Sarah Palin uses CIA Factbook rankings of military spending and receives a "Mostly False" rating.

Of course Elder makes the point in a fresh way by looking at the way PolitiFact's parent paper, the Tampa Bay Times, handles its own reporting. And the same principle applies to fact checks coming from PolitiFact. The fact checkers don't follow the standard for accuracy they apply to others.

Tuesday, October 28, 2014

PolitiFact supports dishonest Democratic Party "war on women" meme

There's plenty wrong with PolitiFact's Oct. 28 fact check of a Facebook meme, but we'll focus this time only on PolitiFact's implicit support of Democrats' baseless "war on women" political attack strategy regarding equal pay for equal work.

The meme graphic looks like this:

The question at the end, "Share if you miss the good old days!" implies a contrast between today's Republican Party platform and the platform in 1956. For us, the relevant point is No. 7, "Assure equal pay for equal work regardless of sex." Democrats have made the supposed Republican "war on women" a main point of their campaigns, partly by criticizing Republican opposition to the Democrats' proposed "Paycheck Fairness Act," which places new burdens on businesses defending against pay discrimination lawsuits. Equal pay for equal work, of course, already stands as the law of the land.

PolitiFact, on its main page of fact checks, elevates the fake contrast on the equal pay issue to first place:

Image from, appropriated under Fair Use.

"What a difference 58 years makes."

Back in '56, Republicans supported equal pay for equal work. But today, PolitiFact implies, Republicans no longer support equal pay for equal work.

It's horrible and biased reporting that has no place in a fact check.

Are things better in the text of the story? Not so much (emphasis in the original):
The 2012 platform doesn’t mention two of the meme’s seven items from 1956 -- unemployment benefits and equal pay for women.

The bottom line, then, is that on most of these issues, the GOP moved to the right between 1956 and 2012, though the degree of that shift has varied somewhat issue by issue.
PolitiFact finds no evidence of a shift on equal pay other than the issue's absence in the 2012 GOP platform. But that absence only makes sense given that federal equal pay laws went on the books in the 1960s. So there's no real evidence of any shift other than a fallacious argument from silence, yet we have "equal pay" as PolitiFact's lead example of the GOP's supposed move to the right.

The issue also gets top billing in PolitiFact's concluding paragraphs (bold emphasis added):
The meme says the 1956 Republican Party platform supported equal pay, the minimum wage, asylum for refugees, protections for unions and more.

That’s generally correct. However, it’s worth noting that other elements of the 1956 platform were considered conservative for that era. Also, some of the issues have changed considerably between 1956 and 2012, such as the shift from focusing on post-war refugees to focusing on illegal immigration.
There's really no need to mention anything in the fact check about federal equal pay legislation passing in the 1960s, right?

This type of journalism is exactly what one would predict if liberal bias affected the work of left-leaning journalists. This fact check serves as an embarrassment to the practice of journalism.

Tuesday, October 21, 2014

Fact-checker can't tell the difference between "trusted" and "trustworthy"?

PolitiFact just makes criticism too easy.

Today PolitiFact's PunditFact highlighted a Pew Research poll that found many people do not trust Rush Limbaugh as a news source. The fact checkers published their article under the headline "Pew study finds Rush Limbaugh least trustworthy news source."

No, we're not kidding.

As if botching the headline isn't bad enough, the article contains more of PolitiFact's deceptive framing:
PunditFact is tracking the accuracy of claims made on the five major networks using our network scorecards. By that measure, 61 percent of the claims fact-checked on Fox News have been rated Mostly False, False or Pants on Fire, the most among any of the major networks.
As PolitiFact's methodology for constructing its scorecards lacks any scientific basis, this information is a curiosity at best. But by juxtaposing it with the polling data from Pew Research, PunditFact gently nudges readers toward thinking, "Ah, yes, so the accuracy of PolitiFact's scorecards is supported by this polling!" That's bunk.

The scientific angle would come from an investigation into whether prior impressions of trustworthiness influence the ratings organizations like PolitiFact give to their subjects.

If PunditFact's article passes as responsible journalism then journalism is a total waste of time.

Sunday, October 12, 2014

Hot Air: 'It’s time to ask PolitiFact: What is the meaning of “is”?'

Dustin Siggins, writing for the Hot Air blog, gives us a PolitiFact twofer, swatting down two fact-check monstrosities from the left-leaning fact checker.

In 1998, then-President Bill Clinton told the American people that he hadn’t lied to a grand jury about his relationship with Monica Lewinsky because, on the day he said “there’s nothing going on between us,” nothing was going on.

While that line has been a joke among the American people ever since, it looks like PolitiFact took the lesson to heart in two recent fact-checks that include some Olympic-level rhetorical gymnastics.
Siggins goes on from there, dealing with recent fact checks of senators Ted Cruz (R-Texas) and Jeanne Shaheen (D-N.H.).

Click the link and read.

Also consider reviewing our highlights of a few other stories by Siggins.

Tuesday, September 30, 2014

Hot Air: 'Whiplash: Politifact absolves Democrat who repeated…Politifact’s lie of the year'

Rest assured, readers: There's no lack of PolitiFact blunders to write about, merely a lack of time to get to them all. For that reason, we're grateful that we're not the only ones doing the work of exposing the worst fact checker in the biz for what it iz.

Take it away, Guy Benson:
Politifact, the heavily left-leaning political fact-checking oufit, has truly outdone itself.  The organization crowned President Obama as the 2013 recipient of its annual “lie of the year” designation for his tireless efforts to mislead Americans about being able to keep their existing healthcare plans under Obamacare.  While richly deserved, the decision came as a bit of a surprise because Politifact had rated that exact claim as “half true” in 2012, and straight-up “true” in 2008 (apparently promises about non-existent bills can be deemed accurate).
And what did PolitiFact do to outdo itself? Republican senatorial candidate Ed Gillespie ran an ad attacking Democratic rival Mark Warner over Warner's pledge not to vote for a bill that would take away people's current health insurance plans.

PolitiFact Virginia, incredibly, ruled the ad "False."

Read Benson's piece at Hot Air in full for all the gory details. The article appropriately strikes down PolitiFact Virginia's thin justification for its ruling.

Also see our past assessment of PolitiFact's preposterous maneuvering on its editorial "Lie of the Year" proclamation from 2013.

Friday, September 26, 2014

Left Jab: Rachel Maddow and the presidential salute

MSNBC television host Rachel Maddow is probably the highest-profile critic of PolitiFact from the left. We've panned a number of her criticisms of PolitiFact as weak, but her Sept. 25 blog scores a palpable hit:
So, what I wrote is true. Punditfact found it to be true. They published an amusing presidential speechmaking anecdote that not only shows that it’s true, but makes you feel all warm-hearted about its being true.  And then gave their rating:  “Mostly False”.  Ta-daa!

Usually, I ignore these guys.  Yesterday, I made the mistake of responding to their letter, which I regret. Don’t feed the trolls.  They included a line from my response to them in their rating, which I realize now may create the impression that I participated in this enterprise as if it was a real thing.  It’s not a real thing: it’s Politifact.  It’s terrible.
We appreciate the absence in Maddow's post of any partisan whining. She just makes the justifiable assertion that PolitiFact does fact checking badly, and supports it with a pretty good anecdote. PolitiFact uses some sort of Associative Property of Quotations to blame Maddow for the questionable claim of a blogger who cited her book.

We'll repeat our position: PolitiFact treating liberals or Democrats unfairly is in no way inconsistent with our view that PolitiFact displays an anti-conservative and anti-Republican bias. Maddow has a legitimate example of PolitiFact treating her unfairly.

Saturday, September 20, 2014

PolitiMath at PolitiFact New Hampshire

PolitiFact New Hampshire provides us an example of PolitiMath with its Sept. 19, 2014 rating of Sen. Jeanne Shaheen's ad attacking Republican challenger Scott Brown.

The ad claims Brown ranked first in receiving donations from "Wall Street," to the tune of $5.3 million.

PolitiFact New Hampshire pegged what it reasonably counted as the "Wall Street" figure lower than $5.3 million:
Brown’s total haul from these six categories was about $4.2 million, or about one-fifth lower than what the ad said.
Note that national PolitiFact's Louis Jacobson, writing for PolitiFact New Hampshire, computes the difference between the two figures using the errant figure as the baseline. That method sends the message that Shaheen's ad was off on the number by about one-fifth, or in error by about 20 percent. Calculated properly, with the verified figure as the baseline, the figure in Shaheen's ad represents an exaggeration (that is, an error) of 26 percent.
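The baseline point is easy to verify. A minimal sketch in Python, using the two figures from the fact check:

```python
# Percent error depends on which figure serves as the baseline.
# Dividing the gap by the errant (claimed) figure, as PolitiFact NH did,
# yields "about one-fifth"; dividing by the verified figure yields
# the size of the exaggeration.

def percent_diff(gap, baseline):
    return 100 * gap / baseline

claimed = 5.3   # Shaheen ad's "Wall Street" total, in millions of dollars
verified = 4.2  # PolitiFact New Hampshire's total, in millions of dollars
gap = claimed - verified

politifact_method = percent_diff(gap, claimed)   # errant figure as baseline
exaggeration = percent_diff(gap, verified)       # verified figure as baseline

print(round(politifact_method, 1))  # 20.8 ("about one-fifth")
print(round(exaggeration, 1))       # 26.2
```

Dividing the same $1.1 million gap by the larger, claimed figure necessarily understates the size of the exaggeration.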

Curiously, PolitiFact doesn't bother reaching a conclusion on whether it's true that Brown ranks number one in terms of Wall Street giving. Jacobson says Brown led in four of the six categories he classified as Wall Street, but kept mum about where Brown ranked with the figures added up.

That makes it difficult to judge whether the 26 percent error implied by PolitiFact New Hampshire's $4.2 million figure accounts for the "Mostly True" rating all by itself.


For comparison, we have a rating of President Obama where the PolitiFact team made a similar mistake, calculating the error as a percentage of the errant number. In that case, Obama gave a figure that was off by 27 percent and received a rating of "Mostly True."


After a little searching we found a "Mostly True" rating of a conservative where the speaker used the wrong figure. Conservative pundit Bill Kristol said around 40 percent of union members voted for the Republican presidential candidate in 2008. The actual number was 37 percent. Kristol was off by about 8 percent. So "Mostly True."

Tuesday, September 16, 2014

PunditFact and the disingenuous disclaimer

Since PunditFact brings up its network scorecards yet again, it's worth repeating our observation that PunditFact speaks out of both sides of its mouth on the issue of scorecards/report cards.

This corner:
The network scorecards were designed to provide you a way to measure the relative truth of statements made on a particular network.
The other corner:
We avoid comparisons between the networks.
PunditFact breaks down its data to enable its readers to "measure the relative truth of statements made on a particular network."

At the same time, PunditFact tells its readers that it's not comparing the networks.

We're still trying to find a way to reconcile these claims without contradiction, or at least one that excuses PolitiFact from the charge of deliberately misleading its readers.

If the scorecards provide readers with a legitimate tool for judging the relative truth of statements made on a particular network, then why would PunditFact avoid comparisons between the networks? And how can PunditFact even claim to avoid making comparisons between the networks when its scorecards avowedly serve the purpose of leading readers to make those comparisons?

If this paradox doesn't indicate simple ignorance on PunditFact's part, it indicates a disturbingly disingenuous approach to its subject matter.

Saturday, September 6, 2014

Quality at PolitiFact New Hampshire

Let's just say consistency isn't PolitiFact's strong suit.

PolitiFact Wisconsin serves up more baloney on Obama cutting the deficit in half

PolitiFact defines its "True" rating as "The statement is accurate and there’s nothing significant missing."

Thus we greet with derisive laughter PolitiFact's Sept. 5, 2014 bestowal of a "True" rating on President Obama's declaration "We cut our deficits by more than half."

Curious who the "we" is that cut the deficits? PolitiFact Wisconsin is here to help:

"We" is "he": Obama (image from
"We" is "he." Obama did it. Obama cut the national deficit in half. The statement is accurate and there's nothing significant missing. Right?

Well, no. It's a load of hooey that PolitiFact has consistently helped Obama sell.

Here are some insignificant things PolitiFact Wisconsin found:
  1. "When you use Obama's methodology to compare the deficit Obama inherited -- the 2009 result minus the stimulus package to that in 2013 --  the drop in the deficit is slightly under half, at 48%."
  2.  "'The economic recovery, wind-down of stimulus, reversal of TARP/Fannie transactions, and lower interest rates are really what has caused our deficit to fall so much,' Goldwein told us. He mentioned cuts in discretionary spending as well."
  3.  "(Ellis) and Goldwein emphasized that while the deficit has been halved, it’s been halved from a skyscraping peak."
The second point is significant because TARP and other bailout spending was heavily focused on FY2009. As that money is repaid, it counts as lower spending ("negative spending"). The government has turned a profit on the TARP bailouts, so a fair bit of the "skyscraping peak" came right back to the government, making its later spending appear lower.

Here are some insignificant missing things PolitiFact Wisconsin didn't bother to mention:
  1. PolitiFact claims it takes credit and blame into account. But Obama carries little (if any) personal responsibility for reducing the deficit by half.
  2. Remember those obstructionist Republicans who block the Democrats' every attempt to pass jobs bills and keep critically important entitlement benefits flowing?
  3. PolitiFact's expert, Goldwein, mentioned cuts in discretionary spending. Way to go, Obama! Oh, wait, that was largely a result of the sequestration that the president blames on Republicans.
So, yeah, the deficit was cut in half. But given the nature of the FY2009 deficit spike, cutting the deficit in half by the end of Obama's first term in office should have been a layup. It wasn't a layup because the economy stayed bad. Democrats would have continued spending ("investing," in their parlance) in jobs and education if Republicans hadn't gained control of the House of Representatives in 2010.

Obama takes this set of circumstances, largely beyond his control, and fashions from it a feather for his own cap.

To PolitiFact Wisconsin, none of that is significant. What a joke.


For more on Obama's effect on the deficit and debt, see the following Zebra Fact Check articles: says federal spending has increased ‘far more slowly’ under Obama than under Bush

Is the federal deficit ‘falling at fastest rate in 60 years’?

Edit 11/08/2014 - Added link to original PFW article in second paragraph - Jeff

Sunshine State News: 'Charlie Can't Even Get a 'Pants on Fire' for the Phony Rothstein Connection?'

On the Sunshine State News website, opinion writer Nancy Smith asks what party-switching Democratic gubernatorial candidate Charlie Crist needs to do to earn a "Pants on Fire" rating from PolitiFact:
The Crist ad that claims  the governor "teamed up with a felon convicted of running a Ponzi scheme to smear Charlie Crist" is grudgingly rated "false." 

What does this new Democratic Party darling have to do to show he's not only rewriting his own life as he goes along, but he's making up Rick Scott's, too?
Smith's criticism of this Crist rating from PolitiFact Florida quickly widens in scope:
I often feel my temperature rise reading the Times-Herald because these folks never admit to bias and probably never will. But by no means am I the only one to cite PolitiFact for "ranting and rating." The Internet is alight with websites trying hard to tell the real story and keep the Tampa Bay newspaper (now the Times-Herald) honest.

Check out It claims to be the work of "independent bloggers who share a sense of outrage that PolitiFact often peddles outrageous slant as objective news."
We thank Smith for noticing our work, and Jeff appreciates the likely hat tip to his classic work "Ranting and Rating: Why PolitiFact's Numbers Don't Add Up."

Yes, it's hard, though not impossible, for a Democrat to earn a "Pants on Fire" rating from PolitiFact Florida. Charlie Crist got one when he was a Republican, back in 2009, and he got another the next year after switching to Independent. But since turning Democrat in 2012 the "False" rating Smith notes is Crist's worst run-in with Florida's journalistic arbiters of truth.

That's to be expected, of course, when combining an ideological slant with a subjective rating system. The difference between a "False" and a "Pants on Fire" on PolitiFact's scale consists of the judgment that the latter claims are "ridiculous."

How PolitiFact objectively measures ridiculousness is anyone's guess. And until PolitiFact announces its objective criteria for utilizing the rating, we'll go right on using it as one measure of PolitiFact's ideological bias.

Friday, September 5, 2014

PunditFact vs. Larry Elder

Have we mentioned lately that PolitiFact/PunditFact is a sorry excuse for a fact-checking organization?

A supposed fact check by PunditFact of conservative radio show host and CNN guest Larry Elder exemplifies the criticism.

Elder mixed it up with Marc Lamont Hill on CNN over Ferguson, Missouri, and associated issues. Elder insisted that a focus on supposed racism distracted from more pressing problems.

During the course of the discussion, Elder illustrated the progress of blacks in the United States by saying if American blacks were a country they'd be the 15th wealthiest.

PunditFact decided to rate that claim from Elder (PunditFact also rated a statement from Hill made during the exchange).

Two Faults?

PunditFact found experts who faulted Elder on two points.

First, the experts said, Elder used a figure for black income and compared it to GDP figures for a list of nations. That's an apples-to-oranges comparison. PunditFact did a poor job, however, of explaining that the difference between the two measures cuts in Elder's favor: comparing income to GDP relatively underestimates the collective economic power of American blacks. Elder, in effect, understated his own case.

Second, the experts said GDP was a poor measure for the average economic well-being of American blacks.

To which we reply: Who says Elder was trying to offer a measure of the average well-being of American blacks?

PunditFact is allowing the experts to play pundit. PunditFact should restrict its use of expert testimony to subjects where the expert possesses relevant expertise.

Elder has a valid point if he's talking about collective wealth. Black America would rank close to 15th or perhaps higher on an apples-to-apples comparison of collective personal wealth with other nations. PolitiFact normally looks kindly on inaccuracies that weaken the speaker's point. President Obama, for example, received a "Mostly True" for a substantial underestimation of the number of states with miscegenation laws on the books in 1961.

Elder, for his closer estimate, received a "False" rating.

Gobbledegook + PPP = Gobbledegook

After faulting Elder for using an apples-to-oranges comparison in his effort to downplay the importance of racism in America, PunditFact proceeds to use the same apples-to-oranges comparison in per-capita form, with purchasing power parity added, to rate Elder on a claim he didn't make:
We took Clementi's suggestion and divided the most recent estimate of black earned income, $1 trillion, by the Census Bureau estimate of 44.5 million African-Americans. That would create a per capita buying power of around $23,000 a year, which would translate to around 34th around the world on the International Monetary Fund’s list of countries by GDP per capita (between the Bahamas and Malta).

But $23,000 doesn’t go as far in the United States as, say, in Lithuania. Economists multiply GDP per capita by a conversion factor called purchasing power parity to account for the different values of goods and services in different countries. If you apply these factors, the African-American population’s $23,000 a year ranks 44th (between Portugal and Lithuania).
PunditFact takes a measure of American blacks' after-tax income and compares it on a per-capita basis to the per-capita GDP of a list of nations, adjusted by purchasing power parity. This move accomplishes two things. First, it ignores Elder's stated argument and replaces it with an argument chosen by PunditFact. Second, it sustains one of the two errors PunditFact charged to Elder. Combining these two mistakes results in a faux trouncing of the straw-man claim PunditFact attributes to Elder.

Racism "not a problem"?

In PunditFact's conclusion we find yet another problem.

Not content to rate its straw man version of Elder's claim "False," PunditFact also tries to create the impression Elder said racism isn't a problem in the United States.

Note PunditFact's conclusion:
Arguing that racism is "not a problem," Elder said that "if black America were a country, it would be the 15th wealthiest in the world."
 The problem? Elder didn't say racism isn't a problem. He said it isn't "a major problem."

We emailed the writer and editor of the PunditFact story, Derek Tsang and Aaron Sharockman, respectively, asking about the source of the quotation in PunditFact's conclusion. We'll update this item if we receive a reply.

Our view? A fact-checker should not misquote and misrepresent the persons it fact checks.

Yet another PolitiFact train wreck

We think it's a major problem when a mainstream fact checker alters the claims of the subjects it fact checks, applies standards inconsistently and uses inaccurate quotations.

Wednesday, September 3, 2014

Lairs of editors at PolitiFact Florida


With this item we go back a few years into PolitiFact Florida's history to remind us of the extent to which fact checkers will allow inaccuracies to go into print or onto the Internet.

PolitiFact will sometimes pick on the accuracy of somebody else's headline. PolitiFact Florida's headline on a fact check of Gov. Rick Scott is a doozy:

The problem is obvious just from the image capture, but we'll explain it just in case it's not obvious to somebody.

The headline says Scott claimed Thomas Jefferson called regulations an "endemic weakness," which, by standard norms of interpretation, means Scott is saying Jefferson used the term "endemic weakness" in describing government regulations.

But the section above quoting Scott doesn't jibe with PolitiFact Florida's headline. It's Scott using the term "endemic weakness" and saying Jefferson's complaints against the King of England as expressed in the Declaration of Independence serve as a vintage example.

PolitiFact Florida's fact check never makes the case that "endemic weakness" was attributed to Jefferson.

Making matters worse, Scott didn't even imply that Jefferson was complaining in principle that regulations make up an endemic weakness of government. He just used Jefferson's complaint as one example supporting his own point.

PolitiFact ignores Scott's real point and fact checks a tangent:
Scott quotes Jefferson correctly. But we wondered, what did Jefferson mean when he wrote that line? Did he think regulations are an endemic weakness in government?
Poor PolitiFact Florida! Its fact check assumes it matters whether Jefferson was saying regulations were an endemic weakness of government! It doesn't matter. Was Scott using a solid example of a proliferation of government regulations? Not really. But that doesn't mean Scott was saying Jefferson called government regulations an "endemic weakness."

How does this type of colossal blunder make it past layers of editors? Why is PolitiFact's cure for Scott's inaccuracy worse than the disease?

It makes us think perhaps PolitiFact uses lairs of editors instead of layers of editors. Perhaps layers of lairs of editors.

With a hat tip to the old "Batman" television series, our conception of a PolitiFact lair of editors:


Another year has come around, and finds us still blessed with peace and friendship abroad; law, order, and religion at home; good affection and harmony with our Indian neighbors; our burthens lightened, yet our income sufficient for the public wants, and the produce of the year great beyond example. These, fellow-citizens, are the circumstances under which we meet, and we remark with special satisfaction those which under the smiles of Providence result from the skill, industry, and order of our citizens, managing their own affairs in their own way and for their own use, unembarrassed by too much regulation, unoppressed by fiscal exactions.
 --Thomas Jefferson, Second Annual Message to Congress, December 15, 1802
The natural progress of things is for liberty to yeild, and government to gain ground.
 --Thomas Jefferson to Edward Carrington, Paris, May 27, 1788

Zebra Fact Check: The Importance of Interpretation

Bryan has written an article over at his fact checking site, Zebra Fact Check, that I think is worth highlighting here. Bryan discusses the importance and benefits of correctly interpreting a person's claims, and uses a recent PunditFact article as an example of botching this critical exercise:
PunditFact fails to apply one of the basic rules of interpretation, which is to interpret less clear passages by what more clear passages say. 
Bryan profiles PunditFact's article on Tom DeLay, who was discussing the indictment of Texas Gov. Rick Perry. In addition to pointing out PunditFact's shoddy journalism, Bryan spots several ways their apparent bias affected the fact check:
We think PunditFact’s faulty interpretation did much to color the results of the fact check. Though PolitiFact’s headline announced a check of DeLay’s claim of ties between McCrum and Democrats, it’s hard to reconcile PolitiFact’s confirmation of such ties with the “Mostly False” rating it gave DeLay. PunditFact affirms “weak ties” to Democrats. Weak ties are ties.
Even more damning evidence of PunditFact's liberal bent comes from its selective use of a CNN chyron placed next to its ubiquitous Truth-O-Meter graphic, allowing PolitiFact to reinforce the editorial slant of its fact check.

While I'm admittedly biased, Bryan's piece is well done and I recommend you read the whole thing.

Bryan didn't mention the main thing I noticed when I first read PunditFact's DeLay article, namely, the superfluous inclusion of a personal smear. PunditFact writer Linda Qiu offered up this paragraph in summation:
This record of bipartisanship is not unusual nor undesired in special prosecutors, said Wisenberg, who considers himself a conservative and opposes the prosecution against DeLay. He pointed out that special prosecutor Ken Starr, famous for investigating President Bill Clinton, also had ties to both parties, and DeLay did not oppose him.
We're not sure what probative value these two sentences have beyond suggesting DeLay is a hypocrite. Highlighting hypocrisy is a very persuasive argument, but it's also a fallacious one. Tom DeLay's support or opposition to Ken Starr bears no relevance to the factual accuracy of the current claim PunditFact is supposedly checking. It serves only to tarnish DeLay's character with readers. That's not fact checking, and that's not even editorializing. It's immature trolling.

Thursday, August 28, 2014

Layers of editors at PolitiFact Florida

We ran across some faulty after-publication editing at PolitiFact Florida while doing some research.

A picture tells the story (red ovals and yellow highlights added):

Why pick on PolitiFact Florida over something relatively minor? We think it's a healthy reminder that the people who work for PolitiFact are fallible. Seeing this type of mistake reminds us that we shouldn't be too surprised to see other types of mistakes in their work, including mistakes in the research and conclusions.

Tuesday, August 26, 2014

David Friedman: "Problems with 'Fact Checking'"

Academic David Friedman, on his personal blog "Ideas," fired off a Sunday salvo aimed at PunditFact, the wing of PolitiFact that fact checks pundits.

Friedman has plenty to say and says it well, so we'll just tease our readers with his first paragraph and provide our usual encouragement to click the link and read the whole thing:
I recently came across a link on Facebook to a claim that "Over Half of all Statements Made on Fox News are False," based on a story in the Tampa Bay Times' PunditFact. My first reaction was that I had no more reason to trust the Tampa Bay Times than to trust Fox, making the story pure partisan assertion, so I followed the link to see what support it offered. To their credit, they listed the statements on Fox that they based their claim on and provided the basis for their conclusions. But looking at them in detail, their evaluation was clearly biased in favor of what they wanted to believe.
We never tire of highlighting good criticism of PolitiFact. If only there were more hours in the day.

Marc Lamont Hill and PolitiMath

A PunditFact rating of CNN pundit Marc Lamont Hill drew our attention today for its PolitiMath content.

PolitiMath takes place when math calculations appear to bear on whether a figure receives one "Truth-O-Meter" rating instead of another.  In this case, Hill received a "False" rating for claiming an unarmed black person is  shot by a cop every 28 hours.

PunditFact found Hill reached his conclusion using the numbers for black persons armed or unarmed. The total figure for both was 313. The figure for unarmed black people was 136.  The calculation is uncomplicated. Taking the number of hours in a 365-day year, we get 8760. Divide 8760 by 313 and we get Hill's 28-hour figure. Use what PunditFact said was the correct figure and we get 64 hours (8760/136).

Hill exaggerated the frequency of an unarmed black person dying from a police shooting by about 130 percent (313/136, or about 2.3 times the actual frequency).
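The interval arithmetic is simple enough to sketch in Python, using PunditFact's figures:

```python
# The arithmetic behind the "every 28 hours" claim: hours in a
# 365-day year divided by the annual count of police shootings.
HOURS_PER_YEAR = 365 * 24  # 8760

all_black_victims = 313      # armed and unarmed (the baseline Hill used)
unarmed_black_victims = 136  # the figure PunditFact said fits Hill's claim

hill_interval = HOURS_PER_YEAR / all_black_victims           # ~28 hours
corrected_interval = HOURS_PER_YEAR / unarmed_black_victims  # ~64 hours

print(round(hill_interval))       # 28
print(round(corrected_interval))  # 64
```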

We're certainly not saying that PolitiFact is in any way consistent with how it classifies errors by percentage, but for comparison Florida lawmaker Will Weatherford made a statistical claim that was off by about 49 percent and received a "False" rating. Democrat Gerry Connolly, on the other hand, managed to wring a "Mostly False" rating out of a statistic that was off by about 45 percent.

Perhaps this is just science at work. Given reality's liberal bias, it may make sense to grade errors of the same percentage more harshly where they affect liberally-biased truths. Fact checkers could be guilty of false equivalency by acting as though the truth is simply objective.

Monday, August 25, 2014

Unearthing a truth PolitiFact buried


We've often reminded readers that we only scratch the surface of PolitiFact's mountain of journalistic malfeasance. Reminding us of that point, we have an item from way back on Sept. 15, 2009, when PolitiFact was still connected to Congressional Quarterly.

The issue? Economist Thomas Sowell wrote that President Obama let the economic stimulus bill sit on his desk for three days before signing it.

In a recent column in Investor's Business Daily, economist and political commentator Thomas Sowell said that President Barack Obama was trying to rush his health care bill through Congress. Sowell cited the quick passage of the economic stimulus bill in February 2009 as proof that Obama is too hasty in passing major legislation.

Sowell wrote that "the administration was successful in rushing a massive spending bill through Congress in just two days — after which it sat on the president's desk for three days, while he was away on vacation."
In truth, Sowell wasn't trying to prove Obama was too hasty in passing major legislation. He was arguing Obama passes legislation hastily when there's no apparent reason to rush the legislation.

"Allow five days of public comment before signing bills"

A PolitiFact item from earlier that same year, on January 29, 2009, helps provide some context for Sowell's complaint:
"Too often bills are rushed through Congress and to the president before the public has the opportunity to review them," Obama's campaign Web site states . "As president, Obama will not sign any nonemergency bill without giving the American public an opportunity to review and comment on the White House Web site for five days."

But the first bill Obama signed into law as president — the Lilly Ledbetter Fair Pay Act — got no such vetting.
So, Obama promised he would wait at least five days before signing non-emergency legislation.  The three-day wait for the stimulus bill implies it qualified as an emergency bill. But not such an emergency that Obama couldn't wait a few days before signing it.

The key to PolitiFact's argument? An ultra-literal reading of "sat on the president's desk." In PolitiFact's judgment, since the bill wasn't literally sitting on the desk awaiting the president's signature, the case won't support Sowell's point.

Sowell expresses his point:
The only reasonable alternative seems to be that he wanted to get this massive government takeover of medical care passed into law before the public understood what was in it.

Moreover, he wanted to get re-elected in 2012 before the public experienced what its actual consequences would be.

Unfortunately, this way of doing things is all too typical of the way this administration has acted on a wide range of issues.
The example using the stimulus bill followed. Sowell pointed out that spending from the stimulus bill took place over an extended period, making a joke of the notion that it was intended as a strong short-term Keynesian stimulus.

Sowell's point with his example remains: If the stimulus bill was an emergency, then why not sign it as soon as possible?

How did PolitiFact miss Sowell's point? Maybe PolitiFact wasn't interested in Sowell's point. How did PolitiFact miss the context of President Obama ignoring his broken pledge of transparency on legislative action? Maybe PolitiFact wasn't interested in that context.

Correction 8-25-2014:  Referred to the Affordable Care Act in one instance where the stimulus bill was intended.

Sunday, August 24, 2014

Missed opportunity?

Now they tell us.

PolitiFact's pundit-checking operation, PunditFact, reveals on Aug. 24, 2014 that the Obama administration originally planned to leave 10,000 U.S. troops in Iraq as part of a status of forces agreement.

Let's imagine an alternative universe in which PolitiFact could have done this fact check during a presidential election after President Obama appeared to deny he wanted a status of forces agreement that would have kept troops in Iraq.  Via NPR:
MR. ROMNEY: Excuse me. It's a geopolitical foe. And I said in the same — in the same paragraph, I said, and Iran is the greatest national security threat we face. Russia does continue to battle us in the U.N. time and time again. I have clear eyes on this. I'm not going to wear rose-colored glasses when it comes to Russia or Mr. Putin, and I'm certainly not going to say to him, I'll give you more flexibility after the election. After the election he'll get more backbone.
Number two, with regards to Iraq, you and I agreed, I believe, that there should have been a status of forces agreement. Did you —

PRESIDENT OBAMA: That's not true.

MR. ROMNEY: Oh, you didn't — you didn't want a status of forces agreement?

PRESIDENT OBAMA: No, but what I — what I would not have done is left 10,000 troops in Iraq that would tie us down. That certainly would not help us in the Middle East.

Just another in a long line of missed opportunities for PolitiFact.

PolitiFact offers Christmas miracle

We so often ding PolitiFact for its lack of consistency that we have to count it a miracle that PolitiFact ruled consistently on similar claims by Gov. Rick Perry of Texas and President Barack Obama.

Both men, one a Republican and one a Democrat, said Congress was on "vacation" when there were important things to do.

PolitiFact gave both the same rating, "Mostly False."

Knock us over with a feather.

Thursday, August 21, 2014

Nothing To See Here: CNN anchor buys automatic weapon (Updated)

With hat tips to Hot Air and Twitchy, here's a layup for PunditFact featuring CNN anchor Don Lemon:
Don Lemon: What do you mean anyone can’t wa— Listen, during the theater shooting in Colorado, I was able to go and buy an automatic weapon, and I, you know, have maybe shot a gun, three, four times in my life. I don’t even live in Colorado. I think most people can go out and buy an automatic weapon. I don’t understand your argument there.
In reality, it's not so easy to run out and buy an automatic weapon.

Nothing to see here? We'll see.

Update 8-26-2014

PunditFact published a rating of Lemon today, rating his statement "False."

Tuesday, August 19, 2014

Nothing To See Here: Grimes outs sexist McConnell?

Does Senate Minority Leader Mitch McConnell want women to receive less pay than men for the same work?  McConnell's Democratic Party challenger for his Senate seat, Alison Lundergan Grimes, wants people to think so:
"When you finally see Sen. McConnell and I on the same stage, you realize only one of us believes women deserve equal pay for equal work. If Mitch McConnell were a TV show, he'd be "Mad Men," treating women unfairly, stuck in 1968 and ending this season." -- Secretary of State Alison Lundergan Grimes, Democrat
Maybe PolitiFact would be interested in this?

But hasn't PolitiFact been tough enough on Grimes already? And this item was from Aug. 4.  Surely if it were important, PolitiFact would already have rated it.

Nothing to see here.

Monday, August 18, 2014

PunditFact gives Tucker Carlson biased fact check

PolitiFact's PunditFact, the fact-checking arm that rates the statements of pundits, gave Fox News' Tucker Carlson a hilariously slanted "Pants on Fire" rating last week. PunditFact's August 15, 2014 fact check of Carlson was chock full of the baloney we're used to finding in PolitiFact's fact checks.

Carlson objected to the tone of a Fox News segment playing up the dangers of unsecured firearms.  Carlson tried to add perspective to the story by claiming accidental bathtub drownings claimed the lives of far more children last year than did accidental gun deaths.

That looked like a job for ... PunditFact!

The PunditFact writer, Jon Greenberg, used search tools at the Centers for Disease Control website to research the numbers.  The CDC tracks the causes of death logged on death certificates.  That information is logged using ICD-9 or ICD-10 codes like those hospitals use in medical billing.

Greenberg flubbed his work in a number of ways.

First, Greenberg used 2011 to check Carlson's claim about 2013.  It's fair to ding Carlson for trotting out an unverified claim, but the fact of the matter is that 2011 is not 2013.  Any fact checker should know this.  PunditFact rated Carlson "Pants on Fire" on a fact that PunditFact could not check for lack of data.

Second, Greenberg assumed 2011 was a suitably representative year to substitute for 2013.  That's not fact-checking.  It's possible to estimate whether Carlson was in the ballpark with his claim by looking at trends for the two types of accidental death.  But 2011 isn't a trend.  It's just one year.

Third, Greenberg used deaths under three ICD-10 codes to represent accidental gun deaths.  The third code accounted for the largest number, and it's a catch-all that includes deaths from flare guns and airguns.

Fourth, Greenberg excluded from the other side of the ledger accidental bathtub drownings that also included a fall into the bathtub.  Yes, there's an ICD-10 code just for deaths caused by a fall into a bathtub with a subsequent drowning.

None of these problems should occur in a competent fact check, at least not unexplained.  Yet Greenberg doesn't explain how any one of these problems affects PunditFact's ability to verify Carlson's claim.

The bogus chart for "Drowned in a bathtub" vs. "Accidental gunfire"

Here's PunditFact's bogus chart:

We looked at the numbers for ages 0-14.  Adding the deaths from bathtub drownings associated with a fall, the total for the first column on PunditFact's chart rises to 95.  Additional deaths from accidental bathtub drownings may have ended up under another ICD-10 catch-all code: "Unspecified cause of accidental drowning and submersion."  So the number may be higher than 95.

Also for the 0-14 age group, we checked the numbers for accidental gunfire deaths.  Of the 74 deaths in that age range in 2011, 56 were documented using the catch-all code for "Accidental discharge and malfunction from other and unspecified firearms and guns."  That code is one of three alternatives.  The first code, W32, specifies handguns.  The second code, W33, specifies rifles and shotguns.  The third code, W34, covers everything else, including BB guns and paintball guns.

We don't know how many W34 deaths were caused by guns in the commonly understood sense of the term.  Neither does PunditFact.
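The arithmetic behind our objection is straightforward. Using only the 2011 figures cited above (74 accidental gun deaths for ages 0-14, 56 of them under the catch-all W34 code, against 95 bathtub drownings once fall-related drownings are added), a quick sketch shows how much of PunditFact's gun total rests on the ambiguous code. This is an illustration of the numbers in this post, not PunditFact's own calculation:

```python
# Accidental deaths, ages 0-14, 2011 (figures cited in this post)
gun_deaths_total = 74    # W32 + W33 + W34 combined
gun_deaths_w34 = 56      # catch-all code: "other and unspecified firearms
                         # and guns" -- includes BB, paintball and flare guns
bathtub_drownings = 95   # includes drownings following a fall into the tub

# Deaths coded specifically to handguns (W32) or rifles/shotguns (W33)
gun_deaths_specific = gun_deaths_total - gun_deaths_w34

print(f"Specifically coded firearm deaths: {gun_deaths_specific}")  # 18
print(f"Share under the catch-all W34 code: "
      f"{gun_deaths_w34 / gun_deaths_total:.0%}")  # 76%
print(f"Bathtub drownings exceed even the full gun total: "
      f"{bathtub_drownings > gun_deaths_total}")  # True
```

In other words, roughly three-quarters of the "accidental gunfire" column could include non-firearm devices, while the drowning column was trimmed.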

One might argue that it makes sense to lump in deaths caused by every type of gun, including flare guns and paintball guns.  But if that's the case, shouldn't bathtub drowning get the same broad treatment?  Isn't a swimming pool essentially a large bathtub?

We're not saying Carlson was right that many more children died in 2013 from accidentally drowning in a bathtub than from accidental gunshots.  But in the 0-14 age range more children died from accidental bathtub drowning than from accidental gunshots (W34's included) each year we checked, from 2009-2011.

That's enough to show there's something to Carlson's point about perspective.  And it's enough to show that PunditFact stacked the deck against Carlson.

The Howler

We can't end this review without mentioning one particularly hilarious line from PunditFact's fact check.  PunditFact tried to artificially narrow the definition of "child" to make Carlson's claim look worse (bold emphasis added):
Carlson didn’t say what age children he had in mind, but in the context of the story he was responding to — and his rhetorical question about something "I want to know before I let my child go over to your house" — this is not about children under 4 years old. Parents don't let toddlers "go over" to a friend’s house.
"Go over" gets the scare quotes, we suppose, to show that the term refers only to children sufficiently autonomous to safely go unattended to a friend's house.  That's assuming the neighbor isn't in the adjacent duplex apartment.  More importantly, it assumes no danger to toddlers posed by older children, such as babysitters, playing with unsecured guns.

How ridiculous.

That's PunditFact/PolitiFact fact checking for you.