Wednesday, November 26, 2014

PolitiFact and the Forbidden Fact Check

Democrats have thrown around plenty of accusations of racism over the years. So why haven't fact checkers like PolitiFact stuck their non-partisan fact-checking noses into those claims? (Zebra Fact Check has done it.)

Buzzfeed's Andrew Kaczynski gives us the latest example spilling from the lips of Rep. Bennie Thompson (D-Miss.). Thompson was defending President Obama from criticism of his executive action on immigration:
“He’s not doing anything that the Bushes, the Reagans, the Clintons, and other presidents all the way back to Eisenhower, as it addressed immigration. So but again, this is just a reaction in Bennie Thompson’s words to a person of color being in the White House.”
Opposition to Obama's action on immigration is just a reaction to a person of color being in the White House, Thompson says.

True?

False?

Nothing to see here?


Thursday, November 20, 2014

Lost letters to PolitiFact Bias

We discovered a lost letter of sorts, intended as a response to our recent post "Fact-checking while blind, with PolitiMath."

Jon Honeycutt, posting to PolitiFact's Facebook page, wrote that he posted a comment to this site but it never appeared. I responded to Honeycutt on Facebook, quoting his criticism in my reply:
Jon Honeycutt (addressing "PolitiFact Bias") wrote:
Hmm, just looked into 'politifact bias', the very first article I read http://www.politifactbias.com/.../fact-checking-while... Claimed that politifact found a 20% difference in the congressional approval rating but still found the meme mostly true. But when you read the actual article they link to, politifact found about a 3% difference. Then when I tried to comment to correct it, my comment never appeared.
Jon, I'm responsible for the article you're talking about. You found no mistake. As I wrote, "percentage error calculations ours." That means PolitiFact didn't bother calculating the error by percentage. The 3 percent difference you're talking about is a difference in terms of percentage *points*. It's two different things. We at PolitiFact Bias are much better at those types of calculations than is PolitiFact. You were a bit careless with your interpretation. I have detected no sign of any attempt to comment on that article. Registration is required or else we get anonymous nonsense. I'd have been quite delighted to defend the article against your complaint.
To illustrate the point, consider a factual figure of 10 percent and a mistaken estimate of 15 percent. The difference between the two is 5 percentage points. But the percentage error is 50 percent. That's because the estimate exceeds the true figure by that percentage (15-10=5, 5/10=.5).

http://www.basic-mathematics.com/calculating-percent-error.html
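To make the arithmetic easy to check, here's a minimal sketch of the calculation in Python, using the hypothetical 10 percent and 15 percent figures from the example above (not PolitiFact's numbers):

    # Percentage-point difference vs. percentage error, per the example above
    true_value = 10.0   # the factual figure, in percent
    estimate = 15.0     # the mistaken estimate, in percent

    point_difference = estimate - true_value                    # 5 percentage points
    percent_error = (estimate - true_value) / true_value * 100  # 50 percent error

    print(point_difference, percent_error)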

Don't be shy, would-be critics! We're no less than 10 times better than is PolitiFact at responding to criticism, based on past performance. The comments section is open to those who register, and anyone who is a member of Facebook can post to our Facebook page.

Tuesday, November 11, 2014

Fact-checking while blind, with PolitiMath

One of the things we would predict from biased journalists is a forgiving eye for claims with which the journalist sympathizes.

Case in point?

A Nov. 11, 2014, fact check from PolitiFact's Louis Jacobson and intern Nai Issa gives a "True" rating to a Facebook meme claiming Congress has an 11 percent approval rating while 96.4 percent of incumbents successfully defended their seats in 2014.

PolitiFact found the claim about congressional approval was off by about 20 percent and the claim about the percentage of incumbents was off by a maximum of 1.5 percent (percentage error calculations ours). So, in terms of PolitiMath, the average error for the two claims was 10.75 percent, yet PolitiFact ruled the claim "True." The ruling means an average error of roughly 11 percent is insignificant in PolitiFact's sight.
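For readers who want to retrace the math, here's a rough sketch in Python. The inputs are the error figures cited above, and the simple averaging is our own PolitiMath shorthand, not anything PolitiFact did:

    # Our PolitiMath for the two claims in the Facebook meme
    approval_error = 20.0   # approximate percentage error on the congressional approval claim
    incumbent_error = 1.5   # maximum percentage error on the incumbency claim

    average_error = (approval_error + incumbent_error) / 2
    print(average_error)    # 10.75 percent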

Aside from the PolitiMath angle, we were intrigued by the precision of the Facebook meme. Why 96.4 percent and not an approximate number like 96 or 97? And why, given that PolitiFact often excoriates its subjects for faulty methods, wasn't PolitiFact curious about the fake precision of the meme?

Even if PolitiFact wasn't curious, we were. We looked at the picture conveying the meme and saw the explanation in the lower right-hand corner.

Red highlights scrawled by the PolitiFact Bias team. Image from PolitiFact.com

It reads: "Based on 420 incumbents who ran, 405 of which kept their seats in Congress."

PolitiFact counted 415 House and Senate incumbents, including three who lost primary elections. Not counting undecided races involving Democrats Mark Begich and Mary Landrieu, incumbents held 396 seats.
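Run the meme's numbers against PolitiFact's own count and the re-election rate comes out lower than the advertised 96.4 percent. A quick sketch, assuming the figures above:

    # Re-election rate: the meme's figures vs. PolitiFact's count
    meme_rate = 405 / 420 * 100          # 96.4 percent, as the meme calculates it
    politifact_rate = 396 / 415 * 100    # about 95.4 percent, excluding the undecided races

    print(round(meme_rate, 1), round(politifact_rate, 1))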

So the numbers are wrong, using PolitiFact's count as the standard of accuracy, but PolitiFact says the meme is true.

It was fact-checked, after all.

Nothing To See Here: Krugman plays lawyer

With a hat tip to Power Line blog and John Hinderaker, we present our latest "Nothing To See Here" moment where we highlight a fact check that PolitiFact may or may not notice.

Nobel Prize-winning economist and partisan hack Paul Krugman krugsplains the latest legal challenge to the Affordable Care Act and tells his readers why the challenge is ridiculous:
(N)ot only is it clear from everything else in the act that there was no intention to set such limits, you can ask the people who drafted the law what they intended, and it wasn’t what the plaintiffs claim.
We're not offering any hints why Krugman's claim interests conservatives.

Krugman's talking about the Halbig case, where a D.C. Circuit panel ruled that the language of the ACA specifies that state-established exchanges can receive federal subsidies but makes no such provision for exchanges set up by the federal government. The en banc D.C. court, not-at-all-packed-with-three-unfilibusterable-Obama-appointed-liberal-judges, later vacated the panel's ruling.

Nothing to see here?

Thursday, November 6, 2014

PunditFact PolitiFail on Ben Shapiro, with PolitiMath

On Nov. 6, 2014, PunditFact provided yet another example of why the various iterations of PolitiFact do not deserve serious consideration as fact checkers (we'll refer to PolitiFact writers as bloggers and the "fact check" stories as blogs from here on out as a considered display of disrespect).

PunditFact reviewed a claim by Truth Revolt's Ben Shapiro that a majority of Muslims are radical. PunditFact ruled Shapiro's claim "False" based on the idea that Shapiro's definition of "radical" and the numbers used to justify his claim were, according to PunditFact, "almost meaningless."

Lost on PunditFact was the inherent difficulty of ruling "False" something that's almost meaningless. Definite meanings lend themselves to verification or falsification. Fuzzy meanings defy those tests.

PunditFact's blog was filled with laughable errors, but we'll focus on just three for the sake of brevity.

First, PunditFact faults Shapiro for his broad definition of "radical," but Shapiro explains very clearly what he's up to in the video where he made the claim. There's no attempt to mislead the viewer and no excuse to misinterpret Shapiro's purpose.



Second, PunditFact engages in some misdirection of its own. In its blog, PunditFact reports how Muslims "favor sharia." Pew Research explains clearly what that means: favoring sharia means favoring sharia as official state law. PunditFact never mentions what Pew Research means by "favor sharia."

Do liberals think marrying church and state is radical? You betcha. Was PunditFact deliberately trying to downplay that angle? Or was the reporting just that bad? Either way, PunditFact does its readers a disservice.

Third, PunditFact fails to note that Shapiro could easily have increased the number of radicalized Muslims in his count. He drew his totals from a limited set of nations for which Pew Research had collected data. Shapiro points this out near the end of the video, but PunditFact either didn't notice or else determined its readers did not need to know.

PolitiMath


PunditFact used what it calls a "reasonable" method of counting radical Muslims to supposedly show how Shapiro engaged in cherry-picking. We've pointed out at least two ways PunditFact erred in its methods, but for the sake of PolitiMath we'll assume PunditFact created an apt comparison between its "reasonable" method and Shapiro's alleged cherry-picking.

Shapiro counted 680 million radical Muslims. PunditFact counted 181.8 million. We rounded both numbers off slightly.
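Here's a quick sketch of the percentage-error arithmetic, using those rounded figures:

    # Shapiro's count vs. PunditFact's "reasonable" count, both rounded
    shapiro_count = 680_000_000
    punditfact_count = 181_800_000

    exaggeration = (shapiro_count - punditfact_count) / punditfact_count * 100
    print(round(exaggeration))   # about 274 percent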

Taking PunditFact's 181.8 million as the baseline, Shapiro exaggerated the number of radical Muslims by 274 percent. That may seem like a big enough exaggeration to warrant a "False" rating. But it's easy to forget that the bloggers at PunditFact gave Cokie Roberts a "Half True" for a claim exaggerated by about 9,000 percent. PunditFact detected a valid underlying argument from Roberts. Apparently Ben Shapiro has no valid underlying argument that there are plenty of Muslims around who hold religious views that meet a broad definition of "radical."

Why?

Liberal bias is as likely an explanation as any.


Addendum:

In his own response to PunditFact, Shapiro makes some of the same points we make.

Friday, October 31, 2014

Update on Florida, shark attacks and voter fraud

Back in 2012, PolitiFact Florida created a hilariously unbalanced fact check of the claim that shark attacks are more common than cases of voter fraud in Florida.

The key to the fact check was PolitiFact Florida's decision to only consider a "case" of voter fraud that was literally a legal "case" deemed worthy of prosecution by the Florida Department of Law Enforcement. No, we're not kidding. That's actually what PolitiFact Florida did (and we highlighted the hilarity once already).

It recently came to our attention that researchers have looked into the question of whether illegal immigrants vote (illegally) in U.S. elections.

The Washington Post published a column by the researchers on Oct. 24. They said, in part:
How many non-citizens participate in U.S. elections? More than 14 percent of non-citizens in both the 2008 and 2010 samples indicated that they were registered to vote. Furthermore, some of these non-citizens voted. Our best guess, based upon extrapolations from the portion of the sample with a verified vote, is that 6.4 percent of non-citizens voted in 2008 and 2.2 percent of non-citizens voted in 2010.
We decided to develop a conservative version of this estimate and apply it to Florida.

As of 2010 an estimated 825,000 illegal immigrants lived in Florida. That was down from about 1.1 million in 2007, probably owing to the weak economy. We'll conservatively estimate that 600,000 illegal immigrants continue to live in Florida.

The researchers, as noted above, estimated that 2.2 percent of non-citizens voted in 2010. Again, that represented a decline from the estimate from 2008. We'll assume for our estimate that only 1 percent of non-citizens will vote in the 2014 election.

We have our numbers. We multiply 600,000 by 1 percent (0.01). The result is 6,000.
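For anyone who wants to adjust the assumptions, here's the back-of-the-envelope calculation as a sketch. The 600,000 population figure and the 1 percent voting rate are our deliberately conservative assumptions, as explained above:

    # Conservative estimate of non-citizen votes in Florida, 2014
    illegal_immigrants_in_florida = 600_000   # our conservative population assumption
    voting_rate = 0.01                        # our conservative 1 percent turnout assumption

    estimated_votes = illegal_immigrants_in_florida * voting_rate
    print(int(estimated_votes))               # 6,000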

PolitiFact counted 72 shark attacks in Florida from 2008 through 2011 (correction Nov. 1, 2014) and called it "Mostly True" that shark attacks outnumber cases of voter fraud.

If PolitiFact Florida is correct that shark attacks outnumber cases of voter fraud, we have a recommendation.

Stay out of the water.

Thursday, October 30, 2014

Larry Elder: 'PunditFact Lies Again'

Conservative radio show host Larry Elder has an Oct. 30 criticism of PunditFact posted at Townhall.com. It's definitely worth a read, and here's one of our favorite bits:
Since PunditFact kicks me for not using purchasing power parity, surely PunditFact's parent, Tampa Times, follows its own advice when writing about the size of a country's economy? Wrong.

A Tampa Times' 2012 story headlined "With Slow Growth, China Can't Prop Up the World Economy" called China "the world's second-largest economy," with not one word about per capita GDP or purchasing power parity. It also reprinted articles from other papers that discuss a country's gross GDP with no reference to purchasing power parity or per capita income.
Elder does a nice job of highlighting PolitiFact's consistency problem. PolitiFact often abandons normal standards of interpretation in its fact check work. Such fact checks amount to pedantry rather than journalistic research.

A liberal may trot out a misleading statistic and it will get a "Half True" or higher. A figure like Sarah Palin, on the other hand, uses CIA Factbook figures on military spending and receives a "Mostly False" rating.

Of course Elder makes the point in a fresh way by looking at the way PolitiFact's parent paper, the Tampa Bay Times, handles its own reporting. And the same principle applies to fact checks coming from PolitiFact. The fact checkers don't follow the standard for accuracy they apply to others.