Monday, May 2, 2016

A "key bit of data" sometimes

We say PolitiFact uses inconsistent methods in fact-checking political claims. An April 27, 2016 fact check of the "New Day for America" Super PAC serves as a recent example, paired with a comparable case from the 2012 presidential election.

The 2016 claim from "New Day for America" charged Democratic presidential candidate Hillary Rodham Clinton with promising to raise taxes by $1 trillion. PolitiFact's conclusion said the number was about right but charged the Super PAC with ignoring the 10-year time frame over which the tax increase would produce its revenue. PolitiFact added that Clinton would not even be in office for the entire 10 years. PolitiFact also claimed it was important who paid the taxes, calling that "another key bit of data." "New Day for America" ignored two key bits of data, so "Half True":
[Clinton's] plan does, in fact, call for raising a trillion dollars, but it would do so over 10 years — longer than she could serve as president, even if she were re-elected. So if she brought in roughly $100 billion per year, even a two-term Clinton administration couldn't fulfill a promise to bring the total to $1 trillion.

Also, the statement ignores another key bit of data — that the money would be raised by tax changes targeted to the richest Americans, a group that has seen its top tax rate drop dramatically since the 1950s and early 1960s, when the marginal tax rate was over 90 percent.

Because the statement is partially accurate but leaves out important details or takes things out of context, we rate it Half True.
We couldn't remember PolitiFact using this approach to a tax proposal before, so we looked for a comparable example from PolitiFact's past. It turned out that President Obama's re-election campaign said in 2012 that challenger Mitt Romney's tax plan would "add trillions to the deficit." PolitiFact examined that claim along with the associated claim that President Obama's tax plan would cut deficits by $4 trillion. In both cases PolitiFact found that the numbers were inflated (cherry-picking high estimates). The details of Romney's tax plan were unknown, so the accusations about his tax plan had little foundation in fact.

More importantly, the effects of the tax plans were estimated over a 10-year period in both cases. Neither Romney nor Obama would serve as president over the full 10 years. The Obama ad said Romney would cut taxes for millionaires, but it ignored other tax cuts and tax increases in the plan. Good enough for PolitiFact? The Obama ad received a "Half True" rating, the same as "New Day for America," even though the Obama ad committed the same errors and more.

We encourage readers to look at PolitiFact's rating of the Obama campaign, especially the summary paragraphs, to see how PolitiFact largely forgave the fundamental inaccuracies in the ad. Compare it to the summary PolitiFact offered for the ad coming from "New Day for America."

PolitiFact simply does not use the same standards for the two ads.

In Fact-Checking This Is a Big Deal

Why do we nitpick PolitiFact over this kind of thing? Both "Half True" ratings are justified on reasonable grounds, aren't they?

No. Full stop. Good fact-checking requires a consistent approach to the issues. PolitiFact repeatedly fails to achieve consistency.

PolitiFact's rating system has always been a sham, because PolitiFact follows no rigid definition for its ratings. Sure, PolitiFact usually attempts justifications, but they are all over the map. For example, PolitiFact once gave Mitt Romney a "Half True" rating because the problems with his claim matched PolitiFact's definition of "Mostly True." Seriously, that's how PolitiFact justified its rating of Romney. PolitiFact has let the error stand for years.

Critics left and right have panned PolitiFact's rating system. Defenders often claim that the ratings aren't important. The important thing, they say, is the detailed information we get in the fact check. But if PolitiFact varies in its approach, finding a problem in one instance and overlooking that same problem in another instance, it gives its readers poor fact-checking.

Of course the problem is worse when the inconsistencies favor one political leaning over another.


In defending Clinton from the "New Day for America" ad, PolitiFact pointed out that her tax plan tries to increase taxes on the rich, ostensibly to help restore the balance in effect in the 1950s:
(T)he statement ignores another key bit of data — that the money would be raised by tax changes targeted to the richest Americans, a group that has seen its top tax rate drop dramatically since the 1950s and early 1960s, when the marginal tax rate was over 90 percent.
PolitiFact evidently did not bother to check its facts on this point. PolitiFact writer C. Eugene Emery Jr. supported his statement by providing links to raw data showing top marginal income tax rates. But changes to tax law affecting deductions have kept the effective tax rates on rich people from changing much. Few taxpayers in the 1950s actually paid the top marginal income tax rate. Arpit Gupta, in a paper for the Manhattan Institute, pointed out that the popular liberal economists Thomas Piketty and Emmanuel Saez admitted as much:
The reduction in top marginal individual income tax rates has contributed only marginally to the decline of progressivity of the federal tax system, because with various deductions and exemptions, along with favored treatment for capital gains, the average tax rate paid by those with very high income levels has changed much less over time than the top marginal rates.
Why is it okay to call out others for leaving out key information while at the same time omitting key information in one's own reporting? Ask PolitiFact. But don't expect an answer.

Tuesday, April 26, 2016

NTSH: 95 percent of Clinton's claims "Mostly True" or better?

We tip our hats to Power Line blog for making it easy to add a "Nothing To See Here" item.

With "Nothing To See Here" we take note of political statements deserving of a fact check. But we tend to doubt one will occur. Power Line blog noted a problem with a Nicholas Kristof column in The New York Times. Kristof, a liberal columnist, wrote a column portraying Clinton, on the strength of her PolitiFact report card, as head and shoulders above the competition. But there was a problem: Kristof got the key numbers wrong.

Power Line's Steven Hayward compared the original version of Kristof's column with the Times' later correction of the article.
At the bottom of the column is this short correction:
Correction: April 23, 2016: An earlier version of this column misstated some of the percentages of true statements as judged by PolitiFact.
So how did the original version of Kristof’s column read? Here:
PolitiFact, the Pulitzer Prize winning fact checking site, calculates that of the Clinton statements it has examined, 95 percent are either true or mostly true.

That’s more than twice as high as the percentages for any of the other candidates, with 46 percent for Bernie Sanders’s, 12 percent for Trump’s, 23 percent for Ted Cruz’s and 33 percent for John Kasich’s. Here we have a rare metric of integrity among candidates, and it suggest that contrary to popular impressions, Clinton is far more honest and trustworthy than her peers.
So we go from 95 percent true to 50 percent true and switch out “far more honest and trustworthy than her peers” for “relatively honest by politician standards,” with the blink of a mere correction.
We've repeatedly noted PolitiFact's weak-to-nonexistent efforts to police the misuse of its "report card" data. If PunditFact and PolitiFact let Kristof slide on this one, what else are they willing to overlook?

Wednesday, April 13, 2016

A reader's take on our "Pants on Fire" research

Elizabeth MacInnis wrote:
Your premise here is that if there are an equal amount of fact checks and Republicans lie more, then there must be fact-checker bias. It couldn't possibly be that Republicans actually do lie more. That's an unobjective analysis. In my opinion, watching both sides closely in each election, Republicans do lie more, but not for the reason that you think. Democrats typically run on a platform of hope and ideas. It's much more subjective. Republicans tend to run on fear and attacks against other candidates (whether you like this or not, it's true). An attack on someone's record is more likely to be proven true or false. Watching all the debates, I consistently hear Republicans say things like "we've had a job killing president," when in reality (as of now) we've had 72 months of private-sector job growth, a record. With consistent, clearly false statements like this, Republicans are doing it to themselves. I believe we need a balanced system with both parties, but in many Americans' opinions (including mine), Republicans have become more and more outrageous in recent years (and no, that is not to say that Democrats are clear of any wrongdoing). The only way, in my opinion, to tame this is to hold them accountable - in fact-checking and votes. I hope both parties learn this lesson before their groups become too fractured.
Point by point:

"Your premise here is that if there are an equal amount of fact checks and Republicans lie more, then there must be fact-checker bias."

No, that's not our premise. Our premise (the one you seem to be talking about) is that if PolitiFact chose its stories only based on its editorial sense of whether the claim is true, then the results would be proportional. So Republicans could have five times more "false" ratings than Democrats but the distribution curve for both parties should appear similar. We don't think PolitiFact uses only its editorial sense in choosing stories.

The true central premise of the "Pants on Fire" research is that PolitiFact offers no objective means of distinguishing between its ratings of "False" and "Pants on Fire."
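The proportionality idea behind that premise can be sketched with a small calculation. The counts below are made up purely for illustration (they are not our actual research tallies): for each party, take the share of its false-category ratings that PolitiFact escalates to "Pants on Fire." If PolitiFact applied a consistent standard, those shares ought to look roughly similar even when the raw counts differ.

```python
# Hypothetical rating counts (not real PolitiFact tallies), used only to
# illustrate the comparison: the share of false-category ratings that
# get escalated to "Pants on Fire," computed per party.
counts = {
    "Democrats":   {"False": 80,  "Pants on Fire": 20},
    "Republicans": {"False": 120, "Pants on Fire": 60},
}

for party, c in counts.items():
    total_false = c["False"] + c["Pants on Fire"]
    share = c["Pants on Fire"] / total_false
    print(f"{party}: {share:.0%} of false-category ratings were 'Pants on Fire'")
```

With these invented numbers, Republicans' false-category claims get the harsher "Pants on Fire" label far more often (33 percent versus 20 percent), and that is the kind of gap that, absent an objective standard for the rating, suggests something other than the facts is driving the choice.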

"It couldn't possibly be that Republicans actually do lie more. That's an unobjective analysis."

Can anyone explain to me how the "Pants on Fire" rating, with no apparent objective measure undergirding its use, contributes any empirical data toward the notion that Republicans lie more? Isn't that notion a sham?

"In my opinion, watching both sides closely in each election, Republicans do lie more, but not for the reason that you think. Democrats typically run on a platform of hope and ideas. It's much more subjective. Republicans tend to run on fear and attacks against other candidates (whether you like this or not, it's true)."

Summing up, then, MacInnis' opinion is true whether I like it or not? Is there any solid evidence supporting that opinion? Any at all?

(skipping some opinion that doesn't interest me so much)

"Watching all the debates, I consistently hear Republicans say things like "we've had a job killing president," when in reality (as of now) we've had 72 months of private-sector job growth, a record."

Remember when the Obama administration was lauding the helpful effects of the $900 billion stimulus bill? Employment was falling, but PolitiFact accepted arguments that unemployment would have been even worse without the stimulus bill. Why does PolitiFact give no consideration at all to a parallel principle with respect to claims of job-killing? That's not consistent, is it? We shouldn't make the mistake of thinking that inconsistent methods lead to good fact-checking, should we?

Elizabeth MacInnis, PolitiFact does fact-checking poorly. Don't be fooled.

PolitiFact's subjective "Pants on Fire" ratings tell us about PolitiFact, not about the entities receiving the subjective ratings.


Friday, April 8, 2016

PolitiFact: Lightning strikes still make a better comparison than alligator attacks

If you're tempted to illustrate the rarity of something by comparing it to something from real life, PolitiFact has a message for you: Take lightning strikes over alligator attacks.

On the issue of voter fraud, PolitiFact has given a number of "True" ratings to persons saying lightning strikes outnumber cases (that's cases considered for prosecution, mind you, not "cases" in the sense of "instances") of in-person voter impersonation.

PolitiFact's comparison is rigged by its narrow count of "cases," of course, but that's another story.

PolitiFact Wisconsin recycled the fact check with an April 7, 2016 item. The item offers no hint of criticism of the comparison of voter fraud to lightning strikes.

PolitiFact Wisconsin's "True" rating perpetuates the inconsistency we noted in a PolitiFact Florida fact check from 2015. The claim there was that a Floridian is more likely to be attacked by an alligator than by a holder of a concealed-carry gun permit. PolitiFact found the evidence broadly supported the claim but ruled it "Mostly False" on the grounds that comparing alligator attacks to attacks with a firearm doesn't make sense:
(T)hese statistics, imperfect as they are, do support the notion that both kinds of attacks are uncommon. Whether this is a valid argument in favor of the bill is in the eye of the beholder. We find the statement has an element of truth but ignores other information that would give a different impression. So we rate it Mostly False.
There's an item at Zebra Fact Check criticizing PolitiFact Florida's ruling in detail.

It's worth noting that PolitiFact Wisconsin's evidence on voter fraud shared essentially the same weakness (no dependable count, which creates doubt):
It’s fair to say, however, that impersonation cases can be hard to count in that they are hard to prove -- particularly when no photo ID requirement is in place and a voter can cast a ballot simply by stating the name of a registered voter.

So the number of cases of in-person fraud by impersonation may be higher than that cited by Levitt, but no independent source suggests it is higher than the number of lightning strikes.


We rate Pocan’s statement True.
In both cases, then, PolitiFact doesn't really have the facts to fit the claim. But the liberal gets a "True" and the conservative gets a "Mostly False."

In other words, PolitiFact is objective and nonpartisan. Or something.

And if something happens rarely, compare it to lightning strikes instead of alligator attacks. Your PolitiFact report card may suffer otherwise.

Thursday, March 31, 2016

Fact checker avoids checking facts

A summary article by PolitiFact Wisconsin's Tom Kertscher contributes new evidence supporting our claims that PolitiFact checks facts poorly and applies its standards inconsistently. We'll address the latter point in a later post.

Kertscher's article reviewed statements made in Wisconsin by Democratic presidential front-runner Hillary Rodham Clinton. The following example from Kertscher's story grabbed our attention:

"The Republican governor of Florida has forbidden any state employee ever to use, either orally or in writing, the words ‘climate change.’ "

A news report in March 2015 made that assertion, though the governor, Rick Scott, denied it.
The story Kertscher linked in support, however, dealt only with Florida's Department of Environmental Protection. The State of Florida has quite a few employees outside of the DEP. On the face of it, Clinton exaggerated wildly and the best PolitiFact Wisconsin can do in response is prop up a he-said/she-said facade supporting Clinton.

PolitiFact Wisconsin reported falsely. The March 2015 news report made no assertion that Gov. Scott placed a "climate change" gag order on every state employee in Florida.

We sent a message to Tom Kertscher on March 30, 2016 pointing out the error.

We'll update this item if we receive any response.

Wednesday, March 23, 2016

Have Democrats "never held up a Supreme Court nomination"? (Updated)

We said before that PolitiFact does not hold itself to the same standard it applies to others. Though perhaps PolitiFact's scarce adherence to any consistent standard makes that inevitable. Our example comes from a March 20, 2016 fact check of Senate Minority Leader Harry Reid (D-Nev.).

Let's start with the misleading headline.

PolitiFact noted in its fact check that Democrats did, in fact, hold up the Supreme Court nomination of Robert Bork, whom Reagan nominated in 1987. So how does it work out that Reid's statement is "Mostly True" anyway?

It's tricky, in keeping with PolitiFact's tradition of convoluted and selective justification.

In context, Reid stipulated that he was talking about lame duck cases. That makes his statement literally ~~true~~ false (see Update below) under an expansive interpretation of "lame duck," since Bork was nominated before 1988, Reagan's final year in office.

Up through this point, one might argue PolitiFact is treating Reid unfairly by rating his true statement only "Mostly True."

But there's much more to this story, and PolitiFact leaves out important parts.

First and foremost, there is no historical parallel to the current situation with the Supreme Court. The Bork nomination is considered a prime turning point in the politicization of the confirmation process, and there is no example of a lame-duck nomination since Reagan.

And the current situation during Obama's last year in office sets a new precedent because his choice would not replace a liberal justice but a conservative justice. With his choice of Bork, Reagan was trying to replace the Nixon-appointed Justice Lewis Powell.

PolitiFact's effort to help us understand the truth in politics completely omits any information about differences in the ways these nominations would affect the political balance on the Supreme Court. It's apparently unimportant context in PolitiFact's eyes.

PolitiFact almost makes it look like Democrats rolled out the red carpet for Bork compared to Obama's hapless nominee Merrick Garland:
Bork did face a hearing and a Senate vote, which he lost, but his confirmation process made the rules of the game more contentious.
What's left out? The Democrat-controlled Senate Judiciary Committee that sent Bork's nomination to the floor recommended the Senate reject it. And Democrats had enough of a majority that Bork had no chance, with 52 of 54 Democrats voting against him (six Republicans, of the type often tagged with the acronym RINO, also opposed Bork).

Why is this background important? Let's revisit the context of Reid's reply to Meet the Press host Chuck Todd. Todd played a clip from 2005 of Reid saying the Senate has no constitutional duty to vote on a Supreme Court nomination. Now Reid says the Republicans have that duty. Todd asked Reid what changed. What changed, Reid said, is that the Democrats have never opposed a lame-duck nominee -- a history ending in 1988, well before Reid claimed in 2005 that the Senate has no duty to vote on a nomination.

Reid's answer to Todd was complete baloney, in context. PolitiFact's fact-check does nothing to emphasize that context to its readers. Instead, PolitiFact readers get a misleading headline sending the message that Democrats hardly at all obstruct the Supreme Court nominations of Republicans.

PolitiFact: Putting the Clintonian "is" in "nonpartisan" since 2007.

Update: "Lame Duck" Lameness

Jeff D. points out the elephant in the room.

Before we rule, we wanted to note a slight error in the second part of Reid’s statement that "since 1900 in a lame-duck session, there have been six (nominees) that have all been approved." We have found in a previous fact-check that since the early 1900s, there have been six Supreme Court nominees in election years, and all were confirmed. However, only one was clearly a "lame-duck" nominee, meaning the president making the nomination was no question on the way out (Reagan). The others were nominated by presidents running for re-election to serve another term (Herbert Hoover, William Howard Taft, Franklin D. Roosevelt and Woodrow Wilson, who nominated two people in 1916).
Reid's stipulation that he was talking about lame ducks makes his statement literally false, contrary to the charitable reading I gave it in the post above. Reid's statement was flat wrong, but PolitiFact arbitrarily determined that Reid's 500 percent exaggeration (six claimed "lame duck" Supreme Court nominations against the one actual case) was a "slight error" that does not appear to count against Reid's eventual rating.

PolitiFact used tweezers to pull out the most truth it could from Reid's statement, leaving behind plenty of falsehood.


Correction March 23, 2016:  Replaced "Mostly False" with "Mostly True" in referring to the rating Reid received from PolitiFact. March 31 update: Added strikethrough of "true" and added "false" to clarify the meaning.

Thursday, March 17, 2016

Ted Cruz fully to blame for giving Obama too much blame?

Does PolitiFact show a left-leaning bias in the blame game?

We thought PolitiFact went a bit easy on President Obama in a State of the Union speech some time ago. Obama touted the number of jobs businesses had created. PolitiFact said Obama's claim was "Half True" but later elevated the rating to "Mostly True" because the president did not take as much credit as PolitiFact had first believed.

No, of course there was no concrete explanation for why PolitiFact changed its opinion.

PolitiFact played the blame game again on March 16, 2016, this time with Ted Cruz.

Here's how it looked:

PolitiFact reported that Cruz said President Obama has been presiding over U.S. jobs going overseas, reasoned that Cruz gave Obama too much blame, and so rated Cruz's claim "Mostly False."

Whatever plausibility PolitiFact's rating carries from its headline and deck material ought to fade pretty quickly once readers stumble over what Cruz actually said (bold emphasis added):
[Meet the Press host] Chuck Todd played a clip of Obama saying the Republicans are significantly to blame for the angry tone of politics today.

Cruz responded, "You know, Chuck, Barack Obama's a world-class demagogue. That language there is designed to divide us. No, Mr. President, we're not angry at that. We're angry at politicians in Washington, including you, who ignore the men and women who elected you, who have been presiding over our jobs going overseas for seven years."

The part of Cruz’s comment that caught our eye was that Obama has "been presiding over our jobs going overseas for seven years." We decided to take a look. (Cruz’s staff did not respond to inquiries.)
To factually conclude that too much blame was placed, the fact checker needs a blame baseline. Knowing whether Cruz blamed the president too much requires the fact checker to reasonably gauge how much blame Cruz placed on the president.

We think Cruz made that very difficult for PolitiFact with the wording he used, for Cruz did not single out the president. Cruz first mentions anger at "politicians in Washington" and after that makes clear Obama is included in the group ("including you").

So how much blame is Cruz placing on Obama, based on what Cruz said? How is the blame divided up between "politicians in Washington" and President Obama?

We don't see any way for PolitiFact to make that determination without simply making an assumption. Cruz offered no guidance. There's nothing in the context that helps. At least in the earlier case featuring President Obama we have the context of the State of the Union address. Presidents use that address to implicitly play up the benefits of their policies.

PolitiFact apparently assumes Cruz is blaming the president particularly for some unspecified role in allowing jobs to go overseas. But Cruz doesn't specify how much blame falls on Washington politicians generally, let alone on the president. His statement doesn't even require assuming that the anger at Obama and other Washington politicians is justified.

Is this fact-checking? It's hard to see how it qualifies.

PolitiFact has no trouble at all, despite the ambiguous nature of Cruz's claim, finding that Cruz placed too much blame on Obama. And PolitiFact likewise has an easy time assigning blame to Cruz for wrongly assigning blame, ergo the "Mostly False" rating.

PolitiFact considered no Cruz blame on "Washington politicians" other than President Obama.

In a way, it's easy to understand why PolitiFact left the other Washington politicians out of its consideration. Keeping them in consideration makes the fact check even more difficult than doing one that places an unspecified degree of blame on Obama. Pretending Cruz did not spread the blame around makes it easier for PolitiFact to maintain the fiction that Cruz placed too much blame on Obama.

We hasten to point out that such an approach hardly qualifies as unbiased.