Tuesday, April 17, 2018

What??? No Pulitzer for PolitiFact in 2018?

We're not surprised PolitiFact failed to win a Pulitzer Prize in 2018.

The Pulitzer it won back in 2009 was a bit of a fluke to begin with: it stemmed from submissions in the public service category that the Pulitzer board arbitrarily used to justify a prize for "National Reporting."

Since the win in 2009, PolitiFact has won exactly the same number of Pulitzer Prizes that PolitiFact Bias has won: Zero.

So, no, we're not surprised. But we think Hitler might be, judging from his reaction when PolitiFact failed to win a Pulitzer back in 2014.

Friday, April 13, 2018

PolitiFact continues to botch the gender pay gap


We can depend on PolitiFact to perform lousy fact-checking on the gender wage gap.

PolitiFact veteran Louis Jacobson proved PolitiFact's consistent ineptitude with an April 13, 2018 fact check of Sen. Tina Smith (D-Minn.), Sen. Al Franken's replacement.

Sen. Smith claimed that women earn only 80 cents on the dollar for doing the same jobs as men. That's false, and PolitiFact rated it "Mostly False."


That 80-cents-on-the-dollar wage gap is calculated based on full-time work irrespective of the job type and irrespective of hours worked once above the full-time threshold. The figure represents the median, not the average.

But isn't "Mostly False" a Fair Rating for Smith?

Good question! PolitiFact noted that the figure Smith was using did not take the type of job specifically into account. And PolitiFact pointed out that Smith made a common mistake. People often fail to mention that the raw wage gap figure doesn't take the type of job into account.

PolitiFact's Jacobson doesn't precisely spell out why PolitiFact finds a germ of truth in Smith's statement. Presumably PolitiFact's reasoning matches that of its earlier ratings where it noted that the wage gap statistic is accurate except for the part about it applying to equal work. So it's true except for the part that makes it false, therefore "Mostly False" instead of "False."

Looking at it objectively, however, it's just plain false that women earn 80 cents on the dollar for doing the same work. Researchers talk about an "unexplained gap" after taking various factors into account to explain the gap, and the ceiling for gender discrimination appears to fall at around 5 percent to 7 percent.

Charitably using the 7 percent figure as the ceiling for gender-based wage discrimination, Smith exaggerated the gap by 186 percent. It's likely the exaggeration was far greater than that.

For comparison, when Bernie Sanders said 40 percent of U.S. gun sales occur without background checks, PolitiFact gave him a "False" rating for exaggerating the right figure by 90 percent.
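The arithmetic behind the exaggeration comparison checks out with a quick sketch. Note the supported figure of roughly 21 percent for gun sales is our assumption, back-solved from the 90 percent exaggeration cited above; it does not appear in the fact checks themselves:

```python
# Exaggeration measured as (claimed - supported) / supported, in percent.
def exaggeration_pct(claimed, supported):
    return (claimed - supported) / supported * 100

# Smith: a claimed 20-point gap (80 cents on the dollar) against a ~7-point
# ceiling for gender-based wage discrimination.
print(round(exaggeration_pct(20, 7)))   # 186

# Sanders, for comparison: a claimed 40 percent against a supported figure
# of roughly 21 percent (our assumption, implied by the 90 percent figure).
print(round(exaggeration_pct(40, 21)))  # 90
```

Using the 5 percent floor instead of the 7 percent ceiling puts Smith's exaggeration at 300 percent, which is why we say the exaggeration was likely far greater than 186 percent.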

The Ongoing Democratic Deception PolitiFact Overlooks

If a Democrat describes the 80 percent raw pay gap accurately, why not give it a "True" rating? Or at least "Mostly True"?

Democrats tend to trot out the raw gender pay gap statistic while proposing legislation that supposedly addresses gender discrimination. By repeatedly associating the raw wage gap with the issue of wage discrimination, Democrats send the implicit message that the raw wage gap describes gender discrimination. The tactic exploits anchoring bias to mislead the audience about the size of the pay gap stemming from gender discrimination.

Democrats habitually use "Equal Pay Day," based on the raw wage gap, to argue for equal pay for equal work. But the raw wage gap doesn't take the type of job into account.

Trust PolitiFact not to notice the deception.

Fact checkers ought to press Democrats to clarify their position. Are Democrats in favor of equal pay regardless of the job or hours worked? Or do Democrats believe the demands for equal pay apply only to matters of gender discrimination?

If the latter, Democrats' continued use of the raw wage gap to peg the date of their "Equal Pay Day" counts as a blatant deception.

If the former, voters deserve to know what Democrats stand for.

Afters


It amused us that Jacobson directly referenced an earlier PolitiFact Florida botched treatment of the gender pay gap.

PolitiFact Florida couldn't figure out that claiming the gap occurs "simply because she isn't a man" is equivalent to claiming the raw gap is for men and women doing the same work. Think about it. If the gap occurs "simply because she isn't a man" then the reason for the disparity cannot be because she is doing different work. Doing different work would be a factor in addition to her not being a man.

PolitiFact Florida hilariously rated that claim "Mostly True." We wrote about it on March 14, 2017.

Fact checkers. D'oh.

Thursday, April 12, 2018

Not a Lot of Reader Confusion X: "I admit that there are flaws in this ..."

So hold my beer, Nelly.

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

Quite a few folks apparently have no clue at all that PolitiFact's charts and graphs lack anything like a scientific basis. Others know that something isn't right about the charts and graphs but against all reason find some value in them anyway.

PolitiFact itself would fall in the latter camp, based on the way it uses its charts and graphs.

So would Luciano Gonzalez, writing at Patheos. Gonzalez listened to Speaker of the House Paul Ryan's speech announcing his impending retirement and started wondering about Ryan's record of honesty.

PolitiFact's charts and graphs don't tell people about the honesty of politicians because of many flaky layers of selection bias, but people can't seem to help themselves (bold emphasis added):
I decided after hearing his speech at his press conference to independently check if this House Speaker has made more honest claims than his predecessors. To did this I went to Politifact and read the records of Nancy Pelosi (House Speaker from January 4th 2007-January 3rd 2011), John Boehner (House Speaker from January 5th 2011-October 29th, 2015), and of course of the current House Speaker Paul Ryan (October 29th 2015 until January 2019). I admit that there are flaws in this, such as the fact that not every political claim a politician makes is examined (or even capable of being examined) by Politifact and of course the inherent problems in giving political claims “true”, “mostly”, “half-true”, “mostly false”, “false”, & “pants on fire” ratings but it’s better than not examining political claims and a candidate’s level of honesty or awareness of reality at all.
If we can't have science, Gonzalez appears to say, pseudoscience is better than nothing at all.

Gonzalez proceeds to crunch the meaningless numbers, which "support" the premise of his column that Ryan isn't really so honest.

That accounts for the great bulk of Gonzalez's column.

Let's be clear: PolitiFact encourages this type of irresponsible behavior by publishing its nonsense graphs without the disclaimers that spell out for people that the graphs cannot be reasonably used to gauge people's honesty.

PolitiFact encourages exactly the type of behavior that fact checkers ought to discourage.

Monday, April 2, 2018

PolitiFact Bias Fights Fact Checker Falsehoods

A December 2017 project report by former PolitiFact intern Allison Colburn of the University of Missouri-Columbia School of Journalism made a number of misleading statements about PolitiFact Bias. This is our second post addressing that report. Find the first one here.

Colburn:
A blog, PolitiFactBias.com, devotes itself to finding specific instances of PolitiFact being unfair to conservatives. The blog does not provide analysis or opinion about fact-checks that give Republicans positive ratings. Rather, it mostly focuses on instances of PolitiFact being too hard on conservatives.
We find all three sentences untrue.

Does PolitiFact Bias devote itself to finding specific instances of PolitiFact being unfair to conservatives?

The PolitiFact Bias banner declares the site's purpose as "Exposing bias, mistakes and flimflammery at the PolitiFact fact check website." Moreover, the claim is specious on its face. After the page break we posted the title of each PolitiFact Bias blog entry from 2017, the year when Colburn published her report. The titles alone provide strong evidence contradicting Colburn's claim.

PolitiFact Bias exists to show the strongest evidence of the left-leaning bias that a plurality of Americans detect in the mainstream media, specific to PolitiFact. As such, we look for any manifestations of bias, including patterns in the use of words, patterns in the application of subjective ratings, biased framing and inconsistent application of principles.


Does PolitiFact Bias not provide analysis or opinion about fact-checks that give Republicans positive ratings?

PolitiFact Bias focuses its posts on issues that accord with its purpose of exposing PolitiFact's bias, mistakes and flimflammery. Our focus by its nature is technically orthogonal to PolitiFact giving Republicans positive ratings. And, in fact, PolitiFact Bias does analyze cases where Republicans received high ratings. PolitiFact Bias even highlights some criticisms of PolitiFact from the left.

We simply do not find many strong criticisms of PolitiFact from the left. There are plenty of criticisms of PolitiFact from the right that we likewise find weak.

Does PolitiFact Bias "mostly focus" on PolitiFact's harsh treatment of conservatives?

PolitiFact Bias recognizes the subjectivity of PolitiFact's "Truth-O-Meter" ratings. PolitiFact's rating system offers no dependable means of objectively grading the truth value of political statements. For that reason, this site tends to avoid specifically faulting PolitiFact's assigned ratings. Instead, PolitiFact Bias places its emphasis on cases showing PolitiFact's inconsistency in applying its ratings. When two similar cases produce a positive rating for a Democrat and a lower rating for a Republican, it may be that PolitiFact went easy on the Democrat rather than too hard on the Republican.

That said, the list of post titles again shows that PolitiFact Bias produces a great deal of content that is not focused on showing PolitiFact should give conservatives more positive ratings. Holan's statement jibes with Colburn's false statement about the focus at PolitiFact Bias.

Why the misleading claims about PolitiFact Bias?

As far as we can tell, the entire evidence Colburn used in her report's judgment of PolitiFact Bias came from her interview with PolitiFact Editor Angie Drobnic Holan:
I'm just kind of curious, there's the site, PolitiFactBias.com. What are what are your thoughts on that site?

That seems to be one guy who's been around for a long time, and his complaints just seem to be that we don't have good, that we don't give enough good ratings, positive ratings to conservatives. And then he just kind of looks for whatever evidence he can find to support that point.

Do you guys ever read his stuff? Does it ever worry you?

He's been making the same complaint for so long that it has tended to become background noise, to be honest. I find him just very singularly focused in his complaints, and he very seldom brings up anything that I learn from. But he's very, you know, I give him credit for sticking in there. I mean he used to give us, like when he first started he would give us grades for our reporting and our editing. So it would be like grades for this report: Reporter Angie Holan, editor Bill Adair. And like we could never do better than like a D-minus. So it's just like whatever. What I find is it's hard for me to take critics seriously when they never say we do anything right. Sometimes we can do things right, and you'll never see it on that site.
Note that in Holan's response to Colburn's first question about PolitiFact Bias she suggests the site focuses on PolitiFact not giving enough positive ratings to conservatives.

Are Colburn and Holan lying?

PolitiFact Bias co-editor Jeff D. has used the PolitiFact Bias Twitter account to charge Colburn and Holan with lying.

The charge isn't unreasonable.

Colburn very likely read the PolitiFact Bias site to some extent before asking Holan about it. Even a cursory read ought to have informed a reasonable person that Holan's description of the site was slanted at best. Yet Holan's description apparently underpinned Colburn's description of PolitiFact Bias.

Likewise, Holan's familiarity with the PolitiFact Bias site ought to have informed her that her description of it was wrong and misleading.

When a person knowingly makes a false or misleading statement, it counts as a lie. Both Colburn and Holan very likely had reason to know their statements were false or misleading.

We're pondering a second post pressing the issue still further in Holan's case.

Wednesday, March 28, 2018

How PolitiFact Fights Its Reputation for Anti-conservative Bias

This week we ran across a new paper with an intriguing title: Everyone Hates the Referee: How Fact-Checkers Mitigate a Public Perception of Bias.

The paper, by Allison Colburn, pretty much concludes that fact checkers do not know how to fight their reputation for bias. Aside from that, it lets the fact checkers describe what they do to try to seem fair and impartial.

The paper mentions PolitiFact Bias, and we'll post more about that later. We place our focus for this post on Colburn's October 2017 interview of PolitiFact Editor Angie Drobnic Holan. Colburn asks Holan directly about PolitiFact Bias (Colburn's words in bold, following the format from her paper):
I'm just kind of curious, there's the site, PolitiFactBias.com. What are what are your thoughts on that site?

That seems to be one guy who's been around for a long time, and his complaints just seem to be that we don't have good, that we don't give enough good ratings, positive ratings to conservatives. And then he just kind of looks for whatever evidence he can find to support that point.

Do you guys ever read his stuff? Does it ever worry you?

He's been making the same complaint for so long that it has tended to become background noise, to be honest. I find him just very singularly focused in his complaints, and he very seldom brings up anything that I learn from.

But he's very, you know, I give him credit for sticking in there. I mean he used to give us, like when he first started he would give us grades for our reporting and our editing. So it would be like grades for this report: Reporter Angie Holan, editor Bill Adair. And like we could never do better than like a D-minus. So it's just like whatever. What I find is it's hard for me to take critics seriously when they never say we do anything right. Sometimes we can do things right, and you'll never see it on that site.
We could probably mine material from these answers for weeks. One visit to our About/FAQ page would prove enough to treat the bulk of Holan's misstatements. Aside from the FAQ, Jeff's tweet of Rodney Dangerfield's mug is the perfect response to Holan's suggestion that PolitiFact Bias is "one guy."



The Holan interview does deliver some on the promise of Colburn's paper. It shows how Holan tries to marginalize PolitiFact's critics.

I summed up one prong of PolitiFact's strategy in a post from Jan 30, 2018:
Ever notice how PolitiFact likes to paint its critics as folks who carp about whether the (subjective) Truth-O-Meter rating was correct?
In that post, I reported on how Holan bemoaned the fact that PolitiFact critics do not offer factual criticisms of its fact checks, preferring instead to quibble over its subjective ratings.
If they're not dealing with the evidence, my response is like, ‘Well you can say that we're biased all you want, but tell me where the fact-check is wrong. Tell me what evidence we got wrong. Tell me where our logic went wrong. Because I think that's a useful conversation to have about the actual report itself.
Holan says my (our) criticism amounts to a call for more positive ratings for conservatives. So we're just carping about the ratings, right? Holan's summation does a tremendous disservice to our painstaking and abundant research pointing out PolitiFact's errors (for example).

In the Colburn interview Holan also says she has trouble taking criticism seriously when the critic doesn't write articles complimenting what PolitiFact does correctly.

We suppose it must suck to find oneself the victim of selection bias. We suppose Holan must have a tough time taking FactCheck.org seriously, given its policy against publishing fact checks showing a politician spoke the truth without misleading.

The hypocrisy from these people is just too much.

Exit question: Did Holan just not know what she was talking about, or was she simply lying?



Afters

For what it's worth, we sometimes praise PolitiFact for doing something right.



Correction March 31, 2018: We erred by neglecting to include the URL linking to Colburn's paper. We apologize to Allison Colburn and our readers for the oversight.

Tuesday, March 20, 2018

PolitiFact's apples and oranges make Zinke a liar?

Fact checkers often mete out harsh ratings to politicians who employ apples-to-oranges comparisons to make their point.

Mainstream media fact checkers often find themselves immune from the principles they use to find fault with others, however.

Consider PolitiFact's March 19, 2018 fact check of Interior Secretary Ryan Zinke.


Zinke said a Trump administration proposal "is the largest investment in our public lands infrastructure in our nation's history."

PolitiFact found that the Civilian Conservation Corps program under President Franklin Roosevelt would far exceed the proposed Trump administration spending if adjusted for inflation:
The CCC’s director wrote in 1939 that it had cost $2 billion; that was two-thirds of the way through the program’s life. And according to a Park Service study, the annual cost per CCC enrollee was $1,004 per year. If you assume that the average tenure of the CCC’s 3.5 million workers was about a year, that would produce a cumulative cost around $3 billion.

Such calculations "sound right — millions of young men, camps to house them, food and uniforms, and they were paid," said Steven Stoll, an environmental historian at Fordham University.

Once you factor in inflation, $3 billion spent in the 1930s would be the equivalent of about $53 billion today — about three times bigger than even the fully funded Trump proposal.
When a spokesperson for the Trump administration pointed out that the CCC included lands controlled at the state and local level, PolitiFact brushed the objection aside (bold emphasis added):
Interior Department spokeswoman Heather Swift pointed out that the CCC "also incorporated state and local land."

It’s true that the CCC created more than 700 state parks and upgraded many others, in addition to its efforts on federally owned land. Ultimately, though, the point is moot: Zinke didn’t say the proposal is the largest investment in federal lands infrastructure. He said "public lands infrastructure," and state and local parks count as "public lands."
The key to the "False" rating PolitiFact gave Zinke comes entirely from its insistence that Zinke's statement covers all public lands.

But the context, which PolitiFact reported but ignored, clearly shows Zinke was talking specifically about spending on federal lands (bold emphasis added):
"The president is a builder and the son of a plumber, as I am," Zinke told the Senate Energy & Natural Resources Committee. "I look forward to working with the president on restoring America's greatness through a historic investment of our public lands infrastructure. This is the largest investment in our public lands infrastructure in our nation's history. Let me repeat that, this is the largest investment in our public lands infrastructure in the history of this country."

Zinke specified that he was referring to the president's budget proposal, which would create a fund to provide "up to $18 billion over 10 years for maintenance and improvements in our national parks, our national wildlife refuges, and Bureau of Indian Education funds."
We note a pair of irreconcilable problems with PolitiFact's reasoning.

If Zinke had claimed the CCC spending was greater than the spending proposed by the Trump administration, he would be guilty of using an apples-to-oranges comparison. Why? Because the scope of the two spending programs varies at a fundamental level.

Any would-be comparison between spending on federal lands only and spending on federal, state and local lands qualifies as an apples-to-oranges comparison.

If Zinke's statement were interpreted in keeping with his comments on the scope of the spending--kept to "federal lands"--then PolitiFact simply elected to avoid doing the appropriate fact check: measuring the CCC spending on federal lands against the proposed Trump administration spending on federal lands. Apples-to-apples.

PolitiFact bases its fact check on the apples-to-oranges comparison: CCC spending on federal, state and local parks against proposed Trump administration spending on federal lands only.

Objective?

Nonpartisan?

Not hardly.



Afters: Multiple flaky layers

In its fact check PolitiFact stresses the enormous size of CCC spending under Roosevelt by expressing it as a percentage of the federal budget, then compares that to the tiny percentage of the total budget taken up by Trump's proposed spending.

Hello?

Has the federal budget increased over time (try as a percentage of GDP)? Medicare? Medicaid? Hello?

PolitiFact loves it some apples and oranges.

Not a Lot of Reader Confusion IX

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

In a March 19, 2018 column published by The Hill, liberal radio talk show host Bill Press declared that we can't believe what President Donald Trump says.

What evidence did Press have to support his claim?

PolitiFact.

Press (bold emphasis added):
Trump tells so many lies, so often, that not even Politifact, the Pulitzer Prize-winning, nonpartisan fact-checking website can keep score. But they do the most thorough job of anybody. Since he launched his 2016 campaign, Politifact has evaluated more than 500 assertions made by candidate and president Donald Trump, and they’ve rated an astounding 69 percent of them as “false,” “mostly false,” or, the worst category, “liar, liar, pants on fire.” Think about that. On any given day, you know that seven out of ten things Donald Trump says are not true!
We at PolitiFact Bias have endlessly pointed out that, even if one assumes PolitiFact did its fact checks without mistakes, selection bias and the subjective application of its rating system make claims like the one in bold ridiculous.

Claims mirroring the one above happen routinely, yet PolitiFact denies the evidence ("not a lot of reader confusion") and continues publishing its misleading charts and graphs with no explanatory disclaimer.

We can think of two primary potential explanations.
  • PolitiFact truly doesn't see the abundance of evidence even though we jump up and down calling attention to it
  • PolitiFact is deliberately misleading people
 We invite readers to provide alternative possibilities in the comments section.