Monday, February 18, 2019

PolitiFact California rates "clean water" claim "True," doesn't define "clean water"

What is "clean water"? It depends on whom you ask, among a variety of different factors.

In California, for example, one can use the federal drinking water standard or the California drinking water standard. Not that using either standard would invalidate the opinion of someone who prefers to go by Mexican water quality standards.

In spite of the lack of any set standard for drawing a clean (pun intended) distinction between "clean water" and "unclean water," PolitiFact California crowned Gov. Gavin Newsom's claim that over 1 million Californians do not have clean water for drinking or bathing with its highest truth designation: "True."


That's supposed to mean that Newsom left nothing significant out, such as what standard he was using to call water "clean."

It's a basic rule of fact checking. If somebody says over 1 million people do not have X, the fact checker needs to have X clearly defined before it's possible to check the numerical claim.

PolitiFact California simply didn't bother, instead leaning on the expert opinions of environmental justice advocate Jonathan London of UC Davis and the equally non-neutral Kelsey Hinton (bold emphasis added):
"Unfortunately, (Newsom’s) number is true,’ added Kelsey Hinton, spokesperson for the Community Water Center, a San Joaquin Valley nonprofit that advocates for clean drinking water.

As evidence, both London and Hinton pointed to a 2017 drinking water compliance report by the State Water Resources Control Board, which regulates water quality. The report shows that an estimated 592,000 Californians lived in a public water district that received a water quality violation in 2017. But that doesn’t include people living in private, unregulated districts.
What neutral and objective fact-checker deliberately seeks out only experts who double as advocates for the subject of a fact check?

PolitiFact California's fact check successfully obscures the fact that drinking water standards are arbitrary. They are arbitrary in that those setting the standards are weighing costs and benefits. There is no set point at which contaminants suddenly turn harmful.

See, for example, the World Health Organization's statement about its standard for poisonous chemical arsenic:
The current recommended limit of arsenic in drinking-water is 10 μg/L, although this guideline value is designated as provisional because of practical difficulties in removing arsenic from drinking-water. Every effort should therefore be made to keep concentrations as low as reasonably possible and below the guideline value when resources are available.
The case is the same with other contaminants, including those yet to be named by regulators. There is no objective line of demarcation between "clean water" and "unclean water." At best there's a widely-accepted standard. PolitiFact California only mentions state standards in its fact check (the word "federal" does not appear) while citing reports that refer to both state and federal standards.

There's a clear solution to this problem. Politicians, if you're tempted to talk about how many do not have access to water meeting a given standard, cite the standard by name. Fact-checkers, if you fact-check claims about water quality that avoid mentioning specific regulatory standards and instead use the slippery term "clean water," realize that you're fact-checking an opinion.

PolitiFact California let slip a wonderful opportunity to educate its readers about water quality standards and what they mean in real life.


Afters: 

PolitiFact California describes environmental justice advocate London as "a UC Davis professor who’s written about contaminated drinking water."

Did PolitiFact California not know London advocates for "environmental justice" or did it deliberately hide that fact from its readers?

Thursday, February 14, 2019

PolitiFact and the 'Green New Deal' Fiction Depiction

What if the GOP released a FAQ that gave false or misleading information about the Democrats' "Green New Deal" resolution? Would that be fair game for fact checkers?

We don't see why not.

But what if one of the resolution's proponents released a FAQ that gave false or misleading information about the Democrats' "Green New Deal" resolution? Would that be fair game for fact checkers?

For PolitiFact, apparently the answer is "no."

A week after Green New Deal proponent Alexandria Ocasio-Cortez released a FAQ about the proposal on her website, including the supposed promise of "economic security for all who are unable or unwilling to work," PolitiFact published a Feb. 12, 2019 article on the Green New Deal suggesting that the publishers of the false information will not face PolitiFact's "Truth-O-Meter."

PolitiFact toed the line on the media narrative that somehow, some way, incorrect information was published by someone. Okay, it was Ocasio-Cortez's staff, but so what?
We should distinguish the official resolution with some additional documents that were posted and shared by Ocasio-Cortez’s staff around the time the resolution was introduced. Once they became public, portions of these additional documents became grist for ridicule among her critics.

Some critics took aim at a line in one of the FAQs that said "we aren’t sure that we’ll be able to fully get rid of farting cows and airplanes." The same document promised "economic security for all who are unable or unwilling to work."
Under normal circumstances, PolitiFact holds politicians accountable for what appears on their official websites.

Are these special circumstances? Why doesn't PolitiFact attribute the false information to Ocasio-Cortez in this case? Are the objective and neutral fact checkers trying to avoid creating a false equivalency by not fact-checking morally-right-but-factually-wrong information on Ocasio-Cortez's website?

A round of Mulligans for everyone!

Many will benefit from PolitiFact's apparent plan to give out "Truth-O-Meter" mulligans over claimed aspects of the Green New Deal resolution not actually in the resolution. Critics of those parts of the plan will not have their attacks rated on the Truth-O-Meter. And those responsible for generating the controversy in the first place by publishing FAQs based on something other than the actual resolution also find themselves off the hook.

Good call by the fact checkers?

We think it shows PolitiFact's failure to equally apply its standards.


A Morally Right Contradiction?

If Ocasio-Cortez's website says the Green New Deal provides economic security for persons unwilling to work but the Green New Deal resolution does not, then the resolution contradicts Ocasio-Cortez's claim. That's assuming PolitiFact follows past practice in holding politicians accountable for what appears on their websites.

So PolitiFact could have caught Ocasio-Cortez in a contradiction, and could have represented that finding somehow with its rating system.

In January we pointed out how PolitiFact falsely concluded that President Trump had contradicted himself in a tweet. The false finding of a contradiction resulted in a (bogus) "False" rating for Mr. Trump.

What excuse could possibly dress this methodology up as objective and unbiased? What makes a contradictory major policy FAQ unworthy of a rating compared to a non-contradictory presidential tweet?

Guess what? PolitiFact is biased. And we're not going to get coherent and satisfactory answers to our questions so long as PolitiFact insists on presenting itself as objective and unbiased.

Saturday, February 2, 2019

PolitiFact Editor: "Most of our fact checks are really rock solid as far as the reporting goes"

Why do we love it when PolitiFact's principals do interviews?

Because it almost always provides us with material for PolitiFact Bias.

PolitiFact Editor Angie Drobnic Holan, in a recent interview for the Yale Politic, damned her own fact-checking organization with faint praise (bold emphasis added):

We do two things–well, we do more than two things–but we do two things that I want to mention for public trust. First, we have a list of our sources with every fact check. If you go into a fact check on the right-hand rail [of the PolitiFact webpage], we list of all of our sources, and then we explain in detail how we came to our conclusion. I also wrote a recent column on why PolitiFact is not biased to try to answer some of the critique that we got during the latest election season. What we found was that when a campaign staffer does not like our fact check on their candidate, they usually do not argue the facts with us– they usually come straight at us and say that we are biased. So, I wrote this column in response to that. And the reason that they don’t come straight out and dispute the facts with us is because the fact checks are solid. We do make some mistakes like any other human beings, but most of our fact checks are really rock solid as far as the reporting goes. And yet, partisans want to attack us anyway.
We find Holan's claim plausible. The reporting on more than half of PolitiFact's fact checks may well be rock solid. But what about the rest? Are the failures fair game for critics? Holan does not appear to think so, complaining that even though the reporting for PolitiFact's fact checks is solid more than half the time, "partisans want to attack us anyway."

The nerve of those partisans!

Seriously, with a defender like Holan, who needs partisan attacks? Imagine Holan composing ad copy extolling the wonders of PolitiFact:

PolitiFact: Rock Solid Reporting Most of the Time


Holan's attempt to discredit PolitiFact's critics hardly qualifies as coherent. Even if PolitiFact's reporting were "rock solid" 99 percent of the time, criticizing the errors would still count as fair game. And a 1 percent error rate favoring the left would indicate bias.

Holan tries to imply that the quality of PolitiFact's reporting results in a lack of specific criticism, but research connected to Holan's predecessor at PolitiFact, Bill Adair of Duke University, contradicts that notion:
Conservative outlets were much more likely to question the validity of fact-checks and accuse fact-checkers of making errors in their research or logic.
It isn't that conservatives do not criticize PolitiFact on its reporting. They do (we do). But PolitiFact tends to ignore the criticisms. Perhaps because the partisan critiques are "rock solid"?

More interviews, please.

Friday, February 1, 2019

Not a Lot of Reader Confusion XII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.


Ho-hum. Another day, another example showing that PolitiFact's charts and graphs mislead its readers.

When Sen. Cory Booker (D-NJ) announced he was running for president, PolitiFact was on the case looking for clicks by publishing one of its misleading graphs of Booker's (subjective) "Truth-O-Meter" ratings.

PolitiFact used a pie chart visual with its Facebook posting:


Note the absence of any disclaimer admitting that selection bias affects story selection, along with no mention of the fundamental subjectivity of the "Truth-O-Meter" ratings.

How is this type of graph supposed to help inform PolitiFact's readers? We say it misinforms readers, such as those making these types of comments (actual comments from PolitiFact's Facebook page in response to the Booker posting):

  • "A pretty sad state of affairs when a candidate lies or exaggerates 35% of the time."
  • "Funny how Politifact find Trump lying about 70% of the time." 
  • "(W)hile that's much better than (T)rump's scorecard, it's not honest enough for my tastes."


PolitiFact seems to want people to vote for the candidate with the best-looking scorecard. But given that the ratings are subjective and the stories are selected in a non-random manner (nearly guaranteeing selection bias), the "best" scorecard is, in effect, an editorial position based on opinions, not facts.
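To illustrate (with made-up numbers, not PolitiFact data), here's a minimal sketch of how non-random story selection alone can hand two speakers with identical underlying accuracy very different-looking "scorecards":

```python
import random

random.seed(42)

# Hypothetical universe of claims: both speakers make false claims 30% of the time.
def make_claims(n=1000, false_rate=0.3):
    return ["false" if random.random() < false_rate else "true" for _ in range(n)]

speaker_a = make_claims()
speaker_b = make_claims()

# Non-random story selection: suppose the fact checker picks up speaker B's
# false claims twice as often as speaker A's, and checks few true claims at all.
def select_for_checking(claims, false_pick_rate, true_pick_rate=0.1):
    return [c for c in claims
            if random.random() < (false_pick_rate if c == "false" else true_pick_rate)]

scorecard_a = select_for_checking(speaker_a, false_pick_rate=0.2)
scorecard_b = select_for_checking(speaker_b, false_pick_rate=0.4)

def false_share(scorecard):
    return sum(c == "false" for c in scorecard) / len(scorecard)

print(f"Speaker A scorecard: {false_share(scorecard_a):.0%} false")
print(f"Speaker B scorecard: {false_share(scorecard_b):.0%} false")
# Both speakers were false 30% of the time; only the selection differed.
```

Without a disclaimer about how stories get chosen, readers have no way to tell selection effects like these apart from real differences between the speakers.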

PolitiFact is selling editorials to its readers with these graphs.

Many in the fact-checking community oppose the use of gimmicky meters (FactCheck.org and Full Fact (UK), for example). If the International Fact-Checking Network and PolitiFact were not both under the auspices of the Poynter Institute, then perhaps we'd soon see an end to the cheesy meters.

Wednesday, January 23, 2019

Not a Lot of Reader Confusion XI

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

Today on its Facebook Page, PolitiFact continued its custom of highlighting one of its politician "report cards."

In keeping with our expectation that PolitiFact's report cards mislead the PolitiFact audience, we found the following:


We're not making the commenters impossible to verify, but we're blacking out the names and emphasizing that nobody should harass these people over their comments in any way. Somebody already pointed out a problem with their view, so there's not even a need to do that. Just leave it alone and allow this to stand as the latest proof that PolitiFact has its head in the sand over the way its charts mislead its audience. That's using the charitable assumption that PolitiFact isn't deliberately deceiving its audience.

As we have endlessly pointed out, the non-scientific method PolitiFact uses to sample the universe of political claims makes statements about the overall percentages of false statements unreliable.

PolitiFact knows this but declines to make it clear to readers with the use of a consistent disclaimer.

The type of response we captured above happens routinely.

Thursday, January 17, 2019

PolitiFact's Heart Transplant

In the past we have mocked PolitiFact's founding editor Bill Adair for saying the "Truth-O-Meter" is the "heart of PolitiFact."

We have great news. PolitiFact has given itself a heart transplant.

PolitiFact's more recent (since May of 2018) self-descriptions now say that fact-checking is the heart of PolitiFact:


That's a positive move we applaud, while continuing to disparage the quality of PolitiFact's fact checks.

It was always silly to call a subjective sliding-scale Gimmick-O-Meter the heart of PolitiFact (even if it was or remains true).

The new approach at least represents improved branding.

Now if PolitiFact could significantly upgrade the quality of its work ...




Post-publication update: Added hotlinks to the first paragraph leading to past commentary on PolitiFact's heart.

Tuesday, January 15, 2019

PolitiFact and the Contradiction Fiction

We consider it incumbent on fact checkers to report the truth.

PolitiFact's struggles in that department earn it our assessment as the worst of the mainstream fact checkers. In our latest example, PolitiFact reported that President Donald Trump had contradicted his claim that he had never said Mexico would pay for the border wall with a check.

We label that report PolitiFact's contradiction fiction. Fact checkers should know the difference between a contradiction and a non-contradiction.

PolitiFact (bold emphasis added):
"When during the campaign I would say ‘Mexico is going to pay for it,’ obviously, I never said this, and I never meant they're going to write out a check," Trump told reporters. "I said they're going to pay for it. They are."

Later on the same day while visiting the border in Texas, Trump offered the same logic: "When I say Mexico is going to pay for the wall, that's what I said. Mexico is going to pay. I didn't say they're going to write me a check for $20 billion or $10 billion."

We’ve seen the president try to say he never said something that he very much said before, so we wondered about this case.

Spoiler: Trump has it wrong.

We found several instances over the last few years, and in campaign materials contradicting the president’s statement.
PolitiFact offers three links in evidence of its "found several instances" argument, but relies on the campaign material for proof of the claimed contradiction.

We'll cover all of PolitiFact's evidence and show that none of it demonstrated that Mr. Trump contradicted himself on this point. Because we can.


Campaign Material

PolitiFact made two mistakes in trying to prove its case from a Trump Campaign description of how Trump would make Mexico pay for the border wall. First, it ignored context. Second, it applied spin to one of the quotations it used from the document.

PolitiFact (bold emphasis added):
"It's an easy decision for Mexico: make a one-time payment of $5-10 billion to ensure that $24 billion continues to flow into their country year after year," the memo said.

Trump proposed measures to compel Mexico to pay for the wall, such as cutting off remittances sent from undocumented Mexicans in the U.S. via wire transfers.

Then, the memo says, if and when the Mexican government protested, they would be told to pay a lump sum "to the United States to pay for the wall, the Trump Administration will not promulgate the final rule, and the regulation will not go into effect." The plan lists a few other methods if that didn’t work, like the trade deficit, canceling Mexican visas or increasing visa fees.
We placed bold emphasis on the part of the memo PolitiFact mentioned but ignored in its reasoning.

If the plan mentions methods to use if Mexico did not fork over the money directly, then how can the memo contradict Trump's claim he did not say Mexico would pay by check? Are fact checkers unable to distinguish between "would" and "could"? If Trump says Mexico could pay by check he does not contradict that claim by later saying he did not say Mexico would pay by check.

And what's so hard to understand about that? How can fact checkers not see it?

To help cinch its point, PolitiFact quotes from another section of the document, summarizing it as saying Mexico would pay for the wall with a lump sum payment ("(Mexico) would be told to pay a lump sum 'to the United States to pay for the wall'"). Except the term "lump sum" doesn't occur in the document.

There's reason for suspicion any time a journalist substitutes his own wording for that of the original document, using only a partial quotation and picking up mid-sentence. Here's the reading from the original:
On day 3 tell Mexico that if the Mexican government will contribute the funds needed to the United States to pay for the wall ...
We see only one potential justification for embroidering the above to make it refer to a "lump sum." That comes from interpreting "It's an easy decision for Mexico: make a one-time payment of $5-10 billion to ensure that $24 billion continues to flow into their country year after year" as specifying a lump sum payment. We think confirmation bias best explains that interpretation. It's more reasonable to take the statement to mean that paying for the wall once and having it over with is an obvious choice when it helps preserve a greater amount of income for Mexico annually after that. And the line expresses not an expectation of a lump-sum payment but (rightly or wrongly) the strength of the United States' bargaining position.

In short, PolitiFact utterly failed to make its case with the example it chose to emphasize.


... And The Rest


 (these are weak, so they occur after a page break)

Monday, January 7, 2019

Research shows PolitiFact leans left: The "Pants on Fire" bias

In 2011 PolitiFact Bias started a study of the way PolitiFact employs its "Pants on Fire" rating.

We noted that PolitiFact's definitions for "False" and "Pants on Fire" ratings appeared to differ only in that the latter rating represents a "ridiculous" claim. We had trouble imagining how one would objectively measure ridiculousness. PolitiFact's leading lights appeared to state in interviews that the difference in the ratings was subjective. And our own attempt to survey PolitiFact's reasoning turned up nothing akin to an empirically measurable difference.

We concluded that the "Pants on Fire" rating was likely just as subjective as PolitiFact editors described it. And we reasoned that if a Republican statement PolitiFact considered false was more likely than the same type of statement from a Democrat to receive a "Pants on Fire" rating we would have a reasonable measure of ideological bias at PolitiFact.

Every year we've updated the study for PolitiFact National. In 2017, PolitiFact was 17 percent more likely to give a Democrat than a Republican a "Pants on Fire" rating for a false statement. But the number of Democrats given false ratings was so small that it hardly affected the historical trend. Over PolitiFact's history, Republicans are over 50 percent more likely to receive a "Pants on Fire" rating for a false claim than a Democrat.
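For readers who want to see the arithmetic, here is a minimal sketch with hypothetical counts (not PolitiFact's actual figures) of how we compute each party's "Pants on Fire" share of its false ratings and the "more likely" comparison:

```python
# Hypothetical rating counts, for illustration only (not PolitiFact's data).
republicans = {"false": 120, "pants_on_fire": 45}
democrats = {"false": 80, "pants_on_fire": 20}

def pof_share(counts):
    """Share of a party's false-or-worse ratings that were 'Pants on Fire'."""
    total_false = counts["false"] + counts["pants_on_fire"]
    return counts["pants_on_fire"] / total_false

rep_share = pof_share(republicans)   # 45 / 165 ≈ 0.273
dem_share = pof_share(democrats)     # 20 / 100 = 0.200

# Relative likelihood: how much more likely a Republican false claim is
# to be rated "Pants on Fire" than a Democrat's.
relative = rep_share / dem_share - 1  # ≈ 0.36, i.e. about 36% more likely

print(f"Republican 'Pants on Fire' share: {rep_share:.1%}")
print(f"Democrat 'Pants on Fire' share:   {dem_share:.1%}")
print(f"Republicans {relative:.0%} more likely to get 'Pants on Fire'")
```

Note that the measure depends only on each party's share, not on how many total ratings each party received.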



After Angie Drobnic Holan replaced Bill Adair as PolitiFact editor, we saw a tendency for PolitiFact to give Republicans many more false ("False" plus "Pants on Fire") ratings than Democrats. In 2013, 2015, 2016 and 2017 the percentage was exactly 25 percent each year. Except for 2007, which we count as an anomaly, that percentage marked the record high for Democrats. It appeared likely that Holan was aware of our research and was leading PolitiFact toward more careful exercise of its subjective ratings.

Of course, if PolitiFact fixes its approach to the point where the percentages are roughly even, that powerfully shows that the disparities from 2009 through 2014 represent ideological bias. If one fixes a problem, it serves to acknowledge there was a problem in need of fixing.


In 2018, however, the "Pants on Fire" bias fell pretty much right in line with PolitiFact's overall history. Republicans in 2018 were about 50 percent more likely than Democrats to receive a "Pants on Fire" rating for a claim PolitiFact considered false.

The "Republicans Lie More!" defense doesn't work

Over the years we've had a hard time explaining to people why simply claiming that Republicans lie more does not explain away our data.

That's because of two factors.

First, we're not basing our bias measure on the number of "Pants on Fire" ratings PolitiFact doles out. We're just looking at the percentage of false claims given the "Pants on Fire" rating.

Second, our research provides no reason to believe that the "Pants on Fire" rating has empirical justification. PolitiFact could invent a definition for what makes a claim "Pants on Fire" false. PolitiFact might even invent a definition based on some objective measurement. And in that case the "Republicans lie more!" excuse could work. But we have no evidence that PolitiFact's editors are lying when they tell the public that the difference between the two ratings is subjective.

If the difference is subjective, as it appears, then PolitiFact's greater tendency to give a Republican's false statement a "Pants on Fire" rating counts as a very clear indicator of ideological bias.

To our knowledge, PolitiFact has never addressed this research with public comment.