Saturday, August 11, 2018

Did an Independent Study Find PolitiFact Is Not Biased?

An email alert from August 10, 2018 led us to a blaring headline from the International Fact-Checking Network:

Is PolitiFact biased? This content analysis says no

Though "content analysis" could mean the researchers looked at pretty much anything having to do with PolitiFact's content, we suspected the article was talking about an inventory of PolitiFact's word choices, looking for words associated with a political point of view. For example, "anti-abortion" and "pro-life" signal political points of view. Using those and similar terms may tip off readers regarding the politics those who produce the news.

PolitiFact Bias has never used the presence of such terms to support our argument that PolitiFact is biased. In fact, I (Bryan) tweeted a brief judgment of the study back on July 16, 2018.
We have two major problems with the IFCN article at Poynter.org (by Daniel Funke).

First, it implies that the word-use inventory somehow negates the evidence of bias that PolitiFact's critics cite, even though that evidence does not involve the types of word choices the study was designed to detect:
It’s a critique that PolitiFact has long been accustomed to hearing.

“PolitiFact is engaging in a great deal of selection bias,” The Weekly Standard wrote in 2011. “'Fact Checkers' Overwhelmingly Target Right-Wing Pols and Pundits” reads an April 2017 headline from NewsBusters, a site whose goal is to expose and combat “liberal media bias.” There’s even an entire blog dedicated to showing the ways in which PolitiFact is biased.

The fact-checking project, which Poynter owns, has rebuffed those accusations, pointing to its transparent methodology and funding (as well as its membership in the International Fact-Checking Network) as proof that it doesn’t have a political persuasion. And now, PolitiFact has an academic study to back it up.
The second paragraph mentions selection bias (taking the Weekly Standard quotation out of context) and other types of bias noted by PolitiFact Bias ("an entire blog dedicated to showing the ways in which PolitiFact is biased"--close enough, we suppose, thanks for linking us).

The third paragraph says PolitiFact has "rebuffed those accusations." We think "ignores those accusations" describes the situation more accurately.

The third paragraph goes on to mention PolitiFact's "transparent methodology" (true if you ignore the ambiguity and inconsistency) and transparent funding (yes, funded by some left-wing sources, but PolitiFact Bias does not use that as evidence of PolitiFact's bias) before claiming that PolitiFact "has an academic study to back it up."

"It"=PolitiFact's rebuffing of accusations it is biased????

That does not follow logically. To support PolitiFact's denials of the bias of which it is accused, the study would have to offer evidence countering the specific accusations. It doesn't do that.

Second, Funke's article suggests that the study shows a lack of bias. We see that idea in the title of Funke's piece as well as in the material from the third paragraph.

But that's not how science works. Even for the paper's specific area of study, it does not show that PolitiFact has no bias. At best it could show that the word choices it tested offer no significant indication of bias.

The difference is not small, and Funke's article even includes a quotation from one of the study's authors emphasizing the point:
But in a follow-up email to Poynter, Noah Smith, one of the report’s co-authors, added a caveat to the findings.

“This could be because there's really nothing to find, or because our tools aren't powerful enough to find what's there,” he said.
So the co-author says maybe the study's tools were not powerful enough to find the bias that exists. Yet Funke sticks with the title "Is PolitiFact biased? This content analysis says no."

Is it too much to ask for the title to agree with a co-author's description of the meaning of the study?

The content analysis did not say "no." It said (we summarize) "not in terms of these biased language indicators."
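The gap between "we found nothing" and "there is nothing" is easy to put in numbers. Here is a toy power calculation of our own devising (it is not the study's method, and the 55 percent rate, the 50 percent no-bias baseline and the sample of 100 items are purely hypothetical): a test with limited power will usually miss a real but modest bias.

# Toy illustration, not the study's method; hypothetical numbers throughout.
# Suppose a politically loaded wording really shows up 55% of the time
# (a neutral outlet would sit at 50%), and a tool examines 100 instances.
from statistics import NormalDist

P_TRUE, P_NULL, N, ALPHA = 0.55, 0.50, 100, 0.05

z_crit = NormalDist().inv_cdf(1 - ALPHA / 2)        # two-sided cutoff (~1.96)
se_null = (P_NULL * (1 - P_NULL) / N) ** 0.5        # spread if there is no bias
se_true = (P_TRUE * (1 - P_TRUE) / N) ** 0.5        # spread given the real bias
threshold = P_NULL + z_crit * se_null               # rate needed to "detect" bias

power = 1 - NormalDist(mu=P_TRUE, sigma=se_true).cdf(threshold)
print(f"chance the test detects the built-in bias: {power:.0%}")  # about 17%

# The test comes up empty roughly five times out of six even though the bias
# is real by construction. "Not detected" is not the same as "not there."

A null result from one set of tools supports, at most, "not detected by these tools."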

Funke's article paints a very misleading picture of the content and meaning of the study. The study refutes none of the major critiques of PolitiFact of which we are aware.


Afters

PolitiFact's methodology, funding and verified IFCN signatory status are supposed to assure us it has no political point of view?

We'd be more impressed if PolitiFact staffers revealed their votes in presidential elections and if more than a tiny percentage had voted Republican more than once in the past 25 years.

It's anybody's guess why fact checkers do not reveal their voting records, right?


Correction Aug. 11, 2018: Altered headline to read "an Independent Study" instead of "a Peer-Reviewed Study"

The Weekly Standard Notes PolitiFact's "Amazing" Fact Check

The Weekly Standard took note of PolitiFact's audacity in fact-checking Donald Trump's claim that the economy grew at the amazing rate of 4.1 percent in the second quarter.
The Trumpian assertion that moved the PolitiFact’s scrutineers to action? This one: “In the second quarter of this year, the United States economy grew at the amazing rate of 4.1 percent.” PolitiFact’s objection wasn’t to the data—the economy really did grow at 4.1 percent in the second quarter—but to the adjective: amazing.
That's amazing!

PolitiFact did not rate the statement on its "Truth-O-Meter" but published its "Share The Facts" box featuring the judgment "Strong, but not amazing."

PolitiFact claims it does not rate opinions and grants license for hyperbole.

As we have noted before, it must be the fault of Republicans who keep trying to use hyperbole without a license.

Friday, August 10, 2018

PolitiFact Editor: It's Frustrating When Others Do Not Follow Their Own Policies Consistently

PolitiFact Editor Angie Drobnic Holan says she finds it frustrating that Twitter does not follow its own policies (bold emphasis added):
The fracas over Jones illustrates a lot, including how good reporting and peer pressure can actually force the platforms to act. And while the reasons that Facebook, Apple and others banned Jones and InfoWars have to do with hate speech, Twitter’s inaction also confirms what fact-checkers have long thought about the company’s approach to fighting misinformation.

“They’re not doing anything, and I’m frustrated that they don’t enforce their own policies,” said Angie Holan, editor of (Poynter-owned) PolitiFact.
Tell us about it.

We started our "(Annotated) Principles of PolitiFact" page years ago to expose examples of the way PolitiFact selectively applies its principles. It's a shame we haven't had the time to keep that page updated, but our research indicates PolitiFact has failed to correct the problem to any noticeable degree.

Tuesday, August 7, 2018

The Phantom Cherry-pick

Would Sen. Bernie Sanders' Medicare For All plan save $2 trillion over 10 years on U.S. health care expenses?

Sanders and the left were on fire this week trying to co-opt a Mercatus Center paper by Charles Blahous. Sanders and others claimed Blahous' paper confirmed the M4A plan would save $2 trillion over 10 years.

PolitiFact checked in on the question and found Sanders' claim "Half True":


PolitiFact's summary encapsulates its reasoning:
The $2 trillion figure can be traced back to the Mercatus report. But it is one of two scenarios the report offers, so Sanders’ use of the term "would" is too strong. The alternative figure, which assumes that a Medicare for All plan isn’t as successful in controlling costs as its sponsors hope it will be, would lead to an increase of almost $3.3 trillion in national health care expenditures, not a decline. Independent experts say the alternative scenario of weaker cost control is at least as plausible.

We rate the statement Half True.
Throughout its report, as pointed out at Zebra Fact Check, PolitiFact treats the $2 trillion in savings as a serious attempt to project the true effects of the M4A bill.

In fact, the Mercatus report uses what its author sees as overly rosy assumptions about the bill's effects to estimate a lower bound for the bill's very high costs, and then proceeds to offer reasons why the actual costs would likely greatly exceed that lower bound.

In other words, the cherry Sanders tries to pick is a faux cherry. And a fact checker ought to recognize that fact. It's one thing to pick a cherry that's a cherry. It's another thing to pick a cherry that's a fake.

Making Matters Worse

PolitiFact makes matters worse by overlooking Sanders' central error: circular reasoning.

Sanders takes a projection based on favorable assumptions as evidence that those favorable assumptions are reasonable. But a conclusion reached from a set of assumptions does nothing to make the assumptions more true. Sanders' claim suggests the opposite: that even though the Blahous paper says it is using unrealistic assumptions, the conclusions it reaches from those assumptions somehow make the assumptions reasonable.

A fact checker ought to point it out when a politician peddles such nonsensical ideas.

PolitiFact made itself guilty of bad reporting while overlooking Sanders' central error.

Reader: "PolitiFact is not biased. Republicans just lie more."

Every few years or so we recognize a Comment of the Week.

Jehosephat Smith dropped by on Facebook to inform us that PolitiFact is not biased:
Politifact is not biased, Republicans just lie more. That is objectively obvious by this point and if your mind isn't moved by current realities then you're willfully ignorant.
As we have prided ourselves on trying to communicate clearly exactly why we find PolitiFact biased, we find such comments fascinating on two levels.


First, how can one claim that PolitiFact is not biased? On what evidence would one rely to support such a claim?

Second, how can one contemplate claiming PolitiFact isn't biased without making some effort to address the arguments we've made showing PolitiFact is biased?

We invited Mr. Smith to make his case either here on the website or on Facebook. But rather than simply heaping Smith's burden of proof on his head we figured his comment would serve us well as an excuse to again summarize the evidence showing PolitiFact's bias to the left.


Journalists lean left
Journalists as a group lean left. And they lean markedly left of the general U.S. population. Without knowing anything else at all about PolitiFact we have reason to expect that it is made up mostly of left-leaning journalists. If PolitiFact journalists lean left as a group then right out of the box we have reason to look for evidence that their political leaning affects their fact-checking.

PolitiFact's errors lean left I
When PolitiFact makes an egregious reporting error, the error tends to harm the right or fit with left-leaning thinking. For example, when PolitiFact's Louis Jacobson reported that Hobby Lobby's policy on health insurance "barred" women from using certain types of birth control, we noted that pretty much anybody with any rightward lean would have spotted the mistake and prevented its publication. Instead, PolitiFact published it and later changed it without posting a correction notice. We have no trouble finding such examples.

PolitiFact's errors lean left II
We performed a study of PolitiFact's calculations of percentage error. PolitiFact often performs the calculation incorrectly, and errors tend to benefit Democrats (caveat: small data set).
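For readers unfamiliar with the arithmetic, here is a minimal sketch of how a percentage-error calculation can go wrong, with invented figures (the base-choice mistake shown is one common failure mode, offered as an illustration rather than a catalog of the errors our study documents):

# Invented figures, for illustration only: how far off is a claimed value
# of 150 when the true value is 100?
claimed, true = 150.0, 100.0

error_vs_true = (claimed - true) / true       # 0.50 -> "50 percent too high"
error_vs_claim = (claimed - true) / claimed   # 0.33 -> looks like only "33 percent off"

print(f"{error_vs_true:.0%} error using the true value as the base")
print(f"{error_vs_claim:.0%} error using the claimed value as the base")

# The conventional percent-error calculation divides by the true value.
# Dividing by the claimed value instead shrinks the apparent error.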

PolitiFact's ratings lean left I
When PolitiFact rates Republicans and Democrats on closely parallel claims Democrats often fare better. For example, when PolitiFact investigated a Democratic Party charge that Rep. Bill McCollum raised his own pay while in Congress PolitiFact said it was true. But when PolitiFact investigated a Republican charge that Sherrod Brown had raised his own pay PolitiFact discovered that members of Congress cannot raise their own pay and rated the claim "False." We have no trouble finding such examples.

PolitiFact's ratings lean left II
We have done an ongoing and detailed study looking at partisan differences in PolitiFact's application of its "Pants on Fire" rating. PolitiFact describes no objective criterion for distinguishing between "False" and "Pants on Fire" ratings, so we hypothesize that the difference between the two ratings is subjective. Republicans are over 50 percent more likely than Democrats to have a false claim deemed "Pants on Fire," apparently for subjective reasons.
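To illustrate the arithmetic behind that comparison (the counts below are hypothetical, chosen only to show how the ratio is computed, not our study's actual tallies):

# Hypothetical counts, for illustration only -- not our study's data.
# The comparison: of each party's claims rated in the false range
# ("False" plus "Pants on Fire"), what share gets escalated to "Pants on Fire"?
rep = {"False": 70, "Pants on Fire": 30}
dem = {"False": 81, "Pants on Fire": 19}

def pof_share(counts):
    return counts["Pants on Fire"] / (counts["False"] + counts["Pants on Fire"])

rep_share, dem_share = pof_share(rep), pof_share(dem)
ratio = rep_share / dem_share

print(f"GOP share: {rep_share:.0%}  Dem share: {dem_share:.0%}  ratio: {ratio:.2f}")
# A ratio of about 1.58 would mean Republicans were roughly 58 percent more
# likely than Democrats to have a false claim rated "Pants on Fire."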

PolitiFact's explanations lean left
When PolitiFact explains topics, its explanations tend to lean left. For example, when Democrats and liberals say Social Security has never contributed a dime to the deficit, PolitiFact gives the claim a rating such as "Half True," apparently unable to discover the fact that Social Security ran a deficit during years when the program was on-budget (and therefore unquestionably contributed directly to the deficit in those years). PolitiFact resisted Republican claims that the ACA cut Medicare, explaining that the so-called Medicare cuts were not truly cuts because the Medicare budget continued to increase. Yet PolitiFact decided that when the Trump administration slowed the growth of Medicaid, it was okay to refer to the slowed growth as a program cut. Again, we have no trouble finding such examples.

How can a visitor to our site (including Facebook) contemplate declaring PolitiFact isn't biased without coming prepared to answer our argument?


Friday, July 6, 2018

PolitiFact: "European Union"=Germany

PolitiFact makes all kinds of mistakes, but some serve as better examples of ideological bias than others. A July 2, 2018 PolitiFact fact check of President Donald Trump serves as pretty good evidence of a specific bias against Mr. Trump:


The big clue that PolitiFact botched this fact check occurs in the image we cropped from PolitiFact's website.

Donald Trump states that the EU sends millions of cars to the United States. PolitiFact adjusts that claim, treating it as though Trump had said the EU sends millions of German cars per year. Yet Trump did not specify German cars and did not specify an annual rate.

PolitiFact quotes Trump:
At one point, he singled out German cars.

"The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions," Trump said.
Saying Trump "singled out German cars" counts as twisting the truth. Trump "singled out" German cars in the sense of offering two examples of German cars among the millions sent to the United States by the European Union.

It counts as a major error for a fact checker to ignore the clear context showing that Trump was talking about the European Union and not simply German cars of one make (Mercedes) or another (BMW). And if those German makes account for large individual shares of EU exports to the United States then Trump deserves credit for choosing strong examples.

It counts as another major error for a fact checker to assume an annual rate in the millions when the speaker did not specify any such rate. How did PolitiFact determine that Trump was not talking about a monthly rate, or the rate over a decade? Making assumptions is not the same thing as fact-checking.

When a speaker uses ambiguous language, the responsible fact checker offers the speaker charitable interpretation. That means using the interpretation that makes the best sense of the speaker's words. In this case, the point is obvious: The European Union exports millions of cars to the United States.

But instead of looking at the number of cars the European Union exports to the United States, PolitiFact cherry picked German cars. That focus came through strongly in PolitiFact's concluding paragraphs:
Our ruling

Trump said, "The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions."

Together, Mercedes, BMW and Volkswagen imported less than a million cars into the United States in 2017, not "millions."

More importantly, Trump ignores that a large proportion of German cars sold in the United States were also built here, using American workers and suppliers whose economic fortunes are boosted by Germany’s carnakers [sic]. Other U.S.-built German cars were sold as exports.

We rate the statement False.
That's sham fact-checking.

A serious fact check would look at the European Union's exports specifically to the United States. The European Automobile Manufacturers Association has those export numbers available from 2011 through 2016. From 2011 through 2013 the number was under 1 million annually. For 2014 through 2016 the number was over 1 million annually.

Data through September 2017 from the same source shows the European Union on pace to surpass 1 million units for the fourth consecutive year.


Does exporting over 1 million cars to the United States per year for three or four consecutive years count as exporting cars to the United States by the millions (compare the logic)?
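As a back-of-the-envelope check (using only the "over 1 million annually" floor from the figures above, and treating the 2017 pace as holding), the cumulative total crosses into the millions quickly:

# Back-of-the-envelope: a conservative floor, not precise export data.
years = [2014, 2015, 2016, 2017]     # years at or above the 1-million pace per the ACEA figures
floor_per_year = 1_000_000           # "over 1 million annually"
cumulative_floor = floor_per_year * len(years)
print(f"at least {cumulative_floor:,} cars over {len(years)} years")  # 4,000,000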

We think we can conclude with certainty that the statement does not count as "False."

Our exit questions for PolitiFact: How does a non-partisan fact checker justify ignoring the context of Trump's statement referring specifically to the European Union? How did the European Union get to be Germany?

Friday, June 22, 2018

PolitiFact Corrects, We Evaluate the Correction

PolitiFact corrected an error in one of its fact checks this past week, most likely in response to an email we sent on June 20, 2018.
Dear PolitiFact,

A recent PolitiFact fact check contains the following paragraph (bold emphasis added):
Soon after, in February 2017, Nehlen wrote on Twitter that Islam was not a religion of peace and posted a photo of a plane striking the World Trade Center with the caption, "9/11 would’ve been a Wonderful #DayWithoutImmigrants." In the following months, Nehlen also tweeted that "Islam is not your friend," implied that Muslim communities should be bombed and retweeted posts saying Bill and Hillary Clinton were murdering associates.

The hotlink ("implied") leads to an archived Twitter page. Unless I'm missing somelthing [sic], the following represents the best candidate as a supporting evidence:


Unless "Muslim no-go zones" represent typical Muslim communities, PolitiFact's summary of Nehlen's tweet distorts the truth. If a politician similarly omitted context in this fashion, would PolitiFact not mete out a "Half True" rating or worse?

If PolitiFact excuses itself from telling the truth where people accused of bigotry are involved, that principle ought to appear in its statement of principles.

Otherwise, a correction or clarification is in order. Thanks.
We were surprised to see that PolitiFact updated the story with a clarification within two days. And PolitiFact did most things right with the fix, which it labeled a "clarification."

Here's a checklist:
  1. Paid attention to the criticism
  2. Updated the article with a clarification
  3. Attached a clarification notice at the bottom of the fact check
  4. Added the "Corrections and Updates" tag to the article, ensuring it would appear on PolitiFact's "Corrections and Updates" page
Still, we think PolitiFact can do better.

Specifically, we fault PolitiFact for its lack of transparency regarding the specifics of the mistake.

Note what Craig Silverman, long associated with PolitiFact's owner, the Poynter Institute, said in an American Press Institute interview about letting readers know what changed:

News organizations aren’t the only ones on the internet who are practicing some form of journalism. There are a number of sites or blogs or individual bloggers who may not have the same standards for corrections. Is there any way journalists or anyone else can contribute to a culture of corrections? Where does it start?

SILVERMAN: Bloggers actually ended up doing a little bit of correction innovation. In the relatively early blogging days, you’d often see strikethrough used to cross out a typo or error. This was a lovely use of the medium, as it showed what was incorrect and also included the correct information after. In that respect, bloggers modelled good behavior, and showed how digital corrections can work. We can learn from that.

It all starts with a broad commitment to acknowledge and even publicize mistakes. That is the core of the culture, the ethic of correction.
We think Silverman has it right. Transparency in corrections involves letting the reader know what the story got wrong. In this case, PolitiFact reported that a tweet implied that somebody wanted to bomb Muslim communities. The tweet referred, in fact, to a small subset of Muslim communities known as "no-go zones"--areas where non-Muslims allegedly face unusual danger to their person and property.

PolitiFact explained its error like this:
This fact-check has been updated to more precisely refer to a previous Nehlen tweet
That notice is transparent about the fact that the text of the fact check was changed, and about the part of the fact check that was changed (information about a Nehlen tweet). But it mostly lacks transparency about what the fact check got wrong and the misleading impression it created.

We think journalists, including PolitiFact, stand to gain public trust by full transparency regarding errors. Though that boost to public trust assumes that errors aren't so ridiculous and rampant that transparency instead destroys the organization's credibility.

Is that what PolitiFact fears when it issues these vague descriptions of its inaccuracies?

Still, we're encouraged that PolitiFact performed a clarification and mostly followed its corrections policy. Ignoring needed corrections is worse than falling short of best practices with the corrections.