Tuesday, August 21, 2018

All About That Base(line)

When we do not publish for days at a time it does not mean that PolitiFact has cleaned up its act and learned to fly straight.

We simply lack the time to do a thorough job policing PolitiFact's mistakes.

What caught our attention this week? A fact check authored by one of PolitiFact's interns, Lucia Geng.



We were curious about this fact check thanks to PolitiFact's shifting standards on what counts as a budget cut. In this case the cut itself was straightforward: A lower budget one year compared to the preceding year. In that respect the fact check wasn't a problem.

But we found a different problem--also a common one for PolitiFact. At least when PolitiFact is fact-checking Democrats.

The fact check does not question the baseline.

The baseline is simply the level chosen for comparison. The Florida Democratic Party chose to compare the 2011 water management districts' collective budgets with the ones in 2012 and found that they were about $700 million lower. Our readers should note that the FDP started making this claim in 2018, not 2012.

It's just crazy for a fact checker to perform a fact check without looking at other potential baselines. Usually politicians and political groups choose a baseline for a reason. Comparing 2011 to 2012 appears to make sense superficially. The year 2011 represents Republican-turned-Independent Governor Charlie Crist. The year 2012 represents the current governor, also a Republican, Rick Scott.

But what if there's more to it? Any fact checker should look at data covering a longer time period to get an idea of what the claimed cut would actually mean.

We suspected that 2010 and before might show much lower budget numbers. To our surprise, the budget numbers were far higher, at least for the South Florida Water Management District whose budget dwarfs those of the other districts.

From 2010 to 2011, Gov. Crist cut the SFWMD budget by about $443 million. From 2009 to 2010 Gov. Crist cut the SFWMD budget by almost $1.5 billion. That's not a typo.

The message here is not that Gov. Crist was some kind of anti-environmental zealot. What we have here is a sign that the water management district budgets are volatile. They can change dramatically from one year to the next. The big question is why, and a secondary question is whether the reason should affect our understanding of the $700 million Gov. Scott cut from the combined water management district budgets between 2011 and 2012.
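To make the point concrete, here is a minimal sketch (in Python, using hypothetical budget figures rather than actual district numbers) of how the choice of baseline drives the headline figure:

# A minimal sketch of how the chosen baseline drives the headline "cut."
# The budget figures below are hypothetical placeholders, not actual water
# management district numbers; the only point is that the same final-year
# budget looks very different against different baselines.
budgets = {  # fiscal year -> total budget, in millions (hypothetical)
    2009: 3000,
    2010: 1500,
    2011: 1050,
    2012: 350,
}

def change_from_baseline(baseline_year, year):
    """Return the budget change (negative = cut) measured from baseline_year."""
    return budgets[year] - budgets[baseline_year]

for base in (2009, 2010, 2011):
    print(f"{base} -> 2012: {change_from_baseline(base, 2012):+,} million")

With these placeholder numbers, the same 2012 budget reads as a $700 million cut against 2011 but a far larger cut against earlier years, which is exactly why the baseline deserves scrutiny.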

A fact checker who looked at the volatile changes in spending could then use that knowledge to ask officials at the water management districts questions that would help answer our two questions above. Geng listed email exchanges with officials from each of Florida's water management districts. But the fact check contains no quotations from those officials. It does not even refer to their responses via paraphrase or summary. We don't even know what questions Geng asked.

We did not contact the water management districts. But we looked for a clue regarding the budget volatility in the SFWMD's fiscal year 2011 projections for its future budgets. The agency expected capital expenditures to drop by more than half after 2011.

Rick Scott had not been elected governor at that time (October 2010).

This suggests that the water management districts had a budget cut baked into their long-term program planning, quite possibly strongly influenced by budgeting for the Everglades restoration project (including land purchases). If so, that counts as critical context omitted from the PolitiFact Florida fact check.

We flagged these problems for PolitiFact on Twitter and via email. As usual, the faux-transparent fact checkers responded with a stony silence and made no apparent effort to fix the deficiencies.

Aside from the hole in the story we felt the "Mostly True" rating was very forgiving of the Florida Democratic Party's blatant cherry-picking. And somehow PolitiFact even resisted using the term "cherry-picking" or any close synonym.



Afters:
The Florida Democratic Party, in the same tweet PolitiFact fact-checked, recycled the claim that Gov. Scott "banned the term 'Climate Change.'"

We suppose that's not the sort of thing that makes PolitiFact editors wonder "Is that true?"

Saturday, August 11, 2018

Did an Independent Study Find PolitiFact Is Not Biased?

An email alert from August 10, 2018 led us to a blaring headline from the International Fact-Checking Network:

Is PolitiFact biased? This content analysis says no

Though "content analysis" could mean the researchers looked at pretty much anything having to do with PolitiFact's content, we suspected the article was talking about an inventory of PolitiFact's word choices, looking for words associated with a political point of view. For example, "anti-abortion" and "pro-life" signal political points of view. Using those and similar terms may tip off readers regarding the politics those who produce the news.

PolitiFact Bias has never used the presence of such terms to support our argument that PolitiFact is biased. In fact, I (Bryan) tweeted a brief judgment of the study back on July 16, 2018:
We have two major problems with the IFCN article at Poynter.org (by Daniel Funke).

First, it implies that the word-use inventory somehow negates the evidence of bias that PolitiFact's critics cite, evidence that does not involve the types of word choices the study was designed to detect:
It’s a critique that PolitiFact has long been accustomed to hearing.

“PolitiFact is engaging in a great deal of selection bias,” The Weekly Standard wrote in 2011. “'Fact Checkers' Overwhelmingly Target Right-Wing Pols and Pundits” reads an April 2017 headline from NewsBusters, a site whose goal is to expose and combat “liberal media bias.” There’s even an entire blog dedicated to showing the ways in which PolitiFact is biased.

The fact-checking project, which Poynter owns, has rebuffed those accusations, pointing to its transparent methodology and funding (as well as its membership in the International Fact-Checking Network) as proof that it doesn’t have a political persuasion. And now, PolitiFact has an academic study to back it up.
The second paragraph mentions selection bias (taking the Weekly Standard quotation out of context) and other types of bias noted by PolitiFact Bias ("an entire blog dedicated to showing the ways in which PolitiFact is biased"--close enough, we suppose, thanks for linking us).

The third paragraph says PolitiFact has "rebuffed those accusations." We think "ignores those accusations" describes the situation more accurately.

The third paragraph goes on to mention PolitiFact's "transparent methodology" (true if you ignore the ambiguity and inconsistency) and transparent funding (yes, funded by some left-wing sources, but PolitiFact Bias does not use that as evidence of PolitiFact's bias) before claiming that PolitiFact "has an academic study to back it up."

"It"=PolitiFact's rebuffing of accusations it is biased????

That does not follow logically. To support PolitiFact's denials of the bias of which it is accused, the study would have to offer evidence countering the specific accusations. It doesn't do that.

Second, Funke's article suggests that the study shows a lack of bias. We see that idea in the title of Funke's piece as well as in the material from the third paragraph.

But that's not how science works. Even for the paper's specific area of study, it does not show that PolitiFact has no bias. At best it could show that the word choices it tested offer no significant indication of bias.

The difference is not small, and Funke's article even includes a quotation from one of the study's authors emphasizing the point:
But in a follow-up email to Poynter, Noah Smith, one of the report’s co-authors, added a caveat to the findings.

“This could be because there's really nothing to find, or because our tools aren't powerful enough to find what's there,” he said.
So the co-author says maybe the study's tools were not powerful enough to find the bias that exists. Yet Funke sticks with the title "Is PolitiFact biased? This content analysis says no."

Is it too much to ask for the title to agree with a co-author's description of the meaning of the study?

The content analysis did not say "no." It said (we summarize) "not in terms of these biased language indicators."

Funke's article paints a very misleading picture of the content and meaning of the study. The study refutes none of the major critiques of PolitiFact of which we are aware.


Afters

PolitiFact's methodology, funding and verified IFCN signatory status are supposed to assure us it has no political point of view?

We'd be more impressed if PolitiFact staffers revealed their votes in presidential elections and it turned out that more than a tiny percentage had voted Republican more than once in the past 25 years.

It's anybody's guess why fact checkers do not reveal their voting records, right?


Correction Aug. 11, 2018: Altered headline to read "an Independent Study" instead of "a Peer-Reviewed Study"

The Weekly Standard Notes PolitiFact's "Amazing" Fact Check

The Weekly Standard took note of PolitiFact's audacity in fact-checking Donald Trump's claim that the economy grew at the amazing rate of 4.1 percent in the second quarter:
The Trumpian assertion that moved PolitiFact’s scrutineers to action? This one: “In the second quarter of this year, the United States economy grew at the amazing rate of 4.1 percent.” PolitiFact’s objection wasn’t to the data—the economy really did grow at 4.1 percent in the second quarter—but to the adjective: amazing.
That's amazing!

PolitiFact did not rate the statement on its "Truth-O-Meter" but published its "Share The Facts" box featuring the judgment "Strong, but not amazing."

PolitiFact claims it does not rate opinions and that it grants license for hyperbole.

As we have noted before, it must be the fault of Republicans who keep trying to use hyperbole without a license.

Friday, August 10, 2018

PolitiFact Editor: It's Frustrating When Others Do Not Follow Their Own Policies Consistently

PolitiFact Editor Angie Drobnic Holan says she finds it frustrating that Twitter does not follow its own policies (bold emphasis added):
The fracas over Jones illustrates a lot, including how good reporting and peer pressure can actually force the platforms to act. And while the reasons that Facebook, Apple and others banned Jones and InfoWars have to do with hate speech, Twitter’s inaction also confirms what fact-checkers have long thought about the company’s approach to fighting misinformation.

“They’re not doing anything, and I’m frustrated that they don’t enforce their own policies,” said Angie Holan, editor of (Poynter-owned) PolitiFact.
Tell us about it.

We started our "(Annotated) Principles of PolitiFact" page years ago to expose examples of the way PolitiFact selectively applies its principles. It's a shame we haven't had the time to keep that page updated, but our research indicates PolitiFact has failed to correct the problem to any noticeable degree.

Tuesday, August 7, 2018

The Phantom Cherry-pick

Would Sen. Bernie Sanders' Medicare For All plan save $2 trillion over 10 years on U.S. health care expenses?

Sanders and the left were on fire this week trying to co-opt a Mercatus Center paper by Charles Blahous. Sanders and others claimed Blahous' paper confirmed the M4A plan would save $2 trillion over 10 years.

PolitiFact checked in on the question and found Sanders' claim "Half True":


PolitiFact's summary encapsulates its reasoning:
The $2 trillion figure can be traced back to the Mercatus report. But it is one of two scenarios the report offers, so Sanders’ use of the term "would" is too strong. The alternative figure, which assumes that a Medicare for All plan isn’t as successful in controlling costs as its sponsors hope it will be, would lead to an increase of almost $3.3 trillion in national health care expenditures, not a decline. Independent experts say the alternative scenario of weaker cost control is at least as plausible.

We rate the statement Half True.
Throughout its report, as pointed out at Zebra Fact Check, PolitiFact treats the $2 trillion in savings as a serious attempt to project the true effects of the M4A bill.

In fact, the Mercatus report uses what its author sees as overly rosy assumptions about the bill's effects to estimate a lower bound for the bill's very high costs, then offers reasons why the actual costs will likely greatly exceed that lower bound.

In other words, the cherry Sanders tries to pick is a faux cherry. And a fact checker ought to recognize that fact. It's one thing to pick a cherry that's a cherry. It's another thing to pick a cherry that's a fake.

Making Matters Worse

PolitiFact makes matters worse by overlooking Sanders' central error: circular reasoning.

Sanders takes a projection based on favorable assumptions as evidence that the favorable assumptions are reasonable. But a conclusion one reaches based on assumptions does not make the assumptions any more true. Sanders' claim suggests the opposite: that even though the Blahous paper says it is using unrealistic assumptions, the conclusions it reaches using those assumptions make the assumptions reasonable.

A fact checker ought to point it out when a politician peddles such nonsensical ideas.

PolitiFact made itself guilty of bad reporting while overlooking Sanders' central error.

Reader: "PolitiFact is not biased. Republicans just lie more."

Every few years or so we recognize a Comment of the Week.

Jehosephat Smith dropped by on Facebook to inform us that PolitiFact is not biased:
Politifact is not biased, Republicans just lie more. That is objectively obvious by this point and if your mind isn't moved by current realities then you're willfully ignorant.
As we have prided ourselves on trying to communicate clearly exactly why we find PolitiFact biased, we find such comments fascinating on two levels.


First, how can one claim that PolitiFact is not biased? On what evidence would one rely to support such a claim?

Second, how can one contemplate claiming PolitiFact isn't biased without making some effort to address the arguments we've made showing PolitiFact is biased?

We invited Mr. Smith to make his case either here on the website or on Facebook. But rather than simply heaping Smith's burden of proof on his head we figured his comment would serve us well as an excuse to again summarize the evidence showing PolitiFact's bias to the left.


Journalists lean left
Journalists as a group lean left. And they lean markedly left of the general U.S. population. Without knowing anything else at all about PolitiFact we have reason to expect that it is made up mostly of left-leaning journalists. If PolitiFact journalists lean left as a group then right out of the box we have reason to look for evidence that their political leaning affects their fact-checking.

PolitiFact's errors lean left I
When PolitiFact makes an egregious reporting error, the error tends to harm the right or fit with left-leaning thinking. For example, when PolitiFact's Louis Jacobson reported that Hobby Lobby's policy on health insurance "barred" women from using certain types of birth control, we noted that pretty much anybody with any rightward lean would have spotted the mistake and prevented its publication. Instead, PolitiFact published it and later changed it without posting a correction notice. We have no trouble finding such examples.

PolitiFact's errors lean left II
We performed a study of PolitiFact's calculations of percentage error. PolitiFact often performs the calculation incorrectly, and errors tend to benefit Democrats (caveat: small data set).
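For readers who want to see what we mean, here is a minimal sketch of one common way a percentage-error calculation goes wrong (dividing by the wrong base); the numbers are illustrative and not taken from any particular fact check:

# A minimal sketch of a percentage-error calculation and one common way to get
# it wrong (dividing by the wrong base). The numbers are illustrative only and
# are not drawn from any particular PolitiFact fact check.
claimed = 120.0  # figure a politician cites
actual = 100.0   # figure the underlying data support

correct_error = (claimed - actual) / actual * 100     # 20.0 percent too high
wrong_base_error = (claimed - actual) / claimed * 100  # about 16.7 percent, understating the error

print(f"correct: {correct_error:.1f}%  wrong base: {wrong_base_error:.1f}%")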

PolitiFact's ratings lean left I
When PolitiFact rates Republicans and Democrats on closely parallel claims Democrats often fare better. For example, when PolitiFact investigated a Democratic Party charge that Rep. Bill McCollum raised his own pay while in Congress PolitiFact said it was true. But when PolitiFact investigated a Republican charge that Sherrod Brown had raised his own pay PolitiFact discovered that members of Congress cannot raise their own pay and rated the claim "False." We have no trouble finding such examples.

PolitiFact's ratings lean left II
We have done an ongoing and detailed study looking at partisan differences in PolitiFact's application of its "Pants on Fire" rating. PolitiFact describes no objective criterion for distinguishing between "False" and "Pants on Fire" ratings, so we hypothesize that the difference between the two ratings is subjective. Republicans are over 50 percent more likely than Democrats to have a false rating deemed "Pants on Fire" false for apparently subjective reasons.
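Here is a minimal sketch, using hypothetical counts rather than our actual study data, of what that comparison means:

# A minimal sketch, with hypothetical counts, of the comparison described above.
# The counts are placeholders, not our study data; a relative likelihood of
# about 1.5 is what "over 50 percent more likely" means.
rep_false, rep_pof = 200, 60  # hypothetical "False" and "Pants on Fire" counts, Republicans
dem_false, dem_pof = 150, 27  # hypothetical counts, Democrats

rep_share = rep_pof / (rep_false + rep_pof)  # share of false-rated claims deemed "Pants on Fire"
dem_share = dem_pof / (dem_false + dem_pof)

print(f"Republican PoF share: {rep_share:.1%}")
print(f"Democratic PoF share: {dem_share:.1%}")
print(f"Relative likelihood: {rep_share / dem_share:.2f}")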

PolitiFact's explanations lean left
When PolitiFact explains topics its explanations tend to lean left. For example, when Democrats and liberals say Social Security has never contributed a dime to the deficit, PolitiFact gives it a rating such as "Half True," apparently unable to discover the fact that Social Security has run a deficit during years when the program was on-budget (and therefore unquestionably contributed directly to the deficit in those years). PolitiFact resisted Republican claims that the ACA cut Medicare, explaining that the so-called Medicare cuts were not truly cuts because the Medicare budget continued to increase. Yet when the Trump administration slowed the growth of Medicaid, PolitiFact discovered it was okay to refer to the slowed growth as a program cut. Again, we have no trouble finding such examples.

How can a visitor to our site (or our Facebook page) contemplate declaring PolitiFact isn't biased without coming prepared to answer our argument?


Friday, July 6, 2018

PolitiFact: "European Union"=Germany

PolitiFact makes all kinds of mistakes, but some serve as better examples of ideological bias than others. A July 2, 2018 PolitiFact fact check of President Donald Trump serves as pretty good evidence of a specific bias against Mr. Trump:


The big clue that PolitiFact botched this fact check occurs in the image we cropped from PolitiFact's website.

Donald Trump states that the EU sends millions of cars to the United States. PolitiFact adjusts that claim, treating it as though Trump had specified German cars and an annual rate of millions of German cars. Yet Trump specified neither German cars nor an annual rate.

PolitiFact quotes Trump:
At one point, he singled out German cars.

"The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions," Trump said.
Saying Trump "singled out German cars" counts as twisting the truth. Trump "singled out" German cars in the sense of offering two examples of German cars among the millions sent to the United States by the European Union.

It counts as a major error for a fact checker to ignore the clear context showing that Trump was talking about the European Union and not simply German cars of one make (Mercedes) or another (BMW). And if those German makes account for large individual shares of EU exports to the United States then Trump deserves credit for choosing strong examples.

It counts as another major error for a fact checker to assume an annual rate in the millions when the speaker did not specify any such rate. How did PolitiFact determine that Trump was not talking about a monthly rate, or the rate over a decade? Making assumptions is not the same thing as fact-checking.

When a speaker uses ambiguous language, the responsible fact checker offers the speaker charitable interpretation. That means using the interpretation that makes the best sense of the speaker's words. In this case, the point is obvious: The European Union exports millions of cars to the United States.

But instead of looking at the number of cars the European Union exports to the United States, PolitiFact cherry-picked German cars. That focus came through strongly in PolitiFact's concluding paragraphs:
Our ruling

Trump said, "The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions."

Together, Mercedes, BMW and Volkswagen imported less than a million cars into the United States in 2017, not "millions."

More importantly, Trump ignores that a large proportion of German cars sold in the United States were also built here, using American workers and suppliers whose economic fortunes are boosted by Germany’s carnakers [sic]. Other U.S.-built German cars were sold as exports.

We rate the statement False.
That's sham fact-checking.

A serious fact check would look at the European Union's exports specifically to the United States. The European Automobile Manufacturers Association has those export numbers available from 2011 through 2016. From 2011 through 2013 the number was under 1 million annually. For 2014 through 2016 the number was over 1 million annually.

Data through September 2017 from the same source shows the European Union on pace to surpass 1 million units for the fourth consecutive year.
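For anyone who wants the cumulative arithmetic spelled out, here is a minimal sketch using rough approximations of the figures above (not the exact ACEA data):

# A minimal sketch of the cumulative arithmetic. The per-year figures are rough
# approximations of the ACEA numbers described above (just over 1 million EU
# passenger-car exports to the U.S. annually, 2014 through 2017), not exact data.
annual_exports = {2014: 1.1e6, 2015: 1.2e6, 2016: 1.2e6, 2017: 1.0e6}
total = sum(annual_exports.values())
print(f"Cumulative 2014-2017: roughly {total / 1e6:.1f} million cars")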


Does exporting over 1 million cars to the United States per year for three or four consecutive years count as exporting cars to the United States by the millions (compare the logic)?

We think we can conclude with certainty that the notion does not count as "False."

Our exit question for PolitiFact: How does a non-partisan fact checker justify ignoring the context of Trump's statement referring specifically to the European Union? How did the European Union get to be Germany?