Friday, August 31, 2018

False Stuff From Fact Checker (PolitiFact)

A funny thing happened when PolitiFact fact-checked a claim about a bias against conservative websites: PolitiFact did not fact check its topic.

No, we're not kidding. Instead of researching whether the claim was true, PolitiFact spent its time undermining the source of the claim. And PolitiFact even used a flatly false claim of its own toward that end (bold emphasis added):
The chart is not neutral evidence supporting Trump’s point, and it labels anything not overtly conservative as "left." In the "left" category are such rigorously mainstream outlets as the Associated Press and Reuters. The three big broadcast networks — ABC, NBC, CBS — are considered "left," as are the Washington Post and the New York Times. Other media outlets that produce a large amount of content every day, including CNN, NPR, Politico, USA Today, and CNBC, are labeled "left."
The statement we highlighted counts as hyperbole at best. On its face, it simply counts as a false statement, exposed as such by the accompanying graphic:


If PolitiFact's claim were true, then any outlet not labeled "left" would overtly identify itself as conservative. We can disprove PolitiFact's claim easily by looking at the outlets straddling the middle line of the chart. If a media outlet straddles the line between left and right, then that organization is not classified as "left." And if such media organizations do not overtly identify as conservative, then PolitiFact's claim is false.

Overtly conservative? Let's go down the line:
And for good measure: The Economist, located on the right side of the chart. Is The Economist overtly conservative? (See also Barron's and McClatchy.)

Did PolitiFact even bother to research its own claim? Where are the sources listed? Or did writer Louis Jacobson just happen to have that factoid rattling around in his cranium?

But it's not just Jacobson! The factoid gets two mentions in the fact check (the second one in the summary paragraph) and was okayed by editor Katie Sanders (recently promoted for obvious reasons to managing editor at PolitiFact) and at least two other editors from the PolitiFact "star chamber" that decides the "Truth-O-Meter" rating.

As we have asked before, how can a non-partisan and objective fact checker make such a mistake?

Inconceivable!

And how does a fact checker properly justify issuing a ruling without bothering to check on the fact of the matter?

Monday, August 27, 2018

PolitiFact Illinois: "Duckwork's background check claim checks out" (Updated x2)

Huh.

On August 26, 2018 PolitiFact Illinois published a fact check of Sen. Tammy Duckworth (D-Ill.) with the title "Duckwork's background check claim checks out."

We find it hard to believe a fact-checking organization could prove so careless as to badly misspell the last name of one of its state's U.S. senators in a headline.

And we find it even harder to believe the error could last until the next day (today) without receiving a correction.

We will update this item to track whether PolitiFact Illinois runs a correction notice when it fixes the problem.

Assuming it fixes the problem.



Update Aug. 27, 2018:

Apparently "Duckwork" is a fairly common misspelling of Sen. Duckworth's name. NPR (Illinois) made a similar mistake in January 2018 and fixed it on the sly. Don't journalists know better? Misspelling a name warrants a transparent correction.


Update Aug. 28, 2018:

Very early on Aug. 28, 2018, I tweeted a message pointing out this error and tagging the author, editor and PolitiFact Illinois.


When I checked hours later PolitiFact had corrected the spelling of Duckworth's name but added no correction notice to the item.

It's important to note, we suppose, that PolitiFact's corrections policy does not obligate it to append a correction notice on the basis of a misspelled name. That policy, in fact, appears to promise that PolitiFact will fix all of its spelling errors without acknowledging error (italics added for emphasis):
Typos, grammatical errors, misspellings – We correct typos, grammatical errors, misspellings, transpositions and other small errors without a mark of correction or tag and as soon as they are brought to our attention.
That seems to us like an unusually low bar for running a correction. Compare the above with the aggressive use of corrections involving misspelled names by PolitiFact's parent organization, the Poynter Institute.

Here's one example from that page:
‘Newspapers killed newspapers,’ says reporter who quit the business (March 20, 2013)
Correction: This post misspelled Bird’s last name in one instance.
Journalists traditionally seem to give special attention to misspellings involving names. Misspelling a person's name counts as a different degree of error than a minor typographical error:
In journalism schools across Canada this week, many a freshman student will learn one of the foremost lessons of the J-school classroom: Get someone’s name wrong and you get a failing grade.

In the decade I taught at Ryerson University’s journalism school my students understood that no matter how brilliant their reporting and writing, if they messed up a name, they got an automatic F on that assignment. That’s a common policy of most journalism schools.
Apparently the fact checkers at PolitiFact find such obsessive attention to detail quaint, which we count as a strange attitude for people calling themselves "fact checkers."

Saturday, August 25, 2018

PolitiFact's Fallacious "Burden of Proof" Bites a Democrat? Or Not

We're nonpartisan because we defend Democrats unfairly harmed by the faulty fact checkers at PolitiFact.

See how that works?

On with it, then:

Oops.

Okay, we made a faulty assumption. When we saw PolitiFact's liberal audience complaining about the treatment of Sen. Bill Nelson (D-Fla.), we thought it meant Nelson had received a "False" rating based on his not offering evidence to support his claim.

But PolitiFact did not give Nelson a "Truth-O-Meter" rating at all. Instead of the "Truth-O-Meter" graphic for the claim (there is none), PolitiFact gave its readers the "Share The Facts" version:



Republicans (and perhaps Democrats) have received poor ratings in the past where evidence was lacking, which PolitiFact justifies according to its "burden of proof" criterion. But either the principle has changed or else PolitiFact made an(other) exception to aid Nelson.

If the principle has changed that's good. It's stupid and fallacious to apply a burden of proof standard in fact checking, at least where one determines a truth value based purely on the lack of evidence.

But it's small consolation to the people PolitiFact unfairly harmed in the past with its application of this faulty principle.


Afters:

As of April 2018, it looks like the "burden of proof" principle was still a principle.



As we have noted before, it often appears that PolitiFact's principles are more like guidelines than actual rules.

And to maintain our nonpartisan street cred, here's PolitiFact applying the silly burden of proof principle to a Democrat:


If "burden of proof" counts as one of PolitiFact's principles then PolitiFact can only claim itself as a principled fact checker if the Nelson exception features a principled reason justifying the exception.

If anyone can find anything like that in the non-rating rating of Nelson, please drop us a line.

Thursday, August 23, 2018

PolitiFact Not Yet Tired of Using Statements Taken Out Of Context To Boost Fundraising

Remember back when PolitiFact took GOP pollster Neil Newhouse out of context to help coax readers into donating to PolitiFact?

Good times.

Either the technique works well or PolitiFact journalists just plain enjoy using it, for PolitiFact Editor Angie Drobnic Holan's Aug. 21, 2018 appeal to would-be supporters pulls the same type of stunt on Rudy Giuliani, former mayor of New York City and attorney for President Donald Trump.

Let's watch Holan the politician in action (bold emphasis added):
Just this past Sunday, Rudy Giuliani told journalist Chuck Todd that truth isn’t truth.

Todd asked Giuliani, now one of President Donald Trump’s top advisers on an investigation into Russia’s interference with the 2016 election, whether Trump would testify. Giuliani said he didn’t want the president to get caught perjuring himself — in other words, lying under oath.

"It’s somebody’s version of the truth, not the truth," Giuliani said of potential testimony.

Flustered, Todd replied, "Truth is truth."

"No, it isn’t truth. Truth isn’t truth," Giuliani said, going on to explain that Trump’s version of events are his own.

This is an extreme example, but Giuliani isn’t the only one to suggest that truth is whatever you make it. The ability to manufacture what appears to be the truth has reached new heights of sophistication.
Giuliani, contrary to Holan's presentation, was almost certainly not suggesting that truth is whatever you make it.

Rather, Giuliani was almost certainly making the same point about perjury traps that legal expert Andrew McCarthy made in an Aug. 11, 2018 column for National Review (hat tip to Power Line Blog):
The theme the anti-Trump camp is pushing — again, a sweet-sounding political claim that defies real-world experience — is that an honest person has nothing to fear from a prosecutor. If you simply answer the questions truthfully, there is no possibility of a false-statements charge.

But see, for charging purposes, the witness who answers the questions does not get to decide whether they have been answered truthfully. That is up to the prosecutor who asks the questions. The honest person can make his best effort to provide truthful, accurate, and complete responses; but the interrogator’s evaluation, right or wrong, determines whether those responses warrant prosecution.
It's fair to criticize Giuliani for making the point less elegantly than McCarthy did. But it's inexcusable for a supposedly non-partisan fact checker to take a claim out of context to fuel an appeal for cash.

That's what we expect from partisan politicians, not non-partisan journalists.

Unless they're "non-partisan journalists" from The Bubble.


Worth Noting:

For the 2017 version of this Truth Hustle, Holan shared writing credits with PolitiFact's Executive Director Aaron Sharockman.

Tuesday, August 21, 2018

All About That Base(line)

When we do not publish for days at a time it does not mean that PolitiFact has cleaned up its act and learned to fly straight.

We simply lack the time to do a thorough job policing PolitiFact's mistakes.

What caught our attention this week? A fact check authored by one of PolitiFact's interns, Lucia Geng.



We were curious about this fact check thanks to PolitiFact's shifting standards on what counts as a budget cut. In this case the cut itself was straightforward: A lower budget one year compared to the preceding year. In that respect the fact check wasn't a problem.

But we found a different problem--also a common one for PolitiFact. At least when PolitiFact is fact-checking Democrats.

The fact check does not question the baseline.

The baseline is simply the level chosen for comparison. The Florida Democratic Party chose to compare the water management districts' collective 2011 budgets with their 2012 budgets and found the latter about $700 million lower. Our readers should note that the FDP started making this claim in 2018, not 2012.

It's just crazy for a fact checker to perform a fact check without looking at other potential baselines. Usually politicians and political groups choose a baseline for a reason. Comparing 2011 to 2012 appears to make sense superficially. The year 2011 represents Republican-turned-Independent Governor Charlie Crist. The year 2012 represents the current governor, also a Republican, Rick Scott.

But what if there's more to it? Any fact checker should look at data covering a longer time period to get an idea of what the claimed cut would actually mean.

We suspected that 2010 and before might show much lower budget numbers. To our surprise, the budget numbers were far higher, at least for the South Florida Water Management District whose budget dwarfs those of the other districts.

From 2010 to 2011, Gov. Crist cut the SFWMD budget by about $443 million. From 2009 to 2010 Gov. Crist cut the SFWMD budget by almost $1.5 billion. That's not a typo.

The message here is not that Gov. Crist was some kind of anti-environmental zealot. What we have here is a sign that the water management district budgets are volatile. They can change dramatically from one year to the next. The big question is why, and a secondary question is whether the reason should affect our understanding of the $700 million Gov. Scott cut from the combined water management district budgets between 2011 and 2012.

A fact checker who looked at the volatile changes in spending could then use that knowledge to ask officials at the water management districts questions that would help answer our two questions above. Geng listed email exchanges with officials from each of Florida's water management districts. But the fact check contains no quotations from those officials. It does not even refer to their responses via paraphrase or summary. We don't even know what questions Geng asked.

We did not contact the water management districts. But we looked for a clue regarding the budget volatility in the SFWMD's fiscal year 2011 projections for its future budgets. The agency expected capital expenditures to drop by more than half after 2011.

Rick Scott had not been elected governor at that time (October 2010).

This suggests that the water management districts had a budget cut baked into their long-term program planning, quite possibly strongly influenced by budgeting for the Everglades restoration project (including land purchases). If so, that counts as critical context omitted from the PolitiFact Florida fact check.

We flagged these problems for PolitiFact on Twitter and via email. As usual, the faux-transparent fact checkers responded with a stony silence and made no apparent effort to fix the deficiencies.

Aside from the hole in the story, we felt the "Mostly True" rating was very forgiving of the Florida Democratic Party's blatant cherry-picking. And somehow PolitiFact even resisted using the term "cherry-picking" or any close synonym.



Afters:
The Florida Democratic Party, in the same tweet PolitiFact fact-checked, recycled the claim that Gov. Scott "banned the term 'Climate Change.'"

We suppose that's not the sort of thing that makes PolitiFact editors wonder "Is that true?"

Saturday, August 11, 2018

Did an Independent Study Find PolitiFact Is Not Biased?

An email alert from August 10, 2018 led us to a blaring headline from the International Fact-Checking Network:

Is PolitiFact biased? This content analysis says no

Though "content analysis" could mean the researchers looked at pretty much anything having to do with PolitiFact's content, we suspected the article was talking about an inventory of PolitiFact's word choices, looking for words associated with a political point of view. For example, "anti-abortion" and "pro-life" signal political points of view. Using those and similar terms may tip off readers regarding the politics those who produce the news.

PolitiFact Bias has never used the presence of such terms to support our argument that PolitiFact is biased. In fact, I (Bryan) tweeted a brief judgment of the study back on July 16, 2018:
We have two major problems with the IFCN article at Poynter.org (by Daniel Funke).

First, it implies that the word-use inventory somehow negates the evidence of bias cited by PolitiFact's critics, evidence that does not involve the types of word choices the study was designed to detect:
It’s a critique that PolitiFact has long been accustomed to hearing.

“PolitiFact is engaging in a great deal of selection bias,” The Weekly Standard wrote in 2011. “'Fact Checkers' Overwhelmingly Target Right-Wing Pols and Pundits” reads an April 2017 headline from NewsBusters, a site whose goal is to expose and combat “liberal media bias.” There’s even an entire blog dedicated to showing the ways in which PolitiFact is biased.

The fact-checking project, which Poynter owns, has rebuffed those accusations, pointing to its transparent methodology and funding (as well as its membership in the International Fact-Checking Network) as proof that it doesn’t have a political persuasion. And now, PolitiFact has an academic study to back it up.
The second paragraph mentions selection bias (taking the Weekly Standard quotation out of context) and other types of bias noted by PolitiFact Bias ("an entire blog dedicated to showing the ways in which PolitiFact is biased"--close enough, we suppose, thanks for linking us).

The third paragraph says PolitiFact has "rebuffed those accusations." We think "ignores those accusations" describes the situation more accurately.

The third paragraph goes on to mention PolitiFact's "transparent methodology" (true if you ignore the ambiguity and inconsistency) and transparent funding (yes, funded by some left-wing sources, but PolitiFact Bias does not use that as evidence of PolitiFact's bias) before claiming that PolitiFact "has an academic study to back it up."

"It"=PolitiFact's rebuffing of accusations it is biased????

That does not follow logically. To support PolitiFact's denials of the bias of which it is accused, the study would have to offer evidence countering the specific accusations. It doesn't do that.

Second, Funke's article suggests that the study shows a lack of bias. We see that idea in the title of Funke's piece as well as in the material from the third paragraph.

But that's not how science works. Even for the paper's specific area of study, it does not show that PolitiFact has no bias. At best it could show the word choices it tested offer no significant indication of bias.

The difference is not small, and Funke's article even includes a quotation from one of the study's authors emphasizing the point:
But in a follow-up email to Poynter, Noah Smith, one of the report’s co-authors, added a caveat to the findings.

“This could be because there's really nothing to find, or because our tools aren't powerful enough to find what's there,” he said.
So the co-author says maybe the study's tools were not powerful enough to find the bias that exists. Yet Funke sticks with the title "Is PolitiFact biased? This content analysis says no."

Is it too much to ask for the title to agree with a co-author's description of the meaning of the study?

The content analysis did not say "no." It said (we summarize) "not in terms of these biased language indicators."

Funke's article paints a very misleading picture of the content and meaning of the study. The study refutes none of the major critiques of PolitiFact of which we are aware.


Afters

PolitiFact's methodology, funding and verified IFCN signatory status are supposed to assure us it has no political point of view?

We'd be more impressed if PolitiFact staffers revealed their votes in presidential elections and more than a tiny percentage voted Republican more than once in the past 25 years.

It's anybody's guess why fact checkers do not reveal their voting records, right?


Correction Aug. 11, 2018: Altered headline to read "an Independent Study" instead of "a Peer-Reviewed Study."

The Weekly Standard Notes PolitiFact's "Amazing" Fact Check

The Weekly Standard took note of PolitiFact's audacity in fact-checking Donald Trump's claim that the economy grew at the amazing rate of 4.1 percent in the second quarter.
The Trumpian assertion that moved the PolitiFact’s scrutineers to action? This one: “In the second quarter of this year, the United States economy grew at the amazing rate of 4.1 percent.” PolitiFact’s objection wasn’t to the data—the economy really did grow at 4.1 percent in the second quarter—but to the adjective: amazing.
That's amazing!

PolitiFact did not rate the statement on its "Truth-O-Meter" but published its "Share The Facts" box featuring the judgment "Strong, but not amazing."

PolitiFact claims it does not rate opinions and grants license for hyperbole.

As we have noted before, it must be the fault of Republicans who keep trying to use hyperbole without a license.



Correction Jan. 2, 2018: Fixed hotlink to the Weekly Standard, which mistakenly linked directly to the PolitiFact story.

Friday, August 10, 2018

PolitiFact Editor: It's Frustrating When Others Do Not Follow Their Own Policies Consistently

PolitiFact Editor Angie Drobnic Holan says she finds it frustrating that Twitter does not follow its own policies (bold emphasis added):
The fracas over Jones illustrates a lot, including how good reporting and peer pressure can actually force the platforms to act. And while the reasons that Facebook, Apple and others banned Jones and InfoWars have to do with hate speech, Twitter’s inaction also confirms what fact-checkers have long thought about the company’s approach to fighting misinformation.

“They’re not doing anything, and I’m frustrated that they don’t enforce their own policies,” said Angie Holan, editor of (Poynter-owned) PolitiFact.
Tell us about it.

We started our "(Annotated) Principles of PolitiFact" page years ago to expose examples of the way PolitiFact selectively applies its principles. It's a shame we haven't had the time to keep that page updated, but our research indicates PolitiFact has failed to correct the problem to any noticeable degree.

Tuesday, August 7, 2018

The Phantom Cherry-pick

Would Sen. Bernie Sanders' Medicare For All plan save $2 trillion over 10 years on U.S. health care expenses?

Sanders and the left were on fire this week trying to co-opt a Mercatus Center paper by Charles Blahous. Sanders and others claimed Blahous' paper confirmed the M4A plan would save $2 trillion over 10 years.

PolitiFact checked in on the question and found Sanders' claim "Half True":


PolitiFact's summary encapsulates its reasoning:
The $2 trillion figure can be traced back to the Mercatus report. But it is one of two scenarios the report offers, so Sanders’ use of the term "would" is too strong. The alternative figure, which assumes that a Medicare for All plan isn’t as successful in controlling costs as its sponsors hope it will be, would lead to an increase of almost $3.3 trillion in national health care expenditures, not a decline. Independent experts say the alternative scenario of weaker cost control is at least as plausible.

We rate the statement Half True.
Throughout its report, as pointed out at Zebra Fact Check, PolitiFact treats the $2 trillion in savings as a serious attempt to project the true effects of the M4A bill.

In fact, the Mercatus report uses what its author sees as overly rosy assumptions about the bill's effects to estimate a lower bound for the bill's very high costs, and then proceeds to offer reasons why the actual costs will likely greatly exceed that estimate.

In other words, the cherry Sanders tries to pick is a faux cherry. And a fact checker ought to recognize that fact. It's one thing to pick a cherry that's a cherry. It's another thing to pick a cherry that's a fake.

Making Matters Worse

PolitiFact makes matters worse by overlooking Sanders' central error: circular reasoning.

Sanders takes a projection based on favorable assumptions as evidence that the favorable assumptions are reasonable. But a conclusion one reaches based on assumptions does not make the assumptions any more true. Sanders' claim suggests the opposite: that even when the Blahous paper says it is using unrealistic assumptions, the conclusions it reaches using those assumptions somehow make the assumptions reasonable.

A fact checker ought to point it out when a politician peddles such nonsensical ideas.

PolitiFact made itself guilty of bad reporting while overlooking Sanders' central error.

Reader: "PolitiFact is not biased. Republicans just lie more."

Every few years or so we recognize a Comment of the Week.

Jehosephat Smith dropped by on Facebook to inform us that PolitiFact is not biased:
Politifact is not biased, Republicans just lie more. That is objectively obvious by this point and if your mind isn't moved by current realities then you're willfully ignorant.
As we have prided ourselves on trying to communicate clearly exactly why we find PolitiFact biased, we find such comments fascinating on two levels.


First, how can one claim that PolitiFact is not biased? On what evidence would one rely to support such a claim?

Second, how can one contemplate claiming PolitiFact isn't biased without making some effort to address the arguments we've made showing PolitiFact is biased?

We invited Mr. Smith to make his case either here on the website or on Facebook. But rather than simply heaping Smith's burden of proof on his head we figured his comment would serve us well as an excuse to again summarize the evidence showing PolitiFact's bias to the left.


Journalists lean left
Journalists as a group lean left. And they lean markedly left of the general U.S. population. Without knowing anything else at all about PolitiFact we have reason to expect that it is made up mostly of left-leaning journalists. If PolitiFact journalists lean left as a group then right out of the box we have reason to look for evidence that their political leaning affects their fact-checking.

PolitiFact's errors lean left I
When PolitiFact makes an egregious reporting error, the error tends to harm the right or fit with left-leaning thinking. For example, when PolitiFact's Louis Jacobson reported that Hobby Lobby's policy on health insurance "barred" women from using certain types of birth control, we noted that pretty much anybody with any rightward lean would have spotted the mistake and prevented its publication. Instead, PolitiFact published it and later changed it without posting a correction notice. We have no trouble finding such examples.

PolitiFact's errors lean left II
We performed a study of PolitiFact's calculations of percentage error. PolitiFact often performs the calculation incorrectly, and the errors tend to benefit Democrats (caveat: small data set).
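For readers unfamiliar with the arithmetic, here is a minimal sketch (in Python, with hypothetical numbers, not figures from our study) of how a percentage error is normally computed. The key point is that the correct value, not the claimed value, belongs in the denominator; dividing by the wrong figure shrinks or inflates the apparent size of the error.

```python
def percentage_error(claimed, actual):
    """Standard percentage error: the gap between the claim and reality,
    expressed relative to the correct (actual) value."""
    return (claimed - actual) / actual * 100

# Hypothetical example: a politician claims spending rose by $120 million
# when it actually rose by $100 million.
claimed, actual = 120.0, 100.0

print(percentage_error(claimed, actual))    # 20.0 -> the claim overstates by 20%

# A common mistake is dividing by the claimed figure instead,
# which understates how far off the claim is:
print((claimed - actual) / claimed * 100)   # ~16.7
```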

PolitiFact's ratings lean left I
When PolitiFact rates Republicans and Democrats on closely parallel claims Democrats often fare better. For example, when PolitiFact investigated a Democratic Party charge that Rep. Bill McCollum raised his own pay while in Congress PolitiFact said it was true. But when PolitiFact investigated a Republican charge that Sherrod Brown had raised his own pay PolitiFact discovered that members of Congress cannot raise their own pay and rated the claim "False." We have no trouble finding such examples.

PolitiFact's ratings lean left II
We have done an ongoing and detailed study looking at partisan differences in PolitiFact's application of its "Pants on Fire" rating. PolitiFact describes no objective difference in distinguishing between "False" and "Pants on Fire" ratings, so we hypothesize that the difference between the two ratings is subjective. Republicans are over 50 percent more likely than Democrats to have a false rating deemed "Pants on Fire" false for apparently subjective reasons.
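To illustrate what "over 50 percent more likely" means, here is a sketch with made-up counts (for illustration only, not the actual figures from our study): for each party's statements rated in the false tier, compare the share that drew the harsher "Pants on Fire" rating.

```python
# Hypothetical counts, for illustration only (not our study's actual data).
rep_false, rep_pof = 180, 60   # Republican ratings: "False" vs. "Pants on Fire"
dem_false, dem_pof = 170, 30   # Democratic ratings: "False" vs. "Pants on Fire"

rep_rate = rep_pof / (rep_false + rep_pof)   # 0.25: 1 in 4 false-tier ratings is PoF
dem_rate = dem_pof / (dem_false + dem_pof)   # 0.15

print(rep_rate / dem_rate)  # ~1.67 -> Republicans about 67% more likely in this example
```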

PolitiFact's explanations lean left
When PolitiFact explains topics its explanations tend to lean left. For example, when Democrats and liberals say Social Security has never contributed a dime to the deficit PolitiFact gives it a rating such as "Half True," apparently unable to discover the fact that Social Security has run a deficit during years when the program was on-budget (and therefore unquestionably contributed directly to the deficit those years). PolitiFact resisted Republican claims that the ACA cut Medicare, explaining that the so-called Medicare cuts were not truly cuts because the Medicare budget continued to increase. Yet PolitiFact discovered when the Trump administration slowed the growth of Medicaid it was okay to refer to the slowed growth as a program cut. Again, we have no trouble finding such examples.

How can a visitor to our site (including Facebook) contemplate declaring PolitiFact isn't biased without coming prepared to answer our argument?