Wednesday, November 14, 2018

PolitiFact misses obvious evidence in Broward recount fact check

On Nov. 13, 2018, PolitiFact's "PunditFact" brand issued a "Pants on Fire" rating to conservative Ken Blackwell for claiming Democrats and their allies were manufacturing voters in the Florida election recount.


The problem?

PolitiFact somehow overlooked obvious evidence reported in the mainstream media. The Tampa Bay Times, which owned PolitiFact before it was transferred to the nonprofit Poynter Institute, published a version of the story:
Broward's elections supervisor accidentally mixed more than a dozen rejected ballots with nearly 200 valid ones, a circumstance that is unlikely to help Brenda Snipes push back against Republican allegations of incompetence.

The mistake — for which no one had a solution Friday night — was discovered after Snipes agreed to present 205 provisional ballots to the Broward County canvassing board for inspection. She had initially intended to handle the ballots administratively, but agreed to present them to the canvassing board after Republican attorneys objected.
The Times story says counting the 205 provisional ballots resulted in at least 20 illegal votes ending up in Broward County's vote totals.

The Times published its story on Nov. 10, 2018.

PolitiFact/PunditFact published its fact check on Nov. 13, 2018 (2:24 p.m. time stamp). The fact check contains no mention at all that Broward County included invalid votes in its vote totals.

Instead, PolitiFact reporter John Kruzel gives us the breezy assurance that neither he nor the state found evidence supporting Blackwell's charge.
Our ruling

Blackwell said, "Democrats and their allies (...) are manufacturing voters."

We found no evidence, nor has the state, to support this claim. Blackwell provided no evidence to support his statement.

We rate this Pants on Fire.
Inconceivable, you say?




Friday, November 9, 2018

PolitiFact: "PolitiFact is not biased--here's why" Pt. 4

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

PolitiFact:
4. Reader support allows us to stay independent.

Our independent journalism, seeking only to sort out the truth in American policy, is what motivates us to keep publishing for the benefit of our readers. We began over a decade ago as a politics project at Florida’s largest daily newspaper, the Tampa Bay Times. Today, we are a nonprofit newsroom that is part of the Poynter Institute, a school for journalists based in Florida.
As with Holan's third point, her fourth gives no reason to think PolitiFact is not biased.

Does anybody need a list of partisan causes supported by public donations? Does anybody have the slightest doubt that PolitiFact's "Truth Squad" membership skews strongly left? If anyone harbors that second doubt, we recommend checking out polls like this one showing that moderates and conservatives place little trust in the media.

If PolitiFact relies on donations primarily from liberals, then how does that make it more independent instead of less independent? Were PolitiFact to displease its liberal base it could expect its primary source of private donations to shrink.

Here's something we'd like to see. And it's something that will never happen. Let PolitiFact poll its "Truth Squad" to find out how its ideology trends as a group. If conservatives and moderates are a distinct minority PolitiFact can use that information to bolster its membership outreach to those groups: "We need more support from conservatives who care about objective fact-checking!"

And of course that will never happen. It tears down the facade PolitiFact built to suggest that its reliance on public support somehow keeps it politically neutral. PolitiFact has no interest in that kind of transparency. That kind of truth is not in PolitiFact's self-interest.


The Main Point? Reader $upport

We do not buy that PolitiFact sincerely tried to put forth a serious argument that it is unbiased. The argument Holan put forward toward that end is simply too weak to be believable. We think the main point was to soften misgivings people may have about joining PolitiFact's financial support club, which it has dubbed its "Truth Squad."

Holan tipped off that purpose early in her article (bold emphasis added):
We expect it (accusations of bias--ed.). Afterall [sic], as an independent group measuring accuracy, we are disrupting the agendas of partisans and political operatives across the ideological spectrum. We do it to give people the information they need to govern themselves in a democracy, and to uphold the tradition of a free and independent press.

Still, we think it’s worth explaining our mission and methods, both to answer those who make the charge against us, and for our supporters when confronted by naysayers.
Also see the trimmed screen capture at the bottom of this post, showing an ad asking readers to support PolitiFact.

If you ask us, PolitiFact's "Truth Squad" isn't worthy of the name if they buy Holan's argument. "Dupe Squad" would be more like it. Holan wants your money. And it looks like she's willing to put forward an argument she wouldn't buy herself to help keep that money flowing.

Holan offers no real answer to those who claim PolitiFact is biased. To do that, Holan would need to specifically answer the arguments critics use to support their claims.

PolitiFact finds it preferable to simply say it is unbiased without offering real evidence supporting its claim. And without rebutting the arguments of its detractors.


Is the embedded ad asking for money a mere coincidence? We added the red border for emphasis.

Thursday, November 8, 2018

PolitiFact: "PolitiFact is not biased--here's why" Pt. 3

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

PolitiFact:
3. We make mistakes sometimes, but we correct our errors promptly.
The facts come first with us. That’s why it’s important for us -- or any reputable news organization -- to correct mistakes promptly and clearly. We follow a published corrections policy that anyone can read. Readers also can easily access a list of fact-checks that have been corrected or updated after the original publication.
I make mistakes sometimes, but I correct my errors promptly. Would that make me unbiased? Who believes that?

A willingness to correct errors does not bear directly on the issue of bias. Consider PolitiFact's decision to pay researchers to look for examples of biased language in its work (the study found no systematic evidence of biased language). Would a policy of correcting mistakes promptly cancel out a strong propensity to use biased language?

Of course not. Correcting mistakes would only have an effect on biased language if the publisher viewed biased language as a mistake and corrected it as such.

In our experience PolitiFact often refuses to consider itself mistaken when it makes a real mistake.

What good is a thorough and detailed corrections policy if the publishing entity can't recognize the true need for a correction?

And doesn't it go without saying that the failure to recognize the need for a correction may serve as a strong indicator of bias?


Wonderful-Sounding Claim Meaning Nearly Nothing

How great is PolitiFact's corrections policy? Just let Holan tell you:
We believe it is one of the most robust and detailed corrections policies in American fact-checking.
We were momentarily tempted to fact-check Holan's claim, except she starts with "We believe," which immediately moves the claim into the realm of opinion. But even if it were a claim of fact, Holan could probably defend it easily, because the claim doesn't really mean anything.

Think about it: "one of the most robust and detailed corrections policies in American fact-checking." Let's take a look at the set of American fact-checkers, using the list of IFCN-verified fact-checkers. When we looked on Nov. 8, 2018, there were eight (including PolitiFact).

With a pool that small, PolitiFact could have the least robust and detailed corrections policy among the eight and still plausibly say it has one of the most robust and detailed corrections policies in American fact-checking. Our opinion? In a pool of eight there's nothing to crow about unless you're No. 1. Coming in at No. 4 puts one in the middle of the pack, after all.

We think PolitiFact's corrections policy is less robust than that of its parent organization, the Poynter Institute. We're wondering why that should be the case.


Summary

The first two reasons Holan offered to support PolitiFact's "not biased" claim were incredibly weak. But the third item registers even less weight on the scale of evidence. A robust corrections policy is poor protection against ideological bias. It's a bit like using a surgical mask for protection against mustard gas.

PolitiFact: "PolitiFact is not biased--here's why" Pt. 2 (Updated)

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

PolitiFact:

2. We follow the facts, not fact-check count formulas.

We let the factual chips fall where they may. This is not bias; this is sticking to our mission of correcting falsehoods as we find them.
As with PolitiFact's first supposed piece of evidence, this one does not appear to work without assuming PolitiFact lacks bias. It's not biased to let the factual chips fall where they may if the evaluation of the facts is unbiased. But it is biased to let the factual chips fall where they may if the evaluation of the facts is biased.

So far, this item offers us no solid reason for concluding PolitiFact lacks bias.

Our Little League doesn't keep score!

Holan continues:
We don’t worry about who got the last False rating or how long since some group got a True rating. We look at each statement and each set of evidence separately and give it a rating that stands on its own.
Concerning the first sentence, our data hint that PolitiFact does consider the proportion of "Pants on Fire" ratings it gives in relation to false ratings overall. The pattern under Holan (2014 onward) exhibits much more stability than PolitiFact's record under her predecessor, Bill Adair. We started publishing our data midway through 2011; PolitiFact editors may have looked at the data in 2011 or later and acted on it, which may explain the reduced variation.


Updating the chart as of today would raise the blue 11.11 percent for 2018 to 20 percent. That's thanks to a "Pants on Fire" rating given to the North Dakota Democratic-Nonpartisan League Party (who?). Is there even a PolitiFact North Dakota? No. And the other "Pants on Fire" rating given to a Democrat this year went to Alexandria Ocasio-Cortez, who would have easily won her race in New York with 3,000 "Pants on Fire" ratings.

Only once under Holan's tenure (omitting 2013, which she shared with Adair) has either party had a percentage outside the 20 percent to 30 percent range. Under Adair it happened seven times (again omitting 2013).

Holan's assurances ring hollow because PolitiFact's Truth-O-Meter ratings get picked by a fairly consistent group of editors. They know the ratings they are giving even if they're not looking at a scoreboard, just like members of those Little League teams in leagues that avoid hurting feelings by not keeping score.

On top of that, PolitiFact constantly encourages readers to view candidate "report cards" that show all the "Truth-O-Meter" ratings PolitiFact has meted out to a given candidate.

Does this look like PolitiFact isn't keeping score?

But Let's Assume PolitiFact Does Not Keep Score

Even assuming Holan is right that PolitiFact does not keep score with its "Truth-O-Meter" ratings, that offers no assurance that PolitiFact lacks bias. Think of an umpire in one of those "no score" Little League games. Would declining to keep a tally of runs prevent the umpire from calling a bigger strike zone for one team than the other? We don't see how it would.

Tweaking the Little League Analogy: Yes We Keep Score, But It Does Not Make Us Biased

The Little League analogy breaks down in the end because PolitiFact does keep score, as Holan acknowledges:
Our database of fact-checks makes it easy to see the ratings people or parties have received over the years. Our readers tell us they like seeing these summaries and find them easy to browse. But we are not driven by those numbers; they have no bearing on how we rate the next statement we choose to fact-check.
So Holan is saying: yes, we keep score, but we don't let it bias our decisions, therefore we are not biased. It's circular reasoning again. Where's the evidence?

How does Holan know PolitiFact does not let the score affect its work? What is PolitiFact's secret for exterminating normal human bias? Wouldn't we all like to know?

We're not going to know from Holan's explanation, that's for sure.

There's nothing in this section of Holan's article that offers any kind of legitimate assurance that PolitiFact filters bias from its work.



Update Jan. 7, 2019



In our story, we jabbed PolitiFact for publishing a page showing the "report cards" for some of the people it fact checks most often, asking our readers whether that page makes it look like PolitiFact doesn't keep score.

Today, when we tested the link, it returned one of PolitiFact's 404 "page not found" errors.

We don't know whether PolitiFact removed the page to make it look less like PolitiFact keeps score. But if the page was permanently removed (instead of being temporarily inaccessible), it's not a good look for Pulitzer Prize-winning PolitiFact.

If PolitiFact removed the page, at least we've got an Internet Archive version for our readers. Our archived version is from Dec. 28, 2018.






Wednesday, November 7, 2018

Remember when the Bush brothers negotiated NAFTA?

We've written about this before, but it's useful for communication purposes to dedicate a post to this memorable PolitiFact pictorial flub:

PolitiFact: "PolitiFact is not biased--here's why" Pt. 1

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

PolitiFact:

1. We fact-check inaccurate statements, not political parties.

We are always on the lookout for bad information that needs correcting. We don’t have any concern about which party it comes from or who says it. If someone makes an inaccurate statement, it gets a negative rating on our Truth-O-Meter: Mostly False, False or Pants on Fire.
If we at PolitiFact Bias were to come up with a story making an assertion, we would certainly try to produce some type of palpable evidence in support. We find PolitiFact's article striking for its lack of evidence supporting the claim in its title.

Let's assume for the sake of argument that it's true PolitiFact fact checks inaccurate statements and not political parties. We find both assertions questionable, but we can set that aside for the moment.

What stops a biased fact checker from allowing factors like confirmation bias to guide its selection of fact checks so that the selections reflect an ideological bias? This is an obvious objection to the first part of Holan's argument, but her article completely fails to acknowledge it. If Holan assumes that PolitiFact has no bias, and therefore no confirmation bias can result, then her argument begs the question (circular reasoning: PolitiFact is not biased because PolitiFact is not biased).

If Holan isn't using circular reasoning then she's simply not addressing the issue in any relevant way. Fact-checking inaccurate statements and not political parties does nothing to show a lack of bias.


The Elephant in the Room (a foreshadowing pun)

In early 2011, Eric Ostermeier of the University of Minnesota studied PolitiFact's ratings and found that Republicans were receiving worse treatment. He noted that PolitiFact's descriptions of its methodology offered no assurance at all that the skew in its ratings was unaffected by selection bias. In other words, was unrepresentative sampling responsible for making it appear that Republicans lie more?

Ostermeier posed an important question that PolitiFact has never satisfactorily addressed:
The question is not whether PolitiFact will ultimately convert skeptics on the right that they do not have ulterior motives in the selection of what statements are rated, but whether the organization can give a convincing argument that either a) Republicans in fact do lie much more than Democrats, or b) if they do not, that it is immaterial that PolitiFact covers political discourse with a frame that suggests this is the case.

The evidence says PolitiFact's story selection is biased

While developing our own research approaches to PolitiFact's ratings, we came up with an observation that we argue strongly shows PolitiFact is guilty of selection bias.

Imagine PolitiFact used only its editorial judgment of whether a statement seemed false enough to be worth a fact check, and was completely blind to political party and ideology.

We say that regardless of whether one party lies more, the results should prove pretty close to proportional. If 40 percent of PolitiFact's ratings of Republicans come out "Pants on Fire" or "False," then the same should hold true of Democrats. If Republicans lie more, that should end up reflected in the number of ratings, not in the proportions.

PolitiFact as much as admitted to selection bias in the early days. PolitiFact founding editor Bill Adair said PolitiFact tried to do a roughly equal number of fact checks for Republicans and Democrats. That makes no fewer than two criteria for selecting a story, and the second has nothing to do with whether the statement appeared false. Trying to fact-check Republicans and Democrats equally will skew the proportions (unless the parties lie equally and PolitiFact's sample is effectively random).
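To make the counts-versus-proportions point concrete, here is a minimal toy simulation (in Python) of the two selection models described above. It is a sketch of the logic, not anyone's actual method, and every number in it is hypothetical: we assume claims that look worth checking draw from the same rating mix for either party, that the party that "lies more" simply produces more such claims (300 vs. 150), and that a per-party quota of 200 gets topped off with milder claims.

```python
# Toy sketch of the two selection models discussed above. All rating mixes and
# pool sizes are hypothetical; the point is the structure, not the numbers.
import random

random.seed(0)

RATINGS = ["True", "Mostly True", "Half True", "Mostly False", "False", "Pants on Fire"]

# Assumption: claims dubious enough to check draw from the same rating mix for
# both parties; the parties differ only in how many such claims they produce.
CHECKWORTHY_MIX = [0.05, 0.10, 0.25, 0.20, 0.25, 0.15]
MILD_MIX = [0.30, 0.35, 0.25, 0.07, 0.02, 0.01]  # filler claims under a quota

def draw(weights, n):
    """Draw n simulated Truth-O-Meter ratings from the given mix."""
    return random.choices(RATINGS, weights=weights, k=n)

def harsh_share(ratings):
    """Fraction of ratings that come out False or Pants on Fire."""
    return sum(r in ("False", "Pants on Fire") for r in ratings) / len(ratings)

rep_pool = draw(CHECKWORTHY_MIX, 300)  # hypothetical: this party "lies more"
dem_pool = draw(CHECKWORTHY_MIX, 150)

# Model 1: party-blind selection -- check every claim that looks checkworthy.
# Differences in lying show up in the counts; the proportions stay roughly equal.
print("Blind selection: counts R=%d D=%d, harsh share R=%.0f%% D=%.0f%%"
      % (len(rep_pool), len(dem_pool),
         100 * harsh_share(rep_pool), 100 * harsh_share(dem_pool)))

# Model 2: quota selection -- check 200 claims per party. The smaller pool of
# checkworthy claims runs out, so the quota is filled with claims that would
# not otherwise have been checked, dragging that party's harsh share downward.
rep_quota = rep_pool[:200]
dem_quota = dem_pool + draw(MILD_MIX, 200 - len(dem_pool))
print("Quota selection:  harsh share R=%.0f%% D=%.0f%%"
      % (100 * harsh_share(rep_quota), 100 * harsh_share(dem_quota)))
```

Again, the numbers are invented. The sketch only illustrates why, under blind selection, lying more should move the counts rather than the proportions, while a per-party quota moves the proportions instead.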

In Ostermeier's research, Republicans' statements were 39 percent "Pants on Fire" or "False" while Democrats' statements were 12 percent "Pants on Fire" or "False." That's strong evidence of selection bias.

Note: We have not tracked these numbers through the present. Perhaps PolitiFact is closer to rating claims proportionally now than it was in Adair's time. If it is, then PolitiFact could present that as evidence it is blind to ideology when it chooses which claims to check.


Until PolitiFact answers Eric Ostermeier's question it is unsafe to conclude that PolitiFact lacks bias.

PolitiFact: "PolitiFact is not biased--here's why" (Intro) (Updated)

On Nov. 6, 2018, PolitiFact published an article declaring itself "not biased," suggesting with the tail end of the title ("here's why") that it could support the declaration with evidence.

We welcome PolitiFact's better-late-than-never response to its critics. But we find the proffered reasoning incredibly weak. If the argument from PolitiFact Editor Angie Drobnic Holan addresses any item from our list of arguments helping show PolitiFact's leftward lean, it does so obliquely at best.

At the risk of using a pile driver to squash a gnat, we will address each of Holan's arguments in a series of posts. As we complete each part of the series, we will add a hotlink to the corresponding item in the list of Holan's arguments below.

Just click a claim from PolitiFact to see our answers to Holan's arguments.


1. We fact-check inaccurate statements, not political parties.

2. We follow the facts, not fact-check count formulas.

3. We make mistakes sometimes, but we correct our errors promptly.

4. Reader support allows us to stay independent.

 

PolitiFact Editor Angie Drobnic Holan

 

Update, With Conclusion (Nov. 9, 2018)

We have completed adding links to articles debunking each of PolitiFact's supposed reasons supporting its claim of being unbiased.

In summary, we think the purpose of Holan's article was not to put forward a serious argument. The article was ad copy designed to bolster an appeal for reader support.

It was PolitiFlack. Item No. 4 emphasizes the point.


 

Saturday, November 3, 2018

PolitiFact's Liberal Tells for $400, Alex

When PolitiFact released the results of a language inventory it commissioned on itself, we were not surprised that the researchers found no clear evidence of biased language. PolitiFact's bias shows up mostly in its choice of stories, accompanied by bias in the execution of the fact checks.

But ...

On Oct. 31, 2018 PolitiFact Editor Angie Drobnic Holan published an article on the top election issues for 2018 and promptly stepped in it:
PolitiFact has been monitoring and fact-checking the midterm campaigns of 2018 in races across the country. We’ve seen common themes emerge as the Democrats and Republicans clash. Here’s a look at what we’ve found to be the top 10 storylines of the 2018 contests. (We provide short summaries of our fact-checks here; links will take you to longer stories with detailed explanations and primary sources.)

1. Fear of immigration
We'll explain to Holan (and the audience) the right way to identify immigration as an election issue without employing biased language:
1. Immigration
It's pretty easy.

Use "Fear of immigration" and the language communicates a lean to the left. Something like "Inadequate border security" might communicate the opposite (no danger of that from PolitiFact!).

Others from Holan's list of 10 election topics may also qualify as biased language. But this one is the most obvious. "Fear of immigration" is how liberals imagine conservatives reach the conclusion that securing the border and controlling immigration count as good policy.

PolitiFact's claim to non-partisanship is a gloss.