Thursday, January 17, 2019

PolitiFact's Heart Transplant

In the past we have mocked PolitiFact's founding editor Bill Adair for saying the "Truth-O-Meter" is the "heart of PolitiFact."

We have great news. PolitiFact has given itself a heart transplant.

PolitiFact's more recent (since May of 2018) self-descriptions now say that fact-checking is the heart of PolitiFact:


That's a positive move we applaud, while continuing to disparage the quality of PolitiFact's fact checks.

It was always silly to call a subjective sliding-scale Gimmick-O-Meter the heart of PolitiFact (even if the description was, and remains, true).

The new approach at least represents improved branding.

Now if PolitiFact could significantly upgrade the quality of its work ...




Post-publication update: Added hotlinks to the first paragraph leading to past commentary on PolitiFact's heart.

Tuesday, January 15, 2019

PolitiFact and the Contradiction Fiction

We consider it incumbent on fact checkers to report the truth.

PolitiFact's struggles in that department earn it our assessment as the worst of the mainstream fact checkers. In our latest example, PolitiFact reported that President Donald Trump had contradicted his claim that he had never said Mexico would pay for the border wall with a check.

We label that report PolitiFact's contradiction fiction. Fact checkers should know the difference between a contradiction and a non-contradiction.

PolitiFact (bold emphasis added):
"When during the campaign I would say ‘Mexico is going to pay for it,’ obviously, I never said this, and I never meant they're going to write out a check," Trump told reporters. "I said they're going to pay for it. They are."

Later on the same day while visiting the border in Texas, Trump offered the same logic: "When I say Mexico is going to pay for the wall, that's what I said. Mexico is going to pay. I didn't say they're going to write me a check for $20 billion or $10 billion."

We’ve seen the president try to say he never said something that he very much said before, so we wondered about this case.

Spoiler: Trump has it wrong.

We found several instances over the last few years, and in campaign materials contradicting the president’s statement.
PolitiFact offers three links in evidence of its "found several instances" argument, but relies on the campaign material for proof of the claimed contradiction.

We'll cover all of PolitiFact's evidence and show that none of it demonstrates that Mr. Trump contradicted himself on this point. Because we can.


Campaign Material

PolitiFact made two mistakes in trying to prove its case from a Trump Campaign description of how Trump would make Mexico pay for the border wall. First, it ignored context. Second, it applied spin to one of the quotations it used from the document.

PolitiFact (bold emphasis added):
"It's an easy decision for Mexico: make a one-time payment of $5-10 billion to ensure that $24 billion continues to flow into their country year after year," the memo said.

Trump proposed measures to compel Mexico to pay for the wall, such as cutting off remittances sent from undocumented Mexicans in the U.S. via wire transfers.

Then, the memo says, if and when the Mexican government protested, they would be told to pay a lump sum "to the United States to pay for the wall, the Trump Administration will not promulgate the final rule, and the regulation will not go into effect." The plan lists a few other methods if that didn’t work, like the trade deficit, canceling Mexican visas or increasing visa fees.
We placed bold emphasis on the part of the memo PolitiFact mentioned but ignored in its reasoning.

If the plan mentions methods to use if Mexico did not fork over the money directly, then how can the memo contradict Trump's claim he did not say Mexico would pay by check? Are fact checkers unable to distinguish between "would" and "could"? If Trump says Mexico could pay by check he does not contradict that claim by later saying he did not say Mexico would pay by check.

And what's so hard to understand about that? How can fact checkers not see it?

To help cinch its point, PolitiFact quotes from another section of the document, summarizing it as saying Mexico would pay for the wall with a lump sum payment: "(Mexico) would be told to pay a lump sum 'to the United States to pay for the wall.'" Except the term "lump sum" doesn't occur in the document.

There's reason for suspicion any time a journalist substitutes his own wording for that of the original document, using only a partial quotation and picking up mid-sentence. Here's the reading from the original:
On day 3 tell Mexico that if the Mexican government will contribute the funds needed to the United States to pay for the wall ...
We see only one potential justification for embroidering the above to make it refer to a "lump sum." That's from interpreting "It's an easy decision for Mexico: make a one-time payment of $5-10 billion to ensure that $24 billion continues to flow into their country year after year" as specifying a lump sum payment. We think confirmation bias would best explain that interpretation. It's more reasonable to take the statement to mean that paying for the wall once and having it over with is an obvious choice when it helps preserve a greater amount of income for Mexico annually after that. And the line does not express an expectation of a lump-sum payment but instead the strength (rightly or wrongly) of the bargaining position of the United States.

In short, PolitiFact utterly failed to make its case with the example it chose to emphasize.


... And The Rest


 (these are weak, so they occur after a page break)

Monday, January 7, 2019

Research shows PolitiFact leans left: The "Pants on Fire" bias

In 2011 PolitiFact Bias started a study of the way PolitiFact employs its "Pants on Fire" rating.

We noted that PolitiFact's definitions for "False" and "Pants on Fire" ratings appeared to differ only in that the latter rating represents a "ridiculous" claim. We had trouble imagining how one would objectively measure ridiculousness. PolitiFact's leading lights appeared to state in interviews that the difference in the ratings was subjective. And our own attempt to survey PolitiFact's reasoning turned up nothing akin to an empirically measurable difference.

We concluded that the "Pants on Fire" rating was likely just as subjective as PolitiFact editors described it. And we reasoned that if a Republican statement PolitiFact considered false was more likely than the same type of statement from a Democrat to receive a "Pants on Fire" rating we would have a reasonable measure of ideological bias at PolitiFact.

Every year we've updated the study for PolitiFact National. In 2017, PolitiFact was 17 percent more likely to give a Democrat's false statement a "Pants on Fire" rating than a Republican's. But the number of Democrats given false ratings was so small that it hardly affected the historical trend. Over PolitiFact's history, Republicans are over 50 percent more likely than Democrats to receive a "Pants on Fire" rating for a false claim.

(Chart: 2017 "Pants on Fire" percentages by party)


After Angie Drobnic Holan replaced Bill Adair as PolitiFact editor, we saw a tendency for PolitiFact to give Republicans many more false ("False" plus "Pants on Fire") ratings than Democrats. In 2013, 2015, 2016 and 2017 the "Pants on Fire" percentage for Democrats was exactly 25 percent each year. Except for 2007, which we count as an anomaly, that percentage marked the record high for Democrats. It appeared likely that Holan was aware of our research and was leading PolitiFact toward more careful exercise of its subjective ratings.

Of course, if PolitiFact fixes its approach to the point where the percentages are roughly even, this powerfully shows that the disparities from 2009 through 2014 represent ideological bias. If one fixes a problem it serves to acknowledge there was a problem in need of fixing.


In 2018, however, the "Pants on Fire" bias fell pretty much right in line with PolitiFact's overall history. Republicans in 2018 were about 50 percent more likely than Democrats to receive a "Pants on Fire" rating for a claim PolitiFact considered false.

The "Republicans Lie More!" defense doesn't work

Over the years we've had a hard time explaining to people why it doesn't explain away our data to simply claim that Republicans lie more.

That's because of two factors.

First, we're not basing our bias measure on the number of "Pants on Fire" ratings PolitiFact doles out. We're just looking at the percentage of false claims given the "Pants on Fire" rating.
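For readers who want to see the arithmetic, here is a minimal sketch in Python. The counts are hypothetical numbers of our own invention, not PolitiFact's actual tallies; they simply show how the "Pants on Fire" share is computed and compared:

# Minimal sketch of the "Pants on Fire" bias measure described above,
# using hypothetical counts (not PolitiFact's actual data).
def pof_share(pants_on_fire, false_only):
    # Share of a party's false-rated claims ("False" plus "Pants on Fire")
    # that drew the harsher "Pants on Fire" rating.
    return pants_on_fire / (pants_on_fire + false_only)

# Hypothetical one-year tallies:
gop_share = pof_share(pants_on_fire=30, false_only=70)  # 0.30
dem_share = pof_share(pants_on_fire=10, false_only=40)  # 0.20

# The measure compares shares, not raw counts, so a party simply making
# more false claims cannot by itself move the number.
ratio = gop_share / dem_share  # 1.5
print(f"Republicans {ratio - 1:.0%} more likely to draw 'Pants on Fire'")

Note that doubling the hypothetical Republican tallies (to 60 and 140) leaves the shares, and therefore the ratio, unchanged. That is why raw volume of false claims cannot account for the disparity.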

Second, our research provides no reason to believe that the "Pants on Fire" rating has empirical justification. PolitiFact could invent a definition for what makes a claim "Pants on Fire" false. PolitiFact might even invent a definition based on some objective measurement. And in that case the "Republicans lie more!" excuse could work. But we have no evidence that PolitiFact's editors are lying when they tell the public that the difference between the two ratings is subjective.

If the difference is subjective, as it appears, then PolitiFact's tendency to more likely give a Republican's false statement a "Pants on Fire" rating counts as a very clear indicator of ideological bias.

To our knowledge, PolitiFact has never addressed this research with public comment.

Thursday, January 3, 2019

PolitiFact's 10 Worst Fact Check Flubs of 2018

The worst of the mainstream fact checkers, PolitiFact, produced many flawed fact checks in 2018. Here's our list of PolitiFact's 10 worst fact check flubs from 2018.


10 PolitiFact Wisconsin's Worry-O-Meter

Republican Leah Vukmir challenged Democrat Tammy Baldwin for one of Wisconsin's Senate seats in 2018. Vukmir attacked Baldwin's willingness to take a hard line on terrorism by saying Baldwin was more worried about "the mastermind of 9/11" than about supporting Trump's nominee to head the CIA.

How does a fact checker measure worry?

No worries! PolitiFact Wisconsin claimed to have looked for signs Baldwin worried about Khalid Sheik Mohammed and didn't find anything. So it rated Vukmir's claim "Pants on Fire." PolitiFact Wisconsin skillfully circumnavigated Vukmir's clearly implied reference to a key reason Democrats opposed Trump's nominee, Gina Haspel: She had followed orders to implement enhanced interrogation techniques, including waterboarding. Mohammed was one of those to whom the technique was applied. Those are not the kinds of dots a fact checker like PolitiFact can connect.


9 Current immigration policy costs as much as $300 billion according to one study

The White House published an infographic claiming immigration policy costs the government money--as much as $300 billion according to one study. PolitiFact examined the question and found that it was "Half True" because it supposedly left out important details, like "U.S.-born children with at least one foreign-born parent are among the strongest economic and fiscal contributors, thanks in part to the spending by local governments on their education." No, really. That's critical missing context in PolitiFact's book. At least in this case.


8 Trump says senior White House official who said North Korean summit would be impossible to keep does not exist

The New York Times reported that a "senior White House official" said a U.S. summit with North Korea would be impossible to keep on its original date. It turned out the official didn't quite say that, instead saying words to the effect that keeping the original date would prove extremely difficult.

When Trump tweeted that the source did not exist, PolitiFact fact checked the claim. In doing so, the fact checkers set aside the idea that Trump was saying no senior White House official had made the claim attributed by the Times. The fact checkers concluded that Trump's tweet was "Pants on Fire" false because the person to whom the Times attributed its dubious paraphrase was a real person.

We count this as a classic example of uncharitable interpretation.


7 Sen. Ted Cruz claims he has consistently opposed government shutdowns

After Sen. Ted Cruz (R-Texas) said he has consistently opposed government shutdowns, PolitiFact rated his claim "Pants on Fire" because Cruz joined a failed vote against cloture on a bill that would have ended a government shutdown. PolitiFact said Cruz had failed his own test for supporting a shutdown: Cruz said shutdowns happen when senators vote to deny cloture on a funding bill. But when the attempt to deny cloture fails, no shutdown occurs, even though some senators voted to deny cloture on a funding bill. PolitiFact tried to make Cruz look like a hypocrite by taking his statement out of context.


6 PolitiFact claims Trump was wrong that a civilian in the room with Omar Mateen might have prevented the Pulse nightclub massacre

(Via Zebra Fact Check) When President Trump tweeted that a civilian with a gun in the room with Mateen might have prevented or reduced the casualties from the Pulse nightclub shooting, PolitiFact ruled the claim "False."

But PolitiFact made an incoherent case for its ruling. Trump was stating a counterfactual scenario, that if a civilian with a gun had been in the room with Mateen then the killing might have been prevented. PolitiFact argued, in effect, that the police detective doing guard duty in the Pulse parking lot counted as the civilian in the room with Mateen and had no effect on the outcome. A person in a parking lot is not the same as a person in the room, we say. And we see no grounds for the implication that the detective in the parking lot failed to contribute to a better outcome compared to having no armed guard in the parking lot.


5 PolitiFact determines 4.1 percent GDP growth objectively not "amazing."

After President Trump went to Twitter to declare 4.1 percent GDP growth rate "amazing," PolitiFact fact checked the claim and determined it "False." The Weekly Standard took note of PolitiFact's factual interest in a matter of opinion.

PolitiFact often fails to follow its principle against fact-checking opinion or hyperbole, and this case serves as an excellent example.


4 Does the European Union export cars to the U.S. by the millions?

After President Trump claimed the EU exports cars to the United States by the millions, PolitiFact interpreted the claim to refer specifically (and separately) to Mercedes and BMW vehicles (Trump mentioned both in his tweet) or to Germany in particular. Additionally, PolitiFact assumed that imports had to exceed 1 million per year to make Trump's claim true. That's despite the fact that Trump specified no time frame.

PolitiFact found its straw man version of Trump's claim "False." As a matter of fact, the number of cars manufactured in the European Union and exported to the United States exceeded 1 million in each year from 2014 through 2016 (the latest numbers available when Trump tweeted). Definitely false, then?
 

3 Did added debt in 2017 exceed cumulative debt over the United States' first 200 years in terms of GDP?

When MSNBC host Joe Scarborough said the Trump administration had added more debt than was accumulated in the nation's first 200 years, PolitiFact fact checked the claim. It was true, PolitiFact found, in terms of raw dollars. But experts told PolitiFact that debt as a percentage of GDP serves as the best measure. So PolitiFact incorrectly interpreted the total accumulated debt in 2017 as added debt and proclaimed that Scarborough's claim checked out in terms of percentage of GDP.

Scarborough received a "Mostly True" rating for a claim that was incorrect in terms of GDP--what PolitiFact reported as the most appropriate measure.

Making this one even better, PolitiFact declined to fix the problem after we pointed it out to them.


2 PolitiFact decides who built what

After the right-leaning Breitbart news site published a fact check announcing that immigrants did not build Fall River, Massachusetts ("mostly false," according to that fact check), PolitiFact published a fact check finding the Breitbart fact check "False." PolitiFact and Breitbart reported that established residents of Fall River built factories. Immigrants came to work in the factories. We agree with Breitbart that it does not make sense to withhold all credit from the people who built the factories.


1 PolitiFact flip-flops on its 2013 "Lie of the Year"

Republican senatorial candidate Josh Hawley (R-Mo.) tagged his opponent, Democrat Claire McCaskill, with echoing PolitiFact's 2013 "Lie of the Year"--the promise that Americans would be able to keep their existing health care plans under the Affordable Care Act.

Hawley accurately summarized PolitiFact's reasoning for its 2013 award. Obama's promise was emphatic that people would not lose their existing plans. Yet millions received cancellation notices in 2013 from insurance companies electing to simply drop potentially grandfathered plans. That led to Obama's promise sharing the "Lie of the Year." PolitiFact baselessly claimed that when Hawley said people lost their plans he was sending the message that millions of people lost insurance entirely instead of simply losing the plans they preferred.



Happy New Year!



Correction Jan 3, 2019: Did a strikethrough correction, changing "measure of GDP" to "percentage of GDP"

Wednesday, December 5, 2018

Handicapping PolitiFact's "Lie of the Year" Candidates (Updated)


It's that time of year again, when the supposedly non-partisan and unbiased folks at PolitiFact prepare an op-ed about the most significant lie of the year, PolitiFact's "Lie of the Year" award.

At PolitiFact Bias we have made a tradition of handicapping PolitiFact's list of candidates.

So, without further ado:


To the extent that Democrats think Trump's messaging on immigration helped Republicans in the 2018 election cycle, this candidate has considerable strength. I (Jeff can offer his own breakdown if he wishes) rate this entry as a 6 on a scale of 1-10 with 10 representing the strongest.


This claim involving Saudi Arabia qualifies as my dark horse candidate. By itself the claim had relatively little political impact. But Trump's claim relates to the murder of Saudi journalist (and U.S. resident) Jamal Khashoggi. Journalists have disproportionately gravitated toward that issue. Consonant with journalists' high estimation of their own intelligence and perception, this is the smart choice. 6.


This claim has much in common with the first one. It deals with one of the key issues of the 2018 election cycle, and Democrats may view this messaging as one of the reasons the "blue wave" did not sweep over the U.S. Senate. But the first claim came from a popular ad. And the first claim was rated "Pants on Fire" while PolitiFact gave this one a mere "False" rating. So this one gets a 5 from me instead of a 6.



PolitiFact journalists may like this candidate because it undercuts Trump's narrative about the success of his economic policies. Claiming U.S. Steel is opening new plants after Trump slapped tariffs on aluminum and steel makes the tariffs sound like a big success. But not so much if there's no truth to it. How significant was it politically? Not so much. I rate this one a 4.



If this candidate carries significant political weight, it comes from the way the media narrative contradicting Trump's claim helped lead to the administration's reversal of its border policy. That reversal negated, at least to some extent, a potentially effective Democratic Party election-year talking point. I rate this one a 5.


That's five from President Trump. Are PolitiFact's candidates listed "in no particular order"? PolitiFact does not say.



Bernie Sanders' claim about background checks for firearm purchases was politically insignificant. Pickings from the Democratic Party side were slim. Democrats only had about 12 false ratings through this point in 2018, including "Pants on Fire" ratings. Republicans had over 80, for comparison. I give this claim a 1.



As with the Sanders claim, the one from Ocasio-Cortez was politically insignificant. It was ignorant, sure, but Ocasio-Cortez was guaranteed to win in her district regardless of what she said. Her statement would have been just as significant politically if she said it to herself in a closet. This claim, like Sanders', rates as a 1.




Is this the first time a non-American made PolitiFact's list of candidates? This claim ties into the same subject as last year's winner, Russian election interference. About last year's selection I predicted "PolitiFact will hope the Mueller investigation will eventually provide enough backing to keep it from getting egg on its face." One year later it remains uncertain whether the Mueller investigation will produce a report that shows much more than the purchase of some Facebook ads. If and only if the Russia story gets new life in December will PolitiFact make this item its "Lie of the Year." I give this item a 4, with a higher ceiling depending on the late 2018 news narrative.




Yawn. 1.





This claim from one of Trump's economic advisors rates about the same as Ocasio-Cortez's claim on its face. I think Kudlow may have referred to deficit projections and not deficits. But that aside, this item may appeal to PolitiFact because it strikes at the idea that tax cuts pay for themselves. Democrats imagine that Republicans commonly believe that (it may be true--I don't know). So even though this item should rate in the same range as the Sanders and Ocasio-Cortez claims I will give it a 4 to recognize its potential appeal to PolitiFact's left-leaning staff. It has a non-zero chance of winning.



Afters

A few notes: Once again, PolitiFact drew only from claims rated "False" or "Pants on Fire" to make up its list of candidates. President Obama's claim about Americans keeping their health insurance plans remains the only candidate to receive a "Half True" rating.

With five Trump statements among the 10 nominees we have to allow that PolitiFact will return to its ways of the past and make "Trump's statements as president" (or something like that) the winner.


Jeff Adds:

Knowing PolitiFact's Lie of the Year stunt is more about generating teh clickz than about serious journalism or truth seeking, my pick is the Putin claim.

The field of candidates is, once again, intentionally weak outside of the Putin rating. Despite all the Pants on Fire ratings they passed out to Trump this year, PolitiFact filled the list with claims that were simply False (and this is pretending there's some objective difference between any of PolitiFact's subjective ratings.)

Giving the award to Bernie won't generate much buzz, so you can cross him off the list.

It's doubtful the nonpartisan liberals at PolitiFact would burden Ocasio-Cortez with such an honor when she's already taking well-deserved heat for her frequent gaffes. And as far as this pick creating web traffic, I submit that AOC isn't nearly as talked about in Democrat circles as the ire she elicits from the right would suggest. That said, she should be considered a dark horse pick.

It's not hard to imagine PolitiFacter Aaron Sharockman cooking up a scheme during a Star Chamber session to pick AOC as an attempt at outreach to conservative readers and a way to beef up their "we pick both sides!" street cred (a credibility, by the way, that only PolitiFact and the others in their fishbowl of liberal confirmation bias actually believe exists.)

More people in America know Spencer Pratt sells healing crystals than have ever heard of Larry Kudlow. You can toss this one aside.

The inclusion of the David Hogg claim seems like a PolitiFact intern was given the task of picking out a few False nuggets from liberals and that was what they came up with. Don't expect PolitiFact to pick on the young but misinformed activist. [Update: This is a completely embarrassing take on my part. I was in a rush to publish my thoughts on the Lie of the Year candidates, and in that rush, I glossed over this claim. Obviously, I didn't even give it a passing notice. I'm confident that had I actually paid attention to it, I would have ignored it as a contender anyways (and I still think it's a lame pick on its face.) But that's not an excuse.

I let readers down and I embarrassed myself. As I repeatedly and mockingly point out to fact checkers: Confirmation bias is a helluva drug. I was convinced of the winner, and I ignored information that didn't support that outcome.

I regret that I didn't dismiss it with a coherent argument. My bad.-Jeff]

Putin is the obvious pick. Timed perfectly with the release of the Mueller report, it piggybacks onto the Russian interference buzz. Additionally, it allows ostensibly serious journos to include PolitiFact's Lie of the Year piece in their own articles about Russian involvement in the election (the catnip of liberal click-bait.) It gets bonus points for confirming for PolitiFact's Democrat fan base that Trump is an illegitimate president who stole the election.

The Putin claim has everything: Anti-Trump, stokes Russian interference RTs and Facebook shares, and gets links from journalists at other news outlets sympathetic to PolitiFact's cause.

The only caveat here is if PolitiFact continues their recent history of coming up with some hybrid, too-clever-by-half Lie of the Year winner that isn't actually on the list. But even if they do that, the reasoning is the same: PolitiFact is not an earnest journalism outlet engaged in fact spreading. PolitiFact exists to get your clicks and your cash.

Don't believe the hype.





Updated: Added Jeff Adds section 1306 PST 12/10/2018 - Jeff
Edit: Corrected misspelling of Ocasio-Cortez in Jeff Adds portion 2025 PST 12/10/2018 - Jeff
Updated: Strike-through text of Hogg claim analysis in Jeff Adds section, added three paragraph mea culpa defined by brackets 2157 PST 12/12/2018 -Jeff


Wednesday, November 14, 2018

PolitiFact misses obvious evidence in Broward recount fact check

On Nov. 13, 2018 PolitiFact's "PunditFact" brand issued a "Pants on Fire" rating to conservative Ken Blackwell for claiming Democrats and their allies were manufacturing voters in the Florida election recount.


The problem?

PolitiFact somehow overlooked obvious evidence reported in the mainstream media. The Tampa Bay Times, PolitiFact's owner before its transfer to the nonprofit Poynter Institute, published a version of the story:
Broward's elections supervisor accidentally mixed more than a dozen rejected ballots with nearly 200 valid ones, a circumstance that is unlikely to help Brenda Snipes push back against Republican allegations of incompetence.

The mistake — for which no one had a solution Friday night — was discovered after Snipes agreed to present 205 provisional ballots to the Broward County canvassing board for inspection. She had initially intended to handle the ballots administratively, but agreed to present them to the canvassing board after Republican attorneys objected.
The Times story says counting the 205 provisional ballots resulted in at least 20 illegal votes ending up in Broward County's vote totals.

The Times published its story on Nov. 10, 2018.

PolitiFact/PunditFact published its fact check on Nov. 13, 2018 (2:24 p.m. time stamp). The fact check contains no mention at all that Broward County included invalid votes in its vote totals.

Instead, PolitiFact reporter John Kruzel gives us the breezy assurance that neither he nor the state found evidence supporting Blackwell's charge.
Our ruling

Blackwell said, "Democrats and their allies (...) are manufacturing voters."

We found no evidence, nor has the state, to support this claim. Blackwell provided no evidence to support his statement.

We rate this Pants on Fire.
Inconceivable, you say?




Friday, November 9, 2018

PolitiFact: "PolitiFact is not biased--here's why" Pt. 4

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

PolitiFact:
4. Reader support allows us to stay independent.

Our independent journalism, seeking only to sort out the truth in American policy, is what motivates us to keep publishing for the benefit of our readers. We began over a decade ago as a politics project at Florida’s largest daily newspaper, the Tampa Bay Times. Today, we are a nonprofit newsroom that is part of the Poynter Institute, a school for journalists based in Florida.
As with Holan's third point offered to show PolitiFact is not biased, her fourth point gives no reason to think PolitiFact is not biased.

Does anybody need a list of partisan causes supported by public donations? Does anybody have the slightest doubt that PolitiFact's "Truth Squad" membership skews strongly left? If anyone harbors the second doubt, we recommend checking out polls like this one that show moderates and conservatives place little trust in the media.

If PolitiFact relies on donations primarily from liberals, then how does that make it more independent instead of less independent? Were PolitiFact to displease its liberal base it could expect its primary source of private donations to shrink.

Here's something we'd like to see. Let PolitiFact poll its "Truth Squad" to find out how its ideology trends as a group. If conservatives and moderates are a distinct minority, PolitiFact can use that information to bolster its membership outreach to those groups: "We need more support from conservatives who care about objective fact-checking!"

And of course that will never happen. It tears down the facade PolitiFact built to suggest that its reliance on public support somehow keeps it politically neutral. PolitiFact has no interest in that kind of transparency. That kind of truth is not in PolitiFact's self-interest.


The Main Point? Reader $upport

We do not buy that PolitiFact sincerely tried to put forth a serious argument that it is unbiased. The argument Holan put forward was simply too weak for that to be easily believable. We think the main point was to soften misgivings people may have about joining PolitiFact's financial support club, which it has dubbed its "Truth Squad."

Holan tipped off that purpose early in her article (bold emphasis added):
We expect it (accusations of bias--ed.). Afterall [sic], as an independent group measuring accuracy, we are disrupting the agendas of partisans and political operatives across the ideological spectrum. We do it to give people the information they need to govern themselves in a democracy, and to uphold the tradition of a free and independent press.

Still, we think it’s worth explaining our mission and methods, both to answer those who make the charge against us, and for our supporters when confronted by naysayers.
Also see the trimmed screen capture image below (bottom) with an ad asking readers to support PolitiFact.

If you ask us, PolitiFact's "Truth Squad" isn't worthy of the name if they buy Holan's argument. "Dupe Squad" would be more like it. Holan wants your money. And it looks like she's willing to put forward an argument she wouldn't buy herself to help keep that money flowing.

Holan offers no real answer to those who claim PolitiFact is biased. To do that, Holan would need to specifically answer the arguments critics use to support their claims.

PolitiFact finds it preferable to simply say it is unbiased without offering real evidence supporting its claim. And without rebutting the arguments of its detractors.


Is the embedded ad asking for money a mere coincidence? We added the red border for emphasis.

Thursday, November 8, 2018

PolitiFact: "PolitiFact is not biased--here's why" Pt. 3

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

3. We make mistakes sometimes, but we correct our errors promptly.
The facts come first with us. That’s why it’s important for us -- or any reputable news organization -- to correct mistakes promptly and clearly. We follow a published corrections policy that anyone can read. Readers also can easily access a list of fact-checks that have been corrected or updated after the original publication.
I make mistakes sometimes, but I correct my errors promptly. Would that make me unbiased? Who believes that?

A willingness to correct errors does not bear directly on the issue of bias. Consider PolitiFact's move of paying researchers to look for examples of biased language in its work (the study found no systematic evidence of biased language). Would a policy of correcting mistakes promptly cancel out a strong propensity to use biased language?

Of course not. Correcting mistakes would only have an effect on biased language if the publisher viewed biased language as a mistake and corrected it as such.

In our experience PolitiFact often refuses to consider itself mistaken when it makes a real mistake.

What good is a thorough and detailed corrections policy if the publishing entity can't recognize the true need for a correction?

And doesn't it go without saying that the failure to recognize the need for a correction may serve as a strong indicator of bias?


Wonderful-Sounding Claim Meaning Nearly Nothing

How great is PolitiFact's corrections policy? Just let Holan tell you:
We believe it is one of the most robust and detailed corrections policies in American fact-checking.
We were momentarily tempted to fact check Holan's claim. Except she starts with "We believe," which immediately moves the claim into the realm of opinion. But if it were a claim of fact, Holan could probably defend it easily because the claim doesn't really mean anything.

Think about it. "One of the most robust and detailed corrections policies in American fact-checking." Let's take a look at the set of American fact-checkers, using the list of IFCN-verified fact-checkers. When we looked on Nov. 8, 2018, there were eight (including PolitiFact).

With a pool that small PolitiFact could have the least robust and detailed corrections policy among the eight and plausibly say it has one of the most robust and detailed corrections policies in American fact-checking. Our opinion? In a pool of eight there's nothing to crow about unless you're No. 1. Coming in No. 4 puts one in the middle of the pack, after all.

We think PolitiFact's corrections policy is less robust than that of its parent organization, the Poynter Institute. We're wondering why that should be the case.


Summary

The first two reasons Holan offered to support PolitiFact's "not biased" claim were incredibly weak. But the third item managed to register even lighter weight on the scale of evidence. A robust corrections policy is a poor protection against ideological bias. It's a bit like using a surgical mask for protection against mustard gas.