Friday, January 17, 2020

Fact checkers decide not to check facts in fact check of Bernie Sanders

As a near-perfect follow-up to our post about progressives ragging on PolitiFact over its centrist bias, we present this Jan. 15, 2020 PolitiFact fact check of Democratic presidential candidate Sen. Bernie Sanders:


Sanders said his plan would "end" $100 billion in health care industry profits, and PolitiFact plants a "True" Truth-O-Meter graphic just to the right of that claim.

But there's no fact check here of whether Sanders' plan would end $100 billion in profits. Instead the fact check looks at whether the health care industry makes $100 billion in profits (bold emphasis added):
The Sanders campaign shared its math, and it’s comprehensive.

The $100 billion total comes from adding the 2018 net revenues -- as disclosed by the companies -- for 10 pharmaceutical companies and 10 companies that work in health insurance.

We redid the numbers. Sanders is correct: The total net revenues, or profits, these companies posted in 2018 comes to just more than $100 billion - $100.96 billion, in fact. We also spoke to three independent health economists, who all told us that the math checks out.

There are a couple of wrinkles to consider. Some of the companies included -- Johnson & Johnson, for instance -- do more than just health care. Those other services likely affect their bottom lines.

But more importantly, $100 billion is likely an underestimate, experts told us.
It looks to us like PolitiFact meticulously double-checked equations that did not adequately address the issue of health care profits.

On the one hand, "We redid the numbers. Sanders is correct." But on the other hand, "$100 billion is likely an underestimate."

The fact checkers are telling us Sanders was accurate but probably wrong.

But we've only covered a premise of Sanders' claim. The meat of the claim stems from Sanders saying he will "end" those profits.

Did Sanders mean he would cut $100 billion in profit or simply reduce profits by some unspecified amount? We don't see how a serious fact-check effort can proceed without somehow addressing that question.

PolitiFact proceeds to try to prove us wrong (bold emphasis added):
Sanders suggested that Medicare for All would "end" the $100 billion per year profits reaped by the health care industry.

The proposal would certainly give Washington the power to do that.

"If you had Medicare for All, you have a single payer that would be paying lower prices," Meara said.

That means lower prices and profits for pharmaceuticals, lower margins for insurers and lower prices for hospitals and health systems.

That could bring tradeoffs: for instance, fewer people choosing to practice medicine. But, Meara noted, the number supports Sanders’ larger thesis. "There’s room to pay less."
Though PolitiFact showed no inclination to pin down Sanders' meaning, the expert PolitiFact cited (professor of health economics Ellen Meara) translates Sanders' point as "There's room to pay less."

Do the fact checkers care how much less? Is PolitiFact actually fact-checking whether Sanders' plan would lower profit margins and it doesn't matter by how much?

Side note: PolitiFact's expert donates politically to Democrats. PolitiFact doesn't think you need to know that. PolitiFact is also supposedly a champion of transparency.

Where's the Fact Check?

PolitiFact does not know how much, if at all, the Sanders plan would cut profit margins.

PolitiFact does not specify how it interprets Sanders' claim of bringing an "end" to $100 billion in profits (the cited expert expects a lower profit margin but offers no estimate).

The bulk of the fact check is a journalistic hole. It fails to offer any kind of serious estimate of how much the Sanders plan might trim profits. If the plan trims profits down to $75 billion, presumably PolitiFact would count that as ending $100 billion in profits.

Using that slippery understanding, quite a few outcomes could count as ending $100 billion in profits. But how many prospective voters think Sanders is promising to save consumers that $100 billion?

"Fact-checking."

That's no "centrist bias." That's doing Sanders a huge favor. It's liberal bias, the prevalent species at PolitiFact.

Wednesday, January 15, 2020

Progressives accusing PolitiFact of "centrist bias"

Left-leaning The Week has put out a couple of articles recently accusing PolitiFact of a "centrist bias."

Here's one of the accusations:
Is Joe Biden, contrary to his centrist reputation, a tax-and-spend liberal? That was the argument made by Politifact's Amy Sherman, defending him against accusations from the Bernie Sanders camp that in 2018, "Biden lauded Paul Ryan for proposing cuts to Social Security and Medicare." Not so, says Politifact: "The Sanders campaign plucked out part of what Biden said but omitted the full context of his comments. We rate this statement False."

Unfortunately, it's a tendentious argument that totally misreads Biden's politics and history. He did indeed call for cuts to Social Security and Medicare in a 2018 speech at the Brookings Institution — part of a decades-long career of hawking pointless austerity. Yet, just like they did with Medicare-for-all, fact checkers are bending the truth to advance an ideological centrist agenda.
The argument, unlike many from-the-left criticisms of PolitiFact, isn't frivolous. We noted during the 2016 election that PolitiFact seemed tougher on Sanders than on his opponent, Hillary Rodham Clinton. It makes sense that wherever PolitiFact's ideology falls on the political continuum, those to either side may experience a resulting bias.

And, in fact, that's our purpose in highlighting the accusation. A charge of centrist bias is consistent with the charge of liberal bias. The Week is saying PolitiFact is biased against political positions to both its left and its right. The Week just doesn't bother to highlight any of the "centrist" bias that harms conservatives.

We do that.

Plus we highlight good examples of PolitiFact's anti-progressive bias under the "Left Jab" tag.



Note: The "bending the truth" example from The Week doesn't wash.

Monday, January 13, 2020

Busted: PolitiFact catches Nikki Haley using hyperbole without a license


Some things never change.

Among those things, apparently, is PolitiFact's tradition of taking Republican hyperbole literally.

Case in point:


The hyperbole should have been easy to spot based on the context.

Former UN Ambassador Nikki Haley appeared on Fox News' "Hannity" show with host Sean Hannity.




Transcript ours (starting at about 2:12):

SH

Do you agree with, uh, listen I've always liked General Petraeus. He's a great, great general, hero, patriot in this country. He said it's impossible to overstate the importance of this particular action. It's more significant than the killing of bin Laden, even the death of al Baghdadi. And he said Soleimani was the architect, operational commander of the Iranian effort to solidify control of the so-called Shia Crescent stretching from Iran to Iraq through Syria and southern Lebanon. I think that's the reason why Jordanians, Egyptians and Saudis are now working with the Israelis, which I don't think anybody saw coming.

NH
Well, and I'll tell you this: You don't see anyone standing up for Iran. You're not hearing any of the Gulf members, you're not hearing China, you're not hearing Russia. The only ones mourning the loss of Soleimani are our Democrat leadership. And our Democrat presidential candidates. No one else in the world, because they knew that this man had evil veins. They knew what he was capable of and they saw the destruction and, and the lives lost (based?) from his hand. And so--

SH
What a dumb (?). We've been hearing "Oh, he's evil, he's a murderer he killed Americans and he, this is the No. 1 state sponsor of terror and they're fighting all these proxy wars but we don't want to make 'em mad." That's what it sounds like to me.

NH
You know, and you go tell that to the 608 American families who lost a loved one. Go tell that to the military members who lost a limb. This was something that needed to be done and should be celebrated. And I'll tell you right now, partisan politics should stop when it comes to foreign policy. This is about America united. We need to be completely behind the president, what he did, because every one of those countries are watching our news media right now seeing what everyone's saying. And this is a moment of strength for the United States. It's a moment of strength from President Trump.
Haley's "mourning" comment comes after her emphasis that Iran received no support ("You don't see anyone standing up for Iran") regarding the killing of Soleimani. So it makes very good sense to take "mourning" as a hyperbolic amplification of that point.

Hannity's response to Haley's comment came in the same vein, in fact mocking Democrats who acknowledged Soleimani got what he deserved while questioning the wisdom of the move.

PolitiFact could legitimately check to see if world leaders offered statements much in the same vein leading Democrats offered. Instead of doing that, PolitiFact used a wooden-literal interpretation of Haley's remarks as a basis for its fact check.

How do mistakes like this (and these) make it past PolitiFact's exalted "Star Chamber" of experienced fact check editors?

Could be bias.

Thursday, January 2, 2020

PolitiFact's "Pants on Fire" bias in 2019

As our research has documented, PolitiFact has consistently failed to offer any objective means of distinguishing between false political claims and ridiculously false political claims.

On the contrary, PolitiFact's founding editor, Bill Adair, said decisions about the "Truth-O-Meter" ratings are "entirely subjective." And current editor Angie Drobnic Holan in 2014 explained the difference between the "False" and "Pants on Fire" ratings by saying "the line between 'False' and 'Pants on Fire' is just, you know, sometimes we decide one way and sometimes decide the other."

Given the understanding that the difference between "False" and "Pants on Fire" rests on subjective grounds, we have conducted ongoing research on the chances a claim PolitiFact considers false will receive the "Pants on Fire" designation.

Our research suggests at least two things.

First, PolitiFact National is biased against Republicans.

Second, the statement selection process renders "Truth-O-Meter" ratings an entirely unreliable guide to candidate truthfulness even assuming the subjective ratings are objectively accurate(!).

Without further ado, an updated chart for both political parties showing the percentage of false ratings given the "Pants on Fire" rating:


We'll address one potential criticism right off the bat.

We should expect a higher percentage for the party that lies more!

We would agree with that criticism if the PolitiFact data stemmed from objective considerations in the fact checks. We have no evidence to support that and considerable evidence to counter it (see above). All the evidence suggests the "Pants on Fire" rating is a purely subjective judgment.

Subjective judgment is incompatible with neutrality.

A Review of the Findings

False statements from Democrats were rated "Pants on Fire" just 9.09 percent of the time in 2019, tying the record low set in 2009. The Republican percentage stayed very close to its historic baseline, which cumulatively stands at 27.21 percent. The long-term average for Democrats dropped slightly to 17.41 percent. Over PolitiFact National's entire history, Republicans are about 60 percent more likely to receive the subjective "Pants on Fire" rating.
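As a rough check on the "about 60 percent" figure, here is a minimal sketch using only the cumulative percentages quoted above (no other data assumed):

```python
# Cumulative "Pants on Fire" shares of false ratings, as quoted above.
gop_rate = 27.21 / 100   # Republicans, PolitiFact National's entire history
dem_rate = 17.41 / 100   # Democrats, same span

# Relative likelihood that a false GOP claim draws "Pants on Fire"
# compared with a false Democratic claim.
relative = gop_rate / dem_rate - 1
print(f"{relative:.0%}")  # prints 56%, consistent with "about 60 percent more likely"
```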

That's bias.

PolitiFact's wildly unscientific selection process

The Trump presidency ought to end permanently any supposition that PolitiFact's story selection process approximates random (scientific) representative selection in any way.

Of the 14 "Pants on Fire" ratings given to Republicans in our 2019 data, 13 went to President Trump. The other one went to Mr. Trump's son-in-law, Jared Kushner.

Of the 39 "False" ratings given to Republicans in our 2019 data, 29 went to Mr. Trump.

Combined, then, 42 of 53 of Republicans' false "Truth-O-Meter" ratings went to Mr. Trump.
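The tally above can be verified with simple sums over the counts already stated (14 "Pants on Fire" and 39 "False" ratings in our 2019 data):

```python
# 2019 Republican ratings from our data, broken out by recipient.
pants_on_fire = {"Trump": 13, "Kushner": 1}   # 14 total
false_trump = 29
false_total = 39

trump_share = pants_on_fire["Trump"] + false_trump        # Trump's combined total
all_false = sum(pants_on_fire.values()) + false_total     # all false-rated claims
print(trump_share, "of", all_false)  # prints: 42 of 53
```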

For comparison, in 2011 PolitiFact rated 88 Republican claims false with none of them coming from Mr. Trump. From 88 down to 10? Is the Republican Party, aside from Trump, that much more honest with the passage of time? Nonsense. That hypothesis is completely implausible on its face.

The explanation is painfully simple: As PolitiFact admits, its editors choose the claims PolitiFact rates. They use editorial judgment to select stories, not scientific selection. The editors are, in the words of the editors, "not social scientists."

If anything counts as a proof that the supposedly unbiased fact checkers at PolitiFact are deliberately pulling the wool over the eyes of their readers, it is PolitiFact's unrepentant use of its aggregated ratings as voter guides.

They have to know better.

They do it anyway. And the practice was part of PolitiFact's aim from the start, which helps explain why it won't go away.

PolitiFact and Bernie Sanders explain the gender pay gap

Everybody knows about the gender pay gap, right?

It's the statistic Democrats habitually misuse to amplify their focus on "equal pay for equal work." Fact checkers like PolitiFact punish that traditional deception by rating it "Mostly True" most of the time, or sometimes just "True."

Let's take a look at PolitiFact's latest PolitiSplainer on the gender wage gap, this time featuring Democratic Party presidential candidate and "democratic socialist" Bernie Sanders.

Such articles might more appropriately wear the label "unexplainer."

PolitiFact starts out with exactly the kind of ambiguity Democratic Party leaders love, obscuring the difference between the raw gender wage gap and the part of the gap (if any) caused by gender discrimination:
The disparity in how much women make compared with men comes up often in the political discourse, tagged with a call to action to help women’s paychecks catch up.
Running just above that sentence, the featured image directs readers toward the gender discrimination explanation for the gender pay gap. Plausibly deniable? Of course. PolitiFact didn't mean it that way or something, right?


PolitiFact goes on to tell its readers that a number of Democrats have raised the gender pay gap issue while on the campaign trail. The paragraph contains four hotlinks:
Several leading Democratic presidential candidates recently highlighted one of the biggest imbalances — saying that a Latina woman must work 23 months to make the amount a white man makes in one year, or that they make 54 cents on the dollar.
Each of the statements from Democrats highlighted the gender pay gap in an ambiguous and misleading way. None of the statements bothered to distinguish between the raw pay gap, caused by a variety of things including women working fewer hours, and the hard-to-measure pay gap caused by employers' sexual discrimination.

The claim from Mayor Pete Buttigieg was pretty much incoherent and would have made great fodder for a fact check (54 cents on the dollar isn't enough to live on? Doesn't that depend on the size of the dollar in the comparison?).

PolitiFact highlighted the version of the claim coming from Sen. Sanders:



Sanders' use of the gender pay gap fits the standard pattern of deception. He leads with a figure from the raw wage gap, then assures the audience that "Equal pay is not radical ... It's an issue of basic justice."

But Sanders is misleading his audience. "Equal pay for equal work" isn't radical and may count as an issue of basic justice. But equal pay regardless of the work done is very radical in the United States. And that's what Democratic candidates imply when they base their calls for equal pay on the disparities in the raw gender wage gap.

If only there were fact checkers who could explain that deception to the public!

But, no, PolitiFact does not explain Sanders' deception.

In fact, it appears PolitiFact has never rated Sanders on a claim related to the gender wage gap.

PolitiFact did not rate the misleading tweet featured in its PolitiSplainer. Nor did it rate any of these:
PolitiFact ratings of the gender wage gap tend to graciously overlook the fact that Democrats almost invariably invoke the raw gender wage gap when stumping for equal pay for equal work, as Sanders did above. Does the raw gender wage gap have much of anything to do with the wage gap just from discrimination? No. There's hardly any relationship.

Should Democrats admit they want equal pay for unequal work, it's likely the American people will let them know that the idea is not mainstream and not an issue of basic fairness.

PolitiFact ought to know that by now. But you won't find it in their fact checks or PolitiSplainers dealing with the gender wage gap.

How Big is the Pay Gap from Discrimination?

Remarkably, PolitiFact's PolitiSplainer on the pay gap almost takes a pass on pinning down the role discrimination might play. One past PolitiSplainer from 2015 actually included the line from the CONSAD report's Foreword (by the Department of Labor) suggesting there may be no significant gender discrimination at all found in the raw wage gap.

In the 2019 PolitiSplainer we got this:
We often hear that discriminatory practices are a reason why on average women are paid less than men. Expert say it’s hard to measure how much of a role that discrimination plays in the disparity.

"Research shows that more than half of the gap is due to job and industry segregation — essentially, women tend to work in jobs done primarily by other women, and men tend to work in jobs done primarily by other men and the ‘men’s jobs’ are paid more," said Jennifer Clark, a spokeswoman for the Institute for Women’s Policy Research.

Clark cited education and race as other factors, too.
Such a weak attempt to explain the role of discrimination in the gender pay gap perhaps indicates that PolitiFact's aim was to explain the raw gender wage gap. Unfortunately for the truth, that explanation largely stayed within the lines of the traditional Democratic Party deceit: Mention the raw gender wage gap and then advocate legislation supposedly helping women receive equal pay for equal work.

That juxtaposition sends the clear message the raw gender wage gap relates to discrimination.

Supposedly neutral and objective fact checkers approve the deception, so it must be okay.

We have no reason to suppose mainstream fact checkers like PolitiFact will stop playing along with the misdirection.

Thursday, December 19, 2019

PolitiFact disagrees with IG on IG report on CrossFire Hurricane

What to do when the Inspector General and PolitiFact disagree on what an IG report says?

Well, the IG has no Pulitzer Prize, so maybe trust the fact checkers?

PolitiFact, Dec. 11, 2019:

The investigation was not politically motivated

The IG report also dismissed the notion that the investigation was politically motivated.
Inspector General Michael Horowitz, during Senate testimony on Dec. 18, 2019 (The Epoch Times):
Then [Sen. Josh Hawley (R-Mo.)] asked, “Was it your conclusion that political bias did not affect any part of the Page investigation, any part of Crossfire Hurricane?”

“We did not reach that conclusion,” Horowitz told him. He added, “We have been very careful in connection with the FISA for the reasons you mentioned to not reach that conclusion, in part, as we’ve talked about earlier: the alteration of the email, the text messages associated with the individual who did that, and then our inability to explain or understand or get good explanations so we could understand why this all happened.”
We confirmed The Epoch Times' account via C-SPAN. The Times edits the exchange for clarity without altering its basic meaning. We invite readers to confirm it for themselves via an embedded clip (around 3:11)*:



Seriously, we count PolitiFact's Pulitzer as no kind of reasonable evidence supporting its reliability. Pulitzer juries do not fact check content before awarding prizes.

It seems clear PolitiFact committed the fallacy of argumentum ad ignorantiam. When the IG report repeatedly said it "found no testimonial or documentary evidence that these operations resulted from political bias or other improper considerations" or similar words, PolitiFact made the fallacious leap to conclude there was no political bias.

Pulitzer Prize-winning and IFCN-verified PolitiFact.

We need better measures of trustworthiness.


*Our embedded clip ended up shorter than we expected, for which we apologize to our readers. Find the full clip here.

Monday, December 16, 2019

A political exercise: PolitiFact chooses non-impactful (supposed) falsehood as its "Lie of the Year"

PolitiFact chose President Trump's claim that a whistleblower's complaint about his phone call with Ukrainian leader Volodymyr Zelensky got the facts "almost completely wrong."

We had deemed it unlikely PolitiFact would choose that claim as its "Lie of the Year," reasoning that it failed to measure up to the supposed criterion of carrying a high impact.

We failed to take into account PolitiFact's dueling criteria, explained by PolitiFact Editor Angie Drobnic Holan back in 2016:
Each year, PolitiFact awards a "Lie of the Year" to take stock of a misrepresentation that arguably beats all others in its impact or ridiculousness.
To be sure, "arguably beats all others in its impact" counts as a subjective criterion. As a bonus, PolitiFact offers itself an alternative criterion based on the "ridiculousness" of a claim.

Everybody who thinks there's an objective way to gauge relative "ridiculousness" raise your hand.

We will not again make the mistake of trying to handicap the "Lie of the Year" choice based on the criteria PolitiFact publicizes. Those criteria are hopelessly subjective and don't tell the real story.

It's simpler and more direct to predict the outcome based on what serves PolitiFact's left-leaning interests.


Thursday, December 12, 2019

William Barr, PolitiFact and the biased experts game

Is it okay for fact checkers to rely on biased experts for their findings?

Earlier this year, Facebook restricted distribution of a video by pro-life activist Lila Rose. Rose's group complained the fact check was biased. Facebook relied on the International Fact-Checking Network to investigate. The investigator ruled (very dubiously) that the fact check was accurate but that the fact checker should have disclosed the bias of experts it cited:
The failure to declare to their readers that two individuals who assisted Science Feedback, not in writing the fact-check but in reviewing the evidence, had positions within advocacy organizations, and the failure to clarify their role to readers, fell short of the standards required of IFCN signatories. This has been communicated to Science Feedback.
Perhaps it's fine for fact checkers to rely on biased experts so long as those experts do not hold positions in advocacy organizations.

Enter PolitiFact and its December 11, 2019 fact check of Attorney General William Barr.

The fact check itself hardly deals with the substance of Barr's claim that the "Crossfire Hurricane" investigation of possible collusion between the Trump campaign and Russia was started on the thinnest of evidence. Instead, PolitiFact sticks with calling the decision to investigate "justified" by the Inspector General's report while omitting the report's observation that the law sets a low threshold for starting an investigation (bold emphasis added).
Additionally, given the low threshold for predication in the AG Guidelines and the DIOG, we concluded that the FFG information, provided by a government the United States Intelligence Community (USIC) deems trustworthy, and describing a first-hand account from an FFG employee of a conversation with Papadopoulos, was sufficient to predicate the investigation. This information provided the FBI with an articulable factual basis that, if true, reasonably indicated activity constituting either a federal crime or a threat to national security, or both, may have occurred or may be occurring. For similar reasons, as we detail in Chapter Three, we concluded that the quantum of information articulated by the FBI to open the individual investigations on Papadopoulos, Page, Flynn, and Manafort in August 2016 was sufficient to satisfy the low threshold established by the Department and the FBI.
The "low threshold" is consistent with Barr's description of "thinnest of suspicions" in the context of prosecutorial discretion and the nature of the event that supposedly justified the investigation (the Papadopoulos caper)*.

But in this post we will focus on the experts PolitiFact cited.

Rosa Brooks

Rosa Brooks, professor of law and policy at Georgetown University, told us that Barr’s assessment that the suspicions were thin "appears willfully inaccurate."

"The report concluded precisely the opposite," she said. "The IG report makes it clear that the decision to launch the investigation was justified."
If PolitiFact were brazen enough, it could pick out Brooks as a go-to (biased) expert based on her Twitter retweets from Dec. 10, 2019.



Brooks' tweets also portray her as a Democrat voter. So does her pattern of political giving.

Jennifer Daskal

Jennifer Daskal, professor of law at American University, agreed. "Barr’s statement is at best a misleading statement, if not a deliberate distortion, of what the report actually found," she said.
Daskal's Internet history shows little to suggest she pre-judged her view on Barr's statement. On the other hand, it seems pretty plain she prefers the presidential candidacy of Pete Buttigieg (one example among several). Plus Daskal has tended to donate politically to Democrats.

Robert Litt

PolitiFact contacted Litt for his expert opinion but did not mention him in the text of the fact check.

We deem it unlikely PolitiFact tabbed Litt to counterbalance the leftward lean of Brooks and Daskal. Litt was part of the Obama administration, and his appointment carried an unusual political dimension. Litt failed his background check but was installed in the Clinton Justice Department in a roundabout way.

Litt, like Brooks and Daskal, gives politically to Democrats.


So what's the problem?

We think it's okay for PolitiFact to cite experts who lean left and donate politically to the Democratic Party. That's not the problem.

The problem is the echo-chamber effect PolitiFact achieves by choosing a small pool of experts all of whom lean markedly left. As we've noted before, that's no way to establish anything akin to an expert consensus. But it serves as an excellent method for excluding or marginalizing contrary arguments.

It's not like those are hard to find. It seems PolitiFact simply has no interest in them.



*It's worth noting that the information Papadopoulos shared with the Australian, Downer, came in turn from the mysterious Joseph Mifsud.