Thursday, March 5, 2020

PolitiFact Wisconsin: Real wages increasing but not keeping up with inflation

PolitiFact Wisconsin published a fact check of a claim by Rep. Mark Pocan (D-Wis.) that real wages for Americans have gone up in the past 30 years, yet the increase fails (by a long shot!) to keep up with inflation.

We hope that red flags went up for every person reading that sentence.

The term "real wages" takes inflation into account. If real wages stay perfectly flat, then wages are keeping even with inflation. If real wages increase, then wages are increasing faster than inflation.
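
To make the definition concrete, here is a quick sketch with invented numbers (the wage and CPI figures below are hypothetical, chosen only to illustrate the relationship):

```python
# Real wage = nominal wage deflated by a price index.
# All figures below are hypothetical, for illustration only.
nominal_1990 = 10.00   # dollars/hour (hypothetical)
nominal_2020 = 21.00   # dollars/hour (hypothetical)
cpi_1990 = 130.7       # price index values (hypothetical)
cpi_2020 = 258.0

# Express both wages in 2020 dollars.
real_1990 = nominal_1990 * (cpi_2020 / cpi_1990)
real_2020 = nominal_2020  # already in 2020 dollars

# If the real wage rose, wages grew faster than inflation -- by definition.
assert real_2020 > real_1990
print(f"1990 wage in 2020 dollars: ${real_1990:.2f}; 2020 wage: ${real_2020:.2f}")
```

If the real wage had come out lower than the 1990 figure, that (and only that) would mean wages failed to keep up with inflation.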

The fact check is something to behold. It may well be the early leader for worst fact check of 2020.

We faulted this fact check right away for failing to link to the source of the Pocan quotation.

Here's the source:

We're seeing the failure to link to the primary source of claims all too often from PolitiFact lately.

As the image above the video embed shows, PolitiFact Wisconsin focused on Pocan's wage comparison involving the Amazon distribution center in Kenosha.

Ignore Illogical Spox?

It didn't take long for us to find a second reason to fault PolitiFact Wisconsin. As PolitiFact related in its fact check, Pocan's communications director, Usamah Andrabi, said Pocan was talking about pay in the auto industry in the 1990s.

PolitiFact Wisconsin blew Adrabi off, in effect:
Andrabi said Pocan often uses auto worker pay to make his point, because auto manufacturing was the dominant industry in Kenosha when he was growing up there.

But Pocan did not mention auto pay in his claim, and pay in that industry historically is far higher than many other jobs. So, we focused on the weekly and hourly earnings data from the federal Bureau of Labor Statistics.
Instead of looking at the comparison Andrabi specified, PolitiFact Wisconsin decided to look at whether real wages were flat nationally over the past 30 years.

Just $3 in Thirty Years?

Before we knew it, we had a third reason to fault PolitiFact Wisconsin. After reporting the wage difference over 30 years without adjusting for inflation, PolitiFact tried to show the insignificance of the increase by adjusting for inflation. But PolitiFact used misleading language to make its point:
But using the Bureau’s inflation calculator, the 1990 weekly wage translates to $800.88 per week in today’s dollars, or $20.02 an hour. So, that’s a roughly $3 increase in 30 years.
To communicate clearly, a journalist would express the increase to the weekly wage in dollars and the increase in the hourly pay in dollars per hour.

PolitiFact Wisconsin used dollars to refer to the increase in dollars per hour, leaving readers with the impression that weekly pay increased from about $800 to $804.

Here's what one fix of that misleading error of ambiguity might look like (bold emphasis to highlight the change):
But using the Bureau’s inflation calculator, the 1990 weekly wage translates to $800.88 per week in today’s dollars, or $20.02 an hour. So, that’s an increase of roughly $3 an hour in 30 years.
Using the same language as in the preceding sentence ("an hour") tips the reader to connect the $3 change to the hourly rate instead of the weekly rate.
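
For readers who want to check the arithmetic behind the ambiguity, here is a quick sketch using PolitiFact's own figures (we assume the standard 40-hour week its weekly-to-hourly conversion implies):

```python
# PolitiFact's inflation-adjusted 1990 figures.
weekly_1990 = 800.88            # dollars per week, in today's dollars
hourly_1990 = weekly_1990 / 40  # works out to $20.02 per hour
assert round(hourly_1990, 2) == 20.02

# A "$3 increase" in the HOURLY rate implies a much larger weekly change --
# which is why "$3" with no unit misleads.
hourly_increase = 3.00
weekly_increase = hourly_increase * 40  # $120 per week
print(f"${hourly_increase:.2f}/hour over a 40-hour week is "
      f"${weekly_increase:.2f}/week, not $3/week")
```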

The Coup de Grace

Finally, we encountered the gigantic error we highlighted at the beginning.

PolitiFact admitted Pocan was literally wrong for (supposedly) suggesting that real wages were flat. Real wages have gone up. PolitiFact National had underscored that fact with a 2017 fact check of a claim from House Speaker Nancy Pelosi.

Ah, but that literal untruth only came to light by looking narrowly at Pocan's claim. PolitiFact said Pocan's true point, Andrabi notwithstanding, was "that wage growth has been largely stagnant."

PolitiFact cited a Pew Research Center study that supposedly showed the growth of real wages for groups below the top 10 percent of earners was "nearly flat" from 2000 through 2018.

All of them went up noticeably (look), but PolitiFact said they were "nearly flat."

We call that spin.

And it quickly got worse:
What’s more, the cost of living has undergone a much steeper hike: from 1983 to 2013, the Bureau of Labor Statistics reported a roughly 3% annual increase in rent and food prices, and a 1.3% annual increase in new vehicle prices.

So, a small growth in median wages is dwarfed next to the rise in cost of other goods.
That's fact check baloney.

It's true the BLS reported annual increases in rent, food and vehicle prices between 1983 and 2013, but those were inflationary changes, not inflation-adjusted changes.

It's wrong to say that inflation outpaced wage growth if real wages increased. It's startling that a fact checker could commit that error.

To be sure, real wages are calculated in a way that counts as arbitrary in a sense, totaling the price of a "basket of goods" where the goods in the basket vary over time. But still, it's ludicrous to say wages that have gone up after adjusting for inflation--that's what "real wages" are--failed to keep pace with inflation. Some items in the "basket of goods" might see higher inflation than others, but would it be proper to cherry pick those to claim that wages generally weren't keeping pace with inflation?
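
The cherry-picking problem is easy to put in numbers. Here is a sketch with invented growth rates (all hypothetical):

```python
# Hypothetical 30-year annual growth rates, invented for illustration.
wage_growth = 0.032   # nominal wages
cpi_growth  = 0.028   # overall inflation (the whole basket)
rent_growth = 0.035   # one cherry-picked component of the basket
years = 30

wages = (1 + wage_growth) ** years   # ~2.57x over 30 years
cpi   = (1 + cpi_growth) ** years    # ~2.29x
rent  = (1 + rent_growth) ** years   # ~2.81x

# Real wages rose: wages outgrew the overall basket...
assert wages / cpi > 1
# ...even though they trailed one component (rent).
assert rent > wages
print(f"wages x{wages:.2f}, CPI x{cpi:.2f}, rent x{rent:.2f}")
```

In this sketch it would be true that rent outpaced wages, yet flatly wrong to say wages failed to keep pace with inflation.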

We don't think so.

PolitiFact Wisconsin wildly altered Rep. Pocan's point and then completely botched its fact check of what it had decided he must be saying.


We alerted PolitiFact Wisconsin about these problems by responding to its tweet of its fact check and followed that up with a message in the late afternoon of March 3, 2020.

We noticed no attempt to correct the flawed fact check through March 4, 2020.

We won't be surprised if PolitiFact never corrects its mistakes in the Pocan fact check.

But we will update this item if we see that PolitiFact Wisconsin has updated it.

Monday, February 24, 2020

Nothing To See Here: Sanders blasts health insurance "profiteering"

While researching PolitiFact's false accusation that Democratic presidential candidate Pete Buttigieg used "bad math" to criticize the budget gap created by fellow candidate Bernie Sanders' spending proposals, we stumbled over a claim from Sen. Sanders that was ripe for fact-checking.

Sanders said his proposed health care plan would end profiteering practices from insurance and drug companies that result in $100 billion or so in annual profits (bold emphasis added):
Just the other day, a major study came out from Yale epidemiologist in Lancet, one of the leading medical publications in the world. What they said, my friends, is Medicare for all will save $450 billion a year, because we are eliminating the absurdity of thousands of separate plans that require hundreds of billions of dollars of administration and, by the way, ending the $100 billion a year in profiteering from the drug companies and the insurance companies.
PolitiFact claims to use an "Is that true?" standard as one of its main criteria for choosing which claims to check.

We have to wonder if that's true, or else how could a fact checker pass over the claim that profiteering netted $100 billion in profits for those companies? Do fact checkers think "profit" and "profiteering" are the same thing?

Is a fact checker who thinks that worthy of the name?

Sanders' claim directly implies that the Affordable Care Act passed by Democrats in 2010 was ineffective in its efforts to circumscribe insurance company profits. The ACA set limits on profits and overhead ("medical loss ratios"). Excess profits, by law, get refunded to the insured.
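
Roughly how that rebate mechanism works, in a simplified sketch (the real calculation involves multi-year averaging and other adjustments, and the dollar figures here are hypothetical):

```python
# Simplified sketch of the ACA medical-loss-ratio rebate.
# Insurers in the individual/small-group market must spend at least 80%
# of premium revenue on care; fall short and the difference is rebated.
premiums = 1_000_000.0   # hypothetical premium revenue
claims   = 760_000.0     # hypothetical spending on care and quality
required_mlr = 0.80

actual_mlr = claims / premiums  # 0.76 in this example
rebate = max(0.0, (required_mlr - actual_mlr) * premiums)
print(f"MLR {actual_mlr:.0%}; rebate owed to policyholders: ${rebate:,.0f}")
```

The point for the fact check: if "profiteering" were running rampant, this statutory backstop would be failing, and that would itself be worth checking.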

Sanders said it's not working. And the fact checkers don't care enough to do a fact check?

Of course PolitiFact went through the motions of checking a similar claim, as we pointed out. But using "profiteering" in the claim changes things.

Or should.

Ultimately, it depends on whether PolitiFact has the same interest in finding falsehoods from Democrats as it does for Republicans.

Sunday, February 23, 2020

PolitiFact absurdly charges Pete Buttigieg with "bad math"

PolitiFact gave some goofy treatment to a claim from Democratic presidential candidate Pete Buttigieg.

Buttigieg compared the 10-year unpaid cost of fellow candidate Bernie Sanders' new spending proposals to the current U.S. GDP.

PolitiFact cried foul. Or, more precisely, PolitiFact cried "bad math."

Note that PolitiFact says Buttigieg did "bad math."

PolitiFact's fact check never backs that claim.

If Buttigieg is guilty of anything, it was a poor job of providing thorough context for the measure he used to illustrate the size of Sanders' "budget hole." Buttigieg was comparing a cumulative 10-year budget hole with one year of U.S. GDP.

PolitiFact notwithstanding, there's nothing particularly wrong with doing that. Maybe Buttigieg should have provided more context, but there's a counterargument to that point: Buttigieg was on a debate stage with a sharply limited amount of time to make his point. In addition, the debate audience and contestants may be expected to have some familiarity with cost estimates and GDP. In other words, it's likely many or most in the audience knew what Buttigieg was saying.

Let's watch PolitiFact try to justify its reasoning:
But there’s an issue with Buttigieg’s basic comparison of Sanders’ proposals to the U.S. economy. He might have been using a rhetorical flourish to give a sense of scale, but his words muddled the math.

The flaw is that he used 10-year cost and revenue estimates for the Sanders plans and stacked them against one year of the nation’s GDP.
PolitiFact tried to justify the "muddled math" charge by noting Buttigieg compared a 10-year cost estimate to a one-year figure for GDP.

But it's not muddled math. The 10-year estimates are the 10-year estimates, mathematically speaking. And the GDP figure is the GDP figure. Noting that the larger figure is larger than the smaller figure is solid math.

PolitiFact goes on to say that the Buttigieg comparison does not compare apples to apples, but so what? Saying an airplane is the size of a football field is also an apples-to-oranges comparison. Airplanes, after all, are not football fields. But the math remains solid: 100 yards equals 100 yards.
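
The arithmetic point can be made concrete with a sketch (the dollar figures are hypothetical round numbers, not the actual estimates):

```python
# Hypothetical figures, for illustration only.
ten_year_budget_hole = 25e12   # cumulative shortfall over 10 years
one_year_gdp = 21e12           # a single year's GDP

# The comparison is arithmetically sound: the larger number is larger.
assert ten_year_budget_hole > one_year_gdp

# The apples-to-apples complaint is about units, not math: spread over
# 10 years, the same hole is a modest fraction of annual GDP.
per_year_share = (ten_year_budget_hole / 10) / one_year_gdp
print(f"Per-year hole is {per_year_share:.0%} of one year's GDP")
```

Nothing in that calculation is "muddled"; the question is only whether the audience understands which units are being compared.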

Ambiguity differs from error

In fact-checking the correct response to ambiguity is charitable interpretation. After applying charitable interpretation, the fact checker may then consider ways the message could mislead the audience.

If Buttigieg had run a campaign ad using the same words, it would make more sense to grade his claim harshly. Such a message in an ad is likely to reach people without the knowledge base to understand the comparison. But many or most in a debate audience would understand Buttigieg's comparison without additional explanation.

It's an issue of ambiguous context, not "bad math."

Correction Feb. 26, 2020: Omitted the first "i" in "Buttigieg" in the final occurrence in the next-to-last paragraph. Problem corrected.

Friday, February 21, 2020

PolitiFact's dishonest dedication to the "Trump is a liar" narrative

It's quite true that President Trump makes far more than his share of false statements, though many of them amount to hyperbole. Indeed, it may be argued that Trump blurs the line between the concepts of hyperbole and deceit.

But Trump's reputation for inaccuracy also serves as a confirmation bias trap for journalists.

Case in point, from PolitiFact's Twitter account:

The tweet does not tell us what Trump said about windmills and wildlife, though it links to a supposedly "similar claim" that it fact-checked in the past.

That fact check concerned something Trump said about the number of eagles killed by wind turbines:

The linked fact check had its own problems, which we noted at the time.

One of the things we noted was that PolitiFact gave short shrift to the facts to prefer advancing the narrative that Trump says false things:
PolitiFact's interpretation lacks clear justification in the context of Trump's remarks, but fits PolitiFact's narrative about Trump.

A politician's lack of clarity does not give fact checkers justification for interpreting statements as they wish. The neutral fact checker notes for readers the lack of clarity and then examines the possible interpretations that are at the same time plausible. The neutral fact checker applies the same standard of charitable interpretation to all, regardless of popular public narratives.
PolitiFact's tweet amplifies the distortion in its earlier fact check. Trump said wind turbines kill eagles by the hundreds. PolitiFact made a number of assumptions about what Trump meant (for example, assuming Mr. Trump's "by the hundreds" referred to an annual death toll) then produced its subjective "Mostly False" rating based on those assumptions.

Did Trump say something comparable in Colorado?

PolitiFact's tweet communicates to readers that Trump uttered another "Mostly False" claim in Colorado. But what did Trump say that PolitiFact found similar to saying wind turbines kill hundreds of eagles every year?

Here's what Trump said in Colorado, via (bold emphasis added):
We are right now energy independent, can you believe it? They want to use wind, wind, wind. Blow wind, please. Please blow. Please keep the birds away from those windmills, please. Tell those beautiful bald eagles, oh, a bald eagle. You know, if you shoot a bald eagle, they put you in jail for a long time, but the windmills knock them out like crazy. It’s true. And I think they have a rule, after a certain number are killed you have to close down the windmill until the following year. Do you believe this? Do you believe this? And they’re all made in China and in Germany. Siemans.
Got it? "Knocking (Bald Eagles) out like crazy" = "(killing eagles) by the hundreds"

How many is "like crazy"? Pity the fact checker who thinks that's a claim a fact checker ought to check. If wind turbines kill tens of Bald Eagles instead of hundreds, that can support the opinion that the turbines kill the eagles "like crazy," particularly given the context.

It's hard to argue that Trump said something false about Bald Eagles in his Colorado speech, yet PolitiFact did just that, relying largely on hiding from its audience what Trump actually said.

How many eagles do wind turbines kill? That's hard to say. But federal permits for wind farming potentially allow for dozens of eagle deaths per year:
(Reuters) - Wind farms will be granted 30-year U.S. government permits that could allow for thousands of accidental eagle deaths due to collisions with company turbines, towers and electrical wires, U.S. wildlife managers said on Wednesday.
Does it follow that Trump said something "Mostly False" in Colorado?

Or are the fact checkers at PolitiFact once again chasing their narrative facts-be-damned?

Thursday, February 20, 2020

PolitiFact weirdly unable to answer criticism

Our title plays off a PolitiFact critique Dave Weigel wrote back in 2011 (Slate). PolitiFact has a chronic difficulty responding effectively to criticism.

Most often PolitiFact doesn't bother responding to criticism. But if it makes its liberal base angry enough, sometimes it will trot out some excuses.

This time PolitiFact outraged supporters of Democratic (Socialist) presidential candidate Bernie Sanders with a "Mostly False" rating of Sanders' claim that fellow Democratic presidential candidate Michael Bloomberg "opposed modest proposals during Barack Obama’s presidency to raise taxes on the wealthy, while advocating for cuts to Medicare and Social Security."

Reactions from left-leaning journalists Ryan Grim and Ryan Cooper were typical of the genre.

The problem isn't that Sanders wasn't misleading people. He was. The problem stems from PolitiFact's inability to reasonably explain what Sanders did wrong. PolitiFact offered a poor explanation in its fact check, appearing to reason that what Sanders said was true but misleading and therefore "Mostly False."

That type of description typically fits a "Half True" or a "Mostly True" rating--particularly if the subject isn't a Republican.

PolitiFact went to Twitter to try to explain its decision.

First, PolitiFact made a statement making it appear that Sanders was pretty much right:

Then PolitiFact (rhetorically) asked how the true statements could end up with a "Mostly False" rating. In reply to its own question, we got this:
Because Sanders failed to note the key role of deficit reduction for Bloomberg.
Seriously? Missing context tends to lead to the aforementioned "Mostly True" or "Half True" ratings, not "Mostly False" (unless it's a Republican). Sanders is no Republican, so of course there's outrage on the left.

Anyway, who cuts government programs without having deficit reduction in mind? That's pretty standard, isn't it?

How can PolitiFact be this bad at explaining itself?

In its next explanatory tweet PolitiFact did much better by pointing out Bloomberg agreed the Obama deficit reduction plan should raise taxes, including taxes on wealthy Americans.

That's important not because it's on the topic of deficit reduction but because Sanders made it sound like Bloomberg opposed tax hikes on the wealthy at the federal level. Recall Sanders' words (bold emphasis added): "modest proposals during Barack Obama’s presidency to raise taxes on the wealthy."

Mentioning that the proposals occurred during the Obama presidency led the audience to think Bloomberg opposed tax hikes at the federal level. But Sanders was talking about Bloomberg's opposition to tax hikes in New York City, not nationally.

PolitiFact mentioned that Bloomberg had opposed the tax hikes in New York, but completely failed to identify Sanders' misdirection.

PolitiFact's next tweet only created more confusion, saying "Sanders’ said Bloomberg wanted entitlement cuts and no tax hikes. That is not what Bloomberg said."

But that's not what Sanders said. 

It's what Sanders implied by juxtaposing mention of the city tax policy with Obama-era proposals for slowing the growth of Medicare and Social Security spending.

And speaking of those two programs, that's where PolitiFact really failed with this fact check. In the past PolitiFact has distinguished, albeit inconsistently, between cutting a government program and slowing its growth. It's common in Washington D.C. to call the slowing of growth a "cut," but such a cut from a higher growth projection differs from cutting a program by making its funding literally lower from one year to the next. Fact checkers should identify the baseline for the cut. PolitiFact neglected that step.

If PolitiFact had noted that Bloomberg's supposed cuts to Social Security and Medicare were cuts to future growth projections, it could have called out Sanders for the misleading imprecision.

PolitiFact could have said the Social Security/Medicare half of Sanders' claim was "Half True" and that taking the city tax policy out of context was likewise "Half True." And if PolitiFact did not want to credit Sanders with a "Half True" claim by averaging those ratings then it could have justified a "Mostly False" rating by invoking the misleading impression Sanders achieved by juxtaposing the two half truths.

Instead, we got yet another case of PolitiFact weirdly unable to answer criticism.

Monday, February 10, 2020

Nothing To See Here: Stephanopoulos Interviews Joe Biden

Democratic presidential candidate Joe Biden appeared on "This Week" with interviewer George Stephanopoulos on Feb. 9, 2020.

Biden made a number of questionable claims during the interview, particularly where he claimed President Donald Trump has never condemned white supremacy (Washington Examiner).

Biden also said the 2009 stimulus bill passed by Democrats and the Obama administration had no waste or fraud to it.

For our money, a left-leaning operation like PolitiFact is likely to ignore Biden's claims on "This Week" in favor of getting to the bottom of whether Biden was quoting actor John Wayne when he ("jokingly") called a woman a "lying dog-faced pony soldier."

I guess we'll see!

Sunday, February 9, 2020

PolitiFact's charity for the Democrats

PolitiFact is partial to Democrats.

Back in 2018 we published a post that lists the main points in our argument that PolitiFact leans left. But today's example doesn't quite fit any of the items on that list, so we're adding to it:

PolitiFact's treatment of ambiguity leans left

When politicians make statements that may mean more than one thing, PolitiFact tends to see the ambiguity in favor of Democrats and against Republicans.

That's the nature of this example, updating an observation from my old blog Sublime Bloviations back in 2011.

When a politician says "taxes" and does not describe in context which taxes they are talking about, what do they mean?

PolitiFact decided the Republican, Michele Bachmann, was talking about all taxes.

PolitiFact decided the Democrat, Marcia Fudge, was talking about income taxes.

Based on the differing interpretations, Bachmann got a "False" rating from PolitiFact while Fudge received a "True" rating.

That brings us to the 2020 election campaign and PolitiFact's not-really-a-fact-check article "Fact-checking the Democratic claim that Amazon doesn't pay taxes."

The article isn't a fact check as such because PolitiFact skipped out on giving "Truth-O-Meter" ratings to Andrew Yang and Sen. Elizabeth Warren. Both could easily have scored Bachmannesque "False" ratings.

Yang and Warren both said about the same thing, that Amazon paid no taxes.

Various news agencies have reported that Amazon paid no federal corporate income taxes in 2017 and 2018. But news reports have also made clear that Amazon paid taxes other than federal corporate income taxes.

Of course neither Yang nor Warren will receive the "False" rating PolitiFact bestowed on Bachmann for a comparable error. PolitiFact treated both their statements as though they restricted their claims to federal corporate income tax.

Is it true that despite making billions of dollars, Amazon pays zero dollars in federal income tax?

Short answer: Amazon’s tax returns are private, so we don’t know for sure what Amazon pays in federal taxes. But Amazon’s estimates on its annual 10-K filings with the U.S. Securities and Exchange Commission are the closest information we have on this matter. They show mixed results for the past three years: no federal income tax payments for 2017 and 2018, but yes on payments for 2019.

That's the type of impartiality a Democrat can usually expect from PolitiFact. They do not need to specify what kind of taxes they are talking about. PolitiFact will interpret their statements charitably. 


It's worth noting that PolitiFact admitted not knowing whether Amazon paid federal income taxes in 2017 and 2018 ("we don’t know for sure what Amazon pays in federal taxes"). And PolitiFact suspends its "burden of proof" criterion yet again for Democrats.

Feb. 10, 2020: Edited to remove a few characters of feline keyboard interference.

Sunday, February 2, 2020

PolitiFact updates its website, makes "Corrections and Updates" even harder to find (Updated: Fixed!)

Over the years we've enjoyed poking fun at PolitiFact's haphazard observance of its policy on corrections. Aside from simply not doing quite a few needed corrections, PolitiFact does things like:
  • Correcting stories without a correction notice
  • Not tagging stories with "Corrections and Updates" as promised in its statement of principles
We've also needled PolitiFact over the way it hides its supposedly transparent page of corrected or updated claims. Looking up "corrections and updates" along with "PolitiFact" using a search engine would allow readers to easily find the page, but finding that page from PolitiFact's home page was so hilariously complicated that we posted instructions on how to do it.

Now in February 2020 PolitiFact has revamped its website and at long last fixed the problem... no, succeeded in making the problem even worse.

Hopefully the worsening of the situation is only temporary, but PolitiFact's history marinates that hope in thick, gooey skepticism.

Our Feb. 1, 2020 survey of the PolitiFact website makes the "Corrections and Updates" page look like an orphan.

We tried to help. Seriously.

When I (Bryan) heard on Twitter that PolitiFact was updating its website, I tweeted out a reminder for PolitiFact to make its "Corrections and Updates" page more available to readers:

Instead of getting fixed, the "Corrections and Updates" page is one of the very few pages (this is the only other one we found) that did not receive a facelift in keeping with the new look of the website.

For our money, the redesign looks pretty bad on the big screen. And it's not much better on mobile.

But there was one thing we did like, though perhaps that means it won't last.

What We Liked

In addition to PolitiFact's dodgy behavior on corrections, we've endlessly criticized PolitiFact for publishing sciencey-looking graphs of aggregated "Truth-O-Meter" ratings with no regular disclaimer attached. The ratings are subjective and PolitiFact does not attempt to choose a scientifically representative sample of claims. So the graphs are nonsense in terms of representing a politician's overall reliability.

PolitiFact still isn't attaching any disclaimer, but the new design largely neuters the visual impact of its graphs.

Let's look at Donald Trump's PolitiFact "scorecard" before and after the update.


That has some visual impact. The graph groups the bars closely, emphasizing the visual difference between, say, 5 percent and 35 percent.


What a difference! Thirty-four percent looks visually smaller on the new graph than 5 percent looked on the old graph. Sure, there's an attempt to spice it up by adding colors, but the short graph scale and thin lines suck away almost all of the impact.

Do we think PolitiFact did this intentionally so that the graphs would do less to mislead readers? No, unless it's part of an effort to farm out the deception.* But if the change stands, it doesn't really matter whether it was deliberate or an accident. PolitiFact will probably deceive fewer of its readers as a result.

One Other Positive!

PolitiFact has always offered the total number of each rating for individual politicians. But now it is publishing the totals for PolitiFact as a whole, as well as the states.

That makes doing certain types of research on PolitiFact's numbers easier. Though researchers will still need to realize that the subjectivity of the ratings means they tell researchers about PolitiFact, not so much about the politicians making the claims.

It's a very simple matter now to document how many more ratings Donald Trump has received than Barack Obama did, and in a shorter span of time as candidate/president.

*The downside? Those who are motivated to use PolitiFact "data" to prove Republicans are liars and whatnot will have less work to do in collecting the numbers. People irresponsibly publishing such nonsense may end up misleading more people in spite of the positives we noted.


Centered text? Seriously?

Updated Feb. 3, 2020 with edits we thought were already complete: strikethrough and a URL linking an earlier PFB post about finding PolitiFact's "Corrections and updates" page.

Update Feb 4, 2020: Whether it was the plan all along or whether in response to our cajoling, PolitiFact has added "Corrections and Updates" to its menu, under the heading "About Us."  That fixes one of our major complaints about PolitiFact. Now will PolitiFact add corrected "articles" to its list of corrected stories?

Sunday, January 26, 2020

PolitiFact tweets out gender pay gap blunder

So long as Democratic Socialist presidential candidate Bernie Sanders doesn't tangle with Joe Biden, he can probably count on getting a break from the left-leaning "non-partisan fact checkers" (liberal bloggers) at PolitiFact.

Another case in point, this time from PolitiFact's Twitter account:

PolitiFact mismatched the fact check of Democrat Bobby Scott with the Sanders claim. Though PolitiFact is inconsistent, it usually rates claims like Sanders' "Mostly False."

In rating Scott's claim, PolitiFact Virginia took pains to detect nuance (bold emphasis added):
The statistics are across-the-board comparisons for all jobs lumped together and do not specifically compare men and women performing the same jobs. Many people, citing the statistic, wrongly use it as an apples-to-apples comparison of pay for equal work.

Scott’s statement, however, is nuanced. He says women get 80 percent pay for doing "similar" jobs as white men, which is different than saying the "same" job as men.
In spite of that, PolitiFact's tweet suggests Sanders' statement about women receiving 79 cents on the dollar for the same work is "Mostly True."

And after PolitiFact Virginia did all that work to argue "similar" means something different than "the same" (baloney, we say), PolitiFact's tweet tries to send PolitiFact Virginia's message using the word "equivalent."

That sure clears things up (Merriam-Webster).

Definition of equivalent

1 : equal in force, amount, or value; also : equal in area or volume but not superposable ("a square equivalent to a triangle")

2a : like in signification or import

b : having logical equivalence ("equivalent statements")

3 : corresponding or virtually identical especially in effect or function

4 obsolete : equal in might or authority

5 : having the same chemical combining capacity ("equivalent quantities of two elements")

6a : having the same solution set ("equivalent equations")

b : capable of being placed in one-to-one correspondence ("equivalent sets")

c : related by an equivalence relation

Why couldn't PolitiFact just take the straightforward route and present one of its old ratings directly parallel to the claim it attributed to Sanders?

Perhaps we'll never know. But it was probably bias.

Saturday, January 25, 2020

We republished this item because we neglected to give it a title when it was first published.

Forgetting the title results in a cumbersome URL, making it a good idea to republish it.

So that's what we did. Find the post here.

Friday, January 17, 2020

Fact checkers decide not to check facts in fact check of Bernie Sanders

As a near-perfect follow up to our post about progressives ragging on PolitiFact over its centrist bias, we present this Jan. 15, 2020 PolitiFact fact check of Democratic presidential candidate Sen. Bernie Sanders:

Sanders said his plan would "end" $100 billion in health care industry profits, and PolitiFact plants a "True" Truth-O-Meter graphic just to the right of that claim.

But there's no fact check here of whether Sanders' plan would end $100 billion in profits. Instead the fact check looks at whether the health care industry makes $100 billion in profits (bold emphasis added):
The Sanders campaign shared its math, and it’s comprehensive.

The $100 billion total comes from adding the 2018 net revenues -- as disclosed by the companies -- for 10 pharmaceutical companies and 10 companies that work in health insurance.

We redid the numbers. Sanders is correct: The total net revenues, or profits, these companies posted in 2018 comes to just more than $100 billion - $100.96 billion, in fact. We also spoke to three independent health economists, who all told us that the math checks out.

There are a couple of wrinkles to consider. Some of the companies included -- Johnson & Johnson, for instance -- do more than just health care. Those other services likely affect their bottom lines.

But more importantly, $100 billion is likely an underestimate, experts told us.
It looks to us like PolitiFact meticulously double-checked equations that did not adequately address the issue of health care profits.

On the one hand "We redid the numbers. Sanders is correct." But on the other hand "$100 billion is likely an underestimate."

The fact checkers are telling us Sanders was accurate but probably wrong.

But we've only covered a premise of Sanders' claim. The meat of the claim stems from Sanders saying he will "end" those profits.

Did Sanders mean he would cut $100 billion in profit or simply reduce profits by some unspecified amount? We don't see how a serious fact-check effort can proceed without somehow addressing that question.

PolitiFact proceeds to try to prove us wrong (bold emphasis added):
Sanders suggested that Medicare for All would "end" the $100 billion per year profits reaped by the health care industry.

The proposal would certainly give Washington the power to do that.

"If you had Medicare for All, you have a single payer that would be paying lower prices," Meara said.

That means lower prices and profits for pharmaceuticals, lower margins for insurers and lower prices for hospitals and health systems.

That could bring tradeoffs: for instance, fewer people choosing to practice medicine. But, Meara noted, the number supports Sanders’ larger thesis. "There’s room to pay less."
Though PolitiFact showed no inclination to pin down Sanders' meaning, the expert PolitiFact cited (professor of health economics Ellen Meara) translates Sanders' point as "There's room to pay less."

Do the fact checkers care how much less? Is PolitiFact actually fact-checking whether Sanders' plan would lower profit margins and it doesn't matter by how much?

Side note: PolitiFact's expert donates politically to Democrats. PolitiFact doesn't think you need to know that. PolitiFact is also supposedly a champion of transparency.

Where's the Fact Check?

PolitiFact does not know how much, if at all, the Sanders plan would cut profit margins.

PolitiFact does not specify how it interprets Sanders' claim of bringing an "end" to $100 billion in profits (the cited expert expects a lower profit margin but offers no estimate).

The bulk of the fact check is a journalistic hole. It fails to offer any kind of serious estimate of how much the Sanders plan might trim profits. If the plan trims profits down to $75 billion, presumably PolitiFact would count that as ending $100 billion in profits.

Using that slippery understanding, quite a few outcomes could count as ending $100 billion in profits. But how many prospective voters think Sanders is promising to save consumers that $100 billion?


That's no "centrist bias." That's doing Sanders a huge favor. It's liberal bias, the prevalent species at PolitiFact.

Wednesday, January 15, 2020

Progressives accusing PolitiFact of "centrist bias"

Left-leaning The Week has put out a couple of articles recently accusing PolitiFact of a "centrist bias."

Here's one of the accusations:
Is Joe Biden, contrary to his centrist reputation, a tax-and-spend liberal? That was the argument made by Politifact's Amy Sherman, defending him against accusations from the Bernie Sanders camp that in 2018, "Biden lauded Paul Ryan for proposing cuts to Social Security and Medicare." Not so, says Politifact: "The Sanders campaign plucked out part of what Biden said but omitted the full context of his comments. We rate this statement False."

Unfortunately, it's a tendentious argument that totally misreads Biden's politics and history. He did indeed call for cuts to Social Security and Medicare in a 2018 speech at the Brookings Institution — part of a decades-long career of hawking pointless austerity. Yet, just like they did with Medicare-for-all, fact checkers are bending the truth to advance an ideological centrist agenda.
The argument, unlike many from-the-left criticisms of PolitiFact, isn't frivolous. We noted during the 2016 election that PolitiFact seemed tougher on Sanders than on his opponent, Hillary Rodham Clinton. It makes sense that wherever PolitiFact's ideology falls on the political continuum, those to either side may experience a resulting bias.

And, in fact, that's our purpose in highlighting the accusation. A charge of centrist bias proves consistent with the charge of liberal bias. The Week is saying PolitiFact is biased toward political positions to its left and right. The Week just doesn't bother to highlight any of the "centrist" bias that harms conservatives.

We do that.

Plus we highlight good examples of PolitiFact's anti-progressive bias under the "Left Jab" tag.

Note: The "bending the truth" example from The Week doesn't wash.

Monday, January 13, 2020

Busted: PolitiFact catches Nikki Haley using hyperbole without a license

Some things never change.

Among those things, apparently, is PolitiFact's tradition of taking Republican hyperbole literally.

Case in point:

The hyperbole should have been easy to spot based on the context.

Former UN Ambassador Nikki Haley appeared on Fox News' "Hannity" show with host Sean Hannity.

Transcript ours (starting at about 2:12):

Do you agree with, uh, listen I've always liked General Petraeus. He's a great, great general, hero, patriot in this country. He said it's impossible to overstate the importance of this particular action. It's more significant than the killing of bin Laden, even the death of al Baghdadi. And he said Soleimani was the architect, operational commander of the Iranian effort to solidify control of the so-called Shia Crescent stretching from Iran to Iraq through Syria and southern Lebanon. I think that's the reason why Jordanians, Egyptians and Saudis are now working with the Israelis, which I don't think anybody saw coming.

Well, and I'll tell you this: You don't see anyone standing up for Iran. You're not hearing any of the Gulf members, you're not hearing China, you're not hearing Russia. The only ones mourning the loss of Soleimani are our Democrat leadership. And our Democrat presidential candidates. No one else in the world, because they knew that this man had evil veins. They knew what he was capable of and they saw the destruction and, and the lives lost (based?) from his hand. And so--

What a dumb (?). We've been hearing "Oh, he's evil, he's a murderer he killed Americans and he, this is the No. 1 state sponsor of terror and they're fighting all these proxy wars but we don't want to make 'em mad." That's what it sounds like to me.

You know, and you go tell that to the 608 American families who lost a loved one. Go tell that to the military members who lost a limb. This was something that needed to be done and should be celebrated. And I'll tell you right now, partisan politics should stop when it comes to foreign policy. This is about America united. We need to be completely behind the president, what he did, because every one of those countries are watching our news media right now seeing what everyone's saying. And this is a moment of strength for the United States. It's a moment of strength from President Trump.
Haley's "mourning" comment comes after her emphasis that Iran received no support ("You don't see anyone standing up for Iran") regarding the killing of Soleimani. So it makes very good sense to take "mourning" as a hyperbolic amplification of that point.

Hannity's response to Haley's comment came in the same vein, in fact mocking Democrats who acknowledged Soleimani got what he deserved while questioning the wisdom of the move.

PolitiFact could legitimately check to see if world leaders offered statements much in the same vein leading Democrats offered. Instead of doing that, PolitiFact used a wooden-literal interpretation of Haley's remarks as a basis for its fact check.

How do mistakes like this (and these) make it past PolitiFact's exalted "Star Chamber" of experienced fact check editors?

Could be bias.

Thursday, January 2, 2020

PolitiFact's "Pants on Fire" bias in 2019

As our research has documented, PolitiFact has consistently failed to offer any objective means of distinguishing between false political claims and ridiculously false political claims.

On the contrary, PolitiFact's founding editor, Bill Adair, said decisions about the "Truth-O-Meter" ratings are "entirely subjective." And current editor Angie Drobnic Holan in 2014 explained the difference between the "False" and "Pants on Fire" ratings by saying "the line between 'False' and 'Pants on Fire' is just, you know, sometimes we decide one way and sometimes decide the other."

Given the understanding that the difference between "False" and "Pants on Fire" rests on subjective grounds, we have conducted ongoing research on the chances a claim PolitiFact considers false will receive the "Pants on Fire" designation.

Our research suggests at least two things.

First, PolitiFact National is biased against Republicans.

Second, the statement selection process renders "Truth-O-Meter" ratings an entirely unreliable guide to candidate truthfulness even assuming the subjective ratings are objectively accurate(!).

Without further ado, an updated chart for both political parties showing the percentage of false ratings given the "Pants on Fire" rating:

We'll address one potential criticism right off the bat.

We should expect a higher percentage for the party that lies more!

We would agree with that criticism if the PolitiFact data stemmed from objective considerations in the fact checks. We have no evidence to support that and considerable evidence to counter it (see above). All the evidence suggests the "Pants on Fire" rating is a purely subjective judgment.

Subjective judgment is incompatible with neutrality.

A Review of the Findings

False statements from Democrats were rated "Pants on Fire" just 9.09 percent of the time in 2019, tying the record low set in 2009. The Republican percentage stayed very close to its historic baseline, which cumulatively stands at 27.21 percent. The long-term average for Democrats dropped slightly to 17.41 percent. Over PolitiFact National's entire history, Republicans are about 60 percent more likely to receive the subjective "Pants on Fire" rating.
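As a quick arithmetic check on the "about 60 percent" figure (our sketch, using only the cumulative percentages reported above):

```python
# Cumulative shares of false-rated claims given "Pants on Fire," as reported above
gop_share = 27.21  # percent, Republicans
dem_share = 17.41  # percent, Democrats

# How much more likely a false-rated Republican claim is to draw "Pants on Fire"
relative_increase = (gop_share / dem_share - 1) * 100
print(round(relative_increase))  # 56 -- roughly "60 percent more likely"
```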

That's bias.

PolitiFact's wildly unscientific selection process

The Trump presidency ought to end permanently any supposition that PolitiFact's story selection process approximates random (scientific) representative selection in any way.

Of the 14 "Pants on Fire" ratings given to Republicans in our 2019 data, 13 went to President Trump. The other one went to Mr. Trump's son-in-law, Jared Kushner.

Of the 39 "False" ratings given to Republicans in our 2019 data, 29 went to Mr. Trump.

Combined, then, 42 of 53 of Republicans' false "Truth-O-Meter" ratings went to Mr. Trump.

For comparison, in 2011 PolitiFact rated 88 Republican claims false, with none of them coming from Mr. Trump. From 88 down to 10? Is the Republican Party, aside from Trump, that much more honest with the passage of time? Nonsense. That hypothesis is completely implausible on its face.
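The tallies above reduce to trivial arithmetic (our sketch, using the counts reported above):

```python
# Cross-checking the 2019 Republican "Truth-O-Meter" tallies reported above
pants_on_fire = 14          # 13 rated claims from Trump, 1 from Kushner
false_ratings = 39          # 29 rated claims from Trump
trump_ratings = 13 + 29     # Trump's share of all false ratings
combined = pants_on_fire + false_ratings

print(combined)             # 53 false ratings in all
print(trump_ratings)        # 42 of those went to Mr. Trump
print(false_ratings - 29)   # 10 non-Trump "False" ratings, vs. 88 in 2011
```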

The explanation is painfully simple: As PolitiFact admits, its editors choose the claims PolitiFact rates. They use editorial judgment to select stories, not scientific selection. The editors are, in the words of the editors, "not social scientists."

If anything counts as a proof that the supposedly unbiased fact checkers at PolitiFact are deliberately pulling the wool over the eyes of their readers, it is PolitiFact's unrepentant use of its aggregated ratings as voter guides.

They have to know better.

They do it anyway. And the practice was part of PolitiFact's aim from the start, which helps explain why it won't go away.

PolitiFact and Bernie Sanders explain the gender pay gap

Everybody knows about the gender pay gap, right?

It's the statistic Democrats habitually misuse to amplify their focus on "equal pay for equal work." Fact checkers like PolitiFact punish that traditional deception by rating it "Mostly True" most of the time, or sometimes just "True."

Let's take a look at PolitiFact's latest PolitiSplainer on the gender wage gap, this time featuring Democratic Party presidential candidate and "democratic socialist" Bernie Sanders.

Such articles might more appropriately wear the label "unexplainer."

PolitiFact starts out with exactly the kind of ambiguity Democratic Party leaders love, obscuring the difference between the raw gender wage gap and the part of the gap (if any) caused by gender discrimination:
The disparity in how much women make compared with men comes up often in the political discourse, tagged with a call to action to help women’s paychecks catch up.
Running just above that sentence, the featured image directs readers toward the gender discrimination explanation for the gender pay gap. Plausibly deniable? Of course. PolitiFact didn't mean it that way or something, right?

PolitiFact goes on to tell its readers that a number of Democrats have raised the gender pay gap issue while on the campaign trail. The paragraph contains four hotlinks:
Several leading Democratic presidential candidates recently highlighted one of the biggest imbalances — saying that a Latina woman must work 23 months to make the amount a white man makes in one year, or that they make 54 cents on the dollar.
Each of the statements from Democrats highlighted the gender pay gap in an ambiguous and misleading way. None of the statements bothered to distinguish between the raw pay gap, caused by a variety of things including women working fewer hours, and the hard-to-measure pay gap caused by employers' sexual discrimination.
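As a rough consistency check on the two figures quoted above (our arithmetic, not PolitiFact's): earning 54 cents on the dollar implies needing about 12/0.54 months of work to match one year's pay.

```python
# If a Latina woman earns 54 cents per dollar a white man earns (the quoted ratio),
# how many months must she work to match his one-year earnings?
cents_on_dollar = 0.54
months_needed = 12 / cents_on_dollar
print(round(months_needed, 1))  # 22.2 -- close to the quoted "23 months"
```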

The claim from Mayor Pete Buttigieg was pretty much incoherent and would have made great fodder for a fact check (54 cents on the dollar isn't enough to live on? Doesn't that depend on the size of the dollar in the comparison?).

PolitiFact highlighted the version of the claim coming from Sen. Sanders:

Sanders' use of the gender pay gap fits the standard pattern of deception. He leads with a figure from the raw wage gap, then assures the audience that "Equal pay is not radical ... It's an issue of basic justice."

But Sanders is misleading his audience. "Equal pay for equal work" isn't radical and may count as an issue of basic justice. But equal pay regardless of the work done is very radical in the United States. And that's what Democratic candidates imply when they base their calls for equal pay on the disparities in the raw gender wage gap.

If only there were fact checkers who could explain that deception to the public!

But, no, PolitiFact does not explain Sanders' deception.

In fact, it appears PolitiFact has never rated Sanders on a claim related to the gender wage gap.

PolitiFact did not rate the misleading tweet featured in its PolitiSplainer. Nor did it rate any of these:
PolitiFact ratings of the gender wage gap tend to graciously overlook the fact that Democrats almost invariably invoke the raw gender wage gap when stumping for equal pay for equal work, as Sanders did above. Does the raw gender wage gap have much of anything to do with the wage gap just from discrimination? No. There's hardly any relationship.

Should Democrats admit they want equal pay for unequal work, it's likely the American people will let them know that the idea is not mainstream and not an issue of basic fairness.

PolitiFact ought to know that by now. But you won't find it in their fact checks or PolitiSplainers dealing with the gender wage gap.

How Big is the Pay Gap from Discrimination?

Remarkably, PolitiFact's PolitiSplainer on the pay gap almost takes a pass on pinning down the role discrimination might play. One past PolitiSplainer from 2015 actually included the line from the CONSAD report's Foreword (by the Department of Labor) suggesting there may be no significant gender discrimination at all found in the raw wage gap.

In the 2019 PolitiSplainer we got this:
We often hear that discriminatory practices are a reason why on average women are paid less than men. Expert say it’s hard to measure how much of a role that discrimination plays in the disparity.

"Research shows that more than half of the gap is due to job and industry segregation — essentially, women tend to work in jobs done primarily by other women, and men tend to work in jobs done primarily by other men and the ‘men’s jobs’ are paid more," said Jennifer Clark, a spokeswoman for the Institute for Women’s Policy Research.

Clark cited education and race as other factors, too.
Such a weak attempt to explain the role of discrimination in the gender pay gap perhaps indicates that PolitiFact's aim was to explain the raw gender wage gap. Unfortunately for the truth, that explanation largely stayed within the lines of the traditional Democratic Party deceit: Mention the raw gender wage gap and then advocate legislation supposedly helping women receive equal pay for equal work.

That juxtaposition sends the clear message the raw gender wage gap relates to discrimination.

Supposedly neutral and objective fact checkers approve the deception, so it must be okay.

We have no reason to suppose mainstream fact checkers like PolitiFact will stop playing along with the misdirection.