Wednesday, February 27, 2019

PolitiFact's sample size deception

Is the deception one of readers, of self, or of both?

For years we have criticized PolitiFact's selection-bias-contaminated charts and graphs of its "Truth-O-Meter" ratings as misleading. Charts and graphs look science-y and authoritative. But when the data set is not representative (selection bias) and the ratings are subjective (PolitiFact admits it), the charts serve no good function other than to mislead the public (if that even counts as a good function).

One of our more recent criticisms (September 2017) poked fun at PolitiFact using the chart it had published for "The View" host Joy Behar. Behar had made one claim, PolitiFact rated it false, and her chart made Behar look like she lies 100 percent of the time--which was ironic because Behar had used PolitiFact charts to draw false generalizations about President Trump.

Maybe our post helped prompt the change and maybe it didn't, but PolitiFact has apparently instituted some sort of policy on the minimum number of ratings it takes to qualify for a graphic representation of one's "Truth-O-Meter" ratings.

Republican Rep. Matt Gaetz (Fla.) has eight ratings. No chart.

But on May 6, 2018 Gaetz had six ratings and a chart. A day later, on May 7, Gaetz had the same six ratings and no chart.

For PolitiFact Florida, at least, the policy change went into effect in May 2018.

But it's important to recognize that this policy change is a sham: it hides the central problem with PolitiFact's charts and graphs rather than fixing it.

Enlarging the sample size does not eliminate the problem of selection bias. The one real exception occurs when the sample encompasses all the data--and in that case "sample" is a misnomer in the first place.

What does that mean?

It means that PolitiFact, by acting as though a small sample size is a good enough reason to refrain from publishing a chart, gives its audience the false impression that enlarging the sample size without eliminating the selection bias yields useful graphic representations of its ratings.
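To illustrate the statistical point, here is a minimal simulation sketch in Python. The population and the selection weight are made up for illustration (they are not PolitiFact's data): if the selection process favors false statements, the share of "false" ratings stays inflated no matter how many ratings accumulate.

```python
import random

random.seed(0)

def biased_sample(population, k, weight_false=2.0):
    """Draw k statements, over-weighting false ones to mimic
    non-random story selection."""
    weights = [1.0 if truthful else weight_false for truthful in population]
    return random.choices(population, weights=weights, k=k)

# Hypothetical population: half the statements are true, half are false.
population = [True] * 5000 + [False] * 5000

for k in (10, 100, 10_000):
    sample = biased_sample(population, k)
    pct_false = 100 * sample.count(False) / k
    print(f"sample size {k:>6}: {pct_false:.1f}% rated false")

# Typical output hovers around 67% false at every sample size: more ratings
# make the biased figure more precise, they do not make it less biased.
```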

If PolitiFact does not realize what it is doing, then those in charge are dangerously ignorant (in terms of improving public discourse and promoting sound reasoning).

If PolitiFact realizes what it is doing wrong and does it regardless, then those in charge are acting unethically.

Readers who can think of any other option (apart from some combination of the ones we identified) are encouraged to offer suggestions in the comments section.



Afters


When do the liberal bloggers at PolitiFact think their sample sizes are big enough to allow for a chart?

Former Sen. George Allen has 23 ratings and a chart. So the threshold falls somewhere between eight and 23.

Tennessee Republican Marsha Blackburn has 10 ratings and a chart. We conclude that PolitiFact thinks 10 ratings warrant a chart (yes, we found a case with 9 ratings and no chart).

Science. And stuff.


Thursday, February 21, 2019

PolitiFact's Magic Math on 'Medicare For All' (Updated)

Would you believe that PolitiFact bungled an explainer article on the Democratic Party's Medicare For All proposal?

PolitiFact's Feb. 19, 2019 PolitiSplainer, "Medicare For All, What it is, what it isn't," stumbled into trouble when it delved into the price tag attached to government-run healthcare (bold emphasis added):
How much would Medicare for All cost?

This is the great unknown.

A study of Medicare for All from the libertarian-oriented Mercatus Center at George Mason University put the cost at more than $32 trillion over 10 years. Health finance expert Kenneth Thorpe at Emory University looked at Sanders' earlier version during the 2016 campaign and figured it would cost about $25 trillion over a 10-year span.

Where would the money come from?

Sanders offered some possibilities. He would redirect current government spending of about $2 trillion per year into the program. To that, he would raise taxes on income over $250,000, reaching a 52 percent marginal rate on income over $10 million. He suggested a wealth tax on the top 0.1 percent of households.
PolitiFact introduces the funding issue by mentioning two estimates of the spending M4A would add to the budget. But when explaining how Sanders proposes to pay for the new spending, PolitiFact claims Sanders would "redirect current government spending" to cover about $20 trillion of the $25 trillion-to-$32 trillion increase from the estimates.

Superficially, the idea sounds theoretically possible. If the defense budget were $2 trillion per year, for example, then one could redirect that money toward the M4A program and it would cover a big chunk of the expected budget increase.

But the entire U.S. defense budget is less than $1 trillion per year. So what is the supposed source of this funding?

We looked at the document PolitiFact linked, sourced from Sanders' official government website.

We found no proposal from Sanders to "redirect current government spending" to cover M4A.

We found this (bold emphasis added):
Introduction

Today, the United States spends more than $3.2 trillion a year on health care. About sixty-five percent of this funding, over $2 trillion, is spent on publicly financed health care programs such as Medicare, Medicaid, and other programs. At $10,000 per person, the United States spends far more on health care per capita and as a percentage of GDP than any other country on earth in both the public and private sectors while still leaving 28 million Americans uninsured and millions more under-insured.
Nothing else in the linked document comes anywhere near supporting PolitiFact's claim of $2 trillion per year "redirected" into M4A.


It's a Big ($2 trillion per year) Mistake

We do not think Sanders was proposing what PolitiFact said he was proposing. The supposed $2 trillion per year over 10 years, or $20 trillion, cannot help pay for the $25 trillion cost Thorpe estimated. Nor can it help pay for the $32 trillion cost that Charles Blahous (Mercatus Center) estimated.

Why? The reason is simple.

Both of those estimates pertained to costs added to the budget by M4A.

In other words, current government spending on healthcare programs was already accounted for in both estimates. The estimates deal specifically with what M4A would add to the budget on top of existing costs.
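A rough back-of-the-envelope sketch, using rounded versions of the figures quoted in this post, shows why "redirecting" the existing spending cannot count against the added-cost estimates:

```python
# Rounded 10-year figures (trillions of dollars) taken from the sources
# quoted above; they are illustrative, not precise.
existing_spending = 2.0 * 10     # ~$2T/yr of current federal health spending
added_cost_blahous = 32.6        # Mercatus/Blahous estimate of ADDED cost
added_cost_thorpe = 25.0         # Thorpe estimate of ADDED cost (approx.)

# Because both estimates describe additions on top of the current baseline,
# the gross federal commitment is roughly existing + added:
for name, added in (("Blahous", added_cost_blahous), ("Thorpe", added_cost_thorpe)):
    gross = existing_spending + added
    print(f"{name}: gross ~${gross:.1f}T over 10 years; "
          f"new revenue still needed ~${added:.1f}T")

# "Redirecting" the existing ~$20T cannot shrink the added cost -- that money
# is already counted in the baseline the estimates were built on.
```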

Need proof? Good. The expectation of proof is reasonable.

Mercatus Center/Blahous
M4A would add approximately $32.6 trillion to federal budget commitments during the first 10 years of its implementation (2022–2031).
Clear enough? The word "add" shows that Blahous is talking about costs over and above current budget commitments to government health care programs.

Page 5 of the full report features an example illustrating the same point:
National health expenditures (NHE) are currently projected to be $4.562 trillion in 2022. Subtracting the $10 billion decrease in personal health spending, as calculated in the previous paragraph, and crediting the plan with $83 billion in administrative cost savings results in an NHE projection under M4A of $4.469 trillion. Of this, $4.244 trillion in costs would be borne by the federal government. Compared with the current projection of $1.709 trillion of federal healthcare subsidy costs, this would be a net increase of $2.535 trillion in annual costs, or roughly 10.7 percent of GDP.

Performing similar calculations for each year results in an estimate that M4A would add approximately $32.6 trillion to federal budget commitments during the period from 2022 through 2031, with the annual cost increase reaching nearly 12.7 percent of GDP by 2031 and continuing to rise afterward.
The $1.709 trillion in "federal healthcare subsidy costs" represents expected spending under Medicare, Medicaid and other federally supported health care programs. That amount is already accounted for in Blahous' estimate of the added cost of M4A.
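For readers who want to check the arithmetic, the quoted 2022 example reduces to two subtractions. This is our reconstruction using the figures as Blahous presents them:

```python
# Figures (trillions of dollars) quoted from page 5 of the Blahous report.
nhe_current = 4.562            # projected national health expenditures, 2022
personal_spending_cut = 0.010  # decrease in personal health spending
admin_savings = 0.083          # credited administrative cost savings

nhe_under_m4a = nhe_current - personal_spending_cut - admin_savings
print(f"NHE under M4A: ~${nhe_under_m4a:.3f}T")                         # 4.469

federal_share = 4.244          # portion borne by the federal government
current_federal_subsidy = 1.709  # current projected federal subsidy costs

net_increase = federal_share - current_federal_subsidy
print(f"Net annual increase in federal costs: ~${net_increase:.3f}T")   # 2.535

# The $1.709T already being spent is subtracted out, which is why the 10-year
# $32.6T total is an ADDED cost, not a gross one.
```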

Kenneth Thorpe

Thorpe's description of his estimate doesn't make perfectly clear that he is estimating added costs. But his mention of the Sanders campaign's estimate of $1.377 trillion per year provides the contextual key (bold emphasis added):
The plan is underfinanced by an average of nearly $1.1 trillion per year. The Sanders campaign estimates the average annual financing of the plan at $1.377 trillion per year between 2017 and 2026. Over the same time period, we estimate the average financing requirements of $2.47 trillion per year--about $1.1 trillion more on average per year over the same time period. 
When we look at the estimate from the Sanders campaign, we find that the $1.377 trillion figure pertained to added budget costs, not the gross cost of the plan.

Friedman's Table 1, from the analysis behind the Sanders campaign's figures, makes it plain, listing $13,773 billion as "added public spending":


It's magical math to use the approximately $27 trillion in "continued government spending" to also pay for the $13.8 trillion in "new public spending." Yet PolitiFact seems to suggest Sanders would use current spending to pay for his program's new spending.
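A quick consistency check on the quoted figures (our arithmetic, using the rounded numbers above) shows the campaign's $1.377 trillion per year lines up with the "added public spending" line, not with the gross cost of the plan:

```python
# Figures quoted above; dollar amounts in trillions.
campaign_per_year = 1.377       # Sanders campaign's annual financing estimate
thorpe_per_year = 2.47          # Thorpe's annual financing estimate
table1_added_public_spending = 13.773   # Friedman Table 1: $13,773 billion

print(f"Campaign x 10 years: ~${campaign_per_year * 10:.2f}T")
# ~13.77T, matching Table 1's "added public spending" -- i.e. NEW spending.
print(f"Table 1 added public spending: ~${table1_added_public_spending:.1f}T")

print(f"Thorpe x 10 years: ~${thorpe_per_year * 10:.1f}T")
# ~24.7T, the "about $25 trillion" added-cost estimate cited earlier.

gap = (thorpe_per_year - campaign_per_year) * 10
print(f"10-year financing gap: ~${gap:.1f}T")
# ~10.9T, roughly the "$1.1 trillion per year" underfinancing Thorpe describes.
```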

Again, we do not think that is what Sanders suggests.

The liberal bloggers at PolitiFact simply botched the reporting.

Our Eye On Corrections

We think this PolitiBlunder clearly deserves a correction and apology from PolitiFact.

Will it happen?

Great question.

As part of our effort to hold PolitiFact (and the International Fact-Checking Network) accountable, we report PolitiFact's most obvious errors through the recommended channels to see what happens.

In this case we notified the author, Jon Greenberg, via Twitter about the problem. We also used Twitter (along with Facebook) to try to draw PolitiFact's attention to the mistake. When those outreach efforts drew no acknowledgement we did as PolitiFact recommends and emailed a summary of the problem to "truthometer@politifact.com."

Should PolitiFact continue to let the error stand, we will report the error to the International Fact-Checking Network (under the auspices of the Poynter Institute, just like PolitiFact) and track whether that organization will hold PolitiFact to account for its mistakes.

We will update this section to note future developments or the lack of future developments.


Update Feb. 27, 2019

A full week after we started informing PolitiFact of the mistake in its Medicare PolitiSplainer, the bad reporting in the financing section remains unchanged.


Monday, February 18, 2019

PolitiFact California rates "clean water" claim "True," doesn't define "clean water"

What is "clean water"? It depends on whom you ask, among a variety of other factors.

In California, for example, one can use the federal drinking water standard or the California drinking water standard. Not that choosing either standard would invalidate the opinion of someone who prefers to use Mexican water quality standards.

In spite of the lack of any set standard for drawing a clean (pun intended) distinction between "clean water" and "unclean water," PolitiFact California crowned Gov. Gavin Newsom's claim that over 1 million Californians do not have clean water for drinking or bathing with its highest truth designation: "True."


That's supposed to mean that Newsom left nothing significant out, such as what standard he was using to call water "clean."

It's a basic rule of fact checking. If somebody says over 1 million people do not have X, the fact checker needs to have X clearly defined before it's possible to check the numerical claim.

PolitiFact California simply didn't bother, instead leaning on the expert opinions of environmental justice advocate Jonathan London of UC Davis and the equally non-neutral Kelsey Hinton (bold emphasis added):
"Unfortunately, (Newsom’s) number is true,’ added Kelsey Hinton, spokesperson for the Community Water Center, a San Joaquin Valley nonprofit that advocates for clean drinking water.

As evidence, both London and Hinton pointed to a 2017 drinking water compliance report by the State Water Resources Control Board, which regulates water quality. The report shows that an estimated 592,000 Californians lived in a public water district that received a water quality violation in 2017. But that doesn’t include people living in private, unregulated districts.
What neutral and objective fact-checker deliberately seeks out only experts who double as advocates for the subject of a fact check?

PolitiFact California's fact check successfully obscures the fact that drinking water standards are arbitrary. They are arbitrary in that those setting the standards are weighing costs and benefits. There is no set point at which contaminants suddenly turn harmful.

See, for example, the World Health Organization's statement about its standard for arsenic, a poisonous contaminant:
The current recommended limit of arsenic in drinking-water is 10 μg/L, although this guideline value is designated as provisional because of practical difficulties in removing arsenic from drinking-water. Every effort should therefore be made to keep concentrations as low as reasonably possible and below the guideline value when resources are available.
The case is the same with other contaminants, including those yet to be named by regulators. There is no objective line of demarcation between "clean water" and "unclean water." At best there's a widely accepted standard. PolitiFact California only mentions state standards in its fact check (the word "federal" does not appear) while citing reports that refer to both state and federal standards.
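To make the point concrete, here is a toy sketch. The 10 μg/L figure is the WHO arsenic guideline quoted above; the stricter 5 μg/L limit is purely hypothetical, chosen only to show that the same water sample can be "clean" under one standard and "unclean" under another:

```python
# Toy illustration, not a real compliance calculation. The 10 ug/L figure is
# the WHO arsenic guideline quoted above; the 5 ug/L "stricter" limit is a
# hypothetical value used only to show how the verdict shifts with the standard.
STANDARDS_UG_PER_L = {
    "WHO guideline (10 ug/L)": 10.0,
    "hypothetical stricter limit (5 ug/L)": 5.0,
}

def is_clean(arsenic_ug_per_l: float, limit_ug_per_l: float) -> bool:
    """'Clean' here just means at or below the chosen arsenic limit."""
    return arsenic_ug_per_l <= limit_ug_per_l

sample_ug_per_l = 7.0  # hypothetical arsenic concentration in a water sample

for name, limit in STANDARDS_UG_PER_L.items():
    verdict = "clean" if is_clean(sample_ug_per_l, limit) else "not clean"
    print(f"{sample_ug_per_l} ug/L vs. {name}: {verdict}")

# Same water, two different verdicts -- which is why a count of people lacking
# "clean water" cannot be checked until the standard is named.
```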

There's a clear solution to this problem. Politicians, if you're tempted to talk about how many people do not have access to water meeting a given standard, cite the standard by name. Fact-checkers, if you fact-check claims about water quality that avoid mentioning specific regulatory standards and instead use the slippery term "clean water," realize that you're fact-checking an opinion.

PolitiFact California let slip a wonderful opportunity to educate its readers about water quality standards and what they mean in real life.


Afters: 

PolitiFact California describes environmental justice advocate London as "a UC Davis professor who’s written about contaminated drinking water."

Did PolitiFact California not know London advocates for "environmental justice" or did it deliberately hide that fact from its readers?

Thursday, February 14, 2019

PolitiFact and the 'Green New Deal' Fiction Depiction

What if the GOP released a FAQ that gave false or misleading information about the Democrats' "Green New Deal" resolution? Would that be fair game for fact checkers?

We don't see why not.

But what if one of the resolution's own proponents released a FAQ that gave false or misleading information about the Democrats' "Green New Deal" resolution? Would that be fair game for fact checkers?

For PolitiFact, apparently the answer is "no."

A week after Green New Deal proponent Alexandria Ocasio-Cortez released a FAQ about the proposal on her website, including its supposed promise of "economic security for all who are unable or unwilling to work," PolitiFact published a Feb. 12, 2019 article on the Green New Deal that apparently shows the publishers of the false information will not face PolitiFact's "Truth-O-Meter."

PolitiFact toed the line on the media narrative that somehow, some way, incorrect information was published by someone. Okay, it was Ocasio-Cortez's staff, but so what?
We should distinguish the official resolution with some additional documents that were posted and shared by Ocasio-Cortez’s staff around the time the resolution was introduced. Once they became public, portions of these additional documents became grist for ridicule among her critics.

Some critics took aim at a line in one of the FAQs that said "we aren’t sure that we’ll be able to fully get rid of farting cows and airplanes." The same document promised "economic security for all who are unable or unwilling to work."
Under normal circumstances, PolitiFact holds politicians accountable for what appears on their official websites.

Are these special circumstances? Why doesn't PolitiFact attribute the false information to Ocasio-Cortez in this case? Are the objective and neutral fact checkers trying to avoid creating a false equivalency by not fact-checking morally-right-but-factually-wrong information on Ocasio-Cortez's website?

A round of Mulligans for everyone!

Many will benefit from PolitiFact's apparent plan to give out "Truth-O-Meter" mulligans over claimed aspects of the Green New Deal resolution not actually in the resolution. Critics of those parts of the plan will not have their attacks rated on the Truth-O-Meter. And those responsible for generating the controversy in the first place by publishing FAQs based on something other than the actual resolution also find themselves off the hook.

Good call by the fact checkers?

We think it shows PolitiFact's failure to equally apply its standards.


A Morally Right Contradiction?

If Ocasio-Cortez's website says the Green New Deal provides economic security for persons unwilling to work but the Green New Deal resolution does not provide economic security for persons unwilling to work, then the resolution contradicts Ocasio-Cortez's claim. That's assuming PolitiFact follows past practice in holding politicians accountable for what appears on their websites.

So PolitiFact could have caught Ocasio-Cortez in a contradiction, and could have represented that finding somehow with its rating system.

In January we pointed out how PolitiFact falsely concluded that President Trump had contradicted himself in a tweet. The false finding of a contradiction resulted in a (bogus) "False" rating for Mr. Trump.

What excuse could possibly dress this methodology up as objective and unbiased? What makes a contradictory major policy FAQ unworthy of a rating compared to a non-contradictory presidential tweet?

Guess what? PolitiFact is biased. And we're not going to get coherent and satisfactory answers to our questions so long as PolitiFact insists on presenting itself as objective and unbiased.

Saturday, February 2, 2019

PolitiFact Editor: "Most of our fact checks are really rock solid as far as the reporting goes"

Why do we love it when PolitiFact's principals do interviews?

Because it almost always provides us with material for PolitiFact Bias.

PolitiFact Editor Angie Drobnic Holan, in a recent interview for the Yale Politic, damned her own fact-checking organization with faint praise (bold emphasis added):

We do two things–well, we do more than two things–but we do two things that I want to mention for public trust. First, we have a list of our sources with every fact check. If you go into a fact check on the right-hand rail [of the PolitiFact webpage], we list of all of our sources, and then we explain in detail how we came to our conclusion. I also wrote a recent column on why PolitiFact is not biased to try to answer some of the critique that we got during the latest election season. What we found was that when a campaign staffer does not like our fact check on their candidate, they usually do not argue the facts with us– they usually come straight at us and say that we are biased. So, I wrote this column in response to that. And the reason that they don’t come straight out and dispute the facts with us is because the fact checks are solid. We do make some mistakes like any other human beings, but most of our fact checks are really rock solid as far as the reporting goes. And yet, partisans want to attack us anyway.
We find Holan's claim plausible. The reporting on more than half of PolitiFact's fact checks may well be rock solid. But what about the rest? Are the failures fair game for critics? Holan does not appear to think so, complaining that even though the reporting for PolitiFact's fact checks is solid more than half the time, "partisans want to attack us anyway."

The nerve of those partisans!

Seriously, with a defender like Holan, who needs partisan attacks? Imagine Holan composing ad copy extolling the wonders of PolitiFact:

PolitiFact: Rock Solid Reporting Most of the Time


Holan's attempt to discredit PolitiFact's critics hardly qualifies as coherent. Even if PolitiFact's reporting were "rock solid" 99 percent of the time, criticizing the errors would still count as fair game. And a 1 percent error rate that consistently favored the left would still indicate bias.

Holan tries to imply that quality reporting results in a lack of specific criticism, but research connected to her predecessor at PolitiFact, Bill Adair of Duke University, contradicts that notion:
Conservative outlets were much more likely to question the validity of fact-checks and accuse fact-checkers of making errors in their research or logic.
It isn't that conservatives do not criticize PolitiFact on its reporting. They do (we do). But PolitiFact tends to ignore the criticisms. Perhaps because the partisan critiques are "rock solid"?

More interviews, please.

Friday, February 1, 2019

Not a Lot of Reader Confusion XIII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.


Ho-hum. Another day, another example showing that PolitiFact's charts and graphs mislead its readers.

When Sen. Cory Booker (D-NJ) announced he was running for president, PolitiFact was on the case looking for clicks by publishing one of its misleading graphs of Booker's (subjective) "Truth-O-Meter" ratings.

PolitiFact used a pie chart visual with its Facebook posting:


Note the absence of any disclaimer admitting that selection bias affects story selection, and the lack of any mention of the fundamental subjectivity of the "Truth-O-Meter" ratings.

How is this type of graph supposed to help inform PolitiFact's readers? We say it misinforms readers, such as those making these types of comments (actual comments from PolitiFact's Facebook page in response to the Booker posting):

  • "A pretty sad state of affairs when a candidate lies or exaggerates 35% of the time."
  • "Funny how Politifact find Trump lying about 70% of the time." 
  • "(W)hile that's much better than (T)rump's scorecard, it's not honest enough for my tastes."


PolitiFact seems to want people to vote for the candidate with the best-looking scorecard. But given that the ratings are subjective and the stories are selected in a non-random manner (nearly guaranteeing selection bias), the "best" scorecard is, in effect, an editorial position based on opinions, not facts.
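To see how readers turn these charts into "lie rates," here is a sketch of the arithmetic behind a comment like "lies or exaggerates 35 percent of the time." The rating counts below are hypothetical, not Booker's actual scorecard:

```python
# Hypothetical scorecard counts -- not PolitiFact's actual tallies for any
# politician -- used only to show how readers turn a pie chart into a "lie rate."
ratings = {
    "True": 8,
    "Mostly True": 10,
    "Half True": 6,
    "Mostly False": 4,
    "False": 3,
    "Pants on Fire": 1,
}

total = sum(ratings.values())
negative = sum(ratings[r] for r in ("Half True", "Mostly False", "False", "Pants on Fire"))
print(f"'Lies or exaggerates' {100 * negative / total:.0f}% of the time?")  # ~44%

# The percentage describes only the statements PolitiFact chose to rate and how
# it chose (subjectively) to rate them. Without random selection, it is not an
# estimate of how often the politician lies.
```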

PolitiFact is selling editorials to its readers with these graphs.

Many in the fact-checking community oppose the use of gimmicky meters (FactCheck.org and Full Fact (UK), for example). If the International Fact-Checking Network and PolitiFact were not both under the auspices of the Poynter Institute, then perhaps we'd soon see an end to the cheesy meters.


Correction May 3, 2019: We did XI twice! We updated the second XI to change the title to XII and now we're changing the title of this one to XIII.