Friday, March 15, 2019

Remember Back When PolitiFact was Fair & Balanced?

PolitiFact has leaned left from the outset (2007).

It's not uncommon to see people lament PolitiFact's left-leaning bias along with the claim that once upon a time PolitiFact did an even-handed job on its fact-checking.

But we've never believed the fairy tale that PolitiFact started out well. It's always been notably biased to the left. And we just stumbled across a PolitiFact fact check from 2008 that does a marvelous job illustrating the point.

It's a well-known fact that nearly half of U.S. citizens pay no net income tax, right?

Yet note how the fact checker, in this case PolitiFact's founding editor Bill Adair, frames President Obama's claim:
In a speech on March 20, 2008, Obama took a different approach and emphasized the personal cost of the war.

"When Iraq is costing each household about $100 a month, you're paying a price for this war," he said in the speech in Charleston, W.Va.
Hold on there, PolitiFact.

How can the cost of the war, divided up per family, rightly get categorized as a "personal cost" when about half of the families aren't paying any net federal income tax?

If the fact check were serious about the personal cost, then it would look at the differences in tax burdens. Families paying a high amount of federal income tax would pay far more than the price of their cable bill. And families paying either a small amount of income tax or no net income tax would pay much less than the cost of their cable service for the Iraq War (usually $0).
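The averaging trick is easy to illustrate with a back-of-the-envelope sketch. The figures below are hypothetical, chosen only to show how an "average household cost" hides the actual distribution of the burden:

```python
# The "average household cost" framing vs. actual tax incidence.
# Hypothetical figures: $10 billion/month war cost, 100 million households.
monthly_cost = 10e9
households = 100e6

# Divide by every household and each one appears to pay $100 a month.
average_per_household = monthly_cost / households  # $100

# But if roughly half of households pay no net federal income tax,
# their actual share is $0 and the burden falls on the paying half.
paying_households = households * 0.5
per_paying_household = monthly_cost / paying_households  # $200

print(average_per_household, per_paying_household)
```

Divide by all households and everyone appears to pay $100 a month; account for who actually pays net federal income tax and the "personal cost" varies from $0 to well above the average.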

PolitiFact stuffs the information it should have used to pan Obama's claim into paragraph No. 8, where it is effectively quarantined with parentheses (parentheses in the original):
(Of course, Obama's simplified analysis does not reflect the variations in income tax levels. And you don't have to write a check for the war each month. The war costs are included in government spending that is paid for by taxes.)
President Obama's statement was literally false and highly misleading as a means of expressing the personal cost of the war.

But PolitiFact couldn't or wouldn't see it and rated Mr. Obama's claim "True."

Not that much has changed, really.

Afters (for fun)

The author of that laughable fact check is the same Bill Adair later elevated to the Knight Chair for Duke University's journalism program.

We imagine Adair earned his academic throne in recognition of his years of neutral and unbiased fact-checking, even knowing President Obama was watching him from behind his desk.

Wednesday, March 13, 2019

Gender Pay Gap Shenanigans from PolitiFact Virginia

PolitiFact has a long history of botching gender wage gap stories, often horrifically.

PolitiFact Virginia's March 13, 2019 treatment of the subject does nothing to improve the reliability of PolitiFact's reporting on the topic.

It's not true that women earn 80 percent of the pay men earn doing the same job, though Democrats proclaim otherwise from time to time. And that's probably what Scott did, using "similar" in its role as a synonym for "same."

PolitiFact was apparently very eager to use the technique of charitable interpretation--most likely because Scott is a Democrat. Republicans rarely receive the benefit of that feature of competent fact-checking from PolitiFact.

We're partial to using charitable interpretation when appropriate, but PolitiFact Virginia ends up running data through the confirmation bias filter in its effort to bail out Scott.

We'd judge Scott's use of the term "similar" as an ambiguity. PolitiFact Virginia calls it "nuance":
Scott’s statement, however, is nuanced. He says women get 80 percent pay for doing "similar" jobs as white men, which is different than saying the "same" job as men.
PolitiFact Virginia apparently skipped the step of checking a thesaurus to see whether the terms "similar" and "same" may be used interchangeably. They can be. The two terms have overlapping meanings, in fact.

We find it notable that PolitiFact Virginia set aside the usual PolitiFact practice of relying on explanations from spokespeople representing the figure being fact-checked.

Scott's staff said he got his numbers from sources relying on Census Bureau data.

PolitiFact Virginia:
Stephanie Lalle, Scott’s deputy communications director, told us the congressman got the statistic from separate reports published in late 2018 by the Institute for Women’s Policy Research, and the National Partnership for Women and Families.

Both reports said the statistic comes from the U.S. Census Bureau. The latest gender-gap statistics from the Bureau show in 2017 women earned 80.5 percent of what men made - the same percentage as in 2016.
PolitiFact Virginia's determination to defend Scott's statement leads it to spout statistical mumbo-jumbo. Based on apparently nothing more than Scott's "nuanced" use of the term "similar," PolitiFact Virginia tried to reverse engineer an explanation of his statistic to replace the explanation offered by Scott's staff.

What if "similar" meant broad classes of jobs, and data from the Bureau of Labor Statistics showed white men making more than women in those classes of jobs?

PolitiFact Virginia thought it was worth a shot:
Women out-earned men in three occupations: wholesale and retail buying; frontline supervisor of construction trades and extraction workers; and, as we mentioned, dining room and cafeteria attendants and bartender helpers.

Fact-checking Scott, however, requires a deeper dive. The percentages we just discussed compare the full-time weekly earnings of all women to all men in these occupations. Scott, in his statement, compared the earnings of all women to white men in similar jobs.

The BLS’s data set that compares gender pay by specific jobs does not sort men and women by race. It does, however, categorize the jobs into 29 broad fields of work and, in each of those fields, breaks down women and men by sex.

Overall in 2018, women earned 78.7 percent less than white men in the same areas of work. The comparison of women’s pay to white men’s produces a bigger gender gap than the comparison to all men. That’s because white males tend to earn more than black males.

White men out-earned women in all 29 fields of work.
Note that PolitiFact Virginia isn't really showing its work. And what it does show contains appalling mistakes.

Let's break it down piece by piece.

Piece by Piece, Step by Step

"The percentages we just discussed compare the full-time weekly earnings of all women to all men in these occupations."

Do the percentages compare the full-time weekly earnings of all women to all men in those occupations? It's hard to tell from PolitiFact's linked source document. If author Warren Fiske was talking about Table 18, as we believe, then the fact check should refer to Table 18 by name.

Looking at Table 18, it seems Fiske reasoned improperly. The table lists 121 groups of occupations, but most of the occupations nested under the list headers show a dash instead of a gender wage gap estimate. In the notes at the bottom of the table, BLS warns that a dash means "no data or data that do not meet publication criteria." That makes it improper to extrapolate the listed results into a nationally representative number. Nor should a fact-checker assume that the subject of the fact check had such creative reasoning in mind.

In short, using numbers from Table 18 to support Scott represents unjustifiable cherry-picking.

"Scott, in his statement, compared the earnings of all women to white men in similar jobs."

We cannot find any citation in PolitiFact Virginia's fact check that offers data addressing the racial aspect of Scott's claim. Without that data, how can the fact checker reach a reasonable conclusion about the claim?

"The BLS’s data set that compares gender pay by specific jobs does not sort men and women by race."

That's bad news for this fact check. As noted above, without the data on race there's no checking the claim.

"It does, however, categorize the jobs into 29 broad fields of work and, in each of those fields, breaks down women and men by sex."

Breaking down men and women by sex is not the same as breaking them down by race, so this "however" doesn't point the way to a solution to the problem.

"Overall in 2018, women earned 78.7 percent less than white men in the same areas of work."

The fact checkers probably meant 78.7 cents to the dollar compared to men, not 78.7 percent less (about 22 cents to the dollar). But we still do not know the source of the race-based claim. We'd love to see a clear clue from PolitiFact Virginia regarding the specific source of this claim.
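For readers keeping score, the two phrasings are not close. A quick sketch using a hypothetical $1,000 weekly wage for white men shows the gulf between them:

```python
# Illustrating "78.7 percent as much" vs. "78.7 percent less,"
# using a hypothetical $1,000 weekly wage for white men.
white_men_weekly = 1000.00  # hypothetical benchmark wage

# What PolitiFact Virginia presumably meant: women earn 78.7 cents
# on the dollar, i.e. 78.7 percent AS MUCH as white men.
as_much = white_men_weekly * 0.787  # about $787

# What PolitiFact Virginia actually wrote: women earn 78.7 percent
# LESS than white men, i.e. only 21.3 cents on the dollar.
percent_less = white_men_weekly * (1 - 0.787)  # about $213

print(f"78.7% as much: ${as_much:,.2f}")
print(f"78.7% less:    ${percent_less:,.2f}")
```

On a $1,000 benchmark the error is the difference between $787 and $213 a week, which is why we count it as an error of fact rather than a typo.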

"The comparison of women’s pay to white men’s produces a bigger gender gap than the comparison to all men. That’s because white males tend to earn more than black males."

This part we follow. But it doesn't help us understand how PolitiFact can claim that white men out-earned women in all 29 fields of work in the BLS data.

"White men out-earned women in all 29 fields of work."

Based on what? We just went through PolitiFact's argument step by step. There's no reasoning in the fact check to justify it, and we can't find any citation that appears to lead to a justifying document.


PolitiFact failed to offer information justifying its key data point ("White men out-earned women in all 29 fields of work"). It failed to show how cherry-picking that information, even if legitimately sourced, would justify Scott's statement in the context of "fair pay" legislation. And it simply blundered with the claim that women earned 78.7 percent less than white men (in the same areas of work or otherwise).

Making the bad news worse, we could write another article about the problems in PolitiFact Virginia's wage gap story without repeating the same points.

Update March 14, 2019

After we contacted PolitiFact Virginia on March 13, 2019, it corrected the "78.7 percent less" mistake.

PolitiFact Virginia attached no editor's note to the story indicating either a correction or clarification.

Note this from PolitiFact's policy on corrections:
Errors of fact – Errors of fact that do not impact the rating or do not change the general outlook of the fact-check receive a mark of correction at the bottom of the fact-check.

The text of the fact-check is updated with the new information. The correction states the correct information that has been added to the report. If necessary for clarity, it repeats the incorrect information. Corrected fact-checks receive a tag of "Corrections and updates."

Typos, grammatical errors, misspellings – We correct typos, grammatical errors, misspellings, transpositions and other small errors without a mark of correction or tag and as soon as they are brought to our attention.
So we're supposed to believe writing "earned 78.7 percent less" instead of "78.7 percent as much" counts as one of the following:
  • typo
  • grammatical error
  • misspelling
  • transposition
  • other small error
Who buys it?

Monday, March 4, 2019

The underlying point saves the day for Bernie Sanders falsehood?

For some reason there are people who believe that if a fact checker checks both sides that means that the fact checker is neutral.

We've repeatedly pointed out that checking both sides is no guarantee of nonpartisanship. It's a simple matter to give harsher ratings to one side while rating both sides, or softer ratings to one side while rating both.

Latest case in point: Democratic presidential candidate Bernie Sanders.

Sanders claimed that the single-payer health care system in Canada offers "quality care to all people without out-of-pocket expenses."

PolitiFact found that the Canadian system does not eliminate out-of-pocket expenses (contradicting Sanders' claim).

And then PolitiFact gave Sanders' claim a "Half True" rating.

Seriously. That's what PolitiFact did.

PolitiFact's summary is remarkable for not explaining how Sanders managed to eke out a "Half True" rating for a false statement. PolitiFact describes what's wrong with the statement (how it's false) and then proclaims the "Half True" ruling:
Sanders said, "In Canada, for a number of decades they have provided quality care to all people without out-of-pocket expenses. You go in for cancer therapy, you don't take out your wallet."

So long as the care comes from a doctor or at a hospital, the Canadian system covers the full cost. But the country’s public insurance doesn’t automatically pay for all services, most significantly, prescription drugs, including drugs needed to fight cancer.

Out-of-pocket spending is about 15 percent of all Canadian health care expenditures, and researchers said prescription drugs likely represented the largest share of that.

The financial burden on people is not nearly as widespread or as severe as in the United States, but Sanders made it sound as though out-of-pocket costs were a non-issue in Canada.

We rate this claim Half True.

PolitiFact says Sanders made it sound like Canadians do not pay out-of-pocket at all for health care. But Canadians do pay a substantial share out of pocket, therefore making it sound like they don't is "Half True."

Republicans, don't get the idea that you can say something PolitiFact describes as false in its fact check and then skate with a "Half True" rating on the "Truth-O-Meter."

Friday, March 1, 2019

PolitiFact Tweezes Green New Deal Falsehoods

In our post "PolitiFact's Green New Deal Fiction Depiction" we noted how PolitiFact had decided that a Democrat posting a falsehood-laden FAQ about the Green New Deal on her official congressional website escaped receiving a negative rating on PolitiFact's "Truth-O-Meter."

At the time we noted that PolitiFact's forbearance held benefits for Democrats and Republicans alike:
Many will benefit from PolitiFact's apparent plan to give out "Truth-O-Meter" mulligans over claimed aspects of the Green New Deal resolution not actually in the resolution. Critics of those parts of the plan will not have their attacks rated on the Truth-O-Meter. And those responsible for generating the controversy in the first place by publishing FAQs based on something other than the actual resolution also find themselves off the hook.
We were partly right.

Yes, PolitiFact let Democrats who published a false and misleading FAQ about the Green New Deal off the hook.

But apparently PolitiFact has reserved the right to fault Republicans and conservatives who base their criticisms of the Green New Deal on the false and misleading information published by the Democrats.

PolitiFact Florida tweezed out such a tidbit from an editorial written by Sen. Rick Scott (R-Fla.):

False? It doesn't matter at all that Ocasio-Cortez said otherwise on her official website? There is no truth to it whatsoever? And Ocasio-Cortez gets no "False" rating for making an essentially identical claim on her website?

This case will get our "tweezers or tongs" tag because PolitiFact is once again up to its traditional shenanigan of tweezing out one supposed falsehood from a background of apparent truths:
Sen. Rick Scott, R-Fla., outlined his opposition to the Democrats’ Green New Deal in a Feb. 25th Orlando Sentinel op-ed:

"If you are not familiar with it, here’s the cliff notes version: It calls for rebuilding or retrofitting every building in America in the next 10 years, eliminating all fossil fuels in 10 years, eliminating nuclear power, and working towards ending air travel (to be replaced with high-speed rail)."


Let’s hit the brakes right there -- do the Democrats want to end air travel?
See what PolitiFact did, there?

Scott can get three out of four points right, but PolitiFact Florida will pick on one point to give Scott a "False" rating and build for him an unflattering graph of "Truth-O-Meter" ratings shaped by PolitiFact's selection bias.

The Jestation Hypothesis

How does PolitiFact Florida go about discounting the fact that Ocasio-Cortez claimed on her website that the Green New Deal aimed to make air travel obsolete?

The objective and neutral fact checkers give us the Jestation Hypothesis. She must have been kidding.

No, really. Perhaps the idea came directly from one of the three decidedly non-neutral experts PolitiFact cited in its fact check (bold emphasis added):
"It seems to me those lines from the FAQ were lighthearted and ill-considered, and it’s not clear why they were posted," said Sean Hecht, Co-Executive Director, Emmett Institute on Climate Change and the Environment at UCLA law school.
Hecht's FEC contributions page is hilariously one-sided.

Does anyone need more evidence that the line about making air travel obsolete was just a joke?
"No serious climate experts advocate ending air travel -- that's simply a red-herring," said Bledsoe, who was a climate change advisor to the Clinton White House.
Former Clinton White House advisor Bledsoe is about as neutral as Hecht. The supposed "red-herring," we remind readers, was published on Ocasio-Cortez's official House of Representatives website.

The neutral and objective fact-checkers of PolitiFact Florida deliver their jestational verdict (bold emphasis added):
Scott wrote in an op-ed that the Democrats’ Green New Deal includes "working towards ending air travel."

The resolution makes no mention of ending air travel. Instead, it calls for "overhauling transportation systems," which includes "investment in high-speed rail." Scott seized on a messaging document from Democrats that mentioned, perhaps in jest, getting rid of "farting cows and airplanes." But we found no evidence that getting rid of airplanes is a serious policy idea from climate advocates.
Apparently it cannot count as evidence that Democrats have advocated getting rid of airplanes if a popular Democratic Party representative publishes this on her website:
The Green New Deal sets a goal to get to net-zero, rather than zero emissions, at the end of this 10-year plan because we aren’t sure that we will be able to fully get rid of, for example, emissions from cows or air travel before then. However, we do believe we can ramp up renewable manufacturing and power production, retrofit every building in America, build the smart grid, overhaul transportation and agriculture, restore our ecosystem, and more to get to net-zero emissions.
Oh! Ha ha ha ha ha! Get it? We may not be able to fully get rid of emissions from cows or air travel in only 10 years! Ha ha ha!

So the claim was quite possibly a joke, even if no real evidence supports that idea.

But it's all PolitiFact needs to give a Republican a "False" rating and the Democrat no rating at all for saying essentially the same thing.

This style of fact-checking undermines fact checkers' credibility with centrists and conservatives, as well as with discerning liberals.


There was one more expert PolitiFact cited apart from the two we noted were blatantly partisan.

That was "David Weiskopf, climate policy director for NextGen Climate America."

Here's a snippet from the home page for NextGen Climate America:

So basically neutral, right?

PolitiFact Florida "fact checker" (liberal blogger) Amy Sherman seems to have a special gift for citing groups of experts who skew hilariously left.

Wednesday, February 27, 2019

PolitiFact's sample size deception

Is the deception one of readers, of self, or of both?

For years we have criticized as misleading PolitiFact's selection-bias contaminated charts and graphs of its "Truth-O-Meter" ratings. Charts and graphs look science-y and authoritative. But when the data set is not representative (selection bias) and the ratings are subjective (PolitiFact admits it), the charts serve no good function other than to mislead the public (if that even counts as a good function).

One of our more recent criticisms (September 2017) poked fun at PolitiFact using the chart it had published for "The View" host Joy Behar. Behar made one claim, PolitiFact rated it false, and her chart made Behar look like she lies 100 percent of the time--which was ironic because Behar had used PolitiFact charts to draw false generalizations about President Trump.

Maybe our post helped prompt the change and maybe it didn't, but PolitiFact has apparently instituted some sort of policy on the minimum number of ratings it takes to qualify for a graphic representation of one's "Truth-O-Meter" ratings.

Republican Rep. Matt Gaetz (Fla.) has eight ratings. No chart.

But on May 6, 2018 Gaetz had six ratings and a chart. A day later, on May 7, Gaetz had the same six ratings and no chart.

For PolitiFact Florida, at least, the policy change went into effect in May 2018.

But it's important to know that this policy change is a sham that would hide the central problem with PolitiFact's charts and graphs.

Enlarging the sample size does not eliminate the problem of selection bias. There's essentially one exception to that rule, which occurs in cases where the sample encompasses all the data--and in such cases "sample" is a bit of a misnomer in the first place.

What does that mean?

It means that PolitiFact, by acting as though small sample size is a good enough reason to refrain from publishing a chart, is giving its audience the false impression that enlarging the sample size without eliminating the selection bias yields useful graphic representations of its ratings.
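The statistics behind that point are not controversial. Here's a toy simulation (hypothetical 50/50 population of true and false statements, with false statements three times as likely to be selected for rating) showing that a biased sample stays biased no matter how large it gets:

```python
# A toy simulation: enlarging a biased sample does not remove its bias.
import random

random.seed(42)

# Hypothetical "population" of statements: 50% true (1), 50% false (0).
population = [1, 0] * 50000

def biased_sample(pop, n):
    """Pick n statements, with false ones 3x as likely to be selected."""
    weights = [3 if x == 0 else 1 for x in pop]
    return random.choices(pop, weights=weights, k=n)

for n in (10, 100, 10000):
    share_true = sum(biased_sample(population, n)) / n
    # For large n the estimate settles near 0.25 -- never the true 0.50.
    print(n, share_true)
```

With a 3-to-1 selection preference for false statements, the expected share of true statements in the sample is 25 percent at every sample size; more data drawn through the same biased filter never converges on the population's true 50 percent.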

If PolitiFact does not realize what it is doing, then those in charge are dangerously ignorant (in terms of improving public discourse and promoting sound reasoning).

If PolitiFact realizes what it is doing wrong and does it regardless, then those in charge are acting unethically.

Readers who can think of any other option (apart from some combination of the ones we identified) are encouraged to offer suggestions in the comments section.


When do the liberal bloggers at PolitiFact think their sample sizes are big enough to allow for a chart?

Sen. George Allen has 23 ratings and a chart. So it's between 8 and 23.

Tennessee Republican Marsha Blackburn has 10 ratings and a chart. We conclude that PolitiFact thinks 10 ratings warrant a chart (yes, we found a case with 9 ratings and no chart).

Science. And stuff.

Thursday, February 21, 2019

PolitiFact's Magic Math on 'Medicare For All' (Updated)

Would you believe that PolitiFact bungled an explainer article on the Democratic Party's Medicare For All proposal?

PolitiFact's Feb. 19, 2019 PolitiSplainer, "Medicare For All, What it is, what it isn't" stumbled into trouble when it delved into the price tag attached to government-run healthcare (bold emphasis added):
How much would Medicare for All cost?

This is the great unknown.

A study of Medicare for All from the libertarian-oriented Mercatus Center at George Mason University put the cost at more than $32 trillion over 10 years. Health finance expert Kenneth Thorpe at Emory University looked at Sanders' earlier version during the 2016 campaign and figured it would cost about $25 trillion over a 10-year span.

Where would the money come from?

Sanders offered some possibilities. He would redirect current government spending of about $2 trillion per year into the program. To that, he would raise taxes on income over $250,000, reaching a 52 percent marginal rate on income over $10 million. He suggested a wealth tax on the top 0.1 percent of households.
PolitiFact introduces the funding issue by mentioning two estimates of the spending M4A would add to the budget. But when explaining how Sanders proposes to pay for the new spending, PolitiFact claims Sanders would "redirect current government spending" to cover about $20 trillion of the $25 trillion-to-$32 trillion increase from the estimates.

Superficially, the idea sounds theoretically possible. If the defense budget were $2 trillion per year, for example, then one could redirect that money toward the M4A program and it would cover a big hunk of the expected budget increase.

But the entire U.S. defense budget is less than $1 trillion per year. So what is the supposed source of this funding?

We looked to the document PolitiFact linked, sourced from Sanders' official government website.

We found no proposal from Sanders to "redirect current government spending" to cover M4A.

We found this (bold emphasis added):

Today, the United States spends more than $3.2 trillion a year on health care. About sixty-five percent of this funding, over $2 trillion, is spent on publicly financed health care programs such as Medicare, Medicaid, and other programs. At $10,000 per person, the United States spends far more on health care per capita and as a percentage of GDP than any other country on earth in both the public and private sectors while still leaving 28 million Americans uninsured and millions more under-insured.
Nothing else in the linked document comes anywhere near PolitiFact's claim of $2 trillion per year "redirected" into M4A.

It's a Big ($2 trillion per year) Mistake

We do not think Sanders was proposing what PolitiFact said he was proposing. The supposed $2 trillion per year in redirected spending--$20 trillion over 10 years--cannot help pay for the $25 trillion cost Thorpe estimated. Nor can it help pay for the $32 trillion cost that Charles Blahous (Mercatus Center) estimated.

Why? The reason is simple.

Both of those estimates pertained to costs added to the budget by M4A.

In other words, current government spending on healthcare programs was already accounted for in both estimates. The estimates deal specifically with what M4A would add to the budget on top of existing costs.
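The arithmetic makes the double-counting obvious. A minimal sketch, using the round figures from the sources cited above:

```python
# Why "redirected" current health spending can't offset the ADDED-cost
# estimates: those estimates already net out current program spending.
years = 10
current_public_health_spending = 2.0  # $ trillion/year (Sanders document)
redirected_total = current_public_health_spending * years  # $20 trillion

thorpe_added_cost = 25.0    # $ trillion over 10 years, ADDED to the budget
mercatus_added_cost = 32.6  # $ trillion over 10 years, ADDED to the budget

# Both estimates subtract existing program spending up front, so counting
# the $20 trillion of current spending against them counts it twice.
print(f"Redirected (already netted out): ${redirected_total} trillion")
print(f"New money needed (Thorpe):       ${thorpe_added_cost} trillion")
print(f"New money needed (Mercatus):     ${mercatus_added_cost} trillion")
```

Because the $20 trillion is already netted out of both estimates, every dollar of the $25 trillion-to-$32 trillion gap still needs new revenue.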

Need proof? Good. The expectation of proof is reasonable.

Mercatus Center/Blahous
M4A would add approximately $32.6 trillion to federal budget commitments during the first 10 years of its implementation (2022–2031).
Clear enough? The word "add" shows that Blahous is talking about costs over and above current budget commitments to government health care programs.

Page 5 of the full report features an example illustrating the same point:
National health expenditures (NHE) are currently projected to be $4.562 trillion in 2022. Subtracting the $10 billion decrease in personal health spending, as calculated in the previous paragraph, and crediting the plan with $83 billion in administrative cost savings results in an NHE projection under M4A of $4.469 trillion. Of this, $4.244 trillion in costs would be borne by the federal government. Compared with the current projection of $1.709 trillion of federal healthcare subsidy costs, this would be a net increase of $2.535 trillion in annual costs, or roughly 10.7 percent of GDP.

Performing similar calculations for each year results in an estimate that M4A would add approximately $32.6 trillion to federal budget commitments during the period from 2022 through 2031, with the annual cost increase reaching nearly 12.7 percent of GDP by 2031 and continuing to rise afterward.
The $1.709 trillion in "federal healthcare subsidy costs" represents expected spending under Medicare, Medicaid and other federally supported health care programs. That amount is already accounted for in Blahous' estimate of the added cost of M4A.
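Anyone can reproduce Blahous' 2022 example directly from the figures in the quoted passage (all amounts in trillions of dollars):

```python
# Reproducing the Mercatus/Blahous 2022 example from the quoted report.
nhe_projected = 4.562             # current-law national health expenditures
personal_spending_decrease = 0.010
admin_savings = 0.083

nhe_under_m4a = nhe_projected - personal_spending_decrease - admin_savings
# about 4.469 -- matches the report's NHE projection under M4A

federal_share = 4.244             # portion borne by the federal government
current_federal_subsidy = 1.709   # ALREADY-budgeted federal health spending

net_increase = federal_share - current_federal_subsidy
# about 2.535 -- the report's "net increase" in annual costs
print(round(nhe_under_m4a, 3), round(net_increase, 3))
```

The $1.709 trillion of current federal subsidy costs is subtracted in the last step--which is exactly why the same money cannot also be "redirected" to pay for the increase.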

Kenneth Thorpe

Thorpe's description of his estimate doesn't make perfectly clear that he is estimating added costs. But his mention of the Sanders campaign's estimate of $1.377 trillion per year provides the contextual key (bold emphasis added):
The plan is underfinanced by an average of nearly $1.1 trillion per year. The Sanders campaign estimates the average annual financing of the plan at $1.377 trillion per year between 2017 and 2026. Over the same time period, we estimate the average financing requirements of $2.47 trillion per year--about $1.1 trillion more on average per year over the same time period. 
When we look at the estimate from the Sanders campaign we find that the $1.377 trillion estimate pertained to added budget costs, not the gross cost of the plan.

Friedman's Table 1 makes it plain, listing 13,773 (billions) as "added public spending":

It's magical math to use the approximately $27 trillion in "continued government spending" to pay down the $13.8 trillion in "new public spending." But PolitiFact seems to suggest Sanders would use current spending to pay for his program's new spending.

Again, we do not think that is what Sanders suggests.

The liberal bloggers at PolitiFact simply botched the reporting.

Our Eye On Corrections

We think this PolitiBlunder clearly deserves a correction and apology from PolitiFact.

Will it happen?

Great question.

As part of our effort to hold PolitiFact (and the International Fact-Checking Network) accountable, we report PolitiFact's most obvious errors through the recommended channels to see what happens.

In this case we notified the author, Jon Greenberg, via Twitter about the problem. We also used Twitter (along with Facebook) to try to draw PolitiFact's attention to the mistake. When those outreach efforts drew no acknowledgement we did as PolitiFact recommends and emailed a summary of the problem to ""

Should PolitiFact continue to let the error stand, we will report the error to the International Fact-Checking Network (under the auspices of the Poynter Institute, just like PolitiFact) and track whether that organization will hold PolitiFact to account for its mistakes.

We will update this section to note future developments or the lack of future developments.

Update Feb. 27, 2019

A full week after we started informing PolitiFact of the mistake in its Medicare PolitiSplainer, the bad reporting in the financing section remains unchanged.

Monday, February 18, 2019

PolitiFact California rates "clean water" claim "True," doesn't define "clean water"

What is "clean water"? It depends on whom you ask, among a variety of different factors.

In California, for example, one can use the federal drinking water standard or the California drinking water standard. Not that using either standard would invalidate the opinion of one who wants to use Mexican water quality standards.

In spite of the lack of any set standard for drawing a clean (pun intended) distinction between "clean water" and "unclean water," PolitiFact California crowned Gov. Gavin Newsom's claim that over 1 million Californians do not have clean water for drinking or bathing with its highest truth designation: "True."

That's supposed to mean that Newsom left nothing significant out, such as what standard he was using to call water "clean."

It's a basic rule of fact checking. If somebody says over 1 million people do not have X, the fact checker needs to have X clearly defined before it's possible to check the numerical claim.

PolitiFact California simply didn't bother, instead leaning on the expert opinions of environmental justice advocate Jonathan London of UC Davis and the equally non-neutral Kelsey Hinton (bold emphasis added):
"Unfortunately, (Newsom’s) number is true,’ added Kelsey Hinton, spokesperson for the Community Water Center, a San Joaquin Valley nonprofit that advocates for clean drinking water.

As evidence, both London and Hinton pointed to a 2017 drinking water compliance report by the State Water Resources Control Board, which regulates water quality. The report shows that an estimated 592,000 Californians lived in a public water district that received a water quality violation in 2017. But that doesn’t include people living in private, unregulated districts.
What neutral and objective fact-checker deliberately seeks out only experts who double as advocates for the subject of a fact check?

PolitiFact California's fact check successfully obscures the fact that drinking water standards are arbitrary. They are arbitrary in that those setting the standards are weighing costs and benefits. There is no set point at which contaminants suddenly turn harmful.

See, for example, the World Health Organization's statement about its standard for poisonous chemical arsenic:
The current recommended limit of arsenic in drinking-water is 10 μg/L, although this guideline value is designated as provisional because of practical difficulties in removing arsenic from drinking-water. Every effort should therefore be made to keep concentrations as low as reasonably possible and below the guideline value when resources are available.
The case is the same with other contaminants, including those yet to be named by regulators. There is no objective line of demarcation between "clean water" and "unclean water." At best there's a widely accepted standard. PolitiFact California only mentions state standards in its fact check (the word "federal" does not appear) while citing reports that refer to both state and federal standards.

There's a clear solution to this problem. Politicians, if you're tempted to talk about how many do not have access to water meeting a given standard, cite the standard by name. Fact-checkers, if you fact-check claims about water quality that avoid mentioning specific regulatory standards and instead use the slippery term "clean water," realize that you're fact-checking an opinion.

PolitiFact California let slip a wonderful opportunity to educate its readers about water quality standards and what they mean in real life.


PolitiFact California describes environmental justice advocate London as "a UC Davis professor who’s written about contaminated drinking water."

Did PolitiFact California not know London advocates for "environmental justice" or did it deliberately hide that fact from its readers?

Thursday, February 14, 2019

PolitiFact and the 'Green New Deal' Fiction Depiction

What if the GOP released a FAQ that gave false or misleading information about the Democrats' "Green New Deal" resolution? Would that be fair game for fact checkers?

We don't see why not.

But what if one of the bill's proponents released a FAQ that gave false or misleading information about the Democrats' "Green New Deal" resolution? Would that be fair game for fact checkers?

For PolitiFact, apparently the answer is "no."

A week after Green New Deal proponent Alexandria Ocasio-Cortez released a FAQ about the proposal on her website, including its supposed proposal to provide "economic security for all who are unable or unwilling to work," PolitiFact published a Feb. 12, 2019 article on the Green New Deal that apparently shows that publishers of the false information will not face PolitiFact's "Truth-O-Meter."

PolitiFact toed the line on the media narrative that somehow, some way, incorrect information was published by someone. Okay, it was Ocasio-Cortez's staff, but so what?
We should distinguish the official resolution with some additional documents that were posted and shared by Ocasio-Cortez’s staff around the time the resolution was introduced. Once they became public, portions of these additional documents became grist for ridicule among her critics.

Some critics took aim at a line in one of the FAQs that said "we aren’t sure that we’ll be able to fully get rid of farting cows and airplanes." The same document promised "economic security for all who are unable or unwilling to work."
Under normal circumstances, PolitiFact holds politicians accountable for what appears on their official websites.

Are these special circumstances? Why doesn't PolitiFact attribute the false information to Ocasio-Cortez in this case? Are the objective and neutral fact checkers trying to avoid creating a false equivalency by not fact-checking morally-right-but-factually-wrong information on Ocasio-Cortez's website?

A round of Mulligans for everyone!

Many will benefit from PolitiFact's apparent plan to give out "Truth-O-Meter" mulligans over claimed aspects of the Green New Deal resolution not actually in the resolution. Critics of those parts of the plan will not have their attacks rated on the Truth-O-Meter. And those responsible for generating the controversy in the first place by publishing FAQs based on something other than the actual resolution also find themselves off the hook.

Good call by the fact checkers?

We think it shows PolitiFact's failure to equally apply its standards.

A Morally Right Contradiction?

If Ocasio-Cortez's website says the Green New Deal provides economic security for persons unwilling to work but the Green New Deal resolution does not provide economic security for persons unwilling to work, then the resolution contradicts Ocasio-Cortez's claim. That's assuming PolitiFact follows past practice in holding politicians accountable for what appears on their websites.

So PolitiFact could have caught Ocasio-Cortez in a contradiction, and could have represented that finding somehow with its rating system.

In January we pointed out how PolitiFact falsely concluded that President Trump had contradicted himself in a tweet. The false finding of a contradiction resulted in a (bogus) "False" rating for Mr. Trump.

What excuse could possibly dress this methodology up as objective and unbiased? What makes a contradictory major policy FAQ unworthy of a rating compared to a non-contradictory presidential tweet?

Guess what? PolitiFact is biased. And we're not going to get coherent and satisfactory answers to our questions so long as PolitiFact insists on presenting itself as objective and unbiased.

Saturday, February 2, 2019

PolitiFact Editor: "Most of our fact checks are really rock solid as far as the reporting goes"

Why do we love it when PolitiFact's principals do interviews?

Because it almost always provides us with material for PolitiFact Bias.

PolitiFact Editor Angie Drobnic Holan, in a recent interview for the Yale Politic, damned her own fact-checking organization with faint praise (bold emphasis added):

We do two things–well, we do more than two things–but we do two things that I want to mention for public trust. First, we have a list of our sources with every fact check. If you go into a fact check on the right-hand rail [of the PolitiFact webpage], we list of all of our sources, and then we explain in detail how we came to our conclusion. I also wrote a recent column on why PolitiFact is not biased to try to answer some of the critique that we got during the latest election season. What we found was that when a campaign staffer does not like our fact check on their candidate, they usually do not argue the facts with us– they usually come straight at us and say that we are biased. So, I wrote this column in response to that. And the reason that they don’t come straight out and dispute the facts with us is because the fact checks are solid. We do make some mistakes like any other human beings, but most of our fact checks are really rock solid as far as the reporting goes. And yet, partisans want to attack us anyway.
We find Holan's claim plausible. The reporting on more than half of PolitiFact's fact checks may well be rock solid. But what about the rest? Are the failures fair game for critics? Holan does not appear to think so, complaining that even though the reporting for PolitiFact's fact checks is solid more than half the time "partisans want to attack us anyway."

The nerve of those partisans!

Seriously, with a defender like Holan who needs partisan attacks? Imagine Holan composing ad copy extolling the wonders of PolitiFact:

PolitiFact: Rock Solid Reporting Most of the Time

Holan's attempt to discredit PolitiFact's critics hardly qualifies as coherent. Even if PolitiFact's reporting were "rock solid" 99 percent of the time, criticizing the errors would still count as fair game. And a 1 percent error rate favoring the left would indicate bias.

Holan tries to imply that the quality reporting results in a lack of specific criticism, but research connected to Holan's predecessor at PolitiFact, Bill Adair of Duke University, contradicts that notion:
Conservative outlets were much more likely to question the validity of fact-checks and accuse fact-checkers of making errors in their research or logic.
It isn't that conservatives do not criticize PolitiFact on its reporting. They do (we do). But PolitiFact tends to ignore the criticisms. Perhaps because the partisan critiques are "rock solid"?

More interviews, please.

Friday, February 1, 2019

Not a Lot of Reader Confusion XII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

Ho-hum. Another day, another example showing that PolitiFact's charts and graphs mislead its readers.

When Sen. Cory Booker (D-NJ) announced he was running for president, PolitiFact was on the case looking for clicks by publishing one of its misleading graphs of Booker's (subjective) "Truth-O-Meter" ratings.

PolitiFact used a pie chart visual with its Facebook posting:

Note the absence of any disclaimer admitting that selection bias affects story selection, along with no mention of the fundamental subjectivity of the "Truth-O-Meter" ratings.

How is this type of graph supposed to help inform PolitiFact's readers? We say it misinforms readers, such as those making these types of comments (actual comments from PolitiFact's Facebook page in response to the Booker posting):

  • "A pretty sad state of affairs when a candidate lies or exaggerates 35% of the time."
  • "Funny how Politifact find Trump lying about 70% of the time." 
  • "(W)hile that's much better than (T)rump's scorecard, it's not honest enough for my tastes."

PolitiFact seems to want people to vote for the candidate with the best-looking scorecard. But given that the ratings are subjective and the stories are selected in a non-random manner (nearly guaranteeing selection bias), the "best" scorecard is, in effect, an editorial position based on opinions, not facts.

PolitiFact is selling editorials to its readers with these graphs.

Many in the fact-checking community oppose the use of gimmicky meters (Full Fact (UK), for example). If the International Fact-Checking Network and PolitiFact were not both under the auspices of the Poynter Institute, then perhaps we'd soon see an end to the cheesy meters.

Wednesday, January 23, 2019

Not a Lot of Reader Confusion XI

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

Today on its Facebook Page, PolitiFact continued its custom of highlighting one of its politician "report cards."

In keeping with our expectation that PolitiFact's report cards mislead the PolitiFact audience, we found the following:

We're blacking out the commenters' names (not that the comments are impossible to verify) and emphasizing that nobody should harass these people over their comments in any way. Somebody already pointed out a problem with their view, so there's no need even for that. Just leave it alone and let this stand as the latest proof that PolitiFact has its head in the sand over the way its charts mislead its audience. That's using the charitable assumption that PolitiFact isn't deliberately deceiving its audience.

As we have endlessly pointed out, the non-scientific method PolitiFact uses to sample the universe of political claims makes statements about the overall percentages of false statements unreliable.

PolitiFact knows this but declines to make it clear to readers with the use of a consistent disclaimer.

The type of response we captured above happens routinely.

Thursday, January 17, 2019

PolitiFact's Heart Transplant

In the past we have mocked PolitiFact's founding editor Bill Adair for saying the "Truth-O-Meter" is the "heart of PolitiFact."

We have great news. PolitiFact has given itself a heart transplant.

PolitiFact's more recent (since May of 2018) self-descriptions now say that fact-checking is the heart of PolitiFact:

That's a positive move we applaud, while continuing to disparage the quality of PolitiFact's fact checks.

It was always silly to call a subjective sliding-scale Gimmick-O-Meter the heart of PolitiFact (even if it was or remains true).

The new approach at least represents improved branding.

Now if PolitiFact could significantly upgrade the quality of its work ...

Post-publication update: Added hotlinks to the first paragraph leading to past commentary on PolitiFact's heart.

Tuesday, January 15, 2019

PolitiFact and the Contradiction Fiction

We consider it incumbent on fact checkers to report the truth.

PolitiFact's struggles in that department earn it our assessment as the worst of the mainstream fact checkers. In our latest example, PolitiFact reported that President Donald Trump had contradicted his claim that he had never said Mexico would pay for the border wall with a check.

We label that report PolitiFact's contradiction fiction. Fact checkers should know the difference between a contradiction and a non-contradiction.

PolitiFact (bold emphasis added):
"When during the campaign I would say ‘Mexico is going to pay for it,’ obviously, I never said this, and I never meant they're going to write out a check," Trump told reporters. "I said they're going to pay for it. They are."

Later on the same day while visiting the border in Texas, Trump offered the same logic: "When I say Mexico is going to pay for the wall, that's what I said. Mexico is going to pay. I didn't say they're going to write me a check for $20 billion or $10 billion."

We’ve seen the president try to say he never said something that he very much said before, so we wondered about this case.

Spoiler: Trump has it wrong.

We found several instances over the last few years, and in campaign materials contradicting the president’s statement.
PolitiFact offers three links in evidence of its "found several instances" argument, but relies on the campaign material for proof of the claimed contradiction.

We'll cover all of PolitiFact's evidence and show that none of it demonstrated that Mr. Trump contradicted himself on this point. Because we can.

Campaign Material

PolitiFact made two mistakes in trying to prove its case from a Trump Campaign description of how Trump would make Mexico pay for the border wall. First, it ignored context. Second, it applied spin to one of the quotations it used from the document.

PolitiFact (bold emphasis added):
"It's an easy decision for Mexico: make a one-time payment of $5-10 billion to ensure that $24 billion continues to flow into their country year after year," the memo said.

Trump proposed measures to compel Mexico to pay for the wall, such as cutting off remittances sent from undocumented Mexicans in the U.S. via wire transfers.

Then, the memo says, if and when the Mexican government protested, they would be told to pay a lump sum "to the United States to pay for the wall, the Trump Administration will not promulgate the final rule, and the regulation will not go into effect." The plan lists a few other methods if that didn’t work, like the trade deficit, canceling Mexican visas or increasing visa fees.
We placed bold emphasis on the part of the memo PolitiFact mentioned but ignored in its reasoning.

If the plan mentions methods to use if Mexico did not fork over the money directly, then how can the memo contradict Trump's claim he did not say Mexico would pay by check? Are fact checkers unable to distinguish between "would" and "could"? If Trump says Mexico could pay by check he does not contradict that claim by later saying he did not say Mexico would pay by check.

And what's so hard to understand about that? How can fact checkers not see it?

To help cinch its point, PolitiFact quotes from another section of the document, summarizing it as saying Mexico would pay for the wall with a lump sum payment: "(Mexico) would be told to pay a lump sum 'to the United States to pay for the wall.'" Except the term "lump sum" doesn't occur in the document.

There's reason for suspicion any time a journalist substitutes for the wording in the original document, using only a partial quotation and picking up mid-sentence. Here's the reading from the original:
On day 3 tell Mexico that if the Mexican government will contribute the funds needed to the United States to pay for the wall ...
We see only one potential justification for embroidering the above to make it refer to a "lump sum." That's from interpreting "It's an easy decision for Mexico: make a one-time payment of $5-10 billion to ensure that $24 billion continues to flow into their country year after year" as specifying a lump sum payment. We think confirmation bias would best explain that interpretation. It's more reasonable to take the statement to mean that paying for the wall once and having it over with is an obvious choice when it helps preserve a greater amount of income for Mexico annually after that. And the line does not express an expectation of a lump-sum payment but instead the strength (rightly or wrongly) of the bargaining position of the United States.

In short, PolitiFact utterly failed to make its case with the example it chose to emphasize.

... And The Rest

(These are weak, so they occur after a page break.)

Monday, January 7, 2019

Research shows PolitiFact leans left: The "Pants on Fire" bias

In 2011 PolitiFact Bias started a study of the way PolitiFact employs its "Pants on Fire" rating.

We noted that PolitiFact's definitions for "False" and "Pants on Fire" ratings appeared to differ only in that the latter rating represents a "ridiculous" claim. We had trouble imagining how one would objectively measure ridiculousness. PolitiFact's leading lights appeared to state in interviews that the difference in the ratings was subjective. And our own attempt to survey PolitiFact's reasoning turned up nothing akin to an empirically measurable difference.

We concluded that the "Pants on Fire" rating was likely just as subjective as PolitiFact editors described it. And we reasoned that if a Republican statement PolitiFact considered false was more likely than the same type of statement from a Democrat to receive a "Pants on Fire" rating we would have a reasonable measure of ideological bias at PolitiFact.

Every year we've updated the study for PolitiFact National. In 2017, PolitiFact was 17 percent more likely to give a Democrat a "Pants on Fire" rating for a false statement. But the number of Democrats given false ratings was so small that it hardly affected the historical trend. Over PolitiFact's history, Republicans are over 50 percent more likely than Democrats to receive a "Pants on Fire" rating for a false claim.


After Angie Drobnic Holan replaced Bill Adair as PolitiFact editor, we saw a tendency for PolitiFact to give Republicans many more false ("False" plus "Pants on Fire") ratings than Democrats. In 2013, 2015, 2016 and 2017 the percentage was exactly 25 percent each year. Except for 2007, which we count as an anomaly, that percentage marked the record high for Democrats. It appeared likely that Holan was aware of our research and leading PolitiFact toward more careful exercise of its subjective ratings.

Of course, if PolitiFact fixes its approach to the point where the percentages are roughly even, it powerfully shows that the disparities from 2009 through 2014 represent ideological bias. If one fixes a problem, it serves to acknowledge there was a problem in need of fixing.

In 2018, however, the "Pants on Fire" bias fell pretty much right in line with PolitiFact's overall history. Republicans in 2018 were about 50 percent more likely to receive a "Pants on Fire" rating for a claim PolitiFact considered false.

The "Republicans Lie More!" defense doesn't work

Over the years we've had a hard time explaining to people why it doesn't explain away our data to simply claim that Republicans lie more.

That's because of two factors.

First, we're not basing our bias measure on the number of "Pants on Fire" ratings PolitiFact doles out. We're just looking at the percentage of false claims given the "Pants on Fire" rating.
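The distinction matters, so here is a minimal sketch of the calculation, using made-up counts purely for illustration (these are not PolitiFact Bias's actual tallies). The measure is each party's share of false claims ("False" plus "Pants on Fire") that drew the harsher "Pants on Fire" rating, not the raw number of ratings:

```python
# Hypothetical counts for illustration only -- not the study's actual data.
# "false" = claims rated "False"; "pof" = claims rated "Pants on Fire".
ratings = {
    "Republicans": {"false": 120, "pof": 60},
    "Democrats":   {"false": 80,  "pof": 20},
}

def pof_share(counts):
    """Share of a party's false claims ("False" + "Pants on Fire")
    that received the harsher "Pants on Fire" rating."""
    total_false = counts["false"] + counts["pof"]
    return counts["pof"] / total_false

rep = pof_share(ratings["Republicans"])  # 60 / 180, about 0.333
dem = pof_share(ratings["Democrats"])    # 20 / 100, exactly 0.200

# Relative likelihood that a false Republican claim gets "Pants on Fire"
relative = rep / dem - 1

print(f"Republican PoF share: {rep:.1%}")
print(f"Democrat PoF share:   {dem:.1%}")
print(f"Republicans {relative:.0%} more likely to get 'Pants on Fire'")
```

Note that the Republicans' larger raw totals drop out of the comparison entirely; only the within-party proportions are compared, which is why "Republicans lie more" does not touch the measure.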

Second, our research provides no reason to believe that the "Pants on Fire" rating has empirical justification. PolitiFact could invent a definition for what makes a claim "Pants on Fire" false. PolitiFact might even invent a definition based on some objective measurement. And in that case the "Republicans lie more!" excuse could work. But we have no evidence that PolitiFact's editors are lying when they tell the public that the difference between the two ratings is subjective.

If the difference is subjective, as it appears, then PolitiFact's greater likelihood of giving a Republican's false statement a "Pants on Fire" rating counts as a very clear indicator of ideological bias.

To our knowledge, PolitiFact has never addressed this research with public comment.

Thursday, January 3, 2019

PolitiFact's 10 Worst Fact Check Flubs of 2018

The worst of the mainstream fact checkers, PolitiFact, produced many flawed fact checks in 2018. Here's our list of PolitiFact's 10 worst fact check flubs from 2018.

10 PolitiFact Wisconsin's Worry-O-Meter

Republican Leah Vukmir challenged Democrat Tammy Baldwin for one of Wisconsin's senate seats in 2018. Vukmir attacked Baldwin's willingness to take a hard line on terrorism by saying Baldwin was more worried about "the mastermind of 9/11" than about supporting Trump's nominee to head the CIA.

How does a fact checker measure worry?

No worries! PolitiFact Wisconsin claimed to have looked for signs Baldwin worried about Khalid Sheik Mohammed and didn't find anything. So it rated Vukmir's claim "Pants on Fire." PolitiFact Wisconsin skillfully circumnavigated Vukmir's clearly implied reference to a key reason Democrats opposed Trump's nominee, Gina Haspel: She had followed orders to implement enhanced interrogation techniques, including waterboarding. Mohammed was one of those to whom the technique was applied. Those are not the kinds of dots a fact checker like PolitiFact can connect.

9 Current immigration policy costs as much as $300 billion according to one study

The White House published an infographic claiming immigration policy costs the government money--as much as $300 billion according to one study. PolitiFact examined the question and found that it was "Half True" because it supposedly left out important details, like "U.S.-born children with at least one foreign-born parent are among the strongest economic and fiscal contributors, thanks in part to the spending by local governments on their education." No, really. That's critical missing context in PolitiFact's book. At least in this case.

8 Trump says senior White House official who said North Korean summit would be impossible to keep does not exist

The New York Times reported that a "senior White House official" said the U.S. summit with North Korea would be impossible to keep on its original date. It turned out the official didn't quite say that, instead saying words to the effect that keeping the original date would prove extremely difficult.

When Trump tweeted that the source did not exist, PolitiFact fact checked the claim. In doing so, the fact checkers set aside the idea that Trump was saying no senior White House official had made the claim attributed by the Times. The fact checkers concluded that Trump's tweet was "Pants on Fire" false because the person to whom the Times attributed its dubious paraphrase was a real person.

We count this as a classic example of uncharitable interpretation.

7 Sen. Ted Cruz claims he has consistently opposed government shutdowns

After Sen. Ted Cruz (R-Texas) said he has consistently opposed government shutdowns, PolitiFact rated his claim "Pants on Fire" because Cruz joined a failed vote against cloture on a bill that would have ended a government shutdown. PolitiFact said Cruz had failed his own test for supporting a shutdown: Cruz said shutdowns happen when senators vote to deny cloture on a funding bill. But obviously on a failed cloture vote the shutdown does not occur even though some senators voted to deny cloture on a funding bill. PolitiFact tried to make Cruz look like a hypocrite by taking his statement out of context.

6 PolitiFact claims Trump was wrong that a civilian in the room with Omar Mateen might have prevented the Pulse nightclub massacre

(Via Zebra Fact Check) When President Trump tweeted that a civilian with a gun in the room with Mateen might have prevented or reduced the casualties from the Pulse nightclub shooting, PolitiFact ruled the claim "False."

But PolitiFact made an incoherent case for its ruling. Trump was stating a counterfactual scenario, that if a civilian with a gun had been in the room with Mateen then the killing might have been prevented. PolitiFact argued, in effect, that the police detective doing guard duty in the Pulse parking lot counted as the civilian in the room with Mateen and had no effect on the outcome. A person in a parking lot is not the same as a person in the room, we say. And we see no grounds for the implication that the detective in the parking lot failed to contribute to a better outcome compared to having no armed guard in the parking lot.

5 PolitiFact determines 4.1 percent GDP growth objectively not "amazing."

After President Trump went to Twitter to declare 4.1 percent GDP growth rate "amazing," PolitiFact fact checked the claim and determined it "False." The Weekly Standard took note of PolitiFact's factual interest in a matter of opinion.

PolitiFact often fails to follow its principle against fact-checking opinion or hyperbole, and this case serves as an excellent example.

4 Does the European Union export cars to the U.S. by the millions?

After President Trump claimed the EU exports cars to the United States by the millions, PolitiFact interpreted the claim to refer specifically (and separately) to Mercedes and BMW vehicles (Trump mentioned both in his tweet) or to Germany in particular. Additionally, PolitiFact assumed that the rate of imports had to exceed 1 million per year to make Trump's claim true. That's despite the fact Trump specified no timed rate.

PolitiFact found its straw man version of Trump's claim "False." As a matter of fact, the number of cars manufactured in the European Union and exported to the United States exceeded 1 million from 2014 through 2016 (the latest numbers when Trump tweeted). Definitely false, then?

3 Did added debt in 2017 exceed cumulative debt over the United States' first 200 years in terms of GDP?

When MSNBC host Joe Scarborough said the Trump administration had added more debt than was accumulated in the nation's first 200 years, PolitiFact fact checked the claim. It was true, PolitiFact found, in terms of raw dollars. But experts told PolitiFact that debt as a percentage of GDP serves as the best measure. So PolitiFact incorrectly interpreted the total accumulated debt in 2017 as added debt and proclaimed that Scarborough's claim checked out in terms of percentage of GDP.

Scarborough received a "Mostly True" rating for a claim that was incorrect in terms of GDP--what PolitiFact reported as the most appropriate measure.

Making this one even better, PolitiFact declined to fix the problem after we pointed it out to them.

2 PolitiFact decides who built what

After the right-leaning Breitbart news site published a fact check announcing that immigrants did not build Fall River, Massachusetts ("mostly false," according to that fact check), PolitiFact published a fact check finding the Breitbart fact check "False." PolitiFact and Breitbart reported that established residents of Fall River built factories. Immigrants came to work in the factories. We agree with Breitbart that it does not make sense to withhold all credit from the people who built the factories.

1 PolitiFact flip-flops on its 2013 "Lie of the Year"

Republican senatorial candidate Josh Hawley (R-Mo.) tagged his opponent, Democrat Claire McCaskill, with chiming in on PolitiFact's 2013 "Lie of the Year"--the promise that Americans would be able to keep their existing health care plans under the Affordable Care Act.

Hawley accurately summarized PolitiFact's reasoning for its 2013 award. Obama's promise was emphatic that people would not lose their existing plans. Yet millions received cancellation notices in 2013 from insurance companies electing to simply drop potentially grandfathered plans. That led to Obama's promise sharing the "Lie of the Year." PolitiFact baselessly claimed that when Hawley said people lost their plans he was sending the message that the millions of people completely lost insurance instead of simply losing the plans they preferred.

Happy New Year!

Correction Jan 3, 2019: Did a strikethrough correction, changing "measure of GDP" to "percentage of GDP"