Saturday, August 11, 2018

Did an Independent Study Find PolitiFact Is Not Biased?

An email alert from August 10, 2018 led us to a blaring headline from the International Fact-Checking Network:

Is PolitiFact biased? This content analysis says no

Though "content analysis" could mean the researchers looked at pretty much anything having to do with PolitiFact's content, we suspected the article was talking about an inventory of PolitiFact's word choices, looking for words associated with a political point of view. For example, "anti-abortion" and "pro-life" signal political points of view. Using those and similar terms may tip off readers regarding the politics those who produce the news.

PolitiFact Bias has never used the presence of such terms to support our argument that PolitiFact is biased. In fact, I (Bryan) tweeted a brief judgment of the study back on July 16, 2018:
We have two major problems with the IFCN article at Poynter.org (by Daniel Funke).

First, it implies that the word-use inventory somehow negates the evidence of bias that PolitiFact's critics cite, evidence that does not involve the types of word choices the study was designed to detect:
It’s a critique that PolitiFact has long been accustomed to hearing.

“PolitiFact is engaging in a great deal of selection bias,” The Weekly Standard wrote in 2011. “'Fact Checkers' Overwhelmingly Target Right-Wing Pols and Pundits” reads an April 2017 headline from NewsBusters, a site whose goal is to expose and combat “liberal media bias.” There’s even an entire blog dedicated to showing the ways in which PolitiFact is biased.

The fact-checking project, which Poynter owns, has rebuffed those accusations, pointing to its transparent methodology and funding (as well as its membership in the International Fact-Checking Network) as proof that it doesn’t have a political persuasion. And now, PolitiFact has an academic study to back it up.
The second paragraph mentions selection bias (taking the Weekly Standard quotation out of context) and other types of bias noted by PolitiFact Bias ("an entire blog dedicated to showing the ways in which PolitiFact is biased"--close enough, we suppose, thanks for linking us).

The third paragraph says PolitiFact has "rebuffed those accusations." We think "ignores those accusations" describes the situation more accurately.

The third paragraph goes on to mention PolitiFact's "transparent methodology" (true if you ignore the ambiguity and inconsistency) and transparent funding (yes, funded by some left-wing sources, but PolitiFact Bias does not use that as evidence of PolitiFact's bias) before claiming that PolitiFact "has an academic study to back it up."

"It"=PolitiFact's rebuffing of accusations it is biased????

That does not follow logically. To support PolitiFact's denials of the bias of which it is accused, the study would have to offer evidence countering the specific accusations. It doesn't do that.

Second, Funke's article suggests that the study shows a lack of bias. We see that idea in the title of Funke's piece as well as in the material from the third paragraph.

But that's not how science works. Even for the paper's specific area of study, it does not show that PolitiFact has no bias. At best it could show that the word choices it tested offer no significant indication of bias.

The difference is not small, and Funke's article even includes a quotation from one of the study's authors emphasizing the point:
But in a follow-up email to Poynter, Noah Smith, one of the report’s co-authors, added a caveat to the findings.

“This could be because there's really nothing to find, or because our tools aren't powerful enough to find what's there,” he said.
So the co-author says maybe the study's tools were not powerful enough to find the bias that exists. Yet Funke sticks with the title "Is PolitiFact biased? This content analysis says no."

Is it too much to ask for the title to agree with a co-author's description of the meaning of the study?

The content analysis did not say "no." It said (we summarize) "not in terms of these biased language indicators."

Funke's article paints a very misleading picture of the content and meaning of the study. The study refutes none of the major critiques of PolitiFact of which we are aware.


Afters

PolitiFact's methodology, funding and verified IFCN signatory status are supposed to assure us it has no political point of view?

We'd be more impressed if PolitiFact staffers revealed their votes in presidential elections and more than a tiny percentage voted Republican more than once in the past 25 years.

It's anybody's guess why fact checkers do not reveal their voting records, right?


Correction Aug. 11, 2018: Altered headline to read "an Independent Study" instead of "a Peer-Reviewed Study"

The Weekly Standard Notes PolitiFact's "Amazing" Fact Check

The Weekly Standard took note of PolitiFact's audacity in fact-checking Donald Trump's claim that the economy grew at the amazing rate of 4.1 percent in the second quarter.
The Trumpian assertion that moved the PolitiFact’s scrutineers to action? This one: “In the second quarter of this year, the United States economy grew at the amazing rate of 4.1 percent.” PolitiFact’s objection wasn’t to the data—the economy really did grow at 4.1 percent in the second quarter—but to the adjective: amazing.
That's amazing!

PolitiFact did not rate the statement on its "Truth-O-Meter" but published its "Share The Facts" box featuring the judgment "Strong, but not amazing."

PolitiFact claims it does not rate opinions and grants license for hyperbole.

As we have noted before, it must be the fault of Republicans who keep trying to use hyperbole without a license.

Friday, August 10, 2018

PolitiFact Editor: It's Frustrating When Others Do Not Follow Their Own Policies Consistently

PolitiFact Editor Angie Drobnic Holan says she finds it frustrating that Twitter does not follow its own policies (bold emphasis added):
The fracas over Jones illustrates a lot, including how good reporting and peer pressure can actually force the platforms to act. And while the reasons that Facebook, Apple and others banned Jones and InfoWars have to do with hate speech, Twitter’s inaction also confirms what fact-checkers have long thought about the company’s approach to fighting misinformation.

"They’re not doing anything, and I’m frustrated that they don’t enforce their own policies," said Angie Holan, editor of (Poynter-owned) PolitiFact.
Tell us about it.

We started our "(Annotated) Principles of PolitiFact" page years ago to expose examples of the way PolitiFact selectively applies its principles. It's a shame we haven't had the time to keep that page updated, but our research indicates PolitiFact has failed to correct the problem to any noticeable degree.

Tuesday, August 7, 2018

The Phantom Cherry-pick

Would Sen. Bernie Sanders' Medicare For All plan save $2 trillion over 10 years on U.S. health care expenses?

Sanders and the left were on fire this week trying to co-opt a Mercatus Center paper by Charles Blahous. Sanders and others claimed Blahous' paper confirmed the M4A plan would save $2 trillion over 10 years.

PolitiFact checked in on the question and found Sanders' claim "Half True":


PolitiFact's summary encapsulates its reasoning:
The $2 trillion figure can be traced back to the Mercatus report. But it is one of two scenarios the report offers, so Sanders’ use of the term "would" is too strong. The alternative figure, which assumes that a Medicare for All plan isn’t as successful in controlling costs as its sponsors hope it will be, would lead to an increase of almost $3.3 trillion in national health care expenditures, not a decline. Independent experts say the alternative scenario of weaker cost control is at least as plausible.

We rate the statement Half True.
Throughout its report, as pointed out at Zebra Fact Check, PolitiFact treats the $2 trillion in savings as a serious attempt to project the true effects of the M4A bill.

In fact, the Mercatus report uses what its author sees as overly rosy assumptions about the bill's effects to estimate a lower boundary for the bill's very high costs, and then proceeds to offer reasons why the actual costs will likely greatly exceed that boundary.

In other words, the cherry Sanders tries to pick is a faux cherry. And a fact checker ought to recognize that fact. It's one thing to pick a cherry that's a cherry. It's another thing to pick a cherry that's a fake.

Making Matters Worse

PolitiFact makes matters worse by overlooking Sanders' central error: circular reasoning.

Sanders takes a projection based on favorable assumptions as evidence that the favorable assumptions are reasonable. But a conclusion one reaches based on assumptions does not make the assumptions any more true. Sanders' claim suggests the opposite: that when the Blahous paper says it is using unrealistic assumptions, the conclusions it reaches using those assumptions somehow make the assumptions reasonable.

A fact checker ought to point out when a politician peddles such nonsensical ideas.

PolitiFact made itself guilty of bad reporting while overlooking Sanders' central error.

Reader: "PolitiFact is not biased. Republicans just lie more."

Every few years or so we recognize a Comment of the Week.

Jehosephat Smith dropped by on Facebook to inform us that PolitiFact is not biased:
Politifact is not biased, Republicans just lie more. That is objectively obvious by this point and if your mind isn't moved by current realities then you're willfully ignorant.
As we have prided ourselves on trying to communicate clearly exactly why we find PolitiFact biased, we find such comments fascinating on two levels.


First, how can one claim that PolitiFact is not biased? On what evidence would one rely to support such a claim?

Second, how can one contemplate claiming PolitiFact isn't biased without making some effort to address the arguments we've made showing PolitiFact is biased?

We invited Mr. Smith to make his case either here on the website or on Facebook. But rather than simply heaping Smith's burden of proof on his head we figured his comment would serve us well as an excuse to again summarize the evidence showing PolitiFact's bias to the left.


Journalists lean left
Journalists as a group lean left. And they lean markedly left of the general U.S. population. Without knowing anything else at all about PolitiFact we have reason to expect that it is made up mostly of left-leaning journalists. If PolitiFact journalists lean left as a group then right out of the box we have reason to look for evidence that their political leaning affects their fact-checking.

PolitiFact's errors lean left I
When PolitiFact makes an egregious reporting error, the error tends to harm the right or fit with left-leaning thinking. For example, when PolitiFact's Louis Jacobson reported that Hobby Lobby's policy on health insurance "barred" women from using certain types of birth control, we noted that pretty much anybody with any rightward lean would have spotted the mistake and prevented its publication. Instead, PolitiFact published it and later changed it without posting a correction notice. We have no trouble finding such examples.

PolitiFact's errors lean left II
We performed a study of PolitiFact's calculations of percentage error. PolitiFact often performs the calculation incorrectly, and the errors tend to benefit Democrats (caveat: small data set).
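
For readers unfamiliar with the calculation at issue, here is a minimal sketch of percentage error, with a wrong-base miscalculation for contrast. The figures and the function name are ours, invented purely for illustration; we are not claiming this particular error appears in any specific fact check.

    def percentage_error(claimed, true_value):
        # Percentage error measures how far a claimed figure strays from
        # the true figure, using the true figure as the base.
        return abs(claimed - true_value) / abs(true_value) * 100

    # A politician claims 120 when the true figure is 100:
    print(percentage_error(120, 100))    # 20.0 -- a 20 percent error

    # Dividing by the claimed figure instead of the true one understates
    # the error; this is one common way to botch the calculation:
    print(abs(120 - 100) / 120 * 100)    # about 16.7 percent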

PolitiFact's ratings lean left I
When PolitiFact rates Republicans and Democrats on closely parallel claims Democrats often fare better. For example, when PolitiFact investigated a Democratic Party charge that Rep. Bill McCollum raised his own pay while in Congress PolitiFact said it was true. But when PolitiFact investigated a Republican charge that Sherrod Brown had raised his own pay PolitiFact discovered that members of Congress cannot raise their own pay and rated the claim "False." We have no trouble finding such examples.

PolitiFact's ratings lean left II
We have done an ongoing and detailed study looking at partisan differences in PolitiFact's application of its "Pants on Fire" rating. PolitiFact describes no objective criterion for distinguishing between "False" and "Pants on Fire" ratings, so we hypothesize that the difference between the two ratings is subjective. Republicans are over 50 percent more likely than Democrats to have a false rating deemed "Pants on Fire" for these apparently subjective reasons.
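
To make that comparison concrete, here is a minimal sketch of the calculation behind a finding like "over 50 percent more likely." The counts below are placeholders for illustration, not our actual data set:

    def pants_on_fire_share(pof_count, false_count):
        # Share of a party's false-rated claims that drew the harsher
        # "Pants on Fire" rating rather than plain "False."
        return pof_count / (pof_count + false_count)

    dem_share = pants_on_fire_share(pof_count=20, false_count=80)  # 0.20
    rep_share = pants_on_fire_share(pof_count=30, false_count=70)  # 0.30

    # Relative likelihood that a false claim gets "Pants on Fire":
    print(rep_share / dem_share)   # 1.5, i.e. 50 percent more likely

Any ratio above 1.5 corresponds to "over 50 percent more likely."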

PolitiFact's explanations lean left
When PolitiFact explains topics its explanations tend to lean left. For example, when Democrats and liberals say Social Security has never contributed a dime to the deficit, PolitiFact gives it a rating such as "Half True," apparently unable to discover the fact that Social Security has run a deficit during years when the program was on-budget (and therefore unquestionably contributed directly to the deficit in those years). PolitiFact resisted Republican claims that the ACA cut Medicare, explaining that the so-called Medicare cuts were not truly cuts because the Medicare budget continued to increase. Yet PolitiFact discovered that when the Trump administration slowed the growth of Medicaid it was okay to refer to the slowed growth as a program cut. Again, we have no trouble finding such examples.

How can a visitor to our site (including Facebook) contemplate declaring PolitiFact isn't biased without coming prepared to answer our argument?


Friday, July 6, 2018

PolitiFact: "European Union"=Germany

PolitiFact makes all kinds of mistakes, but some serve as better examples of ideological bias than others. A July 2, 2018 PolitiFact fact check of President Donald Trump serves as pretty good evidence of a specific bias against Mr. Trump:


The big clue that PolitiFact botched this fact check occurs in the image we cropped from PolitiFact's website.

Donald Trump states that the EU sends millions of cars to the United States. PolitiFact adjusts that claim, treating Trump as though he had specified German cars, sent at an annual rate in the millions. Yet Trump did not specify German cars and did not specify an annual rate.

PolitiFact quotes Trump:
At one point, he singled out German cars.

"The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions," Trump said.
Saying Trump "singled out German cars" counts as twisting the truth. Trump "singled out" German cars in the sense of offering two examples of German cars among the millions sent to the United States by the European Union.

It counts as a major error for a fact checker to ignore the clear context showing that Trump was talking about the European Union and not simply German cars of one make (Mercedes) or another (BMW). And if those German makes account for large individual shares of EU exports to the United States then Trump deserves credit for choosing strong examples.

It counts as another major error for a fact checker to assume an annual rate in the millions when the speaker did not specify any such rate. How did PolitiFact determine that Trump was not talking about a monthly rate, or the rate over a decade? Making assumptions is not the same thing as fact-checking.

When a speaker uses ambiguous language, the responsible fact checker offers the speaker charitable interpretation. That means using the interpretation that makes the best sense of the speaker's words. In this case, the point is obvious: The European Union exports millions of cars to the United States.

But instead of looking at the number of cars the European Union exports to the United States, PolitiFact cherry picked German cars. That focus came through strongly in PolitiFact's concluding paragraphs:
Our ruling

Trump said, "The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions."

Together, Mercedes, BMW and Volkswagen imported less than a million cars into the United States in 2017, not "millions."

More importantly, Trump ignores that a large proportion of German cars sold in the United States were also built here, using American workers and suppliers whose economic fortunes are boosted by Germany’s carnakers [sic]. Other U.S.-built German cars were sold as exports.

We rate the statement False.
That's sham fact-checking.

A serious fact check would look at the European Union's exports specifically to the United States. The European Automobile Manufacturers Association has those export numbers available from 2011 through 2016. From 2011 through 2013 the number was under 1 million annually. For 2014 through 2016 the number was over 1 million annually.

Data through September 2017 from the same source shows the European Union on pace to surpass 1 million units for the fourth consecutive year.


Does exporting over 1 million cars to the United States per year for three or four consecutive years count as exporting cars to the United States by the millions (compare the logic)?

We think we can conclude with certainty that the notion does not count as "False."

Our exit question for PolitiFact: How does a non-partisan fact checker justify ignoring the context of Trump's statement referring specifically to the European Union? How did the European Union get to be Germany?

Friday, June 22, 2018

PolitiFact Corrects, We Evaluate the Correction

PolitiFact corrected an error in one of its fact checks this past week, most likely in response to an email we sent on June 20, 2018.
Dear PolitiFact,

A recent PolitiFact fact check contains the following paragraph (bold emphasis added):
Soon after, in February 2017, Nehlen wrote on Twitter that Islam was not a religion of peace and posted a photo of a plane striking the World Trade Center with the caption, "9/11 would’ve been a Wonderful #DayWithoutImmigrants." In the following months, Nehlen also tweeted that "Islam is not your friend," implied that Muslim communities should be bombed and retweeted posts saying Bill and Hillary Clinton were murdering associates.

The hotlink ("implied") leads to an archived Twitter page. Unless I'm missing somelthing [sic], the following represents the best candidate as a supporting evidence:


Unless "Muslim no-go zones" represent typical Muslim communities, PolitiFact's summary of Nehlen's tweet distorts the truth. If a politician similarly omitted context in this fashion, would PolitiFact not mete out a "Half True" rating or worse?

If PolitiFact excuses itself from telling the truth where people accused of bigotry are involved, that principle ought to appear in its statement of principles.

Otherwise, a correction or clarification is in order. Thanks.
We were surprised to see that PolitiFact updated the story with a clarification within two days. And PolitiFact did most things right with the fix, which it labeled a "clarification."

Here's a checklist:
  1. Paid attention to the criticism
  2. Updated the article with a clarification
  3. Attached a clarification notice at the bottom of the fact check
  4. Added the "Corrections and Updates" tag to the article, ensuring it would appear on PolitiFact's "Corrections and Updates" page
Still, we think PolitiFact can do better.

Specifically, we fault PolitiFact for its lack of transparency regarding the specifics of the mistake.

Note what Craig Silverman, long associated with PolitiFact's owner, the Poynter Institute, said in an American Press Institute interview about letting readers know what changed:

News organizations aren’t the only ones on the internet who are practicing some form of journalism. There are a number of sites or blogs or individual bloggers who may not have the same standards for corrections. Is there any way journalists or anyone else can contribute to a culture of corrections? Where does it start?

SILVERMAN: Bloggers actually ended up doing a little bit of correction innovation. In the relatively early blogging days, you’d often see strikethrough used to cross out a typo or error. This was a lovely use of the medium, as it showed what was incorrect and also included the correct information after. In that respect, bloggers modelled good behavior, and showed how digital corrections can work. We can learn from that.

It all starts with a broad commitment to acknowledge and even publicize mistakes. That is the core of the culture, the ethic of correction.
We think Silverman has it right. Transparency in corrections involves letting the reader know what the story got wrong. In this case, PolitiFact reported that a tweet implied that somebody wanted to bomb Muslim communities. The tweet referred, in fact, to a small subset of Muslim communities (so small that PolitiFact says they do not exist) referred to as "no-go zones"--areas where non-Muslims allegedly face unusual danger to their person and property.

PolitiFact explained its error like this:
This fact-check has been updated to more precisely refer to a previous Nehlen tweet
That notice is transparent about the fact the text of the fact check was changed and transparent about the part of the fact check that was changed (information about a Nehlen tweet). But it mostly lacked transparency about what the fact check got wrong and the misleading impression it created.

We think journalists, including PolitiFact, stand to gain public trust by full transparency regarding errors. Though that boost to public trust assumes that errors aren't so ridiculous and rampant that transparency instead destroys the organization's credibility.

Is that what PolitiFact fears when it issues these vague descriptions of its inaccuracies?

Still, we're encouraged that PolitiFact performed a clarification and mostly followed its corrections policy. Ignoring needed corrections is worse than falling short of best practices with the corrections.

Monday, June 18, 2018

PolitiFact Wisconsin: The Future is Now!

A May 2, 2018 fact check from PolitiFact Wisconsin uses projected numbers from the 2018-2019 budget year to assess a claim that Wisconsinites are now paying twice as much for debt service on road work as they were paying in 2010-2011 before Republican Scott Walker took over as Wisconsin's governor.


Democratic candidate for governor Kelda Helen Roys and her interviewer used a 22-23 percent figure to represent current spending on road work debt service in Wisconsin.

PolitiFact Wisconsin gave both a pass on their fudging of the facts, but lowered Roys' rating from "True" down to "Mostly True" because the numbers used were mere estimates:
The figure is projected to reach 20.9 percent during the second year of the current two-year state budget Walker signed, which is nearly doubling.

With the caveat that the figure for the current budget is an estimate, we rate Roys’ statement Mostly True.
We think that reasoning would work better as a fact check of Roys' claim if the estimated number represented what Wisconsin is paying now for debt service on its road work. Unless PolitiFact Wisconsin is saying the future is now, the estimate for budget year 2017-2018 would better fit the bill.

PolitiFact Wisconsin reported the 2017-2018 estimate as 20 percent but used the higher figure for the following budget year to judge Roys' accuracy.

And that was just one of three ways PolitiFact Wisconsin massaged the Democrat's statement into a closer semblance of the truth.

What is "Just Basic Road Repair and Maintenance"?

Roys claimed the debt service was "for just basic road repair and maintenance," which would apparently exclude new construction. PolitiFact tested her claim using the numbers for the transportation-related share of the budget (bold emphasis added):
In analyzing 2017-’19 two-year state budget enacted by Walker and the GOP-controlled Legislature, the bureau provided figures on the total of all transportation debt service as a percentage of gross transportation fund revenue -- in other words, what portion of transportation revenue for road work would be going to paying off debt.
PolitiFact's other truth-massage credited Roys with making clear that the debt service increase she spoke of was the debt service amount as a percentage of total spending on roads. Aside from the fact Roys talked about "just basic road repair and maintenance," she offered listeners no clue that she used the same measure PolitiFact Wisconsin used to fact check her claim.

The clue that likely drove PolitiFact to check the debt service as a percentage of road work expenses came from WisconsinEye senior producer Steve Walters, who conducted the interview of Roys. Walters referred no fewer than twice to a "22 to 23 percent" figure for debt service during the interview.

Since that number came from Walters, PolitiFact Wisconsin apparently felt no need to fact check its accuracy.

Does Some Road Construction Go Beyond 'Basic'?

We think the phrase "basic road repair and maintenance" may leave some members of the audience with the impression that more involved road work such as replacing bridges would balloon the cost of debt service even higher than described.

We found a page run by the Wisconsin Department of Transportation describing its road projects. Here's the description of one costing $9.6 million:
Description of work: The project consists of a full reconstruction of WIS 55 (Delanglade Street) from I-41 to Lawe Street in the city of Kaukauna. Improvements will include roundabouts at the intersections of I-41 ramps, Maloney/Gertrude, and County OO. New traffic signals will be installed at County J/WIS 55/WIS 96, and bike/pedestrian accommodations will be added throughout the project limits along WIS 55. Other work includes storm sewer, sanitary sewer, water main, sidewalks, retaining walls, street lighting, and incidentals.
It appears to us that PolitiFact Wisconsin simply assumed that all the described work rightly fits under Roys' description.

We're skeptical that such assumptions hold a rightful place among the best practices for fact checkers.

Summary


If we assume that Roys was talking about all expenses attached to road work, and also assume she was talking about the increase in the estimated dollar amount of debt service, her estimate is off by only about 7 percent. In that case, PolitiFact Wisconsin did not really need to use future estimates to justify Roys' statement about how much Wisconsin is spending now. It could have just used the measure Roys described and rated that against the estimate for this year's spending.

But a fact checker could easily have justified asking Roys to define what she meant by "basic road repair and maintenance" and then using that definition to grade her accuracy. A better fact check would likely result.

We wonder if Roys would need to join the Republican Party to make that happen.

Thursday, June 14, 2018

Different Strokes for Different Quotes: What does "voted for tax cuts" really mean?

"What I find is it's hard for me to take critics seriously when they never say we do anything right. Sometimes we can do things right, and you'll never see it on that site."

-PolitiFact Editor Angie Drobnic Holan



Sometimes PolitiFact can do things right.

PolitiFact New York did something right recently that deserves mention because it's the correct way to journomalist:





PolitiFact added that the Trump camp "did not get back to us with information supporting his claim, so we can't say for sure what he was talking about in his endorsement."

PolitiFact noted that Trump tweeted about the Tax Cuts and Jobs Act "four other times in May" but acknowledged Trump did not reference that law in the tweet it fact checked.

In our view this is the correct approach.

We think a persuasive argument could be made that Trump inaccurately implied Donovan was a Tax Cuts and Jobs Act supporter, but that argument belongs on the editorial page, not in a fact check. PolitiFact examined the claim Trump made without inventing assumptions about what he meant or what he was implying. In this case PolitiFact stuck to the facts.

Notwithstanding our longtime opposition to rating facts on a sliding scale, we think PolitiFact did this one right and we're happy to point it out.


The Other Guy

Readers may wonder "How could a fact checker screw this one up?" Donovan had a documented history of voting for tax cuts, and Trump's claim was not only unambiguous but also easy to check.

How could a serious fact checker get this wrong?







When the Washington Post's unabashed Trump basher/unbiased truthsayer tweeted that "fact checkers sometimes disagree" we were curious. PolitiFact rated Trump's tweet as accurate, while Kessler deemed the exact same tweet false. How can that be?

As it turns out the two fact checkers aren't disagreeing at all.

PolitiFact correctly identified the claim Trump made and ruled based on his actual words. Kessler invented a claim and then gave Trump a false rating for his own fantasy. The fact checkers aren't disagreeing because they're not checking the same claim.

Kessler says Trump's claim that Donovan "voted for tax cuts" is false because "Donovan voted against Trump's tax cut three times." For those of you who aren't experts in journalism or logic, voting against the Tax Cuts and Jobs Act does not negate the fact that Donovan has previously voted for other tax cuts.

As far as we can tell, Kessler offered no justification for calling Trump's claim false other than Donovan's opposition to the 2017 tax bill.

Kessler's reasoning here is flatly wrong. And if one wanted to treat Kessler with the same painful pedantry as he applies to Trump in his chart, one could note there's no such thing as "Trump's tax cuts" because only Congress can pass tax bills.

Petty word games aside, this "disagreement" among fact checkers affirms that our fact-divining betters are neither scientific agents of truth nor objective determiners of evidence. When a fact checker substitutes his own interpretation of what a person meant for that person's actual words, it counts as commentary, not an adjudication of facts.

Kudos to PolitiFact New York for taking the correct approach. Sometimes PolitiFact can do things right.


Monday, June 11, 2018

PolitiFact 2016: Despite No Evidence Supporting Our Conclusion, It's "Half True" Donald Trump Doesn't Believe in Equal Pay for Equal Work


Our part-time effort to hold PolitiFact accountable allows many problems to slip through the cracks. Sometimes our various research projects bring a problematic fact check to our attention.

Case in point:


PolitiFact's Nov. 2, 2016 "fact check" found Democratic presidential nominee Hillary Clinton's claim "Half True" despite finding no evidence supporting it other than a fired campaign organizer's complaint of gender discrimination.

We must be exaggerating, right?

We challenge anybody to find concrete evidence of Trump's disbelief in equal pay in PolitiFact's "fact check" apart from the allegation we just described.


In fact, PolitiFact's fact check makes it look like the fact checkers have difficulty distinguishing between the raw pay gap and differences in pay stemming from discrimination. Trump comes across looking like he makes that distinction. PolitiFact comes across looking like it interprets Trump's insistence on that distinction as support for Clinton's claim.

Take for example this unsupportive piece of supporting evidence PolitiFact received from the Clinton campaign:
Clinton’s campaign pointed to another August 2015 interview, in which CNN’s Chris Cuomo asked Trump if he would pass equal pay legislation.

Trump said he was looking into it "very strongly."

"One of the problems you have is you get to have an economy where it's no longer free enterprise economy," Trump said.

Trump said he favored the concept, but that it’s "very complicated."

"I feel strongly -- the concept of it, I love," Trump said. "I just don't want it to be a negative, where everybody ends up making the same pay. That's not our system. You know, the world, everybody comes in to get a job, they make -- people aren't the same."
Trump says he favors the concept (PolitiFact's paraphrase) and in Trump's own words he "loves" the concept. Trump cautions against everybody ending up making the same pay. Does that sound like equal pay for equal work? Is all work equal?

If anything, the evidence Clinton offered against Trump helped undermine her own case.

Making this fact check even more bizarre, PolitiFact's summary fails to reference its best evidence of Trump's disbelief in equal pay--the went-nowhere gender discrimination case:
Our ruling

Clinton said Trump "doesn't believe in equal pay."

Trump’s campaign website does not have a stipulated stance on equal pay for men and women, but his campaign says he supports "equal pay for equal work." Trump has said men and women doing the same job should get the same pay, but it’s hard to determine what’s "the same job," and that if everybody gets equal pay, "you get away from capitalism in a sense."

Trump has also said pay should be based on performance, not gender -- so he does appear to favor uniform payment if performance is alike.

Clinton’s statement is partially accurate but leaves out important details or takes things out of context. We rate it Half True.
Put bluntly, PolitiFact put nothing in its summary in support of Clinton's claim.

Noting that "Trump's campaign website does not have a stipulated stance on equal pay for men and women" counts as an argument from silence. Making matters worse for Clinton, the campaign breaks its silence to endorse the concept of equal work for equal pay.

PolitiFact claims it places the burden of proof on the one making the claim, in this case Hillary Clinton. The evidence suggests PolitiFact instead placed the burden of proof on the Trump campaign.

PolitiFact makes a snippet mosaic out of Trump's statements that appear to show that he doesn't believe men and women should make equal pay regardless of whether they do equal work. Is that supposed to serve as evidence Trump does not believe in equal pay for equal work?

In the end, PolitiFact gave Clinton a "Half True" rating despite finding no real evidence in support of her claim and an abundance of evidence contradicting it.


Afters I

The 2016 discrimination complaint from Elizabeth Mae Davidson was never litigated and was dropped earlier this year.

Afters II

PolitiFact's description of the gender wage gap in the Clinton fact check puts it somewhat at odds with some of its own past fact checks. Note this from the Clinton fact check:
We’ve detailed key issues about the gender wage gap in our PolitiFact Sheet, but a consistent argument is that women earn 77 cents on the dollar that men earn.

(...)

The Institute for Policy Women’s Research says discrimination is a big factor for why the gender wage gap still persists. Experts consider "occupational segregation" another reason for the wage gap, which means women more often than men work in jobs that pay low and minimum wages.
If you've got a "big reason" and "another reason" which reason should you expect to have the greatest effect? The "big reason," right? Some of PolitiFact's past fact checks have correctly cast serious doubt on that proposition. PolitiFact's summary article on the gender wage gap  (the same "PolitiFact Sheet" referenced above in the Clinton fact check) features a fine example (bold emphasis added):
THE BIG PICTURE

Just before Obama took office in 2009, the Department of Labor released a study because, as a deputy assistant secretary explained it, "The raw wage gap continues to be used in misleading ways to advance public policy agendas without fully explaining the reasons behind the gap." The study by CONSAD Research Corp. took into account women being more likely to work part-time for lower pay, leave the labor force for children or elder care, and choose work that is "family friendly" with fuller benefit packages over higher pay. The study found that, when factoring in those variables, the gap narrows to between 93 cents and 95 cents on the dollar.
We would remind readers that the CONSAD study is not saying that gender discrimination accounts for 5 to 7 percent of the raw gender wage gap. It estimates that a gap of 5 to 7 cents on the dollar remains unexplained after accounting for a combination of women's occupational and family choices. Those aren't the only factors influencing the raw wage gap.

So about two-thirds of the raw wage gap is explained by the job choices women make, and 5 to 7 cents on the dollar remains unexplained, with part of that remainder perhaps explained by gender discrimination.
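
Here is a minimal sketch of that arithmetic, using the endpoints cited in this post (a raw ratio of 77 to 80 cents on the dollar, an adjusted ratio of 93 to 95 cents). The function name is ours:

    def wage_gap_decomposition(raw_ratio, adjusted_ratio):
        # Split the raw gender wage gap into the share explained by
        # measured choices (occupation, hours, family) and the
        # unexplained remainder.
        raw_gap = 1.0 - raw_ratio               # e.g., 1.00 - 0.77 = 0.23
        unexplained_gap = 1.0 - adjusted_ratio  # e.g., 1.00 - 0.95 = 0.05
        explained_share = (raw_gap - unexplained_gap) / raw_gap
        return raw_gap, unexplained_gap, explained_share

    for raw, adjusted in [(0.77, 0.93), (0.80, 0.95)]:
        raw_gap, unexplained, share = wage_gap_decomposition(raw, adjusted)
        print(f"raw gap {raw_gap:.2f}, unexplained {unexplained:.2f}, "
              f"explained {share:.0%}")
    # Prints shares near 70% and 75% explained, in the neighborhood of
    # the "about two-thirds" figure above, depending on the endpoints.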

Did PolitiFact not make that clear?

Afters III

In the search for some charitable explanation for PolitiFact's fact-checking, I had to consider the possibility that Clinton's claim is literally correct: Trump does not believe in equal pay if "equal pay" means paying everyone equally regardless of the job or the quality of the work.

Using that interpretation of "equal pay" would make Clinton's claim literally true but at the same time a whopper of deceit. PolitiFact appeared to take Clinton to mean "equal pay for equal work" except possibly when it used Trump's statements in support of Clinton's claim.

If it was PolitiFact's position that Clinton was saying Trump did not believe men and women should earn the same regardless of the job or the work performed then it should have stated so clearly.

Either way, PolitiFact's fact check looks incoherent.

Wednesday, May 30, 2018

PolitiFact: Senior White House Officials Exist (Contrary to What Trump Said)

Apparently PolitiFact is beyond understanding why people regard news and fact-checking skeptically.

PolitiFact and The New York Times provided a handy 'Splainer of the problem, if they would only pay attention to themselves.

The New York Times reported that a "senior White House official" said Trump's summit with North Korea, which Trump had called off, would be "impossible" to keep on its original date.

The Times used that reporting as part of a story supporting a narrative of administrative infighting within the Trump White House.

Trump went to Twitter in response:
The Failing @nytimes quotes “a senior White House official,” who doesn’t exist, as saying “even if the meeting were reinstated, holding it on June 12 would be impossible, given the lack of time and the amount of planning needed.” WRONG AGAIN! Use real people, not phony sources.
PolitiFact fact checked Trump's claim, noting that even if the Times went too far with its paraphrase, Trump did not complain about that. PolitiFact ruled Trump's claim "Pants on Fire" because the Times' source exists, even if the source did not say what the Times claimed.

That's a Crazy Way to Check Facts, PolitiFact

PolitiFact reasoned (!) that because the Times had a source for its article, it was ridiculously false to say that the source did not exist, even if the source did not say what the article claimed.
Trump is wrong that the senior White House official cited by the New York Times "doesn’t exist" and is "phony." In fact, the official in question, Pottinger, gave an authorized background briefing to dozens of reporters in person and via phone.

The New York Times article that Trump criticized may have gone too far by paraphrasing Pottinger as saying that a June 12 summit would be impossible, since Pottinger didn’t use that specific word. However, Pottinger did express a significant degree of skepticism about the prospect of a June 12 summit.

If that was Trump's gripe, it isn't what he said. The White House source did exist. We rate Trump’s statement Pants on Fire.
PolitiFact admitted the Times might have "gone too far" with its paraphrase of the source and despite that rated Trump's claim "Pants on Fire."

But in the real world of political communication, Trump's point was clear: No representative of the White House said the meeting was impossible.

A real fact check of Trump could have noted that Trump said the Times "quoted" its source. The Times paraphrased its source instead of using a quotation, so Trump was wrong about that.

But Trump was right that having the summit on its original date was not deemed impossible. And, taken literally, Trump accurately claimed that the White House source of the "impossible" claim does not exist.

PolitiFact does The New York Times a solid

PolitiFact told its readers it was giving them a "transcript" of the audio journalists produced to support the Times' reporting. But PolitiFact's supposed transcript was missing a key line that helps make the story easier to understand.

We're using the transcript posted by Mollie Z. Hemingway at the Federalist, with bold emphasis for the parts PolitiFact left out. We emphasize that the audio posted at PolitiFact matches the Federalist's version:
REPORTER: Can you clarify that…the President obviously announced in the letter and at the top of the bill signing that the summit is called off. But then, later, he said it’s possible the existing summit could take place, or a summit at a later date. Is he saying that it’s possible that June 12th could still happen?

WHITE HOUSE OFFICIAL: That’s…

REPORTER: Or has that ship sailed, right?

WHITE HOUSE OFFICIAL: I think that the main point, I suppose, is that the ball is in North Korea’s court right now. And there’s really not a lot of time. We’ve lost quite a bit of time that we would need in order to, I mean, there’s been an enormous amount of preparation that’s gone on over the past few months at the White House, at State, and with other agencies and so forth. But there’s a certain amount of actual dialogue that needs to take place at the working level with your counterparts to ensure that the agenda is clear in the minds of those two leaders when they sit down to actually meet and talk and negotiate, and hopefully make a deal. And June 12 is in 10 minutes, and it’s going to be, you know. But the President has said that he has — someday, that he looks forward to meeting with Kim.
With the missing part of the transcript restored, one can see that the unidentified reporter was asking a leading question ("that ship sailed, right?"). The "White House Official" did not affirm the leading question in so many words, but the foregone conclusion appeared in the so-called paraphrase of that official.

That's how the journalistic sausage is made.

And PolitiFact hid that from its readers for some mysterious reason.

That's what we call fact-checking, right?

That, PolitiFact, is why people distrust you and The New York Times. You don't tell the whole story and you don't tell it straight.

Tuesday, May 22, 2018

PolitiFact rewrites the Logan Act

We know that PolitiFact is non-partisan because it doesn't make mistakes like this.


A May 22, 2018 PolitiFact article (with no "Truth-O-Meter" rating) by John Kruzel looked at allegations of a secret meeting at a Paris restaurant between former secretary of state John Kerry and Iranian representatives.

PolitiFact judged that no solid evidence supported the allegations. More interestingly, PolitiFact framed its article as a defense of Kerry from charges he violated the Logan Act.

And that's where PolitiFact slipped up. Badly.

PolitiFact (bold emphasis added):
Trump and right-wing backers challenged Kerry’s actions as violating the 18th century Logan Act, which prevents U.S. citizens from privately meeting with a foreign government to sway its decisions on matters involving the United States.
PolitiFact implies that because the private restaurant meeting probably didn't take place therefore charges Kerry violated the Logan Act have no basis in fact.

The problem? The Logan Act doesn't merely forbid U.S. citizens from privately meeting with foreign governments. The Logan Act prevents private citizens, acting without authority, from conducting U.S. foreign policy on behalf of the United States, with or without a private meeting.

The Logan Act:
Any citizen of the United States, wherever he may be, who, without authority of the United States, directly or indirectly commences or carries on any correspondence or intercourse with any foreign government or any officer or agent thereof, with intent to influence the measures or conduct of any foreign government or of any officer or agent thereof, in relation to any disputes or controversies with the United States, or to defeat the measures of the United States, shall be fined under this title or imprisoned not more than three years, or both.

This section shall not abridge the right of a citizen to apply, himself or his agent, to any foreign government or the agents thereof for redress of any injury which he may have sustained from such government or any of its agents or subjects.
Making PolitiFact's fact check even more hilarious (and slanted), the paragraph preceding PolitiFact's erroneous description of the Logan Act describes Kerry meeting with various foreign officials, including an Iranian, regarding the Iran deal (bold emphasis added):
In the weeks before Trump’s May 8 decision to exit the deal and re-impose sanctions on Iran, Kerry had worked frantically behind the scenes to preserve the deal he helped craft in 2015, according to the Boston Globe. Ahead of the U.S. withdrawal, Kerry, who was secretary of state under President Barack Obama, met with Iran’s Foreign Minister Javad Zarif, courted European officials and made dozens of calls to members of Congress in hopes of salvaging the accord.
PolitiFact's apparent effort to exonerate Kerry with its framing of the story ends up convicting Kerry, with the Logan Act properly understood.

How does a non-partisan fact checker make such a huge mistake?

Don't ask us.


Update May 23, 2018: Updated link to Internet Archive version of the PolitiFact article. The first version of that URL was somehow defective.

Sunday, May 20, 2018

PolitiFact Wisconsin and the Worry-O-Meter

PolitiFact Wisconsin had no representation in our article on the worst 17 PolitiFact fact checks of 2017.

A May 18, 2018 fact check of Republican Leah Vukmir should help ensure PolitiFact Wisconsin makes the list for 2018.


Vukmir, a Republican looking for an opportunity to run against Sen. Tammy Baldwin (D-Wisc.) in the 2018 election cycle, has used a hyperbolic ad campaign to paint Baldwin as weak on terrorism. Vukmir said Baldwin worried more about the architect of the 9-11 terrorist attacks than confirming Gina Haspel to head the CIA.

Democrat opposition to the Haspel nomination stemmed chiefly from Haspel's involvement in the enhanced interrogation program, which included the technique of waterboarding. The CIA released a disciplinary review saying Haspel had no involvement in the decision to use enhanced interrogation, but that she simply carried out the orders she was issued.

PolitiFact Wisconsin adroitly skipped over all that and took the liberty of re-interpreting Vukmir's claim:
Does U.S. Sen. Tammy Baldwin have so much more concern for a 9/11 terrorist, compared to the president’s nominee to run the CIA, that she would vote against the nominee?
Vukmir's claim was simpler than PolitiFact Wisconsin's creative paraphrase (source: PolitiFact):
Tammy and her party are more interested, and they’re more worried about, the mastermind of 9/11 -- the individual that plotted and ultimately killed over 3,000 Americans on our soil. And she‘s more worried about those individuals than to support a very strong woman with a track record to be the head of the CIA.
Note that Vukmir did not say anything about what motivated Baldwin to withhold support for Haspel.

We suspect PolitiFact Wisconsin is in a small minority in failing to figure out Vukmir's message: Opposing Haspel's nomination based merely on her following orders within the CIA hampers the CIA's ability to do its job effectively. Imagine working at the CIA and thinking one must second-guess the orders one receives to have a realistic shot at one day leading the CIA.

PolitiFact Wisconsin's fact check spent not a word on that angle of the story, sticking instead to its own idea that Vukmir must show that Baldwin personally showed significant worry about Khalid Sheik Mohammed in order to earn a rating better than "Pants on Fire."

Farcical Fact-Checking

To fact check what Vukmir actually said, PolitiFact Wisconsin would have needed evidence not only showing Baldwin's level of worry for Mohammed but also her level of worry for Haspel's nomination. Otherwise there's no baseline for determining one is greater than the other.

After all, Vukmir clearly made a claim comparing the two.

And how does one assess levels of worry without asserting an opinion? One might go by what a person said, but that assumes an entirely forthright subject. We don't know the answer. And PolitiFact offered no evidence it has an answer.

PolitiFact's approach was preposterous from the outset. It showed no specific level of worry over Mohammed and no specific level of worry over the Haspel nomination, and yet it concluded that one was not lower than the other.

Vukmir's statement was best interpreted as hyperbole expressing the damage to CIA operations stemming from refusing a leadership role to a fully qualified woman for nothing more than following orders associated with the enhanced interrogation program--a program that the CIA described to leading congressional members of both parties without apparent objection at the time.

PolitiFact says it grants license for hyperbole. Exceptions doubtless stem, as we've said before, from Republicans trying to use hyperbole without a license.
• Is the statement rooted in a fact that is verifiable? We don’t check opinions, and we recognize that in the world of speechmaking and political rhetoric, there is license for hyperbole.
PolitiFact says it doesn't rate opinions. We suppose PolitiFact is entitled to its own opinion.


After Vukmir made her claim about Baldwin, Baldwin ended up voting in opposition to the Haspel nomination.

Wednesday, May 16, 2018

Not a Lot of Reader Confusion XI

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.
 
A comment today at PolitiFact's Facebook page reminds us yet again that PolitiFact's graphs and charts mislead its audience.


The comment referred to PolitiFact Editor Angie Drobnic Holan's Dec. 11, 2015 opinion article in The New York Times. When a reader scoffed at the use of the Times as a reliable reference the person making the comment defended it with this (bold emphasis added):
ALL of the statistics both come from Politifact itself. I use it because it has a very nice graphic that clearly shows that Republicans as a whole are far more full of pony poop than Democrats are.
As we've pointed out repeatedly, PolitiFact has admitted its "Truth-O-Meter" ratings are subjective and that its sampling method makes no attempt to simulate randomness (therefore one may assume the data suffer from selection bias).
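
For readers who want the statistical intuition, here is a minimal simulation (every number invented purely for illustration) of how non-random story selection can make one party look worse on an aggregate "report card" even when both parties make false claims at identical rates:

    import random

    random.seed(0)

    # Invented model: both parties make false claims at the same rate,
    # but editors select false claims from party B for checking more
    # often than false claims from party A.
    def simulate(select_false_prob, n=100_000, false_rate=0.3):
        checked_false = checked_total = 0
        for _ in range(n):
            is_false = random.random() < false_rate
            selected = random.random() < (select_false_prob if is_false else 0.2)
            if selected:
                checked_total += 1
                checked_false += is_false
        # Share of checked claims that were false:
        return checked_false / checked_total

    print(simulate(select_false_prob=0.2))  # party A: ~0.30, matches reality
    print(simulate(select_false_prob=0.4))  # party B: ~0.46, looks worse

Same underlying truthfulness, very different "report cards." That is why non-random sampling cannot rightly support party-to-party comparisons.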

Yet another satisfied PolitiFact customer, misled in a way that Holan says doesn't happen a lot.

If it doesn't happen very often then why is it so easy to find examples of it happening, we wonder?

If PolitiFact was concerned about misleading people in this way, then it would attach disclaimers to every one of its graphs to warn against jumping to conclusions the PolitiFact data cannot rightly support.

Thursday, May 10, 2018

PolitiFact Editor: "There Might Be Some Inconsistencies From Time to Time"

During a PolitiFact Reddit AMA, we asked about PolitiFact's defense of the Affordable Care Act's "cuts" to Medicare compared to its antagonistic reporting of GOP efforts to cut Medicare and Medicaid.

PolitiFact Editor Angie Drobnic Holan responded:
Angie here ... We talked about the reasons for that in the Medicaid check. The Medicare reduction was aimed at cost efficiency, while the Medicaid reduction was aimed at reducing the number of enrollees. Go back and look at the Medicaid checks and you should see that. With as many checks as we do -- more thant [sic] 13,000 -- there might be inconsistencies from time to time, but I disagree that there's any longstanding pattern.
Inconsistencies from time to time ... yeah, that's one way to put it.


(in what follows we lean heavily on our earlier analysis of this issue)

We added URLs to the quotations for emphasis and to lead to the source of each quotation. This survey helps establish beyond reasonable doubt that PolitiFact disagreed with the characterization that the ACA cut Medicare.



DEMOCRATS CUT MEDICARE GROWTH 


"The ad loses points for accuracy because the $500 billion aren't actual cuts but reductions to future spending for a program that will still grow significantly in the next 10 years."

"The new law would indeed slow the rate of growth of the broader Medicare program by roughly that amount over 10 years. But it's not a slam-dunk that this represents a cut."

"The ad conflates actual cuts with decreases in future spending, over the next decade for a program expected to expand, and it fails to mention any of the benefits to seniors under the new Medicare program."

"Boxer voted for the health care bill, but it didn't cut $500 billion out of the current Medicare program. Instead, it slowed growth over the next 10 years."

"That makes it sound like money out of the Medicare budget today, when Medicare spending will actually increase over the next 10 years. What Johnson labels a cut is an attempt to slow the projected increase in spending by $500 billion."

"The vote taken by Congress was not to cut Medicare but to reduce the rate of growth by $500 billion by targeting inefficiencies in the program."

"The $500 billion in 'cuts' are actually cuts to future increased spending, so we give Haridopolos some credit for not simply calling it cuts to Medicare as so many others have done."

"So while the health care law reduces the amount of future spending growth in Medicare, the law doesn't cut Medicare."

"But it incorrectly describes it as $500 billion in Medicare cuts, rather than as decreases in the rate of growth of future spending."

"The referenced $500 billion figure depends on a slowing in the pace of Medicare cost increases. That’s not the same as cutting back"

"But the law attempts to curtail the rapid growth of future Medicare spending, not cut current funding."


We could go on and on like this, for PolitiFact never seemed to tire of correcting Republican claims that the ACA cut Medicare. If somebody said the ACA cut Medicare, PolitiFact's formula was to say the ACA did not cut Medicare but instead cut the growth of spending.

PolitiFact has shown comparatively little interest in frequent claims from Democrats that Republicans are cutting/slashing/gutting Medicaid or Medicare.

REPUBLICANS CUT MEDICAID/MEDICARE

 

June 26, 2017 (checking Republican Kellyanne Conway):

"And on one level, she has a point, we at PolitiFact found. Future savings are not always 'cuts.'"

PolitiFact got around Conway's point by asserting that the BCRA cut Medicaid enrollment and that feature made the bill a cut. That's despite Conway's claim coming from a context that focused on dollars spent.


October 26, 2017 (PolitiFact New York checking Democrat Charles Schumer)

PolitiFact New York (bold emphasis added, hyperlink in the original):
The Senate Budget Committee has a point that Medicare spending will be going up, just not as fast as it would under the status quo. It also has a point that more modest cuts sooner could stave off bigger cuts later. (Experts have often told us that it’s presumptuous to assume significant economic growth impacts before they materialize.)

But we don’t find it unreasonable for Schumer to call cumulative reductions to Medicare and Medicaid spending in the hundreds of billions of dollars "cuts."
 When did it turn reasonable to call cumulative reductions to Medicare or Medicaid spending "cuts"?

September 13, 2017
February 20, 2018 (two "Trump-O-Meter" entries at the same URL)

The 2017 entry (bold emphasis added, hyperlinks in the original):
The 2018 White House budget proposal released in May left Medicare benefits largely untouched compared with Medicaid, which would see a more than $600 billion decrease over 10 years compared to current spending levels. Still, Medicare spending would decrease by more than $50 billion in the next decade compared with current levels.
Is a decrease the same as a cut? If a decrease is the same as a cut, why doesn't PolitiFact inform its readers that the decrease happens relative to projected future spending and not current levels? Spending under the budget proposal goes up compared with current levels, contrary to PolitiFact's claim.

The 2018 entry (bold emphasis added, hyperlink in the original):
Over 10 years, Trump's 2019 budget proposal says it would cut Medicare spending by a cumulative $236 billion, including by reductions in "waste" and "fraud" and by changing the way drugs are priced and paid for in the program.
Again, PolitiFact finds it completely unimportant to distinguish between cuts to current spending and slowing the projected growth of spending.



March 29, 2018 (checking Democrat Ron Wyden):

Wyden claimed the GOP wants to take away Social Security, Medicare and Medicaid. PolitiFact found that false. And PolitiFact pointed out that though the Trump budget cuts (PolitiFact's term) Medicare and Medicaid, it does not show a desire to take away those programs.

PolitiFact (bold emphasis added, hyperlink in the original):
For Medicaid, Republican-proposed cuts could lead to specific individuals losing their coverage.
Hey, PolitiFact, are those actual cuts or merely slowing the growth of the program's spending? Is it even important to distinguish between the two?


Summary

PolitiFact's framing of cuts to program growth changed systematically and dramatically, depending mostly (with a small handful of exceptions) on which party was receiving the criticism. The pattern over a period of years is obvious and irrefutable.


"Some inconsistencies from time to time." O-kay.

Tuesday, April 17, 2018

What??? No Pulitzer for PolitiFact in 2018?

We're not surprised PolitiFact failed to win a Pulitzer Prize in 2018.

The Pulitzer it won back in 2009 was a bit of a fluke to begin with: the submissions came in the public service category, and the Pulitzer board arbitrarily used them to justify a prize for "National Reporting."

Since the win in 2009, PolitiFact has won exactly the same number of Pulitzer Prizes that PolitiFact Bias has won: Zero.

So, no, we're not surprised. But we think Hitler might be, judging from his reaction when PolitiFact failed to win a Pulitzer back in 2014.

Friday, April 13, 2018

PolitiFact continues to botch the gender pay gap


We can depend on PolitiFact to perform lousy fact-checking on the gender wage gap.

PolitiFact veteran Louis Jacobson proved PolitiFact's consistent ineptitude with an April 13, 2018 fact check of Sen. Tina Smith (D-Minn.), Sen. Al Franken's replacement.

Sen. Smith claimed that women earn only 80 cents on the dollar for doing the same jobs as men. That's false, and PolitiFact rated it "Mostly False."


That 80-cents-on-the-dollar wage gap is calculated from full-time earnings, irrespective of the type of job and irrespective of hours worked beyond the full-time threshold. The figure represents the median, not the average.
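
In other words, the statistic reduces to a simple ratio of two medians:

(median full-time earnings, women) ÷ (median full-time earnings, men) ≈ 0.80

Nothing in that ratio compares men and women doing the same job.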

But isn't "Mostly False" a fair rating for Smith?

Good question! PolitiFact noted that the figure Smith used does not take the type of job into account, and that Smith made a common mistake: people often cite the raw wage gap figure without mentioning that it ignores the type of job.

PolitiFact's Jacobson doesn't precisely spell out why PolitiFact finds a germ of truth in Smith's statement. Presumably PolitiFact's reasoning matches that of its earlier ratings, where it noted that the wage gap statistic is accurate except for the part about it applying to equal work. So it's true except for the part that makes it false; therefore "Mostly False" instead of "False."

Looking at it objectively, however, it's just plain false that women earn 80 cents on the dollar for doing the same work. Researchers talk about an "unexplained gap" that remains after taking various factors into account, and the ceiling for gender discrimination looks to fall around 5 percent to 7 percent.

Charitably using the 7 percent figure as the ceiling for gender-based wage discrimination, Smith exaggerated the gap by 186 percent. It's likely the exaggeration was far greater than that.
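
For those keeping score, the arithmetic runs like this. Smith's 80-cents claim implies a 20 percent gap for equal work. Measured against the charitable 7 percent ceiling:

(20 − 7) ÷ 7 ≈ 1.86, an exaggeration of about 186 percent

Use the 5 percent figure instead and the exaggeration climbs to (20 − 5) ÷ 5 = 3, or 300 percent.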

For comparison, when Bernie Sanders said 40 percent of U.S. gun sales occur without background checks, PolitiFact gave him a "False" rating for exaggerating the correct figure by 90 percent.
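
That 90 percent figure implies PolitiFact accepted a correct value of about 21 percent, since (40 − 21) ÷ 21 ≈ 0.9. In other words, Sanders roughly doubled the true number and drew a "False," while Smith roughly tripled hers and drew a "Mostly False."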

The Ongoing Democratic Deception PolitiFact Overlooks

If a Democrat describes the 80 percent raw pay gap accurately, why not give it a "True" rating? Or at least "Mostly True"?

Democrats tend to trot out the raw gender pay gap statistic while proposing legislation that supposedly addresses gender discrimination. By repeatedly associating the raw wage gap with the issue of wage discrimination, Democrats send the implicit message that the raw wage gap measures gender discrimination. The tactic exploits anchoring bias to mislead the audience about the size of the pay gap stemming from gender discrimination.

Democrats habitually use "Equal Pay Day," based on the raw wage gap, to argue for equal pay for equal work. But the raw wage gap doesn't take the type of job into account.

Trust PolitiFact not to notice the deception.

Fact checkers ought to press Democrats to clarify their position. Do Democrats favor equal pay regardless of the job or hours worked? Or do Democrats believe the demand for equal pay applies only to cases of gender discrimination?

If the latter, Democrats' continued use of the raw wage gap to peg the date of their "Equal Pay Day" counts as a blatant deception.

If the former, voters deserve to know what Democrats stand for.

Afters


It amused us that Jacobson directly referenced an earlier PolitiFact Florida botched treatment of the gender pay gap.

PolitiFact Florida couldn't figure out that claiming the gap occurs "simply because she isn't a man" is equivalent to claiming the raw gap is for men and women doing the same work. Think about it. If the gap occurs "simply because she isn't a man" then the reason for the disparity cannot be because she is doing different work. Doing different work would be a factor in addition to her not being a man.

PolitiFact Florida hilariously rated that claim "Mostly True." We wrote about it on March 14, 2017.

Fact checkers. D'oh.

Thursday, April 12, 2018

Not a Lot of Reader Confusion X: "I admit that there are flaws in this ..."

So hold my beer, Nelly.

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

Quite a few folks apparently have no clue at all that PolitiFact's charts and graphs lack anything like a scientific basis. Others know that something isn't right about the charts and graphs but against all reason find some value in them anyway.

PolitiFact itself would fall in the latter camp, based on the way it uses its charts and graphs.

So would Luciano Gonzalez, writing at Patheos. Gonzalez listened to Speaker of the House Paul Ryan's speech announcing his impending retirement and started wondering about Ryan's record of honesty.

PolitiFact's charts and graphs don't tell people about the honesty of politicians because of many flaky layers of selection bias, but people can't seem to help themselves (bold emphasis added):
I decided after hearing his speech at his press conference to independently check if this House Speaker has made more honest claims than his predecessors. To did this [sic] I went to Politifact and read the records of Nancy Pelosi (House Speaker from January 4th 2007-January 3rd 2011), John Boehner (House Speaker from January 5th 2011-October 29th, 2015), and of course of the current House Speaker Paul Ryan (October 29th 2015 until January 2019). I admit that there are flaws in this, such as the fact that not every political claim a politician makes is examined (or even capable of being examined) by Politifact and of course the inherent problems in giving political claims “true”, “mostly”, “half-true”, “mostly false”, “false”, & “pants on fire” ratings but it’s better than not examining political claims and a candidate’s level of honesty or awareness of reality at all.
If we can't have science, Gonzalez appears to say, pseudoscience is better than nothing at all.

Gonzalez proceeds to crunch the meaningless numbers, which "support" the premise of his column that Ryan isn't really so honest.

That accounts for the great bulk of Gonzalez's column.

Let's be clear: PolitiFact encourages this type of irresponsible behavior by publishing its nonsense graphs without the disclaimers that spell out for people that the graphs cannot be reasonably used to gauge people's honesty.

PolitiFact encourages exactly the type of behavior that fact checkers ought to discourage.

Monday, April 2, 2018

PolitiFact Bias Fights Fact Checker Falsehoods

A December 2017 project report by former PolitiFact intern Allison Colburn of the University of Missouri-Columbia School of Journalism made a number of misleading statements about PolitiFact Bias. This is our second post addressing that report. Find the first one here.

Colburn:
A blog, PolitiFactBias.com, devotes itself to finding specific instances of PolitiFact being unfair to conservatives. The blog does not provide analysis or opinion about fact-checks that give Republicans positive ratings. Rather, it mostly focuses on instances of PolitiFact being too hard on conservatives.
We find all three sentences untrue.

Does PolitiFact Bias devote itself to finding specific instances of PolitiFact being unfair to conservatives?

The PolitiFact Bias banner declares the site's purpose as "Exposing bias, mistakes and flimflammery at the PolitiFact fact check website." Moreover, the claim is specious on its face. After the page break we posted the title of each PolitiFact Bias blog entry from 2017, the year when Colburn published her report. The titles alone provide strong evidence contradicting Colburn's claim.

PolitiFact Bias exists to show the strongest evidence of the left-leaning bias that a plurality of Americans detect in the mainstream media, specific to PolitiFact. As such, we look for any manifestations of bias, including patterns in the use of words, patterns in the application of subjective ratings, biased framing and inconsistent application of principles.


Does PolitiFact Bias not provide analysis or opinion about fact-checks that give Republicans positive ratings?

PolitiFact Bias focuses its posts on issues that accord with its purpose of exposing PolitiFact's bias, mistakes and flimflammery. Our focus is, by its nature, orthogonal to whether PolitiFact gives Republicans positive ratings. And, in fact, PolitiFact Bias does analyze cases where Republicans received high ratings. PolitiFact Bias even highlights some criticisms of PolitiFact from the left.

We simply do not find many strong criticisms of PolitiFact from the left. There are plenty of criticisms of PolitiFact from the right that we likewise find weak.

Does PolitiFact Bias "mostly focus" on PolitiFact's harsh treatment of conservatives?

PolitiFact Bias recognizes the subjectivity of PolitiFact's "Truth-O-Meter" ratings. PolitiFact's rating system offers no dependable means of objectively grading the truth value of political statements. For that reason, this site tends to avoid specifically faulting PolitiFact's assigned ratings. Instead, PolitiFact Bias places its emphasis on cases showing PolitiFact's inconsistency in applying its ratings. In two similar cases where a Democrat received a higher rating than a Republican, the inconsistency might mean PolitiFact went easy on the Democrat, went hard on the Republican, or both.

That said, the list of post titles again shows that PolitiFact Bias produces a great deal of content that is not focused on arguing PolitiFact should give conservatives more positive ratings. Holan's statement, quoted below, jibes with Colburn's false statement about the focus of PolitiFact Bias.

Why the misleading claims about PolitiFact Bias?

As far as we can tell, the only evidence Colburn used in her report's judgment of PolitiFact Bias came from her interview with PolitiFact Editor Angie Drobnic Holan:
I'm just kind of curious, there's the site, PolitiFactBias.com. What are what are your thoughts on that site?

That seems to be one guy who's been around for a long time, and his complaints just seem to be that we don't have good, that we don't give enough good ratings, positive ratings to conservatives. And then he just kind of looks for whatever evidence he can find to support that point.

Do you guys ever read his stuff? Does it ever worry you?

He's been making the same complaint for so long that it has tended to become background noise, to be honest. I find him just very singularly focused in his complaints, and he very seldom brings up anything that I learn from. But he's very, you know, I give him credit for sticking in there. I mean he used to give us, like when he first started he would give us grades for our reporting and our editing. So it would be like grades for this report: Reporter Angie Holan, editor Bill Adair. And like we could never do better than like a D-minus. So it's just like whatever. What I find is it's hard for me to take critics seriously when they never say we do anything right. Sometimes we can do things right, and you'll never see it on that site.
Note that in Holan's response to Colburn's first question about PolitiFact Bias, she suggests the site focuses on PolitiFact not giving enough positive ratings to conservatives.

Are Colburn and Holan lying?

PolitiFact Bias co-editor Jeff D. has used the PolitiFact Bias Twitter account to charge Colburn and Holan with lying.

The charge isn't unreasonable.

Colburn very likely read the PolitiFact Bias site to some extent before asking Holan about it. Even a cursory read ought to have informed a reasonable person that Holan's description of the site was slanted at best. Yet Holan's description apparently underpinned Colburn's description of PolitiFact Bias.

Likewise, Holan's familiarity with the PolitiFact Bias site ought to have informed her that her description of it was wrong and misleading.

When a person knowingly makes a false or misleading statement, it counts as a lie. Colburn and Holan both very likely had reason to know their statements were false or misleading.

We're pondering a second post pressing the issue still further in Holan's case.