Friday, December 30, 2016

PolitiFact's top eleven fake fact checks of 2016

We've promised a list of PolitiFact's top contributions to fake news--but we don't want to get into a useless semantic argument about what constitutes "fake news." For that reason, we're calling this list PolitiFact's top "fake fact checks," and that term refers to fact checks that misinform, whether intentionally or not.

11 Mike Pence denied evolution!

PolitiFact California rated "True" Governor Jerry Brown's claim that Republican vice presidential candidate Mike Pence denied evolution. The truth? Pence made a statement consistent with theistic evolution without affirming or denying evolution. We called out the error here. So PolitiFact California later changed its rating to "Half True." Because if Pence did not deny evolution that means that it is half true that he denied evolution. It's fact checker logic. You wouldn't understand.

10 Ron Johnson denies humans contribute to climate change!

When Democratic candidate Russ Feingold charged that his Republican opponent Ron Johnson does not accept any human role in climate change, PolitiFact Wisconsin was there. It rated Feingold's claim "Mostly True." The problem? PolitiFact Wisconsin's own evidence showed Johnson making a number of clear statements to the contrary, including one in which Johnson specifically said he does not deny humans affect the climate. PolitiFact Wisconsin relied on its ability to interpret Johnson's more ambiguous statements as a denial of what he said plainly. We wrote about the mistake, but PolitiFact Wisconsin has stayed with its "Mostly True" rating.

9 Social Security is not a Ponzi scheme!

PolitiFact has an established precedent of denying the similarities between Social Security's "pay-as-you-go" financing and Ponzi financing. PolitiFact reinforced its misleading narrative by giving voters advance warning that they might hear the Ponzi lie in 2016. The problem? Voters can find that supposed lie repeated commonly in professional literature, written by the kind of experts PolitiFact might have interviewed to learn the truth.

Will PolitiFact ever repent of misleading its readers on this topic?

8 LGBT the group most often victimized by hate crimes!

Attorney General Loretta Lynch declared in 2016 that lesbian, gay, bisexual and transgender (LGBT) people are the group most often victimized by hate crimes. PolitiFact gave Lynch's statement a "True" rating, meaning the statement is true without leaving out any important information. The problem? Lynch's statement is only true on a per capita basis. In other words, large minority groups experience more hate crime victimization overall than LGBT people do. But an individual in the LGBT group would be more likely to experience a hate crime than a member of those other groups.

How is that not significant enough to affect the rating?

7 The gender wage gap is real(ly big)! Or something!

Mainstream fact checkers are consistently awful on the gender wage gap. The game works like this: A Democratic candidate wants to leverage concern over gender discrimination, so the candidate cites a statistic that has hardly any relationship to gender discrimination. Democratic Party candidates can count on fact checkers to go along with the game so long as they do not specifically say the raw gender wage gap is caused by gender discrimination.

PolitiFact Missouri's 2016 gender wage gap story, exposed here and here, went that approach one better by badly misinterpreting its source material to exaggerate the size of the gap attributable to discrimination.

6 Torture doesn't work!

PolitiFact Florida weighed in on torture and waterboarding when a Florida Republican running for Marco Rubio's Senate seat said waterboarding works. PolitiFact Florida ruled the claim "False," after admitting that nobody has tested the proposition scientifically. In short, we (including PolitiFact) don't know for a fact whether waterboarding works. PolitiFact Florida's error was pointed out at Flopping Aces and here.

5 France and Germany did not think Iraq had WMD!

When former Assistant Secretary of Defense Paul Wolfowitz said the French and Germans believed Iraq had WMD, PolitiFact ruled it "Mostly False." The creepy "1984" nature of this fact check stems from PolitiFact turning lack of certitude into near-certitude of lack. And PolitiFact has to win some sort of award for avoiding French President Jacques Chirac's 2003 statement, during the approach to the war, that Iraq "probably" possessed WMD.

4 Colorado Republican tried to redefine rape!

PolitiFact Colorado makes our list with its liberal "Mostly True" rating given to abortion rights champion Emily's List. Emily's List charged a Colorado Republican with trying to "redefine rape" in an abortion-related statute. PolitiFact Colorado apparently neglected to look up the traditional definition of rape (and its forcible/statutory distinction) to see whether it had changed thanks to the proposed wording. It had not, leaving the impression that PolitiFact Colorado essentially took the word of Emily's List at face value. Fellow PolitiFact critic Dustin Siggins led the way in flagging the problems with this PolitiFact Colorado item.

3 In California, it's easier to buy a gun than a Happy Meal!

Matthew Hoy, another one of our favorite PolitiFact critics, flagged this hilarious item. This was not a fact check, but rather a Twitter incident where PolitiFact California retweeted somebody else. California Democrat Gavin Newsom received bogus PolitiCover for claiming there are more gun dealers in California than McDonald's. Newsom tweeted out the bogus vindication under the absurd headline "FACT: It's easier to get a gun than a Happy Meal in California." Partly because a gun costs less than a Happy Meal?

2 Donald Trump is causing an increase in bullying in our schools!

PolitiFact ostensibly checked Hillary Clinton's claim that teachers noticed a "Trump Effect" that amounted to an increase in bullying behavior in the nation's schools. Anecdotal reports ought to count for close to nothing in fact-checking circles, yet PolitiFact accepted a motley collection of anecdotes from the left-leaning Southern Poverty Law Center as reason enough to give Clinton a "Mostly True" rating. We chronicled the numerous problems with the so-called "Trump Effect" here and at Zebra Fact Check here and here.

1 Mike Pence advocated diverting federal funds from AIDS patients to gay conversion therapy!

PolitiFact California heads the list with its second mostly fact-free fact check of Mike Pence. Back around the year 2000, when Pence was first running for the House of Representatives, he suggested that AIDS care dollars under the Ryan White Care Act should not go to organizations that celebrated behavior likely to spread AIDS. Pence said funds under the Act should go to people seeking to "change their sexual behavior." About 15 years later, Pence's statement was construed to mean that he wanted AIDS care funding to go toward gay conversion therapy. There's no serious argument supporting that notion, and Timothy P. Carney pointed that out even before PolitiFact checked the claim. But PolitiFact California gave Gavin Newsom a "True" rating for the accusation.

PolitiFact California's recent publication of its most popular fact checks for 2016 helps explain why this item tops our list. PolitiFact claimed its "Half True" rating of Newsom was its most popular story. But for months the story ran with a "True" rating. Which version of the story got the most clicks, eh?

Monday, December 26, 2016

Bill Adair: Do as I say, not as I do(?)

One of the earliest criticisms Jeff and I leveled against PolitiFact was its publication of opinion-based material under the banner of objective news reporting. PolitiFact's website has never, so far as we have found, bothered to categorize its stories as "news" or "op-ed." Meanwhile, the Tampa Bay Times publishes PolitiFact's fact checks in print alongside other "news" stories. The presentation implies the fact checks count as objective reporting.

Yet PolitiFact's founding editor, Bill Adair, has made statements describing PolitiFact fact checks as something other than objective reporting. Adair has called fact-checking "reported conclusion" journalism, as though one may employ the methods of the op-ed writer from Jay Rosen's "view from nowhere" and end up with objective reporting. And we have tried to publicize Adair's admission that what he calls the "heart of PolitiFact," the "Truth-O-Meter," features subjective ratings.

As a result, we are gobsmacked that Adair effectively expressed solidarity with PolitiFact Bias on the issue of properly labeling journalism (interview question by Hassan M. Kamal and response by Adair; bold emphasis in the original):
The online media is still at a nascent stage compared to its print counterpart. There's still much to learn about user behaviour and impact of news on the Web. What are the mistakes do you think that the early adopters of news websites made that can be avoided?

Here's a big one: identifying articles that are news and distinguishing them from articles that are opinion. I think of journalism as a continuum: on one end there's pure news that is objective and tells both sides. Just the facts. On the other end, there's pure opinion — we know it as editorials and columns in newspaper. And then there's some journalism in the middle. It might be based on reporting, but it's reflecting just one point of view. And one mistake that news organisations have made is not telling people the difference between them. When we publish an opinion article, we just put the phrase 'op-ed' on top of an article saying it's an op-ed. But many many people don't know what that means. And it's based on the old newspaper concept that the columns that run opposite the editorial are op-ed columns. The lesson here is that we should better label the nature of journalism. Label whether it's news or opinion or something in between like an analysis. And that's something we can do better when we set up new websites.
Addressing the elephant in the room, if labeling journalism accurately is so important and analysis falls between reporting and op-ed on the news continuum, why doesn't PolitiFact label its fact checks as analysis instead of passing them off as objective news?


The fact check website I created to improve on earlier fact-checking methods, by the way, separates the reporting from the analysis in each fact check, labeling both.

Friday, December 23, 2016

Evidence of PolitiFact's bias? The Paradox Project I

Matt Shapiro, a data analyst, started publishing a series of PolitiFact evaluations on Dec. 16, 2016. It appears at the Paradox Project website as well as at the Federalist.

Given our deep and abiding interest in the evidence showing PolitiFact's liberal bias, we cannot resist reviewing Shapiro's approach to the subject.

Shapiro's first installment focuses on truth averages and disparities in the lengths of fact checks.

Truth Averages

Shapiro looks at how various politicians compare using averaged "Truth-O-Meter" ratings:
We decided to start by ranking truth values to see how PolitiFact rates different individuals and aggregate groups on a truth scale. PolitiFact has 6 ratings: True, Mostly True, Half-True, Mostly False, False, and Pants on Fire. Giving each of these a value from 0 to 5, we can find an “average ruling” for each person and for groups of people.
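The averaging scheme Shapiro describes can be sketched in a few lines of Python. Only the six ratings and the 0-to-5 mapping come from the quoted passage; the function name and the sample data below are hypothetical, for illustration only:

```python
# Map PolitiFact's six Truth-O-Meter ratings onto a 0-5 scale
# (0 = True, 5 = Pants on Fire), per the scheme Shapiro describes.
RATING_SCALE = {
    "True": 0,
    "Mostly True": 1,
    "Half-True": 2,
    "Mostly False": 3,
    "False": 4,
    "Pants on Fire": 5,
}

def average_ruling(ratings):
    """Average score for a person or group: lower means rated more truthful."""
    scores = [RATING_SCALE[r] for r in ratings]
    return sum(scores) / len(scores)

# Hypothetical example: a speaker with one rating of each kind
# lands at 2.5, the midpoint of the scale.
example = ["True", "Mostly True", "Half-True",
           "Mostly False", "False", "Pants on Fire"]
print(average_ruling(example))  # → 2.5
```

On this scale a lower average means a record PolitiFact rated as more truthful, so disparities between parties invite, though by themselves do not prove, an inference about the rater.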
Unlike many (not all) past attempts to produce "Truth-O-Meter" averages for politicians, Shapiro uses his averages to gain insight into PolitiFact:
Using averages alone, we already start to see some interesting patterns in the data. PolitiFact is much more likely to rate Republicans as their worst of the worst “Pants on Fire” rating, usually only reserved for when they feel a candidate is not only wrong, but lying in an aggressive and malicious way.
Using 2012 Republican presidential candidate Mitt Romney as his example, Shapiro suggests bias serves as the most reasonable explanation of the wide disparities.

We agree, noting that Shapiro's insight stems from the same type of inference we used in our ongoing study of PolitiFact's application of its "Pants on Fire" rating. But Shapiro disappointed us by defining the "Pants on Fire" rating differently than PolitiFact defines it. PolitiFact does not define a "Pants on Fire" statement as an aggressive or malicious lie. It is defined as "The statement is not accurate and makes a ridiculous claim."

As our study argued, the focus on the "Pants on Fire" rating serves as a useful measure of PolitiFact's bias given that PolitiFact offers nothing at all in its definitions to allow an objective distinction between "False" and "Pants on Fire." On the contrary, PolitiFact's principals on occasion confirm the arbitrary distinction between the two.

Shapiro's first evidence is pretty good, at least as an inference toward the best explanation. But it's been done before and with greater rigor.

Word Count

Shapiro says disparities in the word counts for PolitiFact fact checks offer an indication of PolitiFact's bias:
The most interesting metric we found when examining PolitiFact articles was word count. We found that word count was indicative of how much explaining a given article has to do in order to justify its rating.
While Shapiro offered plenty of evidence showing PolitiFact devotes more words to ratings of Republicans than to its ratings of Democrats, he gave little explanation supporting the inference that the disparities show an ideological bias.

While it makes intuitive sense that selection bias could lead toward spending more words on fact checks of Republicans, as when the fact checker gives greater scrutiny to a Republican's compound statement than to a Democrat's (recent example), we think Shapiro ought to craft a stronger case if he intends to change any minds with his research.


Shapiro's analysis based on rating averages suffers the same types of problems that we think we addressed with our "Pants on Fire" study: Poor averages for Republicans make a weak argument unless the analysis defuses the excuse that Republicans simply lie more.

As for Shapiro's examination of word counts, we certainly agree that the differences are so significant that they mean something. But Shapiro needs a stronger argument to convince skeptics that greater word length for fact checks of Republicans shows liberal bias.

Update Dec. 23, 2016: Made a few tweaks to the formatting and punctuation, as well as adding links to Shapiro's article at the Paradox Project and the Federalist (-bww).

Jeff adds:

I fail to see how Shapiro contributes anything worthwhile to the conversation, and he certainly doesn't offer anything new. Every criticism of PolitiFact in his piece has been written about in depth before and, in my view, written much better.

Shapiro's description of PolitiFact's "Pants on Fire" rating is flatly wrong. The definition is published at PolitiFact's website, had he any interest in looking it up. Shapiro asserts that a "Pants on Fire" rating "requires the statement to be both false and malicious" and is "usually only reserved for when they feel a candidate is not only wrong, but aggressively and maliciously lying." This is pure fiction. Whether this indicates sloppiness or laziness I'm not sure, but in any event mischaracterizing PolitiFact's ratings only gives fuel to PolitiFact's defenders. Shapiro's error at the very least shows an unfamiliarity with his subject.

Shapiro continues the terribly flawed tradition of some conservative outlets, including the Federalist, where his article was published, by attempting to find clear evidence of bias by simply adding up PolitiFact's ratings. Someone with Shapiro's skills should know this is a dubious method.

In fact, he does know it:
This method assumes this or that article might have a problem, but you have to look at the “big picture” of dozens of fact-checks, which inevitably means glossing over the fact that biased details do not add up to an unbiased whole.
That's all well and good, but then Shapiro goes on to ask his readers to accept that exact same method for his own study. He even came up with his own chart that simply mirrors the same dishonest charts PolitiFact pushes.

At first blush, his "word count" theory seems novel, but does it prove anything? If it is evidence of something, Shapiro failed to convince me. And I'm eager to believe it.

Unfortunately, it seems Shapiro assumes what his word count data is supposed to prove. Higher word counts do not necessarily show political bias. It's entirely plausible those extra words were the result of PolitiFact giving someone the benefit of the doubt, or granting extra space for a subject to explain themselves. Shapiro makes his assertion without offering evidence. It's true that he offered a few examples, but unless he scientifically surveyed the thousands of articles and confirmed the additional words are directly tied to justifying the ratings, he could reasonably be accused of cherry-picking.

“When you’re explaining, you’re losing,” may well be a rock solid tenet of lawyers and politicians, but as data-based analysis it is unpersuasive.

We founded this website to promote and share the best criticisms of PolitiFact. While we doubt it matters to him or the Federalist, Shapiro's work fails to meet that standard. 

Shapiro offers nothing new and nothing better. This is a shame because, judging from his Twitter feed and previous writings, Shapiro is a very bright, thoughtful and clever person. We hope his next installments in this series do a better job of exposing PolitiFact's bias.

We've been criticizing and compiling quality critiques of PolitiFact for six years now. Documenting PolitiFact's bias is the main reason for this site's existence. We're exceedingly predisposed to accept and promote good evidence of PolitiFact's flaws.

If your data project can't convince two guys who started a website called PolitiFact Bias and who devote countless hours of their free time preaching to people that PolitiFact is biased, then perhaps your data project isn't very convincing.

Tuesday, December 20, 2016

PolitiFact Wisconsin don't need no stinkin' evidence

Is a fact check a fact check if it doesn't bother checking facts?

PolitiFact Wisconsin brings this question to the fore with its Dec. 16, 2016 fact check of former Democratic senator Russ Feingold. Feingold said Social Security was pretty much invented at the University of Wisconsin-Madison, and that's where President Franklin Delano Roosevelt got the idea.

PolitiFact agreed, giving Feingold's claim a "True" rating.

But a funny thing happened when we looked for PolitiFact Wisconsin's evidence in support of Feingold's claims. The fact check omits those facts, if they exist.

Let's review what PolitiFact offered as evidence:
When we asked Feingold spokesman Josh Orton for backup, he pointed to several Wisconsinites and people tied to the University of Wisconsin-Madison — where Feingold graduated in 1975 — who were influential in developing Social Security.
PolitiFact went on to list four persons with UW-Madison connections (among many) who were influential in bringing Social Security to pass in the United States.

Then PolitiFact Wisconsin summarized its evidence, with help from an unbiased expert from UW-Madison:
Current UW-Madison professor Pamela Herd agreed that Wisconsinites tied to the university were key figures in the development of Social Security.

"There were a lot of people involved in the creation of this program, but some of the most important players were from Wisconsin," said Herd, an expert on Social Security.
Okay, got it? Now on to the conclusion:
Feingold said that the idea for Social Security "was basically invented up on Bascom Hill, my alma mater here; that's where Franklin Roosevelt got the idea."

Historical accounts show, and an expert agrees, that officials who helped propose and initially operate Social Security had deep ties to UW-Madison.

We rate Feingold’s statement True.
And there you have it. Fact-checking.

If officials who helped propose and initially operate Social Security had deep ties to UW-Madison, then Social Security was basically invented at UW-Madison. And that's where President Roosevelt got the idea. "True."

Where was PolitiFact when Al Gore claimed to have taken the initiative in creating the Internet?

Seriously: PolitiFact Wisconsin's fact check produces no solid or unequivocal evidence supporting one of Feingold's claims and completely ignores fact-checking the other (why?). Yet Feingold's claims receive a "True" rating?

What happened to comparing the content of the federal Social Security Act to its precursor from UW-Madison? What happened to looking at where Roosevelt got his ideas about providing social insurance?

That's not fact-checking. That's rubber-stamping.


The silver lining from PolitiFact Wisconsin's fact check comes from its links to the Social Security Administration website, which offer facts instead of supposition about the history of Social Security.

PolitiFact Wisconsin did a stellar job of keeping inconvenient facts from the Social Security website out of its fact check.

Sunday, December 18, 2016

Fact-checking the wrong way, featuring PolitiFact

Let PolitiFact help show you the right way to fact check by avoiding its mistakes

Fake and skewed news do present a problem for society. Having the best possible information allows us to potentially make the best possible decisions. Bad information hampers good decision-making. And the state of public discourse, including the state of the mainstream media, makes it valuable for the average person to develop fact-checking skills.

We found a December 11, 2016 fact check from PolitiFact that will help us learn better how to interpret claims and make use of expert sources.

The interpretation problem

PolitiFact believed it was fact-checking Republican Reince Priebus' claim that there was no report available saying Russia tried to interfere with the 2016 presidential election.

Was Priebus saying there was no "specific report" saying Russia tried to "muddy" the election? Here's how PolitiFact viewed it:
"Let's clear this up. Do you believe -- does the president-elect believe that Russia was trying to muddy up and get involved in the election in 2016?" Meet the Press host Chuck Todd asked on Dec. 11, 2016.

"No. 1, you don't know it. I don't know it," Priebus said. "There's been no conclusive or specific report to say otherwise."

That’s wrong. There is a specific report.

It was made public on Oct. 7, 2016, in the form of a joint statement from the Department of Homeland Security and the Office of the Director of National Intelligence. At the time, the website WikiLeaks was releasing a steady flow of emails stolen from the Democratic National Committee and top Hillary Clinton adviser John Podesta.

"The U.S. Intelligence Community (USIC) is confident that the Russian Government directed the recent compromises of e-mails from U.S. persons and institutions, including from U.S. political organizations," the statement said. "These thefts and disclosures are intended to interfere with the U.S. election process."
Based on the context of Priebus' appearance on "Meet the Press," we think PolitiFact botched its interpretation. NBC's Chuck Todd went back and forth with Priebus for a number of minutes on the nature of the evidence supporting the charge of Russian interference with the 2016 U.S. presidential election. The main topic was recent news reports suggesting Russia interfered with the U.S. election to help Republican candidate Donald Trump. Todd's question after "Let's clear this up" had little chance of clearing up that point. Priebus would not act unreasonably by interpreting Todd's question to refer to interference intended to help the Republican Party.

But the bigger interpretation problem centers on the word "specific." Given the discussion between Todd and Priebus around the epistemological basis for the Russian hacking angle, including "You don't know it and I don't know it" in the immediate context, both words, "conclusive" and "specific," address the nature of the evidence.

"Conclusive" means incontrovertible, not merely featuring a conclusion. "Specific" means including a specific evidence or evidences, and therefore would refer to a report showing evidences, not merely a particular (second definition as opposed to the first) report.

In short, PolitiFact made a poor effort at interpreting Priebus in the most sensible way. Giving conservatives short shrift in the interpretation department occurs routinely at PolitiFact.

Was the report PolitiFact cited incontrovertible? PolitiFact offered no argument to that effect.

Did the report give a clear and detailed description of Russia's attempt to influence the 2016 election? Again, PolitiFact offered no argument to that effect.

PolitiFact's "fact-checking" in this case amounted to playing games with the definitions of words.

The problem of the non-expert expert

PolitiFact routinely cites experts either without investigating or reporting (or both) their partisan leanings. Our current case gives us an example of that, as well as a case of giving the expert a platform to offer a non-expert opinion:
Yoshiko Herrera, a University of Wisconsin political scientist who focuses on Russia, called that letter, "a pretty strong statement." Herrera said Priebus’ comment represents a "disturbing" denial of facts.

"There has been a specific report, and politicians who wish to comment on the issue should read and comment on that report rather than suggest there is no such report or that no work has been done on the topic," Herrera said.
What relevant expertise does a political scientist focused on Russia bring to the evaluation of statements on issues specific to U.S. security? Even taking for granted that the letter Herrera talks about was objectively "a pretty strong statement," Herrera has no obvious qualification that lends weight to her opinion. An expert on international intelligence issues might lend weight to that opinion by expressing it.

The same goes for PolitiFact's second paragraph quoting Herrera. The opinion in this case gains some weight from Herrera's status as a political scientist (the focus on Russia counts as superfluous), but her implicit opinion that Trump made the error she cautions about does not stem from Herrera's field of expertise.

Note to would-be fact checkers: Approach your interviews with experts seeking comments that rely on their field of expertise. Anything else is fluff, and you may end up embarrassing the experts you cite by relying on their expertise for information that does not reflect their expertise.

Was Herrera offering her neutral expert opinion on Trump's comment? We don't see how her comments rely on her expertise. And reason exists to doubt her neutrality.

Source: FEC.

Yoshiko Herrera's FEC record of political giving shows her giving exclusively to Democrats, including a modest string of donations to the campaign of Hillary Rodham Clinton.

Did PolitiFact give its readers that information? No.

The wrap

Interpret comments fairly, and make sure you only quote expert sources when their opinion comes from their area of expertise. Don't ask an expert on political science and Russia for an opinion that requires a different area of expertise.

For the sake of transparency, I advocate making interview material available to readers. Did PolitiFact lead Herrera toward the conclusion a "specific report" exists? Or did Herrera offer that conclusion without any leading? An interview transcript allows readers to answer that question.

PolitiFact has announced that it plans to start making interview materials available as a standard practice. Someday? Somewhere over the rainbow?


Since the time we started this evaluation of PolitiFact's fact check, U.S. intelligence agencies have weighed in with comments hinting that they possess specific evidence showing a Russian government connection to election-related hacking and information leaks. But even these new reports do not contradict Priebus until the reports include the clear and detailed evidence of Russian meddling--from named and reliable sources.

Thursday, December 15, 2016

Angry PolitiFact co-founder speaks truth to power?

PolitiFact's founding editor Bill Adair tends to get all the credit (or blame!) these days for creating PolitiFact, but news technologist Matt Waite, once of the St. Petersburg (later Tampa Bay) Times, has a claim to a role as PolitiFact's principal developer.

Thus we were surprised that Waite stuck his neck out by blaming journalists for the problems facing journalism.

Waite was one of the journalists Nieman Journalism Lab chose to make bold predictions for the coming year in journalism. Waite's contribution was a scathingly worded prediction that journalists would not resolve any of the big problems because ... journalists would not admit they are the problem. Yes, PolitiFact Bias readers, click the link and read it. It's short, pithy and angry.

PolitiFact is mainstream media journalism. The problems Waite describes in the mainstream media apply broadly to PolitiFact.


We appreciate Mollie Z. Hemingway's prescription for journalism published at The Federalist.

Tuesday, December 13, 2016

We called it: PolitiFact's 2016 "Lie of the Year" is Fake News

Back on Dec. 5, 2016, we at PolitiFact Bias reviewed PolitiFact's candidates for "Lie of the Year" and predicted that PolitiFact would choose "Fake News" as its winner:
This surprise nominee has everything going for it. Fake news is fake by definition, so who can criticize the choice? It's total journalistic hotness, as noted. And the choice represents a call to action, opposing fake news, in symphony with a call that is already reverberating in fact-checking circles.

Is it a lame choice? Yes, it's as lame as all get out. I'd doubt journalists even have a clue about the impact of fake news, not to even mention the role fact checkers play in supporting false news memes that liberals favor.
The article PolitiFact published to announce its choice, written by editor Angie Drobnic Holan, jibes right on down the line with the reasons Jeff and I thought it was an idiotic-yet-predictable choice.

"Fake News" is so broad, we thought, that it would end up as PolitiFact's self-justification for its own existence.

We also pointed out that the choice shields PolitiFact from criticism. On the one hand, Fake News gives PolitiFact an excuse not to pick a good candidate, such as Hillary Clinton's aggressive and false defense of her email activity--one with a fairly inevitable impact on Clinton's election prospects--that would risk offending its primarily liberal audience. On the other hand, Fake News is so vague that it offers no target for criticism.

Holan even took the predictable path of tying Fake News to the election results without any real data to back her argument:
Fake news found a willing enabler in Trump, who at times uttered outrageous falsehoods and legitimized made-up reports. Clinton emboldened her detractors and turned off undecideds with a lawyerly parsing of facts that left many feeling that she was lying. Her enemies ran wild.
Got the subtext? Fake News was a big key to Trump's victory.

Another subtext throughout Holan's article presents fact-checking (the mainstream media brand, with PolitiFact as its star player) as the antidote to the society-destroying effects of Fake News.


Even before PolitiFact announced its Lie of the Year for 2016, we planned to dip our toe into Fake News outrage by compiling a list of PolitiFact's top fake news stories for 2016. That is, we will show people how PolitiFact's articles sometimes reinforce and spread false beliefs.

We expect to publish the list next week.

Does changing from "True" to "Half True" count as a correction? Clarification? Update? Anything?

The use and abuse of the PolitiMulligan

We've pointed out before PolitiFact's propensity to correct or update its stories on the sly, contrary to statements of journalistic ethics (including its own statement of principles) regarding transparency.

Thanks to PolitiFact, we have another example in the genre, where PolitiFact California, instead of announcing a correction or update, simply executed a do-over on one of its stories.

On July 28, 2016, PolitiFact ruled it "True" that vice-presidential candidate Mike Pence had advocated diverting federal money from AIDS care services to "conversion therapy." But Timothy P. Carney, writing for the right-leaning Washington Examiner, had published an item the day before explaining why the evidence used by Pence's critics did not wash.

I wrote about PolitiFact California's faulty fact check on July 29, 2016 at Zebra Fact Check.

On Dec. 2, 2016, PolitiFact partly reversed itself, publishing a new version of the fact check with a "Half True" rating replacing the original "True" rating.

To be sure, the new item features a lengthy editor's note explaining the reason for the new version of PolitiFact California's fact check. But readers should note that PolitiFact completely avoids any admission of error in its explanation:
EDITOR’S NOTE: On July 28, 2016, PolitiFact California rated as True a statement by Democratic Lt. Gov. Gavin Newsom that Republican Indiana Governor and now Vice President-Elect Mike Pence "advocated diverting taxpayer dollars to so-called conversion therapy." We based that ruling on a statement Pence made in 2000 on his congressional campaign website, in which Pence says "Resources should be directed toward those institutions which provide assistance to those seeking to change their sexual behavior." Subsequently, our readers and other fact-checking websites examined the claim and made some points that led us to reconsider the fact check. Readers pointed out that Pence never explicitly advocated for conversion therapy in his statement and that he may have been pushing for safer sex practices. Pence’s words are open to other interpretations: Gay and lesbian leaders, for example, say his statement continues to give the impression that he supported the controversial practice of conversion therapy when his words are viewed in context with his long opposition to LGBT rights. Taking all of this into account, we are changing our rating to Half True and providing this new analysis.

PolitiFact California’s practice is to consider new evidence and perspectives related to our fact checks, and to revise our ratings when warranted.
While we credit PolitiFact California for keeping an archived version of its first attempt available for readers, we find PolitiFact's approach a bit puzzling.

First of all, no "new evidence and perspectives" were involved in this case. Carney's July 27 article ought to have pre-empted the flaw in PolitiFact California's original July 28 fact check, and Zebra Fact Check highlighted the problem again two days later: A fact checker needs to account for the difference in wording between "changing sexual behavior" and "changing sexual preference." Also noted was PolitiFact California's failure to explain the immediate context of the smoking gun quotation it used to convict Pence: the Ryan White Care Act.

PolitiFact California made two major mistakes in its fact-checking. First, it failed to offer due attention to the wording of Pence's statement. Second, it failed to consider the context.

The two major errors resulted in no admission of error. And PolitiFact California's do-over fails to even show up on PolitiFact's list of stories that were updated or corrected.

As for the new "Half True" rating? If "changing their sexual behavior" in the context of the Ryan White Care Act is open to interpretation as "changing their sexual orientation," then we claim as our privilege the interpretation of "Half True" as "False."

In other words, PolitiFact California, creative interpretation is no substitute for facts.


So apparently it is an update. Just not the type of update that PolitiFact includes on its "Corrections and Updates" page.

Friday, December 9, 2016

PolitiFact agrees to disagree with itself on budget cuts

Bless your heart, PolitiFact.

PolitiFact has lately started to wrap up its "Obameter" feature, rating whether President Obama delivered on the set of campaign promises PolitiFact tracked.

One recent item caught our eye, as Obama earned a "Compromise" rating for partially delivering on a re-election campaign promise to cut $1 trillion to $1.5 trillion in spending.

Veteran PolitiFact fact checker Louis Jacobson wrote this one.

Jacobson and PolitiFact received some great advice from an expert, then proceeded to veer into the weeds:
"Like anything else in budgeting, it's all about the baseline," said Steve Ellis, vice president of Taxpayers for Common Sense. "It's a cut relative to what?"

The most obvious way to judge how much spending went down on Obama's watch is to start with how much spending was expected in 2012, when he made the promise, and then compare that to the amount of spending that actually materialized.
Huh? What happened to the incredibly obvious method of measuring how much was spent in 2012, when he made his promise, and then looking at how much was spent at the end of his term in office?

Jacobson's Dec. 5, 2016 Obameter story doesn't even acknowledge the method we're pointing out, yet Jacobson appeared well aware of it when he wrote a budget cut fact check way back in 2014:
First, while the ad implies that the law is slicing Medicare benefits, these are not cuts to current services. Rather, as Medicare spending continues to rise over the next 10 years, it will do so at a slower pace would [sic] have occurred without the law. So claims that Obama would "cut" Medicare need more explanation to be fully accurate.
Jacobson faulted a critic of Obama's health care law for using "cuts" to describe slowing the growth of future spending. Yet Jacobson finds that deceptive method "the most obvious way" to determine whether Obama delivered his promised spending cut.
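The baseline point is easy to see with toy numbers. This sketch uses hypothetical figures (not actual budget data) purely to show how the same spending history can register as a "cut" against a projection while spending actually rose year over year:

```python
# Hypothetical figures in trillions, chosen only to illustrate the two baselines.
projected_2016 = 4.2   # what 2012-era forecasts projected spending would be
actual_2016 = 3.9      # what was actually spent at the end of the term
actual_2012 = 3.5      # actual spending in the year the promise was made

# PolitiFact's chosen baseline: savings measured against the old projection.
cut_vs_projection = projected_2016 - actual_2016   # positive: counts as a "cut"

# The method this post calls obvious: change versus actual prior-year spending.
cut_vs_prior_year = actual_2012 - actual_2016      # negative: spending actually rose

print(f"Versus projection: {cut_vs_projection:+.1f}T")
print(f"Versus 2012 actual: {cut_vs_prior_year:+.1f}T")
```

With these invented numbers, the projection baseline yields a $0.3 trillion "cut" even though spending grew $0.4 trillion in absolute terms, which is exactly the ambiguity Ellis flagged.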

But at least there's a happy ending to the discrepancy: The National Republican Senatorial Committee had "the most obvious way" counted against it as deceptive, while President Obama receives the benefit when PolitiFact uses that same deceptive method to rate his promise on cutting spending.

There's nothing wrong with favoring the good guys over the bad guys, right?

Inside Baseball Stuff

In terms of fact-checking, we noted a particularly interesting feature of Louis Jacobson's rating of President Obama's promise of cutting spending by $1 trillion to $1.5 trillion: Obama was promising that spending cut over and above one he was already claiming to have achieved. Though PolitiFact's presentation makes this part of Obama's statement obvious, PolitiFact does not bother to confirm the claim.

Consideration of context serves as a fact checker's primary tool for interpreting claims. If Obama saved $1 trillion before promising equal or greater savings in his second term, then, unless he specified otherwise, we should expect him to fulfill the promise by the same means he used to achieve the first savings.

We failed to find any mainstream fact check addressing Obama's claim of saving $1 trillion in 2011:
PRESIDENT OBAMA: Well, I-- I have to tell you, David, if-- if you look at my track record over the last two years, I cut spending by over a trillion dollars in 2011.
If Obama did not save $1 trillion in the first place, he cannot fulfill a promise to cut "another" $1 trillion. At best he can fulfill part of the promise: to cut $1 trillion.

Louis Jacobson and PolitiFact did not notice?

Checking whether Obama saved that $1 trillion in 2011 should have served as a prerequisite for rating Obama's second-term promise to save another $1 trillion or more. Fact checkers could then assume Obama would save the second $1 trillion under the same terms as the first.

Monday, December 5, 2016

Handicapping PolitiFact's "Lie of the Year" for 2016

A full plate of stuff to write about has left me a little behind in getting to PolitiFact's list of finalists for its "Lie of the Year" award--the award that makes it even more obvious that PolitiFact does opinion journalism, since judging the importance of a "lie" obviously requires subjective judgments.

I clipped an image of the most important part of the electronic ballot:

One thing jumped out right away. Typically the list of finalists includes about 10 specific fact checks from a number of sources. This year's menu includes only four specific fact checks, two each from Hillary Clinton and Donald Trump. PolitiFact rounds out the menu with two category choices, the whole 2016 election and the "fake news" phenomenon that, without much hint of irony, has galvanized the mainstream press to make even greater efforts to recapture its (legendary?) role as the gatekeeper of what people ought to know and accept.

Given PolitiFact's recent tendency to select a "Lie of the Year" made up of multiple finalists, these changes make a great deal of sense. We've already pointed out one of the advantages PolitiFact gains from this approach. Having a multi-headed hydra as the winner allows PolitiFact to dodge criticisms of its choice. Oh, that hydra head got lopped off? No worries. These others continue to writhe and gnash their teeth.

Without further ado, I'll rate the chances of the six listed finalists. Doubtless my co-editor Jeff D. will weigh in at some point with his own comments and predictions.

Clinton "never received nor sent any email that was marked as classified" on her private email server while acting as Secretary of State

Of the specific claims listed, this one probably had the biggest impact on the election. Clinton made this claim a key part of her ongoing defense of her use of the private email server. When FBI Director James Comey contradicted this part of Clinton's story, it cinched one of Clinton's key negatives heading into the 2016 election. This one would serve as a pretty solid choice for "Lie of the Year." The main drawback of this selection stems from liberal denial of Clinton's weakness as a presidential candidate. This choice might generate some lasting resentment from a significant segment of PolitiFact's liberal fanbase, some of whom will insist Clinton was telling the truth.

Clinton says she received Comey's seal of approval regarding her truthfulness about the email server

This item gave us a notable case where a major political figure made a pretty much indefensible and clear statement that was quickly publicized as such. Was this one politically significant? I think journalists were a bit shocked that Clinton made this unforced error. But I doubt voters regarded this case as anything other than a footnote to Clinton's earlier dissembling about her email server.

Trump claims "large-scale voter fraud"

Talk about awkward!

Trump was pilloried by the mainstream press along with pundits and politicians aplenty for his statements calling the presidential election results into doubt. But the political importance of this one gets complicated by liberal challenges to the election results in states where Trump's margin of victory was not particularly narrow (Michigan, Pennsylvania, and Wisconsin). Why challenge the results if they were not skewed by some form of large-scale fraud? This selection also suffers from the nature of the evidence. Trump received the rating not because it is known that no large-scale voter fraud has taken place in 2016, but because of a lack of evidence supporting the claim.

Donald Trump said he was against the war in Iraq

This one counts as the weakest of the specific fact checks on the list. PolitiFact and its fact-checking brethren built a very weak case that Trump had supported the Iraq War. Making this one by itself the "Lie of the Year" will result in some very good challenges in the mainstream and conservative press.

"The entire 2016 election, when falsehoods overran the facts"

Now things get interesting! Could PolitiFact opt for a "Lie of the Year" winner even more generalized than "campaign statements by Donald Trump," which won in 2015? And does PolitiFact have the ability to objectively quantify this election's overrunning of the facts compared to elections in the past? And could PolitiFact admit that falsehoods overran the facts despite proclamations that fact-checking enjoyed a banner year? If falsehoods overrun the facts while fact checkers enjoy a banner year, then what will journalists prescribe to remedy the situation? More of what hasn't worked?

This choice will likely have good traction with PolitiFact's editors if they see a way toward picking this one while avoiding the appearance of admitting failure.

The fake news phenomenon(?)

Fake news has been around a good while, but it's the new hotness in journalistic circles. If mainstream journalism can conquer fake news, then maybe the mainstream press can again take its rightful place as society's gatekeepers of information! That idea excites mainstream journalists.

This surprise nominee has everything going for it. Fake news is fake by definition, so who can criticize the choice? It's total journalistic hotness, as noted. And the choice represents a call to action, opposing fake news, in symphony with a call that is already reverberating in fact-checking circles.

Is it a lame choice? Yes, it's as lame as all get out. I doubt journalists even have a clue about the impact of fake news, to say nothing of the role fact checkers play in supporting false news memes that liberals favor.


Clinton's claim she never sent or received material marked as classified on her private server is the favorite according to the early established norms of the "Lie of the Year" award. But the fake news choice serves as the clear favorite in terms of sympathy with PolitiFact's Democratic-leaning readership and promotion of its own sense of mission. I expect the latter to prevail.

Jeff Adds:

I don't see much to disagree with in Bryan's analysis. You can dismiss any claim relating to Trump right off the bat. Giving the award to Trump would neither shock people who hate him nor upset people who love him (who presumably already have low regard for liberal fact checkers). It would be a yawner of a pick that would fail to generate buzz.

The Clinton pick would be a favorite in any other year. Because Clinton has already lost the election and her status on the left has been diminished, handing her the award wouldn't do any harm to her, but it would provide PolitiFact with a bogus token of neutrality ("See! We call Democrats liars too!"). Likewise, the resulting outrage of PolitiFact's devoted liberal fanbase would generate plenty of clicks, and typically that's what the Lie of the Year has been about. It's true they would temporarily upset the faithful, but we've seen this exact scenario play out before with little consequence. Historically, PolitiFact seems motivated by clicks (and even angry liberal clicks will do, not to mention they keep the "we upset both sides" charade going).

But the Fake News pick is the obvious favorite here. It's the hottest of hot topics in journalist circles, and PolitiFact sees itself on the front lines in the war against "unfacts" (read: opposing viewpoints). It's already trying to rally the troops and wants to be seen as a beacon of truthiness in a sea of deceit.

It's been my view that while PolitiFact formerly cared primarily about generating buzz, since Holan's ascension [Angie Drobnic Holan replacing Bill Adair as chief editor--bww] they've behaved more and more like political activists. The Clinton choice would get more clicks, but I'd bet on Fake News being this year's rally cry for PolitiFact's army of Truth Hustlers.

Viva la Factismo!

Monday, November 21, 2016

Great Moments in the Annals of Subjectivity (Updated)

Did Republican Donald Trump win the electoral college in a landslide?

We typically think of a "landslide" as an overwhelming victory, and there's certainly doubt whether Trump's margin of victory in the electoral college unequivocally counts as overwhelming.

"Overwhelming" itself is hard to pin down in objective terms.

So that's why we have PolitiFact, the group of liberal bloggers that puts "fact" in its name and then proceeds to publish "fact check journalism" based on subjective "Truth-O-Meter" judgments.

When RNC Chairman Reince Priebus (and Trump's pick for his chief of staff) called Trump's electoral college victory a "landslide," PolitiFact Wisconsin's liberal bloggers sprang into action to do their thing (bold emphasis added):
Landslide, of course, is not technically defined. When we asked for information to back Priebus’ claim, the Republican National Committee merely recited the electoral figures and repeated that it was a landslide.
If "landslide" is not technically defined then what fact is PolitiFact Wisconsin checking? Is "landslide" non-technically defined to the point one can judge it true or false?

PolitiFact Wisconsin follows typical PolitiFact procedure in collecting expert opinions about whether Priebus' use of "landslide" matches its non-technical definition. One of the 10 experts PolitiFact consulted said Trump's margin was "close" to a landslide. According to PolitiFact, the other nine said it fell short, so PolitiFact ruled Priebus' claim "False."
Priebus said Trump’s win was "an electoral landslide."

But aside from the fact Trump lost the popular vote, his margin in the Electoral College isn’t all that high, either. None of the 10 experts we contacted said Trump’s win crosses that threshold.

We rate Priebus’ claim False.
One has to marvel at expertise sufficient to say whether the use of a term meets a non-technical definition.

One has to marvel all the more at fact checkers who concede that a term has a mushy definition ("not technically defined") and then declare that some use of the term fails to cross "that threshold."

What threshold?

One of the election experts said if Trump won by a landslide then Obama won by an even greater landslide.

RollCall, 2015:
In 2006, Democrats won back the House; two years later, President Barack Obama won by a landslide.
LA Times, 2012:
Obama officially wins in electoral vote landslide.
NPR, 2015:
President Obama won in a landslide.
NYU Journalism, 2008:
Obama Wins Landslide Victory, Charts New Course for United States.
Since Obama did not win by a landslide, one cannot claim Trump won by a landslide? Is that it?
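For scale, here are the electoral-vote totals behind that comparison. These are widely reported figures supplied for illustration; the post itself doesn't list them:

```python
# Pledged electoral votes out of 538 (widely reported totals,
# added here for scale; the post does not list them).
electoral_votes = {
    "Trump 2016": 306,
    "Obama 2012": 332,
    "Obama 2008": 365,
}
TOTAL_ELECTORS = 538

for name, ev in electoral_votes.items():
    print(f"{name}: {ev} of {TOTAL_ELECTORS} ({ev / TOTAL_ELECTORS:.1%})")
```

Both of Obama's wins carried more electoral votes than Trump's, which is the expert's point: any non-technical definition of "landslide" loose enough to cover Trump's margin covers Obama's margins too.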

It is folly for fact checkers to try to judge the truth of ambiguous claims. PolitiFact often pursues that folly, of course, and in the end simply underscores what it occasionally admits: The ratings are subjective.

Finding experts willing to participate in the folly does not reduce the magnitude of the folly. This would have been a good subject for PolitiFact to use in continuing its Voxification trend. PolitiFact might have produced an "In context" article to talk about electoral landslides and how experts view the matter. But trying to corral the use of a term that is traditionally hard to tame simply makes a mockery of fact-checking.

Jeff Adds (Dec. 1, 2016):

Add this to a long list of opinions that PolitiFact treats as verifiable facts, including these two gems:

- Radio host John DePetro opined that the Boston Marathon bomber was buried "not far" from President John Kennedy. PolitiFact used their magical powers of objective divination to determine the unarguable demarcation of "not far."

- Rush Limbaugh claimed "some of the wealthiest Americans are African-Americans now." Using the divine wizardry of the nonpartisan Truth-O-Meter, PolitiFact's highly trained social scientists were able to conjure up a determinate definition of what "wealthiest" means, and specifically which people were included in the list.

Reasonable people may discount Priebus' claim that Trump won a "landslide" victory, assuming the conventional use of the term, but it's not a verifiable fact that can be confirmed or dismissed with evidence. It's an opinion.

The reality is that the charlatans at PolitiFact masquerade as truthsayers when they do little more than contribute to the supposed fake news epidemic by shilling their own opinions as unarguable fact. They're dangerous frauds whose declaration of objectivity doesn't withstand the slightest scrutiny.

Sunday, November 13, 2016

PolitiFact's "many" problems

On Nov. 3, 2016 we brought some focus to the "Mostly False" rating PolitiFact gave Donald Trump for saying many Americans were paying more for health care than for their mortgage or rent.

PFB co-editor Jeff D. today reminded me about an "Afters" section I added to a post from Sept. 1, 2016:
PolitiFact exaggerated the survey evidence supposedly supporting Clinton by claiming "many" teachers blamed Trump for increasing bullying and harassment:
Many of these teachers, unsolicited, cited Trump’s campaign rhetoric and the accompanying discourse as the likely reason for this behavior.
The Zebra Fact Check investigation suggests PolitiFact was misled about the number of teachers saying Trump was responsible for increasing bullying or harassment. Out of almost 2,000 teachers participating in the survey, 849 answered the question about bullying or biased language and of those 123 mentioned Trump. A fraction of those placed any kind of blame on Trump for anything. We would generously estimate that 25 teachers blamed Trump for something (not necessarily bullying or harassment) in answering that question. This implies that, to PolitiFact, "many" can be less than 1.25 percent of 2,000.
That's right. The hypocritical liberal bloggers at PolitiFact said "many" teachers cited Trump's rhetoric as the likely reason for bullying and/or harassing behavior. PolitiFact shoveled that to its readers as a fact, though the data showed a small fraction of the surveyed teachers offered that opinion. Then, about a month later, PolitiFact said Trump's statement about health care costs was "Mostly False."
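The percentages are easy to verify from the numbers in the quoted passage. The figures below come straight from the post; the variable names are ours:

```python
# Survey numbers as summarized in the Zebra Fact Check passage quoted above.
total_teachers = 2000      # approximate number of survey participants
answered_question = 849    # answered the bullying/biased-language question
mentioned_trump = 123      # of those, mentioned Trump at all
blamed_trump = 25          # generous estimate of teachers blaming Trump for something

share_of_total = blamed_trump / total_teachers * 100       # percent of all participants
share_of_answers = blamed_trump / answered_question * 100  # percent of those who answered

print(f"Share of all surveyed teachers: {share_of_total:.2f}%")
print(f"Share of those answering the question: {share_of_answers:.1f}%")
```

Even under the most generous framing (percent of those who answered the question rather than percent of all participants), "many" describes roughly 3 percent of respondents.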

Could PolitiFact be right in both cases?

That seems like a stretch.

Wrong in both cases?

That's more likely.

Wednesday, November 9, 2016

Another day, another deceptive PolitiFact chart

On election day, PolitiFact helpfully trotted out a set of its misleading "report card" graphs, including an updated version of its comparison between Democrat Hillary Clinton and Republican Donald Trump.

What is the point of publishing such graphs?

The graphs make an implicit argument to prefer the Democratic Party nominee in the general election. See how much more honest she is! Or, alternatively, see how the Republican tells many falsehoods!

The problem? This is the same PolitiFact deception we have pointed out for years.

The chart amounts to a political ad, making the claim Clinton is more truthful than Trump. But to properly support that conclusion, the underlying data should fairly represent typical political claims from Clinton and Trump--the sort of representativeness scientific studies achieve by randomizing data collection.

In the same vein, a scientific study would allow for verification of its ratings by using a carefully defined set of rating criteria. One might then duplicate the results by independently repeating the fact check and reaching the same conclusions.

Yet none of that is possible with these collected "Truth-O-Meter" ratings.

Randomly selected stories aren't likely to grip readers. So editors select the fact-checks to maximize reader interest and/or serve some notion of the public good.

So much for a random sample.

And trying to duplicate the ratings by following objective scientific procedure counts as futile. PolitiFact founder Bill Adair recently confirmed this yet again with the frank admission that "the decision about a Truth-O-Meter rating is entirely subjective."

So much for objectively verifying the results.

PolitiFact passes off graphs of its opinions as though they represent hard data about candidate truthfulness.

This practice ought to offend any conscientious journalist, and that should go double for any conscientious fact-checking journalist.

We have called for PolitiFact to include some type of disclaimer each time it publishes this type of item. Such disclaimers happen only on occasion. The example embedded in this post contains no hint of a disclaimer.

Wonder why Republicans and Trump voters do not trust mainstream media fact-checking?

Take a look in the mirror, PolitiFact.

Saturday, November 5, 2016

PolitiFact founder Bill Adair: "Lord knows the decision about a Truth-O-Meter rating is entirely subjective"

What have we said for years? PolitiFact's "Truth-O-Meter" rating system is hopelessly subjective.

And now, thanks to an interview by freelance journalist Michael Schulson, PolitiFact's founding editor has made perhaps his clearest statement yet confirming that key charge against PolitiFact (bold emphasis added):
[Michael Schulson]
But there is some subjectivity baked into the process, in terms of which claims you check, and where you draw the line between statements of opinion and statements of fact. Objective journalists are still making subjective choices.

[Bill Adair]
Oh, absolutely. But they always have!

I think that transparency is key. You need to have your own guidelines on how you select what you fact-check.

But yeah, we’re human. We’re making subjective decisions. Lord knows the decision about a Truth-O-Meter rating is entirely subjective. As Angie Holan, the editor of PolitiFact, often says, the Truth-O-Meter is not a scientific instrument.
How often does PolitiFact's Angie Drobnic Holan say the "Truth-O-Meter" is not a scientific instrument? Not nearly enough for our tastes. PolitiFact announces new candidate report card updates and comparisons by the week, but a Google search on Nov. 5, 2016 for "Truth-O-Meter" and "scientific instrument" drew only seven pages of hits (38 in all). A good number of those were not directly related to PolitiFact, and many of the rest were duplicates of this page.

Would it alter PolitiFact's impact if every "report card" or report card story it published wore the disclaimer that the "Truth-O-Meter" ratings are subjective?

We think it would. Plus, doing so would represent an important step toward full transparency for PolitiFact.

So why don't they do it?

Note: Those who read Schulson's interview of Adair may wish to also read my response.

Friday, November 4, 2016

The Daily Caller: "PolitiFact Used Doctored Clinton Foundation Memo On Its HIV/AIDS Program"

The conservative website The Daily Caller has wound up in a bit of a feud with PolitiFact. The Caller ran an article criticizing the Clinton Foundation. PolitiFact did a fact check of the Caller in response. And the Caller has responded blow for blow.

This one'll leave a mark:
High-ranking officers with the Clinton Foundation gave PolitiFact a doctored version of a 2008 memo lauding its HIV/AIDS program presumably to defend against congressional charges that the charity distributed ‘watered down’ drugs to poor patients on the African continent, according to new information acquired by The Daily Caller News Foundation Investigative Group.

The altered memo went to Politifact Sept. 21, 2016, three days after TheDCNF published a story entitled, “Clinton Foundation AIDS Program Distributed ‘Watered-Down’ Drugs To Third World Countries.” (RELATED: EXCLUSIVE: Clinton Foundation AIDS Program Distributed ‘Watered-Down’ Drugs To Third World Countries)
The Caller also achieved a minor miracle by getting PolitiFact's Aaron Sharockman to respond.
Steps were reportedly taken to verify the authenticity of the Clinton Foundation document, Aaron Sharockman, executive director of PolitiFact, claimed in a statement to TheDCNF.

He told TheDCNF that the memo “was provided to us by the Clinton Foundation in response to our questions,” adding that PolitiFact “verified its authenticity through emails sent and received at that time.”
Read the whole thing here.

Correction/clarification Nov. 6, 2016: "The Daily Caller has wound up a bit of a feud"=>"The Daily Caller has wound up in a bit of a feud"

Thursday, November 3, 2016

PolitiFact: "Mostly False" that many are paying more for health care than for mortgage or rent

Sometimes reading a PolitiFact fact check is like being whisked off to Wonderland for a conversation with the Mad Hatter.

Case in point: Donald Trump says that, for the first time in history, many Americans are paying more for health care than for their rent or mortgage. PolitiFact finds it "Mostly False."

This rating drew our attention right away because of the ambiguity of Trump's claim. How does a fact-check go about making a truth distinction about "many instances"? How many is "many"? And if pinning down "many" poses a challenge, how does one go from that challenge to finding out whether it's happening for the first time in history?

After getting into the text of the fact check, it was a matter of trying to control laughter over the way PolitiFact approached the problem.

Wednesday, November 2, 2016

Fact, motivation, and PolitiFact's inconsistency

One of the oldest legislative tricks involves introducing a bill that will not pass so that one party can slam the member of the opposing party for not supporting one part of the bill.

We've seen the Democrats use that technique to terrific effect with the "Violence Against Women Act." And Republicans do the same type of thing to Democrats.

A Democrat or a Republican might have motivations behind their opposition that undercut the message their opponents try to use against them.

But does PolitiFact treat these same types of campaign ads the same way for both parties?

It sure doesn't look like it.

PolitiFact Missouri today graded a Republican claim in this category "Half True."

Note how PolitiFact Missouri justifies its conclusion (bold emphasis added):
Greitens says Koster voted against a 2007 bill requiring the state to pay for rape victims’ medical exams. In reality, the bill did more than that.

Koster says he objected to wording that made it possible for convicted murderers to be granted parole by claiming they were victims of domestic abuse. Koster said the language made it possible for murderers to manufacture evidence to be released before the completion of their sentence.

Greitens is cherry-picking one part of the legislation to paint his opponent as soft on domestic abuse. We rate his claim Half True.
As we noted back in August, PolitiFact Florida gave a "True" rating to Democrat Patrick Murphy when he made a parallel claim about his Republican opponent:

And note how PolitiFact Florida justifies its conclusion:
Murphy said Rubio "voted against the bipartisan Violence Against Women Act."

Rubio voiced support for the original law, but he and some Republicans in both the Senate and House opposed certain provisions added to the bill pertaining to spending and federal oversight. Rubio voted against the bill in 2012 and 2013, but it passed with bipartisan support the second time.

Even though he had clearly stated his reasons why, Rubio still voted nay. We rate Murphy’s statement True.
Both cases feature the same type of deception, and PolitiFact's fact checkers take note of the deception in both cases. But the Republican gets a "Half True" rating while the Democrat gets a "True" rating.

This type of example isn't atypical. It's just another day at the office for PolitiFact's left-leaning fact checkers.


It's worth pointing out that our previous post shows PolitiFact Wisconsin using essentially this same illicit ad technique against Senator Ron Johnson (R-Wisc.).
The next year, Johnson voted against a Senate amendment to affirm that human activity significantly contributes to climate change.

While all but one senator supported an earlier amendment affirming the existence of climate change, only five Republicans this time voted to acknowledge there is a human impact. The amendment, seen as a symbolic effort by the Democrats to force GOP senators to take a position, failed 50 to 49 (it required a 3/5 majority).
PolitiFact Wisconsin saw nothing wrong with using Johnson's opposition to the amendment as solid evidence that Johnson thinks humans have no role in climate change, even though the amendment did not narrowly address that issue.

Q: What's the difference between PolitiFact and the Democratic Party?
A: The Democratic Party doesn't claim to be nonpartisan.

Tuesday, November 1, 2016

More PolitiFact climate change shenanigans, featuring PolitiFact Wisconsin

One of PolitiFact's more reliable bends to the left occurs on the issue of climate change. The arbiters of truth, for example, class Republicans as climate change deniers if they do not go on record affirming man-made climate change. So much for PolitiFact's burden of proof criterion, right?


This related example comes from PolitiFact Wisconsin, checking on a claim from Senate candidate Russ Feingold that his Republican opponent does not believe humans contribute to climate change.

PolitiFact Wisconsin's approach to the fact check resembles the incompetent methods used by other iterations of the PolitiFact family. A Zebra Fact Check critique of a past PolitiFact fact check foreshadows the problems with PolitiFact Wisconsin's fact check of Feingold:
First, interpret an unclear statement according to a more clear statement by the same source. Second, in judging what a person thinks in the present, place greater weight on more recent statements.
PolitiFact Wisconsin does not apply these commonsense principles.

PolitiFact Wisconsin's evidence, in chronological order

Johnson: "I absolutely do not believe in the science of man-caused climate change. It’s not proven by any stretch of the imagination."

"There are other forces that cause climate to change," Johnson told Here and Now’s Robin Young. "So climate does change and I don’t deny that man has some effect on that. It certainly has a great deal of effect on spoiling our environment in many different ways."

But Johnson softened his view in the very next sentence: "I’ve got a very open mind, but I don’t have the arrogance that man can really do much to affect climate."
Johnson voted against a proposed amendment to a bill concerning the Keystone pipeline. The amendment would have expressed the sense of the Senate on the issue of anthropogenic climate change, including the ideas that humans "significantly" affect the climate and that climate change increases the severity of extreme weather events (such as hurricanes).

"Man-made global warming remains unsettled science. World-renowned climate experts have raised serious objections to the theories behind these claims. I believe it is a bad idea to impose a policy that will raise taxes on every American, will balloon energy prices and will hurt our economic competiveness (sic) – especially on such uncertain predictions."
"Listen, man can affect the environment; no doubt about it," he said. "The climate has always changed, it always will. … The question is, how much does man cause changes in our environment, changes in our climate, and what we could possibly even do about it?"

Assessing PolitiFact Wisconsin's evidence

Following the principles mentioned above, Johnson's clearest statements on humans having some role in climate change come from the 2014 and 2016 quotations. In 2014, Johnson said he does not deny humans have a role in climate change. In 2016, Johnson said there was "no doubt about it" that man can affect the environment. Johnson's clearest statements on the subject directly contradict Feingold's claim.

Our principles also guide us toward giving preference to more recent statements. That leaves the 2015 climate change amendment vote as the only remaining candidate for evidence that Johnson denied a human role in climate change.

Is there a weaker proof of a legislator's specific views on a topic than that legislator's unwillingness to vote in favor of a "sense of the Senate" amendment? Particularly when the amendment does not feature language narrowly tailored to the question at hand?

Might Johnson have voted against the amendment, even while believing humans play some role in climate change, because he doubted the evidence that undefined "climate change" causes an increase in severe weather events? Who knows? We don't. But if you're PolitiFact Wisconsin, you can simply assume the answer is "no" and call it fact-checking.

PolitiFact concludes:
Johnson did not support a Senate amendment to acknowledge a man-made role in climate change and expressed skepticism each of the few times he acknowledged humans might contribute. He has acknowledged at times that humans can play a role but downplayed how significant that role might be.

For a statement that is accurate but needs additional clarification, our rating is Mostly True.

PolitiFact's conclusion consists of spin.

The Senate amendment was not simply about "a man-made role in climate change." It stipulated a significant role as well as a worsening effect on severe weather.

When Johnson said humans play a role in climate change, he did not express skepticism about whether humans play a role. He expressed skepticism about the extent of that role. The two are not the same thing, and skepticism about the latter does not contradict Johnson's recognition that humans play some role in affecting the climate. PolitiFact says Johnson acknowledged humans "can" play a role. But that's just more spin. Johnson did not simply say humans "can" play a role. He said humans do play a role, and he said he does not deny humans play a role.

If Johnson says humans play some role in causing climate change, that statement cannot support Feingold's claim that Johnson does not believe humans play any role in climate change.

Johnson's statement cannot reasonably justify the "Mostly True" rating with which PolitiFact Wisconsin gifted Feingold. The statements could reasonably justify "False" or "Mostly False" ratings if PolitiFact's definitions for its ratings meant something.

PolitiFact's inability to apply simple logic in the course of its fact checks continues to boggle our minds. At the same time, we're not surprised. This is the type of error that results when left-leaning journalists rate the truth of political statements on a subjective scale.

Thursday, October 27, 2016

Promises, promises

What's more useless than PolitiFact's trademarked "Truth-O-Meter"? How about its device for rating presidential promises, the Obameter?

Years ago, we pointed out an absurd rating: PolitiFact gave President Barack Obama a "Promise Kept" for staying in office while the nation achieved on its own what Obama had proposed achieving through a "Renewable Portfolio Standard." Obama promised to change the RPS. He never did, yet PolitiFact gave him credit for keeping the promise.

In this new case, we will partly defend the president, albeit without doing him any favors.

This "Obameter" item focuses on the $2,500 promise; PolitiFact failed separately on the promise to sign a health care bill providing universal coverage.

Obama fulfilled this promise in its literal sense. "Up to $2,500 a year" covers every amount equal to or below $2,500 per year. Even if Obama increased costs to families by $5,000 per year, that would fulfill his promise to decrease costs to families by up to $2,500 per year.

Absurdly, Obama could only break this promise by saving a typical family more than $2,500 per year.

Obama's promise was a classic political promise because it didn't really mean anything while at the same time sounding like a wonderful promise. The president implied the typical family would save about $2,500 per year under the legislation he promised. The promise paints operations like PolitiFact into a corner. They can either grade the promise on its implied meaning, which PolitiFact did, or else admit Obama's promise effectively promised nothing.

Obama delivered on the empty literal promise, unquestionably.

PolitiFact deserves partial credit for highlighting Obama's failure to achieve the promise he implied. But a fact-checker can help arm the public against misleading campaign rhetoric by explaining the deception of an "up to" clause.

PolitiFact did not do that.

Wednesday, October 26, 2016

Adding an annotation to PolitiFact's annotation of the third 2016 presidential debate

Is it news that fact-checkers are far from perfect?

Behold, a screen capture from PolitiFact's annotated version of the third presidential debate, hosted at Medium. PolitiFact says you can't see it unless you follow PolitiFact on Medium. If our readers can't see it without following PolitiFact, then maybe PolitiFact is right about that (though we have our doubts):

PolitiFact highlights Trump's claim that Clinton wants open borders. Hovering over an asterisk in the sidebar brings up a window showing PolitiFact's comment. PolitiFact says it rated Trump's claim that Clinton wants open borders "False."

Click on the link, and you eventually end up on PolitiFact's web page and its fact check of Trump's claim that Clinton wants open borders, where the claim is rated "Mostly False."

There's no editor's note announcing a change in the rating, so we assume that no issue of timing excuses PolitiFact for falsely reporting its own finding.

PolitiFact. The best of the best. Right?