Friday, December 30, 2016

PolitiFact's top eleven fake fact checks of 2016

We've promised a list of PolitiFact's top contributions to fake news--but we don't want to get into a useless semantic argument about what constitutes "fake news." For that reason, we're calling this list PolitiFact's top "fake fact checks," and that term refers to fact checks that misinform, whether intentionally or not.

11 Mike Pence denied evolution!

PolitiFact California rated "True" Governor Jerry Brown's claim that Republican vice presidential candidate Mike Pence denied evolution. The truth? Pence made a statement consistent with theistic evolution without affirming or denying evolution. We called out the error here, and PolitiFact California later changed its rating to "Half True." Because if Pence did not deny evolution, then it is half true that he denied evolution. It's fact checker logic. You wouldn't understand.


10 Ron Johnson denies humans contribute to climate change!

When Democratic candidate Russ Feingold charged that his Republican opponent Ron Johnson does not accept any human role in climate change, PolitiFact Wisconsin was there. It rated Feingold's claim "Mostly True." The problem? PolitiFact Wisconsin's evidence showed Johnson making a number of clear statements to the contrary, including one in which Johnson specifically said he does not deny that humans affect the climate. PolitiFact Wisconsin chose to interpret Johnson's more ambiguous statements as a denial of what Johnson had said plainly. We wrote about the mistake, but PolitiFact Wisconsin has stayed with its "Mostly True" rating.


9 Social Security is not a Ponzi scheme!

PolitiFact has an established precedent of denying the similarities between Social Security's "pay-as-you-go" financing and Ponzi financing. PolitiFact reinforced its misleading narrative by giving voters advance warning that they might hear the Ponzi lie in 2016. The problem? Voters can find that supposed lie repeated commonly in professional literature, written by the kind of experts PolitiFact might have interviewed to learn the truth.

Will PolitiFact ever repent of misleading its readers on this topic?


8 LGBT people are the group most often victimized by hate crimes!

Attorney General Loretta Lynch declared in 2016 that lesbian, gay, bisexual and transgender (LGBT) people are the group most often victimized by hate crimes. PolitiFact gave Lynch's statement a "True" rating, meaning the statement is accurate with nothing significant left out. The problem? Lynch's statement is only true on a per capita basis. In absolute numbers, large minority groups experience more hate crime victimization than LGBT people do. But an individual in the LGBT group is more likely to experience a hate crime than a member of those other groups.

How is that not significant enough to affect the rating?


7 The gender wage gap is real(ly big)! Or something!

Mainstream fact checkers are consistently awful on the gender wage gap. The game works like this: a Democratic candidate wants to leverage concern over gender discrimination, so the candidate cites a statistic that has hardly any relationship to gender discrimination. Democratic candidates can count on fact checkers to go along with the game so long as they do not specifically say the raw gender wage gap is caused by gender discrimination.

PolitiFact Missouri's 2016 gender wage gap story, exposed here and here, went that approach one better by badly misinterpreting its source material to exaggerate the size of the gap caused by discrimination.


6 Torture doesn't work!

PolitiFact Florida weighed in on torture and waterboarding when a Florida Republican running for Marco Rubio's Senate seat said waterboarding works. PolitiFact Florida ruled the claim "False" after admitting that nobody has tested the proposition scientifically. In short, we (including PolitiFact) don't know for a fact whether waterboarding works. PolitiFact Florida's error was pointed out at Flopping Aces and here.


5 France and Germany did not think Iraq had WMD!

When former Deputy Secretary of Defense Paul Wolfowitz said the French and Germans believed Iraq had WMD, PolitiFact ruled it "Mostly False." The creepy "1984" nature of this fact check stems from PolitiFact turning lack of certitude into near-certitude of lack. And PolitiFact has to win some sort of award for avoiding French President Jacques Chirac's 2003 statement, during the approach to the war, that Iraq "probably" possessed WMD.


4 Colorado Republican tried to redefine rape!

PolitiFact Colorado makes our list with its liberal "Mostly True" rating given to abortion rights champion Emily's List. Emily's List charged a Colorado Republican with trying to "redefine rape" in an abortion-related statute. PolitiFact Colorado apparently neglected to look up the traditional definition of rape (and its forcible/statutory distinction) to see whether the proposed wording would have changed it. It would not have, leaving the impression that PolitiFact Colorado essentially took the word of Emily's List at face value. Fellow PolitiFact critic Dustin Siggins led the way in flagging the problems with this PolitiFact Colorado item.


3 In California, it's easier to buy a gun than a Happy Meal!

Matthew Hoy, another of our favorite PolitiFact critics, flagged this hilarious item. It was not a fact check, but rather a Twitter incident in which PolitiFact California retweeted somebody else. California Democrat Gavin Newsom received bogus PolitiCover for claiming there are more gun dealers in California than McDonald's restaurants. Newsom tweeted out the bogus vindication under the absurd headline "FACT: It's easier to get a gun than a Happy Meal in California." Easier, presumably, because a gun costs so much less than a Happy Meal?

2 Donald Trump is causing an increase in bullying in our schools!

PolitiFact ostensibly checked Hillary Clinton's claim that teachers noticed a "Trump Effect" amounting to an increase in bullying behavior in the nation's schools. Anecdotal reports ought to count for close to squat in fact-checking circles, yet PolitiFact accepted a motley collection of anecdotes from the left-leaning Southern Poverty Law Center as reason enough to give Clinton a "Mostly True" rating. We chronicled the numerous problems with the so-called "Trump Effect" here and at Zebra Fact Check here and here.

1 Mike Pence advocated diverting federal funds from AIDS patients to gay conversion therapy!

PolitiFact California heads the list with its second mostly fact-free fact check of Mike Pence. Back around the year 2000, when Pence was running for the House of Representatives, he suggested that AIDS care dollars under the Ryan White Care Act should not go to organizations that celebrated behavior likely to spread AIDS. Pence said funds under the Act should instead go to institutions assisting those seeking to "change their sexual behavior." About 15 years later, Pence's statement was construed to mean that he wanted AIDS care funding to go toward gay conversion therapy. There's no serious argument supporting that notion, and Timothy P. Carney pointed that out even before PolitiFact checked the claim. But PolitiFact California gave Gavin Newsom a "True" rating for the accusation.

PolitiFact California's recent publication of its most popular fact checks for 2016 helps explain why this item tops our list. PolitiFact claimed its "Half True" rating of Newsom was its most popular story. But for months the story ran with a "True" rating. Which version of the story got the most clicks, eh?


Monday, December 26, 2016

Bill Adair: Do as I say, not as I do(?)

One of the earliest criticisms Jeff and I leveled against PolitiFact was its publication of opinion-based material under the banner of objective news reporting. PolitiFact's website has never, so far as we have found, bothered to categorize its stories as "news" or "op-ed." Meanwhile, the Tampa Bay Times publishes PolitiFact's fact checks in print alongside other "news" stories. The presentation implies the fact checks count as objective reporting.

Yet PolitiFact's founding editor, Bill Adair, has made statements describing PolitiFact fact checks as something other than objective reporting. Adair has called fact-checking "reported conclusion" journalism, as though one may abandon Jay Rosen's "view from nowhere," employ the methods of the op-ed writer, and still end up with objective reporting. And we have tried to publicize Adair's admission that what he calls the "heart of PolitiFact," the "Truth-O-Meter," features subjective ratings.

Against that background, we are gobsmacked that Adair effectively expressed solidarity with PolitiFact Bias on the issue of properly labeling journalism (interview question by Hassan M. Kamal and response by Adair; bold emphasis in the original):
The online media is still at a nascent stage compared to its print counterpart. There's still much to learn about user behaviour and impact of news on the Web. What are the mistakes do you think that the early adopters of news websites made that can be avoided?

Here's a big one: identifying articles that are news and distinguishing them from articles that are opinion. I think of journalism as a continuum: on one end there's pure news that is objective and tells both sides. Just the facts. On the other end, there's pure opinion — we know it as editorials and columns in newspaper. And then there's some journalism in the middle. It might be based on reporting, but it's reflecting just one point of view. And one mistake that news organisations have made is not telling people the difference between them. When we publish an opinion article, we just put the phrase 'op-ed' on top of an article saying it's an op-ed. But many many people don't know what that means. And it's based on the old newspaper concept that the columns that run opposite the editorial are op-ed columns. The lesson here is that we should better label the nature of journalism. Label whether it's news or opinion or something in between like an analysis. And that's something we can do better when we set up new websites.
To address the elephant in the room: if labeling journalism accurately is so important, and analysis falls between reporting and op-ed on the news continuum, why doesn't PolitiFact label its fact checks as analysis instead of passing them off as objective news?


Afters

The fact check website I created to improve on earlier fact-checking methods, by the way, separates the reporting from the analysis in each fact check, labeling both.

Friday, December 23, 2016

Evidence of PolitiFact's bias? The Paradox Project I

Matt Shapiro, a data analyst, started publishing a series of PolitiFact evaluations on Dec. 16, 2016. The series appears at the Paradox Project website as well as at the Federalist.

Given our deep and abiding interest in the evidence showing PolitiFact's liberal bias, we cannot resist reviewing Shapiro's approach to the subject.

Shapiro's first installment focuses on truth averages and disparities in the lengths of fact checks.

Truth Averages

Shapiro looks at how various politicians compare using averaged "Truth-O-Meter" ratings:
We decided to start by ranking truth values to see how PolitiFact rates different individuals and aggregate groups on a truth scale. PolitiFact has 6 ratings: True, Mostly True, Half-True, Mostly False, False, and Pants on Fire. Giving each of these a value from 0 to 5, we can find an “average ruling” for each person and for groups of people.
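For readers who want to see the arithmetic spelled out, here is a minimal sketch of that averaging method -- our own illustration in Python, not Shapiro's code, using made-up ratings and assuming the scale runs from Pants on Fire = 0 up to True = 5 (the quoted passage does not say which end is which):

    # Map PolitiFact's six ratings onto a 0-5 scale and average them per speaker.
    # The scale direction and the sample data below are assumptions for illustration.
    RATING_VALUES = {
        "Pants on Fire": 0,
        "False": 1,
        "Mostly False": 2,
        "Half-True": 3,
        "Mostly True": 4,
        "True": 5,
    }

    def average_ruling(ratings):
        # Mean 0-5 value for a list of Truth-O-Meter ratings
        values = [RATING_VALUES[r] for r in ratings]
        return sum(values) / len(values)

    # Hypothetical example data, not real PolitiFact tallies
    sample = {
        "Speaker A": ["True", "Half-True", "Mostly False"],
        "Speaker B": ["Mostly True", "False", "Pants on Fire"],
    }

    for speaker, ratings in sample.items():
        print(speaker, round(average_ruling(ratings), 2))

Group averages work the same way: pool all the ratings for the speakers in a group before taking the mean.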
Unlike many (not all) past attempts to produce "Truth-O-Meter" averages for politicians, Shapiro uses his averages to gain insight into PolitiFact:
Using averages alone, we already start to see some interesting patterns in the data. PolitiFact is much more likely to rate Republicans as their worst of the worst “Pants on Fire” rating, usually only reserved for when they feel a candidate is not only wrong, but lying in an aggressive and malicious way.
Using 2012 Republican presidential candidate Mitt Romney as his example, Shapiro suggests bias serves as the most reasonable explanation of the wide disparities.

We agree, noting that Shapiro's insight stems from the same type of inference we used in our ongoing study of PolitiFact's application of its "Pants on Fire" rating. But Shapiro disappointed us by defining the "Pants on Fire" rating differently than PolitiFact defines it. PolitiFact does not define a "Pants on Fire" statement as an aggressive or malicious lie. It is defined as "The statement is not accurate and makes a ridiculous claim."

As our study argued, the focus on the "Pants on Fire" rating serves as a useful measure of PolitiFact's bias, given that PolitiFact offers nothing at all in its definitions to allow an objective distinction between "False" and "Pants on Fire." Indeed, PolitiFact's principals have on occasion confirmed that the distinction between the two is arbitrary.

Shapiro's first evidence is pretty good, at least as an inference to the best explanation. But it's been done before and with greater rigor.

Word Count

Shapiro says disparities in the word counts for PolitiFact fact checks offer an indication of PolitiFact's bias:
The most interesting metric we found when examining PolitiFact articles was word count. We found that word count was indicative of how much explaining a given article has to do in order to justify its rating.
While Shapiro offered plenty of evidence showing PolitiFact devotes more words to ratings of Republicans than to its ratings of Democrats, he gave little explanation supporting the inference that the disparities show an ideological bias.

While it makes intuitive sense that selection bias could lead toward spending more words on fact checks of Republicans, as when the fact checker gives greater scrutiny to a Republican's compound statement than to a Democrat's (recent example), we think Shapiro ought to craft a stronger case if he intends to change any minds with his research.


Summary

Shapiro's analysis based on rating averages suffers from the same types of problems that we think we addressed with our "Pants on Fire" study: Poor averages for Republicans make a weak argument unless the analysis defuses the excuse that Republicans simply lie more.

As for Shapiro's examination of word counts, we certainly agree that the differences are so significant that they mean something. But Shapiro needs a stronger argument to convince skeptics that greater word length for fact checks of Republicans shows liberal bias.


Update Dec. 23, 2016: Made a few tweaks to the formatting and punctuation, as well as adding links to Shapiro's article at the Paradox Project and the Federalist (-bww).


Jeff adds:

I fail to see how Shapiro contributes anything worthwhile to the conversation, and he certainly doesn't offer anything new. Every criticism of PolitiFact in his piece has been written about in depth before and, in my view, written much better.

Shapiro's description of PolitiFact's "Pants on Fire" rating is flatly wrong. The definition is published at PolitiFact's website, had he cared to look it up. Shapiro asserts that a "Pants on Fire" rating "requires the statement to be both false and malicious" and is "usually only reserved for when they feel a candidate is not only wrong, but aggressively and maliciously lying." This is pure fiction. Whether this indicates sloppiness or laziness I'm not sure, but in any event mischaracterizing PolitiFact's ratings only gives fuel to PolitiFact's defenders. Shapiro's error at the very least shows an unfamiliarity with his subject.

Shapiro continues the terribly flawed tradition of some conservative outlets, including the Federalist, where his article was published: attempting to find clear evidence of bias by simply adding up PolitiFact's ratings. Someone with Shapiro's skills should know this is a dubious method.

In fact, he does know it:
This method assumes this or that article might have a problem, but you have to look at the “big picture” of dozens of fact-checks, which inevitably means glossing over the fact that biased details do not add up to an unbiased whole.
That's all well and good, but then Shapiro goes on to ask his readers to accept that exact same method for his own study. He even came up with his own chart that simply mirrors the same dishonest charts PolitiFact pushes.

At first blush, his "word count" theory seems novel and unique, but does it prove anything? If it is evidence of something, Shapiro failed to convince me. And I'm eager to believe it.

Unfortunately, it seems Shapiro assumes what his word count data is supposed to prove. Higher word counts do not necessarily show political bias. It's entirely plausible those extra words were the result of PolitiFact giving someone the benefit of the doubt, or granting extra space for subjects to explain themselves. Shapiro makes his assertion without offering evidence. It's true that he offered a few examples, but unless he scientifically surveyed the thousands of articles and confirmed the additional words are directly tied to justifying the rating, he could reasonably be accused of cherry-picking.

“When you’re explaining, you’re losing,” may well be a rock solid tenet of lawyers and politicians, but as data-based analysis it is unpersuasive.

We founded this website to promote and share the best criticisms of PolitiFact. While we doubt it matters to him or the Federalist, Shapiro's work fails to meet that standard. 

Shapiro offers nothing new and nothing better. This is a shame because, judging from his Twitter feed and previous writings, Shapiro is a very bright, thoughtful and clever person. We hope his next installments in this series do a better job of exposing PolitiFact's bias.

We've been criticizing and compiling quality critiques of PolitiFact for six years now. Documenting PolitiFact's bias is the main reason for this site's existence. We're exceedingly predisposed to accept and promote good evidence of PolitiFact's flaws.

If your data project can't convince two guys who started a website called PolitiFact Bias and who devote countless hours of their free time preaching to people that PolitiFact is biased, then perhaps your data project isn't very convincing.

Tuesday, December 20, 2016

PolitiFact Wisconsin don't need no stinkin' evidence

Is a fact check a fact check if it doesn't bother checking facts?

PolitiFact Wisconsin brings this question to the fore with its Dec. 16, 2016 fact check of former Democratic senator Russ Feingold. Feingold said Social Security was pretty much invented at the University of Wisconsin-Madison, and that's where President Franklin Delano Roosevelt got the idea.

PolitiFact agreed, giving Feingold's claim a "True" rating:


But a funny thing happened when we looked for PolitiFact Wisconsin's evidence in support of Feingold's claims. The fact check omits that evidence, if it exists.

Let's review what PolitiFact offered as evidence:
When we asked Feingold spokesman Josh Orton for backup, he pointed to several Wisconsinites and people tied to the University of Wisconsin-Madison — where Feingold graduated in 1975 — who were influential in developing Social Security.
PolitiFact went on to list four people with UW-Madison connections (among the many involved) who were influential in bringing Social Security to passage in the United States.

Then PolitiFact Wisconsin summarized its evidence, with help from an unbiased expert from UW-Madison:
Current UW-Madison professor Pamela Herd agreed that Wisconsinites tied to the university were key figures in the development of Social Security.

"There were a lot of people involved in the creation of this program, but some of the most important players were from Wisconsin," said Herd, an expert on Social Security.
Okay, got it? Now on to the conclusion:
Feingold said that the idea for Social Security "was basically invented up on Bascom Hill, my alma mater here; that's where Franklin Roosevelt got the idea."

Historical accounts show, and an expert agrees, that officials who helped propose and initially operate Social Security had deep ties to UW-Madison.

We rate Feingold’s statement True.
And there you have it. Fact-checking.

If officials who helped propose and initially operate Social Security had deep ties to UW-Madison, then Social Security was basically invented at UW-Madison. And that's where President Roosevelt got the idea. "True."

Where was PolitiFact when Al Gore claimed to have taken the initiative in creating the Internet?

Seriously: PolitiFact Wisconsin's fact check produces no solid or unequivocal evidence supporting one of Feingold's claims and completely ignores fact-checking the other (why?). Yet Feingold's claims receive a "True" rating?

What happened to comparing the content of the federal Social Security Act to its precursor from UW-Madison? What happened to looking at where Roosevelt got his ideas about providing social insurance?

That's not fact-checking. That's rubber-stamping.


Afters:

The silver lining from PolitiFact Wisconsin's fact check comes from its links to the Social Security Administration website, which offer facts instead of supposition about the history of Social Security.

PolitiFact Wisconsin did a stellar job of keeping inconvenient facts from the Social Security website out of its fact check.

Sunday, December 18, 2016

Fact-checking the wrong way, featuring PolitiFact

Let PolitiFact help show you the right way to fact check by avoiding its mistakes

Fake and skewed news do present a problem for society. Having the best possible information allows us to potentially make the best possible decisions. Bad information hampers good decision-making. And the state of public discourse, including the state of the mainstream media, makes it valuable for the average person to develop fact-checking skills.

We found a December 11, 2016 fact check from PolitiFact that will help us learn how to better interpret claims and make use of expert sources.

The interpretation problem

PolitiFact believed it was fact-checking Republican Reince Priebus' claim that there was no report available saying Russia tried to interfere with the 2016 presidential election:


Was Priebus saying there was no "specific report" saying Russia tried to "muddy" the election? Here's how PolitiFact viewed it:
"Let's clear this up. Do you believe -- does the president-elect believe that Russia was trying to muddy up and get involved in the election in 2016?" Meet the Press host Chuck Todd asked on Dec. 11, 2016.

"No. 1, you don't know it. I don't know it," Priebus said. "There's been no conclusive or specific report to say otherwise."

That’s wrong. There is a specific report.

It was made public on Oct. 7, 2016, in the form of a joint statement from the Department of Homeland Security and the Office of the Director of National Intelligence. At the time, the website WikiLeaks was releasing a steady flow of emails stolen from the Democratic National Committee and top Hillary Clinton adviser John Podesta.

"The U.S. Intelligence Community (USIC) is confident that the Russian Government directed the recent compromises of e-mails from U.S. persons and institutions, including from U.S. political organizations," the statement said. "These thefts and disclosures are intended to interfere with the U.S. election process."
Based on the context of Priebus' appearance on "Meet the Press," we think PolitiFact botched its interpretation. NBC's Chuck Todd went back and forth with Priebus for several minutes on the nature of the evidence supporting the charge of Russian interference with the 2016 U.S. presidential election. The main topic was recent news reports suggesting Russia interfered with the U.S. election to help Republican candidate Donald Trump. Todd's question after "Let's clear this up" had little chance of clearing up that point. It would not have been unreasonable for Priebus to interpret Todd's question as referring to interference intended to help the Republican Party.

But the bigger interpretation problem centers on the word "specific." Given the discussion between Todd and Priebus around the epistemological basis for the Russian hacking angle, including "You don't know it and I don't know it" in the immediate context, both "conclusive" and "specific" address the nature of the evidence, not merely the existence of a report.

"Conclusive" means incontrovertible, not merely featuring a conclusion. "Specific" means including a specific evidence or evidences, and therefore would refer to a report showing evidences, not merely a particular (second definition as opposed to the first) report.

In short, PolitiFact made a poor effort at interpreting Priebus in the most sensible way. Giving conservatives short shrift in the interpretation department occurs routinely at PolitiFact.

Was the report PolitiFact cited incontrovertible? PolitiFact offered no argument to that effect.

Did the report give a clear and detailed description of Russia's attempt to influence the 2016 election? Again, PolitiFact offered no argument to that effect.

PolitiFact's "fact-checking" in this case amounted to playing games with the definitions of words.

The problem of the non-expert expert

PolitiFact routinely cites experts without investigating their partisan leanings, without reporting them, or both. Our current case gives us an example of that, as well as a case of giving the expert a platform to offer a non-expert opinion:
Yoshiko Herrera, a University of Wisconsin political scientist who focuses on Russia, called that letter, "a pretty strong statement." Herrera said Priebus’ comment represents a "disturbing" denial of facts.

"There has been a specific report, and politicians who wish to comment on the issue should read and comment on that report rather than suggest there is no such report or that no work has been done on the topic," Herrera said.
What relevant expertise does a political scientist focused on Russia bring to the evaluation of statements on issues specific to U.S. security? Even taking for granted that the letter Herrera talks about was objectively "a pretty strong statement," Herrera has no obvious qualification that lends weight to her opinion. An expert on international intelligence issues might lend weight to that opinion by expressing it.

The same goes for PolitiFact's second paragraph quoting Herrera. The opinion in this case gains some weight from Herrera's status as a political scientist (the focus on Russia counts as superfluous), but her implicit opinion that Priebus made the error she cautions about does not stem from her field of expertise.

Note to would-be fact checkers: Approach your interviews with experts seeking comments that rely on their field of expertise. Anything else is fluff, and you may end up embarrassing the experts you cite by presenting opinions that fall outside their expertise as though they carried expert weight.

Was Herrera offering her neutral expert opinion on Priebus' comment? We don't see how her comments rely on her expertise. And reason exists to doubt her neutrality.

Source: FEC.

Yoshiko Herrera's FEC record of political giving shows donations exclusively to Democrats, including a modest string of contributions to the campaign of Hillary Rodham Clinton.

Did PolitiFact give its readers that information? No.

The wrap

Interpret comments fairly, and make sure you only quote expert sources when their opinion comes from their area of expertise. Don't ask an expert on political science and Russia for an opinion that requires a different area of expertise.

For the sake of transparency, I advocate making interview material available to readers. Did PolitiFact lead Herrera toward the conclusion a "specific report" exists? Or did Herrera offer that conclusion without any leading? An interview transcript allows readers to answer that question.

PolitiFact has announced that it plans to start making interview materials available as a standard practice. Someday? Somewhere over the rainbow?



Afters

Since the time we started this evaluation of PolitiFact's fact check, U.S. intelligence agencies have weighed in with comments hinting that they possess specific evidence showing a Russian government connection to election-related hacking and information leaks. But even these new reports do not contradict Priebus unless they include clear and detailed evidence of Russian meddling--from named and reliable sources.

Thursday, December 15, 2016

Angry PolitiFact co-founder speaks truth to power?

PolitiFact's founding editor Bill Adair tends to get all the credit (or blame!) these days for creating PolitiFact, but news technologist Matt Waite, once of the St. Petersburg (later Tampa Bay) Times, has a claim to a role as PolitiFact's principal developer.

Thus we were surprised that Waite stuck his neck out by blaming journalists for the problems facing journalism.

Waite was one of the journalists Nieman Journalism Lab chose to make bold predictions for the coming year in journalism. Waite's contribution was a scathingly worded prediction that journalists would not resolve any of the big problems because ... journalists would not admit they are the problem. Yes, PolitiFact Bias readers, click the link and read it. It's short, pithy and angry.


PolitiFact is mainstream media journalism. The problems Waite describes in the mainstream media apply broadly to PolitiFact.



Afters:

We appreciate Mollie Z. Hemingway's prescription for journalism published at The Federalist.

Tuesday, December 13, 2016

We called it: PolitiFact's 2016 "Lie of the Year" is Fake News

Back on Dec. 5, 2016, we at PolitiFact Bias reviewed PolitiFact's candidates for "Lie of the Year" and predicted that PolitiFact would choose "Fake News" as its winner:
This surprise nominee has everything going for it. Fake news is fake by definition, so who can criticize the choice? It's total journalistic hotness, as noted. And the choice represents a call to action, opposing fake news, in symphony with a call that is already reverberating in fact-checking circles.

Is it a lame choice? Yes, it's as lame as all get out. I'd doubt journalists even have a clue about the impact of fake news, not to even mention the role fact checkers play in supporting false news memes that liberals favor.
The article PolitiFact published to announce its choice, written by editor Angie Drobnic Holan, jibes right on down the line with the reasons Jeff and I thought it was an idiotic-yet-predictable choice.

"Fake News" is so broad, we thought, that it would end up as PolitiFact's self-justification for its own existence.

We also pointed out that the choice shields PolitiFact from criticism. On the one hand, Fake News gives PolitiFact an excuse not to pick a good candidate, such as Hillary Clinton's aggressive and false defense of her email activity--one with a fairly inevitable impact on Clinton's election prospects--that would risk offending its primarily liberal audience. On the other hand, Fake News is so vague that it offers no target for criticism.

Holan even took the predictable path of tying Fake News to the election results without any real data to back her argument:
Fake news found a willing enabler in Trump, who at times uttered outrageous falsehoods and legitimized made-up reports. Clinton emboldened her detractors and turned off undecideds with a lawyerly parsing of facts that left many feeling that she was lying. Her enemies ran wild.
Got the subtext? Fake News was a big key to Trump's victory.

Another subtext throughout Holan's article presents fact-checking (the mainstream media brand, with PolitiFact as its star player) as the antidote to the society-destroying effects of Fake News.


Afters:

Even before PolitiFact announced its Lie of the Year for 2016, we planned to dip our toe into Fake News outrage by compiling a list of PolitiFact's top fake news stories for 2016. That is, we will show people how PolitiFact's articles sometimes reinforce and spread false beliefs.


We expect to publish the list next week.

Does changing from "True" to "Half True" count as a correction? Clarification? Update? Anything?

The use and abuse of the PolitiMulligan

We've noted before PolitiFact's propensity to correct or update its stories on the sly, contrary to statements of journalistic ethics (including its own statement of principles) regarding transparency.

Thanks to PolitiFact, we have another example in the genre, where PolitiFact California, instead of announcing a correction or update, simply executed a do-over on one of its stories.

On July 28, 2016, PolitiFact ruled it "True" that vice-presidential candidate Mike Pence had advocated diverting federal money from AIDS care services to "conversion therapy." But Timothy P. Carney, writing for the right-leaning Washington Examiner, had published an item the day before explaining why the evidence used by Pence's critics did not wash.

I wrote about PolitiFact California's faulty fact check on July 29, 2016 at Zebra Fact Check.

On Dec. 2, 2016, PolitiFact partly reversed itself, publishing a new version of the fact check with a "Half True" rating replacing the original "True" rating.

To be sure, the new item features a lengthy editor's note explaining the reason for the new version of PolitiFact California's fact check. But readers should note that PolitiFact completely avoids any admission of error in its explanation:
EDITOR’S NOTE: On July 28, 2016, PolitiFact California rated as True a statement by Democratic Lt. Gov. Gavin Newsom that Republican Indiana Governor and now Vice President-Elect Mike Pence "advocated diverting taxpayer dollars to so-called conversion therapy." We based that ruling on a statement Pence made in 2000 on his congressional campaign website, in which Pence says "Resources should be directed toward those institutions which provide assistance to those seeking to change their sexual behavior." Subsequently, our readers and other fact-checking websites examined the claim and made some points that led us to reconsider the fact check. Readers pointed out that Pence never explicitly advocated for conversion therapy in his statement and that he may have been pushing for safer sex practices. Pence’s words are open to other interpretations: Gay and lesbian leaders, for example, say his statement continues to give the impression that he supported the controversial practice of conversion therapy when his words are viewed in context with his long opposition to LGBT rights. Taking all of this into account, we are changing our rating to Half True and providing this new analysis.

PolitiFact California’s practice is to consider new evidence and perspectives related to our fact checks, and to revise our ratings when warranted.
While we credit PolitiFact California for keeping an archived version of its first attempt available for readers, we find PolitiFact's approach a bit puzzling.

First of all, no "new evidence and perspectives" were involved in this case. Carney's July 27 article ought to have pre-empted the flaw in PolitiFact California's original July 28 fact check, and Zebra Fact Check highlighted the problem again two days later: A fact checker needs to account for the difference in wording between "changing sexual behavior" and "changing sexual preference." Also noted was PolitiFact California's failure to explain the immediate context of the smoking-gun quotation it used to convict Pence: the Ryan White Care Act.

PolitiFact California made two major mistakes in its fact-checking. First, it failed to offer due attention to the wording of Pence's statement. Second, it failed to consider the context.

The two major errors resulted in no admission of error. And PolitiFact California's do-over does not even show up on PolitiFact's list of stories that were updated or corrected.

As for the new "Half True" rating? If "changing their sexual behavior" in the context of the Ryan White Care Act is open to interpretation as "changing their sexual orientation," then we claim as our privilege the interpretation of "Half True" as "False."

In other words, PolitiFact California, creative interpretation is no substitute for facts.


Afters


So apparently it is an update. Just not the type of update that PolitiFact includes on its "Corrections and Updates" page.

Friday, December 9, 2016

PolitiFact agrees to disagree with itself on budget cuts

Bless your heart, PolitiFact.

PolitiFact has lately started to wrap up its "Obameter" feature, rating whether President Obama delivered on the set of campaign promises PolitiFact tracked.

One recent item caught our eye, as Obama earned a "Compromise" rating for partially delivering on a re-election campaign promise to cut $1 trillion to $1.5 trillion in spending.

Veteran PolitiFact fact checker Louis Jacobson wrote this one.

Jacobson and PolitiFact received some great advice from an expert, then proceeded to veer into the weeds:
"Like anything else in budgeting, it's all about the baseline," said Steve Ellis, vice president of Taxpayers for Common Sense. "It's a cut relative to what?"

The most obvious way to judge how much spending went down on Obama's watch is to start with how much spending was expected in 2012, when he made the promise, and then compare that to the amount of spending that actually materialized.
Huh? What happened to the incredibly obvious method of measuring how much was spent in 2012, when he made his promise, and then looking at how much was spent at the end of his term in office?

Jacobson's Dec. 5, 2016 Obameter story doesn't even acknowledge the method we're pointing out, yet Jacobson appeared well aware of it when he wrote a budget cut fact check way back in 2014:
First, while the ad implies that the law is slicing Medicare benefits, these are not cuts to current services. Rather, as Medicare spending continues to rise over the next 10 years, it will do so at a slower pace would [sic] have occurred without the law. So claims that Obama would "cut" Medicare need more explanation to be fully accurate.
Jacobson faulted a critic of Obama's health care law for using "cuts" to describe slowing the growth of future spending. Yet Jacobson finds that deceptive method "the most obvious way" to determine whether Obama delivered his promised spending cut.

But at least there's a happy ending to the discrepancy. The National Republican Senatorial Committee has "the most obvious method" counted against it (as deceptive), while President Obama receives the benefit when PolitiFact uses the Republicans' deceptive method to rate the president's promise on cutting spending.

There's nothing wrong with favoring the good guys over the bad guys, right?


Inside Baseball Stuff

In terms of fact-checking, we noted a particularly interesting feature of Louis Jacobson's rating of President Obama's promise of cutting spending by $1 trillion to $1.5 trillion: Obama was promising that spending cut over and above one he was already claiming to have achieved. Though PolitiFact's presentation makes this part of Obama's statement obvious, PolitiFact does not bother to confirm the claim.


Consideration of context serves as a fact checker's primary tool for interpreting claims. If Obama saved $1 trillion before making his promise of equal or greater savings in his second term, then the means he used to achieve that savings is the means we should expect him to use to fulfill the second-term promise, unless he specified otherwise.

We failed to find any mainstream fact check addressing Obama's claim of saving $1 trillion in 2011:
PRESIDENT OBAMA: Well, I-- I have to tell you, David, if-- if you look at my track record over the last two years, I cut spending by over a trillion dollars in 2011.
If Obama did not save $1 trillion in the first place, he cannot fulfill a promise to cut "another" $1 trillion. At best he can fulfill part of the promise: to cut $1 trillion.

Louis Jacobson and PolitiFact did not notice?

Checking whether Obama saved that $1 trillion in 2011 should have served as a prerequisite for rating Obama's second-term promise to save another $1 trillion or more. Fact checkers could then assume Obama would save the second $1 trillion under the same terms as the first.

Monday, December 5, 2016

Handicapping PolitiFact's "Lie of the Year" for 2016

A full plate of stuff to write about has left me a little behind in getting to PolitiFact's list of finalists for its "Lie of the Year" award--the award that makes it even more obvious that PolitiFact does opinion journalism, since judging the importance of a "lie" requires subjective judgments.

I clipped an image of the most important part of the electronic ballot:



One thing jumped out right away. Typically the list of finalists includes about 10 specific fact checks from a number of sources. This year's menu includes only four specific fact checks, two each from Hillary Clinton and Donald Trump. PolitiFact rounds out the menu with two category choices: the whole 2016 election and the "fake news" phenomenon that, without much hint of irony, has galvanized the mainstream press to make even greater efforts to recapture its (legendary?) role as the gatekeeper of what people ought to know and accept.

Given PolitiFact's recent tendency to select a "Lie of the Year" made up of multiple finalists, these changes make a great deal of sense. We've already pointed out one of the advantages PolitiFact gains from this approach. Having a multi-headed hydra as the winner allows PolitiFact to dodge criticisms of its choice. Oh, that hydra head got lopped off? No worries. These others continue to writhe and gnash their teeth.

Without further ado, I'll rate the chances of the six listed finalists. Doubtless my co-editor Jeff D. will weigh in at some point with his own comments and predictions.

Clinton "never received nor sent any email that was marked as classified" on her private email server while acting as Secretary of State

Of the specific claims listed, this one probably had the biggest impact on the election. Clinton made this claim a key part of her ongoing defense of her use of the private email server. When FBI Director James Comey contradicted this part of Clinton's story, it cinched one of Clinton's key negatives heading into the 2016 election. This one would serve as a pretty solid choice for "Lie of the Year." The main drawback of this selection stems from liberal denial of Clinton's weakness as a presidential candidate. This choice might generate some lasting resentment from a significant segment of PolitiFact's liberal fanbase, some of whom will insist Clinton was telling the truth.


Clinton says she received Comey's seal of approval regarding her truthfulness about the email server

This item gave us a notable case in which a major political figure made a clear and pretty much indefensible statement that was quickly publicized as such. Was this one politically significant? I think journalists were a bit shocked that Clinton made this unforced error. But I doubt voters regarded this case as anything other than a footnote to Clinton's earlier dissembling about her email server.

Trump claims "large-scale voter fraud"

Talk about awkward!

Trump was pilloried by the mainstream press, along with pundits and politicians aplenty, for his statements calling the presidential election results into doubt. But the political importance of this one gets complicated by liberal challenges to the election results in states where Trump's margin of victory was not particularly narrow (Michigan, Pennsylvania, and Wisconsin). Why challenge the results if they were not skewed by some form of large-scale fraud? This selection also suffers from the nature of the evidence. Trump received the rating not because it is known that no large-scale voter fraud took place in 2016, but because of a lack of evidence supporting the claim.

Donald Trump said he was against the war in Iraq

This one counts as the weakest of the specific fact checks on the list. PolitiFact and its fact-checking brethren built a very weak case that Trump had supported the Iraq War. Making this one the "Lie of the Year" by itself would invite some very strong challenges in the mainstream and conservative press.


"The entire 2016 election, when falsehoods overran the facts"

Now things get interesting! Could PolitiFact opt for a "Lie of the Year" awarded to a candidate even more generalized than "campaign statements by Donald Trump," which won in 2015? And does PolitiFact have the ability to objectively quantify this election's overrunning of the facts compared to elections in the past? And could PolitiFact admit that falsehoods overran the facts despite proclamations that fact-checking enjoyed a banner year? If falsehoods overran the facts while fact checkers enjoyed a banner year, then what will journalists prescribe to remedy the situation? More of what hasn't worked?

This choice will likely have good traction with PolitiFact's editors if they see a way toward picking this one while avoiding the appearance of admitting failure.


The fake news phenomenon(?)

Fake news has been around a good while, but it's the new hotness in journalistic circles. If mainstream journalism can conquer fake news, then maybe the mainstream press can again take its rightful place as society's gatekeepers of information! That idea excites mainstream journalists.

This surprise nominee has everything going for it. Fake news is fake by definition, so who can criticize the choice? It's total journalistic hotness, as noted. And the choice represents a call to action, opposing fake news, in symphony with a call that is already reverberating in fact-checking circles.

Is it a lame choice? Yes, it's as lame as all get out. I'd doubt journalists even have a clue about the impact of fake news, not to even mention the role fact checkers play in supporting false news memes that liberals favor.


Summary

Clinton's claim that she never sent or received material marked as classified on her private server is the favorite according to the early established norms of the "Lie of the Year" award. But the fake news choice serves as the clear favorite in terms of sympathy with PolitiFact's Democratic-leaning readership and promotion of its own sense of mission. I expect the latter to prevail.


Jeff Adds:

I don't see much to disagree with in Bryan's analysis. You can dismiss any claim relating to Trump right off the bat. Giving the award to Trump would neither shock people who hate him nor upset people who love him (who presumably already have low regard for liberal fact checkers). It would be a yawner of a pick that would fail to generate buzz.

The Clinton pick would be a favorite in any other year. Because Clinton has already lost the election and her status on the left has diminished, handing her the award wouldn't do any harm to her, but it would provide PolitiFact with a bogus token of neutrality ("See! We call Democrats liars too!"). Likewise, the resulting outrage of PolitiFact's devoted liberal fanbase would generate plenty of clicks, and typically that's what the Lie of the Year has been about. It's true the pick would temporarily upset the faithful, but we've seen this exact scenario play out before with little consequence. Historically, PolitiFact seems motivated by clicks (and even angry liberal clicks will do, not to mention they keep the "we upset both sides" charade going).

But the Fake News pick is the obvious favorite here. It's the hottest of hot topics in journalism circles, and PolitiFact sees itself on the front lines in the war against opposing viewpoints (er, "unfacts"). It's already trying to rally the troops and wants to be seen as a beacon of truthiness in a sea of deceit.

It's been my view that while PolitiFact formerly cared primarily about generating buzz, since Holan's ascension [Angie Drobnic Holan replacing Bill Adair as chief editor--bww] they've behaved more and more like political activists. The Clinton choice would get more clicks, but I'd bet on Fake News being this year's rallying cry for PolitiFact's army of Truth Hustlers.

Viva la Factismo!