Thursday, December 19, 2019

PolitiFact disagrees with IG on IG report on CrossFire Hurricane

What to do when the Inspector General and PolitiFact disagree on what an IG report says?

Well, the IG has no Pulitzer Prize, so maybe trust the fact checkers?

PolitiFact, Dec. 11, 2019:

The investigation was not politically motivated

The IG report also dismissed the notion that the investigation was politically motivated.
Inspector General Michael Horowitz, during Senate testimony on Dec. 18, 2019 (The Epoch Times):
Then [Sen. Josh Hawley (R-Mo.)] asked, “Was it your conclusion that political bias did not affect any part of the Page investigation, any part of Crossfire Hurricane?”

“We did not reach that conclusion,” Horowitz told him. He added, “We have been very careful in connection with the FISA for the reasons you mentioned to not reach that conclusion, in part, as we’ve talked about earlier: the alteration of the email, the text messages associated with the individual who did that, and then our inability to explain or understand or get good explanations so we could understand why this all happened.”
We confirmed The Epoch Times' account via C-SPAN. The Times edits the exchange for clarity without altering its basic meaning. We invite readers to confirm it for themselves via an embedded clip (around 3:11)*:

Seriously, we count PolitiFact's Pulitzer as no kind of reasonable evidence supporting its reliability. Pulitzer juries do not fact check content before awarding prizes.

It seems clear PolitiFact committed the fallacy of argumentum ad ignorantiam. When the IG report repeatedly said it "found no testimonial or documentary evidence that these operations resulted from political bias or other improper considerations" or similar words, PolitiFact made the fallacious leap to conclude there was no political bias.

Pulitzer Prize-winning and IFCN-verified PolitiFact.

We need better measures of trustworthiness.

*Our embedded clip ended up shorter than we expected, for which we apologize to our readers. Find the full clip here.

Monday, December 16, 2019

A political exercise: PolitiFact chooses non-impactful (supposed) falsehood as its "Lie of the Year"

PolitiFact chose President Trump's claim that a whistleblower's complaint about his phone call with Ukrainian leader Volodymyr Zelensky got the facts "almost completely wrong."

We had deemed it unlikely PolitiFact would choose that claim as its "Lie of the Year," reasoning that it failed to measure up to the supposed criterion of carrying a high impact.

We failed to take into account PolitiFact's dueling criteria, explained by PolitiFact Editor Angie Drobnic Holan back in 2016:
Each year, PolitiFact awards a "Lie of the Year" to take stock of a misrepresentation that arguably beats all others in its impact or ridiculousness.
To be sure, "arguably beats all others in its impact" counts as a subjective criterion. As a bonus, PolitiFact offers itself an alternative criterion based on the "ridiculousness" of a claim.

Everybody who thinks there's an objective way to gauge relative "ridiculousness" raise your hand.

We will not again make the mistake of trying to handicap the "Lie of the Year" choice based on the criteria PolitiFact publicizes. Those criteria are hopelessly subjective and don't tell the real story.

It's simpler and more direct to predict the outcome based on what serves PolitiFact's left-leaning interests.

Thursday, December 12, 2019

William Barr, PolitiFact and the biased experts game

Is it okay for fact checkers to rely on biased experts for their findings?

Earlier this year, Facebook restricted distribution of a video by pro-life activist Lila Rose. Rose's group complained the fact check was biased. Facebook relied on the International Fact-Checking Network to investigate. The investigator ruled (very dubiously) that the fact check was accurate but that the fact checker should have disclosed the bias of experts it cited:
The failure to declare to their readers that two individuals who assisted Science Feedback, not in writing the fact-check but in reviewing the evidence, had positions within advocacy organizations, and the failure to clarify their role to readers, fell short of the standards required of IFCN signatories. This has been communicated to Science Feedback.
Perhaps it's fine for fact checkers to rely on biased experts so long as those experts do not hold positions in advocacy organizations.

Enter PolitiFact and its December 11, 2019 fact check of Attorney General William Barr.

The fact check itself hardly deals with the substance of Barr's claim that the "Crossfire Hurricane" investigation of possible collusion between the Trump campaign and Russia was started on the thinnest of evidence. Instead, PolitiFact sticks with calling the decision to investigate "justified" by the Inspector General's report while omitting the report's observation that the law sets a low threshold for starting an investigation (bold emphasis added).
Additionally, given the low threshold for predication in the AG Guidelines and the DIOG, we concluded that the FFG information, provided by a government the United States Intelligence Community (USIC) deems trustworthy, and describing a first-hand account from an FFG employee of a conversation with Papadopoulos, was sufficient to predicate the investigation. This information provided the FBI with an articulable factual basis that, if true, reasonably indicated activity constituting either a federal crime or a threat to national security, or both, may have occurred or may be occurring. For similar reasons, as we detail in Chapter Three, we concluded that the quantum of information articulated by the FBI to open the individual investigations on Papadopoulos, Page, Flynn, and Manafort in August 2016 was sufficient to satisfy the low threshold established by the Department and the FBI.
The "low threshold" is consistent with Barr's description of "thinnest of suspicions" in the context of prosecutorial discretion and the nature of the event that supposedly justified the investigation (the Papadopoulos caper)*.

But in this post we will focus on the experts PolitiFact cited.

Rosa Brooks

Rosa Brooks, professor of law and policy at Georgetown University, told us that Barr’s assessment that the suspicions were thin "appears willfully inaccurate."

"The report concluded precisely the opposite," she said. "The IG report makes it clear that the decision to launch the investigation was justified."
If PolitiFact were brazen enough, it could pick out Brooks as a go-to (biased) expert based on her Twitter retweets from Dec. 10, 2019.

Brooks' tweets also portray her as a Democrat voter. So does her pattern of political giving.

Jennifer Daskal

Jennifer Daskal, professor of law at American University, agreed. "Barr’s statement is at best a misleading statement, if not a deliberate distortion, of what the report actually found," she said.
Daskal's Internet history shows little to suggest she pre-judged her view on Barr's statement. On the other hand, it seems pretty plain she prefers the presidential candidacy of Pete Buttigieg (one example among several). Plus Daskal has tended to donate politically to Democrats.

Robert Litt

PolitiFact contacted Litt for his expert opinion but did not mention him in the text of the fact check.

We deem it unlikely PolitiFact tabbed Litt to counterbalance the leftward lean of Brooks and Daskal. Litt was part of the Obama administration, and his appointment carried an unusual political dimension. Litt failed his background check but was installed in the Clinton Justice Department in a roundabout way.

Litt, like Brooks and Daskal, gives politically to Democrats.

So what's the problem?

We think it's okay for PolitiFact to cite experts who lean left and donate politically to the Democratic Party. That's not the problem.

The problem is the echo-chamber effect PolitiFact achieves by choosing a small pool of experts all of whom lean markedly left. As we've noted before, that's no way to establish anything akin to an expert consensus. But it serves as an excellent method for excluding or marginalizing contrary arguments.

It's not like those are hard to find. It seems PolitiFact simply has no interest in them.

*It's worth noting that the information Papadopoulos shared with the Australian, Downer, came in turn from the mysterious Joseph Mifsud.

PolitiFact's "Lie of the Year" farce, 2019 edition

From the start, PolitiFact's "Lie of the Year" award has counted as a farce.


Why? Because it has supposedly neutral and unbiased fact-checkers offering their collective editorial opinion on the most impactful falsehood of the past year.

How better to illustrate their neutrality than by offering an opinion?

PolitiFact's actions in choosing its "Lie of the Year" have borne out the farcical nature of the exercise, with farces including naming true statements as the "Lie of the Year" and the immortal ObamaCare bait and switch.

On With the Latest Farce

This year we quickly noticed that all of the nominated falsehoods received "Pants on Fire" ratings. That's a first. The nominees usually received either a "False" or a "Pants on Fire" rating in the past, with the ObamaCare bait and switch counting as the lone exception.

Next, we noticed that of the three nominations connected to Democratic Party politicians only one came from PolitiFact National. PolitiFact California and PolitiFact Texas each scored one of those nominations.

No state PolitiFact operation had a Republican subject nominated, and President Trump received three of the four.

Is This the Weakest Field Ever?

PolitiFact says it awards its "Lie of the Year" to "the most impactful significant falsehood."

We don't see much impact on PolitiFact's list of nominations. We'll go through and handicap the list based on PolitiFact's claimed criterion.

But first we remind readers that PolitiFact has a history of not limiting its choice to an item from its own list of nominations. This is a good year in which to pull that stunt.

Says Nancy Pelosi diverted "$2.4 billion from Social Security to cover impeachment costs."
— Viral image on Wednesday, October 9th, 2019 in a Facebook post
Anybody remember any real-world impact from this claim? We don't. 0

"The first so-called second hand information ‘Whistleblower’ got my phone conversation almost completely wrong."

— Donald Trump on Saturday, October 5th, 2019 in a tweet
How about this footnote from the Trump impeachment parade? The impact of this supposed falsehood (isn't it closer to opinion than a specific statement of fact?), if any, comes from its symbolic representation of the case for impeachment. The appeal of this choice comes from the ability of the media to clap itself on the back for its impeachment reporting. 5

Between 27,000 and 200,000 Wisconsinites were "turned away" from the polls in 2016 due to lack of proper identification.

— Hillary Clinton on Tuesday, September 17th, 2019 in a speech at George Washington University
Again, what was the real-world impact of Clinton's claim? If PolitiFact bothered to tie together the exaggerated election interference claims from the Democratic presidential candidates plus failed Democratic nominee for the governorship of Georgia, Stacey Abrams, then maybe PolitiFact could reasonably say the collected falsehoods carried some impact. 1

Originally "almost all models predicted" Dorian would hit Alabama.

— Donald Trump on Wednesday, September 4th, 2019 in a tweet
Although we're not aware of any real-world impact from this Trump tweet, this one had a pretty big impact in the world of journalism (not to be mistaken for the real world).

That may well prove enough to give this claim the win. The press made a huge deal of this presidential tweet, and the issue eventually led to accusations Trump broke the law by altering a weather report (not kidding).

Were the media correct that Trump inspired disproportionate worry in the state of Alabama? We would not expect the media to offer strong support for that proposition. Thanks, Washington Post, for providing us an exemplar of our expectations. 6

"The vast majority" of San Francisco’s homeless people "also come in from — and we know this — from Texas. Just (an) interesting fact."

— Gavin Newsom on Sunday, June 23rd, 2019 in an interview on "Axios on HBO"
Gavin Newsom isn't well known nationally, and his statement had negligible real-world impact, by our estimation. This nomination is another tribute to PolitiFact's difficulty in finding falsehoods from Democrats. If PolitiFact had rated any of the many claims from Stacey Abrams that she won the Georgia election then we might have had a legitimate contender from the Democrats. 1

U.S. tariffs on China are "not hurting anybody" in the United States.

— Peter Navarro on Sunday, August 18th, 2019 in an interview
This "gotcha" fact check ignores much of the context of the Navarro interview, especially Navarro's point about China's major devaluation of its currency. That aside, despite media attempts to trumpet the harm to American consumers Americans seem mostly okay with whatever harm they're supposedly receiving.

Tariffs and the trade war make up a big issue, but it's sad if this shallow fact check treatment had any real-world impact. 2

"Remember after the shooting in Las Vegas, (President Donald Trump) said, ‘Yeah, yeah, we’re going to ban the bump stocks.’ Did he ban the bump stocks? No."

— Kirsten Gillibrand on Sunday, June 2nd, 2019 in a Fox News town hall
Ah. The fact check that brought an end to Kirsten Gillibrand's hopes for the Democratic Party's presidential nomination.

Just kidding. Gillibrand never caught fire in the Democratic primary and there's no reason to suppose her statement criticizing Trump had anything to do with it. We doubt many are familiar with either Gillibrand or the fact check. Real world impact? Not so much. But at least PolitiFact National can take credit for this fact check of a Democrat. That's something. 0

Video shows Nancy Pelosi slurring her speech at a public event.
— Politics WatchDog on Wednesday, May 22nd, 2019 in a Facebook post
The video that misled thousands into thinking Speaker Pelosi slurred her speech in public. 1

"There has never been, ever before, an administration that’s been so open and transparent."
— Donald Trump on Monday, May 20th, 2019 in remarks at the White House
What was the context of this Trump claim?

PolitiFact used a short video snippet posted to Twitter as its primary source. PolitiFact offered no surrounding context. Is everyone good with that?

Can a potentially hyperbolic statement lifted out of context serve as the most significant falsehood of 2019?

This is PolitiFact we're talking about. 4

Says John Bolton "fundamentally was a man of the left."
— Tucker Carlson on Tuesday, September 10th, 2019 in a TV segment
Fired Trump national security adviser John Bolton might have better name recognition than Kirsten Gillibrand. Not that we'd bet money on it or anything. Carlson was offering the opinion that Bolton's willingness to use government power (particularly military power) marked him as a progressive.

So what? Even if we suppose that political alignment counts as a matter of fact, who cares? 1


Two Trump claims have a chance of ending up with the "Lie of the Year" award. But the weak field makes us think there's an excellent chance PolitiFact will do what it has done in the past by settling on a set of claims or a topic that failed to make its list of nominees.

PolitiFact's list offers few nominees with significant political impact.

Correction Dec. 17, 2019: We misquoted PolitiFact's description of its "Lie of the Year" criterion and had overlooked a second criterion PolitiFact claims to use (at least potentially). We fixed our use of the wrong word with a strikethrough, replacing it with the correct word ("significant" for "impactful").

Tuesday, December 3, 2019

PolitiFact ratings aren't scientific--or are they?

We stand bemused by PolitiFact's attempt to straddle the fence regarding its aggregated "Truth-O-Meter" ratings.

On the one hand, as PolitiFact Editor Angie Drobnic Holan told us a few weeks ago, "It’s important to note that we don’t do a random or scientific sample."

On the other hand, we have this eye-popping claim Holan sent out via email today (also published to the website):
Trump remains a unique figure in American politics and at PolitiFact. There’s no one we’ve fact-checked as many times who has been as consistently wrong. Out of 738 claims we’ve fact-checked as of this writing, more than 70% have been rated Mostly False, False or Pants on Fire. After four years of watching Trump campaign and govern, we see little to no increase in his use of true facts and evidence to support his arguments.
Even though it is supposedly important to note that PolitiFact doesn't do a random or scientific sample, Holan makes no mention of that important caveat in her email. Instead, she makes it appear to the reader as though Trump's "Truth-O-Meter" record offers a reasonable basis for judging whether Trump has increased "his use of true facts and evidence to support his arguments."

How would PolitiFact judge whether Trump was using more true facts to support his arguments other than by looking at unscientific fact checker ratings?

PolitiFact has made a pattern of this deception.

The occasional admission of non-random, unscientific story selection is perfunctory. It is a fig leaf.

PolitiFact wants readers to judge politicians based on its aggregated "Truth-O-Meter" ratings.

For some unknown reason, though maybe we can guess, the fact checkers think their collected ratings are pretty much accurate judges of character regardless of their departure from the scientific method. That's why PolitiFact over its entire history has implicitly and explicitly encouraged readers to rely on its graphs and charts to judge politicians, and has steadfastly resisted the obligation to attach a disclaimer to those charts and graphs making the supposedly important point that the selection process is not random or scientific.

We call it an obligation because we view it as a requirement for journalists to avoid deliberately deceiving their audiences.

PolitiFact deceives its audience daily in this way without any visible repentance.