Thursday, December 12, 2019

William Barr, PolitiFact and the biased experts game

Is it okay for fact checkers to rely on biased experts for their findings?

Earlier this year, Facebook restricted distribution of a video by pro-life activist Lila Rose. Rose's group complained the fact check was biased. Facebook relied on the International Fact-Checking Network to investigate. The investigator ruled (very dubiously) that the fact check was accurate but that the fact checker should have disclosed the bias of experts it cited:
The failure to declare to their readers that two individuals who assisted Science Feedback, not in writing the fact-check but in reviewing the evidence, had positions within advocacy organizations, and the failure to clarify their role to readers, fell short of the standards required of IFCN signatories. This has been communicated to Science Feedback.
Perhaps it's fine for fact checkers to rely on biased experts so long as those experts do not hold positions in advocacy organizations.

Enter PolitiFact and its December 11, 2019 fact check of Attorney General William Barr.

The fact check itself hardly deals with the substance of Barr's claim that the "Crossfire Hurricane" investigation of possible collusion between the Trump campaign and Russia was started on the thinnest of evidence. Instead, PolitiFact sticks with calling the decision to investigate "justified" by the Inspector General's report while omitting the report's observation that the law sets a low threshold for starting an investigation (bold emphasis added).
Additionally, given the low threshold for predication in the AG Guidelines and the DIOG, we concluded that the FFG information, provided by a government the United States Intelligence Community (USIC) deems trustworthy, and describing a first-hand account from an FFG employee of a conversation with Papadopoulos, was sufficient to predicate the investigation. This information provided the FBI with an articulable factual basis that, if true, reasonably indicated activity constituting either a federal crime or a threat to national security, or both, may have occurred or may be occurring. For similar reasons, as we detail in Chapter Three, we concluded that the quantum of information articulated by the FBI to open the individual investigations on Papadopoulos, Page, Flynn, and Manafort in August 2016 was sufficient to satisfy the low threshold established by the Department and the FBI.
The "low threshold" is consistent with Barr's description of "thinnest of suspicions" in the context of prosecutorial discretion and the nature of the event that supposedly justified the investigation (the Papadopoulos caper)*.

But in this post we will focus on the experts PolitiFact cited.

Rosa Brooks

Rosa Brooks, professor of law and policy at Georgetown University, told us that Barr’s assessment that the suspicions were thin "appears willfully inaccurate."

"The report concluded precisely the opposite," she said. "The IG report makes it clear that the decision to launch the investigation was justified."
If PolitiFact were brazen enough, it could pick out Brooks as a go-to (biased) expert based on her Twitter retweets from Dec. 10, 2019.

Brooks' tweets also portray her as a Democrat voter. So does her pattern of political giving.

Jennifer Daskal

Jennifer Daskal, professor of law at American University, agreed. "Barr’s statement is at best a misleading statement, if not a deliberate distortion, of what the report actually found," she said.
Daskal's Internet history shows little to suggest she pre-judged her view on Barr's statement. On the other hand, it seems pretty plain she prefers the presidential candidacy of Pete Buttigieg (one example among several). Plus Daskal has tended to donate politically to Democrats.

Robert Litt

PolitiFact contacted Litt for his expert opinion but did not mention him in the text of the fact check.

We deem it unlikely PolitiFact tabbed Litt to counterbalance the leftward lean of Brooks and Daskal. Litt was part of the Obama administration, and his appointment carried an unusual political dimension: Litt failed his background check but was installed in the Clinton Justice Department in a roundabout way.

Litt, like Brooks and Daskal, gives politically to Democrats.

So what's the problem?

We think it's okay for PolitiFact to cite experts who lean left and donate politically to the Democratic Party. That's not the problem.

The problem is the echo-chamber effect PolitiFact achieves by choosing a small pool of experts all of whom lean markedly left. As we've noted before, that's no way to establish anything akin to an expert consensus. But it serves as an excellent method for excluding or marginalizing contrary arguments.

It's not like those are hard to find. It seems PolitiFact simply has no interest in them.

*It's worth noting that the information Papadopoulos shared with the Australian, Downer, came in turn from the mysterious Joseph Mifsud.

PolitiFact's "Lie of the Year" farce, 2019 edition

From the start, PolitiFact's "Lie of the Year" award has counted as a farce.


Why? Because it has supposedly neutral and unbiased fact-checkers offering their collective editorial opinion on the most impactful falsehood from the past year.

How better to illustrate their neutrality than by offering an opinion?

PolitiFact's actions in choosing its "Lie of the Year" have borne out the farcical nature of the exercise, with farces including naming true statements as the "Lie of the Year" and the immortal ObamaCare bait and switch.

On With the Latest Farce

This year we quickly noticed that all of the nominated falsehoods received "Pants on Fire" ratings. That's a first. The nominees usually received either a "False" or a "Pants on Fire" rating in the past, with the ObamaCare bait and switch counting as the lone exception.

Next, we noticed that of the three nominations connected to Democratic Party politicians only one came from PolitiFact National. PolitiFact California and PolitiFact Texas each scored one of those nominations.

No state PolitiFact operation had a Republican subject nominated, and President Trump received three of the four.

Is This the Weakest Field Ever?

PolitiFact says it awards its "Lie of the Year" to "the most impactful falsehood."

We don't see much impact on PolitiFact's list of nominations. We'll go through and handicap the list based on PolitiFact's claimed criterion.

But first we remind readers that PolitiFact has a history of not limiting its choice to an item from its own list of nominations. This is a good year in which to pull that stunt.

Says Nancy Pelosi diverted "$2.4 billion from Social Security to cover impeachment costs."
— Viral image on Wednesday, October 9th, 2019 in a Facebook post
Anybody remember any real-world impact from this claim? We don't. 0

"The first so-called second hand information ‘Whistleblower’ got my phone conversation almost completely wrong."

— Donald Trump on Saturday, October 5th, 2019 in a tweet
How about this footnote from the Trump impeachment parade? The impact of this supposed falsehood (isn't it closer to opinion than a specific statement of fact?), if any, comes from its symbolic representation of the case for impeachment. The appeal of this choice comes from the ability of the media to clap itself on the back for its impeachment reporting. 5

Between 27,000 and 200,000 Wisconsinites were "turned away" from the polls in 2016 due to lack of proper identification.

— Hillary Clinton on Tuesday, September 17th, 2019 in a speech at George Washington University
Again, what was the real-world impact of Clinton's claim? If PolitiFact bothered to tie together the exaggerated election interference claims from the Democratic presidential candidates plus failed Democratic nominee for the governorship of Georgia, Stacey Abrams, then maybe PolitiFact could reasonably say the collected falsehoods carried some impact. 1

Originally "almost all models predicted" Dorian would hit Alabama.

— Donald Trump on Wednesday, September 4th, 2019 in a tweet
Although we're not aware of any real-world impact from this Trump tweet, this one had a pretty big impact in the world of journalism (not to be mistaken for the real world).

That may well prove enough to give this claim the win. The press made a huge deal of this presidential tweet, and the issue eventually led to accusations Trump broke the law by altering a weather report (not kidding).

Were the media correct that Trump inspired disproportionate worry in the state of Alabama? We would not expect the media to offer strong support for that proposition. Thanks, Washington Post, for providing us an exemplar of our expectations. 6

"The vast majority" of San Francisco’s homeless people "also come in from — and we know this — from Texas. Just (an) interesting fact."

— Gavin Newsom on Sunday, June 23rd, 2019 in an interview on "Axios on HBO"
Gavin Newsom isn't well known nationally, and his statement had negligible real-world impact, by our estimation. This nomination is another tribute to PolitiFact's difficulty in finding falsehoods from Democrats. If PolitiFact had rated any of the many claims from Stacey Abrams that she won the Georgia election then we might have had a legitimate contender from the Democrats. 1

U.S. tariffs on China are "not hurting anybody" in the United States.

— Peter Navarro on Sunday, August 18th, 2019 in an interview
This "gotcha" fact check ignores much of the context of the Navarro interview, especially Navarro's point about China's major devaluation of its currency. That aside, despite media attempts to trumpet the harm to American consumers Americans seem mostly okay with whatever harm they're supposedly receiving.

Tariffs and the trade war make up a big issue, but it's sad if this shallow fact check treatment had any real-world impact. 2

"Remember after the shooting in Las Vegas, (President Donald Trump) said, ‘Yeah, yeah, we’re going to ban the bump stocks.’ Did he ban the bump stocks? No."

— Kirsten Gillibrand on Sunday, June 2nd, 2019 in a Fox News town hall
Ah. The fact check that brought an end to Kirsten Gillibrand's hopes for the Democratic Party's presidential nomination.

Just kidding. Gillibrand never caught fire in the Democratic primary and there's no reason to suppose her statement criticizing Trump had anything to do with it. We doubt many are familiar with either Gillibrand or the fact check. Real world impact? Not so much. But at least PolitiFact National can take credit for this fact check of a Democrat. That's something. 0

Video shows Nancy Pelosi slurring her speech at a public event.
— Politics WatchDog on Wednesday, May 22nd, 2019 in a Facebook post
The video that misled thousands into thinking Speaker Pelosi slurred her speech in public. 1

"There has never been, ever before, an administration that’s been so open and transparent."
— Donald Trump on Monday, May 20th, 2019 in remarks at the White House
What was the context of this Trump claim?

PolitiFact used a short video snippet posted to Twitter as its primary source. PolitiFact offered no surrounding context. Is everyone good with that?

Can a potentially hyperbolic statement lifted out of context serve as the most significant falsehood of 2019?

This is PolitiFact we're talking about. 4

Says John Bolton "fundamentally was a man of the left."
— Tucker Carlson on Tuesday, September 10th, 2019 in a TV segment
Fired Trump national security adviser John Bolton might have better name recognition than Kirsten Gillibrand. Not that we'd bet money on it or anything. Carlson was offering the opinion that Bolton's willingness to use government power (particularly military power) marked him as a progressive.

So what? Even if we suppose that political alignment counts as a matter of fact, who cares? 1


Two Trump claims have a chance of ending up with the "Lie of the Year" award. But the weak field makes us think there's an excellent chance PolitiFact will do what it has done in the past by settling on a set of claims or a topic that failed to make its list of nominees.

PolitiFact's list offers few nominees with significant political impact.

Tuesday, December 3, 2019

PolitiFact ratings aren't scientific--or are they?

We stand bemused by PolitiFact's attempt to straddle the fence regarding its aggregated "Truth-O-Meter" ratings.

On the one hand, as PolitiFact Editor Angie Drobnic Holan told us a few weeks ago, "It’s important to note that we don’t do a random or scientific sample."

On the other hand, we have this eye-popping claim Holan sent out via email today (also published to the website):
Trump remains a unique figure in American politics and at PolitiFact. There’s no one we’ve fact-checked as many times who has been as consistently wrong. Out of 738 claims we’ve fact-checked as of this writing, more than 70% have been rated Mostly False, False or Pants on Fire. After four years of watching Trump campaign and govern, we see little to no increase in his use of true facts and evidence to support his arguments.
Even though it is supposedly important to note that PolitiFact doesn't do a random or scientific sample, Holan makes no mention of that important caveat in her email. Instead, she makes it appear to the reader as though Trump's "Truth-O-Meter" record offers a reasonable basis for judging whether Trump has increased "his use of true facts and evidence to support his arguments."

How would PolitiFact judge whether Trump was using more true facts to support his arguments other than by looking at unscientific fact checker ratings?
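The statistical point is easy to illustrate with a toy simulation. The numbers below are our own assumptions for the sake of illustration, not PolitiFact data: a hypothetical politician makes 1,000 statements, 60 percent of them true, and a fact checker using the "sounds wrong" criterion checks false-sounding claims far more often than true-sounding ones. The share of falsehoods among the checked claims then says little about the politician's overall record.

```python
import random

random.seed(42)

# Hypothetical: 1,000 statements, 60% of them true (True = accurate).
statements = [random.random() < 0.6 for _ in range(1000)]
true_share_overall = sum(statements) / len(statements)

# Non-random selection: check a true-sounding claim 10% of the time,
# a false-sounding claim 70% of the time. These selection rates are
# illustrative assumptions, not PolitiFact's actual editorial process.
checked = [s for s in statements if random.random() < (0.1 if s else 0.7)]
true_share_checked = sum(checked) / len(checked)

print(f"Share true overall:       {true_share_overall:.0%}")
print(f"Share true among checked: {true_share_checked:.0%}")
```

Under these assumptions the politician's "report card" shows mostly falsehoods even though most of his statements are true, which is why aggregating non-randomly selected ratings cannot answer a question about a politician's overall use of true facts.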

PolitiFact has made a pattern of this deception.

The occasional admission of non-random, unscientific story selection is perfunctory. It is a fig leaf.

PolitiFact wants readers to judge politicians based on its aggregated "Truth-O-Meter" ratings.

For some unknown reason, though maybe we can guess, the fact checkers think their collected ratings are pretty much accurate judges of character regardless of their departure from the scientific method. And that's why PolitiFact over its entire history has implicitly and explicitly encouraged readers to rely on its graphs and charts to judge politicians. And has steadfastly resisted the obligation to attach a disclaimer to those charts and graphs making the supposedly important point that the selection process is not random or scientific.

We call it an obligation because we view it as a requirement for journalists to avoid deliberately deceiving their audiences.

PolitiFact deceives its audience daily in this way without any visible repentance.

Wednesday, November 20, 2019

PolitiFact as Rumpelstiltskin.

“Round about, round about,
Lo and behold!
Reel away, reel away,
Straw into gold!”

PolitiFact's Nov. 19, 2019 fact check of something President Donald Trump said on the Dan Bongino Show gives us yet another example of a classic fact checker error, the mistake of interpreting ambiguous statements as clear statements.

Here's PolitiFact's presentation of a statement it found worthy of a fact check:
In an interview with conservative show host Dan Bongino, Trump said a false rendition of that call by House Intelligence chairman Adam Schiff, D-Calif., forced him to release the readout of that call.

"They never thought, Dan, that I was going to release that call, and I really had no choice because Adam Schiff made up a call," Trump said Nov. 15. "He said the president said this, and then he made up a call."

The problem with Trump’s statement is that Schiff spoke after the White House released the memo of the phone call, not before.
 Note that PolitiFact finds a timeline problem with Trump's claim.

But also note that Trump makes no clear statement regarding a timeline. If Trump said "I released the transcript after Schiff did his 'parody' version of the telephone call," then he would have established an order of events. Trump's words imply an order of events, but it is not a hard logical implication (A, therefore B).

PolitiFact treats the case exactly like a hard implication.

Here's why that's the wrong approach.

First, significant ambiguity should always slow a fact-checker's progress toward an interpretation.

Second, Trump gave a speech on Sept. 24, 2019 that announced the impending release of the transcript (memorandum of telephone conversation). The "transcript" was released on Sept. 25. Schiff gave his "parody" account of the call the next day, on Sept. 26. And Trump responded to Schiff's "parody" version of his call on Sept. 30 during an event honoring the late Justice Antonin Scalia:
Adam Schiff — representative, congressman — made up what I said.  He actually took words and made it up.  The reason is, when he saw my call to the President of Ukraine, it was so good that he couldn’t quote from it because it — there was nothing done wrong.  It was perfect.
PolitiFact's interpretation asks us to believe that Trump either forgot what he said on Sept. 30 or else deliberately decided to reverse the chronology.

What motive would support that decision? Is one version more politically useful than the other?

It's not uncommon for people to speak of "having no choice" based on an event subsequent to that choice. The speaker means that the choice would have had to take place eventually.

When a source makes two claims that touch the same subject but differ in content, the following rule applies: use the clearer statement to make sense of the less clear one.

Fact checkers who manufacture certitudes out of equivocal language give fact-checking a bad name.

They are Rumpelstiltskins, trying to spin straw into gold.


We would draw attention to a parallel highlighted at (Bryan's) Zebra Fact Check last month.

During a podcast interview Hillary Clinton used equivocal language in saying "they" were grooming Democratic Party presidential hopeful Tulsi Gabbard as a third-party candidate to enhance Trump's chances of winning the 2020 election.

No fact checker picked out Clinton's claim for a fact check. And that's appropriate, because the identity of "they" was never perfectly clear. Clinton probably meant the Russians, but "probably" doesn't make it a fact.

In that case, the fact checkers picked on those who interpreted Clinton to mean the Russians were grooming Gabbard (implicitly finding that Clinton's ambiguity clearly meant "Republicans").

Fact checkers have no business doing such things.

Until fact checkers can settle on a consistent approach to their craft, we justifiably view it as a largely subjective enterprise.

Monday, November 18, 2019

We want Bill Adair subjected to the "Flip-O-Meter"

It wasn't that long ago we reported on Bill Adair's article for Columbia Journalism Review declaring "Bias is good" along with a chart indicating fact-check journalism has more opinion to it than either investigative reporting or news analysis.

Yet WRAL, in announcing its new partnership with PolitiFact North Carolina, quoted Adair saying PolitiFact is unbiased:
“What is important about PolitiFact is not just that it’s not biased,” Adair said, “but that we show our work and that we show all of our sources.”
Naturally we cannot allow that to pass. We used WRAL's contact form to reach out to the writer of the article, Ashley Talley.

We pointed out the discrepancy between what Talley reported from Adair and what Adair wrote for Columbia Journalism Review. We suggested somebody should fact check Adair.

Next we'll be contacting Paul Specht of PolitiFact North Carolina over this quotation:
“One thing I love about PolitiFact is that the format is very structured and it's not up to me to decide what is or isn't true,” said Paul Specht, WRAL’s PolitiFact reporter who has been covering local, state and national politics for years. “It's up to me to go do the research and then it's up to the research to tell us what is true.”
We're not sure how that's supposed to square with Adair's declaration from a few years ago that "Lord knows the decision about a Truth-O-Meter rating is entirely subjective."

What changed?

In addition to its "Truth-O-Meter" PolitiFact publishes "Flip-O-Meter" items.

We'd like to see Adair on the Flip-O-Meter.

Friday, November 15, 2019

PolitiFact editor: "It’s important to note that we don’t do a random or scientific sample"

As we have mentioned before, we love it when PolitiFact's movers and shakers do interviews. It nearly guarantees us good material.

PolitiFact Editor Angie Drobnic Holan appeared on Galley by CJR (Columbia Journalism Review) with Mathew Ingram to talk about fact-checking.

During the interview Ingram asked about PolitiFact's process for choosing which facts to check (bold emphasis added):
One question I've been asking many of our interview guests is how they choose which lies or hoaxes or false reports to fact-check when there are just so many of them? And do you worry about the possibility of amplifying a fake news story by fact-checking it? This is a problem Whitney Phillips and Joan Donovan have warned about in interviews I've done with them about this topic.
Great questions! We use our news judgement to decide what to fact-check, with the main criteria being that it’s a topic in the news and it’s something that would make a regular person say, "Hmmm, I wonder if that’s true." If it sounds wrong, we’re even more eager to do it. It’s important to note that we don’t do a random or scientific sample.
It's important, Holan says, to note that PolitiFact does not do a random or scientific sample when it chooses the topics for its fact check stories.

We agree wholeheartedly with Holan's statement in bold. And that's an understatement. We've been harping for years on PolitiFact's failure to make its non-scientific foundations clear to its audience. And here Holan apparently agrees with us by saying it's important.

How important is it?

PolitiFact's statement of principles says PolitiFact uses news judgment to pick out stories, and also mentions the "Is that true?" standard Holan mentions in the above interview segment. But what you won't find in PolitiFact's statement of principles is any kind of plain admission that its process is neither random nor scientific.

If it's important to note those things, then why doesn't the statement of principles mention it?

At PolitiFact, it's so important to note that its story selection is neither random nor scientific that, searching PolitiFact's domain for "random" AND "scientific," no example from three pages of Google hits has anything to do with PolitiFact's method for story selection.

And despite commenters on PolitiFact's Facebook page commonly interpreting candidate report cards as representative of all of a politician's statements, Holan insists "There's not a lot of reader confusion" about it.

If there's not a lot of reader confusion about it, why say it's important to note that the story selection isn't random or scientific? People supposedly already know that.

We use the tag "There's Not a Lot of Reader Confusion" on occasional stories pointing out that people do suffer confusion about it because PolitiFact doesn't bother to explain it.

Post a chart of collected "Truth-O-Meter" ratings and there's a good chance somebody in the comments will extrapolate the data to apply to all of a politician's statements.

We say it's inexcusable that PolitiFact posts its charts without making their unscientific basis clear to readers.

They just keep right on doing it, even while admitting it's important that people realize a fact about the charts that PolitiFact rarely bothers to explain.

Monday, October 14, 2019

Remember back when it was False to say Nixon was impeached?

I remember reading a story years back about a tire company that enterprisingly tried to drum up business by sending out a team to spread roofing nails on the local roads.

Turns out there's a version of that technique in PolitiFact's fact-checking tool box.

Nixon was Never Impeached

Back on June 13th, 2019 PolitiFact's PunditFact declared it "False" that Nixon was impeached. PunditFact said "Nixon was never officially impeached." We're not sure what would count as "unofficially impeached." We're pretty sure it's the same as saying Nixon was not impeached.

But that was way back in June. Over three months have passed. And it's now sufficiently true that Nixon was impeached so that PolitiFact can spread the idea on Twitter and write an impeachment PolitiSplainer that refers multiple times to the Nixon impeachment.

Nixon was Impeached


Is Nixon a good example to include with Johnson and Clinton (let alone Trump) if Nixon wasn't impeached?
More than anything, the procedural details are derived from historical precedent, from the impeachment of President Andrew Johnson in the 1860s to that of President Richard Nixon in the 1970s and President Bill Clinton in the 1990s.

Got it? The impeachment of President Nixon. Because Nixon was impeached, right?
Experts pointed to a variety of differences between the Trump impeachment process and those that went before.

The differences begin with the substance of the charges. All prior presidential impeachments have concerned domestic issues — the aftermath of the Civil War in Johnson’s case, the Watergate burglary and coverup under Nixon, and the Monica Lewinsky affair for Clinton.
Got it? Nixon was impeached over the Watergate burglary. Because Nixon was impeached, right?
The impeachments of both Nixon and Clinton did tend to curb legislative action by soaking up all the attention in Washington, historians say.
Obviously a fact-checker will not refer to "the impeachments of both Nixon and Clinton" if Nixon was not impeached. Therefore, Nixon was impeached. Right?
Some congressional Republicans have openly supported Trump’s assertion that the allegations against Trump are dubious. This contrasts with the Nixon impeachment, when "on both sides there was a pretty universal acknowledgement that the charges being investigated were very important and that it was necessary to get to the bottom of what happened," said Frank O. Bowman III, a University of Missouri law professor and author of the book, "High Crimes and Misdemeanors: A History of Impeachment for the Age of Trump."
Obviously a fact-checker will only draw a parallel to the Nixon impeachment if Nixon was impeached. Therefore Nixon was impeached. Right?
Trump is facing possible impeachment about a year before running for reelection. By contrast, both Nixon and Clinton had already won second terms when they were impeached. (Johnson was such an outcast within his own party that he would have been an extreme longshot to win renomination, historians say.)
Got it? Nixon and Clinton had already won second terms when they were impeached. Because Nixon was impeached, right?
On the eve of impeachment for both Nixon and Clinton, popular support for impeachment was weak — 38% for Nixon and 29% for Clinton, according to a recent Axios analysis. (There was no public opinion polling when Johnson was president.)
Got it? "On the eve of impeachment for both Nixon and Clinton," because a fact checker doesn't refer to the eve of the Nixon impeachment if there was no Nixon impeachment.

Is there a Christmas Eve if there's no Christmas?

That's six times PolitiFact referred to the Nixon impeachment in just one PolitiSplainer article. And about three months after PolitiFact's PunditFact said Nixon was not impeached.

Want a seventh? We've got a seventh:
During Nixon’s impeachment, "people counted on the media to serve as arbiters of truth," he said. "Obviously, we don’t have that now."
 "During Nixon's impeachment" directly implies Nixon was impeached. Seven.

We've been going in order, too.

(Nixon Wasn't Impeached)

But behold! Context at last!
The uncertainty about Senate process stems from the rarity of the process. Nixon resigned before the House could vote to send articles to the Senate, leaving just one precedent — Clinton’s trial — in the past century and a half.
Admittedly, that's not PolitiFact saying "Nixon was not impeached." On the other hand, it's PolitiFact directly implying Nixon was not impeached. Blink and you might miss it amidst all the talk about the Nixon impeachment.

Can we get to eight after that bothersome bit of context?

Nixon was Impeached, Continued 

We can:
The impeachments of both Nixon and Clinton did tend to curb legislative action by soaking up all the attention in Washington, historians say.
We're curious which historians PolitiFact talked to who explicitly referred to the impeachment of Nixon. There are no quotations in the text of the PolitiSplainer that would support this claim about what historians say.

PolitiFact flirted with nine in the next paragraph. We're capping the count at eight.

In summary, we'll just say this: If there's a sense of "impeachment" that doesn't mean literally being impeached by the House and standing trial in the Senate, then Jimmy Kimmel is entitled to that understanding when he says Nixon was the last president to be impeached.

Contrary to PolitiFact's framing, Kimmel was wrong not because Nixon was not impeached. Kimmel was wrong because President William J. Clinton was the last president to be impeached. There was never any need for PunditFact to focus on the fact Nixon wasn't impeached, unless it was to avoid emphasis on Clinton.

This all works out very well for PolitiFact. PolitiFact does what it can to spread the misperception Nixon was impeached. And then it can draw clicks to its PunditFact fact check showing that claim false.

Just like dropping roofing nails on the road.

Tuesday, September 3, 2019

Fact Check not at PolitiFact Illinois

One of the characteristics of PolitiFact that drags it below its competitors is its penchant for not bothering to fact check what it claims to fact check.

Our example this time comes from PolitiFact Illinois:

From the above, we judge that "Most climate scientists agree" that we have less than a decade to avert a worst case climate change scenario counts as the central claim in need of fact-checking. PolitiFact hints at the confusion it sows in its article by paraphrasing the issue as "Does science say time is running out to stop climate disaster?"

The fact is that time could be running out to stop climate disaster while at the same time (Democrat) Sean Casten's claim could count as entirely false. Casten made a claim about what a majority of scientists believe about a short window of opportunity to avoid a worst-case scenario. And speaking of avoidance, PolitiFact Illinois avoided the meat of Casten's claim in favor of fact-checking its watered-down summary of Casten's claim.

The Proof that Proves Nothing

The key evidence offered in support of Casten was a 2018 report by the United Nations Intergovernmental Panel on Climate Change.

The problem? The report offers no clear evidence showing a majority of climate scientists agree on anything at all, up to and including what Casten claims they believe. In fact, the report only mentions "scientist" or "scientists" once (in the Acknowledgments section):
A special thanks goes to the Chapter Scientists of this Report ...
A fact checker cannot accept that report as evidence of what a majority of scientists believe without strong justification. That justification does not occur in the so-called fact check. PolitiFact Illinois apparently checks the facts using the assumption that the IPCC report would not claim something if a majority of climate scientists did not believe it.

That's not fact-checking.

And More Proof of Nothing

Making this fact-checking farce even more sublime, PolitiFact Illinois correctly found the report does not establish any kind of hard deadline for bending the curve on carbon emissions (bold emphasis added):
Th(e) report said nations must take "unprecedented" actions to reduce emissions, which will need to be on a significantly different trajectory by 2030 in order to avoid more severe impacts from increased warming. However, it did not identify the hard deadline Casten and others have suggested. In part, that’s because serious effects from climate change have already begun.
So PolitiFact did not bother to find out whether a majority of scientists affirm the claim about "less than a decade" (burden of proof, anyone?) and moreover found the "less than a decade" claim was essentially false. We can toss PolitiFact's line about serious effects from climate change already occurring because Casten was talking about a "worst-case scenario."

PolitiFact Illinois rated Casten's claim "Mostly True."

Does that make sense?

Is it any wonder that Independents (nearly half) and Republicans (more than half) think fact checkers favor one side?


Also worth noting: Where does that "'worst-case scenario'" phrase come from? Does Casten put it inside quotation marks because he is quoting a source? Or is it a scare quote?

We confirmed, at least, that the phrase does not occur in the IPCC report that supposedly served as Casten's source.

We will not try to explain PolitiFact Illinois' lack of curiosity on this point.

Let PolitiFact Illinois do that.

Update Sept. 4, 2019: We originally neglected to link to the flawed PolitiFact Illinois "fact check." This update remedies that problem.