Tuesday, September 3, 2019

Fact Check not at PolitiFact Illinois

One of the characteristics of PolitiFact that drags it below its competitors is its penchant for not bothering to fact check what it claims to fact check.

Our example this time comes from PolitiFact Illinois:

From the above, we judge that "Most climate scientists agree" that we have less than a decade to avert a worst case climate change scenario counts as the central claim in need of fact-checking. PolitiFact hints at the confusion it sows in its article by paraphrasing the issue as "Does science say time is running out to stop climate disaster?"

The fact is that time could be running out to stop climate disaster while at the same time (Democrat) Sean Casten's claim could count as entirely false. Casten made a claim about what a majority of scientists believe about a short window of opportunity to avoid a worst-case scenario. And speaking of avoidance, PolitiFact Illinois avoided the meat of Casten's claim in favor of fact-checking its watered-down summary of Casten's claim.

The Proof that Proves Nothing

The key evidence offered in support of Casten was a 2018 report by the United Nations Intergovernmental Panel on Climate Change.

The problem? The report offers no clear evidence showing a majority of climate scientists agree on anything at all, up to and including what Casten claims they believe. In fact, the report only mentions "scientist" or "scientists" once (in the Acknowledgments section):
A special thanks goes to the Chapter Scientists of this Report ...
A fact checker cannot accept that report as evidence of what a majority of scientists believe without strong justification. That justification does not occur in the so-called fact check. PolitiFact Illinois apparently checks the facts using the assumption that the IPCC report would not claim something if a majority of climate scientists did not believe it.

That's not fact-checking.

And More Proof of Nothing

Making this fact-checking farce even more sublime, PolitiFact Illinois correctly found the report does not establish any kind of hard deadline for bending the curve on carbon emissions (bold emphasis added):
Th(e) report said nations must take "unprecedented" actions to reduce emissions, which will need to be on a significantly different trajectory by 2030 in order to avoid more severe impacts from increased warming. However, it did not identify the hard deadline Casten and others have suggested. In part, that’s because serious effects from climate change have already begun.
So PolitiFact did not bother to find out whether a majority of scientists affirm the claim about "less than a decade" (burden of proof, anyone?) and moreover found the "less than a decade" claim was essentially false. We can toss PolitiFact's line about serious effects from climate change already occurring because Casten was talking about a "worst-case scenario."

PolitiFact Illinois rated Casten's claim "Mostly True."

Does that make sense?

Is it any wonder that Independents (nearly half) and Republicans (more than half) think fact checkers favor one side?


Afters

Also worth noting: Where does that "worst-case scenario" phrase come from? Does Casten put it inside quotation marks because he is quoting a source? Or is it a scare quote?

We confirmed, at least, that the phrase does not occur in the IPCC report that supposedly served as Casten's source.

We will not try to explain PolitiFact Illinois' lack of curiosity on this point.

Let PolitiFact Illinois do that.


Update Sept. 4, 2019: We originally neglected to link to the flawed PolitiFact Illinois "fact check." This update remedies that problem.

Sunday, September 1, 2019

PolitiFact founder: "Bias is good"

It wasn't even a year ago that PolitiFact pompously announced it isn't biased, but now PolitiFact founder Bill Adair has muddied the waters by announcing from his lofty perch at Duke University that bias is good.

Doubtless it is important to take Adair's words in context.

We'll certainly try.

Here's the Columbia Journalism Review headline:

Op-ed: Bias is good. It just needs a label.


In context so far: Adair appears to say bias is good if the reader understands it (hence the need for a label).

Adair repeated the same point in the article and then used a graphic to spell out what he's saying:


It's hard not to notice that Adair's graphic appears to concede what we have argued for years here at PolitiFact Bias. Fact-checking is not some kind of objective and scientific pursuit even if we set aside the subjective linear-scale truth ratings. Adair understands fact-checking contains more opinion than does "news analysis," with no other form of journalism closer to "opinion."

Unfortunately Adair does little to distinguish the desirable types of bias he's probably talking about--bias toward truth and democracy, for example--from unhealthy cognitive biases. But at least he gives clear guidance that journalists should appropriately label their work.

Now we just need to find the appropriate label at PolitiFact, right?

PolitiFact is not biased -- here’s why

Okay, great. No problem, right?

Seriously, we're not aware of any prominent acknowledgement of bias labeling at PolitiFact.com.

If such a thing existed, perhaps we should expect to find it on PolitiFact's statement of principles. But we get this instead:
Our ethics policy for PolitiFact journalists

PolitiFact seeks to present the true facts, unaffected by agenda or biases. Our journalists set their own opinions aside as they work to uphold principles of independence and fairness.
Anybody see an expression of the idea "bias is good" in there? We don't.

PolitiFact over its history has encouraged readers to take its biased reporting as objective reporting.

It deceived and continues to deceive its readers by the standard Adair advocates.

Wednesday, August 14, 2019

A PolitiFact gloss on the Michael Brown "murder"

We've been tracking evidence of PolitiFact's look-the-other-way stance on Democrats' campaign rhetoric on race. PolitiFact sees no need to issue a "Truth-O-Meter" rating when Democrats call President Trump a racist, for example.

Now, with Democratic presidential candidates like Kamala Harris and Elizabeth Warren asserting that Michael Brown was murdered, again we see PolitiFact reluctant to apply fact-checking to Democratic Party falsehoods.

Instead of issuing a "Truth-O-Meter" rating for either Democratic Party candidate over their Michael Brown statements, PolitiFact published an absurd PolitiSplainer article.

A Fox News article hits most of the points that we would have emphasized:
The fact-checking website PolitiFact again came under fire for alleged political bias Wednesday after it posted a bizarre article that refused to rule on whether Michael Brown was in fact "murdered" by police officer Darren Wilson in Ferguson, Mo. in 2014, as Democratic presidential candidates Kamala Harris and Elizabeth Warren falsely claimed last week.
Indeed, Fox News emphasizes the key expert opinion from the PolitiFact PolitiSplainer:
Jacobson quoted Jean Brown, a communications professor who focuses on "media representations of African Americans," as saying that the entire question of whether Warren and Harris spread a falsehood was nothing more than an "attempt to shift the debate from a discussion about the killing of black and brown people by police."
The Fox article quotes the Washington Examiner's Alex Griswold asking why the expert opinion from Brown was included in the fact check.

We suggest that the quotation represents the reasoning PolitiFact used in deciding not to issue "Truth-O-Meter" ratings for Harris or Warren.

PolitiFact, per the Joe Biden gaffe, seems interested in truth, not facts.

Sunday, August 4, 2019

Highlights of PolitiFact's Reddit AMA from August 2, 2019

PolitiFact newbie Daniel Funke, former fact check reporter for the International Fact-Checking Network, represented PolitiFact for a Reddit AMA on Aug. 2, 2019.

We always look forward to public Q&A sessions with PolitiFact staff, for they nearly always provide us with material.

Funke stuck with PolitiFact boilerplate material for the most part, even channeling Bill Adair with his answer about PolitiFact's response to critics who suggest PolitiFact is biased.

Funke's chief error, in our view, was his repetition of a false PolitiFact public talking point:
As far as corrections: We're human beings, so we do make mistakes from time to time. That's why we have a corrections process. You can read our full corrections policy, but the bottom line is that we fix the wrong information and note it. If we give a new rating to a fact-check, we archive the old version so people can see exactly what we changed. Everything that gets a correction or an update gets tagged - see all tagged items.
We've pointed out dozens and dozens of mistakes at PolitiFact, and though we've prompted PolitiFact to fix quite a few mistakes, the majority of the time PolitiFact ignores the critique and doesn't bother to fix anything. We tried to get PolitiFact Georgia not to interpret "pistol" as a synonym for "handgun" because revolvers count as handguns but do not count as pistols. No go. The mistake remains enshrined in PolitiFact's "database" of facts. And Funke's recent mistake in using a number PolitiFact found wanting as the deficit figure handed off from Bush to Obama still hasn't been fixed. Nor do we expect PolitiFact to break tradition by fixing it.

PolitiFact fixes mistakes if and only if PolitiFact feels like fixing the mistakes.

So Funke is wrong about the bottom line at PolitiFact. The PolitiFact "database" has more than its share of bad information.

As for archiving the old version of a fact check when the rating changes, contrary to what Funke says readers can't necessarily find the archived version. Here's an example from 2017. The new version contains no link to the old version. A reader would have to figure out how PolitiFact structures its URLs to track down the archived version (assuming there is one).

Finally, Funke repeats the falsehood that "Everything that gets a correction or an update gets tagged," complete with a link to the very incomplete list of corrected items. PolitiFact does not use tags on many of its articles, particularly those that do not feature a rating. Corrections on those articles do not get tagged and do not appear on the list of corrections. Moreover, PolitiFact simply neglects to tag corrected fact checks on occasion.

Apparently it's too much to ask that PolitiFact staffers know what they're talking about when they describe PolitiFact's corrections process.

Saturday, August 3, 2019

PolitiFact: The true half of Cokie Roberts' half truth is President Trump's half truth

Pity PolitiFact.

The liberal bloggers at PolitiFact may well see themselves as neutral and objective. If they see themselves that way, they are deluded.

Latest example:


PolitiFact's Aug. 3, 2019 fact check of President Trump finds he correctly said the homicide rate in Baltimore is higher than in some countries with a significant recent history of violence. But it wasn't fair of Trump to compare a city to a country for a variety of reasons, experts said.

So "Half True," PolitiFact said.

The problem?

Here at PolitiFact Bias we apparently remember what PolitiFact has done in the past better than PolitiFact remembers it. We remembered PolitiFact giving (liberal) pundit Cokie Roberts a "Half True" for butchering a comparison of the chance of being murdered in New York City compared to Honduras.




Roberts was way off on her numbers (to the point of being flatly false about them, we would say), but because she was right that the chance of getting murdered is greater in Honduras than in New York City, PolitiFact gave Roberts a "Half True" rating.

We think if Roberts' numbers are wrong (false) and her comparison is "Half True" because it isn't fair to compare a city to a country then Roberts seems to deserve a "Mostly False" rating.

That follows if PolitiFact judges Roberts by the same standard it applies to Mr. Trump.

But who are we kidding?

PolitiFact often fails to apply its standards consistently. Republicans and conservatives tend to receive the unfair harm from that inconsistency. Mr. Trump, thanks in part to his earned reputation for hyperbole and inaccuracy, tends to receive perhaps more unfair harm than anybody else.

It is understandable that fact checkers allow confirmation bias to influence their ratings of Mr. Trump.

It's also fundamentally unfair.

We think fact checkers should do better.

Thursday, August 1, 2019

That Time PolitiFact Used Facebook to Amplify a Misleading Message on Fiscal Responsibility


We wrote about PolitiFact's awful fact check of a tweet that used deficit numbers at the start and end of presidential terms in office to show it's wrong to think that Democrats cause deficits.

PolitiFact's Facebook page took the misleading nature of that fact check and amplified it to the max with a false headline:


Contrary to the headline, the fact check does not tell how the past five presidents affected the deficit. Instead, the fact check pretends to address the accuracy of a tweet that suggests deficit numbers at the start and end of presidential administrations tell us which party causes deficits. That use of deficit numbers serves as an exceptionally poor metric, a fact PolitiFact barely hints at in giving the tweet a "Mostly True" rating.

The tweet falsely suggests those deficit numbers give us a reliable picture of party fiscal responsibility (and the way presidents affect the deficit), and PolitiFact amplifies those misleading messages.

It's almost like they think that's their job.

Tuesday, July 30, 2019

PolitiFact's Inconsistency on True-But-Misleading Factoids

People commonly mislead other people using the truth. Fact checkers have recognized this with various kinds of "True but False" designations. But the fact checkers tend to stink at applying consistent rules to the "True but False" game by creating examples in the "True but False but True" genre.

PolitiFact created a classic in the "True but False" genre for Sarah Palin (John McCain's pick for vice presidential nominee) years ago. Palin made a true statement about how U.S. military spending ranks worldwide as a measure of GDP. PolitiFact researched the ways in which that truth misled people and gave Palin a "Mostly False" rating.

On July 29, 2019, PolitiFact gave a great example of the "True but False but True" genre with a fact check of a tweet by Alex Cole (side note: This one goes on the report card for "Tweets" instead of a report card for "Alex Cole"):


PolitiFact rated Cole's tweet "Mostly True." But the tweet has the same kind of misleading features that led PolitiFact to give Palin a "Mostly False" rating in the example above. PolitiFact docked Palin for daring to compare U.S. defense spending as a percentage of GDP to very small countries as well as those experiencing strife.

But who thinks the deficit at the start and end of an administration serves as a good measure of party fiscal discipline?

Yet that's the argument in Cole's tweet, and it gets a near-total pass from PolitiFact.


And this isn't even one of those situations where PolitiFact focused on the numbers to the exclusion of the underlying argument. PolitiFact amplified Cole's argument by repeating it.

Note PolitiFact's lead:
A viral post portrays Democrats, not Republicans, as the party of fiscal responsibility, with numbers about the deficit under recent presidents to make the case.
PolitiFact sends out the false message that the above argument is "Mostly True."

That's ridiculous. For starters, the deficit is best measured as a percentage of GDP. Also, presidents do not have great control over the rise and fall of deficits. PolitiFact pointed out that second factor but without giving it the weight it should have had in undercutting Cole's argument. After all, the tweet suggests the presidents drove deficit changes without any hint of any other explanation.

Yes, this is the same fact-checking operation that laughably assured us back in November 2018 that "PolitiFact is not biased."

PolitiFact could easily have justified giving Cole the same treatment it gave Palin. But it did not. And this type of scenario plays out repeatedly at PolitiFact, with conservatives getting the cold shoulder from PolitiFact's star chamber.

Whether or not the liberal bloggers at PolitiFact are self-aware to the point of seeing their own bias, it comes out in their work.


Afters

Hilariously, in this article PolitiFact dinged the deficit tweet for using a figure of $1.2 trillion for the end of the George W. Bush presidency:
"(George W.) Bush 43 took it from 0 to 1.2 trillion." This is in the ballpark. Ignoring the fact that he actually started his presidency with a surplus, Bush left office in 2009 with a federal deficit of roughly $1.41 trillion.
Why is it funny?

It's funny because one of the PolitiFact articles cited in this one prefers the $1.2 trillion figure over the $1.4 trillion figure:

The Great Recession hit hard in 2008 and grew worse in 2009. In that period, the unemployment rate doubled from about 5 percent to 10 percent. With Democrats in charge of both houses of Congress and the White House, Washington passed a stimulus package that cost nearly $190 billion, according to the Congressional Budget Office. That included over $100 billion in new spending and a somewhat smaller amount in tax cuts, about $79 billion in fiscal year 2009.

George W. Bush was not in office when those measures passed. So a more accurate number for the deficit he passed on might be closer to $1.2 trillion.
But it's just fact-checking, so inaccuracy is okay so long as it's in the service of a desirable narrative.
Monday, July 29, 2019

Reporting on the Mueller Report from the Liberal Bubble

PolitiFact's treatment of things Mueller has fit well with its left-leaning reputation.

A PolitiFact fact check from July 24, 2019 serves as our example.


We would first draw the reader's attention to the way PolitiFact altered Rep. Ratcliffe's claim. Ratcliffe said Mueller did not follow the special counsel rules. Not following rules may take place through omission or by elaborating on what the rules stipulate. But PolitiFact says Ratcliffe claimed Mueller broke the rules.

We think it's fairly clear that elaborating on the rules counts as failing to follow the rules. It's less clear that elaborating on the rules counts as breaking the rules.

So right off the bat, PolitiFact is spinning Ratcliffe's claim into a straw man that is more easily attacked.

Missing the Point?

Rep. Ratcliffe was repeating a point pretty familiar to conservatives, that the Mueller report failed to follow the special prosecutor statute because Mueller punted on deciding whether to recommend prosecution for obstruction of justice. Conservative pundit and legal expert Andrew McCarthy, for example, has written on the topic.

It's hard to see how PolitiFact's fact check addresses a position like McCarthy's.

PolitiFact contacted three legal experts for comment. But only Mark Osler (University of St. Thomas) was quoted on Ratcliffe's key issue:
Federal regulations say, "At the conclusion of the Special Counsel's work, he or she shall provide the Attorney General with a confidential report explaining the prosecution or declination decisions reached by the Special Counsel."

"It clearly includes declinations, which is taking no action," Osler said.
We humbly submit to the expert Osler that a declination is not merely a lack of action. Declination, in context, is a decision not to prosecute. An explanation of the Special Counsel's decision not to prosecute meets the requirements of the statute. But an unexplained decision not to decide whether to prosecute should not meet the requirements, even though it is a lack of action.

And, hypothetically, taking no action at all, as by not filing the report, is a lack of action but does not satisfy the statute.

A July 24, 2019 article in the Washington Post helps make clear that Mueller pretty much declined to spell out why he declined to recommend prosecution for obstruction of justice:
John Yoo, a former top official in the George W. Bush Justice Department, said he found Mueller’s explanation “rather vague and somewhat mysterious,” and that he may have felt he should defer to the attorney general.

“Like everyone else, I have been trying to infer why he did what he did,” Yoo said.

But Mueller offered little elaboration on his reasoning as he was pressed Wednesday by lawmakers in both parties.
Again, the declination description required in the statute concerns the decision not to prosecute, not the decision not to explain the decision not to prosecute. Lack of action is not an explanation.

PolitiFact's Big Whiff

PolitiFact showed the true quality of its fact-checking by apparently knowing nothing about widely published reasoning like McCarthy's. It's the Bubble!

Check out this faux pas in PolitiFact's summary:
We found no legal scholar who agreed with Ratcliffe.
PolitiFact could not find articles by Andrew McCarthy?

Couldn't find the comments by David Dorsen in this Newsweek article?

Couldn't find this piece by Alan Dershowitz for The Hill?

Trust fact checkers? Why?