Sunday, June 21, 2020

Trump again tries using hyperbole without a license

President Donald Trump said nobody had heard of "Juneteenth," the name given to a day many use to commemorate the end of U.S. slavery, until he popularized it. So PolitiFact fact-checked whether it was true that nobody had heard of it.




The result was a "Pants on Fire" rating. PolitiFact said millions of people knew about Juneteenth before Trump scheduled a campaign rally for that day.

PolitiFact cited the Wall Street Journal for its quotation of Trump. Here's how PolitiFact presented it to readers:

President Donald Trump took credit for boosting awareness of Juneteenth, a day that marks the end of slavery in America.

"I did something good: I made Juneteenth very famous," Mr. Trump said, in a Wall Street Journal interview. "It’s actually an important event, an important time. But nobody had ever heard of it."

PolitiFact claims in its statement of principles it recognizes the literary technique of hyperbole (bold emphasis added):

In deciding which statements to check, we consider these questions:

• Is the statement rooted in a fact that is verifiable? We don’t check opinions, and we recognize that in the world of speechmaking and political rhetoric, there is license for hyperbole.

Hyperbole involves the use of exaggeration to make a particular point. Hyperbole works as hyperbole when the audience understands that the exaggeration was not meant literally.

It's as though PolitiFact has caught Mr. Trump red-handed, trying to use hyperbole without a license.

We think Trump's statement certainly bears the obvious signs of hyperbole. If literally nobody had heard of Juneteenth before Trump scheduled his campaign rally, then Trump did not merely make Juneteenth very famous. He helped create it by inspiring others. But Trump's words, in fact, suggest that Juneteenth existed as "an important event, an important time" before that. Those words from Trump cue the average reader that "nobody had ever heard of it" was not meant literally but instead meant that Juneteenth was not well known.

Former Vice President Joe Biden illustrated what Trump likely meant. A (user-created) video clip from C-SPAN shows Biden on June 11, 2020 apparently expressing the belief that "Juneteenth" was the anniversary of the Tulsa Race Massacre. The massacre happened on May 31 and June 1, 1921. Trump's rally was originally scheduled on "Juneteenth" (June 19, 2020) but was moved back one day to June 20, 2020. The rally took place in Tulsa, which of course was the location of the Tulsa Race Massacre.

If Biden did not know about it, then perhaps others did not know about it either.

Maybe the problem is that PolitiFact does not set partisanship aside when it issues hyperbole licenses.


(Note: we'll add the full complement of tags after publishing, thanks to Blogger's new interface that only remembers one assigned tag when first publishing)

Does PolitiFact deliberately try to cite biased experts? (Updated)

If there's one thing PolitiFact excels at, it's finding biased experts to quote in its fact checks.

Sometimes an identifiable conservative makes the list, but PolitiFact favors majority rule when it surveys a handful of experts. It seems to us that PolitiFact lately suppresses the appearance of dissent by not bothering to find a representative sample of experts.

How about a new example?



For this fact check on President Trump's criticism of President Obama, PolitiFact cited three experts, in support of its "Truth-O-Meter" ruling.

Two out of the three were appointed to Mr. Obama's "Task Force on 21st Century Policing." All three have FEC records showing political donations to Democrats.
The first two on the list, in fact, specifically donated to Mr. Obama's presidential campaign. Does that make them perfect experts to comment on Mr. Trump's criticism of Mr. Obama?

Seriously, isn't this set of experts exactly the last sort of thing a nonpartisan fact-checking organization that declares itself "not biased" should do?

As bad as its selection of experts looks, the real problem with the fact check happens when PolitiFact arbitrarily decides that the thing Trump said President Obama did not try to do was "police reform" when Trump said "fix this." Plenty of things can fit under "police reform," and PolitiFact proves it by citing how "the Justice Department did overhaul its rules to address racial profiling."

Other evidence supposedly showing Trump wrong was the task force's (non-binding!) set of recommendations. The paucity of the evidence comes through in PolitiFact's summary:
The record shows that is not true. After the fatal shooting of Michael Brown in Ferguson and related racial justice protests, Obama established a task force to examine better policing practices. The Obama administration also investigated patterns or practices of misconduct in police departments and entered into court-binding agreements that require departments to correct misconduct.
So putting together a task force to make recommendations on police reform is trying to "fix this."

And, for what it's worth, the fact check offered no clear support for its claim "The Obama administration also investigated patterns or practices of misconduct in police departments." PolitiFact included a paragraph describing what the administration supposedly did, but that paragraph did not reference any of its experts and did not cite either by link or by name any source backing the claim.

Mr. Trump was not specific about what he meant by "fix this." That type of ambiguity does not grant fact-checkers license for free interpretation; rather, it makes the statement nearly impossible to fact check fairly. Put simply, a fact checker has to have a pretty clear idea of what a claim means in order to fact check it adequately. Trump may have had in mind his administration's move to create a record of police behavior that would make it hard for officers with poor records to move to a different police department after committing questionable conduct. It's hard to say.

Here's Mr. Trump's statement with some context:
Donald Trump: (11:32)
Under this executive order departments will also need a share of information about credible abuses so that offers with significant issues do not simply move from one police department to the next, that's a problem. And the heads of our police department said, "Whatever you can do about that please let us know." We're letting you know, we're doing a lot about it. In addition, my order will direct federal funding to support officers in dealing with homeless individuals and those who have mental illness and substance abuse problems. We will provide more resources for co-responders, such as social workers who can help officers manage these complex encounters. And this is what they've studied and worked on all their lives, they understand how to do it. We're going to get the best of them put in our police departments and working with our police.

Donald Trump: (12:33)
We will have reform without undermining our many great and extremely talented law enforcement officers. President Obama and Vice President Biden never even tried to fix this during their eight-year period.
We can apparently credit the Obama administration with talking about doing some of the things Trump directed via executive order.

In PolitiFact's estimation, that seems to fully count as trying to actually do them.

And PolitiFact's opinion was backed by experts who give money to Democratic Party politicians, so how could it be wrong?


Update June 21, 2020:


The International Fact-Checking Network Code of Principles

In 2020 the International Fact-Checking Network beefed up its statement of principles, listing more stringent requirements in order to achieve "verified" status in adhering to its Code of Principles.

The requirements are so stringent that we can't help but think they portend lower standards when it comes to enforcing them.

Take this, for example, from the form explaining to organizations how to demonstrate their compliance (bold emphasis added):
3. The applicant discloses in its fact checks relevant interests of the sources it quotes where the reader might reasonably conclude those interests could influence the accuracy of the evidence provided. It also discloses in its fact checks any commercial or other such relationships it has that a member of the public might reasonably conclude could influence the findings of the fact-check.
Is there a way to read the requirement in bold that would relieve PolitiFact from the responsibility of disclosing that every one of the experts it chose for this fact check has an FEC record showing support for Democratic Party politics?

If there is, then we expect that IFCN verification will continue, as it has in the past, to serve as a deceitful fig leaf creating the appearance of adherence to standards fact checkers show little interest in following.

We doubt any number of code infractions could make the Poynter-owned IFCN suspend the verification status of Poynter-owned PolitiFact.

Note: Near the time of this update we also updated the list of story tags.



Edit 2050 PDT 6/21/20: Changed "a" "to" and "police" to "of" "for" and "officers" respectively for clarity in penultimate sentence of paragraph immediately preceding Trump 11:32 quote - Jeff

Monday, June 8, 2020

PolitiFact mangles fact check of Larry Elder

Editor's note June 8, 2020: We intended to acknowledge when we published that Newsbusters beat us to the punch with a story on PolitiFact's Larry Elder fact check. Our version does not rely on that version in any sense. We're taking the opportunity with this update to fix an improper use of its/it's in the second paragraph.

The reason we do not trust PolitiFact fact-checking?

It's because we accepted PolitiFact's challenge to second-guess its work even before PolitiFact started asking. What we found then isn't pretty. It still isn't pretty.

Let's have a look at PolitiFact's June 5, 2020 fact check of conservative radio talk show host Larry Elder.


In Context

PolitiFact claims, as part of its statement of principles, to fact check claims in their original context.

How did PolitiFact do on that?

As the U.S. entered a second week of protests after the death of George Floyd, conservative radio host Larry Elder argued that "cops rarely kill anybody, let alone an unarmed black person."

"Last year, there were nine unarmed black people killed. Nineteen unarmed white people," Elder said June 2 on Fox News host Sean Hannity’s TV show.

We had no luck getting the linked Fox News video to play. Likely Fox News shoulders the blame for that. We got around the problem by going to a transcript posted at Fox News. A version of the Hannity show we found at an alternative source varied substantially from the Fox News transcript. But the transcript had the words PolitiFact used, so we're assuming the video version we found was somehow corrupted.

Here's the transcript version of Elder's words with bold emphasis to highlight the part PolitiFact quoted:

HANNITY: Shoot him in the leg he said, Larry. If he comes at you, if somebody comes up at you with a knife, just shoot him in the leg and not a word about all the officers shot, killed, injured in the process, that even last night. Not a word today.

LARRY ELDER, SALEM RADIO HOST: Yes, it's unreal. The number one responsibly of government is to protect people and property and that is not happening. And, Sean, what is so maddening about all of this, and we touched on this the other night, the premise is false. It is not true that the police are out there mowing down black people.

Again, according to the CDC, in the last 45 years, black -- killings of blacks by the police have declined 75 percent. Last year, there were nine unarmed black people killed, 19 unarmed white people. Name the unarmed white people who were killed. You can't because the media gives to the impression that this is something that happens all the time.

Obama says this ought not be normal. Mr. Former President, it's not normal, it is rare. Cops rarely kill anybody let alone an unarmed black person. And the idea that this happens all the time is why some of these young people are out in the streets, and it is simply false. Isn't that good news? It's not true!

We consider it an unorthodox treatment of a quotation to present the first part of the quotation before the second part. On the positive side, PolitiFact's construction does appear to capture the point Elder was trying to make: Police rarely kill unarmed black people. Elder's earlier comment about police not "mowing down black people" helps make clear he was talking about intentional actions resulting in the deaths of blacks.

The problem? PolitiFact treated Elder's claim as though he was making a different point.

PolitiFact:
(T)he number of unarmed people killed in encounters with law enforcement in 2019 is higher for both races than Elder claimed. How much higher is not clear.  What is clear, experts told us, is that despite what Elder’s absolute numbers may suggest, black people in the U.S. have died from fatal encounters with police at a disproportionate rate.
PolitiFact replaces Elder's point with "what Elder's absolute numbers may suggest," and uses the disagreement of experts with that suggestion to suggest Elder's point was wrong.

We'll see that PolitiFact argued a straw man.


Absolute Numbers

Elder was too vague in describing his statistic on police killings of unarmed persons, though arguably the context he established of "mowing down" was a legitimate clue he was talking about shootings. But PolitiFact did not rest its argument on Elder's ambiguity. PolitiFact argued Elder's raw numbers might produce a false impression that police killings of unarmed blacks are not disproportionate.

Elder said police killed nine unarmed blacks and 19 unarmed whites.

PolitiFact, using data from "Mapping Police Violence," corrected those numbers by counting all deaths caused by police, whether the officer was on duty or off duty. The findings?

Mapping Police Violence said police killed 28 unarmed blacks and 51 unarmed whites.

By Elder's numbers, killings of unarmed blacks made up 32.1 percent of combined killings of unarmed whites and blacks.

By Mapping Police Violence's numbers, killings of unarmed blacks made up 35.4 percent of combined killings of unarmed whites and blacks.

That does not count as a major difference. If Elder had used the same numbers PolitiFact used, PolitiFact could still have claimed Elder's raw numbers "may suggest" black people in the U.S. have not died from fatal encounters with police at a disproportionate rate.
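To make the comparison concrete, here is a quick sketch (our illustration in Python, not anything PolitiFact published) computing the share of unarmed black victims among the combined black and white victims under each set of numbers:

```python
# Elder's figures: unarmed black and white people killed by police in 2019
elder_black, elder_white = 9, 19
# Mapping Police Violence's figures, counting all killings by police
mpv_black, mpv_white = 28, 51

def black_share(black: int, white: int) -> float:
    """Percentage of unarmed black victims among combined black and white victims."""
    return 100 * black / (black + white)

print(round(black_share(elder_black, elder_white), 1))  # 32.1
print(round(black_share(mpv_black, mpv_white), 1))      # 35.4
```

Either way the share lands in the low-to-mid thirties, which is why swapping in the larger numbers does little to change the impression Elder's figures give.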

Elder wasn't saying anything about disproportionate rates any more than Mapping Police Violence was. Elder was making the point that the killings have gone down over time to become rare.

Though Mapping Police Violence only posts data back through 2013, its chart of unarmed black victims of police killings would support Elder's point:






Disproportionate Rates?

PolitiFact (citing experts!) said deaths of unarmed black victims of police killings were disproportionately high. But PolitiFact made the comparison in terms of overall U.S. population. That counts as the wrong measure. Finding the proportionality of those killings requires an apples-to-apples comparison of the number of police encounters according to race.

The Centers for Disease Control has done preliminary research in that direction.
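The denominator problem is easy to illustrate with toy numbers (entirely hypothetical, ours alone, not from the CDC or PolitiFact): two groups can have identical per-encounter rates while their per-population rates differ sharply, so the choice of baseline drives any "disproportionate" finding.

```python
# Hypothetical numbers for illustration only; not real data.
pop = {"A": 40_000_000, "B": 200_000_000}        # group populations
encounters = {"A": 4_000_000, "B": 10_000_000}   # police encounters per year
killings = {"A": 40, "B": 100}                   # killings of unarmed persons

for g in ("A", "B"):
    per_million_pop = 1e6 * killings[g] / pop[g]          # rate per million residents
    per_million_enc = 1e6 * killings[g] / encounters[g]   # rate per million encounters
    print(g, per_million_pop, per_million_enc)
# A: 1.0 per million residents, 10.0 per million encounters
# B: 0.5 per million residents, 10.0 per million encounters
```

Measured against population, group A looks twice as affected as group B; measured against encounters, the two groups are identical. That is the distinction we fault PolitiFact for skipping.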

Missing the Point?

Bearing in mind Elder's apparent point that black deaths at the hands of police are decreasing, let's review PolitiFact's concluding rationale for its "Mostly False" rating.

PolitiFact credited Elder for using numbers that matched those published by the Washington Post. PolitiFact noted the Post's numbers "have increased since Elder made his claim," but PolitiFact principles say it grades statements according to information available at the time. So the increase to the Post's numbers ought to be moot in grading Elder's claim.

PolitiFact dinged Elder for not including all killings of unarmed blacks. But given Elder's point, his statistic only needs to serve as a representative benchmark for the decrease he claimed. PolitiFact presented no evidence Elder failed to do that. In other words, if counting all killings by police whatever the means leads to the same type of decrease over time, Elder's central point still finds support.

Finally, PolitiFact charged that Elder "omitted important context: that black people in the U.S. are disproportionately killed by police relative to their share of the population." But as we pointed out, share of the population is the wrong measure. In addition, it is not clear that Elder's point needs that context. A decrease in black deaths at the hands of police is a decrease regardless of whether it remains disproportional. This part of PolitiFact's argument resembles a straw man.

This type of slipshod fact-checking occurs frequently at PolitiFact.



Afters: Experts Among Us

PolitiFact has unceremoniously dumped its past assurance to readers that it cites unbiased experts. Surveying the pool of experts PolitiFact cites tends to show a distinct leftward lean. Let's have a look at the pool of experts for this fact check:

  1. Frank Edwards: No FEC record we could find. Twitter account offers mere hints of a leftward lean
  2. Lorie Fridell: FEC record shows she gives to Democrats
  3. Brian Burghart: No FEC record we could find. Job: journalist
A leftward lean does not make an expert wrong, of course. We do find PolitiFact has an apparent history of picking sources that fit its chosen narrative while leaving out dissenting voices. And that tendency seems worse than ever this year.

Wednesday, June 3, 2020

Facebook should flag itself

We got tagged on Facebook by a person who apparently had one of their posts flagged through Facebook's fact-checker partnership. The original post did not show for us (we're looking into that), but we found it amusing that Facebook's notice contains a falsehood:



It's this part: "All fact-checkers who partner with Facebook must be signatories of the International Fact-Checking Network and follow their Code of Principles."

The IFCN verification process is soft. For example, IFCN signatories agree to "scrupulously" follow a clearly stated corrections policy. Zebra Fact Check has pointed out numerous times PolitiFact has failed in its adherence to its own corrections policy. What happens to PolitiFact as a result? Nothing. We've seen no apparent break in the Facebook partnership. More concerning than that, PolitiFact has still not acted to correct the great bulk of the errors that we've pointed out over the years. That includes things like botching a quotation. It's mostly stuff that's black-and-white error, not any kind of matter of opinion.

So, when Facebook tells you its fact checkers follow the IFCN Code of Principles, it's trusting the IFCN to do the enforcement. And that enforcement just isn't happening in any strict sense.

It's worth noting, of course, that the non-profit Poynter Institute owns both PolitiFact and the accountability organization that oversees PolitiFact. No problem there, right?

PolitiFact: A peaceful protest is a peaceful protest is a peaceful protest

When President Trump said he supports peaceful protestors, the protectors of democracy at PolitiFact jumped into their batmobile and sprang into action, ready and willing to confront Trump's rhetoric by conflating the constitutional right to assembly with other forms of peaceful protest.
Trump has said before that peaceful protests are the hallmark of democracy.

...

But Trump has also pushed back against protests, especially the Black Lives Matter movement. We reviewed his record.
Did Trump push back against the right to protest or was it against the content of the protest? Do we even care?

To illustrate Trump's pushback against protests, especially Black Lives Matter protests, PolitiFact led with a lengthy subsection on former NFL player Colin Kaepernick. Kaepernick, while doing his job for an NFL football team, knelt during the National Anthem in support of the Black Lives Matter cause. The performance of the National Anthem precedes the start of NFL games.

It's not a freedom of assembly issue. But if Kaepernick assembled with others peacefully in public to take a knee during a performance of the anthem and Trump opposed the assembly and not the point of the protest, then PolitiFact would have Trump dead to rights.

That's a big "if."

Strike one.

Next up, PolitiFact presented the example of Rep. Maxine Waters, who called for U.S. society to shun and harass the members of Mr. Trump's cabinet. Presumably refusing service and generally harassing Trump's cabinet on ideological grounds passes as some sort of peaceful public protest. PolitiFact made no particular effort to associate Waters' recommended protest with the Black Lives Matter movement, instead attaching it to border policy.

It doesn't seem certain that trying to totally exclude Trump's cabinet from conducting any type of business in public, including dining and grocery shopping, properly counts as a peaceful protest. If everyone followed Waters' prescription the Cabinet would need to grow its own food or else starve unless it met the demands of the peaceful protestors.

Needless to say, PolitiFact doesn't delve into that.

Strike two.

Apparently finished with Trump's opposition to Black Lives Matter, PolitiFact moved on to Mr. Trump's intolerance of heckling at his campaign rallies.

PolitiFact does not point out that heckling at a private rally open to the public is not a good example of the exercise of the right to free assembly.
Leading up the 2016 election, then-candidate Trump reserved harsh words for protesters who popped up at his rallies, including those whose actions were peaceful.
For PolitiFact, there is no important distinction between showing up to heckle at a campaign rally held in a private venue and the right to public assembly. Peaceful protest is peaceful protest is peaceful protest. We wonder how long one could peacefully protest in Jon Greenberg's office before a trespass order was issued.

Greenberg opposes peaceful protest.

See how that works?

Strike three.

But PolitiFact lacks the good grace to return to the dugout after merely three strikes:

When opponents of placing Brett Kavanaugh on the Supreme Court marched and rallied, Trump referred to them as "a mob" and tagged all Democrats in the midterm elections as "too extreme and too dangerous to govern." 

"Republicans believe in the rule of law — not the rule of the mob," Trump tweeted Oct. 11, 2018.

We're not sure how PolitiFact deduced that Trump was talking about peaceful protests in his tweet. He wasn't responding to anybody else's tweet. We suppose that PolitiFact's sole evidence was the date of the tweet plus Trump's use of the word "mob." Because fact-checking?

Here's the tweet:
Are we playing "Pin the Context on the Tweet" or what?

And how would opposing giving in to protestors' demands oppose their right to protest? Is it appropriate to conflate opposition to protestors' demands with opposition to their right to peacefully protest?

Isn't that exactly what PolitiFact is doing?

It appears to us that PolitiFact argues that one cannot support peaceful protest without supporting the specific demands of the peaceful protestors.

But that's insane isn't it?

A fair examination of the topic must draw the distinction between supporting the right to protest and supporting the specific cause of the protestors.

Strike four. Go sit down, PolitiFact.

Tuesday, June 2, 2020

A picture worth a thousand words?

PolitiFact's executive director, Aaron Sharockman, sent out a mass email earlier today. One of those emails asking for reader support.

The image at the top of Sharockman's email was revealing, even if it happened to reveal what we already know about PolitiFact.



In journalism school we learned the importance of images. Journalists pay a great deal of attention to images and the messages they send.

So take a look. Break it down.

It's an image of protesters in front of the White House. We can see signs that say "No Justice No Peace" and "Black Lives Matter." The photo does not place the protestors in the distance, as a third party would see them. The image is taken from the midst of the protest. The point-of-view is from among the protestors. The protestors, and presumably their protests, are aimed at the White House.

The appeal makes sense, given that PolitiFact's fan base leans left and opposes Trump. The photo slyly (and with "plausible deniability"!) sends an anti-Trump protest message that readers on the left will likely appreciate, even if many only appreciate it subconsciously.

The photo's an ideological tell.

But There's More

Before the fundraising email went out, we noticed PolitiFact's editor-in-chief, Angie Drobnic Holan, tweeting out an article blaming President Trump for violence directed against journalists.

Has anyone bothered to fact check that supposed cause-and-effect relationship?
We say that if a journalist truly believes Mr. Trump is responsible for violence against journalists, the journalist who thinks that should be presumed biased against Mr. Trump. Bias against Trump does not necessarily make PolitiFact's fact checks of Trump wrong. We emphasize these examples of bias mainly because PolitiFact falsely represents itself as unbiased.

Stop wearing the mask and we'll stop repeatedly peeling it away. Deal?

Sunday, May 24, 2020

Research update, May 2020

PolitiFact updated its website in 2020, and we acknowledged one clear improvement when the fact checkers added a link to the "Corrections and Updates" page on the main menu.

As we've moved to update our research, particularly the "Pants on Fire" bias research, we've noticed that PolitiFact has made our job trickier.

In times past, PolitiFact neatly separated its stories into categories. PolitiFact National once had its own page, and did not include stories from the state operations such as PolitiFact California and PolitiFact Texas.

This year PolitiFact changed its hierarchical tree. That means older links from our spreadsheet probably won't work.

PolitiFact now relies on its tagging system to separate and group its stories. The site has no "PolitiFact National" menu item, but using the "PolitiFact National" tag creates a page of such stories. Or would, if PolitiFact had done a precise job of applying its system of tags to its content. We're finding stories tagged with contradictory tags, such as "Pennsylvania" for PolitiFact Pennsylvania and "National" for PolitiFact National. Many items by the staff of PolitiFact National now appear categorized as though they came from the distinct state operations.

We'll have to take a closer look at some of these cases to categorize them appropriately for our research, and we'll take note when a state operation might as well count as PolitiFact National.

Is it PolitiFact Arizona or PolitiFact New York?


Sunday, May 17, 2020

Malleable principles at PolitiFact Pennsylvania

We don't look at every PolitiFact fact check. Not by a long shot. But when we do, we often find problems.

We did not start reading PolitiFact Pennsylvania's fact check of Republican Mike Turzai looking for problems. It came on our radar because we were updating our "Pants on Fire" bias research. We noticed the fact check had tags for "National" and for "Pennsylvania." State tags do not normally occur on stories with the "National" tag.

But we had to give this one a closer look. It rated Turzai "False" for claiming children are not at risk from COVID-19 unless they have underlying medical issues. It seemed worth looking at since children seem substantially less affected than adults by the novel coronavirus.

It didn't take us long to notice that a study PolitiFact used to justify its rating was published on May 11, 2020:
Turzai was on the right track when he said that children in poor health who contract the coronavirus are at risk of becoming seriously ill. And it’s true that children are far less susceptible than adults. But his claim that other children are totally safe is incorrect, according to a study published recently in the medical journal JAMA Pediatrics.
And the rest of the justification came from an announcement made on May 11, 2020:
Publication of the study came the same day New York City officials announced that a growing cluster of children sickened with the coronavirus have developed a serious condition called pediatric multi-symptom inflammatory syndrome.
So, what's the problem?

PolitiFact's statement of principles stipulates it will judge claims based on information available when the claim was made. Turzai made his claim in a video released on May 9, 2020. Both sources of PolitiFact's rebuttal information came from May 11, 2020. The "False" ruling goes directly against PolitiFact's statement of principles (bold emphasis added):
The burden of proof is on the speaker, and we rate statements based on the information known at the time the statement is made.
The fact check's summary paragraph emphasizes that the ruling's justification came from the two sources identified above, both coming to light on May 11, 2020.
Our ruling

Speaking about the coronavirus, Turzai said children are "not at risk unless they have an underlying medical issue." A new study and a growing number of gravely ill children in New York City prove otherwise. We rate this statement False.
Once again, PolitiFact acted out-of-step with its own principles. In this case a Republican received unfair harm as a result.

That's the tendency we see from left-leaning PolitiFact.



Afters

Unlike PolitiFact, we were not sure upon reading Turzai's claim what risk to children he meant. Risk of death? Risk of contracting the disease and/or carrying and spreading it? Risk of suffering severe illness upon contracting COVID-19?

PolitiFact settled on the last of those, without discussion. We think understanding him to mean the risk of death has equal justification.


PolitiFact builds straw man for PolitiSplainer on unmasking

If PolitiFact's one-sided PolitiSplainer on the Justice Department's reversal on Gen. Michael Flynn's prosecution was not enough, now we have a PolitiSplainer on the Obama administration's unmasking efforts on Flynn and others.

The new PolitiSplainer could win awards for the way it buries the fundamental problem of the unmasking efforts--evidence that knowledge of the falsehood of the Russia/Trump collusion narrative went to the top of the Obama administration and included Vice President Biden--to focus instead on making the unmasking sound like a normal everyday thing at the top of an administration.



But our favorite PolitiMisfire involves PolitiFact's straw-manning of the Republicans' side of the story.

You don't get a taste of Andy McCarthy's expert analysis. And you don't get Jonathan Turley's view. You don't get a host of stories written by well-known and experienced conservative pundits or experts.

No, you get a Facebook post from Diamond & Silk.

No, we're not making this up.
Does the unmasking list show that top officials "knew" that concerns about Flynn were a "lie"?

The pro-Trump social media personalities Diamond and Silk drew this conclusion in a Facebook post that featured images of the list of Obama administration officials:

"Obama knew. Clinton knew. Biden knew. Comey knew. Brennan knew. McCabe knew. Strzok knew. Clapper knew. Rosenstein knew. FBI knew. DOJ knew. CIA knew. State knew. They all knew it was a lie, a witch-hunt, a scandal, a plot, a conspiracy, a hoax. #ObamaGate #SubpoenaObama."

Partisans have an easy time jumping to conclusions. And that tendency represents our best guess as to why PolitiFact assumed that Obama, Clinton, Strzok, Clapper, Rosenstein, the FBI, the CIA and State all knowing "it" was a lie referred to the Flynn unmasking all by itself.

That's the beauty of picking on a short Facebook post. The "fact checker" (liberal blogger) can see a set of dots and then freely connect them to construct an understanding that fits a preconceived narrative.

There's nothing in Diamond & Silk's Facebook post explaining that the list follows solely from the Flynn unmasking disclosure. PolitiFact invented that and presented it to readers as fact.

PolitiFact can do that, you see, because it does not apply to itself the "Burden of Proof" principle it applies (on occasion) to others.

It's a shameful example of fact-checking. It qualifies as the straw man fallacy, in fact.

It's likely Diamond & Silk were talking about the Trump/Russia collusion narrative being a lie, not "concerns about Flynn."

Fact checkers, if you want the conservative argument about some aspect of executive office procedures, try Andrew McCarthy sometime:

This week’s revelations about unmasking are important and intriguing. They should be thoroughly examined. In fact, they are only a snapshot of the unmasking issue — involving just one U.S. person (Flynn) over a period of less than three months. It is highly irregular for government officials on the political side of the national-security realm to seek the unmasking of Americans. It is eye-opening to learn that Vice President Biden and President Obama’s chief-of-staff (McDonough) unmasked the incoming Trump administration’s national security advisor. It is downright scandalous that Samantha Power, Obama’s ambassador to the United Nations, who had little reason to seek unmasking, reportedly requested 260 unmaskings . . . and then told Congress that she did not make the vast majority of requests attributed to her — though it remains unclear, years later, who did make them.

But let’s not miss the forest for the trees. This is not just about unmasking. It is about how pervasively the Obama administration was monitoring the Trump campaign.
PolitiFact's left-leaning fan base was spared any in-depth analysis from the right in favor of PolitiFact's straw man version of Diamond & Silk's Facebook post on the topic. And it got standard talking points from the left (unmasking is so totally normal!).

That kind of fact-checking qualifies as a disservice to readers who are interested in the truth.



Update May 17, 2020: clarified language in the second paragraph

Sunday, May 10, 2020

PolitiFact's Michael Flynn case PolitiSplainer (Updated & Corrected)

We almost titled this post "PolitiFact's Michael Flynn case PolitiSplainerainerainerainer" in honor of the way PolitiFact reinforces the liberal echo chamber effect surrounding the Michael Flynn prosecution and reversal.

This week the Justice Department filed a motion declaring it would no longer prosecute the case against Flynn, citing recent document releases that drew into serious question the materiality of the statements over which Flynn was prosecuted. The Washington Post published a piece in 2018 by Philip Bump that helps explain the importance of materiality:
Clearly, telling an FBI agent something untrue during an interview at FBI headquarters as you face criminal charges subjects you to charges of making false statements. Equally clearly, if an FBI agent friend of yours asks to borrow a dollar and you lie and say you don’t have any cash, that’s not going to get you taken away in handcuffs. But where is the line drawn between the two?
The Justice Department's filing says it believed it could no longer prove beyond a reasonable doubt that Flynn communicated a material falsehood to the FBI. The key to that reversal was the disclosure that the FBI had an investigation specific to Flynn ("Crossfire Razor") and that the FBI had recommended closing that investigation for lack of evidence before deciding to interview Flynn.

Adding insult to injury, prosecutors never disclosed that exculpatory evidence to the defense. That failure deprived Flynn of a valuable tool that might have aided his defense.

In what we would take as an astonishing move if PolitiFact were an objective and impartial fact checker, its PolitiSplainer explains none of that.

Here's the whole of the case for Flynn, by PolitiFact's telling:

After Attorney General William Barr asked Jeff Jensen, the U.S. attorney in St. Louis, to review the case, Jensen concluded that a dismissal of the case was warranted. Barr agreed.

In its filing, the Justice Department argued that the FBI had no basis to continue investigating Flynn after failing to find illegal acts. Flynn’s answers during the interview were equivocal, not false, and weren’t relevant to the investigation, the department said.

"A crime cannot be established here," the attorney general told CBS, saying "people sometimes plead to things that turn out not to be crimes."

Sure, PolitiFact reports that the Department said Flynn's answers "weren't relevant to the investigation," but it does not explain that answer in terms of the law.

None of the experts PolitiFact quoted for the story had anything to say about it, either. Only two of the four had records of giving to Democrats this time (Barbara McQuade, James Robenalt). Open Secrets had no record of political giving from the other two.

PolitiFact also curiously failed to either link to or quote from the DOJ filing in the Flynn case. We wonder if anyone from PolitiFact bothered to read it.

Oh, for what might have been!:
FBI executives decided to keep open an investigation into whether Flynn was a Russian asset based on a conversation that was innocuous save for its interpretation as a violation of the (virtually ignored) Logan Act.

It's as though PolitiFact's PolitiSplainer was designed to keep people in the dark, or at least support a rapidly eroding media narrative.


Update/Correction May 12, 2020: PolitiFact Bias asserted that PolitiFact failed to quote from the DOJ filing. We fixed that with a pair of strikethroughs. PolitiFact used a pair of very short snippets near the beginning of its PolitiSplainer (bold emphasis added):
The filing said that a key interview of Flynn did not have "a legitimate investigative basis" and therefore the department does not consider Flynn’s statements from the interview to be "material even if untrue."
We still say PolitiFact's 'Splainer fails to explain the legal importance of materiality. If Flynn's alleged falsehoods were not relevant (material) to the active investigation then they were not illegal.

We also note that this part of PolitiFact's article undercuts the "expert" testimony discussed in our critique. If only "a key interview of Flynn" lacked a legitimate investigative basis, then what of the other parts of the Flynn investigation? PolitiFact's Louis Jacobson should have noticed the discrepancy.
 

Tuesday, May 5, 2020

Pulitzer update: PolitiFact fails to grow its Pulitzer Prize collection in 2020

Pulitzer Prize winners were announced and PolitiFact extended its losing streak, which dates back to 2010. PolitiFact won a 2009 Pulitzer, for work done in 2008.

Nearly every year we at PolitiFact Bias make note of PolitiFact's losing streak. Why do we do that, given that our Pulitzer losing streak is almost as long as PolitiFact's?

We do it because the 2009 Pulitzer Prize has nothing to do with accuracy, even though PolitiFact advertises itself as "Winner of the Pulitzer Prize" as though the award vouched for its accuracy. And from time to time we even encounter people who seem to want to argue that PolitiFact's 2009 Pulitzer Prize somehow helps offer evidence of its reliability.

That's also why PolitiFact Bias emphasized former Pulitzer Prize juror James Warren's interview with incoming Pulitzer administrator Dana Canedy. Warren mentioned that when he served as a juror he was tempted to fact check the work he evaluated but the rules prevented that. Canedy suggested that policy would likely continue.

This year's list of winners helps underline that policy, as The New York Times' factually challenged 1619 Project snagged a Pulitzer Prize for project principal Nikole Hannah-Jones.

That win ought to help scuttle the notion that Pulitzer Prizes have something to do with reliably reported facts.


Afters:

We suggested to PolitiFact that it should fact check the Times' 1619 Project. So far, PolitiFact does not appear interested in doing so.


Correction May 5, 2020: I (Bryan) inexplicably had the following in the first paragraph of this post: "PolitiFact won a 2009 Pulitzer, awarded in 2010." We know better, or ought to, having written about it numerous times. PolitiFact's 2009 Pulitzer was recognition for what it published in 2008. My apologies for the error, which is now fixed.

Saturday, May 2, 2020

'Objective' PolitiFact Uses Biased Framing

Though PolitiFact absurdly tries to claim it is unbiased, its work shows bias in a multitude of ways.

One bias that popped out this week was in PolitiFact's PolitiSplainer about Tara Reade and Democratic presidential candidate Joe Biden. Reade has accused Biden of sexual harassment to the point of rape. Her description of the alleged incident would technically meet at least one statutory definition of "rape."

But get a load of PolitiFact's introductory paragraph concerning Reade and Biden:
More than two dozen women have accused President Donald Trump of sexual assault, and many of the allegations emerged not long before his election in November 2016. In October of that year, multiple women said he forced himself on them. A few months earlier, another woman who worked with Trump in the 1990s claimed he once pushed her against a wall and put his hand up her skirt.
There's not a word in there about Reade or Biden. The focus is entirely on sexual assault accusations against President Trump.

Why, readers may wonder, would an explainer on Reade and Biden start out focusing on allegations made against Trump?

It's journalistic framing. That is, telling a story in a way that conveys a particular message. The message in this case is "both sides do it" but Trump did it worse (so if it's between Biden and Trump, vote Biden).

This is from the fact-checking organization that in 2018 published an article assuring readers it is unbiased. Because we could not see their faces as they published it we cannot say they published it with straight faces.

Blasey Ford/Kavanaugh, Hill/Thomas

How did PolitiFact treat parallel allegations against Justice Kavanaugh when Christine Blasey Ford accused him of attempted rape? Both sides do it?

Not quite:
As senators weigh the Supreme Court nomination of Brett Kavanaugh amid allegations of sexual misconduct, many Americans are thinking back to a previous example of accusations against a Supreme Court nominee.
Instead of "both sides do it" PolitiFact offered us a frame telling us that Republicans are doing it again. Clarence Thomas, though black, represented male power, which back in the olden days could easily dismiss accusations from women such as Thomas' accuser, Anita Hill:
The 1991 hearing "exposed critical fault lines in the lived-experience of those at the crossroads of race and gender," said Deborah Douglas, a journalist and visiting professor at DePauw University. "Thomas, a black man, could evoke the image of a ‘high-tech lynching’ to plead both innocence and male privilege, trumping the lived experience of a woman who represents a class of woman, the black woman, arguably, the last thought in the American public imagination."

Though Ford is white, gender has played out similarly in both cases, said Douglas, who is African-American.
What's missing from PolitiFact's PolitiSplainer on Blasey Ford and Kavanaugh? PolitiFact offers no assessment of the strength of the evidence. The article notes Anita Hill supposedly had corroborating witnesses, but does nothing to emphasize that point of contrast with the Blasey Ford allegations. Blasey Ford had no helpful corroboration from the time period the incident allegedly occurred. And PolitiFact somehow fails to mention it.

PolitiFact's Reade/Biden story, in contrast, puts focus on the evidence, albeit with signs of bias against Reade.

Spinning the Reade Evidence

Regarding the Larry King Live episode where Reade's mother apparently talked to King about an incident regarding her daughter, PolitiFact left out details supporting Reade's account. PolitiFact allowed excessive doubt to hang over the idea that it was Reade's mother who called:
On April 24, the Intercept reported on an August 1993 Larry King Live episode in which a woman calls into the CNN show to discuss her daughter’s "problems" with a senator. Reade says that caller was her mother, who has since died.
So all we have is Reade's word for it?

No.

Circumstantial evidence PolitiFact left out strongly supports Reade's account. The time frame (1993) matches. PolitiFact mentions the call occurred in 1993 but does not remind readers how this helps support Reade's account.

More importantly, the show identified the caller with the town San Luis Obispo. That's where Reade's mother lived (The Tribune, San Luis Obispo).
The woman who says former Vice President Joe Biden sexually assaulted her in 1993 used to live in Morro Bay and apparently returned here shortly after the alleged incident.

Video uncovered over the weekend and first reported by The Intercept shows an August 1993 segment on CNN’s “Larry King Live” in which a caller from San Luis Obispo County later confirmed by media outlets to be the mother of former Biden staffer Tara Reade appears to confirm that Reade had told her mother of an alleged sexual assault by Biden, who is the Democratic Party’s presumptive nominee for president.

PolitiFact saw no reason for its readers to know that.

Note the story we quoted from The Tribune ran on April 28, 2020. PolitiFact's fact check published on April 30, 2020. In fact, CNN had reported the San Luis Obispo connection on April 25, 2020. And PolitiFact doesn't have it figured out by April 30?

Can an unbiased source leave out something like that?

When PolitiFact assures its readers it is unbiased it is lying to them, if not to itself.

Friday, April 10, 2020

PolitiFact claims, without evidence, Trump touted chloroquine as a coronavirus cure

Should fact checkers hold themselves to the standards they expect others to meet?

We say yes.

Should fact checkers meet the standards they claim to uphold?

We say yes.

What does PolitiFact say?
(President Donald) Trump has touted chloroquine or hydroxychloroquine as a coronavirus cure in more than a half-dozen public events since March 19.
PolitiFact published the above claim in an April 8, 2020 PolitiSplainer about hydroxychloroquine, an antimalarial drug doctors have used in the treatment of coronavirus patients.

We were familiar with instances where Mr. Trump mentioned hydroxychloroquine as a potential treatment for coronavirus sufferers. But we had not heard him call it a cure. Accordingly, we tried to follow up on the evidence PolitiFact offered in support of its claim.

The article did not contain any mention of a source identifying the "half-dozen public events since March 19," so we skipped to the end to look at PolitiFact's source list. That proved disappointing.



We tweeted at the article's authors expressing our dismay at the lack of supporting documentation. Our tweet garnered no reply, no attempt to supply the missing information and no change to the original article.

Of note, when co-author Funke tweeted out a link to the article on April 8 his accompanying description counted as far more responsible than the language in the article itself:

"Here's what you need to know about hydroxychloroquine, the malaria drug that President Trump has repeatedly touted as a potential COVID-19 treatment."

Does "cure" mean the same thing as "potential treatment" in PolitiFactLand?

We've surveyed Mr. Trump's use of the terms "cure" and "game changer" at the White House website and found nothing that would justify the language PolitiFact used of the president.

What else does PolitiFact say?

The burden of proof is on the speaker, and we rate statements based on the information known at the time the statement is made.
What if the speaker says "Trump has touted chloroquine or hydroxychloroquine as a coronavirus cure"? Does the speaker still have the burden of proof? If the speaker is PolitiFact, that is?

It looks like the fact-checkers have yet again allowed a(n apparently false) public narrative to guide their fact-checking.

Thursday, March 5, 2020

PolitiFact Wisconsin: Real wages increasing but not keeping up with inflation

PolitiFact Wisconsin published a fact check of a claim by Rep. Mark Pocan (D-Wisc.). The fact check concludes that real wages for Americans have gone up over the past 30 years, yet the increase fails (by a long shot!) to keep up with inflation.

We hope that red flags went up for every person reading that sentence.

"Real Wages" takes inflation into account. If real wages stay perfectly flat, then wages are keeping even with inflation. If real wages increase then wages are increasing faster than inflation.

The fact check is something to behold. It may be the early leader for worst fact check of 2020.


We faulted this fact check right away for failing to link to the source of the Pocan quotation.

Here's the source:



We're seeing the failure to link to the primary source of claims all too often from PolitiFact lately.

As the image above the video embed shows, PolitiFact Wisconsin focused on Pocan's wage comparison involving the Amazon distribution center in Kenosha.

Ignore Illogical Spox?


It didn't take long for us to find a second reason to fault PolitiFact Wisconsin. As PolitiFact related in its fact check, Pocan's communications director, Usamah Andrabi, said Pocan was talking about pay in the auto industry in the 1990s.

PolitiFact Wisconsin blew Andrabi off, in effect:
Andrabi said Pocan often uses auto worker pay to make his point, because auto manufacturing was the dominant industry in Kenosha when he was growing up there.

But Pocan did not mention auto pay in his claim, and pay in that industry historically is far higher than many other jobs. So, we focused on the weekly and hourly earnings data from the federal Bureau of Labor Statistics.
Instead of looking at the comparison Andrabi specified, PolitiFact Wisconsin decided to look at whether real wages were flat nationally over the past 30 years.

Just $3 in Thirty Years?


Before we knew it, we had a third reason to fault PolitiFact Wisconsin. After reporting the wage difference over 30 years without adjusting for inflation, PolitiFact tried to show the insignificance of the increase by adjusting for inflation. But PolitiFact used misleading language to make its point:
But using the Bureau’s inflation calculator, the 1990 weekly wage translates to $800.88 per week in today’s dollars, or $20.02 an hour. So, that’s a roughly $3 increase in 30 years.
To communicate clearly, a journalist would express the increase to the weekly wage in dollars and the increase in the hourly pay in dollars per hour.

PolitiFact Wisconsin used dollars to refer to the increase in dollars per hour, leaving readers with the impression that weekly pay increased from about $800 to $804.

Here's what one fix of that misleading error of ambiguity might look like (bold emphasis to highlight the change):
But using the Bureau’s inflation calculator, the 1990 weekly wage translates to $800.88 per week in today’s dollars, or $20.02 an hour. So, that’s an increase of roughly $3 an hour in 30 years.
Using the same language as in the preceding sentence ("an hour") tips the reader to connect the $3 change to the hourly rate instead of the weekly rate.
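Here's a minimal sketch of the fix, assuming a 40-hour week; the $800.88 and $20.02 figures come from the fact check, while the current weekly wage below is a hypothetical stand-in:

```python
# Express each increase in its own units: dollars per week for the
# weekly wage, dollars per hour for the hourly wage. The 1990 figure
# is the fact check's; the current wage is hypothetical.

HOURS_PER_WEEK = 40

weekly_1990 = 800.88                          # 1990 wage in today's dollars
hourly_1990 = weekly_1990 / HOURS_PER_WEEK    # ~$20.02 an hour

weekly_today = 920.80                         # hypothetical current wage
hourly_today = weekly_today / HOURS_PER_WEEK  # $23.02 an hour

weekly_increase = weekly_today - weekly_1990  # dollars per week
hourly_increase = hourly_today - hourly_1990  # dollars per hour

print(f"${weekly_increase:.2f} a week")   # $119.92 a week
print(f"${hourly_increase:.2f} an hour")  # $3.00 an hour
```

Stating the units with each number leaves the reader no room to attach the $3 to the weekly figure instead of the hourly one.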

The Coup de Grace

Finally, we encountered the gigantic error we highlighted at the beginning.

PolitiFact admitted Pocan was literally wrong for (supposedly) suggesting that real wages were flat. Real wages have gone up. PolitiFact National had underscored that fact with a 2017 fact check of a claim from House Speaker Nancy Pelosi.

Ah, but that literal untruth only came to light by looking narrowly at Pocan's claim. PolitiFact said Pocan's true point, Andrabi notwithstanding, was "that wage growth has been largely stagnant."

PolitiFact cited a Pew Research Center study that supposedly showed the growth of real wages for groups below the top 10 percent of earners was "nearly flat" from 2000 through 2018.

All of them went up noticeably (look), but PolitiFact said they were "nearly flat."

We call that spin.

And it quickly got worse:
What’s more, the cost of living has undergone a much steeper hike: from 1983 to 2013, the Bureau of Labor Statistics reported a roughly 3% annual increase in rent and food prices, and a 1.3% annual increase in new vehicle prices.

So, a small growth in median wages is dwarfed next to the rise in cost of other goods.
That's fact check baloney.

It's true the BLS reported annual increases in rent, food and vehicle prices between 1983 and 2013, but those were inflationary changes, not inflation-adjusted changes.

It's wrong to say that inflation outpaced wage growth if real wages increased. It's startling that a fact checker could commit that error.

To be sure, real wages are calculated in a way that counts as arbitrary in a sense, totaling the price of a "basket of goods" where the goods in the basket vary over time. But still, it's ludicrous to say wages that have gone up after adjusting for inflation--that's what "real wages" are--failed to keep pace with inflation. Some items in the "basket of goods" might see higher inflation than others, but would it be proper to cherry pick those to claim that wages generally weren't keeping pace with inflation?

We don't think so.
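A quick back-of-the-envelope run shows why comparing nominal price increases to real wages double-counts inflation. The 3 percent rent figure comes from the passage PolitiFact quoted; the overall inflation rate below is a hypothetical for illustration:

```python
# The ~3% annual rise in rent and food prices PolitiFact cited is a
# *nominal* change. Real wages already have overall inflation removed,
# so the valid comparison is nominal-vs-nominal or real-vs-real.
# The overall-inflation rate below is a hypothetical for illustration.

years = 30
rent_growth = 1.03 ** years         # nominal rent, ~2.43x over 30 years
overall_inflation = 1.025 ** years  # hypothetical overall CPI, ~2.10x

real_rent_growth = rent_growth / overall_inflation  # rent in real terms

print(round(rent_growth, 2))       # 2.43
print(round(real_rent_growth, 2))  # 1.16
# At most one can say rent rose faster than the average basket. If
# real wages rose at all, they beat overall inflation by definition.
```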

PolitiFact Wisconsin wildly altered Rep. Pocan's point and then completely botched its fact check of what it had decided he must be saying.


Afters

We alerted PolitiFact Wisconsin about these problems by responding to its tweet of its fact check and followed that up with a message to truthometer@politifact.com in the late afternoon of March 3, 2020.

We noticed no attempt to correct the flawed fact check through March 4, 2020.

We won't be surprised if PolitiFact never corrects its mistakes in the Pocan fact check.

But we will update this item if we see that PolitiFact Wisconsin has updated it.

Monday, February 24, 2020

Nothing To See Here: Sanders blasts health insurance "profiteering"

While researching PolitiFact's false accusation that Democratic presidential candidate Pete Buttigieg used "bad math" to criticize the budget gap created by fellow candidate Bernie Sanders' spending proposals, we stumbled over a claim from Sen. Sanders that was ripe for fact-checking.

Sanders said his proposed health care plan would end profiteering practices from insurance and drug companies that result in $100 billion or so in annual profits (bold emphasis added):
Just the other day, a major study came out from Yale epidemiologist in Lancet, one of the leading medical publications in the world. What they said, my friends, is Medicare for all will save $450 billion a year, because we are eliminating the absurdity of thousands of separate plans that require hundreds of billions of dollars of administration and, by the way, ending the $100 billion a year in profiteering from the drug companies and the insurance companies.
PolitiFact claims to use an "Is that true?" standard as one of its main criteria for choosing which claims to check.

We have to wonder if that's true, or else how could a fact checker pass over the claim that profiteering netted $100 billion in profits for those companies? Do fact checkers think "profit" and "profiteering" are the same thing?

Is a fact checker who thinks that worthy of the name?

Sanders' claim directly implies that the Affordable Care Act passed by Democrats in 2010 was ineffective with its efforts to circumscribe insurance company profits. The ACA set limits on profits and overhead ("medical loss ratios"). Excess profits, by law, get refunded to the insured.
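For readers unfamiliar with the mechanism, here's a minimal sketch of how the ACA's medical loss ratio rebate works (the 80 percent threshold applies to individual and small-group plans, 85 percent to large-group plans; the dollar figures are hypothetical):

```python
# Minimal sketch of the ACA medical loss ratio (MLR) rebate rule:
# an insurer must spend at least 80% of premium revenue on claims
# and quality improvement (85% for large-group plans) or refund
# the shortfall to policyholders. Dollar figures are hypothetical.

def mlr_rebate(premiums, claims_and_quality, threshold=0.80):
    """Rebate owed when care spending falls below the MLR threshold."""
    mlr = claims_and_quality / premiums
    if mlr >= threshold:
        return 0.0
    return (threshold - mlr) * premiums

# An insurer collects $100M in premiums but spends only $75M on care:
print(round(mlr_rebate(100e6, 75e6), 2))  # 5000000.0 -- 5% refunded
print(round(mlr_rebate(100e6, 85e6), 2))  # 0.0 -- threshold met, no rebate
```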

Sanders said it's not working. And the fact checkers don't care enough to do a fact check?

Of course PolitiFact went through the motions of checking a similar claim, as we pointed out. But using "profiteering" in the claim changes things.

Or should.

Ultimately, it depends on whether PolitiFact has the same interest in finding falsehoods from Democrats as it does for Republicans.

Sunday, February 23, 2020

PolitiFact absurdly charges Pete Buttigieg with "bad math"

PolitiFact gave some goofy treatment to a claim from Democratic presidential candidate Pete Buttigieg.

Buttigieg compared the 10-year unpaid cost of fellow candidate Bernie Sanders' new spending proposals to the current U.S. GDP.

PolitiFact cried foul. Or, more precisely, PolitiFact cried "bad math."


Note that PolitiFact says Buttigieg did "bad math."

PolitiFact's fact check never backs that claim.

If Buttigieg is guilty of anything bad, it was a poor job of providing thorough context for the measure he used to illustrate the size of Sanders' "budget hole." Buttigieg was comparing a cumulative 10-year budget hole with one year of U.S. GDP.

PolitiFact notwithstanding, there's nothing particularly wrong with doing that. Maybe Buttigieg should have provided more context, but there's a counterargument to that point: Buttigieg was on a debate stage with a sharply limited amount of time to make his point. In addition, the debate audience and contestants may be expected to have some familiarity with cost estimates and GDP. In other words, it's likely many or most in the audience knew what Buttigieg was saying.

Let's watch PolitiFact try to justify its reasoning:
But there’s an issue with Buttigieg’s basic comparison of Sanders’ proposals to the U.S. economy. He might have been using a rhetorical flourish to give a sense of scale, but his words muddled the math.

The flaw is that he used 10-year cost and revenue estimates for the Sanders plans and stacked them against one year of the nation’s GDP.
PolitiFact tried to justify the "muddled math" charge by noting Buttigieg compared a 10-year cost estimate to a one-year figure for GDP.

But it's not muddled math. The 10-year estimates are the 10-year estimates, mathematically speaking. And the GDP figure is the GDP figure. Noting that the larger figure is larger than the smaller figure is solid math.

PolitiFact goes on to say that the Buttigieg comparison does not compare apples to apples, but so what? Saying an airplane is the size of a football field is also an apples-to-oranges comparison. Airplanes, after all, are not football fields. But the math remains solid: 100 yards equals 100 yards.
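The arithmetic at issue, with hypothetical round numbers, is internally sound; the only live question is whether mixing a 10-year total with a one-year figure misleads:

```python
# Buttigieg-style comparison with hypothetical round numbers: a
# cumulative 10-year budget gap stacked against one year of GDP.
# The math itself is sound; the time frames just differ.

gap_10yr = 25e12  # hypothetical 10-year unpaid cost of the proposals
gdp_1yr = 21e12   # hypothetical one-year U.S. GDP

print(gap_10yr > gdp_1yr)  # True -- the comparison as stated

# An apples-to-apples alternative annualizes the gap first:
annual_gap_share = (gap_10yr / 10) / gdp_1yr
print(round(annual_gap_share, 3))  # 0.119 -- about 12% of GDP per year
```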

Ambiguity differs from error

In fact-checking the correct response to ambiguity is charitable interpretation. After applying charitable interpretation, the fact checker may then consider ways the message could mislead the audience.

If Buttigieg had run a campaign ad using the same words, it would make more sense to grade his claim harshly. Such a message in an ad is likely to reach people without the knowledge base to understand the comparison. But many or most in a debate audience would understand Buttigieg's comparison without additional explanation.

It's an issue of ambiguous context, not "bad math."



Correction Feb. 26, 2020: Omitted the first "i" in "Buttigieg" in the final occurrence in the next-to-last paragraph. Problem corrected.

Friday, February 21, 2020

PolitiFact's dishonest dedication to the "Trump is a liar" narrative

It's quite true that President Trump makes far more than his share of false statements, albeit many of those represent hyperbole. Indeed, it may be argued that Trump blurs the line between the concepts of hyperbole and deceit.

But Trump's reputation for inaccuracy also serves as a confirmation bias trap for journalists.

Case in point, from PolitiFact's Twitter account:


The tweet does not tell us what Trump said about windmills and wildlife, though it links to a supposedly "similar claim" that it fact-checked in the past.

That fact check concerned something Trump said about the number of eagles killed by wind turbines:



The linked fact check had its own problems, which we noted at the time.

One of the things we noted was that PolitiFact gave short shrift to the facts to prefer advancing the narrative that Trump says false things:
PolitiFact's interpretation lacks clear justification in the context of Trump's remarks, but fits PolitiFact's narrative about Trump.

A politician's lack of clarity does not give fact checkers justification for interpreting statements as they wish. The neutral fact checker notes for readers the lack of clarity and then examines the possible interpretations that are at the same time plausible. The neutral fact checker applies the same standard of charitable interpretation to all, regardless of popular public narratives.
PolitiFact's tweet amplifies the distortion in its earlier fact check. Trump said wind turbines kill eagles by the hundreds. PolitiFact made a number of assumptions about what Trump meant (for example, assuming Mr. Trump's "by the hundreds" referred to an annual death toll) and then produced its subjective "Mostly False" rating based on those assumptions.

Did Trump say something comparable in Colorado?

PolitiFact's tweet communicates to readers that Trump uttered another mostly falsehood in Colorado. But what did Trump say that PolitiFact found similar to saying wind turbines kill hundreds of eagles every year?

Here's what Trump said in Colorado, via Rev.com (bold emphasis added):
We are right now energy independent, can you believe it? They want to use wind, wind, wind. Blow wind, please. Please blow. Please keep the birds away from those windmills, please. Tell those beautiful bald eagles, oh, a bald eagle. You know, if you shoot a bald eagle, they put you in jail for a long time, but the windmills knock them out like crazy. It’s true. And I think they have a rule, after a certain number are killed you have to close down the windmill until the following year. Do you believe this? Do you believe this? And they’re all made in China and in Germany. Siemans.
Got it? "Knocking (Bald Eagles) out like crazy" = "(killing eagles) by the hundreds"

How many is "like crazy"? Pity the fact checker who thinks that's a claim a fact checker ought to check. If wind turbines kill tens of Bald Eagles instead of hundreds, that can support the opinion that the turbines kill the eagles "like crazy," particularly given the context.

It's hard to argue that Trump said something false about Bald Eagles in his Colorado speech, yet PolitiFact did just that, relying largely on hiding from its audience what Trump actually said.

How many eagles do wind turbines kill? That's hard to say. But federal permits for wind farming potentially allow for dozens of eagle deaths per year:
(Reuters) - Wind farms will be granted 30-year U.S. government permits that could allow for thousands of accidental eagle deaths due to collisions with company turbines, towers and electrical wires, U.S. wildlife managers said on Wednesday.
Does it follow that Trump said something "Mostly False" in Colorado?

Or are the fact checkers at PolitiFact once again chasing their narrative facts-be-damned?


Thursday, February 20, 2020

PolitiFact weirdly unable to answer criticism

Our title plays off a PolitiFact critique Dave Weigel wrote back in 2011 (Slate). PolitiFact has a chronic difficulty responding effectively to criticism.

Most often PolitiFact doesn't bother responding to criticism. But if it makes its liberal base angry enough sometimes it will trot out some excuses.

This time PolitiFact outraged supporters of Democratic (Socialist) presidential candidate Bernie Sanders with a "Mostly False" rating of Sanders' claim that fellow Democratic presidential candidate Michael Bloomberg "opposed modest proposals during Barack Obama’s presidency to raise taxes on the wealthy, while advocating for cuts to Medicare and Social Security."

Reactions from left-leaning journalists Ryan Grim and Ryan Cooper were typical of the genre.



The problem isn't that Sanders wasn't misleading people. He was. The problem stems from PolitiFact's inability to reasonably explain what Sanders did wrong. PolitiFact offered a poor explanation in its fact check, appearing to reason that what Sanders said was true but misleading and therefore "Mostly False."

That type of description typically fits a "Half True" or a "Mostly True" rating--particularly if the subject isn't a Republican.

PolitiFact went to Twitter to try to explain its decision.

First, PolitiFact made a statement making it appear that Sanders was pretty much right:



Then PolitiFact (rhetorically) asked how the true statements could end up with a "Mostly False" rating. In reply to its own question, we got this:
Because Sanders failed to note the key role of deficit reduction for Bloomberg.
Seriously? Missing context tends to lead to the aforementioned "Mostly True" or "Half True" ratings, not "Mostly False" (unless it's a Republican). Sanders is no Republican, so of course there's outrage on the left.

Anyway, who cuts government programs without having deficit reduction in mind? That's pretty standard, isn't it?

How can PolitiFact be this bad at explaining itself?

In its next explanatory tweet PolitiFact did much better by pointing out Bloomberg agreed the Obama deficit reduction plan should raise taxes, including taxes on wealthy Americans.

That's important not because it's on the topic of deficit reduction but because Sanders made it sound like Bloomberg opposed tax hikes on the wealthy at the federal level. Recall Sanders' words (bold emphasis added): "modest proposals during Barack Obama’s presidency to raise taxes on the wealthy."

Mentioning that the proposals occurred during the Obama presidency led the audience to think Bloomberg had opposed tax hikes at the federal level. But Sanders was referring to Bloomberg's opposition to tax hikes in New York City, not nationally.

PolitiFact mentioned that Bloomberg had opposed the tax hikes in New York, but completely failed to identify Sanders' misdirection.

PolitiFact's next tweet only created more confusion, saying "Sanders’ said Bloomberg wanted entitlement cuts and no tax hikes. That is not what Bloomberg said."

But that's not what Sanders said. 

It's what Sanders implied by juxtaposing mention of the city tax policy with Obama-era proposals for slowing the growth of Medicare and Social Security spending.

And speaking of those two programs, that's where PolitiFact really failed with this fact check. In the past PolitiFact has distinguished, albeit inconsistently, between cutting a government program and slowing its growth. It's common in Washington, D.C., to call the slowing of growth a "cut," but such a cut from a higher growth projection differs from cutting a program by making its funding literally lower from one year to the next. Fact checkers should identify the baseline for the cut. PolitiFact neglected that step.

If PolitiFact had noted that Bloomberg's supposed cuts to Social Security and Medicare were cuts to future growth projections, it could have called out Sanders for the misleading imprecision.

PolitiFact could have said the Social Security/Medicare half of Sanders' claim was "Half True" and that taking the city tax policy out of context was likewise "Half True." And if PolitiFact did not want to credit Sanders with a "Half True" claim by averaging those ratings then it could have justified a "Mostly False" rating by invoking the misleading impression Sanders achieved by juxtaposing the two half truths.


Instead, we got yet another case of PolitiFact weirdly unable to answer criticism.