Sunday, January 29, 2017

PolitiFact continues its campaign of misinformation on waterboarding

Amazing. Simply amazing.

PolitiFact Bias co-editor Jeff D. caught PolitiFact continuing its tendency to misinform its readers about waterboarding in a Jan. 29, 2017 tweet:
PolitiFact's claim was untrue, as I demonstrated in a May 30, 2016 article at Zebra Fact Check, "Torture narrative trumps facts at PolitiFact."

Though PolitiFact claims scientific research shows waterboarding doesn't work, the only "scientific" evidence in the linked article concerns the related conditions of hypoxia (low oxygen) and hypercapnia (excess carbon dioxide). PolitiFact reasoned that because science shows hypoxia and hypercapnia inhibit memory, waterboarding would not work as a means of gaining meaningful intelligence.

The obvious problem with that line of evidence?

Waterboarding as practiced by the CIA takes mere seconds. Journalist Christopher Hitchens had himself waterboarded and broke, saying he would tell whatever he knew, after about 18 seconds. Memos released by the Obama administration revealed that a continuous waterboarding treatment could last a maximum of 40 seconds. The memos spelled out the limits:

- Prisoners could be subjected to waterboarding during one 30-day period
- Maximum five treatment days per 30 days
- Maximum two waterboarding sessions per treatment day
- Maximum two hours per session (the length of time the prisoner is strapped down)
- Maximum 40 seconds of continuous water application
- Maximum six water applications over 10 seconds long per session
- Maximum 240 seconds (four minutes) of waterboarding per session from applications over 10 seconds long
- Maximum total of 12 minutes of treatment with water over any 24-hour period
- Applications under 10 seconds long could make up a maximum of eight minutes on top of the four mentioned above
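
For what it's worth, the arithmetic in those limits hangs together. Here is a minimal sketch checking it in Python (the variable names are ours; the figures come straight from the list above):

```python
# Sanity check of the water-application arithmetic in the memo limits above.

MAX_CONTINUOUS_SEC = 40        # longest single continuous application
LONG_APPS_PER_SESSION = 6      # max applications over 10 seconds, per session

long_secs_per_session = LONG_APPS_PER_SESSION * MAX_CONTINUOUS_SEC
print(long_secs_per_session)   # 240 seconds, matching the four-minute per-session cap

SHORT_APP_MINUTES = 8          # max minutes per day from applications under 10 seconds
daily_total = long_secs_per_session / 60 + SHORT_APP_MINUTES
print(daily_total)             # 12.0 minutes, matching the stated 24-hour maximum
```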

While reports indicate the CIA exceeded these guidelines in the case of al Qaeda mastermind Khalid Sheikh Mohammed, these limits are not conducive to creating significant conditions of hypoxia or hypercapnia.

The typical person can hold their breath for 40 seconds without too much difficulty or distress. The CIA's waterboarding was designed to bring about the sensation of drowning, not the literal effects of drowning (hypoxia, hypercapnia, aspiration and swallowing of water). That is why the techniques often break prisoners in about 10 seconds.

And the other problem?

The CIA did not interrogate prisoners while waterboarding them. Nor did the CIA use the technique to obtain confessions under duress. Waterboarding was used to make prisoners more amenable to conventional forms of interrogation.

None of this information is difficult to find.

Why do the fact checkers at PolitiFact (not to mention elsewhere) have such a tough time figuring this stuff out?

There likely isn't any significant scientific evidence either for or against the effectiveness of waterboarding. PolitiFact pretending there is does not make it so.

Friday, January 20, 2017

Hans Bader: "The Strange Ignorance of PolitiFact"

Hans Bader, writing at Liberty Unyielding, points out a Jan. 19, 2017 fact-checking train wreck from PolitiFact Pennsylvania. PolitiFact Pennsylvania looked at a claim Sen. Bob Casey (D-Pa.) used to try to discredit President-elect Donald Trump's nominee for Secretary of Education, Betsy DeVos.

Bader's article initially emphasized PolitiFact Pennsylvania's apparent ignorance of the "reasonable doubt" standard in United States criminal cases:
In an error-filled January 19 “fact-check,” PolitiFact’s Anna Orso wrote about “the ‘clear and convincing’ standard used in criminal trials.”  The clear and convincing evidence standard is not used in criminal trials. Even my 9-year old daughter knows that the correct standard is “beyond a reasonable doubt.”
By the time we started looking at this one, PolitiFact Pennsylvania had started trying to spackle over its faults. The record (at the Internet Archive) makes clear that PolitiFact's changes to its text got ahead of its policy of announcing corrections or updates.

Eventually, PolitiFact continued its redefinition of the word "transparency" with this vague description of its corrections:
Correction: An earlier version of this article incorrectly characterized the standard of evidence used in criminal convictions.
Though PolitiFact Pennsylvania corrected the most obvious and embarrassing problem with its fact check, other problems Bader pointed out still remain, such as its questionable characterization of the Foundation for Individual Rights in Education's civil rights stance as "controversial."

For our part, we question PolitiFact Pennsylvania for apparently uncritically accepting a key premise connected to the statement it claimed to fact check:
Specifically, Casey said the Philadelphia-based Foundation for Individual Rights in Education supports a bill that "would change the standard of evidence." He said the group is in favor of ditching the "preponderance of the evidence" standard most commonly used in Title IX investigations on college campuses and instead using the "beyond a reasonable doubt" standard used in criminal cases.
PolitiFact claimed to simply fact check whether DeVos had contributed to FIRE. But without the implication that FIRE is some kind of far-outside-the-mainstream group, who cares?

We say that given PolitiFact Pennsylvania's explanation of Casey's attack on DeVos, a fact checker needs to investigate whether FIRE supported a bill that would change the standard of evidence.

PolitiFact Pennsylvania offers its readers no evidence at all regarding any such bill. If there is no bill as Casey described, then PolitiFact Pennsylvania's "Mostly True" rating serves to buoy a false charge against DeVos (and FIRE).

Ultimately, PolitiFact Pennsylvania fails to coherently explain the point of contention. The Obama administration sought to prevent schools from using the "clear and convincing" standard:
Thus, in order for a school’s grievance procedures to be consistent with Title IX standards, the school must use a preponderance of the evidence standard (i.e., it is more likely than not that sexual harassment or violence occurred). The “clear and convincing” standard (i.e., it is highly probable or reasonably certain that the sexual harassment or violence occurred), currently used by some schools, is a higher standard of proof. Grievance procedures that use this higher standard are inconsistent with the standard of proof established for violations of the civil rights laws, and are thus not equitable under Title IX. Therefore, preponderance of the evidence is the appropriate standard for investigating allegations of sexual harassment or violence.
FIRE objected to that. But objecting to that move by the Obama administration does not mean FIRE advocated using the "beyond a reasonable doubt" standard (as PolitiFact's story now reads). The same goes for the "clear and convincing" standard mentioned in the original version of the fact check.

PolitiFact Pennsylvania simply skipped out on investigating the linchpin of Casey's argument.

There's more hole than story to this PolitiFact Pennsylvania fact check.

Be sure to read Bader's article for more.


Update Jan. 21, 2017: Added link to the Department of Education's April 4, 2011 "Dear Colleague" letter
Update Jan. 24, 2017: Added a proper ending to the second sentence in the third-to-last paragraph
Update Feb. 2, 2017: Added "article" after "Bader's" in the second paragraph to make the sentence more sensible

Wednesday, January 18, 2017

Thought-Checkers: Protecting against Fakethink

Everything you can imagine is real
                                 -Pablo Picasso*


Not so fast, Pablo! 

We stumbled across this silly piece by Lauren Carroll (of fact-checking flat earth claims fame) where Carroll somehow determines as objective fact the limits of Betsy DeVos' ability to imagine things: 

DeVos was asked a question to which she didn't know the answer, so she offered a guess, and she explicitly stated she was offering a guess.

The difference between making a statement of fact and offering your best guess seems far too complicated for either Carroll or the editors who let this editorial opportunity escape their liberal grasp.

This isn't a fact check; it's a hit piece by a journalist apparently more eager to smear a Trump pick than to acknowledge what DeVos actually said. Oddly, Carroll doesn't mention any attempt to contact DeVos or anyone in the Trump camp for clarification, a courtesy PolitiFact has extended in the past.

PolitiFact is pushing Fake News by accusing DeVos of making a claim when she was stating a theoretical possibility that she could imagine. The real crime here is that garbage ratings like this will end up in DeVos' unscientific "report card," one of the bogus charts PolitiFact dishonestly pimps out to readers as objective data.

PolitiFact's disdain for all things Trump is clear, and it's only going to get worse. The administration hasn't even begun yet, and PolitiFact is already fact-checking what someone can or cannot imagine.

Happy thoughts!

*attributed

Sunday, January 8, 2017

Not a fact checker's argument, but PolitiFact went there

A few days ago we highlighted a gun-rights research group's criticism of a PolitiFact California fact check. The fact check found it "Mostly True" that over seven children per day fall victim to gun violence, even though that number includes suicides and "children" aged 18 and 19.

A dubious finding? Sure. But at least PolitiFact California's fact check did not try to use the rationale that might have made all victims of gun violence "children." The PolitiFact video used to help publicize the fact check (narrated by PolitiFact California's Chris Nichols), however, went there:

How many teenagers in the background photo are 18 or over, we wonder?

Any parent will tell you that any child of theirs is a child, regardless of age. But that definition makes the modifier "children" useless in a claim about the effect of gun violence on children. "Children" under that broad definition includes all human beings with parents, counting most, if not all, human beings as children.

Nichols' argument does not belong in a fact check. It belongs in a political ad designed around the appeal to emotion.

The only sensible operative definition of "children" here is humans not yet of age (18 years, in the United States). All persons under 18 are "children" by this definition. But not all teenagers are "children" by this definition.

To repeat the gist of the earlier assessment, the claim was misleading, but PolitiFact covered for it with an equivocation fallacy. The video, featuring an even more outrageous equivocation fallacy, just makes PolitiFact marginally more farcical.


Edit: Added link to CPRC in first graph -Jeff 0735PST 1/12/2017

Thursday, January 5, 2017

Evidence of PolitiFact's bias? The Paradox Project II

On Dec. 23, 2016, we published our review of the first part of Matthew Shapiro's evaluation of PolitiFact. This post will cover Shapiro's second installment in that series.

The second part of Shapiro's series showed little reliance on hard data in any of its three main sections.

Top Five Lies? Really?

Shapiro's first section identifies the top five lies, respectively, for Trump and Clinton and looks at how PolitiFact handled each list. Where do the lists of top lies come from? Shapiro evidently chose the entries himself. And Shapiro admits his process was subjective (bold emphasis added):

It is extremely hard to pin down exactly which facts PolitiFact declines to check. We could argue all day about individual articles, but how do you show bias in which statements they choose to evaluate? How do you look at the facts that weren’t checked?

Our first stab at this question came from asking which lies each candidate was famous for and checking to see how PolitiFact evaluated them. These are necessarily going to be somewhat subjective, but even so the results were instructive.

It seems to us that Shapiro leads off his second installment with facepalm material.

Is an analysis data-driven if you're looking only at data sifted through a subjective lens? No. Such an analysis gets its impetus from the view through the subjective lens, which leads to cherry-picked data. Shapiro's approach to the data in this case wallows in the same mud in which PolitiFact basks with its ubiquitous "report card" graphs. PolitiFact gives essentially the same excuse for its subjective approach that we see from Shapiro: Sure, it's not scientific, but we can still see something important in these numbers!

Shapiro offers his readers nothing to serve as a solid basis for accepting his conclusions based on the Trump and Clinton "top five lies."

Putting the best face on Shapiro's evidence, yes PolitiFact skews its story selection. And the most obvious problem from the skewing stems from PolitiFact generally ignoring the skew when it publishes its "report cards" and other presentations of its "Truth-O-Meter" data. Using PolitiFact's own bad approach against it might carry some poetic justice, but shouldn't we prefer solid reasoning in making our criticisms of PolitiFact?

The Rubio-Reid comparison

In Shapiro's second major section, he highlights the jaw-dropping disparity between PolitiFact's focus on Marco Rubio, starting with Rubio's 2010 candidacy for the Senate, compared with that of Sen. Harry Reid, long-time senator as well as majority leader and minority leader during PolitiFact's foray into political fact-checking.

Shapiro offers his readers no hint regarding the existence of PolitiFact Florida, the PolitiFact state franchise that accounts in large measure--if not entirely--for PolitiFact's disproportional focus on Rubio. Was Shapiro aware of the different state franchises and how their existence (or non-existence) might skew his comparison?

We are left with an unfortunate dilemma: Either Shapiro knew of PolitiFact Florida and decided not to mention it to his readers, or else he failed to account for its existence in his analysis.


The Trump-Pence-Cruz muddle

Shapiro spends plenty of words and uses two pretty graphs in his third major section to tell us about something that he says seems important:
One thing you may have noticed through this series is that the charts and data we’ve culled show a stark delineation between how PolitiFact treats Republicans versus Democrats. The major exceptions to the rules we’ve identified in PolitiFact ratings and analytics have been Trump and Vice President-elect Mike Pence. These exceptions seem important. After all, who could more exemplify the Republican Party than the incoming president and vice president elect?
Shapiro refers to his observation that PolitiFact tends to use more words when grading the statements of Republicans. Except PolitiFact uses words economically for Trump and Pence.

What does it mean?

Shapiro concludes PolitiFact treats Trump like a Democrat. What does that mean, in its turn, other than PolitiFact does not use more words than average to justify its ratings of Trump (yes, we are emphasizing the circularity)?

Shapiro, so far as we can tell, does not offer up much of an answer. Note the conclusion of the third section, which also concludes Shapiro's second installment of his series:
In this context, PolitiFact’s analysis of Trump reinforces the idea that the media has [sic] called Republicans liars for so long and with such frequency the charge has lost it sting. PolitiFact treated Mitt Romney as a serial liar, fraud, and cheat. They attacked Rubio, Cruz, and Ryan frequently and often unfairly.

But they treated Trump like they do Democrats: their fact-checking was short, clean, and to the point. It dealt only with the facts at hand and sourced those facts as simply as possible. In short, they treated him like a Democrat who isn’t very careful with the truth.
The big takeaway is that PolitiFact's charge that Republicans are big fat liars doesn't carry the zing it once carried? But how would cutting down on the number of words restore the missing sting? Or are PolitiFact writers bowing to the inevitable? Why waste extra words making Trump look like a liar, when it's not going to work?

We just do not see anything in Shapiro's data that particularly recommends his hypothesis about the "crying wolf" syndrome.

An alternative hypothesis

We would suggest two factors that better explain PolitiFact's economy of words in rating Trump.

First, as Shapiro pointed out earlier in his analysis, PolitiFact fact-checked many of Trump's claims multiple times. Is it necessary to go to the same great lengths every time when one is writing essentially the same story? No. The writer has the option of referring the reader to the earlier fact checks for the detailed explanation.

Second, PolitiFact plays to narratives. PolitiFact's reporters allow narrative to drive their thinking, including the idea that their audience shares their view of the narrative. Once PolitiFact has established its narrative identifying a Michele Bachmann, Sarah Palin or a Donald Trump as a stranger to the truth, the writers excuse themselves from spending words to establish the narrative from the ground up.

Maddeningly thin

Is it just us, or is Shapiro's glorious multi-part data extravaganza short on substance?

Let's hope future installments lead to something more substantial than what he has offered so far.

Monday, January 2, 2017

CPRC: "Is Politifact really the organization that should be fact checking Facebook on gun related facts?"

The Crime Prevention Research Center, on Dec. 29, 2016, published a PolitiFact critique that might well have made our top 11 if we had noticed it a few days sooner.

Though the title of the piece suggests a general questioning of PolitiFact's new role as one of Facebook's guardians of truth, the article mainly focuses on one fact check from PolitiFact California, rating "Mostly True" the claim that seven children die each day from gun violence.

The CPRC puts its strongest argument front and center:
Are 18 and 19 year olds “children”?

For 2013 through 2015 for ages 0 through 19 there were 7,838 firearm deaths.  If you exclude 18 and 19 year olds, the number firearm deaths for 2013 through 2015 is reduced by almost half to 4,047 firearm deaths.  Including people who are clearly adults drives the total number of deaths.

Even the Brady Campaign differentiates children from teenagers.  If you just look at those who aren’t teenagers, the number of firearm deaths declines to 692, which comes to 0.63 deaths per day.
This argument cuts PolitiFact California's fact check to the quick. Instead of looking at "children" as something to question, the fact-checkers let it pass with a "he said, she said" caveat (bold emphasis added):
These include all types of gun deaths from accidents to homicides to suicides. About 36 percent resulted from suicides.

Some might take issue with Speier lumping in 18 year-olds and 19 year-olds as children.

Gun deaths for these two ages accounted for nearly half of the 7,838 young people killed in the two-year period.
Yes, some might take issue with lumping 18-year-olds and 19-year-olds in as children, particularly when a quick check of Merriam-Webster reveals how the claim stretches the truth. The distortion maximizes the emotional appeal of protecting "children."

Merriam-Webster's definition No. 2:
a :  a young person especially between infancy and youth
b :  a childlike or childish person  
c :  a person not yet of age
"A person not yet of age" provides the broadest reasonable understanding of the claim PolitiFact California checked. In the United States, persons 18 and over qualify as "of age."

Taking persons 18 and over out of the mix all by itself cuts the estimate nearly in half. Great job, PolitiFact California.
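
As a rough check of the per-day numbers, here is the arithmetic using the CPRC's totals, treating 2013 through 2015 as three full years (the three-year span is our reading of the quoted figures):

```python
# Per-day rates implied by the CPRC's 2013-2015 firearm-death totals,
# assuming three full years (1,095 days); totals from the CPRC article above.

DAYS = 3 * 365

print(round(7838 / DAYS, 2))  # ~7.16 per day: all firearm deaths, ages 0-19
print(round(4047 / DAYS, 2))  # ~3.7 per day: excluding 18- and 19-year-olds
print(round(692 / DAYS, 2))   # ~0.63 per day: excluding all teenagers, matching CPRC
```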

Visit CPRC for more, including the share of "gun violence" accounted for by suicide and justifiable homicide.