Friday, January 20, 2017

Hans Bader: "The Strange Ignorance of PolitiFact"

Hans Bader, writing at Liberty Unyielding, points out a Jan. 19, 2017 fact-checking train wreck from PolitiFact Pennsylvania. PolitiFact Pennsylvania looked at a claim Sen. Bob Casey (D-Pa.) used to try to discredit President-elect Donald Trump's nominee for Secretary of Education, Betsy DeVos.

Bader initially emphasized PolitiFact Pennsylvania's apparent ignorance of the "reasonable doubt" standard in United States criminal cases:
In an error-filled January 19 “fact-check,” PolitiFact’s Anna Orso wrote about “the ‘clear and convincing’ standard used in criminal trials.”  The clear and convincing evidence standard is not used in criminal trials. Even my 9-year old daughter knows that the correct standard is “beyond a reasonable doubt.”
By the time we started looking at this one, PolitiFact Pennsylvania had started trying to spackle over its faults. The record (at the Internet Archive) makes clear that PolitiFact changed its text before announcing any correction or update, contrary to its stated policy.

Eventually, PolitiFact continued its redefinition of the word "transparency" with this vague description of its corrections:
Correction: An earlier version of this article incorrectly characterized the standard of evidence used in criminal convictions.
Though PolitiFact Pennsylvania corrected the most obvious and embarrassing problem with its fact check, other problems Bader pointed out still remain, such as its questionable characterization of the Foundation for Individual Rights in Education's civil rights stance as "controversial."

For our part, we question PolitiFact Pennsylvania for apparently uncritically accepting a key premise connected to the statement it claimed to fact check:
Specifically, Casey said the Philadelphia-based Foundation for Individual Rights in Education supports a bill that "would change the standard of evidence." He said the group is in favor of ditching the "preponderance of the evidence" standard most commonly used in Title IX investigations on college campuses and instead using the "beyond a reasonable doubt" standard used in criminal cases.
PolitiFact claimed to simply fact check whether DeVos had contributed to FIRE. But without the implication that FIRE is some kind of far-outside-the-mainstream group, who cares?

We say that given PolitiFact Pennsylvania's explanation of Casey's attack on DeVos, a fact checker needs to investigate whether FIRE supported a bill that would change the standard of evidence.

PolitiFact Pennsylvania offers its readers no evidence at all regarding any such bill. If there is no bill as Casey described, then PolitiFact Pennsylvania's "Mostly True" rating serves to buoy a false charge against DeVos (and FIRE).

Ultimately, PolitiFact Pennsylvania fails to coherently explain the point of contention. The Obama administration tried to restrict schools from using the "clear and convincing" standard:
Thus, in order for a school’s grievance procedures to be consistent with Title IX standards, the school must use a preponderance of the evidence standard (i.e., it is more likely than not that sexual harassment or violence occurred). The “clear and convincing” standard (i.e., it is highly probable or reasonably certain that the sexual harassment or violence occurred), currently used by some schools, is a higher standard of proof. Grievance procedures that use this higher standard are inconsistent with the standard of proof established for violations of the civil rights laws, and are thus not equitable under Title IX. Therefore, preponderance of the evidence is the appropriate standard for investigating allegations of sexual harassment or violence.
FIRE objected to that. But objecting to that move from the Obama administration does not mean FIRE advocated using the "beyond a reasonable doubt" standard (how PolitiFact's story reads now). That also goes for the "clear and convincing" standard mentioned in the original version.

PolitiFact Pennsylvania simply skipped out on investigating the linchpin of Casey's argument.

There's more hole than story to this PolitiFact Pennsylvania fact check.

Be sure to read Bader's article for more.


Update Jan 21, 2017: Added link to the Department of Education's April 4, 2011 "Dear Colleague" letter

Wednesday, January 18, 2017

Thought-Checkers: Protecting against Fakethink

Everything you can imagine is real
                                 -Pablo Picasso*


Not so fast, Pablo! 

We stumbled across this silly piece by Lauren Carroll (of fact-checking flat earth claims fame) where Carroll somehow determines as objective fact the limits of Betsy DeVos' ability to imagine things: 




DeVos was asked a question, she didn't know the answer, so she offered a guess, and explicitly stated she was offering a guess.

The difference between making a statement of fact and offering your best guess seems far too complicated for either Carroll or the editors who couldn't let this editorial opportunity escape their liberal grasp.

This isn't a fact check; it's a hit piece by a journalish apparently more eager to smear a Trump pick than to acknowledge what DeVos actually said. Oddly, Carroll doesn't list any attempts to contact either DeVos or anyone in the Trump camp for a clarification, a courtesy PolitiFact has extended to others in the past.

PolitiFact is pushing Fake News by accusing DeVos of making a claim when she was stating a theoretical possibility she could imagine. The real crime here is that garbage ratings like this will end up in DeVos' unscientific "report card" on those bogus charts PolitiFact dishonestly pimps out to readers as objective data.

PolitiFact's disdain for all things Trump is clear, and it's only going to get worse. The administration hasn't even taken office yet, and PolitiFact is already fact-checking what someone can or cannot imagine.

Happy thoughts!




*attributed




Sunday, January 8, 2017

Not a fact checker's argument, but PolitiFact went there

A few days ago we highlighted a gun-rights research group's criticism of a PolitiFact California fact check. The fact check found it "Mostly True" that over seven children per day fall victim to gun violence, even though that number includes suicides and "children" aged 18 and 19.

A dubious finding? Sure. But at least PolitiFact California's fact check did not try to use the rationale that might have made all victims of gun violence "children." But the PolitiFact video used to help publicize the fact check (narrated by PolitiFact California's Chris Nichols) went there:

How many teenagers in the background photo are 18 or over, we wonder?

Any parent will tell you that any child of theirs is a child, regardless of age. But that definition makes the modifier "children" useless in a claim about the effect on children from gun violence. "Children" under that broad definition includes all human beings with parents. That counts most, if not all, human beings as children.

Nichols' argument does not belong in a fact check. It belongs in a political ad designed around the appeal to emotion.

The only sensible operative definition of "children" here is humans not yet of age (18 years, in the United States). All persons under 18 are "children" by this definition. But not all teenagers are "children" by this definition.

To repeat the gist of the earlier assessment, the claim was misleading, but PolitiFact covered for it with an equivocation fallacy. The video, featuring an even more outrageous version of the fallacy, just makes PolitiFact marginally more farcical.




Edit: Added link to CPRC in first graph-Jeff 0735PST 1/12/2017

Thursday, January 5, 2017

Evidence of PolitiFact's bias? The Paradox Project II

On Dec. 23, 2016, we published our review of the first part of Matthew Shapiro's evaluation of PolitiFact. This post will cover Shapiro's second installment in that series.

The second part of Shapiro's series showed little reliance on hard data in any of its three main sections.

Top Five Lies? Really?

Shapiro's first section identifies the top five lies for Trump and for Clinton, respectively, and looks at how PolitiFact handled them. Where do the lists of top lies come from? Shapiro evidently chose the entries himself. And Shapiro admits his process was subjective (bold emphasis added):

It is extremely hard to pin down exactly which facts PolitiFact declines to check. We could argue all day about individual articles, but how do you show bias in which statements they choose to evaluate? How do you look at the facts that weren’t checked?

Our first stab at this question came from asking which lies each candidate was famous for and checking to see how PolitiFact evaluated them. These are necessarily going to be somewhat subjective, but even so the results were instructive.

It seems to us that Shapiro leads off his second installment with facepalm material.

Is an analysis data-driven if you're looking only at data sifted through a subjective lens? No. Such an analysis gets its impetus from the view through the subjective lens, which leads to cherry-picked data. Shapiro's approach to the data in this case wallows in the same mud in which PolitiFact basks with its ubiquitous "report card" graphs. PolitiFact gives essentially the same excuse for its subjective approach that we see from Shapiro: Sure, it's not scientific, but we can still see something important in these numbers!

Shapiro offers his readers nothing to serve as a solid basis for accepting his conclusions based on the Trump and Clinton "top five lies."

Putting the best face on Shapiro's evidence: yes, PolitiFact skews its story selection. And the most obvious problem from the skewing stems from PolitiFact generally ignoring the skew when it publishes its "report cards" and other presentations of its "Truth-O-Meter" data. Using PolitiFact's own bad approach against it might carry some poetic justice, but shouldn't we prefer solid reasoning in making our criticisms of PolitiFact?

The Rubio-Reid comparison

In his second major section, Shapiro highlights the jaw-dropping disparity between PolitiFact's focus on Marco Rubio, starting with Rubio's 2010 candidacy for the Senate, and its comparatively light focus on Sen. Harry Reid, a long-time senator who served as both majority leader and minority leader during PolitiFact's foray into political fact-checking.

Shapiro offers his readers no hint regarding the existence of PolitiFact Florida, the PolitiFact state franchise that accounts in large measure--if not entirely--for PolitiFact's disproportional focus on Rubio. Was Shapiro aware of the different state franchises and how their existence (or non-existence) might skew his comparison?

We are left with an unfortunate dilemma: Either Shapiro knew of PolitiFact Florida and decided not to mention it to his readers, or else he failed to account for its existence in his analysis.


The Trump-Pence-Cruz muddle

Shapiro spends plenty of words and uses two pretty graphs in his third major section to tell us about something that he says seems important:
One thing you may have noticed through this series is that the charts and data we’ve culled show a stark delineation between how PolitiFact treats Republicans versus Democrats. The major exceptions to the rules we’ve identified in PolitiFact ratings and analytics have been Trump and Vice President-elect Mike Pence. These exceptions seem important. After all, who could more exemplify the Republican Party than the incoming president and vice president elect?
Shapiro refers to his observation that PolitiFact tends to use more words when grading the statements of Republicans. Except PolitiFact uses words economically for Trump and Pence.

What does it mean?

Shapiro concludes PolitiFact treats Trump like a Democrat. What does that mean, in its turn, other than that PolitiFact does not use more words than average to justify its ratings of Trump (yes, we are emphasizing the circularity)?

Shapiro, so far as we can tell, does not offer up much of an answer. Note the conclusion of the third section, which also concludes Shapiro's second installment of his series:
In this context, PolitiFact’s analysis of Trump reinforces the idea that the media has [sic] called Republicans liars for so long and with such frequency the charge has lost it sting. PolitiFact treated Mitt Romney as a serial liar, fraud, and cheat. They attacked Rubio, Cruz, and Ryan frequently and often unfairly.

But they treated Trump like they do Democrats: their fact-checking was short, clean, and to the point. It dealt only with the facts at hand and sourced those facts as simply as possible. In short, they treated him like a Democrat who isn’t very careful with the truth.
The big takeaway is that PolitiFact's charge that Republicans are big fat liars doesn't carry the zing it once carried? But how would cutting down on the number of words restore the missing sting? Or are PolitiFact writers bowing to the inevitable? Why waste extra words making Trump look like a liar, when it's not going to work?

We just do not see anything in Shapiro's data that particularly recommends his hypothesis about the "crying wolf" syndrome.

An alternative hypothesis

We would suggest two factors that better explain PolitiFact's economy of words in rating Trump.

First, as Shapiro pointed out earlier in his analysis, PolitiFact did many of its fact-checks of Trump multiple times. Is it necessary to go to the same great lengths every time when one is writing essentially the same story? No. The writer has the option of referring the reader to the earlier fact checks for the detailed explanation.

Second, PolitiFact plays to narratives. PolitiFact's reporters allow narrative to drive their thinking, including the idea that their audience shares their view of the narrative. Once PolitiFact has established its narrative identifying a Michele Bachmann, a Sarah Palin or a Donald Trump as a stranger to the truth, the writers excuse themselves from spending words to establish the narrative from the ground up.

Maddeningly thin

Is it just us, or is Shapiro's glorious multi-part data extravaganza short on substance?

Let's hope future installments lead to something more substantial than what he has offered so far.

Monday, January 2, 2017

CPRC: "Is Politifact really the organization that should be fact checking Facebook on gun related facts?"

The Crime Prevention Research Center, on Dec. 29, 2016, published a PolitiFact critique that might well have made our top 11 if we had noticed it a few days sooner.

Though the title of the piece suggests a general questioning of PolitiFact's new role as one of Facebook's guardians of truth, the article mainly focuses on one fact check from PolitiFact California, rating "Mostly True" the claim that seven children die each day from gun violence.

The CPRC puts its strongest argument front and center:
Are 18 and 19 year olds “children”?

For 2013 through 2015 for ages 0 through 19 there were 7,838 firearm deaths.  If you exclude 18 and 19 year olds, the number firearm deaths for 2013 through 2015 is reduced by almost half to 4,047 firearm deaths.  Including people who are clearly adults drives the total number of deaths.

Even the Brady Campaign differentiates children from teenagers.  If you just look at those who aren’t teenagers, the number of firearm deaths declines to 692, which comes to 0.63 deaths per day.
This argument cuts PolitiFact California's fact check to the quick. Instead of looking at "children" as something to question, the fact-checkers let it pass with a "he said, she said" caveat (bold emphasis added):
These include all types of gun deaths from accidents to homicides to suicides. About 36 percent resulted from suicides.

Some might take issue with Speier lumping in 18 year-olds and 19 year-olds as children.

Gun deaths for these two ages accounted for nearly half of the 7,838 young people killed in the two-year period.
Yes, some might take issue with lumping 18 year-olds and 19 year-olds in as children, particularly when checking Merriam-Webster quickly reveals how the claim stretches the truth. The distortion maximizes the emotional appeal of protecting "children."

Merriam-Webster's definition No. 2:
a :  a young person especially between infancy and youth
b :  a childlike or childish person  
c :  a person not yet of age
"A person not yet of age" provides the broadest reasonable understanding of the claim PolitiFact California checked. In the United States, persons 18 and over qualify as "of age."

Taking persons 18 and over out of the mix all by itself cuts the estimate nearly in half. Great job, PolitiFact California.
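
For readers who wish to run the numbers themselves, here is a minimal sketch of the per-day arithmetic behind the figures CPRC cites. The 1,095-day window (three non-leap years, 2013 through 2015) and the age-bracket labels are our reading of CPRC's totals, not something drawn from the fact check itself:

# Minimal sketch: reproduce the per-day figures from the CPRC totals above.
# Assumes the 2013-2015 window covers 1,095 days (three non-leap years)
# and that CPRC's "those who aren't teenagers" means ages 0 through 12.
DAYS_2013_2015 = 365 * 3  # 1,095 days

totals = {
    "ages 0-19 (as reported)": 7838,
    "ages 0-17 (18- and 19-year-olds excluded)": 4047,
    "ages 0-12 (all teenagers excluded)": 692,
}

for label, deaths in totals.items():
    print(f"{label}: {deaths / DAYS_2013_2015:.2f} deaths per day")

# Output: 7.16, 3.70 and 0.63 deaths per day, respectively, matching the
# "over seven per day" claim and CPRC's 0.63 figure.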

Visit CPRC for more, including the share of "gun violence" accounted for by suicide and justifiable homicide.

Friday, December 30, 2016

PolitiFact's top eleven fake fact checks of 2016

We've promised a list of PolitiFact's top contributions to fake news--but we don't want to get into a useless semantic argument about what constitutes "fake news." For that reason, we're calling this list PolitiFact's top "fake fact checks," and that term refers to fact checks that misinform, whether intentionally or not.

11 Mike Pence denied evolution!

PolitiFact California rated "True" Governor Jerry Brown's claim that Republican vice presidential candidate Mike Pence denied evolution. The truth? Pence made a statement consistent with theistic evolution without affirming or denying evolution. We called out the error here. So PolitiFact California later changed its rating to "Half True." Because if Pence did not deny evolution, that means it is half true that he denied evolution. It's fact checker logic. You wouldn't understand.


10 Ron Johnson denies humans contribute to climate change!

When Democratic candidate Russ Feingold charged that his Republican opponent Ron Johnson does not accept any human role in climate change, PolitiFact Wisconsin was there. It rated Feingold's claim "Mostly True." The problem? PolitiFact Wisconsin's evidence showed Johnson making a number of clear statements to the contrary, including one where Johnson specifically said he does not deny humans affect the climate. PolitiFact Wisconsin went with its ability to interpret Johnson's more ambiguous statements as a denial of what Johnson said plainly. We wrote about the mistake, but PolitiFact Wisconsin has stayed with its "Mostly True" rating.


9 Social Security is not a Ponzi scheme!

PolitiFact has an established precedent of denying the similarities between Social Security's "pay-as-you-go" financing and Ponzi financing. PolitiFact reinforced its misleading narrative by giving voters advance warning that they might hear the Ponzi lie in 2016. The problem? Voters can find that supposed lie repeated commonly in professional literature, written by the kind of experts PolitiFact might have interviewed to learn the truth.

Will PolitiFact ever repent of misleading its readers on this topic?


8 LGBT the group most often victimized by hate crimes!

Attorney General Loretta Lynch declared in 2016 that lesbian-gay-bisexual-transgender (LGBT) folks are the group most often victimized by hate crimes. PolitiFact gave Lynch's statement a "True" rating, meaning the statement is true without leaving out any important information. The problem? Lynch's statement is only true on a per capita basis. In other words, larger minority groups experience more hate crime victimization in total than the LGBT group does, but an individual in the LGBT group is more likely to experience a hate crime than a member of those other groups.
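
A purely hypothetical set of numbers makes the distinction concrete. The populations and incident counts below are illustrative inventions of ours, not figures from the FBI data behind Lynch's claim:

# Illustrative only: hypothetical populations and hate-crime counts showing
# how a smaller group can suffer fewer total incidents yet face a higher
# per capita victimization rate.
groups = {
    "larger minority group": (40_000_000, 4_000),  # (population, incidents)
    "LGBT": (10_000_000, 1_500),
}

for name, (population, incidents) in groups.items():
    rate = incidents / population * 100_000  # incidents per 100,000 people
    print(f"{name}: {incidents} incidents, {rate:.1f} per 100,000")

# The larger group logs more incidents in total (4,000 vs. 1,500), but the
# LGBT rate per 100,000 (15.0) exceeds the larger group's rate (10.0).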

How is that not significant enough to affect the rating?


7 The gender wage gap is real(ly big)! Or something!

Mainstream fact checkers are consistently awful on the gender wage gap. The game works like this: a Democratic candidate wants to leverage concern over gender discrimination, so the candidate cites a statistic that has hardly any relationship to gender discrimination. Democratic Party candidates can count on fact checkers to go along with the game so long as they do not specifically say the raw gender wage gap is caused by gender discrimination.

PolitiFact Missouri's 2016 gender wage gap story, exposed here and here, did that approach one better by badly misinterpreting its source material to exaggerate the size of the gap caused by discrimination.


6 Torture doesn't work!

PolitiFact Florida weighed in on torture and waterboarding when a Florida Republican running for Marco Rubio's Senate seat said waterboarding works. PolitiFact Florida ruled the claim "False," after admitting that nobody has tested the proposition scientifically. In short, we (including PolitiFact) don't know for a fact whether waterboarding works. PolitiFact Florida's error was pointed out at Flopping Aces and here.


5 France and Germany did not think Iraq had WMD!

When former Assistant Secretary of Defense Paul Wolfowitz said the French and Germans believed Iraq had WMD, PolitiFact ruled it "Mostly False." The creepy "1984" nature of this fact check stems from PolitiFact turning lack of certitude into near-certitude of lack. And PolitiFact has to win some sort of award for avoiding French President Jacques Chirac's 2003 statement, during the approach to the war, that Iraq "probably" possessed WMD.


4 Colorado Republican tried to redefine rape!

PolitiFact Colorado makes our list with its liberal "Mostly True" rating given to abortion rights champion Emily's List. Emily's List charged a Colorado Republican with trying to "redefine rape" in an abortion-related statute. PolitiFact Colorado apparently neglected to look up the traditional definition of rape (and its forcible/statutory distinction) to see whether it had changed thanks to the proposed wording. It had not, leaving the impression that PolitiFact Colorado essentially took the word of Emily's List at face value. Fellow PolitiFact critic Dustin Siggins led the way in flagging the problems with this PolitiFact Colorado item.


3 In California, it's easier to buy a gun than a Happy Meal!

Matthew Hoy, another one of our favorite PolitiFact critics, flagged this hilarious item. This was not a fact check, but rather a Twitter incident where PolitiFact California retweeted somebody else. California Democrat Gavin Newsom received bogus PolitiCover for claiming there are more gun dealers in California than McDonald's. Newsom tweeted out the bogus vindication under the absurd headline "FACT: It's easier to get a gun than a Happy Meal in California." Partly because a gun costs less than a Happy Meal?

2 Donald Trump is causing an increase in bullying in our schools!

PolitiFact ostensibly checked Hillary Clinton's claim that teachers noticed a "Trump Effect" that amounted to an increase in bullying behavior in the nation's schools. But anecdotal reports ought to mean close to squat in fact-checking circles. PolitiFact nonetheless accepted a motley collection of anecdotes from the left-leaning Southern Poverty Law Center as reason enough to give Clinton a "Mostly True" rating. We chronicled the numerous problems with the so-called "Trump Effect" here and at Zebra Fact Check here and here.

1 Mike Pence advocated diverting federal funds from AIDS patients to gay conversion therapy!

PolitiFact California heads the list with its second mostly fact-free fact check of Mike Pence. Back around the year 2000, when Pence was first running for the House of Representatives, he suggested that AIDS care dollars under the Ryan White Care Act should not go to organizations that celebrated behavior likely to spread AIDS. Pence said funds under the Act should go to people seeking to "change their sexual behavior." About 15 years later, Pence's statement was construed to mean that he wanted AIDS care funding to go toward gay conversion therapy. There's no serious argument supporting that notion, and Timothy P. Carney pointed that out even before PolitiFact checked the claim. But PolitiFact California gave Gavin Newsom a "True" rating for the accusation.

PolitiFact California's recent publication of its most popular fact checks for 2016 helps explain why this item tops our list. PolitiFact claimed its "Half True" rating of Newsom was its most popular story. But for months the story ran with a "True" rating. Which version of the story got the most clicks, eh?


Monday, December 26, 2016

Bill Adair: Do as I say, not as I do(?)

One of the earliest criticisms Jeff and I leveled against PolitiFact was its publication of opinion-based material under the banner of objective news reporting. PolitiFact's website has never, so far as we have found, bothered to categorize its stories as "news" or "op-ed." Meanwhile, the Tampa Bay Times publishes PolitiFact's fact checks in print alongside other "news" stories. The presentation implies the fact checks count as objective reporting.

Yet PolitiFact's founding editor, Bill Adair, has made statements describing PolitiFact fact checks as something other than objective reporting. Adair has called fact-checking "reported conclusion" journalism, as though one may employ the methods of the op-ed writer from Jay Rosen's "view from nowhere" and end up with objective reporting. And we have tried to publicize Adair's admission that what he calls the "heart of PolitiFact," the "Truth-O-Meter," features subjective ratings.

As a result, we are gobsmacked that Adair effectively expressed solidarity with PolitiFact Bias on the issue of properly labeling journalism (interview question by Hassan M. Kamal and response by Adair; bold emphasis in the original):
The online media is still at a nascent stage compared to its print counterpart. There's still much to learn about user behaviour and impact of news on the Web. What are the mistakes do you think that the early adopters of news websites made that can be avoided?

Here's a big one: identifying articles that are news and distinguishing them from articles that are opinion. I think of journalism as a continuum: on one end there's pure news that is objective and tells both sides. Just the facts. On the other end, there's pure opinion — we know it as editorials and columns in newspaper. And then there's some journalism in the middle. It might be based on reporting, but it's reflecting just one point of view. And one mistake that news organisations have made is not telling people the difference between them. When we publish an opinion article, we just put the phrase 'op-ed' on top of an article saying it's an op-ed. But many many people don't know what that means. And it's based on the old newspaper concept that the columns that run opposite the editorial are op-ed columns. The lesson here is that we should better label the nature of journalism. Label whether it's news or opinion or something in between like an analysis. And that's something we can do better when we set up new websites.
Addressing the elephant in the room, if labeling journalism accurately is so important and analysis falls between reporting and op-ed on the news continuum, why doesn't PolitiFact label its fact checks as analysis instead of passing them off as objective news?


Afters

The fact check website I created to improve on earlier fact-checking methods, by the way, separates the reporting from the analysis in each fact check, labeling both.