Friday, September 22, 2017

Joy Behar lies 100 percent of the time. It's from PolitiFact.

Of course the title of this post is intended solely to draw attention to its content. We do not think Joy Behar lies 100 percent of the time, no matter what PolitiFact or Politico say.

For the record, Behar's PolitiFact file as of Sept. 19, 2017:


As we have noted over the years, many people mistakenly believe PolitiFact scorecards offer a reasonable basis for judging the veracity of politicians and pundits. We posted about Behar on Sept. 7, 2017, noting that she apparently shared that mistaken view.

PolitiFact surprised us by fact-checking Behar's statement. The fact check gave PolitiFact the opportunity to correct Behar's core misperception.

Unfortunately, PolitiFact and writer Joshua Gillin blew the opportunity.

A representative selection of statements?


Critics of PolitiFact, including PolitiFact Bias, have for years pointed out the obvious problems with treating PolitiFact's report cards as a means of judging general truthfulness. PolitiFact does not choose its statements in a way that would ensure a representative sample, and an abundance of doubt surrounds the accuracy of the admittedly subjective ratings.

Gillin's fact check rates Behar's conclusion about Trump's percentage of lies "False," but he succeeds in tap-dancing around each of the obvious problems.

Let Fred Astaire stand aside in awe (bold emphasis added):
It appeared that Behar was referring to Trump’s PolitiFact file, which tracks every statement we’ve rated on the Truth-O-Meter. We compile the results of a person's most interesting or provocative statements in their file to provide a broad overview of the kinds of statements they tend to make.
Focusing on a person's most interesting or provocative statements will never provide a broad overview of the kinds of statements they tend to make. Instead, that focus will provide a collection of the person's most interesting or provocative statements, as judged by the ones picking them. Gillin's statement is pure nonsense, like proposing that sawing segments from a two-by-four will tend to help lengthen the two-by-four. In neither case can the method allow one to reach the goal.
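To put numbers on the sampling problem, here is a minimal simulation sketch in Python. Every figure in it is hypothetical, including the speaker's 70 percent accuracy and the editorial selection rule; none of it is PolitiFact data. It simply shows that a sample skewed toward "interesting or provocative" statements tells you almost nothing about how often a speaker is right overall.

```python
# Hypothetical illustration only: the accuracy rate and the selection rule below
# are invented for the example, not drawn from PolitiFact's data.
import random

random.seed(0)

def make_statement():
    is_true = random.random() < 0.70            # hypothetical speaker: right 70% of the time
    # Hypothetical editorial filter: false claims look "provocative" far more often,
    # so they are far more likely to be selected for a rating.
    p_selected = 0.10 if is_true else 0.60
    return is_true, random.random() < p_selected

statements = [make_statement() for _ in range(100_000)]
rated = [is_true for is_true, selected in statements if selected]

overall_false_rate = 1 - sum(t for t, _ in statements) / len(statements)
scorecard_false_rate = 1 - sum(rated) / len(rated)

print(f"False rate across everything said: {overall_false_rate:.0%}")    # about 30%
print(f"False rate on the rated sample:    {scorecard_false_rate:.0%}")  # about 72%
```

With those made-up inputs, the "scorecard" shows the speaker wrong roughly 72 percent of the time even though the speaker is right 70 percent of the time overall. Different inputs give different gaps; the point is only that the selection step, not the speaker's habits, drives the percentage.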

Gillin's nonsense fits with a pattern we see from PolitiFact. Those in charge of PolitiFact will occasionally admit to the problems the critics point out, but PolitiFact's daily presentation obscures those same problems.

Gillin sustains the pattern as his fact check proceeds.

When is a subjective lie an objective lie?


In real life, the act of lying typically involves an intent to deceive. In PolitiFact's better moments, it admits the difficulty of appearing to accuse people of lying. In a nutshell, it's very dicey to state as fact a person was lying unless one is able to read minds. But PolitiFact apparently cannot resist the temptation of judging lies, or at least the temptation of appearing to make those judgments.

Gillin (bold emphasis added):
Behar said PolitiFact reported that "95 percent of what (Trump) says is a lie."

That’s a misreading of Trump’s file, which notes that of the 446 statements we’ve examined, only 5 percent earned a True rating. We’ve rated Trump’s statements False or Pants On Fire a total of 48 percent of the time.

The definitions of our Truth-O-Meter ratings make it difficult to call the bulk of Trump’s statements outright lies. The files we keep for people's statements act as a scorecard of the veracity of their most interesting claims.
Is Gillin able to read minds?

PolitiFact's fact checks, in fact, do not describe reasoning that would allow PolitiFact to judge whether a person used intentionally deceptive speech.

PolitiFact's report cards tell readers only how PolitiFact rated the claims it chose to rate, and as PolitiFact's definitions do not mention the term "lie" in the sense of willful deception, PolitiFact ought to stick with calling low ratings "falsehoods" rather than "lies."

Of course Gillin fails to make the distinction clear.

We are not mind readers. However ...

Though we have warned about the difficulty of stating as fact that a person has engaged in deliberate deception, there are ways one may reasonably suggest it has occurred.

If good evidence exists that a party is aware of information contradicting that party's message and the party continues to send that same message anyway, it is reasonable to conclude that the party is (probably) lying. That is, the party likely engages in willful deception.

The judgment should not count as a matter of fact. It is the product of analysis and may be correct or incorrect.

Interviews with PolitiFact's principal figures often make clear that judging willful deception is not part of their fact-checking process. Yet PolitiFact has a 10-year history of blurring the lines around its judgments, ranging from the "Pants on Fire" rating ("Liar, liar, pants on fire!") for "ridiculous" claims, to articles like Gillin's that skip opportunities to achieve message clarity in favor of billows of smoke.

In between the two, PolitiFact has steadfastly avoided establishing a habit of attaching appropriate disclaimers to its charts and graphs. Why not continually remind people that the graphs only cover what PolitiFact has rated after judging it interesting or provocative?

We conclude that PolitiFact wants to imply that some politicians habitually tell intentional falsehoods while maintaining its own plausible deniability. In other words, the fact checkers want to judge people as liars under the deceptive label of nonpartisan "fact-checking," but with enough wiggle room to shield themselves from criticism.

We think that is likely an intentional deception. And if it is intentional, then PolitiFact is lying.

Why would PolitiFact engage in that deception?

Perhaps it likes the influence it wields on some voters through the deception. Maybe it's just hungry for click$. We're open to other explanations that might make sense of PolitiFact's behavior.

Friday, September 8, 2017

PolitiFact's hypocrisy

PolitiFact manifests many examples of hypocrisy. This post will focus on just one.

On August 21, 2017, Speaker of the House Paul Ryan (R-Wis.) said America has dozens of counties with zero insurers. Ryan was talking about insurers committed to offering plans on the exchanges that serve individual-market customers.

On August 24, 2017, PolitiFact published a fact check rating Ryan's claim "Pants on Fire." PolitiFact noted that Ryan had relied on outdated information to back his claim. PolitiFact said only one county was expected to risk having no insurer, and Ryan should have been aware of it:
Now technically, that report wasn’t published until two days after Ryan spoke. But the government had the information, and a day before Ryan spoke, Politico reported that just one county remained without a potential insurance carrier in 2018. The Kaiser Family Foundation published the same information the day of Ryan’s CNN town hall.

And a week earlier, the government said there were only two counties at risk of having no participating insurer. Ryan was way off no matter what.
Fast forward to Sept. 7, 2017. PolitiFact elects to republicize its fact check of Ryan, reinforcing its message that only one county remains at risk of not having any insurance provider available through the exchange. PolitiFact publicized it on Twitter:
And PolitiFact publicized it on Facebook as well.

The problem? On Sept. 6, 2017, the Kaiser Family Foundation updated its information to show 63 counties at risk of having no insurer on the exchange. The information in the story PolitiFact shared was outdated.

Paul Ryan got a "Pants on Fire" for peddling outdated information.

What does PolitiFact get for doing the same thing?

Another Pulitzer Prize?

Thursday, September 7, 2017

"Not a lot of reader confusion" V

When will PolitiFact give up its absurd notion that its graphs and tables do not mislead large numbers of people?

Joy Behar of ABC's "The View" recently challenged White House spokesperson Sarah Sanders on the basis that PolitiFact says 95 percent of President Donald Trump's statements are untrue:
Joy Behar asked Sanders about a PolitiFact report that found 95 percent of the president's statements were less than completely true.

"The problem with that, Joy, is that you are doing exactly what we're talking about," Sanders responded. "Pushing a false narrative."
Apparently Sanders was the only person on the set who challenged the false narrative Behar was peddling.

For those PolitiFact continues to mislead, we repeat: if PolitiFact fails to use a representative sample of statements when it publishes its graphs and charts, then the percentages tell you the opinions of PolitiFact editors about a select set of statements, not the percentage chance that a typical Trump statement is untrue.

(fingers crossed that the ABC embed works)



Not a lot of reader confusion? Seriously?

Give us a break, PolitiFact.

Wednesday, September 6, 2017

PolitiFact & Roy Moore: A smorgasbord of problems

When PolitiFact unpublished its Sept. 1, 2017 fact check of a claim attacking Alabama Republican Roy Moore, we had our red flag to look into the story. Taking down a published story runs against the current of journalistic ethics all by itself, so we decided to keep an eye on things to see what else might come of it.

We were rewarded with a smorgasbord of questionable actions by PolitiFact.

Publication and Unpublication

PolitiFact's Sept. 1, 2017 fact check found it "Mostly False" that Republican Roy Moore had taken $1 million from a charity he ran to supplement his pay as Chief Justice of the Alabama Supreme Court.

We have yet to read the original fact check, but we know the summary thanks to PolitiFact's Twitter confession issued later on Sept. 1, 2017:


We tweeted criticism of PolitiFact for not making an archived version of the fact check immediately available and for not providing an explanation for those who went looking for the story only to find a 404 page-not-found error. We think readers should not have to rely on Twitter to know what is going on with the PolitiFact website.

John Kruzel takes tens of thousands of dollars from PolitiFact

(a brief lesson in misleading communications)

The way editors word a story's title, or even a subheading like the one above, makes a difference.

What business does John Kruzel have "taking" tens of thousands of dollars from PolitiFact? The answer is easy: Kruzel is an employee of PolitiFact, and PolitiFact pays Kruzel for his work. But we can make that perfectly ordinary and non-controversial relationship look suspicious with a subheading like the one above.

We have a parallel in the fact check of Roy Moore. Moore worked for the charity he ran and was paid for it. Note the title PolitiFact chose for its fact check:

Did Alabama Senate candidate Roy Moore take $1 million from a charity he ran?

 "Mostly True." Hmmm.

Kruzel wrote the fact check we're discussing. He did not necessarily compose the title.

We think it's a bad idea for fact-checkers to engage in the same misleading modes of communication they ought to criticize and hold to account.


Semi-transparent Transparency

For an organization that advocates transparency, PolitiFact sure relishes its semi-transparency. On Sept. 5, 2017, PolitiFact published an explanation of its correction but rationed specifics (bold emphasis added in the second instance):
Correction: When we originally reported this fact-check on Sept. 1, we were unable to determine how the Senate Leadership Fund arrived at its figure of "over $1 million," and the group didn’t respond to our query. The evidence seemed to show a total of under $1 million for salary and other benefits. After publication, a spokesman for the group provided additional evidence showing Moore received compensation as a consultant and through an amended filing, bringing the total to more than $1 million. We have corrected our report, and we have changed the rating from Mostly False to Mostly True.
PolitiFact included a table in its fact check showing relevant information gleaned from tax documents. Two of the entries were marked as for consulting and as an amended filing, which we highlighted for our readers:


Combining the two totals gives us $177,500. Subtracting that figure from the total PolitiFact used in its corrected fact check, we end up with $853,375.

The Senate Leadership Fund PAC (Republican) was off by a measly 14.7 percent and got a "Mostly False" in PolitiFact's original fact check? PolitiFact often barely blinks over much larger errors than that.
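For readers who want to check the arithmetic, here is the calculation laid out in Python. The only assumption beyond the figures quoted above is our reading that the 14.7 percent error is measured against the ad's claimed $1 million.

```python
# Back-of-the-envelope check of the figures discussed above.
consulting_plus_amended = 177_500    # the two highlighted table entries combined
original_total = 853_375             # what remains after subtracting them from the corrected total
corrected_total = original_total + consulting_plus_amended
print(f"${corrected_total:,}")       # $1,030,875: the "more than $1 million" in the corrected fact check

claimed = 1_000_000                  # the ad's "over $1 million"
error = (claimed - original_total) / claimed
print(f"{error:.1%}")                # 14.7%: how far the claim overshot the originally documented total
```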

Take a claim by Sen. Brad Schneider (D-Ill.) from April 2017, for example. The fact check was published under the "PolitiFact Illinois" banner, but PolitiFact veterans Louis Jacobson and Angie Drobnic Holan did the writing and editing, respectively.

Schneider said that the solar industry accounts for three times the jobs of the entire coal mining industry. PolitiFact said the best data showed solar with a 2.3-to-1 jobs advantage over coal, termed that "just short of three-to-one" and rated Schneider's claim "Mostly True."

Even granting Schneider the benefit of rounding, so that a 2.5-to-1 ratio would count as "three times," the measured 2.3-to-1 ratio still leaves his claim off by over 7 percent.
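Here is that comparison worked out the same way; treating 2.5-to-1 as the lowest ratio that still rounds up to "three times" is our assumption about the most generous reading.

```python
# Schneider's ratio claim, granting the rounding concession described above.
measured_ratio = 2.3        # solar-to-coal jobs ratio from the best data, per PolitiFact
generous_claim = 2.5        # assumed: the lowest ratio that still rounds up to "3 times"

shortfall_vs_claim = (generous_claim - measured_ratio) / generous_claim
shortfall_vs_measured = (generous_claim - measured_ratio) / measured_ratio
print(f"{shortfall_vs_claim:.1%}")     # 8.0%
print(f"{shortfall_vs_measured:.1%}")  # 8.7% -- over 7 percent either way
```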

How could an error of under 15 percent have dropped the rating for the Senate Leadership Fund's claim all the way down to "Mostly False"?

We examine that issue next.

Compound Claim, Or Not?

PolitiFact recognizes in its statement of principles that sometimes claims have more than one part:
We sometimes rate compound statements that contain two or more factual assertions. In these cases, we rate the overall accuracy after looking at the individual pieces.
We note that if PolitiFact does not weight the individual pieces equally, we have yet another area where subjective judgment might color "Truth-O-Meter" ratings.

Perhaps this case qualifies as one of those subjectively skewed cases.

The ad attacking Moore looks like a clear compound claim. As PolitiFact puts it (bold emphasis added): "In addition to his compensation as a judge, 'Roy Moore and his wife (paid themselves) over $1 million from a charity they ran.'"

PolitiFact found the first part of the claim flatly false (bold emphasis added):
He began to draw a salary from the foundation in 2005, two years after his dismissal from the bench, according to the foundation’s IRS filings. So the suggestion he drew the two salaries concurrently is wrong.
Without the damning double dipping, the attack ad is a classic deluxe nothingburger with nothingfries and a super-sized nothingsoda.

Moore was ousted as Chief Justice of the Alabama Supreme Court, where he could have expected a raise to as much as $196,183 per year by 2008. After that ouster, Moore was paid a little over $1 million over a nine-year period, counting his wife's salary for one year, averaging well under $150,000 per year. On what planet is that not a pay cut? With the facts exposed, the attack ad loses all coherence. Where is the "more" that serves as the theme of the ad?
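As a rough check on that averaging, using the roughly $1.03 million nine-year total implied by the corrected figures discussed above:

```python
# Rough averaging check; the nine-year total is the approximate corrected figure
# from the previous section.
nine_year_total = 1_030_875
average_per_year = nine_year_total / 9
print(f"${average_per_year:,.0f} per year")         # about $114,542, well under $150,000

chief_justice_salary = 196_183                       # the salary Moore could have expected by 2008
print(f"${chief_justice_salary - average_per_year:,.0f} per year short of the bench salary")  # about $81,641
```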

We think the fact checkers lost track of the point of the ad somewhere along the line. If the ad was just about what Moore was paid for running his charity while not doing a different job at the same time, it's more neutral biography than attack ad. The main point of the attack ad was Moore supplementing his generous salary with money from running a charitable (not-for-profit) organization. Without that main point, virtually nothing remains.

PolitiFact covers itself with shame by failing to see the obvious. The original "Mostly False" rating fit the ad pretty well regardless of whether the ad correctly reported the amount of money Moore was paid for working at a not-for-profit organization.

Assuming PolitiFact did not confuse itself?

If PolitiFact denies making a mistake by losing track of the point of the ad, we have another case that helps amplify the point we made with our post on Sept. 1, 2017. In that post, we noted that PolitiFact graded one of Trump's claims as "False" based on not giving Trump credit for his underlying point.

PolitiFact does not address the "underlying point" of claims in a consistent manner.

In our current example, the attack ad on Roy Moore gets PolitiFact's seal of "Mostly" approval only by ignoring its underlying point. The ad actually misled in two ways, first by saying Moore was supplementing his income as judge with income from his charity when the two sources of income were not concurrent, and second by reporting the charity income while downplaying the period of time over which that income was spread. Despite the dual deceit, PolitiFact graded the claim "Mostly True."

"The decision about a Truth-O-Meter rating is entirely subjective"

Cases like this support our argument that PolitiFact tends to base its ratings on subjective judgments. This case also highlights a systemic failure of transparency at PolitiFact.

We will update this item if PolitiFact surprises us by running a second correction.



Afters

On top of the problems we described above, PolitiFact neglected to tag its revised and republished story with the "Corrections and Updates" tag it says it uses for all corrected or updated stories.

PolitiFact has a poor record of following this part of its corrections policy.

We note, however, that after we pointed out the problem via Twitter and email, PolitiFact fixed it without a long delay.

Friday, September 1, 2017

PolitiFact disallows Trump's underlying point?

PolitiFact's defenders sometimes opine that PolitiFact always justifies its rulings.

We accept that PolitiFact typically includes words in its fact checks intended to justify its rulings. But we detect bias in PolitiFact's inconsistent application of principles when it tries to justify its ratings.

Our example this time comes from an Aug. 31, 2017 fact check of President Donald Trump's claim that illegal border crossings have slowed by 78 percent.


Zebra Fact Check on Aug. 30, 2017 published criticisms of the way PolitiFact and the Washington Post Fact Checker handled this claim. PolitiFact's latest version corrects none of the specified problems, including the failure to attempt a reasonable fact check of how much of a drop in illegal Southwest border crossings Trump can claim.

As with its earlier fact check, PolitiFact offers examples of various cherry-picked statistics, implicitly demonstrating that cherry-picking leads to a divergent set of outcomes:
Here’s how the number of apprehensions have changed:
• From July 2016 to July 2017, down 46 percent;
• From June 2017 to July 2017, up 13 percent;
• From November 2016 to July 2017, down 61 percent.
As I explained over at Zebra Fact Check, a serious attempt to measure a drop in border crossings explains the use of a proxy measure (border apprehensions) and then picks a representative baseline against which to measure the change.
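A minimal sketch of that method, with placeholder numbers (the apprehension counts below are invented for illustration, not actual Customs and Border Protection figures):

```python
# Sketch of the baseline method described above. The counts are placeholders,
# not real apprehension data; only the method is the point.

def percent_drop(baseline: float, current: float) -> float:
    """Percentage decline from the baseline figure to the current figure."""
    return (baseline - current) / baseline * 100

# e.g., average monthly apprehensions over a representative pre-Trump baseline
# versus a recent month under Trump (both numbers hypothetical):
baseline_monthly_average = 40_000
recent_month = 20_000
print(f"{percent_drop(baseline_monthly_average, recent_month):.0f}% drop")  # 50% with these placeholders
```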

Zebra Fact Check calculated a 56 percent change. FactCheck.org calculated 58 percent.

PolitiFact has yet to make a reasonable attempt to establish a representative baseline. The best attempt in the cherry-picked set we quoted was the comparison of July 2016 to July 2017, showing a 46 percent change. That comparison suffers from offering a narrow picture, made up of individual months separated by a year in time. Also, 2016 was not a typical year for border apprehensions under the Obama administration. But at least it compared Trump to Obama in an apples-to-apples sense.

PolitiFact's 46 percent figure ends up in the same ballpark as the 58 percent figure FactCheck.org produced.

That's where the problem comes in.


Where is Trump's underlying point?

Back in 2008, PolitiFact Editor Bill Adair published an article trying to explain how PolitiFact treats numbers claims.
To assess the truth for a numbers claim, the biggest factor is the underlying message.
Adair used as one of his examples a claim by then-presidential candidate Barack Obama that his mixed-marriage birth was illegal in 12 states when he was born. PolitiFact found it was illegal in 22 states but rated the claim "Mostly True." That is the potential power of the underlying argument. If one makes Obama's claim into "My mixed-marriage birth was illegal in many states when I was born," then it's essentially accurate, but one can drop Obama down to just "Mostly True" since he underestimated the number of states by 10.

Does Trump have an underlying point that illegal Southwest border crossings have decreased under his watch?

It appears that he does, and it appears that PolitiFact offers him no credit for it. PolitiFact's fact check shows no interest at all in Trump's underlying point.

Why?




Correction Sept. 1, 2017: Swapped out "Did" for "Does" in the third-to-last paragraph.