Tuesday, December 26, 2017

Not a Lot of Reader Confusion VIII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. PolitiFact Editor Angie Drobnic Holan says she doesn't notice much of that sort of thing. This series looks to help acquaint Holan and others with the evidence.


Oh, look. Another journal article using PolitiFact's report card data to judge the veracity of a politician (bold emphasis added):
Political fact-checking organizations and the mainstream media reported extensively on Trump’s false statements of fact and unsubstantiated generalizations. And they noted that he made such statements in staggering proportions. For example, by April of 2017, Politifact assessed 20% of Trump’s statements as mostly false, 33% as false, and 16% as what it called “pants on fire” false— cumulatively suggesting that the vast majority of the time Trump was making either false or significantly misleading statements to the public.
That's from The Resilience of Noxious Doctrine: The 2016 Election, the Marketplace of Ideas, and the Obstinacy of Bias.

The article, by Leonard M. Niehoff and Deeva Shah, appeared in the Michigan Journal of Race and Law.

The authors should be ashamed of themselves for making that argument based on data subject to selection bias and ideological bias.

On the bright side, we suppose such use of PolitiFact data may successfully model the obstinacy of bias.

We recommend to the authors this section of their article:
Confirmation bias, discussed briefly above, is another common type of anchoring bias. Confirmation bias describes our tendency to value facts and opinions that align with those we have already formed. By only referencing information and viewpoints that affirm previously held beliefs, people confirm their biased views instead of considering conflicting data and ideas.
 




Correction: Fixed link to Noxious Doctrine paper 1838PST 12/26/2017-Jeff

Friday, December 22, 2017

Beware, lest Trump & PolitiFact turn your liberal talking point into a falsehood!

PolitiFact gave President Donald Trump a "False" rating for claiming the GOP tax bill had effectively repealed the Affordable Care Act.


We figured there was a good chance that defenders of the ACA had made the same claim.

Sure enough, we found an example from the prestigious left-leaning magazine The Atlantic. The Google preview tells the story, as does the story's URL, though the story's title tames things a little: "The GOP's High-Risk Move to Whack Obamacare in Its Tax Bill."

The key "repeal" line came from an expert The Atlantic cited in its story (bold emphasis added):
“Make no mistake, repealing the individual mandate is tantamount to repealing the Affordable Care Act,” said Brad Woodhouse, campaign director for Protect Our Care, an advocacy group supportive of the ACA.
Would Woodhouse receive a "False" rating from PolitiFact if it rated his statement?

Would The Atlantic receive a "False" rating from PolitiFact?

Would PolitiFact even notice the claim if it weren't coming from a Republican?



Afters (other liars who escaped PolitiFact's notice)

"GOP tax bill is just another way to repeal health care." (Andy Slavitt, USA Today)

"Republican tax bill to include Obamacare repeal" (Christian Science Monitor)

"Republicans undermine their own tax reform bill to repeal Obamacare" (Salon)

"Another Obamacare repeal effort doesn't actually have to be in the tax cuts bill, says the guy heading up popular vote loser Donald Trump's Office of Management and Budget." (Daily Kos)


Thursday, December 21, 2017

Layers of editors on PolitiFact's Facebook page

I can probably get away with posting this PolitiFact Facebook post from Dec. 21, 2017 without comment.

The mistake is obvious, right?






***SPOILER ALERT***






Surely they meant to post the second chart from the story instead of the one appearing above.



Tuesday, December 19, 2017

PolitiFact's "Pants on Fire" bias--2017 update (Updated)

What tale does the "Truth-O-Meter" tell?

For years, we at PolitiFact Bias have argued that PolitiFact's "Truth-O-Meter" ratings serve poorly to tell us about the people and organizations PolitiFact rates on the meter. But the ratings may tell us quite a bit about the people who run PolitiFact.

To put this notion into practice, we devised a simple examination of the line of demarcation between two ratings, "False" and "Pants on Fire." PolitiFact offers no objective means of distinguishing between a "False" rating and a "Pants on Fire" rating. In fact, PolitiFact's founding editor, Bill Adair (now on staff at Duke University), described the decision about the ratings as "entirely subjective."

Angie Drobnic Holan, who took over for Adair in 2013 after Adair took the position at Duke, said "the line between 'False' and 'Pants on Fire' is just, you know, sometimes we decide one way and sometimes decide the other."

After searching in vain for dependable objective markers distinguishing the "Pants on Fire" rating from the "False" rating, we took PolitiFact at its word and assumed the difference between the two is subjective. We researched the way PolitiFact applied the two ratings as an expression of PolitiFact's opinion, reasoning that we could use the opinions to potentially detect PolitiFact's bias (details of how we sorted the data here).

Our earliest research showed that, after PolitiFact's first year, Republicans were much more likely than Democrats to have a false claim rated "Pants on Fire" instead of merely "False." Adair has said that the "Pants on Fire" rating was treated as a lighthearted joke at first--see this rating of a claim by Democrat Joe Biden as an example--and that probably accounts for the unusual results from 2007.

In 2007, the lighthearted joke year, Democrats were 150 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2008, Republicans were 31 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2009, Republicans were 214 percent more likely to receive a "Pants on Fire" rating for a false statement (not a typo).

In 2010, Republicans were 175 percent more likely to receive a "Pants on Fire" rating for a false statement (again, not a typo).

We published our first version of this research in August 2011, based on PolitiFact's first four years of operation.

In 2011, Republicans were 57 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2012, Republicans were 125 percent more likely to receive a "Pants on Fire" rating for a false statement.

Early in 2013, PolitiFact announced Adair would leave the project that summer to take on his new job at Duke. Deputy editor Angie Drobnic Holan was named as Adair's replacement on Oct. 2, 2013.

In 2013, the transition year, Republicans were 24 percent more likely to receive a "Pants on Fire" rating for a false statement.

Had Republicans started to curb their appetite for telling outrageous falsehoods?

In 2014, Republicans were 95 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2015, Republicans were 2 percent (not a typo) more likely to receive a "Pants on Fire" rating for a false statement.

In 2016, Republicans were 17 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2017, Democrats were 13 percent more likely to receive a "Pants on Fire" rating for a false statement.

Had Republicans gotten better than Democrats at reining in their impulse to utter their false statements in a ridiculous form?

We suggest that our data through 2017 help confirm our hypothesis that the ratings tell us more about PolitiFact than they do about the politicians and organizations receiving the ratings.






Do the data give us trends in political lying, or separate journalistic trends for Adair and Holan?

We never made any attempt to keep our research secret from PolitiFact. From the first, we recognized that PolitiFact might encounter our work and change its practices to decrease or eliminate the appearance of bias from its application of the "Pants on Fire" rating. We did not worry about it, knowing that if PolitiFact corrected the problem, it would help confirm the problem existed, regardless of what fixed it.

Has PolitiFact moderated or fixed the problem? Let's look at more numbers.

The "Pants on Fire" bias

From 2007 through 2012, PolitiFact under Adair graded 29.2 percent of its false claims from the GOP "Pants on Fire." For Democrats the percentage was 16.1 percent.

From 2014 through 2017, PolitiFact under Holan graded 26 percent of its false claims from the GOP "Pants on Fire" and 21.9 percent for Democrats.

It follows that under Adair PolitiFact was 81.4 percent more likely to give a "Pants on Fire" rating to a false GOP statement than to one from a Democrat. That includes the anomalous 2007 data showing a strong "Pants on Fire" bias against Democrats.

Under Holan, PolitiFact was just 18.7 percent more likely to give a "Pants on Fire" rating to a false GOP statement than one from a Democrat.
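For anyone who wants to check the arithmetic, here's a minimal sketch (in Python, purely for illustration) of how the "more likely" figures fall out of the "Pants on Fire" shares stated above:

```python
def percent_more_likely(gop_share, dem_share):
    """Relative difference between the parties' shares of false claims rated 'Pants on Fire'."""
    return (gop_share / dem_share - 1) * 100

# Adair era, 2007-2012: 29.2 percent of GOP false claims vs. 16.1 percent for Democrats
print(round(percent_more_likely(29.2, 16.1), 1))  # 81.4

# Holan era, 2014-2017: 26 percent vs. 21.9 percent
print(round(percent_more_likely(26.0, 21.9), 1))  # 18.7
```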

Story selection bias

While tracking the percentage of false ratings given a "Pants on Fire" rating, we naturally tracked the sheer number of times PolitiFact issued false ratings (either "False" or "Pants on Fire"). That figure speaks to PolitiFact's story selection.

From 2007 through 2012, PolitiFact under Adair found an average of 55.3 false claims per year from Republicans and 25.8 false claims per year from Democrats. That includes 2007, when PolitiFact was only active for part of the year.

From 2014 through 2017, PolitiFact under Holan found an average of 81 false claims per year from Republicans and 16 false claims per year from Democrats.

Under Holan, the annual finding of false claims by Republicans increased by roughly 46 percent. At the same time, PolitiFact's annual finding of false claims by Democrats fell by 38 percent.

Update Jan. 1, 2018: GOP false claims reached 90 by year's end.


One might excuse the increase for the GOP by pointing to staff increases. But the same reasoning serves poorly to explain the decrease for the Democrats. Likewise, increased lying by Republicans does not automatically mean Democrats decreased their lying.

Did the Democrats as a party tend strongly toward greater truth-telling, with the notable blemish of a greater tendency to go "Pants on Fire" when relating a falsehood?

Conclusion

We suggest that changes in PolitiFact's practices more easily make sense of these data than do substantial changes in the truth-telling patterns of the two major U.S. political parties. When Adair stepped down as PolitiFact's editor, a different person started running the "star chamber" meetings that decide the "Truth-O-Meter" ratings and a different set of editors voted on the outcomes.

Changing the group of people who decide subjective ratings will obviously have a substantial potential effect on the ratings.

We suggest that these results support the hypothesis that subjectivity plays a large role in PolitiFact's rating process. That conclusion should not surprise anyone who has paid attention to the way PolitiFact describes its rating process.

Has Holan cured PolitiFact of liberal bias?

We recognized from the first that the "Pants on Fire" bias served as only one measure of PolitiFact's ideological bias, and one that PolitiFact might address. Under Holan, the "Pants on Fire" bias serves poorly to demonstrate a clear ideological bias at PolitiFact.

On the other hand, PolitiFact continues to churn out anecdotal examples of biased work, and the difficulty Holan's PolitiFact has in finding false statements from Democrats compared to Adair's PolitiFact suggests our data simply show something of a trade-off.

When we started evaluating PolitiFact's state operations, such as PolitiFact Georgia, we noticed that lopsided numbers of false statements were often accompanied by a higher percentage of "Pants on Fire" statements from the party receiving many fewer false ratings. We hypothesized a compensatory bias might produce that effect when the fact checkers, consciously or unconsciously, encourage the appearance of fairness.

PolitiFact, after all, hardly needs to grade false Republican statements more harshly to support the narrative that Republicans lie more when it is finding, on average, five times more false statements from Republicans than Democrats.


We doubt not that defenders of PolitiFact can dream up some manner of excusing PolitiFact based on the "fact" that Republicans lie more. But we deeply doubt that any such approach can find a basis in empirical evidence. Subjective rating systems do not count as empirical evidence of the rate of lying.


In addition to empirically justifying the increase in GOP falsehoods, defenders will need to explain the decrease in Democratic Party falsehoods implied in PolitiFact's ratings. Why, with a bigger staff, is PolitiFact having a more difficult time finding false statements from Democrats than it did when Adair was steering the ship?

Even if the Truth-O-Meter data were objective, the differing trends we see for PolitiFact under Adair and Holan would make it sensible to question the data's reliability.

Given PolitiFact's admissions that its story selection and ratings are substantially subjective, it makes sense for the objective researcher to first look to the most obvious explanation: PolitiFact bias. 

 

Notes on the research method

Our research on the "Pants on Fire" bias looks at partisan elected officials or officeholders as well as candidates and campaign officials (including family members participating in the campaign). We exclude PolitiFact ratings where a Republican attacked a Republican or a Democrat attacked a Democrat, reasoning that such cases may muddy the water in terms of ideological preference. The party-on-party exclusions occur rarely, however, and do not likely affect the overall picture much at all.

In the research, we use the term "false claims" to refer to claims PolitiFact rated either "False" or "Pants on Fire." We do not assume PolitiFact correctly judged the claims false.

Find the data spreadsheet here.
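For readers who would rather reproduce the tallies than take our word for it, here is a rough sketch of the sorting described above. The column names are hypothetical placeholders; the actual spreadsheet layout may differ:

```python
import pandas as pd

# Hypothetical column names; adjust to match the spreadsheet's actual layout.
df = pd.read_csv("politifact_ratings.csv")  # columns: party, rating, intra_party

# Keep only the claims rated "False" or "Pants on Fire"
false_claims = df[df["rating"].isin(["False", "Pants on Fire"])]

# Drop the (rare) party-on-party attacks described above
false_claims = false_claims[~false_claims["intra_party"]]

# Share of each party's false claims that drew the harsher "Pants on Fire" rating
pof_share = (false_claims.groupby("party")["rating"]
             .apply(lambda r: (r == "Pants on Fire").mean() * 100))
print(pof_share)
```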


Afters

We have completed alternative versions of our charts with the data for Donald Trump removed, and we'll publish those separately from this article at a later time. The number of false claims from Republicans went down from 2015 through 2017, though PolitiFact still issued far more false ratings to Republicans than to Democrats. The "Pants on Fire" percentages were almost identical except for 2016. With Trump removed from the data, the Republicans would have set an all-time record for either party for the lowest percentage of "Pants on Fire" claims.

These results remain consistent with our hypothesis that PolitiFact's "False" and "Pants on Fire" ratings reflect a high degree of subjectivity (with the former perhaps largely influenced by story selection bias).



Update Dec. 19, 2017: Added intended hyperlink to explanations of the research and the notable Biden "Pants on Fire."
Update Dec. 21, 2017: Corrected date of previous update (incorrectly said Dec. 12), and updated some numbers to reflect new PolitiFact ratings of Donald Trump through Dec. 21, 2017: "13 percent"=>"10 percent", "87.3 claims per year"=>"80.5 claims per year", "23.8"=>"26.1" and "8.7"=>"19.2." The original 87.3 and 23.8 figures were wrong for reasons apart from the new data. We will update the charts once the calendar year finishes out. Likewise, the 8.7 figure derived in part from the incorrect 23.8.

Update Jan. 1, 2018: Changed "10 percent" back to "13 percent" to reflect updated data for the whole year. "80.5 claims per year" updated to "81 claims per year." We also changed "26.1" to "26" and "8.7" to "18.7." The latter change shows that we neglected to make the "8.7" to "19.2" change we announced in the description of the Dec. 21, 2017 update, for which we apologize.

Saturday, December 16, 2017

Update on that Pulitzer Prize mark of excellence

How often have we seen people appeal to PolitiFact's 2009 Pulitzer Prize as proof of its standard of accuracy?

We've tried to explain to people that the Pulitzer judges aren't likely to fact check the fact checkers. The Pulitzer judges look for things like style, impact and relevance.

Thankfully, we just ran across an interview that helps make our point.

The interviewer, James Warren, says he served on a Pulitzer jury (confirmed), and states the rules prevented him from following his impulse to fact check the work he was judging:
[JW]
Does the rise of fact-checking play into a new era at all? I recall a few times as a judge wanting to independently verify stuff in entries but not being allowed to. I might have wanted to know if a claimed exclusive was really what an entry later claimed.
[DC]
I'm not sure it's the role of the jury to second-guess work that is being submitted. Now it might be like a parent who over-praises their child. But that's only a matter of enthusiasm, not dishonesty. I don't think there's much of a record at all of Pulitzers suffering from choosing work that hasn't lived up to what it's awarded.
Warren said he was not allowed to independently verify material from Pulitzer entries.

It's worth noting that the interviewee, new Pulitzer Prize chief Dana Canedy, appears to affirm that Pulitzer juries do not see fact-checking contest entries as any part of the job.

It makes no sense to regard the Pulitzer Prize as any type of guarantee of journalistic accuracy. The jurors assume that the submitted works adhere to basic journalistic principles of accuracy and fairness unless the works themselves obviously contradict that idea.

Trust PolitiFact in 2018 because of a Pulitzer Prize awarded in 2009? Bad idea.

And it would have been a bad idea to trust PolitiFact in 2010 based on the Pulitzer Prize in 2009.

Tuesday, December 12, 2017

PolitiFact's lying "Lie of the Year" award for 2017 (Updated)

On Dec. 12, 2017, PolitiFact announced its 2017 "Lie of the Year." PolitiFact supposedly gave its award to a particular statement from President Trump.

PolitiFact (bold emphasis added):
"This Russia thing with Trump and Russia is a made-up story. It's an excuse by the Democrats for having lost an election that they should've won," said President Donald Trump in an interview with NBC’s Lester Holt in May.
PolitiFact Bias correctly predicted the winner. But even we hardly imagined the Olympic-grade gymnastics the editors of PolitiFact would perform in justifying their selection.

We thought PolitiFact would cross its fingers and hope the Mueller investigation would implicate Trump in some type of illegal collusion with the Russians.

Instead, PolitiFact turned Trump's statement into a complete denial that Russia interfered with the election. Instead of "Trump and Russia," as Trump said, PolitiFact trimmed the issue down to just "Russia."

No, seriously. PolitiFact did that. Let's start with the headline of its "Lie of the Year" announcement:

2017 Lie of the Year: Russian election interference is a 'made-up story'

Did Trump say anything in the winning statement about Russian election interference being a "made-up" story? We're not seeing it, and PolitiFact does not explain the connection. Maybe in context?

We looked to PolitiFact's original rating of Trump's claim for clues. That story suggested Trump was claiming that Democrats made up the Trump-Russia narrative. PolitiFact said James Comey's report of a "credible allegation" (or "reasonable basis to believe"!) was enough to "rebut" (refute?) Trump's charge that the narrative was made up.

How did PolitiFact know that the "credible allegation" was not made up and not by a Democrat? We do not know. PolitiFact will have to answer that one. We can only marvel at the idea that a "reasonable basis to believe" unequivocally serves as a foundation for stating something as fact.

Do we think PolitiFact's narrative that Trump completely denied Russian election interference stands up to scrutiny? We do not (Reuters, Jan 6, 2017):
WASHINGTON (Reuters) - President-elect Donald Trump accepts the U.S. intelligence community’s conclusion that Russia engaged in cyber attacks during the U.S. presidential election and may take action in response, his incoming chief of staff said on Sunday.
In opposition to PolitiFact's reasoning, we think it much more reasonable to take Trump to mean that the narrative attempting to connect the Trump campaign to Russian meddling has no evidence to back it. If such evidence existed, it would have served to help justify the Robert Mueller investigation. Instead, Mueller was given the job of looking at a broad category of interactions ("collusion") for something that could justify criminal charges.

In fact, PolitiFact's description of what Trump said bears little resemblance to what he said.

PolitiFact (bait in red, switch in blue, highlights added):

Trump could acknowledge the interference happened while still standing by the legitimacy of his election and his presidency — but he declines to do so. Sometimes he’ll state firmly there was "no collusion" between his campaign and Russia, an implicit admission that Russia did act in some capacity. Then he reverts back to denying the interference even happened.
Declining to acknowledge the interference, supposing the Reuters story cited above counts for nothing, is not the same thing as denying the interference ever happened.

If PolitiFact had any clear statement from Trump denying Russia made any effort to interfere in the U.S. presidential election, PolitiFact would have been smart to include it (see the "Afters" section, below).

Lacking that evidence, we conclude that PolitiFact has exaggerated, one might even say "made up," the degree to which President Trump denies Russian election interference.




Afters

We say PolitiFact offered no unequivocal evidence Trump denied all Russian meddling in the U.S. election. But PolitiFact did offer evidence that it perhaps interpreted that way.

We think it fair to let PolitiFact make its case:
Facebook, Google and Twitter have investigated their own networks, and their executives have concluded — in some cases after initial foot-dragging — that Russia used the online platforms in attempts to influence the election.

After all this, one man keeps saying it didn’t even happen.

"This Russia thing with Trump and Russia is a made-up story. It's an excuse by the Democrats for having lost an election that they should've won," said President Donald Trump in an interview with NBC’s Lester Holt in May.

On Twitter in September, Trump said, "The Russia hoax continues, now it's ads on Facebook. What about the totally biased and dishonest Media coverage in favor of Crooked Hillary?"

And during an overseas trip to Asia in November, Trump spoke of meeting with Putin: "Every time he sees me, he says, ‘I didn't do that.’ And I really believe that when he tells me that, he means it." In the same interview, Trump referred to the officials who led the intelligence agencies during the election as "political hacks."

Trump continually asserts that Russia’s meddling in the 2016 election is fake news, a hoax or a made-up story, even though there is widespread, bipartisan evidence to the contrary.
 We've covered PolitiFact's trading of "Trump and Russia" for just "Russia."

What "Russia hoax" was continuing? The hoax of Russian interference or the hoax of Trump and Russia collaborating to steal the election from its rightful winner?

If Trump says he thinks Putin's denials are sincere, does that likewise mean that Trump thinks nobody in Russia did anything to interfere with the U.S. election?

Who fact checks like that, not counting liberal bloggers?



Update Dec. 14, 2017: Jeff Adds:

I concur with Bryan's points above but wanted to add my gripes about PolitiFact's latest agitprop.

1) What exactly is "bipartisan evidence"? Can evidence be partisan? Can a fact have a political motive? If the nonpartisans at PolitiFact think so, it would explain a lot.

2) No decent editor should have allowed this line:
Sometimes he’ll state firmly there was "no collusion" between his campaign and Russia, an implicit admission that Russia did act in some capacity.
Huh? On what planet is denying that Trump's campaign colluded with the Russians an implicit admission that the Russians interfered in the election? PolitiFact's argument is a non sequitur, if it even makes sense at all.

3) It seems to be an accepted truth on the left that Russian interference changed the outcome of the election, but is there any compelling evidence of that?
It seems unlikely — though not impossible — that Russia interference changed the outcome of the election. We at PolitiFact have seen no compelling evidence that it did so.
Talk about a buried lede!

The fact is that, so far, the only evidence of Russian "interference" has been a disorganized social media campaign. There's been no evidence of vote tampering, no voting booth intimidation, no voting machine hacking. [Disclosure: I am a frequent user of Twitter and Facebook but somehow overcame the onslaught of Russian brainwashing and did not vote for Trump.]

For PolitiFact to describe buying Facebook ads as "a threat to U.S. democracy" is Louise Mensch-grade delusion. Further, Holan's treatment of Trump's refusal to acknowledge the "threat to democracy" as a lie begs the question. She asserts as fact that Russian interference, to whatever extent it existed, is a threat to America. Perhaps she could prove the threat is real before calling it a lie to deny it.

The premise of PolitiFact's argument rests comfortably in the swamp of liberal media where the words influence, interference, and election action all mean the same thing. Let's turn PolitiFact's trick back against it:
Trump could acknowledge the interference happened while still standing by the legitimacy of his election...
If the legitimacy of the election is a fact, then it's implied the Russians did not interfere in the election, since (using PolitiLogic throughout) if the Russians did interfere in the election, it would not be a legitimate election.

Perhaps PolitiFact chose the Russian "interference" story for their Lie of the Year because it hit so close to home. After all, misleading large swaths of impressionable users by exploiting social media to spread a political agenda with poorly written posts that don't hold up to scrutiny is PolitiFact's bread and butter.

It's hard for me to imagine PolitiFact editor Angie Holan ever persuading someone beyond her bubble that she is a convincing, coherent, and unbiased professional, but maybe that's just the vodka talking.

See you next year, comrades!

Thursday, December 7, 2017

Another partisan rating from bipartisan PolitiFact

"We call out both sides."

That is the assurance that PolitiFact gives its readers to communicate to them that it rates statements impartially.

We've pointed out before, and we will doubtless repeat it in the future, that rating both sides serves as no guarantee of impartiality if the grades skew left whether rating a Republican or a Democrat.

On December 1, 2017, PolitiFact New York looked at Albany Mayor Kathy M. Sheehan's claim that simply living in the United States without documentation is not a crime. PolitiFact rated the statement "Mostly True."


PolitiFact explained that while living illegally in the United States carries civil penalties, it does not count as a criminal act. So, "Mostly True."

Something about this case reminded us of one from earlier in 2017.

On May 31, 2017, PolitiFact's PunditFact looked at Fox News host Gregg Jarrett's claim that collusion is not a crime. PolitiFact rated the statement "False."


Upon examination, these cases prove very similar, except for the ratings.

Sheehan defended Albany's sanctuary designation by suggesting that law enforcement need not look at immigration status because illegal presence in the United States is not a crime.

And though PolitiFact apparently didn't notice, Jarrett made the point that Special Counsel Mueller was put in charge of investigating non-criminal activity (collusion). Special counsels are typically appointed to investigate crimes, not to investigate whether a crime was committed.

On the one hand, Albany police might ask a driver for proof of immigration status. The lack of documentation might lead to the discovery of criminal acts such as entering the country illegally or falsifying government documents.

On the other hand, the Mueller investigation might investigate the relationship (collusion) between the Trump campaign and Russian operatives and find a conspiracy to commit a crime. Conspiring to commit a crime counts as a criminal act.

Sheehan and Jarrett were making essentially the same point, though collusion by itself doesn't even carry a civil penalty like undocumented immigrant status does.

So there's PolitiFact calling out both sides. Sheehan and Jarrett make almost the same point. Sheehan gets a "Mostly True" rating. Jarrett gets a "False."

That's the kind of non-partisanship you get when liberal bloggers do fact-checking.



Afters

Just to hammer home the point that Jarrett was right, we will review the damning testimony of the  three impartial experts who helped PunditFact reach the conclusion that Jarrett was wrong.
Nathaniel Persily at Stanford University Law School said one relevant statute is the Bipartisan Campaign Reform Act of 2002.

"A foreign national spending money to influence a federal election can be a crime," Persily said. "And if a U.S. citizen coordinates, conspires or assists in that spending, then it could be a crime."
The conspiracy to commit the crime, not the mere collusion, counts as the crime.

Next:
Another election law specialist, John Coates at Harvard University Law School, said if Russians aimed to shape the outcome of the presidential election, that would meet the definition of an expenditure.

"The related funds could also be viewed as an illegal contribution to any candidate who coordinates (colludes) with the foreign speaker," Coates said.
Conspiring to collect illegal contributions, not mere collusion, would count as the crime. Coates also offered the example of conspiring to commit fraud.
Josh Douglas at the University of Kentucky Law School offered two other possible relevant statutes.

"Collusion in a federal election with a foreign entity could potentially fall under other crimes, such as against public corruption," Douglas said. "There's also a general anti-coercion federal election law."
The corruption, not the mere collusion, would count as the crime.

How PolitiFact missed Jarrett's point after linking the article he wrote explaining what he meant is far beyond us.

Friday, December 1, 2017

Not a Lot of Reader Confusion VII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. PolitiFact Editor Angie Drobnic Holan says she doesn't notice much of that sort of thing.

We're here to help.

This comes from the lead edge of December 2017 and PolitiFact's own Facebook page:


Somebody introduced a subjective PolitiFact chart in answer to a call for a scientific study showing the unreliability of Fox News. So far as we can tell, the citation was meant seriously.

We predict that no number of examples short of infinity will convince Holan that we are right and she is wrong. At least publicly. Privately, maybe.

Wednesday, November 29, 2017

Handicapping the PolitiFact "Lie of the Year" for 2017 (Updated)

PolitiFact's "Lie of the Year" is a farce, of course, as it places the objective and non-partisan editors of PolitiFact in the position of making an obviously subjective decision about which false (or false-ish) statement was the "most significant."

In other words, they put on their pundit hats.

But we love the exercise because it gives us the opportunity to predict which claim PolitiFact will choose, basing our predictions on PolitiFact's liberalism and its self-interest.

We've got a pretty decent record of predicting the outcome.

This year, all of the nominees were rated "Pants on Fire" during the year. We note that because exceptions often occur. For example, President Obama's declaration that people could keep their insurance plans under the Affordable Care Act if they liked those plans wasn't rated at all during the year it received the award. Moreover, it was never rated lower than "Half True" by the nonpartisan liberal bloggers at PolitiFact. That (complicated and deceptive) pick was a case of PolitiFact covering its arse in response to a news cycle that demanded the pick.

This year's ballot resembles last year's. Voters just get to see the claim and the rating, though voters may click hotlinks to view the fact checks if desired.

PolitiFact puts on its neutral face by listing the claims in chronological order.


"That was the largest audience to witness an inauguration, period."
— Sean Spicer on Jan. 21, 2017, in a press conference



The size of the inauguration crowd should never count as an important political story representing the entire year. This nominee was picked to lose.


Says Barack Obama "didn’t do executive orders in the beginning."

— Whoopi Goldberg on Jan. 25, 2017, in a segment on ABC's “The View”


No claim coming from a host of "The View" should ever count as an important political story representing the entire year. This nominee was picked to lose.


Says Rex "Tillerson won't divest from Exxon."

— Charles Schumer on Jan. 27, 2017, in a tweet


Who's Rex Tillerson? Just kidding. This pick shows how PolitiFact had to scrape the bottom of the barrel for anything significant coming from a Democrat. This nominee is another placeholder made necessary by the hard time PolitiFact has giving Democrats a "Pants on Fire" rating. By our count, PolitiFact has only issued three "Pants on Fire" ratings to Democrats this year. This claim has no shot, as it was politically unimportant.


"I have not called for impeachment" of President Donald Trump.

— Maxine Waters on April 18, 2017, in an interview on MSNBC


This one's another place-holding, politically unimportant claim that has no shot of winning. Do we detect a pattern?


"Nobody dies because they don’t have access to health care."

— Raul Labrador on May 5, 2017, in a town hall event


This one I'll make my dark horse pick. Labrador is not particularly well-known, and the quotation is taken out of context. But if PolitiFact ignores those factors and the claim gets an unexpected boost from the reader's poll as representative of the health care debate, this one has a greater than zero shot of winning.


"This Russia thing with Trump and Russia is a made-up story. It's an excuse by the Democrats for having lost an election that they should've won."  

— Donald Trump on May 11, 2017, in an interview with NBC News


That's the overwhelming favorite. It fits the narrative PolitiFact loves (Trump the Liar). It fits the narrative PolitiFact's predominantly liberal audience loves (Russia, Russia, Russia!). Is there any solid evidence that Russia swayed the election results? No. But that shouldn't matter. We're talking narratives and clicks, things to which PolitiFact is addicted. PolitiFact will hope the Mueller investigation will eventually provide enough backing to keep it from getting egg on its face.


"Every single year that there's an increase (in temperature) it's within the margin of error -- meaning it isn't increasing."

— Greg Gutfeld on June 2, 2017, in Fox News’ “The Five” show


Global warming--er, climate change--remains near and dear to liberal bloggers and liberals, but ... Greg Gutfeld? This one could have had a chance coming from a major figure in the Trump administration. Coming from a moderately popular television personality like Gutfeld, it has no chance.



White nationalist protesters in Charlottesville "had a permit. The other group didn’t have a permit."

— Donald Trump on Aug. 15, 2017, in a question-and-answer session with reporters


That's my second pick. Again we've got the pull of the Trump/Liar narrative. And we've got the "Trump's a Nazi" tie-in. Was the statement politically significant? Only in terms of stimulating negative narratives about Trump. And that could help this one pull out the win.



"The United States ended slavery around the world, and maybe we should get some credit for that, too."

— Tucker Carlson on Aug. 15, 2017 in comments on “Tucker Carlson Tonight”


This one counts as another politically unimportant statement. This pick has no chance unless driven by the name "Tucker Carlson" and the network that airs his show.



"We’ve got dozens of counties around America that have zero insurers left."

— Paul Ryan on Aug. 21, 2017 in a CNN town hall broadcast



The claim by Ryan comes in as my third pick.

It fits a popular liberal narrative--protecting the Affordable Care Act. Ryan has good name recognition and little popularity among liberals. PolitiFact's valiant exposure of Ryan's falsehood may have saved Obamacare from repeal! Or so I imagine PolitiFact may reason.



So there it is. The 2017 award almost certainly goes to Trump, and almost certainly for his claim that the story of his ties to Russia affecting the election amounts to fake news. It's worth noting that fact checkers like those at PolitiFact resent Trump's co-opting of the term "fake news." That should give this claim another advantage in claiming the award.

It does look like PolitiFact stacked the deck of nominees, most notably by only nominating claims that received "Pants on Fire" ratings. That's a first. Claims receiving that rating tend to be more trivial and thus politically unimportant. That decision helped clear the field for Trump.

If PolitiFact changes nothing about its biased approach to fact-checking and continues to draw its "Lie of the Year" finalists only from that year's list of claims it rated "Pants on Fire," statements from the Republican Party will surely dominate the awards in the years ahead. "Fake news" stories may start appearing on the list of nominations, however. "Fake news" stories pick up most of PolitiFact's "Pants on Fire" ratings these days.

Update Nov. 29, 2017:

Jeff Adds

The Trump/Russia claim seems like the safe bet here, and it's hard to argue against Bryan's case. If we believed PolitiFact actually adhered to its Lie of the Year criteria, I think it's the only one that meets those standards (namely, a claim that is politically significant). It's also got the click-grabbing factor that drives people to PolitiFact's recently malware-infested website, and clicks are what actually motivate PolitiFact more than any noble search for truth.

But PolitiFact is mildly self-aware, and sometimes they'll tweak things as a matter of image control. This tic of theirs led me to correctly predict that 2011's Lie of the Year pick would go against the Left (though I was wrong about which specific claim would win). I think PolitiFact wants to avoid giving Trump the award because it already catches flak for obsessively targeting him.

I'm going to #Resist the urge to go with the obvious pick and I predict that Sean Spicer wins for his crowd size claim.

This allows PolitiFact to avoid being mocked for picking on Trump himself while allowing it to pick on Trump's administration. The pick will be loved by PF's liberal fan base and the media (I repeat myself.) The headlines crowing "Trump admin earns Lie of the Year!" will serve as sufficient click-bait. I expect PolitiFact can spin the pick into the first lie of the Trump administration that set the tone for all the easily debunked and ridiculous falsehoods that followed.

For my Dark Horse I'm going to contradict myself: If PolitiFact repeats their recent tradition of making up a winner that wasn't actually in their list of finalists, I say they go rogue and give it to Trump for all of his falsehoods, and claim they couldn't pick just one. This has all the benefits of clickbait and will upset no one that matters to PolitiFact.

Whoever the winner is, it's clear, as has been the case every year they've done this, that the field of picks is an intentionally lopsided mixed bag of bland throwaways and a couple of obvious picks.

Just like PolitiFact's ratings, the winner is already determined before the contest has begun.





Edit: Added "recently" to first graph of Jeff Adds -Jeff 1948PST 11/29/17


Clarification Nov. 29, 2017: Changed "sometimes exceptions often occur" to "exceptions often occur"

Friday, November 10, 2017

'Not a lot of reader confusion' VI

PolitiFact editor Angie Drobnic Holan has claimed she does not notice much reader confusion regarding the interpretation of PolitiFact's "report card" charts and graphs.

This series of posts is designed to call shenanigans on that frankly unbelievable claim.

Rem Rieder, a journalist of some repute, showed himself a member of PolitiFact's confused readership with a Nov. 10, 2017 article published at TheStreet.com.
While most politicians are wrong some of the time, the fact-checking website PolitiFact has found that that [sic] Trump's assertions are inaccurate much more frequently than those of other pols.
When we say Rieder showed himself a member of PolitiFact's confused readership, that means we're giving Rieder the benefit of the doubt by assuming he's not simply lying to his readers.

As we have stressed repeatedly here at PolitiFact Bias, PolitiFact's collected "Truth-O-Meter" ratings cannot be assumed to reliably reflect the truth-telling patterns of politicians, pundits or networks. PolitiFact uses non-random methods of choosing stories (selection bias) and uses an admittedly subjective rating system (personal bias).

PolitiFact then reinforces the sovereignty of the left-leaning point of view--most journalists lean left of the American public--by deciding its ratings by a majority vote of its "star chamber" board of editors.

We have called on PolitiFact to attach disclaimers to each of its graphs, charts or stories related to its graphs and charts to keep such material from misleading unfortunate readers like Rieder.

So far, our roughly five years of lobbying have fallen on deaf ears.

Monday, November 6, 2017

PolitiFact gives the 8 in 10 lie a "Half True."

We can trust PolitiFact to lean left.

Sometimes we bait PolitiFact into giving us examples of its left-leaning tendencies. On November 1, 2017, we noticed a false tweet from former President Barack Obama. So we drew PolitiFact's attention to it via the #PolitiFactThis hashtag.



We didn't need to have PolitiFact look into it to know that what Obama said was false. He presented a circular argument, in effect, using the statistics for people who had chosen an ACA exchange plan to mislead the wider public about their chances of receiving subsidized and inexpensive health insurance.


PolitiFact identified the deceit in its fact check, but used biased supposition to soften it (bold emphasis added):
"It only takes a few minutes and the vast majority of people qualify for financial assistance," Obama says. "Eight in 10 people this year can find plans for $75 a month or less."

Can 8 in 10 people get health coverage for $75 a month or less? It depends on who those 10 people are.

The statistic only refers to people currently enrolled in HealthCare.gov.
The video ad appeals to people who are uninsured or who might save money by shopping for health insurance on the government exchange. PolitiFact's wording fudges the truth. It might have accurately said "The statistic is correct for people currently enrolled in HealthCare.gov but not for the population targeted by the ad."

In the ad, the statistic refers to the ad's target population, not merely to those currently enrolled in HealthCare.gov.

And PolitiFact makes thin and misleading excuses for Obama's deception:
(I)n the absence of statistics on HealthCare.gov visitors, the 8-in-10 figure is the only data point available to those wondering about their eligibility for low-cost plans within the marketplace. What’s more, the website also helps enroll people who might not have otherwise known they were eligible for other government programs.
The nonpartisan fact-checker implies that the lack of data helps excuse using data in a misleading way. We reject that type of excuse-making. If Obama does not provide his audience the context allowing it to understand the data point without being misled, then he deserves full blame for the resulting deception.

PolitiFact might as well be saying "Yes, he misled people, but for a noble purpose!"

PolitiFact, in fact, provided other data points in its preceding paragraph that helped contextualize Obama's misleading data point.

We think PolitiFact's excuse-making influences the reasoning it uses when deciding its subjective "Truth-O-Meter" ratings.
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.
FALSE – The statement is not accurate.
In objective terms, what keeps Obama's statement from deserving a "Mostly False" or "False" rating?
His statement was literally false when taken in context, and his underlying message was likewise false.

About 10 to 12 million are enrolled in HealthCare.Gov ("Obamacare") plans. About 80 percent of those receive the subsidies Obama lauds. About 6 million persons buying insurance outside the exchange fail to qualify for subsidies, according to PolitiFact. Millions among the uninsured likewise fail to qualify for subsidies.

Surely a fact-checker can develop a data point out of numbers like those.
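As a back-of-the-envelope illustration of the kind of data point we mean (midpoint figures assumed, so treat the result as rough):

```python
enrolled = 11_000_000          # midpoint of the 10 to 12 million HealthCare.gov enrollees
subsidized = 0.80 * enrolled   # the roughly 8 in 10 enrollees who qualify for subsidies
off_exchange = 6_000_000       # off-exchange buyers who fail to qualify, per PolitiFact

# Subsidized share of the individual market (on- and off-exchange), before even
# counting the millions of uninsured who also fail to qualify
share = subsidized / (enrolled + off_exchange)
print(f"{share:.0%}")  # roughly 52 percent -- nowhere near 8 in 10
```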

But this is what happens when non-partisan fact checkers lean left.


Correction Nov. 6, 2017: Removed "About 6 million uninsured do not qualify for Medicaid or subsidies" as it was superseded by reporting later in the post.

Monday, October 23, 2017

PolitiFact's Evangelism & Revival Tour III

PolitiFact's Katie Sanders PolitiSplains why conservatives should trust PolitiFact

PolitiFact reached out to red state residents in three states--Alabama, Oklahoma and West Virginia--thanks to a grant from the Knight Foundation. We're calling it PolitiFact's Evangelism and Revival Tour because of its resemblance to religious "love-bombing."

In our post from this series published on Oct. 22, 2017, we wondered what specific reasons PolitiFact was offering conservatives to convince them they should trust PolitiFact.

We're supposing the red state unwashed are hearing little more than the spiel PolitiFact's Katie Sanders gave in West Virginia.

MetroNews and Alex Thomas reported:
Organization deputy editor Katie Sanders said following the 2016 presidential campaign, they noticed a trend among conservatives regarding a distrust of news organizations.

“We are concerned about that because we are independent, we’re nonpartisan, we call out both sides, yet there’s still this skepticism,” she said on MetroNews’ “Talkline.”
PolitiFact is neutral and trustworthy because it is "independent"?

We like the response of the University of Miami's Joe Uscinski to that one:
We believe by "independent" PolitiFact means it does not allow outside entities to guide its process. The same is true of PolitiFact Bias. Does that make us unbiased?

PolitiFact is neutral and trustworthy because it is "nonpartisan"? 

Think tanks nearly all call themselves "nonpartisan." Yet news reports routinely describe this or that think tank as "right-leaning" or "left-leaning." "Nonpartisan" does not automatically equate with "unbiased," let alone neutral and trustworthy.

We might as well mention that PolitiFact Bias is "nonpartisan" by the same definition think-tanks (and likely PolitiFact) use (everything but "unbiased"). Does that make us unbiased?

PolitiFact is neutral and trustworthy because it calls out both sides?

Bush made mistakes. Obama made mistakes. Look, Ma, I'm neutral!

Calling out both sides does nothing to guarantee neutrality or trustworthiness. It's perfectly possible to call out one side with kid gloves and the other with a hammer.

At PolitiFact Bias, we think PolitiFact is often guilty of applying unequal standards, and we created this site in part to highlight such cases. We point out that PolitiFact sometimes unfairly harms Democrats as well as Republicans. Does that make us unbiased?

The argument for trust that Sanders used counts as flim-flam.

If PolitiFact wants trust from conservatives and moderates it will need a better sales pitch. That is, a sales pitch with specifics that actually address the issues that lead to the lack of trust.

Get to it, PolitiFact.

Sunday, October 22, 2017

The PolitiFact Evangelism & Revival Tour II

Thanks to a generous and wasteful grant from the Knight Foundation, PolitiFact is reaching out to red state voters!

These outreaches suspiciously correlate to new PolitiFact state franchises, in turn making it look like the Knight Foundation wants to help PolitiFact advertise itself.

Daniel Funke of the Poynter Institute posted a story about the Oklahoma leg of PolitiFact's dog & pony show. We reviewed that in our first part in this series. This installment concerns a Washington Post story about the third and final stage of the evangelism and revival tour, ending up in West Virginia.

What's the Purpose of This Tour, Again?


The Post article leads with a section that more-or-less paints PolitiFact's outreach as a failure.

PolitiFact planned to go out and tell people PolitiFact is nonpartisan and fair and let them see, at least to some degree, how PolitiFact works. That was supposed to lead to greater trust. But when given the opportunity to make that case, PolitiFact editor Amy Hollyfield comes across like Eeyore.
“I have discussions with people about the news all the time on Facebook, and I show them what I consider to be credible sources of information,” a man named Paul Epstein says from a middle row. “And they say, ‘Oh, that’s all biased.’ So how can you, or how can we, convince people to trust any mainstream media?”

Amy Hollyfield of PolitiFact, the Pulitzer Prize-winning fact-checking organization, considers the question. She hesitates a beat before telling Epstein and about 65 others in the audience that maybe you can’t. Not all the time.
Well, that's encouraging! What else does Hollyfield have?
“We have a lot of things on our website” that attest to PolitiFact’s impartiality and credibility, Holly­field says. “But I don’t think that seeps in when you’re having that kind of conversation. That’s why we’re trying to tell our story.”
Specifics? Aren't specifics always foremost in the minds of journalists? Okay, maybe Hollyfield gave the specifics. Maybe the Post's Paul Farhi left them out. But it seems to us beyond question that if the idea of the evangelism tour is to build trust of PolitiFact in red states then PolitiFact should focus on those specifics, whatever they are.
The fact-checkers keep steering the conversation back to Politi­Fact and its 10-year track record of rating political speech, including how it assigns its most damning rating, “Pants on Fire.”
What? It would be great to have some specifics on that. Pretty much the best description we have of the difference between PolitiFact's "False" and "Pants on Fire" ratings is PolitiFact Editor Angie Drobnic Holan's immortal "Sometimes we decide one way and sometimes decide the other." We'd like to know even more about this occult-yet-objective (?) process. But there's nothing new in the Post article. So not today.


Sharockman has the Evidence of Neutral Nonpartisanship (not)!


Just a few days ago we published a chart showing PolitiFact has published more fact checks of President Trump between his inauguration and Oct. 18 than it did of President Obama over the same period in 2009 and 2013 combined. We did it to show the utter ridiculousness of Executive Director Aaron Sharockman's argument that fact-checking Obama frequently serves as evidence of PolitiFact's neutrality.

Lo and behold, the Post captured Sharockman making that same argument again. Christmas in October (bold emphasis added):
(Sharockman) bristles a bit at the conservative critique ["The basic critique is that fact-checkers cherry-pick statements and facts to create a false impression — usually that conservative candidates are less truthful than the liberal kind"--bww]. “People say, ‘Why didn’t you fact-check Hillary Clinton’s claim about coming under fire [as first lady] in Bosnia?’ Well, we did. The person we fact-checked more than anyone else is Barack Obama. . . . The person we fact-check the most is the president. We’re going to hold the president accountable.”
As we pointed out in our earlier article accompanying the graph, yes of course national fact checkers check the president the most. That will be true regardless of party and therefore serves as no evidence whatsoever of impartiality, particularly if a Republican president may have drawn greater scrutiny than Obama. Sharockman's argument is flim-flam.

This article about PolitiFact trying to convince conservatives it is neutral and non-partisan gives conservatives no evidence of PolitiFact's neutrality or non-partisanship. These people could use some talking points that have greater strength than wet toilet paper.


Hey, the article mentions "PolitiFact Bias"!

Plus: How PolitiFact could build trust across the board


At the risk of humeral fracture from patting ourselves on the back, the best section of the Post article is the one that mentions PolitiFact Bias. That's not because it mentions PolitiFact Bias, though that's part of it (bold emphasis added):
(Sharockman)’s fully aware of the free-floating cynicism about fact-checking, a form that has enjoyed a boomlet in the past few years with such outfits as PolitiFact, FactCheck.org, Snopes and The Washington Post’s Fact Checker on the scene. In one poll last year, 88 percent of people who supported Trump during the 2016 campaign said they didn’t trust media fact-checking. (Overall, just 29 percent of likely voters in the survey said they did.) PolitiFact itself has come in for particularly intense criticism; a blog called PolitiFact Bias is devoted to “exposing [its] bias, mistakes and flimflammery.”

The basic critique is that fact-checkers cherry-pick statements and facts to create a false impression — usually that conservative candidates are less truthful than the liberal kind.
The fact is that the polls show that moderates and independents are more skeptical about mainstream media fact-checking than are Democrats. The corollary? The political group that most trusts political fact-checking is Democrats.

Shouldn't we expect moderates more than Democrats or Republicans to favor PolitiFact if it treats Democrats and Republicans with equal skepticism? Indeed, for years PolitiFact tried to argue for its neutrality by saying it gets attacked from both sides. Left unsaid was the fact that most of the attacking came from one side.

PolitiFact needs to hear the message in the numbers. Likely voters don't trust fact checkers (71 percent!). PolitiFact can't do meet-and-greets with 71 percent of likely voters.  To earn trust, PolitiFact needs to severely ramp up its transparency and address the criticism it receives. If the criticism is valid, make changes. If the criticism is invalid, then crush entities like PolitiFact Bias by publicly discrediting their arguments with better arguments.

Establish trust by modeling transparently trustworthy behavior, in other words.

Or PolitiFact can just keep doing what it's doing and see if that 30 percent or so that trusts it just happens to grow.

Good luck with that.


Afters

Is this true?
The fact of the matter is that both sides are becoming less moored to the truth, Sharockman says. The number of untrustworthy statements by Republicans and Democrats alike has grown over the past three presidential cycles, he noted.
Our numbers show that the number of false ("False" plus "Pants on Fire") statements from Democrats, as rated by PolitiFact, dropped from PolitiFact's early years, though with a minor spike during the 2016 election cycle.


What data would support Sharockman's claim, we wonder?

Friday, October 20, 2017

PolitiFact and the principle of inconsistency

In October, six days apart, PolitiFact did fact checks on two parallel claims, each asserting the existence of a particular law. One, by U.S. Senate candidate Roy Moore, was found "False." The other, by a Saturday Night Live cast member, was found "Mostly True."



Moore asserted that an act of Congress made it "against the law" to fail to stand for the playing of the national anthem. PolitiFact confirmed the existence of the law Moore referenced, but noted that it merely offered guidance on proper etiquette. It did not provide any punishment for improper etiquette.

SNL's Colin Jost said a Texas law made it illegal to own more than six dildos. PolitiFact confirmed a Texas law made owning more than six "obscene devices" illegal. PolitiFact found that a federal court had ruled that law unconstitutional in 2008.

Both laws exist. The one Moore cited carries no teeth because it describes proper etiquette, not a legal requirement backed by government police power. The one Jost cited lacks teeth because a federal court voided it.

How did PolitiFact and PolitiFact Texas justify their respective rulings?

PolitiFact (bold emphasis added):
Moore said NFL players taking a knee during the national anthem is "against the law."

Moore's basis is that a law on the books describes patriotic etiquette during the national anthem. But his statement gives the false impression the law is binding, when in fact it’s merely guidance that carries no penalty. Additionally, legal experts told us the First Amendment protects the right to kneel during the national anthem.

We rate this False.
PolitiFact Texas (bold emphasis added):
Jost said: "There is a real law in Texas that says it’s illegal to own more than six dildos."

Such a cap on "obscene devices" has been state law since the 1970s though it’s worth clarifying that the law mostly hasn’t been enforced since federal appeals judges found it unconstitutional in 2008.

We rate the claim Mostly True.
From where we're sitting, the thing PolitiFact Texas found "worth clarifying" in its "Mostly True" rating of Jost closely resembles, in principle, one of the reasons PolitiFact gave for rating Moore's statement "False" (neither law is binding, albeit for different reasons). As for the other rationale backing the "False" rating, Jost equaled Moore in giving the impression that the Texas law is binding today. Yet PolitiFact Texas did not penalize Jost for offering a misleading impression.

We call these rulings inconsistent.

Inconsistency is a bad look for fact checkers.


Update Oct. 23, 2017: We appreciate Tim Graham highlighting this post at Newsbusters.

Wednesday, October 18, 2017

Fact-checking the president

When accused of focusing its fact checks on conservatives more than liberals, PolitiFact has been known to defend itself by pointing out that it has fact checked Barack Obama more than any other political figure.

We have properly ridiculed that defense because it is natural for a national political fact checker to place special importance on the statements of a president. We should only be surprised if the fact checker fails to fact check the president most frequently. And now that President Donald Trump has succeeded President Obama in office, we can do some comparisons that help illustrate the point.

Please note that this comparison has an apples-to-oranges aspect to it. PolitiFact started out with the aim of fact-checking the election campaign, so we should allow for PolitiFact getting a slow start on President Obama's first term.

We based the comparisons on the number of fact checks PolitiFact performed on each president between inauguration (two inaugurations, in Obama's case) and Oct. 18. In fact, PolitiFact fact checked Obama more frequently in 2009 than it did when he launched his second term in 2013.



As the graph shows, through Oct. 18 PolitiFact has fact checked Trump more in 2017 than it did Obama in 2009 and 2013 combined.

Trump has an excellent shot at supplanting Obama as the figure most fact checked by PolitiFact within just four years of taking office.

And perhaps we'll never again hear PolitiFact's balance defended on the basis of its fact-checking Obama more often than other political figures.

Surprise! Another way PolitiFact rates claims inconsistently

When we saw PolitiFact give a "Mostly False" rating to the claim that state spending in Oklahoma had reached an all-time high, it piqued our curiosity.

PolitiFact issued the "Mostly False" rating because the Oklahoma Council of Public Affairs used nominal dollars instead of inflation-adjusted dollars in making its claim:
The Oklahoma Council of Public Affairs said that spending this year is on track to be the highest ever. While the raw numbers show that, the statement ignores the impact of inflation, a standard practice when comparing dollars over time. Factoring in inflation shows that real spending was higher in 2009 to 2011.

When population and economic growth are added in, spending has been higher over most of the past decade.

The statement contains an element of truth but it ignores critical facts that would give a different impression. We rate this claim Mostly False.
Considering the claim was arguably "Half True" based on nominal dollars, we wondered if PolitiFact's ruling was consistent with similar cases involving the claims of Democrats.

Given our past experience with PolitiFact, we were not surprised at all to find PolitiFact giving a "Half True" to a Democratic National Committee claim that U.S. security funding for Israel had hit an all-time high. There was one main difference between the DNC's claim and the one from the Oklahoma Council of Public Affairs: The DNC's claim was false whether measured in nominal or inflation-adjusted dollars (bold emphasis added):
The ad says "U.S. security funding for Israel is at an all-time high." Actually, it was higher in one or two years, depending whether you use inflation-adjusted dollars. In addition, the ad oversells the credit Obama can take for this year’s number. The amount was outlined by a memorandum signed in 2007 under President George W. Bush. On balance, we rate the claim Half True.
Awesome!

That's not just inconsistent, it's PolitiFinconsistent!


Note

The fact check that drew our attention was technically from PolitiFact Oklahoma, but was perpetrated by Jon Greenberg and Angie Drobnic Holan, both veterans of PolitiFact National.

Tuesday, October 17, 2017

Can you trust what "Media Bias/Fact Check" says about PolitiFact? (Updated x2)

(See update at the end)

Somehow we got to the point where it makes sense to talk about Media Bias/Fact Check.

Media Bias/Fact Check bills itself as "The most comprehensive media bias resource." It's run by Dave Van Zandt, making it fair to say it's run by "some guy" ("Dave studied Communications in college" is his main claim to expertise).

We have nothing against "some guy" possessing expertise despite a lack of qualifications, of course. One doesn't need a degree or awards (or an audience) to be right about stuff. But are Van Zandt and his Media Bias/Fact Check right about PolitiFact?

Media Bias/Fact Check rates PolitiFact as a "Least-biased" source of information. How does MB/FC reach that conclusion? The website has a "Methodology" page describing its methods:
The method for (rating bias) is determined by ranking bias in four different categories. In each category the source is rated on a 0-10 scale, with 0 meaning without bias and 10 being the maximum bias(worst). These four numbers are then added up and divided by 4. This 0-10 number is then placed on the line according to their Left or Right bias.
This system almost makes PolitiFact's "Truth-O-Meter" look objective by comparison. An 11-point scale? Obtaining objectivity with an 11-point scale would require a very finely grained system of objective bias measures, something that probably nobody on the planet has even dreamt of achieving.

It comes as no surprise that Van Zandt lacks those objective measures:

The categories are as follows (bold emphasis added):
  1. Biased Wording/Headlines- Does the source use loaded words to convey emotion to sway the reader. Do headlines match the story.
  2. Factual/Sourcing- Does the source report factually and back up claims with well sourced evidence.
  3. Story Choices: Does the source report news from both sides or do they only publish one side.
  4. Political Affiliation: How strongly does the source endorse a particular political ideology? In other words how extreme are their views. (This can be rather subjective)
Van Zandt likely regards only the fourth category as subjective. In truth, all four are subjective unless Van Zandt has kept secret some additional criteria he uses to judge bias. Think about it. Take the "biased wording" category, for example. Rate the headline bias for "PolitiFact Bias" on a scale of 0-10. Do it. What objective criteria guided the decision?

There is nothing to go on except for one's own subjective notion of where any observed bias falls on the 0-10 scale.

If the scale were worth something, researchers could put the rating system in the hands of any reasonable person and obtain comparable results (in research terms, high inter-rater reliability). Systems with robust objective markers attached to each level of the scale can achieve that. Those lacking such markers will not.
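To make that concrete, here is a minimal, hypothetical sketch (in Python, with invented scores) of how one might check whether two independent raters produce comparable numbers on a 0-10 scale. A real study would use an established agreement statistic such as an intraclass correlation or weighted kappa; the simple measures below are only for illustration.

# Hypothetical illustration: two independent raters score the same ten sources
# on a 0-10 bias scale. If the scale carried objective markers, their scores
# should track each other closely; without markers, agreement tends to be poor.
rater_a = [2, 7, 5, 9, 1, 4, 6, 8, 3, 5]  # invented scores, not real data
rater_b = [6, 3, 8, 2, 7, 9, 1, 4, 8, 2]  # invented scores, not real data

n = len(rater_a)

# Mean absolute disagreement on the 0-10 scale (0 = perfect agreement).
mean_abs_diff = sum(abs(a - b) for a, b in zip(rater_a, rater_b)) / n

# Simple Pearson correlation between the two sets of ratings.
mean_a = sum(rater_a) / n
mean_b = sum(rater_b) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(rater_a, rater_b)) / n
std_a = (sum((a - mean_a) ** 2 for a in rater_a) / n) ** 0.5
std_b = (sum((b - mean_b) ** 2 for b in rater_b) / n) ** 0.5
correlation = cov / (std_a * std_b)

print(f"mean absolute disagreement: {mean_abs_diff:.1f} points out of 10")
print(f"correlation between raters: {correlation:.2f}")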

Drawing on our experience with PolitiFact, we applied Van Zandt's system to PolitiFact ourselves. Please remember that our experience does not render Van Zandt's system anything other than subjective.

Biased Wording/Headlines: 4
Factual/Sourcing: 3
Story Choices: 4
Political Affiliation: 3

Total = 14
The formula calls for division by 4.
14 / 4 = 3.5
3.5 = Left-Center Bias
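For what it's worth, the entire exercise reduces to a few lines of arithmetic. Here is a minimal Python sketch of the averaging step as the Methodology page describes it; the band cutoffs in the sketch are our own hypothetical placeholders, since MB/FC publishes no exact boundaries.

# Sketch of the MB/FC-style arithmetic as we read the Methodology page: four
# subjective 0-10 category scores are averaged, and the average is placed on
# a bias line. The cutoffs below are hypothetical; MB/FC publishes none.
def mbfc_style_score(wording, sourcing, story_choices, affiliation):
    """Average the four subjective 0-10 category ratings."""
    return (wording + sourcing + story_choices + affiliation) / 4

def bias_band(score):
    """Map an averaged score to a bias label using assumed cutoffs."""
    if score < 2:
        return "Least Biased"
    if score < 5:
        return "Left-Center / Right-Center Bias"
    return "Left / Right Bias"

score = mbfc_style_score(4, 3, 4, 3)  # our ratings from above: 14 / 4 = 3.5
print(score, bias_band(score))  # -> 3.5 Left-Center / Right-Center Bias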

Why is Van Zandt's rating objectively more valid than ours? Or yours?

Here's more of Van Zandt's rating of PolitiFact:
Factual Reporting: VERY HIGH
World Press Freedom Rank: USA 43/180

Notes: PolitiFact.com is a project operated by the Tampa Bay Times, in which reporters and editors from the Times and affiliated media outlets “fact-check statements by members of Congress, the White House, lobbyists and interest groups”. They publish original statements and their evaluations on the PolitiFact.com website, and assign each a “Truth-O-Meter” rating. The ratings range from “True” for completely accurate statements to “Pants on Fire” (from the taunt “Liar, liar, pants on fire”) for false and ridiculous claims. Politifact has been called left biased by Extreme right wing and questionable sources. Our research indicates that Poltifact [sic] is an accurate fact checker and is considered the gold standard for political fact checking. (7/10/2016)

Source: http://www.politifact.com/

Notice the biased language from Van Zandt? Van Zandt only allows that PolitiFact has been called left-leaning by "Extreme right wing and questionable sources." In fact, PolitiFact has been called left-biased by many sources, including the non-partisan Allsides Project.

Van Zandt even has an opt-in poll on his PolitiFact page asking visitors how they rate PolitiFact's bias. Most of the respondents disagree with the site's rating of PolitiFact.


Over 50 percent of Van Zandt's respondents rated PolitiFact biased to the left. Does that mean that all those 2,000+ people were "Extreme right wing" or "questionable sources"?

Note: I voted "Left-Center."

Why is PolitiFact called the "gold standard" for fact checking instead of FactCheck.org, or even Zebra Fact Check? That's a mystery.

The crux of the matter


The temptation of subjective rating scales is obvious, but such scales misinform readers and probably tend to mislead their creators as well.

A rating scale that fails to base its ratings on quantifiable data is worthless. Van Zandt's ratings are worthless except to tell you his opinion.

Opinions about PolitiFact's bias start to have value when backed by specific, quantifiable findings. We've taken that approach for years here at PolitiFact Bias. When we see a biased headline, we write a post about it if it's of sufficient note. When we see bad reporting, we write a post about it and document PolitiFact's failure with reliable sourcing. When we see PolitiFact skewing its story choices in a way that unfairly harms conservatives (or liberals), we write an article about it. When we see systematic signs of bias in PolitiFact's ratings, we do objective research on it.

We do that because specific examples trump subjective rating scales.

Until Dave Van Zandt adds objective markers to the MB/FC rating scales and justifies every rating with real objective data, take the ratings with a boulder of salt. They're worthless without specific backing data.


Afters

On its PolitiFact page, Media Bias/Fact Check links to the flawed PolitiFact article we fisked here.

"VERY HIGH" factual reporting.

Hmm.


Update August 11, 2018:

Dave Van Zandt contacted us on Aug. 9, 2018, to say MB/FC had changed its rating of PolitiFact to "Left-Center." But we can't find any evidence the change occurred, so we have no response yet to the supposed change (perhaps Van Zandt's message was simply in error, intending to inform us of a more subtle shift in the rating).

The Internet Archive purports to have plenty of saves, but the only one it would show (when we checked) was from January 2018.

The latest live version contains the following, which seems short of moving PolitiFact to the "Left-Center" category:
Overall, this update reveals a slight leftward shift in Politifact’s fact checking selection, but not enough to move them from the least biased category. (7/10/2016) Updated (D. Van Zandt 7/15/2018)

That seems a bit wishy-washy. Subjectivity can have that effect.



Update 2, June 6, 2021:


MB/FC updated its rating of PolitiFact to "Left-Center Bias" with an update time-stamped 04/28/2021:

Overall, we rate Politifact Left-Center Biased based on fact checks that tend to be more favorable for the left. We also rate them High for factual reporting and a credible fact-checker that is not without bias. (7/10/2016) Updated (D. Van Zandt (4/28/2021)
Van Zandt's 2021 update makes the 2018 update vanish. It would be better to keep the publication and update dates all intact and link each to an appropriate URL at the Internet Archive.

The fact that Van Zandt has come around to our position does not mean that either one of us has rendered an objective judgment of PolitiFact's bias.

In our opinion, if PolitiFact's bias has shifted leftward since 2018, it has done so only by a fraction. And we would say Van Zandt continues to overestimate PolitiFact's reliability.