Tuesday, December 26, 2017

Not a Lot of Reader Confusion VIII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. PolitiFact Editor Angie Drobnic Holan says she doesn't notice much of that sort of thing. This series looks to help acquaint Holan and others with the evidence.


Oh, look. Another journal article using PolitiFact's report card data to judge the veracity of a politician (bold emphasis added):
Political fact-checking organizations and the mainstream media reported extensively on Trump’s false statements of fact and unsubstantiated generalizations. And they noted that he made such statements in staggering proportions. For example, by April of 2017, Politifact assessed 20% of Trump’s statements as mostly false, 33% as false, and 16% as what it called “pants on fire” false— cumulatively suggesting that the vast majority of the time Trump was making either false or significantly misleading statements to the public.
That's from The Resilience of Noxious Doctrine: The 2016 Election, the Marketplace of Ideas, and the Obstinacy of Bias.

The article, by Leonard M. Niehoff and Deeva Shah, appeared in the Michigan Journal of Race and Law.

The authors should be ashamed of themselves for making that argument based on data subject to selection bias and ideological bias.

On the bright side, we suppose such use of PolitiFact data may successfully model the obstinacy of bias.

We recommend to the authors this section of their article:
Confirmation bias, discussed briefly above, is another common type of anchoring bias. Confirmation bias describes our tendency to value facts and opinions that align with those we have already formed. By only referencing information and viewpoints that affirm previously held beliefs, people confirm their biased views instead of considering conflicting data and ideas.
 




Correction: Fixed link to Noxious Doctrine paper 1838PST 12/26/2017-Jeff

Friday, December 22, 2017

Beware, lest Trump & PolitiFact turn your liberal talking point into a falsehood!

PolitiFact gave President Donald Trump a "False" rating for claiming the GOP tax bill had effectively repealed the Affordable Care Act.


We figured there was a good chance that defenders of the ACA had made the same claim.

Sure enough, we found an example from the prestigious left-leaning magazine The Atlantic. The Google preview tells the story, as does the URL, though the headline tames things a little: "The GOP's High-Risk Move to Whack Obamacare in Its Tax Bill."

The key "repeal" line came from an expert The Atlantic cited in its story (bold emphasis added):
"Make no mistake, repealing the individual mandate is tantamount to repealing the Affordable Care Act," said Brad Woodhouse, campaign director for Protect Our Care, an advocacy group supportive of the ACA.
Would Woodhouse receive a "False" rating from PolitiFact if it rated his statement?

Would The Atlantic receive a "False" rating from PolitiFact?

Would PolitiFact even notice the claim if it wasn't coming from a Republican?



Afters (other liars who escaped PolitiFact's notice)

"GOP tax bill is just another way to repeal health care." (Andy Slavitt, USA Today)

"Republican tax bill to include Obamacare repeal" (Christian Science Monitor)

"Republicans undermine their own tax reform bill to repeal Obamacare" (Salon)

"Another Obamacare repeal effort doesn't actually have to be in the tax cuts bill, says the guy heading up popular vote loser Donald Trump's Office of Management and Budget." (Daily Kos)


Thursday, December 21, 2017

Layers of editors on PolitiFact's Facebook page

I can probably get away with posting this PolitiFact Facebook post from Dec. 21, 2017 without comment.

The mistake is obvious, right?






***SPOILER ALERT***






Surely they meant to post the second chart from the story instead of the one appearing above.



Tuesday, December 19, 2017

PolitiFact's "Pants on Fire" bias--2017 update (Updated)

What tale does the "Truth-O-Meter" tell?

For years, we at PolitiFact Bias have argued that PolitiFact's "Truth-O-Meter" ratings serve poorly to tell us about the people and organizations PolitiFact rates on the meter. But the ratings may tell us quite a bit about the people who run PolitiFact.

To put this notion into practice, we devised a simple examination of the line of demarcation between two ratings, "False" and "Pants on Fire." PolitiFact offers no objective means of distinguishing between a "False" rating and a "Pants on Fire" rating. In fact, PolitiFact's founding editor, Bill Adair (now on staff at Duke University), described the decision about the ratings as "entirely subjective."

Angie Drobnic Holan, who took over for Adair in 2013 after Adair took the position at Duke, said "the line between 'False' and 'Pants on Fire' is just, you know, sometimes we decide one way and sometimes decide the other."

After searching in vain for dependable objective markers distinguishing the "Pants on Fire" rating from the "False" rating, we took PolitiFact at its word and assumed the difference between the two is subjective. We researched the way PolitiFact applied the two ratings as an expression of PolitiFact's opinion, reasoning that we could use the opinions to potentially detect PolitiFact's bias (details of how we sorted the data here).
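For readers who want the arithmetic spelled out, here is a minimal sketch of how a "percent more likely" figure falls out of the rating counts. The function names and counts below are placeholders for illustration, not PolitiFact's actual tallies:

def pof_share(pof_count, false_count):
    # Share of a party's false claims ("False" plus "Pants on Fire")
    # that drew the harsher "Pants on Fire" rating.
    return pof_count / (pof_count + false_count)

def pof_bias(gop_pof, gop_false, dem_pof, dem_false):
    # How much more likely a false GOP claim was to be rated "Pants on Fire"
    # than a false Democratic claim, as a percentage. A negative result
    # means Democrats were the more likely party.
    return (pof_share(gop_pof, gop_false) / pof_share(dem_pof, dem_false) - 1) * 100

# Placeholder counts: 20 GOP "Pants on Fire" vs. 60 GOP "False" (25 percent share),
# 10 Democratic "Pants on Fire" vs. 70 Democratic "False" (12.5 percent share).
print(pof_bias(20, 60, 10, 70))  # 100.0 -- "100 percent more likely"

Note that the comparison is a ratio of shares, not a difference in percentage points.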

Our earliest research showed that, after PolitiFact's first year, Republicans were much more likely than Democrats to have a false claim rated "Pants on Fire" instead of merely "False." Adair has said that the "Pants on Fire" rating was treated as a lighthearted joke at first--see this rating of a claim by Democrat Joe Biden as an example--and that probably accounts for the unusual results from 2007.

In 2007, the lighthearted joke year, Democrats were 150 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2008, Republicans were 31 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2009, Republicans were 214 percent more likely to receive a "Pants on Fire" rating for a false statement (not a typo).

In 2010, Republicans were 175 percent more likely to receive a "Pants on Fire" rating for a false statement (again, not a typo).

We published our first version of this research in August 2011, based on PolitiFact's first four years of operation.

In 2011, Republicans were 57 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2012, Republicans were 125 percent more likely to receive a "Pants on Fire" rating for a false statement.

Early in 2013, PolitiFact announced Adair would leave the project that summer to take on his new job at Duke. Deputy editor Angie Drobnic Holan was named as Adair's replacement on Oct. 2, 2013.

In 2013, the transition year, Republicans were 24 percent more likely to receive a "Pants on Fire" rating for a false statement.

Had Republicans started to curb their appetite for telling outrageous falsehoods?

In 2014, Republicans were 95 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2015, Republicans were 2 percent (not a typo) more likely to receive a "Pants on Fire" rating for a false statement.

In 2016, Republicans were 17 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2017, Democrats were 13 percent more likely to receive a "Pants on Fire" rating for a false statement.

Had Republicans gotten better than Democrats at reining in their impulse to utter their false statements in a ridiculous form?

We suggest that our data through 2017 help confirm our hypothesis that the ratings tell us more about PolitiFact than they do about the politicians and organizations receiving the ratings.






Do the data give us trends in political lying, or separate journalistic trends for Adair and Holan?

We never made any attempt to keep our research secret from PolitiFact. From the first, we recognized that PolitiFact might encounter our work and change its practices to decrease or eliminate the appearance of bias from its application of the "Pants on Fire" rating. We did not worry about it, knowing that if PolitiFact corrected the problem it would help confirm the problem existed regardless of what fixed it.

Has PolitiFact moderated or fixed the problem? Let's look at more numbers.

The "Pants on Fire" bias

From 2007 through 2012, PolitiFact under Adair graded 29.2 percent of its false claims from the GOP "Pants on Fire." For Democrats the percentage was 16.1 percent.

From 2014 through 2017, PolitiFact under Holan graded 26 percent of its false claims from the GOP "Pants on Fire" and 21.9 percent for Democrats.

It follows that under Adair PolitiFact was 81.4 percent more likely to give a "Pants on Fire" rating to a false GOP statement than one for a Democrat. That includes the anomalous 2007 data showing a strong "Pants on Fire" bias against Democrats.

Under Holan, PolitiFact was just 18.7 percent more likely to give a "Pants on Fire" rating to a false GOP statement than one from a Democrat.
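As a quick check on those two figures, the same ratio can be run directly on the era-level shares quoted above (a sketch only, using the rounded percentages):

# Share of false claims rated "Pants on Fire," by era and party.
adair = {"gop": 0.292, "dem": 0.161}  # 2007-2012
holan = {"gop": 0.260, "dem": 0.219}  # 2014-2017

for name, era in (("Adair", adair), ("Holan", holan)):
    more_likely = (era["gop"] / era["dem"] - 1) * 100
    print(name, round(more_likely, 1))  # Adair ~81.4, Holan ~18.7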

Story selection bias

While tracking the percentage of false ratings given a "Pants on Fire" rating, we naturally tracked the sheer number of times PolitiFact issued false ratings (either "False" or "Pants on Fire"). That figure speaks to PolitiFact's story selection.

From 2007 through 2012, PolitiFact under Adair found an average of 55.3 false claims per year from Republicans and 25.8 false claims per year from Democrats. That includes 2007, when PolitiFact was only active for part of the year.

From 2014 through 2017, PolitiFact under Holan found an average of 81 false claims per year from Republicans and 16 false claims per year from Democrats.

Under Holan, the annual finding of false claims by Republicans increased by about 46 percent. At the same time, PolitiFact's annual finding of false claims by Democrats fell by 38 percent.

Update Jan. 1, 2018: GOP false claims reached 90 by year's end.
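The story selection comparison works the same way, from the per-year averages quoted above (again a sketch, using only the rounded averages):

adair_gop, adair_dem = 55.3, 25.8  # average false claims per year, 2007-2012
holan_gop, holan_dem = 81.0, 16.0  # average false claims per year, 2014-2017

gop_change = (holan_gop / adair_gop - 1) * 100  # roughly +46 percent
dem_change = (holan_dem / adair_dem - 1) * 100  # roughly -38 percent
print(round(gop_change, 1), round(dem_change, 1))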


One might excuse the increase for the GOP by pointing to staff increases. But the same reasoning serves poorly to explain the decrease for the Democrats. Likewise, increased lying by Republicans does not automatically mean Democrats decreased their lying.

Did the Democrats as a party tend strongly toward greater truth-telling, with the notable blemish of a greater tendency to go "Pants on Fire" when relating a falsehood?

Conclusion

We suggest that changes in PolitiFact's practices more easily make sense of these data than do substantial changes in the truth-telling patterns of the two major U.S. political parties. When Adair stepped down as PolitiFact's editor, a different person started running the "star chamber" meetings that decide the "Truth-O-Meter" ratings and a different set of editors voted on the outcomes.

Changing the group of people who decide subjective ratings will obviously have a substantial potential effect on the ratings.

We suggest that these results support the hypothesis that subjectivity plays a large role in PolitiFact's rating process. That conclusion should not surprise anyone who has paid attention to the way PolitiFact describes its rating process.

Has Holan cured PolitiFact of liberal bias?

We recognized from the first that the "Pants on Fire" bias served as only one measure of PolitiFact's ideological bias, and one that PolitiFact might address. Under Holan, the "Pants on Fire" bias serves poorly to demonstrate a clear ideological bias at PolitiFact.

On the other hand, PolitiFact continues to churn out anecdotal examples of biased work, and the difficulty Holan's PolitiFact has in finding false statements from Democrats compared to Adair's PolitiFact suggests our data simply show something of a trade-off.

When we started evaluating PolitiFact's state operations, such as PolitiFact Georgia, we noticed that lopsided numbers of false statements were often accompanied by a higher percentage of "Pants on Fire" statements from the party receiving many fewer false ratings. We hypothesized a compensatory bias might produce that effect when the fact checkers, consciously or unconsciously, encourage the appearance of fairness.

PolitiFact, after all, hardly needs to grade false Republican statements more harshly to support the narrative that Republicans lie more when it is finding, on average, five times more false statements from Republicans than Democrats.


We doubt not that defenders of PolitiFact can dream up some manner of excusing PolitiFact based on the "fact" that Republicans lie more. But we deeply doubt that any such approach can find a basis in empirical evidence. Subjective rating systems do not count as empirical evidence of the rate of lying.


In addition to empirically justifying the increase in GOP falsehoods, defenders will need to explain the decrease in Democratic Party falsehoods implied in PolitiFact's ratings. Why, with a bigger staff, is PolitiFact having a more difficult time finding false statements from Democrats than it did when Adair was steering the ship?

If the Truth-O-Meter data were objective, the differing trends we see under Adair and Holan would make it sensible to question the data's reliability.

Given PolitiFact's admissions that its story selection and ratings are substantially subjective, it makes sense for the objective researcher to first look to the most obvious explanation: PolitiFact bias. 

 

Notes on the research method

Our research on the "Pants on Fire" bias looks at partisan elected officials or officeholders as well as candidates and campaign officials (including family members participating in the campaign). We exclude PolitiFact ratings where a Republican attacked a Republican or a Democrat attacked a Democrat, reasoning that such cases may muddy the water in terms of ideological preference. The party-on-party exclusions occur rarely, however, and do not likely affect the overall picture much at all.

In the research, we use the term "false claims" to refer to claims PolitiFact rated either "False" or "Pants on Fire." We do not assume PolitiFact correctly judged the claims false.

Find the data spreadsheet here.


Afters

We have completed alternative versions of our charts with the data for Donald Trump removed, and we'll publish those separately from this article at a later time. The number of false claims from Republicans went down from 2015 through 2017, though PolitiFact still issued far more false ratings to Republicans than to Democrats. The "Pants on Fire" percentages were almost identical except for 2016. With Trump removed from the data, the Republicans would have set an all-time record for either party for the lowest percentage of "Pants on Fire" claims.

These results remain consistent with our hypothesis that PolitiFact's "False" and "Pants on Fire" ratings reflect a high degree of subjectivity (with the former perhaps largely influenced by story selection bias).



Update Dec. 19, 2017: Added intended hyperlink to explanations of the research and the notable Biden "Pants on Fire."
Update Dec. 21, 2017: Corrected date of previous update (incorrectly said Dec. 12), and updated some numbers to reflect new PolitiFact ratings of Donald Trump through Dec. 21, 2017: "13 percent"=>"10 percent", "87.3 claims per year"=>"80.5 claims per year", "23.8"=>"26.1" and "8.7"=>"19.2." The original 87.3 and 23.8 figures were wrong for reasons apart from the new data. We will update the charts once the calendar year finishes out. Likewise, the 8.7 figure derived in part from the incorrect 23.8.

Update Jan. 1, 2018: Changed "10 percent" back to "13 percent" to reflect updated data for the whole year. "80.5 claims per year" updated to "81 claims per year." We also changed "26.1" to "26" and "8.7" to "18.7." The latter change shows that we neglected to make the "8.7" to "19.2" change we announced in the description of the Dec. 21, 2017 update, for which we apologize.

Saturday, December 16, 2017

Update on that Pulitzer Prize mark of excellence

How often have we seen people appeal to PolitiFact's 2009 Pulitzer Prize as proof of its standard of accuracy?

We've tried to explain to people that the Pulitzer judges aren't likely to fact check the fact checkers. The Pulitzer judges look for things like style, impact and relevance.

Thankfully, we just ran across an interview that helps make our point.

The interviewer, James Warren, says he served on a Pulitzer jury (confirmed), and states the rules prevented him from following his impulse to fact check the work he was judging:
[JW]
Does the rise of fact-checking play into a new era at all? I recall a few times as a judge wanting to independently verify stuff in entries but not being allowed to. I might have wanted to know if a claimed exclusive was really what an entry later claimed.
[DC]
I'm not sure it's the role of the jury to second-guess work that is being submitted. Now it might be like a parent who over-praises their child. But that's only a matter of enthusiasm, not dishonesty. I don't think there's much of a record at all of Pulitzers suffering from choosing work that hasn't lived up to what it's awarded.
Warren said he was not allowed to independently verify material from Pulitzer entries.

It's worth noting that the interviewee, new Pulitzer Prize chief Dana Canedy, appears to affirm that Pulitzer juries do not see fact-checking contest entries as any part of the job.

It makes no sense to regard the Pulitzer Prize as any type of guarantee of journalistic accuracy. The jurors assume that the submitted works adhere to basic journalistic principles of accuracy and fairness unless the works themselves obviously contradict that idea.

Trust PolitiFact in 2018 because of a Pulitzer Prize awarded in 2009? Bad idea.

And it would have been a bad idea to trust PolitiFact in 2010 based on the Pulitzer Prize in 2009.

Tuesday, December 12, 2017

PolitiFact's lying "Lie of the Year" award for 2017 (Updated)

On Dec. 12, 2017, PolitiFact announced its 2017 "Lie of the Year." PolitiFact supposedly gave its award to a particular statement from President Trump.

PolitiFact (bold emphasis added):
"This Russia thing with Trump and Russia is a made-up story. It's an excuse by the Democrats for having lost an election that they should've won," said President Donald Trump in an interview with NBC’s Lester Holt in May.
PolitiFact Bias correctly predicted the winner. But even we hardly imagined the Olympic-grade gymnastics the editors of PolitiFact would perform in justifying their selection.

We thought PolitiFact would cross its fingers and hope the Mueller investigation would implicate Trump in some type of illegal collusion with the Russians.

Instead, PolitiFact turned Trump's statement into a complete denial that Russia interfered with the election. Rather than "Trump and Russia," as Trump put it, PolitiFact trimmed the issue down to just "Russia."

No, seriously. PolitiFact did that. Let's start with the headline of its "Lie of the Year" announcement:

2017 Lie of the Year: Russian election interference is a 'made-up story'

Did Trump say anything in the winning statement about Russian election interference being a "made-up" story? We're not seeing it, and PolitiFact does not explain the connection. Maybe in context?

We looked to PolitiFact's original rating of Trump's claim for clues. That story suggested Trump was claiming that Democrats made up the Trump-Russia narrative. PolitiFact said James Comey's report of a "credible allegation" (or "reasonable basis to believe"!) was enough to "rebut" (refute?) Trump's charge that the narrative was made up.

How did PolitiFact know that the "credible allegation" was not made up and not by a Democrat? We do not know. PolitiFact will have to answer that one. We can only marvel at the idea that a "reasonable basis to believe" unequivocally serves as a foundation for stating something as fact.

Do we think PolitiFact's narrative that Trump completely denied Russian election interference stands up to scrutiny? We do not (Reuters, Jan 6, 2017):
WASHINGTON (Reuters) - President-elect Donald Trump accepts the U.S. intelligence community’s conclusion that Russia engaged in cyber attacks during the U.S. presidential election and may take action in response, his incoming chief of staff said on Sunday.
In opposition to PolitiFact's reasoning, we think it much more reasonable to take Trump to mean that the narrative attempting to connect the Trump campaign to Russian meddling has no evidence to back it. If such evidence existed, it would have served to help justify the Robert Mueller investigation. Instead, Mueller was given the job of looking at a broad category of interactions ("collusion") for something that could justify criminal charges.

In fact, PolitiFact's description of what Trump said bears little resemblance to what he said.

PolitiFact (bait in red, switch in blue, highlights added):

Trump could acknowledge the interference happened while still standing by the legitimacy of his election and his presidency — but he declines to do so. Sometimes he’ll state firmly there was "no collusion" between his campaign and Russia, an implicit admission that Russia did act in some capacity. Then he reverts back to denying the interference even happened.
Declining to acknowledge the interference, supposing the Reuters story cited above counts for nothing, is not the same thing as denying the interference ever happened.

If PolitiFact had any clear statement from Trump denying Russia made any effort to interfere in the U.S. presidential election, PolitiFact would have been smart to include it (see the "Afters" section, below).

Lacking that evidence, we conclude that PolitiFact has exaggerated, one might even say "made up," the degree to which President Trump denies Russian election interference.




Afters

We say PolitiFact offered no unequivocal evidence Trump denied all Russian meddling in the U.S. election. But PolitiFact did offer evidence that it perhaps interpreted that way.

We think it fair to let PolitiFact make its case:
Facebook, Google and Twitter have investigated their own networks, and their executives have concluded — in some cases after initial foot-dragging — that Russia used the online platforms in attempts to influence the election.

After all this, one man keeps saying it didn’t even happen.

"This Russia thing with Trump and Russia is a made-up story. It's an excuse by the Democrats for having lost an election that they should've won," said President Donald Trump in an interview with NBC’s Lester Holt in May.

On Twitter in September, Trump said, "The Russia hoax continues, now it's ads on Facebook. What about the totally biased and dishonest Media coverage in favor of Crooked Hillary?"

And during an overseas trip to Asia in November, Trump spoke of meeting with Putin: "Every time he sees me, he says, ‘I didn't do that.’ And I really believe that when he tells me that, he means it." In the same interview, Trump referred to the officials who led the intelligence agencies during the election as "political hacks."

Trump continually asserts that Russia’s meddling in the 2016 election is fake news, a hoax or a made-up story, even though there is widespread, bipartisan evidence to the contrary.
 We've covered PolitiFact's trading of "Trump and Russia" for just "Russia."

What "Russia hoax" was continuing? The hoax of Russian interference or the hoax of Trump and Russia collaborating to steal the election from its rightful winner?

If Trump says he thinks Putin's denials are sincere, does that likewise mean that Trump thinks nobody in Russia did anything to interfere with the U.S. election?

Who fact checks like that, not counting liberal bloggers?



Update Dec. 14, 2017: Jeff Adds:

I concur with Bryan's points above but wanted to add my gripes about PolitiFact's latest agitprop.

1) What exactly is "bipartisan evidence"? Can evidence be partisan? Can a fact have a political motive? If the nonpartisans at PolitiFact think so, it would explain a lot.

2) No decent editor should have allowed this line:
Sometimes he’ll state firmly there was "no collusion" between his campaign and Russia, an implicit admission that Russia did act in some capacity.
Huh? On what planet is denying that Trump's campaign colluded with the Russians an implicit admission that the Russians interfered in the election? PolitiFact's argument is a non sequitur, if it even makes sense at all.

3) It seems to be an accepted truth on the left that Russian interference changed the outcome of the election, but is there any compelling evidence of that?
It seems unlikely — though not impossible — that Russia interference changed the outcome of the election. We at PolitiFact have seen no compelling evidence that it did so.
Talk about a buried lede!

The fact is that, so far, the only evidence of Russian "interference" is a disorganized social media campaign. There's been no evidence of vote tampering, no voting booth intimidation, no vote machine hacking. [Disclosure: I am a frequent user of Twitter and Facebook but somehow overcame the onslaught of Russian brainwashing and did not vote for Trump.]

For PolitiFact to describe buying Facebook ads as "a threat to U.S. democracy" is Louise Mensch-grade delusion. Further, Holan's assertion that Trump refuses to acknowledge the "threat to democracy" begs the question. She asserts as fact that Russian interference, to whatever extent it existed, is a threat to America. Perhaps she could prove the threat is real before calling it a lie to deny it.

The premise of PolitiFact's argument rests comfortably in the swamp of liberal media where the words influence, interference, and election action all mean the same thing. Let's turn PolitiFact's trick back against it:
Trump could acknowledge the interference happened while still standing by the legitimacy of his election...
If the legitimacy of the election is a fact, then it's implied the Russians did not interfere in the election, since (using PolitiLogic throughout) if the Russians did interfere in the election, it would not be a legitimate election.

Perhaps PolitiFact chose the Russian "interference" story for their Lie of the Year because it hit so close to home. After all, misleading large swaths of impressionable users by exploiting social media to spread a political agenda with poorly written posts that don't hold up to scrutiny is PolitiFact's bread and butter.

It's hard for me to imagine PolitiFact editor Angie Holan ever persuading someone beyond her bubble that she is a convincing, coherent, and unbiased professional, but maybe that's just the vodka talking.

See you next year, comrades!

Thursday, December 7, 2017

Another partisan rating from bipartisan PolitiFact

"We call out both sides."

That is the assurance PolitiFact gives its readers to signal that it rates statements impartially.

We've pointed out before, and we will doubtless repeat it in the future, that rating both sides serves as no guarantee of impartiality if the grades skew left whether rating a Republican or a Democrat.

On December 1, 2017, PolitiFact New York looked at Albany Mayor Kathy M. Sheehan's claim that simply living in the United States without documentation is not a crime. PolitiFact rated the statement "Mostly True."


PolitiFact explained that while living illegally in the United States carries civil penalties, it does not count as a criminal act. So, "Mostly True."

Something about this case reminded us of one from earlier in 2017.

On May 31, 2017, PolitiFact's PunditFact looked at Fox News host Gregg Jarrett's claim that collusion is not a crime. PolitiFact rated the statement "False."


Upon examination, these cases prove very similar, the ratings aside.

Sheehan defended Albany's sanctuary designation by suggesting that law enforcement need not look at immigration status because illegal presence in the United States is not a crime.

And though PolitiFact apparently didn't notice, Jarrett made the point that Special Counsel Mueller was put in charge of investigating non-criminal activity (collusion). Special counsels are typically appointed to investigate crimes, not to find out whether a crime was committed.

On the one hand, Albany police might ask a driver for proof of immigration status. The lack of documentation might lead to the discovery of criminal acts such as entering the country illegally or falsifying government documents.

On the other hand, the Mueller investigation might investigate the relationship (collusion) between the Trump campaign and Russian operatives and find a conspiracy to commit a crime. Conspiring to commit a crime counts as a criminal act.

Sheehan and Jarrett were making essentially the same point, though collusion by itself doesn't even carry a civil penalty like undocumented immigrant status does.

So there's PolitiFact calling out both sides. Sheehan and Jarrett make almost the same point. Sheehan gets a "Mostly True" rating. Jarrett gets a "False."

That's the kind of non-partisanship you get when liberal bloggers do fact-checking.



Afters

Just to hammer home the point that Jarrett was right, we will review the damning testimony of the three impartial experts who helped PunditFact reach the conclusion that Jarrett was wrong.
Nathaniel Persily at Stanford University Law School said one relevant statute is the Bipartisan Campaign Reform Act of 2002.

"A foreign national spending money to influence a federal election can be a crime," Persily said. "And if a U.S. citizen coordinates, conspires or assists in that spending, then it could be a crime."
The conspiracy to commit the crime, not the mere collusion, counts as the crime.

Next:
Another election law specialist, John Coates at Harvard University Law School, said if Russians aimed to shape the outcome of the presidential election, that would meet the definition of an expenditure.

"The related funds could also be viewed as an illegal contribution to any candidate who coordinates (colludes) with the foreign speaker," Coates said.
Conspiring to collect illegal contributions, not mere collusion, would count as the crime. Coates also offered the example of conspiring to commit fraud.
Josh Douglas at the University of Kentucky Law School offered two other possible relevant statutes.

"Collusion in a federal election with a foreign entity could potentially fall under other crimes, such as against public corruption," Douglas said. "There's also a general anti-coercion federal election law."
The corruption, not the mere collusion, would count as the crime.

How PolitiFact missed Jarrett's point after linking the article he wrote explaining what he meant is far beyond us.

Friday, December 1, 2017

Not a Lot of Reader Confusion VII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. PolitiFact Editor Angie Drobnic Holan says she doesn't notice much of that sort of thing.

We're here to help.

This comes from the lead edge of December 2017 and PolitiFact's own Facebook page:


Somebody introduced a subjective PolitiFact chart in answer to a call for a scientific study showing the unreliability of Fox News. So far as we can tell, the citation was meant seriously.

We predict that no number of examples short of infinity will convince Holan that we are right and she is wrong. At least publicly. Privately, maybe.