Tuesday, May 22, 2018

PolitiFact rewrites the Logan Act

We know that PolitiFact is non-partisan because it doesn't make mistakes like this.


A May 22, 2018 PolitiFact article (with no "Truth-O-Meter" rating) by John Kruzel looked at allegations of a secret meeting at a Paris restaurant between former secretary of state John Kerry and Iranian representatives.

PolitiFact judged that no solid evidence supported the allegations. More interestingly, PolitiFact framed its article as a defense of Kerry from charges he violated the Logan Act.

And that's where PolitiFact slipped up. Badly.

PolitiFact (bold emphasis added):
Trump and right-wing backers challenged Kerry’s actions as violating the 18th century Logan Act, which prevents U.S. citizens from privately meeting with a foreign government to sway its decisions on matters involving the United States.
PolitiFact implies that because the private restaurant meeting probably didn't take place, the charge that Kerry violated the Logan Act has no basis in fact.

The problem? The Logan Act doesn't forbid U.S. citizens from privately meeting with foreign governments. The Logan Act forbids private citizens from conducting U.S. foreign policy on behalf of the United States without its authority.

The Logan Act:
Any citizen of the United States, wherever he may be, who, without authority of the United States, directly or indirectly commences or carries on any correspondence or intercourse with any foreign government or any officer or agent thereof, with intent to influence the measures or conduct of any foreign government or of any officer or agent thereof, in relation to any disputes or controversies with the United States, or to defeat the measures of the United States, shall be fined under this title or imprisoned not more than three years, or both.

This section shall not abridge the right of a citizen to apply, himself or his agent, to any foreign government or the agents thereof for redress of any injury which he may have sustained from such government or any of its agents or subjects.
Making PolitiFact's fact check even more hilarious (and slanted), the paragraph preceding PolitiFact's erroneous description of the Logan Act describes Kerry meeting with various foreign officials, including an Iranian, regarding the Iran deal (bold emphasis added):
In the weeks before Trump’s May 8 decision to exit the deal and re-impose sanctions on Iran, Kerry had worked frantically behind the scenes to preserve the deal he helped craft in 2015, according to the Boston Globe. Ahead of the U.S. withdrawal, Kerry, who was secretary of state under President Barack Obama, met with Iran’s Foreign Minister Javad Zarif, courted European officials and made dozens of calls to members of Congress in hopes of salvaging the accord.
PolitiFact's apparent effort to exonerate Kerry with its framing of the story ends up convicting Kerry, with the Logan Act properly understood.

How does a non-partisan fact checker make such a huge mistake?

Don't ask us.


Update May 23, 2018: Updated link to Internet Archive version of the PolitiFact article. The first version of that URL was somehow defective.

Sunday, May 20, 2018

PolitiFact Wisconsin and the Worry-O-Meter

PolitiFact Wisconsin had no representation in our article on the 17 worst PolitiFact fact checks of 2017.

A May 18, 2018 fact check of Republican Leah Vukmir should help ensure PolitiFact Wisconsin makes the list for 2018.


Vukmir, a Republican seeking the nomination to run against Sen. Tammy Baldwin (D-Wis.) in the 2018 election cycle, has used a hyperbolic ad campaign to paint Baldwin as weak on terrorism. Vukmir said Baldwin worried more about the architect of the 9/11 terrorist attacks than about confirming Gina Haspel to head the CIA.

Democratic opposition to the Haspel nomination stemmed chiefly from Haspel's involvement in the enhanced interrogation program, which included the technique of waterboarding. The CIA released a disciplinary review saying Haspel had no involvement in the decision to use enhanced interrogation but simply carried out the orders she was issued.

PolitiFact Wisconsin adroitly skipped over all that and took the liberty of re-interpreting Vukmir's claim:
Does U.S. Sen. Tammy Baldwin have so much more concern for a 9/11 terrorist, compared to the president’s nominee to run the CIA, that she would vote against the nominee?
Vukmir's claim was simpler than PolitiFact Wisconsin's creative paraphrase (source: PolitiFact):
Tammy and her party are more interested, and they’re more worried about, the mastermind of 9/11 -- the individual that plotted and ultimately killed over 3,000 Americans on our soil. And she‘s more worried about those individuals than to support a very strong woman with a track record to be the head of the CIA.
Note that Vukmir did not say anything about what motivated Baldwin to withhold support for Haspel.

We suspect PolitiFact Wisconsin is in a small minority in its inability to figure out Vukmir's message: Opposing Haspel's nomination based merely on her following orders within the CIA hampers the CIA's ability to do its job effectively. Imagine working at the CIA and thinking one must second-guess the orders one receives to have a realistic shot at one day leading the CIA.

PolitiFact Wisconsin's fact check spent not a word on that angle of the story, sticking instead to its own idea that Vukmir must show that Baldwin personally showed significant worry about Khalid Sheik Mohammed in order to earn a rating better than "Pants on Fire."

Farcical Fact-Checking

To fact check what Vukmir actually said, PolitiFact Wisconsin would have needed evidence not only showing Baldwin's level of worry for Mohammed but also her level of worry for Haspel's nomination. Otherwise there's no baseline for determining one is greater than the other.

After all, Vukmir clearly made a claim comparing the two.

And how does one assess levels of worry without asserting an opinion? One might go by what a person said, but that assumes an entirely forthright subject. We don't know the answer. And PolitiFact offered no evidence it has an answer.

PolitiFact's approach was preposterous from the outset. It established no specific level of worry over Mohammed and no specific level of worry over the Haspel nomination, yet it concluded that one was not lower than the other.

Vukmir's statement was best interpreted as hyperbole expressing the damage to CIA operations stemming from refusing a leadership role to a fully qualified woman for nothing more than following orders associated with the enhanced interrogation program--a program that the CIA described to leading congressional members of both parties without apparent objection at the time.

PolitiFact says it grants license for hyperbole. Exceptions doubtless stem, as we've said before, from Republicans trying to use hyperbole without a license.
• Is the statement rooted in a fact that is verifiable? We don’t check opinions, and we recognize that in the world of speechmaking and political rhetoric, there is license for hyperbole.
PolitiFact says it doesn't rate opinions. We suppose PolitiFact is entitled to its own opinion.


After Vukmir made her claim about Baldwin, Baldwin ended up voting in opposition to the Haspel nomination.

Wednesday, May 16, 2018

Not a Lot of Reader Confusion XI

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.
 
A comment today at PolitiFact's Facebook page reminds us yet again that PolitiFact's graphs and charts mislead its audience.


The comment referred to PolitiFact Editor Angie Drobnic Holan's Dec. 11, 2015 opinion article in The New York Times. When a reader scoffed at the use of the Times as a reliable reference, the person making the comment defended it with this (bold emphasis added):
ALL of the statistics both come from Politifact itself. I use it because it has a very nice graphic that clearly shows that Republicans as a whole are far more full of pony poop than Democrats are.
As we've pointed out repeatedly, PolitiFact has admitted its "Truth-O-Meter" ratings are subjective and that its sampling method makes no attempt to simulate randomness (therefore one may assume the data suffer from selection bias).

Yet another satisfied PolitiFact customer, misled in a way that Holan says doesn't happen a lot.

If it doesn't happen very often, then why is it so easy to find examples of it happening, we wonder?

If PolitiFact were concerned about misleading people in this way, it would attach disclaimers to every one of its graphs to warn against jumping to conclusions the PolitiFact data cannot rightly support.

Thursday, May 10, 2018

PolitiFact Editor: "There Might Be Some Inconsistencies From Time to Time"

During a PolitiFact Reddit AMA, we asked about PolitiFact's defense of the Affordable Care Act's "cuts" to Medicare compared to its antagonistic reporting of GOP efforts to cut Medicare and Medicaid.

PolitiFact Editor Angie Drobnic Holan responded:
Angie here ... We talked about the reasons for that in the Medicaid check. The Medicare reduction was aimed at cost efficiency, while the Medicaid reduction was aimed at reducing the number of enrollees. Go back and look at the Medicaid checks and you should see that. With as many checks as we do -- more thant [sic] 13,000 -- there might be inconsistencies from time to time, but I disagree that there's any longstanding pattern.
Inconsistencies from time to time ... yeah, that's one way to put it.


(in what follows we lean heavily on our earlier analysis of this issue)

We added URLs to the quotations for emphasis and to lead to the sources of the quotations. This survey helps establish beyond reasonable doubt that PolitiFact disagreed with the characterization that the ACA cut Medicare.



DEMOCRATS CUT MEDICARE GROWTH 


"The ad loses points for accuracy because the $500 billion aren't actual cuts but reductions to future spending for a program that will still grow significantly in the next 10 years."

"The new law would indeed slow the rate of growth of the broader Medicare program by roughly that amount over 10 years. But it's not a slam-dunk that this represents a cut."

"The ad conflates actual cuts with decreases in future spending, over the next decade for a program expected to expand, and it fails to mention any of the benefits to seniors under the new Medicare program."

"Boxer voted for the health care bill, but it didn't cut $500 billion out of the current Medicare program. Instead, it slowed growth over the next 10 years."

"That makes it sound like money out of the Medicare budget today, when Medicare spending will actually increase over the next 10 years. What Johnson labels a cut is an attempt to slow the projected increase in spending by $500 billion."

"The vote taken by Congress was not to cut Medicare but to reduce the rate of growth by $500 billion by targeting inefficiencies in the program."

"The $500 billion in 'cuts' are actually cuts to future increased spending, so we give Haridopolos some credit for not simply calling it cuts to Medicare as so many others have done."

"So while the health care law reduces the amount of future spending growth in Medicare, the law doesn't cut Medicare."

"But it incorrectly describes it as $500 billion in Medicare cuts, rather than as decreases in the rate of growth of future spending."

"The referenced $500 billion figure depends on a slowing in the pace of Medicare cost increases. That’s not the same as cutting back"

"But the law attempts to curtail the rapid growth of future Medicare spending, not cut current funding."


We could go on and on like this, for PolitiFact never seemed to tire of correcting Republican claims that the ACA cut Medicare. If somebody said the ACA cut Medicare, PolitiFact's formula was to say the ACA did not cut Medicare but instead cut the growth of spending.

PolitiFact has shown comparatively little interest in frequent claims from Democrats that Republicans are cutting/slashing/gutting Medicaid or Medicare.

REPUBLICANS CUT MEDICAID/MEDICARE

 

June 26, 2017 (checking Republican Kellyanne Conway):

"And on one level, she has a point, we at PolitiFact found. Future savings are not always 'cuts.'"

PolitiFact got around Conway's point by asserting that the BCRA cut Medicaid enrollment and that feature made the bill a cut. That's despite Conway's claim coming from a context that focused on dollars spent.


October 26, 2017 (PolitiFact New York checking Democrat Charles Schumer)

PolitiFact New York (bold emphasis added, hyperlink in the original):
The Senate Budget Committee has a point that Medicare spending will be going up, just not as fast as it would under the status quo. It also has a point that more modest cuts sooner could stave off bigger cuts later. (Experts have often told us that it’s presumptuous to assume significant economic growth impacts before they materialize.)

But we don’t find it unreasonable for Schumer to call cumulative reductions to Medicare and Medicaid spending in the hundreds of billions of dollars "cuts."
When did it become reasonable to call cumulative reductions to Medicare or Medicaid spending "cuts"?

September 13, 2017
February 20, 2018 (two "Trump-O-Meter" entries at the same URL)

The 2017 entry (bold emphasis added, hyperlinks in the original):
The 2018 White House budget proposal released in May left Medicare benefits largely untouched compared with Medicaid, which would see a more than $600 billion decrease over 10 years compared to current spending levels. Still, Medicare spending would decrease by more than $50 billion in the next decade compared with current levels.
Is a decrease the same as a cut? If a decrease is the same as a cut, why doesn't PolitiFact inform its readers that the decrease happens relative to projected future spending and not current levels? Spending under the budget proposal goes up compared with current levels, contrary to PolitiFact's claim.

The 2018 entry (bold emphasis added, hyperlink in the original):
Over 10 years, Trump's 2019 budget proposal says it would cut Medicare spending by a cumulative $236 billion, including by reductions in "waste" and "fraud" and by changing the way drugs are priced and paid for in the program.
Again, PolitiFact finds it completely unimportant to distinguish between cuts to current spending and slowing the projected growth of spending.



March 29, 2018 (checking Democrat Ron Wyden)

Wyden claimed the GOP wants to take away Social Security, Medicare and Medicaid. PolitiFact found that false. And PolitiFact pointed out that though the Trump budget cuts (PolitiFact's term) Medicare and Medicaid, it does not show a desire to take away those programs.

PolitiFact (bold emphasis added, hyperlink in the original):
For Medicaid, Republican-proposed cuts could lead to specific individuals losing their coverage.
Hey, PolitiFact, are those actual cuts or merely slowing the growth of the program's spending? Is it even important to distinguish between the two?


Summary

PolitiFact's framing of cuts to program growth changed systematically and dramatically, depending mostly (with a small handful of exceptions) on which party was receiving criticism. The pattern over a period of years is obvious and irrefutable.


"Some inconsistencies from time to time." O-kay.

Tuesday, April 17, 2018

What??? No Pulitzer for PolitiFact in 2018?

We're not surprised PolitiFact failed to win a Pulitzer Prize in 2018.

The Pulitzer it won back in 2009 was a bit of a fluke to begin with, stemming from prize submissions in the public service category that the Pulitzer board arbitrarily used to justify a prize for "National Reporting."

Since the win in 2009, PolitiFact has won exactly the same number of Pulitzer Prizes that PolitiFact Bias has won: Zero.

So, no, we're not surprised. But we think Hitler might be, judging from his reaction when PolitiFact failed to win a Pulitzer back in 2014.

Friday, April 13, 2018

PolitiFact continues to botch the gender pay gap


We can depend on PolitiFact to perform lousy fact-checking on the gender wage gap.

PolitiFact veteran Louis Jacobson proved PolitiFact's consistent ineptitude with an April 13, 2018 fact check of Sen. Tina Smith (D-Minn.), Sen. Al Franken's replacement.

Sen. Smith claimed that women earn only 80 cents on the dollar for doing the same jobs as men. That's false, and PolitiFact rated it "Mostly False."


That 80-cents-on-the-dollar wage gap is calculated based on full-time work, irrespective of job type and irrespective of hours worked above the full-time threshold. The figure represents the median, not the average.
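To see how such a raw figure works, here's a minimal sketch with purely made-up earnings numbers (the real statistic comes from large government earnings surveys; these values are illustrative only). The point is that the raw gap compares medians across all full-time workers, with no adjustment for occupation:

```python
from statistics import median

# Purely hypothetical annual earnings for full-time workers
men_earnings = [42_000, 55_000, 61_000, 75_000, 120_000]
women_earnings = [38_000, 46_000, 52_000, 60_000, 95_000]

# The raw gap is the ratio of medians across ALL full-time jobs,
# with no adjustment for occupation or hours beyond full time
ratio = median(women_earnings) / median(men_earnings)
print(f"Raw gap: women earn {ratio * 100:.0f} cents on the dollar")
# → Raw gap: women earn 85 cents on the dollar
```

Nothing in that calculation says anything about two people doing the same job, which is exactly the distinction Smith's phrasing erased.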

But isn't "Mostly False" a Fair Rating for Smith?

Good question! PolitiFact noted that the figure Smith used did not take the type of job into account, and that Smith made a common mistake: People often fail to mention that the raw wage gap figure doesn't account for job type.

PolitiFact's Jacobson doesn't precisely spell out why PolitiFact finds a germ of truth in Smith's statement. Presumably PolitiFact's reasoning matches that of its earlier ratings where it noted that the wage gap statistic is accurate except for the part about it applying to equal work. So it's true except for the part that makes it false, therefore "Mostly False" instead of "False."

Looking at it objectively, however, it's just plain false that women earn 80 cents on the dollar for doing the same work. Researchers speak of an "unexplained gap" that remains after taking various factors into account, and the ceiling for gender discrimination appears to fall around 5 percent to 7 percent.

Charitably using the 7 percent figure as the ceiling for gender-based wage discrimination, Smith exaggerated the gap by 186 percent. It's likely the exaggeration was far greater than that.

For comparison, when Bernie Sanders said 40 percent of U.S. gun sales occur without background checks, PolitiFact gave him a "False" rating for exaggerating the right figure by 90 percent.
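The exaggeration arithmetic behind those comparisons is straightforward. A quick sketch (the 20-point claim and 7-point ceiling come from the discussion above; treating roughly 21 percent as the correct gun-sales figure is our assumption about the number behind PolitiFact's 90 percent):

```python
def exaggeration_pct(claimed, actual):
    """Percent by which a claimed figure overstates the actual figure."""
    return (claimed - actual) / actual * 100

# Smith: a claimed 20-point gap vs. a ~7-point discrimination ceiling
print(round(exaggeration_pct(20, 7)))   # → 186

# Sanders: claimed 40 percent vs. roughly 21 percent (assumed figure)
print(round(exaggeration_pct(40, 21)))  # → 90
```

By that measure Smith's exaggeration was roughly double Sanders', yet Smith drew the gentler rating.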

The Ongoing Democratic Deception PolitiFact Overlooks

If a Democrat describes the raw 80-cents-on-the-dollar pay gap accurately, why not give it a "True" rating? Or at least "Mostly True"?

Democrats tend to trot out the raw gender pay gap statistic while proposing legislation that supposedly addresses gender discrimination. By repeatedly associating the raw wage gap with the issue of wage discrimination, Democrats send the implicit message that the raw wage gap describes gender discrimination. The tactic exploits anchoring bias to mislead the audience about the size of the pay gap that stems from gender discrimination.

Democrats habitually use "Equal Pay Day," based on the raw wage gap, to argue for equal pay for equal work. But the raw wage gap doesn't take the type of job into account.

Trust PolitiFact not to notice the deception.

Fact checkers ought to assist in making Democrats clarify their position. Are Democrats in favor of equal pay regardless of the job or hours worked? Or do Democrats believe the demands for equal pay apply only to matters of gender discrimination?

If the latter, Democrats' continued use of the raw wage gap to peg the date of its "Equal Pay Day" counts as a blatant deception.

If the former, voters deserve to know what Democrats stand for.

Afters


It amused us that Jacobson directly referenced an earlier PolitiFact Florida botched treatment of the gender pay gap.

PolitiFact Florida couldn't figure out that claiming the gap occurs "simply because she isn't a man" is equivalent to claiming the raw gap is for men and women doing the same work. Think about it. If the gap occurs "simply because she isn't a man" then the reason for the disparity cannot be because she is doing different work. Doing different work would be a factor in addition to her not being a man.

PolitiFact Florida hilariously rated that claim "Mostly True." We wrote about it on March 14, 2017.

Fact checkers. D'oh.

Thursday, April 12, 2018

Not a Lot of Reader Confusion X: "I admit that there are flaws in this ..."

So hold my beer, Nelly.

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

Quite a few folks apparently have no clue at all that PolitiFact's charts and graphs lack anything like a scientific basis. Others know that something isn't right about the charts and graphs but against all reason find some value in them anyway.

PolitiFact itself would fall in the latter camp, based on the way it uses its charts and graphs.

So would Luciano Gonzalez, writing at Patheos. Gonzalez listened to Speaker of the House Paul Ryan's speech announcing his impending retirement and started wondering about Ryan's record of honesty.

PolitiFact's charts and graphs don't tell people about the honesty of politicians because of many flaky layers of selection bias, but people can't seem to help themselves (bold emphasis added):
I decided after hearing his speech at his press conference to independently check if this House Speaker has made more honest claims than his predecessors. To did this I went to Politifact and read the records of Nancy Pelosi (House Speaker from January 4th 2007-January 3rd 2011), John Boehner (House Speaker from January 5th 2011-October 29th, 2015), and of course of the current House Speaker Paul Ryan (October 29th 2015 until January 2019). I admit that there are flaws in this, such as the fact that not every political claim a politician makes is examined (or even capable of being examined) by Politifact and of course the inherent problems in giving political claims “true”, “mostly”, “half-true”, “mostly false”, “false”, & “pants on fire” ratings but it’s better than not examining political claims and a candidate’s level of honesty or awareness of reality at all.
If we can't have science, Gonzalez appears to say, pseudoscience is better than nothing at all.

Gonzalez proceeds to crunch the meaningless numbers, which "support" the premise of his column that Ryan isn't really so honest.

That accounts for the great bulk of Gonzalez's column.

Let's be clear: PolitiFact encourages this type of irresponsible behavior by publishing its nonsense graphs without the disclaimers that spell out for people that the graphs cannot be reasonably used to gauge people's honesty.

PolitiFact encourages exactly the type of behavior that fact checkers ought to discourage.