Tuesday, June 13, 2017

PolitiFact: Gays (and lesbians!) most frequent victims of hate crimes

Isn't it clear that PolitiFact's behavior is most likely the result of liberal bias?

PolitiFact Bias co-editor Jeff D. caught PolitiFact pimping a flubbed fact check on Twitter, attaching it to the anniversary of the Orlando gay nightclub shooting.
The problem? It's not true.

As we pointed out when PolitiFact first ran its fact check, there's a big difference between claiming a group is the most frequent target of hate crimes and claiming a group is at the greatest risk (on a per-person basis) of hate crimes.

Blacks as a group experience the most targeted hate crimes (about 30 percent of the total), according to the imperfect FBI data. That makes blacks the most frequent targets of hate crimes, not gays and lesbians.

Perhaps LGBT people as a group experience a greater individual risk of falling victim to a hate crime, but we do not trust the research on which PolitiFact based its ruling. We doubt the researchers properly accounted for the bias against various small groups, such as Sikhs. Don't take our word for it. Use the hyperlinks.

There is reason to suspect the research was politicized. We recommend not drawing any conclusion until the question is adequately researched.

What would we do without fact checkers?


Clarification June 13, 2017: Added "(on a per-person basis)" to accentuate the intended distinction. Also changed "greater risk" to "greater individual risk" for the same reason.

Sunday, June 11, 2017

PolitiFact New York: Facts be damned, what we think the Democrat was trying to say was true

Liberals like to consider the tendency of fact checkers to rate conservatives more harshly than liberals fairly solid evidence that Republicans lie more. After all, as we are often reminded, "truth has a liberal bias." But the way fact checkers pick which stories to tell and which facts to check has a fundamental impact on how their ratings break down by political party.

Take a June 9, 2017 fact check from PolitiFact New York, for example.

Image from PolitiFact.com

Lt. Gov. Kathy Hochul (D) of New York proclaimed that the state of New York has achieved pay equity.

Hochul also said women are paid 90 cents on the dollar compared to men.

Hochul's first claim seems flatly false, if "pay equity" means women getting paid $1 for every $1 earned by a man.

Her second claim, putting an accurate number on the raw gender wage gap, typically draws either a "Half True" or "Mostly True" rating from PolitiFact. PolitiFact tends to overlook the fact that the statistic is almost invariably used in the context of gender discrimination (see the "Afters" section below).

As it happens, the PolitiFact New York fact check focuses exclusively on the second claim and takes a pass on evaluating the first. PolitiFact New York justified its rating by saying Hochul's point was on target (bold emphasis added):
Hochul's numbers are slightly off. The data reveals a gender pay gap, but her point that New York state has a significantly smaller gap compared with the national average is correct. We rate her claim Mostly True.
At PolitiFact Bias, we class these cases under the tag "tweezers or tongs." PolitiFact might focus on one part of a claim, or focus on what it has interpreted as the point the politician was trying to make. Or, PolitiFact might look at multiple parts of a claim and produce a rating of the claim's truthiness "on balance."

PolitiFact in this case appears to use tweezers to remove "We have pay equity" from consideration. That saves the Democrat, Hochul, from an ugly blemish on her PolitiFact report card.

The fact checkers have at least one widely recognized bias: They tend to look for problematic statements. When a fact checker ignores a likely problem statement like "We have pay equity" in favor of something more mundane in the same immediate context, it suggests a different bias affected the decision.

The beneficiary of PolitiFact's adjusted focus once again: a Democrat.

When this happens over and over again, as it does, that pattern by itself calls into question whether PolitiFact's candidate "report cards" or comparisons of "Truth-O-Meter" ratings by party carry any scientific validity at all.

Afters

Did Hochul make her gender wage gap claim in the context of gender pay discrimination?

Our best clue on that issue comes from Hochul's statement, just outside the context quoted by PolitiFact New York, that "Now, it's got to get to 100 [cents on the dollar]."
We draw from that part of her statement that Hochul was very probably pushing the typical Democratic talking point that the raw wage gap results from gender discrimination, which is false. On any other interpretation, it is hard to see why pay equity would matter regardless of the jobs men and women do. We doubt that gender pay equity regardless of the job performed would prove popular, even in the state of New York.

The acid test:
Will women's groups react with horror if women achieve an advantage in terms of the raw wage gap? When men make only 83 cents on the dollar compared to women? Or will they assure us that the differences in pay are okay because they result from the job choices people make? We'll find out in time.

Thursday, June 8, 2017

Incompetent PolitiFact engages in Facebook mission creep, false reporting

The liberal bloggers/mainstream fact checkers at PolitiFact are expanding their "fake news" police mission at Facebook. While they're at it, they're publishing misleading reports.

Facebook Mission Creep

Remember the pushback when Facebook announced that fact checkers would help it flag "fake news?" PolitiFact Editor Angie Drobnic Holan made the rounds to offer reassurance:
[STELTER:] Angie, there has been a lot of blowback already to this Facebook experiment. Some on the right are very skeptical, even mocking this. Why is it a worthwhile idea? Why are you helping Facebook try to fact-check these fake stories?

HOLAN: Go to Facebook, and they are going about their day looking to connect with friends and family. And then they see these headlines that are super dramatic and they wonder if they're right or not. And when they're wrong, sometimes they are really wrong. They're entirely made up.

It is not trying to censor anything. It is just trying to flag these reports that are fabricated out of thin air.
Fact-check journalists spent their energy insisting that "fake news" meant made-up "news" items produced purely to mislead people.

Welcome to PolitiFact's version of Facebook mission creep. Sarah Palin posted a meme criticizing the Paris climate accord. The meme showed a picture of Florida legislators celebrating, communicating the attitude of those who support the Paris climate agreement:


The meme does not try to capture the appearance of a regular news story. It is primarily offering commentary, not communicating the idea that Florida legislators supported the Paris climate agreement. As such, it simply does not fit the definition of "fake news" that PolitiFact has insisted would guide the policing effort on Facebook.

Yet PolitiFact posted this in its fact check of Palin:
PolitiFact fact-checked Palin’s photo as part of our effort to debunk fake news on Facebook.
Fail. It's as though PolitiFact expects meme-makers to subscribe to the same set of principles for using images that binds professional journalists (oops):

Maybe PolitiFact should flag itself as "fake news"?


Communicating Fact Checks Using Half Truths

Over and over we point out that PolitiFact uses the same varieties of deception that politicians use to sway voters. This fact check of Palin gives us yet another outstanding example.  What did Palin do wrong, in PolitiFact's eyes?
Says an Internet meme shows people rejoicing over the Paris agreement
PolitiFact provided no good evidence Palin said any such thing. The truth is that Palin posted an Internet meme (we don't know who created it) that used an image that did not match the story.

PolitiFact has posted images that do not match its reporting. We provided an example above, from a PolitiFact video about President Clinton's role in signing the North American Free Trade Agreement.

If we reported "PolitiFact said George W. and Jeb Bush Negotiated NAFTA," we would be giving a misleading report at best. At worst we'd be flatly lying. We apply the same standard to PolitiFact that we would apply to ourselves.


Afters

We sent a message to the writer and editor at PolitiFact Florida responsible for this fact check. We sent it before starting on the text of our post, but we're not waiting for a response from PolitiFact because PolitiFact usually fails to respond to substantive criticism. If we receive any substantive reply from PolitiFact, we will append it to this post and amend the title of the post to reflect the presence of an update (no, we won't hold our breath).

Dear Amy Sherman, Katie Sanders,

Your fact check of Sarah Palin's Paris climate accord meme is disgraceful for two big reasons.

First, you describe the fact check as part of the Facebook effort to combat "fake news." After laboring to insist to everyone that "fake news" is an intentionally false news item intended to mislead people, you look to have moved toward Donald Trump's definition of "fake news." The use of a photograph that does not match the story is bad and unethical practice in professional journalism. But it's pretty common in the production of political memes. Do you really want to expand your definition of "fake news" like that, after trying to reassure people that the Facebook initiative was not about limiting political expression? Would you want your PolitiFact video identifying George W. Bush/Jeb Bush as George H. W. Bush classified as "fake news" based on your use of an unrelated photograph?

Second, your fact check flatly states that Palin identified the Florida lawmakers as celebrants of the Paris climate accord. But that obviously is not the case. The fact check notes, in fact, that the post does not identify the people in the photo. All the meme does is make it easy to jump to the conclusion that the people in the photo were celebrating the Paris agreement. As such, it's a loose implication. But your fact check states the misdirection is explicit:
Palin posted a viral image that purportedly shows a group of people clapping as a result of the Paris agreement, presumably about the billions they will earn.
Purported by whom? It's implied, not stated.

Do you seriously think the purpose of the post was to convey to the audience that Florida legislators were either responsible for the Paris agreement or celebrating it? That would truly be fake news as PolitiFact has tried to define it. But that's not what this meme does, is it?

You're telling the type of half-truth you claim to expose.

Stop it.

Edit 6/9/2017: Added link to CNN interview in second graph. -Jeff

Wednesday, June 7, 2017

PolitiLies at PolitiFact Wisconsin II (Updated: PolitiFact amends)

In part one of "PolitiLies at PolitiFact Wisconsin," we shared our experience questioning PolitiFact's reporting from a fact check of U.S. Rep. Glenn Grothman (R-Wis.).

In part two, we will look at PolitiFact Wisconsin's response to having a clear error pointed out in one of its stories.

On May 11, 2017, PolitiFact Wisconsin published a "Pants on Fire" rating of U.S. Rep. Paul Ryan's claim that "Air Force pilots were going to museums to find spare parts over the last eight years."

PolitiFact issued the "Pants on Fire" ruling despite a Fox News report which featured an Air Force captain, identified by name, who said the Air Force had on seven occasions obtained parts for B-1 bombers from museums.

PolitiFact Wisconsin objected to the thin evidence, apparently including the failure of the report to identify any of the museums that allegedly served as parts repositories (bold emphasis added):
The only example Ryan’s office cited was a May 2016 Fox News article in which an Air Force captain said spare parts needed for a B-1 bomber at a base in South Dakota were taken from seven "museum aircraft" from around the country. The museums weren’t identified and no other details were provided.
Yet when we attempted to verify PolitiFact Wisconsin's reporting, we found the text version of the story said Capt. Travis Lytton (no other details were provided?) showed the Fox reporters a museum aircraft from which a part was stripped. Lytton also described the function of the part in the story (no other details were provided?).

The accompanying video showed a B-1 bomber situated next to the name of the museum: South Dakota Air and Space Museum.

If one of the seven museums was not the South Dakota Air and Space Museum, then the Fox News video was highly misleading. The viewer would conclude the South Dakota Air and Space Museum was one of the seven museums.

How did PolitiFact Wisconsin miss this information? And why, when Lytton was plainly identified in the Fox News report, did PolitiFact Wisconsin not try to contact Lytton to find out the names of the other museums?

"Readers who see an error should contact the writer or editor"


We like to contact the writer and the editor when we see an error.

In this case, we contacted writer Tom Kertscher and editor Greg Borowski (May 31, 2017):
Dear Tom Kertscher, Greg Borowski,
Your rating of Speaker Ryan's claim about the Air Force pulling parts from museum planes falsely claims that none of the seven museums were identified.

Yet the Fox News report said the Air Force officer showed reporters the museum plane from which a part was taken. And if you bothered to watch the video associated with the story, the name of the museum appears very plainly in front of the B-1 bomber the officer identified.

http://www.sdairandspacemuseum.com/

And if the names of the museums were a point worth mentioning, then why not contact the officer (identified by name in the Fox News report) and ask him? If he identified one of the museums, would he not identify the others?
After nearly a week, we have received no reply to our message and the PolitiFact Wisconsin fact check still features the same false information about the Fox News report.

Why?

Integrity?


Update June 10, 2017: In June 2017 we received a message from PolitiFact Wisconsin editor Greg Borowski. Borowski said he had not received our email message (we do not know if writer Tom Kertscher, to whom it was also sent, had the same experience). Borowski said after finding out about the criticism PolitiFact Wisconsin "added a note to the item."

PolitiFact Wisconsin removed two false statements from its fact check, one stating that the Fox News report identified none of the museums from which airplane parts were taken, and one stating that the report featured no other details beyond those mentioned in the fact check.

This editor's note was added at the end of the fact check:
Editor's note: This item was updated on June 9, 2017 to say that the Fox News report did identify one museum. That information does not change the rating.
As with the other correction we helped prompt this week, we are impressed by PolitiFact Wisconsin's ability to commit an error and then fix the error without admitting any mistake. The editor's note says the fact check was changed "to say the Fox News report did identify one museum." Why was that change made? The editor's note doesn't say. The truth is the change was made because PolitiFact Wisconsin made a mistake.

It's appropriate for journalists to admit to making mistakes when they make them.  We do not care for the spin we see in PolitiFact Wisconsin's update notices.

Are we being too tough on PolitiFact Wisconsin? We think noted journalist Craig Silverman would agree with us.
Rather than destroying trust, corrections are a powerful tool to reinforce how accountable and transparent we are.

“If you’re willing to admit you’re wrong, people will trust you more,” said Mathew Ingram of Gigaom. “If I said to someone ‘You know, I’m never wrong’ they would think I was a psychopath or a liar, so they would trust me less. That’s versus if I said ‘I screw up all the time.’ They trust you more because you’re more human.”

That’s the paradox of trust: admitting our mistakes and failings make us more deserving of trust.


Correction June 14, 2017: Commenter Vinni BoD noticed our update was dated Sept. 2017. The month was actually June, which was the correct month in two spots where we (inexplicably) had "Sept." instead.

PolitiLies at PolitiFact Wisconsin I (Updated: PolitiFact amends)

Back on May 15, 2017 we noticed a suspicious factoid in PolitiFact Wisconsin's fact check of congressman Glenn Grothman (R-Wis.) (bold emphasis added):
Grothman’s quick response: "Planned Parenthood is the biggest abortion provider in the country."

He added that the group is an outspoken advocate for what he termed "controversial" services such as birth control.
The notion that birth control services count as controversial looked suspiciously like the result of a liberal press filter. Curious whether the context of Grothman's statement supported PolitiFact Wisconsin's telling, we had a look at the context (17:55 through 20:55).

The crosstalk made it a bit hard for us to follow the conversation, but a partial transcript from an article by Jen Hayden at the left-leaning Daily Kos seemed reasonably accurate to us. Note the site also features a trimmed video of the same exchange.

It looked to us as though Grothman mentioned the "controversial programs" without naming them, instead moving on to talk about why his constituents can do without Planned Parenthood's role in providing contraceptive services. Just before Grothman started talking about alternatives to Planned Parenthood's contraceptive services, an audience member called out asking Grothman for examples of the "controversial programs." That question may have led to an assumption that Grothman was naming contraceptive services as an example of "controversial programs."

In short, we could not see any solid justification for PolitiFact Wisconsin's reporting. So we emailed PolitiFact Wisconsin (writer Dave Umhoefer and editor Greg Borowski) to ask whether its evidence was better than it appeared:
Upon reading your recent fact check of Republican Glenn Grothman, I was curious about the line claiming Grothman called birth control a "controversial" service.

He added that the group is an outspoken advocate for what he termed "controversial" services such as birth control.

I watched the video and had trouble hearing the audio (I've found transcripts that seem pretty much correct, however). It sounded like Grothman mentioned Planned Parenthood's support for some controversial services, then went on to talk about the ease with which people might obtain birth control. Was there some particular part of the event that you might transcribe in clear support of your summary?

From what I can tell, the context does not support your account. If people can easily obtain birth control without Planned Parenthood's help, how would that make the service "controversial"? It would make the service less necessary, not controversial, right?

I urge you to either make clear the portion of the event that supports your interpretation, or else alter the interpretation to square with the facts of the event. By that I mean not guessing what Grothman meant when he referred to "controversial programs." If Grothman did not make clear what he was talking about, your account should not suggest otherwise.

If you asked Grothman what he was talking about and he made clear he believes birth control is a controversial service, likewise make that clear to your readers.
The replies we received offered no evidence in support of PolitiFact Wisconsin's reporting. In fact, the reply we received on May 18 from Borowski suggested that Umhoefer had (belatedly?) reached out to Grothman's office for clarification:
Dave has reached out to Grothman's office. So, you;ll [sic] have to be patient.
By June 4, 2017 we had yet to receive any further message with evidence backing the claim from the article. We sent a reminder message that day that has likewise failed to draw a reply.

[Update June 8, 2017: PolitiFact Wisconsin editor Greg Borowski alerted us that the fact check of Grothman was updated. We have reproduced the PolitiFact Wisconsin "Editor's note" at the end of this post]

What does it mean?

It looks like PolitiFact Wisconsin did careless reporting on the Grothman story. The story very likely misrepresented Grothman's view of the "controversial programs" he spoke about.

Grothman's government website offers a more reliable account of what Grothman views as Planned Parenthood's "controversial" programs.

It appears PolitiFact Wisconsin is aware it published something as fact without adequate backing information, and intends to keep its flawed article as-is so long as it anticipates no significant consequences will follow.

Integrity.


Afters

Also see PolitiLies at PolitiFact Wisconsin II, published the same day as this part.

Update June 8, 2017: PolitiFact removed "such as birth control" from its summary of Grothman's statement about "controversial services."  PolitiFact Wisconsin appended the following editor's note to the story:
(Editor's note, June 7, 2017: An earlier version of this item quoted Grothman as saying that Planned Parenthood is an outspoken advocate for "controversial" services such as birth control. A spokesperson for his office said on June 7, 2017 that the video, in which Grothman's voice is hard to hear at times, may have led people to that conclusion, but that Grothman does not believe birth control is a controversial service. The birth control quote had no bearing on the congressman’s statement about Planned Parenthood and its role in abortions, so the rating of True is unchanged.)
We are impressed by PolitiFact Wisconsin's ability to run a correction while offering the appearance that it committed no error. Saying the original item "quoted Grothman" gives the reader the impression that Grothman must have misspoken. But benevolent PolitiFact Wisconsin covered for Grothman's mistake after his office clarified what he meant to say.

It's really not a model of transparency, and offers Grothman no apology for misrepresenting his views.

We stick with our assessment that PolitiFact Wisconsin reported carelessly. And we suggest that PolitiFact Wisconsin's error was the type of error that occurs when journalists think they know how conservatives think when in reality the journalists do not know how conservatives think (ideological bias).

On the bright side, the portion of the fact check that we criticized now reads as it should have read from the start. We credit PolitiFact Wisconsin for making that change. That fixes the main issue, for there's nothing wrong with having a bias if it doesn't show up in the reporting.

Of secondary importance, we judge the editor's note was subtly misleading and lacking in transparency.

We also note with sadness that the changes to PolitiFact Wisconsin's story do not count as either corrections or updates. We know this because PolitiFact Wisconsin added no "corrections and updates" tag to the story. Adding that tag would make a fact check appear on PolitiFact's page of stories that have been corrected or updated.



Correction June 9, 2017: Removed a redundant "because" from the final paragraph of the update.

Friday, June 2, 2017

An objective deception: "neutral" PolitiFact

PolitiFact's central deception follows from its presentation of itself as a "nonpartisan" and neutral judge of facts.

A neutral fact checker would apply the same neutral standards to every fact check. Naturally, PolitiFact claims it does just that. But that claim should not convince anyone given the profound level of inconsistency PolitiFact has achieved over the years.

To illustrate PolitiFact's inconsistency we'll use an example from 2014 via PolitiFact Rhode Island that we just ran across.

Rhode Island's Senator Sheldon Whitehouse said jobs in the solar industry outnumbered jobs in coal mining. PolitiFact used data from the Solar Foundation to help evaluate the claim, and included this explanation from the Solar Foundation's Executive Director Andrea Luecke:
Luecke said by the census report’s measure, "the solar industry is outpacing coal mining." But she noted, "You have to understand that coal-mining is one aspect of the coal industry - whereas we’re talking about the whole solar industry."

If you add in other coal industry categories, "it’s more than solar, for sure. But the coal-mining bucket is less, for sure."
Luecke correctly explained that comparing the numbers from the Solar Foundation's job census to "coal mining" jobs represented an apples-to-oranges comparison.

PolitiFact Rhode Island did not take the rigged comparison into account in rating Whitehouse's claim. PolitiFact awarded Whitehouse a "True" rating, defined as "The statement is accurate and there’s nothing significant missing." We infer from the rating that PolitiFact Rhode Island regarded the apples-to-oranges comparison as insignificant.

However, when Mitt Romney in 2012 made substantially accurate claims about Navy ships and Air Force planes, PolitiFact based its rating on the apples-to-oranges angle:
This is a great example of a politician using more or less accurate statistics to make a meaningless claim. Judging by the numbers alone, Romney was close to accurate.

...

Thanks to the development of everything from nuclear weapons to drones, comparing today’s military to that of 60 to 100 years ago presents an egregious comparison of apples and oranges.
PolitiFact awarded Romney's claim its lowest-possible "Truth-O-Meter" rating, "Pants on Fire."

If Romney's claim was "meaningless" thanks to advances in military technology, is it not reasonable to regard Whitehouse's claim as similarly meaningless? PolitiFact Rhode Island didn't even mention government subsidies of the solar energy sector, nor did it try to identify Whitehouse's underlying argument--probably something along the lines of "Focusing on renewable energy sources like solar energy, not on fossil fuels, will help grow jobs and the economy!"

Comparing mining jobs to jobs for the whole solar energy sector offers no reasonable benchmark for comparing the coal energy sector as a whole to the solar energy sector as a whole.

Regardless of whether PolitiFact's people think they are neutral, their work argues the opposite. They do not apply their principles consistently.

Wednesday, May 31, 2017

What does the "Partisan Selective Sharing" study say about PolitiFact?

A recent study called "Partisan Selective Sharing" (hereafter PSS) noted that Twitter users were more likely to share fact checks that aided their own side of the political aisle.

Duh?

On the other hand, the paper came up in a search we did of scholarly works mentioning "PolitiFact."

The search preview mentioned the University of Minnesota's Eric Ostermeier. So we couldn't resist taking a peek to see how the paper handled the data hinting at PolitiFact's selection bias problem.

The mention of Ostermeier's work was effectively neutral, we're happy to say. And the paper had some surprising value to it.

PSS coded tweets from the "elite three" fact checkers, FactCheck.org, PolitiFact and the Washington Post Fact Checker, classifying them as neutral, beneficial to Republicans or beneficial to Democrats.

In our opinion, that's where the study proved briefly interesting:
Preliminary analysis
Fact-checking tweets
42.3% of the 194 fact-check (n=82) tweets posted by the three accounts in October 2012 contained rulings that were advantageous to the Democratic Party (i.e., either positive to Obama or negative to Romney), while 23.7% of them (n=46) were advantageous to the Republican Party (i.e., either positive to Romney or negative to Obama). The remaining 34% (n=66) were neutral, as their statements contained either a contextualized analysis or a neutral anchor.

In addition to the relative advantage of the fact checks, the valence of the fact-checking tweet toward each candidate was also analyzed. Of the 194 fact checks, 34.5% (n=67) were positive toward Obama, 46.9% (n=91) were neutral toward Obama, and 18.6% (n=36) were negative toward Obama. On the other hand, 14.9% (n=29) of the 194 fact checks contained positive valence toward Romney, 53.6% (n=104) were neutral toward Romney, and 31.4% (n=61) were negative valence toward Romney.
Of course, many have no problem interpreting results like these as a strong indication that Republicans lie more than Democrats. And we cheerfully admit the data show consistency with the assumption that Republicans lie more.

Still, if one has some interest in applying the methods of science, on what do we base the hypothesis that Republicans lie more? We cannot base that hypothesis on these data without ruling out the idea that fact-checking journalists lean to the left. And unfortunately for the "Republicans lie more" hypothesis, we have some pretty good data showing that American journalists tend to lean to the left.

Until we have some reasonable argument why left-leaning journalists do not allow their bias to affect their work, the results of studies like PSS give us more evidence that the media (and the mainstream media subset "fact checkers") lean left while they're working.

The "liberal bias" explanation has better evidence than the "Republicans lie more" hypothesis. As PolitiFact tweeted 126 of the total 194 fact check tweets, a healthy share of the blame likely falls on PolitiFact.


We wish the authors of the study, Jieun Shin and Kjerstin Thorson, had separated the three fact checkers in their results.

Wednesday, May 24, 2017

What if we lived in a world where PolitiFact applied to itself the standards it applies to others?

In that impossible world where PolitiFact applied its own standards to itself, PolitiFact would doubtless crack down on its own misleading headlines, like the following headline over a story by Lauren Carroll:


While the PolitiFact headline claims that the Trump budget cuts Medicaid, and the opening paragraph says Trump's budget "directly contradicts" President Trump's promise not to cut Medicaid, in short order Carroll's story reveals that the Medicaid budget goes up under the new Trump budget.

So it's a cut when the Medicaid budget goes up?

Such reasoning has precedent at PolitiFact. We noted in December 2016 that veteran PolitiFact fact-checker Louis Jacobson wrote that the most natural way to interpret "budget cut" was against the baseline of expected spending, not against the previous year's spending.

Jacobson's approach in December 2016 helped President Obama end up with a "Compromise" rating on his aim to cut $1 trillion to $1.5 trillion in spending. By PolitiFact's reckoning, the president cut $427 billion from the budget. PolitiFact obtained that figure by subtracting actual outlays from the estimates the Congressional Budget Office published in 2012 and using the cumulative total for the four years.
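A minimal sketch with hypothetical round numbers (our own illustration, not actual Medicaid or CBO figures) shows how the two definitions of a "cut" part ways:

    # Python sketch with hypothetical numbers, not actual Medicaid or CBO figures.
    last_year = 400    # billions spent last year
    baseline = 440     # billions projected under current law (the baseline)
    proposed = 420     # billions under the proposed budget

    versus_last_year = proposed - last_year  # +20 billion: spending goes up
    versus_baseline = proposed - baseline    # -20 billion: a "cut" against the baseline

    print(f"Change versus last year: {versus_last_year:+} billion")
    print(f"Change versus baseline:  {versus_baseline:+} billion")

On the year-over-year measure spending rises; against the baseline the very same budget counts as a cut. Which measure the fact checker picks decides the rating.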

Jacobson took a different tack back in 2014 when he faulted a Republican ad attacking the Affordable Care Act's adjustments to Medicare spending (which we noted in the earlier linked article):
First, while the ad implies that the law is slicing Medicare benefits, these are not cuts to current services. Rather, as Medicare spending continues to rise over the next 10 years, it will do so at a slower pace would [sic] have occurred without the law. So claims that Obama would "cut" Medicare need more explanation to be fully accurate.
We can easily rework Jacobson's paragraph to address Carroll's story:
First, while the headline implies that the proposed budget is slicing Medicaid benefits, these are not cuts to current services. Rather, as Medicaid spending continues to rise over the next 10 years, it will do so at a slower pace than would occur without the law. So claims that Trump would "cut" Medicaid need more explanation to be fully accurate.
PolitiFact is immune to the standard it applies to others.

We also note that a pledge not to cut a program's spending is not reasonably taken as a pledge not to slow the growth of spending for that program. Yet that unreasonable interpretation is the foundation of PolitiFact's "Trump-O-Meter" article.


Correction May 24, 2017: Changed the first incidence of "law" in our reworking of Jacobson's sentence to "proposed budget." It better fits the facts that way.
Update May 26, 2017: Added link to the PolitiFact story by Lauren Carroll

Friday, May 19, 2017

What "Checking How Fact Checkers Check" says about PolitiFact

A study by doctoral student Chloe Lim (Political Science) of Stanford University gained some attention this week, inspiring some unflattering headlines like this one from Vocativ: "Great, Even Fact Checkers Can’t Agree On What Is True."

Katie Eddy and Natasha Elsner explain inter-rater reliability

Lim's research approach somewhat resembled research by Michelle A. Amazeen of Rider University. Amazeen and Lim both used tests of coding consistency to assess the accuracy of fact checkers, but the two reached roughly opposite conclusions. Amazeen concluded that consistent results helped strengthen the inference that fact-checkers fact-check accurately. Lim concluded that inconsistent fact-checker ratings may undermine the public impact of fact-checking.

Key differences in the research procedure help explain why Amazeen and Lim reached differing conclusions.

Data Classification

Lim used two different methods for classifying data from PolitiFact and the Washington Post Fact Checker. She converted PolitiFact ratings to a five-point scale corresponding to the Washington Post Fact Checker's "Pinocchio" ratings, and she also divided the ratings into "True" and "False" groups, with the line between "Mostly False" and "Half True" serving as the boundary between true and false statements.

Amazeen opted for a different approach. Amazeen did not try to reconcile the two different rating systems at PolitiFact and the Fact Checker, electing to use a binary system that counted every statement rated other than "True" or "Geppetto check mark" as false.

Amazeen's method essentially guaranteed high inter-rater reliability, because "True" judgments from the fact checkers are rare. Imagine comparing movie reviewers who use a five-point scale, but with their data divided into great movies and not-great movies. A one-star rating of "Ishtar" by one reviewer would show agreement with a four-star rating of the same movie by another reviewer. Disagreement occurs only when one reviewer gives five stars while the other gives a lower rating.
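A minimal sketch (with made-up ratings, not data from either study) shows how the binary collapse inflates measured agreement:

    # Python sketch with made-up ratings, not data from either study.
    # Two reviewers rate the same ten movies on a five-point scale.
    reviewer_a = [1, 2, 2, 3, 1, 4, 2, 3, 1, 5]
    reviewer_b = [2, 3, 1, 4, 2, 3, 4, 2, 3, 5]
    pairs = list(zip(reviewer_a, reviewer_b))

    # Exact agreement on the five-point scale: 1 match in 10 (10%).
    five_point = sum(a == b for a, b in pairs) / len(pairs)

    # Amazeen-style collapse: only the top rating counts as "great,"
    # so a one-star and a four-star rating "agree." Here: 100%.
    binary = sum((a == 5) == (b == 5) for a, b in pairs) / len(pairs)

    print(f"Five-point agreement: {five_point:.0%}")
    print(f"Binary agreement: {binary:.0%}")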

Professor Joseph Uscinski's reply to Amazeen's research, published in Critical Review, put it succinctly:
Amazeen’s analysis sets the bar for agreement so low that it cannot be taken seriously.
Amazeen found high agreement among fact checkers because her method guaranteed that outcome.

Lim's methods provide for more varied and robust data sets, though Lim ran into the same problem Amazeen found: two different fact-checking organizations only rarely check the same claims. Both researchers worked with relatively small data sets.

The meaning of Lim's study

In our view, Lim's study rushes to its conclusion that fact-checkers disagree without giving proper attention to the most obvious explanation for the disagreement she measures.

The rating systems the fact checkers use lend themselves to subjective evaluations. We should expect that condition to lead to inconsistent ratings. When I reviewed Amazeen's method at Zebra Fact Check, I criticized it for applying inter-coder reliability standards to a process much less rigorous than social science coding.

Klaus Krippendorff, creator of the K-alpha measure Amazeen used in her research, explained the importance of giving coders good instructions to follow:
The key to reliable content analyses is reproducible coding instructions. All phenomena afford multiple interpretations. Texts typically support alternative interpretations or readings. Content analysts, however, tend to be interested in only a few, not all. When several coders are employed in generating comparable data, especially large volumes and/or over some time, they need to focus their attention on what is to be studied. Coding instructions are intended to do just this. They must delineate the phenomena of interest and define the recording units to be described in analyzable terms, a common data language, the categories relevant to the research project, and their organization into a system of separate variables.
The rating systems of PolitiFact and the Washington Post Fact Checker are gimmicks, not coding instructions. The definitions mean next to nothing, and PolitiFact's creator, Bill Adair, has called PolitiFact's determination of Truth-O-Meter ratings "entirely subjective."
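For readers curious what the measurement looks like in practice, here is a minimal sketch using the third-party Python package "krippendorff" (our choice for illustration; neither study specifies its tooling), again with made-up ratings:

    # Python sketch using the third-party "krippendorff" package
    # (pip install krippendorff); the ratings are made up for illustration.
    import numpy as np
    import krippendorff

    # Rows are coders, columns are claims; np.nan marks a claim a coder skipped.
    # Values 1-5 mimic a five-point truth scale.
    ratings = np.array([
        [1, 2, 3, 3, 5, np.nan, 4],
        [1, 2, 4, 3, 5, 2, np.nan],
    ])

    # Treat the scale as ordinal: "Mostly True" sits between "Half True" and "True."
    alpha = krippendorff.alpha(reliability_data=ratings,
                               level_of_measurement="ordinal")
    print(f"Krippendorff's alpha: {alpha:.2f}")

Values close to 1 indicate reliable coding; Krippendorff himself treats roughly 0.8 as the customary floor for data worth relying on.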

Lim's conclusion is right. The fact checkers are inconsistent. But Lim's use of coder reliability ratings is, in our view, a little like using a plumb line to measure whether a building has collapsed in an earthquake. The tool is too sophisticated for the job. The "Truth-O-Meter" and "Pinocchio" rating systems, as described and used by the fact checkers, do not qualify as adequate sets of coding instructions.

We've belabored the point about PolitiFact's rating system for years. It's a gimmick that tends to mislead people. And the fact-checking organizations that do not use a rating system avoid it for precisely that reason.

Lucas Graves' history of the modern fact-checking movement, "Deciding What's True: The Rise of Political Fact-Checking in American Journalism," (Page 41) offers an example of the dispute:
The tradeoffs of rating systems became a central theme of the 2014 Global Summit of fact-checkers. Reprising a debate from an earlier journalism conference, Bill Adair staged a "steel-cage death match" with the director of Full Fact, a London-based fact-checking outlet that abandoned its own five-point rating scheme (indicated by a magnifying lens) as lacking precision and rigor. Will Moy explained that Full Fact decided to forgo "higher attention" in favor of "long-term reputation," adding that "a dodgy rating system--and I'm afraid they are inherently dodgy--doesn't help us with that."
Coding instructions should provide coders with clear guidelines preventing most or all debate in deciding between two rating categories.

Lim's study in its present form does its best work in raising questions about the fact checkers' use of rating systems.

Sunday, May 14, 2017

PolitiFact and robots.txt (updated)

We were surprised earlier this week when our attempt to archive a PolitiFact fact check at the Internet Archive failed.

Saving a page to the Internet Archive has long served as a standard method of keeping a record of changes at a website. PolitiFact Bias has often used the Internet Archive to document PolitiFact's mischief.

Webmasters have the option of instructing search engines and other crawlers to skip indexing content at a website through a "robots.txt" file. Historically, the Internet Archive has respected the presence of a robots.txt prohibition.

PolitiFact apparently started using a restrictive robots.txt recently. As a result, it's likely that none of the archived PolitiFact.com links will work for a time, either at PolitiFact Bias or elsewhere.

The good news in all of this? The Internet Archive is likely to start ignoring the robots.txt instruction in the very near future. Once that happens, PolitiFact's sketchy Web history will return from the shadows back into the light.

PolitiFact may have had a legitimate reason for the change, but our extension of the benefit of the doubt comes with a big caveat: The PolitiFact webmaster could have created an exception for the Internet Archive in its robots.txt instruction. That oversight creates an embarrassment for PolitiFact, at minimum.
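The fix would have been short work. A robots.txt along these lines (a hypothetical sketch, not PolitiFact's actual file) blocks crawlers in general while exempting the Internet Archive's crawler, which has historically identified itself as "ia_archiver":

    # Hypothetical robots.txt sketch, not PolitiFact's actual file.
    # Let the Internet Archive's crawler ("ia_archiver") index everything
    # (an empty Disallow line permits the whole site) ...
    User-agent: ia_archiver
    Disallow:

    # ... while barring all other crawlers from the entire site.
    User-agent: *
    Disallow: /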


Update May 18, 2017:

This week the Internet Archive Wayback Machine once again functioned properly in saving Web pages at PolitiFact.com. Links at PolitiFactBias.com to archived pages likewise function properly.

We do not know at this point whether PolitiFact created an exception for the Internet Archive (and others), or whether the Internet Archive has already started ignoring robots.txt. PolitiFact has made no announcement regarding any change, so far as we can determine.

Friday, April 7, 2017

PolitiFact fixes fact check on Syrian chemical weapons

When news reports recently suggested the Syrian government had used chemical weapons, PolitiFact had a problem. As noted by the Daily Caller, among others, PolitiFact said in 2014 it was "Mostly True" that 100 percent of Syrian chemical weapons were removed from that country.

If the Syrian government used chemical weapons, where did it get them? Was it a fresh batch produced after the Obama administration forged an agreement with Russia (seriously) to effect removal of the weapons?

Nobody really knows, just like nobody truly knew the weapons were gone when PolitiFact ruled it "Mostly True" that the weapons were "100 percent gone." (screen capture via the Internet Archive)


With public attention drawn to its questionable ruling by the April 5, 2017 Daily Caller story, PolitiFact archived its original fact check and redirected the old URL to a new (also April 5, 2017) PolitiFact article: "Revisiting the Obama track record on Syria’s chemical weapons."

At least PolitiFact didn't make its old ruling simply vanish, but has PolitiFact acted in keeping with its commitment to the International Fact-Checking Network's statement of principles?
A COMMITMENT TO OPEN AND HONEST CORRECTIONS
We publish our corrections policy and follow it scrupulously. We correct clearly and transparently in line with our corrections policy, seeking so far as possible to ensure that readers see the corrected version.
And what is PolitiFact's clear and transparent corrections policy? According to "The Principles of PolitiFact, PunditFact and the Truth-O-Meter" (bold emphasis added):

When we find we've made a mistake, we correct the mistake.

  • In the case of a factual error, an editor's note will be added and labeled "CORRECTION" explaining how the article has been changed.
  • In the case of clarifications or updates, an editor's note will be added and labeled "UPDATE" explaining how the article has been changed.
  • If the mistake is significant, we will reconvene the three-editor panel. If there is a new ruling, we will rewrite the item and put the correction at the top indicating how it's been changed.
Is the new article an update? In at least some sense it is. PolitiFact removed and archived the fact check thanks to questions about its accuracy. And the last sentence in the replacement article calls the article an "update":
In the days and weeks to come, we will learn more about the recent attacks, but in the interest of providing clear information, we have replaced the original fact-check with this update.
If the new article counts as an update, we think it ought to wear the "update" tag that would make it appear on PolitiFact's "Corrections and Updates" page, where it has yet to appear (archived version).

And we found no evidence that PolitiFact posted this article to its Facebook page. How are readers misled by the original fact check supposed to encounter the update, other than by searching for it?

Worse still, the new article does not even appear on the list for the "The Latest From PolitiFact." What's the excuse for that oversight?

We believe that if PolitiFact followed its corrections policy scrupulously, we would see better evidence that PolitiFact publicized its admission it had taken down its "Mostly True" rating of the claim of an agreement removing 100 percent of Syria's chemical weapons.

Can evidence like this stop PolitiFact from receiving "verified" status for keeping the IFCN fact checkers' code?

We doubt it.


Afters
It's worth noting that PolitiFact's updated article does not mention the old article until the third paragraph. The fact that PolitiFact pulled and archived that article waits until the fifth paragraph, nearly halfway through the update.

Since PolitiFact's archived version of the pulled article omits the editor's name, we make things easy for our readers by going to the Internet Archive for the name: Aaron Sharockman.

PolitiFact's "star chamber" of editors approving the "Mostly True" rating likely included Angie Drobnic Holan and Amy Hollyfield.

Sunday, April 2, 2017

Angie Drobnic Holan: "Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

PolitiFact, thy name is Hypocrisy.

The editors of PolitiFact Bias often find themselves overawed by the sanctimonious pronouncements we see coming from PolitiFact (and other fact checkers).

Everybody screws up. We screw up. The New York Times screws up. PolitiFact often screws up. And a big part of journalistic integrity comes from what you do to fix things when you screw up. But for some reason that concept just doesn't seem to fully register at PolitiFact.

Take the International Fact-Checking Day epistle from PolitiFact's chief editor Angie Drobnic Holan:
Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency. (We adhere to those principles at PolitiFact and at the Tampa Bay Times, so if you’re reading this, you’ve made a good start.)
The first sentence qualifies as great advice. The parenthetical sentence that follows qualifies as a howler. PolitiFact adheres to principles of truthfulness, fairness and transparency?

We're coming fresh from a week where PolitiFact published a fact check that took conservative radio talk show host Hugh Hewitt out of context, said it couldn't find something that was easy to find, and (apparently) misrepresented the findings of the Congressional Budget Office regarding the subject.

And more to the issue of integrity, PolitiFact ignores the evidence of its failures and allows its distorted and false fact check to stand.

The fact check claims the CBO finds insurance markets under the Affordable Care Act stable, concluding that the CBO says there is no death spiral. In fact, the CBO said the ACA's nongroup market would "probably" be stable "in most areas." Is it rightly a fact checker's job to spin the judgments of its expert sources?

PolitiFact improperly cast doubt on Hewitt's recollections of a New York Times article where the head of Aetna said the ACA was in a death spiral and people would be left without insurance:
Hewitt referred to a New York Times article that quotes the president of Aetna saying that in many places people will lose health care insurance.

We couldn’t find that article ...
We found the article (quickly and easily). And we told PolitiFact the article exists. But PolitiFact's fact check still makes it look like Hewitt was wrong about the article appearing in the Times.

PolitiFact harped on the issue:
In another tweet, Hewitt referenced a Washington Post story that included remarks [from] Aetna’s chief executive, Mark Bertolini. On the NBC Meet the Press, Hewitt referred to a New York Times article.
We think fact checkers crowing about their integrity and transparency ought to fix these sorts of problems without badgering from right-wing bloggers. And if they still won't fix them after badgering from right-wing bloggers, then maybe they do not qualify as "organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

Maybe they're more like liberal bloggers with corporate backing.



Correction April 3, 2017: Added a needed apostrophe to "fact checkers job."

Thursday, March 30, 2017

Hewitt v. PolitiFact: Two facts clearly favor Hewitt

Over the past week, conservative radio host Hugh Hewitt claimed on Sunday that the health insurance industry has entered a "death spiral," PolitiFact rated Hewitt's claim "False," and Hewitt had PolitiFact Executive Director Aaron Sharockman on his radio show for an hour-long interview (transcript here).

Aside from the central dispute over the "death spiral," where PolitiFact's work arguably commits a bifurcation fallacy, we have identified two areas where Hewitt has the right of the argument. PolitiFact has published (and superficially defended) false statements in both areas.

The New York Times article

PunditFact (PolitiFact):
Hewitt referred to a New York Times article that quotes the president of Aetna saying that in many places people will lose health care insurance.

We couldn’t find that article, but a simple remark on how premiums are rising and insurers are leaving the marketplace is not enough evidence to meet the actuarial definition of a death spiral.
We found the article in just a few minutes (it was dead easy; see hit No. 5). Two quotations from it will show it matches the content Hewitt described during his Sunday television appearance, apart from the term "president."

One:
Aetna CEO Mark Bertolini has pronounced the ACA's health insurance markets in a "death spiral."
Two:
___

This story has been corrected to show that consumers have reduced options, not that some consumers have no health care options.
So we have the "death spiral" comment from the head of Aetna that Hewitt described, as well as the dire statement that some people would have no options, though that part was reported in error in the AP story that appeared in the Times.

During his radio interview, Sharockman tried to pin on Hewitt PolitiFact's failure to find the described Times story and flatly said the article was not in the Times:
AS: You said the president of Aetna. It’s the chairman and CEO, and it was not in the New York Times, as you also know. It was originally probably in the Wall Street Journal.
The article was in The New York Times, and we have informed PolitiFact writer Allison Graves (via Twitter) and Sharockman (via email).

We expect ethical journalists to make appropriate corrections.

Where does the CBO stand on the "death spiral"?

The mainstream media widely interpreted the CBO report addressing President Trump's health care proposal as a judgment the ACA has not entered a "death spiral."

PolitiFact did likewise in its fact check of Hewitt:
CBO, independent analysis: No death spiral
Others have also concluded that the Affordable Care Act is not in a death spiral. The nonpartisan Congressional Budget Office, as part of its recent analysis of the GOP legislation, described the Affordable Care Act as stable.
Though PolitiFact did not link to the CBO report in its fact check (contrary to PolitiFact's statement of standards), we believe the claim traces to this CBO report, which contains this assessment (bold emphasis added):

Stability of the Health Insurance Market

Decisions about offering and purchasing health insurance depend on the stability of the health insurance market—that is, on having insurers participating in most areas of the country and on the likelihood of premiums’ not rising in an unsustainable spiral. The market for insurance purchased individually (that is, nongroup coverage) would be unstable, for example, if the people who wanted to buy coverage at any offered price would have average health care expenditures so high that offering the insurance would be unprofitable. In CBO and JCT’s assessment, however, the nongroup market would probably be stable in most areas under either current law or the legislation.
Note the CBO report does not call the insurance market "stable" under the ACA. Instead it projects that insurance markets will probably remain stable in most areas. Assuming PolitiFact has no better support from the CBO than the portion we have quoted, we find PolitiFact's version a perversion of the original. The CBO statement leaves open the possibility of a death spiral.

Sharockman stood behind the fact check's apparent spin during his appearance on the Hugh Hewitt Show:
AS: Hugh, you’re misleading the listeners.

HH: …is that we have gone from 7…

AS: You’re misleading the listeners. The same CBO report that you’re quoting said that the markets are stable whether it’s the AHCA…
Again, unless Sharockman has some version of a CBO report different from what we have found, we judge that Sharockman and PolitiFact are misleading people about the content of the report.

We used email to point out the discrepancy to Sharockman and asked him to provide support for his and PolitiFact's interpretation of the CBO report.

We will update this article if we receive a response from Sharockman that includes such evidence.

Tuesday, March 28, 2017

Hugh Hewitt v. PolitiFact (Power Line Blog)

Via Power Line blog, the liberal bloggers at PolitiFact tangle with conservative radio show host Hugh Hewitt:

The issue: During a television appearance, Hewitt said the ACA is in a death spiral. PolitiFact did its usual limited survey of experts and ruled Hewitt's statement "False."

Part 1: PolitiFact Strikes Hugh Hewitt

A favorite part:
Allison Graves evaluates Hugh’s assertion regarding the Obamacare death spiral for PolitiFact. She defines the question in a manner that tends to belie Hugh’s assertion, cites some relevant authorities and rates Hugh’s assertion False.

I think this is a question on which reasonable minds can disagree, depending on how the question is framed. I would rate Graves’s judgment False in implying the contrary.
Part 2: Pol[i]tiFact strikes Hugh Hewitt (2)

A favorite part (quotation of Hewitt):
PolitiFact is a liberal-agenda-driven group of classically lefty “journalists” masquerading as a non-partisan evaluators of arguments. In this case their defense of their “journalism” rests on a partial and biased recounting of a 10:20 a.m. Meet the Press roundtable discussion, one which omits my stated acknowledgment of a differing argument therein, and their discounting of the expert testimony of a major insurance company president, along with a Sunday afternoon three-hour “deadline” window for response following a perfunctory email to a booker of a show that runs Monday through Friday, when the host is himself online and answering a journalists’ questions and comments.
To this we would add that PolitiFact's story misrepresents a Congressional Budget Office report.

PolitiFact cited the CBO in support of its finding that the ACA is not in a death spiral:
The nonpartisan Congressional Budget Office, as part of its recent analysis of the GOP legislation, described the Affordable Care Act as stable.
PolitiFact failed to link to the CBO in this fact check, but the source wasn't hard to find. The tough part was figuring out why PolitiFact added its own spin to the CBO's view (bold emphasis added):

Stability of the Health Insurance Market

Decisions about offering and purchasing health insurance depend on the stability of the health insurance market—that is, on having insurers participating in most areas of the country and on the likelihood of premiums’ not rising in an unsustainable spiral. The market for insurance purchased individually (that is, nongroup coverage) would be unstable, for example, if the people who wanted to buy coverage at any offered price would have average health care expenditures so high that offering the insurance would be unprofitable. In CBO and JCT’s assessment, however, the nongroup market would probably be stable in most areas under either current law or the legislation.

Under current law, most subsidized enrollees purchasing health insurance coverage in the nongroup market are largely insulated from increases in premiums because their out-of-pocket payments for premiums are based on a percentage of their income; the government pays the difference. The subsidies to purchase coverage combined with the penalties paid by uninsured people stemming from the individual mandate are anticipated to cause sufficient demand for insurance by people with low health care expenditures for the market to be stable.

Under the legislation, in the agencies’ view, key factors bringing about market stability include subsidies to purchase insurance, which would maintain sufficient demand for insurance by people with low health care expenditures, and grants to states from the Patient and State Stability Fund, which would reduce the costs to insurers of people with high health care expenditures. Even though the new tax credits would be structured differently from the current subsidies and would generally be less generous for those receiving subsidies under current law, the other changes would, in the agencies’ view, lower average premiums enough to attract a sufficient number of relatively healthy people to stabilize the market.
Is it defensible journalistic practice to leave out the "probably" and "most areas" caveats in the CBO report?

Something tells us that if PolitiFact caught a Republican omitting that kind of information, it would result in a rating of "Half True" or worse. Assuming the Republican wasn't making a point that a liberal would like, of course.

Afters:

We just finished listening to PolitiFact's Aaron Sharockman spend an hour on the Hugh Hewitt Show. Sharockman reaffirmed the paraphrase of the CBO we quoted above. When a transcript becomes available, we will look at whether Sharockman magnified the distortion from the original fact check.

Thursday, March 23, 2017

Rorschach context

It seems as though the liberal bloggers (aka "mainstream fact checkers") at PolitiFact treat context like a sort of Rorschach inkblot, to interpret as they see fit.

What evidence prompts these unkind words? The evidence runs throughout PolitiFact's history, but two recent fact-checks inspired the imagery.

The PolitiFact Florida Lens

In our previous post, we pointed out the preposterous "Mostly True" rating PolitiFact Florida bestowed on a Florida Democrat who equated the raw gender wage gap with the gender wage gap caused by sex discrimination. The fact checkers did not interpret words uttered in context, "simply because she isn't a man," as an argument that the raw wage gap was entirely the result of gender discrimination. Perhaps her wording wasn't specific enough, like saying the difference in pay occurred despite doing the same work ("Mostly False")?

Whatever the case, PolitiFact ignored a crystal-clear clue that it was checking a claim mirroring one it had previously rated "Mostly False," and it rated the similar claim "Mostly True."

The PolitiFact California Lens

A recent fact check from PolitiFact California makes for a jarring contrast with the PolitiFact Florida item.

California Lt. Governor Gavin Newsom tweeted that Republican Jason Chaffetz had compared the cost of an iPhone to the cost of health care "as if the 2 are the same." Newsom was making the point that health care costs more than an iPhone, so saying the two are the same misses the mark by a California mile.

But did Chaffetz say the costs are the same?

First let's look at how the PolitiFact California lens processed the evidence, then we'll put that evidence together with some surrounding context.

PolitiFact California:
We also examined Newsom’s final claim that Chaffetz had compared the iPhone and health care costs "as if they are the same."

Chaffetz’ comments, particularly his phrase "Americans have choices. And they’ve got to make a choice," leave the impression that obtaining health care is as simple as sacrificing the purchase of a smartphone.
It's worth noting at the outset that PolitiFact California's key evidence doesn't mention the iPhone and does not even imply any type of cost comparison. Any reading of Chaffetz's quotation as evidence of a price comparison would have to come from the context of his remarks. And a fact-checker ought to explain to readers how that works, unless the fact checker can count on his audience sharing his ideological bias.

Chaffetz (as quoted at length in the PolitiFact California fact check; bold emphasis added):
"Well we're getting rid of the individual mandate. We're getting rid of those things that people said they don't want. And you know what? Americans have choices. And they've got to make a choice. And so, maybe rather than getting that new iPhone that they just love and they want to go spend hundreds of dollars on that, maybe they should invest it in their own health care. They've got to make those decisions for themselves."
Chaffetz in no way offers anything approaching a clear suggestion that the cost of an iPhone equals the cost of health care or health insurance. His words about people having choices come right after he says the health care bill would eliminate the individual mandate. After that comes the mention of an iPhone costing "hundreds of dollars" that one might instead invest in health care. In context, the statement is just one example of a great number of choices one might make about paying for health care.

The PolitiFact California lens (like magic!) turns Chaffetz's words conveniently into what is needed to say the Democrat said something "Mostly True."

It's the bias, stupid.

We have PolitiFact Florida ignoring clear context to give a Democrat a more favorable rating than she deserves. We have PolitiFact California finding clear evidence from the context where none exists to give a Democrat a more favorable rating than he deserves.

Point out the absurdity to PolitiFact (as we did for the PolitiFact Florida flub) and somebody from the Tampa Bay Times will read the critique and no changes to the article will result.
How are they able to repeatedly overlook problems like these?

The simplest explanation? Because they're biased. Biased to the left. Biased to trust their own work (despite the incongruity with other PolitiFact fact checks!). And Dunning-Kruger out the wazoo.


Clarification: March 27, 2017: Added link to the PolitiFact California fact check of Gavin Newsom.

Tuesday, March 14, 2017

There You Go Again: PolitiFact Florida makes a hash of another gender wage gap ruling

Though PolitiFact is an unreliable fact-checker, at least one can bank on the mainstream fact-checker's ability to flub gender wage gap claims.

We hit PolitiFact on this issue often, but this latest one from PolitiFact Florida is a doozy, rivaling PolitiFact Oregon's remarkable turd from 2014.

Drum roll: PolitiFact Florida, March 14, 2017:

We're presenting a big hunk of the fact check as it appears at PolitiFact Florida to show how PolitiFact Florida effectively contradicts its own reasoning.

In the next-to-last paragraph of its summary, PolitiFact Florida explains that "differences in pay can be affected by the careers women and men choose and taking time off to care for children." Those aren't the only factors affecting the raw wage gap, by the way.

Yet in the ironically named "Share The Facts" version, the "Mostly True" rating blares its message beside Democrat Patricia Farley's claim that the disparities occur purely based on gender ("simply because she isn't a man"). In other words, the cause is gender discrimination, not different job choices and the like--directly contradicting PolitiFact Florida's caveat. Farley didn't just leave out context. She explicitly denied the key bit of context.

Anyone who knows the difference between the raw gender wage gap and the gap attributable solely to gender discrimination, yet cites the larger raw gap while arguing for legislation to reduce gender discrimination, is deceiving people. The raw gender wage gap is not a realistic representation of gender discrimination in wages because of other factors, such as men and women tending to choose careers that pay differently.
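To make that distinction concrete, here is a minimal sketch with invented numbers (the occupations, salaries, and headcounts are hypothetical, not drawn from any fact check or data set). Within each job, men and women earn identical pay, yet the raw average still shows women earning about 67 cents on the dollar, purely because of occupational composition:

```python
# Hypothetical illustration: a raw pay gap can appear even when men and
# women in the same job earn identical pay, purely because the two groups
# are distributed differently across differently paid jobs.

# (occupation, annual pay, men employed, women employed) -- invented numbers
workforce = [
    ("engineering", 90_000, 80, 20),  # identical pay for both sexes
    ("teaching",    45_000, 20, 80),  # identical pay for both sexes
]

n_men     = sum(m for _, _, m, _ in workforce)
n_women   = sum(w for _, _, _, w in workforce)
men_pay   = sum(pay * m for _, pay, m, _ in workforce)
women_pay = sum(pay * w for _, pay, _, w in workforce)

# Raw gap: the average woman's pay over the average man's pay,
# ignoring occupation entirely.
raw_ratio = (women_pay / n_women) / (men_pay / n_men)
print(f"Raw gap: women earn {raw_ratio:.0%} of what men earn")  # prints 67%
```

In this toy example the same-job gap is exactly zero, so quoting the 67-cent raw figure as evidence of same-job pay discrimination would be flatly wrong. That, in arithmetic form, is the deception described above.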

So, yes, we're saying that unless Patricia Farley is ignorant about the difference between the gender wage gap and the wage gap caused by pay discrimination, she is lying, as in deliberately deceiving her audience. And PolitiFact Florida is calling her falsehood and potentially intentional deception "Mostly True."

The PolitiFact Florida wage gap fact check is below average for PolitiFact--and that's like failing to leap over a matchbox.


Correction March 15, 2017: Posted the intended URL for the PolitiFact Florida fact check. We had mistakenly used the URL to a related fact check concerning Donald Trump.

Monday, February 27, 2017

Daily Caller: "Politifact Says Trump Is Right, But Rates His Remark ‘Mostly False'"

The Daily Caller notes an item from PolitiFact where President Trump tweeted something PolitiFact found true, after which the fact checkers proceeded to rate the claim "Mostly False."

The Daily Caller's Alex Pfeiffer has the skinny:
The tweet from Trump came after Gateway Pundit reported on the change in the national debt under the two respective presidents and after former Godfather Pizza CEO Herman Cain brought up the figures on Fox News.

Politifact wrote: “The numbers check out. And in fact, the total public debt has dropped another $22 billion since the Gateway Pundit article published, according to data from the U.S. Department of Treasury.”

Despite this, Politifact still gave Trump a rating of “mostly false” and titled its article, “Why Donald Trump’s tweet about national debt decrease in his first month is highly misleading.”
We saw this item and considered writing it up. It seemed to us the type of thing that liberal (or even moderate) readers might excuse, judging that PolitiFact did enough to justify the "Mostly False" rating it gave to Trump's tweet.

Showing why this case does not represent a fair fact check requires some additional information.

The definition of "Mostly False"

Did PolitiFact show that Trump's tweet met its definition of "Mostly False"? Here is the definition:
MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.
Trump's tweet did not simply contain "an element of truth." It was true (and misleading). PolitiFact's "Truth-O-Meter" definitions mean little. PolitiFact does not use objective criteria to decide the rating. If objective criteria decided the rating, then PolitiFact's creator would not declare that "Truth-O-Meter" ratings are "entirely subjective."

Sauce for the gander?


If PolitiFact applied its judgments consistently, then the Daily Caller and sites like ours would have little to complain about. But vague definitions that ultimately fail to guide the final rating make it virtually impossible even for well-meaning left-leaning journalists to keep the scales balanced.

Consider an example from the PolitiFact Oregon franchise. PolitiFact Oregon rated Democrat Brad Avakian "Mostly True" for a false statement:
Avakian, citing Census data and echoing claims by Obama and others, said women in Oregon "earn an average of 79 cents for every dollar that men earn for doing the same job." The report he relied on noted that the 79-cent figure applies to full-time, year-round work, although Avakian didn’t include those stipulations.

For starters, the commissioner loses points for cherry-picking the 79-cent figure. Other means of measuring pay gaps between men and women put it considerably less.

The same can be said of the "for doing the same job" piece. As PolitiFact has found previously, the existence of a pay gap doesn’t necessarily mean that all of the gap is caused by individual employer-level discrimination, as Avakian’s claim implies. Some of the gap is at least partially explained by the predominance of women in lower-paying fields, rather than women necessarily being paid less for the same job than men are.

Finally, Avakian used the term "average" when the report he relied on said "median." He could have avoided that by simply saying women "make 79 cents for every dollar a man earns," but since the information he cited contains only median incomes, we find the difference to be inconsequential.

Those caveats aside, he still is well inside the ballpark and the ratio he cited is a credible figure from a credible agency. We rate the claim Mostly True.
That's an inexcusably tilted playing field. If Avakian had described the raw pay gap without saying it compared men and women doing the same job, then his claim would have paralleled Trump's: a true but misleading statement. But Avakian's statement was not true and misleading. It was false and misleading at the same time.

Yet it received a "Mostly True" rating compared to Trump's "Mostly False" rating.

Doesn't fact-checking need better standards than that?



Jeff Adds (1922PST 2/27/17):
We'd love to see PolitiFact reconcile their Mostly False rating of Trump's claim with the rationale behind this gem:



Was there anything misleading about Clinton's statement?
Clinton’s figures check out, and they also mirror the broader results we came up with two years ago. Partisans are free to interpret these findings as they wish, but on the numbers, Clinton’s right. We rate his claim True.
Ha! Silly fact-seekers. When Trump makes an accurate claim, PolitiFact conjures their magical powers of objectivity to decide what is misleading. When lovable ol' Bill makes a claim, heck, PolitiFact is just checkin' the numbers and all you partisans can figure out what it means.

Note that PolitiFact gave Bill Clinton a True rating, which they define as "The statement is accurate and there’s nothing significant missing." Must be nice to be in the club.

We've pointed out how PolitiFact's application of standards is akin to the game of Plinko. With ratings like this it's difficult to view PolitiFact as serious journalists instead of carnival barkers.