Wednesday, June 7, 2017

PolitiLies at PolitiFact Wisconsin I (Updated: PolitiFact amends)

Back on May 15, 2017 we noticed a suspicious factoid in PolitiFact Wisconsin's fact check of congressman Glenn Grothman (R-Wis.) (bold emphasis added):
Grothman’s quick response: "Planned Parenthood is the biggest abortion provider in the country."

He added that the group is an outspoken advocate for what he termed "controversial" services such as birth control.
The notion that birth control services count as controversial looked suspiciously like the result of a liberal press filter. Curious whether the context of Grothman's statement supported PolitiFact Wisconsin's telling, we reviewed the video of the exchange (17:55 through 20:55).



The crosstalk made it a bit hard for us to follow the conversation, but a partial transcript from an article by Jen Hayden at the left-leaning Daily Kos seemed reasonably accurate to us. Note the site also features a trimmed video of the same exchange.

It looked to us as though Grothman mentioned the "controversial programs" without naming them, instead moving on to talk about why his constituents can do without Planned Parenthood's role in providing contraceptive services. Just before Grothman started talking about alternatives to Planned Parenthood's contraceptive services, an audience member called out asking Grothman for examples of the "controversial programs." That question may have led to an assumption that Grothman was naming contraceptive services as an example of "controversial programs."

In short, we could not see any solid justification for PolitiFact Wisconsin's reporting. So we emailed PolitiFact Wisconsin (writer Dave Umhoefer and editor Greg Borowski) to ask whether its evidence was better than it appeared:
Upon reading your recent fact check of Republican Glenn Grothman, I was curious about the line claiming Grothman called birth control a "controversial" service.



He added that the group is an outspoken advocate for what he termed "controversial" services such as birth control.

I watched the video and had trouble hearing the audio (I've found transcripts that seem pretty much correct, however). It sounded like Grothman mentioned Planned Parenthood's support for some controversial services, then went on to talk about the ease with which people might obtain birth control. Was there some particular part of the event that you might transcribe in clear support of your summary?

From what I can tell, the context does not support your account. If people can easily obtain birth control without Planned Parenthood's help, how would that make the service "controversial"? It would make the service less necessary, not controversial, right?

I urge you to either make clear the portion of the event that supports your interpretation, or else alter the interpretation to square with the facts of the event. By that I mean not guessing what Grothman meant when he referred to "controversial programs." If Grothman did not make clear what he was talking about, your account should not suggest otherwise.

If you asked Grothman what he was talking about and he made clear he believes birth control is a controversial service, likewise make that clear to your readers.
The replies we received offered no evidence in support of PolitiFact Wisconsin's reporting. In fact, the reply we received on May 18 from Borowski suggested that Umhoefer had (belatedly?) reached out to Grothman's office for clarification:
Dave has reached out to Grothman's office. So, you;ll [sic] have to be patient.
By June 4, 2017 we had yet to receive any further message with evidence backing the claim from the article. We sent a reminder message that day, which has likewise failed to draw a reply.

[Update June 8, 2017: PolitiFact Wisconsin editor Greg Borowski alerted us that the fact check of Grothman was updated. We have reproduced the PolitiFact Wisconsin "Editor's note" at the end of this post]

What does it mean?

It looks like PolitiFact Wisconsin did careless reporting on the Grothman story. The story very likely misrepresented Grothman's view of the "controversial programs" he spoke about.

Grothman's government website offers a more reliable account of what Grothman views as Planned Parenthood's "controversial" programs.

It appears PolitiFact Wisconsin is aware it published something as fact without adequate backing information, and intends to keep its flawed article as-is so long as it anticipates no significant consequences will follow.

Integrity.


Afters

Also see PolitiLies at PolitiFact Wisconsin II, published the same day as this part.

Update June 8, 2017: PolitiFact removed "such as birth control" from its summary of Grothman's statement about "controversial services."  PolitiFact Wisconsin appended the following editor's note to the story:
(Editor's note, June 7, 2017: An earlier version of this item quoted Grothman as saying that Planned Parenthood is an outspoken advocate for "controversial" services such as birth control. A spokesperson for his office said on June 7, 2017 that the video, in which Grothman's voice is hard to hear at times, may have led people to that conclusion, but that Grothman does not believe birth control is a controversial service. The birth control quote had no bearing on the congressman’s statement about Planned Parenthood and its role in abortions, so the rating of True is unchanged.)
We are impressed by PolitiFact Wisconsin's ability to run a correction while offering the appearance that it committed no error. Saying the original item "quoted Grothman" gives the reader the impression that Grothman must have misspoken. But benevolent PolitiFact Wisconsin covered for Grothman's mistake after his office clarified what he meant to say.

It's really not a model of transparency, and offers Grothman no apology for misrepresenting his views.

We stick with our assessment that PolitiFact Wisconsin reported carelessly. And we suggest that PolitiFact Wisconsin's error was the type that occurs when journalists assume they know how conservatives think when in reality they do not (ideological bias).

On the bright side, the portion of the fact check that we criticized now reads as it should have read from the start. We credit PolitiFact Wisconsin for making that change. That fixes the main issue, for there's nothing wrong with having a bias if it doesn't show up in the reporting.

Of secondary importance, we judge the editor's note was subtly misleading and lacking in transparency.

We also note with sadness that the changes to PolitiFact Wisconsin's story do not count as either corrections or updates. We know this because PolitiFact Wisconsin added no "corrections and updates" tag to the story. Adding that tag would make a fact check appear on PolitiFact's page of stories that have been corrected or updated.



Correction June 9, 2017: Removed a redundant "because" from the final paragraph of the update.

Friday, June 2, 2017

An objective deception: "neutral" PolitiFact

PolitiFact's central deception follows from its presentation of itself as a "nonpartisan" and neutral judge of facts.

A neutral fact checker would apply the same neutral standards to every fact check. Naturally, PolitiFact claims it does just that. But that claim should not convince anyone given the profound level of inconsistency PolitiFact has achieved over the years.

To illustrate PolitiFact's inconsistency we'll use an example from 2014 via PolitiFact Rhode Island that we just ran across.

Rhode Island's Senator Sheldon Whitehouse said jobs in the solar industry outnumbered jobs in coal mining. PolitiFact used data from the Solar Foundation to help evaluate the claim, and included this explanation from the Solar Foundation's Executive Director Andrea Luecke:
Luecke said by the census report’s measure, "the solar industry is outpacing coal mining." But she noted, "You have to understand that coal-mining is one aspect of the coal industry - whereas we’re talking about the whole solar industry."

If you add in other coal industry categories, "it’s more than solar, for sure. But the coal-mining bucket is less, for sure."
Luecke correctly explained that comparing the numbers from the Solar Foundation's job census to "coal mining" jobs represented an apples-to-oranges comparison.

PolitiFact Rhode Island did not take the rigged comparison into account in rating Whitehouse's claim. PolitiFact awarded Whitehouse a "True" rating, defined as "The statement is accurate and there’s nothing significant missing." We infer from the rating that PolitiFact Rhode Island regarded the apples-to-oranges comparison as insignificant.

However, when Mitt Romney in 2012 made substantially accurate claims about Navy ships and Air Force planes, PolitiFact based its rating on the apples-to-oranges angle:
This is a great example of a politician using more or less accurate statistics to make a meaningless claim. Judging by the numbers alone, Romney was close to accurate.

...

Thanks to the development of everything from nuclear weapons to drones, comparing today’s military to that of 60 to 100 years ago presents an egregious comparison of apples and oranges.
PolitiFact awarded Romney's claim its lowest-possible "Truth-O-Meter" rating, "Pants on Fire."

If Romney's claim was "meaningless" thanks to advances in military technology, is it not reasonable to regard Whitehouse's claim as similarly meaningless? PolitiFact Rhode Island didn't even mention government subsidies of the solar energy sector, nor did it try to identify Whitehouse's underlying argument--probably something along the lines of "Focusing on renewable energy sources like solar energy, not on fossil fuels, will help grow jobs and the economy!"

Comparing coal-mining jobs alone to jobs across the entire solar energy sector offers no reasonable benchmark for comparing the coal energy sector as a whole to the solar energy sector as a whole.

Regardless of whether PolitiFact's people think they are neutral, their work argues the opposite. They do not apply their principles consistently.

Wednesday, May 31, 2017

What does the "Partisan Selective Sharing" study say about PolitiFact?

A recent study called "Partisan Selective Sharing" (hereafter PSS) noted that Twitter users were more likely to share fact checks that aided their own side of the political aisle.

Duh?

On the other hand, the paper came up in a search we did of scholarly works mentioning "PolitiFact."

The search preview mentioned the University of Minnesota's Eric Ostermeier. So we couldn't resist taking a peek to see how the paper handled the data hinting at PolitiFact's selection bias problem.

The mention of Ostermeier's work was effectively neutral, we're happy to say. And the paper had some surprising value to it.

PSS coded tweets from the "elite three" fact checkers, FactCheck.org, PolitiFact and the Washington Post Fact Checker, classifying them as neutral, beneficial to Republicans or beneficial to Democrats.

In our opinion, that's where the study proved briefly interesting:
Preliminary analysis
Fact-checking tweets
42.3% of the 194 fact-check (n=82) tweets posted by the three accounts in October 2012 contained rulings that were advantageous to the Democratic Party (i.e., either positive to Obama or negative to Romney), while 23.7% of them (n=46) were advantageous to the Republican Party (i.e., either positive to Romney or negative to Obama). The remaining 34% (n=66) were neutral, as their statements contained either a contextualized analysis or a neutral anchor.

In addition to the relative advantage of the fact checks, the valence of the fact-checking tweet toward each candidate was also analyzed. Of the 194 fact checks, 34.5% (n=67) were positive toward Obama, 46.9% (n=91) were neutral toward Obama, and 18.6% (n=36) were negative toward Obama. On the other hand, 14.9% (n=29) of the 194 fact checks contained positive valence toward Romney, 53.6% (n=104) were neutral toward Romney, and 31.4% (n=61) were negative valence toward Romney.
Of course, many have no problem interpreting results like these as a strong indication that Republicans lie more than Democrats. And we cheerfully admit the data show consistency with the assumption that Republicans lie more.

Still, if one has some interest in applying the methods of science, on what do we base the hypothesis that Republicans lie more? We cannot base that hypothesis on these data without ruling out the idea that fact-checking journalists lean to the left. And unfortunately for the "Republicans lie more" hypothesis, we have some pretty good data showing that American journalists tend to lean to the left.

Until we have some reasonable argument why left-leaning journalists do not allow their bias to affect their work, the results of studies like PSS give us more evidence that the media (and the mainstream media subset "fact checkers") lean left while they're working.

The "liberal bias" explanation has better evidence than the "Republicans lie more" hypothesis. As PolitiFact tweeted 126 of the total 194 fact check tweets, a healthy share of the blame likely falls on PolitiFact.


We wish the authors of the study, Jieun Shin and Kjerstin Thorson, had separated the three fact checkers in their results.

Wednesday, May 24, 2017

What if we lived in a world where PolitiFact applied to itself the standards it applies to others?

In that impossible world where PolitiFact applied its own standards to itself, PolitiFact would doubtless crack down on PolitiFact's misleading headlines, like the following headline over a story by Lauren Carroll:


While the PolitiFact headline claims that the Trump budget cuts Medicaid, and the opening paragraph says Trump's budget "directly contradicts" President Trump's promise not to cut Medicaid, in short order Carroll's story reveals that the Medicaid budget goes up under the new Trump budget.

So it's a cut when the Medicaid budget goes up?

Such reasoning has precedent at PolitiFact. We noted in December 2016 that veteran PolitiFact fact-checker Louis Jacobson wrote that the most natural way to interpret "budget cut" was against the baseline of expected spending, not against the previous year's spending.

Jacobson's approach in December 2016 helped President Obama end up with a "Compromise" rating on his aim to cut $1 trillion to $1.5 trillion in spending. By PolitiFact's reckoning, the president cut $427 billion from the budget. PolitiFact obtained that figure by subtracting actual outlays from the estimates the Congressional Budget Office published in 2012 and using the cumulative total for the four years.
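To make the two competing definitions of a "cut" concrete, here is a minimal sketch with invented numbers (not actual CBO estimates or outlays), showing how spending can rise every year and still register as a "cut" against a projected baseline:

```python
# Toy illustration of two ways to measure a "budget cut."
# All numbers are invented for illustration; they are not actual CBO or budget figures.

baseline_projection = [400, 420, 441, 463]   # projected spending, in billions
actual_outlays      = [405, 415, 430, 447]   # what was actually spent

# Definition 1: a "cut" measured against the projected baseline
# (the interpretation Jacobson applied to the Obama spending pledge)
cut_vs_baseline = sum(b - a for b, a in zip(baseline_projection, actual_outlays))

# Definition 2: a "cut" measured against the previous year's spending
cut_vs_prior_year = sum(prev - curr for prev, curr in zip(actual_outlays, actual_outlays[1:]))

print(f"Cumulative 'cut' vs. baseline:   {cut_vs_baseline} billion")    # 27
print(f"Cumulative 'cut' vs. prior year: {cut_vs_prior_year} billion")  # -42

# Spending rises every year, so the prior-year measure shows no cut at all
# (a negative number), while the baseline measure reports a cumulative "cut."
```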

Jacobson took a different tack back in 2014 when he faulted a Republican ad attacking the Affordable Care Act's adjustments to Medicare spending (which we noted in the earlier linked article):
First, while the ad implies that the law is slicing Medicare benefits, these are not cuts to current services. Rather, as Medicare spending continues to rise over the next 10 years, it will do so at a slower pace would [sic] have occurred without the law. So claims that Obama would "cut" Medicare need more explanation to be fully accurate.
We can easily rework Jacobson's paragraph to address Carroll's story:
First, while the headline implies that the proposed budget is slicing Medicaid benefits, these are not cuts to current services. Rather, as Medicaid spending continues to rise over the next 10 years, it will do so at a slower pace than would occur without the law. So claims that Trump would "cut" Medicaid need more explanation to be fully accurate.
PolitiFact is immune to the standard it applies to others.

We also note that a pledge not to cut a program's spending is not reasonably taken as a pledge not to slow the growth of spending for that program. Yet that unreasonable interpretation is the foundation of PolitiFact's "Trump-O-Meter" article.


Correction May 24, 2017: Changed the first incidence of "law" in our reworking of Jacobson's sentence to "proposed budget." It better fits the facts that way.
Update May 26, 2017: Added link to the PolitiFact story by Lauren Carroll

Friday, May 19, 2017

What "Checking How Fact Checkers Check" says about PolitiFact

A study by doctoral student Chloe Lim (Political Science) of Stanford University gained some attention this week, inspiring some unflattering headlines like this one from Vocativ: "Great, Even Fact Checkers Can’t Agree On What Is True."

Katie Eddy and Natasha Elsner explain inter-rater reliability

Lim's research approach somewhat resembled research by Michelle A. Amazeen of Rider University. Amazeen and Lim both used tests of coding consistency to assess the accuracy of fact checkers, but the two reached roughly opposite conclusions. Amazeen concluded that consistent results helped strengthen the inference that fact-checkers fact-check accurately. Lim concluded that inconsistent fact-checker ratings may undermine the public impact of fact-checking.

Key differences in the research procedure help explain why Amazeen and Lim reached differing conclusions.

Data Classification

Lim used two different methods for classifying data from PolitiFact and the Washington Post Fact Checker. She converted PolitiFact ratings to a five-point scale corresponding to the Washington Post Fact Checker's "Pinocchio" ratings, and she divided ratings into "True" and "False" groups using the line between "Mostly False" and "Half True" as the barrier between true and false statements.

Amazeen opted for a different approach. Amazeen did not try to reconcile the two different rating systems at PolitiFact and the Fact Checker, electing to use a binary system that counted every statement rated other than "True" or "Geppetto check mark" as false.

Amazeen's method essentially guaranteed high inter-rater reliability, because "True" judgments from the fact checkers are rare. Imagine comparing movie reviewers who use a five-point scale, but with their ratings collapsed into "great" (five stars) and "not great" (everything else). A one-star rating of "Ishtar" by one reviewer would then show agreement with a four-star rating of the same movie by another reviewer. Disagreements occur only when one reviewer gives five stars while the other gives a lower rating.

Professor Joseph Uscinski's reply to Amazeen's research, published in Critical Review, put it succinctly:
Amazeen’s analysis sets the bar for agreement so low that it cannot be taken seriously.
Amazeen found high agreement among fact checkers because her method guaranteed that outcome.
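A toy example makes the mechanism plain. The sketch below uses invented ratings, not Amazeen's or Lim's actual data, and simple percent agreement rather than Krippendorff's alpha, but the effect of collapsing a five-point scale into "top rating vs. everything else" is the same:

```python
# Toy demonstration of how binarizing a five-point scale inflates agreement.
# The ratings below are invented for illustration; they are not real fact-check data.

rater_a = [1, 2, 2, 3, 4, 1, 3, 5, 2, 4]
rater_b = [2, 4, 1, 2, 3, 3, 1, 5, 4, 2]

def percent_agreement(xs, ys):
    """Share of items on which the two raters assign the identical code."""
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

# Agreement on the raw five-point scale
raw_agreement = percent_agreement(rater_a, rater_b)

# Agreement after collapsing to "top rating" vs. "everything else"
# (analogous to counting every rating other than "True" as false)
binary_a = [1 if x == 5 else 0 for x in rater_a]
binary_b = [1 if x == 5 else 0 for x in rater_b]
binary_agreement = percent_agreement(binary_a, binary_b)

print(f"Raw five-point agreement: {raw_agreement:.0%}")    # 10% on these ratings
print(f"Binarized agreement:      {binary_agreement:.0%}")  # 100% on these ratings
```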

Lim's methods provide for more varied and robust data sets, though Lim ran into the same problem Amazeen did: two different fact-checking organizations only rarely check the same claims. Both researchers worked with relatively small data sets.

The meaning of Lim's study

In our view, Lim's study rushes to its conclusion that fact-checkers disagree without giving proper attention to the most obvious explanation for the disagreement she measures.

The rating systems the fact checkers use lend themselves to subjective evaluations. We should expect that condition to lead to inconsistent ratings. When I reviewed Amazeen's method at Zebra Fact Check, I criticized it for applying inter-coder reliability standards to a process much less rigorous than social science coding.

Klaus Krippendorff, creator of the K-alpha measure Amazeen used in her research, explained the importance of giving coders good instructions to follow:
The key to reliable content analyses is reproducible coding instructions. All phenomena afford multiple interpretations. Texts typically support alternative interpretations or readings. Content analysts, however, tend to be interested in only a few, not all. When several coders are employed in generating comparable data, especially large volumes and/or over some time, they need to focus their attention on what is to be studied. Coding instructions are intended to do just this. They must delineate the phenomena of interest and define the recording units to be described in analyzable terms, a common data language, the categories relevant to the research project, and their organization into a system of separate variables.
The rating systems of PolitiFact and the Washington Post Fact Checker are gimmicks, not coding instructions. The definitions mean next to nothing, and PolitiFact's creator, Bill Adair, has called PolitiFact's determination of Truth-O-Meter ratings "entirely subjective."

Lim's conclusion is right. The fact checkers are inconsistent. But Lim's use of coder reliability measures is, in our view, a little like using a plumb line to measure whether a building has collapsed due to an earthquake. The tool is too sophisticated for the job. The "Truth-O-Meter" and "Pinocchio" rating systems as described and used by the fact checkers do not qualify as adequate sets of coding instructions.

We've belabored the point about PolitiFact's rating system for years. It's a gimmick that tends to mislead people. And the fact-checking organizations that do not use a rating system avoid it for precisely that reason.

Lucas Graves' history of the modern fact-checking movement, "Deciding What's True: The Rise of Political Fact-Checking in American Journalism," (Page 41) offers an example of the dispute:
The tradeoffs of rating systems became a central theme of the 2014 Global Summit of fact-checkers. Reprising a debate from an earlier journalism conference, Bill Adair staged a "steel-cage death match" with the director of Full Fact, a London-based fact-checking outlet that abandoned its own five-point rating scheme (indicated by a magnifying lens) as lacking precision and rigor. Will Moy explained that Full Fact decided to forgo "higher attention" in favor of "long-term reputation," adding that "a dodgy rating system--and I'm afraid they are inherently dodgy--doesn't help us with that."
Coding instructions should give coders clear guidelines that prevent most or all debate when deciding between two rating categories.

Lim's study in its present form does its best work in creating questions about fact checkers' use of rating systems.

Sunday, May 14, 2017

PolitiFact and robots.txt (updated)

We were surprised earlier this week when our attempt to archive a PolitiFact fact check at the Internet Archive failed.



Saving a page to the Internet Archive has served as one of the standard methods for keeping record of changes at a website. PolitiFact Bias has often used the Internet Archive to document PolitiFact's mischief.

Webmasters have the option of instructing search engines and other crawlers to skip content at a website through use of a "robots.txt" file. Historically, the Internet Archive has respected the presence of a robots.txt prohibition.

PolitiFact apparently decided recently to start using a restrictive robots.txt. As a result, it's likely that none of the archived PolitiFact.com links will work for a time, either at PolitiFact Bias or elsewhere.

The good news in all of this? The Internet Archive is likely to start ignoring the robots.txt instruction in the very near future. Once that happens, PolitiFact's sketchy Web history will return from the shadows back into the light.

PolitiFact may have had a legitimate reason for the change, but our extension of the benefit of the doubt comes with a big caveat: The PolitiFact webmaster could have created an exception for the Internet Archive in its robots.txt file. That oversight creates an embarrassment for PolitiFact, at minimum.
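For readers curious about the mechanics, the sketch below shows how such an exception could look and how a crawler checks it. The rules and user-agent names are illustrative assumptions on our part, not PolitiFact's actual robots.txt:

```python
# Minimal sketch of a robots.txt that blocks crawlers generally but makes an
# exception for the Internet Archive's crawler. The rules and user-agent names
# are illustrative assumptions, not PolitiFact's actual robots.txt.
from urllib.robotparser import RobotFileParser

hypothetical_robots_txt = """\
User-agent: ia_archiver
Disallow:

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(hypothetical_robots_txt.splitlines())

# The archiving crawler is allowed; all other crawlers are blocked.
print(parser.can_fetch("ia_archiver", "http://www.politifact.com/example/"))   # True
print(parser.can_fetch("SomeOtherBot", "http://www.politifact.com/example/"))  # False
```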


Update May 18, 2017:

This week the Internet Archive Wayback Machine once again functioned properly in saving Web pages at PolitiFact.com. Links at PolitiFactBias.com to archived pages likewise function properly.

We do not know at this point whether PolitiFact created an exception for the Internet Archive (and others), or whether the Internet Archive has already started ignoring robots.txt. PolitiFact has made no announcement regarding any change, so far as we can determine.

Friday, April 7, 2017

PolitiFact fixes fact check on Syrian chemical weapons

When news reports recently suggested that the Syrian government had used chemical weapons, it presented a problem for PolitiFact. As noted by the Daily Caller, among others, PolitiFact said in 2014 it was "Mostly True" that 100 percent of Syrian chemical weapons were removed from that country.

If the Syrian government used chemical weapons, where did it get them? Was it a fresh batch produced after the Obama administration forged an agreement with Russia (seriously) to effect removal of the weapons?

Nobody really knows, just like nobody truly knew the weapons were gone when PolitiFact ruled it "Mostly True" that the weapons were "100 percent gone." (screen capture via the Internet Archive)


With public attention brought to its questionable ruling with the April 5, 2017 Daily Caller story, PolitiFact archived its original fact check and redirected the old URL to a new (also April 5, 2017) PolitiFact article: "Revisiting the Obama track record on Syria’s chemical weapons."

At least PolitiFact didn't make its old ruling simply vanish, but has PolitiFact acted in keeping with its commitment to the International Fact-Checking Network's statement of principles?
A COMMITMENT TO OPEN AND HONEST CORRECTIONS
We publish our corrections policy and follow it scrupulously. We correct clearly and transparently in line with our corrections policy, seeking so far as possible to ensure that readers see the corrected version.
And what is PolitiFact's clear and transparent corrections policy? According to "The Principles of PolitiFact, PunditFact and the Truth-O-Meter" (bold emphasis added):

When we find we've made a mistake, we correct the mistake.

  • In the case of a factual error, an editor's note will be added and labeled "CORRECTION" explaining how the article has been changed.
  • In the case of clarifications or updates, an editor's note will be added and labeled "UPDATE" explaining how the article has been changed.
  • If the mistake is significant, we will reconvene the three-editor panel. If there is a new ruling, we will rewrite the item and put the correction at the top indicating how it's been changed.
Is the new article an update? In at least some sense it is. PolitiFact removed and archived the fact check thanks to questions about its accuracy. And the last sentence in the replacement article calls the article an "update":
In the days and weeks to come, we will learn more about the recent attacks, but in the interest of providing clear information, we have replaced the original fact-check with this update.
If the new article counts as an update, we think it ought to wear the "update" tag that would make it appear on PolitiFact's "Corrections and Updates" page, where it has yet to appear (archived version).

And we found no evidence that PolitiFact posted this article to its Facebook page. How are readers misled by the original fact check supposed to encounter the update, other than by searching for it?

Worse still, the new article does not even appear on the list for the "The Latest From PolitiFact." What's the excuse for that oversight?

We believe that if PolitiFact followed its corrections policy scrupulously, we would see better evidence that PolitiFact publicized its admission it had taken down its "Mostly True" rating of the claim of an agreement removing 100 percent of Syria's chemical weapons.

Can evidence like this stop PolitiFact from receiving "verified" status for keeping the IFCN fact checkers' code?

We doubt it.


Afters
It's worth mentioning that PolitiFact's updated article does not mention the old article until the third paragraph. The fact that PolitiFact pulled and archived that article does not appear until the fifth paragraph, nearly halfway through the update.

Since PolitiFact's archived version of the pulled article omits the editor's name, we make things easy for our readers by going to the Internet Archive for the name: Aaron Sharockman.

PolitiFact's "star chamber" of editors approving the "Mostly True" rating likely included Angie Drobnic Holan and Amy Hollyfield.

Sunday, April 2, 2017

Angie Drobnic Holan: "Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

PolitiFact, thy name is Hypocrisy.

The editors of PolitiFact Bias often find themselves overawed by the sanctimonious pronouncements we see coming from PolitiFact (and other fact checkers).

Everybody screws up. We screw up. The New York Times screws up. PolitiFact often screws up. And a big part of journalistic integrity comes from what you do to fix things when you screw up. But for some reason that concept just doesn't seem to fully register at PolitiFact.

Take the International Fact-Checking Day epistle from PolitiFact's chief editor Angie Drobnic Holan:
Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency. (We adhere to those principles at PolitiFact and at the Tampa Bay Times, so if you’re reading this, you’ve made a good start.)
The first sentence qualifies as great advice. The parenthetical sentence that follows qualifies as a howler. PolitiFact adheres to principles of truthfulness, fairness and transparency?

We're coming fresh from a week where PolitiFact published a fact check that took conservative radio talk show host Hugh Hewitt out of context, said it couldn't find something that was easy to find, and (apparently) misrepresented the findings of the Congressional Budget Office regarding the subject.

And more to the issue of integrity, PolitiFact ignores the evidence of its failures and allows its distorted and false fact check to stand.

The fact check claims the CBO finds insurance markets under the Affordable Care Act stable, concluding that the CBO says there is no death spiral. In fact, the CBO said the ACA was "probably" stable "in most areas." Is it rightly a fact checker's job to spin the judgments of its expert sources?

PolitiFact improperly cast doubt on Hewitt's recollections of a New York Times article where the head of Aetna said the ACA was in a death spiral and people would be left without insurance:
Hewitt referred to a New York Times article that quotes the president of Aetna saying that in many places people will lose health care insurance.

We couldn’t find that article ...
We found the article (quickly and easily). And we told PolitiFact the article exists. But PolitiFact's fact check still makes it look like Hewitt was wrong about the article appearing in the Times.

PolitiFact harped on the issue:
In another tweet, Hewitt referenced a Washington Post story that included remarks Aetna’s chief executive, Mark Bertolini. On the NBC Meet the Press, Hewitt referred to a New York Times article.
We think fact checkers crowing about their integrity and transparency ought to fix these sorts of problems without badgering from right-wing bloggers. And if they still won't fix them after badgering from right-wing bloggers, then maybe they do not qualify as "organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

Maybe they're more like liberal bloggers with corporate backing.



Correction April 3, 2017: Added a needed apostrophe to "fact checkers job."