Friday, April 7, 2017

PolitiFact fixes fact check on Syrian chemical weapons

When news reports recently appeared suggesting the Syrian government had used chemical weapons, those reports presented a problem for PolitiFact. As noted by the Daily Caller, among others, PolitiFact said in 2014 it was "Mostly True" that 100 percent of Syrian chemical weapons were removed from that country.

If the Syrian government used chemical weapons, where did it get them? Was it a fresh batch produced after the Obama administration forged an agreement with Russia (seriously) to effect removal of the weapons?

Nobody really knows, just like nobody truly knew the weapons were gone when PolitiFact ruled it "Mostly True" that the weapons were "100 percent gone." (screen capture via the Internet Archive)


With public attention brought to its questionable ruling by the April 5, 2017 Daily Caller story, PolitiFact archived its original fact check and redirected the old URL to a new (also April 5, 2017) PolitiFact article: "Revisiting the Obama track record on Syria’s chemical weapons."

At least PolitiFact didn't make its old ruling simply vanish, but has PolitiFact acted in keeping with its commitment to the International Fact-Checking Network's statement of principles?
A COMMITMENT TO OPEN AND HONEST CORRECTIONS
We publish our corrections policy and follow it scrupulously. We correct clearly and transparently in line with our corrections policy, seeking so far as possible to ensure that readers see the corrected version.
And what is PolitiFact's clear and transparent corrections policy? According to "The Principles of PolitiFact, PunditFact and the Truth-O-Meter" (bold emphasis added):

When we find we've made a mistake, we correct the mistake.

  • In the case of a factual error, an editor's note will be added and labeled "CORRECTION" explaining how the article has been changed.
  • In the case of clarifications or updates, an editor's note will be added and labeled "UPDATE" explaining how the article has been changed.
  • If the mistake is significant, we will reconvene the three-editor panel. If there is a new ruling, we will rewrite the item and put the correction at the top indicating how it's been changed.
Is the new article an update? In at least some sense it is. PolitiFact removed and archived the fact check thanks to questions about its accuracy. And the last sentence in the replacement article calls the article an "update":
In the days and weeks to come, we will learn more about the recent attacks, but in the interest of providing clear information, we have replaced the original fact-check with this update.
If the new article counts as an update, we think it ought to wear the "update" tag that would make it appear on PolitiFact's "Corrections and Updates" page, where it has yet to appear (archived version).

And we found no evidence that PolitiFact posted this article to its Facebook page. How are readers misled by the original fact check supposed to encounter the update, other than by searching for it?

Worse still, the new article does not even appear in "The Latest From PolitiFact" list. What's the excuse for that oversight?

We believe that if PolitiFact followed its corrections policy scrupulously, we would see better evidence that PolitiFact publicized its admission it had taken down its "Mostly True" rating of the claim of an agreement removing 100 percent of Syria's chemical weapons.

Can evidence like this stop PolitiFact from receiving "verified" status in keeping with the IFCN fact checkers' code?

We doubt it.


Afters
It's worth mentioning that PolitiFact's updated article does not mention the old article until the third paragraph. The fact that PolitiFact pulled and archived that article waits for the fifth paragraph, nearly halfway through the update.

Since PolitiFact's archived version of the pulled article omits the editor's name, we make things easy for our readers by going to the Internet Archive for the name: Aaron Sharockman.

PolitiFact's "star chamber" of editors approving the "Mostly True" rating likely included Angie Drobnic Holan and Amy Hollyfield.

Sunday, April 2, 2017

Angie Drobnic Holan: "Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

PolitiFact, thy name is Hypocrisy.

The editors of PolitiFact Bias often find themselves overawed by the sanctimonious pronouncements we see coming from PolitiFact (and other fact checkers).

Everybody screws up. We screw up. The New York Times screws up. PolitiFact often screws up. And a big part of journalistic integrity comes from what you do to fix things when you screw up. But for some reason that concept just doesn't seem to fully register at PolitiFact.

Take the International Fact-Checking Day epistle from PolitiFact's chief editor Angie Drobnic Holan:
Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency. (We adhere to those principles at PolitiFact and at the Tampa Bay Times, so if you’re reading this, you’ve made a good start.)
The first sentence qualifies as great advice. The parenthetical sentence that follows qualifies as a howler. PolitiFact adheres to principles of truthfulness, fairness and transparency?

We're coming fresh from a week where PolitiFact published a fact check that took conservative radio talk show host Hugh Hewitt out of context, said it couldn't find something that was easy to find, and (apparently) misrepresented the findings of the Congressional Budget Office regarding the subject.

And more to the issue of integrity, PolitiFact ignores the evidence of its failures and allows its distorted and false fact check to stand.

The fact check claims the CBO finds insurance markets under the Affordable Care Act stable, concluding that the CBO says there is no death spiral. In fact, the CBO said the ACA was "probably" stable "in most areas." Is it rightly a fact checker's job to spin the judgments of its expert sources?

PolitiFact improperly cast doubt on Hewitt's recollections of a New York Times article where the head of Aetna said the ACA was in a death spiral and people would be left without insurance:
Hewitt referred to a New York Times article that quotes the president of Aetna saying that in many places people will lose health care insurance.

We couldn’t find that article ...
We found the article (quickly and easily). And we told PolitiFact the article exists. But PolitiFact's fact check still makes it look like Hewitt was wrong about the article appearing in the Times.

PolitiFact harped on the issue:
In another tweet, Hewitt referenced a Washington Post story that included remarks Aetna’s chief executive, Mark Bertolini. On the NBC Meet the Press, Hewitt referred to a New York Times article.
We think fact checkers crowing about their integrity and transparency ought to fix these sorts of problems without badgering from right-wing bloggers. And if they still won't fix them after badgering from right-wing bloggers, then maybe they do not qualify as "organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

Maybe they're more like liberal bloggers with corporate backing.



Correction April 3, 2017: Added a needed apostrophe to "fact checkers job."

Thursday, March 30, 2017

Hewitt v. PolitiFact: Two facts clearly favor Hewitt

Over the past week, conservative radio host Hugh Hewitt claimed on Sunday that the health insurance industry has entered a "death spiral," PolitiFact rated Hewitt's claim "False," and Hewitt had PolitiFact Executive Director Aaron Sharockman on his radio show for an hour-long interview (transcript here).

Aside from the central dispute over the "death spiral," where PolitiFact's work arguably commits a bifurcation fallacy, we have identified two areas where Hewitt has the better of the argument. PolitiFact has published (and superficially defended) false statements in both areas.

The New York Times article

PunditFact (PolitiFact):
Hewitt referred to a New York Times article that quotes the president of Aetna saying that in many places people will lose health care insurance.

We couldn’t find that article, but a simple remark on how premiums are rising and insurers are leaving the marketplace is not enough evidence to meet the actuarial definition of a death spiral.
We found the article in just a few minutes (it was dead easy; see hit No. 5). Two quotations from it show that it matches the content Hewitt described on his Sunday television appearance, apart from the term "president."

One:
Aetna CEO Mark Bertolini has pronounced the ACA's health insurance markets in a "death spiral."
Two:
___

This story has been corrected to show that consumers have reduced options, not that some consumers have no health care options.
So we have the "death spiral" comment from the head of Aetna that Hewitt described, as well as the dire statement that some people have no options, though that part was reported in error in the AP story that appeared in the Times.

During his radio interview, Sharockman tried to pin PolitiFact's failure to find the described Times story on Hewitt, and flatly said the article was not in the Times:
AS: You said the president of Aetna. It’s the chairman and CEO, and it was not in the New York Times, as you also know. It was originally probably in the Wall Street Journal.
The article was in The New York Times, and we have informed PolitiFact writer Allison Graves (via Twitter) and Sharockman (via email).

We expect ethical journalists to make appropriate corrections.

Where does the CBO stand on the "death spiral"?

The mainstream media widely interpreted the CBO report addressing President Trump's health care proposal as a judgment the ACA has not entered a "death spiral."

PolitiFact did likewise in its fact check of Hewitt:
CBO, independent analysis: No death spiral
Others have also concluded that the Affordable Care Act is not in a death spiral. The nonpartisan Congressional Budget Office, as part of its recent analysis of the GOP legislation, described the Affordable Care Act as stable.
Though PolitiFact did not link to the CBO report in its fact check (contrary to PolitiFact's statement of standards), we believe the claim traces to this CBO report, which contains this assessment (bold emphasis added):

Stability of the Health Insurance Market

Decisions about offering and purchasing health insurance depend on the stability of the health insurance market—that is, on having insurers participating in most areas of the country and on the likelihood of premiums’ not rising in an unsustainable spiral. The market for insurance purchased individually (that is, nongroup coverage) would be unstable, for example, if the people who wanted to buy coverage at any offered price would have average health care expenditures so high that offering the insurance would be unprofitable. In CBO and JCT’s assessment, however, the nongroup market would probably be stable in most areas under either current law or the legislation.
Note the CBO report does not call the insurance market "stable" under the ACA. Instead it projects that insurance markets will probably remain stable in most areas. Assuming PolitiFact has no better support from the CBO than the portion we have quoted, we find PolitiFact's version a perversion of the original. The CBO statement leaves open the possibility of a death spiral.

Sharockman stood behind the fact check's apparent spin during his appearance on the Hugh Hewitt Show:
AS: Hugh, you’re misleading the listeners.

HH: …is that we have gone from 7…

AS: You’re misleading the listeners. The same CBO report that you’re quoting said that the markets are stable whether it’s the AHCA…
Again, unless Sharockman has some version of a CBO report different from what we have found, we judge that Sharockman and PolitiFact are misleading people about the content of the report.

We used email to point out the discrepancy to Sharockman and asked him to provide support for his and PolitiFact's interpretation of the CBO report.

We will update this article if we receive a response from Sharockman that includes such evidence.

Tuesday, March 28, 2017

Hugh Hewitt v. PolitiFact (Power Line Blog)

Via Power Line blog, the liberal bloggers at PolitiFact tangle with conservative radio show host Hugh Hewitt:

The issue: During a television appearance, Hewitt said the ACA is in a death spiral. PolitiFact did its usual limited survey of experts and ruled Hewitt's statement "False."

Part 1: PolitiFact Strikes Hugh Hewitt

A favorite part:
Allison Graves evaluates Hugh’s assertion regarding the Obamacare death spiral for PolitiFact. She defines the question in a manner that tends to belie Hugh’s assertion, cites some relevant authorities and rates Hugh’s assertion False.

I think this is a question on which reasonable minds can disagree, depending on how the question is framed. I would rate Graves’s judgment False in implying the contrary.
Part 2: Pol[i]tiFact strikes Hugh Hewitt (2)

A favorite part (quotation of Hewitt):
PolitiFact is a liberal-agenda-driven group of classically lefty “journalists” masquerading as a non-partisan evaluators of arguments. In this case their defense of their “journalism” rests on a partial and biased recounting of a 10:20 a.m. Meet the Press roundtable discussion, one which omits my stated acknowledgment of a differing argument therein, and their discounting of the expert testimony of a major insurance company president, along with a Sunday afternoon three-hour “deadline” window for response following a perfunctory email to a booker of a show that runs Monday through Friday, when the host is himself online and answering a journalists’ questions and comments.
To this we would add that PolitiFact's story misrepresents a Congressional Budget Office report.

PolitiFact cited the CBO in support of its finding that the ACA is not in a death spiral:
The nonpartisan Congressional Budget Office, as part of its recent analysis of the GOP legislation, described the Affordable Care Act as stable.
PolitiFact failed to link to the CBO in this fact check, but the source wasn't hard to find. The tough part was figuring out why PolitiFact added its own spin to the CBO's view (bold emphasis added):

Stability of the Health Insurance Market

Decisions about offering and purchasing health insurance depend on the stability of the health insurance market—that is, on having insurers participating in most areas of the country and on the likelihood of premiums’ not rising in an unsustainable spiral. The market for insurance purchased individually (that is, nongroup coverage) would be unstable, for example, if the people who wanted to buy coverage at any offered price would have average health care expenditures so high that offering the insurance would be unprofitable. In CBO and JCT’s assessment, however, the nongroup market would probably be stable in most areas under either current law or the legislation.

Under current law, most subsidized enrollees purchasing health insurance coverage in the nongroup market are largely insulated from increases in premiums because their out-of-pocket payments for premiums are based on a percentage of their income; the government pays the difference. The subsidies to purchase coverage combined with the penalties paid by uninsured people stemming from the individual mandate are anticipated to cause sufficient demand for insurance by people with low health care expenditures for the market to be stable.

Under the legislation, in the agencies’ view, key factors bringing about market stability include subsidies to purchase insurance, which would maintain sufficient demand for insurance by people with low health care expenditures, and grants to states from the Patient and State Stability Fund, which would reduce the costs to insurers of people with high health care expenditures. Even though the new tax credits would be structured differently from the current subsidies and would generally be less generous for those receiving subsidies under current law, the other changes would, in the agencies’ view, lower average premiums enough to attract a sufficient number of relatively healthy people to stabilize the market.
Is it defensible journalistic practice to leave out the "probably" and "most areas" caveats in the CBO report?

Something tells us that if PolitiFact caught a Republican omitting that kind of information, it would result in a rating of "Half True" or worse. Assuming the Republican wasn't making a point that a liberal would like, of course.

Afters:

We just finished listening to PolitiFact's Aaron Sharockman spend an hour on the Hugh Hewitt Show. Sharockman reaffirmed the paraphrase of the CBO we quoted above. When a transcript becomes available, we will look at whether Sharockman magnified the distortion from the original fact check.

Thursday, March 23, 2017

Rorschach context

It seems as though the liberal bloggers (aka "mainstream fact checkers") at PolitiFact treat context like a sort of Rorschach inkblot, to interpret as they see fit.

What evidence prompts these unkind words? The evidence runs throughout PolitiFact's history, but two recent fact-checks inspired the imagery.

The PolitiFact Florida Lens

In our previous post, we pointed out the preposterous "Mostly True" rating PolitiFact Florida gifted to a Florida Democrat who equated the raw gender wage gap with the gender wage gap caused by sex discrimination. The fact checkers did not interpret words uttered in context, "simply because she isn't a man," as an argument that the raw wage gap was entirely the result of gender discrimination. Perhaps it wasn't specific enough, like saying the difference in pay occurred despite doing the same work ("Mostly False")?

Whatever the case, PolitiFact opted not to accept a crystal clear clue that it was checking a claim that mirrored one it had previously rated "Mostly False" and rated the similar claim "Mostly True."

The PolitiFact California Lens

A recent fact check from PolitiFact California makes for a jarring contrast with the PolitiFact Florida item.

California Lt. Governor Gavin Newsom tweeted that Republican Jason Chaffetz had compared the cost of an iPhone to the cost of health care "as if the 2 are the same." Newsom was making the point that health care costs more than an iPhone, so saying the two are the same misses the mark by a California mile.

But did Chaffetz say the costs are the same?

First let's look at how the PolitiFact California lens processed the evidence, then we'll put that evidence together with some surrounding context.

PolitiFact California:
We also examined Newsom’s final claim that Chaffetz had compared the iPhone and health care costs "as if they are the same."

Chaffetz’ comments, particularly his phrase "Americans have choices. And they’ve got to make a choice," leave the impression that obtaining health care is as simple as sacrificing the purchase of a smartphone.
It's worth noting at the outset that PolitiFact California's key evidence doesn't mention the iPhone and does not even imply any type of cost comparison. The only way to adduce Chaffetz's quotation as evidence of a price comparison is through the context of Chaffetz's remarks. And a fact-checker ought to explain to readers how that works, unless the fact checker can count on his audience sharing his ideological bias.

Chaffetz (as quoted at length in the PolitiFact California fact check; bold emphasis added):
"Well we're getting rid of the individual mandate. We're getting rid of those things that people said they don't want. And you know what? Americans have choices. And they've got to make a choice. And so, maybe rather than getting that new iPhone that they just love and they want to go spend hundreds of dollars on that, maybe they should invest it in their own health care. They've got to make those decisions for themselves."
Chaffetz in no way offers anything approaching a clear suggestion that the cost of an iPhone equals the cost of health care or health insurance. His words about people having choices come right after he says the health care bill would eliminate the individual mandate. After that comes the mention of an iPhone costing "hundreds of dollars" that one might instead invest in health care. In context, the statement is just one example of a great number of choices one might make about paying for health care.

The PolitiFact California lens (like magic!) turns Chaffetz's words conveniently into what is needed to say the Democrat said something "Mostly True."

It's the bias, stupid.

We have PolitiFact Florida ignoring clear context to give a Democrat a more favorable rating than she deserves. We have PolitiFact California finding clear evidence from the context where none exists to give a Democrat a more favorable rating than he deserves.

Point out the absurdity to PolitiFact (as we did for the PolitiFact Florida flub) and somebody from the Tampa Bay Times will read the critique and no changes to the article will result.
How are they able to repeatedly overlook problems like these?

The simplest explanation? Because they're biased. Biased to the left. Biased to trust their own work (despite the incongruity with other PolitiFact fact checks!). And Dunning-Kruger out the wazoo.


Clarification: March 27, 2017: Added link to the PolitiFact California fact check of Gavin Newsom.

Tuesday, March 14, 2017

There You Go Again: PolitiFact Florida makes a hash of another gender wage gap ruling

Though PolitiFact is an unreliable fact-checker, at least one can bank on the mainstream fact-checker's ability to flub gender wage gap claims.

We hit PolitiFact on this issue often, but this latest one from PolitiFact Florida is a doozy, rivaling PolitiFact Oregon's remarkable turd from 2014.

Drum roll: PolitiFact Florida, March 14, 2017:

We're presenting a big hunk of the fact check as it appears at PolitiFact Florida to show how PolitiFact Florida effectively contradicts its own reasoning.

In the next-to-last paragraph of its summary, PolitiFact Florida explains that "differences in pay can be affected by the careers women and men choose and taking time off to care for children." Those aren't the only factors affecting the raw wage gap, by the way.

Yet in the ironically named "Share The Facts" version, the "Mostly True" rating blares its message alongside Democrat Patricia Farley's claim that the disparities occur purely based on gender ("simply because she isn't a man"). In other words, the cause is gender discrimination, not different job choices and the like--directly contradicting PolitiFact Florida's caveat. Farley didn't just leave out context. She explicitly denied the key bit of context.

Anyone who knows the difference between the raw gender wage gap and the wage gap based solely on gender discrimination, but uses the larger raw gap while arguing for legislation to reduce gender discrimination, is deceiving people. The raw gender wage gap is not a realistic representation of gender discrimination in wages because of other factors, such as men and women tending to choose careers that pay differently.

So, yes, we're saying that unless Patricia Farley is ignorant about the difference between the gender wage gap and the wage gap caused by pay discrimination, she is lying, as in deliberately deceiving her audience. And PolitiFact Florida is calling her falsehood and potentially intentional deception "Mostly True."

The PolitiFact Florida wage gap fact check is below average for PolitiFact--and that's like failing to leap over a matchbox.


Correction March 15, 2017: Posted the intended URL for the PolitiFact Florida fact check. We had mistakenly used the URL to a related fact check concerning Donald Trump.

Monday, February 27, 2017

Daily Caller: "Politifact Says Trump Is Right, But Rates His Remark ‘Mostly False'"

The Daily Caller notes an item from PolitiFact where President Trump tweeted something PolitiFact found true, after which the fact checkers proceeded to rate the claim "Mostly False."

The Daily Caller's Alex Pfeiffer has the skinny:
The tweet from Trump came after Gateway Pundit reported on the change in the national debt under the two respective presidents and after former Godfather Pizza CEO Herman Cain brought up the figures on Fox News.

Politifact wrote: “The numbers check out. And in fact, the total public debt has dropped another $22 billion since the Gateway Pundit article published, according to data from the U.S. Department of Treasury.”

Despite this, Politifact still gave Trump a rating of “mostly false” and titled its article, “Why Donald Trump’s tweet about national debt decrease in his first month is highly misleading.”
We saw this item and considered writing it up. It seemed to us the type of thing that liberal (or even moderate) readers might excuse, judging that PolitiFact did enough to justify the "Mostly False" rating it gave to Trump's tweet.

The case needs additional information to show that it does not represent a fair fact check.

The definition of "Mostly False"

Did PolitiFact show that Trump's tweet met its definition of "Mostly False"? Here is the definition:
MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.
Trump's tweet did not simply contain "an element of truth." It was true (and misleading). PolitiFact's "Truth-O-Meter" definitions mean little. PolitiFact does not use objective criteria to decide the rating. If objective criteria decided the rating, then PolitiFact's creator would not declare that "Truth-O-Meter" ratings are "entirely subjective."

Sauce for the gander?


If PolitiFact applied its judgments consistently, then the Daily Caller and sites like ours would have little to complain about. But vague definitions that ultimately fail to guide the final rating make it virtually impossible even for well-meaning left-leaning journalists to keep the scales balanced.

Consider an example from the PolitiFact Oregon franchise. PolitiFact Oregon rated Democrat Brad Avakian "Mostly True" for a false statement:
Avakian, citing Census data and echoing claims by Obama and others, said women in Oregon "earn an average of 79 cents for every dollar that men earn for doing the same job." The report he relied on noted that the 79-cent figure applies to full-time, year-round work, although Avakian didn’t include those stipulations.

For starters, the commissioner loses points for cherry-picking the 79-cent figure. Other means of measuring pay gaps between men and women put it considerably less.

The same can be said of the "for doing the same job" piece. As PolitiFact has found previously, the existence of a pay gap doesn’t necessarily mean that all of the gap is caused by individual employer-level discrimination, as Avakian’s claim implies. Some of the gap is at least partially explained by the predominance of women in lower-paying fields, rather than women necessarily being paid less for the same job than men are.

Finally, Avakian used the term "average" when the report he relied on said "median." He could have avoided that by simply saying women "make 79 cents for every dollar a man earns," but since the information he cited contains only median incomes, we find the difference to be inconsequential.

Those caveats aside, he still is well inside the ballpark and the ratio he cited is a credible figure from a credible agency. We rate the claim Mostly True.
That's an inexcusably tilted playing field. If Avakian had described the raw pay gap without saying it compared men and women doing the same job, then his claim would have paralleled Trump's: a true but misleading statement. But Avakian's statement was not true and misleading. It was false and misleading at the same time.

Yet it received a "Mostly True" rating compared to Trump's "Mostly False" rating.

Doesn't fact-checking need better standards than that?



Jeff Adds (1922PST 2/27/17):
We'd love to see PolitiFact reconcile their Mostly False rating of Trump's claim with the rationale behind this gem:



Was there anything misleading about Clinton's statement?
Clinton’s figures check out, and they also mirror the broader results we came up with two years ago. Partisans are free to interpret these findings as they wish, but on the numbers, Clinton’s right. We rate his claim True.
Ha! Silly factseekers. When Trump makes an accurate claim PolitiFact conjures their magical powers of objectivity to decide what is misleading. When lovable ol' Bill makes a claim, heck, PolitiFact is just checkin' the numbers and all you partisans can figure out what it means.

Note that PolitiFact gave Bill Clinton a True rating, which they define as "The statement is accurate and there’s nothing significant missing." Must be nice to be in the club.

We've pointed out how PolitiFact's application of standards is akin to the game of Plinko. With ratings like this it's difficult to view PolitiFact as serious journalists instead of carnival barkers.

Tuesday, February 21, 2017

Another nugget from the Hollyfield interview

In an earlier post we pointed out how managing editor Amy Hollyfield of PolitiFact described its "Truth-O-Meter" in terms hard to reconcile with those used by PolitiFact's creator, Bill Adair.

The Hollyfield interview published at The Politic (Yale University) contains other amusing nuggets, such as this howler (bold emphasis added):
We take accuracy very seriously. Transparency is one of the key things we focus on, which is why we publish all the sources for our fact checks. We flag every correction and have a subject tag called “correction,” so you can see every fact check we’ve put a correction on.
We find Hollyfield's assertion offensive, especially as it occurs in response to a question about this website, PolitiFact Bias.

PolitiFact does a poor job of consistently adding the subject tags to corrected articles.

We pointed out an example in December 2016. PolitiFact California changed the rating of a fact check from "True" to "Half True," publishing a new version of a fact check it had originally published months earlier. Weeks later, PolitiFact California has yet to add a tag to the article that would make it appear on PolitiFact's "Corrections and Updates" page.

Maybe PolitiFact California does not regard rewriting an article as a correction or update?

How about PolitiFact Pennsylvania from January 2017? Lawyers pointed out that the Pennsylvania PolitiFact franchise incorrectly described the standard of evidence courts use for criminal cases. PolitiFact Pennsylvania ran a correction (the correction made the fact check incoherent, but that's another story), but added no tag to the story.


So, contrary to what Hollyfield claims, the corrected story is not transparently presented on its "Corrections and Updates" page.

PolitiFact's spotty compliance with its statement of principles is not new. We even complained about the problem to Paul Tash, the president of the Tampa Bay Times (Nov. 18, 2016). But we've noticed no improvement.

PolitiFact does not have a page that transparently informs readers of all of its corrections.

Will you believe Amy Hollyfield or your own lyin' eyes?

Monday, February 20, 2017

PolitiFact's "Truth-O-Meter": Floor wax, or dessert topping?

The different messages coming from PolitiFact founder Bill Adair and current PolitiFact managing editor Amy Hollyfield in recent interviews reminded me of a classic Saturday Night Live sketch.

In one interview (Pacific Standard), Adair said deciding PolitiFact's "Truth-O-Meter" ratings was "entirely subjective."

In the other interview (The Politic), Hollyfield gave a different impression:
There are six gradations on our [Truth-O-Meter] scale, and I think someone who’s not familiar with it might think it’s hard to sort out, but for people who’ve been at it for so long, we’ve done over 13,000 fact checks. To have participated in thousands of those, we all have a pretty good understanding of what the lines are between “true” and “mostly true,” or “false” and “pants on fire.”
If PolitiFact's "star chamber" of editors has a good understanding of the lines of demarcation between each of the ratings, that suggests objectivity, right?

Reconciling these statements about the "Truth-O-Meter" seems about as easy as reconciling New Shimmer's dual purposes as a floor wax and a dessert topping. Subjective and objective are polar opposites, perhaps even more so than floor wax and dessert topping.

If, as Hollyfield appears to claim, PolitiFact editors have objective criteria to rely on in deciding on "Truth-O-Meter" ratings, then what business does Adair have claiming the ratings are subjective?

Can both Adair and Hollyfield be right? Does New Shimmer's exclusive formula prevent yellowing and taste great on pumpkin pie?

Sorry, we're not buying it. We consider PolitiFact's messaging about its rating system another example of PolitiFact's flimflammery.

We think Adair must be right that the Truth-O-Meter is primarily subjective. The line between "False" and "Pants on Fire" as described by Hollyfield appears to support Adair's position:
“False” is simply inaccurate—it’s not true. The difference between that and “pants on fire” is that “pants on fire” is something that is utterly, ridiculously false. So it’s not just wrong, but almost like it’s egregiously wrong. It’s purposely wrong. Sometimes people just make mistakes, but sometimes they’re just off the deep end. That’s sort of where we are with “pants on fire.”
Got it? It's "almost like" and "sort of where we are" with the rating. Or, as another PolitiFact editor from the "star chamber" (Angie Drobnic Holan) memorably put it: "Sometimes we decide one way and sometimes decide the other."


Afters

Though PolitiFact has over the years routinely denied that it accuses people of lying, Hollyfield appears to have wandered off the reservation with her statement that "Pants on Fire" falsehoods on the "Truth-O-Meter" are "purposely wrong." A purposely wrong falsehood would count as a lie in its strong traditional sense: A falsehood intended to deceive the audience. But if that truly is part of the line of demarcation between "False" and "Pants on Fire," then why has it never appeared that way in PolitiFact's statement of principles?

Perhaps that criterion exists only (subjectively) in Hollyfield's mind?


Update Feb. 20, 2017: Removed an unneeded "the" from the second paragraph

Sunday, February 19, 2017

Power Line: "Trump 4, PolitiFact 1"

John Hinderaker, writing for the Power Line blog, does a quick rundown of five PolitiFact fact checks of President Donald Trump. Hinderaker scores the series 4-1 for Trump.

Read it through for the specifics.

Our favorite part occurs at the end:
We could go through this exercise multiple times every day. Correcting the Democratic Party “fact checkers” would be a full-time job that I don’t plan to undertake. Suffice it to say that Trump is more often right than are the press’s purported fact checkers who pretend to correct him.
We continue to marvel at PolitiFact's supernatural ability to ignore substantive criticism. How often does it answer charges that it has done its job poorly?

If PolitiFact is an honest and transparent attempt at objective fact-checking, then we think PolitiFact should aggressively defend itself against such charges, or else change its articles accordingly.

On the other hand, if PolitiFact is a sham attempt at objective fact-checking, maybe it's smart to ignore criticism, trusting that its readers will conclude the criticisms did not deserve an answer.

Maybe there's an explanation that splits the difference?

Friday, February 17, 2017

PolitiFact: That was then, this is now

Now (2017)

PolitiFact is independent! That means nobody chooses for PolitiFact what stories PolitiFact will cover. PolitiFact made that clear with its recent appeal for financial support through its "Truth Squad" members--persons who contribute financially to PolitiFact (bold emphasis added):
Our independence is incredibly valuable to us, and we don't let anyone — not politicians, not grant-making groups, not anyone — tell us what to fact-check or what our Truth-O-Meter rulings should be. At PolitiFact, those decisions are made solely by journalists. With your help, they always will be.
Got it? Story selection is done solely by PolitiFact journalists. That's independence.

Then (2015)

In early 2015, PolitiFact started its exploration of public funding with a Kickstarter program geared toward funding its live fact checks of the 2015 State of the Union address.

Supporters donating $100 or more got to choose what PolitiFact would fact check. Seriously. That's what PolitiFact offered:

Pledge $100 or more

Pick the fact-check. We’ll send you a list of four fact-checks we’re thinking of working on. You decide which one we do. Plus the coffee mug, the shout out and the mail.
We at PolitiFact Bias saw this scam for what it was back then: It was either a breach of journalistic ethics in selling its editorial discretion, or else a misleading offer making donors believe they were choosing the fact check when in reality the editorial discretion was kept in-house by the PolitiFact editors.

Either way, PolitiFact acted unethically. And if Angie Drobnic Holan is telling the truth that PolitiFact always has its editorial decisions made by journalists, then we can rest assured that PolitiFact brazenly misled people in advertising its 2015 Kickstarter campaign.


Clarification Feb. 18, 2017: Belatedly added the promised bold emphasis in the first quotation of PolitiFact.

Tuesday, February 14, 2017

PolitiFact California "fact": Undocumented immigrants count as Americans

The secret formula for finding PolitiFact mistakes: Just look at what fact PolitiFact is checking, try to imagine how a biased liberal would flub the fact check, then look to see if that mistake occurred.

PolitiFact California makes this technique work like magic. Case in point:

We wondered if PolitiFact California and Gov. Brown count undocumented immigrants as "Californians." We wondered if PolitiFact California would even concern itself over who counts as a "Californian."

The answer? No. And PolitiFact California made its mistake even more fundamental by putting a twist on what Gov. Brown claimed. This was the statement Brown made in his 2017 State of the State address:
This is California, the sixth most powerful economy in the world. One out of every eight Americans lives right here and 27 percent – almost eleven million – were born in a foreign land.
Brown did not say 27 percent of "Californians" are foreign-born. In context, he said 27 percent of the Americans (U.S. citizens) in California are foreign-born. If Brown had referred to "Californians," the dictionary would have given him some cover. A resident of California can qualify as a "Californian."

But Merriam-Webster provides no such cover for the definition of "American":


Only one of the four definitions fits the context of Brown's claim. That is definition No. 3.

The problem for Brown and PolitiFact California? Both relied on Census Bureau data. The Census Bureau counts citizens and non-citizens in its population survey. About 3 million of California's population (the Kaiser Family Foundation estimates about 5 million) do not hold American citizenship and do not count as "American" by definition No. 3. Subtract 3 million from the number PolitiFact California used as the number of Californians, and subtract 3 million from the number of foreign-born California residents, and the percentage of foreign-born Americans in California (definition No. 3) comes up as 22 percent, not 27 percent.

If the true number of undocumented Californians is 5 million, then the percentage drops below 18 percent.
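
To make the arithmetic explicit, here is a minimal sketch of the calculation, using round figures of 39 million California residents and 11 million foreign-born residents (illustrative assumptions on our part; the exact Census Bureau numbers may differ slightly):

```python
# Illustrative check of the foreign-born arithmetic (round figures;
# the exact Census Bureau numbers may differ slightly).
total_population = 39_000_000   # assumed California population
foreign_born = 11_000_000       # Brown's "almost eleven million"

# Remove non-citizens from both the numerator and the denominator to
# estimate the share of foreign-born *Americans* (definition No. 3).
for non_citizens in (3_000_000, 5_000_000):
    share = (foreign_born - non_citizens) / (total_population - non_citizens)
    print(f"Excluding {non_citizens:,} non-citizens: {share:.1%}")
# Excluding 3,000,000 non-citizens: 22.2%
# Excluding 5,000,000 non-citizens: 17.6%

# Relative size of the error at the 3 million estimate:
print(f"{(27 - 22) / 22:.0%}")  # 23%
```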

Gov. Brown's figure is off by at least 5 percentage points, representing a percentage error of almost 23 percent. And PolitiFact California found it completely true:
Gov. Jerry Brown claimed in his State of the State Address that 27 percent of Californians, almost 11 million, "were born in a foreign land."

A 2015 American Community Survey by the U.S. Census Bureau verifies that statistic. Additionally, a researcher at the Public Policy Institute of California, which studies the state’s immigration and demographic patterns, confirmed the census report is the best authority on California’s foreign born population.

We rate Brown's claim True.

TRUE – The statement is accurate and there’s nothing significant missing.
To us, this looks like a classic case of a journalist's liberal bias damping proper skepticism. This type of mistake was predictable. We predicted it. And PolitiFact California delivered it.

Wednesday, February 8, 2017

PolitiFact misleads its "Truth Squad"

PolitiFact is in the midst of conducting a successful campaign for raising financial support from its readers.

We found it interesting and ironic that PolitiFact is using a misleading appeal toward that end:
As readers have cheered us on, plenty of politicians have actively rooted against us. At the 2012 Republican National Convention, journalists challenged Mitt Romney’s campaign team about an ad that falsely claimed Barack Obama was ending work requirements for welfare. Romney pollster Neil Newhouse responded by saying, "We're not going to let our campaign be dictated by fact-checkers."
Problem one: So far as we can tell, the fact checkers never responded to vigorous criticisms of their ruling on President Barack Obama's welfare work requirement tweak. That's despite basing the ruling essentially on Obama administration claims about what it was trying to accomplish with its welfare work requirement waiver provision.

Problem two: PolitiFact is taking the statement from Neil Newhouse out of context. And all the mainstream (left-leaning) fact checkers seem to enjoy doing that to enhance the popular view of their work.

I exposed that deception with an article at Zebra Fact Check:
What was Newhouse saying? We think the context makes clear Newhouse was not expressing a disdain for facts but instead expressing his distrust of fact checkers. The ABC News report makes that clear with its paraphrase of Newhouse: “Newhouse suggested the problem was with the fact-checkers, not the facts themselves.”

We’ll see that all three of the major fact checkers ignored the meaning ABC News identified for Newhouse’s statement and replaced it with a meaning that better served their purposes.
The fact checkers, including PolitiFact, misleadingly use Newhouse's statement as evidence campaigns do not care about the truth, and that, in turn, helps justify their own existence. And apparently the fact checkers themselves are perfectly willing to twist the truth to achieve that noble (selfish) end.

PolitiFact's "Truth Squad" is likely to end up as a left-leaning mob interested primarily in supporting journalism that attacks Republicans, conservatives and President Trump in particular.







Edit: A draft version of a Jeff Adds section was published today in error. We have since removed the section. Prior to removal we saved a version of this page that included the section. That can be found at the Internet Archive.
-Jeff 1619PST 2/10/2017

Sunday, January 29, 2017

PolitiFact continues its campaign of misinformation on waterboarding

Amazing. Simply amazing.

PolitiFact Bias co-editor Jeff D. caught PolitiFact continuing its tendency to misinform its readers about waterboarding in a Jan 29, 2017 tweet:
PolitiFact's claim was untrue, as I demonstrated in a May 30, 2016 article at Zebra Fact Check, "Torture narrative trumps facts at PolitiFact."

Though PolitiFact claims scientific research shows waterboarding doesn't work, the only "scientific" evidence in the linked article concerns the related conditions of hypoxia (low oxygen) and hypercapnia (excess carbon dioxide). PolitiFact reasoned that because science shows hypoxia and hypercapnia inhibit memory, waterboarding would not work as a means of gaining meaningful intelligence.

The obvious problem with that line of evidence?

Waterboarding as practiced by the CIA takes mere seconds. Journalist Christopher Hitchens had himself waterboarded and broke after about 18 seconds, saying he would tell whatever he knew. Memos released by the Obama administration revealed that a continuous waterboarding treatment could last a maximum of 40 seconds.

  • Prisoners could be subjected to waterboarding during one 30-day period
  • Maximum five treatment days per 30 days
  • Maximum two waterboarding sessions per treatment day
  • Maximum 2 hours per session (the length of time the prisoner is strapped down)
  • Maximum 40 seconds of continuous water application
  • Maximum six water applications over 10 seconds long per session
  • Maximum 240 seconds (four minutes) of waterboarding per session from applications over 10 seconds long
  • Maximum total of 12 minutes of treatment with water over any 24-hour period
  • Applications under 10 seconds long could make up a maximum of 8 minutes on top of the four mentioned above
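
As a quick sanity check on how those caps combine (our own arithmetic from the list above, not a figure stated in the memos):

```python
# How the memo caps combine (our arithmetic from the list above,
# not a total stated in the memos themselves).
long_apps_per_session = 6    # applications over 10 seconds long
seconds_per_long_app = 40    # maximum continuous application

long_seconds = long_apps_per_session * seconds_per_long_app
print(long_seconds)          # 240 seconds, i.e. the four-minute cap

short_app_minutes = 8        # cap for applications under 10 seconds
session_minutes = long_seconds / 60 + short_app_minutes
print(session_minutes)       # 12.0 -- a single session could reach the
                             # 12-minute ceiling for any 24-hour period
```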

While it is worth noting that reports indicate the CIA exceeded these guidelines in the case of al Qaeda mastermind Khalid Sheik Mohammed, these limits are not conducive to creating significant conditions of hypoxia or hypercapnia.

The typical person can hold their breath for 40 seconds without too much difficulty or distress. The CIA's waterboarding was designed to bring about the sensation of drowning, not the literal effects of drowning (hypoxia, hypercapnia, aspiration and swallowing of water). That is why the techniques often break prisoners in about 10 seconds.

And the other problem?

The CIA did not interrogate prisoners while waterboarding them. Nor did the CIA use the technique to obtain confessions under duress. Waterboarding was used to make prisoners more amenable to conventional forms of interrogation.

None of this information is difficult to find.

Why do the fact checkers at PolitiFact (not to mention elsewhere) have such a tough time figuring this stuff out?

There likely isn't any significant scientific evidence either for or against the effectiveness of waterboarding. PolitiFact pretending there is does not make it so.

Friday, January 20, 2017

Hans Bader: "The Strange Ignorance of PolitiFact"

Hans Bader, writing at Liberty Unyielding, points out a Jan. 19, 2017 fact-checking train wreck from PolitiFact Pennsylvania. PolitiFact Pennsylvania looked at a claim Sen. Bob Casey (D-Penn.) used to try to discredit President-elect Donald Trump's nominee for Secretary of Education, Betsy DeVos.

Bader's article initially emphasized PolitiFact Pennsylvania's apparent ignorance of the "reasonable doubt" standard in United States criminal cases:
In an error-filled January 19 “fact-check,” PolitiFact’s Anna Orso wrote about “the ‘clear and convincing’ standard used in criminal trials.”  The clear and convincing evidence standard is not used in criminal trials. Even my 9-year old daughter knows that the correct standard is “beyond a reasonable doubt.”
By the time we started looking at this one, PolitiFact Pennsylvania had started trying to spackle over its faults. The record (at the Internet Archive) makes clear that PolitiFact's changes to its text got ahead of its policy of announcing corrections or updates.

Eventually, PolitiFact continued its redefinition of the word "transparency" with this vague description of its corrections:
Correction: An earlier version of this article incorrectly characterized the standard of evidence used in criminal convictions.
Though PolitiFact Pennsylvania corrected the most obvious and embarrassing problem with its fact check, other problems Bader pointed out still remain, such as its questionable characterization of the Foundation for Individual Rights in Education's civil rights stance as "controversial."

For our part, we question PolitiFact Pennsylvania for apparently uncritically accepting a key premise connected to the statement it claimed to fact check:
Specifically, Casey said the Philadelphia-based Foundation for Individual Rights in Education supports a bill that "would change the standard of evidence." He said the group is in favor of ditching the "preponderance of the evidence" standard most commonly used in Title IX investigations on college campuses and instead using the "beyond a reasonable doubt" standard used in criminal cases.
PolitiFact claimed to simply fact check whether DeVos had contributed to FIRE. But without the implication that FIRE is some kind of far-outside-the-mainstream group, who cares?

We say that given PolitiFact Pennsylvania's explanation of Casey's attack on DeVos, a fact checker needs to investigate whether FIRE supported a bill that would change the standard of evidence.

PolitiFact Pennsylvania offers its readers no evidence at all regarding any such bill. If there is no bill as Casey described, then PolitiFact Pennsylvania's "Mostly True" rating serves to buoy a false charge against DeVos (and FIRE).

Ultimately, PolitiFact Pennsylvania fails to coherently explain the point of contention. The Obama administration tried to restrict schools from using the "clear and convincing" standard:
Thus, in order for a school’s grievance procedures to be consistent with Title IX standards, the school must use a preponderance of the evidence standard (i.e., it is more likely than not that sexual harassment or violence occurred). The “clear and convincing” standard (i.e., it is highly probable or reasonably certain that the sexual harassment or violence occurred), currently used by some schools, is a higher standard of proof. Grievance procedures that use this higher standard are inconsistent with the standard of proof established for violations of the civil rights laws, and are thus not equitable under Title IX. Therefore, preponderance of the evidence is the appropriate standard for investigating allegations of sexual harassment or violence.
FIRE objected to that. But objecting to that move from the Obama administration does not mean FIRE advocated using the "beyond a reasonable doubt" standard (as PolitiFact's story now reads). That also goes for the "clear and convincing" standard mentioned in the original version.

PolitiFact Pennsylvania simply skipped out on investigating the linchpin of Casey's argument.

There's more hole than story to this PolitiFact Pennsylvania fact check.

Be sure to read Bader's article for more.


Update Jan 21, 2017: Added link to the Department of Education's April 4, 2011 "Dear Colleague" letter
Update Jan 24, 2017: Added a proper ending to the second sentence in the third-to-last paragraph 
Update Feb. 2, 2017: Added "article" after "Bader's" in the second paragraph to make the sentence more sensible

Wednesday, January 18, 2017

Thought-Checkers: Protecting against Fakethink

Everything you can imagine is real
                                 -Pablo Picasso*


Not so fast, Pablo! 

We stumbled across this silly piece by Lauren Carroll (of fact-checking flat earth claims fame) where Carroll somehow determines as objective fact the limits of Betsy DeVos' ability to imagine things: 




DeVos was asked a question she didn't know the answer to, so she offered a guess and explicitly stated she was offering a guess.

The difference between making a statement of fact and offering your best guess seems far too complicated for either Carroll or her editors, who let this editorial opportunity escape their liberal grasp.

This isn't a fact check; it's a hit piece by a journalish apparently more eager to smear a Trump pick than to acknowledge what DeVos actually said. Oddly, Carroll doesn't list any attempts to contact either DeVos or anyone in the Trump camp to get a clarification, a courtesy PolitiFact has extended in the past.

PolitiFact is pushing Fake News by accusing DeVos of making a claim when she was stating a theoretical possibility that she could imagine. The real crime here is garbage ratings like this will end up in DeVos' unscientific "report card" on those bogus charts PolitiFact dishonestly pimps out to readers as objective data.

PolitiFact's disdain for all things Trump is clear and it's only going to get worse. The administration hasn't even begun yet and they're already fact-checking what someone can or cannot imagine. 

Happy thoughts!




*attributed




Sunday, January 8, 2017

Not a fact checker's argument, but PolitiFact went there

A few days ago we highlighted a gun-rights research group's criticism of a PolitiFact California fact check. The fact check found it "Mostly True" that over seven children per day fall victim to gun violence, even though that number includes suicides and "children" aged 18 and 19.

A dubious finding? Sure. But at least PolitiFact California's fact check did not try to use the rationale that might have made all victims of gun violence "children." The PolitiFact video used to help publicize the fact check (narrated by PolitiFact California's Chris Nichols), however, went there:

How many teenagers in the background photo are 18 or over, we wonder?

Any parent will tell you that any child of theirs is a child, regardless of age. But that definition makes the modifier "children" useless in a claim about the effect on children from gun violence. "Children" under that broad definition includes all human beings with parents. That counts most, if not all, human beings as children.

Nichols' argument does not belong in a fact check. It belongs in a political ad designed around the appeal to emotion.

The only sensible operative definition of "children" here is humans not yet of age (18 years, in the United States). All persons under 18 are "children" by this definition. But not all teenagers are "children" by this definition.

To repeat the gist of the earlier assessment, the claim was misleading but PolitiFact covered for it with an equivocation fallacy. The video, featuring an even more outrageous equivocation fallacy, just makes PolitiFact marginally more farcical.




Edit: Added link to CPRC in first graph-Jeff 0735PST 1/12/2017

Thursday, January 5, 2017

Evidence of PolitiFact's bias? The Paradox Project II

On Dec. 23, 2016, we published our review of the first part of Matthew Shapiro's evaluation of PolitiFact. This post will cover Shapiro's second installment in that series.

The second part of Shapiro's series showed little reliance on hard data in any of its three main sections.

Top Five Lies? Really?

Shapiro's first section identifies the top five lies, respectively, for Trump and Clinton and looks at how PolitiFact handles his list. Where does the list of top lies come from? Shapiro evidently chose them. And Shapiro admits his process was subjective (bold emphasis added):

It is extremely hard to pin down exactly which facts PolitiFact declines to check. We could argue all day about individual articles, but how do you show bias in which statements they choose to evaluate? How do you look at the facts that weren’t checked?

Our first stab at this question came from asking which lies each candidate was famous for and checking to see how PolitiFact evaluated them. These are necessarily going to be somewhat subjective, but even so the results were instructive.

It seems to us that Shapiro leads off his second installment with facepalm material.

Is an analysis data-driven if you're looking only at data sifted through a subjective lens? No. Such an analysis gets its impetus from the view through the subjective lens, which leads to cherry-picked data. Shapiro's approach to the data in this case wallows in the same mud in which PolitiFact basks with its ubiquitous "report card" graphs. PolitiFact gives essentially the same excuse for its subjective approach that we see from Shapiro: Sure, it's not scientific, but we can still see something important in these numbers!

Shapiro offers his readers nothing to serve as a solid basis for accepting his conclusions based on the Trump and Clinton "top five lies."

Putting the best face on Shapiro's evidence, yes PolitiFact skews its story selection. And the most obvious problem from the skewing stems from PolitiFact generally ignoring the skew when it publishes its "report cards" and other presentations of its "Truth-O-Meter" data. Using PolitiFact's own bad approach against it might carry some poetic justice, but shouldn't we prefer solid reasoning in making our criticisms of PolitiFact?

The Rubio-Reid comparison

In Shapiro's second major section, he highlights the jaw-dropping disparity between PolitiFact's focus on Marco Rubio, starting with Rubio's 2010 candidacy for the Senate, and its focus on Sen. Harry Reid, long-time senator as well as majority leader and minority leader during PolitiFact's foray into political fact-checking.

Shapiro offers his readers no hint regarding the existence of PolitiFact Florida, the PolitiFact state franchise that accounts in large measure--if not entirely--for PolitiFact's disproportional focus on Rubio. Was Shapiro aware of the different state franchises and how their existence (or non-existence) might skew his comparison?

We are left with an unfortunate dilemma: Either Shapiro knew of PolitiFact Florida and decided not to mention it to his readers, or else he failed to account for its existence in his analysis.


The Trump-Pence-Cruz muddle

Shapiro spends plenty of words and uses two pretty graphs in his third major section to tell us about something that he says seems important:
One thing you may have noticed through this series is that the charts and data we’ve culled show a stark delineation between how PolitiFact treats Republicans versus Democrats. The major exceptions to the rules we’ve identified in PolitiFact ratings and analytics have been Trump and Vice President-elect Mike Pence. These exceptions seem important. After all, who could more exemplify the Republican Party than the incoming president and vice president elect?
Shapiro refers to his observation that PolitiFact tends to use more words when grading the statements of Republicans. Except PolitiFact uses words economically for Trump and Pence.

What does it mean?

Shapiro concludes PolitiFact treats Trump like a Democrat. What does that mean, in its turn, other than PolitiFact does not use more words than average to justify its ratings of Trump (yes, we are emphasizing the circularity)?

Shapiro, so far as we can tell, does not offer up much of an answer. Note the conclusion of the third section, which also concludes Shapiro's second installment of his series:
In this context, PolitiFact’s analysis of Trump reinforces the idea that the media has [sic] called Republicans liars for so long and with such frequency the charge has lost it sting. PolitiFact treated Mitt Romney as a serial liar, fraud, and cheat. They attacked Rubio, Cruz, and Ryan frequently and often unfairly.

But they treated Trump like they do Democrats: their fact-checking was short, clean, and to the point. It dealt only with the facts at hand and sourced those facts as simply as possible. In short, they treated him like a Democrat who isn’t very careful with the truth.
The big takeaway is that PolitiFact's charge that Republicans are big fat liars doesn't carry the zing it once carried? But how would cutting down on the number of words restore the missing sting? Or are PolitiFact writers bowing to the inevitable? Why waste extra words making Trump look like a liar, when it's not going to work?

We just do not see anything in Shapiro's data that particularly recommends his hypothesis about the "crying wolf" syndrome.

An alternative hypothesis

We would suggest two factors that better explain PolitiFact's economy of words in rating Trump.

First, as Shapiro pointed out earlier in his analysis, PolitiFact did many of its fact-checks of Trump multiple times. Is it necessary to go to the same great lengths every time when one is writing essentially the same story? No. The writer has the option of referring the reader to the earlier fact checks for the detailed explanation.

Second, PolitiFact plays to narratives. PolitiFact's reporters allow narrative to drive their thinking, including the idea that their audience shares their view of the narrative. Once PolitiFact has established its narrative identifying a Michele Bachmann, Sarah Palin or a Donald Trump as a stranger to the truth, the writers excuse themselves from spending words to establish the narrative from the ground up.

Maddeningly thin

Is it just us, or is Shapiro's glorious multi-part data extravaganza short on substance?

Let's hope future installments lead to something more substantial than what he has offered so far.

Monday, January 2, 2017

CPRC: "Is Politifact really the organization that should be fact checking Facebook on gun related facts?"

The Crime Prevention Research Center, on Dec. 29, 2016, published a PolitiFact critique that might well have made our top 11 if we had noticed it a few days sooner.

Though the title of the piece suggests a general questioning of PolitiFact's new role as one of Facebook's guardians of truth, the article mainly focuses on one fact check from PolitiFact California, rating "Mostly True" the claim that seven children die each day from gun violence.

The CPRC puts its strongest argument front and center:
Are 18 and 19 year olds “children”?

For 2013 through 2015 for ages 0 through 19 there were 7,838 firearm deaths.  If you exclude 18 and 19 year olds, the number firearm deaths for 2013 through 2015 is reduced by almost half to 4,047 firearm deaths.  Including people who are clearly adults drives the total number of deaths.

Even the Brady Campaign differentiates children from teenagers.  If you just look at those who aren’t teenagers, the number of firearm deaths declines to 692, which comes to 0.63 deaths per day.
This argument cuts PolitiFact California's fact check to the quick. Instead of looking at "children" as something to question, the fact-checkers let it pass with a "he-said, she-said" caveat (bold emphasis added):
These include all types of gun deaths from accidents to homicides to suicides. About 36 percent resulted from suicides.

Some might take issue with Speier lumping in 18 year-olds and 19 year-olds as children.

Gun deaths for these two ages accounted for nearly half of the 7,838 young people killed in the two-year period.
Yes, some might take issue with lumping 18 year-olds and 19 year-olds in as children, particularly when checking Merriam-Webster quickly reveals how the claim stretches the truth. The distortion maximizes the emotional appeal of protecting "children."

Merriam-Webster's definition No. 2:
a :  a young person especially between infancy and youth
b :  a childlike or childish person  
c :  a person not yet of age
"A person not yet of age" provides the broadest reasonable understanding of the claim PolitiFact California checked. In the United States, persons 18 and over qualify as "of age."

Taking 18- and 19-year-olds out of the mix all by itself cuts the estimate nearly in half. Great job, PolitiFact California.
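
For reference, the per-day rates implied by the CPRC's 2013-2015 totals work out as follows (a quick sketch using the figures quoted above; 2013 through 2015 spans 1,095 days):

```python
# Per-day rates implied by the CPRC's 2013-2015 totals quoted above.
DAYS = 3 * 365  # 2013 through 2015

totals = {
    "ages 0-19": 7_838,       # includes 18- and 19-year-olds
    "ages 0-17": 4_047,       # 18- and 19-year-olds excluded
    "non-teens (0-12)": 692,  # all teenagers excluded
}

for label, deaths in totals.items():
    print(f"{label}: {deaths / DAYS:.2f} per day")
# ages 0-19: 7.16 per day   <- the "seven children per day" figure
# ages 0-17: 3.70 per day
# non-teens (0-12): 0.63 per day
```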

Visit CPRC for more, including the share of "gun violence" accounted for by suicide and justifiable homicide.