Showing posts with label hypocrisy. Show all posts

Thursday, August 19, 2021

PolitiFact supplies misleading missing context

This week the fact checkers at PolitiFact fixed a supposed problem with missing context by supplying completely misleading context.

Gohmert wasn't talking about solar panel farms. He was talking about facilities that concentrate reflected sunlight. Nor did Gohmert suggest avian deaths would bring the nation down. But those blunders represent the least of our worries.

The big problem in the fact check comes from its attempt to set the record straight. PolitiFact claimed Gohmert left out the fact that fossil fuel plants cause far more deaths than solar energy plants like the one Gohmert mentioned: "Solar farms kill thousands of birds, but not as many as fossil fuel plants."

"Is that true?" we wondered.

It may be true, we suppose. But the reasoning PolitiFact provided was illegitimate.

"It is wrong to single out solar and wind (power) as having bird mortality issues," said David Jenkins, president of Conservatives for Responsible Stewardship. "The estimated number of birds killed by fossil fuel power plants through collisions, electrocution and poisoning actually dwarfs those attributed to solar and wind."

A 2016 study found that solar power plants cause 37,800 to 138,600 annual avian deaths in the U.S., compared with 14.5 million attributed to fossil fuel power plants. Another study attributed 365 million to 988 million avian deaths to collisions with buildings and windows.

The big problem (there are many small problems in the fact check) starts between the two paragraphs above. The Jenkins quotation sets up the reader to expect that avian deaths caused by fossil fuel plants will represent deaths from "collisions, electrocution and poisoning."

But the second paragraph betrays that expectation. The 14.5 million estimate in the second paragraph comes almost entirely from the predicted effects of climate change.

We must be kidding, right?

We're not kidding.

PolitiFact's link leads to A preliminary assessment of avian mortality at utility-scale solar energy facilities in the United States, hosted at ScienceDirect. That paper estimates bird deaths at facilities like the Ivanpah solar facility Gohmert mentioned, including those under construction. The paper says it includes collisions with facility structures along with birds killed while trying to fly through the concentrated sunlight (formatting tweaked to help simulate the appearance of the original):

There are currently 2 known types of direct solar energy-related bird mortality [9], [12], [13]:

  1. Collision-related mortality – mortality resulting from the direct contact of the bird with a solar project structure(s). This type of mortality has been documented at solar projects of all technology types.
  2. Solar flux-related mortality – mortality resulting from the burning/singeing effects of exposure to concentrated sunlight. Mortality may result in several ways: (a) direct mortality; (b) singeing of flight feathers that cause loss of flight ability, leading to impact with other objects; or (c) impairment of flight capability to reduce the ability to forage or avoid predators, resulting in starvation or predation of the individual [12]. Solar flux-related mortality has been observed only at facilities employing power tower technologies.

As for the estimate for fossil fuel energy generation, the authors derived that based on research from an earlier paper:

We ... used the mortalities calculated by Sovacool [25] as an estimate of avian mortalities associated with fossil fuel power plants across the United States.

The Sovacool paper did not limit itself to the avian death categories PolitiFact mentioned. PolitiFact readers would naturally conclude that in a typical year such as 2019 (after the study was published), fossil fuel power generation resulted in approximately 14 million dead birds from collisions, electrocutions and poisoning.

That's false.

In fact, the study got nearly that entire number by estimating future effects on bird populations in the United States from climate change.

So this PolitiFact fact check will be in the running for worst fact check of the year.

Sovacool:

Adding the avian deaths from coal mining, plant operation, acid rain, mercury, and climate change together results in a total of 5.18 fatalities per GWh (see Table 3).
Table 3:


Table 3 makes abundantly clear that Sovacool bases the great bulk of his estimated avian deaths from fossil fuel electricity generation on the future effects of climate change.

Footnote No. 6 on the previous page makes that conclusion inescapable (bold emphasis added):

While there are more than 9800 species and an estimated global population of 100 billion to one trillion individual wild birds in the world, only 5.6 billion birds live in United States during the summer (Hughes et al., 1997; Elliott, 2003; Hassan et al., 2005). Taking the mean in climate change induced avian deaths expected by Thomas et al. (26%), one gets 1.5 billion birds spread across 41 years for the United States, or an average of 36.6 million dead birds per year. Attributing 39% of these deaths to power plants (responsible for 39% of the country’s carbon dioxide emissions), one gets 14.3 million birds for 2.87 million GWh per year, or 4.98 deaths per GWh.

Note that the number in Sovacool's footnote closely matches the estimate from the paper PolitiFact cited (14.5 million annually).

So PolitiFact is peddling an apples-to-oranges comparison between two types of bird deaths at solar energy power plants and predicted future climate change effects from fossil fuel energy plants. And PolitiFact doesn't tell you that's what it's doing.

It's hypocrisy of the highest order.

There are more layers to this BS narrative on bird deaths from fossil fuels, but suffice it to say that PolitiFact's claim that fossil fuel generation causes far more bird deaths than solar is far more misleading than Gohmert's claim about Ivanpah.

Thursday, December 3, 2020

TDS symptom: PolitiFact fact checks jokes

Sure, President Trump says plenty of false things. He truly does.

But that's actually a trap for left-leaning fact checkers who pretend to be nonpartisan. They have a hard time judging when they go too far. Like when they fact check jokes:

PolitiFact's Nov. 1, 2020 item fact-checking President Trump gave a "Pants on Fire" rating to Trump's claim that his supporters were protecting challenger Joe Biden's campaign bus.

How do we know it was a joke?

We watched video of the Trump appearance where he made the statement. The claim comes in the midst of a segment of a speech done in the style of a classic stand-up comedy routine. Certainly Trump mixed in serious political claims, but many of the lines were intended to provoke laughter, and the one about protecting Biden's bus unquestionably drew laughter. In context, Trump was making a point about the enthusiasm of his supporters, and his story about cars and trucks surrounding the bus emphasized the number of vehicles involved.

PolitiFact played it completely straight:

"You see the way our people, they, you know, they were protecting his bus yesterday," Trump said Nov. 1 during a rally in Michigan. "Because they are nice. They had hundreds of cars."

The FBI’s San Antonio office said Nov. 1 that it is "aware of the incident and investigating."

Trump’s benevolent explanation lacks evidence.

How does a fact checker overlook/omit those contextual clues?

The left-leaning Huffington Post figured it out (bold emphasis added):

President Donald Trump on Sunday mockingly claimed that his supporters were “protecting” a campaign bus belonging to Democratic presidential nominee Joe Biden when a caravan of vehicles dangerously surrounded it on a Texas highway, leading to a vehicular collision.

“They were protecting their bus yesterday because they’re nice,” Trump said at a rally in Michigan to cheers, laughter and applause.

If PolitiFact noticed the audience laughing and intentionally suppressed evidence Trump was joking, then PolitiFact deceived its audience by omission.

When PolitiFact catches politicians doing that sort of thing, a "Half True" rating often results.

PolitiFact does not hold itself to the same standard it applies to Republican politicians.


Correction 12/13/2020: We misspelled "PolitiFact" on the title line, omitting the first of two i's.

Friday, April 10, 2020

PolitiFact claims, without evidence, Trump touted chloroquine as a coronavirus cure

Should fact checkers hold themselves to the standards they expect others to meet?

We say yes.

Should fact checkers meet the standards they claim to uphold?

We say yes.

What does PolitiFact say?
(President Donald) Trump has touted chloroquine or hydroxychloroquine as a coronavirus cure in more than a half-dozen public events since March 19.
PolitiFact published the above claim in an April 8, 2020 PolitiSplainer about hydroxychloroquine, an antimalarial drug doctors have used in the treatment of coronavirus patients.

We were familiar with instances where Mr. Trump mentioned hydroxychloroquine as a potential treatment for coronavirus sufferers. But we had not heard him call it a cure. Accordingly, we tried to follow up on the evidence PolitiFact offered in support of its claim.

The article did not contain any mention of a source identifying the "half-dozen public events since March 19," so we skipped to the end to look at PolitiFact's source list. That proved disappointing.



We tweeted at the article's authors expressing our dismay at the lack of supporting documentation. Our tweet garnered no reply, no attempt to supply the missing information and no change to the original article.

Of note, when co-author Funke tweeted out a link to the article on April 8, his accompanying description read as far more responsible than the language in the article itself:

"Here's what you need to know about hydroxychloroquine, the malaria drug that President Trump has repeatedly touted as a potential COVID-19 treatment."

Does "cure" mean the same thing as "potential treatment" in PolitiFactLand?

We've surveyed Mr. Trump's use of the terms "cure" and "game changer" at the White House website and found nothing that would justify the language PolitiFact used to describe the president's statements.

What else does PolitiFact say?

The burden of proof is on the speaker, and we rate statements based on the information known at the time the statement is made.
What if the speaker says "Trump has touted chloroquine or hydroxychloroquine as a coronavirus cure"? Does the speaker still have the burden of proof? If the speaker is PolitiFact, that is?

It looks like the fact-checkers have yet again allowed a(n apparently false) public narrative to guide their fact-checking.

Friday, August 10, 2018

PolitiFact Editor: It's Frustrating When Others Do Not Follow Their Own Policies Consistently

PolitiFact Editor Angie Drobnic Holan says she finds it frustrating that Twitter does not follow its own policies (bold emphasis added):
The fracas over Jones illustrates a lot, including how good reporting and peer pressure can actually force the platforms to act. And while the reasons that Facebook, Apple and others banned Jones and InfoWars have to do with hate speech, Twitter’s inaction also confirms what fact-checkers have long thought about the company’s approach to fighting misinformation.

“They’re not doing anything, and I’m frustrated that they don’t enforce their own policies,” said Angie Holan, editor of (Poynter-owned) PolitiFact.
Tell us about it.

We started our "(Annotated) Principles of PolitiFact" page years ago to expose examples of the way PolitiFact selectively applies its principles. It's a shame we haven't had the time to keep that page updated, but our research indicates PolitiFact has failed to correct the problem to any noticeable degree.

Wednesday, March 28, 2018

How PolitiFact Fights Its Reputation for Anti-conservative Bias

This week we ran across a new paper with an intriguing title: Everyone Hates the Referee: How Fact-Checkers Mitigate a Public Perception of Bias.

The paper, by Allison Colburn, pretty much concludes that fact checkers do not know how to fight their reputation for bias. Aside from that, it lets the fact checkers describe what they do to try to seem fair and impartial.

The paper mentions PolitiFact Bias, and we'll post more about that later. We place our focus for this post on Colburn's October 2017 interview of PolitiFact Editor Angie Drobnic Holan. Colburn asks Holan directly about PolitiFact Bias (Colburn's words in bold, following the format from her paper):
I'm just kind of curious, there's the site, PolitiFactBias.com. What are what are your thoughts on that site?

That seems to be one guy who's been around for a long time, and his complaints just seem to be that we don't have good, that we don't give enough good ratings, positive ratings to conservatives. And then he just kind of looks for whatever evidence he can find to support that point.

Do you guys ever read his stuff? Does it ever worry you?

He's been making the same complaint for so long that it has tended to become background noise, to be honest. I find him just very singularly focused in his complaints, and he very seldom brings up anything that I learn from.

But he's very, you know, I give him credit for sticking in there. I mean he used to give us, like when he first started he would give us grades for our reporting and our editing. So it would be like grades for this report: Reporter Angie Holan, editor Bill Adair. And like we could never do better than like a D-minus. So it's just like whatever. What I find is it's hard for me to take critics seriously when they never say we do anything right. Sometimes we can do things right, and you'll never see it on that site.
We could probably mine material from these answers for weeks. One visit to our About/FAQ page would prove enough to treat the bulk of Holan's misstatements. Aside from the FAQ, Jeff's tweet of Rodney Dangerfield's mug is the perfect response to Holan's suggestion that PolitiFact Bias is "one guy."



The Holan interview does deliver some on the promise of Colburn's paper. It shows how Holan tries to marginalize PolitiFact's critics.

I summed up one prong of PolitiFact's strategy in a post from Jan. 30, 2018:
Ever notice how PolitiFact likes to paint its critics as folks who carp about whether the (subjective) Truth-O-Meter rating was correct?
In that post, I reported on how Holan bemoaned the fact that PolitiFact critics do not offer factual criticisms of its fact checks, preferring instead to quibble over its subjective ratings.
If they're not dealing with the evidence, my response is like, ‘Well you can say that we're biased all you want, but tell me where the fact-check is wrong. Tell me what evidence we got wrong. Tell me where our logic went wrong. Because I think that's a useful conversation to have about the actual report itself.
Holan says my (our) criticism amounts to a call for more positive ratings for conservatives. So we're just carping about the ratings, right? Holan's summation does a tremendous disservice to our painstaking and abundant research pointing out PolitiFact's errors (for example).

In the Colburn interview Holan also says she has trouble taking criticism seriously when the critic doesn't write articles complimenting what PolitiFact does correctly.

We suppose it must suck to find oneself the victim of selection bias. We suppose Holan must have a tough time taking FactCheck.org seriously, given its policy against publishing fact checks showing a politician spoke the truth without misleading.

The hypocrisy from these people is just too much.

Exit question: Did Holan just not know what she was talking about, or was she simply lying?



Afters

For what it's worth, we sometimes praise PolitiFact for doing something right.



Correction March 31, 2018: We erred by neglecting to include the URL linking to Colburn's paper. We apologize to Allison Colburn and our readers for the oversight.

Tuesday, February 6, 2018

PolitiFact: One standard for me, and another for thee

On Feb. 5, 2018, PolitiFact published an article on cherry picking from one of its veteran writers, Louis Jacobson. Titled "The Age of Cherry-picking," it led with a claim of fact as its main hook:
These days, it isn’t just that Republicans are from Mars and Democrats are from Venus. Increasingly, politicians on either side are cherry-picking evidence to support their version of reality.
With cherry-picking on the increase, and with both sides using it more, certainly readers would want to see what PolitiFact has to say about it.

But is it true? Is cherry-picking on the increase?

One had to read far down the column to reach Jacobson's evidence (bold emphasis added):
So is there more cherry-picking today in political rhetoric than in the past? That’s hard to say -- we couldn’t find anyone who measures it. But several political scientists and historians said that even if it’s not more common, the use of the tactic may have turned a corner.
Seriously?

If a writer tries to hook me into reading a story based on the claim that cherry-picking is on the increase, then takes over 20 paragraphs before getting around to telling me that no good evidence supports the claim, I want my money back.

This isn't hard, fact checkers. If it's hard to say if there is more cherry-picking today in political rhetoric than in the past, don't say "Increasingly, politicians on either side are cherry-picking evidence to support their version of reality."

Don't do it.

Even a Democrat probably couldn't entirely get away with a claim so poorly supported by the evidence, thanks to PolitiFact's occasionally applied principle of the burden of proof:
Burden of proof – People who make factual claims are accountable for their words and should be able to provide evidence to back them up. We will try to verify their statements, but we believe the burden of proof is on the person making the statement.
We used Twitter to needle PolitiFact over this issue, surprisingly drawing some response (nothing of substance). But the exchange ended up productive when co-editor Jeff D, who runs the PFB Twitter account, contributed this summary:
That about sums it up. One standard for me, and another for thee.



Update Feb. 7, 2018: Supplied URL to PolitiFact's article on cherry picking, added tag labels.

Friday, September 8, 2017

PolitiFact's hypocrisy

PolitiFact manifests many examples of hypocrisy. This post will focus on just one.

On August 21, 2017, Speaker of the House Paul Ryan (R-Wis.) said America has dozens of counties with zero insurers. Ryan was talking about insurers committed to serving the exchanges for individual-market customers.

On August 24, 2017, PolitiFact published a fact check rating Ryan's claim "Pants on Fire." PolitiFact noted that Ryan had relied on outdated information to back his claim. PolitiFact said only one county was expected to risk having no insurer, and Ryan should have been aware of it:
Now technically, that report wasn’t published until two days after Ryan spoke. But the government had the information, and a day before Ryan spoke, Politico reported that just one county remained without a potential insurance carrier in 2018. The Kaiser Family Foundation published the same information the day of Ryan’s CNN town hall.

And a week earlier, the government said there were only two counties at risk of having no participating insurer. Ryan was way off no matter what.
Fast forward to Sept. 7, 2017. PolitiFact elects to republicize its fact check of Ryan, reinforcing its message that only one county remains at risk of not having any insurance provider available through the exchange. PolitiFact publicized it on Twitter:
And PolitiFact publicized it on Facebook as well.

The problem? On Sept. 6, 2017, the Kaiser Family Foundation updated its information to show 63 counties at risk of having no insurer on the exchange. The information in the story PolitiFact shared was outdated.

Paul Ryan got a "Pants on Fire" for peddling outdated information.

What does PolitiFact get for doing the same thing?

Another Pulitzer Prize?

Thursday, June 8, 2017

Incompetent PolitiFact engages in Facebook mission creep, false reporting

The liberal bloggers/mainstream fact checkers at PolitiFact are expanding their "fake news" police mission at Facebook. While they're at it, they're publishing misleading reports.

Facebook Mission Creep

Remember the pushback when Facebook announced that fact checkers would help it flag "fake news?" PolitiFact Editor Angie Drobnic Holan made the rounds to offer reassurance:
[STELTER:] Angie, there has been a lot of blowback already to this Facebook experiment. Some on the right are very skeptical, even mocking this. Why is it a worthwhile idea? Why are you helping Facebook try to fact-check these fake stories?

HOLAN: Go to Facebook, and they are going about their day looking to connect with friends and family. And then they see these headlines that are super dramatic and they wonder if they're right or not. And when they're wrong, sometimes they are really wrong. They're entirely made up.

It is not trying to censor anything. It is just trying to flag these reports that are fabricated out of thin air.
Fact check journalists spent their energy insisting that "fake news" was just made-up "news" items produced purely to mislead people.

Welcome to PolitiFact's version of Facebook mission creep. Sarah Palin posted a meme criticizing the Paris climate accord. The meme showed a picture of Florida legislators celebrating, communicating the attitude of those who support the Paris climate agreement:


The meme does not try to capture the appearance of a regular news story. It is primarily offering commentary, not communicating the idea that Florida legislators supported the Paris climate agreement. As such, it simply does not fit the definition of "fake news" that PolitiFact has insisted would guide the policing effort on Facebook.

Yet PolitiFact posted this in its fact check of Palin:
PolitiFact fact-checked Palin’s photo as part of our effort to debunk fake news on Facebook.
Fail. It's as though PolitiFact expects meme-makers to subscribe to the same set of principles for using images that bind professional journalists (oops):



Maybe PolitiFact should flag itself as "fake news"?


Communicating Fact Checks Using Half Truths

Over and over we point out that PolitiFact uses the same varieties of deception that politicians use to sway voters. This fact check of Palin gives us yet another outstanding example. What did Palin do wrong, in PolitiFact's eyes?
Says an Internet meme shows people rejoicing over the Paris agreement
PolitiFact provided no good evidence Palin said any such thing. The truth is that Palin posted an Internet meme (we don't know who created it) that used an image that did not match the story.

PolitiFact has posted images that do not match its reporting. We provided an example above, from a PolitiFact video about President Clinton's role in signing the North American Free Trade Agreement.

If we reported "PolitiFact said George W. and Jeb Bush Negotiated NAFTA," we would be giving a misleading report at best. At worst we'd be flatly lying. We apply the same standard to PolitiFact that we would apply to ourselves.


Afters

We sent a message to the writer and editor at PolitiFact Florida responsible for this fact check. We sent it before starting on the text of our post, but we're not waiting for a response from PolitiFact because PolitiFact usually fails to respond to substantive criticism. If we receive any substantive reply from PolitiFact, we will append it to this message and amend the title of the post to reflect the presence of an update (no, we won't hold our breath).

Dear Amy Sherman, Katie Sanders,

Your fact check of Sarah Palin's Paris climate accord meme is disgraceful for two big reasons.

First, you describe the fact check as part of the Facebook effort to combat "fake news." After laboring to insist to everyone that "fake news" is an intentionally false news item intended to mislead people, it looks like you've moved toward Donald Trump's definition of "fake news." The use of a photograph that does not match the story is bad and unethical practice in professional journalism. But it's pretty common in the production of political memes. Do you really want to expand your definition of "fake news" like that, after trying to reassure people that the Facebook initiative was not about limiting political expression? Would you want your PolitiFact video identifying George W. Bush/Jeb Bush as George H. W. Bush classified as "fake news" based on your use of an unrelated photograph?

Second, your fact check flatly states that Palin identified the Florida lawmakers as celebrants of the Paris climate accord. But that obviously is not the case. The fact check notes, in fact, that the post does not identify the people in the photo. All the meme does is make it easy to jump to the conclusion that the people in the photo were celebrating the Paris agreement. As such, it's a loose implication. But your fact check states the misdirection is explicit:
Palin posted a viral image that purportedly shows a group of people clapping as a result of the Paris agreement, presumably about the billions they will earn.
Purported by whom? It's implied, not stated.

Do you seriously think the purpose of the post was to convey to the audience that Florida legislators were either responsible for the Paris agreement or celebrating it? That would truly be fake news as PolitiFact has tried to define it. But that's not what this meme does, is it?

You're telling the type of half-truth you claim to expose.

Stop it.





Edit 6/9/2017: Added link to CNN interview in second graph-Jeff 

Wednesday, May 24, 2017

What if we lived in a world where PolitiFact applied to itself the standards it applies to others?

In that impossible world where PolitiFact applied its own standards to itself, PolitiFact would doubtless crack down on PolitiFact's misleading headlines, like the following headline over a story by Lauren Carroll:


While the PolitiFact headline claims that the Trump budget cuts Medicaid, and the opening paragraph says Trump's budget "directly contradicts" President Trump's promise not to cut Medicaid, in short order Carroll's story reveals that the Medicaid budget goes up under the new Trump budget.

So it's a cut when the Medicaid budget goes up?

Such reasoning has precedent at PolitiFact. We noted in December 2016 that veteran PolitiFact fact-checker Louis Jacobson wrote that the most natural way to interpret "budget cut" was against the baseline of expected spending, not against the previous year's spending.

Jacobson's approach in December 2016 helped President Obama end up with a "Compromise" rating on his aim to cut $1 trillion to $1.5 trillion in spending. By PolitiFact's reckoning, the president cut $427 billion from the budget. PolitiFact obtained that figure by subtracting actual outlays from the estimates the Congressional Budget Office published in 2012 and using the cumulative total for the four years.

Jacobson took a different tack back in 2014 when he faulted a Republican ad attacking the Affordable Care Act's adjustments to Medicare spending (which we noted in the earlier linked article):
First, while the ad implies that the law is slicing Medicare benefits, these are not cuts to current services. Rather, as Medicare spending continues to rise over the next 10 years, it will do so at a slower pace would [sic] have occurred without the law. So claims that Obama would "cut" Medicare need more explanation to be fully accurate.
We can easily rework Jacobson's paragraph to address Carroll's story:
First, while the headline implies that the proposed budget is slicing Medicaid benefits, these are not cuts to current services. Rather, as Medicaid spending continues to rise over the next 10 years, it will do so at a slower pace than would occur without the law. So claims that Trump would "cut" Medicaid need more explanation to be fully accurate.
PolitiFact is immune to the standard it applies to others.

We also note that a pledge not to cut a program's spending is not reasonably taken as a pledge not to slow the growth of spending for that program. Yet that unreasonable interpretation is the foundation of PolitiFact's "Trump-O-Meter" article.


Correction May 24, 2017: Changed the first incidence of "law" in our reworking of Jacobson's sentence to "proposed budget." It better fits the facts that way.
Update May 26, 2017: Added link to the PolitiFact story by Lauren Carroll

Sunday, April 2, 2017

Angie Drobnic Holan: "Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

PolitiFact, thy name is Hypocrisy.

The editors of PolitiFact Bias often find themselves overawed by the sanctimonious pronouncements we see coming from PolitiFact (and other fact checkers).

Everybody screws up. We screw up. The New York Times screws up. PolitiFact often screws up. And a big part of journalistic integrity comes from what you do to fix things when you screw up. But for some reason that concept just doesn't seem to fully register at PolitiFact.

Take the International Fact-Checking Day epistle from PolitiFact's chief editor Angie Drobnic Holan:
Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency. (We adhere to those principles at PolitiFact and at the Tampa Bay Times, so if you’re reading this, you’ve made a good start.)
The first sentence qualifies as great advice. The parenthetical sentence that follows qualifies as a howler. PolitiFact adheres to principles of truthfulness, fairness and transparency?

We're coming fresh from a week where PolitiFact published a fact check that took conservative radio talk show host Hugh Hewitt out of context, said it couldn't find something that was easy to find, and (apparently) misrepresented the findings of the Congressional Budget Office regarding the subject.

And more to the issue of integrity, PolitiFact ignores the evidence of its failures and allows its distorted and false fact check to stand.

The fact check claims the CBO finds insurance markets under the Affordable Care Act stable, concluding that the CBO says there is no death spiral. In fact, the CBO said the ACA was "probably" stable "in most areas." Is it rightly a fact checker's job to spin the judgments of its expert sources?

PolitiFact improperly cast doubt on Hewitt's recollections of a New York Times article where the head of Aetna said the ACA was in a death spiral and people would be left without insurance:
Hewitt referred to a New York Times article that quotes the president of Aetna saying that in many places people will lose health care insurance.

We couldn’t find that article ...
We found the article (quickly and easily). And we told PolitiFact the article exists. But PolitiFact's fact check still makes it look like Hewitt was wrong about the article appearing in the Times.

PolitiFact harped on the issue:
In another tweet, Hewitt referenced a Washington Post story that included remarks [from] Aetna’s chief executive, Mark Bertolini. On the NBC Meet the Press, Hewitt referred to a New York Times article.
We think fact checkers crowing about their integrity and transparency ought to fix these sorts of problems without badgering from right-wing bloggers. And if they still won't fix them after badgering from right-wing bloggers, then maybe they do not qualify as "organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

Maybe they're more like liberal bloggers with corporate backing.



Correction April 3, 2017: Added a needed apostrophe to "fact checkers job."

Monday, December 26, 2016

Bill Adair: Do as I say, not as I do(?)

One of the earliest criticisms Jeff and I leveled against PolitiFact was its publication of opinion-based material under the banner of objective news reporting. PolitiFact's website has never, so far as we have found, bothered to categorize its stories as "news" or "op-ed." Meanwhile, the Tampa Bay Times publishes PolitiFact's fact checks in print alongside other "news" stories. The presentation implies the fact checks count as objective reporting.

Yet PolitiFact's founding editor, Bill Adair, has made statements describing PolitiFact fact checks as something other than objective reporting. Adair has called fact-checking "reported conclusion" journalism, as though one may employ the methods of the op-ed writer from Jay Rosen's "view from nowhere" and end up with objective reporting. And we have tried to publicize Adair's admission that what he calls the "heart of PolitiFact," the "Truth-O-Meter," features subjective ratings.

As a result, we are gobsmacked that Adair effectively expressed solidarity with PolitiFact Bias on the issue of properly labeling journalism (interview question by Hassan M. Kamal and response by Adair; bold emphasis in the original):
The online media is still at a nascent stage compared to its print counterpart. There's still much to learn about user behaviour and impact of news on the Web. What are the mistakes do you think that the early adopters of news websites made that can be avoided?

Here's a big one: identifying articles that are news and distinguishing them from articles that are opinion. I think of journalism as a continuum: on one end there's pure news that is objective and tells both sides. Just the facts. On the other end, there's pure opinion — we know it as editorials and columns in newspaper. And then there's some journalism in the middle. It might be based on reporting, but it's reflecting just one point of view. And one mistake that news organisations have made is not telling people the difference between them. When we publish an opinion article, we just put the phrase 'op-ed' on top of an article saying it's an op-ed. But many many people don't know what that means. And it's based on the old newspaper concept that the columns that run opposite the editorial are op-ed columns. The lesson here is that we should better label the nature of journalism. Label whether it's news or opinion or something in between like an analysis. And that's something we can do better when we set up new websites.
Addressing the elephant in the room, if labeling journalism accurately is so important and analysis falls between reporting and op-ed on the news continuum, why doesn't PolitiFact label its fact checks as analysis instead of passing them off as objective news?


Afters

The fact check website I created to improve on earlier fact-checking methods, by the way, separates the reporting from the analysis in each fact check, labeling both.