
Thursday, April 4, 2019

The Worst of PolitiFact's April 2, 2019 Reddit AMA

As we mentioned in a Feb. 2, 2019 post, we love it when PolitiFact folks do interviews. It's a near guarantee of generating material worth posting. In celebration of "International Fact-Checking Day," PolitiFact Director Aaron Sharockman and PolitiFact Editor Angie Drobnic Holan conducted a Reddit "Ask Me Anything" event.

I asked PolitiFact why it advocates transparency while keeping secret the identities and votes of its "Star Chamber," the editorial panel that votes on each "Truth-O-Meter" rating. The majority vote rules, though PolitiFact claims it achieves unanimity on most votes. My question wasn't answered (no great surprise there).

Most of the interactions were boilerplate answers to boilerplate questions. But there were a few items of special interest.


Observing the PolitiFact Code?

[Embedded card: Holan's answer]

Though Holan flatly said "Everything that gets a correction or an update gets tagged (see all tagged items)," we were ready with two recent cases contradicting her claim. And we let that cat out of the bag.

[Embedded card: our reply with the contradicting examples]

PolitiFact makes statements giving readers the impression that it scrupulously follows its code of principles. In fact, PolitiFact follows its code of principles only loosely, as this example shows. How do Holan and Sharockman not know this?

One of the examples we used was corrected on approximately March 16, 2019. Most of the uncertainty about the correction date stems from PolitiFact Virginia's decision not to note the date of the correction. As of April 3, 2019, PolitiFact Virginia had not added the "Corrections or Updates" tag, and the story did not appear on PolitiFact's supposed list of all of its corrected or updated stories.


Mythical Truth-O-Meter Consistency

One participant asked a question suggesting PolitiFact does not rate statements consistently (specifically, that contemporary ratings of Trump make past ratings look far too harsh). Sharockman implied PolitiFact has kept its system consistent over the years:
But beyond the sheer volume [of Trump ratings--ed.], the standards we use to use [sic] to issue our ratings really hasn't [sic] evolved in the 11 years we've been doing this. In that sense, a Pants on Fire in 2009 should still be a Pants on Fire claim today, and vice versa.
There are two big problems with Sharockman's claim. First, PolitiFact itself announced a change to its rating methodology back in 2012.

Second, PolitiFact has admitted that its ratings are pretty much subjective. Sharockman's chosen example, the dividing line between "False" and "Pants on Fire," is perhaps the most sensational example of that subjectivity. How does Sharockman not know that?


The Vast Right-Wing Conspiracy Against PolitiFact?

Someone (not me!) asked "Who fact checks you [PolitiFact--ed.]?"

We found Sharockman's response fascinating and very probably false:

[Embedded card: Sharockman's response]

Sharockman's answer paints PolitiFact as the focus of a concentrated group of hostile editors. With all those people combing PolitiFact material for hours on end, it's amazing that PolitiFact makes mistakes so rarely.

Right?

But is there any evidence at all supporting Sharockman's supposition that "a lot of people are reading everything we write looking for mistakes"? We at PolitiFact Bias announced long ago that we could not do a thorough job vetting PolitiFact's body of work:
As PolitiFact expands its state operations, the number of stories it produces far exceeds our capacity to review and correct even just the most egregious examples of journalistic error or bias.  We aim to encourage an army of Davids to counteract the mistakes and bias in PolitiFact's stories.
Who else could Sharockman have had in mind? Media Matters For America? The (defunct) Weekly Standard?

(We asked Sharockman via Twitter the other day whom he had in mind but received no immediate reply.)

We suspect Sharockman of Trumpian exaggeration. He knows at least some people look at some of PolitiFact's work for errors. So to make his point, he inflates that into "a lot of people" reading "everything" PolitiFact publishes in search of mistakes. It's likely the only organization combing over PolitiFact's entire body of work looking for errors is PolitiFact itself.

And look how many times it fails anyway, without swallowing the fiction that the documented failures represent the entire number.

***

Holan and Sharockman are politicians advocating for PolitiFact. It appears we cannot trust PolitiFact to hold its own to account.

Saturday, February 2, 2019

PolitiFact Editor: "Most of our fact checks are really rock solid as far as the reporting goes"

Why do we love it when PolitiFact's principals do interviews?

Because it almost always provides us with material for PolitiFact Bias.

PolitiFact Editor Angie Drobnic Holan, in a recent interview for the Yale Politic, damned her own fact-checking organization with faint praise (bold emphasis added):

We do two things–well, we do more than two things–but we do two things that I want to mention for public trust. First, we have a list of our sources with every fact check. If you go into a fact check on the right-hand rail [of the PolitiFact webpage], we list of all of our sources, and then we explain in detail how we came to our conclusion. I also wrote a recent column on why PolitiFact is not biased to try to answer some of the critique that we got during the latest election season. What we found was that when a campaign staffer does not like our fact check on their candidate, they usually do not argue the facts with us– they usually come straight at us and say that we are biased. So, I wrote this column in response to that. And the reason that they don’t come straight out and dispute the facts with us is because the fact checks are solid. We do make some mistakes like any other human beings, but most of our fact checks are really rock solid as far as the reporting goes. And yet, partisans want to attack us anyway.
We find Holan's claim plausible. The reporting on more than half of PolitiFact's fact checks may well be rock solid. But what about the rest? Are the failures fair game for critics? Holan does not appear to think so, complaining that even though the reporting for PolitiFact's fact checks is solid more than half the time, "partisans want to attack us anyway."

The nerve of those partisans!

Seriously, with a defender like Holan who needs partisan attacks? Imagine Holan composing ad copy extolling the wonders of PolitiFact:

PolitiFact: Rock Solid Reporting Most of the Time


Holan's attempt to discredit PolitiFact's critics hardly qualifies as coherent. Even if PolitiFact's reporting were "rock solid" 99 percent of the time, criticizing the errors should count as fair game. And a 1 percent error rate favoring the left would indicate bias.

Holan tries to imply that the quality of the reporting results in a lack of specific criticism, but research connected to Holan's predecessor at PolitiFact, Bill Adair of Duke University, contradicts that notion:
Conservative outlets were much more likely to question the validity of fact-checks and accuse fact-checkers of making errors in their research or logic.
It isn't that conservatives do not criticize PolitiFact on its reporting. They do (we do). But PolitiFact tends to ignore the criticisms. Perhaps because the partisan critiques are "rock solid"?

More interviews, please.

Tuesday, January 30, 2018

PolitiFact editor: "Tell me where the fact-check is wrong"

Ever notice how PolitiFact likes to paint its critics as folks who carp about whether the (subjective) Truth-O-Meter rating was correct?

PolitiFact Editor Angie Drobnic Holan gave us another stanza of that song-and-dance in a Jan. 26, 2018 interview with Digital Charlotte. Digital Charlotte's Stephanie Bunao asked Holan whether she sees a partisan difference in the email and commentary PolitiFact receives from readers.

Holan's response (bold emphasis added):
Well, we get, you know, nobody likes it when their team is being criticized, so we get mail a lot of different ways. I think obviously there's a kind of repeated slogan from the conservative side that when they see media reports they don't like, that it's liberal media or fake news. On the left, the criticism is a little different – like they accuse us of having false balance. You know, when we say the Democrats are wrong, they say, ‘Oh, you're only doing that to try to show that you're independent.’ I mean it gets really like a little bit mental, when people say why we're wrong. If they're not dealing with the evidence, my response is like, ‘Well you can say that we're biased all you want, but tell me where the fact-check is wrong. Tell me what evidence we got wrong. Tell me where our logic went wrong. Because I think that's a useful conversation to have about the actual report itself.’
Let us count the ways Holan achieves disingenuousness, starting with the big one at the end:

1) "Tell me where the fact-check is wrong"

We've been doing that for years here at PolitiFact Bias, making our point in blog posts, emails and tweets. Our question for Holan: If you think that's a useful conversation to have, why do you avoid having it? On Jan. 25, 2018, we sent Holan an email pointing out a factual problem with one of PolitiFact's fact checks. We received no reply. And on Jan. 26 she tells an interviewer that the conversation she won't have is a useful one?

2) "Every year in December we look at all the things that we fact-check, and we say, ‘What is the most significant lie we fact-checked this year’"

Huh? In 2013, PolitiFact worked hard to make the public believe its "Lie of the Year" was the president's Affordable Care Act promise that people would be able to keep health care plans they liked under the new law. But PolitiFact did not fact check that claim in 2013. PolitiFact Bias and others exposed PolitiFact's deception at the time, but PolitiFact keeps repeating it.

3) PolitiFact's "extreme transparency"

Asked how the media can regain public trust, Holan mentioned the use of transparency. We agree with her that far. But she used PolitiFact as an example of providing readers "extreme transparency."

That's a laugh.

Perhaps PolitiFact provides more transparency than the average mainstream media outlet, but does that equal "extreme transparency"? We say no. Extreme transparency means admitting your politics (PolitiFail), publishing the texts of expert interviews (PolitiFail, except for PolitiFact Texas), revealing the "Truth-O-Meter" votes of the editorial "star chamber" (PolitiFail), and more.

PolitiFact practices above-average transparency, not "extreme transparency." And the media tend to deliver a poor degree of transparency.

We remain prepared to have that "useful conversation" about PolitiFact's errors of fact and research.

You let us know when you're ready, Angie Drobnic Holan.

Monday, December 26, 2016

Bill Adair: Do as I say, not as I do(?)

One of the earliest criticisms Jeff and I leveled against PolitiFact was its publication of opinion-based material under the banner of objective news reporting. PolitiFact's website has never, so far as we have found, bothered to categorize its stories as "news" or "op-ed." Meanwhile, the Tampa Bay Times publishes PolitiFact's fact checks in print alongside other "news" stories. The presentation implies the fact checks count as objective reporting.

Yet PolitiFact's founding editor, Bill Adair, has made statements describing PolitiFact fact checks as something other than objective reporting. Adair has called fact-checking "reported conclusion" journalism, as though one may employ the methods of the op-ed writer, departing from Jay Rosen's "view from nowhere," and still end up with objective reporting. And we have tried to publicize Adair's admission that what he calls the "heart of PolitiFact," the "Truth-O-Meter," features subjective ratings.

As a result, we are gobsmacked that Adair effectively expressed solidarity with PolitiFact Bias on the issue of properly labeling journalism (interview question by Hassan M. Kamal and response by Adair; bold emphasis in the original):
The online media is still at a nascent stage compared to its print counterpart. There's still much to learn about user behaviour and impact of news on the Web. What are the mistakes do you think that the early adopters of news websites made that can be avoided?

Here's a big one: identifying articles that are news and distinguishing them from articles that are opinion. I think of journalism as a continuum: on one end there's pure news that is objective and tells both sides. Just the facts. On the other end, there's pure opinion — we know it as editorials and columns in newspaper. And then there's some journalism in the middle. It might be based on reporting, but it's reflecting just one point of view. And one mistake that news organisations have made is not telling people the difference between them. When we publish an opinion article, we just put the phrase 'op-ed' on top of an article saying it's an op-ed. But many many people don't know what that means. And it's based on the old newspaper concept that the columns that run opposite the editorial are op-ed columns. The lesson here is that we should better label the nature of journalism. Label whether it's news or opinion or something in between like an analysis. And that's something we can do better when we set up new websites.
To address the elephant in the room: if labeling journalism accurately is so important and analysis falls between reporting and op-ed on the news continuum, why doesn't PolitiFact label its fact checks as analysis instead of passing them off as objective news?


Afters

The fact check website I created to improve on earlier fact-checking methods, by the way, separates the reporting from the analysis in each fact check, labeling both.