Wednesday, November 20, 2019

PolitiFact as Rumpelstiltskin.

“Round about, round about,
Lo and behold!
Reel away, reel away,
Straw into gold!”

PolitiFact's Nov. 19, 2019 fact check of something President Donald Trump said on the Dan Bongino Show gives us yet another example of a classic fact checker error: interpreting an ambiguous statement as though it were a clear one.

Here's PolitiFact's presentation of a statement it found worthy of a fact check:
In an interview with conservative show host Dan Bongino, Trump said a false rendition of that call by House Intelligence chairman Adam Schiff, D-Calif., forced him to release the readout of that call.

"They never thought, Dan, that I was going to release that call, and I really had no choice because Adam Schiff made up a call," Trump said Nov. 15. "He said the president said this, and then he made up a call."

The problem with Trump’s statement is that Schiff spoke after the White House released the memo of the phone call, not before.
Note that PolitiFact finds a timeline problem with Trump's claim.

But also note that Trump makes no clear statement regarding a timeline. If Trump had said "I released the transcript after Schiff did his 'parody' version of the telephone call," then he would have established an order of events. Trump's words suggest an order of events, but the suggestion is not a hard logical implication (A, therefore B).

PolitiFact treats the case exactly like a hard implication.

Here's why that's the wrong approach.

First, significant ambiguity should always slow a fact-checker's progress toward an interpretation.

Second, Trump gave a speech on Sept. 24, 2019, announcing the impending release of the transcript (memorandum of telephone conversation). The "transcript" was released on Sept. 25. Schiff gave his "parody" account of the call the next day, on Sept. 26. And Trump responded to Schiff's "parody" version of his call on Sept. 30 during an event honoring the late Justice Antonin Scalia:
Adam Schiff — representative, congressman — made up what I said.  He actually took words and made it up.  The reason is, when he saw my call to the President of Ukraine, it was so good that he couldn’t quote from it because it — there was nothing done wrong.  It was perfect.
PolitiFact's interpretation asks us to believe that Trump either forgot what he said on Sept. 30 or else deliberately decided to reverse the chronology.

What motive would support that decision? Is one version more politically useful than the other?

It's not uncommon for people to speak of "having no choice" based on an event that occurred after the choice was made. The speaker means that the choice would have been forced eventually.

When a source makes two claims that touch the same subject but differ in content, the following rule applies: use the clearer statement to make sense of the less clear one.

Fact checkers who manufacture certitudes out of equivocal language give fact-checking a bad name.

They are Rumpelstiltskins, trying to spin straw into gold.


Afters

We would draw attention to a parallel highlighted at (Bryan's) Zebra Fact Check last month.

During a podcast interview Hillary Clinton used equivocal language in saying "they" were grooming Democratic Party presidential hopeful Tulsi Gabbard as a third-party candidate to enhance Trump's chances of winning the 2020 election.

No fact checker picked out Clinton's claim for a fact check. And that's appropriate, because the identity of "they" was never perfectly clear. Clinton probably meant the Russians, but "probably" doesn't make it a fact.

In that case, the fact checkers picked on those who interpreted Clinton to mean the Russians were grooming Gabbard (implicitly deciding that Clinton's ambiguous "they" clearly meant "Republicans").

Fact checkers have no business doing such things.

Until fact checkers can settle on a consistent approach to their craft, we justifiably view it as a largely subjective enterprise.

Monday, November 18, 2019

We want Bill Adair subjected to the "Flip-O-Meter"

It wasn't that long ago that we reported on Bill Adair's article for Columbia Journalism Review declaring "Bias is good," along with a chart indicating that fact-check journalism has more opinion in it than either investigative reporting or news analysis.

Yet WRAL, in announcing its new partnership with PolitiFact North Carolina, quoted Adair saying PolitiFact is unbiased:
“What is important about PolitiFact is not just that it’s not biased,” Adair said, “but that we show our work and that we show all of our sources.”
Naturally we cannot allow that to pass. We used WRAL's contact form to reach out to the writer of the article, Ashley Talley.

We pointed out the discrepancy between what Talley reported from Adair and what Adair wrote for Columbia Journalism Review. We suggested somebody should fact check Adair.

Next we'll be contacting Paul Specht of PolitiFact North Carolina over this quotation:
“One thing I love about PolitiFact is that the format is very structured and it's not up to me to decide what is or isn't true,” said Paul Specht, WRAL’s PolitiFact reporter who has been covering local, state and national politics for years. “It's up to me to go do the research and then it's up to the research to tell us what is true.”
We're not sure how that's supposed to square with Adair's declaration from a few years ago that "Lord knows the decision about a Truth-O-Meter rating is entirely subjective."

What changed?

In addition to its "Truth-O-Meter" PolitiFact publishes "Flip-O-Meter" items.

We'd like to see Adair on the Flip-O-Meter.

Friday, November 15, 2019

PolitiFact editor: "It’s important to note that we don’t do a random or scientific sample"

As we have mentioned before, we love it when PolitiFact's movers and shakers do interviews. It nearly guarantees us good material.

PolitiFact Editor Angie Drobnic Holan appeared on Galley by CJR (Columbia Journalism Review) with Mathew Ingram to talk about fact-checking.

During the interview Ingram asked about PolitiFact's process for choosing which facts to check (bold emphasis added):
MI
One question I've been asking many of our interview guests is how they choose which lies or hoaxes or false reports to fact-check when there are just so many of them? And do you worry about the possibility of amplifying a fake news story by fact-checking it? This is a problem Whitney Phillips and Joan Donovan have warned about in interviews I've done with them about this topic.
ADH
Great questions! We use our news judgement to decide what to fact-check, with the main criteria being that it’s a topic in the news and it’s something that would make a regular person say, "Hmmm, I wonder if that’s true." If it sounds wrong, we’re even more eager to do it. It’s important to note that we don’t do a random or scientific sample.
It's important, Holan says, to note that PolitiFact does not do a random or scientific sample when it chooses the topics for its fact check stories.

We agree wholeheartedly with Holan's statement in bold. And that's an understatement. We've been harping for years on PolitiFact's failure to make its non-scientific foundations clear to its audience. And here Holan apparently agrees with us by saying it's important.

How important is it?

PolitiFact's statement of principles says PolitiFact uses news judgment to pick out stories, and also mentions the "Is that true?" standard Holan mentions in the above interview segment. But what you won't find in PolitiFact's statement of principles is any kind of plain admission that its process is neither random nor scientific.

If it's important to note those things, then why doesn't the statement of principles mention it?

At PolitiFact, it's apparently so important to note that its story selection is neither random nor scientific that a search of the politifact.com domain for "random" AND "scientific" returns three pages of Google hits, not one of which has anything to do with PolitiFact's method of story selection.

And despite commenters on PolitiFact's Facebook page commonly interpreting candidate report cards as representative of all of a politician's statements, Holan insists "There's not a lot of reader confusion" about it.

If there's not a lot of reader confusion about it, why say it's important to note that the story selection isn't random or scientific? People supposedly already know that.

We use the tag "There's Not a Lot of Reader Confusion" on occasional posts pointing out that people do suffer confusion about it, precisely because PolitiFact doesn't bother to explain it.

Post a chart of collected "Truth-O-Meter" ratings and there's a good chance somebody in the comments will extrapolate the data to apply to all of a politician's statements.

We say it's inexcusable that PolitiFact posts its charts without making their unscientific basis clear to readers.

They just keep right on doing it, even while admitting it's important that people realize a fact about the charts that PolitiFact rarely bothers to explain.