PolitiFact Editor Angie Drobnic Holan appeared on Galley by CJR (Columbia Journalism Review) with Mathew Ingram to talk about fact-checking.
During the interview Ingram asked about PolitiFact's process for choosing which facts to check (bold emphasis added):
One question I've been asking many of our interview guests is how they choose which lies or hoaxes or false reports to fact-check when there are just so many of them? And do you worry about the possibility of amplifying a fake news story by fact-checking it? This is a problem Whitney Phillips and Joan Donovan have warned about in interviews I've done with them about this topic.
It's important, Holan says, to note that PolitiFact does not do a random or scientific sample when it chooses the topics for its fact-check stories.
Great questions! We use our news judgement to decide what to fact-check, with the main criteria being that it’s a topic in the news and it’s something that would make a regular person say, "Hmmm, I wonder if that’s true." If it sounds wrong, we’re even more eager to do it. **It’s important to note that we don’t do a random or scientific sample.**
We agree wholeheartedly with Holan's statement in bold. In fact, that's an understatement: we've been harping for years on PolitiFact's failure to make its non-scientific foundations clear to its audience. And here Holan apparently agrees with us by calling it important.
How important is it?
PolitiFact's statement of principles says PolitiFact uses news judgment to pick out stories, and also mentions the "Is that true?" standard Holan mentions in the above interview segment. But what you won't find in PolitiFact's statement of principles is any kind of plain admission that its process is neither random nor scientific.
If it's important to note those things, then why doesn't the statement of principles mention it?
Apparently it's so important to PolitiFact to note that its story selection is neither random nor scientific that a Google search of the politifact.com domain for "random" AND "scientific" turns up, in three pages of hits, not a single example addressing PolitiFact's method of story selection.
And despite commenters on PolitiFact's Facebook page commonly interpreting candidate report cards as representative of all of a politician's statements, Holan insists "There's not a lot of reader confusion" about it.
If there's not a lot of reader confusion about it, why say it's important to note that the story selection isn't random or scientific? People supposedly already know that.
We use the tag "There's Not a Lot of Reader Confusion" on occasional stories pointing out that people do suffer confusion on this point because PolitiFact doesn't bother to explain it.
Post a chart of collected "Truth-O-Meter" ratings and there's a good chance somebody in the comments will extrapolate the data to apply to all of a politician's statements.
We say it's inexcusable that PolitiFact posts its charts without making their unscientific basis clear to readers.
Yet PolitiFact keeps right on doing it, even while admitting it's important for people to realize a fact about the charts that it rarely bothers to explain.