What's Fake on the Internet: Unbiased fact checkers
We stumbled across a farewell post for Caitlin Dewey's rumor-debunking "What was Fake" column in the Washington Post. In the column, Dewey notes a change in the Internet hoax business, namely that rumors are now often spread intentionally by professional satire sites seeking click traffic, and announces she's calling it quits "because it's started to feel a little pointless."
While Dewey's column focused more on viral Internet rumors than on politics specifically, we were struck by the parallel between her observations and our own regarding PolitiFact. She laments that even though the bogus stories are easily debunked, the people who spread the misinformation aren't likely to be swayed by objective evidence. She then highlights why hoax websites have proliferated:
Since early 2014, a series of Internet entrepreneurs have realized that not much drives traffic as effectively as stories that vindicate and/or inflame the biases of their readers. Where many once wrote celebrity death hoaxes or “satires,” they now run entire, successful websites that do nothing but troll convenient minorities or exploit gross stereotypes.
Consider that trend when you see the chart that ran with PolitiFact editor Angie Holan's NYT opinion article:
[Image via NYT screengrab]
The chart, complete with bar graphs and percentages, frames the content for readers with a form of scientific legitimacy. But discerning anything from the aggregate total of their ratings amounts to pure hokum. The chart doesn't provide solid evidence of anything (with the exception of PolitiFact's selection bias), but it surely serves to "vindicate and/or inflame the biases of their readers."
We've gone into detail before explaining why PolitiFact's charts and report cards amount to junk science, but simply put, there are multiple problems:
1) PolitiFact's own definitions of their ratings are largely subjective, and their standards are applied inconsistently among editors, reporters, and individual franchises. This inconsistency is evidenced by nearly identical claims about a 77-cent gender wage gap being rated everywhere from True to Mostly False and everything in between.
2) Concluding anything from a summary of PolitiFact's ratings assumes each individual fact check was performed competently and without error. Further, it assumes PolitiFact only rates claims where actual facts are in dispute, as opposed to opinions, predictions, or hyperbolic statements.
3) PolitiFact's selection bias extends beyond which specific claim to rate and into which specific person to attribute a claim to (something we've referred to as attribution bias). For example, an official inspector general (IG) report included an anecdote that the government was paying $16 for breakfast muffins. Bill O'Reilly, ABC, and NPR all reported the figure from the report, yet PolitiFact later gave only Bill O'Reilly a Mostly False rating for repeating the official number. That rating counts as a Mostly False on his report card, while ABC, NPR, and even the IG who originally made the claim are all spared any mark on their "truthiness" charts.
4) The most obvious problem is selection bias itself. Even if we assume PolitiFact performed their fact checks competently, applied their standards consistently, and attributed their ratings correctly, the charts and report cards still aren't evidence of anyone's honesty. Even PolitiFact admits this, contrary to their constant promotion of their report cards.
To illustrate the flaw, consider investigating a hundred claims from President Truman to determine their veracity. Suppose you find Truman made 20 false claims, and you then publish only those 20 False claims on a chart. Is that a scientific evaluation of Harry Truman's honesty? Keep in mind you get to select which claims to investigate and which to publish. Ultimately such an exercise would say more about you than about Harry Truman. The defense that PolitiFact checks both sides falls flat: PolitiFact gets to pick the True claims too, and in any event the defense amounts to an appeal to the middle ground.
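To make the point concrete, here's a minimal Python sketch of our own (the speakers, rates, and `report_card` function are all hypothetical, not anything PolitiFact publishes). It simulates two speakers with identical underlying honesty and shows how merely choosing which claims to check produces wildly different "report cards":

```python
import random

random.seed(42)  # reproducible illustration

def make_claims(n=100, false_rate=0.20):
    """Generate n claims, each True or False, with the given false rate."""
    return ["False" if random.random() < false_rate else "True" for _ in range(n)]

# Two hypothetical speakers with IDENTICAL underlying honesty.
speaker_a = make_claims()
speaker_b = make_claims()

def report_card(claims, prefer, n_checked=30):
    """Build a 'report card' from a biased sample: the checker
    preferentially selects claims with the rating it wants to find."""
    preferred = [c for c in claims if c == prefer]
    others = [c for c in claims if c != prefer]
    checked = (preferred + others)[:n_checked]
    return checked.count("False") / len(checked)

# The biased checker hunts for A's false claims and B's true ones.
print(f"Speaker A rated False: {report_card(speaker_a, prefer='False'):.0%}")
print(f"Speaker B rated False: {report_card(speaker_b, prefer='True'):.0%}")
# Both speakers are ~20% false overall, yet the published charts
# diverge wildly -- the gap comes entirely from claim selection.
```

Note that every individual rating in the simulation is accurate. The distortion comes entirely from which claims get checked, which is exactly the selection-bias problem with PolitiFact's report cards.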
We've documented how PolitiFact espouses contradictory positions on how to use their data. PolitiFact warns readers they're "not social scientists," but then engages in a near-constant barrage of promotion for their "report cards," claiming they're useful for showing trends.
Whenever PolitiFact promotes charts like the one posted in the NYT article, the overwhelming response on Facebook and Twitter is to send the chart viral with unfounded claims that the conservative bogeymen are allergic to the truth. How can PolitiFact ethically promote such assertions when they know their report cards offer no objective data about the people they rate?
Instead of telling readers to discount any conclusions from their misleading charts, PolitiFact actively encourages and promotes these unscientific charts. That's not how honest, professional journalists seeking truth behave. On the other hand, it's behavior we might expect from partisan actors trolling for web traffic.
So why are people so intent on spreading PolitiFact's bogus charts based on bad information? Perhaps Dewey uncovered the answer:
[I]nstitutional distrust is so high right now, and cognitive bias so strong always, that the people who fall for hoax news stories are frequently only interested in consuming information that conforms with their views — even when it’s demonstrably fake.
No worries, though. PolitiFact editor Angie Holan assures us "there's not a lot of reader confusion" about how to use their ratings.
We're bummed to see Dewey close down her weekly column, as she arguably did a more sincere job of spreading the truth to readers than PolitiFact ever has. But we're glad she pointed out the reason hoax websites are so popular. We suggest the same motivation is behind PolitiFact publishing their report cards. Is there much difference between hoax sites spreading bogus rumors and PolitiFact trolling for clicks by appealing to liberal confirmation bias with its sham charts and editorials masquerading as fact checks?
Far from being an actual journalistic endeavor, PolitiFact is little more than an agenda-driven purveyor of clickbait.