Friday, January 30, 2015

Predictable

Think we're exaggerating when we say PolitiFact's report cards encourage ill-founded conclusions? Read on, little camper:
Liberals and media critics have complained that Fox News has a habit of stretching the truth in its news and commentaries. Now they have some numbers to prove it.
The bloggers at AllGov, of course, have been misled by the liberal bloggers at PolitiFact. And from what we can tell, so long as publishing these worthless "report card" puff pieces proves popular, PolitiFact will persist in publishing them. Pfeh.

For those who haven't yet seen through the deception: PolitiFact doesn't bother to randomize its story selection. There's no good reason to think the 125 stories are representative of what networks like Fox and CNN are televising.

Sometimes PolitiFact offers a weak disclaimer along the lines of "use caution in drawing conclusions," but often there's no such disclaimer at all. Posts like the one from AllGov inevitably result.

PolitiFact misleads people. This stuff amounts to political advertising.

Thursday, January 29, 2015

PunditFact officially less accurate than PolitiFact

Careful research over the last tens of hours has convinced us that PunditFact reports less accurately than PolitiFact.

On Jan. 27, 2015, PunditFact posted an article about an update to its network scorecards. PunditFact falsely reported that its scorecards check "all claims made by pundits on air." We noted that announcement with a post that same day, linking to an archived version of the story. But the mistake remains as of this writing.

On Jan. 29, 2015, PolitiFact posted an article covering the same subject matter, coming to identical conclusions but without the claim that the scorecards check all claims made by pundits on air.

We conclude that between PunditFact and PolitiFact, PolitiFact reports more accurately. We've made the judgment easy for our readers with a side-by-side comparison:


Of course we're just having fun with this at PolitiFact's expense. PunditFact is simply a part of PolitiFact, the part that focuses on rating statements from pundits. PunditFact posted the faulty story, then PolitiFact posted the same story two days later with the false statement amended. I guess that way there's no need to publish a correction notice or fix the version of the story containing the error. It's enough to have one version of the story published without the mistake.

Or something.

Bless your heart, PolitiFact.


(Moments after publishing, fixed side-by-side image captions, changing two cases of "2014" to "2015")

Tuesday, January 27, 2015

PunditFact: Our scorecards measure all claims made by pundits on air

Bless your heart, PunditFact.

Some people think PolitiFact's network scorecards can't be taken as a reliable litmus test of network truthfulness. After all, these skeptics say, PunditFact doesn't check every claim by every pundit on the networks.

But today PunditFact broke the news that it does check every claim on the networks (bold emphasis added):
MSNBC and CNN have improved ever so slightly on our TV network scorecards, while Fox News has moved a touch in the opposite direction.

We last looked at our network scorecards, which examine all the claims made by pundits on air, in September. The scorecards measure statements made by a pundit or a host or paid contributor on a particular network. They do not include statements made by elected leaders, declared candidates or party officials.
So that lends a great deal of credibility to PunditFact's scorecards. After all, if the scorecards examine all the claims, then there's no issue with selection bias. Right?

Or not. PunditFact added this later on in the article:
As we have said in the past, be cautious about using the scorecards to draw broad conclusions. We use our news judgment to pick the facts we’re going to check, so we certainly don’t fact-check everything. And we don’t fact-check the five network groups evenly.
Bless your heart, PunditFact.

Monday, January 19, 2015

PunditFact's "Pants on Fire" bias, 2014

We pledged to apply our "Pants on Fire" bias research methods to PolitiFact's "PunditFact" project.

PunditFact is the branch of PolitiFact that looks at and rates statements from pundits. PunditFact uses the same cheesy "Truth-O-Meter" system to which PolitiFact has wedded itself.

Our "Pants on Fire" research project looks at how PolitiFact disproportionately applies its "Pants on Fire" rating.

It's important to note that we don't simply look at the higher numbers of "Pants on Fire" ratings PolitiFact gives to Republicans and conservatives. Nor do we focus on which party receives more "False" ratings. We note that PolitiFact has never provided anything resembling an objective criterion for distinguishing between "False" and "Pants on Fire" ratings. We conclude from the evidence that PolitiFact very probably distinguishes between the two ratings subjectively. From that, we conclude that the proportion of "Pants on Fire" ratings in relation to the total number of false ("False" plus "Pants on Fire") ratings tells us something about the ideology of the people applying the ratings.
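To make the metric concrete, here's a minimal Python sketch of the "PoF Bias" calculation as we've described it; the counts in the example are illustrative placeholders, not figures from our dataset. A result of 1.0 would mean no difference between the two groups.

def pof_bias(con_pof, con_false, lib_pof, lib_false):
    # Each side's share of its false ratings ("False" plus "Pants on Fire")
    # that received the "Pants on Fire" rating.
    con_share = con_pof / (con_pof + con_false)
    lib_share = lib_pof / (lib_pof + lib_false)
    return con_share / lib_share

# Illustration: 10 of 40 conservative false claims rated "Pants on Fire"
# (25 percent) versus 2 of 16 liberal false claims (12.5 percent) gives
# 0.25 / 0.125 = 2.0, i.e. conservatives were 100 percent more likely
# to draw the "Pants on Fire" rating.
print(pof_bias(con_pof=10, con_false=30, lib_pof=2, lib_false=14))  # 2.0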

PunditFact shows a remarkable skew to the left.

PunditFact's PoF Bias number for 2014 came in at 2.19. That simply means PunditFact was 119 percent more likely to rate a conservative's false statement "Pants on Fire" than a liberal's.

Adding the small amount of data from the end of 2013, when PunditFact was first starting out, we obtain a cumulative figure of 2.57 for the PoF Bias number. So over PunditFact's entire lifespan, conservatives were 157 percent more likely than liberals to have a false statement rated "Pants on Fire."

We also have a hint in our data that PunditFact shows much more interest in statements coming from conservatives.

Of course we're only looking at two categories of statements, so we place no great importance on that aspect of the chart. We note, however, that PolitiFact's oft-stated criterion for choosing its subject matter ("Is that true?") fits the hypothesis of liberal media bias just as well as (if not better than) the idea that conservatives simply lie more.

It's more natural to question statements that do not fit with what one accepts as true, after all.



Correction Jan 19, 2015: Revised 7th graf to correct description of cumulative totals.

Sunday, January 11, 2015

SaintPetersBlog: Reporters Are More Expensive Than Hookers

Noted Florida blogger Peter Schorsch wrote a great post regarding PolitiFact's deeply flawed Kickstarter campaign. Schorsch brings up a few excellent points we missed in our critique, but here's our favorite:
Seriously, $15,000 for PolitiFact to do its job? Not some sort of in-depth, special investigation, but to cover a political event readers probably expected them to cover.

Don’t get me started on the argument that “fact-checking” major speeches like the State of the Union was what journalists once did without special “Pants On Fire Meters,” much less Kickstarter campaigns.
Schorsch is rightfully skeptical about the $15,000 figure. How much are they going to pay Aaron Sharockman to Google old PolitiFact articles?

Head over to SaintPetersBlog to read Schorsch's brief but excellent post.


Bryan adds:

I think Schorsch was too tough on the hapless liberal bloggers at PolitiFact. It's no secret that most newspaper organizations are losing gobs of money, and the Tampa Bay Times certainly qualifies on that score. The exciting announcement of the Kickstarter project was just positive spin on a sad state of affairs.

Schorsch hits the mark in pointing out that checking the State of the Union speech ought to fall within the basic duties of a political fact-checking organization. But the hooker comparison, while hilarious, is overly harsh.

On this one I'd leaven the snark with a little sympathy.

Tuesday, January 6, 2015

The "pick what we fact check" update

As Jeff D. noted in a post last week, PolitiFact has started a fundraising campaign at Kickstarter to pay for live fact-checking of the State of the Union address and response.

Jeff focused on one of the perks for giving $100 or more: PolitiFact says it will give people who give at or above that level the privilege of picking what it fact checks. Jeff pointed out that the offer effectively places PolitiFact on the horns of a dilemma: either PolitiFact is unethically selling control of its editorial decisions, or else it is making a sham offer to entice donors to give $100 or more.

We wanted to find out which it was, so Jeff came up with a scathingly brilliant idea: Join the campaign to gain the privilege of asking the questions we want answered, and see how PolitiFact will go about living up to its promises on a couple of the other lower-level perks given to supporters.

I probably wouldn't give two cents for PolitiFact's fact checking, but Jeff provided the funding and so far we're getting a decent return on the investment:

Bryan W. White about 9 hours ago

What hidden details of the "pick what we fact check" reward make it ethical, please?



Creator PolitiFact about 8 hours ago

Nothing hidden about the "pick what we fact check" reward, Bryan.

At PolitiFact, we have a constantly updating list of potential fact-checks. We call it the "buffet." And as you can imagine, there is more to fact-check than we can get to. We make our decisions based on news judgment, etc.

People who make a contribution of $100 or more will receive a list of four fact-checks from that buffet (likely two from conservatives, two from liberals). They'll get to pick what claim we check. Their choice won't stop us from checking other claims from the list, but it will assure that the claim they're interested in gets fact-checked.


It will also give them a sense of our process.

Let's unpack that response.

"Nothing hidden about the 'pick what we fact check' reward"


What? Of course there was hidden information about the reward. The description was ambiguous. Perhaps, as Jeff suggested, PolitiFact would offer the same four potential fact checks to all those supporting the project at the $100 or higher level. Or not. PolitiFact hid the precise nature of the reward with an imprecise description.

"People who make a contribution of $100 or more will receive a list of four fact-checks from that buffet (likely two from conservatives, two from liberals)."


Does each contributor of $100 or more choose from a unique list of four fact checks? We still don't know for sure from PolitiFact's description, so that information remains hidden. Probably, after one of a set of four is chosen, the remainder return to PolitiFact's "buffet" of potential fact checks. But that procedure potentially leads to the type of sham reward scenario Jeff touched on in our earlier post.

Suppose PolitiFact has a buffet consisting of 100 potential fact checks. And suppose PolitiFact has 97 contributors who paid $100 or more for the privilege of choosing what PolitiFact fact checks.

PolitiFact can potentially give each of those 97 a selection of four fact checks from that group of 100. Because PolitiFact controls which four items each backer sees, it can stagger the offerings so that no two backers choose the same item, leaving PolitiFact to publish 97 of the 100 fact checks it was considering anyway.

That's a sham offer of editorial control. In a scenario like that, PolitiFact is selling an illusion. It can even pick the remaining three items later on if it chooses, ending up running every single one of the 100 fact checks it was considering.

Simply put, PolitiFact could give subscribers a list of items they were going to check anyway, and no one would be the wiser.
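Here's a minimal Python simulation of that scenario. The counts and the one-backer-at-a-time mechanics are our assumptions, since PolitiFact hasn't described its actual procedure:

import random

buffet = set(range(100))   # 100 fact checks PolitiFact already planned to consider
remaining = set(buffet)
published = set()

for _ in range(97):        # 97 contributors at the $100-or-more level
    # PolitiFact controls which four items each backer sees.
    offered = random.sample(sorted(remaining), 4)
    pick = random.choice(offered)  # whatever the backer "chooses"...
    published.add(pick)            # ...was already on PolitiFact's buffet
    remaining.discard(pick)

print(len(published), "picks;", len(published & buffet), "were on the buffet all along")
# Prints: 97 picks; 97 were on the buffet all along

However the choices fall, every "chosen" item came off PolitiFact's own list, and the unpicked items remain available for PolitiFact to run later.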

Step right up, rubes!

"Their choice won't stop us from checking other claims from the list, but it will assure that the claim they're interested in gets fact-checked."


PolitiFact, of course, is left free to manipulate the offer behind the scenes to make it completely insignificant. It's kind of like that carnival game where you pick a plastic duck out of the water. The carny looks at a number on the bottom of the duck and gives the contestant the prize that corresponds to the number. But since the public can't see how the prizes are numbered, the carny can give the contestant any prize the carny chooses. PolitiFact practices the same type of transparency as the carnival swindler.

"It will also give them a sense of our process."


One wonders how that is supposed to work.



Jeff Adds:

This response from PolitiFact validates the suspicions I outlined in my previous post. PolitiFact implicitly responds to my article by stating it will not spike items left unselected by donors. In doing so, it confirms the "pick a fact check" reward is pure theater and a sham benefit.

It's also worth noting that while this $100 reward drew our attention, there is another one that is equally vague: The $25 pledge reward includes (emphasis added) "A follow and a Twitter shout out to our 180,000 followers (or a mention on Facebook)."

Unfortunately, PolitiFact doesn't specify how they determine which one you receive. For our money, however, we look forward to them informing their 180,000 Twitter followers about our existence and sincerely hope our tweets help motivate PolitiFact to become better fact checkers.

You can beat the rush and follow us on Twitter by clicking here.


Bryan adds:

There's an easy way for PolitiFact to keep this whole thing aboveboard. All it takes is a couple of quotation marks:

You "pick" what we fact check!

Friday, January 2, 2015

PolitiPimp: PolitiFact Kickstarter sells Journalism for Cash

Love that's fresh and still unspoiled, Love that's only slightly soiled,
Old love, new love, every love but true love, 
Love for sale
                                                                                            -Ella Fitzgerald


This week PolitiFact announced it had created a Kickstarter campaign to raise funds to cover the State of the Union address. For those not familiar with Kickstarter, it is an online fundraising site where users present an idea in hopes of receiving financing.

Typically, the creator offers incentives for the backers; e.g., an author seeking funds for a book may offer an autographed copy to backers who pledge a certain amount. While this incentivized system helps make Kickstarter a successful fundraising tool, it also makes it a bad idea for journalism.

Check out one of the "rewards" PolitiFact is offering in exchange for cash:

 

Image from Twitter

A trip to PolitiFact's Kickstarter page offers slightly more information:



Image from Kickstarter

The flaws with this supposed reward are obvious, both ethically and practically.

It's hard to imagine any legitimate journalistic outlet would be willing to explicitly exchange what stories they cover, and what stories they ignore, for a specific price. Not since GOP Congressman "Duke" Cunningham's bribe list was uncovered has an offer to sell influence been so overt. By offering this reward, PolitiFact is openly encouraging the exchange of coverage for a cash gift. 

If the Koch brothers were to offer to pick up the entire $15,000 tab to cover the State of the Union address, would they be able to put the kibosh on fact checks that may turn out unfavorable to their business? 

How can a supposedly objective fact checker maintain a shred of credibility once it's offered to sell its services for a price? 

This brings us to the second problem with the fundraiser: the practical application of the rewards is impossible without performing the pay-to-play scheme described above.

As of this writing, the "pick a fact check" reward has five backers. For the reward to be legitimate, each backer must receive a unique list of four fact checks that PolitiFact is "thinking of working on." Otherwise, unless all five backers happened to select the same fact check, their overlapping choices would cancel one another out, meaning PolitiFact would end up checking all four items anyway. That would render each backer's selection moot, or in other words: worthless.

What exactly is PolitiFact offering?


If they are in fact offering each backer four unique fact checks to choose from, then it clearly fits the definition of peddling influence and allowing those with cash to sway news coverage both in the affirmative (pick the one to check) and the negative (pick three stories to bury). 

If PolitiFact is offering up the same four fact checks to everyone who pledges $100, then the reward is meaningless, because different individuals will select different items to check, meaning all four items will be checked anyway. The reward isn't really a reward at all. It's a promised benefit of the snake oil PolitiFact is hawking.

To be sure, we certainly don't begrudge PolitiFact attempting to further monetize their brand. Most journalism projects require revenue, and PolitiFact is no different. (While PFB is a noncommercial site and we fund our insignificant costs personally, Bryan's Zebra Fact Check project provides an opportunity to contribute to his efforts.) However, the idea of allowing someone to influence editorial decisions and cover up specific stories in exchange for cash is something that flies in the face of ethically sound journalism.

Depending on which way PolitiFact administers this reward, it's either pulling a scam or soliciting a bribe.

We'll add this to the list of poorly conceived decisions under Angie Holan's leadership of the PolitiFact brand. Whatever relevancy PolitiFact had left is now on sale for the low, low price of a crumpled up Benjamin.

Seems overpriced to us.



Bryan adds 

I almost hope PolitiFact gets 4 billion $100.00 pledges. 

The implications are obvious.


Update 1/6/2015: We asked PolitiFact to clarify the rewards, and we wrote about their response here.




Edit 01/02/2015: Changed "Either way" to "Depending on which way" in antepenultimate paragraph -Jeff

PolitiFact still biased after all these years

It's time for the annual update to our ongoing research project measuring PolitiFact's bias in its application of "Pants on Fire" ratings.

In spite of its frequent publication and pimping of "report cards" showing how various persons and organizations rate on its trademarked "Truth-O-Meter," PolitiFact openly admits that its process for choosing which claims to rate is not scientific. PolitiFact maintains it is making no effort to figure out which persons or groups lie more, while at the same time publishing its stories in a way that encourages readers to draw exactly such conclusions from unscientific data.

We at PolitiFact Bias have helped pioneer the practice of using the fact checkers' rating systems to measure ideological bias in journalism. The "Pants on Fire" bias study was the first we published. It examines differences in how PolitiFact applies its "Pants on Fire" rating compared to its other rating for false statements, "False."

PolitiFact's liberal defenders have a ready defense when it turns out that PolitiFact shows much more enthusiasm for giving "Pants on Fire" ratings to Republicans than it does for Democrats. "Republicans simply tell the biggest lies," they say, or some such.

Not so fast, we say.

One PolitiFact insider called PolitiFact's ratings "coin flips." Current PolitiFact editor Angie Drobnic Holan recently described the difference between a "False" rating and a "Pants on Fire" rating:
"We have definitions for all of our ratings. The definition for "False" is the statement is not accurate. The definition for "Pants on Fire" is the statement is not accurate and makes a ridiculous claim. So, we have a vote by the editors and the line between "False" and "Pants on Fire" is just, you know, sometimes we decide one way and sometimes decide the other."
If awarding a claim a "Pants on Fire" rating were based on something objective, we suggest that somebody at PolitiFact could describe how that objective process operates. We've yet to see such a description in over seven years of observation.

Given PolitiFact's disinclination or inability to reveal any objective basis for the distinction between "False" and "Pants on Fire" statements, we conclude that the difference between the two is substantially or perhaps entirely subjective. So we compare the percentage of all false statements ("False" plus "Pants on Fire") rated "Pants on Fire" by party. It reveals that PolitiFact, after 2007, has consistently given false statements by Republicans "Pants on Fire" ratings at a much higher rate than those from Democrats.

And that, we claim, is an objective measure of ideological bias. The "Pants on Fire" rating amounts to an opinion poll of PolitiFact journalists as to which false claims are ridiculous. PolitiFact subjectively finds the false claims of Republicans more ridiculous than those of Democrats.

It's not the only measure of ideological bias, of course, and it is subject to some limitations that we've described in earlier publications.

The new data for 2014

Our findings for 2014 proved very consistent with PolitiFact's performance from 2008 through 2013.

The PoF bias number for 2014 was 1.95, meaning a Republican's false statement, as designated by PolitiFact, was 95 percent more likely to receive a "Pants on Fire" rating than a false statement from a Democrat. That's nearly twice as likely.

The selection bias number for 2014 was 2.56, meaning the total number of false statements, "False" and "Pants on Fire" ratings combined, was 156 percent higher for Republicans than for Democrats.

Overall, that means PolitiFact rates more Republican statements false and gives Republicans' false statements the "Pants on Fire" rating at a much higher rate than Democrats'.


PolitiFact continued its recent trend of finding Democrats ever more truthful. Democrats tied their all-time record with only 16 statements found false, and improved on their performance in 2013 with only two of those 16 rated "Pants on Fire."

Republicans had also shown a recent trend toward fewer false ratings since a high mark in 2011, but that reversed in 2014. Still, Republicans posted their lowest percentage of "Pants on Fire" ratings since 2008, the year for which PolitiFact won its 2009 Pulitzer Prize.

The 1.95 PoF Bias number for 2014 boosted PolitiFact's cumulative figure a few hundredths to 1.74. Since its inception, PolitiFact has been 74 percent more likely to rate a false statement as "Pants on Fire" if the claim comes from a Republican instead of a Democrat.
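For readers who want to check the arithmetic, here's a short Python reconstruction. The Democratic counts appear in the paragraphs above; the Republican counts are back-derived from the stated ratios, so treat them as approximations rather than figures quoted from our tables:

dem_total, dem_pof = 16, 2              # Democrats: 16 false ratings, 2 "Pants on Fire"
rep_total = round(2.56 * dem_total)     # selection bias 2.56 implies about 41 Republican false ratings
dem_share = dem_pof / dem_total         # 0.125
rep_share = 1.95 * dem_share            # PoF Bias 1.95 implies a share of about 0.244
rep_pof = round(rep_share * rep_total)  # about 10 Republican "Pants on Fire" ratings
print(rep_total, rep_pof)               # 41 10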

Note

The above figures include only elected or appointed persons or party organizations such as the Democratic National Committee. We call these "Group A" figures and consider them the most reliable group in this study for showing partisan bias at PolitiFact.

We'll soon publish our findings for PunditFact, PolitiFact's effort focused on pundits, which represents "Group B" data for purposes of our research.