Monday, December 28, 2015

PunditFact's editorial page

Oops--are we being redundant? Sorry! Here's a portion of PunditFact's main page from Dec. 28, 2015.

Partial screen capture from Dec. 18, 2015

That, ladies and gentlemen, is a pictorial editorial.

It's not an unaltered photograph of Donald Trump. It's not a photo at all. It is an artist's rendering of Donald Trump, created to communicate an editorial message to PunditFact's readers. Note how Trump's fingers are crossed as he speaks, a traditional gesture of those who know they are speaking falsely.

Material like this makes PunditFact itself a pundit of sorts.

We've long noted that PolitiFact and its various franchises blur the traditional line between straight news reporting--what some might expect from journalism billed as "fact-checking"--and editorializing. That's why we call PolitiFact "the Fox News of fact-checking" and PolitiFact staffers "liberal bloggers."

Their journalism is not objective. When PolitiFact creator Bill Adair calls fact checking a "new form of journalism," perhaps he has in mind that deliberate blurring of the lines.

Or not. Either way, we don't appreciate it.

Saturday, December 26, 2015

Yet more scintillating offsite criticism

Gotta love it.

A member of a message board posted some of our material for discussion--it seems somebody was using Angie Drobnic Holan's chart to draw firm conclusions about candidate truthfulness.

How were our views criticized this time?

For the record, the site (...) is run by two bloggers. One of whom lists beer as one of his interests and the other documents the bands that he likes as a credential.
So this time we get not just ad hominem fallacies but incredibly lame ad hominem fallacies. And the earlier post of our material included some of our responses to similar criticism. I guess it truly doesn't help to point out to people when they're reasoning fallaciously.

Dude: It's the movies I like that serve as a credential. Not the bands. Sheesh. What a maroon. j/k

On the positive side, he didn't accuse us of posting anonymously as some have done. So, credit where it's due.

Clearly we need to rush forward with our new page answering the most common criticisms (count ad hominem fallacies as one such) we receive.

PolitiMath from PolitiFact New Hampshire

What's new with PolitiMath?

PolitiFact New Hampshire, now the Concord Monitor's partnership with PolitiFact, gives us a double dose of PolitiMath with its July 2, 2015 fact check of New Hampshire's chief executive, Governor Maggie Hassan (D).

Hassan was the only Democrat to receive any kind of false rating ("False" or "Pants on Fire") from PolitiFact New Hampshire in 2015. PolitiFact based its ruling on a numerical error by Hassan and added another element of interest for us by characterizing Hassan's error in terms of a fraction.

What type of numerical error earns a "False" from PolitiFact New Hampshire?

PolitiFact summarizes the numbers:
In her state of the state address, Hassan said that "6,000 people have already accessed services for substance misuse" through the state’s Medicaid program.

There is no question that substance abuse in the state is a real and pressing problem, and the statistics show that thousands have sought help as a result of the state’s expanded Medicaid program. But Hassan offered (and later corrected) a number that simply wasn’t accurate. The real total is closer to 2,000 -- about one-third the amount she cited.

We rate her claim False.
Described as a percentage error using PolitiFact's figures, Hassan's mistake amounts to an exaggeration of about 230 percent. PolitiFact gave Hassan no credit for her underlying point.

In our PolitiMath series we found the closest match for this case from PolitiFact Oregon. PolitiFact Oregon said conservative columnist George Will exaggerated a figure--by as much as 225 percent by our calculations. The figure PolitiFact Oregon found was uncertain, however, so Will may have exaggerated considerably less using the range of numbers PolitiFact Oregon provided.

In any case, PolitiFact Oregon ruled Will's claim "False." PolitiFact Oregon gave Will no credit for his underlying argument, just as PolitiFact New Hampshire did with Gov. Hassan.

Percent Error and Partisanship

One of our research projects searches PolitiFact's fact checks for a common error journalists make. We reasoned that journalists would prove less likely to make such careless errors when checking the party they prefer. Our study produced only a small set of examples, but the percentage of errors was high, and the errors favored Democrats.

PolitiFact New Hampshire's fact check of Gov. Hassan draws some consideration for this error, giving us the second mathematical element of note.

PolitiFact could have expressed Hassan's mistake using a standard percentage error calculation like the one we used. We calculated a 230 percent error. But PolitiFact New Hampshire did not use the correct figure (1,800) as the baseline for calculating error. Instead, the fact checkers used the higher, incorrect figure (6,000) as the baseline for comparison: "about one-third the amount she cited."

Using the number "one-third" frames Hassan's error nearer the low end. "One-third" doesn't sound so bad, numerically. Readers with slightly more sophistication may reason that the "one-third" figure means Hassan was off by two-thirds.

Sometimes using the wrong baseline makes the error look bigger and sometimes it makes the error look smaller. In this case the wrong baseline frames Hassan's mistake as a smaller error. The Democrat Hassan gains the benefit of PolitiFact's framing.
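The baseline arithmetic described above can be sketched in a few lines of Python, using the figures from PolitiFact's fact check (6,000 claimed, 1,800 correct):

```python
claimed = 6000   # figure Hassan cited
actual = 1800    # figure PolitiFact reported as correct

# Standard percent error: the difference measured against the CORRECT figure.
percent_error = (claimed - actual) / actual * 100
print(f"Standard percent error: {percent_error:.0f}%")  # about 230 percent

# PolitiFact's framing: the correct figure measured against the CLAIMED figure.
fraction_of_claim = actual / claimed
print(f"Fraction of claimed figure: {fraction_of_claim:.2f}")  # "about one-third"
```

Both numbers describe the same mistake; only the choice of baseline changes. The standard calculation yields roughly 233 percent, while dividing by the inflated figure instead produces the gentler-sounding "one-third."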

Wednesday, December 23, 2015

"Supreme conservative whinybaby" strikes back

As ever, we're interested in answering criticism of PolitiFact Bias. But you wouldn't believe the work we have to do to find anything that's actually worth taking the time to answer.

For example, check out this devastating set of criticisms from the "Radio Free Liberal" message board. One member of the board posted a link to our site, saying it would help explain how/why PolitiFact is biased to the left.

Then we get this:
This is an anti-PolitiFact website which means it is biased. Really, you could have done something better than picking the first website appearing on Google. This is almost as funny as the time when you were challenged to backup one of your claims and you posted a link to a white supremacist website and then claimed you didn't know it was a white supremacist website.
An anti-PolitiFact website is automatically biased? Doesn't that beg the question? Plus we're indicted for choosing a website name that puts us at the top of search results!

There's no substance at all to this criticism. Is it really helpful to people to explain when they're assuming their conclusion or committing a genetic fallacy? If that helps, here we go: The criticism assumes that criticism of PolitiFact is biased and therefore invalid. How would one justify that assumption?

How about this:
All he had to do is to go to the sponsor of the web blog to get the back story of how they manipulated their own "politifact" story. It's all in the open, Tell a lie and repeat it often... the dummies will follow. ... ts_31.html
I have no idea what this person is trying to say. The link goes to Jeff's cross-post of an article we also have posted at PolitiFact Bias. As far as I can tell, this chap thinks it represents evidence that we are admitting we are making up stuff about PolitiFact. If anybody thinks they see the same thing in that link he does, please drop us a line.

This next commenter apparently likes the guitarist Steve Morse, and therefore can't be all bad:
And the other guy sounds like a supreme conservative whinybaby. Look at these sites of his

Although, he seems to be a Steve Morse fan. I can get with that.
I guess it's a board for liberals to get together and whine about how whiny conservatives are. But let's just assume I'm not just a supreme conservative whinybaby. Let's assume I'm the supreme conservative whinybaby. With that out of the way, can we agree that a criticism that fails to get beyond name-calling amounts to an ad hominem fallacy?

Come on, Radio Free Liberal. Give us something challenging to work with. Assuming that criticism of PolitiFact is invalid thanks to bias is essentially the same error as assuming PolitiFact is invalid because it is biased.

You can do better, right? We await your counterwhine.

Refuting PolitiFact on mismatch theory

Back on Dec. 11, 2015, PolitiFact published a fact check of a statement by Supreme Court Justice Antonin Scalia. Scalia told his courtroom audience about a brief which said that most black scientists do not come from the top-flight schools for which they may not be academically prepared. Scalia was alluding to "mismatch" theory. As Scalia put it, "They come from lesser schools where they do not feel that they’re being pushed ahead in classes that are too fast for them."

Mismatch theory is the idea that students who aren't academically prepared for a higher-level school are better off attending a school better suited to their level of preparation.

PolitiFact rated Scalia's claim "Half True." Scalia, PolitiFact said, was accurate that most black scientists come from lesser schools, but the statistic doesn't stem directly from academic mismatch.

But what really caught our eye about the PolitiFact fact check was this (bold emphasis added):
Regarding the larger point — that black students fare better at "lesser" schools because their academic credentials are better matched to the curriculum — the evidence is mired in controversy. There is some scholarly research that backs up this point, but there is also scholarly research that refutes it.
It's not uncommon to see the word "refutes" misused this way, improperly substituting it for a word like "rebuts." An argument that is refuted is a defeated argument. An argument that receives a rebuttal is contested. PolitiFact's choice of words was poor, biased or both.

We were reminded of PolitiFact's skim-the-surface treatment of Scalia's remarks by a much better treatment by Conor Friedersdorf in The Atlantic. This portion we found striking:
I’m baffled that any journalists are treating it as settled, even as tenured social-scientists at top-tier universities declare that it deserves to be taken seriously. No one, it seems, can yet provide a precise answer to the question, “at what point do disparities in GPA, SAT score, or high-school quality start to matter,” even though everyone surely agrees that they matter at some inflection point.
While PolitiFact stops short of claiming the issue is settled against mismatch theory, its treatment is decidedly more negative than the one at The Atlantic.

And since when is it a proper fact check of Scalia without specifically referencing the brief to which he referred?

PolitiFact points toward this brief as the likely source of Scalia's statement, but the fact check provides no link (whether by error or not). We can't find the document linked by PolitiFact and we don't see a clear connection between Scalia's remarks and the document PolitiFact pointed toward.

Moreover, as Alison Somin points out, the mainstream media narrative in which PolitiFact wallows skipped over the fact that justices routinely ask attorneys questions based on briefs they do not necessarily agree with. PolitiFact glossed over that context.

Critics have not refuted mismatch theory. It's fair game for Scalia to bring up that theory using words from an amicus brief. It's notable that the attorney in question did not respond as pedantically as did the mainstream media.

Matching Scalia's questioning to the right brief--that would be fact-checking. What PolitiFact did--not so much. PolitiFact's "fact check" was an op-ed about mismatch theory slanted against Scalia and mismatch theory.

Edit: Added link to PF story at "Half True" text- Jeff 12/23/2015 1508PST

Monday, December 21, 2015

The unbearable lameness of PolitiFact's 2015 "Lie of the Year"

PolitiFact has chosen statements by Donald Trump as its "Lie of the Year" for 2015, effectively making Donald Trump PolitiFact's liar of the year.

Why do we say it's lame?

PolitiFact's "Lie of the Year" has always been the fact checkers' editorial. As such, it will show a tendency to follow a journalistic narrative.

What's this year's key journalistic narrative? Donald Trump vs. the media.

Out of one side of its mouth PolitiFact denies it charges people with lying. Out of the other side ...

Two years in a row (arguably three), PolitiFact chose a winner that was not on the list of candidates readers were given when voting for the readers' version of PolitiFact's "Lie of the Year."

Picking a group winner works well for PolitiFact since it's hard to contest the choice. Contest one of the supposedly false claims on the basis of fact and PolitiFact can always point to another on the list and emphasize that one. Justified, see?

It's so predictable that I predicted it shortly after PolitiFact announced its candidates:
I'm going to invoke PolitiFact's history over the last two years of not putting the eventual winner on the list of candidates. My pick is "Statements by Donald Trump" as a parallel to last year's "Statements about Ebola."
It shouldn't be that easy.


Sunday, December 20, 2015

PolitiFact's Unethical Internet Fakery

What's Fake on the Internet: Unbiased fact checkers

We stumbled across a farewell post for Caitlin Dewey's rumor-debunking "What was Fake" column in the Washington Post. In the column, Dewey notes a change in the Internet hoax business, namely that rumors are often spread intentionally via professional satire sites seeking click traffic, and is calling it quits "because it’s started to feel a little pointless."

While Dewey's column focused more on viral Internet rumors than politics specifically, we were struck by the parallel between her observations and our own regarding PolitiFact. She laments that the bogus stories are so easily debunked that the people who spread the misinformation aren't likely to be swayed by objective evidence. She then highlights why hoax websites have proliferated:
Since early 2014, a series of Internet entrepreneurs have realized that not much drives traffic as effectively as stories that vindicate and/or inflame the biases of their readers. Where many once wrote celebrity death hoaxes or “satires,” they now run entire, successful websites that do nothing but troll convenient minorities or exploit gross stereotypes.
Consider that trend when you see this chart that ran with PolitiFact editor Angie Holan's NYT opinion article:

Image via NYT screengrab

The chart, complete with bar graphs and percentages, frames the content for readers with a form of scientific legitimacy. But discerning anything from the aggregate total of their ratings amounts to pure hokum. The chart doesn't provide solid evidence of anything (with the exception of PolitiFact's selection bias), but it surely serves to "vindicate and/or inflame the biases of their readers."

We've gone into detail before explaining why PolitiFact's charts and report cards amount to junk science, but simply put, there are multiple problems:

1) PolitiFact's own definitions of their ratings are largely subjective, and their standards are applied inconsistently between editors, reporters, and individual franchises. This problem is evidenced by having nearly identical claims regarding a 77-cent gender wage gap being rated everywhere from True to Mostly False and everything in between.

2) Concluding anything from a summary of PolitiFact's ratings assumes each individual fact check was performed competently and without error. Further, it assumes PolitiFact only rates claims where actual facts are in dispute as opposed to opinions, predictions, or hyperbolic statements.

3) PolitiFact's selection bias extends beyond what specific claim to rate and into what specific person to attribute a claim to (something we've referred to as attribution bias). For example, an official IG report included an anecdote that the government was paying $16 for breakfast muffins. Bill O'Reilly, ABC and NPR all reported the figure in the report. PolitiFact later gave Bill O'Reilly a Mostly False rating for repeating the official figure. This counts as a Mostly False on his report card while ABC, NPR, and even the IG who originally made the claim are all spared this on their "truthiness" chart.

4) The most obvious problem is selection bias. Even if we assume PolitiFact performed their fact checks competently, applied their standards consistently, and attributed their ratings correctly, the charts and report cards still aren't evidence of anyone's honesty. Even PolitiFact admits this, contrary to its constant promotion of its report cards.

To illustrate PolitiFact's flaw, consider investigating a hundred claims from President Truman to determine their veracity. Suppose you find Truman made 20 false claims, and you then publish only those 20 "False" claims on a chart. Is this a scientific evaluation of Harry Truman's honesty? Keep in mind you get to select which claims to investigate and publish. Ultimately such an exercise would say more about you than about Harry Truman. The defense that PolitiFact checks both sides falls flat (PolitiFact gets to pick the "True" claims too, and in any event the defense amounts to an appeal to the middle ground).
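The Truman thought experiment can be sketched as a toy simulation. This is not PolitiFact's actual selection process--the `report_card` function, the `bias` parameter, and all the numbers are hypothetical--but it shows how a fact checker free to choose which claims to publish can produce nearly identical "report cards" for politicians with very different actual rates of false statements:

```python
import random

random.seed(0)

def report_card(true_false_rate, n_statements=1000, n_checked=100, bias=0.8):
    """Simulate a fact checker that chooses which statements to check.

    true_false_rate: the politician's actual share of false statements.
    bias: probability the checker draws its next item from the false pile.
    Returns the share of "False" ratings on the published report card.
    """
    # True marks a false statement; the politician's real record.
    statements = [random.random() < true_false_rate for _ in range(n_statements)]
    false_pile = [s for s in statements if s]
    true_pile = [s for s in statements if not s]
    checked = []
    for _ in range(n_checked):
        pile = false_pile if (random.random() < bias and false_pile) else true_pile
        checked.append(pile.pop())
    return sum(checked) / len(checked)

# Two politicians with very different actual rates of false statements
# end up with similar-looking report cards, roughly matching the
# checker's selection bias rather than either politician's honesty.
print(report_card(0.10))  # mostly honest politician
print(report_card(0.40))  # much less honest politician
```

Under these assumptions, both report cards come out around 80 percent "False"--the published share tracks the checker's selection habits, not the subject's veracity.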

We've documented how PolitiFact espouses contradictory positions on how to use their data. PolitiFact warns readers they're "not social scientists," but then engages in a near constant barrage promoting their "report cards," claiming they're useful for showing trends.

Whenever PolitiFact promotes charts like the one posted in the NYT article, the overwhelming response on Facebook and Twitter is to send the chart viral with unfounded claims that the conservative bogeymen are allergic to the truth. How can PolitiFact ethically promote such assertions when they know their report cards offer no objective data about the people they rate?

Instead of telling readers to discount any conclusions from their misleading charts, PolitiFact actively encourages and promotes these unscientific charts. That's not how honest, professional journalists seeking truth behave. On the other hand, it's behavior we might expect from partisan actors trolling for web traffic.

So why are people so intent on spreading PolitiFact's bogus charts based on bad information? Perhaps Dewey uncovered the answer:
[I]nstitutional distrust is so high right now, and cognitive bias so strong always, that the people who fall for hoax news stories are frequently only interested in consuming information that conforms with their views — even when it’s demonstrably fake.
No worries, though. PolitiFact editor Angie Holan assures us "there's not a lot of reader confusion" about how to use their ratings.

We're bummed to see Dewey close down her weekly column, as she arguably did a more sincere job of spreading the truth to readers than PolitiFact ever has. But we're glad she pointed out the reason hoax websites are so popular. We suggest the same motivation is behind PolitiFact publishing their report cards. Is there much difference between hoax sites spreading bogus rumors and PolitiFact trolling for clicks by appealing to liberal confirmation bias with its sham charts and editorials masquerading as fact checks?

Contrary to being an actual journalistic endeavor, PolitiFact is little more than an agenda-driven purveyor of clickbait.

Attack of the offsite critics!

Unlike PolitiFact, PolitiFact Bias regularly engages its critics.

We gained an opportunity recently to have some back-and-forth with some off-site critics. We ran across some traffic to our site coming from a message board. Somebody had linked to this site, and the link had drawn dismissive responses. But the responses were quite vague about the reasons behind their dismissiveness. So I joined the forum to see if I could uncover those reasons.

The first two detailed reasons were based on misunderstandings. One critic thought we were saying that Republicans and Democrats lie equally. We don't say that because we have no idea. Finding out would take a detailed and complex study. We're not aware of any serious attempt at such a thing (we're aware of one unserious attempt).

Another critic thought we were saying Democrats and Republicans lie proportionately to each other. Again, that's not what we say. Finding out the proportions of Democrat and Republican lies would take a detailed study. We don't pretend to have done anything of the kind.

These misunderstandings stem from our attempts to describe what results PolitiFact's ratings ought to give if PolitiFact used the types of editorial discretion it describes, but did so with ideological blindness. PolitiFact's collection of ratings would still suffer from selection bias, so the ratings would not support generalizations about the proportions of lies for either of the two political parties. But the selections would have a "shape" of sorts, and we say that shape should be about proportional for both parties regardless of whether one states more untruths than the other.

What did we learn from engaging our critics on this occasion? We found no legitimate criticism, but found good reason to improve our description of a minor point we make at the site.

Correction Dec. 21, 2015: Fixed some minor typos and added a comma.

PolitiFact, Paul Ryan and cherry-picking

Does this add up?

Speaker of the House Paul Ryan (R-Wis.) said during a television appearance that Obamacare was making families pay double-digit premium increases.

PolitiFact gave Ryan's statement its "fact check" treatment, which we're inclined to call liberal blogging.

Step 1:
Ryan is suggesting ... increases in the "double digits." We decided to rate that claim on the Truth-O-Meter.
Step 2:
According to HHS data, 19 out of the 37 states in the federal exchange saw an average rate increase in the double digits. At the low end, rates in Missouri increased by 10.4 percent while Oklahoma saw the biggest hike at 35.7 percent.
Step 3:
Ryan has a point that some plans have seen increases of 10 percent or more with insurance purchased on However, Ryan is cherry-picking the high end of rate changes.
Got it? PolitiFact says Oklahoma is experiencing an average Obamacare exchange rate hike of 35.7 percent. And saying "double-digit" rate increases is cherry-picking from the high end of rate changes.

Given that a 10 percent rate hike is "double digits" and the top (average) exchange rate hike is over 35 percent, what kind of sense does it make to say Ryan is cherry-picking the high end of rate changes?

Silly liberal bloggers.

Ryan used a normal ambiguity when he spoke. "Families" does not mean "All families," as PolitiFact claimed Ryan was suggesting. If a substantial number of families are getting hit with double-digit rate increases, then what Ryan said was accurate. And by "accurate" I don't mean "True" in the sense of a "Truth-O-Meter" rating. I mean "accurate" in the sense that PolitiFact uses the term when it defines "True" and "Mostly True" for purposes of its cheesy "Truth-O-Meter."

PolitiFact's definitions are themselves "Half True." You can tell that's the case when Ryan's accurate statement receives its "Half True" rating while Bernie Sanders' inaccurate statement receives a "Mostly True" rating.

PolitiFact fact-checking is a game liberal bloggers play.


Other than Ryan's office providing supporting URLs dealing with the individual market, what would lead PolitiFact to simply ignore the much larger group plan market in its fact check? Do group plans not count? Does the lack of an employer mandate mean Obamacare does not affect group insurance despite the new regulations it imposes on the group market?

Rate increases? What rate increases?
And according to an Arthur J. Gallagher & Co. survey of smaller employers, most of which have less than 1,000 employees, released Friday, 44% reported premium rate hikes of 6% or more in 2014. Twenty-three percent saw rates in the double digits, the survey showed.
We'll say it again: Silly liberal bloggers.

After Afters

Before I forget the caboose on this PolitiFact trainwreck ...
Ryan missteps by saying the law alone is "making" the premiums increase. Rather, experts say, hikes are more likely the result of insurers underestimating how sick enrollees would be.
Silly cherry-picking liberal bloggers. Obamacare is the reason insurers don't know how sick their enrollees will be. Guaranteed issue. Remember? No, I guess you don't remember.

Correction Dec. 20, 2015: Changed from "Ind." to "Wis." the state which Ryan represents. Hat tip to commenter Pagan Raccoon for pointing out the error.
Correction Dec. 21, 2015: When repeating PolitiFact's 35.7 figure we typo-inflated it to 37.7. That's now fixed. Hat tip to "John Smith" for using the comments section to point out the error.

Saturday, December 19, 2015

Zebra Fact Check: "PolitiFact vs. Rubio on defense spending"

At my fact check site Zebra Fact Check, I took note of PolitiFact Florida's fact check of Marco Rubio from earlier this year:
Rubio said that the United States "is not building the aircraft, the long-range bombers, the additional aircraft carriers, the nuclear submarines."

The military has programs in place to build the types of equipment Rubio mentioned, including the largest aircraft procurement ever: the F-35 Joint Strike Fighter. It will take many years and billions of dollars to complete the procurement, but Rubio’s statement could mislead voters into thinking that the United States has closed up shop in the area of military equipment and isn’t building anything, which isn’t the case.

We rate this claim False.
In PolitiFact Florida's summary Rubio's statement gets shortened--PolitiFact quotes the full sentence earlier in the story (bold emphasis added): "We are the only nation that is not building the aircraft, the long-range bombers, the additional aircraft carriers, the nuclear submarines we need for our nation's defense."

The Zebra Fact Check article points out how PolitiFact Florida carelessly overlooked an alternative understanding of Rubio's words that is consistent with his speeches:
We find it completely obvious that Rubio was saying the pace of and planning for defense acquisitions falls below what is needed for adequate defense.

What evidence supports our position? Rubio’s own past statements, for starters.
Rubio's speech from September 2014 goes over each of the military acquisition challenges he mentions in the statement PolitiFact gave its "fact check" treatment. How did PolitiFact Florida miss that information? Probably by not knowing the issue and by not bothering to look.

In other words, by allowing bias to influence the outcome of the fact check.

That's the meaning of these accumulated examples of flawed fact checks we highlight and analyze here at PolitiFact Bias. PolitiFact makes plenty of mistakes, and the tendency of those mistakes more often to unfairly harm conservatives and Republicans makes up one of our main lines of evidence of PolitiFact's liberal bias.

No one example is intended to prove PolitiFact's bias. The bias comes through when considering its whole body of work, including the rarer cases where PolitiFact unfairly harms a liberal with its poor fact-checking.

We're finding it harder and harder to find serious criticism of PolitiFact coming from the political left, for what that's worth.

Friday, December 18, 2015

PolitiFact and opt-in polling

On Dec. 9, 2015, PolitiFact doled out a "Mostly False" to Donald Trump.

Trump had cited an opt-in poll that found about 25 percent of American* Muslims think it may be okay to commit violent acts against Americans in the name of jihad. PolitiFact found it improper to report the results of the poll:
Trump is referring to a poll conducted by the Center for Security Policy. However, polling experts raise numerous questions about the validity of the poll’s results, including its "opt-in" methodology and the dubiously large percentages of respondents who said they were unaware of ISIS or al-Qaida. Moreover, an official with the Center for Security Policy cautioned against generalizing the poll results to the entire Muslim-American community.
So, the opt-in methodology makes poll results suspect?

Is PolitiFact consistent on that point?
PolitiFact and PunditFact will soon announce our Lie of the Year -- the most significant falsehood of 2015, as chosen by our editors and reporters. We're also inviting PolitiFact readers to vote for the Readers' Choice award.

Here are our 11 finalists and a link to our survey so you can vote for your favorite. We accept write-ins. We also have a mobile-friendly version of this survey.
Does PolitiFact claim or imply that its "Readers' Choice" award accurately reflects the opinion of its readers?


*Though the poll surveyed American Muslims, Trump did not mention that aspect of the poll when he made his claim. PolitiFact apparently found the words Trump chose unimportant when it fact checked his claim.

Thursday, December 17, 2015

"There's not a lot of reader confusion" Update 2

The Houston Chronicle joins those suffering confusion over the significance of PolitiFact's "report cards."

The New York Times breathlessly excreted the "All Politicians Lie. Some Lie More Than Others." headline over an article by PolitiFact's Angie Drobnic Holan, even though the article mentioned nothing about intentional deceit--the commonly understood definition of "lie."

Rolling Stone published a graphic based on PolitiFact's ratings, following the Times' lead by calling false statements "lies" and framing the story as though percentages of false statements as judged by PolitiFact represent something significant.

Now along comes the Chronicle to provide its own endorsement of PolitiFact's junk science:
(PolitiFact) looked at top politicians and 2016 presidential candidates to analyze which ones have told the most false statements since 2007. The site’s editor Angie Drobnic Holan wrote an Op-Ed in the New York Times about the results of their fact-checking campaign. See the politicians who lie the most in the gallery.
Once again we have equivocal language, with "lie" standing in for "false." And the word "analyze" creeps in there somehow, as though totaling the different ratings for each candidate were some type of science. If it's any kind of science, it's junk science.

Do we get any disclaimer about PolitiFact's dubious objectivity (and that's being generous) or the obvious selection bias problem?

No. Nothing closer than this:
Politifact only evaluates statements that are clear, precise and are not “obviously true.” The site does not look at opinions or predictions, but statements that the general public would be interested in knowing the truth about.
Chronicle readers who are familiar with science may be able to figure out from that paragraph that PolitiFact's stories are not chosen to make up a representative sample. That means the percentages of "lies" reported in the story have no firm basis in evidence.

PolitiFact editor Angie Drobnic Holan claims there's not much reader confusion about PolitiFact's report cards. If she were right, then stories based on the report cards would not draw much reader interest. The report cards draw reader interest precisely because of the way they feed confirmation bias.

PolitiFact, The New York Times, Rolling Stone and the Houston Chronicle all appear perfectly willing to stoke that confirmation bias.

Here's a sampling of the absence of reader confusion from the comments section under the Chronicle story:
"SHOCKER...Conservative Republicans and Religious Right candidates lie more than their Liberal counterparts. Those darn facts are always getting in the way."

"I don't know what is more disturbing--the fact that Republican candidates lie up to 80% of the time, or that the more they lie, the more popular they are within the party."

"The GOP candidates have issued a higher percentages of lies then their Democratic counterparts and the President...shocking. These are only the Republicans we know of. The GOP own the majority of Congress thus probably own the largest share of those lies."

"Lying is what the GOP does best, it would appear."

"The far right nuts obviously have a problem with truth and accuracy. But we've known all this for years."

"There is a clear pattern in these data. Only the feckless liberals feel the need to be accurate. Only the weak form their opinions and policies based on evidence and logic."

"If you ever visit the site and read the analysis, the impact of the statement is weighted to determine how egregious any misinformation is, and also whether there is benefit of doubt."
Facts, evidence, logic.

No reader confusion there. No sirree.

Wednesday, December 16, 2015

Rolling Stone jumps on PolitiFact's junk science bandwagon

Rolling Stone has joined The New York Times in promoting PolitiFact's unscientific ratings of politicians' truthfulness.

PolitiFact admits it exercises discretion in choosing which fact check stories to pursue. That results in obvious selection bias. So PolitiFact two-facedly admits its work is journalistic and not social science while encouraging readers to draw conclusions from its data with caution.

What kind of "caution"? The data are approximately worthless.

Image from Rolling Stone magazine

Rolling Stone indulges in the same equivocation games The New York Times foisted on its readership.

If PolitiFact rates a statement "False," Rolling Stone helpfully translates that into "Lied." That's exactly the kind of word game political ads often play. The liberal press does exactly the same thing while donning a deceitful mask of fairness.

They're hypocrites.

And how about the deck?
And Hillary Clinton and Bernie Sanders are the most truthful, according to PolitiFact's analysis

So the stories are selected via discretion (selection bias) and PolitiFact has an impossibly subjective ratings scale, but totaling the ratings of the various candidates is supposed to qualify as "analysis." That's a lie, if we can borrow the way the Times and Rolling Stone employ the term.

And why is the word "bias" completely absent from the story? Is Rolling Stone unaware that journalism is the profession exhibiting the greatest ideological imbalance in the United States?

Maybe these journalists know the data don't mean anything but promote them anyway to get clicks and influence elections.

Maybe these journalists don't have a clue.

Either way, not good.

Tuesday, December 15, 2015

Letters: "Your attempts at proving their bias are as weak as ..."

Recently we decided to establish a page addressing the criticisms this page receives.

An email exchange from Dec. 14-15, 2015 helps lay the foundation for that page, addressing the criticism that we don't prove PolitiFact's liberal bias.

For some odd reason, people skip right past the site description that says we expose PolitiFact's bias, mistakes and flimflammery. So right at the top we're telling visitors that not everything we post is intended to establish that PolitiFact is biased toward the left.

The email came from W. Thompson. That's his real name, so far as we know, though we're not using his first name because of his concerns about receiving right-wing attacks.

We condemn any attempt to identify and personally harass Mr. Thompson in any way over his politics. 

The first exchange between Thompson and me is above-the-fold. Those interested in the rest should follow the jump-break. We included a summary statement at the end.

Dec. 14-15, 2015

I just came across your web site and read the page I came to.

And you accuse PolitiFact of being biased? Your attempts at proving their bias are as weak as Republican attempts to spin their disasters off on the Democrats.

You wrote:

I just came across your web site and read the page I came to.

Thanks for visiting and reading.

And you accuse PolitiFact of being biased?

Yes, we accuse PolitiFact of being biased, and we provide evidence in support of the accusation. It isn't clear from your message whether you read any of that material.

Your attempts at proving their bias are as weak as Republican attempts to spin their disasters off on the Democrats.

As your analogy lacks any specifics at either end, it's hard to know whether your criticism has any meaning to it.
I'm in the process of developing a page for the critics of PolitiFact Bias. We encounter the same lame criticisms over and over again, so I figured it would help to provide visitors a road map to help them find where we've addressed their complaints.
I'd like your permission to print your message and your followup as a post to PolitiFact Bias.
Thanks again.

Monday, December 14, 2015

A lucky "Mostly True" for HRC?

We took note of a PolitiFact fact check fraught with problems earlier this year but did not write about it. Democratic presidential candidate Hillary Rodham Clinton said Americans do better economically under Democrat presidents--with the implication that Democratic policies ought to receive the credit.

Ira Stoll spans the gap in the New York Sun:
Politifact, a Pulitzer-Prize-winning operation of the Tampa Bay Times, did take a look at Mrs. Clinton’s claim that the stock market does better with Democrats in the White House and rated it “mostly true.” That analysis seems a bit charitable to me. While Mrs. Clinton’s claim may not be in the “five Pinocchio” or “liar, liar, pants on fire” categories that get the fact-checkers and their fans worked up in a lather, at best it’s highly misleading.

Two Princeton University economists, Alan Blinder and Mark Watson, examined the matter in a 2013 paper, “Presidents and the Economy: A Forensic Investigation.” They looked at the years 1947 through 2013 and did find that the economy grew faster with Democrats in the White House, though — and here’s the catch — they attributed much of the difference to “good luck” rather than “good policy.” They write, “Democrats would no doubt like to attribute the large D-R growth gap to better macroeconomic policies, but the data do not support such a claim....It seems we must look instead to several variables that are mostly ‘good luck.’”
Stoll covers other attempts to fact check this claim from Clinton, though his story is more about how the media overlook the claim.

Read it all, please.

Sunday, December 13, 2015

Angie Drobnic Holan and the Times do some fact-chucking

Does PolitiFact accuse politicians of lying?

In a Dec. 7, 2015 review of a research paper I published over at Zebra Fact Check, I assured readers PolitiFact consistently says it avoids accusing politicians of lying, its "Pants on Fire" rating and "Lie of the Year" awards notwithstanding.

Days later, on Dec. 11, 2015, The New York Times published an op-ed by PolitiFact editor Angie Drobnic Holan that threatened to contradict my claim.

Though the Times' headline proclaimed "All Politicians Lie," the op-ed never really supports its blaring headline. Holan herself does not use the word "lie" in her op-ed and doesn't even refer to the concept of intentional deception--the most commonly understood meaning of "lie."

So, aside from the title of the op-ed, Holan stays consistent with PolitiFact's stance that it does not accuse politicians of intentionally deceiving people.

That's the good news. The bad news is that Holan's op-ed is misleading.
I’ve been fact-checking since 2007, when The Tampa Bay Times founded PolitiFact as a new way to cover elections. We don’t check absolutely everything a candidate says, but focus on what catches our eye as significant, newsworthy or potentially influential. Our ratings are also not intended to be statistically representative but to show trends over time.
Jeff, the other editor at PolitiFact Bias, highlighted Holan's literary pretzel on Twitter. What kind of trends do non-representative statistics show?

They show non-representative trends, that's what kind. Anyone inclined to say differently should make their case by supporting, with representative sampling, the trend PolitiFact represents with its graphs and numbers. And after that, vet PolitiFact's findings for accuracy and consistency. Without both, the graphs should not be taken seriously.

Even without knowing that journalists tend to lean ideologically left, it's irresponsible and misleading to present claims such as Holan's without real supporting evidence.

Shame on PolitiFact. Shame on Angie Drobnic Holan. Shame on The New York Times.

"Most dishonest" based merely on whether statements are true or false? Isn't that dishonest?

Clarification Dec. 13, 2015: The original caption under our image capture ended with "That's dishonest." We changed that to "Isn't that dishonest?" to change it from an apparent contradiction into a paradoxical charge against The New York Times.

Clarification Dec. 14, 2015:  Fixed some minor grammatical flaws.

Friday, December 11, 2015

Handicapping PolitiFact's 2015 "Lie of the Year" award

Yes, it's that time of year again. That time when "non-partisan" "objective" fact checkers like PolitiFact conduct their ritual editorial and subjective "Lie of the Year" contests.

PolitiFact's list of finalists prompts a few observations.

Every finalist is rated either "False" or "Pants on Fire." The 2013 winner (co-winner when PolitiFact is half-honest about it), President Barack Obama's promise that people could keep their existing health care plans, was never rated lower than "Half True." That case remains an outstanding exception to PolitiFact's usual practice.

Only two of the 11 finalists came from the lips of Democrats. This is the almost inevitable result of the fact PolitiFact has an increasingly difficult time giving "False" and "Pants on Fire" ratings to Democrats. PolitiFact recorded about a dozen in 2015. PunditFact found a whopping 15 for liberal pundits. For comparison, PolitiFact found about 58 "False" and "Pants on Fire" statements from Republicans while PunditFact charged conservatives with 34 falsehoods.

Donald Trump was nominated more times than all Democrats combined.

Ben Carson was nominated more times than all Democrats combined.

Jeff says Trump's claim about thousands of Muslims celebrating in New Jersey is a lock.

It's hard to argue with that pick, but I'm going to invoke PolitiFact's history over the last two years of not putting the eventual winner on the list of candidates. My pick is "Statements by Donald Trump" as a parallel to last year's "Statements about Ebola."

Wednesday, December 9, 2015

PolitiFact defines "bailout" (Updated)

On Dec. 1, 2015, PolitiFact handed down a "Mostly False" rating on Marco Rubio's claim he prevented a $2.5 billion bailout of health insurance companies under ObamaCare.

PolitiFact's ruling hinged partly on its definition of "bailout." PolitiFact said giving the insurance companies money to help keep them in business wasn't really a bailout:
But is it really a bailout?  Several experts told us no, stressing that a bailout usually refers to a program used to save a company after the fact, not a mechanism in place to deal with a problem that everyone assumes could occur.
Are these experts engaging in a "No True Scotsman" fallacy? Or did PolitiFact do it for them by torquing the paraphrase? Perhaps PolitiFact used leading interview questions?

We were curious about PolitiFact's history of defining the term "bailout."

April 21, 2010: PolitiFact

Early on, "bailout" might have meant anything to PolitiFact:
A big challenge in analyzing Reid's statement, or any like it, is figuring out what exactly the word "bailout" means.

"It is almost impossible to pin politicians down on this one because 'bailout' has no clear meaning," said Douglas Elliott, a fellow with the Brookings Institution, a public policy think tank. "It could cover a very wide range of things, some of which involve taxpayer money and some don't, and some of which are traditional central banking or deposit insurer roles and others of which are novel."
PolitiFact decided Dodd-Frank didn't prevent bailouts in this fact check, but the definition of "bailout" was not critical to the ruling.

Oct. 27, 2010: PolitiFact Florida

Any "rescue from financial distress" qualifies as a bailout to us. And last year, the National Association of Home Builders did indeed lobby Congress for — and win — a change in tax law that it argued was a "critical stimulus measure for the U.S. economy" that would provide "an infusion of monetary resources for firms struggling to retain workers and undertake economic activity."
The issue in this fact check from PolitiFact Florida was a chain email attack on parties opposing Florida's Amendment 4. Amendment 4 would have made real estate development far more difficult. It was a measure more likely supported by progressives, so PolitiFact Florida's broad definition tended to help progressives.

Oct. 29, 2010: PolitiFact Virginia

The Democratic Congressional Campaign Committee said a Republican was a hypocrite because his car dealership received bailout money from the "Cash for Clunkers" program. For PolitiFact Virginia, the definition of "bailout" wasn't even at issue. Apparently it's clear that "Cash for Clunkers" was a bailout program. The DCCC gets a "Half True" since PolitiFact Virginia couldn't pin down the amount received in the bailout:
But we can’t place a dollar amount on the benefit to Rigell. The ad wrongly suggests that his dealerships’ $441,000 in rebates were straight profits and there’s reason to believe Rigell’s actual gain from Clunkers was considerably smaller. So we find the claim to be Half True.

Dec. 18, 2011: PolitiFact Texas

Our sense? Taxpayers picked up built-up costs that otherwise could not be covered. It seems reasonable to call that expenditure a bailout.
The broad sense harmed Republican Rick Perry. The narrow sense would have helped him. PolitiFact Texas opted for the broad sense.

January 13, 2014: PunditFact

Conservative pundit Charles Krauthammer foreshadowed the Rubio ruling by calling ObamaCare's risk corridor reimbursements a "huge government bailout." When asked, Krauthammer said he was using the broad definition and provided dictionary support. PunditFact decided his statement deserved treatment according to a narrower definition.
We asked Krauthammer why he called this a bailout and he said he relied on the definition from Merriam-Webster. "The act of saving or rescuing something (such as a business) from money problems," he quoted. "A rescue from financial distress."

Rescue is clearly the operative word. We looked at other definitions. The Palgrave Dictionary of Economics spoke of a rescue from "potential or actual insolvency." Investopedia said a bailout had to prevent "the consequences that arise from a business's downfall."
Using the narrow definition of "bailout" as a principal justification, PunditFact rated Krauthammer's claim "Half True."

June 30, 2014: PolitiFact

If The New York Times says the Ex-Im Bank asked for a bailout in 1987 then it must be true. There's no reason to question the definition used by the Times, right? The definition was not a major issue for the fact check.

Was TARP a "Bailout" Program?

PolitiFact appears to consistently accept that the Troubled Asset Relief Program was a bailout program. But what if we applied the definition the experts suggested for the Rubio fact check?

TARP did not stop at helping out banks that were in trouble when the measure was passed. On the contrary, the measure foresaw banks running into trouble in the near future for the same reasons other banks had run into trouble.

The TARP timeline published by ProPublica makes clear the TARP program bears some key similarities to the ObamaCare features the experts would not call "bailouts."

PolitiFact Plays Games With Definitions

We've found cases where PolitiFact manipulated the definition of "bailout" resulting in unfair harm to conservatives. We found no cases where PolitiFact similarly harmed Democrats.

If anybody can find an example of the latter we missed, we'll be delighted to edit the article to include it. Drop us a line.

Update Dec. 10, 2015
We added the Jan. 13, 2014 PunditFact item, which we intended to include in the original version. Also fixed some stubborn formatting issues.

Tuesday, December 8, 2015

Justin Katz: "PolitiFact RI Bends Reality to Protect the Bureaucracy"

Justin Katz has a solid history of criticizing PolitiFact, particularly the Providence Journal's PolitiFact Rhode Island.

We belatedly recognize a criticism Katz wrote back in August 2015 posted at the Ocean State Current:
A Rhode Island conservative can only be grateful, I suppose, that PolitiFact RI — the long-standing shame of the Providence Journal — managed to get the word “true” somewhere in its rating of the following statement from the RI Center for Freedom & Prosperity:
Rhode Island will become just the second state to mandate the vaccine … and the only state to do so by regulatory fiat, without public debate, and without consideration from the elected representatives of the people.

Katz catches PolitiFact Rhode Island (the Providence Journal) rating a statement "Half True" for referring to a vaccine "mandate" just as the Providence Journal had done.

Who needs consistency when you're a fact checker?

Visit the Ocean State Current to read the whole of Katz's brief-but-thorough discrediting of PolitiFact Rhode Island's fact check.

Monday, December 7, 2015

PolitiFact making up "watch list" fact?

In a recent fact check of Republican presidential candidate Sen. Marco Rubio (R-Fla.), PolitiFact gave Rubio a "Mostly False" rating. PolitiFact appears to have awarded Rubio that rating based on a fact it created out of thin air.

Rubio appeared on the CNN program "State of the Union," addressing the defeat of an amendment that would grant the U.S. Attorney General the power to prevent people suspected of terrorism from buying guns. Rubio said people could have the same name as persons on terrorism watch lists, leading to the possibility that 700,000 Americans might have been affected by the amendment.

PolitiFact mostly ignored Rubio's argument to focus on the number of Americans appearing on the lists, irrespective of name-matching:
Rubio’s count is way off. The number of Americans on the consolidated terrorist watch list is likely in the thousands, not hundreds of thousands.
PolitiFact doesn't address name-matching, abundant in the original context of Rubio's remarks, until very late in the story:
It’s more likely that a person would have the same name as someone who is on the list, and that person could run into problems at the airport if a security agent makes a misidentification, (Martin) Reardon said. This happened to the late Sen. Ted Kennedy, D-Mass., who once wasn’t allowed to fly because he had a similar name to the alias of a suspected terrorist on the no-fly list.

But the problem of same names is less common than it used to be, and there is a reasonably efficient redress process for people to appeal to the government to get their name removed from the terrorist watch list, (Timothy) Edgar noted.

"That shows that the redress process is not a sham, but it also shows that a fairly significant number of people are put on the watchlist by mistake," he said.

Still, it’s nowhere close to 700,000 Americans.

"It's nowhere close to 700,000 Americans"

We find no evidence that PolitiFact estimated the number of Americans whose names might match those on the terrorism watch list. The story simply shows PolitiFact obtaining a professional opinion from Edgar that the name-matching problem isn't as bad as it once was.

What's the estimate of the number of Americans susceptible to the name-matching problem? Isn't that necessary to justify saying 700,000 isn't even close?

If PolitiFact obtained an estimate of the number of Americans potentially affected by the name-matching problem, that estimate belongs in the fact check. And the comparison between that number and the number Rubio used should serve as the basis for the fact check.

Fact checkers who can't figure that out are not worthy of the name "fact checkers."

Did we mention Lauren Carroll wrote this story? That Katie Sanders edited it? And that it was reviewed by PolitiFact's "Star Chamber"?

Wednesday, December 2, 2015

Just wondering

A Dec. 2, 2015 fact check from the national PolitiFact outfit looks at Democratic Party presidential candidate Hillary Clinton's claim that Republican Sen. Ted Cruz has tried to ban contraception five times.

PolitiFact researched the issue and concluded Cruz had never tried to ban contraception, but at most might ban some types of birth control or make access to birth control more difficult in some cases.

The strongest conclusion about Cruz’s views that one could draw from these examples is that he might support a ban on certain types of contraception (but not all) through his support for a personhood amendment. The other examples are even more limited and deal with what employers would be required to pay for, for instance, or whether a major birth control provider should continue to receive federal funding.

The statement contains some element of truth but ignores critical facts that would give a different impression, so we rate it Mostly False.
The "Mostly False" ruling set us to wondering: If PolitiFact can give a "Mostly False" rating when none of the five examples from the Clinton ad featured Cruz banning birth control, what would it take to get a "Half True" rating?

What if Cruz had tried to ban all birth control in one of the five examples? Mostly False? Half True?

What if Cruz had tried to ban all birth control in two of the five examples? Half True? Mostly True?

What if Cruz had tried to ban all birth control in three of the five examples? Half True? Mostly True? Just plain True?

We're just wondering.

Tuesday, December 1, 2015

The liberal heartache over heartburn

Yesterday some Twitterers complained about our article detailing PolitiFact's failure on a fact check comparing antacid sales to political spending.

Matthew Chapman (@fawfulfan), the ringleader, kicked off the festivities:

Chapman stumbled out of the gate. Our article did not say PolitiFact should have counted proton-pump inhibitors like Nexium as antacids. Instead, we noted that PolitiFact was already counting PPIs as antacids. Simple consistency would demand that PolitiFact count both over-the-counter and prescription sales of whatever it counts as antacids. And that's what our article recommended.

Jeff pointed that out to Chapman. Chapman didn't buy it.

Note that I had joined the fray at this point.

Chapman took the position that the $1.96 billion figure PolitiFact used excluded PPIs. Further, he claimed "None of the sources PolitiFact used mention PPIs."

To the record we go.

The problem is, that antacid number was for sales worldwide. In America, the total is about $2 billion. We found that estimate from a couple of sources. The business website reported $1.96 billion in sales of antacid tablets in 2013. That excludes liquid antacids.

We found 2011 figures for both tablets and liquid antacids. Liquid antacid sales add about 5 percent more to the total -- $1.6 billion in tablets compared to about $83 million for liquids. Assuming the same trend held in 2013, we’re looking at a total sales figure around $2 billion.
Contrary to Chapman's impression, the URL PolitiFact used in the second quoted paragraph refers explicitly to PPIs. The chart PolitiFact used to estimate antacid sales in 2011 (using figures from 2013!) included PPIs such as Prilosec OTC.

PolitiFact apparently ignored the part of the document referring to "the $2.5 billion antacid market" and added up the numbers from an incomplete chart to derive its sales estimate.

Are we to believe the $1.96 billion figure for antacid tablets from 2013 excludes PPIs while the $2.5 billion figure for tablets and liquids also from 2013 (perhaps a different time frame) includes them? Adding PPIs boosted the market by less than half a billion? That's not plausible. Prilosec OTC by itself accounted for over $350 million in sales. Prilosec is a PPI.

The key to understanding the variation in the numbers is the two definitions of "antacid." We noted that in our original article (bold emphasis added):
Technically speaking, antacids are chemicals one ingests to neutralize excess stomach acid. But the term also serves as a catch-all for medications used to treat heartburn and acid reflux. Proton-pump inhibitors serve as one example.
Were we making up the second definition? Not at all:

1. preventing, neutralizing, or counteracting acidity, as of the stomach.

2. an antacid agent.

Undaunted by the truth, Chapman insisted that PPIs are not antacids while trying to excuse PolitiFact's error by claiming that Boehner is wrong "either way." For Chapman, it's not supposed to matter how PolitiFact reaches its conclusion so long as the conclusion is right. But Chapman thinks it makes a big difference how Boehner's office justified his statement:
Boehner's office gave world-wide sales numbers for OTC antacids. Would it be reasonable for PolitiFact to use the world-wide number since Boehner did so? Of course not. Boehner's office likely went fishing for a supporting statistic, found a number that appeared to fill the bill and sent that on to PolitiFact.

The article Boehner's office used does describe the term "antacid" in the narrow sense of chemically neutralizing stomach acid. But that's probably just a mistake by the author. The worldwide sales number from that article likely refers to antacids in the broad sense. Online sources do not appear to track the sales statistics in any way other than using the broad sense. And the $10 billion figure from the article jibes with those other figures.

An added problem for Chapman's argument stems from the fact that PolitiFact did not count only OTC tablet/liquid antacids. There's no evidence in PolitiFact's fact check that it recognized two senses of the term "antacid."

While preparing this update we did find a current estimate for antacid sales (broad sense) in the United States for 2015: $14 billion.
No doctor would use the broad sense of "antacid"? Doctors describing the actions of drugs used for treating stomach acid problems would be more likely to use "antacid" in its narrow sense than others. But there's no reason why a doctor can't just as easily use "antacid" in its broader sense, as context demands.
We invited Chapman to offer his criticisms in the comments section.

He declined.

Chapman and his Twitter cohorts were wrong on numerous points, beyond what we document here. They demonstrated no flaw with our article but showed admirable flexibility in ignoring PolitiFact's obvious mistakes.

Additional reading.