Showing posts with label 2015. Show all posts

Friday, February 5, 2016

Two Clinton fibs, two "Half True" ratings from PolitiFact

PolitiFact is too good to us.

Seriously. We don't have any formal agreement with PolitiFact binding them to produce horrible fact-checking in order to make sure we have stuff to write. They just do it anyway.

Case in point, co-editor Jeff D sent me an email about this item earlier today.

During the Democratic presidential debate on Feb. 4, 2016, Democratic presidential candidate Hillary Rodham Clinton said she waited until President Obama's Trans-Pacific Partnership agreement was finalized before passing judgment on it.

Try to wrap your head around PolitiFact's reasoning:
Did Clinton really withhold her support until the terms of the proposal had been finalized?

[...]

Speaking in Australia in 2012, Clinton hailed the deal as "setting the gold standard."

"This TPP sets the gold standard in trade agreements to open free, transparent, fair trade, the kind of environment that has the rule of law and a level playing field," Clinton said. "And when negotiated, this agreement will cover 40 percent of the world's total trade and build in strong protections for workers and the environment."
First, Clinton condemned the deal after it was finalized. It doesn't even make sense to ask whether Clinton withheld her support for the deal until after it was finalized. She never supported it after it was finalized. PolitiFact's headline makes the same error:


The statement supposedly from Clinton in PolitiFact's headline is flatly false. Clinton endorsed the deal before negotiations were finalized. The "Half True" doesn't belong within 10 miles of PolitiFact's headline.

If Clinton said she did not pass judgment on the deal before it was finalized, that is likewise false.

The only way Clinton can escape with a shred of truth on this one is if she was saying she did not condemn the deal until after it was finalized. The problem? That statement does nothing to address the concern in the question posed by debate moderator Chuck Todd (transcript via The New York Times, bold emphasis added):
TODD: Secretary Clinton, let me turn to the issue of trade. In the ’90s you supported NAFTA. But you opposed it when you ran for the president in 2008. As secretary of state, you supported TPP, and then — which, of course, is that trade agreement with a lot of Asian countries, but you now oppose it as you make your second bid for president.

If elected, should Democrats expect that once you’re in office you will then become supportive of these trade agreements again?

CLINTON: You know, Chuck, I’ve only had responsibility for voting for trade agreements as a senator. And I voted against a multinational trade agreement when I was senator, the CAFTA agreement, because I did not believe it was in the best interests of the workers of America, of our incomes, and I opposed it.

I did hope that the TPP, negotiated by this administration, would put to rest a lot of the concerns that many people have expressed about trade agreements. And I said that I was holding out that hope that it would be the kind of trade agreement that I was looking for.

I waited until it had actually been negotiated because I did want to give the benefit of the doubt to the administration. Once I saw what the outcome was, I opposed it.
It looks to us (see particularly the second paragraph of Clinton's response) like Clinton tried to downplay the support she offered the deal when she was Secretary of State. She seems to say that she didn't really support the deal back then. Apparently she was just being a good soldier for President Obama, leaving her free to oppose the deal after she left the administration.

So ... no flip-flop because she was just doing Obama's bidding? And endorsing the deal before it was finalized is simply giving the administration--of which she was a part!--the benefit of the doubt?

Clinton's answer doesn't make much sense to us. She offers a thin excuse for her flip-flop.

PolitiFact's fact check doesn't make sense, either. The claim from PolitiFact's headline is false but receives a "Half True" rating. And it doesn't correctly capture what Clinton was saying in the first place.

What Clinton actually said might have been half true. She did not condemn the deal until after it was finalized. Though that claim carries a healthy dollop of misdirection downplaying her apparently insincere early endorsement of the TPP.

And did Clinton condemn the deal privately within the administration? Did President Obama hear from Clinton what it would take for her to support the deal? Ask the question, debate moderators.

The Second Fib

Wait, didn't we say something about a second fib rated "Half True"?

Yes. Yes, we did.

The fact check we discuss above--containing the first fib--was from C. Eugene Emery, Jr., recently added to the staff at PolitiFact National after leading fact-check efforts for PolitiFact's Rhode Island franchise.

It looks like Emery relied on an earlier PolitiFact fact check for his analysis. That article contains the second fib, and probably helped nudge Emery toward his interpretation of Clinton's 2016 debate comments.

In that Oct. 13, 2015 fact check, it looks like Clinton did say she had reserved judgment on the TPP while Secretary of State. But with the contradictory evidence available and included in its story, PolitiFact gifted Clinton with a "Half True" rating on her claim that while serving as Secretary of State she merely "hoped" the TPP was the type of deal she could support.

Try to figure out which half was true from PolitiFact's conclusion:
Clinton said when she was secretary of state, she was reserving judgment but "hoped (the Trans-Pacific Partnership) would be the gold standard."

She’s twisting her 2012 remarks a bit. Clinton said, "This TPP sets the gold standard in trade agreements," which is a more confident claim than if she had said she "hoped" it would meet that standard. This is in contrast to more recent comments where Clinton said she had concerns about the deal and that she ultimately opposes it.

The statement is distorting her previous comments. We rate it Half True.
Hooray for objective standards in fact-checking?

Tuesday, February 2, 2016

Left Jab: PolitiFact National vs. PF New Hampshire on per capita health care spending

I was hunting for some liberal criticism of PolitiFact involving Democratic presidential candidate Bernie Sanders, specifically one concerning polling numbers. I found a pretty good item aside from the one I sought.

Reddit commenter "wittenbunk" offered the following observation:
Yesterday Politifact published a rating of Bernie's often repeated claim that "We spend almost twice as much per capita on health care as do the people of any other country". They rated the claim false.

The issue is that on April 30th Politifact rated a nearly identical claim by Ben Carson. Carson was quoted as "We spent twice as much per capita for health care in this country as the next closest nation". Politifact rated the claim Mostly False.

Despite the fact that Carson's wording allowed for much less interpretation, Politifact gave his quote a more truthful rating.
Wittenbunk went on to say that his example qualifies as a rare clear example of media bias. The post was solid up through that point. PolitiFact clearly used inconsistent standards in achieving the two different ratings for Sanders and Carson. But single cases of inconsistency make poor examples of media bias.

That's why we've always said the appropriate way to look for media bias at PolitiFact is to look for trends in unfair harm. Sen. Sanders was hit with unfair harm in this case. I've documented a separate case of unfair harm to Sanders at Zebra Fact Check. There may well be others.

This case does feature some special circumstances. It's a rating from a state operation, PolitiFact New Hampshire, conflicting with the rating from PolitiFact National. PolitiFact National published the rating that's probably harder to justify. We'll note that the story was written by intern Will Cabaniss, but since PolitiFact's "star chamber" of editors decides the rating we're not inclined to blame Cabaniss.

Note to liberal critics of PolitiFact: Open your eyes. This type of inconsistency is normal at PolitiFact.

Monday, December 28, 2015

PunditFact's editorial page

Oops--are we being redundant? Sorry! Here's a portion of PunditFact's main page from Dec. 28, 2015.

Partial screen capture from PolitiFact.com/PunditFact/ Dec. 28, 2015

That, ladies and gentlemen, is a pictorial editorial.

It's not an unaltered photograph of Donald Trump. It's not a photo at all. It is an artist's rendering of Donald Trump, created to communicate an editorial message to PunditFact's readers. Note how Trump's fingers are crossed as he speaks, a traditional gesture of those who know they are speaking falsely.

Material like this makes PunditFact itself a pundit of sorts.

We've long noted the fact that PolitiFact and its various franchises blur the traditional line between straight news reporting--what some might expect from journalism billed as "fact-checking"--and editorializing. That's why we call PolitiFact "the Fox News of fact-checking" and PolitiFact staffers "liberal bloggers."

Their journalism is not objective. When PolitiFact creator Bill Adair calls fact checking a "new form of journalism" perhaps he has in mind that deliberate blurring of the lines.

Or not. Either way, we don't appreciate it.

Saturday, December 26, 2015

PolitiMath from PolitiFact New Hampshire

What's new with PolitiMath?

PolitiFact New Hampshire, lately the Concord Monitor's partnership with PolitiFact, gives us a double dose of PolitiMath with its July 2, 2015 fact check of New Hampshire's chief executive, Governor Maggie Hassan (D).

Hassan was the only Democrat to receive any kind of false rating ("False" or "Pants on Fire") from PolitiFact New Hampshire in 2015. PolitiFact based its ruling on a numerical error by Hassan and added another element of interest for us by characterizing Hassan's error in terms of a fraction.

What type of numerical error earns a "False" from PolitiFact New Hampshire?

PolitiFact summarizes the numbers:
In her state of the state address, Hassan said that "6,000 people have already accessed services for substance misuse" through the state’s Medicaid program.

There is no question that substance abuse in the state is a real and pressing problem, and the statistics show that thousands have sought help as a result of the state’s expanded Medicaid program. But Hassan offered (and later corrected) a number that simply wasn’t accurate. The real total is closer to 2,000 -- about one-third the amount she cited.

We rate her claim False.
Describing Hassan's mistake as a percentage error using PolitiFact's figures, we find Hassan exaggerated her figure by about 230 percent. PolitiFact gave Hassan no credit for her underlying point.

In our PolitiMath series we found the closest match for this case from PolitiFact Oregon. PolitiFact Oregon said conservative columnist George Will exaggerated a figure--by as much as 225 percent by our calculations. The figure PolitiFact Oregon found was uncertain, however, so Will may have exaggerated considerably less using the range of numbers PolitiFact Oregon provided.

In any case, PolitiFact Oregon ruled Will's claim "False." PolitiFact Oregon gave Will no credit for his underlying argument, just as PolitiFact New Hampshire did with Gov. Hassan.

Percent Error and Partisanship

One of our research projects combs PolitiFact's fact checks for a common error journalists make. We reasoned that journalists would prove less likely to make such careless errors when covering the party they prefer. Our study produced only a small set of examples, but the percentage of errors was high and favored Democrats.

PolitiFact New Hampshire's fact check of Gov. Hassan merits scrutiny for this error, giving us the second mathematical element of note.

PolitiFact could have expressed Hassan's mistake using a standard percentage error calculation like the one we used. We calculated a 230 percent error. But PolitiFact New Hampshire did not use the correct figure (1,800) as the baseline for calculating error. Instead, the fact checkers used the higher, incorrect figure (6,000) as the baseline for comparison: "about one-third the amount she cited."

Using the number "one-third" frames Hassan's error nearer the low end. "One-third" doesn't sound so bad, numerically. Readers with slightly more sophistication may reason that the "one-third" figure means Hassan was off by two-thirds.

Sometimes using the wrong baseline makes the error look bigger and sometimes it makes the error look smaller. In this case the wrong baseline frames Hassan's mistake as a smaller error. The Democrat Hassan gains the benefit of PolitiFact's framing.
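For readers who want to check the arithmetic, here is a quick sketch in Python, using the figures from PolitiFact's fact check (6,000 claimed, 1,800 correct), contrasting the standard percent-error calculation with PolitiFact's "one-third" framing:

```python
# Hassan's error framed two ways, using PolitiFact's own figures.
claimed = 6000   # the figure Hassan cited
actual = 1800    # the corrected figure

# Standard percent error: the difference measured against the correct baseline.
percent_error = (claimed - actual) / actual * 100
print(f"Standard percent error: {percent_error:.0f}%")  # 233%

# PolitiFact's framing: the correct figure as a fraction of the claimed one.
fraction_of_claim = actual / claimed
print(f"Correct figure as a fraction of the claim: {fraction_of_claim:.2f}")  # 0.30
```

The same two numbers yield roughly 233 percent (we rounded to "about 230 percent" above) under the standard calculation, or "about one-third" under PolitiFact's baseline choice.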

Sunday, December 20, 2015

PolitiFact's Unethical Internet Fakery

What's Fake on the Internet: Unbiased fact checkers

We stumbled across a farewell post for Caitlin Dewey's rumor-debunking "What was Fake” column in the Washington Post. In the column, Dewey notes a change in the Internet hoax business, namely that rumors are often spread intentionally via professional satire sites seeking click traffic, and is calling it quits "because it’s started to feel a little pointless."

While Dewey's column focused more on viral Internet rumors than politics specifically, we were struck by the parallel between her observations and our own regarding PolitiFact. She laments that the bogus stories are so easily debunked that the people who spread the misinformation aren't likely to be swayed by objective evidence. She then highlights why hoax websites have proliferated:
Since early 2014, a series of Internet entrepreneurs have realized that not much drives traffic as effectively as stories that vindicate and/or inflame the biases of their readers. Where many once wrote celebrity death hoaxes or “satires,” they now run entire, successful websites that do nothing but troll convenient minorities or exploit gross stereotypes.
Consider that trend when you see this chart that ran with PolitiFact editor Angie Holan's NYT opinion article:


Image via NYT screengrab



The chart, complete with bar graphs and percentages, frames the content for readers with a form of scientific legitimacy. But discerning anything from the aggregate total of their ratings amounts to pure hokum. The chart doesn't provide solid evidence of anything (with the exception of PolitiFact's selection bias), but it surely serves to "vindicate and/or inflame the biases of their readers."

We've gone into detail explaining why PolitiFact's charts and report cards amount to junk science before, but simply put, there are multiple problems:

1) PolitiFact's own definitions of their ratings are largely subjective, and their standards are applied inconsistently between editors, reporters, and individual franchises. This problem is evidenced by nearly identical claims regarding a 77-cent gender wage gap being rated everywhere from True to Mostly False and everything in between.

2) Concluding anything from a summary of PolitiFact's ratings assumes each individual fact check was performed competently and without error. Further, it assumes PolitiFact only rates claims where actual facts are in dispute as opposed to opinions, predictions, or hyperbolic statements.

3) PolitiFact's selection bias extends beyond what specific claim to rate and into what specific person to attribute a claim to (something we've referred to as attribution bias). For example, an official IG report included an anecdote that the government was paying $16 for breakfast muffins. Bill O'Reilly, ABC and NPR all reported the figure in the report. PolitiFact later gave Bill O'Reilly a Mostly False rating for repeating the official figure. This counts as a Mostly False on his report card, while ABC, NPR, and even the IG who originally made the claim are all spared a mark on their "truthiness" charts.

4) The most obvious problem is selection bias. Even if we assume PolitiFact performed their fact checks competently, applied their standards consistently, and attributed their ratings correctly, the charts and report cards still aren't evidence of anyone's honesty. Even PolitiFact admits this, contrary to their constant promotion of their report cards.

To illustrate PolitiFact's flaw, consider investigating a hundred claims from President Truman to determine their veracity. Suppose you find Truman made 20 false claims, and you then publish only those 20 false claims on a chart. Is this a scientific evaluation of Harry Truman's honesty? Keep in mind you get to select which claims to investigate and publish. Ultimately such an exercise would say more about you than about Harry Truman. The defense that PolitiFact checks both sides falls flat (PolitiFact gets to pick the True claims too, and in any event that defense is an appeal to the middle ground).
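The Truman thought experiment is easy to simulate. The sketch below uses made-up numbers (a hypothetical speaker with an 80 percent truth rate, and a fact checker who publishes every false claim but only one in eight true ones) to show how story selection alone determines what a "report card" looks like:

```python
# Hypothetical pool: a speaker makes 100 claims, 80 true and 20 false.
claims = ["true"] * 80 + ["false"] * 20

# A fact checker free to choose what to check: publish every false claim,
# but only one in eight of the true ones.
published = [c for c in claims if c == "false"]
published += [c for i, c in enumerate(claims) if c == "true" and i % 8 == 0]

actual_rate = claims.count("true") / len(claims)
report_card_rate = published.count("true") / len(published)

print(f"Speaker's actual truth rate: {actual_rate:.0%}")                   # 80%
print(f"Truth rate on the published report card: {report_card_rate:.0%}")  # 33%
```

A speaker who tells the truth 80 percent of the time comes out of this exercise looking two-thirds false, and the gap is entirely a product of what the checker chose to publish.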

We've documented how PolitiFact espouses contradictory positions on how to use their data. PolitiFact warns readers they're "not social scientists," but then engages in a near constant barrage promoting their "report cards," claiming they're useful for showing trends.

Whenever PolitiFact promotes charts like the one posted in the NYT article, the overwhelming response on Facebook and Twitter is to send the chart viral with unfounded claims that the conservative bogeymen are allergic to the truth. How can PolitiFact ethically promote such assertions when they know their report cards offer no objective data about the people they rate?

Instead of telling readers to discount any conclusions from their misleading charts, PolitiFact actively encourages and promotes these unscientific charts. That's not how honest, professional journalists seeking truth behave. On the other hand, it's behavior we might expect from partisan actors trolling for web traffic.

So why are people so intent on spreading PolitiFact's bogus charts based on bad information? Perhaps Dewey uncovered the answer:
[I]nstitutional distrust is so high right now, and cognitive bias so strong always, that the people who fall for hoax news stories are frequently only interested in consuming information that conforms with their views — even when it’s demonstrably fake.
No worries, though. PolitiFact editor Angie Holan assures us "there's not a lot of reader confusion" about how to use their ratings.

We're bummed to see Dewey close down her weekly column, as she arguably did a more sincere job of spreading the truth to readers than PolitiFact ever has. But we're glad she pointed out the reason hoax websites are so popular. We suggest the same motivation is behind PolitiFact publishing their report cards. Is there much difference between hoax sites spreading bogus rumors and PolitiFact trolling for clicks by appealing to liberal confirmation bias with its sham charts and editorials masquerading as fact checks?

Far from being an actual journalistic endeavor, PolitiFact is little more than an agenda-driven purveyor of clickbait.

PolitiFact, Paul Ryan and cherry-picking

Does this add up?

Speaker of the House Paul Ryan (R-Wis.) said during a television appearance that Obamacare was making families pay double-digit premium increases.

PolitiFact gave Ryan's statement its "fact check" treatment, which we're inclined to call liberal blogging.

Step 1:
Ryan is suggesting ... increases in the "double digits." We decided to rate that claim on the Truth-O-Meter.
Step 2:
According to HHS data, 19 out of the 37 states in the federal exchange saw an average rate increase in the double digits. At the low end, rates in Missouri increased by 10.4 percent while Oklahoma saw the biggest hike at 35.7 percent.
Step 3:
Ryan has a point that some plans have seen increases of 10 percent or more with insurance purchased on healthcare.gov. However, Ryan is cherry-picking the high end of rate changes.
Got it? PolitiFact says Oklahoma is experiencing an average Obamacare exchange rate hike of 35.7 percent. And saying "double-digit" rate increases is cherry-picking from the high end of rate changes.

Given that a 10 percent rate hike is "double digits" and the top (average) exchange rate hike is over 35 percent, what kind of sense does it make to say Ryan is cherry-picking the high end of rate changes?

Silly liberal bloggers.

Ryan used a normal ambiguity when he spoke. "Families" does not mean "All families" as PolitiFact claimed Ryan was suggesting. If a substantial number of families are getting hit with double-digit rate increases then what Ryan said was accurate. And by "accurate" I don't mean "True" in the sense of a "Truth-O-Meter" rating. I mean "accurate" in the sense that PolitiFact uses the term when it defines "True" and "Mostly True" for purposes of its cheesy "Truth-O-Meter":

PolitiFact's definitions are themselves "Half True." You can tell that's the case when Ryan's accurate statement receives its "Half True" rating while Bernie Sanders' inaccurate statement receives a "Mostly True" rating.

PolitiFact fact-checking is a game liberal bloggers play.


Afters

Other than Ryan's office providing supporting URLs dealing with the individual market, what would lead PolitiFact to simply ignore the much larger group plan market in its fact check? Do group plans not count? Does the lack of an employer mandate mean Obamacare does not affect group insurance despite the new regulations it imposes on the group market?

Rate increases? What rate increases?
And according to an Arthur J. Gallagher & Co. survey of smaller employers, most of which have less than 1,000 employees, released Friday, 44% reported premium rate hikes of 6% or more in 2014. Twenty-three percent saw rates in the double digits, the survey showed.
We'll say it again: Silly liberal bloggers.

After Afters

Before I forget the caboose on this PolitiFact trainwreck ...
Ryan missteps by saying the law alone is "making" the premiums increase. Rather, experts say, hikes are more likely the result of insurers underestimating how sick enrollees would be.
Silly cherry-picking liberal bloggers. Obamacare is the reason insurers don't know how sick their enrollees will be. Guaranteed issue. Remember? No, I guess you don't remember.


Correction Dec. 20, 2015: Changed from "Ind." to "Wis." the state which Ryan represents. Hat tip to commenter Pagan Raccoon for pointing out the error.
Correction Dec. 21, 2015: When repeating PolitiFact's 35.7 figure we typo-inflated it to 37.7. That's now fixed. Hat tip to "John Smith" for using the comments section to point out the error.

Saturday, December 19, 2015

Zebra Fact Check: "PolitiFact vs. Rubio on defense spending"

At my fact check site Zebra Fact Check, I took note of PolitiFact Florida's fact check of Marco Rubio from earlier this year:
Rubio said that the United States "is not building the aircraft, the long-range bombers, the additional aircraft carriers, the nuclear submarines."

The military has programs in place to build the types of equipment Rubio mentioned, including the largest aircraft procurement ever: the F-35 Joint Strike Fighter. It will take many years and billions of dollars to complete the procurement, but Rubio’s statement could mislead voters into thinking that the United States has closed up shop in the area of military equipment and isn’t building anything, which isn’t the case.

We rate this claim False.
In PolitiFact Florida's summary Rubio's statement gets shortened--PolitiFact quotes the full sentence earlier in the story (bold emphasis added): "We are the only nation that is not building the aircraft, the long-range bombers, the additional aircraft carriers, the nuclear submarines we need for our nation's defense."

The Zebra Fact Check article points out how PolitiFact Florida carelessly overlooked an alternative understanding of Rubio's words that is consistent with his speeches:
We find it completely obvious that Rubio was saying the pace of and planning for defense acquisitions falls below what is needed for adequate defense.

What evidence supports our position? Rubio’s own past statements, for starters.
Rubio's speech from September 2014 goes over each of the military acquisition challenges he mentions in the statement PolitiFact gave its "fact check" treatment. How did PolitiFact Florida miss that information? Probably by not knowing the issue and by not bothering to look.

In other words, by allowing bias to influence the outcome of the fact check.

That's the meaning of these accumulated examples of flawed fact checks we highlight and analyze here at PolitiFact Bias. PolitiFact makes plenty of mistakes, and the tendency of those mistakes to more often unfairly harm conservatives and Republicans makes up one of our main evidences of PolitiFact's liberal bias.

No one example is intended to prove PolitiFact's bias. The bias comes through when considering its whole body of work, including the rarer cases where PolitiFact unfairly harms a liberal with its poor fact-checking.

We're finding it harder and harder to find serious criticism of PolitiFact coming from the political left, for what that's worth.

Friday, December 18, 2015

PolitiFact and opt-in polling

On Dec. 9, 2015, PolitiFact doled out a "Mostly False" to Donald Trump.

Trump had cited an opt-in poll that found about 25 percent of American* Muslims think it may be okay to commit violent acts against Americans in the name of jihad. PolitiFact found it improper to report the results of the poll:
Trump is referring to a poll conducted by the Center for Security Policy. However, polling experts raise numerous questions about the validity of the poll’s results, including its "opt-in" methodology and the dubiously large percentages of respondents who said they were unaware of ISIS or al-Qaida. Moreover, an official with the Center for Security Policy cautioned against generalizing the poll results to the entire Muslim-American community.
So, the opt-in methodology makes poll results suspect?

Is PolitiFact consistent on that point?
PolitiFact and PunditFact will soon announce our Lie of the Year -- the most significant falsehood of 2015, as chosen by our editors and reporters. We're also inviting PolitiFact readers to vote for the Readers' Choice award.

Here are our 11 finalists and a link to our survey so you can vote for your favorite. We accept write-ins. We also have a mobile-friendly version of this survey.
Does PolitiFact claim or imply that its "Readers' Choice" award accurately reflects the opinion of its readers?

Hmm.


*Though the poll surveyed American Muslims, Trump did not mention that aspect of the poll when he made his claim. PolitiFact apparently found the words Trump chose unimportant when it fact checked his claim.

Monday, December 14, 2015

A lucky "Mostly True" for HRC?

We took note of a PolitiFact fact check fraught with problems earlier this year but did not write about it. Democratic presidential candidate Hillary Rodham Clinton said Americans do better economically under Democratic presidents--with the implication that Democratic policies ought to receive the credit.

Ira Stoll spans the gap in the New York Sun:
Politifact, a Pulitzer-Prize-winning operation of the Tampa Bay Times, did take a look at Mrs. Clinton’s claim that the stock market does better with Democrats in the White House and rated it “mostly true.” That analysis seems a bit charitable to me. While Mrs. Clinton’s claim may not be in the “five Pinocchio” or “liar, liar, pants on fire” categories that get the fact-checkers and their fans worked up in a lather, at best it’s highly misleading.

Two Princeton University economists, Alan Blinder and Mark Watson, examined the matter in a 2013 paper, “Presidents and the Economy: A Forensic Investigation.” They looked at the years 1947 through 2013 and did find that the economy grew faster with Democrats in the White House, though — and here’s the catch — they attributed much of the difference to “good luck” rather than “good policy.” They write, “Democrats would no doubt like to attribute the large D-R growth gap to better macroeconomic policies, but the data do not support such a claim....It seems we must look instead to several variables that are mostly ‘good luck.’”
Stoll covers other attempts to fact check this claim from Clinton, though his story is more about how the media overlook the claim.

Read it all, please.

Tuesday, December 8, 2015

Justin Katz: "PolitiFact RI Bends Reality to Protect the Bureaucracy"

Justin Katz has a solid history of criticizing PolitiFact, particularly the Providence Journal's PolitiFact Rhode Island.

We belatedly recognize a criticism Katz wrote back in August 2015 posted at the Ocean State Current:
A Rhode Island conservative can only be grateful, I suppose, that PolitiFact RI — the long-standing shame of the Providence Journal — managed to get the word “true” somewhere in its rating of the following statement from the RI Center for Freedom & Prosperity:
Rhode Island will become just the second state to mandate the vaccine … and the only state to do so by regulatory fiat, without public debate, and without consideration from the elected representatives of the people.

Katz catches PolitiFact Rhode Island (the Providence Journal) rating a statement "Half True" for referring to a vaccine "mandate" just as the Providence Journal had done.

Who needs consistency when you're a fact checker?

Visit the Ocean State Current to read the whole of Katz's brief-but-thorough discrediting of PolitiFact Rhode Island's fact check.



Monday, December 7, 2015

PolitiFact making up "watch list" fact?

In a recent fact check of Republican presidential candidate Sen. Marco Rubio (R-Fla.), PolitiFact gave Rubio a "Mostly False" rating. PolitiFact appears to have awarded Rubio that rating based on a fact it created out of thin air.

Rubio appeared on the CNN program "State of the Union," addressing the defeat of an amendment that would grant the U.S. Attorney General the power to prevent people suspected of terrorism from buying guns. Rubio said people could have the same name as persons on terrorism watch lists, leading to the possibility that 700,000 Americans might have been affected by the amendment.

PolitiFact mostly ignored Rubio's argument to focus on the number of Americans appearing on the lists, irrespective of name-matching:
Rubio’s count is way off. The number of Americans on the consolidated terrorist watch list is likely in the thousands, not hundreds of thousands.
PolitiFact doesn't address name-matching, abundant in the original context of Rubio's remarks, until very late in the story:
It’s more likely that a person would have the same name as someone who is on the list, and that person could run into problems at the airport if a security agent makes a misidentification, (Martin) Reardon said. This happened to the late Sen. Ted Kennedy, D-Mass., who once wasn’t allowed to fly because he had a similar name to the alias of a suspected terrorist on the no-fly list.

But the problem of same names is less common than it used to be, and there is a reasonably efficient redress process for people to appeal to the government to get their name removed from the terrorist watch list, (Timothy) Edgar noted.

"That shows that the redress process is not a sham, but it also shows that a fairly significant number of people are put on the watchlist by mistake," he said.

Still, it’s nowhere close to 700,000 Americans.

"It's nowhere close to 700,000 Americans"

We find no evidence that PolitiFact estimated the number of Americans whose names might match those on the terrorism watch list. The story simply shows PolitiFact obtaining a professional opinion from Edgar that the name-matching problem isn't as bad as it once was.

What's the estimate of the number of Americans susceptible to the name-matching problem? Isn't that necessary to justify saying 700,000 isn't even close?

If PolitiFact obtained an estimate of the number of Americans potentially affected by the name-matching problem, that estimate belongs in the fact check. And the comparison between that number and the number Rubio used should serve as the basis for the fact check.

Fact checkers who can't figure that out are not worthy of the name "fact checkers."

Did we mention that Lauren Carroll wrote this story? That Katie Sanders edited it? And that it was reviewed by PolitiFact's "Star Chamber"?

Wednesday, December 2, 2015

Just wondering

A Dec. 2, 2015, fact check from the national PolitiFact outfit looks at Democratic Party presidential candidate Hillary Clinton's claim that Republican Sen. Ted Cruz has tried to ban contraception five times.

PolitiFact researched the issue and concluded Cruz had never tried to ban contraception, but at most might ban some types of birth control or make it more difficult in some cases to access birth control.

PolitiFact:
The strongest conclusion about Cruz’s views that one could draw from these examples is that he might support a ban on certain types of contraception (but not all) through his support for a personhood amendment. The other examples are even more limited and deal with what employers would be required to pay for, for instance, or whether a major birth control provider should continue to receive federal funding.

The statement contains some element of truth but ignores critical facts that would give a different impression, so we rate it Mostly False.
The "Mostly False" ruling set us to wondering: If PolitiFact can give a "Mostly False" rating when none of the five examples from the Clinton ad featured Cruz banning birth control, what would it take to get a "Half True" rating?

What if Cruz had tried to ban all birth control in one of the five examples? Mostly False? Half True?

What if Cruz had tried to ban all birth control in two of the five examples? Half True? Mostly True?

What if Cruz had tried to ban all birth control in three of the five examples? Half True? Mostly True? Just plain True?

We're just wondering.

Tuesday, November 24, 2015

Hoystory: 'The Hacks at PolitiFact'

We're delighted to point readers toward a new(ish) critique of PolitiFact by Matthew Hoy, one of the critics who saw PolitiFact for what it was very early in the game.

Hoy takes a look at PolitiFact Texas' gnat-straining rating of Ted Cruz's claim that the Democratic Party is shrinking. Then Hoy contrasts PolitiFact's treatment of Cruz with the national PolitiFact's kid-glove treatment of President Obama's claim of having contained ISIS (ISIL).

As we said, it did not take Hoy long to see PolitiFact's true face:
They are not fact-checkers, they’re political operatives with bylines.
Please visit Hoystory and read it all.

Lauren Carroll cannot contain herself

PolitiFact/PunditFact writer Lauren Carroll couldn't resist pushing back against criticism she received on her story looking at the containment of ISIS.

Carroll suggested on Twitter that Breitbart.com's John Nolte had not read her fact check. The evidence?
1) I am the only byline on the story 2) I fact-checked Ben Rhodes, not Obama. @NolteNC— Lauren Carroll (@LaurenFCarroll) November 16, 2015
The fact is that PunditFact gave Carroll's story more than one presentation.

In one of the presentations, a version of Carroll's story was combined with another story from a Sunday morning news show. That second version of the story has Linda Qiu listed on the byline. So Carroll's claim she's the only one on the byline rates a "Half True" on the Hack-O-Meter. Combined with her whinge about fact-checking Obama proxy (deputy national security advisor) Ben Rhodes instead of President Obama, Carroll provides an astonishingly thin defense of her work.

The critiques from Breitbart.com and the Washington Examiner both made the point that Obama was answering a question about ISIS' strength, not the range of its geographical control. Carroll completely accepted Rhodes' spin and ignored the point of the question Obama was asked.

Where's Carroll's explanation of her central error? It's certainly not in her clumsy jabs at John Nolte.

Friday, November 20, 2015

PolitiFact gives Bernie Sanders "Mostly True" rating for false statement

When Sen. Bernie Sanders (I-Vt.) said more than half of America's working blacks receive less than $15 per hour, PolitiFact investigated.

It turns out less than half of America's working blacks make less than $15 per hour:
(H)alf of African-American workers earned less than $15.60. So Sanders was close on this but exaggerated slightly. His claim is off by a little more than 4 percent.
PolitiFact found that more than half of African-American workers earned more than $15 per hour. That makes Sanders' claim false. PolitiFact said Sanders "exaggerated slightly." PolitiFact said he was "off by a little more than 4 percent." PolitiFact said he was "not far off."

Euphemisms aside, Sanders was wrong. But PolitiFact gave Sanders a "Mostly True" rating for his claim.

Here's a reminder of PolitiFact's definition for its "Mostly True" rating:
Mostly True – The statement is accurate but needs clarification or additional information.
Sanders' statement wasn't accurate. So how does it even begin to qualify for the "Mostly True" rating the way PolitiFact defines it?

The answer, dear reader, is that PolitiFact's definitions don't really mean anything. PolitiFact's "Star Chamber" panel of editors gives the rating they see fit to give. If the definitions conflict with that ruling then the definitions bend to the will of the editors.

Subjective-like.



Update 22:25 11/23/15: Added link to PF article in 4th graph - Jeff

Tuesday, November 17, 2015

ISIS "contained"?

When President Obama called ISIS ("ISIL") "contained" in a televised interview on Nov. 12, 2015, other politicians, including at least one Democrat, gave him some grief over the statement.

Mainstream fact checker PunditFact came to the president's defense. PunditFact said Obama was just talking about territorial expansion, so what he said was correct.

Conservative media objected.

John Nolte from Breitbart.com:
PolitiFact’s transparent sleight-of-hand comes from basing its “True” rating — not on the question Obama is asked — but how the President chose to answer it.

Stephanopoulos asks, “But ISIS is gaining strength aren’t they?”
T. Becket Adams from the Washington Examiner:
PunditFact has rated the Obama administration's claim that the Islamic State has been "contained" as "true," even after a recent series of ISIS-sponsored events around the world have claimed the lives of hundreds of civilians.

For the fact-checker, the White House doesn't believe ISIS is no longer a global threat, as fatal attacks last week in Beirut and Paris would show. The president and his team merely believe that the insurgent terrorist group controls a smaller portion of the Middle East today than it did a few months ago.
We think PunditFact has a bit of a point when it claims the president's remarks are taken out of context. But as Nolte and Adams point out, the specific context of the Obama interview was the strength of ISIS, not its territorial expansion.

If the president was saying that containing ISIL's geographic control equates with containing its strength, then PunditFact ends up taking the president out of context to justify claiming the president was taken out of context.

There's something not quite right about that.


Clarification Dec. 10, 2015: Changed "wasn't" to "was" in the next-to-last paragraph

Thursday, November 12, 2015

PolitiFact turns liberal blogger into Obamacare expert (Updated)

"We go to original sources to verify the claims. We look for original government reports rather than news stories. We interview impartial experts."
--About PolitiFact

PolitiFact claims it interviews impartial experts. But is that the whole truth?

What if PolitiFact also interviewed partial experts, such as figures with a history of donating to one party or the other?

Or worse, what if PolitiFact arrogated to itself the privilege of elevating a liberal blogger to the status of trusted expert?

Would any conflict with PolitiFact's statement of its fact-checking procedure result?

Let's talk about Charles Gaba.

Charles Gaba


We ran across Gaba's blog quite some time ago. Gaba wrote blog posts claiming to represent the facts on the Affordable Care Act, commonly known as ObamaCare. We found Gaba's approach to his subject matter nakedly partisan, starting with the ObamaCare signup widget in the upper right-hand corner of his blog. A Washington Post Wonkblog profile confirmed Gaba's personal partisanship:
He admits that he does have a rooting interest in seeing the law succeed – he’s a volunteer for the local Democratic Party. Still, he says he is just trying to figure out whether the law is working.

“I do think the good outweighs the bad, but I don’t think [Obamacare] is the greatest thing in the world," he said. "I’m a single-payer guy."
So basically neutral, right?

PolitiFact started to cite Gaba as an expert in February 2014:
Charles Gaba, a website developer and blogger in Michigan, has been tracking enrollment figures at his ACASignups.net site. His most recent estimate from late February shows 2.6 million Medicaid sign-ups once you subtract those falling into three categories -- those who signed up in states that didn’t expand Medicaid, those who were previously eligible and who "came out of the woodwork" to sign up, and an estimate of the typical "churn" for Medicaid sign-ups in those states.
In March 2014, April 2014 and November 2014 PolitiFact listed interviews with Gaba for stories on the ACA but did not quote him.

An October 2015 PolitiFact fact check of Donald Trump finally quoted Gaba:
"Yes, some people in some plans through some carriers in some states are, indeed, looking at rate hikes of ‘35 to 50 percent’ if they stick with those plans in 2016," said Charles Gaba, who runs the popular blog ACAsignups.net, which tracks Obamacare enrollment.
To be fair, the expert quoted in PolitiFact's following paragraph, Gail Wilensky, was part of the President George H.W. Bush administration. On the other hand, PolitiFact mentions that in the story. Gaba's apparently just one of those "impartial experts" PolitiFact says it interviews.

Perhaps nobody objected, or perhaps Gaba did nothing to jeopardize his status as an impartial expert, so PolitiFact went to him again in November 2015:
Here are some of the provisions of the law and estimates of how many people have benefited from each. The estimates are from Charles Gaba, who has spent several years crunching the numbers for usage of the law at the blog ACAsignups.net.
Is Gaba active in the Democratic Party and in favor of a national single-payer system? Sure. Though the Washington Post says he's "not a political operative" in the same article where it mentions the other facts. Credit to Newsbusters and P. J. Gladnick for highlighting that discrepancy back on March 20, 2014.

Our congratulations to Mr. Gaba for making the jump from liberal blogger to impartial expert.

We have PolitiFact to thank.


Update Nov. 20, 2015

Who can blame ActBlue for moving to support Gaba's supposedly non-partisan work documenting the truth about the Affordable Care Act?

Charles’s “ACA Signups” series was an incredible and hugely time-intensive undertaking that, as he explained in a recent diary, put a major strain on his life outside of Daily Kos:
“I'm absolutely swamped right now. … [K]eeping the site up to date has literally taken over my life. My business is suffering; my clients are losing patience; my family is starting to get concerned.” - Brainwrap, March 24
We don’t normally do this at Daily Kos, but Charles’s months-long contribution to the fight against right-wing lies about Obamacare was above and beyond. That’s why we’re asking Daily Kos readers to chip in as a way of thanking him for his work and to help him continue his “ACA Signups” series.
Kos diarist and PolitiFact's impartial expert. Nice work if you can get it.

Rubio wrong about welders and philosophers?

Republican presidential contender Marco Rubio made a stir with his debate-night claim that welders make more than philosophers.

A number of sources (Forbes and VOX, for example) have weighed in against Rubio on that claim.

PolitiFact joined the chorus with a fact check calling Rubio's claim "False":
Neither salary nor labor statistics back up Rubio’s claim. Statistically, philosophy majors make more money than welders -- with much more room to significantly increase pay throughout their careers.
We found Rubio's claim interesting from a fact-checking perspective before seeing PolitiFact's version of the story. We wondered if anyone who has a degree in philosophy counts as a philosopher. After all, a person could have a degree in philosophy yet work as a welder. Is that person a philosopher or a welder? The same goes for philosophy professors. Are philosophy professors paid for philosophizing or teaching?

We found a post at the conservative blog Power Line that expressed the argument nicely:
Polifact’s analysis is flawed. One doesn’t become a philosopher by majoring [in] philosophy. John and I both so majored and we don’t claim ever to have been philosophers.

We became lawyers. Our pay reflected what lawyers, not what philosophers, make.
How would PolitiFact have evaluated the issue if Rubio's statement had come from a Democrat, we wonder?

PolitiFact catches Fiorina using hyperbole without a license

PolitiFact's statement of principles assures readers that PolitiFact allows license for hyperbole:
Is the statement rooted in a fact that is verifiable? We don’t check opinions, and we recognize that in the world of speechmaking and political rhetoric, there is license for hyperbole. 
In practice, however, it's very difficult to uncover evidence that PolitiFact is able to identify hyperbole. The latest example involves GOP presidential candidate Carly Fiorina (bold emphasis added):
The Affordable Care Act -- Obamacare to some -- is a perennial target of Republicans. But at the GOP presidential debate in Milwaukee, Carly Fiorina made a particularly strong statement about the law’s ineffectiveness.

"Look, I'm a cancer survivor, okay?" Fiorina told moderator Maria Bartiromo of Fox Business Network. "I understand that you cannot have someone who's battled cancer just become known as a pre-existing condition. I understand that you cannot allow families to go bankrupt if they truly need help. But, I also understand that Obamacare isn't helping anyone."
So PolitiFact fact-checks the last sentence and rules it "Pants on Fire." No, we're not kidding.

We say it is odd PolitiFact can hear Fiorina's statement affirming two positive aspects of the Affordable Care Act yet fail to interpret her last statement (denying positive effects) as hyperbole.

Once again, PolitiFact catches a Republican using hyperbole without a license. Those lawless Republicans!

Sunday, November 8, 2015

PolitiFact inconsistent (again)

The fact checkers at PolitiFact exhibit a marvelous degree of inconsistency in their rulings.

Today's example comes from a fact check of GOP presidential contender Ben Carson.

Carson tried to address his lack of political experience by claiming none of the signers of the Declaration of Independence had previous political experience. The Washington Post Fact Checker tackled that claim and gave it four "Pinocchios."

PolitiFact was a little late in the game, and after the Johnny-come-latelys had started their fact check, Carson had amended the Facebook post that had drawn fact checkers' attention. It now read that none of the signers of the Declaration of Independence had federal elected office experience at the time.

PolitiFact went ahead with a two-pronged fact check, looking at Carson's original claim and then evaluating the altered claim.

We find two types of inconsistency in this example.

First, if Carson is making the point that lack of political experience shouldn't overly concern voters, then it's particularly relevant which signers of the Declaration of Independence lacked political experience. PolitiFact focused purely on those who had political experience and ignored Carson's underlying point in the original claim.

Second, PolitiFact switched its focus to Carson's underlying argument for the altered version of his claim. It's beyond question that Carson is correct that none of the signers of the Declaration of Independence had experience in federal elective office. PolitiFact included quotations from experts affirming as much, like the following:
"It does not make sense to use the term ‘federal’ when no federal government existed," agreed Danielle Allen, a political theorist and author of Our Declaration: A Reading of the Declaration of Independence in Defense of Equality. "The signers of the declaration very often had leading political experience in their colony or, as they called them, in their ‘countries.’"
While it doesn't make sense as a support for Carson's underlying argument, the statement is at the same time undeniably true.

Consider what PolitiFact is doing, here. In one prong of its fact check it puts all its focus on the literal truth of Carson's statement and rates it "Pants on Fire." Yet if some of the signers of the Declaration of Independence lacked political experience Carson has some support for his underlying point. In the second prong of its fact check, PolitiFact sets aside the unequivocal truth of what Carson wrote and awards another "Pants on Fire" based entirely on the underlying argument.

Carson's argument from federal elected office experience is approximately as ridiculous as using the raw gender pay gap to argue for laws protecting against discrimination. Neither argument really makes sense. Yet PolitiFact won't rate that obviously flawed gender pay gap argument any lower than "Half True" since it's based on a legitimate statistic.

PolitiFact's inconsistent standards of judgment make pirates look principled by comparison.



Welcome aboard PolitiFact's Black Pearl, Dr. Carson.