
Thursday, October 20, 2016

Demolition=construction? Yup, says PolitiFact

Bless PolitiFact's heart. Those fact-checking journalists just don't seem to realize that they're having trouble setting aside their biases. Unless they do realize it and wantonly lie in their fact checks.

This is actually the third in our series on PolitiFact's debate-night blogging. We broke with the tradition of mentioning that in the title to bring attention to PolitiFact's fundamental error in the case we will examine.

During the third debate, Democratic presidential candidate Hillary Rodham Clinton said her Republican opponent, Donald Trump, used undocumented workers to construct the Trump Tower in Manhattan.

Let's let PolitiFact's Linda Qiu tell it:
Clinton: "He used undocumented labor to build the Trump Tower."

This is True. Between 1979 and 1980, Trump hired a contractor to demolish a Manhattan building to make way for the eventual Trump Tower. That contractor in turn hired local union workers as well as 200 undocumented Polish workers to meet the tight deadlines.
According to Qiu and PolitiFact, demolishing a building is constructing a building. Or at least the two are not different enough to make a difference in the rating. We assume that if Clinton had said Trump used undocumented workers to demolish the building that once stood where Trump Tower now stands, the claim would have rated "True" on PolitiFact's "Truth-O-Meter." By "Truth-O-Meter" standards, then, one version of the claim is no more accurate than the other.

We can't pass up the opportunity to remind our readers that PolitiFact prides itself on paying careful attention to the way politicians use words:
Words matter – We pay close attention to the specific wording of a claim. Is it a precise statement? Does it contain mitigating words or phrases?
That's a joke, right?

Contrary to PolitiFact, "demolition" and "construction" do not carry the same meaning. Construction of a new building typically does not start until demolition of the building occupying the site is complete. If undocumented workers demolished the building the Trump Tower replaced, then they finished their work before construction of the Trump Tower began. If they finished their work before construction began, then it is misleading at best to say they helped construct the Trump Tower.

How can a fact checker botch something that obvious?

More notes on PolitiFact's debate night blogging

We don't have time to go through all of PolitiFact's debate-night blogging, but we'll keep picking out a few gems for comment as the week winds down.

Republican presidential candidate Donald Trump said Democratic presidential candidate Hillary Rodham Clinton wants open borders. A WikiLeaks release offered Trump's claim some support.

Observe how PolitiFact rationalizes calling Trump's claim "Mostly False":
In a brief speech excerpt from 2013, Clinton purportedly says, "My dream is a hemispheric common market, with open trade and open borders, some time in the future with energy that is as green and sustainable."

But we don’t have more context about what Clinton meant by "open borders" because she has not released the full speech. Her campaign has said she was talking about clean energy across the hemisphere.

We rated Trump’s claim Mostly False.
What other context is necessary to understand Clinton's comment? "Hemispheric common market" is pretty clear. "Open trade" is pretty clear. "Open borders" is pretty clear, particularly in the context of "hemispheric common market" and "open trade."

PolitiFact eventually falls back on "he said, she said" journalism by citing the Clinton campaign's explanation of her remarks: "Her campaign has said she was talking about clean energy across the hemisphere." So it was just about having "open borders" so we could trade clean energy in this hemisphere?

Does that even make any sense?

What kind of clean energy gets traded from one nation to another? Wind? Solar? Clean energy proponents bemoan barriers to investment, but what does "open borders" have to do with that?

PolitiFact is using the abbreviated context as a "get out of jail, free" card for Clinton. The excerpt from her speech provides enough context to find Trump's claim at least "Half True."
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
In what way does the definition not fit, other than PolitiFact not knowing for sure Trump's statement is only partially accurate, or not knowing the statement was taken out of context?

Forgive us for pretending that PolitiFact's definitions for its ratings are not ultimately subjective.


Wednesday, October 19, 2016

Notes on PolitiFact's debate night blogging

It's the night of the third presidential debate, and PolitiFact is doing its so-called fact checking thing.

Democratic presidential candidate Hillary Rodham Clinton says Republican presidential candidate Donald Trump is the first major party nominee in 40 years not to release his tax returns. PolitiFact rules this "Mostly True" because there's only one exception out of 22.

Nearly 5 percent of major-party presidential nominees over the past 40 years did not release their tax returns.

That's right. There have been only 22 major-party presidential candidates nominated in the past 40 years. Jimmy Carter, Ronald Reagan, George H. W. Bush, Bill Clinton, George W. Bush and Barack Obama were each nominated twice. So this mighty precedent touches 16 candidates. Saying "40 years" makes it seem like more.

Using 16 nominees for the calculation edges the percentage up over 6 percent.
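
For readers who want to check our arithmetic, here is a minimal sketch in Python (our own illustration; the counts come from the paragraphs above):

# Nominations vs. distinct nominees over the past 40 years
nominations = 22         # major-party nominations, 1976 through 2016
distinct_nominees = 16   # six nominees were nominated twice
exceptions = 1           # the lone non-releaser in PolitiFact's count

print(round(exceptions / nominations * 100, 1))        # 4.5 -- "nearly 5 percent"
print(round(exceptions / distinct_nominees * 100, 2))  # 6.25 -- "over 6 percent"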

We're actually a little surprised PolitiFact didn't give Clinton a "True" rating, considering that her claim that she released all her emails was off by only about 30,000 yet still received a "Half True" rating.





Friday, October 7, 2016

PolitiFact and the Trump tax return promise

The liberal bloggers at PolitiFact have weighed in on whether Republican presidential candidate Donald Trump has broken his promise to release his tax returns. Trump, PolitiFact says, has broken his promise. We know PolitiFact says this thanks to the "False" rating it gave Trump's running mate, Mike Pence, for saying Trump has not broken his promise.



There's a problem for PolitiFact here. Pence is right, at least if PolitiFact is giving us the right version(s) of Trump's promise. PolitiFact provided no version of the promise with any deadline attached.

A promise made with no deadline for keeping the promise is never broken, unless we count death as a deal breaker.

PolitiFact has apparently confused not keeping a promise with breaking a promise.

So long as Trump has not released his tax returns, he has not kept his promise to release his tax returns. But lacking a deadline for keeping the promise, until Trump has passed up every opportunity he will ever have to keep the promise, he has not broken the promise.

It's a simple matter of logic.

But, but, but, but ....


Trump implied that he would release his returns before election day!

Yes, perhaps so. It's an arguable point. But until election day arrives and Trump still has not released his tax returns, Trump has not broken his promise. And Pence is right to say so.


It's a simple matter of logic.



So why does PolitiFact struggle so with simple matters of logic?

Saturday, September 10, 2016

Brilliant: PolitiFact puts "Mostly False" claim in Trump's mouth

PolitiFact claims it takes context into account when it does its fact-checking.

That often is not the case.

Observe this example, from a Sept. 8 fact check of Donald Trump (bold emphasis added):
Donald Trump defended his praise of Russian President Vladimir Putin during NBC’s Commander-in-Chief Forum, claiming he was only returning the favor.

NBC host Matt Lauer listed some of the things Trump and Putin have said about each other, and asked if Trump wants admiration from someone who is at odds with U.S. foreign policy and may be meddling with the election.

"You said, ‘I will tell you, in terms of leadership, he’s getting an A, our president is not doing so well. And when referring to a comment that Putin made about you, I think he called you a brilliant leader, you said, ‘It’s always a great honor to be so nicely complimented by a man so highly respected within his country and beyond,’ " Lauer said. "But do you want to be complimented by that former KBG officer?"

"Well, I think when he calls me brilliant, I’ll take the compliment, OK?" Trump said. "The fact is, look, it’s not going to get him anywhere. I’m a negotiator."
PolitiFact's opening paragraph mischaracterizes the exchange between Lauer and Trump. Trump was answering NBC host Matt Lauer's suggestion that receiving praise from Putin is a bad thing, not defending his own praise of Putin.

Returning the favor?

Linda Qiu, the PolitiFact writer responsible for the fact check, never provided any evidence that Trump said anything about returning a favor through his compliments of Putin.

Qiu didn't even link to a video or transcript, instead posting text-only references to the Commander-in-Chief event.

Time magazine posted a transcript of the event. We took the time to find where Qiu went wrong (bold emphasis added):
LAUER: Do you think the day that you become president of the United States, he’s going to change his mind on some of these key issues?

TRUMP: Possibly. It’s possible. I don’t know, Matt. It’s possible. And it’s not going to have any impact. If he says great things about me, I’m going to say great things about him. I’ve already said, he is really very much of a leader. I mean, you can say, oh, isn’t that a terrible thing — the man has very strong control over a country.

Now, it’s a very different system, and I don’t happen to like the system. But certainly, in that system, he’s been a leader, far more than our president has been a leader.
If one takes the sentence "If he says great things about me, I'm going to say great things about him" out of context, one might justify saying Trump claimed he complimented Putin only because Putin complimented him. However, given the context, that claim will not hold. Trump goes on to suggest that his praise of Putin is sincere. And, in the greater context, Trump is saying the praise either way is not likely to affect Trump's approach to negotiation ("and it's not going to have any impact").

It is deceptive to describe that exchange by saying Trump was "claiming he was only returning the favor." And that exchange had little to do with the claim PolitiFact was fact-checking.

Who said Putin called Trump "brilliant"?

It was Lauer who put forth the idea that Putin called Trump "brilliant," in the context of Putin praising Trump. Lauer followed up by asking Trump if he was comfortable accepting praise from a bad actor like Putin.

Trump responds by saying, using Lauer's example, that he is fine with accepting compliments from Putin, and that the praise would not affect his negotiating stance.

The nature of the compliment serves a very minor point in this exchange. Trump's point in response to Lauer should have taken precedence, but PolitiFact ignored it, preferring to strain gnats.

The key question is whether Trump is responsible for fact-checking Lauer. If Lauer thinks Putin said Trump was brilliant then it is reasonable for Trump to accept the premise of Lauer's question when he gives his answer.

Ignoring that principle may turn a fact checker into a pedant.

PolitiFact dons a fig leaf by pointing to past cases where Trump said Putin called him a "genius." But other fact checkers addressed those cases months ago. They carry no real relevance in this case, unless they show PolitiFact was eager to pile on.


Summary: two flubs for PolitiFact


PolitiFact misleadingly opened its fact check by saying Trump was defending his praise of Putin. PolitiFact built on that error by falsely claiming Trump defended himself by saying he was just returning the favor.

What a great start.

From there, PolitiFact blamed Trump for accepting the premise of Matt Lauer's question. If it's appropriate to place blame for that, Lauer should receive the lion's share. PolitiFact made sure that would not happen, at least in terms of its fact check (whatever happened to PunditFact?).

This is what we've learned to expect from PolitiFact.

Thursday, September 1, 2016

"Trump Effect" reveals PolitiFact as mindless partisans

Need proof that PolitiFact is staffed and run by mindless partisans? Read on.

PolitiFact thoroughly butchered the truth with a fact check of one of Democratic presidential candidate Hillary Clinton's claims last week. Clinton said parents and teachers were complaining of a "Trump Effect" in our schools. The "Trump Effect" described by Clinton involved a rise in bullying and harassment targeting "students of color, Muslims, and immigrants.”

Clinton used a survey report from the left-leaning Southern Poverty Law Center as evidence supporting her statement. PolitiFact rated Clinton's claim "Mostly True," despite the fact that the report offered no real support for the claim. I fisked the report and PolitiFact's fact check with an article at Zebra Fact Check.

The final point in that article leads to some strong evidence of PolitiFact's liberal tilt. I charged that PolitiFact used a different standard for Clinton's claim about the "Trump Effect" than it used for a recent claim Trump made about rising crime.

PolitiFact's wildly different approaches to the two claims make a great case that PolitiFact allows liberal bias to influence its work. Given the likely role of bias in slanting the fact-check, I elected to expand on the point here at PolitiFact Bias.

Parallel Statistical Claims

Trump said crime is rising, and implicitly placed blame on the Obama administration.

Clinton said, by implication, that bullying and harassment are on the rise in our schools and placed explicit blame on Trump's campaign.

PolitiFact's eye-grabbing presentations





Supporting Evidence?

FBI crime statistics supported Trump's claim with respect to violent crime, and a number of newspaper articles directly supported that aspect of Trump's claim.

The only data supporting Clinton's claim came from a small number of anecdotes (fewer than 100) collected from a biased sample of teachers.

PolitiFact's interpretation of the claims

PolitiFact decided Trump was claiming the rise in crime was a trend and not simply an anomalous bump (we are unable to detect the evidence that helped PolitiFact make that determination).

PolitiFact made no suggestion that Clinton was claiming a trend.

PolitiFact's approach to the evidence

In Trump's case, PolitiFact used data from 2014 and before to judge Trump's 2016 claim that crime is rising. PolitiFact for some reason did not consider preliminary data from 2015 that supported Trump's claim with respect to violent crime.

In Clinton's case, PolitiFact ignored evidence of dropping school bullying and harassment prior to 2016 to focus on the evidence available from 2016: A handful of anecdotes. It's worth noting that one of the experts PolitiFact cited in its Clinton fact check says no real consensus exists on the reliability of bullying/harassment metrics.

PolitiFact reaches its conclusions

On Trump:
If you look at overall violent and property crimes -- the only categories that would seem inclusive enough to qualify as "crime," as Trump put it -- he is flat wrong. In fact, crime rates have been falling almost without fail for roughly a quarter-century. We rate his claim Pants on Fire.
The Trump addendum (after critics pointed out PolitiFact ignored data from 2015):
While the preliminary data shows spikes in crime rates in some cities, Trump’s statement was broad, without qualifiers, and it came amid comments that painted an overarching image of a nation in decline. Trump didn’t say that crime was rising "recently" or "in recent months" or "over the past year" or "in some places."
PolitiFact kept its "Pants on Fire" rating for Trump in spite of the supporting evidence.

On Clinton:
The term Trump Effect is a product of the survey’s authors. And the survey is unscientific because it's based on anecdotal reports. But experts in bullying told us the Southern Poverty Law Center’s survey and their sense of current trends in schools supports Clinton’s point.

PolitiFact's artistic license

The Trump and Clinton claims differ in that Trump's claim lacks specifics. As we see in the Trump addendum above, PolitiFact takes what Trump did not say as its license to interpret the "crime is rising" claim as a broad trend instead of an observation supported by contemporary news reports--news reports PolitiFact did not bother to mention in its own reporting.

The two claims also differ in that Trump flatly stated crime was rising while Clinton merely implied that bullying was rising by attributing the concern to "parents and teachers." Would PolitiFact have been forced to give Trump a "Mostly True" rating if he had said "Newspapers are reporting crime is rising"?

We deeply doubt it. PolitiFact would assuredly have focused on the point Trump was implying, that crime is rising, not on the mere fact that newspapers were reporting it. Maybe Trump could have raised his rating all the way up to "False" with that rephrasing?

Yet if Trump had used that phrasing, we would have an extremely close parallel between the two claims. Both politicians would imply a statistical trend. But only Trump would have widely accepted data to partially support his claim. Clinton would not, the alleged opinions of PolitiFact's experts notwithstanding.

The blame game?

 In Trump's case, the "Pants on Fire" ruling canceled any need to deal with Obama's responsibility for rising crime. If crime isn't rising, obviously Obama is not at all to blame for rising crime.

But what about that "Trump Effect" that parents and teachers are worried about? Is the "Trump Effect" Trump's fault if a handful of teachers say it is?

PolitiFact opted to simply drop the issue of causation in Clinton's case. Clinton was able to explicitly blame implicitly rising harassment ("Mostly True") on Trump without any consequences from PolitiFact.

Conclusion

PolitiFact's inconsistency favors the left. See the "Afters" section for even more evidence of the same.

Afters: The needs of the "many"

PolitiFact's hard-left defense of Clinton contained another notable inconsistency involving a comparison to a different fact check of Donald Trump.

PolitiFact exaggerated the survey evidence supposedly supporting Clinton by claiming "many" teachers blamed Trump for increasing bullying and harassment:
Many of these teachers, unsolicited, cited Trump’s campaign rhetoric and the accompanying discourse as the likely reason for this behavior.
The Zebra Fact Check investigation suggests PolitiFact was misled about the number of teachers saying Trump was responsible for increasing bullying or harassment. Out of almost 2,000 teachers participating in the survey, 849 answered the question about bullying or biased language, and of those, 123 mentioned Trump. A fraction of those placed any kind of blame on Trump for anything. We would generously estimate that 25 teachers blamed Trump for something (not necessarily bullying or harassment) in answering that question. This implies that, to PolitiFact, "many" can be less than 1.25 percent of 2,000.
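
A minimal sketch of that arithmetic in Python (our own illustration; the 25-teacher figure is our generous estimate from the paragraph above):

# How small can "many" be? Figures from the Zebra Fact Check review cited above.
participants = 2000        # teachers who took the survey (approximate)
answered_question = 849    # answered the bullying/biased-language question
mentioned_trump = 123      # mentioned Trump in their answers
blamed_trump = 25          # our generous estimate of those blaming Trump for something

print(blamed_trump / participants * 100)  # 1.25 -- the generous upper bound; "many" could be even less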

Yet when Donald Trump said "many" of the celebrities supporting Hillary Clinton weren't so hot lately, PolitiFact offered no hint that it would allow the bar to be set so low:
Trump has a point that out of hundreds of celebrities endorsing Clinton, some have faded.
Whether Trump had a point or not, PolitiFact rated his claim "Mostly False."

Not bothering to actually fact-check whether "many" (1.25 percent?) of Clinton's Hollywood supporters aren't so hot lately seems to be the key to the "Mostly False" rating.

We invite readers to do the fact-checking PolitiFact apparently did not do: Count how "many" teachers blamed Trump for an increase in minority-targeted bullying in their schools. But don't tell PolitiFact what you find. Make them do their own research.

Thursday, June 2, 2016

PolitiFact is California dreamin'

Hans Bader of the Competitive Enterprise Institute helpfully drew our attention to a recent PolitiFact Florida item showing PolitiFact's inconsistency. PolitiFact Florida fixed the "Mostly False" label on Enterprise Florida's claim that California's minimum wage law would cost that state 700,000 jobs.

What's wrong with PolitiFact Florida's verdict?

PolitiFact justified its ruling by claiming the ad suggested that the 700,000 lost jobs would mean 700,000 fewer California jobs than when the hike went into effect:
A radio ad by Enterprise Florida said, "Seven hundred thousand. That’s how many California jobs will be lost thanks to the politicians raising the minimum wage….Now Florida is adding 1 million jobs, not losing them."

This is misleading. The 700,000 figure refers to the number of jobs California could have added by 2026 if it didn’t increase the minimum wage, not a decline in net employment.
We don't think people would be misled by the ad. People would tend to understand the loss as compared to how the economy would perform without the hike.

Back in 2014, when PolitiFact Florida looked at Gov. Scott's claim that the Congressional Budget Office projected a 500,000 job loss from a federal minimum wage hike, the fact checkers had no trouble at all figuring out the 500,000 loss was from a projected baseline.

What's the difference in this case?

Enterprise Florida, an arm of Florida's state government, contrasted California's projected job loss with Florida's gain of 1 million jobs. The changes in the states' respective job numbers can't come from the same cause. Only California is giving its minimum wage a big hike. So if Enterprise Florida was trying to directly compare the job figures, the comparison is apples-to-oranges. But PolitiFact Florida's analysis overlooked the context the ad supplied (bold emphasis added):
"Seven hundred thousand. That’s how many California jobs will be lost thanks to the politicians raising the minimum wage," the ad says, as the Miami Herald reports. "Ready to leave California? Go to Florida instead — no state income tax, and Gov. Scott has cut regulations. Now Florida is adding 1 million jobs, not losing them."
PolitiFact Florida's fact check doesn't lift a finger to examine the effects of relaxed state regulations.

Incredibly, PolitiFact Florida ignores the tense and timing of the job gains Scott lauds ("Now Florida is adding") and insists on comparing future projections of raw job growth for California and Florida, as though California's size advantage doesn't make that an apples-to-oranges comparison.

We think Enterprise Florida muddles its message with its claim Florida is adding 1 million jobs. People hearing the ad likely lack the context needed to understand the message, which we suspect is the dubious idea that Scott's cutting of regulations accounts for Florida adding 1 million jobs.

But PolitiFact Florida oversteps its role as a fact checker by assuming Scott was talking about California losing 700,000 jobs while Florida would gain 1 million at the same time and in the same sense. The ad does not explicitly compare the two figures. And it provides contextual clues that the numbers are not directly comparable.

PolitiFact Florida's error, in detail


We'll illustrate PolitiFact's presumption with the classic illustration of ambiguity, courtesy of yourlogicalfallacyis.com.



Is it a chalice? Is it two people facing one another?

The problem with ambiguity is we don't know which it is. And the Enterprise Florida ad contains an ambiguity. Those hearing the ad do not know how they are supposed to compare California's loss of 700,000 jobs with Florida's gain of 1 million jobs. We pointed out contextual clues that might help listeners figure it out, but those clues do not entirely clear up the ambiguity.

PolitiFact's problem is its failure to acknowledge the ambiguity. PolitiFact has no doubt it is seeing two people facing one another, and evaluates the ad based on its own assumptions.

The ad should have received consideration as a chalice: California's 700,000 job loss represents a poor job climate caused by hiking the minimum wage while Florida's 1 million job gain represents an employment-friendly environment thanks to no state income tax and relaxed state regulations.

Conclusion

PolitiFact Florida succeeded in obscuring quite a bit of truth in Enterprise Florida's ad.

Update: Adding Insult to Injury

As we moved to finish our article pointing out PolitiFact Florida's unfair interpretation of Enterprise Florida's ad, PolitiFact California published its defense of California Governor Jerry Brown's reply to Enterprise Florida:
There’s a lot to unpack there. So we focused just on Brown’s statement about California adding twice as many jobs as Florida, and whether there was any context missing. It turns out California’s job picture is not really brighter than Florida’s, at least not during the period Brown described.
Why do we call it a "defense" instead of a "fact check"?

That's easy. The statement PolitiFact California examined was a classic bit of political deception: Say something true and imply that it means something false. For some politicians, typically liberals, PolitiFact will dutifully split the difference between the trivially true factoid and the false conclusion, ending up with a fairly respectable "Half True." Yes, PolitiFact California gave Brown a "Half True" rating for his claim.

Brown tried to make California's job picture look better than Florida's using a statistic that could not support his claim.

Was Brown's claim more true than Enterprise Florida's ad? We're not seeing it. But it's pretty easy to see that PolitiFact gave Brown more favorable treatment with its "Truth-O-Meter" ratings.


Note: This item was inadvertently published with a time backdated by hours--the scheduled date was wrong. We reverted the post to draft form, added this note, and scheduled it to publish at the originally planned time.

Friday, January 29, 2016

PolitiFact trumps the truth

No, this is not one of those classic cases where PolitiFact announces that a claim is true but rates it "False" anyway. This is just a case where PolitiFact's research backed the claim and PolitiFact rated the claim "False" in spite of the evidence.

The victim this time? Donald Trump.

PolitiFact sets the stage with a quotation from Trump's appearance on the CNN show Anderson Cooper 360. But PolitiFact does not include the question Trump was asked. From Trump's response, it looks like Trump was asked why he chose not to attend a presidential primary debate on Fox News:
He (Trump) says a biting Fox News release is why he pulled the plug.

"Well, I’m not a person that respects Megyn Kelly very much. I think she’s highly overrated. Other than that, I don’t care," he told CNN an hour before the debate. "I never once asked that she be removed. I don’t care about her being removed. What I didn’t like was that public relations statement where they were sort of taunting. I didn’t think it was appropriate. I didn’t think it was nice."

His assertion that he "never once" asked for Kelly’s removal piqued our interest.
Was Trump saying he never set Kelly's removal as a precondition for attending the debate? That seems possible, but without the full transcript we can't say.

Do we trust PolitiFact to adequately assess the context and provide that information? No, of course not. PolitiFact makes too many simple errors to garner that type of trust.

After PolitiFact questions whether Trump asked for Kelly's removal, it provides a ton of evidence supposedly supporting the "False" rating it eventually gave to Trump. Except it doesn't add up that way:
"Based on @MegynKelly's conflict of interest and bias she should not be allowed to be a moderator of the next debate," Trump tweeted Jan. 23 and made similar comments in a campaign rally in Iowa the same day
Saying Kelly should not be allowed to be a moderator is not the same as asking for Kelly's removal as moderator. It's certainly not a "full-throated" request for Kelly's removal.
Trump repeated his position that Kelly "should recuse herself from the upcoming Fox News debate," according to Boston Globe reporter James Pindell.
Likewise, saying Kelly should recuse herself from the debate is not the same as asking for her removal.
According to New York, Trump began to threaten a boycott a day later and toy with the idea of holding his own event.
If Trump threatens not to attend the debate and starts talking about holding his own event in the same time slot, is he asking for Kelly's removal? If so, the request is indirect.
Two days before the debate, Trump polled his Twitter followers, asking,"Should I do the #GOPDebate?" (Of over 150,000 responses, 56 percent were "Yes.")

In the tweet, Trump posted a link to an Instagram video, in which he said, "Megyn Kelly is really biased against me. She knows that, I know that, everybody knows. Do you really think she can be unbiased in a debate?"
We see that PolitiFact strikes the same note over and over. If Trump threatens to skip the debate and offers Kelly's involvement as a reason, PolitiFact reasons, then Trump is asking for Kelly's exclusion.

But that's not a case of black-and-white truth, is it?

If Trump never literally asked for Kelly's exclusion, then it's literally true that Trump never asked for Kelly's exclusion. At most, he's implying that he might participate in the debate if Fox removed Kelly from her role in moderating the debate.

Let's look at how PolitiFact justifies its rating of Trump in the conclusion (bold emphasis added):
Trump said, "I never once asked that (Megyn Kelly) be removed" as a debate moderator.

This statement greatly downplays Trump’s comments ahead of the debate, even if his absence really had more to do with a mocking Fox News release in the end.
In the same context where Trump said "I never once asked that she be removed" Trump repeated criticisms of Kelly ("I’m not a person that respects Megyn Kelly very much. I think she’s highly overrated"). And if Trump's main reason for not attending the debate was the nature of the Fox communications in response to his complaints, then downplaying those comments is appropriate.

PolitiFact never ruled out the possibility that Trump stayed away from the debate for the reason he claimed (see bold emphasis above).

PolitiFact's evidence does not support a "False" rating for Trump. That would require an unambiguous example of Trump asking for Kelly's removal. There's nothing like that in PolitiFact's fact check. PolitiFact also failed to provide enough context from the Anderson Cooper interview with Trump to allow readers to verify PolitiFact's judgment.

This is "gotcha" journalism designed to continue feeding a narrative PolitiFact is peddling about Trump. The evidence says Trump was literally correct in saying he did not ask for Kelly's removal. The missing context might further vindicate Trump.


Update Jan. 30, 2016: Clarified the third paragraph to make more clear the context of Trump's appearance on Anderson Cooper 360. 
Update Feb. 29, 2016: Fixed formatting to make clear "His assertion that he "never once" asked for Kelly’s removal piqued our interest" was part of a quotation of PolitiFact. Also added a link to the original PolitiFact article--we apologize for the delay, for we take it as standard practice to link to all our sources.

Tuesday, January 26, 2016

PolitiFact's partisan correlation correlation

The co-founder and co-editor of PolitiFact Bias, Jeff D, noted this outstanding example of PolitiFact's inconsistency. Jeff's a bit too busy to write up the example and so granted me that honor.

The problem Jeff noted has to do with PolitiFact's treatment of factual correlations. A correlation occurs when two things tend to happen together. When the correlation occurs regularly, it is often taken as a sign of causation. We infer that one of the things causes the other in some way.

Correlation, however, is not proof of causation. PolitiFact recognizes that fact, as we can see from the explanation offered in a fact check of Lieutenant Governor Dan Patrick (R-Texas):
Lott’s study shows a 25 percent decrease in murder and violent crime across the country from 2007 to 2014, as well as a 178 percent rise in the number of concealed-carry permits. Those two trends may be correlated, but experts say there’s no evidence showing causation. Further, gun laws may have little to nothing to do with rates of falling crime.
PolitiFact ruled Patrick's statement "Mostly False," perhaps partly because Patrick emphasized open carry while Lott's research dealt with concealed carry.

PolitiFact also noted that correlation does not equal causation while evaluating a claim from Democratic presidential candidate Hillary Clinton. Clinton said recessions happen much more often under Republican presidents:
The numbers back up Clinton’s claim since World War II: Of the 49 quarters in recession since 1947, eight occurred under Democrats, while 41 occurred under Republicans.

It’s important to note, however, that many factors contribute to general well-being of the economy, so one shouldn’t treat Clinton’s implication -- that Democratic presidencies are better for the economy -- with irrational exuberance.
Okay, maybe PolitiFact was a little stronger with its warning that correlation does not necessarily indicate causation while dealing with the Republican. But that doesn't necessarily mean that PolitiFact gave Clinton a better rating than Patrick.

Clinton's claim received a "Mostly True" rating, by the way.

Was Patrick's potentially faulty emphasis on open carry the reason he fared worse with his rating? We can't rule it out as a contributing factor, though PolitiFact wasn't quite crystal clear in communicating how it justified the rating. On the other hand, Clinton left out details from the research supporting her claim, such as the fact that the claim applied to the period since 1947. We see no evidence PolitiFact counted that against her.

Perhaps this comparison is best explained via biased coin tosses.


Post-publication note: We'll be looking at PolitiFact's stories on causation narratives to see if there's a partisan pattern in their ratings.

 

Sunday, December 20, 2015

PolitiFact, Paul Ryan and cherry-picking

Does this add up?

Speaker of the House Paul Ryan (R-Wis.) said during a television appearance that Obamacare was making families pay double-digit premium increases.

PolitiFact gave Ryan's statement its "fact check" treatment, which we're inclined to call liberal blogging.

Step 1:
Ryan is suggesting ... increases in the "double digits." We decided to rate that claim on the Truth-O-Meter.
Step 2:
According to HHS data, 19 out of the 37 states in the federal exchange saw an average rate increase in the double digits. At the low end, rates in Missouri increased by 10.4 percent while Oklahoma saw the biggest hike at 35.7 percent.
Step 3:
Ryan has a point that some plans have seen increases of 10 percent or more with insurance purchased on healthcare.gov. However, Ryan is cherry-picking the high end of rate changes.
Got it? PolitiFact says Oklahoma is experiencing an average Obamacare exchange rate hike of 35.7 percent. And saying "double-digit" rate increases is cherry-picking from the high end of rate changes.

Given that a 10 percent rate hike is "double digits" and the top (average) exchange rate hike is over 35 percent, what kind of sense does it make to say Ryan is cherry-picking the high end of rate changes?

Silly liberal bloggers.

Ryan used a normal ambiguity when he spoke. "Families" does not mean "all families," as PolitiFact claimed Ryan was suggesting. If a substantial number of families are getting hit with double-digit rate increases, then what Ryan said was accurate. And by "accurate" I don't mean "True" in the sense of a "Truth-O-Meter" rating. I mean "accurate" in the sense that PolitiFact uses the term when it defines "True" and "Mostly True" for purposes of its cheesy "Truth-O-Meter."

 PolitiFact's definitions are themselves "Half True." You can tell that's the case when Ryan's accurate statement receives its "Half True" rating while Bernie Sanders' inaccurate statement receives a "Mostly True" rating.

PolitiFact fact-checking is a game liberal bloggers play.


Afters

Other than Ryan's office providing supporting URLs dealing with the individual market, what would lead PolitiFact to simply ignore the much larger group plan market in its fact check? Do group plans not count? Does the lack of an employer mandate mean Obamacare does not affect group insurance despite the new regulations it imposes on the group market?

Rate increases? What rate increases?
And according to an Arthur J. Gallagher & Co. survey of smaller employers, most of which have less than 1,000 employees, released Friday, 44% reported premium rate hikes of 6% or more in 2014. Twenty-three percent saw rates in the double digits, the survey showed.
We'll say it again: Silly liberal bloggers.

After Afters

Before I forget the caboose on this PolitiFact trainwreck ...
Ryan missteps by saying the law alone is "making" the premiums increase. Rather, experts say, hikes are more likely the result of insurers underestimating how sick enrollees would be.
Silly cherry-picking liberal bloggers. Obamacare is the reason insurers don't know how sick their enrollees will be. Guaranteed issue. Remember? No, I guess you don't remember.


Correction Dec. 20, 2015: Changed from "Ind." to "Wis." the state which Ryan represents. Hat tip to commenter Pagan Raccoon for pointing out the error.
Correction Dec. 21, 2015: When repeating PolitiFact's 35.7 figure we typo-inflated it to 37.7. That's now fixed. Hat tip to "John Smith" for using the comments section to point out the error.

Tuesday, November 24, 2015

Lauren Carroll cannot contain herself

PolitiFact/PunditFact writer Lauren Carroll couldn't resist pushing back against criticism she received on her story looking at the containment of ISIS.

Carroll suggested on Twitter that Breitbart.com's John Nolte had not read her fact check. The evidence?
1) I am the only byline on the story 2) I fact-checked Ben Rhodes, not Obama. @NolteNC— Lauren Carroll (@LaurenFCarroll) November 16, 2015
The fact is that PunditFact gave Carroll's story more than one presentation.

In one of the presentations, a version of Carroll's story was combined with another story from a Sunday morning news show. That second version of the story has Linda Qiu listed on the byline. So Carroll's claim she's the only one on the byline rates a "Half True" on the Hack-O-Meter. Combined with her whinge about fact-checking Obama proxy (deputy national security advisor) Ben Rhodes instead of President Obama, Carroll provides an astonishingly thin defense of her work.

The critiques from Breitbart.com and the Washington Examiner both made the point that Obama was answering a question about ISIS' strength, not the range of its geographical control. Carroll completely accepted Rhodes' spin and ignored the point of the question Obama was asked.

Where's Carroll's explanation of her central error? It's certainly not in her clumsy jabs at John Nolte.

Tuesday, November 17, 2015

ISIS "contained"?

When President Obama called ISIS ("ISIL") "contained" in a televised interview on Nov. 12, 2015, other politicians, including at least one Democrat, gave him some grief over the statement.

Mainstream fact checker PunditFact came to the president's defense. PunditFact said Obama was just talking about territorial expansion, so what he said was correct.

Conservative media objected.

John Nolte from Breitbart.com:
PolitiFact’s transparent sleight-of-hand comes from basing its “True” rating — not on the question Obama is asked — but how the President chose to answer it.

Stephanopoulos asks, “But ISIS is gaining strength aren’t they?”
T. Becket Adams from the Washington Examiner:
PunditFact has rated the Obama administration's claim that the Islamic State has been "contained" as "true," even after a recent series of ISIS-sponsored events around the world have claimed the lives of hundreds of civilians.

For the fact-checker, the White House doesn't believe ISIS is no longer a global threat, as fatal attacks last week in Beirut and Paris would show. The president and his team merely believe that the insurgent terrorist group controls a smaller portion of the Middle East today than it did a few months ago.
We think PunditFact has a bit of a point when it claims the president's remarks are taken out of context. But as Nolte and Adams point out, the specific context of the Obama interview was the strength of ISIS, not its territorial expansion.

If the president was saying that containing ISIL's geographic control equates with containing its strength, then PunditFact ends up taking the president out of context to justify claiming the president was taken out of context.

There's something not quite right about that.


Clarification Dec. 10, 2015: Changed "wasn't" to "was" in the next-to-last paragraph

Wednesday, September 3, 2014

Zebra Fact Check: The Importance of Interpretation

Bryan has written an article over at his fact checking site, Zebra Fact Check, that I think is worth highlighting here. Bryan discusses the importance and benefits of correctly interpreting a person's claims, and uses a recent PunditFact article as an example of botching this critical exercise:
PunditFact fails to apply one of the basic rules of interpretation, which is to interpret less clear passages by what more clear passages say. 
Bryan profiles PunditFact's article on Tom DeLay, who was discussing the indictment of Texas governor Rick Perry. In addition to pointing out PunditFact's shoddy journalism, Bryan spots several ways their apparent bias affected the fact check:
We think PunditFact’s faulty interpretation did much to color the results of the fact check. Though PolitiFact’s headline announced a check of DeLay’s claim of ties between McCrum and Democrats, it’s hard to reconcile PolitiFact’s confirmation of such ties with the “Mostly False” rating it gave DeLay. PunditFact affirms “weak ties” to Democrats. Weak ties are ties.
Even more damning evidence of PunditFact's liberal bent comes from its selective use of a CNN chyron placed next to its ubiquitous Truth-O-Meter graphic, allowing PolitiFact to reinforce the editorial slant of its fact check.

While I'm admittedly biased, Bryan's piece is well done and I recommend you read the whole thing.

Afters: 
Bryan didn't mention the main thing I noticed when I first read PunditFact's DeLay article, namely, the superfluous inclusion of a personal smear. PunditFact writer Linda Qiu offered up this paragraph in summation:
This record of bipartisanship is not unusual nor undesired in special prosecutors, said Wisenberg, who considers himself a conservative and opposes the prosecution against DeLay. He pointed out that special prosecutor Ken Starr, famous for investigating President Bill Clinton, also had ties to both parties, and DeLay did not oppose him.
We're not sure what probative value these two sentences have beyond suggesting DeLay is a hypocrite. Highlighting hypocrisy is a very persuasive argument, but it's also a fallacious one. Tom DeLay's support for or opposition to Ken Starr bears no relevance to the factual accuracy of the current claim PunditFact is supposedly checking. It serves only to tarnish DeLay's character with readers. That's not fact checking, and that's not even editorializing. It's immature trolling.


Wednesday, July 30, 2014

More of PunditFact's PolitiMath

Occasionally we have fun looking at how the degree of inaccuracy impacts PolitiFact's "Truth-O-Meter" ratings.  Naturally the same evaluations apply to PunditFact, which uses the same rating system as well as, we suspect, a similarly radical inconsistency in applying the ratings.

Today we're looking at PunditFact's July 16, 2014 "Half True" rating of Cokie Roberts' comparison of the murder risk in Honduras with that risk in New York City.

Roberts was way off with her figures, and PunditFact surmised that Roberts' source had compared Honduras' murder risk against figures approaching New York City's annual murder risk:
(T)he chances of getting murdered in Honduras are 1 in 1100 per year compared to 1 in 20,000 per year in New York. Over a lifetime, the chances of being murdered in Honduras are 1 in 15, compared to 1 in 250 in New York.

That makes Honduras more dangerous but not nearly to the levels Roberts described.

What Rattner may have done, and what Roberts repeated, was compare figures approaching the chances of being murdered in New York in one year (1 in 20,000).
Acting charitably toward Roberts, the risk of getting murdered in Honduras is at most 18 times greater than in New York City. Roberts' numbers imply the risk is about 1,780 times greater (and we're doing Roberts a favor by rounding that figure down).

These figures mean Roberts exaggerated the difference in risk by about 9,789 percent, which is another way of saying her figures magnified the difference in risk by almost 100 times.
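
A minimal sketch of that arithmetic in Python (our own illustration, using the ratios quoted above):

# Roberts' implied risk ratio vs. the actual ratio from PunditFact's figures
actual_ratio = 18       # Honduras murder risk vs. New York City, annual figures
implied_ratio = 1780    # ratio implied by Roberts' numbers, rounded down in her favor

print(round((implied_ratio - actual_ratio) / actual_ratio * 100))  # 9789 -- percent exaggeration
print(round(implied_ratio / actual_ratio))                         # 99 -- almost 100 times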

Throwing darts while blindfolded?
That's a high level of inaccuracy.

For comparison, PolitiFact rated President Obama "False" for overstating the ACA's effect on the number of people obtaining insurance for the first time by a mere 288 percent.  We thought that degree of exaggeration might qualify Obama for a "Pants on Fire" rating given PolitiFact's history.

Using PunditFact's application of principle, however, perhaps Obama should have received a "Half True" in recognition of his point that some people were getting insurance for the first time.

It goes without saying that Republicans tend to face an even tougher time receiving consideration of their underlying points.

Examples like this show us the "Truth-O-Meter" has little to do with fact checking and a great deal to do with editorializing.