
Sunday, June 12, 2022

Enter Strawman: PolitiFact uses interpretive follies to downgrade Republican claim (Updated)

PolitiFact routinely applies uncharitable interpretation to reach nonsensical conclusions with its trademarked "Truth-O-Meter" ratings. Rep. Glenn Grothman (R-Wisc.) received that treatment from PolitiFact Wisconsin on June 10, 2022.

Grothman said the proposed loan forgiveness plan would primarily benefit the wealthy. And he went on to emphasize that the program would come while low-income Americans struggle.

PolitiFact did a fair job at first of presenting Grothman's words:

"Nearly 60% of all student loan debt is held by the rich and upper-middle class," he said in a May 21, 2022 newsletter. "So, by forgiving student loan debt, we would be handing the wealthy a financial windfall while low income Americans suffer further from inflation and rising costs."

DALL-E image: "Enter Strawman"

But then the twisting began:
For the purposes of this fact-check, we’re going to look at the portion of the claim about who holds student loan debt, and whether or not forgiveness would help low-income people.
Out of the blue, PolitiFact Wisconsin jumps to the conclusion that Grothman is saying loan forgiveness extended to low-income persons would do little to help them.

PolitiFact simply declines to consider that Grothman might mean that executing a policy that most benefits the wealthy makes little sense during an inflation crunch that's particularly hurting lower-income Americans. PolitiFact confirms Grothman was right that the policy would tend to benefit the wealthy. But by pursuing its far-fetched interpretation of Grothman's claim, PolitiFact ends up defeating a straw man:
(Grothman) misfires a bit in suggesting that loan forgiveness would not matter much to low-income people. For college graduates in lesser-paying jobs, it might make a huge difference in terms of their finances.

"Suggesting." That's PolitiFact-ese for "We made it up."

Grothman wasn't saying loan forgiveness would not help lower income people who received it. He was saying loan forgiveness mostly would benefit the wealthy when lower income people are the ones in need of relief.

But try to tell that to a liberal blogger wearing the "fact checker" label.


Update: Grok's image generation is leaps and bounds beyond what I used for that earlier image. But it was pretty much impossible to get Grok to give James Hetfield straw-textured skin all over. Closest I got was the hands. But this image is pretty darned good. May have to re-use it sometime.




Wednesday, June 1, 2022

Literally false and the underlying point is false, therefore "Mostly True"

 Have we mentioned PolitiFact is biased?

Check out this epic fail from the liberal bloggers at PolitiFact (red x added):


PolitiFact found it "Mostly True" that most of the "killers" to which Sen. Chris Murphy (D-Conn.) referred tend to be 18, 19 years old.

What's wrong with that?

Let us count the ways.

In reviewing the context, Sen. Murphy was arguing that raising the age at which a person may buy a gun would reduce school shootings. Right now that threshold stands at 18 in most states and for most legal guns, with certain exceptions.

If, as Murphy says, most school shootings come from 18- and 19-year-olds, then a law moving the purchase age to 21 could potentially have quite an effect.

"Tend To Be"="Tend To Be Under"?

But PolitiFact took a curious approach to Murphy's claim. The fact checkers treated the claim as though Murphy was saying the "killers" (shooters) were 20 years old or below.

That's not what Murphy said, but giving his claim that interpretation counts as one way liberal bloggers posing as objective journalists could do Murphy a favor.

When PolitiFact checked Murphy's stats, it found half of the shooters were 16 or under:

When the Post analyzed these shootings, it found that more than two-thirds were committed by shooters under the age of 18. The analysis found that the median age for school shooters was 16.

So, using this criteria [sic], Murphy is correct, even slightly understating the case.

See what PolitiFact did, there?

Persons 16 and under are not 18, 19 years old. Not the way Murphy needs them to be 18, 19 years old.

If Murphy can change a law that makes it illegal for most shooters ("18, 19 years old") to buy a gun, that sounds like an effective measure. But persons 17 and under typically can't buy guns as things stand. So, for the true majority of shooters, Murphy's law (pun intended?) wouldn't change their ability to buy guns. Buying a gun would simply remain illegal for them, as it is now.

To emphasize, when PolitiFact found "the median age for school shooters was 16," that effectively means most school shooters are 17 or below. That actually contradicts Murphy's claim that most are aged 18 or 19. We should expect that most are below the age of 17, in fact.

If Murphy argues for raising the age for buying a gun to 21 based on most shootings coming from persons aged 17 or younger, that doesn't make any sense. It doesn't make sense because it would not change anything for the majority of shooters. They can't buy guns now, and they couldn't under Murphy's proposed law.

Calculated Nonsense?

By spouting mealy-mouthed nonsense, Murphy succeeded in laying out a narrative that gun control advocates would like to back. Murphy makes it seem that raising the gun-buying age to 21 might keep most school shooters from buying their guns.

As noted above, the facts don't back that claim. It's nonsense. But if a Democratic senator can get trusted media sources to back that nonsense, well then it becomes a compelling Media Narrative!

Strict Literal Interpretation

Under strict literal interpretation, Murphy's claim must count as false. If most school shooters are 16 years old or younger, then the existence of just one 17-year-old shooter makes his claim false. Half plus one makes a majority every time.

Murphy's claim was false under strict literal (hyperliteral) interpretation.

Normal Interpretation

Normal interpretation is literal interpretation, but taking expressions like "raining cats and dogs" the way people normally understand them. We've reviewed how normal interpretation should work in this case. To support a legitimate argument for a higher gun-buying age, Murphy needs to identify a population that the legislation would reasonably affect. The ages Murphy named (18, 19) meet that criterion. And, because Murphy used some language indicative of estimation ("tend to be"), we can even reasonably count 20-year-olds in Murphy's set.

Expanding his set down to 17 doesn't make sense because changing the gun purchase age from 18 to 21 has no effect on a 17-year-old's ability to purchase a gun at 17.

But combining the shootings from 18-, 19- and 20-year-olds cannot make up "most" of the school shootings if the median age for the shooters is 16 and at least one shooter was either 17 or over 20.
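For readers who want to check the arithmetic, here's a minimal sketch. The ages are purely hypothetical (not the Post's data set), chosen only so the median works out to 16; the point is that a median of 16 caps the 18-to-20 group short of a majority.

```python
from statistics import median

# Purely hypothetical ages, NOT the Washington Post's data set.
# They are chosen only so the median works out to 16.
ages = [12, 13, 14, 15, 15, 16, 16, 16, 17, 18, 18, 19, 19, 20, 21]

assert median(ages) == 16  # at least half of the shooters are 16 or younger

in_murphy_range = [a for a in ages if 18 <= a <= 20]  # the "18, 19" (plus 20) group
share = len(in_murphy_range) / len(ages)

print(f"Median age: {median(ages)}")
print(f"Share aged 18-20: {share:.0%}")  # 33% here -- well short of a majority

# With a median of 16, at least half the ages sit at 16 or below,
# so the 18-to-20 group can never reach "most" of the shooters.
```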

Murphy's claim was false given normal (literal) interpretation.

Biased Interpretation

PolitiFact used biased interpretation. The fact checkers implicitly said Murphy meant most of the shootings came from people under the age of 18 or 19, even though that makes nonsense of Murphy's argument.

PolitiFact's biased interpretation enhanced a misleading media narrative attractive to liberals.

Coincidence?

Nah. PolitiFact is biased to the left. So we see them do this kind of thing over and over again.

So it's not surprising when PolitiFact rates a literally false statement from a Democrat as "Mostly True."


Correction June 1, 2022: Fixed a typo (we're=we've)

Sunday, December 18, 2016

Fact-checking the wrong way, featuring PolitiFact

Let PolitiFact help show you the right way to fact check by avoiding its mistakes

Fake and skewed news do present a problem for society. Having the best possible information allows us to potentially make the best possible decisions. Bad information hampers good decision-making. And the state of public discourse, including the state of the mainstream media, makes it valuable for the average person to develop fact-checking skills.

We found a December 11, 2016 fact check from PolitiFact that will help us learn better how to interpret claims and make use of expert sources.

The interpretation problem

PolitiFact believed it was fact-checking Republican Reince Priebus' claim that there was no report available saying Russia tried to interfere with the 2016 presidential election:


Was Priebus saying there was no "specific report" saying Russia tried to "muddy" the election? Here's how PolitiFact viewed it:
"Let's clear this up. Do you believe -- does the president-elect believe that Russia was trying to muddy up and get involved in the election in 2016?" Meet the Press host Chuck Todd asked on Dec. 11, 2016.

"No. 1, you don't know it. I don't know it," Priebus said. "There's been no conclusive or specific report to say otherwise."

That’s wrong. There is a specific report.

It was made public on Oct. 7, 2016, in the form of a joint statement from the Department of Homeland Security and the Office of the Director of National Intelligence. At the time, the website WikiLeaks was releasing a steady flow of emails stolen from the Democratic National Committee and top Hillary Clinton adviser John Podesta.

"The U.S. Intelligence Community (USIC) is confident that the Russian Government directed the recent compromises of e-mails from U.S. persons and institutions, including from U.S. political organizations," the statement said. "These thefts and disclosures are intended to interfere with the U.S. election process."
Based on the context of Priebus' appearance on "Meet the Press," we think PolitiFact botched its interpretation. NBC's Chuck Todd went back and forth with Priebus for a number of minutes on the nature of the evidence supporting the charge of Russian interference with the 2016 U.S. presidential election. The main topic was recent news reports suggesting Russia interfered with the U.S. election to help Republican candidate Donald Trump. Todd's question after "Let's clear this up" had little chance of clearing up that point. Priebus would not act unreasonably by interpreting Todd's question to refer to interference intended to help the Republican Party.

But the bigger interpretation problem centers on the word "specific." Given the discussion between Todd and Priebus around the epistemological basis for the Russian hacking angle, including "You don't know it and I don't know it" in the immediate context, both "conclusive" and "specific" address the nature of the evidence.

"Conclusive" means incontrovertible, not merely featuring a conclusion. "Specific" means including a specific evidence or evidences, and therefore would refer to a report showing evidences, not merely a particular (second definition as opposed to the first) report.

In short, PolitiFact made a poor effort at interpreting Priebus in the most sensible way. Giving conservatives short shrift in the interpretation department occurs routinely at PolitiFact.

Was the report PolitiFact cited incontrovertible? PolitiFact offered no argument to that effect.

Did the report give a clear and detailed description of Russia's attempt to influence the 2016 election? Again, PolitiFact offered no argument to that effect.

PolitiFact's "fact-checking" in this case amounted to playing games with the definitions of words.

The problem of the non-expert expert

PolitiFact routinely cites experts either without investigating or reporting (or both) their partisan leanings. Our current case gives us an example of that, as well as a case of giving the expert a platform to offer a non-expert opinion:
Yoshiko Herrera, a University of Wisconsin political scientist who focuses on Russia, called that letter, "a pretty strong statement." Herrera said Priebus’ comment represents a "disturbing" denial of facts.

"There has been a specific report, and politicians who wish to comment on the issue should read and comment on that report rather than suggest there is no such report or that no work has been done on the topic," Herrera said.
What relevant expertise does a political scientist focused on Russia bring to the evaluation of statements on issues specific to U.S. security? Even taking for granted that the letter Herrera talks about was objectively "a pretty strong statement," Herrera has no obvious qualification that lends weight to her opinion. An expert on international intelligence issues might lend weight to that opinion by expressing it.

The same goes for PolitiFact's second paragraph quoting Herrera. The opinion in this case gains some weight from Herrera's status as a political scientist (the focus on Russia counts as superfluous), but her implicit opinion that Priebus made the error she cautions about does not stem from Herrera's field of expertise.

Note to would-be fact checkers: Approach your interviews with experts seeking comments that rely on their field of expertise. Anything else is fluff, and you may end up embarrassing the experts you cite by presenting opinions outside their expertise as though they carried expert weight.

Was Herrera offering her neutral expert opinion on Priebus' comment? We don't see how her comments rely on her expertise. And reason exists to doubt her neutrality.

Source: FEC.

Yoshiko Herrera's FEC record of political giving shows her giving exclusively to Democrats, including a modest string of donations to the campaign of Hillary Rodham Clinton.

Did PolitiFact give its readers that information? No.

The wrap

Interpret comments fairly, and make sure you only quote expert sources when their opinion comes from their area of expertise. Don't ask an expert on political science and Russia for an opinion that requires a different area of expertise.

For the sake of transparency, I advocate making interview material available to readers. Did PolitiFact lead Herrera toward the conclusion a "specific report" exists? Or did Herrera offer that conclusion without any leading? An interview transcript allows readers to answer that question.

PolitiFact has announced that it plans to start making interview materials available as a standard practice. Someday? Somewhere over the rainbow?



Afters

Since the time we started this evaluation of PolitiFact's fact check, U.S. intelligence agencies have weighed in with comments hinting that they possess specific evidence showing a Russian government connection to election-related hacking and information leaks. But even these new reports do not contradict Priebus until the reports include the clear and detailed evidence of Russian meddling--from named and reliable sources.

Sunday, October 16, 2016

The problem with the "best chess fact-check ever written"

PolitiFact's founding editor Bill Adair, now a journalism professor at Duke University, used Twitter to heap praise on a recent PolitiFact fact check:




Here's the version from Share the Facts:



A fact checker ought to notice the problem right away. Indeed, the average reader likely sees a big hint about the problem in the Share the Facts version. The key part of the fact check is outside the quotation marks denoting what Republican presidential candidate Donald Trump said.

We presume the fact check more than adequately shows that the United States boasts multiple Grandmaster level chess players. We question whether PolitiFact established as fact that Trump said the United States has no Grandmaster chess players.

Here's what Trump said, via PolitiFact (bold emphasis added):
Trump was in the midst of criticizing international trade agreements, including the Trans-Pacific Partnership. He said he supports the idea of bilateral agreements, saying that such deals would make it possible for the United States to threaten to withdraw, then renegotiate on more favorable terms before the agreement expired.

Trump went on to say that with multilateral pacts like the TPP, "you can't terminate -- there's too many people, you go crazy. It's like you have to be a grand chess master. And we don't have any of them."
Before fact-checking this statement from Trump, one must figure out what he meant. Was he saying that the United States has nobody evaluating its trade deals with skills parallel to a chess Grandmaster? Or was he saying the United States boasts no citizens who have attained Grandmaster rank in chess?

If Trump had added something like "Bobby Fischer was the last one," it would have gone a long way toward confirming what Trump was saying. But how can a fact checker justify assuming the "them" in Trump's statement refers to literal chess players and not figurative ones involved in international trade on behalf of the United States?

PolitiFact routinely finds its way toward favoring one interpretation over others without bothering to acknowledge the other possibilities and without justifying its choice.

It's one approach to fact-checking that fact-checkers ought to avoid.

Is this fact check the best one ever on chess? If it's the only one, then we suppose we won't argue Adair's claim. But it's not a good political fact check if we value fairness, accuracy, relevance, and non-partisanship.

Thursday, June 2, 2016

PolitiFact is California dreamin'

Hans Bader of the Competitive Enterprise Institute helpfully drew our attention to a recent PolitiFact Florida item showing PolitiFact's inconsistency. PolitiFact Florida affixed the "Mostly False" label to Enterprise Florida's claim that California's minimum wage law would cost that state 700,000 jobs.

What's wrong with PolitiFact Florida's verdict?

PolitiFact justified its ruling by claiming the ad suggested that the 700,000 lost jobs would mean 700,000 fewer California jobs than when the hike went into effect:
A radio ad by Enterprise Florida said, "Seven hundred thousand. That’s how many California jobs will be lost thanks to the politicians raising the minimum wage….Now Florida is adding 1 million jobs, not losing them."

This is misleading. The 700,000 figure refers to the number of jobs California could have added by 2026 if it didn’t increase the minimum wage, not a decline in net employment.
We don't think people would be misled by the ad. People would tend to understand the loss as compared to how the economy would perform without the hike.

Back in 2014, when PolitiFact Florida looked at Gov. Scott's claim that the Congressional Budget Office projected a 500,000 job loss from a federal minimum wage hike, the fact checkers had no trouble at all figuring out the 500,000 loss was from a projected baseline.

What's the difference in this case?

Enterprise Florida, an arm of Florida's state government, contrasted California's projected job loss with Florida's gain of 1 million jobs. The changes in the states' respective job numbers can't come from the same cause. Only California is giving its minimum wage a big hike. So if Enterprise Florida was trying to directly compare the job figures, the comparison is apples-to-oranges. But PolitiFact Florida's analysis overlooked the context the ad supplied (bold emphasis added):
"Seven hundred thousand. That’s how many California jobs will be lost thanks to the politicians raising the minimum wage," the ad says, as the Miami Herald reports. "Ready to leave California? Go to Florida instead — no state income tax, and Gov. Scott has cut regulations. Now Florida is adding 1 million jobs, not losing them."
PolitiFact Florida's fact check doesn't lift a finger to examine the effects of relaxed state regulations.

Incredibly, PolitiFact Florida ignores the tense and timing of the job gains Scott lauds ("Now Florida is adding") and insists on comparing future projections of raw job growth for California and Florida, as though California's size advantage doesn't make that an apples-to-oranges comparison.
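A quick sketch with made-up workforce numbers (not actual California or Florida figures) shows why raw job counts flatter the bigger state even when the growth rates are identical:

```python
# Hypothetical workforce sizes, chosen only for illustration --
# not actual California or Florida employment figures.
big_state_workforce = 18_000_000
small_state_workforce = 9_000_000
growth_rate = 0.02  # identical 2% job growth in both states

big_state_jobs_added = big_state_workforce * growth_rate      # 360,000
small_state_jobs_added = small_state_workforce * growth_rate  # 180,000

# Same growth rate, but the raw count makes the bigger state look
# twice as strong -- the apples-to-oranges problem in a nutshell.
print(big_state_jobs_added, small_state_jobs_added)
```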

We think Enterprise Florida muddles its message with its claim Florida is adding 1 million jobs. People hearing the ad likely lack the context needed to understand the message, which we suspect is the dubious idea that Scott's cutting of regulations accounts for Florida adding 1 million jobs.

But PolitiFact Florida oversteps its role as a fact checker by assuming Scott was talking about California losing 700,000 jobs while Florida would gain 1 million at the same time and in the same sense. The ad does not explicitly compare the two figures. And it provides context cluing listeners in that the numbers are not directly comparable.

PolitiFact Florida's error, in detail


We'll illustrate PolitiFact's presumption with the classic illustration of ambiguity, courtesy of yourlogicalfallacyis.com.



Is it a chalice? Is it two people facing one another?

The problem with ambiguity is we don't know which it is. And the Enterprise Florida ad contains an ambiguity. Those hearing the ad do not know how they are supposed to compare California's loss of 700,000 jobs with Florida's gain of 1 million jobs. We pointed out contextual clues that might help listeners figure it out, but those clues do not entirely clean up the ambiguity.

PolitiFact's problem is its failure to acknowledge the ambiguity. PolitiFact has no doubt it is seeing two people facing one another, and evaluates the ad based on its own assumptions.

The ad should have received consideration as a chalice: California's 700,000 job loss represents a poor job climate caused by hiking the minimum wage while Florida's 1 million job gain represents an employment-friendly environment thanks to no state income tax and relaxed state regulations.

Conclusion

PolitiFact Florida succeeded in obscuring quite a bit of truth in Enterprise Florida's ad.

Update: Adding Insult to Injury

As we moved to finish our article pointing out PolitiFact Florida's unfair interpretation of Enterprise Florida's ad, PolitiFact California published its defense of California Governor Jerry Brown's reply to Enterprise Florida:
There’s a lot to unpack there. So we focused just on Brown’s statement about California adding twice as many jobs as Florida, and whether there was any context missing. It turns out California’s job picture is not really brighter than Florida’s, at least not during the period Brown described.
Why do we call it a "defense" instead of a "fact check"?

That's easy. The statement PolitiFact California examined was a classic bit of political deception: Say something true and imply that it means something false. For some politicians, typically liberals, PolitiFact will dutifully split the difference between the trivially true factoid and the false conclusion, ending up with a fairly respectable "Half True." Yes, PolitiFact California gave Brown a "Half True" rating for his claim.

Brown tried to make California's job picture look better than Florida's using a statistic that could not support his claim.

Was Brown's claim more true than Enterprise Florida's ad? We're not seeing it. But it's pretty easy to see that PolitiFact gave Brown more favorable treatment with its "Truth-O-Meter" ratings.


Note: This item was inadvertently published with a time backdated by hours--the scheduled date was wrong. We reverted the post to draft form, added this note, and scheduled it to publish at the originally planned time.

Tuesday, May 31, 2016

'Just the facts' from PolitiFact? Think again

We've pointed out before (including recently) that subjectivity affects the interpretation PolitiFact brings to the statements it rates on its trademarked "Truth-O-Meter." A May 31, 2016 fact check of Republican presidential candidate Donald Trump offers a great example of PolitiFact applying spin to a politician's statement.

We'll start with the version of the statement PolitiFact cited in its fact check, from the Washington Post:
Wind is very expensive. I mean, wind, without subsidy, wind doesn't work. You need massive subsidies for wind. There are places maybe for wind. But if you go to various places in California, wind is killing all of the eagles.

You know if you shoot an eagle, kill an eagle, they want to put you in jail for five years. Yet the windmills are killing hundreds and hundreds of eagles. One of the most beautiful, one of the most treasured birds — and they're killing them by the hundreds and nothing happens. So wind is, you know, it is a problem.
A fact checker needs to answer a basic question before proceeding with a fact check of the number of eagles killed by windmills: Which windmills is Trump talking about? Is it just windmills in "various places in California"? Windmills in the entire state of California? Or was California just an example of a place where windmills are killing eagles and the "hundreds and hundreds" mentioned later may come from locations across the United States?

We do not detect any strong argument for limiting Trump's claim about the number of eagles killed just to California. If Trump had said "Yet the windmills in California are killing hundreds and hundreds of eagles" then, sure, go for it. He's talking about California. But Trump was speaking in South Dakota about national energy policy. He has reason not to limit his comments about the wind energy industry's effects to California.

PolitiFact, surprising us not in the slightest, offers no good reason for concluding Trump was saying California windmills kill hundreds of eagles.

PolitiFact (bold emphasis added):
Setting aside Trump’s exaggeration about killing all of the eagles, we wondered, are wind turbines in California killing "hundreds and hundreds," as he said?
The fact check assumes without justification Trump was talking just about California. That assumption leads directly to the "Mostly False" rating PolitiFact pins on Trump's claim.

But what if Trump was talking about the whole of the United States? The words Trump used fit that interpretation at least as well as the interpretation PolitiFact used.

Adjusting the math


PolitiFact accepted an estimate of about 100 eagles killed per year by California wind turbines. PolitiFact accepted that wind turbines rarely kill Bald Eagles. Assuming Golden Eagles, the variety making up the majority killed in California, count as the type most vulnerable to wind turbine deaths, we can very roughly estimate eagle deaths by looking at the Golden Eagle's U.S. range.

Golden Eagles occur year-round in six of the top 15 states for generating wind energy. They occur in all 15 states for at least part of the year. California comes in at No. 2, after Texas. Golden Eagles range into the part of Texas known for generating wind energy.

It seems reasonable to estimate that wind turbines kill over 200 eagles per year in the United States.
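To make the back-of-the-envelope math explicit, here's a sketch with illustrative figures. The per-state kill rates are assumptions for the sake of argument, not measured data:

```python
# Illustrative back-of-the-envelope estimate -- the per-state figures
# below are assumptions for the sake of argument, not measured data.
california_kills_per_year = 100   # the estimate PolitiFact accepted

# Five other top-15 wind states have year-round Golden Eagle range.
# Assume each kills only a quarter of California's estimated toll.
other_year_round_states = 5
assumed_fraction_of_california = 0.25

national_estimate = california_kills_per_year + (
    other_year_round_states * california_kills_per_year * assumed_fraction_of_california
)

print(f"Rough national estimate: {national_estimate:.0f} eagles per year")  # 225
```

Even with the other states contributing only a fraction of California's toll, the national total clears 200.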

That interpretation of Trump's comment makes his claim true, though based on an estimate, and supportive of his underlying point that wind turbines kill many eagles otherwise protected by federal law.

PolitiFact's interpretation lacks clear justification in the context of Trump's remarks, but fits PolitiFact's narrative about Trump.

A politician's lack of clarity does not give fact checkers justification for interpreting statements as they wish. The neutral fact checker notes the lack of clarity for readers and then examines each plausible interpretation. The neutral fact checker applies the same standard of charitable interpretation to all, regardless of popular public narratives.

The point? PolitiFact's rating of Trump was not based simply on the facts. It was based on an unjustified (and unjustifiable?) interpretation of what Trump said.


Jeff Adds (June 2, 2016): 
I'll give Greenberg a hat tip for applying PolitiFact's on-again, off-again policy on hyperbole and ignoring Trump's "wind is killing all of the eagles" blurb, but my praise ends there.

Bryan is right that Greenberg offers no reason to limit the death count to those killed in California. Likewise, Greenberg offers no justification for limiting Trump's claim to eagle deaths per year. Greenberg's own expert acknowledged "probably about 2,000" eagles had been killed in California alone since the early '80s. It is generally understood that thousands and thousands are more than hundreds and hundreds.

Greenberg's article is also an example of PolitiFact disregarding an underlying point in favor of the raw numbers. We think raw facts are the only objective thing "fact-checkers" can check, and evaluating underlying points is best counted as analysis or editorial. But PolitiFact has applied its methods so inconsistently we've likened it to playing the game of Plinko.

When PolitiFact fails to employ a consistent method for rating claims, it can't be taken seriously as neutral or objective journalism.

Wednesday, September 3, 2014

Zebra Fact Check: The Importance of Interpretation

Bryan has written an article over at his fact checking site, Zebra Fact Check, that I think is worth highlighting here. Bryan discusses the importance and benefits of correctly interpreting a person's claims, and uses a recent PunditFact article as an example of botching this critical exercise:
PunditFact fails to apply one of the basic rules of interpretation, which is to interpret less clear passages by what more clear passages say. 
Bryan profiles PunditFact's article on Tom DeLay, who was discussing the indictment of Texas governor Rick Perry. In addition to pointing out PunditFact's shoddy journalism, Bryan spots several ways their apparent bias affected the fact check:
We think PunditFact’s faulty interpretation did much to color the results of the fact check. Though PolitiFact’s headline announced a check of DeLay’s claim of ties between McCrum and Democrats, it’s hard to reconcile PolitiFact’s confirmation of such ties with the “Mostly False” rating it gave DeLay. PunditFact affirms “weak ties” to Democrats. Weak ties are ties.
Even more damning evidence of PunditFact's liberal bent comes from its selective use of a CNN chyron placed next to the ubiquitous Truth-O-Meter graphic, allowing PolitiFact to reinforce the editorial slant of its fact check.

While I'm admittedly biased, Bryan's piece is well done and I recommend you read the whole thing.

Afters: 
Bryan didn't mention the main thing I noticed when I first read PunditFact's DeLay article, namely, the superfluous inclusion of a personal smear. PunditFact writer Linda Qiu offered up this paragraph in summation:
This record of bipartisanship is not unusual nor undesired in special prosecutors, said Wisenberg, who considers himself a conservative and opposes the prosecution against DeLay. He pointed out that special prosecutor Ken Starr, famous for investigating President Bill Clinton, also had ties to both parties, and DeLay did not oppose him.
We're not sure what probative value these two sentences have beyond suggesting DeLay is a hypocrite. Highlighting hypocrisy is a very persuasive argument, but it's also a fallacious one. Tom DeLay's support or opposition to Ken Starr bears no relevance to the factual accuracy of the current claim PunditFact is supposedly checking. It serves only to tarnish DeLay's character with readers. That's not fact checking, and that's not even editorializing. It's immature trolling.


Sunday, June 3, 2012

Cover your PolitifArse! PolitiFact goes shameless

PolitiFact has egg on its face worthy of the Great Elephant Bird of Madagascar.

On May 23, PolitiFact published an embarrassingly shallow and flawed fact check of two related claims from a viral Facebook post.  One of the claims held that Mitt Romney was wrong to say President Obama has presided over an acceleration of government spending unprecedented in recent history.  The second claim, quoted from Rex Nutting of MarketWatch, held that "Government spending under Obama, including his signature stimulus bill, is rising at a 1.4 percent annualized pace — slower than at any time in nearly 60 years."

PolitiFact issued a "Mostly True" rating to these claims, claiming its math confirmed select portions of Nutting's math. The Associated Press and Glenn Kessler of the Washington Post, among others, gave Nutting's calculations very unfavorable reviews.

PolitiFact responded with an article titled "Lots of heat and some light," quoting some of the criticisms without comment other than to insist that they did not justify any change in the original "Mostly True" rating.  PolitiFact claimed its rating was defensible since it only incorporated part of Nutting's article.
(O)ur item was not actually a fact-check of Nutting's entire column. Instead, we rated two elements of the Facebook post together -- one statement drawn from Nutting’s column, and the quote from Romney.
I noted at that point that we could look forward to the day when PolitiFact would have to reveal its confusion in future treatments of the claim.

We didn't have to wait too long.

On May 31, last Thursday, PolitiFact gave us an addendum to its original story.  It's an embarrassment.

PolitiFact gives some background for the criticisms it received over its rating.  There's plenty to criticize there, but let's focus on the central issue:  Was PolitiFact's "Mostly True" ruling defensible?  Does this defense succeed?

The biggest reason this CYA fails

PolitiFact keeps excusing its rating by claiming it focuses on the Facebook post by "Groobiecat", rather than Nutting's article, and only fact checks the one line from Nutting included in the Facebook graphic.

Here's the line again:
Government spending under Obama, including his signature stimulus bill, is rising at a 1.4 percent annualized pace — slower than at any time in nearly 60 years.
This claim figured prominently in the AP and Washington Post fact checks mentioned above.  The rating for the other half of the Facebook post (on Romney's claim) relies on this one.

PolitiFact tries to tell us, in essence, that Nutting was right on this point despite other flaws in his argument (such as the erroneous 1.4 percent figure embedded right in the middle), at least sufficiently to show that Romney was wrong.

A fact check of the Facebook graphic should have looked at Obama's spending from the time he took office until Romney spoke.  CBO projections should have nothing to do with it.  The fact check should attempt to pin down the term "recent history" without arbitrarily deciding its meaning. 

The two claims should have received their own fact checks without combining them into a confused and misleading whole.  In any case, PolitiFact flubbed the fact check as well as the follow up.

Spanners in the works

As noted above, PolitiFact simply ignores most of the criticisms Nutting received.  Let's follow along with the excuses.

PolitiFact:
Using and slightly tweaking Nutting’s methodology, we recalculated spending increases under each president back to Dwight Eisenhower and produced tables ranking the presidents from highest spenders to lowest spenders. By contrast, both the Fact Checker and the AP zeroed in on one narrower (and admittedly crucial) data point -- how to divide the responsibility between George W. Bush and Obama for the spending that occurred in fiscal year 2009, when spending rose fastest.
Stay on the lookout for specifics about the "tweaking."

Graphic image from Groobiecat.blogspot.com

I'm still wondering why PolitiFact ignored the poor foundation for the 1.4 percent average annual increase figure the graphic quotes from Nutting.  But no matter.  Even if we let PolitiFact ignore it in favor of  "slower than at any time in nearly 60 years" the explanation for their rating is doomed.

PolitiFact:
(C)ombining the fiscal 2009 costs for programs that are either clearly or arguably Obama’s -- the stimulus, the CHIP expansion, the incremental increase in appropriations over Bush’s level and TARP -- produces a shift from Bush to Obama of between $307 billion and $456 billion, based on the most reasonable estimates we’ve seen critics offer.
The fiscal year 2009 spending figure from the Office of Management and Budget was $3,517,677,000,000.  That means $307 billion (there's a tweak!) is 8.7 percent of the 2009 total spending.  And it means that, before Obama even starts getting blamed for any later spending, he already increased FY 2009 spending over the 2008 baseline by more than President Bush did.  I still can't tell where PolitiFact puts that spending on Obama's account.
(B)y our calculations, it would only raise Obama’s average annual spending increase from 1.4 percent to somewhere between 3.4 percent and 4.9 percent. That would place Obama either second from the bottom or third from the bottom out of the 10 presidents we rated, rather than last.
PolitiFact appears to say its calculations suggest that accepting the critics' points makes little difference.  We'll see that isn't the case while also discovering a key criticism of the "annual spending increase" metric.

Reviewing PolitiFact's calculations from earlier in its original story, we see that PolitiFact averages Obama's spending using fiscal years 2010 through 2013.  However, in this update PolitiFact apparently does not consider another key criticism of Nutting's method:  He cherry-picked future projections.  Subtract $307 billion from the FY 2009 spending and the increase in FY 2010 ends up at 7.98 percent.  And where then do we credit the $307 billion?

An honest accounting requires finding a proper representation of Obama's share of FY 2009 spending.  Nutting provides no such accounting:
If we attribute that $140 billion in stimulus to Obama and not to Bush, we find that spending under Obama grew by about $200 billion over four years, amounting to a 1.4% annualized increase.
Neither does PolitiFact:
(C)ombining the fiscal 2009 costs for programs that are either clearly or arguably Obama’s -- the stimulus, the CHIP expansion, the incremental increase in appropriations over Bush’s level and TARP -- produces a shift from Bush to Obama of between $307 billion and $456 billion, based on the most reasonable estimates we’ve seen critics offer.

That’s quite a bit larger than Nutting’s $140 billion, but by our calculations, it would only raise Obama’s average annual spending increase from 1.4 percent to somewhere between 3.4 percent and 4.9 percent.
But where does the spending go once it is shifted? Obama's 2010?  It makes a difference.
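Here's a rough sketch of why the destination of the shifted money matters. The FY 2009 total is the OMB figure cited above; the FY 2010 figure is a placeholder to be replaced with the actual OMB number, since the point is only that the computed growth rate swings by several points depending on the attribution:

```python
# FY 2009 total outlays, per the OMB figure quoted above (in billions).
fy2009_total = 3517.677
shift_to_obama = 307.0   # the low end of the shift PolitiFact concedes

# Placeholder FY 2010 outlays (billions) -- substitute the actual OMB
# figure; the point here is only how attribution changes the growth rate.
fy2010_total = 3456.0

def growth(later, earlier):
    return (later - earlier) / earlier * 100

# Treat all of FY 2009 as the baseline Obama is measured against...
print(f"{growth(fy2010_total, fy2009_total):.1f}%")                   # small or negative

# ...versus removing the $307 billion from the baseline first.
print(f"{growth(fy2010_total, fy2009_total - shift_to_obama):.1f}%")  # several points higher
```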

"Lies, damned lies, and statistics":  PolitiFact, Nutting and the improper metric

The graphic embedded to the right helps illustrate the distortion one can create using the average increase in spending as a key statistic.  Nutting probably sought this type of distortion deliberately, and it's shameful for PolitiFact to overlook it.

Using an annual average for spending allows one to make much higher spending not look so bad.  Have a look at the graphic to the right just to see what it's about, then come back and pick up the reading.  We'll wait.

Boost spending 80 percent in your first year (A) and keep it steady thereafter and you'll average 20 percent over four years. Alternatively, boost spending 80 percent just in your final year (B) and you'll also average 20 percent per year. But in the first case you'll have spent far more money--$2,400 more over the course of four years.

It's very easy to obscure the amount of money spent by using a four-year average.  In case A spending increased by a total of $3,200 over the baseline total.  That's almost $800 more than the total derived from simply increasing spending 20 percent each year (C).

Note that in the chart each scenario features the same initial baseline (green bar), the same yearly average increase (red star), and widely differing total spending over the baseline (blue triangle).
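For anyone reading without the enlarged chart handy, here's a minimal sketch that reproduces the three scenarios. It assumes a $1,000 baseline, which is consistent with the dollar figures quoted above:

```python
# Assumed $1,000 baseline, consistent with the dollar figures in the text.
baseline = 1000

def run(increases):
    """Apply yearly percentage increases; return (average increase, extra spending over baseline)."""
    spending, level = [], baseline
    for pct in increases:
        level = level * (1 + pct / 100)
        spending.append(level)
    avg_increase = sum(increases) / len(increases)
    extra_over_baseline = sum(s - baseline for s in spending)
    return avg_increase, extra_over_baseline

a = run([80, 0, 0, 0])     # big hike in year one, then flat
b = run([0, 0, 0, 80])     # flat, then big hike in the final year
c = run([20, 20, 20, 20])  # steady 20 percent each year

print(a)  # (20.0, 3200.0)  -- 20% average, $3,200 extra spending
print(b)  # (20.0,  800.0)  -- same 20% average, only $800 extra
print(c)  # (20.0, ~2441.6) -- same average, almost $800 less than case A
```

Same red star, wildly different blue triangles, in the chart's terms.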

Some of Nutting's conservative critics used combined spending over four-year periods to help refute his point.  Given the potential distortion from using the average annual increase, it's very easy to understand why.  Comparing four-year spending totals smooths out the misleading effects highlighted in the graphic.

We have no evidence that PolitiFact noted any of this potential for distorting the picture.  The average percentage increase should work just fine, and it's simply coincidence that identical total increases in spending look considerably lower when the largest increase happens at the beginning (example A) than when it happens at the end (example B).

Shenanigan review:
  • Yearly average change metric masks early increases in spending
  • No mention of the effects of TARP negative spending
  • Improperly considers Obama's spending using future projections
  • Future projections were cherry-picked
The shift of FY 2009 spending from TARP, the stimulus and other initiatives may also belong on the above list, depending on where PolitiFact put the spending.

I have yet to finish my own evaluation of the spending comparisons, but what I have completed so far makes it appear that Romney may well be right about Obama accelerating spending faster than any president in recent history (at least back through Reagan).  Looking just at percentages on a year-by-year basis instead of averaging them shows Obama's first two years allow him to challenge Reagan or George W. Bush as the biggest accelerator of federal spending in recent history.  And that's using PolitiFact's $307 billion figure instead of the higher $456 billion one.

So much for PolitiFact helping us find the truth in politics.

Note:

I have a spreadsheet on which I am performing calculations to help clarify the issues surrounding federal spending and the Nutting/PolitiFact interpretations.  I hope to produce an explanatory graphic or two in the near future based on the eventual numbers.  Don't expect all the embedded comments on the sheet to make sense until I finalize it (taking down the "work in progress" portion of the title).



Jeff adds:

It's not often PolitiFact admits to the subjective nature of its system, but here we have a clear case of editorial judgment influencing the outcome of the "fact" check:
Our extensive consultations with budget analysts since our item was published convinces us that there’s no single "correct" way to divvy up fiscal 2009 spending, only a variety of plausible calculations.
This tells us that PolitiFact arbitrarily chose the "plausible calculation" that was very favorable to Obama in its original version of the story. Using other equally plausible methods would have pushed the rating down. By presenting this interpretation of the calculations as objective fact, PolitiFact misleads its readers into believing the debate is settled.

This update also contradicts PolitiFact's reasons for the "Mostly True" rating:
So the second portion of the Facebook claim -- that Obama’s spending has risen "slower than at any time in nearly 60 years" -- strikes us as Half True. Meanwhile, we would’ve given a True rating to the Facebook claim that Romney is wrong to say that spending under Obama has "accelerated at a pace without precedent in recent history." Even using the higher of the alternative measurements, at least seven presidents had higher average annual increases in spending. That balances out to our final rating of Mostly True.
In the update, they're telling readers a portion of the Facebook post is Half-True, while the other portion is True, which balances out to the final Mostly True rating. But that's not what they said in the first rating (bold emphasis added):
The only significant shortcoming of the graphic is that it fails to note that some of the restraint in spending was fueled by demands from congressional Republicans. On balance, we rate the claim Mostly True.
In the first rating, it's knocked down because it doesn't give enough credit to the GOP for restraining Obama. In the updated version of the "facts", it's knocked down because of a "balance" between two portions that are Half-True and completely True. There's no mention of how the GOP's efforts affected the rating in the update.

Their attempts to distance themselves from Nutting's widely debunked article are also comically dishonest:
The Facebook post does rely partly on Nutting’s work, and our item addresses that, but we did not simply give our seal of approval to everything Nutting wrote.
That's what PolitiFact is saying now. But in the original article PolitiFact was much more approving:
The math simultaneously backs up Nutting’s calculations and demolishes Romney’s contention.
 And finally, we still have no explanation for the grossly misleading headline graphic, first pointed out by Andrew Stiles:

Image clipped from PolitiFact.com
Neither Nutting nor the original Groobiecat post claims Obama had the "lowest spending record". Both focused on the growth rate of spending. This spending record claim is PolitiFact's invention, one the fact check does not address. But it sure looks nice right next to the "Mostly True" graphic, doesn't it? Sorting out the truth, indeed.

The bottom line is PolitiFact's CYA is hopelessly flawed, and offensive to anyone who is sincerely concerned with the truth. A fact checker's job is to illuminate the facts. PolitiFact's efforts here only obfuscate them.


Bryan adds:

Great points by Jeff across the board.  The original fact check was indefensible, and the other fact checks of Nutting by the mainstream media probably did not go far enough in calling Nutting on the carpet.  PolitiFact's attempts to glamorize this pig are deeply shameful.


Update:  Added background color to embedded chart to improve visibility with enlarged view.



Correction 6/4/2012:  Corrected one instance in which PolitiFact's $307 billion figure was incorrectly given as $317 billion.  Also changed the wording in a couple of spots to eliminate redundancy and improve clarity, respectively.

Thursday, May 31, 2012

Big Journalism: "PolitiFact Bases Entire Fact Check on Author's Intuition"

John Sexton of Big Journalism (and Verum Serum fame) joins Matthew Hoy in slamming PolitiFact's rating of a recent Crossroads GPS ad.

Sexton notes that the ad says one thing and PolitiFact claims the ad says something else:
The ad is clearly about the President's promise that you could keep your insurance, not some insurance. Instead of staying on that point, PolitiFact's introduces a novel new interpretation of the ad's meaning. Suddenly, it's not about the President's promise at all, rather " Its point seems to be simply that a lot of people will lose coverage." Really? Where does it say that?
Sexton draws attention to a recurrent problem at PolitiFact.  Statements that fail to accord with the views inside the left-skewed journalistic bubble often receive an uncharitable interpretation that the original speaker would scarcely recognize.  PolitiFact ends up appearing either unable or unwilling to understand the readily apparent meaning.

Sexton makes other good points as well, so visit Big Journalism and read it through from start to finish.  Sexton gets Bill Adair on the record defending PolitiFact's journalistic malpractice, and that's always worth seeing, even if it draws from Adair's two favorite cliches: people won't always agree with PolitiFact's ratings, and PolitiFact gets criticized by conservatives and liberals (PolitiFact, therefore, is fair).

Thursday, November 3, 2011

Matthew Hoy: "You guys screwed up"

Ordinarily we highlight Matthew Hoy's criticisms of PolitiFact via the posts at his blog, Hoystory.  But this time we catch Hoy at his pithy best while blasting PolitiFact over at Facebook for its "Pants on Fire" rating of Herman Cain's supposed claim that China is trying to develop nuclear weapons.  PolitiFact took Cain to mean China was developing nuclear weapons for the first time, you see.

Hoy:
You guys screwed up. Congratulations. Read the whole context (which you provide) and it's ambiguous -- he very well may be referring to nuclear-powered AIRCRAFT CARRIERS -- which they don't have yet. Also, during Vietnam, Cain was working ballistics for the Navy, studying the range and capabilities of China's missiles. He knew they had nukes. It was inartfully said. Not a mistake. According to your own rules, you don't fact check things like this: "Is the statement significant? We avoid minor "gotchas"’ on claims that obviously represent a slip of the tongue."
That about says it all, but I'll just add one helpful informational link.

Given the ambiguity of Cain's statement, it speaks volumes about PolitiFact's ideological predisposition that no attempt was made to interpret Cain charitably.

Saturday, October 15, 2011

Sublime Bloviations: "PolitiFlub: The employee contribution to Social Security"

It's not often PolitiFact alters its standards so quickly on the exact same topic, but it happens.

We spotted it right away, and PFB editor Bryan White was on the case with his latest update on the recent flurry of fact checks PolitiFact has been writing about tax-related campaign flyers.

This one is pretty obvious. Let's see if our readers can spot it.

Here's PolitiFact's standard for determining tax contributions for Obama's hypothetical $50,000-a-year worker who pays a higher tax rate than someone making $50 million (bold added):
We asked two researchers at the [Brookings Institute] ... for their advice on how to factor in payroll taxes. They estimated that combining the workers’ share of the payroll tax with the employer’s share -- the usual practice among economists -- would mean an extra 15 percentage points for our hypothetical middle-class worker, and less than 2 additional percentage points for the high-income taxpayer.  Adding these to the percentages we previously found for the income tax alone produces a new, "final" rate of 22 to 23 percent for the construction worker...
Obama's final rating: Half True.

Here's their standard for determining the facts of Herman Cain's statement that "every worker pays 15.3 percent payroll tax":
What we found is that Cain is counting both worker and employer contributions to payroll taxes to arrive at the 15.3 percent number.
Uh-oh.
Cain said, "Every worker pays 15.3 percent payroll tax." That's not accurate. Workers only pay half that...You can reach that number only by including the half of the tax that employers pay.
If this sound went through your head just now, welcome to our world.
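To see the two standards side by side, here's a minimal sketch using the statutory 7.65 percent worker and 7.65 percent employer shares (ignoring, for simplicity, the temporary 2011 payroll tax holiday):

```python
# Statutory payroll tax split: 7.65% worker + 7.65% employer = 15.3% combined.
# (The temporary 2011 payroll tax holiday is ignored here for simplicity.)
worker_share = 7.65
employer_share = 7.65

wages = 50_000  # Obama's hypothetical $50,000-a-year worker

# Standard applied to Obama: count both shares ("the usual practice among economists").
obama_standard = worker_share + employer_share  # 15.3 points added to the worker's rate

# Standard applied to Cain: count only the worker's share.
cain_standard = worker_share                    # 7.65 points -- "workers only pay half that"

print(f"Combined standard: {obama_standard}% of wages = ${wages * obama_standard / 100:,.0f}")
print(f"Worker-only standard: {cain_standard}% of wages = ${wages * cain_standard / 100:,.0f}")
# Same worker, same law -- the rate doubles or halves depending on which
# standard the fact checker happens to apply.
```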

Instead of boring you with the rating they gave Cain, we suggest you head over to Bryan's article and read the whole thing.

Once there you will find a deeper analysis as well as a handy chart Bryan has created that shows how PolitiFact has used one standard or the other in various tax fact checks.

Extra Credit: Guess which party benefits from the alternating definitions of what constitutes a tax contribution.

And if you haven't done so, check out our recent reviews on this tax issue here and here.