
Wednesday, September 6, 2017

PolitiFact & Roy Moore: A smorgasbord of problems

When PolitiFact unpublished its Sept. 1, 2017 fact check of a claim attacking Alabama Republican Roy Moore, that was our red flag to look into the story. Taking down a published story runs against the current of journalistic ethics, so we decided to keep an eye on things to see what else might come of it.

We were rewarded with a smorgasbord of questionable actions by PolitiFact.

Publication and Unpublication

PolitiFact's Sept. 1, 2017 fact check found it "Mostly False" that Republican Roy Moore had taken $1 million from a charity he ran to supplement his pay as Chief Justice of the Supreme Court of Alabama.

We have yet to read the original fact check, but we know the summary thanks to PolitiFact's Twitter confession issued later on Sept. 1, 2017:


We tweeted criticism of PolitiFact for not making an archived version of the fact check immediately available and for not providing an explanation for those who ended up looking for the story only to find a 404 page-not-found error. We think readers should not have to rely on Twitter to know what is going on with the PolitiFact website.

John Kruzel takes tens of thousands of dollars from PolitiFact

(a brief lesson in misleading communications)

The way editors word a story's title, or even a subheading like the one above, makes a difference.

What business does John Kruzel have "taking" tens of thousands of dollars from PolitiFact? The answer is easy: Kruzel is an employee of PolitiFact, and PolitiFact pays Kruzel for his work. But we can make that perfectly ordinary and non-controversial relationship look suspicious with a subheading like the one above.

We have a parallel in the fact check of Roy Moore. Moore worked for the charity he ran and was paid for it. Note the title PolitiFact chose for its fact check:

Did Alabama Senate candidate Roy Moore take $1 million from a charity he ran?

 "Mostly True." Hmmm.

Kruzel wrote the fact check we're discussing. He did not necessarily compose the title.

We think it's a bad idea for fact-checkers to engage in the same misleading modes of communication they ought to criticize and hold to account.


Semi-transparent Transparency

For an organization that advocates transparency, PolitiFact sure relishes its semi-transparency. On Sept. 5, 2017, PolitiFact published an explanation of its correction but rationed specifics (bold emphasis added in the second instance):
Correction: When we originally reported this fact-check on Sept. 1, we were unable to determine how the Senate Leadership Fund arrived at its figure of "over $1 million," and the group didn’t respond to our query. The evidence seemed to show a total of under $1 million for salary and other benefits. After publication, a spokesman for the group provided additional evidence showing Moore received compensation as a consultant and through an amended filing, bringing the total to more than $1 million. We have corrected our report, and we have changed the rating from Mostly False to Mostly True.
PolitiFact included a table in its fact check showing relevant information gleaned from tax documents. Two of the entries were marked as for consulting and as an amended filing, which we highlighted for our readers:


Combining those two amounts gives us $177,500. Subtracting that figure from the total PolitiFact used in its corrected fact check, we end up with $853,375.

The Senate Leadership Fund PAC (Republican) was off by a measly 14.7 percent and got a "Mostly False" in PolitiFact's original fact check? PolitiFact often barely blinks over much larger errors than that.

Take a claim by Sen. Brad Schneider (D-Ill.) from April 2017, for example. The fact check was published under the "PolitiFact Illinois" banner, but PolitiFact veterans Louis Jacobson and Angie Drobnic Holan did the writing and editing, respectively.

Schneider said that the solar industry accounts for 3 times the jobs of the entire coal mining industry. PolitiFact said the best data gave solar a 2.3-to-1 job advantage over coal, terming 2.3 "just short of three-to-one" and rating Schneider's claim "Mostly True."

Schneider's claim was still off by over 7 percent even if we credit anything from 2.5 up as rounding to 3.
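For readers who want to check the math, here's a rough sketch of both error calculations as we compute them, measured against the claimed figure in each case (the figures come from the fact checks; the arithmetic and the choice of baseline are ours):

```python
# Sketch of the two percentage-error calculations discussed above.
# Figures come from the fact checks; the arithmetic and baselines are ours.

# Senate Leadership Fund: claimed "over $1 million"; the evidence PolitiFact
# originally credited totaled $853,375 (the corrected total minus the
# consulting and amended-filing entries).
claimed_total = 1_000_000
supported_total = 853_375
slf_error = (claimed_total - supported_total) / claimed_total
print(f"Senate Leadership Fund error: {slf_error:.1%}")  # ~14.7%

# Schneider: claimed solar outnumbers coal jobs 3 to 1; PolitiFact's best data
# put the ratio at 2.3 to 1. Even crediting anything from 2.5 up as rounding
# to 3, the claim remains off by well over 7 percent.
best_data_ratio = 2.3
charitable_claim = 2.5
schneider_error = (charitable_claim - best_data_ratio) / charitable_claim
print(f"Schneider error (charitable reading): {schneider_error:.1%}")  # ~8.0%
```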

How could an error of under 15 percent have dropped the rating for the Senate Leadership Fund's claim all the way down to "Mostly False"?

We examine that issue next.

Compound Claim, Or Not?

PolitiFact recognizes in its statement of principles that sometimes claims have more than one part:
We sometimes rate compound statements that contain two or more factual assertions. In these cases, we rate the overall accuracy after looking at the individual pieces.
We note that if PolitiFact does not weight the individual pieces equally, we have yet another area where subjective judgment might color "Truth-O-Meter" ratings.

Perhaps this case qualifies as one of those subjectively skewed cases.

The ad attacking Moore looks like a clear compound claim. As PolitiFact puts it (bold emphasis added), "In addition to his compensation as a judge, 'Roy Moore and his wife (paid themselves) over $1 million from a charity they ran.'"

PolitiFact found the first part of the claim flatly false (bold emphasis added):
He began to draw a salary from the foundation in 2005, two years after his dismissal from the bench, according to the foundation’s IRS filings. So the suggestion he drew the two salaries concurrently is wrong.
Without the damning double dipping, the attack ad is a classic deluxe nothingburger with nothingfries and a super-sized nothingsoda.

Moore was ousted as Chief Justice of the Alabama Supreme Court, where he could have expected a raise up to $196,183 per year by 2008. After that ouster, Moore was paid a little over $1 million over a nine-year period, counting his wife's salary for one year, or well under $150,000 per year on average. On what planet is that not a pay cut? With the facts exposed, the attack ad loses all coherence. Where is the "more" that serves as the theme of the ad?
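A quick back-of-the-envelope check of that pay comparison, using our reconstructed nine-year total (the $853,375 plus the $177,500 in consulting and amended-filing entries from the corrected fact check), runs as follows:

```python
# Back-of-the-envelope check of the pay-cut arithmetic. The nine-year total
# is our reconstruction from the corrected fact check's figures.
nine_year_total = 853_375 + 177_500   # a little over $1 million, wife's salary included
years = 9
chief_justice_salary = 196_183        # the raise Moore could have expected by 2008

average_charity_pay = nine_year_total / years
print(f"Average pay from the charity: ${average_charity_pay:,.0f} per year")  # ~$114,500
print(f"Chief justice salary foregone: ${chief_justice_salary:,} per year")
```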

We think the fact checkers lost track of the point of the ad somewhere along the line. If the ad was just about what Moore was paid for running his charity while not doing a different job at the same time, it's more neutral biography than attack ad. The main point of the attack ad was Moore supplementing his generous salary with money from running a charitable (not-for-profit) organization. Without that main point, virtually nothing remains.

PolitiFact covers itself with shame by failing to see the obvious. The original "Mostly False" rating fit the ad pretty well regardless of whether the ad correctly reported the amount of money Moore was paid for working at a not-for-profit organization.

Assuming PolitiFact did not confuse itself?

If PolitiFact denies making a mistake by losing track of the point of the ad, we have another case that helps amplify the point we made with our post on Sept. 1, 2017. In that post, we noted that PolitiFact graded one of Trump's claims as "False" based on not giving Trump credit for his underlying point.

PolitiFact does not address the "underlying point" of claims in a consistent manner.

In our current example, the attack ad on Roy Moore gets PolitiFact's seal of "Mostly" approval only by ignoring its underlying point. The ad actually misled in two ways, first by saying Moore was supplementing his income as judge with income from his charity when the two sources of income were not concurrent, and second by reporting the charity income while downplaying the period of time over which that income was spread. Despite the dual deceit, PolitiFact graded the claim "Mostly True."

"The decision about a Truth-O-Meter rating is entirely subjective"

Cases like this support our argument that PolitiFact tends to base its ratings on subjective judgments. This case also highlights a systemic failure of transparency at PolitiFact.

We will update this item if PolitiFact surprises us by running a second correction.



Afters

On top of the problems we described above, PolitiFact neglected to tag its revised/republished story with the "Corrections and Updates" tag it says it uses for all corrected or updated stories.

PolitiFact has a poor record of following this part of its corrections policy.

We note, however, that after we pointed out the problem via Twitter and email, PolitiFact fixed it without a long delay.

Thursday, January 14, 2016

PolitiFact's Policy Plinko: What Rules Get Applied Today?

Yesterday Bryan wrote a piece noting that PolitiFact ignores its own policies in favor of subjective whim, and it's easy to find evidence supporting him. PolitiFact's application of standards resembles the game of Plinko, wherein they start off at one point but can bounce around before they reach a final ruling. The notable difference between the two is that Plinko is much less predictable.

In 2012 former editor Bill Adair announced a new policy at PolitiFact that they would begin taking into account a person's underlying argument when determining a rating for a numbers claim. That new policy turned out to be bad news for Mitt Romney:





In Romney's case, PolitiFact says we need to look beyond the numbers and observe the historical context to find the truth:
The numbers are accurate but quite misleading....It's a historical pattern...not an effect of Obama's policies.

There is a small amount of truth to the claim, but it ignores critical facts that would give a different impression. We rate it Mostly False.
Romney's numbers are accurate, but, gee golly, PolitiFact needs to investigate in order to find out the meaning behind them so no one gets the wrong impression.

Thankfully for Democrats, just a few months later PolitiFact was back to dismissing the underlying argument and was simply performing a check of the numbers:




Some underlying arguments are more equal than others. Contrary to the Romney rating, PolitiFact chose to ignore the implication of the claim:
Our ruling

Clinton’s figures check out, and they also mirror the broader results we came up with two years ago. Partisans are free to interpret these findings as they wish, but on the numbers, Clinton’s right. We rate his claim True.
PolitiFact suddenly has no interest in whether or not the statistics are misleading. They're just here to make sure the numbers check out and all you partisans can decide what they mean.

Sometimes...



Look, kids! Wheel-O-Standards has come all the way back around! And just in time to hit the conservative group the Alliance Defending Freedom:
The organization does not provide mammograms at any of its health centers...

So Mattox is correct, by Planned Parenthood’s own acknowledgement, that the organization does not provide mammograms...

Federal data and Planned Parenthood’s own documents back up the claim from the Alliance Defending Freedom.

That puts the claim in the realm that won’t make either side happy: partially accurate but misleading without additional details. We rate the claim Half True.
We're back to numbers being accurate but misleading! In this rating PolitiFact finds the number of Planned Parenthood facilities licensed to perform mammograms (zero) is accurate, but after editorially judging that the statistic gives the wrong impression, PolitiFact issues a rating based on the underlying argument. Because "fact checker" or something.

Why muck up such a great narrative just for the sake of applying consistent standards?

Thursday, May 7, 2015

PolitiFact Wisconsin: It's false until somebody fact-checks it

We've long registered our objections to PolitiFact's fallacious "burden of proof" criterion for political claims.

PolitiFact Wisconsin gives us a fantastic example of that flawed method with its fact check of Republican presidential candidate Ben Carson. Carson highlighted problems with advancement in the black community by saying there are more blacks involved with the criminal justice system than with higher education.

PolitiFact Wisconsin decided to evaluate the claim and rated it "False." But of course there's a problem with the rating. PolitiFact Wisconsin found it had poor data with which to work:
(T)here is only one solid figure -- 75,000 black males ages 18 to 24 in prison. We’re not aware of any recent counts of the black males in that age group who were arrested, in jail, or on probation or parole at a particular time.
PolitiFact Wisconsin emphasized that relatively low solid figure in its summary paragraph:
Carson did not provide evidence that backs his claim. The latest federal figures we found show 75,000 black males in that age group who were in prison in 2013 and in the range of 690,000 to 779,000 who were in college. We are not aware of any recent figures for the number of black males ages 18 to 24 arrested, in jail, or on probation or parole at any particular time.

If figures do surface, we’ll re-evaluate this item, but we rate Carson’s claim False.
Perhaps the rating makes sense if blacks ages 18 to 24 in college outnumber those involved with the criminal justice system 779,000 to 75,000. But that comparison is rigged against Carson: PolitiFact Wisconsin acknowledges it has no figures for the young blacks arrested, in jail, on probation or on parole, all of whom count toward Carson's claim.

And that's the Achilles' heel of PolitiFact Wisconsin's fact check. It collected enough information to enable rough estimates of those categories.

Was Carson's claim plausible?


We start our estimate by noting the percentage of blacks ages 18 to 24 in the federal prison population was fairly high: PolitiFact put it at about 14 percent of the total black prison population.

We aim to create a conservative estimate, erring on the side of caution, so we'll assume that just 10 percent of blacks in the other categories fall in the 18 to 24 age range.

Arrests


PolitiFact Wisconsin noted that one individual might be arrested more than once. Still, the fact checkers gave the number 3 million for arrests of adult blacks in 2012. Ten percent of 3 million gives us a figure of 300,000, but to help account for multiple arrests we'll cut that number in half and use 150,000.


In Jail


PolitiFact Wisconsin gave a figure of 261,500 for jail inmates in mid-2013. Ten percent of that figure gives us about 26,000.

On Probation or Parole


PolitiFact Wisconsin said 4.75 million people of all races were on probation, parole or other supervision in 2013. The Bureau of Justice Statistics, PolitiFact's source, says blacks account for 30 percent of that number. That gives us 1.4 million, and 10 percent of 1.4 million comes to 140,000.

Totals

Combined with the 75,000 prison population PolitiFact Wisconsin used, our conservative estimate comes to 391,000--about half of PolitiFact Wisconsin's peak figure for black college enrollment. Based on our estimate, we think it's very unlikely Carson's claim exaggerates the truth by more than 100 percent, probably exaggerates it by substantially less than 100 percent and perhaps doesn't exaggerate at all.
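Here is the whole estimate in one place, a sketch of our arithmetic using the figures cited above (the 10 percent age share and the halving for repeat arrests are our own deliberately conservative assumptions):

```python
# Conservative estimate of blacks ages 18-24 involved with the criminal
# justice system, built from figures in PolitiFact Wisconsin's fact check.
AGE_SHARE = 0.10  # our conservative assumption for the 18-24 share of each category

prison = 75_000                                    # PolitiFact's one solid figure
arrests = 3_000_000 * AGE_SHARE / 2                # halved for repeat arrests -> 150,000
jail = 261_500 * AGE_SHARE                         # -> ~26,000
probation_parole = 4_750_000 * 0.30 * AGE_SHARE    # 30 percent black share -> ~142,500

total = prison + arrests + jail + probation_parole
print(f"Conservative estimate: {total:,.0f}")   # ~393,650; with our rounding, ~391,000
print(f"Peak college figure:   {779_000:,}")    # roughly double the estimate
```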

For comparison, PolitiFact Georgia recently gave a "Mostly False" rating to a claim exaggerated by over 200 percent.

Do these numbers potentially support Carson's underlying point about the upward mobility of young black males? For some reason, PolitiFact Wisconsin did not deem that point worth considering.

Shall fact checkers rate claims "False" if they are difficult to settle? We think that's the wrong method. We also think fact checkers err by selectively ignoring politicians' underlying arguments. Either consider the underlying argument every time or never consider the underlying argument. Fairness demands it.

Tuesday, February 3, 2015

PunditFact amends pundit's claim about amendments

We've pointed out before how PolitiFact will fault statements made on Twitter for lacking context despite the 140-character limit Twitter imposes.

This week PunditFact played that game with a tweet from conservative pundit Phil Kerpen about Senate amendment votes.
PunditFact found that the new Republican-controlled Senate has already voted on more amendments in 2015 than Reid allowed in the Democrat-controlled Senate for all of 2014: "On the numbers, that is right."

But PunditFact went on to find fault with Kerpen for leaving out needed context:
On the numbers, that is right. But experts cautioned us that the claim falls more in the interesting factoid category than a sign of a different or more cooperative Senate leadership.

The statement is accurate but needs clarification and additional information. That meets our definition of Mostly True.
We'll spell out the obvious problem with PunditFact's rating: Kerpen's tweet doesn't say anything about different or more cooperative Senate leadership. If Kerpen's not making that argument (we found no evidence he was), then it makes no sense at all to charge him with leaving out information. In effect, PunditFact is amending Kerpen's tweet, giving it context that doesn't exist in the original. Kerpen's statement doesn't need clarification or additional information to qualify as simply "True."

PunditFact's rating offers us a perfect opportunity to point out that if Kerpen's statement isn't simply "True" then there's probably no political claim anywhere that's immune to the type of objection PunditFact used to justify its "Mostly True" rating of Kerpen. A politician could claim the sky is blue and the fact checker could reply that yes, the sky is blue but no thanks to the policies of that politician's party! There are endless ways to rationalize withholding a "True" rating.

This rating convinces us that it would be productive to look at the breakdown between "True" and "Mostly True" ratings to look for a partisan bias. Since there's always context missing from political claims, drawing that line between "True" and "Mostly True" may prove no more objective than the line between "False" and "Pants on Fire."

Saturday, August 9, 2014

The minimum wage hike and PolitiFact's disappearing underlying argument trick

We routinely note that PolitiFact exercises discretion in deciding whether to judge political statements on their literal truth or on the underlying argument.  That editorial choice often makes a critical difference in the final "Truth-O-Meter" rating.

This editorial discretion provides one of the broad avenues through which the ideological biases of PolitiFact staffers may find their way into PolitiFact's ratings.

Let's examine yet another case in point.

On Aug. 8, 2014, PolitiFact rated "Mostly True" a claim from proponents of raising the federal minimum wage that in 1978 a summer's worth of minimum wage work could pay a year's tuition at the public university of one's choice.

PolitiFact summarizes the fact check (bold emphasis added):
The meme said that "in 1978, a student who worked a minimum-wage summer job could afford to pay a year's full tuition at the 4-year public university of their choice."

If you use the national average the figure is correct. The only problem is the part about a university "of their choice." The data is correct for in-state tuition -- not for any university in the country, where out-of-state rates may well have kicked up the tuition amount beyond a summer’s minimum-wage haul.

On balance, we rate the claim Mostly True.
Obviously in this case PolitiFact ruled on the literal truth of the claim. As noted in the summary, the claim wasn't literally true, since it relied on the unmentioned caveat that the prospective student would take advantage of favorable in-state tuition rates. Students could pay the tuition at the in-state university of their choice, not at any university of their choice. So the claim is literally false but "Mostly True" on PolitiFact's "Truth-O-Meter" before we even look at the underlying argument.

And what is the underlying argument?

We need to raise the minimum wage because its purchasing power has fallen off so much since 1978.


What's wrong with the underlying argument? Just the fact that the argument cherry-picks a notably inflated point of comparison.
 
From Businessweek:


Who thinks the 1978 baseline year is a coincidence?

Despite this obvious cherry-picking, PolitiFact declares as objective fact that the only problem with the claim stems from the out-of-state tuition exception.

The literal claim is false owing to an important and unmentioned caveat.  The underlying argument uses cherry-picking to exaggerate the decreased buying power of minimum wage work.

"Mostly True."

Coincidentally, raising the minimum wage is favored more heavily on the left than on the right.

This is supposed to pass as nonpartisan?

Sunday, August 3, 2014

PolitiMath at PolitiFact Virginia

Guided selection?
Earlier today, we reviewed the percentage error involved in a pair of PolitiFact ratings.

On July 16, PolitiFact's PunditFact rated Cokie Roberts "Half True" for a numerical claim that was exaggerated by about 9,000 percent.  PunditFact justified the rating based on Roberts' underlying argument, that the risk of being murdered in Honduras is greater than the risk in New York City.

On July 31, PolitiFact Oregon rated George Will "False" for a numerical claim that was off by as much as 225 percent. Will claimed healthcare companies make up 13 of the top 25 employers in Oregon, and occupy the top three positions on top of that. The former claim was off by as much as 225 percent and the latter claim was off by 300 percent or so. PolitiFact found Oregon's largest employer was a healthcare firm.

Today we take fresh note of a July 14 fact check from PolitiFact Virginia.

PolitiFact Virginia tested the claim of Democrat Mark Sickles that 70 percent of Virginia's Medicaid budget pays for care for seniors in nursing homes.

PolitiFact Virginia said the true number was 9.7 percent.

From that number, we calculate a percentage error of 622 percent (PolitiFact can't be trusted with that calculation).
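The calculation is simple enough to show in full, treating PolitiFact Virginia's 9.7 percent as the true figure:

```python
# Percentage error for Sickles' claim, measured against the true figure.
claimed = 70.0   # percent of Virginia's Medicaid budget, per Sickles
actual = 9.7     # percent, per PolitiFact Virginia
error = (claimed - actual) / actual
print(f"Percentage error: {error:.0%}")  # ~622%
```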

PolitiFact Virginia gives Sickles no credit for his underlying argument and rates his claim "False."


What determines whether PolitiFact rates the underlying point along with the literal claim?

How big does an error need to get before a claim warrants a "Pants on Fire" rating?


Clarification 8-14-2014:
Changed "Will claimed healthcare companies.make up 13 of the top 25, and occupy the top three positions on top of that" to Will claimed healthcare companies.make up 13 of the top 25 employers in Oregon, and occupy the top three positions on top of that."

PolitiMath at PolitiFact Oregon

Leaning.
PolitiFact Oregon provides us with a great item to compare to our July 30 examination of mathematics at PolitiFact's PunditFact project.

In the PunditFact item, we noted that Cokie Roberts used a probability comparison that was off by almost 9000 percent and received a "Half True" rating from PolitiFact, thanks to her underlying point that getting murdered in Honduras was more likely than in New York City.

On July 31, PolitiFact Oregon published a fact check of George Will. Will wrote a few things about how prominently health care providers figure in Oregon's list of top job providers. Will was making the case for a medical doctor in the Senate, Republican candidate Monica Wehby.

PolitiFact Oregon rated Will's claim "False":
Will, in a column supporting the candidacy of Republican Senate candidate Monica Wehby, included a link purporting to show Oregon’s 25 largest employers. The chart, he wrote, indicated that the dominance of large health care providers in Oregon -- the three largest employers and 13 of the top 25 in the state fit that niche, according to the chart -- make Dr. Wehby the best choice for the job.

Calls and emails to many of the companies listed, however, indicate that the chart’s numbers are way off, often wildly so. The top three employers on the list Will used are, in fact, a single entity. And by our count, the highest number of health care providers that can rank among Oregon’s top 25 employers is nine, not the 13 Will cited.

We rate the claim False.
Will was off by as much as 225 percent (using four as the number of health care providers in the top 25), apparently totally overwhelming any underlying point he had about health care providers employing quite a few Oregonians.
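For those keeping score, here is the arithmetic under the two available readings of the data (four is the low-end count used above; nine is PolitiFact Oregon's own count):

```python
# How far off was Will's count of 13 health care providers among the top 25?
claimed = 13
low_end_count = 4        # low-end figure used above
politifact_count = 9     # PolitiFact Oregon's own count

print(f"Against a count of {low_end_count}: {(claimed - low_end_count) / low_end_count:.0%} off")          # 225%
print(f"Against a count of {politifact_count}: {(claimed - politifact_count) / politifact_count:.0%} off")  # ~44%
```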

After all, it's way too much to ask for consistency from mainstream media fact checkers.

Incidentally, we found healthcare/social assistance combined make up about 12.6 percent of all jobs in Oregon (as of June 2014, seasonally adjusted).  That's about 15.1 percent of the private workforce.

Sunday, July 20, 2014

Tweezers or tongs?

We've noted before PolitiFact's inconsistency in its treatment of compound statements. It's time to focus on a specific way that inconsistency can influence PolitiFact's "Truth-O-Meter."

We'll call this problem "tweezers or tongs" and illustrate it with a recent PolitiFact fact check of Phil Gingrey (R-Ga.):
"As a physician for over 30 years, I am well aware of the dangers infectious diseases pose. In fact, infectious diseases remain in the top 10 causes of death in the United States. … Reports of illegal migrants carrying deadly diseases such as swine flu, dengue fever, Ebola virus and tuberculosis are particularly concerning."

[...]

The reality is that Ebola has only been found in Africa -- and experts agree that, given how the disease develops, the likelihood of children from Central America bringing it to the U.S. border is almost nonexistent. But most importantly for our fact-check, Gingrey’s office was unable to point to solid evidence that that Ebola has arrived in Western Hemisphere, much less the U.S. border. To the contrary, the CDC and independent epidemiologists say there is zero evidence that these migrants are carrying the virus to the border.

We rate the claim Pants on Fire.
It's tweezers this time.

Gingrey states that disease crossing the border via migration creates a concern.  He mentions reports of swine flu, dengue fever, Ebola virus and tuberculosis crossing the border as examples of concern.  PolitiFact takes its tweezers and picks out "Ebola virus," and drops from consideration the other diseases in Gingrey's compound statement.

Let's review again PolitiFact's statement of principles:
We sometimes rate compound statements that contain two or more factual assertions. In these cases, we rate the overall accuracy after looking at the individual pieces.
Or sometimes PolitiFact will just settle on rating one piece of the compound statement.  It's up to PolitiFact, based on the whim of the editors.

Burying Gingrey's underlying point

Though we're focused mainly on PolitiFact's inconsistent handling of compound statements, it's hard to ignore another PolitiShenanigan in the Gingrey fact check.  PolitiFact sometimes takes a subject's underlying point into account when making a ruling.  And sometimes not.  In Gingrey's case, PolitiFact buried Gingrey's underlying point:
As a surge of unaccompanied children from Central America was arriving on the United States’ southern border this month, Rep. Phil Gingrey, R-Ga., expressed concern about the impact they could have on public health.
PolitiFact left out part of the story.  Yes, Gingrey was expressing concern about the potential spread of disease from human migration.  But he wasn't simply airing his concerns to the Centers for Disease Control, to which he addressed the letter PolitiFact fact checked.  He was asking the CDC to assess the risk:
I request that the CDC take immediate action to assess the public risk posed by the influx of unaccompanied children and their subsequent transfer to different parts of the country.
PolitiFact claims "words matter."  Yet, contrary to PolitiFact's claim, Gingrey did not say migrants may be bringing Ebola virus through the U.S.-Mexico border.  Rather, he said it was troubling to hear reports of diseases, including Ebola virus, coming across the border.

Words matter to PolitiFact, we suppose, since one needs to know exactly how much twisting is needed to arrive at the desired "Truth-O-Meter" rating.

Wednesday, November 2, 2011

Grading PolitiFact: Joe Biden and the Flint crime rate

(crossposted from Sublime Bloviations with minor reformatting)


To assess the truth for a numbers claim, the biggest factor is the underlying message.
--PolitiFact editor Bill Adair


The issue:
(clipped from PolitiFact.com)


The fact checkers:

Angie Drobnic Holan:  writer, researcher
Sue Owen:  researcher
Martha Hamilton:  editor


Analysis:

This PolitiFact item very quickly blew up in their faces.  The story was published at about 6 p.m. on Oct. 20.  The CYA was published at about 2:30 p.m. on Oct. 21, after FactCheck.org and the Washington Post published parallel items very critical of Biden.  PolitiFact rated Biden "Mostly True."

First, the context:



(my portion of transcript in italics, portion of transcript used by PolitiFact highlighted in yellow):

BIDEN:
If anyone listening doubts whether there is a direct correlation between the reduction of cops and firefighters and the rise in concerns of public safety, they need look no further than your city, Mr. Mayor.  

In 2008--you know, Pat Moynihan said everyone's entitled to their own opinion, they're not entitled to their own facts.  Let's look at the facts.  In 2008 when Flint had 265 sworn officers on their police force, there were 35 murders and 91 rapes in this city.  In 2010, when Flint had only 144 police officers the murder rate climbed to 65 and rapes, just to pick two categories, climbed to 229.  In 2011 you now only have 125 shields.  

God only knows what the numbers will be this year for Flint if we don't rectify it.  And God only knows what the number would have been if we had not been able to get a little bit of help to you.

As we note from the standard Bill Adair epigraph, the most important thing about a numbers claim is the underlying message.  Writer Angie Drobnic Holan apparently has no trouble identifying Biden's underlying message (bold emphasis added):
If Congress doesn’t pass President Barack Obama’s jobs plan, crimes like rape and murder will go up as cops are laid off, says Vice President Joe Biden.

It’s a stark talking point. But Biden hasn’t backed down in the face of challenges during the past week, citing crime statistics and saying, "Look at the facts." In a confrontation with a conservative blogger on Oct. 19, Biden snapped, "Don’t screw around with me."
No doubt the Joe Biden of the good "Truth-O-Meter" rating is very admirable in refusing to back down.  The "conservative blogger" is Jason Mattera, editor of the long-running conservative periodical "Human Events."  You're a blogger, Mattera.  PolitiFact says so.

But back to shooting the bigger fish in this barrel.

PolitiFact:
We looked at Biden’s crime numbers and turned to the Federal Bureau of Investigation's uniform crime statistics to confirm them. But the federal numbers aren’t the same as the numbers Biden cited. (Several of our readers did the same thing; we received several requests to check Biden’s numbers.)

When we looked at the FBI’s crime statistics, we found that Flint reported 32 murders in 2008 and 53 murders in 2010. Biden said 35 and 65 -- not exactly the same but in the same ballpark.
Drobnic Holan initially emphasizes a fact check of the numbers.  Compared to the FBI numbers, Biden inflated the murder figures for both 2008 and 2010, and his inflated numbers in turn inflate the percentage increase by roughly 20 percentage points (from about 66 percent to about 86 percent), or about 30 percent in relative terms.  So it's a decent-sized ballpark.
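The inflation is easy to see when the two sets of numbers sit side by side (our arithmetic, using the counts PolitiFact reported):

```python
# Percentage increase in Flint murders, 2008 to 2010, from the two data sets.
fbi_2008, fbi_2010 = 32, 53        # FBI uniform crime statistics
biden_2008, biden_2010 = 35, 65    # figures Biden cited

fbi_increase = (fbi_2010 - fbi_2008) / fbi_2008            # ~65.6%
biden_increase = (biden_2010 - biden_2008) / biden_2008    # ~85.7%

gap_points = (biden_increase - fbi_increase) * 100
relative_inflation = biden_increase / fbi_increase - 1
print(f"FBI increase:   {fbi_increase:.0%}")
print(f"Biden increase: {biden_increase:.0%}")
print(f"Gap: {gap_points:.0f} percentage points ({relative_inflation:.0%} relative inflation)")
```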

PolitiFact:
For rapes, though, the numbers seemed seriously off. The FBI showed 103 rapes in 2008 and 92 rapes in 2010 -- a small decline. The numbers Biden cited were 91 rapes in 2008 and 229 in 2010 -- a dramatic increase.
If inflating the percentage increase in murders by roughly 20 percentage points is not a problem for Biden, then this at least sounds like a problem.

After going over some other reports on the numbers and a surprising discussion of how little evidence suggests Obama's jobs bill would address the number of police officers in Flint, PolitiFact returns to the discrepancy between the numbers:
(W)e found that discrepancies between the FBI and local agencies are not uncommon, and they happen for a number of reasons. Local numbers are usually more current and complete, and local police departments may have crime definitions that are more expansive than those of the FBI.
All this is very nice, but we're talking about the city of Flint, here.  We don't really need current stats for 2008 and 2010 because they're well past.  Perhaps that affects the completeness aspect of crime statistics also; PolitiFact's description is too thin to permit a judgment.  As for "expansive" definitions, well, there's a problem with that.  Biden's number of rapes in 2008 is lower than the number reported in the UCR (FBI) data.  That is a counterintuitive result for a more expansive definition of rape and ought to attract a journalist's attention.

In short, even with these proposed explanations it seems as though something isn't right.

PolitiFact:
Flint provided us with a statement from Police Chief Alvern Lock when we asked about the differences in the crime statistics, particularly the rape statistics.

"The City of Flint stands behind the crime statistics provided to the Office of The Vice President.  These numbers are an actual portrayal of the level of violent crime in our city and are the same numbers we have provided to our own community. This information is the most accurate data and demonstrates the rise in crime associated with the economic crisis and the reduced staffing levels.

"The discrepancies with the FBI and other sources reveal the differences in how crimes can be counted and categorized, based on different criteria." (Read the entire statement)
This is a city that's submitting clerical errors to the FBI, and we still have the odd problem with the rape statistics.  If the city can provide numbers to Joe Biden then why can't PolitiFact have the same set of numbers?   And maybe the city can include stats for crimes other than the ones Biden may have cherry-picked?  Not that PolitiFact cares about cherry-picked stats, of course.

Bottom line, why are we trusting the local Flint data sight unseen?

PolitiFact caps Biden's reward with a statement from criminologist and Obama campaign donor James Alan Fox of Northeastern University to the effect that Biden makes a legitimate point that "few police can translate to more violent crime" (PolitiFact's phrasing).  Fox affirms that point, by PolitiFact's account, though it's worth noting that on the record Biden asserted a "direct correlation" between crime and the size of a police force.  The change in wording seems strange for a fact check outfit that maintains that "words matter."

The conclusion gives us nothing new other than the "Mostly True" rating.  Biden was supposedly "largely in line" with the UCR murder data for Flint.  His claim about rape apparently did not drag down his rating much even though PolitiFact admittedly could not "fully" explain the discrepancies.  PolitiFact apparently gave Biden credit for the underlying argument that reductions in a police force "could result in increases in violent crime" despite Biden's rhetoric about a "direct correlation."


The grades:

Angie Drobnic Holan:  F
Sue Owen: N/A
Martha Hamilton:  F

This fact check was notable for its reliance on sources apparently predisposed toward the Obama administration and its relatively unquestioning acceptance of information from those sources.  The Washington Post version of this fact check, for comparison, contacted three experts to PolitiFact's one and none of the three had an FEC filing indicating a campaign contribution to Obama.

And no investigation of whether Biden cherry-picked Flint?  Seriously?  See the "Afters" section for more on that as well as commentary on PolitiFact's CYA attempt.

Tuesday, March 15, 2011

Anchor Rising: "More Bias on Display"

JD and I only got started with PolitiFact Bias in early 2011.  That puts us a few years behind in highlighting excellent examples others have found that help show PolitiFact's left-leaning bias.

Thanks to the new "PolitiFarce" tag at Anchor Rising I ran across this stellar example from Justin Katz:
The statement being addressed is that "over half of the foreign-born population in Rhode Island is white," and the findings were as follows:
Brown directed us to the U.S. Census Bureau's American Community Survey, 2006-2008, which includes three-year estimates of foreign-born populations in the United States. Specifically, he said he was citing the figures showing that 45.2 percent of foreign-born Rhode Islanders are white. That's not more than half. ...
Drawing from data in the 2006-2008 survey, the census said that 32 percent of foreign-born people, about one third, are white alone, not Hispanic or Latino. ...
A one-year report from 2009 showed that 30 percent of Rhode Island respondents identified themselves as "white alone, not Hispanic or Latino."
So, judged by the statistic that Brown incorrectly thought he should be using, his statement was only false by a little; judged by the appropriate statistic, Brown's statement was false by a lot. On what grounds did PolitiFact give him a "half true"?
Indeed, upon examination of PolitiFact's argument it is difficult to see what portion of Steve Brown's statement, if any, was true.

It's worth noting that this story by PolitiFact did attempt to address Brown's underlying point.  PolitiFact's standards (using the term advisedly) call for giving the underlying point the greatest emphasis in a numbers claim.

But trying to understand PolitiFact's approach on that basis simply leads to more trouble.

PolitiFact:
In the end, Brown's underlying claim that the state police investigate Hispanics more often than non-Hispanics for immigration violations is supported by the department's own numbers. Of the 92 people investigated, 71 were from Latin American countries.
The most obvious problem is the small sample size.  But the bigger problem is PolitiFact's supposed identification of Brown's "underlying claim" that Hispanics were investigated more often than non-Hispanics.  If that was Brown's underlying claim then there should have been no reason to look at race percentages among Rhode Island's foreign-born population.  PolitiFact could have just used the numbers 71 and 92 and had done with it with a glowing "True" rating.  But clearly Brown's point was that Hispanics are investigated disproportionately by race, implying racism in the department's methods.  That argument is specious on its face given the aforementioned small sample size and the strong possibility that factors other than race (proximity of the nation of origin, for example) come into play in leading to an investigation.

The "true" in Brown's statement, then, appears to come from an "underlying claim" that wasn't really Brown's point.  PolitiFact used a superficial factoid to justify bumping Brown up a notch or two (or three).

Monday, January 3, 2011

Carolina Journal: The Sophistry of Liberal Fact-Check Websites

Jon Ham at the Carolina Journal writes a scathing opinion piece that questions both the nonpartisan credibility of PolitiFact and the merits of its Lie of the Year:
Anyone paying attention remembers that ObamaCare was a government takeover bid. That's what it was when Hillary Clinton was pushing it in 1993, and the 2009 Obama plan was, too. It included a "public option," which was really a "government option" to any objective news outlet. But PolitiFact sniffs that, while this may have been true before the "public option" was taken out of the bill, it wasn't accurate once that onerous provision was excised.

[Quoting PolitiFact] "By the time the health care bill was headed toward passage in early 2010, Obama and congressional Democrats had sanded down their program, dropping the "public option" concept that was derided as too much government intrusion. The law passed in March, with new regulations, but no government-run plan."

Robert Gibbs couldn't have spun that any better. PolitiFact maintains that anyone who continued to use "government takeover" after the public option was killed is a liar, and a big fat "liar of the year," to boot.
Ham goes on to raise the issue of the new authority granted to the Secretary of Health and Human Services:
Even a quick reading of the health care bill reveals an astounding level of government control of health care, even without the public option...The Health and Human Services bureaucracy is given an unprecedented degree of power by the ample use of the phrase "the Secretary shall, by regulation" in the bill. Any objective person would conclude that 2,000 pages of new regulations devoted to one industry constitutes a "government takeover" by definition, but not PolitiFact.
As evidence of PolitiFact's bias, Ham also points out the Lie of the Year runner-up, Michele Bachmann's claim regarding President Obama's trip to India:
This is a textbook example of the half-truth way liberal fact-check sites operate. Yes, Bachmann did say that the Obama trip would cost $200 million a day, but it was not her claim. It was the claim of an Indian mainstream news outlet, the Press Trust of India, and was picked up by other news outlets. Bachmann was simply repeating what had been reported.

The quick determination by PolitiFact readers that Bachmann's repeating of this report constitutes a "lie," and PolitiFact's evident acceptance of that determination, tells you all you need to know about the readers and PolitiFact. Why did they brand only Bachmann as a liar, and not the many others who repeated what was thought to be an accurate report? I'd venture that it had something to do with the left's Bachmann Derangement Syndrome, second in severity only to Palin Derangement Syndrome.
The entire article can be found here.