Monday, December 21, 2020

PolitiFact botches one in Marco Rubio's favor

Though PolitiFact Bias finds PolitiFact biased to the left, we also find that PolitiFact simply stinks at fact-checking. PolitiFact stinketh so much that its mistakes sometimes run against its biased tendencies, unfairly harming Democrats or unfairly helping Republicans.

We ran across a clear case of the latter this week while putting together a spreadsheet collection of PolitiFact's "True" ratings. Sen. Marco Rubio (R-Fla.) received a "True" for a significantly flawed claim about Social Security:

Image capture from PolitiFact.com


Rubio was right that Social Security had to draw down the Trust Fund balance to pay benefits. But PolitiFact simply didn't bother to look at whether it was happening "for the first time."

It wasn't happening for the first time. It happened often during the 1970s. And in the 1970s Social Security was on-budget. That means that when people claim that Social Security has never contributed to the federal deficit they are quite clearly wrong as a matter of fact.

PolitiFact looked at only one government source in fact-checking Rubio. That source said nothing about whether the Trust Fund drawdown was happening "for the first time."

A chart from the Committee for a Responsible Federal Budget makes the shortfall from the 1970s clear:

It's unlikely PolitiFact was trying to do Rubio a favor. Rather, the staff at PolitiFact probably thought their knowledge of Social Security's financial history was solid and simply did not question Rubio's claim when it affirmed that expectation.

We'll attach the "Left Jab" tag to this item even though it did not come from a left-leaning critic of PolitiFact.

Thursday, August 28, 2014

Layers of editors at PolitiFact Florida

We ran across some faulty after-publication editing at PolitiFact Florida while doing some research.

A picture tells the story (red ovals and yellow highlights added):


Why pick on PolitiFact Florida over something relatively minor? We think it's a healthy reminder that the people who work for PolitiFact are fallible. Seeing this type of mistake reminds us that we shouldn't be too surprised to see other types of mistakes in their work, including mistakes in the research and conclusions.

Tuesday, March 15, 2011

Anchor Rising: "More Bias on Display"

JD and I only got started with PolitiFact Bias in early 2011.  That puts us a few years behind in highlighting excellent examples others have found that help show PolitiFact's left-leaning bias.

Thanks to the new "PolitiFarce" tag at Anchor Rising I ran across this stellar example from Justin Katz:
The statement being addressed is that "over half of the foreign-born population in Rhode Island is white," and the findings were as follows:
Brown directed us to the U.S. Census Bureau's American Community Survey, 2006-2008, which includes three-year estimates of foreign-born populations in the United States. Specifically, he said he was citing the figures showing that 45.2 percent of foreign-born Rhode Islanders are white. That's not more than half. ...
Drawing from data in the 2006-2008 survey, the census said that 32 percent of foreign-born people, about one third, are white alone, not Hispanic or Latino. ...
A one-year report from 2009 showed that 30 percent of Rhode Island respondents identified themselves as "white alone, not Hispanic or Latino."
So, judged by the statistic that Brown incorrectly thought he should be using, his statement was only false by a little; judged by the appropriate statistic, Brown's statement was false by a lot. On what grounds did PolitiFact give him a "half true"?
Indeed, upon examination of PolitiFact's argument it is difficult to see what portion of Steve Brown's statement, if any, was true.

It's worth noting that this story by PolitiFact did attempt to address Brown's underlying point.  PolitiFact's standards (using the term advisedly) call for giving the underlying point the greatest emphasis in a numbers claim.

But trying to understand PolitiFact's approach on that basis simply leads to more trouble.

PolitiFact:
In the end, Brown's underlying claim that the state police investigate Hispanics more often than non-Hispanics for immigration violations is supported by the department's own numbers. Of the 92 people investigated, 71 were from Latin American countries.
The most obvious problem is the small sample size.  But the bigger problem is PolitiFact's supposed identification of Brown's "underlying claim" that Hispanics were investigated more often than non-Hispanics.  If that was Brown's underlying claim then there should have been no reason to look at race percentages among Rhode Island's foreign-born population.  PolitiFact could have just used the numbers 71 and 92 and been done with it, awarding a glowing "True" rating.  But clearly Brown's point was that Hispanics are investigated disproportionately by race, implying racism in the department's methods.  That argument is specious on its face given the aforementioned small sample size and the strong possibility that factors other than race (proximity of the nation of origin, for example) come into play in leading to an investigation.

The "true" in Brown's statement, then, appears to come from an "underlying claim" that wasn't really Brown's point.  PolitiFact used a superficial factoid to justify bumping Brown up a notch or two (or three).

Sunday, March 13, 2011

Introducing PFB's Anecdote-O-Meter

Just kidding with the title.  The o-meter stuff is too trite to use as more than a one-off joke.

But the subject is anecdotes and their role in helping to show bias in a body of work.

Most of us think we can perceive bias in an individual piece of journalism--anecdotal evidence.  And we may often be right, but the appearance of bias in reporting or fact checking may simply occur as the result of writer error.

So how does one tell the difference between mere errors and ideological bias?

One method, bolstered by the methods of science, involves counting the number of errors and tracking any association with political parties or ideas.  Mere errors ought to occur roughly equally in stories about Republicans compared to those about Democrats.  Where the errors harm one party more than the other beyond the threshold of statistical significance, evidence of a political bias has come to light.  The degree of deviation from best practices also may figure in a scientific study of journalistic errors.
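To make the counting method concrete, here is a minimal sketch of how such a tally might be tested.  The counts are invented placeholders, not data from PolitiFact or from this blog, and the two-sided binomial test simply asks whether errors split between the parties more lopsidedly than chance alone would explain.

# Minimal sketch of the error-counting method described above (Python).
# The tallies are invented placeholders, not real data.
from scipy.stats import binomtest

errors_harming_republicans = 24  # hypothetical count of flawed ratings that hurt Republicans
errors_harming_democrats = 8     # hypothetical count of flawed ratings that hurt Democrats
total_errors = errors_harming_republicans + errors_harming_democrats

# Under the "mere error" hypothesis, a flawed rating should harm either party
# with equal probability (p = 0.5).  A two-sided binomial test asks how
# surprising the observed split would be if that were true.
result = binomtest(errors_harming_republicans, n=total_errors, p=0.5,
                   alternative="two-sided")

print(f"{errors_harming_republicans} of {total_errors} errors harmed Republicans")
print(f"two-sided p-value: {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Split unlikely to be chance alone; consistent with a directional bias.")
else:
    print("Split could plausibly be chance; no statistical evidence of bias here.")

A real study would also have to weight errors by severity and account for the mix of claims each party offers for checking, which a raw count like this ignores.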

In March of 2008 at my blog Sublime Bloviations, I started tagging relevant posts with "grading PolitiFact."  On occasion I have criticized PolitiFact for harsh grading of Democrats.  The vast majority of the posts, in accordance with my selection bias, consists of criticisms of faulty grades given to Republicans or conservatives, or of ridiculously gentle treatment of Democrats or progressives.

I work under no illusion that the list represents definitive evidence of a systemic bias at PolitiFact.  But the number of times PolitiFact's grades go easy on Democrats and tough on Republicans does count as important and legitimate evidence supporting (though not definitively) the charge of bias.

If the ideological bias at PolitiFact is not significant, then it should be possible to compile a list of comparable size and quality containing criticisms of PolitiFact where PolitiFact favors Republicans and deals harshly with Democrats.

I don't foresee that occurring.

The list from Sublime Bloviations, in chronological order (and do pardon the more polemical bent of the earlier entries; I was shocked by the amateurish fact checks I was reading):

Friday, March 4, 2011

San Diego Rostra: "When Fact Checks are False"

Back in July of last year, Bradley J. Fikes unloaded a broadside against PolitiFact at San Diego Rostra.  Fikes used a fact check of Michele Bachmann as an example of fact checks gone awry:
Last year during the health care debates, PolitiFact rated this statement by Rep. Michele Bachmann, R-Minn., false: “Ezekiel Emanuel, one of President Obama’s key health care advisers, says medical care should be reserved for the nondisabled. So watch out if you’re disabled.”

I think Bachmann was mostly right, after examining the evidence she presented. PolitiFact misrepresented that evidence, following guidance from an Obama Administration spokesperson.
Fikes offers a detailed takedown of his chosen example along with some trenchant observations of PolitiFact's approach to fact checking.

Fikes' review overlaps somewhat with a criticism I wrote of the same PolitiFact fact check some months after his.

I recommend his longer and more thorough version.

Sunday, January 23, 2011

Anchor Rising: "Rating of John Loughlin on Social Security: PolitiFact's Truth-O-Meter Earns Itself a 'Pants on Fire'"

Blogger Monique Chartier of Anchor Rising noticed when PolitiFact failed to admit that Social Security is a Ponzi scheme:
Here's the definition of a Ponzi scheme.
A Ponzi scheme is an investment fraud that involves the payment of purported returns to existing investors from funds contributed by new investors.
Here's how social security works.
The Social Security system is funded primarily by federal taxation of payrolls.

Doubtless PolitiFact might defend itself by noting that "fraud" occurs in Chartier's definition of "Ponzi scheme."  The PolitiFact argument insists, against the evidence, that Ponzi schemes require fraud in order to fit the proper definition.

But that's only true if we cherry-pick the definition.

Chartier's criticism hits its mark, but it would have more force if she had pointed out that fraud is not a necessary feature of Ponzi schemes and Ponzi financing.

Friday, January 21, 2011

Federal Review: "Politifact: The Selective Ignorance of Meaning"

"Winston" at a conservative blog called "Federal Review" fires off a blistering review of two PolitiFact fact checks:
Create a website purporting to “fact check” politicians, win a Pulitzer Prize. That’s a short history of Politifact, an operation of the St. Petersburg Times. Complete with fun little “Truth-o-meter” graphics that range from True to Pants-on-Fire, illustrated with, you guessed it. Flames.

I’ve been reading this for some time and have always come away uneasy. Often because they try to fact-check political opinion and not-unreasonable-predictions about the results of policy proposals. But today, I have discovered two good, illustrative examples.
Winston deals with the fact check of an ad attacking congressional candidate Zack Space and another concerning Sen. Barbara Boxer (D-Calif.).  The former contains the more damaging observations.  Winston gets to the point quickly and gets in some damaging shots.  Worth reading, so get busy.

Friday, January 7, 2011

Ethics Alarms: "'Lie of the Year'? Hardly"

Here's yet another well-reasoned takedown of PolitiFact's "Lie of the Year" for 2010, this time from Jack Marshall's blog "Ethics Alarms."

Marshall provides an excellent summary of PolitiFact's fundamental error:
The point of disagreement depends on one’s tolerance for  an outside  authority’s interference with free choice. Every new control, regulation or alteration in options reduces the autonomy of individuals and the marketplace. To supporters of government micromanagement of individuals and commerce, this isn’t a “takeover,” because significant choices still remain with the consumer and the industry. To those who object to all but the most unobtrusive government controls, it is a takeover, because the government is deciding which options are available.

Regardless of who is right, and this is just part of a long-standing argument about what is the proper role of government, calling one side’s sincere and defensible characterization of the law  1) a lie, and 2) “the lie of the year” is taking partisan sides, especially obnoxious for a website that promotes its lack of bias.

As usual, read the whole thing.

Engineering Thinking: "PolitiFact Earns 'Pants On Fire' Rating"

Blogger Ed Walker of "Engineering Thinking" produced a list of deficiencies at PolitiFact likely to lead to biased findings.  After presenting the list, Walker drops the hammer:
PolitiFact scores a big fat zero, ranking it among sites devoted to UFOs, ghosts, psychic phenomena, and other organizations that dabble in pseudoscience.

This does not mean that PolitFact is completely biased or always wrong. It does mean that they have no published scientific standards, so it is not possible to evaluate their work. Without such standards, evaluation criteria may shift from issue to issue, perhaps allowing them to indulge in subtle favoritism toward people or issues they like, while awarding “pants on fire” ratings to those they don’t.
Obviously Walker is evaluating PolitiFact on some level.  When he says it is not possible to evaluate PolitiFact's work, he seems to mean that PolitiFact's rating system (the continuum from "Pants on Fire" to "True") lacks criteria adequate for separating one grade from another--as with the "ridiculous" criterion noted here, which is treated as though "ridiculous" were an objective determination.

Walker accurately indicts PolitiFact on the issue of shifting standards--PolitiFact certainly does vary in its approach to fact checking, as shown by results such as finding it simultaneously "True" that Joe Biden did not advocate partitioning Iraq while also finding it "Half True" that Joe Biden advocated partitioning Iraq.

Walker's short post makes some terrific points, so please read it all.

Thursday, January 6, 2011

The Foundry: "PolitiFact Declares Century-Long Economic Debate Over"

Brian Riedl works as an economist for the right-leaning Heritage Foundation.  From time to time, PolitiFact uses Riedl as an expert source.

At times, expert sources come away unsatisfied with how their contribution was treated.

Enter Riedl, writing for The Foundry:
Typically, fact-checking is limited to checking, well, verifiable facts. Whether the budget deficit is rising, how much Washington spends on Social Security, and what provisions are in the latest health care bill are not open to interpretation. They can be verified factually.



Whether the economy would have performed better or worse without the President’s $862 billion stimulus is an analytical and theoretical argument. It is not a “fact” to be “checked.”

PolitiFact’s analysis displays a lack of understanding of the complexities of macroeconomic analysis. They cite as a “consensus” four studies claiming that the stimulus worked – yet those studies were all essentially Keynesian economic models, so of course they will declare that a Keynesian stimulus worked.
Riedl noted that PolitiFact essentially accepted the accuracy of estimates made by Keynesians using Keynesian models to measure the job creation effectiveness of the stimulus bill, and in turn used those numbers to rate the accuracy of a statement by President Obama.  Here's how that method appeared in PolitiFact's conclusion:
With the notable exception of conservatives, the independent economists who have produced studies agree that the stimulus has saved or created upwards of 1 million jobs, and that the bill will likely create another million or so jobs in 2010. These numbers are based on a "counterfactual" study that is an estimate subject to some professional disagreement. And within this broad range of expert opinion, Obama chose a number on the high side. The numbers could easily be less than what he suggests. So we rate his claim Half True.
Obama may be flatly incorrect, in other words, yet receives a "Half True" rating from PolitiFact.

Wednesday, January 5, 2011

Le·gal In·sur·rec·tion: "Media Bias In Action: Providence Journal Politicizes PolitiFact"

Blogger and associate clinical professor (Cornell Law School) William A. Jacobson posted some pointed words about PolitiFact during the 2010 election season:
The Providence Journal, the only statewide daily newspaper in Rhode Island which dominates news coverage, has endorsed Democrat David Cicilline for Congress in the RI-01 District, running against John Loughlin.  As my readers know, this is my home district and I support Loughlin.

Unfortunately, the PolitiFact feature at ProJo reflects these political leanings, as substantially identical analyses result in PolitiFact ratings more favorable to Cicilline.  I'll assume this bias is unintended, but the bias is there nonetheless.
Jacobson goes on to chronicle a number of inconsistencies in the political coverage at PolitiFact Rhode Island.  On one point I'll disagree with his language, however.  The Providence Journal is simply continuing the tradition of politicization at PolitiFact.

Individually, anecdotes showing disparate treatment are relatively weak evidence of ideological bias.  A large collection of anecdotes, particularly where the evidence is clear, does provide reasonable support for the charge of bias, however.



Hat tip to JD for pointing out that I spelled "on" with an "e" on the end in the next-to-last paragraph.  The typo is hereby fixed.

Tuesday, January 4, 2011

Peg Kaplan on the why of it

JD brought my attention to a blog post by one Peg Kaplan.

Kaplan has had some opportunity to roam the halls at the Poynter Institute in St. Petersburg, Fla.  The Poynter Institute owns the St. Petersburg Times, which in turn brought PolitiFact into being.

Kaplan's take:
I agree with those in Professor Burgess-Jackson's post who slam Politifact for its analysis about Obamacare. Nevertheless, I know some of the people who work at Politifact, through the Poynter Institute. These people are not stupid and they are not dishonest. I am certain that they believe what they write.

If they are wrong, then how is this possible?
Kaplan's experience agrees with mine.  The journalists I have met are sincere and conscientious as a rule.  Kaplan's explanation also agrees with the one to which I hold:  The newsroom culture steeps its membership in a cloud of accepted wisdom.  That accepted wisdom isn't always particularly wise.  The homogeneity of the newsroom culture discourages journalists from asking some of the right questions.  The blind spot in their perceptions can't help but manifest in their work. 

This type of bias, by the way, is an institutional bias.

Do read the whole of Kaplan's post, and follow the links to Burgess-Jackson's post.

Monday, January 3, 2011

Washington Examiner: "Politifact Is Often More Politics Than Facts"

Mark Hemingway of the Washington Examiner exposes flaws in PolitiFact's rating of Rand Paul. The Kentuckian pointed out a disparity between private and public worker compensation:
The average federal employee makes $120,000 a year. The average private employee makes $60,000 a year.
PolitiFact rated him False. They explained that Paul might confuse his audience:
Since most people usually think about how much they, their spouses and their colleagues get paid in salary alone — not salary plus benefits — we think most people hearing this statement would assume that Paul means that the average federal employee gets paid a salary of $120,000. That’s simply not true.
PolitiFact offered no evidence that "most people" would think Paul was talking about salary alone. And Hemingway was quick to point this out:
"So what they’re saying is not that what Paul said was literally false, but that according to how they think people will understand what he said, it’s not true. Come again?"
Hemingway concludes that even though PolitiFact framed the fact check around its own ambiguous standards, it still missed the mark:
"Politifact does make one relevant point about the average private sector worker not being an apples-to-apples comparison to the average federal worker, but that has no bearing on what Paul actually said and hardly justifies the exorbitant compensation federal workers get."
You can read the entire article here.

You can also read a companion critique at Sublime Bloviations that points out another flaw in the PolitiFact piece. Three months prior to the Paul rating, PolitiFact came to a different conclusion when it rated Mike Keown.

This represents three separate fact checks on basically the same issue with two different conclusions.