Showing posts with label Percentage Calculation Error.

Saturday, December 26, 2015

PolitiMath from PolitiFact New Hampshire

What's new with PolitiMath?

PolitiFact New Hampshire, lately run as the Concord Monitor's partnership with PolitiFact, gives us a double dose of PolitiMath with its July 2, 2015 fact check of New Hampshire's chief executive, Governor Maggie Hassan (D).

Hassan was the only Democrat to receive any kind of false rating ("False" or "Pants on Fire") from PolitiFact New Hampshire in 2015. PolitiFact based its ruling on a numerical error by Hassan and added another element of interest for us by characterizing Hassan's error in terms of a fraction.

What type of numerical error earns a "False" from PolitiFact New Hampshire?

PolitiFact summarizes the numbers:
In her state of the state address, Hassan said that "6,000 people have already accessed services for substance misuse" through the state’s Medicaid program.

There is no question that substance abuse in the state is a real and pressing problem, and the statistics show that thousands have sought help as a result of the state’s expanded Medicaid program. But Hassan offered (and later corrected) a number that simply wasn’t accurate. The real total is closer to 2,000 -- about one-third the amount she cited.

We rate her claim False.
Described as a percentage error using PolitiFact's figures, Hassan's mistake amounts to exaggerating her figure by about 230 percent. PolitiFact gave Hassan no credit for her underlying point.

In our PolitiMath series we found the closest match for this case from PolitiFact Oregon. PolitiFact Oregon said conservative columnist George Will exaggerated a figure--by as much as 225 percent, by our calculations. The figure PolitiFact Oregon found was uncertain, however, so Will may have exaggerated considerably less, depending on where within the range of numbers PolitiFact Oregon provided the true figure fell.

In any case, PolitiFact Oregon ruled Will's claim "False." PolitiFact Oregon gave Will no credit for his underlying argument, just as PolitiFact New Hampshire did with Gov. Hassan.

Percent Error and Partisanship

One of our research projects combs PolitiFact's fact checks for a common error journalists make. We reasoned that journalists would prove less likely to make such careless errors at the expense of the party they prefer. Our study produced only a small set of examples, but the percentage of errors was high and favored Democrats.

PolitiFact New Hampshire's fact check of Gov. Hassan merits consideration under that project, giving us the second mathematical element of note.

PolitiFact could have expressed Hassan's mistake using a standard percentage error calculation like the one we used. We calculated a 230 percent error. But PolitiFact New Hampshire did not use the correct figure (1,800) as the baseline for calculating error. Instead, the fact checkers used the higher, incorrect figure (6,000) as the baseline for comparison: "about one-third the amount she cited."

Using the number "one-third" frames Hassan's error nearer the low end. "One-third" doesn't sound so bad, numerically. Readers with slightly more sophistication may reason that the "one-third" figure means Hassan was off by two-thirds.
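
For readers who want to check the arithmetic, here is a minimal sketch of the competing framings, using PolitiFact's figures of 6,000 (Hassan's number) and roughly 1,800 (the corrected figure):

    claimed = 6000   # Hassan's figure
    actual = 1800    # roughly the corrected figure PolitiFact relied on

    percent_error = (claimed - actual) / actual * 100   # baseline is the correct figure
    ratio = actual / claimed                             # PolitiFact's framing: "about one-third"
    off_by = (claimed - actual) / claimed * 100          # the "off by two-thirds" reading

    print(round(percent_error))   # 233, i.e., about 230 percent
    print(round(ratio, 2))        # 0.3, about one-third
    print(round(off_by))          # 70, roughly two-thirds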

Sometimes using the wrong baseline makes the error look bigger and sometimes it makes the error look smaller. In this case the wrong baseline frames Hassan's mistake as a smaller error. The Democrat Hassan gains the benefit of PolitiFact's framing.

Thursday, November 20, 2014

Lost letters to PolitiFact Bias

We discovered a lost letter of sorts, intended as a response to our recent post "Fact-checking while blind, with PolitiMath."

Jon Honeycutt, posting to PolitiFact's Facebook page, wrote that he had posted a comment to this site but it never appeared. I posted a response to Honeycutt on Facebook, quoting his criticism in my reply:
Jon Honeycutt (addressing "PolitiFact Bias") wrote:
Hmm, just looked into 'politifact bias', the very first article I read http://www.politifactbias.com/.../fact-checking-while... Claimed that politifact found a 20% difference in the congressional approval rating but still found the meme mostly true. But when you read the actual article they link to, politifact found about a 3% difference. Then when I tried to comment to correct it, my comment never appeared.
Jon, I'm responsible for the article you're talking about. You found no mistake. As I wrote, "percentage error calculations ours." That means PolitiFact didn't bother calculating the error by percentage. The 3 percent difference you're talking about is a difference in terms of percentage *points*. It's two different things. We at PolitiFact Bias are much better at those types of calculations than is PolitiFact. You were a bit careless with your interpretation. I have detected no sign of any attempt to comment on that article. Registration is required or else we get anonymous nonsense. I'd have been quite delighted to defend the article against your complaint.
To illustrate the point, consider a factual figure of 10 percent and a mistaken estimate of 15 percent. The difference between the two is 5 percentage points. But the percentage error is 50 percent. That's because the estimate exceeds the true figure by that percentage (15-10=5, 5/10=.5).
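
A minimal sketch of the same distinction in code, using the figures from the illustration:

    actual = 10.0     # the true figure, in percent
    estimate = 15.0   # the mistaken estimate, in percent

    point_difference = estimate - actual                     # 5 percentage points
    percentage_error = (estimate - actual) / actual * 100    # 50 percent error

    print(point_difference)    # 5.0 percentage points
    print(percentage_error)    # 50.0 percent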

http://www.basic-mathematics.com/calculating-percent-error.html

Don't be shy, would-be critics! We're no less than 10 times better than is PolitiFact at responding to criticism, based on past performance. The comments section is open to those who register, and anyone who is a member of Facebook can post to our Facebook page.

Tuesday, November 11, 2014

Fact-checking while blind, with PolitiMath

One of the things we would predict from biased journalists is a forgiving eye for claims with which the journalist sympathizes.

Case in point?

A Nov. 11, 2014 fact check from PolitiFact's Louis Jacobson and intern Nai Issa gives a "True" rating to a Facebook meme claiming Congress has an 11 percent approval rating while 96.4 percent of incumbents successfully defended their seats in 2014.

PolitiFact found the claim about congressional approval was off by about 20 percent and the one about the percentage of incumbents was off by a maximum of 1.5 percent (percentage error calculations ours). So, in terms of PolitiMath, the average error for the two claims was 10.75 percent, yet PolitiFact ruled the claim "True." The ruling means the 11 percent average error is insignificant in PolitiFact's sight.
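
A quick sketch of that averaging, using the error figures stated above (our calculations, not PolitiFact's):

    approval_error = 20.0    # approximate percentage error on the approval claim
    incumbent_error = 1.5    # maximum percentage error on the incumbency claim

    average_error = (approval_error + incumbent_error) / 2
    print(average_error)     # 10.75 -- roughly an 11 percent average error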

Aside from the PolitiMath angle, we were intrigued by the precision of the Facebook meme. Why 96.4 percent and not an approximate number like 96 or 97? And why, given that PolitiFact often excoriates its subjects for faulty methods, wasn't PolitiFact curious about the fake precision of the meme?

Even if PolitiFact wasn't curious, we were. We looked at the picture conveying the meme and saw the explanation in the lower right-hand corner.

Red highlights scrawled by the PolitiFact Bias team. Image from PolitiFact.com

It reads: "Based on 420 incumbents who ran, 405 of which kept their seats in Congress."

PolitiFact counted 415 House and Senate incumbents, including three who lost primary elections. Not counting undecided races involving Democrats Mark Begich and Mary Landrieu, incumbents held 396 seats.

So the numbers are wrong, using PolitiFact's count as the standard of accuracy, but PolitiFact says the meme is true.
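
To compare the counts, here is a minimal sketch using the meme's stated basis (405 of 420) and PolitiFact's tally (396 of 415, undecided races excluded):

    meme_rate = 405 / 420 * 100         # the meme's basis: about 96.4 percent
    politifact_rate = 396 / 415 * 100   # PolitiFact's count: about 95.4 percent

    print(round(meme_rate, 1))          # 96.4
    print(round(politifact_rate, 1))    # 95.4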

It was fact-checked, after all.

Sunday, June 29, 2014

The clueless guru?

Late last month, we published a limited study on PolitiFact's execution of a simple math problem: calculating percentage error.  Using search parameters that suitably simulate randomness, we found 14 cases where PolitiFact explicitly or implicitly performed a percentage error equation.  PolitiFact used the wrong equation an astounding nine times.  Two of the cases were ambiguous.  Those two we gave the benefit of the doubt.

We tweaked PolitiFact over this failure on June 14 after Neil Brown, editor and vice president of PolitiFact's parent the Tampa Bay Times, called PolitiFact editor Angie Holan a "guru of best practices" in a June 9 tweet.  We said a guru of best practices would do percent error calculations the right way.

On Friday, June 27, 2014, PolitiFact doubled down on its methods in a fact check of President Obama.  President Obama said child care costs more than college tuition in 31 states.  PolitiFact, with veteran staffers Louis Jacobson writing and Holan editing, said the president was cherry picking and eventually gave him a "Mostly True" rating.

PolitiFact's explanation of Obama's cherry-picking caught our attention:
It’s worth noting some clarifying language in the report --"for an infant in center-based care" -- that is absent from Obama’s statement. This is actually the highest-cost example of the four cases the report looked at.

If you look at the cost for a 4-year-old in center-based care -- rather than an infant -- it costs more than in-state college tuition and fees in 19 states. That’s 39 percent fewer states compared with statistics for infant care. (Generally, care for infants is more intensive, so costs tend to go down as children get older.)

The report also looked at costs for home-based care, which is often a less expensive option for parents. For infants, the cost of home-based care is higher than college costs in 14 states. That’s a 55 percent reduction in states compared to Obama’s 31.

And for 4-year-olds, the cost of home-based care is higher than college in 10 states. That’s a 68 percent reduction in states compared to Obama’s 31.
What's the problem?  One could argue there's no single right figure here to use as a baseline for a percent error calculation, but the same principle holds for calculating a percentage change from a baseline.  And in this fact check we've got a charge of cherry-picking.  Cherry-picking creates a favorable impression compared to alternative baselines.  Calculating the exaggeration above the baseline works exactly like calculating the percentage error.

And guess what?  PolitiFact consistently performs the calculation incorrectly in a way that makes Obama look better (see the sketch following the list below):
  1. For the 4-year-old group, PolitiFact said the cost was higher for child care in 19 states, 39 percent fewer than the figure Obama used:  31.  Do the calculation using 19 as the baseline and the result tells the effect of Obama's cherry-picking.  The real exaggeration Obama achieves is 63 percent.  PolitiFact's method underestimates the exaggeration by 38 percent (24 percentage points).
  2. For home-based care of an infant, the result follows the same pattern.  PolitiFact said the difference was a 55 percent reduction.  In truth, Obama's cherry-picking inflated the number of states by 121 percent.  PolitiFact's calculation reduced Obama's exaggeration by about 55 percent.
  3. For home-based care of 4-year-olds we see the same story again.  PolitiFact called the difference "a 68 percent reduction."  Using the cost of home-based care for 4-year-olds as the baseline, we find Obama's cherry-picking exaggerates the number of states by 210 percent.  PolitiFact reduces Obama's exaggeration in this case by 68 percent.
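
Here is a minimal sketch of both calculations for each of the three alternative baselines, using 31 as Obama's cited figure:

    obama_figure = 31   # states Obama cited (infants in center-based care)
    alternatives = {
        "4-year-old, center-based": 19,
        "infant, home-based": 14,
        "4-year-old, home-based": 10,
    }

    for label, baseline in alternatives.items():
        exaggeration = (obama_figure - baseline) / baseline * 100       # correct baseline
        reduction = (obama_figure - baseline) / obama_figure * 100      # PolitiFact's framing
        print(label, round(exaggeration), round(reduction))
    # prints roughly: 63 vs. 39, 121 vs. 55, 210 vs. 68
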
The category Obama chose to cherry-pick provided by far the largest number of states.  Any averaging with the other figures from PolitiFact's source, Child Care Aware of America, will lower the figure, especially if we also consider the school-age category that PolitiFact fails to mention.  The costs for that group were lower than for infants and 4-year-olds.

Rigged.
The percentage figures PolitiFact provides do nothing to explain the effects of Obama's cherry-picking.  Instead, they arbitrarily describe the relative sizes of two numbers, and do so in a way that ultimately misleads readers.

It's easy to see what happened with Obama's misstatement.  Obama's figure matches exactly the figure Child Care Aware of America published for infants receiving child-care services at a center.  Except Obama described the figure incorrectly.  An average for all three age groups, considering both center-based care and home-based care, would render Obama's statement literally false.  He'd be just another politician who described a study using the wrong words, except PolitiFact goes easier on some than it does on others.  Obama's statement is literally false (off by no less than 63 percent).  It misleads his audience.  He gets a "Mostly True" from PolitiFact.

If these are its best practices then PolitiFact needs a new guru.

Wednesday, May 28, 2014

PolitiFact, percent error, partisanship

In last Sunday's post about PolitiFact's math on uninsured Americans, we mentioned the worth of tracking PolitiFact's application of a basic math equation used to calculate error percentages.

To calculate the percent error, one takes the difference between the right figure and the wrong figure, and divides the difference by the right figure.

For example, if we guess Scarlett Johansson weighs 175 lbs and her actual weight is 100 lbs, we take the difference, 75, and divide it by the correct figure, 100.  That gives us 75 percent, so that's the percentage error.  It is incorrect to divide by the incorrect estimate, 175 lbs.  That procedure ends up giving us a figure of about 43 percent, which is the wrong figure for the percentage error even if the calculation is performed correctly.  It's the wrong formula.  Our estimate was off by 75 percent, not 43 percent.
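
A minimal sketch of both versions of the calculation, using the weights from the example:

    def percent_error(estimate, actual):
        # the correct formula: divide the difference by the actual figure
        return abs(estimate - actual) / actual * 100

    def wrong_percent_error(estimate, actual):
        # the common mistake: divide the difference by the estimate
        return abs(estimate - actual) / estimate * 100

    print(percent_error(175, 100))        # 75.0 -- the estimate is off by 75 percent
    print(wrong_percent_error(175, 100))  # about 42.9 -- the wrong answer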

We propose, given the difficulty journalists often display with math problems, that this calculation can serve as an indicator of ideological bias.  Where a journalist performs the wrong equation to the benefit of one political party over the other, we have evidence of ideological bias.

The Experiment

We used the search string "off by" and the term "percentage" and performed a Google search limited to the politifact.com domain.  We then combed the results for our targets: instances of percentage error calculations.
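
For anyone who wants to repeat the search, the query we describe would look something like this in Google's search syntax:

    "off by" percentage site:politifact.com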

We found only 14, which is a low number for a study.  Two of the fact checks had two (wrong) calculations in them, but we count each fact check as just one case.

Out of 14 equations, PolitiFact performed the wrong calculation an appalling nine times.  Over half.


Good Fortune for Democrats

Since there are two ways to perform the percentage error calculation, albeit one is the wrong way, we tracked both methods for their correlation with benefit or harm to the political parties.  The reason is simple: one of the calculations will normally minimize the error compared to the other.  If Republicans got the wrong calculation every time the wrong calculation minimized the error, and the right calculation every time it likewise minimized the error, then just counting the number of times the wrong formula helped Republicans wouldn't adequately tell the story.  The other half of the story is the uncanny ability to use the right equation when it helps Republicans.

Does anybody sincerely expect PolitiFact to help Republicans?  Do you work for Media Matters or what?


Wrong calculation helped Dem., harmed Rep.: 6
Wrong calculation helped Rep., harmed Dem.: 3
Right calculation helped Dem., harmed Rep.: 3
Right calculation helped Rep., harmed Dem.: 0


Each of the nine times PolitiFact used the percentage error concept for a fact check of a Democrat in our sample, the Democrat received the calculation that provided the greatest benefit.  That's not counting the two ambiguous cases. [incorrectly phrased; ignore--ed.]

Where the type of calculation made a noticeable difference in the degree of error, the liberal point of view got the benefit 75 percent of the time.

Three times PolitiFact aided Republicans with the wrong calculation.  One case occurred in a fact check by an intern.  Another case helped out Republican Allen West by a measly 2 percentage points.  The third case helped a Republican criticizing the influence of a special interest group on an election.  In other words, the truth is even worse than the table makes it look.


Correction 5/29/2014:  Allen West received the benefit of two measly percentage points, not Herman Cain as I originally wrote.  Cain occurs in our data as a case of harm from the wrong calculation.


Data notes

Sunday, May 25, 2014

PolitiMath on uninsured Americans

A pseudonymous tipster pointed out problems with an old PolitiFact rating from 2009.

PolitiFact rated President Obama "Mostly True" for his statement that nearly 46 million Americans lack health insurance.

PolitiFact examined Census Bureau data confirming the president's figure, but noted it included 9.7 million non-citizens.  Our tipster pointed out that the number also included an estimated 14 million already eligible for government assistance in getting health insurance. 
The 2004 Census Current Population Survey (CPS) identified 44.7 million non-elderly uninsured in 2003. Blue Cross and Blue Shield Association contracted with the Actuarial Research Corporation (ARC) to provide a detailed analysis of the uninsured identified by the Census Bureau, which found:
  • Nearly one-third were reachable through public programs, such as Medicaid and the SCHIP program for children
  • One-fifth earn $50,000 or more annually and may be able to afford coverage
  • Almost half may have difficulty affording coverage because they earn less than $50,000 per year. Many of these people work for small firms that do not offer health coverage
Given that Obama was using the number of uninsured to promote the need for government intervention, PolitiFact should have mentioned the number of uninsured already able to take advantage of government help.  We're seeing that this year as at least 380,000 of those the administration says are gaining Medicaid through the ACA were already eligible before the law was passed. The administration can claim some credit for getting eligible persons signed up, but it's misleading to say all those signing up for Medicaid are gaining their coverage thanks to the ACA, just as it was misleading to use 14 million assistance-eligible Americans to show the need to offer more of the same kind of assistance.  The need was exaggerated, and PolitiFact failed to properly notice the size of the exaggeration.

The PolitiMath angle

We use the term PolitiMath for the relationship between PolitiFact's math equations and its "Truth-O-Meter" ratings.  Many journalists have trouble properly calculating percentage error, and in this item we find PolitiFact's former chief editor (Bill Adair) and its present chief editor (Angie Drobnic Holan) making a common mistake:
Getting back to Obama's statement, he said, "Nearly 46 million Americans don't have health insurance coverage today." That is the most recent number for the U.S. Census available, but he messes it up in one way that would tend to overcount the uninsured and in another way that would tend to undercount them.

It's an overcount because it counts noncitizens. Take out the 9.7 million noncitizens and the actual number is closer to 36 million. 

... So Obama is sloppy by saying it is for "Americans" but not accounting for the noncitizens, which leaves him off by about 22 percent.
PolitiFact's likely equation: (46 - 36) / 46 = 21.7 percent

It's the wrong equation, and this is not controversial.  It's basic math.  To find the percentage error, the accurate value belongs in the denominator.

The right equation: (46 - 36) / 36 = about 27.8 percent
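
A minimal sketch of both equations, using the rounded figures from the fact check:

    claimed = 46.0   # millions, Obama's figure
    actual = 36.0    # millions, roughly, after subtracting 9.7 million noncitizens

    politifact_way = (claimed - actual) / claimed * 100   # divides by the claimed figure
    correct_way = (claimed - actual) / actual * 100       # divides by the accurate figure

    print(round(politifact_way, 1))   # 21.7
    print(round(correct_way, 1))      # 27.8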

Marc Caputo of the Miami Herald, a PolitiFact partner paper, made the same mistake months ago and vigorously defended it on Twitter.  Caputo argued that it's okay to do the equation either way.  One can perform either calculation accurately, but the wrong equation still yields the wrong final figure.  Journalists need to consider the ramifications of having two different options for calculating an error percentage.  If one chooses the method in a way that favors one party over another, then a pattern of that behavior becomes evidence of political bias.

Caputo used the method more damaging to the Republican to whom he referred.

In Adair and Holan's case, guess which party received the benefit of the wrong equation?

It's a statistic worth following.