Showing posts with label PolitiMath.

Thursday, March 10, 2016

Bernie Sanders, PolitiMath and the price of water in Flint

In our PolitiMath series we look at how numerical errors correlate to PolitiFact's ratings.

PolitiFact's March 7, 2016 rating of Democratic presidential candidate Bernie Sanders (I-Vt.) suits our purposes well: Sanders claimed Flint residents pay three times as much for water as he pays in Burlington, Vt.

PolitiFact found Sanders was right if it used outdated water rates:
When we look at average annual bills from January 2015, Sanders’ 3-to-1 comparison is pretty close. But after August, Flint customers were paying a little more than twice as much as Burlington residents.
A judge's order in August 2015 rolled back water rates. Therefore, as PolitiFact notes, Flint residents now pay about twice what Burlington residents pay, counting Flint's charges for the home water meter. Burlington doesn't charge for the water meter.

To us, it's okay if Sanders wants to round up to get to his "three times" figure. So for PolitiMath purposes, we'll treat 2.5, the lowest ratio that rounds up to "three times," as Sanders' claim and calculate how much it exaggerates the difference in water rates between Flint and Burlington.

Going by PolitiFact's chart, "a little more than twice as much" turned out to be about 2.4, leading to a very modest exaggeration on Sanders' part: about 4 percent. Yes, allowing for rounding up helped Sanders immensely. That's okay. We'd handle this the same way for a conservative.
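For readers who want to follow the arithmetic, here's a minimal sketch of the percent-exaggeration calculation we use throughout these posts (the function name is ours; the 2.5 and 2.4 figures come from the discussion above):

```python
def percent_exaggeration(claimed, actual):
    """Percent error, with the accurate value as the baseline."""
    return (claimed - actual) / actual * 100

# 2.5 is the lowest ratio that rounds up to "three times";
# PolitiFact's chart puts the real ratio near 2.4.
print(round(percent_exaggeration(2.5, 2.4), 1))  # 4.2 -> "about 4 percent"
```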

PolitiFact gave Sanders a "Mostly True" rating, by the way, for a claim that was literally false.  

Deja vu.

Friday, January 29, 2016

More Clintonian PolitiMath

With our PolitiMath posts we look for correlations between numerical errors and PolitiFact's "Truth-O-Meter" ratings. Today's item looks at PolitiFact's rating of Democratic presidential candidate Hillary Rodham Clinton, who said she is the only candidate with a specific plan to fight ISIS.

PolitiFact said there were at least seven such plans among presidential candidates and gave Clinton a "False" rating:
While Clinton’s plan is more detailed, by some measurements, than those of other candidates, at least seven other candidates in both parties have released multi-point plans for taking on ISIS. Some plans -- such as those from Bush and Rubio -- approach Clinton’s in either length or degree of detail. In fact, there’s a significant degree of overlap between the agenda items in Clinton’s plan and in plans released by other candidates.

We don’t see strong evidence for Clinton’s claim that she’s the only member of the 2016 field with a "specific plan." We rate the claim False.
Clinton, using PolitiFact's estimates as a basis, underestimated the number of specific plans for defeating ISIS by 86 percent.

We found a close parallel to this case involving Republican candidate Rick Santorum (an 83 percent underestimation). PolitiFact rated Santorum "False" also.

Wednesday, January 27, 2016

Martin O'Malley, PolitiFact, inconsistency and PolitiMath

One of the things that makes PolitiFact's "Truth-O-Meter" so subjective is the lack of any apparent standard for weighing the literal truth of a claim against its underlying point.

PolitiFact gives us a fresh example with the "Mostly True" rating it bestowed on Democratic presidential candidate Martin O'Malley.

O'Malley said that in 1965 the average GM employee could pay for a year's college tuition with just two weeks' wages.

Have a gander at PolitiFact's summary conclusion:
O’Malley said, "Fifty years ago, the average GM employee could pay for a year of a son or daughter’s college tuition on just two weeks [sic] wages."

That’s not quite right -- it would have taken about four weeks of work at GM, not two, to pay for a year at the average four-year college in 1965, and more than that if you take account of taxes. Still, O’Malley has a point that the situation in 1965 was quite a deal compared to today, when a typical auto worker would have to work for 10 weeks in order to pay for a year of tuition at the average four-year college. We rate the statement Mostly True.
Note that PolitiFact cut O'Malley a break on payroll taxes, and even so his estimate understated the amount of work needed by half.

That's like saying the Empire State Building is 800 feet tall but getting a "Mostly True" because, hey, the underlying point is that it's a tall building.

PolitiMath

With PolitiMath we're looking for correlations between numerical errors and PolitiFact's ratings.

This O'Malley case caught our eye right away.



The story summary makes clear right away that O'Malley was way off with his figure. Despite that, he received the "Mostly True" rating.

What was O'Malley's percentage error?

Using PolitiFact's figures for a year's tuition at a four-year college in 1965 ($607) and for two weeks of GM wages, giving O'Malley a pass on payroll taxes ($297.60), O'Malley underestimated the cost by 51 percent. He cut it in half and received a "Mostly True" rating.
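A quick sketch of that calculation, using PolitiFact's two dollar figures (variable names are ours):

```python
tuition_1965 = 607.00   # PolitiFact: average four-year tuition in 1965
two_weeks_gm = 297.60   # two weeks of GM wages, payroll taxes ignored

# Underestimation measured against the accurate value.
print(round((tuition_1965 - two_weeks_gm) / tuition_1965 * 100))  # 51
```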

We're not aware of any other percentage error of this magnitude receiving a rating of "Mostly True" from PolitiFact. This might be the record.

Thursday, January 21, 2016

Hillary Clinton & PolitiMath

Our PolitiMath series of posts looks for correlations between numerical errors and PolitiFact's ratings.

The "False" rating PolitiFact gave to Democratic presidential candidate Hillary Clinton on Jan. 20, 2015 allows us to further expand our data set. Clinton said nearly all of the bills she presented as a senator from New York had Republican co-sponsors.

PolitiFact said her numbers were off.
We found at least one Republican co-sponsor in 4 of 7 resolutions or continuing resolutions (57 percent) but only 9 of 37 bills (24 percent).

Overall, that's 13 out of 44, or just under 30 percent.

Focusing on the 18 bills that Clinton sponsored and brought to the Senate floor for consideration, four had at least one Republican co-sponsor (22 percent) ...
Note we are dealing with a slightly mushy comparison. What is "nearly all"? We think setting a fairly low bar gives us the most useful comparison to other ratings.

Let's say 80 percent of her bills would count as "nearly all." We think that's a fairly low bar to clear.

Using the best figure PolitiFact produced on Clinton's behalf, she exaggerated her claim by 167 percent. Using the figure reflecting a more literal interpretation of her words (the 22-percent figure), Clinton exaggerated by 264 percent.
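Here's the arithmetic behind both figures, a minimal sketch assuming our 80 percent bar for "nearly all" (the underlying percentages are PolitiFact's):

```python
def percent_exaggeration(claimed, actual):
    return (claimed - actual) / actual * 100

nearly_all = 80  # our low bar for "nearly all," in percent

print(round(percent_exaggeration(nearly_all, 30)))  # 167 (13 of 44 bills)
print(round(percent_exaggeration(nearly_all, 22)))  # 264 (4 of 18 bills)
```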

PolitiFact said Clinton's claim "isn't even close" to the truth. But we know it wasn't ridiculously far off, otherwise PolitiFact would have awarded Clinton a "Pants on Fire" rating.

Right?

Saturday, December 26, 2015

PolitiMath from PolitiFact New Hampshire

What's new with PolitiMath?

PolitiFact New Hampshire, lately the Concord Monitor's partnership with PolitiFact, gives us a double dose of PolitiMath with its July 2, 2015 fact check of New Hampshire's chief executive, Governor Maggie Hassan (D).

Hassan was the only Democrat to receive any kind of false rating ("False" or "Pants on Fire") from PolitiFact New Hampshire in 2015. PolitiFact based its ruling on a numerical error by Hassan and added another element of interest for us by characterizing Hassan's error in terms of a fraction.

What type of numerical error earns a "False" from PolitiFact New Hampshire?

PolitiFact summarizes the numbers:
In her state of the state address, Hassan said that "6,000 people have already accessed services for substance misuse" through the state’s Medicaid program.

There is no question that substance abuse in the state is a real and pressing problem, and the statistics show that thousands have sought help as a result of the state’s expanded Medicaid program. But Hassan offered (and later corrected) a number that simply wasn’t accurate. The real total is closer to 2,000 -- about one-third the amount she cited.

We rate her claim False.
Described as a percentage error using PolitiFact's figures, Hassan's mistake amounts to an exaggeration of about 230 percent. PolitiFact gave Hassan no credit for her underlying point.

In our PolitiMath series we found the closest match for this case from PolitiFact Oregon. PolitiFact Oregon said conservative columnist George Will exaggerated a figure--by as much as 225 percent by our calculations. The figure PolitiFact Oregon found was uncertain, however, so Will may have exaggerated considerably less using the range of numbers PolitiFact Oregon provided.

In any case, PolitiFact Oregon ruled Will's claim "False." PolitiFact Oregon gave Will no credit for his underlying argument, just as PolitiFact New Hampshire did with Gov. Hassan.

Percent Error and Partisanship

One of our research projects combs PolitiFact's fact checks for a common error journalists make. We reasoned that journalists would prove less likely to make such careless errors at the expense of the party they prefer. Our study produced only a small set of examples, but the percentage of errors was high and the errors favored Democrats.

PolitiFact New Hampshire's fact check of Gov. Hassan deserves scrutiny for this error, giving us the second mathematical element of note.

PolitiFact could have expressed Hassan's mistake using a standard percentage error calculation like the one we used. We calculated a 230 percent error. But PolitiFact New Hampshire did not use the correct figure (1,800) as the baseline for calculating error. Instead, the fact checkers used the higher, incorrect figure (6,000) as the baseline for comparison: "about one-third the amount she cited."

Using the number "one-third" frames Hassan's error nearer the low end. "One-third" doesn't sound so bad, numerically. Readers with slightly more sophistication may reason that the "one-third" figure means Hassan was off by two-thirds.

Sometimes using the wrong baseline makes the error look bigger and sometimes it makes the error look smaller. In this case the wrong baseline frames Hassan's mistake as a smaller error. The Democrat Hassan gains the benefit of PolitiFact's framing.
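A short sketch of the two framings, using PolitiFact's 6,000 and its corrected figure of about 1,800:

```python
claimed, actual = 6000, 1800  # Hassan's figure vs. the corrected figure

# Standard percent error: the accurate value is the baseline.
print(round((claimed - actual) / actual * 100))  # 233 -> "about 230 percent"

# PolitiFact New Hampshire's framing: the errant value is the baseline.
print(round(actual / claimed, 2))  # 0.3 -> "about one-third the amount cited"
```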

Wednesday, May 27, 2015

Bernie Sanders, PolitiFact, PolitiMath

We do a "PolitiMath" evaluation of PolitiFact's fact checks where numerical errors ought to have a powerful bearing on PolitiFact's "Truth-O-Meter" ratings. We're interested in how percentage error impacts the differences between "Pants on Fire," "False," "Mostly False" and so on.

An older item on Sen. Bernie Sanders (I-Vt.) caught our eye in the midst of one of PolitiFact's bogus "report card" stories. Sanders said the United States spends twice as much per capita on health care as any other nation on earth.

PolitiFact found Sanders was off:
According to the 2009 edition of WHO's World Health Statistics report, which uses figures from 2006, health care spending in the United States — both public- and private-sector — amounted to $6,719 per capita. Ranking next were Luxembourg and Monaco at $6,506 and $6,353 per capita, respectively. All told, either 11 or 15 countries told the WHO they spent more than $3,360 per capita, the point at which the United States no longer doubles their spending. (We provide two possible figures here because the WHO offers both raw figures and statistics adjusted for currency valuations.) The other nations that rank near the top with the United States include Austria, Belgium, Canada, Denmark, France, Germany, Iceland, Ireland, the Netherlands, Norway, Sweden and Switzerland, in addition to tiny Malta and San Marino.
We'd have gone to bat for Sanders if only OECD nations were counted. The United States spends over 50 percent more per capita than its nearest OECD rival, and 50 percent is a reasonable floor for rounding up to a "twice as much" claim. But by the WHO's figures the United States spends only 3.3 percent more per capita than Luxembourg, a far cry from 50 percent.

Sanders exaggerated the truth by at least 1,400 percent, using the figures from Luxembourg as the counterexample to his claim.
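A sketch of how we get the 1,400 percent figure from the WHO numbers quoted above (the 50 percent floor is our rounding allowance):

```python
us, luxembourg = 6719, 6506  # WHO per-capita health spending, 2006 figures

actual_excess = (us - luxembourg) / luxembourg * 100  # about 3.3 percent
claimed_floor = 50.0  # lowest excess that rounds up to "twice as much"

print(round((claimed_floor - actual_excess) / actual_excess * 100))
# 1427 -> "at least 1,400 percent"
```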

PolitiFact's rating of Sanders? "False."

Sunday, May 3, 2015

PunditFact's PolitiMath on the GDP of 29 countries

We do "PolitiFact" stories to examine how PolitiFact's ratings correlate to percentage error. Claims where the ratings seem based purely or mainly on the degree of error serve as the best case studies. PunditFact gives us a great study example with its article on the claim that a boxing match would generate more revenue than the GDP of 29 different countries.

PunditFact ruled that claim "Pants on Fire," finding only six countries with a GDP lower than that predicted for the fight: $400 million.

Jim Lampley's figure of 29 exaggerates PunditFact's total by 383 percent. That substantial error, we suppose, justifies the "Pants on Fire" rating.

On the other hand, PunditFact gave Cokie Roberts a "Half True" rating for a claim she exaggerated by over 9,000 percent. PunditFact gave Roberts credit for her underlying point, that the risk of getting murdered in Honduras is greater than for New York City.

Apparently Lampley has no valid underlying point that the Mayweather-Pacquiao fight would generate a great deal of revenue.

You be the judge.


Update May 3, 2015

While researching and wondering how Lampley ended up with 29 countries producing a GDP under $400 million, we noticed a perhaps-coincidental statistic: The World Bank's 2013 GDP rankings have 29 countries with a GDP above $400 billion.


Lampley's claim may have started with this statistic. If he mixed up millions with billions and mistook the top of the list for the bottom, his claim makes perfect sense, in a way.


Correction May 4, 2015: Fixed spelling of "Pacquiao."

Thursday, April 23, 2015

PolitiFact Georgia, PolitiMath and the gender pay gap

On April 22, 2015, PolitiFact Georgia found it "Mostly False" that women make only 78 percent of what men make for doing the same work.

PolitiFact Georgia reported that the claim, from a Buzzfeed video, was based on statistics that, among other faults, did not ensure the men and women were doing the same work.

At Zebra Fact Check I've published an in-depth treatment of the way mainstream fact checkers mishandle the gender pay gap. But here we'll look narrowly at how PolitiFact Georgia applies a "Mostly False" rating to a gross exaggeration. Our "PolitiMath" stories explore the relationship between percentage error and PolitiFact's ratings, so PolitiFact Georgia's story makes a good subject.

PolitiFact's highest estimate of the wage gap after controlling for the type of job and some other factors was about 7 percent:
(T)he American Association of University Women that controlled for college major, occupation, age, geographical region, hours worked and more, and found there was still a 7 percent wage gap between male and female college grads a year after graduation.
Using that high-end estimate, the Buzzfeed video exaggerated by no less than 214 percent. There's precedent for liberals receiving ratings of "Mostly False" or better for exaggerations that large and larger. On the other hand, PolitiFact Wisconsin gave a state Democrat a "False" rating for an exaggeration of 114 percent.
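The 214 percent figure comes out of the same percent-error arithmetic, sketched here with PolitiFact Georgia's high-end controlled estimate:

```python
claimed_gap = 100 - 78  # the video's 78-cents figure implies a 22-point gap
actual_gap = 7          # AAUW estimate after controlling for job type, etc.

print(round((claimed_gap - actual_gap) / actual_gap * 100))  # 214 percent
```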

At least we know Buzzfeed's exaggeration is not the largest to receive a rating of "Mostly False" or higher.

If anyone can find a statement from a Republican or conservative where a figure exaggerated by more than 100 percent received a rating of "Mostly False" or higher from PolitiFact, we'd love to hear about it. We haven't turned up anything like that yet.

Thursday, February 19, 2015

Fifty shades of "Half True"

PolitiFact's founding editor, Bill Adair, has said the truth is often not black and white, but gray:
Our Truth-O-Meter is based on the concept that the truth in politics is often not black and white, but shades of gray.
With this post we'll look at an example of PolitiFact shading the truth with its middle-ground "Half True" rating.

Justice Roy Moore, conservative: "Half True"


The first example comes from Feb. 13, 2015. Alabama Supreme Court Justice Roy Moore said Alabama hadn't changed its mind about gay marriage since passing a law in 2007 defining marriage in heterosexual terms. Moore was answering a claim from CNN host Chris Cuomo that people in Alabama had changed their views on gay marriage. PolitiFact reported the key exchange:
"Times have changed as they did with slavery," Cuomo said Feb. 12 on New Day. "The population no longer feels the same way. And even in your state, people no longer feel the same way."

Moore held firm that marriage was defined as between a man and a woman, and said, "81 percent as recently as 2006 said it was the definition. They haven’t changed their opinion."
PolitiFact framed its fact check in terms of a contest between the statements from Cuomo and Moore. If support for gay marriage had changed in Alabama, then Moore's claim was not plainly true.

PolitiFact flubbed its interpretation of Moore's response. Moore was not arguing that no change had occurred in opinion polls. Moore referred to the percentage of Alabama voters who approved the heterosexual marriage definition in 2006. The voters had not changed their minds in that the people of Alabama had not moved to change the law they overwhelmingly approved. PolitiFact noted that Moore was referring to that vote, but somehow failed to put the pieces of the puzzle together. Moore's Truth-O-Meter rating: "Half True."

Even if PolitiFact's wrong interpretation were correct, Moore would be off by a scant 14 percent. Democratic Sen. Sheldon Whitehouse once received a "Mostly True" rating for a claim that was off by 27 percent.


Rep. Pete DeFazio (D-Ore.): "Half True"


Our second example comes from a PolitiFact fact check published on Feb. 17, 2015. Rep. Peter DeFazio (D-Ore.) blamed genetically modified crops for the impending extinction of the monarch butterfly.

PolitiFact quotes DeFazio:
"We certainly know there is going to be secondary harm to the environment," he said. "In fact, monarch butterflies are becoming extinct because of this sort of dumping, (the) huge increase in pesticides’ use because of these modified organisms."
DeFazio got a thing or two wrong. Monarch butterflies aren't going extinct. The causal connection between the increased use of herbicides and the decreased wintering population of monarch butterflies has not yet been scientifically established. And, though PolitiFact kindly ignored this mistake, DeFazio referred to "pesticides" instead of "herbicides." [Update Aug. 17, 2018: In fact "pesticides" can encompass plant pests as well as animal pests.] The expert PolitiFact cited mentioned the effects of herbicides on the monarch caterpillar's favored food, milkweed. PolitiFact apparently didn't investigate the effect of pesticide dumping on monarch butterfly populations.

So DeFazio got nothing right, but PolitiFact accepted his extinction claim as a mere exaggeration of the declining wintering population of monarch butterflies. The final ruling: "Half True."


It almost takes a masochist to read PolitiFact's fifty shades of gray.

Tuesday, November 11, 2014

Fact-checking while blind, with PolitiMath

One of the things we would predict from biased journalists is a forgiving eye for claims with which the journalist sympathizes.

Case in point?

A Nov. 11, 2014 fact check from PolitiFact's Louis Jacobson and intern Nai Issa gives a "True" rating to a Facebook meme claiming Congress has an 11 percent approval rating while, in 2014, 96.4 percent of incumbents successfully defended their seats.

PolitiFact found the claim about congressional approval was off by about 20 percent and the one about the percentage of incumbents was off by at most 1.5 percent (percentage error calculations ours). So, in terms of PolitiMath, the average error for the two claims was 10.75 percent, yet PolitiFact ruled the claim "True." The ruling means the nearly 11 percent average error is insignificant in PolitiFact's sight.

Aside from the PolitiMath angle, we were intrigued by the precision of the Facebook meme. Why 96.4 percent and not an approximate number like 96 or 97? And why, given that PolitiFact often excoriates its subjects for faulty methods, wasn't PolitiFact curious about the fake precision of the meme?

Even if PolitiFact wasn't curious, we were. We looked at the picture conveying the meme and saw the explanation in the lower right-hand corner.

Red highlights scrawled by the PolitiFact Bias team. Image from PolitiFact.com

It reads: "Based on 420 incumbents who ran, 405 of which kept their seats in Congress."

PolitiFact counted 415 House and Senate incumbents, including three who lost primary elections. Not counting undecided races involving Democrats Mark Begich and Mary Landrieu, incumbents held 396 seats.

So the numbers are wrong, using PolitiFact's count as the standard of accuracy, but PolitiFact says the meme is true.

It was fact-checked, after all.

Thursday, November 6, 2014

PunditFact PolitiFail on Ben Shapiro, with PolitiMath

On Nov. 6, 2014 PunditFact provided yet another example of why the various iterations of PolitiFact do not deserve serious consideration as fact checkers (we'll refer to PolitiFact writers as bloggers and the "fact check" stories as blogs from here on out as a considered display of disrespect).

PunditFact reviewed a claim by Truth Revolt's Ben Shapiro that a majority of Muslims are radical. PunditFact ruled Shapiro's claim "False" based on the idea that Shapiro's definition of "radical" and the numbers used to justify his claim were, according to PunditFact, "almost meaningless."

Lost on PunditFact was the inherent difficulty of ruling "False" something that's almost meaningless. Definite meanings lend themselves to verification or falsification. Fuzzy meanings defy those tests.

PunditFact's blog was literally filled with laughable errors, but we'll just focus on three for the sake of brevity.

First, PunditFact faults Shapiro for his broad definition of "radical," but Shapiro explains very clearly what he's up to in the video where he made the claim. There's no attempt to mislead the viewer and no excuse to misinterpret Shapiro's purpose.



Second, PunditFact engages in its own misdirection of its readers. In PunditFact's blog, it reports how Muslims "favor sharia." Pew Research explains clearly what that means: Favoring sharia means favoring sharia as official state law. PunditFact never mentions what Pew Research means by "favor sharia."

Do liberals think marrying church and state is radical? You betcha. Was PunditFact deliberately trying to downplay that angle? Or was the reporting just that bad? Either way, PunditFact provides a disservice to its readers.

Third, PunditFact fails to note that Shapiro could easily have increased the number of radicalized Muslims in his count. He drew his totals from a limited set of nations for which Pew Research had collected data. Shapiro points this out near the end of the video, but PunditFact either didn't notice or else determined its readers did not need to know.

PolitiMath


PunditFact used what it calls a "reasonable" method of counting radical Muslims to supposedly show how Shapiro engaged in cherry-picking. We've pointed out at least two ways PunditFact erred in its methods, but for the sake of PolitiMath we'll assume PunditFact created an apt comparison between its "reasonable" method and Shapiro's alleged cherry-picking.

Shapiro counted 680 million radical Muslims. PunditFact counted 181.8 million. We rounded both numbers off slightly.

Taking PunditFact's 181.8 million as the baseline, Shapiro exaggerated the number of radical Muslims by 274 percent. That may seem like a big enough exaggeration to warrant a "False" rating. But it's easy to forget that the bloggers at PunditFact gave Cokie Roberts a "Half True" for a claim exaggerated by about 9,000 percent. PunditFact detected a valid underlying argument from Roberts. Apparently Ben Shapiro has no valid underlying argument that there are plenty of Muslims around who hold religious views that meet a broad definition of "radical."

Why?

Liberal bias is as likely an explanation as any.


Addendum:

Shapiro makes some of the same points we make with his own response to PunditFact.

Saturday, September 20, 2014

PolitiMath at PolitiFact New Hampshire

PolitiFact New Hampshire provides us an example of PolitiMath with its Sept. 19, 2014 rating of Sen. Jeanne Shaheen's ad attacking Republican challenger Scott Brown.

The ad claims Brown ranked first in receiving donations from "Wall Street," to the tune of $5.3 million.

PolitiFact New Hampshire pegged the figure reasonably attributable to "Wall Street" lower than $5.3 million:
Brown’s total haul from these six categories was about $4.2 million, or about one-fifth lower than what the ad said.
Note that national PolitiFact's Louis Jacobson, writing for PolitiFact New Hampshire, calculates the difference between the two figures using the errant figure as the baseline. That method sends the message that Shaheen's ad was off on the number by about one-fifth, or in error by about 20 percent. Calculated properly, the figure in Shaheen's ad represents an exaggeration (that is, an error) of 26 percent.

Curiously, PolitiFact doesn't bother reaching a conclusion on whether it's true that Brown ranks number one in terms of Wall Street giving. Jacobson says Brown led in four of the six categories he classified as Wall Street, but kept mum about where Brown ranked with the figures added up.

That makes it difficult to judge whether the 26 percent error implied by PolitiFact New Hampshire's $4.2 million figure accounts for the "Mostly True" rating all by itself.

Capricious.

For comparison, we have a rating of President Obama where the PolitiFact team made a similar mistake, calculating the error as a percentage of the errant number. In that case, Obama gave a figure that was off by 27 percent and received a rating of "Mostly True."


Afters

After a little searching we found a "Mostly True" rating of a conservative where the speaker used the wrong figure. Conservative pundit Bill Kristol said around 40 percent of union members voted for the Republican presidential candidate in 2008. The actual number was 37 percent. Kristol was off by about 8 percent. So "Mostly True."

Saturday, September 6, 2014

PolitiFact Wisconsin serves up more baloney on Obama cutting the deficit in half

PolitiFact defines its "True" rating as "The statement is accurate and there’s nothing significant missing."

Thus we greet with derisive laughter PolitiFact's Sept. 5, 2014 bestowal of a "True" rating on President Obama's declaration "We cut our deficits by more than half."

Curious about what "we" cut the deficits? PolitiFact Wisconsin is here to help:

"We" is "he": Obama (image from PolitiFact.com)
"We" is "he." Obama did it. Obama cut the national deficit in half. The statement is accurate and there's nothing significant missing. Right?

Well, no. It's a load of hooey that PolitiFact has consistently helped Obama sell.

Here are some insignificant things PolitiFact Wisconsin found:
  1. "When you use Obama's methodology to compare the deficit Obama inherited -- the 2009 result minus the stimulus package to that in 2013 --  the drop in the deficit is slightly under half, at 48%."
  2.  "'The economic recovery, wind-down of stimulus, reversal of TARP/Fannie transactions, and lower interest rates are really what has caused our deficit to fall so much,' Goldwein told us. He mentioned cuts in discretionary spending as well."
  3.  "(Ellis) and Goldwein emphasized that while the deficit has been halved, it’s been halved from a skyscraping peak."
The second point is significant because TARP and other bailout spending was heavily focused on FY2009. As that money is repaid, it counts as lower spending ("negative spending"). The government has turned a profit on the TARP bailouts, so a fair bit of the "skyscraping peak" came right back to the government, making its later spending appear lower.

Here are some insignificant missing things PolitiFact Wisconsin didn't bother to mention:
  1. PolitiFact claims it takes credit and blame into account. But Obama carries little (if any) personal responsibility for reducing the deficit by half.
  2. Remember those obstructionist Republicans who block the Democrats' every attempt to pass jobs bills and keep critically important entitlement benefits flowing?
  3. PolitiFact's expert, Goldwein, mentioned cuts in discretionary spending. Way to go, Obama! Oh, wait, that was largely a result of the sequestration that the president blames on Republicans.
So, yeah, the deficit was cut in half. But given the nature of the FY2009 deficit spike, cutting the deficit in half by the end of Obama's first term in office should have been a layup. It wasn't a layup because the economy stayed bad. Democrats would have continued spending -- "investing," that is -- in jobs and education if Republicans hadn't gained control of the House of Representatives in 2010.

Obama takes a set of circumstances largely beyond his control and tries to fashion from it a feather for his own cap.

To PolitiFact Wisconsin, none of that is significant. What a joke.


Afters

For more on Obama's effect on the deficit and debt, see the following Zebra Fact Check articles:

FactCheck.org says federal spending has increased ‘far more slowly’ under Obama than under Bush

Is the federal deficit ‘falling at fastest rate in 60 years’?


Edit 11/08/2014 - Added link to original PFW article in second paragraph - Jeff

Tuesday, August 26, 2014

Marc Lamont Hill and PolitiMath

A PunditFact rating of CNN pundit Marc Lamont Hill drew our attention today for its PolitiMath content.

PolitiMath takes place when math calculations appear to bear on whether a figure receives one "Truth-O-Meter" rating instead of another. In this case, Hill received a "False" rating for claiming an unarmed black person is shot by a cop every 28 hours.

PunditFact found Hill reached his conclusion using the numbers for black persons armed or unarmed. The total figure for both was 313. The figure for unarmed black people was 136.  The calculation is uncomplicated. Taking the number of hours in a 365-day year, we get 8760. Divide 8760 by 313 and we get Hill's 28-hour figure. Use what PunditFact said was the correct figure and we get 64 hours (8760/136).
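A sketch of that conversion, using the figures quoted from PunditFact:

```python
hours_per_year = 365 * 24  # 8760

total_shot = 313    # armed and unarmed black persons combined
unarmed_shot = 136  # PunditFact's unarmed-only figure

print(round(hours_per_year / total_shot))    # 28 -> Hill's interval
print(round(hours_per_year / unarmed_shot))  # 64 -> PunditFact's interval
```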

Hill exaggerated the frequency of an unarmed black person dying from a police shooting by 124 percent.

We're certainly not saying that PolitiFact is in any way consistent with how it classifies errors by percentage, but for comparison Florida lawmaker Will Weatherford made a statistical claim that was off by about 49 percent and received a "False" rating. Democrat Gerry Connolly, on the other hand, managed to wring a "Mostly False" rating out of a statistic that was off by about 45 percent.

Perhaps this is just science at work. Given reality's liberal bias, it may make sense to grade errors of the same percentage more harshly where they affect liberally-biased truths. Fact checkers could be guilty of false equivalency by acting as though the truth is simply objective.

Sunday, August 3, 2014

PolitiMath at PolitiFact Virginia

Guided selection?
Earlier today, we reviewed the percentage error involved in a pair of PolitiFact ratings.

On July 16, PolitiFact's PunditFact rated Cokie Roberts "Half True" for a numerical claim that was exaggerated by about 9,000 percent.  PunditFact justified the rating based on Roberts' underlying argument, that the risk of being murdered in Honduras is greater than the risk in New York City.

On July 31, PolitiFact Oregon rated George Will "False" for a numerical claim that was off by as much as 225 percent.  Will claimed healthcare companies make up 13 of the top 25 employers in Oregon, and occupy the top three positions on top of that.  The former claim was off by as much as 225 percent and the latter claim was off by 300 percent or so.  PolitiFact found Oregon's largest employer was a healthcare firm.

Today we take fresh note of a July 14 fact check from PolitiFact Virginia.

PolitiFact Virginia tested the claim of Democrat Mark Sickles that 70 percent of Virginia's Medicaid budget pays for care for seniors in nursing homes.

PolitiFact Virginia said the true number was 9.7 percent.

From that number, we calculate a percentage error of 622 percent (PolitiFact can't be trusted with that calculation).

PolitiFact Virginia gives Sickles no credit for his underlying argument and rates his claim "False."


What determines whether PolitiFact rates the underlying point along with the literal claim?

How big does an error need to get before a claim warrants a "Pants on Fire" rating?


Clarification 8-14-2014:
Changed "Will claimed healthcare companies.make up 13 of the top 25, and occupy the top three positions on top of that" to Will claimed healthcare companies.make up 13 of the top 25 employers in Oregon, and occupy the top three positions on top of that."

PolitiMath at PolitiFact Oregon

Leaning.
PolitiFact Oregon provides us with a great item to compare to our July 30 examination of mathematics at PolitiFact's PunditFact project.

In the PunditFact item, we noted that Cokie Roberts used a probability comparison that was off by almost 9000 percent and received a "Half True" rating from PolitiFact, thanks to her underlying point that getting murdered in Honduras was more likely than in New York City.

On July 31, PolitiFact Oregon published a fact check of George Will.  Will wrote a few things about how prominently health care providers figure in Oregon's list of top job providers.  Will was making the case for a medical doctor in the senate, Republican candidate Monica Wehby.

PolitiFact Oregon rated Will's claim "False":
Will, in a column supporting the candidacy of Republican Senate candidate Monica Wehby, included a link purporting to show Oregon’s 25 largest employers. The chart, he wrote, indicated that the dominance of large health care providers in Oregon -- the three largest employers and 13 of the top 25 in the state fit that niche, according to the chart -- make Dr. Wehby the best choice for the job.

Calls and emails to many of the companies listed, however, indicate that the chart’s numbers are way off, often wildly so. The top three employers on the list Will used are, in fact, a single entity. And by our count, the highest number of health care providers that can rank among Oregon’s top 25 employers is nine, not the 13 Will cited.

We rate the claim False.
Will was off by as much as 225 percent (using four as the number of health care providers in the top 25), apparently totally overwhelming any underlying point he had about health care providers employing quite a few Oregonians.

After all, it's way too much to ask for consistency from mainstream media fact checkers.

Incidentally, we found healthcare/social assistance combined make up about 12.6 percent of all jobs in Oregon (as of June 2014, seasonally adjusted).  That's about 15.1 percent of the private workforce.

Wednesday, July 30, 2014

More of PunditFact's PolitiMath

Occasionally we have fun looking at how the degree of inaccuracy impacts PolitiFact's "Truth-O-Meter" ratings.  Naturally the same evaluations apply to PunditFact, which uses the same rating system as well as, we suspect, a similarly radical inconsistency in applying the ratings.

Today we're looking at PunditFact's July 16, 2014 "Half True" rating of Cokie Roberts' comparison of the murder risk in Honduras with that risk in New York City.

Roberts was way off with her figures, and PunditFact surmised Roberts may have conflated the yearly risk of being murdered in Honduras with the annual risk of being murdered in New York City:
(T)he chances of getting murdered in Honduras are 1 in 1100 per year compared to 1 in 20,000 per year in New York. Over a lifetime, the chances of being murdered in Honduras are 1 in 15, compared to 1 in 250 in New York.

That makes Honduras more dangerous but not nearly to the levels Roberts described.

What Rattner may have done, and what Roberts repeated, was compare figures approaching the chances of being murdered in New York in one year (1 in 20,000).
Acting charitably toward Roberts, the risk of getting murdered in Honduras is at most 18 times greater than in New York City.  Roberts' numbers imply the risk is about 1,780 times greater (and we're doing Roberts a favor by rounding that figure down).

These figures mean Roberts exaggerated the difference in risk by about 9,789 percent, which is another way of saying her figures magnified the difference in risk by almost 100 times.
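In sketch form, using the 18-to-1 ratio as the accurate value and the 1,780-to-1 ratio Roberts' figures imply:

```python
actual_ratio = 18     # Honduras vs. NYC murder risk, charitably computed
implied_ratio = 1780  # the ratio Roberts' figures imply (rounded down)

exaggeration = (implied_ratio - actual_ratio) / actual_ratio * 100
print(round(exaggeration))  # 9789 -> roughly a 100-fold magnification
```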

Throwing darts while blindfolded?
That's a high level of inaccuracy.

For comparison, PolitiFact rated President Obama "False" for overstating the ACA's effect on the number of people obtaining insurance for the first time by a mere 288 percent.  We thought that degree of exaggeration might qualify Obama for a "Pants on Fire" rating given PolitiFact's history.

Using PunditFact's application of principle, however, perhaps Obama should have received a "Half True" in recognition of his point that some people were getting insurance for the first time.

It goes without saying that Republicans tend to face an even tougher time receiving consideration of their underlying points.

Examples like this show us the "Truth-O-Meter" has little to do with fact checking and a great deal to do with editorializing.

Sunday, June 29, 2014

The clueless guru?

Late last month, we published a limited study on PolitiFact's execution of a simple math problem: calculating percentage error.  Using search parameters that suitably simulate randomness, we found 14 cases where PolitiFact explicitly or implicitly performed a percentage error equation.  PolitiFact used the wrong equation an astounding nine times.  Two of the cases were ambiguous.  Those two we gave the benefit of the doubt.

We tweaked PolitiFact over this failure on June 14 after Neil Brown, editor and vice president of PolitiFact's parent the Tampa Bay Times, called PolitiFact editor Angie Holan a "guru of best practices" in a June 9 tweet.  We said a guru of best practices would do percent error calculations the right way.

On Friday, June 27, 2014, PolitiFact doubled down on its methods in a fact check of President Obama.  President Obama said child care costs more than college tuition in 31 states.  PolitiFact, with veteran staffers Louis Jacobson writing and Holan editing, said the president was cherry picking and eventually gave him a "Mostly True" rating.

PolitiFact's explanation of Obama's cherry-picking caught our attention:
It’s worth noting some clarifying language in the report --"for an infant in center-based care" -- that is absent from Obama’s statement. This is actually the highest-cost example of the four cases the report looked at.

If you look at the cost for a 4-year-old in center-based care -- rather than an infant -- it costs more than in-state college tuition and fees in 19 states. That’s 39 percent fewer states compared with statistics for infant care. (Generally, care for infants is more intensive, so costs tend to go down as children get older.)

The report also looked at costs for home-based care, which is often a less expensive option for parents. For infants, the cost of home-based care is higher than college costs in 14 states. That’s a 55 percent reduction in states compared to Obama’s 31.

And for 4-year-olds, the cost of home-based care is higher than college in 10 states. That’s a 68 percent reduction in states compared to Obama’s 31.
What's the problem?  One could argue there's no right figure here to use as a baseline for a percent error calculation, except the same principle holds true for calculating a percentage change from a baseline.  And in this fact check we've got a charge of cherry-picking.  Cherry-picking creates a favorable impression compared to alternative baselines.  Calculating the exaggeration above the baseline is exactly like calculating the percentage error.

And guess what?  PolitiFact consistently performs the calculation incorrectly in a way that makes Obama look better.
  1. For the 4-year-old group, PolitiFact said the cost was higher for child care in 19 states, 39 percent fewer than the figure Obama used:  31.  Do the calculation using 19 as the baseline and the result tells the effect of Obama's cherry-picking.  The real exaggeration Obama achieves is 63 percent.  PolitiFact's method underestimates the exaggeration by 38 percent (24 percentage points).
  2. For home-based care of an infant, the result follows the same pattern.  PolitiFact said the difference was a 55 percent reduction.  In truth, Obama's cherry-picking inflated the number of states by 121 percent.  PolitiFact's calculation reduced Obama's exaggeration by about 55 percent.
  3. For home-based care of 4-year-olds we see the same story again.  PolitiFact called the difference "a 68 percent reduction."  Using the cost of home-based care for 4-year-olds as the baseline, we find Obama's cherry-picking exaggerates the number of states by 210 percent.  PolitiFact reduces Obama's exaggeration in this case by 68 percent.
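Here's a compact sketch of all three cases, computing the change both ways (state counts from PolitiFact's fact check):

```python
claimed = 31  # states, for infants in center-based care (Obama's pick)

cases = [(19, "4-year-old, center-based"),
         (14, "infant, home-based"),
         (10, "4-year-old, home-based")]

for actual, label in cases:
    errant_baseline = (claimed - actual) / claimed * 100  # PolitiFact's way
    true_baseline = (claimed - actual) / actual * 100     # correct way
    print(f"{label}: {errant_baseline:.0f}% vs. {true_baseline:.0f}%")
# prints 39% vs. 63%, 55% vs. 121%, 68% vs. 210%
```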
The group Obama chose to cherry-pick provided by far the largest group of states.  Any averaging with the other figures from PolitiFact's source, Child Care Aware of America, will lower the figure, especially if we also consider the school-age category that PolitiFact fails to mention.  The costs for that group were lower than for infants and 4-year-olds.

Rigged.
The percentage figures PolitiFact provides do nothing to explain the effects of Obama's cherry-picking.  Instead, they arbitrarily tell the relationship in size between two numbers, doing it in a way that ultimately misleads readers.

It's easy to see what happened with Obama's misstatement.  Obama's figure matches exactly the figure Child Care Aware of America published for four-year-olds receiving child-care services at a center.  Except Obama described the figure incorrectly.  An average for all three groups, considering both center-care and home-care, would render Obama's statement literally false.  He'd be just another politician who described a study using the wrong words, except PolitiFact goes easier on some than it does on others.  Obama's statement is literally false (off by no less than 63 percent).  It misleads his audience.  He gets a "Mostly True" from PolitiFact.

If these are its best practices then PolitiFact needs a new guru.

Sunday, May 25, 2014

PolitiMath on uninsured Americans

A pseudonymous tipster pointed out problems with an old PolitiFact rating from 2009.

PolitiFact rated President Obama "Mostly True" for his statement that nearly 46 million Americans lack health insurance.

PolitiFact examined Census Bureau data confirming the president's figure, but noted it included 9.7 million non-citizens.  Our tipster pointed out that the number also included an estimated 14 million already eligible for government assistance in getting health insurance. 
The 2004 Census Current Population Survey (CPS) identified 44.7 million non-elderly uninsured in 2003. Blue Cross and Blue Shield Association contracted with the Actuarial Research Corporation (ARC) to provide a detailed analysis of the uninsured identified by the Census Bureau, which found:
  • Nearly one-third were reachable through public programs, such as Medicaid and the SCHIP program for children
  • One-fifth earn $50,000 or more annually and may be able to afford coverage
  • Almost half may have difficulty affording coverage because they earn less than $50,000 per year. Many of these people work for small firms that do not offer health coverage
Given that Obama was using the number of uninsured to promote the need for government intervention, PolitiFact should have mentioned the number of uninsured already able to take advantage of government help.  We're seeing that this year as at least 380,000 of those the administration says are gaining Medicaid through the ACA were already eligible before the law was passed. The administration can claim some credit for getting eligible persons signed up, but it's misleading to say all those signing up for Medicaid are gaining their coverage thanks to the ACA, just as it was misleading to use 14 million assistance-eligible Americans to show the need to offer more of the same kind of assistance.  The need was exaggerated, and PolitiFact failed to properly notice the size of the exaggeration.

The PolitiMath angle

We use the term PolitiMath for the relationships between PolitiFact's math equations and its "Truth-O-Meter" ratings.  Many journalists have trouble properly calculating percentage error, and in this item we find PolitiFact's former chief editor (Bill Adair) and its present chief editor (Angie Drobnic Holan) making a common mistake:
Getting back to Obama's statement, he said, "Nearly 46 million Americans don't have health insurance coverage today." That is the most recent number for the U.S. Census available, but he messes it up in one way that would tend to overcount the uninsured and in another way that would tend to undercount them.

It's an overcount because it counts noncitizens. Take out the 9.7 million noncitizens and the actual number is closer to 36 million. 

... So Obama is sloppy by saying it is for "Americans" but not accounting for the noncitizens, which leaves him off by about 22 percent.
PolitiFact's likely equation: (46 - 36) / 46 = 21.7 percent

It's the wrong equation, and this is not controversial.  It's basic math.  To find the percentage error the accurate value belongs in the denominator.

The right equation: (46 - 36) / 36 = 27.8 percent
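Both equations in runnable form, for anyone who wants to verify:

```python
claimed, actual = 46, 36  # millions uninsured: Obama's figure vs. citizens only

print(round((claimed - actual) / claimed * 100, 1))  # 21.7 -- wrong baseline
print(round((claimed - actual) / actual * 100, 1))   # 27.8 -- right baseline
```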

Marc Caputo of the Miami Herald, a PolitiFact partner paper, made the same mistake months ago and vigorously defended it on Twitter.  Caputo argued that it's okay to do the equation either way.  One can execute the equation accurately in either form, but executing the wrong equation gives the wrong final figure.  Journalists need to consider the ramifications of having two different options for calculating an error percentage.  If one chooses the method in a way that favors one party over another then a pattern of that behavior turns into evidence of political bias.

Caputo used the method more damaging to the Republican to whom he referred.

In Adair and Holan's case, guess which party received the benefit of the wrong equation?

It's a statistic worth following.

Monday, April 21, 2014

PunditFact's PolitiMath

Here at PolitiFact Bias, we keep some tabs on what we call "PolitiMath"--the mathematical indicators that correspond (or not) to various positions on PolitiFact's "Truth-O-Meter" scale.

This week offers us another potentially informative case, as PunditFact looks at whether Michael Eric Dyson was accurate in claiming that Sunday morning political talk shows "usually" feature conservative white men.


In terms of math, this case is fairly simple.  PunditFact counted 25 percent of the Sunday show guests as conservative white males--a little short of a plurality.

PunditFact also shared parallel figures compiled by the left-wing Media Matters organization.  Media Matters put the figure for conservative white men at about 29 percent, which did count as a plurality.

"Usually" means more than half the time, so using PunditFact's count Dyson was off by 50 percent.  Using the Media Matters count, Dyson was off by about 42 percent.

PunditFact rated Dyson's claim "Mostly False":
Dyson described the Sunday shows as having been "given over" to conservative white males. While that phrase isn't exact, it does suggest a dominant presence. The numbers don’t back that up. Conservatives outman the liberals but by the time you drill down to white, male, conservatives, they lose much of the edge.

Dyson pushed too far on his adjectives. We rate the claim Mostly False.
We don't understand PunditFact focusing on "given over."  It is the extended phrase "mostly given over" that provides the basis for the fact check.  "Mostly" communicates a "given over" figure exceeding 50 percent.

Going by the "Principles of PolitiFact, PunditFact and the Truth-O-Meter," a "Mostly False" statement contains an element of truth.  We are unable to identify what PunditFact thinks is the element of truth in Dyson's statement.

This case featuring Dyson compares very naturally with PolitiFact Florida's rating of Sen. Marco Rubio's statement claiming Americans are mostly conservative.  PolitiFact Florida gave Rubio a "Half True" for that one, though only one of three polls had conservatives self-identifying in majority numbers.

We certainly think there's much to criticize in the way PolitiFact's fact checkers went about rating both of these claims, but in terms of PolitiMath we can set those concerns aside and simply look at how PunditFact's numbers correlated to the "Truth-O-Meter" rating.

By the two measurements PunditFact offered, Dyson was at least 42 percent in error.  That resulted in a "Mostly False" rating.  By three measurements, Rubio was correct on one and off by a maximum of 36 percent by the two polls that showed conservatives as the plurality.

It's easy to see how Rubio's statement could count as at least partly true.  One poll unambiguously supported him.  But Dyson?  Not so much.

That's how PolitiMath works, for what it's worth.


Edit 11/16/2014: Added Link to PunditFact article in 6th graph - Jeff