
Friday, July 7, 2017

PolitiFact, Lauren Carroll, pathetic CYA

With a post on July 1, 2017, we noted PolitiFact's absurdity in keeping the "True" rating on Hillary Clinton's claim that 17 U.S. intelligence agencies "all concluded" that Russia intervened in the U.S. presidential election.

PolitiFact has noticed that not enough people accept 2+2=5, however, so departing PolitiFact writer Lauren Carroll returned within a week with a pathetic attempt to justify her earlier fact check.

This is unbelievable.

Carroll's setup:
Back in October 2016, we rated this statement by then-candidate Hillary Clinton as True: "We have 17 intelligence agencies, civilian and military, who have all concluded that these espionage attacks, these cyberattacks, come from the highest levels of the Kremlin, and they are designed to influence our election."

Many readers have asked us about this rating since the New York Times and Associated Press issued their corrections.
Carroll then repeats PolitiFact's original excuse that since the Director of National Intelligence speaks for all 17 agencies, it somehow follows that 17 agencies "all concluded" that Russia interfered with the U.S. election.

And the punchline (bold emphasis added):
We asked experts again this week if Clinton’s claim was correct or not.

"In the context of a national debate, her answer was a reasonable inference from the DNI statement," Cordero said, emphasizing that the statement said, "The U.S. Intelligence Community (USIC) is confident" in its assessment.

Aftergood said it’s fair to say the Director of National Intelligence speaks for the intelligence community, but that doesn’t always mean there is unamity across the community, and it’s possible that some organizations disagree.

But in the case of the Russia investigation, there is no evidence of disagreement among members of the intelligence community.
Put simply, either the people who work at PolitiFact are stupid, or else they think you're stupid.

PolitiFact claims it asked its cited experts whether Clinton's claim was correct.

PolitiFact then shares with its readers responses that do not tell them whether the experts think Clinton's claim was correct.

1) "In the context of a national debate, her answer was a reasonable inference from the DNI statement" 

It's one thing to make a reasonable inference. It's another for the inference to be true. If a person shows up at your home soaking wet, it may be reasonable to infer that it's raining outside. The inference isn't necessarily correct.

The quotation of Carrie Cordero does not answer whether Clinton's claim was correct.

How does a fact checker not know that?

 2) PolitiFact paraphrases expert Steven Aftergood: "Aftergood said it’s fair to say the Director of National Intelligence speaks for the intelligence community, but that doesn’t always mean there is unamity [sic] across the community, and it’s possible that some organizations disagree."

The paraphrase of Aftergood appears to make our point. Even if the Director of National Intelligence speaks for all 17 agencies, it does not follow that all 17 agencies agreed with the finding. Put another way, even if Clinton's inference was reasonable, the more recent reports show that it was wrong. The 17 agencies did not all reach the same conclusion independently, contrary to what Clinton implied.

And that's it.

Seriously, that's it.

PolitiFact trots out this absolutely pathetic CYA attempt and expects people to take it seriously?

May it never be.

The evidence from the experts does not support PolitiFact's judgment, yet PolitiFact uses that evidence to support its judgment.

Ridiculous.



Afters

Maybe they'll be able to teach Carroll some logic at UC Berkeley School of Law.



Correction July 7, 2017: Removed an extraneous "the" preceding "PolitiFact" in our first paragraph following our first quotation of PolitiFact.

Sunday, January 20, 2013

Another Black Knight for PolitiFact

The comedy film "Monty Python and the Holy Grail" is justly famous for its fight scene between King Arthur and the mysterious Black Knight who attempts to block his path.

Arthur defeats the Black Knight, first chopping off an arm, then another arm, then a leg and then the other leg.  As the Black Knight suffers each stage of defeat he defiantly continues to challenge Arthur to continue the fight.

PolitiFact's efforts to defend itself from criticism often run parallel to the Black Knight's fighting prowess against Arthur.

The latest duel pits PolitiFact editor Bill Adair against critics who say Fiat's confirmation that it will produce over 100,000 Jeep vehicles annually at a Chinese manufacturing plant undercuts PolitiFact's 2012 choice for "Lie of the Year."  The Romney campaign produced an ad saying Obama sold Jeep to Italians who will build Jeeps in China.  PolitiFact ruled the ad "Pants on Fire" in October before selecting it as the "Lie of the Year" in December.

The original ruling drew plenty of criticism, and the recent confirmation of the deal to produce Jeeps in China produced a renewal of that criticism, perhaps best expressed by Mark Hemingway of The Weekly Standard.

"It's just a flesh wound."

On Jan. 18 Adair responded to the latest round of criticism:
A number of readers emailed us this week about news reports that Chrysler is moving forward with a partnership in China to produce Jeeps. They wondered: Doesn’t that disprove our Lie of the Year -- that Mitt Romney said Barack Obama "sold Chrysler to Italians who are going to build Jeeps in China" at the cost of American jobs?
No, it doesn’t.
It bears emphasis that Jeep sold about 50,000 American-made Jeeps in China in 2012. Somehow no mention of Jeep exports to China crept into any of PolitiFact's fact checking of the Romney ad.

Adair's right about one thing, at least.  All of PolitiFact's "Lie of the Year" selections contain a significant element of truth, so of course it doesn't matter to PolitiFact if the ad is true.  It can still qualify as "Lie of the Year."  The tough thing for Adair to explain, which he doesn't attempt, is how the ad can be technically true yet receive a "Pants on Fire" rating as election day approached.

It's just another dismal defense of a PolitiFact blunder.

Mark Hemingway, by the way, responded with Arthurian effectiveness to Adair's post the same day it was published.

We'll give away the ending:
PolitiFact has a reputation for alternately being unresponsive or inadequately responding to criticisms. And they haven't done anything to remedy that today.
Exactly.

(The video contains language some may find offensive.  Oh, and there's lots of obviously fake blood.)






Jeff adds (1/30/13):
Adair's most recent CYA/non-response to Hemingway is awful even by the standards of the genre, and PolitiFact has produced some stinkers. Chock full of evasions and denials, it suggests Adair is completely unable to confront the facts lurking right in front of his face. Take a look at the opening paragraph of his nada culpa, and pay special attention to the quotation marks:

[Readers] wondered: Doesn’t that disprove our Lie of the Year -- that Mitt Romney said Barack Obama "sold Chrysler to Italians who are going to build Jeeps in China" at the cost of American jobs?

No, it doesn’t.
The entire basis for the Pants on Fire rating is something the Romney ad never claimed. If it did, why didn't PolitiFact quote the relevant portion? The portion that Adair quotes is entirely accurate, even by PolitiFact's own admission. The only falsehood here is PolitiFact's invention that the Romney ad claimed it would cost American jobs.

Another comically dishonest diversion from Adair is his assertion that PolitiFact isn't making a value judgement on Obama's policy. He writes:
We should be clear, we are not defending President Obama’s auto policy. As independent fact-checkers, we don’t take positions on policy issues. So whether it was advisable to bail out the auto companies, and or whether the bailout  was done with proper safeguards was beyond the scope of our fact-check.
As I pointed out in our original review of this claim back in November, PolitiFact was much more smitten with the President's performance back then (emphasis added):
With Ohio’s 18 electoral votes very much in play, the Mitt Romney campaign aims to blunt one of Barack Obama’s key advantages in that state -- his rescue of the auto industry.
Let me be clear: PolitiFact has determined that Barack Obama single-handedly rescued the entire auto industry...they're just not taking a position on it.

 

Sunday, September 2, 2012

Disconnect at PolitiFact Ohio

Nothing's better than getting PolitiFact editors on the record about PolitiFact.  Their statements probably do more than anything else to show that the PolitiFact system encourages bias and the people who created it either don't realize it or couldn't care less.

Statements from editors at PolitiFact's associated newspapers come in a close second.

Ted Diadiun, reader representative for the Cleveland Plain Dealer (host of PolitiFact Ohio), answering a reader's question:
In July you printed a chart with two years of PolitiFact Ohio results. It showed Democrats with 42 ratings of Mostly False, False or Pants on Fire, while the Republicans had a total of 88 in those categories. Doesn't that prove you guys are biased?

Well, it doesn't necessarily prove that. It might prove instead that in the statements our PolitiFact team chose to check out, Republicans tended to be more reckless with the truth than Democrats.
Diadiun apparently doesn't realize that if PolitiFact Ohio chooses more Republican statements to treat harshly, that is a likely sign of institutional selection bias, unless PolitiFact Ohio either randomizes its story selection (highly unlikely) or coincidentally chose a representative sample.  How would we ever know the sample is representative unless somebody runs a controlled study?  Great question.  It's such a good question that it is reasonable to presume that a disparity in PolitiFact's treatment of the respective parties results from an ideologically influenced selection bias.  That was the point of Eric Ostermeier's study of PolitiFact's 2011 "Truth-O-Meter" results.

Diadiun, continuing his answer to the same question:
Or, it might prove only that there are a lot more Republicans who have been elected to the major offices that provide most of the fodder for fact-checking.
It would prove nothing of the kind.  PolitiFact has one state operation in a state that is firmly controlled by Democrats:  PolitiFact Oregon.  PolitiFact Oregon, despite a state political climate dominated by Democrats, rates the parties about evenly in its bottom three "Truth-O-Meter" categories (Republicans fare slightly worse).

Diadiun:
It is also a fact that Republicans had a few more statements rated True than the Democrats did, but the Truth-O-Meter was indeed a bit tougher overall on Republicans. You can find the report here

Does that show bias? I've said it before, and I'll say it again here: The PolitiFact Truth-O-Meter is an arbitrary rating that has the often impossible job of summing up an arduously reported, complicated and nuanced issue in one or two words.
Diadiun goes on to tell his readers to "ignore" the Truth-O-Meter.

That's quite the recommendation for PolitiFact's signature gimmick.

He's partly right.  PolitiFact ratings cram complicated issues into narrow and ill-defined categories.  The ratings almost inevitably distort whatever truth ends up in the reporting.  So shouldn't we ask why a fact checker steadfastly continues to use a device that distorts the truth?

The answer is pretty plain:  The "Truth-O-Meter" gimmick is about money.  PolitiFact's creators think it helps market the fact checking service.  And doubtless they're right about that.

There is a drawback to selling out accuracy for 30 pieces of silver:  Contrary to Diadiun's half-hearted reassurances, the "Truth-O-Meter" numbers do tell a story of selection and ideological bias.  Readers should not ignore that story. 


Jeff adds:

The hubris on display from Diadiun could fill gallon-sized buckets. Notice that he completely absolves PolitiFact of the role it plays in selecting which statements to rate, and immediately implies that "Republicans tended to be more reckless with the truth than Democrats." Incompetence or deceit is the only reasonable explanation for such an empty claim.

For the benefit of our new readers, I'd like to provide an exaggerated example of selection bias: Let's say I'm going to call myself an unbiased fact checker, and I'm going to check four statements that interest me (as opposed to a random sample of claims). I'll check Obama's claim that he would close Guantanamo Bay, and his claim that he "didn't raise taxes once." I find he's telling falsehoods on both counts.

Next, I'll check Rush Limbaugh's statement that he's happy to be back in the studio after a long weekend. I'll also check his claim that he hosts one of the most popular radio shows in the nation. Of course, these claims are true.

What can we learn from this? According to PolitiFact's metrics, Rush Limbaugh is a bastion of honesty while Barack Obama is a serial liar. I'll even put out "report cards" that "reveal patterns and trends about their truth-telling." I'll admit the "tallies are not scientific, but they provide interesting insights into a candidate's overall record for accuracy." It's unbiased because, as the popular defense goes, I checked both sides. The reality that I checked statements that interested me supposedly has no influence on the credibility of the overall ratings. If you don't like the results, it's because you're biased!

See how that trick works?
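For the skeptical, the trick is easy to demonstrate with a toy simulation. All of the numbers below are invented for illustration and have nothing to do with any real tally: both politicians are equally accurate, but a biased selection filter produces wildly different "report cards."

    import random

    random.seed(42)

    # Both politicians are equally accurate: 70 percent of statements are true.
    def make_statements(n=200, true_rate=0.7):
        return ["true" if random.random() < true_rate else "false" for _ in range(n)]

    politician_a = make_statements()
    politician_b = make_statements()

    # The "editorial interest" filter: keep every claim of the preferred kind,
    # plus a 10 percent smattering of everything else.
    def select(statements, prefer):
        return [s for s in statements if s == prefer or random.random() < 0.1]

    checked_a = select(politician_a, prefer="false")  # hunt for A's falsehoods
    checked_b = select(politician_b, prefer="true")   # hunt for B's truths

    for name, checked in [("Politician A", checked_a), ("Politician B", checked_b)]:
        pct_false = 100 * checked.count("false") / len(checked)
        print(f"{name}: {pct_false:.0f}% of checked claims rated False")

    # Identical underlying accuracy, yet A's "report card" comes back mostly
    # False and B's almost entirely True -- the selection did all the work.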

It's something to keep in mind the next time you see Obama earning a Promise Kept for getting his daughters a puppy or a True for his claim that the White Sox can still make the playoffs. A cumulative total of PolitiFact's ratings serves the purpose of telling readers about the bias of PolitiFact's editors and what claims are interesting to them. It's a worthless measure of anything else.

Sunday, July 22, 2012

PolitiFact "a distillery for truth"?

How can one blame the Atlanta Journal-Constitution for publishing an editorial that calls PolitiFact "a distillery for truth"?  The AJC, after all, is one of PolitiFact's state affiliates, the home of PolitiFact Georgia.

Blame aside, however, what a load of codswallop.

AJC:
There’s something about PolitiFact.

Maybe it’s the clarity it forces on public discourse. Perhaps it’s the eye-catching Truth-O-Meter with its brutal simplicity. Or could it be its distaste for nuance in a world grown comfortable with wiggle room?
Anybody else detect a paradox when a device of "brutal simplicity" is said to force clarity on public discourse?

 The "Truth-O-Meter" and its "brutal simplicity" are a maul used to butcher a steer.  Rather than distinct cuts of beef such as sirloin or ribs, one ends up with hamburger blended with all the humblest portions of the unfortunate beast.  Hotdog/hamburger hash, as it were.  PolitiFact provides all the clarity of Soylent Green, and creates its own rambling vistas of wiggle room.

AJC:
PolitiFact is powerful because it represents the essence of what we do. It is intensely distilled journalism that filters out the good intentions, mendacity and ignorance that lead public officials to fracture the truth occasionally. Like a great scotch, the appeal is in its simplicity. That’s why politicians and power brokers hate it, if “hate” is a strong enough word.
Are we talking about the same PolitiFact?

I could maybe see the AJC's point if newspaper journalists weren't at least as capable of good intentions, mendacity and ignorance as politicians.

Hold on--there's a nugget amidst the self-congratulatory pablum:
The state has so few powerful Democrats, that PolitiFact Georgia has to look to Democrats from elsewhere to avoid giving the impression that it trains its fire only on Republicans.
AJC editorialist Bert Roughton Jr. just spilled the beans that PolitiFact Georgia engages in the type of compensatory rating that critics have long suspected PolitiFact of doing.  Some PolitiFact operations, such as Ohio's, deny using the technique.  So either somebody's not giving us the facts or else PolitiFact's standards vary.

Like a great scotch.

Saturday, July 14, 2012

The Washington Post: "PolitiFacters respond to ‘weekend dump’ allegations"

Erik Wemple of the Washington Post delivers the third in his series following the current dust-up between the Republican Party of Virginia and PolitiFact Virginia.

Wemple, as he promised, visits the GOP's claim that the timing of PolitiFact Virginia stories appears to maximize the impact of negative stories while burying positive ones.

Wemple:
As detailed in Part One of this extensive series, the Republican Party of Virginia is claiming that PolitiFact Virginia, which is run from the offices of the Richmond Times-Dispatch, discriminates against Republican politicians in the most insidious of manners: It times positive fact-checks of Republicans for the weekends, when people aren’t logged on, and “saves” the negative stuff on Republicans for high-traffic mid-week slots. That’s the claim.
Wemple's off the mark.  The document doesn't claim that the stories are deliberately timed.  Rather, it claims that the timing of the stories yields a discriminatory result.  The discrepancy between Wemple's report and the reality of the GOP document is easy to see in the passage Wemple quotes:
Here’s a relevant excerpt from the 86-page slameroo report that the Republican Party of Virginia compiled on PolitiFact Virginia:
PolitiFact Issued Only Two “False” And One “Pants On Fire” Ruling On Republican Statements During The Weekend (Starting After 5 P.M. On Friday), Saving 37 “Mostly False,” “False,” “Pants On Fire,” And “Full Flop” Reports To Be Issued Between Monday And Thursday.

The GOP claim is obviously couched in objective terms and makes no judgments about PolitiFact's intent.  The claim concerns the result, not the intent.

And, of course, the PolitiFact response is a total joke.

Rick Thornton of the Richmond Times-Dispatch says “We typically print in the newspaper PolitiFact rulings on Sunday and Monday . . . . We post our rulings online pretty much as soon as they’re done . . . . A number of our rulings on both sides are on Fridays because they’re being finished up on Friday for Sunday.” 

That doesn't answer anything.

PolitiFact editor Bill Adair, who heads the national operation, says "It’s ridiculous to suggest that any of our PolitiFact sites schedule publication of some items to get smaller audiences."

Adair gravitates directly toward the same straw man that fascinated Wemple.  If the GOP document has the facts right, and the good gets the small audience while the bad gets the big audience, then the discrimination exists regardless of whether the PolitiFacters are aware of it.  And one would think that PolitiFact Virginia would know about the alleged problem from its communications with the RPV.

Neither Thornton nor Adair addresses the charge from the RPV.  And it's a pity that Wemple reported it inaccurately.

Why is this so hard?  If the Sunday paper has more readers than the weekday papers, then PolitiFact can give an objective response to the charge from the RPV: those weekend stories may often reach the larger audience.  If that defense isn't accurate, then perhaps admit that the RPV has a point but assure everyone it wasn't on purpose.

Is PolitiFact dissembling for the sake of a CYA strategy?   Yeah, could be.  In any case, the responses from PolitiFact scarcely count as serious.  And we let these people check facts for us?

Tuesday, July 10, 2012

Virginia GOP vs. PolitiFact Virginia

The Republican Party of Virginia yesterday published an 86-page criticism of PolitiFact Virginia's objectivity.

We'll have plenty more to say about the specifics as we sift through it all, but here's a small taste of the lengthy document:
We believe the objective evidence assembled here provides ample reason for the public to question PolitiFact Virginia’s objectivity. Based on the compelling data contained herein we believe any Republican official or candidate in Virginia would be justified in publicly indicting PolitiFact Virginia’s pattern of bias, and publicly refusing to participate in or cooperate with their analyses unless and until such time the Richmond Times-Dispatch can substantively and publicly address the underlying concerns about their PolitiFact Virginia team’s lack of objectivity. Each official and candidate can make their own decision on participation with PolitiFact Virginia going forward.
PolitiFact Virginia was not slow to respond, though their response was weak even by PolitiFact standards.

A portion of the rejoinder from PolitiFact Virginia:
On Tuesday, the Republican Party of Virginia sent an "open letter to the commonwealth" accusing PolitiFact Virginia of being biased against the GOP in our rulings.

We disagree.
That's the gist of it, and the evidence supporting PolitiFact's disagreement is marginally greater than what occurs in the above quotation.  It doesn't begin to answer all the points in the GOP critique.

Sunday, June 3, 2012

Cover your PolitifArse! PolitiFact goes shameless

PolitiFact has egg on its face worthy of the Great Elephant Bird of Madagascar.

On May 23, PolitiFact published an embarrassingly shallow and flawed fact check of two related claims from a viral Facebook post.  The first held false Mitt Romney's claim that President Obama has presided over an acceleration of government spending unprecedented in recent history.  The second, quoted from Rex Nutting of MarketWatch, held that "Government spending under Obama, including his signature stimulus bill, is rising at a 1.4 percent annualized pace — slower than at any time in nearly 60 years."

PolitiFact issued a "Mostly True" rating to these claims, claiming its own math confirmed select portions of Nutting's. The Associated Press and Glenn Kessler of the Washington Post, among others, gave Nutting's calculations very unfavorable reviews.

PolitiFact responded with an article titled "Lots of heat and some light," quoting some of the criticisms without comment other than to insist that they did not justify any change in the original "Mostly True" rating.  PolitiFact claimed its rating was defensible since it only incorporated part of Nutting's article.
(O)ur item was not actually a fact-check of Nutting's entire column. Instead, we rated two elements of the Facebook post together -- one statement drawn from Nutting’s column, and the quote from Romney.
I noted at that point that we could look forward to the day when PolitiFact would have to reveal its confusion in future treatments of the claim.

We didn't have to wait too long.

On May 31, last Thursday, PolitiFact gave us an addendum to its original story.  It's an embarrassment.

PolitiFact gives some background for the criticisms it received over its rating.  There's plenty to criticize there, but let's focus on the central issue:  Was PolitiFact's "Mostly True" ruling defensible?  Does this defense succeed?

The biggest reason this CYA fails

PolitiFact keeps excusing its rating by claiming it focuses on the Facebook post by "Groobiecat", rather than Nutting's article, and only fact checks the one line from Nutting included in the Facebook graphic.

Here's the line again:
Government spending under Obama, including his signature stimulus bill, is rising at a 1.4 percent annualized pace — slower than at any time in nearly 60 years.
This claim figured prominently in the AP and Washington Post fact checks mentioned above.  The rating for the other half of the Facebook post (on Romney's claim) relies on this one.

PolitiFact tries to tell us, in essence, that Nutting was right on this point despite other flaws in his argument (such as the erroneous 1.4 percent figure embedded right in the middle), at least sufficiently to show that Romney was wrong.

A fact check of the Facebook graphic should have looked at Obama's spending from the time he took office until Romney spoke.  CBO projections should have nothing to do with it.  The fact check should attempt to pin down the term "recent history" without arbitrarily deciding its meaning. 

The two claims should have received their own fact checks without combining them into a confused and misleading whole.  In any case, PolitiFact flubbed the fact check as well as the follow up.

Spanners in the works

As noted above, PolitiFact simply ignores most of the criticisms Nutting received.  Let's follow along with the excuses.

PolitiFact:
Using and slightly tweaking Nutting’s methodology, we recalculated spending increases under each president back to Dwight Eisenhower and produced tables ranking the presidents from highest spenders to lowest spenders. By contrast, both the Fact Checker and the AP zeroed in on one narrower (and admittedly crucial) data point -- how to divide the responsibility between George W. Bush and Obama for the spending that occurred in fiscal year 2009, when spending rose fastest.
Stay on the lookout for specifics about the "tweaking."

Graphic image from Groobiecat.blogspot.com

I'm still wondering why PolitiFact ignored the poor foundation for the 1.4 percent average annual increase figure the graphic quotes from Nutting.  But no matter.  Even if we let PolitiFact ignore it in favor of  "slower than at any time in nearly 60 years" the explanation for their rating is doomed.

PolitiFact:
(C)ombining the fiscal 2009 costs for programs that are either clearly or arguably Obama’s -- the stimulus, the CHIP expansion, the incremental increase in appropriations over Bush’s level and TARP -- produces a shift from Bush to Obama of between $307 billion and $456 billion, based on the most reasonable estimates we’ve seen critics offer.
The fiscal year 2009 spending figure from the Office of Management and Budget was $3,517,677,000,000.  That means $307 billion (there's a tweak!) is 8.7 percent of the 2009 total spending.  And it means that in fiscal 2009, before Nutting's method blames Obama for any spending at all, Obama accounted for more of the increase over the 2008 baseline than President Bush did.  It still isn't clear where PolitiFact puts that spending on Obama's account.
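For readers who want to check the arithmetic, here's a minimal sketch using only the figures quoted above:

    # OMB total outlays for fiscal year 2009, per the figure cited above.
    fy2009_total = 3_517_677_000_000

    # The low and high ends of the spending shift from Bush to Obama that
    # PolitiFact says the critics' estimates support.
    for shift in (307_000_000_000, 456_000_000_000):
        share = 100 * shift / fy2009_total
        print(f"${shift / 1e9:.0f} billion is {share:.1f}% of FY 2009 spending")
    # $307 billion is 8.7% of FY 2009 spending -- the figure given above.
    # $456 billion is 13.0% of FY 2009 spending.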
(B)y our calculations, it would only raise Obama’s average annual spending increase from 1.4 percent to somewhere between 3.4 percent and 4.9 percent. That would place Obama either second from the bottom or third from the bottom out of the 10 presidents we rated, rather than last.
PolitiFact appears to say its calculations suggest that accepting the critics' points makes little difference.  We'll see that isn't the case while also discovering a key criticism of the "annual spending increase" metric.

Reviewing PolitiFact's calculations from earlier in its original story, we see that PolitiFact averages Obama's spending using fiscal years 2010 through 2013.  However, in this update PolitiFact apparently does not consider another key criticism of Nutting's method: he cherry-picked future projections.  Subtract $307 billion from the FY 2009 spending, and the increase in FY 2010 ends up at 7.98 percent.  And where then do we credit the $307 billion?

An honest accounting requires finding a proper representation of Obama's share of FY 2009 spending.  Nutting provides no such accounting:
If we attribute that $140 billion in stimulus to Obama and not to Bush, we find that spending under Obama grew by about $200 billion over four years, amounting to a 1.4% annualized increase.
Neither does PolitiFact:
(C)ombining the fiscal 2009 costs for programs that are either clearly or arguably Obama’s -- the stimulus, the CHIP expansion, the incremental increase in appropriations over Bush’s level and TARP -- produces a shift from Bush to Obama of between $307 billion and $456 billion, based on the most reasonable estimates we’ve seen critics offer.

That’s quite a bit larger than Nutting’s $140 billion, but by our calculations, it would only raise Obama’s average annual spending increase from 1.4 percent to somewhere between 3.4 percent and 4.9 percent.
But where does the spending go once it is shifted? Obama's 2010?  It makes a difference.
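To see why the placement matters, here's a toy illustration. The dollar figures are invented placeholders, not budget data:

    # Toy numbers: a $1,000 base year, $1,040 in year one, $100 shifted away
    # from the prior administration's base year.
    base, year1, shift = 1000.0, 1040.0, 100.0

    no_shift = 100 * (year1 - base) / base
    shrunken_base = 100 * (year1 - (base - shift)) / (base - shift)
    base_and_year1 = 100 * ((year1 + shift) - (base - shift)) / (base - shift)

    print(f"No shift at all:                 {no_shift:.1f}%")        # 4.0%
    print(f"Shift out of the base year only: {shrunken_base:.1f}%")   # 15.6%
    print(f"Shift out of base, into year 1:  {base_and_year1:.1f}%")  # 26.7%

The measured growth rate swings wildly depending on where the shifted spending lands, which is exactly why the question deserves an answer.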

"Lies, damned lies, and statistics":  PolitiFact, Nutting and the improper metric

Click image for larger view
The graphic embedded to the right helps illustrate the distortion one can create using the average increase in spending as a key statistic.  Nutting probably sought this type of distortion deliberately, and it's shameful for PolitiFact to overlook it.

Using an annual average for spending allows one to make much higher spending not look so bad.  Have a look at the graphic to the right just to see what it's about, then come back and pick up the reading.  We'll wait.

Boost spending 80 percent in your first year (A) and keep it steady thereafter and you'll average 20 percent over four years. Alternatively, boost spending 80 percent just in your final year (B) and you'll also average 20 percent per year. But in the first case you'll have spent far more money--$2,400 more over the course of four years.

It's very easy to obscure the amount of money spent by using a four-year average.  In case A spending increased by a total of $3,200 over the baseline total.  That's almost $800 more than the total derived from simply increasing spending 20 percent each year (C).

Note that in the chart each scenario features the same initial baseline (green bar), the same yearly average increase (red star), and widely differing total spending over the baseline (blue triangle).
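A minimal sketch of the chart's three scenarios, assuming a $1,000 annual spending baseline (the chart's actual baseline value isn't specified here), reproduces the figures above:

    # Four years of spending under each scenario, from a $1,000 baseline year.
    baseline = 1000
    scenarios = {
        "A (80% jump in year one, then flat)": [1800, 1800, 1800, 1800],
        "B (flat, then 80% jump in year four)": [1000, 1000, 1000, 1800],
        "C (steady 20% increase each year)": [1200, 1440, 1728, 2073.6],
    }

    for name, spending in scenarios.items():
        priors = [baseline] + spending[:-1]
        # Nutting-style metric: average of the year-over-year percentage changes.
        avg = sum(100 * (cur - prev) / prev
                  for cur, prev in zip(spending, priors)) / len(spending)
        extra = sum(year - baseline for year in spending)
        print(f"{name}: average increase {avg:.0f}%/yr, ${extra:,.0f} over baseline")

    # All three scenarios average 20 percent per year, but A spends $3,200 over
    # the baseline, C about $2,442, and B only $800 -- $2,400 less than A.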

Some of Nutting's conservative critics used combined spending over four-year periods to help refute his point.  Given the potential distortion from using the average annual increase it's very easy to understand why.  Comparing the averages for the four year total smooths out the misleading effects highlighted in the graphic.

We have no evidence that PolitiFact noted any of this potential for distorting the picture.  The average percentage increase should work just fine, and it's simply coincidence that identical total increases in spending look considerably lower when the largest increase happens at the beginning (example A) than when it happens at the end (example B).

Shenanigan review:
  • Yearly average change metric masks early increases in spending
  • No mention of the effects of TARP negative spending
  • Improperly considers Obama's spending using future projections
  • Future projections were cherry-picked
The shift of FY 2009 spending from TARP, the stimulus and other initiatives may also belong on the above list, depending on where PolitiFact put the spending.

I have yet to finish my own evaluation of the spending comparisons, but what I have completed so far makes it appear that Romney may well be right about Obama accelerating spending faster than any president in recent history (at least back through Reagan).  Looking just at percentages on a year-by-year basis instead of averaging them shows Obama's first two years allow him to challenge Reagan or George W. Bush as the biggest accelerator of federal spending in recent history.  And that's using PolitiFact's $307 billion figure instead of the higher $456 billion one.

So much for PolitiFact helping us find the truth in politics.

Note:

I have a spreadsheet on which I am performing calculations to help clarify the issues surrounding federal spending and the Nutting/PolitiFact interpretations.  I hope to produce an explanatory graphic or two in the near future based on the eventual numbers.  Don't expect all the embedded comments on the sheet to make sense until I finalize it (taking down the "work in progress" portion of the title).



Jeff adds:

It's not often PolitiFact admits to the subjective nature of their system, but here we have a clear case of editorial judgement influencing the outcome of the "fact" check:
Our extensive consultations with budget analysts since our item was published convinces us that there’s no single "correct" way to divvy up fiscal 2009 spending, only a variety of plausible calculations.
This tells us that PolitiFact arbitrarily chose the "plausible calculation" most favorable to Obama in its original version of the story. Using other, equally plausible methods would have lowered the rating. By presenting this interpretation of the calculations as objective fact, PolitiFact misleads its readers into believing the debate is settled.

This update also contradicts PolitiFact's reasons for the "Mostly True" rating:
So the second portion of the Facebook claim -- that Obama’s spending has risen "slower than at any time in nearly 60 years" -- strikes us as Half True. Meanwhile, we would’ve given a True rating to the Facebook claim that Romney is wrong to say that spending under Obama has "accelerated at a pace without precedent in recent history." Even using the higher of the alternative measurements, at seven presidents had a higher average annual increases in spending. That balances out to our final rating of Mostly True.
In the update, they're telling readers a portion of the Facebook post is Half-True, while the other portion is True, which balances out to the final Mostly True rating. But that's not what they said in the first rating (bold emphasis added):
The only significant shortcoming of the graphic is that it fails to note that some of the restraint in spending was fueled by demands from congressional Republicans. On balance, we rate the claim Mostly True.
In the first rating, it's knocked down because it doesn't give enough credit to the GOP for restraining Obama. In the updated version of the "facts", it's knocked down because of a "balance" between two portions that are Half-True and completely True. There's no mention of how the GOP's efforts affected the rating in the update.

Their attempts to distance themselves from Nutting's widely debunked article are also comically dishonest:
The Facebook post does rely partly on Nutting’s work, and our item addresses that, but we did not simply give our seal of approval to everything Nutting wrote.
That's what PolitiFact is saying now. But in the original article PolitiFact was much more approving:
The math simultaneously backs up Nutting’s calculations and demolishes Romney’s contention.
 And finally, we still have no explanation for the grossly misleading headline graphic, first pointed out by Andrew Stiles:

Image clipped from PolitiFact.com
Neither Nutting nor the original Groobiecat post claims Obama had the "lowest spending record". Both focused on the growth rate of spending. This spending record claim is PolitiFact's invention, one the fact check does not address. But it sure looks nice right next to the "Mostly True" graphic, doesn't it? Sorting out the truth, indeed.

The bottom line is that PolitiFact's CYA is hopelessly flawed, and offensive to anyone who is sincerely concerned with the truth. A fact checker's job is to illuminate the facts. PolitiFact's efforts here only obfuscate them.


Bryan adds:

Great points by Jeff across the board.  The original fact check was indefensible, and the other fact checks of Nutting by the mainstream media probably did not go far enough in calling Nutting on the carpet.  PolitiFact's attempts to glamorize this pig are deeply shameful.


Update:  Added background color to embedded chart to improve visibility with enlarged view.



Correction 6/4/2012:  Corrected one instance in which PolitiFact's $307 billion figure was incorrectly given as $317 billion.  Also changed the wording in a couple of spots to eliminate redundancy and improve clarity, respectively.

Sunday, May 27, 2012

Nutting doing: PolitiFact's inadequate excuse

Crossposted from Sublime Bloviations


This week many liberals jumped on the meme that President Obama has the lowest spending record of any recent president.

Fortunately for all of us, PolitiFact was there to help us find out the truth in politics.

Actually, PolitiFact completely flubbed the related fact check.  And that's not particularly unusual.  Instead, it was the Washington Post's Glenn Kessler and an Associated Press fact check that helped people find the truth in politics.

PolitiFact isn't backing down so far, however.  On Friday PolitiFact offered the following response to the initial wave of criticism (bold emphasis added):
(O)ur item was not actually a fact-check of Nutting's entire column. Instead, we rated two elements of the Facebook post together -- one statement drawn from Nutting’s column, and the quote from Romney.

We haven't seen anything that justifies changing our rating of the Facebook post. But people can have legitimate differences about how to assign the spending, so we wanted to pass along some of the comments.

PolitiFact also made the distinction on Twitter:

(Image captured by Jeff Dyberg;
 click image for enlarged view)
There's a big problem with the attempt to distinguish between checking Nutting's claims and those from the Facebook post:  the Facebook post's implicit argument rests solely on Nutting's work.  PolitiFact likewise based its eventual ruling squarely on its rating of the claim drawn from Nutting.

PolitiFact (bold emphasis added): 
The Facebook post says Mitt Romney is wrong to claim that spending under Obama has "accelerated at a pace without precedent in recent history," because it's actually risen "slower than at any time in nearly 60 years."

Obama has indeed presided over the slowest growth in spending of any president using raw dollars, and it was the second-slowest if you adjust for inflation. The math simultaneously backs up Nutting’s calculations and demolishes Romney’s contention.
Credit PolitiFact with accurately representing the logic of the implicit argument.  Without the fact check on Nutting's work there is no fact check of Romney's claim.  Making matters worse, PolitiFact emphasized the claim that Obama "has the lowest spending record" right next to its "Mostly True" Truth-O-Meter graphic.  The excuse that PolitiFact was fact checking the Facebook post completely fails to address that point.  Andrew Stiles is probably still laughing.

Criticisms of Nutting make clear that the accounting of bailout loans substantially skews the numbers in Obama's favor. Using the AP's estimates of 9.7 percent for 2009 (substantially attributable to Obama) and 7.8 percent in 2010, Obama's record while working with a cooperative Democrat-controlled Congress looks like it would challenge the high spending of any of his recent predecessors.  The leader from the Facebook graphic, President Reagan, tops out at 8.7 percent without any adjustment for inflation.  PolitiFact's fact check was utterly superficial and did not properly address the issue.
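A quick back-of-the-envelope check using only the figures cited above shows why. It's a crude comparison, since it sets a two-year average against the graphic's multi-year figures:

    # AP estimates of spending growth substantially attributable to Obama.
    obama_first_two = [9.7, 7.8]  # FY 2009 and FY 2010, in percent

    # Reagan's top figure from the Facebook graphic, unadjusted for inflation.
    reagan_peak = 8.7

    obama_avg = sum(obama_first_two) / len(obama_first_two)
    print(f"Obama, first two years:  {obama_avg:.2f}%")  # 8.75%
    print(f"Reagan, graphic's peak:  {reagan_peak:.2f}%")  # 8.70%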

There is a silver lining.  The Obama administration has so aggressively seized on this issue that PolitiFact will certainly feel pressure to fact check different permutations of Nutting's claims.

I can't wait to see the contortions as PolitiFact tries to reconcile this rating with subsequent attempts.



*Many thanks to Mickey Kaus of the Daily Caller for linking this story.

Thursday, April 12, 2012

PolitiFact Florida editor Angie Drobnic Holan on WMNF

Angie Drobnic Holan, recently named editor at PolitiFact Florida, appeared for an interview last month on WMNF, a Tampa Bay area public radio station.

The interview overall was relatively mundane, serving mostly as a forum for liberal callers to recycle their most popular recent complaints about PolitiFact ("Lie of the Year" for 2011, Rubio saying the U.S. is majority conservative, etc.). 

Drobnic Holan did offer up a self-assessment of the fact check operation that we might want to refer to later on down the line, however:
We've published almost 5,000 fact checks since we got started.  We do not get it right every single single (sic) time.  Mitch, you're a journalist [crosstalk]

Let me tell you what we do do.  We correct errors.  There were no, um, errors of fact in this particular report, but sometimes there have been, and we correct those quickly, we note them.  Every now and then, a handful of claims, we say "You know, we got the Truth-O-Meter rating on that one wrong." 

So, our typical procedure is, we go back, we report it again, uh, we have a procedure where a reporter researches and writes the report and three editors sign off on the ruling.  So when we look at rulings again, uh, we have all the editors look at it again.  Um, so, you know, that's a handful of rulings.  The majority--the vast majority--of rulings are, uh, you know, not second-guessed [If only I had the time!--ed.] in any way.

And I would also add that when you go to PolitiFact, when you read our reports, uh, we do something that you don't often see in journalism.  We have a source list where we list all of our sources; we hyperlink to all of the data, list everybody we interviewed, and in our story we carefully explain our logic.  So, what's interesting to me is oftentimes when readers, um, disagree with our work and very passionately disagree with us, they disagree with us using evidence that we gave them.  It's not like they're going out and researching these things and uncovering these facts, I mean it's, it's, it's very much of a, um, of a, of a--they know things now they didn't know before they read the report.
I find it at least as interesting how many times others go beyond PolitiFact's research (finding additional facts or additional context; examples of both are legion) and inform PolitiFact of the additional information, only to watch PolitiFact do absolutely nothing discernible in response.

Not that a critic needs to find new information to offer a legitimate criticism, of course.

But I guess her response serves Drobnic Holan's PR purposes better in the context of a radio interview.


Jeff adds: Drobnic Holan highlights PolitiFact's supposed transparency by listing "everybody we interviewed." What she failed to mention was that they don't include the transcripts of those interviews. More than one of those interviewees has publicly complained that PolitiFact took their responses out of context. One even pointed out the questions appeared biased from the start:
What struck me about Jacobson’s message was it asked if Romney’s statement was “technically true” and “what context does this ignore,” which carried the clear implication – as I warned Jacobson in my reply – that he’d already decided what he was going to write.
Given PolitiFact's propensity for statement distortion, simply naming people they emailed hardly inspires confidence about their integrity. Until PolitiFact provides the full context of their interviews, the list is little more than window dressing.  

Sunday, February 19, 2012

Positives: PolitiFact adds corrections page

A bit of optional background, first.

On Feb. 7 of this year, PolitiFact announced the addition of a corrections page.

It's about time, better late than never, etcetera.

As a corrections page the new feature is pretty skimpy.  Essentially it's just a list of stories that have corrections or updates and looks like any other list of stories in the politifact.com domain.  One interested in finding a list of stories that required a change in rating is out of luck, at least with the present version of the page.

Caveats aside, we applaud the modest improvement in transparency.  Now we just have to figure out why the total number of corrections on the page disagrees with the number we counted on our "(Annotated) Principles of PolitiFact" page.

Looks like their corrections page needs a few corrections.  Or at least a clarification to illuminate the fact that PolitiFact does not intend to admit corrections from more than three weeks prior to the publication of its "Principles of PolitiFact and the Truth-O-Meter" on Feb. 21, 2011.

Welcome to the wonderful world of PolitiFact fact checking.

Friday, February 17, 2012

PolitiFact's prophylactic CYA

Yesterday PolitiFact rolled out a CYA article in response to the blowback to the oft-floated claim that 98 percent of all Catholic women use contraception.  PolitiFact rated that claim from an Obama administration official on Feb. 6, finding it "Mostly True."  PolitiFact's treatment of the issue provided little evidence of earnest journalistic curiosity and left its readers with no real means of independently verifying the data.

Watch how PolitiFact deftly avoids taking any responsibility for failing to present a clear account of the issue:
For the past week, thoughtful readers have let us know that we were wrong to give a Mostly True to the claim from a White House official that "most women, including 98 percent of Catholic women, have used contraception."

They said we overlooked a chart in a study from the Guttmacher Institute that showed the percentage was far more limited. But there’s a good reason we didn’t rely on the chart — it wasn’t the right one.
PolitiFact doesn't tell you that the Feb. 6 story doesn't refer to the relevant chart at all.  PolitiFact claims to provide its sources, yet the source list doesn't include the relevant chart.  Instead, it features the charts that drew so much attention in the published criticisms.
Guttmacher Institute, "Contraceptive Use Is The Norm Among Religious Women," April 13, 2011

Guttmacher Institute, "Countering Conventional Wisdom: New Evidence on Religion and Contraceptive Use," April 2011

Centers for Disease Control and Prevention, "National Survey of Family Growth," accessed Feb. 2, 2012

Centers for Disease Control and Prevention, "Key Statistics from the National Survey of Family Growth," accessed Feb. 6, 2012
PolitiFact's mission (bold emphasis added):
PolitiFact relies on on-the-record interviews and publishes a list of sources with every Truth-O-Meter item. When possible, the list includes links to sources that are freely available, although some sources rely on paid subscriptions. The goal is to help readers judge for themselves whether they agree with the ruling.
Um, yeah, whatever.

So did PolitiFact fact check the item without checking the facts or simply forget to link the relevant data in the source list? 

Don't look for a confession in a CYA:
To double-check, we reviewed the criticism, talked with the study’s lead researcher, and reviewed the report and an update from the institute. We’re confident in our original analysis.
We can take that statement for what it's worth, given that the original analysis never produced a baseline for determining the error of the 98 percent figure.  We're left to guess whether the CYA intends to assure us that the original item includes data sufficient to help readers judge for themselves whether to agree with the ruling.

PolitiFact is suggesting that the fact check was perfectly fine, and those of you who used their references to try to reach your own conclusions mishandled the facts.

PolitiFact:
The spate of blog posts and stories this week — some directly claiming to debunk our reporting — unfortunately rely on a flawed reading of a Guttmacher Institute study.

They were easy mistakes to make, confusing the group of women who have "ever used" contraceptives with those who are "currently using" contraceptives — and misapplying footnote information about those "currently using" to the 98 percent statistic.
The "flawed reading" results directly from the fact that neither the Guttmacher Institute nor PolitiFact provided access to the data that might have supported the key claim.  I'll quote from the PFB assessment:  "That's fact checking?"

If PolitiFact had checked the claim properly in the first place then PolitiFact could have answered the criticisms without the wholesale review.  In fact, the criticisms would be clearly wrong based on material included in or linked from the original fact check.

More from PolitiFact:
The critics of our reporting — bloggers for the Weekly Standard, CatholicVote.org and GetReligion.org — were relying on an analysis from Lydia McGrew in her blog, "What's Wrong With The World," which was also cited by the Washington Post's WonkBlog.
PFB highlighted McGrew's analysis, certainly.  But our criticisms expanded beyond McGrew's and recognized that the Guttmacher Institute report may have included data that PolitiFact neglected to explain to its readers.  One would think from PolitiFact's response above that no criticism of its reporting on this issue has any merit.

Focus on McGrew

Wednesday, November 2, 2011

Grading PolitiFact: Joe Biden and the Flint crime rate

(crossposted from Sublime Bloviations with minor reformatting)


To assess the truth for a numbers claim, the biggest factor is the underlying message.
--PolitiFact editor Bill Adair


The issue:
(clipped from PolitiFact.com)


The fact checkers:

Angie Drobnic Holan:  writer, researcher
Sue Owen:  researcher
Martha Hamilton:  editor


Analysis:

This PolitiFact item very quickly blew up in their faces.  The story was published at about 6 p.m. on Oct. 20.  The CYA was published at about 2:30 p.m. on Oct. 21, after FactCheck.org and the Washington Post published parallel items very critical of Biden.  PolitiFact rated Biden "Mostly True."

First, the context:



(my portion of transcript in italics, portion of transcript used by PolitiFact highlighted in yellow):

BIDEN:
If anyone listening doubts whether there is a direct correlation between the reduction of cops and firefighters and the rise in concerns of public safety, they need look no further than your city, Mr. Mayor.  

In 2008--you know, Pat Moynihan said everyone's entitled to their own opinion, they're not entitled to their own facts.  Let's look at the facts.  In 2008 when Flint had 265 sworn officers on their police force, there were 35 murders and 91 rapes in this city.  In 2010, when Flint had only 144 police officers the murder rate climbed to 65 and rapes, just to pick two categories, climbed to 229.  In 2011 you now only have 125 shields.  

God only knows what the numbers will be this year for Flint if we don't rectify it.  And God only knows what the number would have been if we had not been able to get a little bit of help to you.

As we note from the standard Bill Adair epigraph, the most important thing about a numbers claim is the underlying message.  Writer Angie Drobnic Holan apparently has no trouble identifying Biden's underlying message (bold emphasis added):
If Congress doesn’t pass President Barack Obama’s jobs plan, crimes like rape and murder will go up as cops are laid off, says Vice President Joe Biden.

It’s a stark talking point. But Biden hasn’t backed down in the face of challenges during the past week, citing crime statistics and saying, "Look at the facts." In a confrontation with a conservative blogger on Oct. 19, Biden snapped, "Don’t screw around with me."
No doubt the Joe Biden of the good "Truth-O-Meter" rating is very admirable in refusing to back down.  The "conservative blogger" is Jason Mattera, editor of the long-running conservative periodical "Human Events."  You're a blogger, Mattera.  PolitiFact says so.

But back to shooting the bigger fish in this barrel.

PolitiFact:
We looked at Biden’s crime numbers and turned to the Federal Bureau of Investigation's uniform crime statistics to confirm them. But the federal numbers aren’t the same as the numbers Biden cited. (Several of our readers did the same thing; we received several requests to check Biden’s numbers.)

When we looked at the FBI’s crime statistics, we found that Flint reported 32 murders in 2008 and 53 murders in 2010. Biden said 35 and 65 -- not exactly the same but in the same ballpark.
Drobnic Holan initially emphasizes a fact check of the numbers.  Compared to the FBI numbers, Biden inflated the murder count for both 2008 and 2010, and his inflated set of numbers in turn inflates the percentage increase by about 30 percent (or 20 percentage points, going from roughly 66 percent to 86 percent).  So it's a decent-sized ballpark.
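The arithmetic, using only the murder counts quoted above:

    fbi_2008, fbi_2010 = 32, 53      # FBI uniform crime statistics for Flint
    biden_2008, biden_2010 = 35, 65  # the numbers Biden cited

    fbi_rise = 100 * (fbi_2010 - fbi_2008) / fbi_2008          # about 66%
    biden_rise = 100 * (biden_2010 - biden_2008) / biden_2008  # about 86%

    print(f"FBI numbers:   murders up {fbi_rise:.0f}%")
    print(f"Biden numbers: murders up {biden_rise:.0f}%")
    print(f"Gap: {biden_rise - fbi_rise:.0f} percentage points")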

PolitiFact:
For rapes, though, the numbers seemed seriously off. The FBI showed 103 rapes in 2008 and 92 rapes in 2010 -- a small decline. The numbers Biden cited were 91 rapes in 2008 and 229 in 2010 -- a dramatic increase.
If inflating the percentage increase in murders by 27 percentage points is not a problem for Biden then this at least sounds like a problem.

After going over some other reports on the numbers, and a surprising discussion of how little evidence suggests Obama's jobs bill would address the number of police officers in Flint, PolitiFact returns to the discrepancy between the numbers:
(W)e found that discrepancies between the FBI and local agencies are not uncommon, and they happen for a number of reasons. Local numbers are usually more current and complete, and local police departments may have crime definitions that are more expansive than those of the FBI.
All this is very nice, but we're talking about the city of Flint, here.  We don't really need current stats for 2008 and 2010 because they're well past.  Perhaps that affects the completeness aspect of crime statistics also; PolitiFact's description is too thin to permit a judgment.  As for "expansive" definitions, well, there's a problem with that.  Biden's number of rapes in 2008 is lower than the number reported in the UCR (FBI) data.  That is a counterintuitive result for a more expansive definition of rape and ought to attract a journalist's attention.

In short, even with these proposed explanations it seems as though something isn't right.

PolitiFact:
Flint provided us with a statement from Police Chief Alvern Lock when we asked about the differences in the crime statistics, particularly the rape statistics.

"The City of Flint stands behind the crime statistics provided to the Office of The Vice President.  These numbers are an actual portrayal of the level of violent crime in our city and are the same numbers we have provided to our own community. This information is the most accurate data and demonstrates the rise in crime associated with the economic crisis and the reduced staffing levels.

"The discrepancies with the FBI and other sources reveal the differences in how crimes can be counted and categorized, based on different criteria." (Read the entire statement)
This is a city that's submitting clerical errors to the FBI, and we still have the odd problem with the rape statistics.  If the city can provide numbers to Joe Biden then why can't PolitiFact have the same set of numbers?   And maybe the city can include stats for crimes other than the ones Biden may have cherry-picked?  Not that PolitiFact cares about cherry-picked stats, of course.

Bottom line, why are we trusting the local Flint data sight unseen?

PolitiFact caps Biden's reward with a statement from criminologist and Obama campaign donor James Alan Fox of Northeastern University to the effect that Biden makes a legitimate point that "few police can translate to more violent crime" (PolitiFact's phrasing).  Fox affirms that point, by PolitiFact's account, though it's worth noting that on the record Biden asserted a "direct correlation" between crime and the size of a police force.  The change in wording seems strange for a fact check outfit that maintains that "words matter."

The conclusion gives us nothing new other than the "Mostly True" rating.  Biden was supposedly "largely in line" with the UCR murder data for Flint.  His claim about rape apparently did not drag down his rating much even though PolitiFact admittedly could not "fully" explain the discrepancies.  PolitiFact apparently gave Biden credit for the underlying argument that reductions in a police force "could result in increases in violent crime" despite Biden's rhetoric about a "direct correlation."


The grades:

Angie Drobnic Holan:  F
Sue Owen: N/A
Martha Hamilton:  F

This fact check was notable for its reliance on sources apparently predisposed toward the Obama administration and its relatively unquestioning acceptance of information from those sources.  The Washington Post version of this fact check, for comparison, contacted three experts to PolitiFact's one, and none of the three had an FEC filing indicating a campaign contribution to Obama.

And no investigation of whether Biden cherry-picked Flint?  Seriously?  See the "Afters" section for more on that as well as commentary on PolitiFact's CYA attempt.