Monday, July 16, 2012

The Washington Times: "These three fact-checkers keep candidates in line"

Once one makes it past the glowing title the Washington Times stuck over its too-generous review of the three most significant fact-check operations, there's some good and fresh reporting.  And it touches our weakest-link candidate, PolitiFact:
(M)uch of what the fact-checkers do is inherently judgment calls.

For example, PolitiFact Virginia will grade a politician's words as true on their face, while other times will look for suggestive meanings that they say make factually true statements unfair.
The interpretive bias affecting story focus is hard to quantify, but finding examples isn't hard.  In June, Mitt Romney claimed that poverty among Hispanics increased under Obama.  PolitiFact decided Romney was blaming Obama and ruled the true statement "Half True."

Also in June, the Obama campaign claimed that Romney, as governor of Massachusetts, outsourced call center jobs to India.  But Romney did not outsource any jobs.  He vetoed a bill that would have prevented companies contracting with the state of Massachusetts from outsourcing jobs.  The PolitiFact rating?  "Half True."

A true statement is half true depending on the focus PolitiFact gives it.  A false statement is likewise half true, depending on the focus.

Similar examples occur often throughout the various PolitiFact operations.

The best part of the Times' story comes from a quoted source:
Ray Allen, a longtime Virginia GOP consultant and adviser to House Majority Leader Eric Cantor, said the entire process is inherently flawed.

"So much of what is getting fact-checked is opinion and political philosophy," he said. "The fact-checkers are actively intervening in the campaigns. We've seen fact-checkers write things they clearly want to get in TV ads."
Allen's right about the flaws in PolitiFact's process.  It enshrines liberal bias from the start of the process through the end.  The imprecise rating system provides a versatile canvas for expressing political opinions, all the more so because PolitiFact's ratings in practice ignore the definitions of those ratings.

The system almost inevitably results in spin.

Saturday, July 14, 2012

The Washington Post: "PolitiFacters respond to ‘weekend dump’ allegations"

Erik Wemple of the Washington Post delivers the third in his series following the current dust-up between the Republican Party of Virginia and PolitiFact Virginia.

Wemple, as he promised, visits the GOP's claim that the timing of PolitiFact Virginia stories appears to maximize the impact of negative stories while burying positive ones.

Wemple:
As detailed in Part One of this extensive series, the Republican Party of Virginia is claiming that PolitiFact Virginia, which is run from the offices of the Richmond Times-Dispatch, discriminates against Republican politicians in the most insidious of manners: It times positive fact-checks of Republicans for the weekends, when people aren’t logged on, and “saves” the negative stuff on Republicans for high-traffic mid-week slots. That’s the claim.
Wemple's off the mark.  The document doesn't claim that the stories are deliberately timed.  Rather, it claims that the timing of the stories yields a discriminatory result.  The discrepancy between Wemple's report and the reality of the GOP document is easy to see in the passage Wemple quotes:
Here’s a relevant excerpt from the 86-page slameroo report that the Republican Party of Virginia compiled on PolitiFact Virginia:
PolitiFact Issued Only Two “False” And One “Pants On Fire” Ruling On Republican Statements During The Weekend (Starting After 5 P.M. On Friday), Saving 37 “Mostly False,” “False,” “Pants On Fire,” And “Full Flop” Reports To Be Issued Between Monday And Thursday.

The GOP claim is obviously couched in objective terms and makes no judgments about PolitiFact's intent.  The claim concerns the result, not the intent.

And, of course, the PolitiFact response is a total joke.

Rick Thornton of the Richmond Times-Dispatch says “We typically print in the newspaper PolitiFact rulings on Sunday and Monday . . . . We post our rulings online pretty much as soon as they’re done . . . . A number of our rulings on both sides are on Fridays because they’re being finished up on Friday for Sunday.” 

That doesn't answer anything.

PolitiFact editor Bill Adair, who heads the national operation, says "It’s ridiculous to suggest that any of our PolitiFact sites schedule publication of some items to get smaller audiences."

Adair gravitates directly toward the same straw man that fascinated Wemple.  If the GOP document has the facts right and the good gets the small audience while the bad gets the big audience, then the discrimination exists regardless of whether the PolitiFacters are aware of it.  And one would think that PolitiFact Virginia would know about the alleged problem from its communications with the RPV.

Neither Thornton nor Adair addresses the charge from the RPV.  And it's a pity that Wemple reported it inaccurately.

Why is this so hard?  If the Sunday paper has more readers than the weekday papers, then PolitiFact can give an objective response to the charge from the RPV:  those weekend stories may often reach the larger audience.  If that defense isn't accurate, then perhaps admit that the RPV has a point but assure everyone that it wasn't on purpose.

Is PolitiFact dissembling for the sake of a CYA strategy?   Yeah, could be.  In any case, the responses from PolitiFact scarcely count as serious.  And we let these people check facts for us?

Friday, July 13, 2012

Big Journalism: "Politispin: 'Fact-Checkers' Mislead on GOP Leaders' Favorable Unemployment Numbers"

Big Journalism's Tony Lee posted an article yesterday that sets the tone with the first line:
The purportedly unbiased Politifact will go to great lengths to help Democrats.
That's old news around here, but like the liberals who endlessly parrot PolitiFact's spin, we appreciate confirmation bias as much as the next guy. 

So what's all the hubbub about? PolitiFact Rhode Island's rating of GOP gubernatorial candidate John Robitaille's claim that "Unemployment rate dropped in every state that elected a Republican gov. in 2010." Robitaille based his claim on a report done by Robert Elliott. Lee critiques PolitiFact's dance moves:
In a remarkable twisting of facts and logic, Politifact concedes Elliott’s two points are true before somehow rendering those points to be “Half True.”
This isn't the first time in recent memory that PolitiFact has found ways to decide accurate figures weren't worthy of a "True" rating. This rating also adds to the list of experts left with a bad taste in their mouths after dealing with PolitiFact:
“That type of spin would be expected of, say, the Democratic Governors Association, but not a supposedly ‘objective’ and ‘nonpartisan’ news organization that claims to be the official arbiter of the truth,” Elliott told Breitbart News. “It is the insidious nature of PolitiFact's bias that makes them so loathsome.”
Elliott's statements by themselves make Lee's article a must-read. But Lee sweetens the pot by highlighting PolitiFact's use of extraneous evidence to cloud the issue it was ostensibly reviewing:
Politifact then goes on to compare the unemployment rates of the states that simply elected a governor who was from a different party from the predecessor’s, which is a completely different analysis than Elliott’s, which is what Politifact was supposed to be “fact checking.”

When doing that analysis resulted in Republican governors still reducing the unemployment rate faster than Democratic governors, Politifact decided to compare the unemployment numbers of the Republican predecessors in states that elected Democrats and Democratic predecessors in states that elected Republicans. Only then -- when not even comparing the current crop of governors or the past two years, which was the basis of Elliott’s analysis -- was Politifact able to find something they could use to say Democrats (predecessors) were slightly better than Republicans (predecessors) at reducing the unemployment rate.
Lee nails the point, and this reminds me of PolitiFact's treatment of Laura Ingraham's claim about RomneyCare's unpopularity with national voters. In that case, PolitiFact based their entire rating on statistics only from Massachusetts when Ingraham was probably talking about all 50 states. It appears that when PolitiFact doesn't like the initial outcome, they find new facts to throw into the mix until they reach their desired outcome.

The most hilarious part of the rating is that PolitiFact not only conceded but confirmed Elliott's numbers:
Considering the unemployment rate has fallen in 49 states in the last year, that’s stretching the statistic pretty thin.

We find Robitaille’s claim "is partially accurate but leaves out important details or takes things out of context," our definition of Half True.
If the unemployment rate fell in 49 states, then by definition Robitaille's claim is true. Using PolitiFact's logic, if Robitaille had claimed "The sun rose in every state that elected a GOP governor," he'd be rated only "Half True" because he left out important details. It's nonsense. And it's not fact checking.

This is the first time we've noticed Tony Lee, but if this installment at Big Journalism is any indication, we're looking forward to highlighting his work in the future. Head over to Breitbart and read the whole thing. There's plenty more to this smackdown.


Bryan adds:

Matthew Hoy of Hoystory also takes issue with PolitiFact's rating of Robitaille.

One might cut PolitiFact a break for trying to take credit into account for the sake of its ratings if the effort were evenly applied and didn't force PolitiFact to largely ignore the definitions it established for its ratings.

What do I think of PolitiFact's execution?  I'd borrow a line from legendary football coach John McKay:  "I'm in favor of it."

Wednesday, July 11, 2012

The Washington Post: "Virginia Republican Party to PolitiFact: Don’t bother ringing!"

The Washington Post's reporter/blogger Erik Wemple updates his reporting on PolitiFact Virginia and the critique from the Republican Party of Virginia.

It turns out--no big surprise here--that the relationship broke down between the Virginia GOP and PolitiFact reporters.  The Republican Party of Virginia joins Wisconsin Democrats in giving their state's PolitiFact franchise the silent treatment.

Wemple may have tipped his ideological hand by referring to the GOP's critique as a "screed."  Sure, he can claim he just meant it was a long critique.  But if he does that then it makes his use of the term appear redundant ("enormous screed").  Careful, Mr. Wemple.

State political parties cutting off their cooperation with a fact checker?  Looks like the making of a news story.

The Washington Post: "Virginia Republican Party publishes huge attack paper on PolitiFact"

Erik Wemple and the Washington Post stand as the first mainstream media entities, not counting PolitiFact Virginia itself, to weigh in on the massive pushback PolitiFact Virginia received yesterday from the Republican Party of Virginia:
The Virginia Republican Party has compiled an attack on PolitiFact’s Virginia operation that is virtually unbloggable. An 86-page document with a cover page stating, “TO THE COMMONWEALTH OF VIRGINIA: A COMPREHENSIVE ANALYSIS OF POLITIFACT VIRGINIA’S QUESTIONABLE OBJECTIVITY,” it starts with a two-page memorandum and a three-page table of contents. Even Rachel Maddow has never produced a PolitiFact critique as exquisitely formatted.
Exquisite formatting makes less gratuitous use of capitalization according to our tastes, but we credit Wemple for zeroing in on one of the most intriguing aspects of the ponderous critique:
To narrow the scope of its inquiry, the Erik Wemple Blog will start out by exploring only the most fascinating of the Republican Party’s allegations — namely, that PolitiFact Virginia attempts to bury good ratings about Republicans and tout bad ones.
The Virginia GOP may qualify as the first to notice a bias in the timing of the stories, so that makes it a good angle for Wemple's initial approach.

Journalists are still missing the big story:  No mainstream fact checker receives anywhere near the criticism that PolitiFact receives.  There's a story in there.  And it's an important one.

Big Journalism: "VA GOP Pushes Back Against PolitiFact, Shows Other States the Way"

Big Journalism's John Nolte didn't take long to weigh in on the Republican Party of Virginia's challenge to PolitiFact Virginia's objectivity:
PolitiFact isn't just a national cancer on all of us. This reprehensible outfit also "fact-checks" in a number of individual states, including the crucial swing states of Florida, Wisconsin, Ohio, New Hampshire, and Virginia.

Unfortunately, my lack of superpowers makes it impossible for me to monitor the left-wing propaganda PolitiFact is surely spewing in each individual state. Thankfully, though, the Republican Party of Virginia has had enough.
Nolte's spirited rant is worth a full read, but we'll register some qualified disagreements.

If PolitiFact is a cancer, it's often benign.  The fact checks come close enough to the mark that radical surgery probably isn't the answer.  And, in fact, the response from the Republican Party of Virginia probably isn't the model response, largely because it comes too late to serve as a timely corrective for any misinformation it detects and because its format discourages people from reading it (PFB will give it a closer look over time).

On the good side, this type of response from the party does a great deal to bring attention to PolitiFact's many issues, which are best highlighted by research like that of Eric Ostermeier and collected evaluations like the recent set from Ohio Watchdog.

It's pretty easy to find good criticism of PolitiFact.  The challenge comes from getting the information in front of the public to increase people's awareness that PolitiFact cannot be trusted in its current form.




Tuesday, July 10, 2012

Virginia GOP vs. PolitiFact Virginia

The Republican Party of Virginia yesterday published an 86-page criticism of PolitiFact Virginia's objectivity.

We'll have plenty more to say about the specifics as we sift through it all, but here's a small taste of the lengthy document:
We believe the objective evidence assembled here provides ample reason for the public to question PolitiFact Virginia’s objectivity. Based on the compelling data contained herein we believe any Republican official or candidate in Virginia would be justified in publicly indicting PolitiFact Virginia’s pattern of bias, and publicly refusing to participate in or cooperate with their analyses unless and until such time the Richmond Times-Dispatch can substantively and publicly address the underlying concerns about their PolitiFact Virginia team’s lack of objectivity. Each official and candidate can make their own decision on participation with PolitiFact Virginia going forward.
PolitiFact Virginia was not slow to respond, though their response was weak even by PolitiFact standards.

A portion of the rejoinder from PolitiFact Virginia:
On Tuesday, the Republican Party of Virginia sent an "open letter to the commonwealth" accusing PolitiFact Virginia of being biased against the GOP in our rulings.

We disagree.
That's the gist of it, and the evidence supporting PolitiFact's disagreement is only marginally greater than what appears in the quotation above.  It doesn't begin to answer all the points in the GOP critique.

Wednesday, July 4, 2012

Ohio Watchdog: the "PolitiFact or Fiction" series

The Franklin Center for Government and Public Integrity is onto PolitiFact, in the form of Ohio Watchdog and its seven-part (so far?) series "PolitiFact or Fiction."

Each of the seven parts reviews a questionable ruling by PolitiFact Ohio, with the focus falling on the U.S. Senate contest between incumbent Democrat Sherrod Brown and Republican challenger Josh Mandel.

Part 1

The opening salvo by Jon Cassidy blasts PolitiFact Ohio for rating two nearly identical claims from Mandel differently.  One version received a "Half True" while the other garnered a "Mostly True."  Cassidy argues that both versions were true and explains the flaw in the reasoning PolitiFact used to justify its "Half True" rating in one case.

Part 2

This installment, again by Cassidy, criticizes PolitiFact's use of softball ratings in the context of its candidate report cards. The report cards graphically total PolitiFact's ratings for a given candidate and PolitiFact encourages readers to compare the report cards when deciding for whom to vote.

Cassidy:
Here’s Democrat Brown’s claim, which got a rating of “true”:
Rooting for the Red Sox is like rooting for the drug companies. I mean it’s like they have so much money, they buy championships against the working-class, middle-America Cleveland Indians. It’s just the way you are.
Yes, it's questionable whether Brown's statement is even worthy of a fact check.  Cassidy goes further, showing that PolitiFact's rating doesn't make any sense given Brown's failure to draw an apt analogy:
Brown didn’t pick a dominant pharmaceutical company for his comparison. He picked all pharmaceutical companies, as though we should root against an entire industry because of its size.
Touché.

Part 3

With the third installment Cassidy absolutely clobbers PolitiFact for a botched rating of Brown's claim regarding average student loan debt for Ohio graduates.  PolitiFact originally gave Brown a "True" rating but changed it to "Half True" after readers pointed out problems with the rating.  Brown claimed the average graduate owed about $27,000 on student loans, but in fact that number applies only to students who had taken out loans.  Cassidy did the calculations including the students without student loans:
Since 32 percent of Ohio graduates have no student debt at all, Brown overstated the average debt by half.
And that warrants a "Half True" on PolitiFact's "Truth-O-Meter."  Supposedly.
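
The arithmetic behind Cassidy's correction is easy to check.  Here's a minimal Python sketch using only the figures quoted above (Brown's $27,000 borrowers-only average and the 32 percent of Ohio graduates with no debt):

```python
# Figures from the post: Brown cited an average debt of about $27,000,
# but that average covers only borrowers; 32 percent of Ohio graduates
# carried no student debt at all.
avg_debt_borrowers = 27_000   # Brown's figure (borrowers only)
share_debt_free = 0.32        # graduates with no loans

# Average across ALL graduates, counting debt-free graduates as $0:
avg_debt_all = avg_debt_borrowers * (1 - share_debt_free)
print(f"Average over all graduates: ${avg_debt_all:,.0f}")
# → Average over all graduates: $18,360

# How far Brown's figure overstates the all-graduates average:
overstatement = avg_debt_borrowers / avg_debt_all - 1
print(f"Overstatement: {overstatement:.0%}")
# → Overstatement: 47%
```

A 47 percent overstatement is, as Cassidy put it, overstating the average debt "by half."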

Part 4

As with Part 3, Part 4 hits PolitiFact Ohio for choosing a softball issue on which to grade Brown while also giving him an inflated grade.  Cassidy points out how PolitiFact's use of equivocal language gets Brown off the hook for using a very misleading statistic.  Brown ends up with a "Mostly True" from PolitiFact.

Part 5

The fifth installment adds another example in the same vein as the previous two.

PolitiFact uses equivocal language--well beyond using mere charitable interpretation--to help defend another of Brown's dubious claims.

Cassidy:
“You’d think it would be as easy as comparing the value of goods and services exported from the United States with those imported from other countries,” [PolitiFact Ohio's Stephen] Koff writes.

Note to Koff: it’s exactly that easy.
Cassidy could have shown PolitiFact's spin more graphically than he did.

PolitiFact:
For January through September 2010, the most recent measurement available, the trade balance was a negative $379.1 billion. Assuming the monthly trends hold through December, this year’s annual trade deficit should reach $500 billion.

Divide that by the days of the year and you’d have a daily trade deficit of $1.37 billion a day. That’s 32 percent lower than Brown’s claim of $2 billion a day.
The accurate figure should always serve as the baseline.  PolitiFact uses Brown's number as the baseline instead, finding the real figure lower by 32 percent.  A 32 percent error doesn't sound so bad.  Use $1.37 billion as the baseline and it turns out that Brown's number represents an inflation of 46 percent.  In PolitiFact terms, that's "Mostly True."  PolitiFact tried to justify the rating based on higher trade deficits before 2009.
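
The baseline problem is a one-screen calculation.  A short Python sketch, using the figures from PolitiFact's own write-up (a projected $500 billion annual deficit versus Brown's $2 billion a day):

```python
# Figures from PolitiFact's write-up.
annual_deficit = 500e9                # projected annual trade deficit
actual_daily = annual_deficit / 365   # ≈ $1.37 billion per day
claimed_daily = 2e9                   # Brown's figure

# PolitiFact's framing: the real figure is X% lower than the claim.
shortfall = (claimed_daily - actual_daily) / claimed_daily
# The correct framing: the claim inflates the real figure by Y%.
inflation = (claimed_daily - actual_daily) / actual_daily

print(f"Real figure is {shortfall:.0%} lower than the claim")
# → Real figure is 32% lower than the claim
print(f"Claim inflates the real figure by {inflation:.0%}")
# → Claim inflates the real figure by 46%
```

Same two numbers, two very different headlines, depending entirely on which one serves as the denominator.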

Cassidy's right again that Brown benefited from grade inflation.

Part 6

With Part 6, Cassidy offers an example of PolitiFact Ohio nitpicking Mandel down to "Mostly True" for a plainly true statement.

Mandel claimed his election to the office of state treasurer came from a constituency where Democrats outnumber Republicans 2 to 1.  People would understand that to mean a count of voter registration records.

PolitiFact justified its ruling according to an expert's claim that voter registration numbers aren't "the best litmus test."  But think how much more context was missing from Sherrod Brown's statement in Part 5.  There's no comparison.

Part 7

In Part 7 Cassidy switches gears and defends one of Brown's statements from the truth-torturers at PolitiFact, but uses the minor slight as a contrast to yet another example of grade inflation.

Cassidy:
When Brown said “we buy 35 percent of all Chinese exports” and the actual number turned out to be 25 percent, they gave him a “half-true.”

We’re not sure which half. If you take out the middle four words, “we buy… Chinese imports” is true. You can argue Brown’s claim is close enough, or that it’s way off the mark, but whatever you call it, it isn’t half-true.
Looking at the original story we find PolitiFact again favoring Brown by using the errant figure as the baseline for comparison:
But while PolitiFact Ohio isn’t looking to play Gotcha!, a key tenet is that words matter. In this case, Brown’s number is nearly 30 percent greater than the correct figure.
Yes, words matter.  PolitiFact uses words that suggest the accurate figure was used as the baseline.  But the math tells a different story.  The 10 percentage point difference between 25 percent and 35 percent is nearly 30 percent of Brown's incorrect figure--the wrong one to use as the baseline.  In fact, Brown's number is 40 percent greater than the correct figure.
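
The difference between the two baselines takes only a couple of lines of Python to verify, using the 25 and 35 percent figures from the story:

```python
claimed = 35   # Brown's figure (percent of Chinese exports)
correct = 25   # the actual figure

diff = claimed - correct   # 10 percentage points

# PolitiFact's framing uses the wrong baseline (Brown's own number):
vs_claim = diff / claimed
# The proper baseline is the correct figure:
vs_correct = diff / correct

print(f"{vs_claim:.0%} of the claimed figure")      # → 29% of the claimed figure
print(f"{vs_correct:.0%} over the correct figure")  # → 40% over the correct figure
```

Measured against his own inflated number, Brown's error looks like "nearly 30 percent."  Measured against the true figure, it's 40 percent.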

Summary

Overall, Cassidy did a fine job of assembling a set of PolitiFact Ohio's miscues and explaining where the ratings went wrong.  When PolitiFact botches the math on percentages, as we pointed out above, it helps Sherrod Brown all the more.

We appreciate Ohio Watchdog's contribution toward holding PolitiFact to account.