
Friday, June 2, 2017

An objective deception: "neutral" PolitiFact

PolitiFact's central deception follows from its presentation of itself as a "nonpartisan" and neutral judge of facts.

A neutral fact checker would apply the same neutral standards to every fact check. Naturally, PolitiFact claims it does just that. But that claim should not convince anyone given the profound level of inconsistency PolitiFact has achieved over the years.

To illustrate PolitiFact's inconsistency we'll use an example from 2014 via PolitiFact Rhode Island that we just ran across.

Rhode Island's Senator Sheldon Whitehouse said jobs in the solar industry outnumbered jobs in coal mining. PolitiFact used data from the Solar Foundation to help evaluate the claim, and included this explanation from the Solar Foundation's Executive Director Andrea Luecke:
Luecke said by the census report’s measure, "the solar industry is outpacing coal mining." But she noted, "You have to understand that coal-mining is one aspect of the coal industry - whereas we’re talking about the whole solar industry."

If you add in other coal industry categories, "it’s more than solar, for sure. But the coal-mining bucket is less, for sure."
Luecke correctly explained that comparing the numbers from the Solar Foundation's job census to "coal mining" jobs represented an apples-to-oranges comparison.

PolitiFact Rhode Island did not take the rigged comparison into account in rating Whitehouse's claim. PolitiFact awarded Whitehouse a "True" rating, defined as "The statement is accurate and there’s nothing significant missing." We infer from the rating that PolitiFact Rhode Island regarded the apples-to-oranges comparison as insignificant.

However, when Mitt Romney in 2012 made substantially accurate claims about Navy ships and Air Force planes, PolitiFact based its rating on the apples-to-oranges angle:
This is a great example of a politician using more or less accurate statistics to make a meaningless claim. Judging by the numbers alone, Romney was close to accurate.

...

Thanks to the development of everything from nuclear weapons to drones, comparing today’s military to that of 60 to 100 years ago presents an egregious comparison of apples and oranges.
PolitiFact awarded Romney's claim its lowest-possible "Truth-O-Meter" rating, "Pants on Fire."

If Romney's claim was "meaningless" thanks to advances in military technology, is it not reasonable to regard Whitehouse's claim as similarly meaningless? PolitiFact Rhode Island didn't even mention government subsidies of the solar energy sector, nor did it try to identify Whitehouse's underlying argument--probably something along the lines of "Focusing on renewable energy sources like solar energy, not on fossil fuels, will help grow jobs and the economy!"

Comparing mining jobs to jobs for the whole solar energy sector offers no reasonable benchmark for comparing the coal energy sector as a whole to the solar energy sector as a whole.

Regardless of whether PolitiFact's people think they are neutral, their work argues the opposite. They do not apply their principles consistently.

Wednesday, December 24, 2014

PolitiFact editor explains the difference between "False" and "Pants on Fire"

During an interview for a "DeCodeDC" podcast, PolitiFact editor Angie Drobnic Holan explained to listeners the difference between the Truth-O-Meter ratings "False" and "Pants on Fire":



Our transcript of the relevant portion of the podcast follows, picking up with the host asking why President Barack Obama's denial of a change of position on immigration wasn't rated more harshly (bold emphasis added):
ANDREA SEABROOK
Why wouldn't that be "Pants on Fire," for example?

ANGIE DROBNIC HOLAN
You know, that's an interesting question.

We have definitions for all of our ratings. The definition for "False" is the statement is not accurate. The definition for "Pants on Fire" is the statement is not accurate and makes a ridiculous claim. So, we have a vote by the editors and the line between "False" and "Pants on Fire" is just, you know, sometimes we decide one way and sometimes decide the other. And we totally understand when readers might disagree and say "You rated that 'Pants on Fire.' It should only be 'False.'" Or "You rated that 'False.' Why isn't it 'Pants on Fire'?" Those are the kinds of discussions we have every day ...
One branch of our research examines how PolitiFact differentially applies its "Pants on Fire" definition to false statements by the ideology of the subject. Holan's description accords with other statements from PolitiFact regarding the criteria used to distinguish between "False" and "Pants on Fire."

Taking PolitiFact at its word, we concluded that the line of demarcation between the two ratings is essentially subjective. Our data show that PolitiFact National is over 70 percent more likely to give a Republican's false statement a "Pants on Fire" rating than a Democrat's false statement.

We don't necessarily agree with PolitiFact's determinations of what is true or false, of course. What's important to our research is that the PolitiFact editors doing the voting believe it.

Holan's statement helps further confirm our hypothesis regarding the subjective line of demarcation between "False" and "Pants on Fire."

We'll soon publish an update of our research, covering 2014 and updating cumulative totals.

Monday, December 22, 2014

Mailbag meets windbag

PolitiFact published a "Lie of the Year" edition of its "Mailbag" feature on Dec. 22, 2014. Criticism by Hot Air's Noah Rothman drew immediate mention:
Noah C. Rothman at the conservative blog Hot Air took issue with our Lie of the Year choice.

"Some of these assertions (that collectively earned the Lie of the Year) were misleading, but PolitiFact’s central thesis – ‘when combined, the claims edged the nation toward panic’ – is unfalsifiable. In the absence of any questioning of the federal response to the Ebola epidemic, an unlikely prospect given the government’s poor performance, PolitiFact cannot prove there would have been no broader apprehension about the deadly African hemorrhagic fever. In fact, to make that claim would be laughable.

"In response to Ebola, Sierra Leone literally canceled Christmas. In Britain, returning health care workers who may have had contact with an Ebola patient will have a lonely holiday as well. They will be forced by government mandate to isolate themselves for the duration of the 21-day incubation period, despite the protestations of health care workers. If Ebola ‘panic’ exists, it is certainly not limited to America and is not the fault of exclusively conservative lawmakers. … PolitiFact embarrassed itself again today, but I guess that’s hardly news."
Rothman's main criticism concerned PolitiFact's ridiculous primary focus on George Will's true claim that Ebola could be transmitted through the air by a sneeze or a cough.

PolitiFact's guidelines:
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
Left out: the main part of Rothman's criticism.

PolitiFact=hypocrites.

Wednesday, December 17, 2014

Commentary: 'PolitiFact’s Ebola Distortions'

Seth Mandel has another deft dissection of PolitiFact's 2014 "Lie of the Year" up at Commentary:
Different statements being grouped together into one “lie”–especially when they’re not lies, even if they’re mistaken–will not do wonders for PolitiFact’s already rock-bottom credibility. But in fact it’s really worse than that. Here’s PolitiFact’s explanation for their choice of “Lie of the Year,” demonstrating beyond any semblance of a doubt that those who run PolitiFact don’t understand the concept around which they’ve supposedly built their business model:
Yet fear of the disease stretched to every corner of America this fall, stoked by exaggerated claims from politicians and pundits. They said Ebola was easy to catch, that illegal immigrants may be carrying the virus across the southern border, that it was all part of a government or corporate conspiracy.
The claims — all wrong — distorted the debate about a serious public health issue. Together, they earn our Lie of the Year for 2014.
You’ll notice right there that PolitiFact engages in its own bit of shameless dishonesty.
Mandel makes a great point about PolitiFact's careless reporting of its "Lie of the Year" selection, a point we're also poised to make by using even more blatant examples from Aaron Sharockman, the editor of PolitiFact's "PunditFact" venture.

It's bad enough to botch the fact-checking end of things. Telling people about the botched fact checks using a new layer of falsehoods and distortions intensifies the deceptive effects.

This is nothing out of the ordinary for PolitiFact.

We'll once again emphasize the point we made in our post yesterday: Naming more than one "Lie of the Year" has some utility when it comes to deflecting criticism. Even Mandel mumbled something about being fair to PolitiFact owing to the multiple winners before he eviscerated their inclusion of Will's claim about the airborne spread of Ebola.

Tuesday, December 16, 2014

Lost letters: Apology Tour edition

We love getting reader feedback on our posts. And we're much more likely to respond to criticism than is PolitiFact. We found this bit of feedback very recently posted to a message board after somebody mentioned how PolitiFact found claims about an Obama "apology tour" false:
How cute, some nut's blog claiming that conservative-leaning Politifact is biased in favor of Obama. And claims that Obama had an "apology tour" because...far-right nutcase Nile Gardiner of the Heritage Foundation says so
How odd for "conservative-leaning PolitiFact" to dismiss the one conservative view among the four experts whose views it solicited! Especially when Gardiner stood alone among the experts with his experience in international relations.

And, of course, our argument wasn't based solely on PolitiFact arbitrarily ignoring an expert opinion it solicited. We found descriptions of apologies in professional journals and applied the definitions we found to the idea of an Obama "apology tour." Because that's what nutcases do.

Why didn't PolitiFact bother reviewing professional literature in its effort to settle the fact of the matter? Probably the same reason it ignored Gardiner's professional opinion: PolitiFact is conservative-leaning.

It makes complete sense if reality is liberally biased.


Find the full explanation of why it's reasonable to call Obama's world tour an "apology tour" at Zebra Fact Check

Addendum:

Jeff thinks it may not be obvious to every visitor that I'm kidding about PolitiFact's conservative bias. This note is intended as a corrective for any who might fall in that group.

2014: Another year, another laughable Lie of the Year

It's time for our annual criticism of PolitiFact's "Lie of the Year" award!

Leading off in a bipartisan spirit, let's note that every single one of PolitiFact's "Lie of the Year" award winners has contained some nugget of truth. This year, PolitiFact decisively elected to give the award to many quite different claims, each having something to do with the Ebola virus.

There's nothing like the meat tenderizer approach when wielding the scalpel of truth.

My handicapping job on the Lie of the Year award was pretty close. But PolitiFact threw us another curve this year by choosing two entries from its list of candidates and then throwing a bunch of other somewhat related claims in for good measure.

No, we're not even kidding.

Let's let PolitiFact's editor, Angie Drobnic Holan, tell the story:
[F]ear of the disease stretched to every corner of America this fall, stoked by exaggerated claims from politicians and pundits. They said Ebola was easy to catch, that illegal immigrants may be carrying the virus across the southern border, that it was all part of a government or corporate conspiracy.

The claims -- all wrong -- distorted the debate about a serious public health issue. Together, they earn our Lie of the Year for 2014.
PolitiFact's lead example, that Ebola is easy to catch, matches closely with the entry I marked as the most likely candidate. It's also the candidate that Hot Air's Noah Rothman identified as the worst candidate:
[T]he most undeserving of entries upon which PolitiFact has asked their audience to vote is a claim attributed to the syndicated columnist George Will. That claim stems from an October 18 appearance on Fox News Sunday in which Will criticized the members of the Obama administration for their hubristic early statements assuring the country that the Ebola outbreak in Africa was contained to that continent.

“The problem is the original assumption, said with great certitude if not certainty, was that you need to have direct contact, meaning with bodily fluids from someone because it’s not airborne,” Will said of the deadly African hemorrhagic fever. “There are doctors who are saying that in a sneeze or some cough, some of the airborne particles can be infectious.”
Rothman's post at Hot Air makes essentially the same points we posted to PolitiFact's Facebook page back in October:
 

PolitiFact's ruling was an exercise in pedantry, extolling the epidemiological understanding of "airborne" over the common understanding. Perhaps Will's statement implicitly exaggerated the risk of contracting Ebola via airborne droplets, but his statement was literally true.

What else went into the winning "Lie of the Year" grab-bag?
  • Rand Paul's claim that Ebola is "incredibly contagious" (not a candidate)
  • Internet users claiming Obama would detain persons showing Ebola symptoms (not a candidate)
  • Bloggers claiming the virus was cooked up in a bioweapons lab (not a candidate)
  • Rep. Paul Broun's claim he'd encountered reports of Ebola carriers crossing the U.S.-Mexico border (a candidate!)
  • Sen. John McCain's claim that the Obama administration said there'd be no U.S. outbreak of Ebola (not a candidate).
PolitiFact tosses in a few more claims later on, but you get the idea. PolitiFact crowned every blue-eyed girl Homecoming Queen in 2014, after naming only two statements "Lie of the Year" in 2013.

Why so many lies of the year in 2014?


Sunday, December 14, 2014

PolitiFact poised to pick 2014 "Lie of the Year"

It's time for PolitiFact's "Lie of the Year" nonsense again, where the supposedly nonpartisan fact checkers set aside objectivity even more blatantly than usual to offer their opinion on the year's most significant political falsehood.

We'll first note a change from years past, as PolitiFact abandons its traditional presentation of the candidates accompanied by their corresponding "Truth-O-Meter" graphic. Does that have something to do with criticisms over last year's deceitful presentation? One can only hope, but we're inclined to call it coincidence.

And now a bit of handicapping, using a 0-10 scale to rate the strength of the candidate:

Tuesday, December 9, 2014

PolitiFact's coin flips

We've often highlighted the apparent non-objective standards PolitiFact uses to justify its "Truth-O-Meter" ratings. John Kroll, a former staffer at the Cleveland Plain Dealer, PolitiFact's former partner on PolitiFact Ohio, said the choice between one rating and another was often difficult and that the decisions amounted to "coin flips" much of the time.

Heads the liberal wins, tails the Republican loses, at least in the following comparison of PolitiFact's ratings of Stephen Carter (liberal) and Ted Cruz (Republican).

I'll simply reproduce the email PolitiFact Bias editor Jeff D. sent me, reformatted to our standard PFB presentation:
Read the last three paragraphs of each one (emphasis mine):
Carter said that more than 70 percent of American adults have committed a crime that could lead to imprisonment. Based on a strictly technical reading of existing laws, the consensus among the legal experts we reached is that the number is reasonable. Way more than a majority of Americans have done something in their lives that runs afoul of some law that includes jail or prison time as a potential punishment.

That said, experts acknowledged that the likelihood of arrest, prosecution or imprisonment is exceedingly low for many of Americans’ "crimes." 

As such, we rate the claim Mostly True.

Cruz said that "Lorne Michaels could be put in jail under this amendment for making fun of any politician."

Most experts we talked to agreed that the proposed amendment’s language left open the door to that possibility. But many of those same experts emphasized that prosecuting, much less imprisoning, a comedian for purely political speech would run counter to centuries of American tradition, and would face many obstacles at a variety of government levels and run headlong into popular sentiment.

In the big picture, Cruz makes a persuasive case that it’s not a good idea to mess with the First Amendment. Still, his SNL scenario is far-fetched. The claim is partially accurate but leaves out important details, so we rate it Half True.

One wonders if PolitiFact sought the consensus of experts while considering whether blacks were convicted at a higher rate than whites in a recent fact check. Rudy Giuliani received a "False" rating since PolitiFact could locate no official statistics backing his claim. Looks like official statistics aren't really needed if experts think a claim seems reasonable.
 

Jeff Adds: 

Though former Cleveland Plain Dealer (PolitiFact Ohio) editor John Kroll admits PolitiFact's ratings often amount to coin flips, their other journalistic standards are applied with the same consistency. Take for instance their Dec. 2 dodge of the claim that Obama's executive order on immigration would create a $3,000 incentive to hire undocumented workers:
The claim isn’t so much inaccurate as it is speculative. For that reason, we won’t put this on our Truth-O-Meter.
Was there an unannounced policy change at PolitiFact? Aaron Sharockman was editor on both the Cruz and Carter checks. An unnamed editor signed off on the incentive claim, adding flip-flops to coin flips.

Here's a timeline:
  • On Sept. 11, 2014, there was enough established, tangible evidence for something that may or may not happen in the future to say Ted Cruz' prediction was half wrong
  • On Dec. 2, 2014, PolitiFact suddenly had a policy against checking speculative claims, but felt compelled to spend an entire article Voxsplaining its work to readers.
  • On Dec. 8, 2014, PolitiFact was back in the future-checking business and found enough proof of something that hasn't actually happened yet to definitively determine a liberal's claim is "Mostly True."
Remember also that Mitt Romney won the Lie of the Year award for a TV ad that implied Chrysler would be moving Jeep production to China. So in 2012, PolitiFact's most notable falsehood of the year was a campaign ad implying something would happen in the future.

But does Obama's executive order offer a certain economic incentive, as in the Dec. 2 article? Sorry, PolitiFact says it doesn't rate speculative claims.

Friday, December 5, 2014

Legal Insurrection: 'PolitiFact debunks obviously 'shopped photo (that no one believed was real)'

The title pretty much says it all.

We had plans to write this example, but we're pleased as punch to find our effort pre-empted by the good folks at Legal Insurrection:
Two different photographs, one very clearly photoshopped to provide some social commentary on a quickly spiraling situation. Predictably, the photograph enraged some, delighted others, but no one with two brain cells to rub together believed that the “rob a store” version of the photograph was real.

Politifact, however, dove in headfirst to provide us with an analysis no one asked for[.]
Visit Legal Insurrection for the entire article and to experience the visuals.

Thursday, November 27, 2014

Different strokes for different folks

Folk A: President Barack Obama


Obama claimed border crossings are at the lowest level since the 1970s.
We cannot directly check Obama's literal claim -- which would include the number of people who failed and succeeded to cross the border -- because those statistics are not maintained by the federal government.
Truth-O-Meter rating: "Half True"

Folk B: Rudy Giuliani


Giuliani said blacks and whites are acquitted of murder at about the same rate.
We couldn't find any statistical evidence to support Giuliani’s claim, and experts said they weren't aware of any, either. We found some related data, but that data only serves to highlight some of the racial disproportion in the justice system.
We found "related data" PolitiFact apparently couldn't find:
Blacks charged with murder, rape and other major crimes are more likely to be acquitted by juries or freed because of a dismissal than white defendants, according to an analysis of Justice Department statistics.
Truth-O-Meter rating: "False"

Different strokes for different folks.

Thursday, November 20, 2014

Lost letters to PolitiFact Bias

We discovered a lost letter of sorts intended in response to our recent post "Fact-checking while blind, with PolitiMath."

Jon Honeycutt, posting to PolitiFact's Facebook page, wrote that he posted a comment to this site but it never appeared. I posted to Facebook in response to Honeycutt, including the quotation of his criticism in my reply:
Jon Honeycutt (addressing "PolitiFact Bias") wrote:
Hmm, just looked into 'politifact bias', the very first article I read http://www.politifactbias.com/.../fact-checking-while... Claimed that politifact found a 20% difference in the congressional approval rating but still found the meme mostly true. But when you read the actual article they link to, politifact found about a 3% difference. Then when I tried to comment to correct it, my comment never appeared.
Jon, I'm responsible for the article you're talking about. You found no mistake. As I wrote, "percentage error calculations ours." That means PolitiFact didn't bother calculating the error by percentage. The 3 percent difference you're talking about is a difference in terms of percentage *points*. Those are two different things. We at PolitiFact Bias are much better at those types of calculations than is PolitiFact. You were a bit careless with your interpretation. I have detected no sign of any attempt to comment on that article. Registration is required or else we get anonymous nonsense. I'd have been quite delighted to defend the article against your complaint.
To illustrate the point, consider a factual figure of 10 percent and a mistaken estimate of 15 percent. The difference between the two is 5 percentage points. But the percentage error is 50 percent. That's because the estimate exceeds the true figure by that percentage (15-10=5, 5/10=.5).

http://www.basic-mathematics.com/calculating-percent-error.html
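For readers who want the arithmetic spelled out, here's a minimal sketch in Python (our own illustration, using the hypothetical 10 and 15 percent figures from the example above):

  def percent_error(estimate, actual):
      # How far the estimate misses the actual value,
      # expressed as a percentage of the actual value.
      return (estimate - actual) / actual * 100

  actual, estimate = 10.0, 15.0
  print(estimate - actual)                # 5.0 -> difference in percentage points
  print(percent_error(estimate, actual))  # 50.0 -> percentage error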

Don't be shy, would-be critics! We're no less than 10 times better than is PolitiFact at responding to criticism, based on past performance. The comments section is open to those who register, and anyone who is a member of Facebook can post to our Facebook page.

Tuesday, November 11, 2014

Fact-checking while blind, with PolitiMath

One of the things we would predict from biased journalists is a forgiving eye for claims for which the journalist sympathizes.

Case in point?

A Nov. 11, 2014 fact check from PolitiFact's Louis Jacobson and intern Nai Issa gives a "True" rating to a Facebook meme claiming Congress has 11 percent approval while in 2014 96.4 percent of incumbents successfully defended their seats.

PolitiFact found the claim about congressional approval was off by about 20 percent and the one about the percentage of incumbents was off by a maximum of 1.5 percent (percentage error calculations ours). So, in terms of PolitiMath the average error for the two claims was 10.75 percent yet PolitiFact ruled the claim "True." The ruling means the 11 percent average error is insignificant in PolitiFact's sight.
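For clarity, here's the averaging behind that figure as a minimal Python sketch (our own calculation, using only the error percentages stated above):

  approval_error = 20.0   # approx. percentage error on the congressional approval claim
  incumbency_error = 1.5  # maximum percentage error on the 96.4 percent incumbency claim

  average_error = (approval_error + incumbency_error) / 2
  print(average_error)    # 10.75 -> the roughly 11 percent average error cited above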

Aside from the PolitiMath angle, we were intrigued by the precision of the Facebook meme. Why 96.4 percent and not an approximate number like 96 or 97? And why, given that PolitiFact often excoriates its subjects for faulty methods, wasn't PolitiFact curious about the fake precision of the meme?

Even if PolitiFact wasn't curious, we were. We looked at the picture conveying the meme and saw the explanation in the lower right-hand corner.

Red highlights scrawled by the PolitiFact Bias team. Image from PolitiFact.com

It reads: "Based on 420 incumbents who ran, 405 of which kept their seats in Congress."

PolitiFact counted 415 House and Senate incumbents, counting three who lost primary elections. Not counting undecided races involving Democrats Mark Begich and Mary Landrieu, incumbents held 396 seats.

So the numbers are wrong, using PolitiFact's count as the standard of accuracy, but PolitiFact says the meme is true.

It was fact-checked, after all.

Thursday, November 6, 2014

PunditFact PolitiFail on Ben Shapiro, with PolitiMath

On Nov. 6, 2014 PunditFact provided yet another example of why the various iterations of PolitiFact do not deserve serious consideration as fact checkers (we'll refer to PolitiFact writers as bloggers and the "fact check" stories as blogs from here on out as a considered display of disrespect).

PunditFact reviewed a claim by Truth Revolt's Ben Shapiro that a majority of Muslims are radical. PunditFact ruled Shapiro's claim "False" based on the idea that Shapiro's definition of "radical" and the numbers used to justify his claim were, according to PunditFact, "almost meaningless."

Lost on PunditFact was the inherent difficulty of ruling "False" something that's almost meaningless. Definite meanings lend themselves to verification or falsification. Fuzzy meanings defy those tests.

PunditFact's blog was literally filled with laughable errors, but we'll just focus on three for the sake of brevity.

First, PunditFact faults Shapiro for his broad definition of "radical," but Shapiro explains very clearly what he's up to in the video where he made the claim. There's no attempt to mislead the viewer and no excuse to misinterpret Shapiro's purpose.



Second, PunditFact engages in its own misdirection of its readers. In PunditFact's blog, it reports how Muslims "favor sharia." Pew Research explains clearly what that means: Favoring sharia means favoring sharia as official state law. PunditFact never mentions what Pew Research means by "favor sharia."

Do liberals think marrying church and state is radical? You betcha. Was PunditFact deliberately trying to downplay that angle? Or was the reporting just that bad? Either way, PunditFact provides a disservice to its readers.

Third, PunditFact fails to note that Shapiro could easily have increased the number of radicalized Muslims in his count. He drew his totals from a limited set of nations for which Pew Research had collected data. Shapiro points this out near the end of the video, but PunditFact either didn't notice or else determined its readers did not need to know.

PolitiMath


PunditFact used what it calls a "reasonable" method of counting radical Muslims to supposedly show how Shapiro engaged in cherry-picking. We've pointed out at least two ways PunditFact erred in its methods, but for the sake of PolitiMath we'll assume PunditFact created an apt comparison between its "reasonable" method and Shapiro's alleged cherry-picking.

Shapiro counted 680 million radical Muslims. PunditFact counted 181.8 million. We rounded both numbers off slightly.

Taking PunditFact's 181.8 million as the baseline, Shapiro exaggerated the number of radical Muslims by 274 percent. That may seem like a big enough exaggeration to warrant a "False" rating. But it's easy to forget that the bloggers at PunditFact gave Cokie Roberts a "Half True" for a claim exaggerated by about 9,000 percent. PunditFact detected a valid underlying argument from Roberts. Apparently Ben Shapiro has no valid underlying argument that there are plenty of Muslims around who hold religious views that meet a broad definition of "radical."
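Here's the calculation behind that 274 percent figure, sketched in Python with the rounded counts above:

  shapiro_count = 680.0      # millions, Shapiro's count of radical Muslims
  punditfact_count = 181.8   # millions, PunditFact's "reasonable" count

  exaggeration = (shapiro_count - punditfact_count) / punditfact_count * 100
  print(round(exaggeration))  # 274 -> percent by which Shapiro's count exceeds the baseline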

Why?

Liberal bias is as likely an explanation as any.


Addendum:

Shapiro makes some of the same points we make with his own response to PunditFact.

Thursday, October 30, 2014

Larry Elder: 'PunditFact Lies Again'

Conservative radio show host Larry Elder has an Oct. 30 criticism of PunditFact posted at Townhall.com. It's definitely worth a read, and here's one of our favorite bits:
Since PunditFact kicks me for not using purchasing power parity, surely PunditFact's parent, Tampa Times, follows its own advice when writing about the size of a country's economy? Wrong.

A Tampa Times' 2012 story headlined "With Slow Growth, China Can't Prop Up the World Economy" called China "the world's second-largest economy," with not one word about per capita GDP or purchasing power parity. It also reprinted articles from other papers that discuss a country's gross GDP with no reference to purchasing power parity or per capita income.
Elder does a nice job of highlighting PolitiFact's consistency problem. PolitiFact often abandons normal standards of interpretation in its fact check work. Such fact checks amount to pedantry rather than journalistic research.

A liberal may trot out a misleading statistic and it will get a "Half True" or higher. A figure like Sarah Palin, meanwhile, uses CIA Factbook ratings of military spending and receives a "Mostly False" rating.

Of course Elder makes the point in a fresh way by looking at the way PolitiFact's parent paper, the Tampa Bay Times, handles its own reporting. And the same principle applies to fact checks coming from PolitiFact. The fact checkers don't follow the standard for accuracy they apply to others.

Tuesday, October 28, 2014

PolitiFact supports dishonest Democratic Party "war on women" meme


There's plenty wrong with PolitiFact's Oct. 28 fact check of a Facebook meme, but we'll focus this time only on PolitiFact's implicit support of Democrats' baseless "war on women" political attack strategy regarding equal pay for equal work.

The meme graphic looks like this:


The question at the end, "Share if you miss the good old days!" implies a contrast between today's Republican Party platform and the platform in 1956. For us, the relevant point is No. 7, "Assure equal pay for equal work regardless of sex." Democrats have made the supposed Republican "war on women" a main point of their campaigns, partly by criticizing Republican opposition to the Democrats' proposed "Paycheck Fairness Act," which places new burdens on businesses defending against pay discrimination lawsuits. Equal pay for equal work, of course, already stands as the law of the land.

PolitiFact, on its main page of fact checks, elevates the fake contrast on the equal pay issue to first place:

Image from PolitiFact.com, appropriated under Fair Use.

"What a difference 58 years makes."

Back in '56, Republicans supported equal pay for equal work. But today, PolitiFact implies, Republicans no longer support equal pay for equal work.

It's horrible and biased reporting that has no place in a fact check.

Are things better in the text of the story? Not so much (emphasis in the original):
The 2012 platform doesn’t mention two of the meme’s seven items from 1956 -- unemployment benefits and equal pay for women.

The bottom line, then, is that on most of these issues, the GOP moved to the right between 1956 and 2012, though the degree of that shift has varied somewhat issue by issue.
PolitiFact finds no evidence of a shift on equal pay other than the issue's absence in the 2012 GOP platform. But that only makes sense given that federal equal pay laws went on the books in the 1960s. So there's no real evidence of any shift other than a fallacious appeal to silence, yet we have "equal pay" as PolitiFact's lead example of the GOP's supposed move to the right.

The issue also gets top billing in PolitiFact's concluding paragraphs (bold emphasis added):
The meme says the 1956 Republican Party platform supported equal pay, the minimum wage, asylum for refugees, protections for unions and more.

That’s generally correct. However, it’s worth noting that other elements of the 1956 platform were considered conservative for that era. Also, some of the issues have changed considerably between 1956 and 2012, such as the shift from focusing on post-war refugees to focusing on illegal immigration.
There's really no need to mention anything in the fact check about federal equal pay legislation passing in the 1960s, right?

This type of journalism is exactly what one would predict if liberal bias affected the work of left-leaning journalists. This fact check serves as an embarrassment to the practice of journalism.

Tuesday, October 21, 2014

Fact-checker can't tell the difference between "trusted" and "trustworthy"?

PolitiFact just makes criticism too easy.

Today PolitiFact's PunditFact highlighted a Pew Research poll that found many people do not trust Rush Limbaugh as a news source. The fact checkers published their article under the headline "Pew study finds Rush Limbaugh least trustworthy news source."

No, we're not kidding.


As if botching the headline isn't bad enough, the article contains more of PolitiFact's deceptive framing:
PunditFact is tracking the accuracy of claims made on the five major networks using our network scorecards. By that measure, 61 percent of the claims fact-checked on Fox News have been rated Mostly False, False or Pants on Fire, the most among any of the major networks.
As PolitiFact's methodology for constructing its scorecards lacks any scientific basis, this information is a curiosity at best. But when PolitiFact juxtaposes it with the polling data from Pew Research, the reader gets a gentle nudge in the direction of "Ah, yes, so the accuracy of PolitiFact's scorecards is supported by this polling!" That's bunk.

The scientific angle would come from an investigation into whether prior impressions of trustworthiness influence the ratings organizations like PolitiFact give to their subjects.

If PunditFact's article passes as responsible journalism then journalism is a total waste of time.

Sunday, October 12, 2014

Hot Air: 'It’s time to ask PolitiFact: What is the meaning of “is”?'

Dustin Siggins, writing for the Hot Air blog, gives us a PolitiFact twofer, swatting down two fact-check monstrosities from the left-leaning fact checker.

Siggins:
In 1998, then-President Bill Clinton told the American people that he hadn’t lied to a grand jury about his relationship with Monica Lewinsky because, on the day he said “there’s nothing going on between us,” nothing was going on.

While that line has been a joke among the American people ever since, it looks like PolitiFact took the lesson to heart in two recent fact-checks that include some Olympic-level rhetorical gymnastics.
Siggins goes on from there, dealing with recent fact checks of senators Ted Cruz (R-Texas) and Jeanne Shaheen (D-N.H.).

Click the link and read.

Also consider reviewing our highlights of a few other stories by Siggins.

Tuesday, September 30, 2014

Hot Air: 'Whiplash: Politifact absolves Democrat who repeated…Politifact’s lie of the year'

Rest assured, readers: There's no lack of PolitiFact blunders to write about, merely a lack of time to get to them all. For that reason, we're grateful that we're not the only ones doing the work of exposing the worst fact checker in the biz for what it iz.

Take it away, Guy Benson:
Politifact, the heavily left-leaning political fact-checking outfit, has truly outdone itself. The organization crowned President Obama as the 2013 recipient of its annual “lie of the year” designation for his tireless efforts to mislead Americans about being able to keep their existing healthcare plans under Obamacare. While richly deserved, the decision came as a bit of a surprise because Politifact had rated that exact claim as “half true” in 2012, and straight-up “true” in 2008 (apparently promises about non-existent bills can be deemed accurate).
And what did PolitiFact do to outdo itself? Republican senatorial candidate Ed Gillespie ran an ad attacking Democratic rival Mark Warner over his pledge not to vote for a bill that would take away people's current health insurance plans.

PolitiFact Virginia, incredibly, ruled the ad "False."

Read Benson's piece at Hot Air in full for all the gory details. The article appropriately strikes down PolitiFact Virginia's thin justification for its ruling.

Also see our past assessment of PolitiFact's preposterous maneuvering on its editorial "Lie of the Year" proclamation from 2013.

Tuesday, September 16, 2014

PunditFact and the disingenuous disclaimer

Since PunditFact brings up its network scorecards yet again, it's worth repeating our observation that PunditFact speaks out of both sides of its mouth on the issue of scorecards/report cards.

This corner:
The network scorecards were designed to provide you a way to measure the relative truth of statements made on a particular network.
The other corner:
We avoid comparisons between the networks.
PunditFact breaks down its data to enable its readers to "measure the relative truth of statements made on a particular network."

At the same time, PunditFact tells its readers that it's not comparing the networks.

We're still trying to figure out a way to reconcile these claims without contradiction, or in a way that excuses PolitiFact from the charge of deliberately misleading its readers.

If the scorecards provide readers with a legitimate tool for judging the relative truth of statements made on a particular network, then why would PunditFact avoid comparisons between the networks? And how can PunditFact even claim to avoid making comparisons between the networks when its scorecards avowedly serve the purpose of leading readers to make those comparisons?

If this paradox doesn't indicate simple ignorance on PunditFact's part, it indicates a disturbingly disingenuous approach to its subject matter.