Tuesday, April 17, 2018

What??? No Pulitzer for PolitiFact in 2018?

We're not surprised PolitiFact failed to win a Pulitzer Prize in 2018.

The Pulitzer it won back in 2009 was a bit of a fluke to begin with: it stemmed from submissions in the public service category that the Pulitzer board arbitrarily used to justify a prize for "National Reporting."

Since the win in 2009, PolitiFact has won exactly the same number of Pulitzer Prizes that PolitiFact Bias has won: Zero.

So, no, we're not surprised. But we think Hitler might be, judging from his reaction when PolitiFact failed to win a Pulitzer back in 2014.

Friday, April 13, 2018

PolitiFact continues to botch the gender pay gap


We can depend on PolitiFact to perform lousy fact-checking on the gender wage gap.

PolitiFact veteran Louis Jacobson proved PolitiFact's consistent ineptitude with an April 13, 2018 fact check of Sen. Tina Smith (D-Minn.), Sen. Al Franken's replacement.

Sen. Smith claimed that women earn only 80 cents on the dollar for doing the same jobs as men. That's false, and PolitiFact rated it "Mostly False."


That 80-cents-on-the-dollar wage gap is calculated based on full-time work irrespective of the job type and irrespective of hours worked once above the full-time threshold. The figure represents the median, not the average.

But isn't "Mostly False" a Fair Rating for Smith?

Good question! PolitiFact noted that the figure Smith was using did not take the type of job specifically into account. And PolitiFact pointed out that Smith made a common mistake. People often fail to mention that the raw wage gap figure doesn't take the type of job into account.

PolitiFact's Jacobson doesn't precisely spell out why PolitiFact finds a germ of truth in Smith's statement. Presumably PolitiFact's reasoning matches that of its earlier ratings where it noted that the wage gap statistic is accurate except for the part about it applying to equal work. So it's true except for the part that makes it false, therefore "Mostly False" instead of "False."

Looking at it objectively, however, it's just plain false that women earn 80 cents on the dollar for doing the same work. Researchers talk about an "unexplained gap" that remains after various factors are taken into account, and the ceiling for gender discrimination looks like it falls somewhere around 5 percent to 7 percent.

Charitably using the 7 percent figure as the ceiling for gender-based wage discrimination, Smith exaggerated the gap by 186 percent. It's likely the exaggeration was far greater than that.

For comparison, when Bernie Sanders said 40 percent of U.S. gun sales occur without background checks, PolitiFact gave him a "False" rating for exaggerating the right figure by 90 percent.
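For readers who want the arithmetic (our reconstruction from the figures above): measuring Smith's claimed 20-cent gap against the charitable 7-cent discrimination ceiling gives (20 - 7) / 7 ≈ 1.86, an exaggeration of roughly 186 percent. Applying the same method to Sanders, a 40 percent claim rated as a 90 percent exaggeration implies a correct figure of about 21 percent, since (40 - 21) / 21 ≈ 0.90.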

The Ongoing Democratic Deception PolitiFact Overlooks

If a Democrat describes the 80 percent raw pay gap accurately, why not give it a "True" rating? Or at least "Mostly True"?

Democrats tend to trot out the raw gender pay gap statistic while proposing legislation that supposedly addresses gender discrimination. By repeatedly associating the raw wage gap with the issue of wage discrimination, Democrats send the implicit message that the raw wage gap describes gender discrimination. The tactic exploits anchoring bias to mislead the audience about the size of the pay gap that stems from gender discrimination.

Democrats habitually use "Equal Pay Day," based on the raw wage gap, to argue for equal pay for equal work. But the raw wage gap doesn't take the type of job into account.

Trust PolitiFact not to notice the deception.

Fact checkers ought to press Democrats to clarify their position. Are Democrats in favor of equal pay regardless of the job or hours worked? Or do Democrats believe the demands for equal pay apply only to matters of gender discrimination?

If the latter, Democrats' continued use of the raw wage gap to peg the date of their "Equal Pay Day" counts as a blatant deception.

If the former, voters deserve to know what Democrats stand for.

Afters


It amused us that Jacobson directly referenced an earlier botched PolitiFact Florida treatment of the gender pay gap.

PolitiFact Florida couldn't figure out that claiming the gap occurs "simply because she isn't a man" is equivalent to claiming the raw gap is for men and women doing the same work. Think about it. If the gap occurs "simply because she isn't a man" then the reason for the disparity cannot be because she is doing different work. Doing different work would be a factor in addition to her not being a man.

PolitiFact Florida hilariously rated that claim "Mostly True." We wrote about it on March 14, 2017.

Fact checkers. D'oh.

Thursday, April 12, 2018

Not a Lot of Reader Confusion X: "I admit that there are flaws in this ..."

So hold my beer, Nelly.

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

Quite a few folks apparently have no clue at all that PolitiFact's charts and graphs lack anything like a scientific basis. Others know that something isn't right about the charts and graphs but against all reason find some value in them anyway.

PolitiFact itself would fall in the latter camp, based on the way it uses its charts and graphs.

So would Luciano Gonzalez, writing at Patheos. Gonzalez listened to Speaker of the House Paul Ryan's speech announcing his impending retirement and started wondering about Ryan's record of honesty.

PolitiFact's charts and graphs don't tell people about the honesty of politicians because of many flaky layers of selection bias, but people can't seem to help themselves (bold emphasis added):
I decided after hearing his speech at his press conference to independently check if this House Speaker has made more honest claims than his predecessors. To did this I went to Politifact and read the records of Nancy Pelosi (House Speaker from January 4th 2007-January 3rd 2011), John Boehner (House Speaker from January 5th 2011-October 29th, 2015), and of course of the current House Speaker Paul Ryan (October 29th 2015 until January 2019). I admit that there are flaws in this, such as the fact that not every political claim a politician makes is examined (or even capable of being examined) by Politifact and of course the inherent problems in giving political claims “true”, “mostly”, “half-true”, “mostly false”, “false”, & “pants on fire” ratings but it’s better than not examining political claims and a candidate’s level of honesty or awareness of reality at all.
If we can't have science, Gonzalez appears to say, pseudoscience is better than nothing at all.

Gonzalez proceeds to crunch the meaningless numbers, which "support" the premise of his column that Ryan isn't really so honest.

That accounts for the great bulk of Gonzalez's column.
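To make the selection-bias point concrete, here is a minimal simulation sketch (ours, and purely hypothetical; it models no actual PolitiFact data). Two simulated politicians tell the truth at exactly the same rate, but a checker that preferentially samples one politician's dubious-sounding claims produces wildly different "report cards":

import random

random.seed(42)

# Two hypothetical politicians, each telling the truth 70% of the time.
TRUE_RATE = 0.70
statements_a = [random.random() < TRUE_RATE for _ in range(1000)]
statements_b = [random.random() < TRUE_RATE for _ in range(1000)]

def rating_tally(statements, p_check_if_false, p_check_if_true):
    # Select claims for checking with different probabilities
    # depending on whether the claim is actually false or true.
    checked = [s for s in statements
               if random.random() < (p_check_if_true if s else p_check_if_false)]
    false_share = 1 - sum(checked) / len(checked)
    return len(checked), false_share

# The checker scrutinizes politician A's false claims far more often,
# while sampling politician B's claims uniformly.
n_a, false_a = rating_tally(statements_a, p_check_if_false=0.9, p_check_if_true=0.1)
n_b, false_b = rating_tally(statements_b, p_check_if_false=0.3, p_check_if_true=0.3)

print(f"A: {n_a} claims checked, {false_a:.0%} rated false")
print(f"B: {n_b} claims checked, {false_b:.0%} rated false")

Both simulated politicians lie 30 percent of the time, yet politician A's tally shows roughly three-quarters of checked claims rated false while politician B's shows about 30 percent. The tally reflects the checker's selection rule, not the speakers' honesty, and no chart built from it can escape that problem.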

Let's be clear: PolitiFact encourages this type of irresponsible behavior by publishing its nonsense graphs without the disclaimers that spell out for people that the graphs cannot be reasonably used to gauge people's honesty.

PolitiFact encourages exactly the type of behavior that fact checkers ought to discourage.

Monday, April 2, 2018

PolitiFact Bias Fights Fact Checker Falsehoods

A December 2017 project report by former PolitiFact intern Allison Colburn of the University of Missouri-Columbia School of Journalism made a number of misleading statements about PolitiFact Bias. This is our second post addressing that report. Find the first one here.

Colburn:
A blog, PolitiFactBias.com, devotes itself to finding specific instances of PolitiFact being unfair to conservatives. The blog does not provide analysis or opinion about fact-checks that give Republicans positive ratings. Rather, it mostly focuses on instances of PolitiFact being too hard on conservatives.
We find all three sentences untrue.

Does PolitiFact Bias devote itself to finding specific instances of PolitiFact being unfair to conservatives?

The PolitiFact Bias banner declares the site's purpose as "Exposing bias, mistakes and flimflammery at the PolitiFact fact check website." Moreover, the claim is specious on its face. After the page break we posted the title of each PolitiFact Bias blog entry from 2017, the year when Colburn published her report. The titles alone provide strong evidence contradicting Colburn's claim.

PolitiFact Bias exists to show the strongest evidence of the left-leaning bias that a plurality of Americans detect in the mainstream media, specific to PolitiFact. As such, we look for any manifestations of bias, including patterns in the use of words, patterns in the application of subjective ratings, biased framing and inconsistent application of principles.


Does PolitiFact Bias not provide analysis or opinion about fact-checks that give Republicans positive ratings?

PolitiFact Bias focuses its posts on issues that accord with its purpose of exposing PolitiFact's bias, mistakes and flimflammery. Our focus by its nature is technically orthogonal to PolitiFact giving Republicans positive ratings. And, in fact, PolitiFact Bias does analyze cases where Republicans received high ratings. PolitiFact Bias even highlights some criticisms of PolitiFact from the left.

We simply do not find many strong criticisms of PolitiFact from the left. There are plenty of criticisms of PolitiFact from the right that we likewise find weak.

Does PolitiFact Bias "mostly focus" on PolitiFact's harsh treatment of conservatives?

PolitiFact Bias recognizes the subjectivity of PolitiFact's "Truth-O-Meter" ratings. PolitiFact's rating system offers no dependable means of objectively grading the truth value of political statements. For that reason, this site tends to avoid specifically faulting PolitiFact's assigned ratings. Instead, PolitiFact Bias places its emphasis on cases showing PolitiFact's inconsistency in applying its ratings. When two similar claims draw different ratings, with the Democrat's rated higher than the Republican's, the inconsistency itself suggests PolitiFact may have gone easy on the Democrat.

That said, the list of post titles again shows that PolitiFact Bias produces a great deal of content that is not focused on showing PolitiFact should give conservatives more positive ratings. As we show below, Holan's statements to Colburn jibe with Colburn's false characterization of the focus at PolitiFact Bias.

Why the misleading claims about PolitiFact Bias?

As far as we can tell, the entire evidence Colburn used in her report's judgment of PolitiFact Bias came from her interview with PolitiFact Editor Angie Drobnic Holan:
I'm just kind of curious, there's the site, PolitiFactBias.com. What are what are your thoughts on that site?

That seems to be one guy who's been around for a long time, and his complaints just seem to be that we don't have good, that we don't give enough good ratings, positive ratings to conservatives. And then he just kind of looks for whatever evidence he can find to support that point.

Do you guys ever read his stuff? Does it ever worry you?

He's been making the same complaint for so long that it has tended to become background noise, to be honest. I find him just very singularly focused in his complaints, and he very seldom brings up anything that I learn from. But he's very, you know, I give him credit for sticking in there. I mean he used to give us, like when he first started he would give us grades for our reporting and our editing. So it would be like grades for this report: Reporter Angie Holan, editor Bill Adair. And like we could never do better than like a D-minus. So it's just like whatever. What I find is it's hard for me to take critics seriously when they never say we do anything right. Sometimes we can do things right, and you'll never see it on that site.
Note that in Holan's response to Colburn's first question about PolitiFact Bias she suggests the site focuses on PolitiFact not giving enough positive ratings to conservatives.

Are Colburn and Holan lying?

PolitiFact Bias co-editor Jeff D. has used the PolitiFact Bias Twitter account to charge Colburn and Holan with lying.

The charge isn't unreasonable.

Colburn very likely read the PolitiFact Bias site to some extent before asking Holan about it. Even a cursory read ought to have informed a reasonable person that Holan's description of the site was slanted at best. Yet Holan's description apparently underpinned Colburn's description of PolitiFact Bias.

Likewise, Holan's familiarity with the PolitiFact Bias site ought to have informed her that her description of it was wrong and misleading.

When a person knowingly makes a false or misleading statement, it counts as a lie. Both Colburn and Holan very likely had reason to know their statements were false or misleading.

We're pondering a second post pressing the issue still further in Holan's case.

Wednesday, March 28, 2018

How PolitiFact Fights Its Reputation for Anti-conservative Bias

This week we ran across a new paper with an intriguing title: Everyone Hates the Referee: How Fact-Checkers Mitigate a Public Perception of Bias.

The paper, by Allison Colburn, pretty much concludes that fact checkers do not know how to fight their reputation for bias. Aside from that, it lets the fact checkers describe what they do to try to seem fair and impartial.

The paper mentions PolitiFact Bias, and we'll post more about that later. We place our focus for this post on Colburn's October 2017 interview of PolitiFact Editor Angie Drobnic Holan. Colburn asks Holan directly about PolitiFact Bias (Colburn's words in bold, following the format from her paper):
I'm just kind of curious, there's the site, PolitiFactBias.com. What are what are your thoughts on that site?

That seems to be one guy who's been around for a long time, and his complaints just seem to be that we don't have good, that we don't give enough good ratings, positive ratings to conservatives. And then he just kind of looks for whatever evidence he can find to support that point.

Do you guys ever read his stuff? Does it ever worry you?

He's been making the same complaint for so long that it has tended to become background noise, to be honest. I find him just very singularly focused in his complaints, and he very seldom brings up anything that I learn from.

But he's very, you know, I give him credit for sticking in there. I mean he used to give us, like when he first started he would give us grades for our reporting and our editing. So it would be like grades for this report: Reporter Angie Holan, editor Bill Adair. And like we could never do better than like a D-minus. So it's just like whatever. What I find is it's hard for me to take critics seriously when they never say we do anything right. Sometimes we can do things right, and you'll never see it on that site.
We could probably mine material from these answers for weeks. One visit to our About/FAQ page would prove enough to rebut the bulk of Holan's misstatements. Aside from the FAQ, Jeff's tweet of Rodney Dangerfield's mug is the perfect response to Holan's suggestion that PolitiFact Bias is "one guy."



The Holan interview does deliver some on the promise of Colburn's paper. It shows how Holan tries to marginalize PolitiFact's critics.

I summed up one prong of PolitiFact's strategy in a post from Jan. 30, 2018:
Ever notice how PolitiFact likes to paint its critics as folks who carp about whether the (subjective) Truth-O-Meter rating was correct?
In that post, I reported on how Holan bemoaned the fact that PolitiFact critics do not offer factual criticisms of its fact checks, preferring instead to quibble over its subjective ratings.
If they're not dealing with the evidence, my response is like, ‘Well you can say that we're biased all you want, but tell me where the fact-check is wrong. Tell me what evidence we got wrong. Tell me where our logic went wrong. Because I think that's a useful conversation to have about the actual report itself.’
Holan says my (our) criticism amounts to a call for more positive ratings for conservatives. So we're just carping about the ratings, right? Holan's summation does a tremendous disservice to our painstaking and abundant research pointing out PolitiFact's errors (for example).

In the Colburn interview Holan also says she has trouble taking criticism seriously when the critic doesn't write articles complimenting what PolitiFact does correctly.

We suppose it must suck to find oneself the victim of selection bias. We suppose Holan must have a tough time taking FactCheck.org seriously, given its policy of not publishing fact checks when a politician spoke the truth without misleading.

The hypocrisy from these people is just too much.

Exit question: Did Holan just not know what she was talking about, or was she simply lying?



Afters

For what it's worth, we sometimes praise PolitiFact for doing something right.



Correction March 31, 2018: We erred by neglecting to include the URL linking to Colburn's paper. We apologize to Allison Colburn and our readers for the oversight.

Tuesday, March 20, 2018

PolitiFact's apples and oranges make Zinke a liar?

Fact checkers often mete out harsh ratings to politicians who employ apples-to-oranges comparisons to make their point.

Mainstream media fact checkers, however, often exempt themselves from the principles they use to find fault with others.

Consider PolitiFact's March 19, 2018 fact check of Interior Secretary Ryan Zinke.


Zinke said a Trump administration proposal "is the largest investment in our public lands infrastructure in our nation's history."

PolitiFact found that spending on the Civilian Conservation Corps program under President Franklin Roosevelt would far exceed the proposed Trump administration spending once adjusted for inflation:
The CCC’s director wrote in 1939 that it had cost $2 billion; that was two-thirds of the way through the program’s life. And according to a Park Service study, the annual cost per CCC enrollee was $1,004 per year. If you assume that the average tenure of the CCC’s 3.5 million workers was about a year, that would produce a cumulative cost around $3 billion.

Such calculations "sound right — millions of young men, camps to house them, food and uniforms, and they were paid," said Steven Stoll, an environmental historian at Fordham University.

Once you factor in inflation, $3 billion spent in the 1930s would be the equivalent of about $53 billion today — about three times bigger than even the fully funded Trump proposal.
When a spokesperson for the Trump administration pointed out that the CCC included lands controlled at the state and local level, PolitiFact brushed the objection aside (bold emphasis added):
Interior Department spokeswoman Heather Swift pointed out that the CCC "also incorporated state and local land."

It’s true that the CCC created more than 700 state parks and upgraded many others, in addition to its efforts on federally owned land. Ultimately, though, the point is moot: Zinke didn’t say the proposal is the largest investment in federal lands infrastructure. He said "public lands infrastructure," and state and local parks count as "public lands."
The key to the "False" rating PolitiFact gave Zinke comes entirely from its insistence that Zinke's statement covers all public lands.

But the context, which PolitiFact reported but ignored, clearly shows Zinke was talking specifically about spending on federal lands (bold emphasis added):
"The president is a builder and the son of a plumber, as I am," Zinke told the Senate Energy & Natural Resources Committee. "I look forward to working with the president on restoring America's greatness through a historic investment of our public lands infrastructure. This is the largest investment in our public lands infrastructure in our nation's history. Let me repeat that, this is the largest investment in our public lands infrastructure in the history of this country."

Zinke specified that he was referring to the president's budget proposal, which would create a fund to provide "up to $18 billion over 10 years for maintenance and improvements in our national parks, our national wildlife refuges, and Bureau of Indian Education funds."
We note a pair of irreconcilable problems with PolitiFact's reasoning.

If Zinke had claimed the CCC spending was greater than the spending proposed by the Trump administration, he would be guilty of using an apples-to-oranges comparison. Why? Because the scope of the two spending programs varies at a fundamental level.

Any would-be comparison between spending on federal lands only and spending on federal, state and local lands qualifies as an apples-to-oranges comparison.

If Zinke's statement were interpreted in keeping with his comments on the scope of the spending (federal lands only), then PolitiFact simply elected to avoid doing the appropriate fact check: measuring the CCC spending on federal lands against the proposed Trump administration spending on federal lands. Apples to apples.

PolitiFact bases its fact check on the apples-to-oranges comparison: CCC spending on federal, state and local parks against proposed Trump administration spending on federal lands only.

Objective?

Nonpartisan?

Not hardly.



Afters: Multiple flaky layers

In its fact check PolitiFact stresses the enormous scale of CCC spending under Roosevelt by expressing it as a percentage of the federal budget, and compares that to the tiny percentage of the total budget taken up by Trump's proposed spending.

Hello?

Has the federal budget increased over time (try as a percentage of GDP)? Medicare? Medicaid? Hello?

PolitiFact loves it some apples and oranges.

Not a Lot of Reader Confusion IX

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. But PolitiFact Editor Angie Drobnic Holan says there isn't much reader confusion.

In a March 19, 2018 column published by The Hill, liberal radio talk show host Bill Press declared that we can't believe what President Donald Trump says.

What evidence did Press have to support his claim?

PolitiFact.

Press (bold emphasis added):
Trump tells so many lies, so often, that not even Politifact, the Pulitzer Prize-winning, nonpartisan fact-checking website can keep score. But they do the most thorough job of anybody. Since he launched his 2016 campaign, Politifact has evaluated more than 500 assertions made by candidate and president Donald Trump, and they’ve rated an astounding 69 percent of them as “false,” “mostly false,” or, the worst category, “liar, liar, pants on fire.” Think about that. On any given day, you know that seven out of ten things Donald Trump says are not true!
We at PolitiFact Bias have endlessly pointed out that even if one assumes PolitiFact did its fact checks without mistakes, selection bias and the subjective application of its rating system make claims like the one in bold ridiculous.
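To spell out the leap using Press' own figures: 69 percent of roughly 500 checked claims is about 345 rated statements. Trump made many thousands of public statements over that span, and PolitiFact chose which ones to check. Jumping from "69 percent of the claims PolitiFact selected" to "seven out of ten things he says" assumes the selected claims are a random sample of everything Trump says, an assumption not even PolitiFact makes.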

Claims mirroring the one above happen routinely, yet PolitiFact denies the evidence ("not a lot of reader confusion") and continues publishing its misleading charts and graphs with no explanatory disclaimer.

We can think of two primary potential explanations.
  • PolitiFact truly doesn't see the abundance of evidence even though we jump up and down calling attention to it
  • PolitiFact is deliberately misleading people
 We invite readers to provide alternative possibilities in the comments section.


Sunday, March 18, 2018

PolitiFact: It's 'Half True' and 'Mostly True' that President Obama doubled the debt

Twitterer Ely Brit (@RealElyBritt) tweeted out this comparison of past PolitiFact ratings on March 17, 2018:


For the Trump fact check, PolitiFact came down hard on The Donald for placing blame too squarely on President Obama when Congress controls the federal government's purse strings.

On the other hand, it's hard to see how Sen. Paul eases up on placing the blame, unless he gets a bipartisan pass for blaming President Bush for the earlier debt increase.

Apart from that, neither Trump nor Paul received a "True" rating because Congress shares the blame for government spending.

Right, PolitiFact?


O-kay, then.



Correction 3/18/2018: Corrected transposed misspellings of Ely Brit's name. Our apologies to Brit. 
Correction 3/18/2018: Commenter YuriG pointed out that I (Bryan) used "deficit" in the headline, conflicting with the content of the post. Changed "deficit" to "debt" in the headline. Our thanks to YuriG for taking the time to point out the problem.

Monday, March 12, 2018

PolitiFact's Jolly problem

PolitiFact hired David Jolly at some point in February, with the hire date depending on whether his job title was "reader representative" or "Republican guest columnist."

PolitiFact said the hire was intended to build trust in PolitiFact across party lines. We've viewed the experiment with justified skepticism. And Jolly's work so far as the "Republican guest columnist" only solidifies our skepticism.

Jolly's first guest column was published March 2, 2018. We noted that Jolly used that column to address a subject tied to his own advocacy of "common sense gun control." We doubted many PolitiSkeptical conservatives would hear their voices in that column. We judged that Jolly was using his position at PolitiFact to essentially write an op-ed about one of his pet political issues.

Jolly's March 7, 2018 column followed that pattern.

Instead of critiquing PolitiFact, Jolly used his column to attack the target of a PolitiFact fact check. The target of that fact check? President Donald Trump.

Jolly (bold emphasis added):
As the nation continues to debate which gun policies might provide for the safety of our schools and communities, PolitiFact demonstrated in a single column the critical importance fact-checkers serve in both informing the American public as well as holding politicians and advocates on both sides of the debate accountable for their assertions.
With Jolly playing his gun control theme song in the background yet again, we ask: How does Jolly's approach to his columns build trust among conservatives skeptical of PolitiFact? Does he think that just having a Republican say something like "PolitiFact is right" will budge the needle of partisan mistrust?

We'll go out on a limb and predict that approach has a snowball's chance in hell of working.

Conservative mistrust in PolitiFact stems primarily from two factors:
  • Conservatives see PolitiFact turning a blind eye to conservative arguments
  • PolitiFact commonly makes errors of fact and logic damaging to conservatives
Jolly's column serves as an example of both problems, despite his willingness to identify as a Republican.


Jolly reinforces PolitiFact's left-leaning bias

Jolly lauded PolitiFact for rating "False" the claim that an armed civilian might have stopped the "Pulse" nightclub shooting in Orlando (bold emphasis added):
"You take Pulse nightclub," Trump said. "If you had one person in that room that could carry a gun and knew how to use it, it wouldn’t have happened, or certainly to the extent that it did."
The problem for both the president and his theory is that an armed officer and 15-year veteran of the Orlando Police Department, Adam Gruler, was actually working security at Pulse that fateful night, and indeed engaged the shooter directly with gunfire. Forty-nine people still lost their lives.
PolitiFact rightly rated as False the president's statement that an armed security guard could have saved those 49 victims.
A conservative reading the PolitiFact fact check would look for evidence of fair treatment of the mainstream conservative point of view. That view is absent, and when Jolly fails to notice its absence and celebrates PolitiFact's fact check anyway, the conservative cannot take Jolly's columns any more seriously than the fact check itself.

The problem with the fact check

What's the problem with PolitiFact's fact check?

The fact check pretends that the armed guard working security in the Pulse parking lot counts as "one person in that room that could carry a gun and knew how to use it."

Is the Pulse parking lot inside the Pulse nightclub?

Shouldn't that make a difference for non-partisan fact checkers and GOP columnists alike?

Note PolitiFact's description of armed guard Adam Gruler's involvement in the Pulse incident (bold emphasis added):
The Justice Department in 2017 released a nearly 200-page report detailing the Orlando police response to the shooting. Here’s the report’s account of Gruler’s initial confrontation with Mateen:
"Outside, in the Pulse parking lot, (Gruler), who was working extra duty at the club — to provide outside security and to provide assistance to security personnel inside the club if needed — heard the shots that were being fired; at 2:02:17 a.m., he broadcast over the radio, 'Shots fired, shots fired, shots fired,' and requested additional officers to respond.

"The detective told the assessment team that he immediately recognized that his Sig Sauer P226 9mm handgun was no match for the .223 caliber rifle being fired inside the club and moved to a position that afforded him more cover in the parking lot. Two patrons attempted to flee through an emergency exit on the south side of the club. When the detective saw the suspect shoot them, he fired at the suspect."
According to an Orlando Police Department report, additional officers arrived on the scene about a minute after Gruler’s call for backup was broadcast. A second backup officer arrived about a minute after that.
 PolitiFact's conclusion, sadly, serves as an adequate summary of its argument:
Talking about the Pulse nightclub shooting, Trump said, "If you had one person in that room that could carry a gun and knew how to use it, it wouldn’t have happened, or certainly to the extent that it did."

An armed, off-duty police officer in uniform was at the club during the shooting, and exchanged gunfire with the shooter, who managed to kill 49 people.

We rate this False.
Notice that PolitiFact says Gruler was "at the club," not "outside the club."

The "False" rating gets its premise from the fiction that Gruler was in the same room with Mateen while the latter was murdering club patrons. Jolly, PolitiFact's voice of the GOP, signs on with that falsehood.

In this case the deception comes from PolitiFact and Jolly. Not from Trump.

PolitiFact demonstrated nothing false in Trump's statement, yet pinned a "False" rating on his statement.

Jolly cheered PolitiFact's work, missing the key discrepancy in its fact check.

That's all kinds of wrong.

Saturday, March 3, 2018

The first critique from PolitiFact's Republican "guest columnist" David Jolly (Updated)

In early February, PolitiFact tabbed two former members of Congress as its Democratic and Republican "guest columnists." PolitiFact said the hires were intended to raise its credibility across partisan lines.

We looked at the first column from the Democrat back on Feb. 12 and judged it pointless crap.

Republican David Jolly published his first critique on March 2, 2018. And though we were aware of charges Jolly counts as an "MSNBC Republican," we were still a bit surprised at first blush when Jolly used his column to apparently suggest PolitiFact had rendered a too-harsh judgment on a Democrat's claim about gun violence.

It's worth noting that Jolly has been busy lately promoting "common sense" gun control legislation, so maybe it was just too tempting to resist tying that political cause to his role at PolitiFact.

PolitiFact gave a gun violence statistic from Rep. Ted Deutch (D-Fla.) a rating of "Mostly False." Jolly appeared to think Deutch deserved a better fate (bold emphasis added):
Rep. Deutch is said to have relied on statistics provided by The Century Foundation, a left-leaning think tank, which plainly assessed there had been "an increase of over 200%" in mass shootings since the expiration of the assault weapons ban. PolitiFact, however, rated Deutch’s statement Mostly False, largely on the basis that the commentary upon which the congressman relied was substantively challenged by other experts in the field. Presumably, had Deutch said in the town hall, "According to The Century Foundation, mass shootings went up 200% in the decade after the assault weapons ban expired," PolitiFact would have found the Congressman’s statement True on its face, while ruling the findings of The Century Foundation Mostly False.

Moreover, had PolitiFact evaluated Deutch’s statement simply on the numbers, there is ample evidence in the PolitiFact article to support a ruling of Mostly True.
Jolly's critique hits at PolitiFact's standard operating procedure. Yes, of course PolitiFact's ratings oversimplify complexities. Will writing a column pointing that out for the umpteenth time change anything?

Is that the extent of Jolly's point? That's what his conclusion suggests:
In this case, a congressman’s statement seems to have been ruled Mostly False on two primary factors — his citing a credible think tank’s commentary on gun violence statistics, and a drawn inference by fact-checkers that may or may not have been intended in the congressman’s statement. Neither makes PolitiFact’s ruling right or wrong, but it reflects the enormous challenge faced by politicians, fact-checkers and ultimately voters in today’s political environment.
Newsflash for David Jolly: The fact checkers do not see the subjectivity of their ratings as a problem. The problem, in their eyes, is that readers do not place enough trust in fact checkers. And they recruited their "guest columnists" to help build bipartisan trust in their subjective judgments.

Jolly's critique is about as deep as Barbie's "Math is hard" critique of high school and as such works better as a soft-sell version of his gun control op-ed that we linked above. But we like it for the fact that its premise arguably strikes against the fundamental subjectivity of the PolitiFact approach.

Expect a short run for this PolitiFact experiment if Jolly's future work similarly attacks the PolitiFact premise.


Afters

We learned about Jolly's column through PolitiFact's Facebook page, where it featured a link to his column.

PolitiFact's main page at PolitiFact.com does a pretty awesome job of burying these columns. The reader would have to scroll near the bottom of the page and see the "Inside the Meters" section in the right sidebar.

It's literally the last element on the right sidebar, and there's no way to directly reach the "Inside the Meters" posts using the navigation buttons at the top of the PolitiFact website.



Afters 2

Unsurprisingly, PolitiFact uses an "even the Republican thinks we were too tough on the Democrat" frame in its Facebook announcement of his column:


Note:
See why he said "there is ample evidence ... to support a ruling of "Mostly True."
PolitiFact left out enough context of Jolly's column to earn its own "Half True" (bold emphasis added):
Moreover, had PolitiFact evaluated Deutch’s statement simply on the numbers, there is ample evidence in the PolitiFact article to support a ruling of Mostly True.
Sometimes PolitiFact judges simply on the numbers, sometimes it doesn't. No objective, nonpartisan principle appears to guide the decision.

We suggest PolitiFact's manipulation of Jolly's statement misleads its audience about Jolly's point. Jolly's overall point, albeit delivered weakly, was the difficulty of making the ratings (thanks to subjectivity). PolitiFact spins that into the Republican saying PolitiFact was too hard on the Democrat.

PolitiFact's spin helps stuff Jolly's column into a frame that assists PolitiFact's purpose of fluffing up its own credibility: They're so nonpartisan! They gave a Democrat a "Mostly False" when a Republican said he should have had a "Mostly True!!"

Afters 3

Exit question for David Jolly:

PolitiFact sees your role as a guest columnist as a way for it to help build its own credibility. It has said as much in public statements. Describe the difference between the way you see your role as a guest columnist and the way PolitiFact sees it.



Update: Jeff adds

Cherry-picking the source of a claim

One might assume PolitiFact's newest critic would know something about PolitiFact, until you read PolitiFact's newest critic's newest PolitiFact critique.

Jolly's gripe, and his solution to it, betray a gross ignorance of the way his employer works.
"PolitiFact would have found the Congressman’s statement True on its face, while ruling the findings of The Century Foundation Mostly False."
Jolly finds fault with PolitiFact giving the "Mostly False" rating to Rep. Deutch, when Deutch was simply repeating a claim made by a (Jolly's words) "credible think tank." He argues the rating should have gone to the think tank.

That's nice, but it's also a problem spelled out on these pages for nearly a decade. One of the ways PolitiFact shows its liberal lean comes from its choice of whom to credit as the source of a claim. We've called that "attribution bias."

In 2011, when The New York Times, ABC, and NPR all repeated a bogus number published in an official Inspector General report regarding $16 muffins, PolitiFact added a "False" rating to conservative Bill O'Reilly's "report card." 

In 2012, when Time, CNN, and the New York Post published a faulty stat from a dubious research firm, it was conservative media group American Crossroads that earned the demerit on its PolitiFact scorecard.

More recently in 2017 PolitiFact ignored a widely publicized liberal talking point espoused by numerous talking heads in the national media for months, and then quickly jumped in to deem it "False" within a day of Donald Trump repeating it.

Though in Deutch's case PolitiFact gave the rating to a Democrat instead of the think tank responsible for the stat, it still exemplifies PolitiFact's subjective, cherry-picking process for assigning its ratings. That Jolly seems to think his observation is novel only highlights his own naivety about PolitiFact.

The critique overall?

Jolly's overall critique comes off about as compelling as warm beer. It's almost as if he quickly cobbled the column together immediately after being criticized for not producing a critique during the first month of PolitiFact's new "readers advocate" feature.

We're not sure what value Jolly adds to the operation here, unless being duped into being a real life version of PolitiFact's lame "we get it from both sides!" defense is his goal. He didn't provide any insight or commentary that hasn't been offered in a better form on this blog for the last several years. We think having an in-house, right-leaning critic at PolitiFact is a good idea and think it could improve its work immensely. So far it appears Jolly is not the vehicle for the improvement.

In any event, congratulations to Jolly for finding a way to get his first critique of PolitiFact from the right to dovetail so nicely with his conservative efforts to support ... gun control.

Thursday, March 1, 2018

Top five reasons why David Jolly did not critique PolitiFact in February (Updated)

Update March 2, 2018

We've been curious, especially in light of the gun debate's current popularity, what has happened to PolitiFact's experiment involving "reader advocates." Particularly we noted the absence of Republican David Jolly. We published a post yesterday [March 1--bww] addressing this curiosity and included a healthy dose of snark.

Mr. Jolly has since responded via Twitter that his silence was the result of personal matters, including a death in the family and threats he's been receiving.

Our sarcasm was intended to mock the same problems we typically do with PolitiFact. While there's nothing offensive in anything we wrote, we've decided to remove the post in deference to Mr. Jolly's personal circumstances.

We'll continue to use snark to highlight PolitiFact's dubious operation, but in this instance we don't see the value in unnecessary jokes in light of Mr. Jolly's response.
(above by Jeff D.)
Update continued:

Instead of deleting the post or preserving it offsite via the Internet Archive, we're putting it below a page break and changing the text to white (on a white background) to make it mostly invisible to all except the insatiably curious.

Some blue highlights from embedded URLs will remain visible.

Sunday, February 18, 2018

PolitiFact partially unveils spectacularly transparent description of its fact-checking process

"The Week in Fact-Checking," an update on the latest fact-checking news posted at the Poynter website, alerted us to the fact that PolitiFact has updated its statement of principles:
PolitiFact made their methodology more transparent, in keeping with other fact-checkers around the world. (And ICYMI, PolitiFact has moved its headquarters to Poynter, earning a not-for-profit designation.)
We were surprised we had missed PolitiFact's welcome improvement to its methodological transparency. So we visited PolitiFact.com to check it out.

So ... where is it?

PolitiFact created multiple pages of transparent new content and apparently neglected to equip its website with internal links leading readers to the new content.

Clicking "About Us>>Our Process" on the main menu takes the reader to PolitiFact's 2013 statement of principles.

Clicking "Our Process" on the footer takes the reader to PolitiFact's 2013 statement of principles

There's no apparent way to use PolitiFact's main page to find the new even-more-transparent(!) statement of principles.

But people can see PolitiFact's latest extreme transparency through the Poynter.org website. Or maybe via links posted to Twitter. We haven't noticed any yet, but it's possible.

So there's that.

The new material was published on Feb. 12, 2018. As of Feb. 18, 2018, PolitiFact.com still funneled readers to its 2013 statement of principles.

We see that as illustrative of the PolitiFact bubble. PolitiFact judges its transparency according to its belief it has published a new statement of principles. Those outside the PolitiFact bubble, unaware of the new statement of principles thanks to PolitiFact's oversight, do not likely take the same view of PolitiFact's transparency.

Why are those outside the bubble so ignorant of PolitiFact's extreme transparency?

Monday, February 12, 2018

Guest columnist (Democrat) critiques PolitiFact

We covered PolitiFact's announcement it had hired Democratic and Republican "reader advocates" to help establish its trustworthiness. And we covered how PolitiFact unpublished that announcement when its choice of Alan Grayson, former Democratic congressman from Florida, blew up in its face.

Another announcement followed on Feb. 9, 2018, naming Republican David Jolly and Democrat Jason Altmire as "guest columnists."

The guest columns appear on PolitiFact's blog page, "Inside the Meters," which should prove sufficient to bury the columns beyond the notice of anybody who doesn't get Twitter or email alerts directly from PolitiFact.

Altmire was the first to have a critique published.

We think it's pointless crap.

Altmire says PolitiFact "generously" rated a Republican "Half True." Then later in the column says the "Half True" rating is the correct rating.

No, seriously. That's what Altmire does.

In the lead paragraph, Altmire says PolitiFact gave a "generous" Half True rating (bold emphasis added):
PolitiFact generously rates Congressman Mullin’s Facebook post "Half True." He got the numbers right, but failed to inform readers of the context. In evaluating claims involving the selective use of statistics, PolitiFact must consider whether the omission was accidental or meant to deceive. Mullin’s omission appears to have been purposeful, because he knows an evaluation of Obama’s entire economic record would present a completely different picture than the one the congressman was trying to paint. Is "Half True" an accurate rating in this case?
 And in his concluding paragraph, Altmire says PolitiFact got the rating right (bold emphasis added):
PolitiFact gave the correct rating; Mullin’s post was indeed "Half True." But from now on, when readers consider a statement that has been rated "Half True" based upon the misuse of statistics, I hope they will remember the less-than-complimentary implication of that rating.
("from now on"?????)

Altmire's point, cleverly disguised in the midst of his self-contradiction, was that Congressman Mullin was lying, and Altmire wishes PolitiFact had been more clear about it.

As critiques go, that's plenty lame. Media Matters could have come up with that one with no problem.

If these columnists don't pick up their game immediately, PolitiFact ought to waste no time at all pulling the plug on this experiment.

Saturday, February 10, 2018

How we made our meme mocking PolitiFact

Earlier this week we noticed PolitiFact making yet another hypocritical declaration. PolitiFact has ruled it misleading to use "cuts" to refer to reductions to a future projected spending baseline. In many cases a budget might increase year by year but the legislature "cuts" spending by slowing its increase.
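A quick illustration of the baseline mechanics (our numbers, purely illustrative): suppose current law projects a program growing from $100 billion this year to $110 billion next year. If Congress instead funds it at $105 billion, spending still rises by $5 billion, yet measured against the baseline the change is scored as a $5 billion "cut."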

In the past, we've pointed out how PolitiFact tended to rate Republicans "Mostly False" for claiming the Affordable Care Act cut Medicare by hundreds of billions of dollars. When President Donald Trump and the Republican Congress tried the same thing with Medicaid in 2017, PolitiFact discovered that the claim was "Half True" on the few occasion(s?) it noticed the Democrats' ubiquitous claim and then quickly lost interest.

Fast forward to 2018, and PolitiFact published a fact check of a Trump statement about protests over the United Kingdom's National Health Service, its universal care program. PolitiFact treated Trump unfairly by rating him on something he did not say, but what really knocked our socks off was a sentence PolitiFact reeled off in its summary:
While the NHS has lost funding over the years, the march that took place was not in opposition to the service, but a call to increase funding and stop austerity cuts towards health and social care.
The problem? You guessed it! Spending has gone up for the NHS pretty consistently. The fact checkers at Britain's Full Fact even did a fact check in January 2018 relating to NHS funding. It only reported spending going up.
Spending on the NHS in England has increased in real terms by an average of around 1% a year since 2010. Since the NHS was established spending increases have averaged 4% per year.
So the NHS hasn't "lost funding" except against a baseline of projected future spending. The austerity "cuts" PolitiFact reports are decreases in the rate of future spending growth.

PolitiFact is making a claim it has rated "Half True" and worse in the past.

We don't appreciate that type of hypocrisy from a supposedly non-partisan and objective fact checker. So we went to work on a meme.

First, we looked at PolitiFact's list of stories with the "Medicare" tag. We knew we'd find stories reporting on budget cuts to a baseline. And from those stories we looked for one with a summary that would fit the present case. It didn't take long. We found a "Half True" rating from PolitiFact Ohio that fit the bill:


"So-called cut reflects savings from slowing growth in spending." Doesn't that sound much better than "cutting Medicare"? Hurrah! It's savings!

Our next step was to replace the text and image to the left of the "Truth-O-Meter" graphic. We decided to pin the blame on PolitiFact Editor Angie Drobnic Holan instead of on the intern who wrote and researched the fact check. Holan had good reason to know PolitiFact's history on rating cuts to a future baseline.

We took Holan's image from her Twitter account.

We replaced the text with Holan's name and the outrageous quotation from the Trump fact check.

We credited our faux fact check to "PolitiFact National" on the day the Trump fact check came out. We skipped the em-dash this time since it takes a few extra steps.

And we put a big "PARODY" watermark on the whole thing to make clear we're not trying to trick anybody. The point is to mock PolitiFact for its inconsistency.

Our finished product:


Seriously: It's ridiculous for a national fact-checking service to do such a poor job of reporting consistently. Holan is the chief editor, and she doesn't notice this clear problem? She let the intern down by not catching it. And how long will it take to correct the problem? Eternity?

PolitiFact's past work on budget cuts is already so chaotic that one more miss hardly matters. We don't expect anything to change. PolitiFact will go right on giving readers a slanted view of budget cuts.

For that matter, we expect the other two of America's "elite three" fact checkers to independently follow the same misleading pattern PolitiFact uses. That's what happens when all three lean left.

Thursday, February 8, 2018

Are fact checkers fact-checking opinions more? Blame Trump! (Updated)

The fact checkers at PolitiFact apparently can't keep themselves from allowing their opinions to seep into their work.

Fortunately, we can all blame President Trump. That way, the fact checkers need not acknowledge any error.

A Feb. 6, 2018 PolitiFact fact check took as an assertion of fact Trump's apparent opinion that the word "treason" might apply to Democrats who failed to applaud good news about the United States during Trump's State of the Union Address.


PolitiFact, in classic straw man fashion, insisted that "treason" had to refer to the type codified in law, and so rated Trump's claim "Pants on Fire" (bold emphasis added):
Trump said that at the State of the Union address, Democrats, "even on positive news … were like death and un-American. Un-American. Somebody said, ‘treasonous.’ I mean, yeah, I guess, why not? Can we call that treason? Why not?"

There’s a good reason why not: Declining to applaud the president doesn’t come anywhere near meeting the constitutionally defined threshold of treason, which in any case can’t occur except in wartime. Rather, legal experts agree that it is a clear case of constitutionally protected free speech. We rate the statement Pants on Fire.
In fact, "treason" has a broader definition than PolitiFact allowed:

  1. the offense of acting to overthrow one's government or to harm or kill its sovereign.
  2. a violation of allegiance to one's sovereign or to one's state.
  3. the betrayal of a trust or confidence; breach of faith; treachery. 
Failing to applaud good news about one's state would, in a sense, violate allegiance to one's state. And, more to the point, one can define words as one likes. One could, for example, choose to define the word "Rump" to refer exclusively to President Trump. One can do such things because words are ultimately just symbols representing ideas, and people can choose what idea to associate with what symbol.

Is it a good idea to use words in ways that run against their commonly understood meanings? That's a different issue.

Trump afforded his critics another marvelous opportunity to criticize his temperament and wisdom, but that criticism belongs in op-eds, not fact checks.

The dastardly Trump forced helpless journalists to abandon their objectivity.

How dare he.


Update Feb. 8, 2018

We weren't going to make a big deal of PolitiFact saying that Trump was suggesting that not applauding for him (Trump) might qualify as treason.

But then PolitiFact started emphasizing that misleading headline on Twitter:
That's just bad reporting, and it's a classic example of a biased headline. Trump says failing to applaud good news about the United States might pass as treason, not the failure to applaud President Trump.

Nonpartisan and objective journalists should be able to distinguish between the two.

Tuesday, February 6, 2018

PolitiFact: One standard for me, and another for thee

On Feb. 5, 2018, PolitiFact published an article on cherry picking from one of its veteran writers, Louis Jacobson. Titled "The Age of Cherry-picking," it led with a claim of fact as its main hook:
These days, it isn’t just that Republicans are from Mars and Democrats are from Venus. Increasingly, politicians on either side are cherry-picking evidence to support their version of reality.
With cherry-picking on the increase, and with both sides using it more, certainly readers would want to see what PolitiFact has to say about it.

But is it true? Is cherry-picking on the increase?

One had to read far down the column to reach Jacobson's evidence (bold emphasis added):
So is there more cherry-picking today in political rhetoric than in the past? That’s hard to say -- we couldn’t find anyone who measures it. But several political scientists and historians said that even if it’s not more common, the use of the tactic may have turned a corner.
Seriously?

If a writer tries to hook me into reading a story based on the claim that cherry-picking is on the increase, then takes over 20 paragraphs before getting around to telling me that no good evidence supports the claim, I want my money back.

This isn't hard, fact checkers. If it's hard to say if there is more cherry-picking today in political rhetoric than in the past, don't say "Increasingly, politicians on either side are cherry-picking evidence to support their version of reality."

Don't do it.

Even a Democrat probably couldn't entirely get away with a claim so poorly supported by the evidence, thanks to PolitiFact's occasionally-applied principle of the burden of proof:
Burden of proof – People who make factual claims are accountable for their words and should be able to provide evidence to back them up. We will try to verify their statements, but we believe the burden of proof is on the person making the statement.
We used Twitter to needle PolitiFact over this issue, surprisingly drawing some response (nothing of substance). But the exchange ended up productive when co-editor Jeff D, who runs the PFB Twitter account, contributed this summary:
That about sums it up. One standard for me, and another for thee.



Update Feb. 7, 2018: Supplied URL to PolitiFact's article on cherry picking, added tag labels.

Monday, February 5, 2018

Does "lowest" mean something different in Georgia than it does in Texas?

Today PolitiFact National, posing as PolitiFact Georgia, called it "Mostly True" that Georgia has the lowest minimum wage in the United States.

Georgia law sets the minimum wage at $5.15 per hour, the same rate Wyoming uses, and the federal minimum wage of $7.25 applies to all but a very few Georgians. PolitiFact National, posing as PolitiFact Georgia, hit Democrat Stacey Evans with a paltry "Mostly True" rating:
Evans said Georgia "has the lowest minimum wage in the country."

Georgia’s minimum wage of $5.15 per hour is the lowest in the nation, but Wyoming also has the same minimum wage.

Also, most of Georgia’s workers paid hourly rates earn the federal minimum of $7.25.

Evans’ statement is accurate but needs clarification or additional information. We rate it Mostly True.
Sounds good. No problem. Right?

Eh. Not so fast.

Why is it okay in Georgia for "lowest" to reasonably reflect a two-way tie with Wyoming, while in Texas using "lowest" amid a three-way tie earns the speaker a "False" rating?



How did PolitiFact Texas justify the "False" rating it gave the Republican governor (bold emphasis added)?
Abbott tweeted: "The Texas unemployment rate is now the lowest it’s been in 40 years & Texas led the nation last month in new job creation."

The latest unemployment data posted when Abbott spoke showed Texas with a 4 percent unemployment rate in September 2017 though that didn't set a 40-year record. Rather, it tied the previous 40-year low set in two months of 2000.

Abbott didn’t provide nor did we find data showing jobs created in each state in October 2017.

Federal data otherwise indicate that Texas experienced a slight decrease in jobs from August to September 2017 though the state also was home to more jobs than a year earlier.

We rate this claim False.
 A tie goes to the Democrat, apparently.

We do not understand why it is not universally recognized that PolitiFact leans left.



Correction/clarification Feb. 5, 2018:
Removed unneeded "to" from the second paragraph. And added a needed "to" to the next-to-last sentence.


Thursday, February 1, 2018

The bird's-eye lowdown on PolitiFact's partisan reader advocates (Updated)

Today PolitiFact announced it will publish content from two reader advocates at its PolitiFact.com website.

The announcement didn't go so well. Shortly after making the announcement PolitiFact nixed Democrat Alan Grayson's planned involvement.

The Hill reports:
Fact-checking website PolitiFact on Thursday announced that it had hired former Florida Reps. David Jolly (R) and Alan Grayson (D) as “reader advocates” before hours later nixing Grayson's hire after fierce backlash.
The dumping of Grayson aside, does this represent a sincere effort from PolitiFact to help improve its product?

Probably not, despite the we're-so-humble sales job from PolitiFact Executive Director Aaron Sharockman.

Huh. Well, we intended to link directly to PolitiFact's announcement about its new reader representatives, but it looks like PolitiFact unpublished it. We'll go with the reporting from the Hill instead, which has coincidentally also been altered since its publication (an earlier version carried a hyperlink to PolitiFact's announcement):
The two former lawmakers had been set to critique the website’s fact-checks and provide additional insight on political issues as part of a pilot program that will run through the end of April, according to a Thursday PolitiFact post announcing the hires. The post has since been deleted.

“David and Alan are both particularly qualified, we think, to critique the work of PolitiFact, because they’ve been subject to our fact-checks as members of Congress,” PolitiFact Executive Director Aaron Sharockman wrote in his initial statement.

Well-qualified to critique the work of PolitiFact! That implies that the critiques may prove valid, right?

PolitiFact said it would learn from Jolly and Grayson, though that's hard for us to prove now that PolitiFact is disposing of the evidence. Like a good fact-checker should, I guess.

We think PolitiFact's flirtation with reader advocates fits well with its old narrative about how it receives criticism from both sides. By that mumbo-jumbo logic, getting criticized from both sides shows the reliability of the entity being criticized. That's a superb narrative for PolitiFact to promote. It's certainly far better than the alternative narrative: that getting criticized by both sides means you're probably doing something wrong.

PolitiFact's founding editor, Bill Adair, did research suggesting that PolitiFact receives most of its substantive criticism from the right. Will PolitiFact allow such research to impact its choice of which narrative to promote? We doubt it.

We captured images from Sharockman's Twitter feed--ones he had so far elected not to delete. The one at the top pretty much shows PolitiFact's thinking behind this experiment. It's not about improving the product. It's about encouraging the public to trust in the existing product.


Note that PolitiFact's experiment was slated to run only through April 2018. Three months. If PolitiFact detects signs that people are trusting it less during that window, it can simply terminate the experiment.

Does that sound like a sincere effort to improve the product? Or more like a cynical ploy intended to trick people into trusting PolitiFact?

How many times do we have to say it? One gains trust by proving trustworthy. One proves one's trustworthiness through accuracy and transparency.

Making published works entirely disappear is not transparency.



Jeff adds:


We wonder what it says about PolitiFact that they considered Grayson representative of a conventional Democrat voice.

We'd also like to congratulate the fact checkers on selecting conservative powerhouse David ... uh ... *checks notes* ... Jolly.

We're not surprised that PolitiFact's ill-advised attempt to gain credibility resulted in more of the same deception we've come to expect. For example, un-publishing articles is somehow a sign of trustworthiness?

For years we've said PolitiFact would benefit from an inside critic of their work and suggested most of their obvious blunders would have been prevented with a heterodox voice on staff. On its face the notion of "reader advocates" seems like a step in that direction, until you realize it's just another click-seeking gimmick (Is Grayson really the top pick for any serious endeavor?).

If PolitiFact were actually sincere about gaining reader trust, there are more effective ways than adding sideshow acts performed by clowns and cranks. For instance, they could unequivocally disavow their longtime use of stealth edits. That might help bring them into compliance with the International Fact-Checking Network code of principles that they currently violate.

But the most obvious thing they could do to improve their image is to credibly rebut the volumes of legitimate, earnest criticism of their work. So far, PolitiFact's response to charges of bias has been to call critics "mental" or to ignore them altogether. For some reason PolitiFact does not view an honest defense of their work as a viable remedy for reader distrust.

PolitiFact's incompetent and biased editorializing has never earned credibility. Adding more clowns to the car won't change that. This "reader advocate" stunt only shows how unserious PolitiFact is about providing readers the truth.






Edit 0835PST 2/2/2018: Added word "PolitiFact" in penultimate graph -Jeff

Imperious PolitiFact

PolitiFact decides who built what, and it's ridiculous

Back in 2016 we reviewed the "True" rating PolitiFact awarded First Lady Michelle Obama for her claim that the White House was built by slaves.

Slaves definitely helped with the labor of constructing the White House, but to an unknown degree. Regardless of that, PolitiFact awarded Obama a "True" rating on its subjective "Truth-O-Meter."

We thought PolitiFact went too easy on the claim, given that one could use the same standard to rate it "True" that European immigrants built the White House. Including a word like "helped" would allow either claim to qualify as credible.

Fast forward to 2018 and the State of the Union Address response from U.S. Rep. Joe Kennedy III (D-Mass.).

Kennedy said immigrants built Fall River, Massachusetts.

Breitbart, a right-leaning news outlet, judged Kennedy's statement "Mostly False," reasoning that the establishment of Fall River by native-born descendants of English settlers made it reasonable to say the city was built, at least in part, by those native-born people. Breitbart added that the native-born population has always outnumbered immigrants in the county that contains Fall River.

PolitiFact apparently doesn't care for sharing credit. If one group helped build something, that group gets the credit, and the other groups that helped get none.

PolitiFact rated Breitbart's claim "False." Yes, that implies that immigrants who helped "build" Fall River by coming to work at factories established by the native residents were the ones who exclusively built Fall River.



Call us radical right-wingers if you like, but we think if the facts show that credit for building something should be shared, then a fact checker should acknowledge shared credit in its ratings.

Breitbart's "Mostly False" rating hints at an ability to make that type of acknowledgement.

PolitiFact's "True" and "False" ratings make it look more partisan than Breitbart.

Tuesday, January 30, 2018

PolitiFact editor: "Tell me where the fact-check is wrong"

Ever notice how PolitiFact likes to paint its critics as folks who carp about whether the (subjective) Truth-O-Meter rating was correct?

PolitiFact Editor Angie Drobnic Holan gave us another stanza of that song-and-dance in a Jan. 26, 2018 interview with Digital Charlotte. Digital Charlotte's Stephanie Bunao asked Holan whether she sees a partisan difference in the email and commentary PolitiFact receives from readers.

Holan's response (bold emphasis added):
Well, we get, you know, nobody likes it when their team is being criticized, so we get mail a lot of different ways. I think obviously there's a kind of repeated slogan from the conservative side that when they see media reports they don't like, that it's liberal media or fake news. On the left, the criticism is a little different – like they accuse us of having false balance. You know, when we say the Democrats are wrong, they say, ‘Oh, you're only doing that to try to show that you're independent.’ I mean it gets really like a little bit mental, when people say why we're wrong. If they're not dealing with the evidence, my response is like, ‘Well you can say that we're biased all you want, but tell me where the fact-check is wrong. Tell me what evidence we got wrong. Tell me where our logic went wrong. Because I think that's a useful conversation to have about the actual report itself.’
Let us count the ways Holan achieves disingenuousness, starting with the big one at the end:

1) "Tell me where the fact-check is wrong"

We've been doing that for years here at PolitiFact Bias, making our points in blog posts, emails and tweets. Our question for Holan? If you think that's a useful conversation to have, then why do you avoid having it? On Jan. 25, 2018, we sent Holan an email pointing out a factual problem with one of PolitiFact's fact checks. We received no reply. And on Jan. 26 she tells an interviewer that the conversation she won't have is a useful one?

2) "Every year in December we look at all the things that we fact-check, and we say, ‘What is the most significant lie we fact-checked this year’"

Huh? In 2013, PolitiFact worked hard to make the public believe it had chosen the president's Affordable Care Act promise that people would be able to keep plans they liked under the new health care law as its "Lie of the Year." But PolitiFact did not fact check the claim in 2013. PolitiFact Bias and others exposed PolitiFact's deception at the time, but PolitiFact keeps repeating it.

3) PolitiFact's "extreme transparency"

Asked how the media can regain public trust, Holan mentioned the use of transparency. We agree with her that far. But she used PolitiFact as an example of providing readers "extreme transparency."

That's a laugh.

Perhaps PolitiFact provides more transparency than the average mainstream media outlet, but does that equal "extreme transparency"? We say no. Extreme transparency is admitting your politics (PolitiFail), publishing the texts of expert interviews (PolitiFail, except for PolitiFact Texas), revealing the "Truth-O-Meter" votes of your editorial "star chamber" (PolitiFail), and more.

PolitiFact practices above-average transparency, not "extreme transparency." And the media tend to deliver a poor degree of transparency.

We remain prepared to have that "useful conversation" about PolitiFact's errors of fact and research.

You let us know when you're ready, Angie Drobnic Holan.

Monday, January 29, 2018

PolitiFact masters the non sequitur

A non sequitur occurs when one idea does not follow from another.

A Jan. 23, 2018 fact check by PolitiFact's Miriam Valverde offers ample evidence that PolitiFact has mastered the non sequitur.


Valverde's fact check concerned a claim from a White House infographic*:


PolitiFact looked at whether it was true that immigrants cost U.S. taxpayers $300 billion annually. The careful reader will have noticed that the White House infographic did not claim that immigrants cost U.S. taxpayers $300 billion annually. It made two distinct claims: first, that unskilled immigrants create a net fiscal deficit, and second, that current immigration policy puts U.S. taxpayers on the hook for as much as $300 billion.

Isn't it wonderful when supposedly non-partisan fact checkers create straw men?

As for what the White House actually claimed, yes, the Washington Times reported there was one study--a thorough one--that said current immigration policy costs U.S. taxpayers as much as $296 billion annually.

We do not know the precise origin of that figure, having looked for it in the study. Apparently PolitiFact also failed to find it, and after mentioning the Times' report it proceeded to use the study's figure of $279 billion for 2013. That figure was for the first of eight scenarios.

Was the $296 billion number an inflation adjustment? A population-increase adjustment? A mistake? A figure representing one of the other groups? We don't know. But if the correct figure is $279 billion, then $300 billion represents a liberal-but-common method of rounding. It could also qualify as an exaggeration of about 8 percent.
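For readers who want to check that arithmetic, here is the back-of-the-envelope version (assuming, as we do above, that the study's $279 billion figure is the correct baseline):

($300 billion − $279 billion) / $279 billion ≈ 0.075, or roughly 8 percent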

What problem does PolitiFact find with the infographic (bold emphasis added)?
A consultant who contributed to the report told us that in 2013 the total fiscal burden -- average outlays minus average receipts multipled [sic] by 55.5 million individuals -- was $279 billion for the first generation of immigrants. But making a conclusion on that one figure is a mighty case of cherry-picking.
What?

What conclusion did the infographic draw that represents cherry picking?

That line from Valverde does not belong in a fact check without a clear example of the cherry-picking at fault. And in this case there isn't one. The fact check provides more information about the report, including some positives regarding immigrant populations (especially second-generation immigrants), but ultimately finds no concrete fault with the infographic.

PolitiFact's charge of cherry picking doesn't follow.

And PolitiFact's conclusion?
Our ruling

The White House claimed that "current immigration policy imposes as much as $300 billion annually in net fiscal costs on U.S. taxpayers."

A study from the National Academies of Sciences, Engineering, and Medicine analyzed the fiscal impact of immigration under different scenarios. Under some assumptions, the fiscal burden was $279 billion, but $43 billion in other scenarios.

The report also found that U.S.-born children with at least one foreign-born parent are among the strongest economic and fiscal contributors, thanks in part to the spending by local governments on their education.

The statement is partially accurate but leaves out important details. We rate it Half True.
In the second paragraph PolitiFact says the fiscal burden amounted to $43 billion "in other scenarios." Put correctly, one scenario put the figure at $279 billion, and two scenarios may have put the figure at $43 billion because those two scenarios were nearly identical. The study looked at a total of eight scenarios, found here. It appears to us that scenario four may serve as the source of the $296 billion figure reported in the Washington Times.

Our conclusion? PolitiFact's fact check provides a left-leaning picture of the context of the Trump White House infographic. The infographic is accurate. It plainly states that it is picking out a high-end figure. It states it relies on one study for the figure.

The infographic, in short, alerts readers to the potential problems with the figure it uses.

That said, the $300 billion figure serves as a pretty good example of appealing to the audience's anchoring bias. Mentioning "$300 billion" predisposes the audience toward believing a similarly high figure regardless of other evidence. That's a legitimate gripe about the infographic, though one PolitiFact neglected to point out while fabricating its charge of "cherry-picking."


Afters

*I noticed ages ago that the Obama administration produced a huge number of misleading infographics. Maybe PolitiFact fact checked one of them?



Correction Jan. 31, 2018: Inserted the needed word "check" between "fact" and "provides" in the fourth paragraph from the end.