Monday, October 22, 2018

PolitiFact: One Standard For Me and Another For Thee 2

PolitiFact executed another of its superlative demonstrations of hypocrisy this month.

After PolitiFact unpublished its botched fact check about Claire McCaskill and the affordability of private aircraft, it published a corrected (?) fact check changing the rating from "False" to "Half True." Why "Half True" instead of "True"? PolitiFact explained it gave the "Half True" rating because the (Republican) Senate Leadership Fund failed to provide adequate context (bold emphasis added).
The Senate Leadership Fund says McCaskill "even said this about private planes, ‘that normal people can afford it.’"

She said those words, but the footage in the ad leaves out both the lead-in comment that prompted McCaskill’s remark and the laughter that followed it. The full footage makes it clear that McCaskill was wrapping up a policy-heavy debate with a private-aviation manager and with a riff using the airport manager’s words. In context, he was referring to "normal" users of private planes, as opposed to "normal" Americans more generally.

We rate the statement Half True.
Let's assume for the sake of argument that PolitiFact is exactly right (we don't buy it) in the way it recounts the problems with the missing context.

Assuming the missing context in a case like this makes a statement "Half True," how in the world does PolitiFact allow itself to get away with the shenanigan PolitiFact writer Jon Greenberg pulled in his article on Sen. Elizabeth Warren's DNA test?

Greenberg (bold emphasis added):
Trump once said she had as much Native American blood as he did, and he had none. At a July 5 rally in Montana, he challenged her to take a DNA test.

"I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian," Trump said.

Trump now denies saying that, but in any event, Warren did get tested and the results did find Native American ancestry.
Trump said those words, but Greenberg's version of the quote leaves out more than half of Trump's sentence, as well as comments that came before. The full quotation makes it clear that Trump's million dollar challenge was presented as a potential future event--a hypothetical, in other words. In context, Trump was referring to a potential future challenge for Warren to take a DNA test as opposed to making the $1 million challenge at that moment.

PolitiFact takes Trump at least as far out of context as the Senate Leadership Fund took McCaskill, if not further.

How does that kind of boundless hypocrisy pass the sniff test? Are the people at PolitiFact that accustomed to their own stench?


Afters

PolitiFact's "In Context" presentation of Trump's million-dollar challenge to Sen. Warren, confirming what we're saying about PolitiFact's Jon Greenberg ignoring the surrounding context (bole emphasis in the original):
(L)et's say I'm debating Pocahontas. I promise you I'll do this: I will take, you know those little kits they sell on television for two dollars? ‘Learn your heritage!’ … And in the middle of the debate, when she proclaims that she is of Indian heritage because her mother said she has high cheekbones — that is her only evidence, her mother said we have high cheekbones. We will take that little kit -- but we have to do it gently. Because we're in the #MeToo generation, so I have to be very gentle. And we will very gently take that kit, and slowly toss it, hoping it doesn't injure her arm, and we will say: ‘I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian.’
See also: Fact Checkers for Elizabeth Warren

Wednesday, October 17, 2018

Washington Free Beacon: "PolitiFact Retracts Fact Check ..."

Full title:

PolitiFact Retracts Fact Check After Erroneously Ruling Anti-Claire McCaskill Ad ‘False’

We were preparing to post about PolitiFact's crashed-and-burned fact check of  the (Republican) Senate Leadership Fund's Claire McCaskill attack ad. But we noticed that Alex Griswold did a fine job of telling the story for the Washington Free Beacon.

Griswold:
In the revised fact check published Wednesday, PolitiFact announced that "after publication, we received more complete video of the question-and-answer session between McCaskill and a constituent that showed she was in fact responding to a question about private planes, as well as a report describing the meeting … We apologize for the error."

PolitiFact still only ruled the ad was "Half True," arguing that the Senate Leadership Fund "exaggerated" McCaskill's remarks by showing them in isolation. In full context, the fact checker wrote, McCaskill's remarks "seem to refer to ‘normal' users of private planes, not to ‘normal' Americans more generally."
Griswold's article managed to hit many of the points we made about the PolitiFact story on Twitter.


For example:

New evidence to PolitiFact, maybe. The evidence had been on the World Wide Web since 2017.

PolitiFact claimed it was "clear" from the short version of the town hall video that the discussion concerned commercial aviation in the broad sense, not private aircraft. Somehow that supposed clarity vanished with the appearance of a more complete video.


Read the whole article at the Washington Free Beacon.


We also used Twitter to slam PolitiFact for its policy of unpublishing when it notices a fact check has failed. Given that PolitiFact, as a matter of stated policy, archives the old fact check and embeds the URL in the new version of the fact check, no good reason appears to exist to delay availability of the archived version. It's as easy as updating the original URL for the bad fact check to redirect to the archive URL.

In another failure of transparency, PolitiFact's archived/unpublished fact checks eliminate bylines and editing or research credits along with source lists and hotlinks. In short, the archived version of PolitiFact's fact checks loses a hefty amount of transparency on the way to the archive.

PolitiFact can and should do better both with its fact-checking and its policies on transparency.


Exit question: Has PolitiFact ever unpublished a fact check that was too easy on a conservative or too tough on a liberal?

There's another potential bias measure waiting for evaluation.

Tuesday, October 16, 2018

Fact Checkers for Elizabeth Warren

Sen. Elizabeth Warren (D-Mass.) provided mainstream fact checkers a great opportunity to show their true colors. Fact checkers from PolitiFact and Snopes spun themselves into the ground trying to help Warren excuse her self-identification as a "Native American."

Likely 2020 presidential candidate Warren has long been mocked from the right as "Fauxcahontas" based on her dubious claims of Native American minority status. Warren had her DNA tested and promoted the findings as some type of vindication of her claims.

The fact checkers did their best to help.


PolitiFact

PolitiFact ran Warren's report past four experts and assured us the experts thought the report was legitimate. But the quotations from the experts don't tell us much. PolitiFact uses its own summaries of the experts' opinions for the statements that best support Warren. Are the paraphrases or summaries fair? Trust PolitiFact? It's another example showing why fact checkers ought to provide transcripts of their interactions with experts.

Though the article bills itself as telling us what we can and cannot know from Warren's report, it takes a Mulligan on mentioning Warren's basic claim to minority status. Instead it emphasizes the trustworthiness of the finding of trace Native American inheritance.

At least the article admits that the DNA evidence doesn't help show Warren is of Cherokee descent. There's that much to say in favor of it.

But more to the downside, the article repeats as true the notion that Trump had promised $1 million if Warren could prove Native American ancestry (bold emphasis added):
At a July 5 rally in Montana, he challenged her to take a DNA test.

"I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian," Trump said.

Trump now denies saying that, but in any event, Warren did get tested and the results did find Native American ancestry.
Just minutes after PolitiFact published the above, it published a separate "In Context" article under this title: "In context: Donald Trump's $1 million offer to Elizabeth Warren."

While we do not recommend PolitiFact's transcript as any kind of model journalism (it leaves out quite a bit without using ellipses to show the omissions), the transcript in that article is enough to show the deception in its earlier article (green emphasis added, bold emphasis in the original):
"I shouldn't tell you because I like to not give away secrets. But let's say I'm debating Pocahontas. I promise you I'll do this: I will take, you know those little kits they sell on television for two dollars? ‘Learn your heritage!’ … And in the middle of the debate, when she proclaims that she is of Indian heritage because her mother said she has high cheekbones — that is her only evidence, her mother said we have high cheekbones. We will take that little kit -- but we have to do it gently. Because we're in the #MeToo generation, so I have to be very gentle. And we will very gently take that kit, and slowly toss it, hoping it doesn't injure her arm, and we will say: ‘I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian.’ And let’s see what she does. I have a feeling she will say no. But we’ll hold that for the debates.
Note that a very minor expansion of the first version of the Trump quotation torpedoes claims that Trump had already pledged $1 million hinging on Warren's DNA test results: "We will say." So PolitiFact's first story dutifully leaves it out and reinforces the false impression that Trump's promise was not hypothetical.

Despite clear evidence that Trump was speaking of a hypothetical future situation, PolitiFact's second article sticks with a headline suggesting an existing pledge of $1 million--though it magnanimously allows at the end of the article that readers may draw their own conclusions.

It's such a close call, apparently, that PolitiFact does not wish to weigh in either pro or con.

Our call: The fact checkers (read: liberal bloggers) at PolitiFact contribute to the spread of misinformation.

Snopes

Though we think PolitiFact is the worst of the mainstream fact checkers, the liberal bloggers at Snopes outdid PolitiFact in terms of ineptitude this time.

Snopes used an edited video to support its claim that it was "True" Trump pledged $1 million based on Warren's DNA test.



The fact check coverage from PolitiFact and Snopes so far makes it look like Warren will be allowed to skate on a number of apparently false claims she made in the wake of her DNA test announcement. Which mainstream fact-checker is neutral enough to look at Warren's suggestion that she can legitimately cash in on Trump's supposed $1 million challenge?

It's a good thing we have non-partisan fact checkers, right?


Afters

Glenn Kessler, the Washington Post Fact Checker

The Washington Post Fact Checker, to our knowledge, has not produced any content directly relating to the Warren DNA test.

That aside, Glenn Kessler has weighed in on Twitter. Some of Kessler's (re)tweets have underscored the worthlessness of the DNA test for identifying Warren as Cherokee.

On the other hand, Kessler gave at least three retweets for stories suggesting Trump had already pledged $1 million based on the outcome of a Warren DNA test.




So Kessler's not joining the other two in excusing Warren. But he's in on the movement to brand Trump as wrong even when Trump is right.

Monday, October 15, 2018

Taylor Swift's Candidates Lag in Polls--PolitiFact Hardest Hit?

We noted pop star Taylor Swift's election endorsement statement drew the selective attention of the fact checkers (read: left-leaning bloggers) at PolitiFact.

We've found it hilarious over the past several days that PolitiFact has mercilessly pimped its Swiftian fact check repeatedly on Twitter and Facebook.

Now with polls showing Swift's candidates badly trailing their Republican counterparts, we can only wonder: Is PolitiFact the entity hardest hit by Swift's failure (so far) to make a critical difference in putting the Democrats over the top?


The Biggest Problem with PolitiFact's Fact Check of Taylor Swift

The Swift claim PolitiFact chose to check was the allegation that Tennessee Republican Marsha Blackburn voted against the Violence Against Women Act. We noted that PolitiFact's choice of topic, given the fact that Swift made at least four claims that might interest a fact checker, was likely the best choice from the liberal point of view.

Coincidentally(?), PolitiFact pulled the trigger on that choice. But as we pointed out in our earlier post, PolitiFact still ended up putting its finger on the scales to help its Democratic Party allies.

It's true Blackburn voted against reauthorizing the Violence Against Women Act (PolitiFact ruled it "Mostly True").

But it's also true that Blackburn voted to reauthorize the Violence Against Women Act.

Contradiction?

Not quite. VAWA came up for reauthorization in 2012. Blackburn co-sponsored a VAWA reauthorization bill and voted in favor. It passed the House with most Democrats voting in opposition.

And the amazing thing is that the non-partisan fact checkers (read: liberal bloggers) at PolitiFact didn't mention it. Not a peep. Instead, PolitiFact began its history of the reauthorization of the VAWA in 2013:
The 2013 controversy
The Violence Against Women Act was two decades old in 2013 when Congress wrestled with renewing the funds to support it. The law paid for programs to prevent domestic violence. It provided money to investigate and prosecute rape and other crimes against women. It supported counseling for victims.

The $630 million price tag was less the problem than some specific language on non-discrimination.

The Senate approved its bill first on Feb. 12, 2013, by a wide bipartisan margin of 78 to 22. That measure redefined underserved populations to include those who might be discriminated against based on religion, sexual orientation or gender identity.
Starting the history of VAWA reauthorization in 2013 trims away the bothersome fact that Blackburn voted for VAWA reauthorization in 2012. Keeping that information out of the fact check helps sustain the misleading narrative that Republicans like Blackburn are okay with violence against women.

As likely as not that was PolitiFact's purpose.



Thursday, October 11, 2018

This Is How Selection Bias Works

Here at PolitiFact Bias we have consistently harped on PolitiFact's vulnerability to selection bias.

Selection bias happens, in short, whenever a data set fails to represent the population it is supposed to describe. Scientific studies often use random selection, or a close approximation of it, to help achieve a representative sample and avoid the pitfall of selection bias.

PolitiFact has no means of avoiding selection bias. It fact checks the issues it wishes to fact check. So PolitiFact's set of fact checks is contaminated by selection bias.
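To make the mechanism concrete, here is a minimal sketch, in Python, using entirely made-up numbers of our own (nothing here is measured from PolitiFact's work). It shows how story selection alone can make one party's record look worse even when both parties make false claims at exactly the same rate:

```python
import random

random.seed(42)

# Hypothetical population of claims: each tuple is (party, claim_is_false).
# Assume, purely for illustration, both parties make false claims at the same 30% rate.
population = [("R", random.random() < 0.30) for _ in range(5000)]
population += [("D", random.random() < 0.30) for _ in range(5000)]

def false_rate_by_party(claims):
    totals, falses = {}, {}
    for party, is_false in claims:
        totals[party] = totals.get(party, 0) + 1
        falses[party] = falses.get(party, 0) + is_false
    return {p: round(falses[p] / totals[p], 2) for p in totals}

# Representative sample: simple random selection of 1,000 claims to "check."
random_sample = random.sample(population, 1000)

# Biased selection: false-sounding claims from one party are far more likely
# to be picked for a "fact check" than comparable claims from the other party.
def pick_probability(party, is_false):
    if is_false:
        return 0.9 if party == "R" else 0.3
    return 0.1

biased_sample = [c for c in population if random.random() < pick_probability(*c)]

print("random sample ", false_rate_by_party(random_sample))   # roughly 0.30 for both
print("biased sample ", false_rate_by_party(biased_sample))   # "R" looks far worse
```

The sketch doesn't prove PolitiFact picks stories that way; it only shows why a tally built from hand-picked stories can't be read as a report card on either party.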

Is PolitiFact's selection bias influenced by its ideological bias?

We don't see why not. And Taylor Swift will help us illustrate the problem.


PolitiFact looked at Swift's claim that Sen. Marsha Blackburn voted against the Violence Against Women Act. That fact check comes packed with the usual PolitiFact nonsense, such as overlooking Blackburn's vote in favor of VAWA in 2012. But this time our focus falls on PolitiFact's decision to look at this Swift claim instead of others.

What other claims did PolitiFact have to choose from? Let's have a look at the relevant part of Swift's statement:
I cannot support Marsha Blackburn. Her voting record in Congress appalls and terrifies me. She voted against equal pay for women. She voted against the Reauthorization of the Violence Against Women Act, which attempts to protect women from domestic violence, stalking, and date rape. She believes businesses have a right to refuse service to gay couples. She also believes they should not have the right to marry. These are not MY Tennessee values.
 Now let's put the different claims in list form:
  • Blackburn voted against equal pay for women.
  • Blackburn voted against the Reauthorization of the Violence Against Women Act
  • Blackburn believes businesses have a right to refuse service to gay couples
  • Blackburn also believes they should not have the right to marry
PolitiFact says it checks claims that make it wonder "Is that true?"

The first statement regarding equal pay for women makes a great candidate for that question. Congress hasn't had to entertain a vote that would oppose equal pay for women (for equal work) for many years. It's been the law of the land since the 1960s. Lilly Ledbetter Fair Pay Act? Don't make me laugh.

The second statement is a great one to check from the Democratic Party point of view, for the Democrats made changes to the VAWA with the likely intent of creating voter appeals based on conservative opposition to those changes.

The third statement concerns belief instead of the voting record, so that makes it potentially more challenging to check. On its face, Swift's claim looks like a gross oversimplification that ignores concerns about constitutional rights of conscience.

The fourth statement, like the third, involves a claim about belief. Also, the fourth statement would likely count as a gross oversimplification. Conservatives opposed to gay marriage tend to oppose same-sex couples asserting every legal advantage that opposite-sex couples enjoy.

PolitiFact chose its best candidate for finding the claim "True" instead of one more likely to garner a "False" rating. It chose the claim most likely to electorally favor Democrats.

Commonly choosing facts to check on that type of basis may damage the election prospects of those unfairly harmed by partisan story selection. People like Sen. Blackburn.

It's a rigged system when employed by neutral and nonpartisan fact checkers who lean left.

And that's how selection bias works.


Tuesday, October 2, 2018

Again: PolitiFact vs PolitiFact

In 2013, PolitiFact strongly implied (it might opine that it "declared") that President Obama's promise that people could keep the plans they liked under his health care overhaul, the Affordable Care Act, was false: it gave that promise its "Lie of the Year" award.

In 2018, PolitiFact Missouri (with editing help from longtime PolitiFacter Louis Jacobson) suffered acute amnesia about its 2013 "Lie of the Year" pronouncements.


PolitiFact Missouri rates Republican Josh Hawley's claim that millions of Americans lost their health care plans "Mostly False."

Yet in 2013 it was precisely the loss of millions of health care plans that PolitiFact advertised as its reason for giving Mr. Obama its "Lie of the Year" award (bold emphasis added):
It was a catchy political pitch and a chance to calm nerves about his dramatic and complicated plan to bring historic change to America’s health insurance system.

"If you like your health care plan, you can keep it," President Barack Obama said -- many times -- of his landmark new law.

But the promise was impossible to keep.

So this fall, as cancellation letters were going out to approximately 4 million Americans, the public realized Obama’s breezy assurances were wrong.
Hawley tried to use PolitiFact's finding against his election opponent, incumbent Sen. Claire McCaskill (D-Mo.) (bold emphasis added):
"McCaskill told us that if we liked our healthcare plans, we could keep them. She said the cost of health insurance would go down. She said prescription drug prices would fall. She lied. Since then, millions of Americans have lost their health care plans."

Because of the contradiction between Hawley’s assertion and the promises of the ACA to insure more Americans, we decided to take a closer look.
So, despite the fact that PolitiFact says millions lost their health care plans and the breezy assurance to the contrary was wrong, PolitiFact says it gave Hawley's claim a closer look because it contradicts assurances that the ACA would insure more Americans.

Apparently it doesn't matter to PolitiFact that Hawley was specifically talking about losing health care plans and not losing health insurance completely. In effect, PolitiFact Missouri disavows any knowledge that the promise "if we liked our healthcare plans, we could keep them" was a false promise. The fact checkers substitute loss of health insurance for the loss of health care plans and give Hawley a "Mostly False" rating based on their own fallacy of equivocation (ambiguity).

A consistent PolitiFact could have performed this fact check easily. It could have looked at whether McCaskill made the same promise Obama made. And after that it could have remembered that it claimed to have found Obama's promise false along with the reasoning it used to justify that ruling.

Instead, PolitiFact Missouri delivers yet another outstanding example of PolitiFact inconsistency.



Afters:

Do we cut PolitiFact Missouri a break because it was not around in 2013?

No we do not.

Exhibit 1: Louis Jacobson, who has been with PolitiFact for over 10 years, is listed as an editor.

Exhibit 2: Jacobson, beyond a research credit on the "Lie of the Year" article we linked above, wrote a related fact check on the Obama administration's attempt to explain its failed promise.

There's no excuse for this type of inconsistency. But bias offers a reasonable explanation for this type of inconsistency.



Tuesday, September 25, 2018

Thinking Lessons

Our post "Google Doesn't Love Us Anymore" prompted a response from the pseudonymous "Jobman."

Nonsensical comments are normally best left unanswered unless they are used for instruction. We'll use "Jobman's" comments to help teach others not to make similar mistakes.

"Jobman" charged that our post misled readers in two ways. In his first reply "Jobman" offer this explanation of the first of those two allegedly misleading features:

This post is misleading for two reasons, 1. Because it implies that google is specifically down-ranking your website. (Yes, it still does, even if your little blurb at the bottom tries to tell otherwise. "One of the reasons we started out with and stuck with a Blogger blog for so long has to do with Google's past tendency to give priority to its own." and "But we surmise that some time near the 2016 election Google tweaked its algorithms in a way that seriously eroded our traffic" Prove this point)
We answered that "Jobman" contradicted his claim with his evidence.


Lesson One: Avoid the Non Sequitur

"Jobman" asserts that our post implies Google specifically downranked the "PolitiFact Bias" website. The first evidence he offers is our statement that in the past Google gave priority to its own. Google owns Blogger and could be depended on to rank a Blogger blog fairly quickly. What does that have to do with specifically downranking the (Blogger) website "PolitiFact Bias"? Nothing. We offered it only as a reason we chose and continued with Blogger. Offering evidence that doesn't support a claim is a classic example of a non sequitur.
  • Good arguments use evidence that supports the argument, avoiding non sequiturs.

Lesson Two: Looking Up Words You May Not Understand Can Help Avoid Non Sequiturs

"Jobman" offered a second piece of evidence that likewise counted as a non sequitur. We think "Jobman" doesn't know what the term "surmise" means. Not realizing that "surmise" means coming to a conclusion based on reasoning short of proof might lead a person to claim that one who claims to have surmised something needs to provide proof of that thing. But that's an obvious non sequitur for a person who understands that saying one "surmised" communicates the idea that no proof is offered or implied.
  • Make sure you understand the other person's argument before trying to answer or rebut it. 

Lesson Three: Understand the Burden of Proof

In debate, the burden of proof belongs on the person asserting something. In non-debate contexts, the burden of proof belongs on anyone who wants another person to accept what they say.  In the present case, "Jobman" asserted, without elaborating, that two parts of our post sent the message that Google deliberately downranked "PolitiFact Bias." It turns out he was wrong, as we showed above. But "Jobman" showed little understanding of the burden of proof concept with his second reply:
The evidence that I point to doesn't contradict what I say. Yes, that's my rebuttal. You haven't proven that It does contradict what I say. Maybe try again later?
Who is responsible for showing that what we wrote doesn't mean whatever "Jobman" thinks it means? "Jobman" thinks we are responsible. By his reasoning, if "Jobman" thinks what we wrote means X, then it means X unless we can show otherwise. That's a classic case of the fallacy of shifting the burden of proof. The critic is responsible for supporting his own case before his target needs to respond.

Jobman added another example of this fallacy in his second reply:
Your title, "Google doesn't love us anymore" and contents of your post prove that you believe that Google somehow wants to push your content lower, yet you give no evidence for this.
"Jobman" says "Google doesn't love us anymore" means X (Google somehow wants to push our content lower). And "Jobman" thinks the burden rightly falls on us to show that "Google doesn't love us anymore" means ~X, such as simply saying Google downranked the site. "Jobman" thinks we are responsible for proving that Google somehow wants to push our content lower even if we already said that we did not think that is what Google did.

That's a criminal misunderstanding of the burden of proof.
  • Making a good argument involves understanding who bears the burden of proof.

Lesson Four: Strive For Coherence & Lesson Five: Avoid Creating Straw Men

In his second reply, "Jobman" suggested that we brushed off our lack of evidence (lack of evidence supporting a point we were not making!) by claiming we were not making the point we were not making.
Then, since you don't have any evidence, you try to brush it off and say "This post isn't about google targeting us" When every part of your post says otherwise.
With that last line we think perhaps "Jobman" meant to say "every part of your post says otherwise except for the part that doesn't." Though "Jobman" obviously overestimates the part that says otherwise.

His incoherence is palpable, and given that we specifically said that we were not saying Google specifically targeted the PolitiFact Bias site a critic needs an incredibly good argument to claim that we were arguing the opposite of what we argued. "Jobman" does not have that. He has a straw man fallacy supported only by his own non sequiturs.
  • It's a good idea to review your argument to make sure you don't contradict yourself.
  • Resist the temptation to argue against a distortion of the other person's argument. That path leads to the straw man fallacy.

Lesson Three Review: Understand the Burden of Proof

The burden of proof falls on the one claiming something in the debate context, or on anyone who wants somebody else to believe something in everyday life.
When you claim that Google has made changes that have negatively impacted your website, you DO have to prove that. For now, I'll just dismiss your claim entirely until you provide evidence that google has made these changes, and that your website was previously ranked on the top of the list.
We said we surmised that Google's tweaking of its algorithms resulted in the downranking. As noted earlier, "Jobman" apparently thinks that claiming something while admitting it isn't proven obligates the claimant to prove the claim. Claiming to have proof carries with it the natural expectation that one may obtain that proof by asking. Recognizing when proof is claimed and when it isn't helps prevent mistakes in assigning the burden of proof.

In fact, the PFB post does offer evidence short of proof, in the form of screenshots showing top-ranked searches from Bing and DuckDuckGo alongside a much lower ranking from Google. Specific evidence of the Google downranking comes from our reported past observations of a consistent top ranking. Evidence of Google tweaking its algorithms is not hard to find, so the argument in our post counted that as common knowledge for which the average reader would require no proof. And we could expect others to research the issue if they questioned it.

As for the promise to dismiss our claims for lack of proof, that is the prerogative of every reader no matter the literature. Readers who trust us will tend to accept our claims about our Google rank. Others can judge based on our accuracy with other matters. Others will use the "Jobman" method. That's up to the reader. And that's fine with us.
 

Lesson Five Review: Avoid Creating Straw Men

It was news to us that we posted the Bing and DuckDuckGo search results to prove Google is specifically biased against the PolitiFact Bias website. We thought we were showing that we rank No. 1 on Bing and DuckDuckGo while ranking much lower on Google.

We suppose "Jobman" will never buy that explanation:

Every single web indexing website in the history of the internet has had the purpose of putting forth the most relevant search results. You could prove that by literally googling anything, then saying "'X' Irrelevant thing didn't show up on the search results", but you compared search results of google and other search engines In order to convey the theme that google is somehow biased in their web searches because your website isn't at the top for theirs.
All search engines are biased toward their managers' vision of relevant search results. The bias at Bing and DuckDuckGo is friendlier to the PolitiFact Bias website than the bias at Google.

"Jobman" finished his second reply by telling us about ways we could improve our website's page rank without blaming Google for it. If that part of his comment was supposed to imply that we blame our website traffic on Google, that's misleading. 

Obviously, though, it's true that if Google gave us the same rank we get from Bing and DuckDuckGo we would probably enjoy healthier traffic. The bulk of our traffic comes from Google referrals, and we would expect a higher ranking to result in more of those referrals.

Like we said in the earlier PFB post, it comes down to Google's vision of what constitutes relevance. And clearly that vision, as the algorithm expresses it, is not identical to the ones expressed in the Bing and DuckDuckGo algorithms.

We did not and do not argue that Google targeted "PolitiFact Bias" specifically for downranking. Saying otherwise results in the creation of a straw man fallacy.




Note: "Jobman" has exhausted his reply privileges with the second reply that we quoted extensively above. He can take up the above argument using a verifiable identify if he wishes, and we will host comments (under other posts) he submits under a different pseudonym. Within limits.

Sunday, September 16, 2018

Google doesn't love us anymore

One of the reasons we started out with and stuck with a Blogger blog for so long has to do with Google's past tendency to give priority to its own.

It took us very little time to make it to the top of Google's search results for Web surfers using the terms "PolitiFact" and "bias."

But we surmise that some time near the 2016 election Google tweaked its algorithms in a way that seriously eroded our traffic. That was good news for PolitiFact, whose fact checking efforts we criticize and Google tries to promote.

And perhaps "eroded" isn't the right word. Our traffic pretty much fell off a cliff between the time Trump won election and the time Trump took office. And it coincided with the Google downranking that occurred while the site was enjoying its peak traffic.

We've found it interesting over the past couple of years to see how different search engines treated a search for "PolitiFact bias." Today's result from Microsoft's Bing search engine was a pleasant surprise: Our website was the top result and was highlighted with an informational window.

The search result even calls the site "Official Site." We're humbled. Seriously.



What does the same search look like on Google today?

Ouch:



"Media Bias Fact Check"? Seriously?

Dan flippin' Bongino? Seriously?

A "PolitiFact" information box to the upper right?

The hit for our site is No. 7.

It's fair to charge that we're not SEO geniuses. But on the other hand we provide excellent content about "PolitiFact" and "bias." We daresay nobody has done it better on a more consistent basis.


DuckDuckGo




DuckDuckGo is gaining in popularity. It's a search engine marketing itself based on not tracking users' searches. So we're No. 1 on Bing and DuckDuckGo but No. 7 on Google.

It's not that we think Google is deliberately targeting this website. Google has some kind of vision for what it wants to end up high in its rankings and designs its algorithms to reach toward that goal. Sites like this one are "collateral damage" and "disparate impact."

Thursday, September 13, 2018

PolitiFact Avoids Snarky Commentary? 2

In its statement of principles PolitiFact says it avoids snarky commentary (bold emphasis added):
We don’t lay out our personal political views on social media. We do share news stories and other journalism (especially our colleagues’ work), but we take care not to be seen as endorsing or opposing a political figure or position. We avoid snarky commentary.

These restrictions apply to both full-time staffers, correspondents and interns. We avoid doing anything that compromises PolitiFact, or our ability to do our jobs.
 Yet PolitiFact tweeted the following on Sept. 13, 2018:
We're having trouble getting that one past the definition of "snark":
: an attitude or expression of mocking irreverence and sarcasm

Wednesday, September 12, 2018

PolitiFact flubs GDP comparison between added debt and cumulative debt

Here at PolitiFact Bias we think big mistakes tell us something about PolitiFact's ideological bias.

If PolitiFact's big mistakes tend to harm Republicans and not Democrats, it's a pretty good sign that PolitiFact leans left. For that reason, much of what we do centers on documenting big mistakes.

Veteran PolitiFact fact checker Louis Jacobson gave us a whopper of a mistake this week in a Sept. 12, 2018 PunditFact fact check.

Before reading the fact check we had a pretty good idea this one was bogus. Note the caveat under the meter telling the reason why Scarborough's true numbers only get by with a "Mostly True" rating: The added debt was not purely the GOP's fault.

We easily found a parallel claim, this one from PolitiFact Virginia but with Trump as the speaker:

Trump's parallel claim was dragged down to "Half True" because there was plenty of blame to share for doubling the debt. In other words it was not purely Obama's fault.


A Meaningless Statistic?

Scarborough's statistic makes less sense than Trump's on closer examination. The point comes through clearly once we see how PolitiFact botched its analysis.

Scarborough said the GOP would create more debt in one year than was generated in America's first 200 years.

After quoting an expert who said percentage of GDP serves as a better measure than nominal dollars, PolitiFact proceeded to explain that testing Scarborough's claim using the percentage of GDP tells essentially the same story.  PolitiFact shared a chart based on data from the executive branch's Office of Management and Budget:



So far so good. The OMB is recognized as a solid source for such data. But then PolitiFact PolitiSplains (bold emphasis added):
The chart does show that, when looking at a percentage of GDP, Scarborough is correct in his comparison. Debt as a percentage of GDP in 2017 was far higher (almost 77 percent)  than it was in 1976 (about 27 percent).
Colossal Blunder Alert!

PolitiFact/PunditFact, intentionally or otherwise, pulled a bait and switch. Scarborough said the GOP would create more debt in one year than was generated in America's first 200 years. As PolitiFact recognized when comparing the nominal dollar figures, that comparison takes the cumulative deficit through one year (which we call the debt) and compares it to the single-year deficit for another year (which we call the deficit). It's a comparison of the debt in 1976, following PolitiFact's methodology for nominal dollars in the first part of the fact check, to the deficit for 2017.

But that's not what PolitiFact did when it tried to test Scarborough using percentage of GDP.

PolitiFact compared the debt in 1976 to the debt in 2017. That's the wrong comparison. PolitiFact needed to substitute the deficit in 2017 as a percentage of GDP for the debt in 2017 as a percentage of GDP. That substitution corresponds to Scarborough's argument.

The deficit in 2017 does not measure out to nearly 77 percent of GDP. Not even close.

The OMB reports the deficit for 2017 was 3.5 percent of GDP. That's less than 27 percent. It's also less than 77 percent.
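To see the two comparisons side by side, here is a back-of-the-envelope sketch in Python using round figures in the neighborhood of the OMB historical tables. The exact numbers are our own approximations, not figures taken from the fact check:

```python
# Rough, illustrative figures only (approximated from OMB historical tables;
# treat them as our assumptions, not a reproduction of PolitiFact's data).
gdp_1976, debt_1976 = 1.82e12, 0.48e12   # debt held by the public, end of FY 1976
gdp_2017, debt_2017 = 19.2e12, 14.7e12   # debt held by the public, end of FY 2017
deficit_2017 = 0.665e12                  # single-year deficit, FY 2017

# PolitiFact's comparison: cumulative debt vs. cumulative debt.
print(f"debt/GDP, 1976:    {debt_1976 / gdp_1976:.1%}")     # ~27%
print(f"debt/GDP, 2017:    {debt_2017 / gdp_2017:.1%}")     # ~77%

# The comparison Scarborough's claim calls for: the debt accumulated through 1976
# versus the debt ADDED in 2017 alone (the deficit).
print(f"deficit/GDP, 2017: {deficit_2017 / gdp_2017:.1%}")  # ~3.5%, far below 27%
```

Run it and the mismatch is obvious: the cumulative 2017 debt dwarfs the 1976 figure, but the debt added in 2017 alone does not.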

Using the preferred measure for comparing deficit and debt numbers across time, Scarborough's claim fell flat. And PolitiFact failed to notice.

Testing Scarborough's number correctly as a percentage of GDP illustrates the worthlessness of his statistic. Instead of "Mostly True," PolitiFact could easily have issued a ruling more like the one it gave Republican presidential candidate Mitt Romney when he correctly noted that our armed forces had fewer ships and planes in 2012 than at times in the past.


Cheer up, PolitiFact. You'll be tarring the conservative Scarborough. So it's not a total loss.

Friday, August 31, 2018

False Stuff From Fact Checker (PolitiFact)

A funny thing happened when PolitiFact fact-checked a claim about a bias against conservative websites: PolitiFact did not fact check its topic.

No, we're not kidding. Instead of researching whether the claim was true, PolitiFact spent its time undermining the source of the claim. And PolitiFact even used a flatly false claim of its own toward that end (bold emphasis added):
The chart is not neutral evidence supporting Trump’s point, and it labels anything not overtly conservative as "left. In the "left" category are such rigorously mainstream outlets as the Associated Press and Reuters. The three big broadcast networks — ABC, NBC, CBS — are considered "left," as are the Washington Post and the New York Times. Other media outlets that produce a large amount of content every day, including CNN, NPR, Politico, USA Today, and CNBC, are labeled "left."
The statement we highlighted counts as hyperbole at best. On its face, it is simply a false statement, exposed as such by the accompanying graphic:


If PolitiFact's claim were true, then any outlet not labeled "left" would overtly identify itself as conservative. We can disprove PolitiFact's claim easily by looking down the center line of the chart. If a media outlet straddles the line between left and right, then that organization is not classified as "left." And if such media organizations do not overtly identify as conservative, then PolitiFact's claim is false.

Overtly conservative? Let's go down the line:
And for good measure: The Economist, located on the right side of the chart. Is The Economist overtly conservative? (See also Barron's and McClatchy.)

Did PolitiFact even bother to research its own claim? Where are the sources listed? Or did writer Louis Jacobson just happen to have that factoid rattling around in his cranium?

But it's not just Jacobson! The factoid gets two mentions in the fact check (the second one in the summary paragraph) and was okayed by editor Katie Sanders (recently promoted for obvious reasons to managing editor at PolitiFact) and at least two other editors from the PolitiFact "star chamber" that decides the "Truth-O-Meter" rating.

As we have asked before, how can a non-partisan and objective fact checker make such a mistake?

Inconceivable!

And how does a fact checker properly justify issuing a ruling without bothering to check on the fact of the matter?

Monday, August 27, 2018

PolitiFact Illinois: "Duckwork's background check claim checks out" (Updated x2)

Huh.

On August 26, 2018 PolitiFact Illinois published a fact check of Sen. Tammy Duckworth (D-Ill.) with the title "Duckwork's background check claim checks out."

We find it hard to believe a fact-checking organization could prove so careless it would badly misspell the last name of one of its state's senators in a headline.

And we find it even harder to believe the error could last until the next day (today) without receiving a correction.

We will update this item to track whether PolitiFact Illinois runs a correction notice when it fixes the problem.

Assuming it fixes the problem.



Update Aug. 27, 2018:

Apparently "Duckwork" is a fairly common misspelling of Sen. Duckworth's name. NPR (Illinois) made a similar mistake in January 2018 and fixed it on the sly. Don't journalists know better? Misspelling a name warrants a transparent correction.


Update Aug. 28, 2018:

Very early on Aug. 28, 2018, I tweeted a message pointing out this error and tagging the author, editor and PolitiFact Illinois.


When I checked hours later PolitiFact had corrected the spelling of Duckworth's name but added no correction notice to the item.

It's important to note, we suppose, that PolitiFact's corrections policy does not obligate it to append a correction notice on the basis of a misspelled name. That policy, in fact, appears to promise that PolitiFact will fix all of its spelling errors without acknowledging error (italics added for emphasis):
Typos, grammatical errors, misspellings – We correct typos, grammatical errors, misspellings, transpositions and other small errors without a mark of correction or tag and as soon as they are brought to our attention.
That seems to us like an unusually low standard of transparency for corrections. Compare the above with the aggressive use of correction notices for misspelled names by PolitiFact's parent organization, the Poynter Institute.

Here's  one example from that page:
‘Newspapers killed newspapers,’ says reporter who quit the business (March 20, 2013)
Correction: This post misspelled Bird’s last name in one instance.
Journalists traditionally seem to give special attention to misspellings involving names. Misspelling a person's name counts as a different degree of error than a minor typographical error:
In journalism schools across Canada this week, many a freshman student will learn one of the foremost lessons of the J-school classroom: Get someone’s name wrong and you get a failing grade.

In the decade I taught at Ryerson University’s journalism school my students understood that no matter how brilliant their reporting and writing, if they messed up a name, they got an automatic F on that assignment. That’s a common policy of most journalism schools.
Apparently the fact checkers at PolitiFact find such obsessive attention to detail quaint. We count that as a strange attitude for people calling themselves "fact checkers."

Saturday, August 25, 2018

PolitiFact's Fallacious "Burden of Proof" Bites a Democrat? Or Not

We're nonpartisan because we defend Democrats unfairly harmed by the faulty fact checkers at PolitiFact.

See how that works?

On with it, then:

Oops.

Okay, we made a faulty assumption. When we saw PolitiFact's liberal audience complaining about the treatment of Sen. Bill Nelson (D-Fla.), we thought it meant Nelson had received a "False" rating based on his not offering evidence to support his claim.

But PolitiFact did not give Nelson a "Truth-O-Meter" rating at all. Instead of the "Truth-O-Meter" graphic for the claim (there is none), PolitiFact gave its readers the "Share The Facts" version:



Republicans (and perhaps Democrats) have received poor ratings in the past where evidence was lacking, which PolitiFact justifies according to its "burden of proof" criterion. But either the principle has changed or else PolitiFact made an(other) exception to aid Nelson.

If the principle has changed that's good. It's stupid and fallacious to apply a burden of proof standard in fact checking, at least where one determines a truth value based purely on the lack of evidence.

But it's small consolation to the people PolitiFact unfairly harmed in the past with its application of this faulty principle.


Afters:

In April 2018 it looks like the "burden of proof" principle was still a principle.



As we have noted before, it often appears that PolitiFact's principles are more like guidelines than actual rules.

And to maintain our nonpartisan street cred, here's PolitiFact applying the silly burden of proof principle to a Democrat:


If "burden of proof" counts as one of PolitiFact's principles then PolitiFact can only claim itself as a principled fact checker if the Nelson exception features a principled reason justifying the exception.

If anyone can find anything like that in the non-rating rating of Nelson, please drop us a line.

Thursday, August 23, 2018

PolitiFact Not Yet Tired of Using Statements Taken Out Of Context To Boost Fundraising

Remember back when PolitiFact took GOP pollster Neil Newhouse out of context to help coax readers into donating to PolitiFact?

Good times.

Either the technique works well or PolitiFact journalists just plain enjoy using it, for PolitiFact Editor Angie Drobnic Holan's Aug. 21, 2018 appeal to would-be supporters pulls the same type of stunt on Rudy Giuliani, former mayor of New York City and attorney for President Donald Trump.

Let's watch Holan the politician in action (bold emphasis added):
Just this past Sunday, Rudy Giuliani told journalist Chuck Todd that truth isn’t truth.

Todd asked Giuliani, now one of President Donald Trump’s top advisers on an investigation into Russia’s interference with the 2016 election, whether Trump would testify. Giuliani said he didn’t want the president to get caught perjuring himself — in other words, lying under oath.

"It’s somebody’s version of the truth, not the truth," Giuliani said of potential testimony.

Flustered, Todd replied, "Truth is truth."

"No, it isn’t truth. Truth isn’t truth," Giuliani said, going on to explain that Trump’s version of events are his own.

This is an extreme example, but Giuliani isn’t the only one to suggest that truth is whatever you make it. The ability to manufacture what appears to be the truth has reached new heights of sophistication.
Giuliani, contrary to Holan's presentation, was almost certainly not suggesting that truth is whatever you make it.

Rather, Giuliani was almost certainly making the same point about perjury traps that legal expert Andrew McCarthy pointed out in an Aug. 11, 2018 column for National Review (hat tip to Power Line Blog):
The theme the anti-Trump camp is pushing — again, a sweet-sounding political claim that defies real-world experience — is that an honest person has nothing to fear from a prosecutor. If you simply answer the questions truthfully, there is no possibility of a false-statements charge.

But see, for charging purposes, the witness who answers the questions does not get to decide whether they have been answered truthfully. That is up to the prosecutor who asks the questions. The honest person can make his best effort to provide truthful, accurate, and complete responses; but the interrogator’s evaluation, right or wrong, determines whether those responses warrant prosecution.
It's fair to criticize Giuliani for making the point less elegantly than McCarthy did. But it's inexcusable for a supposedly non-partisan fact checker to take a claim out of context to fuel an appeal for cash.

That's what we expect from partisan politicians, not non-partisan journalists.

Unless they're "non-partisan journalists" from The Bubble.





 

Worth Noting:

For the 2017 version of this Truth Hustle, Holan shared writing credits with PolitiFact's Executive Director Aaron Sharockman.

Tuesday, August 21, 2018

All About That Base(line)

When we do not publish for days at a time it does not mean that PolitiFact has cleaned up its act and learned to fly straight.

We simply lack the time to do a thorough job policing PolitiFact's mistakes.

What caught our attention this week? A fact check authored by one of PolitiFact's interns, Lucia Geng.



We were curious about this fact check thanks to PolitiFact's shifting standards on what counts as a budget cut. In this case the cut itself was straightforward: A lower budget one year compared to the preceding year. In that respect the fact check wasn't a problem.

But we found a different problem--also a common one for PolitiFact. At least when PolitiFact is fact-checking Democrats.

The fact check does not question the baseline.

The baseline is simply the level chosen for comparison. The Florida Democratic Party chose to compare the water management districts' collective 2011 budgets with their 2012 budgets and found the latter were about $700 million lower. Our readers should note that the FDP started making this claim in 2018, not 2012.

It's just crazy for a fact checker to perform a fact check without looking at other potential baselines. Usually politicians and political groups choose a baseline for a reason. Comparing 2011 to 2012 appears to make sense superficially. The year 2011 represents Republican-turned-Independent Governor Charlie Crist. The year 2012 represents the current governor, also a Republican, Rick Scott.

But what if there's more to it? Any fact checker should look at data covering a longer time period to get an idea of what the claimed cut would actually mean.

We suspected that 2010 and before might show much lower budget numbers. To our surprise, the budget numbers were far higher, at least for the South Florida Water Management District whose budget dwarfs those of the other districts.

From 2010 to 2011, Gov. Crist cut the SFWMD budget by about $443 million. From 2009 to 2010 Gov. Crist cut the SFWMD budget by almost $1.5 billion. That's not a typo.

The message here is not that Gov. Crist was some kind of anti-environmental zealot. What we have here is a sign that the water management district budgets are volatile. They can change dramatically from one year to the next. The big question is why, and a secondary question is whether the reason should affect our understanding of the $700 million Gov. Scott cut from the combined water management district budgets between 2011 and 2012.
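For a rough sense of how much the story depends on the chosen baseline, here is a quick sketch in Python using only the approximate year-over-year changes described above (the 2011-to-2012 figure is the combined cut across the districts that the FDP highlighted; the earlier figures are for the SFWMD alone):

```python
# Approximate year-over-year budget changes described above, in millions of dollars.
# Negative numbers are cuts. These are illustrative deltas, not audited budget data.
changes = {
    "2009 -> 2010 (Gov. Crist, SFWMD)": -1500,
    "2010 -> 2011 (Gov. Crist, SFWMD)": -443,
    "2011 -> 2012 (Gov. Scott, all districts, the FDP's chosen baseline)": -700,
}

for span, delta in changes.items():
    print(f"{span}: {delta:+,} million")

# Start the comparison in 2011 and the 2012 cut is the whole story; start a year
# or two earlier and larger swings show up before Gov. Scott took office.
```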

A fact checker who looked at the volatile changes in spending could then use that knowledge to ask officials at the water management districts questions that would help answer our two questions above. Geng listed email exchanges with officials from each of Florida's water management districts. But the fact check contains no quotations from those officials. It does not even refer to their responses via paraphrase or summary. We don't even know what questions Geng asked.

We did not contact the water management districts. But we looked for a clue regarding the budget volatility in the SFWMD's fiscal year 2011 projections for its future budgets. The agency expected capital expenditures to drop by more than half after 2011.

Rick Scott had not been elected governor at that time (October 2010).

This suggests that the water management districts had a budget cut baked into their long-term program planning, quite possibly strongly influenced by budgeting for the Everglades restoration project (including land purchases). If so, that counts as critical context omitted from the PolitiFact Florida fact check.

We flagged these problems for PolitiFact on Twitter and via email. As usual, the faux-transparent fact checkers responded with a stony silence and made no apparent effort to fix the deficiencies.

Aside from the hole in the story we felt the "Mostly True" rating was very forgiving of the Florida Democratic Party's blatant cherry-picking. And somehow PolitiFact even resisted using the term "cherry-picking" or any close synonym.



Afters:
The Florida Democratic Party, in the same tweet PolitiFact fact-checked, recycled the claim that Gov. Scott "banned the term 'Climate Change.'"

We suppose that's not the sort of thing that makes PolitiFact editors wonder "Is that true?"

Saturday, August 11, 2018

Did an Independent Study Find PolitiFact Is Not Biased?

An email alert from August 10, 2018 led us to a blaring headline from the International Fact-Checking Network:

Is PolitiFact biased? This content analysis says no

Though "content analysis" could mean the researchers looked at pretty much anything having to do with PolitiFact's content, we suspected the article was talking about an inventory of PolitiFact's word choices, looking for words associated with a political point of view. For example, "anti-abortion" and "pro-life" signal political points of view. Using those and similar terms may tip off readers regarding the politics those who produce the news.

PolitiFact Bias has never used the presence of such terms to support our argument that PolitiFact is biased. In fact, I (Bryan) tweeted out a brief judgment of the study on Twitter back on July 16, 2018:
We have two major problems with the IFCN article at Poynter.org (by Daniel Funke).

First, it implies that the word-use inventory somehow negates the evidence of bias that PolitiFact's critics cite, evidence that does not involve the types of word choices the study was designed to detect:
It’s a critique that PolitiFact has long been accustomed to hearing.

“PolitiFact is engaging in a great deal of selection bias,” The Weekly Standard wrote in 2011. “'Fact Checkers' Overwhelmingly Target Right-Wing Pols and Pundits” reads an April 2017 headline from NewsBusters, a site whose goal is to expose and combat “liberal media bias.” There’s even an entire blog dedicated to showing the ways in which PolitiFact is biased.

The fact-checking project, which Poynter owns, has rebuffed those accusations, pointing to its transparent methodology and funding (as well as its membership in the International Fact-Checking Network) as proof that it doesn’t have a political persuasion. And now, PolitiFact has an academic study to back it up.
The second paragraph mentions selection bias (taking the Weekly Standard quotation out of context) and other types of bias noted by PolitiFact Bias ("an entire blog dedicated to showing the ways in which PolitiFact is biased"--close enough, we suppose, thanks for linking us).

The third paragraph says PolitiFact has "rebuffed those accusations." We think "ignores those accusations" describes the situation more accurately.

The third paragraph goes on to mention PolitiFact's "transparent methodology" (true if you ignore the ambiguity and inconsistency) and transparent funding (yes, funded by some left-wing sources, but PolitiFact Bias does not use that as evidence of PolitiFact's bias) before claiming that PolitiFact "has an academic study to back it up."

"It"=PolitiFact's rebuffing of accusations it is biased????

That does not follow logically. To support PolitiFact's denials of the bias of which it is accused, the study would have to offer evidence countering the specific accusations. It doesn't do that.

Second, Funke's article suggests that the study shows a lack of bias. We see that idea in the title of Funke's piece as well as in the material from the third paragraph.

But that's not how science works. Even for the paper's specific area of study, it does not show that PolitiFact has no bias. At best it could show the word choices it tested offer no significant indication of bias.

The difference is not small, and Funke's article even includes a quotation from one of the study's authors emphasizing the point:
But in a follow-up email to Poynter, Noah Smith, one of the report’s co-authors, added a caveat to the findings.

“This could be because there's really nothing to find, or because our tools aren't powerful enough to find what's there,” he said.
So the co-author says maybe the study's tools were not powerful enough to find the bias that exists. Yet Funke sticks with the title "Is PolitiFact biased? This content analysis says no."

Is it too much to ask for the title to agree with a co-author's description of the meaning of the study?

The content analysis did not say "no." It said (we summarize) "not in terms of these biased language indicators."

Funke's article paints a very misleading picture of the content and meaning of the study. The study refutes none of the major critiques of PolitiFact of which we are aware.


Afters

PolitiFact's methodology, funding and verified IFCN signatory status are supposed to assure us it has no political point of view?

We'd be more impressed if PolitiFact staffers revealed their votes in presidential elections and more than a tiny percentage voted Republican more than once in the past 25 years.

It's anybody's guess why fact checkers do not reveal their voting records, right?


Correction Aug. 11, 2018: Altered headline to read "an Independent Study" instead of "a Peer-Reviewed Study"

The Weekly Standard Notes PolitiFact's "Amazing" Fact Check

The Weekly Standard took note of PolitiFact's audacity in fact-checking Donald Trump's claim that the economy grew at the amazing rate of 4.1 percent in the second quarter.
The Trumpian assertion that moved the PolitiFact’s scrutineers to action? This one: “In the second quarter of this year, the United States economy grew at the amazing rate of 4.1 percent.” PolitiFact’s objection wasn’t to the data—the economy really did grow at 4.1 percent in the second quarter—but to the adjective: amazing.
That's amazing!

PolitiFact did not rate the statement on its "Truth-O-Meter" but published its "Share The Facts" box featuring the judgment "Strong, but not amazing."

PolitiFact claims it does not rate opinions and grants license for hyperbole.

As we have noted before, it must be the fault of Republicans who keep trying to use hyperbole without a license.

Friday, August 10, 2018

PolitiFact Editor: It's Frustrating When Others Do Not Follow Their Own Policies Consistently

PolitiFact Editor Angie Drobnic Holan says she finds it frustrating that Twitter does not follow its own policies (bold emphasis added):
The fracas over Jones illustrates a lot, including how good reporting and peer pressure can actually force the platforms to act. And while the reasons that Facebook, Apple and others banned Jones and InfoWars have to do with hate speech, Twitter’s inaction also confirms what fact-checkers have long thought about the company’s approach to fighting misinformation.

“They’re not doing anything, and I’m frustrated that they don’t enforce their own policies,” said Angie Holan, editor of (Poynter-owned) PolitiFact.
Tell us about it.

We started our "(Annotated) Principles of PolitiFact" page years ago to expose examples of the way PolitiFact selectively applies its principles. It's a shame we haven't had the time to keep that page updated, but our research indicates PolitiFact has failed to correct the problem to any noticeable degree.

Tuesday, August 7, 2018

The Phantom Cherry-pick

Would Sen. Bernie Sanders' Medicare For All plan save $2 trillion over 10 years on U.S. health care expenses?

Sanders and the left were on fire this week trying to co-opt a Mercatus Center paper by Charles Blahous. Sanders and others claimed Blahous' paper confirmed the M4A plan would save $2 trillion over 10 years.

PolitiFact checked in on the question and found Sanders' claim "Half True":


PolitiFact's summary encapsulates its reasoning:
The $2 trillion figure can be traced back to the Mercatus report. But it is one of two scenarios the report offers, so Sanders’ use of the term "would" is too strong. The alternative figure, which assumes that a Medicare for All plan isn’t as successful in controlling costs as its sponsors hope it will be, would lead to an increase of almost $3.3 trillion in national health care expenditures, not a decline. Independent experts say the alternative scenario of weaker cost control is at least as plausible.

We rate the statement Half True.
Throughout its report, as pointed out at Zebra Fact Check, PolitiFact treats the $2 trillion in savings as a serious attempt to project the true effects of the M4A bill.

In fact, the Mercatus report uses what its author sees as overly rosy assumptions about the bill's effects to estimate a lower bound for the bill's very high costs, and then offers reasons why the actual costs will likely greatly exceed that lower bound.

In other words, the cherry Sanders tries to pick is a faux cherry. And a fact checker ought to recognize that fact. It's one thing to pick a cherry that's a cherry. It's another thing to pick a cherry that's a fake.

Making Matters Worse

PolitiFact makes matters worse by overlooking Sanders' central error: circular reasoning.

Sanders takes a projection based on favorable assumptions as evidence that the favorable assumptions are reasonable. But a conclusion reached from a set of assumptions does not make those assumptions any more true. Sanders' claim suggests the opposite: that even though the Blahous paper says it is using unrealistic assumptions, the conclusions it reaches from those assumptions somehow make the assumptions reasonable.

A fact checker ought to point it out when a politician peddles such nonsensical ideas.

PolitiFact made itself guilty of bad reporting while overlooking Sanders' central error.

Reader: "PolitiFact is not biased. Republicans just lie more."

Every few years or so we recognize a Comment of the Week.

Jehosephat Smith dropped by on Facebook to inform us that PolitiFact is not biased:
Politifact is not biased, Republicans just lie more. That is objectively obvious by this point and if your mind isn't moved by current realities then you're willfully ignorant.
As we have prided ourselves on trying to communicate clearly exactly why we find PolitiFact biased, we find such comments fascinating on two levels.


First, how can one claim that PolitiFact is not biased? On what evidence would one rely to support such a claim?

Second, how can one contemplate claiming PolitiFact isn't biased without making some effort to address the arguments we've made showing PolitiFact is biased?

We invited Mr. Smith to make his case either here on the website or on Facebook. But rather than simply heaping Smith's burden of proof on his head we figured his comment would serve us well as an excuse to again summarize the evidence showing PolitiFact's bias to the left.


Journalists lean left
Journalists as a group lean left. And they lean markedly left of the general U.S. population. Without knowing anything else at all about PolitiFact we have reason to expect that it is made up mostly of left-leaning journalists. If PolitiFact journalists lean left as a group then right out of the box we have reason to look for evidence that their political leaning affects their fact-checking.

PolitiFact's errors lean left I
When PolitiFact makes an egregious reporting error, the error tends to harm the right or fit with left-leaning thinking. For example, when PolitiFact's Louis Jacobson reported that Hobby Lobby's policy on health insurance "barred" women from using certain types of birth control, we noted that pretty much anybody with any rightward lean would have spotted the mistake and prevented its publication. Instead, PolitiFact published it and later changed it without posting a correction notice. We have no trouble finding such examples.

PolitiFact's errors lean left II
We performed a study of PolitiFact's calculations of percentage error. PolitiFact often performs the calculation incorrectly, and errors tend to benefit Democrats (caveat: small data set).
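To make the arithmetic concrete, here is a minimal sketch of the kind of calculation at issue. The figures are invented for illustration; they do not come from any particular PolitiFact fact check or from our study data.

```python
# Hypothetical illustration of a percentage-change calculation.
# The numbers are made up; they are not drawn from any specific
# PolitiFact fact check or from the PolitiFact Bias study data.

def percent_change(old: float, new: float) -> float:
    """Percentage change from old to new, measured against the old value."""
    return (new - old) / old * 100

old_value = 200.0  # hypothetical earlier figure
new_value = 250.0  # hypothetical later figure

correct = percent_change(old_value, new_value)          # 25.0
# A common mistake: measuring the change against the new value instead.
wrong_base = (new_value - old_value) / new_value * 100  # 20.0

print(f"Correct percent change: {correct:.1f}%")     # 25.0%
print(f"Wrong-base calculation: {wrong_base:.1f}%")  # 20.0%
```

Which base the calculation uses changes the answer, so an inconsistent method can quietly shrink or inflate a politician's claimed error.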

PolitiFact's ratings lean left I
When PolitiFact rates Republicans and Democrats on closely parallel claims Democrats often fare better. For example, when PolitiFact investigated a Democratic Party charge that Rep. Bill McCollum raised his own pay while in Congress PolitiFact said it was true. But when PolitiFact investigated a Republican charge that Sherrod Brown had raised his own pay PolitiFact discovered that members of Congress cannot raise their own pay and rated the claim "False." We have no trouble finding such examples.

PolitiFact's ratings lean left II
We have an ongoing, detailed study of partisan differences in PolitiFact's application of its "Pants on Fire" rating. PolitiFact describes no objective criterion for distinguishing "False" from "Pants on Fire," so we hypothesize that the difference between the two ratings is subjective. Republicans are over 50 percent more likely than Democrats to have a false statement rated "Pants on Fire" rather than merely "False," apparently for subjective reasons.
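As a rough illustration of what "over 50 percent more likely" means in this context, here is a minimal sketch using hypothetical counts, not the actual tallies from our study:

```python
# Hypothetical counts, not the actual data from the "Pants on Fire" study.
rep_pof, rep_false = 30, 70  # Republican ratings: "Pants on Fire" vs. plain "False"
dem_pof, dem_false = 20, 80  # Democratic ratings: "Pants on Fire" vs. plain "False"

# Share of false-or-worse ratings escalated to "Pants on Fire."
rep_rate = rep_pof / (rep_pof + rep_false)  # 0.30
dem_rate = dem_pof / (dem_pof + dem_false)  # 0.20

# Relative difference between the two escalation rates.
relative = rep_rate / dem_rate - 1  # 0.50 -> 50 percent more likely

print(f"Republican 'Pants on Fire' share: {rep_rate:.0%}")  # 30%
print(f"Democratic 'Pants on Fire' share: {dem_rate:.0%}")  # 20%
print(f"Relative difference: {relative:.0%} more likely")   # 50% more likely
```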

PolitiFact's explanations lean left
When PolitiFact explains topics, its explanations tend to lean left. For example, when Democrats and liberals say Social Security has never contributed a dime to the deficit, PolitiFact gives the claim a rating such as "Half True," apparently unable to discover the fact that Social Security has run a deficit during years when the program was on-budget (and therefore unquestionably contributed directly to the deficit in those years). PolitiFact resisted Republican claims that the ACA cut Medicare, explaining that the so-called Medicare cuts were not truly cuts because the Medicare budget continued to increase. Yet PolitiFact discovered that when the Trump administration slowed the growth of Medicaid, it was okay to refer to the slowed growth as a program cut. Again, we have no trouble finding such examples.

How can a visitor to our site (including Facebook) contemplate declaring PolitiFact isn't biased without coming prepared to answer our argument?


Friday, July 6, 2018

PolitiFact: "European Union"=Germany

PolitiFact makes all kinds of mistakes, but some serve as better examples of ideological bias than others. A July 2, 2018 PolitiFact fact check of President Donald Trump serves as pretty good evidence of a specific bias against Mr. Trump:


The big clue that PolitiFact botched this fact check occurs in the image we cropped from PolitiFact's website.

Donald Trump states that the EU sends millions of cars to the United States. PolitiFact performs adjustments to that claim, suggesting Trump specified German cars and specifying that the EU sends millions of German cars per year. Yet Trump did not specify German cars and did not specify an annual rate.

PolitiFact quotes Trump:
At one point, he singled out German cars.

"The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions," Trump said.
Saying Trump "singled out German cars" counts as twisting the truth. Trump "singled out" German cars in the sense of offering two examples of German cars among the millions sent to the United States by the European Union.

It counts as a major error for a fact checker to ignore the clear context showing that Trump was talking about the European Union and not simply German cars of one make (Mercedes) or another (BMW). And if those German makes account for large individual shares of EU exports to the United States then Trump deserves credit for choosing strong examples.

It counts as another major error for a fact checker to assume an annual rate in the millions when the speaker did not specify any such rate. How did PolitiFact determine that Trump was not talking about a monthly rate, or the rate over a decade? Making assumptions is not the same thing as fact-checking.

When a speaker uses ambiguous language, the responsible fact checker offers the speaker charitable interpretation. That means using the interpretation that makes the best sense of the speaker's words. In this case, the point is obvious: The European Union exports millions of cars to the United States.

But instead of looking at the number of cars the European Union exports to the United States, PolitiFact cherry picked German cars. That focus came through strongly in PolitiFact's concluding paragraphs:
Our ruling

Trump said, "The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions."

Together, Mercedes, BMW and Volkswagen imported less than a million cars into the United States in 2017, not "millions."

More importantly, Trump ignores that a large proportion of German cars sold in the United States were also built here, using American workers and suppliers whose economic fortunes are boosted by Germany’s carnakers [sic]. Other U.S.-built German cars were sold as exports.

We rate the statement False.
That's sham fact-checking.

A serious fact check would look at the European Union's exports specifically to the United States. The European Automobile Manufacturers Association has those export numbers available from 2011 through 2016. From 2011 through 2013 the number was under 1 million annually. For 2014 through 2016 the number was over 1 million annually.

Data through September 2017 from the same source shows the European Union on pace to surpass 1 million units for the fourth consecutive year.


Does exporting over 1 million cars to the United States per year for three or four consecutive years count as exporting cars to the United States by the millions (compare the logic)?
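For readers who want the arithmetic spelled out, here is a minimal sketch using only the lower bounds described above; the exact yearly totals are in the ACEA data, not in this snippet.

```python
# Lower-bound arithmetic from the figures described above: EU passenger-car
# exports to the United States topped 1 million units in each of 2014, 2015
# and 2016, and were on pace to do so again in 2017.
yearly_lower_bound = 1_000_000
years = [2014, 2015, 2016, 2017]

cumulative_minimum = yearly_lower_bound * len(years)
print(f"At least {cumulative_minimum:,} cars over {len(years)} years")
# -> At least 4,000,000 cars over 4 years: plural millions.
```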

We think we can conclude with certainty that the notion does not count as "False."

Our exit question for PolitiFact: How does a non-partisan fact checker justify ignoring the context of Trump's statement referring specifically to the European Union? How did the European Union get to be Germany?