Wednesday, October 17, 2018

Washington Free Beacon: "PolitiFact Retracts Fact Check ..."

Full title:

PolitiFact Retracts Fact Check After Erroneously Ruling Anti-Claire McCaskill Ad ‘False’

We were preparing to post about PolitiFact's crashed-and-burned fact check of the (Republican) Senate Leadership Fund's Claire McCaskill attack ad. But we noticed that Alex Griswold did a fine job of telling the story for the Washington Free Beacon.

Griswold:
In the revised fact check published Wednesday, PolitiFact announced that "after publication, we received more complete video of the question-and-answer session between McCaskill and a constituent that showed she was in fact responding to a question about private planes, as well as a report describing the meeting … We apologize for the error."

PolitiFact still only ruled the ad was "Half True," arguing that the Senate Leadership Fund "exaggerated" McCaskill's remarks by showing them in isolation. In full context, the fact checker wrote, McCaskill's remarks "seem to refer to ‘normal' users of private planes, not to ‘normal' Americans more generally."
Griswold's article managed to hit many of the points we made about the PolitiFact story on Twitter.


For example:

New evidence to PolitiFact, maybe. The evidence had been on the World Wide Web since 2017.

PolitiFact claimed it was "clear" from the short version of the town hall video that the discussion concerned commercial aviation in the broad sense, not private aircraft. Somehow that supposed clarity vanished with the appearance of a more complete video.


Read the whole article at the Washington Free Beacon.


We also used Twitter to slam PolitiFact for its policy of unpublishing when it notices a fact check has failed. Given that PolitiFact, as a matter of stated policy, archives the old fact check and embeds the archive URL in the new version, no good reason appears to exist for delaying availability of the archived version. It's as easy as updating the original URL for the bad fact check to redirect to the archive URL, as in the sketch below.
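For illustration, here is a minimal sketch of what such a redirect could look like in a small Python web application. The framework (Flask) and the URL paths are our own hypothetical stand-ins for the example, not anything PolitiFact actually runs:

    from flask import Flask, redirect

    app = Flask(__name__)

    # Hypothetical archive URL, for illustration only.
    ARCHIVE_URL = "/archive/2018/unpublished-mccaskill-fact-check/"

    @app.route("/factchecks/2018/mccaskill-private-planes/")
    def retracted_fact_check():
        # Permanently (301) redirect the retracted fact check to its
        # archived copy, so the old link keeps working and readers
        # land on the archive.
        return redirect(ARCHIVE_URL, code=301)

A one-line redirect rule in the web server's configuration would accomplish the same thing.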

In another failure of transparency, PolitiFact's archived/unpublished fact checks eliminate bylines and editing or research credits along with source lists and hotlinks. In short, a PolitiFact fact check loses a hefty amount of transparency on its way to the archive.

PolitiFact can and should do better both with its fact-checking and its policies on transparency.


Exit question: Has PolitiFact ever unpublished a fact check that was too easy on a conservative or too tough on a liberal?

There's another potential bias measure waiting for evaluation.

Tuesday, October 16, 2018

Fact Checkers for Elizabeth Warren

Sen. Elizabeth Warren (D-Mass.) provided mainstream fact checkers a great opportunity to show their true colors. Fact checkers from PolitiFact and Snopes spun themselves into the ground trying to help Warren excuse her self-identification as a "Native American."

Likely 2020 presidential candidate Warren has long been mocked from the right as "Fauxcahontas" based on her dubious claims of Native American minority status. Warren had her DNA tested and promoted the findings as some type of vindication of her claims.

The fact checkers did their best to help.


PolitiFact

PolitiFact ran Warren's report past four experts and assured us the experts thought the report was legitimate. But the quotations from the experts don't tell us much. For the statements that best support Warren, PolitiFact relies on its own summaries of the experts' opinions. Are the paraphrases fair? Trust PolitiFact? It's another example showing why fact checkers ought to provide transcripts of their interactions with experts.

Though the article bills itself as telling us what we can and cannot know from Warren's report, it takes a Mulligan on mentioning Warren's basic claim to minority status. Instead it emphasizes the trustworthiness of the finding of trace Native American inheritance.

At least the article admits that the DNA evidence doesn't help show Warren is of Cherokee descent. There's that much to say in favor of it.

But more to the downside, the article repeats as true the notion that Trump had promised $1 million if Warren could prove Native American ancestry (bold emphasis added):
At a July 5 rally in Montana, he challenged her to take a DNA test.

"I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian," Trump said.

Trump now denies saying that, but in any event, Warren did get tested and the results did find Native American ancestry.
Just minutes after PolitiFact published the above, it published a separate "In Context" article under this title: "In context: Donald Trump's $1 million offer to Elizabeth Warren."

While we do not recommend PolitiFact's transcript as any kind of model journalism (it leaves out quite a bit without using ellipses to show the omissions), the transcript in that article is enough to show the deception in its earlier article (green emphasis added, bold emphasis in the original):
"I shouldn't tell you because I like to not give away secrets. But let's say I'm debating Pocahontas. I promise you I'll do this: I will take, you know those little kits they sell on television for two dollars? ‘Learn your heritage!’ … And in the middle of the debate, when she proclaims that she is of Indian heritage because her mother said she has high cheekbones — that is her only evidence, her mother said we have high cheekbones. We will take that little kit -- but we have to do it gently. Because we're in the #MeToo generation, so I have to be very gentle. And we will very gently take that kit, and slowly toss it, hoping it doesn't injure her arm, and we will say: ‘I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian.’ And let’s see what she does. I have a feeling she will say no. But we’ll hold that for the debates.
Note that a very minor expansion of the first version of the Trump quotation torpedoes claims that Trump had already pledged $1 million hinging on Warren's DNA test results: "We will say." So PolitiFact's first story dutifully leaves that out and reinforces the false impression that Trump's promise was not hypothetical.

Despite clear evidence that Trump was speaking of a hypothetical future situation, PolitiFact's second article sticks with a headline suggesting an existing pledge of $1 million, though it magnanimously allows at the end of the article that readers may draw their own conclusions.

It's such a close call, apparently, that PolitiFact does not wish to weigh in either pro or con.

Our call: The fact checkers (read: liberal bloggers) at PolitiFact contribute to the spread of misinformation.

Snopes

Though we think PolitiFact is the worst of the mainstream fact checkers, the liberal bloggers at Snopes outdid PolitiFact in terms of ineptitude this time.

Snopes used an edited video to support its claim that it was "True" Trump pledged $1 million based on Warren's DNA test.



The fact check coverage from PolitiFact and Snopes so far makes it look like Warren will be allowed to skate on a number of apparently false claims she made in the wake of her DNA test announcement. Which mainstream fact-checker is neutral enough to look at Warren's suggestion that she can legitimately cash in on Trump's supposed $1 million challenge?

It's a good thing we have non-partisan fact checkers, right?


Afters

Glenn Kessler, the Washington Post Fact Checker

The Washington Post Fact Checker, to our knowledge, has not produced any content directly relating to the Warren DNA test.

That aside, Glenn Kessler has weighed in on Twitter. Some of Kessler's (re)tweets have underscored the worthlessness of the DNA test for identifying Warren as Cherokee.

On the other hand, Kessler gave at least three retweets for stories suggesting Trump had already pledged $1 million based on the outcome of a Warren DNA test.




So Kessler's not joining the other two in excusing Warren. But he's in on the movement to brand Trump as wrong even when Trump is right.

Monday, October 15, 2018

Taylor Swift's Candidates Lag in Polls--PolitiFact Hardest Hit?

We noted pop star Taylor Swift's election endorsement statement drew the selective attention of the fact checkers (read: left-leaning bloggers) at PolitiFact.

We've found it hilarious over the past several days that PolitiFact has mercilessly and repeatedly pimped its Swiftian fact check on Twitter and Facebook.

Now, with polls showing Swift's candidates badly trailing their Republican counterparts, we can only wonder: Is PolitiFact the entity hardest hit by Swift's failure (so far) to make a critical difference in putting the Democrats over the top?


The Biggest Problem with PolitiFact's Fact Check of Taylor Swift

The Swift claim PolitiFact chose to check was the allegation that Tennessee Republican Marsha Blackburn voted against the Violence Against Women Act. We noted that, given Swift made at least four claims that might interest a fact checker, that topic was likely the best choice from the liberal point of view.

Coincidentally(?), PolitiFact pulled the trigger on that choice. But as we pointed out in our earlier post, PolitiFact still ended up putting its finger on the scales to help its Democratic Party allies.

It's true Blackburn voted against reauthorizing the Violence Against Women Act (PolitiFact ruled it "Mostly True").

But it's also true that Blackburn voted to reauthorize the Violence Against Women Act.

Contradiction?

Not quite. VAWA came up for reauthorization in 2012. Blackburn co-sponsored a VAWA reauthorization bill and voted in favor. It passed the House with most Democrats voting in opposition.

And the amazing thing is that the non-partisan fact checkers (read: liberal bloggers) at PolitiFact didn't mention it. Not a peep. Instead, PolitiFact began its history of the reauthorization of the VAWA in 2013:
The 2013 controversy
The Violence Against Women Act was two decades old in 2013 when Congress wrestled with renewing the funds to support it. The law paid for programs to prevent domestic violence. It provided money to investigate and prosecute rape and other crimes against women. It supported counseling for victims.

The $630 million price tag was less the problem than some specific language on non-discrimination.

The Senate approved its bill first on Feb. 12, 2013, by a wide bipartisan margin of 78 to 22. That measure redefined underserved populations to include those who might be discriminated against based on religion, sexual orientation or gender identity.
Starting the history of VAWA reauthorization in 2013 trims away the bothersome fact that Blackburn voted for VAWA reauthorization in 2012. Keeping that information out of the fact check helps sustain the misleading narrative that Republicans like Blackburn are okay with violence against women.

As likely as not that was PolitiFact's purpose.



Thursday, October 11, 2018

This Is How Selection Bias Works

Here at PolitiFact Bias we have consistently harped on PolitiFact's vulnerability to selection bias.

Selection bias happens, in short, whenever a data set fails to represent the population it is drawn from. Scientific studies often use random selection, or a close approximation of it, to achieve a representative sample and avoid the pitfall of selection bias.

PolitiFact has no means of avoiding selection bias. It fact-checks whichever claims it wishes to fact-check. So PolitiFact's set of fact checks is contaminated by selection bias, as the sketch below illustrates.
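To make the mechanism concrete, here is a minimal simulation sketch. The numbers are made up for illustration; nothing here is PolitiFact data. A perfectly balanced population of claims looks lopsided as soon as the selection rule, rather than chance, decides what gets checked:

    import random

    random.seed(42)

    # A balanced population: for each party, 250 true claims and 250 false ones.
    population = [(party, truth)
                  for party in ("D", "R")
                  for truth in (True, False)
                  for _ in range(250)]

    def share_true(sample, party):
        # Fraction of a party's sampled claims that are true.
        rated = [truth for p, truth in sample if p == party]
        return sum(rated) / len(rated)

    # Random selection: roughly representative of the population.
    random_sample = random.sample(population, 200)

    # Biased selection: always check claims that flatter one side
    # (true "D" claims, false "R" claims); check the rest only occasionally.
    biased_sample = [c for c in population
                     if (c[0] == "D" and c[1])
                     or (c[0] == "R" and not c[1])
                     or random.random() < 0.2]

    for name, sample in (("random", random_sample), ("biased", biased_sample)):
        print(name,
              "D:", round(share_true(sample, "D"), 2),
              "R:", round(share_true(sample, "R"), 2))

Random selection leaves both parties near 50 percent true. The biased rule makes the same population look roughly 80 percent true for one party and 20 percent true for the other. Nothing about the underlying claims changed; only the selection did.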

Is PolitiFact's selection bias influenced by its ideological bias?

We don't see why not. And Taylor Swift will help us illustrate the problem.


PolitiFact looked at Swift's claim that Sen. Marsha Blackburn voted against the Violence Against Women Act. That fact check comes packed with the usual PolitiFact nonsense, such as overlooking Blackburn's vote in favor of VAWA in 2012. But this time our focus falls on PolitiFact's decision to look at this Swift claim instead of others.

What other claims did PolitiFact have to choose from? Let's have a look at the relevant part of Swift's statement:
I cannot support Marsha Blackburn. Her voting record in Congress appalls and terrifies me. She voted against equal pay for women. She voted against the Reauthorization of the Violence Against Women Act, which attempts to protect women from domestic violence, stalking, and date rape. She believes businesses have a right to refuse service to gay couples. She also believes they should not have the right to marry. These are not MY Tennessee values.
Now let's put the different claims in list form:
  • Blackburn voted against equal pay for women.
  • Blackburn voted against the Reauthorization of the Violence Against Women Act
  • Blackburn believes businesses have a right to refuse service to gay couples
  • Blackburn also believes they should not have the right to marry
PolitiFact says it checks claims that make it wonder "Is that True?"

The first statement, regarding equal pay for women, makes a great candidate for that question. Congress has not had to entertain a vote against equal pay for women (for equal work) in many years; equal pay has been the law of the land since the 1960s. Lilly Ledbetter Fair Pay Act? Don't make me laugh.

The second statement is a great one to check from the Democratic Party point of view, for the Democrats made changes to the VAWA with the likely intent of creating voter appeals based on conservative opposition to those changes.

The third statement concerns belief instead of the voting record, so that makes it potentially more challenging to check. On its face, Swift's claim looks like a gross oversimplification that ignores concerns about constitutional rights of conscience.

The fourth statement, like the third, involves a claim about belief. It, too, would likely count as a gross oversimplification: conservatives opposed to gay marriage do not necessarily oppose same-sex couples asserting every legal advantage that opposite-sex couples enjoy.

PolitiFact chose its best candidate for finding the claim "True" instead of one more likely to garner a "False" rating. It chose the claim most likely to electorally favor Democrats.

Routinely choosing claims to check on that type of basis may damage the election prospects of those unfairly harmed by partisan story selection, people like Sen. Blackburn.

It's a rigged system when employed by "neutral and nonpartisan" fact checkers who lean left.

And that's how selection bias works.


Tuesday, October 2, 2018

Again: PolitiFact vs PolitiFact

In 2013, PolitiFact strongly implied (it might say "declared") that President Obama's promise that people could keep the health care plans they liked under his health care overhaul, the Affordable Care Act, was a lie, giving the promise its "Lie of the Year" award.

In 2018, PolitiFact Missouri (with editing help from longtime PolitiFacter Louis Jacobson) suffered acute amnesia about its 2013 "Lie of the Year" pronouncements.


PolitiFact Missouri rated as "Mostly False" Republican Josh Hawley's claim that millions of Americans lost their health care plans.

Yet in 2013 it was precisely the loss of millions of health care plans that PolitiFact advertised as its reason for giving Mr. Obama its "Lie of the Year" award (bold emphasis added):
It was a catchy political pitch and a chance to calm nerves about his dramatic and complicated plan to bring historic change to America’s health insurance system.

"If you like your health care plan, you can keep it," President Barack Obama said -- many times -- of his landmark new law.

But the promise was impossible to keep.

So this fall, as cancellation letters were going out to approximately 4 million Americans, the public realized Obama’s breezy assurances were wrong.
Hawley tried to use PolitiFact's finding against his election opponent, incumbent Sen. Claire McCaskill (D-Mo.) (bold emphasis added):
"McCaskill told us that if we liked our healthcare plans, we could keep them. She said the cost of health insurance would go down. She said prescription drug prices would fall. She lied. Since then, millions of Americans have lost their health care plans."

Because of the contradiction between Hawley’s assertion and the promises of the ACA to insure more Americans, we decided to take a closer look.
So, even though PolitiFact itself says millions lost their health care plans and the breezy assurance to the contrary was wrong, PolitiFact says it gave Hawley's claim a closer look because it contradicts assurances that the ACA would insure more Americans.

Apparently it doesn't matter to PolitiFact that Hawley was specifically talking about losing health care plans and not losing health insurance completely. In effect, PolitiFact Missouri disavows any knowledge that the promise "if we liked our healthcare plans, we could keep them" was a false promise. The fact checkers substitute loss of health insurance for the loss of health care plans and give Hawley a "Mostly False" rating based on their own fallacy of equivocation (ambiguity).

A consistent PolitiFact could have performed this fact check easily. It could have looked at whether McCaskill made the same promise Obama made. And after that it could have remembered that it claimed to have found Obama's promise false, along with the reasoning it used to justify that ruling.

Instead, PolitiFact Missouri delivers yet another outstanding example of PolitiFact inconsistency.



Afters:

Do we cut PolitiFact Missouri a break because it was not around in 2013?

No we do not.

Exhibit 1: Louis Jacobson, who has been with PolitiFact for over 10 years, is listed as an editor.

Exhibit 2: Jacobson, beyond a research credit on the "Lie of the Year" article we linked above, wrote a related fact check on the Obama administration's attempt to explain its failed promise.

There's no excuse for this type of inconsistency. But bias offers a reasonable explanation for it.



Tuesday, September 25, 2018

Thinking Lessons

Our post "Google Doesn't Love Us Anymore" prompted a response from the pseudonymous "Jobman."

Nonsensical comments are normally best left unanswered unless they are used for instruction. We'll use "Jobman's" comments to help teach others not to make similar mistakes.

"Jobman" charged that our post misled readers in two ways. In his first reply, "Jobman" offered this explanation of the first of those two allegedly misleading features:

This post is misleading for two reasons, 1. Because it implies that google is specifically down-ranking your website. (Yes, it still does, even if your little blurb at the bottom tries to tell otherwise. "One of the reasons we started out with and stuck with a Blogger blog for so long has to do with Google's past tendency to give priority to its own." and "But we surmise that some time near the 2016 election Google tweaked its algorithms in a way that seriously eroded our traffic" Prove this point)
We answered that "Jobman" contradicted his claim with his evidence.


Lesson One: Avoid the Non Sequitur

"Jobman" asserts that our post implies Google specifically downranked the "PolitiFact Bias" website. The first piece of evidence he offers is our statement that in the past Google gave priority to its own. Google owns Blogger and could be depended on to rank a Blogger blog fairly quickly. What does that have to do with specifically downranking the (Blogger) website "PolitiFact Bias"? Nothing. We offered it only as a reason we chose and continued with Blogger. Offering evidence that doesn't support a claim is a classic example of a non sequitur.
  • Good arguments use evidence that supports the argument, avoiding non sequiturs.

Lesson Two: Looking Up Words You May Not Understand Can Help Avoid Non Sequiturs

"Jobman" offered a second piece of evidence that likewise counted as a non sequitur. We think "Jobman" doesn't know what the term "surmise" means. Not realizing that "surmise" means coming to a conclusion based on reasoning short of proof might lead a person to claim that one who claims to have surmised something needs to provide proof of that thing. But that's an obvious non sequitur for a person who understands that saying one "surmised" communicates the idea that no proof is offered or implied.
  • Make sure you understand the other person's argument before trying to answer or rebut it. 

Lesson Three: Understand the Burden of Proof

In debate, the burden of proof belongs on the person asserting something. In non-debate contexts, the burden of proof belongs on anyone who wants another person to accept what they say.  In the present case, "Jobman" asserted, without elaborating, that two parts of our post sent the message that Google deliberately downranked "PolitiFact Bias." It turns out he was wrong, as we showed above. But "Jobman" showed little understanding of the burden of proof concept with his second reply:
The evidence that I point to doesn't contradict what I say. Yes, that's my rebuttal. You haven't proven that It does contradict what I say. Maybe try again later?
Who is responsible for showing that what we wrote doesn't mean whatever "Jobman" thinks it means? "Jobman" thinks we are responsible: if he thinks what we wrote means X, then it means X unless we can show otherwise. That's a classic case of the fallacy of shifting the burden of proof. The critic is responsible for supporting his own case before his target needs to respond.

"Jobman" added another example of this fallacy in his second reply:
Your title, "Google doesn't love us anymore" and contents of your post prove that you believe that Google somehow wants to push your content lower, yet you give no evidence for this.
"Jobman" says "Google doesn't love us anymore" means X (Google somehow wants to push our content lower). And "Jobman" thinks the burden rightly falls on us to show that "Google doesn't love us anymore" means something other than X, such as the simple observation that Google downranked the site. "Jobman" thinks we are responsible for proving that Google somehow wants to push our content lower even though we already said that we did not think that is what Google did.

That's a criminal misunderstanding of the burden of proof.
  • Making a good argument involves understanding who bears the burden of proof.

Lesson Four: Strive For Coherence & Lesson Five: Avoid Creating Straw Men

In his second reply "Jobman" suggested that we brushed off our lack of evidence (lack of evidence supporting a point we were not making!) with our claim that we were not making that point.
Then, since you don't have any evidence, you try to brush it off and say "This post isn't about google targeting us" When every part of your post says otherwise.
With that last line we think perhaps "Jobman" meant to say "every part of your post says otherwise except for the part that doesn't." Though "Jobman" obviously overestimates the part that says otherwise.

His incoherence is palpable. And given that we specifically said we were not claiming Google specifically targeted the PolitiFact Bias site, a critic needs an incredibly good argument to claim that we were arguing the opposite of what we argued. "Jobman" does not have that. He has a straw man fallacy supported only by his own non sequiturs.
  • It's a good idea to review your argument to make sure you don't contradict yourself.
  • Resist the temptation to argue against a distortion of the other person's argument. That path leads to the straw man fallacy.

Lesson Three Review: Understand the Burden of Proof

The burden of proof falls on the one claiming something in the debate context, or on anyone who wants somebody else to believe something in everyday life. "Jobman" again:
When you claim that Google has made changes that have negatively impacted your website, you DO have to prove that. For now, I'll just dismiss your claim entirely until you provide evidence that google has made these changes, and that your website was previously ranked on the top of the list.
We said we surmised that Google's tweaking of its algorithms resulted in the downranking. As noted earlier, "Jobman" apparently thinks that claiming something while admitting it isn't proven obligates the claimant to prove the claim. Claiming to have proof carries with it the natural expectation that one may obtain that proof by asking. Recognizing when proof is claimed and when it isn't helps prevent mistakes in assigning the burden of proof.

In fact, the PFB post does offer evidence short of proof in the form of screenshots showing top-ranked searches from Bing and DuckDuckGo alongside a much lower ranking from Google. Specific evidence of the Google downranking comes from our past observations of a consistent top ranking. Evidence of Google tweaking its algorithms is not hard to find, so the argument in our post counted that as common knowledge for which the average reader would require no proof. And we could expect others to research the issue if they questioned it.

As for the promise to dismiss our claims for lack of proof, that is the prerogative of every reader no matter the literature. Readers who trust us will tend to accept our claims about our Google rank. Others can judge based on our accuracy with other matters. Others will use the "Jobman" method. That's up to the reader. And that's fine with us.
 

Lesson Five Review: Avoid Creating Straw Men

It was news to us that we posted the Bing and DuckDuckGo search results to prove Google is specifically biased against the PolitiFact Bias website. We thought we were showing that we rank No. 1 on Bing and DuckDuckGo while ranking much lower on Google.

We suppose "Jobman" will never buy that explanation:

Every single web indexing website in the history of the internet has had the purpose of putting forth the most relevant search results. You could prove that by literally googling anything, then saying "'X' Irrelevant thing didn't show up on the search results", but you compared search results of google and other search engines In order to convey the theme that google is somehow biased in their web searches because your website isn't at the top for theirs.
All search engines are biased toward their managers' vision of relevant search results. The bias at Bing and DuckDuckGo is friendlier to the PolitiFact Bias website than the bias at Google.

"Jobman" finished his second reply by telling us about ways we could improve our website's page rank without blaming Google for it. If that part of his comment was supposed to imply that we blame Google for our website's traffic, that's misleading.

Obviously, though, it's true that if Google gave us the same rank we get from Bing and DuckDuckGo we would probably enjoy healthier traffic. The bulk of our traffic comes from Google referrals, and we would expect a higher ranking to result in more of those referrals.

Like we said in the earlier PFB post, it comes down to Google's vision of what constitutes relevance. And clearly that vision, as the algorithm expresses it, is not identical to the ones expressed in the Bing and DuckDuckGo algorithms.

We did not and do not argue that Google targeted "PolitiFact Bias" specifically for downranking. Saying otherwise results in the creation of a straw man fallacy.




Note: "Jobman" has exhausted his reply privileges with the second reply that we quoted extensively above. He can take up the above argument using a verifiable identify if he wishes, and we will host comments (under other posts) he submits under a different pseudonym. Within limits.

Sunday, September 16, 2018

Google doesn't love us anymore

One of the reasons we started out with and stuck with a Blogger blog for so long has to do with Google's past tendency to give priority to its own.

It took us very little time to make it to the top of Google's search results for Web surfers using the terms "PolitiFact" and "bias."

But we surmise that some time near the 2016 election Google tweaked its algorithms in a way that seriously eroded our traffic. That was good news for PolitiFact, whose fact checking efforts we criticize and Google tries to promote.

And perhaps "eroded" isn't the right word. Our traffic pretty much fell off a cliff between the time Trump won election and the time Trump took office. And it coincided with the Google downranking that occurred while the site was enjoying its peak traffic.

We've found it interesting over the past couple of years to see how different search engines treated a search for "PolitiFact bias." Today's result from Microsoft's Bing search engine was a pleasant surprise. Our website was the top result and our site was highlighted with an informational window.

The search result even calls the site "Official Site." We're humbled. Seriously.



What does the same search look like on Google today?

Ouch:



"Media Bias Fact Check"? Seriously?

Dan flippin' Bongino? Seriously?

A "PolitiFact" information box to the upper right?

The hit for our site is No. 7.

It's fair to charge that we're not SEO geniuses. But on the other hand we provide excellent content about "PolitiFact" and "bias." We daresay nobody has done it better on a more consistent basis.


DuckDuckGo




DuckDuckGo is gaining in popularity. It's a search engine marketing itself based on not tracking users' searches. So we're No. 1 on Bing and DuckDuckGo but No. 7 on Google.

It's not that we think Google is deliberately targeting this website. Google has some kind of vision for what it wants to end up high in its rankings and designs its algorithms to reach toward that goal. Sites like this one are "collateral damage," the "disparate impact" of that design.