Wednesday, December 5, 2018

Handicapping PolitiFact's "Lie of the Year" Candidates (Updated)


It's that time of year again, when the supposedly non-partisan and unbiased folks at PolitiFact prepare their annual op-ed crowning the most significant lie of the year: PolitiFact's "Lie of the Year" award.

At PolitiFact Bias we have made a tradition of handicapping PolitiFact's list of candidates.

So, without further ado:


To the extent that Democrats think Trump's messaging on immigration helped Republicans in the 2018 election cycle, this candidate has considerable strength. I (Jeff can offer his own breakdown if he wishes) rate this entry as a 6 on a scale of 1-10, with 10 representing the strongest.


This claim involving Saudi Arabia qualifies as my dark horse candidate. By itself the claim had relatively little political impact. But Trump's claim relates to the murder of Saudi journalist (and U.S. resident) Jamal Khashoggi. Journalists have disproportionately gravitated toward that issue. Consonant with journalists' high estimation of their own intelligence and perception, this is the smart choice. 6.


This claim has much in common with the first one. It deals with one of the key issues of the 2018 election cycle, and Democrats may view this messaging as one of the reasons the "blue wave" did not sweep over the U.S. Senate. But the first claim came from a popular ad. And the first claim was rated "Pants on Fire" while PolitiFact gave this one a mere "False" rating. So this one gets a 5 from me instead of a 6.



PolitiFact journalists may like this candidate because it undercuts Trump's narrative about the success of his economic policies. Claiming U.S. Steel is opening new plants after Trump slapped tariffs on aluminum and steel makes the tariffs sound like a big success. But not so much if there's no truth to it. How significant was it politically? Not very. I rate this one a 4.



If this candidate carries significant political weight, it comes from the way the media narrative contradicting Trump's claim helped lead to the administration's reversal of its border policy. That reversal negated, at least to some extent, a potentially effective Democratic Party election-year talking point. I rate this one a 5.


That's five from President Trump. Are PolitiFact's candidates listed "in no particular order"? PolitiFact does not say.



Bernie Sanders' claim about background checks for firearm purchases was politically insignificant. Pickings from the Democratic Party side were slim. Democrats had only about 12 false ratings through this point in 2018, counting "Pants on Fire" ratings. Republicans had over 80, for comparison. I give this claim a 1.



As with Sanders' claim, the one from Ocasio-Cortez was politically insignificant. It was ignorant, sure, but Ocasio-Cortez was guaranteed to win in her district regardless of what she said. Her statement would have been just as significant politically if she had said it to herself in a closet. This claim, like Sanders', rates as a 1.




Is this the first time a non-American made PolitiFact's list of candidates? This claim ties into the same subject as last year's winner, Russian election interference. About last year's selection I predicted "PolitiFact will hope the Mueller investigation will eventually provide enough backing to keep it from getting egg on its face." One year later it remains uncertain whether the Mueller investigation will produce a report that shows much more than the purchase of some Facebook ads. If and only if the Russia story gets new life in December will PolitiFact make this item its "Lie of the Year." I give this item a 4, with a higher ceiling depending on the late 2018 news narrative.




Yawn. 1.





This claim from one of Trump's economic advisors rates about the same as Ocasio-Cortez's claim on its face. I think Kudlow may have referred to deficit projections and not deficits. But that aside, this item may appeal to PolitiFact because it strikes at the idea that tax cuts pay for themselves. Democrats imagine that Republicans commonly believe that (it may be true--I don't know). So even though this item should rate in the same range as the Sanders and Ocasio-Cortez claims I will give it a 4 to recognize its potential appeal to PolitiFact's left-leaning staff. It has a non-zero chance of winning.



Afters

A few notes: Once again, PolitiFact drew only from claims rated "False" or "Pants on Fire" to make up its list of candidates. President Obama's claim about Americans keeping their health insurance plans remains the only "Lie of the Year" candidate ever to receive a "Half True" rating.

With five Trump statements among the 10 nominees we have to allow that PolitiFact will return to its ways of the past and make "Trump's statements as president" (or something like that) the winner.


Jeff Adds:

Knowing PolitiFact's Lie of the Year stunt is more about generating teh clickz than about serious journalism or truth seeking, my pick is the Putin claim.

The field of candidates is, once again, intentionally weak outside of the Putin rating. Despite all the Pants on Fire ratings they passed out to Trump this year, PolitiFact filled the list with claims rated merely False (and this is pretending there's some objective difference between any of PolitiFact's subjective ratings.)

Giving the award to Bernie won't generate much buzz, so you can cross him off the list.

It's doubtful the nonpartisan liberals at PolitiFact would burden Ocasio-Cortez with such an honor when she's already taking well-deserved heat for her frequent gaffes. And as far as this pick creating web traffic, I submit that AOC isn't nearly as talked about in Democrat circles as the ire she elicits from the right would suggest. That said, she should be considered a dark horse pick.

It's not hard to imagine PolitiFacter Aaron Sharockman cooking up a scheme during a Star Chamber session to pick AOC as an attempt at outreach to conservative readers and beefing up their "we pick both sides!" street cred (a credibility, by the way, that only PolitiFact and the others in their fishbowl of liberal confirmation bias actually believe exists.)

More people in America know Spencer Pratt sells healing crystals than have ever heard of Larry Kudlow. You can toss this one aside.

The inclusion of the David Hogg claim seems like a PolitiFact intern was given the task of picking out a few False nuggets from liberals and that was what they came up with. Don't expect PolitiFact to pick on the young but misinformed activist. [Update: This is a completely embarrassing take on my part. I was in a rush to publish my thoughts on the Lie of the Year candidates, and in that rush, I glossed over this claim. Obviously, I didn't even give it a passing notice. I'm confident that had I actually paid attention to it, I would have ignored it as a contender anyways (and I still think it's a lame pick on its face.) But that's not an excuse.

I let readers down and I embarrassed myself. As I repeatedly and mockingly point out to fact checkers: Confirmation bias is a helluva drug. I was convinced of the winner, and I ignored information that didn't support that outcome.

I regret that I didn't dismiss it with a coherent argument. My bad.-Jeff]

Putin is the obvious pick. Timed perfectly with the release of the Mueller report, it piggybacks onto the Russian interference buzz. Additionally, it allows ostensibly serious journos to include PolitiFact's Lie of the Year piece in their own articles about Russian involvement in the election (the catnip of liberal click-bait.) It gets bonus points for confirming for PolitiFact's Democrat fan base that Trump is an illegitimate president who stole the election.

The Putin claim has everything: it's anti-Trump, it stokes Russian-interference retweets and Facebook shares, and it gets links from journalists at other news outlets sympathetic to PolitiFact's cause.

The only caveat here is if PolitiFact continues its recent history of coming up with some hybrid, too-clever-by-half Lie of the Year winner that isn't actually on the list. But even if they do that, the reasoning is the same: PolitiFact is not an earnest journalism outlet engaged in fact spreading. PolitiFact exists to get your clicks and your cash.

Don't believe the hype.





Updated: Added Jeff Adds section 1306 PST 12/10/2018 - Jeff
Edit: Corrected misspelling of Ocasio-Cortez in Jeff Adds portion 2025 PST 12/10/2018 - Jeff
Updated: Strike-through text of Hogg claim analysis in Jeff Adds section, added three paragraph mea culpa defined by brackets 2157 PST 12/12/2018 -Jeff


Wednesday, November 14, 2018

PolitiFact misses obvious evidence in Broward recount fact check

On Nov. 13, 2018 PolitiFact's "PunditFact" brand issued a "Pants on Fire" rating to conservative Ken Blackwell for claiming Democrats and their allies were manufacturing voters in the Florida election recount.


The problem?

PolitiFact somehow overlooked obvious evidence reported in the mainstream media. The Tampa Bay Times, which owned PolitiFact before its transfer to the nonprofit Poynter Institute, published a version of the story:
Broward's elections supervisor accidentally mixed more than a dozen rejected ballots with nearly 200 valid ones, a circumstance that is unlikely to help Brenda Snipes push back against Republican allegations of incompetence.

The mistake — for which no one had a solution Friday night — was discovered after Snipes agreed to present 205 provisional ballots to the Broward County canvassing board for inspection. She had initially intended to handle the ballots administratively, but agreed to present them to the canvassing board after Republican attorneys objected.
The Times story says counting the 205 provisional ballots resulted in at least 20 illegal votes ending up in Broward County's vote totals.

The Times published its story on Nov. 10, 2018.

PolitiFact/PunditFact published its fact check on Nov. 13, 2018 (2:24 p.m. time stamp). The fact check contains no mention at all that Broward County included invalid votes in its vote totals.

Instead, PolitiFact reporter John Kruzel gives us the breezy assurance that neither he nor the state found evidence supporting Blackwell's charge.
Our ruling

Blackwell said, "Democrats and their allies (...) are manufacturing voters."

We found no evidence, nor has the state, to support this claim. Blackwell provided no evidence to support his statement.

We rate this Pants on Fire.
Inconceivable, you say?




Friday, November 9, 2018

PolitiFact: "PolitiFact is not biased--here's why" Pt. 4

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

PolitiFact:
4. Reader support allows us to stay independent.

Our independent journalism, seeking only to sort out the truth in American policy, is what motivates us to keep publishing for the benefit of our readers. We began over a decade ago as a politics project at Florida’s largest daily newspaper, the Tampa Bay Times. Today, we are a nonprofit newsroom that is part of the Poynter Institute, a school for journalists based in Florida.
As with Holan's third point offered to show PolitiFact is not biased, her fourth point gives no reason to think PolitiFact is not biased.

Does anybody need a list of partisan causes supported by public donations? Does anybody have the slightest doubt that PolitiFact's "Truth Squad" membership skews strongly left? If anyone harbors the second doubt, we recommend checking out polls like this one that show moderates and conservatives place little trust in the media.

If PolitiFact relies on donations primarily from liberals, then how does that make it more independent instead of less independent? Were PolitiFact to displease its liberal base it could expect its primary source of private donations to shrink.

Here's something we'd like to see. And it's something that will never happen. Let PolitiFact poll its "Truth Squad" to find out how its ideology trends as a group. If conservatives and moderates are a distinct minority PolitiFact can use that information to bolster its membership outreach to those groups: "We need more support from conservatives who care about objective fact-checking!"

And of course that will never happen. Doing so would tear down the facade PolitiFact built to suggest that its reliance on public support somehow keeps it politically neutral. PolitiFact has no interest in that kind of transparency. That kind of truth is not in PolitiFact's self-interest.


The Main Point? Reader $upport

We do not buy that PolitiFact sincerely tried to put forth a serious argument that it is unbiased. The argument Holan put forward toward that end was simply too weak to make that believable. We think the main point was to soften misgivings people may have about joining PolitiFact's financial support club, which it has dubbed its "Truth Squad."

Holan tipped off that purpose early in her article (bold emphasis added):
We expect it (accusations of bias--ed.). Afterall [sic], as an independent group measuring accuracy, we are disrupting the agendas of partisans and political operatives across the ideological spectrum. We do it to give people the information they need to govern themselves in a democracy, and to uphold the tradition of a free and independent press.

Still, we think it’s worth explaining our mission and methods, both to answer those who make the charge against us, and for our supporters when confronted by naysayers.
Also see the trimmed screen capture image below (bottom) with an ad asking readers to support PolitiFact.

If you ask us, PolitiFact's "Truth Squad" isn't worthy of the name if they buy Holan's argument. "Dupe Squad" would be more like it. Holan wants your money. And it looks like she's willing to put forward an argument she wouldn't buy herself to help keep that money flowing.

Holan offers no real answer to those who claim PolitiFact is biased. To do that, Holan would need to specifically answer the arguments critics use to support their claims.

PolitiFact finds it preferable to simply say it is unbiased without offering real evidence supporting its claim. And without rebutting the arguments of its detractors.


Is the embedded ad asking for money a mere coincidence? We added the red border for emphasis.

Thursday, November 8, 2018

PolitiFact: "PolitiFact is not biased--here's why" Pt. 3

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

3. We make mistakes sometimes, but we correct our errors promptly.
The facts come first with us. That’s why it’s important for us -- or any reputable news organization -- to correct mistakes promptly and clearly. We follow a published corrections policy that anyone can read. Readers also can easily access a list of fact-checks that have been corrected or updated after the original publication.
I make mistakes sometimes, but I correct my errors promptly. Would that make me unbiased? Who believes that?

A willingness to correct errors does not bear directly on the issue of bias. Consider PolitiFact's move of paying researchers to look for examples of biased language in its work (the study found no systematic evidence of biased language). Would a policy of correcting mistakes promptly cancel out a strong propensity to use biased language?

Of course not. Correcting mistakes would only have an effect on biased language if the publisher viewed biased language as a mistake and corrected it as such.

In our experience PolitiFact often refuses to consider itself mistaken when it makes a real mistake.

What good is a thorough and detailed corrections policy if the publishing entity can't recognize the true need for a correction?

And doesn't it go without saying that the failure to recognize the need for a correction may serve as a strong indicator of bias?


Wonderful-Sounding Claim Meaning Nearly Nothing

How great is PolitiFact's corrections policy? Just let Holan tell you:
We believe it is one of the most robust and detailed corrections policies in American fact-checking.
We were momentarily tempted to fact check Holan's claim. Except she starts with "We believe," which immediately moves the claim into the realm of opinion. But if it were a claim of fact Holan could probably defend it easily, because the claim doesn't really mean anything.

Think about it. "One of the most robust and detailed corrections policies in American fact-checking." Let's take a look at the set of American fact checkers, using the list of IFCN-verified fact-checkers. When we looked on Nov. 8, 2018 there were eight (including PolitiFact).

With a pool that small, PolitiFact could have the least robust and detailed corrections policy among the eight and still plausibly say it has "one of the most robust and detailed corrections policies in American fact-checking." Our opinion? In a pool of eight there's nothing to crow about unless you're No. 1. Coming in at No. 4 puts one in the middle of the pack, after all.

We think PolitiFact's corrections policy is less robust than that of its parent organization, the Poynter Institute. We're wondering why that should be the case.


Summary

The first two reasons Holan offered to support PolitiFact's "not biased" claim were incredibly weak. But the third item managed to register even lighter weight on the scale of evidence. A robust corrections policy is a poor protection against ideological bias. It's a bit like using a surgical mask for protection against mustard gas.

PolitiFact: "PolitiFact is not biased--here's why" Pt. 2 (Updated)

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

PolitiFact:

2. We follow the facts, not fact-check count formulas.

We let the factual chips fall where they may. This is not bias; this is sticking to our mission of correcting falsehoods as we find them.
As with PolitiFact's first supposed evidence, this one does not appear to work without assuming PolitiFact lacks bias. It's not biased to let the factual chips fall where they may if the evaluation of the facts was unbiased. But it's biased to let the factual chips fall where they may if the evaluation of the facts was biased.

So far, this item offers us no solid reason for concluding PolitiFact lacks bias.

Our Little League doesn't keep score!

Holan continues:
We don’t worry about who got the last False rating or how long since some group got a True rating. We look at each statement and each set of evidence separately and give it a rating that stands on its own.
Concerning the first sentence, our data hint that PolitiFact does consider the proportion of "Pants on Fire" ratings it gives in relation to false ratings overall. Note that the pattern under Holan (2014 onward) exhibits much more stability than PolitiFact's record under her predecessor, Bill Adair. We started publishing our data midway through 2011. PolitiFact editors may have looked at the data in 2011 or later and acted on it, which may explain the reduced variation.


Bringing the chart up to date as of today would bring the blue 11.11 percent for 2018 up to 20 percent. That's thanks to a "Pants on Fire" rating given to the North Dakota Democratic-Nonpartisan League Party (who?). Is there even a PolitiFact North Dakota? No. And the other "Pants on Fire" rating given to a Democrat this year went to Alexandria Ocasio-Cortez, who would have easily won her race in New York with 3,000 "Pants on Fire" ratings.
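For readers who want the arithmetic behind that update, here is a minimal sketch. The breakdown is our own back-of-the-envelope inference from the chart percentages, not a figure PolitiFact publishes: 11.11 percent jumping to exactly 20 percent is consistent with one "Pants on Fire" among nine Democratic ratings of False or worse growing to two among ten.

def pof_share(pants_on_fire, false_or_worse):
    # Percentage of "Pants on Fire" ratings among all ratings of False or worse.
    return 100.0 * pants_on_fire / false_or_worse

print(pof_share(1, 9))   # 11.11... percent -- the chart's 2018 figure before the update
print(pof_share(2, 10))  # 20.0 percent -- the figure after the new "Pants on Fire" rating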

Only once under Holan's tenure (omitting 2013, which she shared with Adair) has either party had a percentage outside the 20 percent to 30 percent range. Under Adair it happened seven times (again omitting 2013).

Holan's assurances ring hollow because PolitiFact's Truth-O-Meter ratings get picked by a fairly consistent group of editors. They know the ratings they are giving even if they're not looking at a scoreboard, just like members of those Little League teams in leagues that avoid hurting feelings by not keeping score.

On top of that, PolitiFact constantly encourages readers to view candidate "report cards" that show all the "Truth-O-Meter" ratings PolitiFact has meted out to a given candidate.

Does this look like PolitiFact isn't keeping score?

But Let's Assume PolitiFact Does Not Keep Score

Even assuming Holan is right that PolitiFact does not keep score with its "Truth-O-Meter" ratings, that offers no assurance that PolitiFact lacks bias. Think of an umpire in one of those "no score" Little League games. Would not keeping a tally of the number of runs scored prevent an umpire from calling a bigger strike zone for one team than the other? We don't see what would prevent it.

Tweaking the Little League Analogy: Yes We Keep Score, But It Does Not Make Us Biased

The Little League analogy breaks down in the end because PolitiFact does keep score, as Holan acknowledges:
Our database of fact-checks make it easy to see the ratings people or parties have received over the years. Our readers tell us they like seeing these summaries and find them easy to browse. But we are not driven by those numbers; they have no bearing on how we rate the next statement we choose to fact-check.
So Holan is saying: yeah, we keep score, but we don't let it bias our decisions, therefore we are not biased. It's circular reasoning again. Where's the evidence?

How does Holan know PolitiFact does not let the score affect its work? What is PolitiFact's secret for exterminating normal human bias? Wouldn't we all like to know?

We're not going to know from Holan's explanation, that's for sure.

There's nothing in this section of Holan's article that offers any kind of legitimate assurance that PolitiFact filters bias from its work.



Update Jan. 7, 2019



In our story, we jabbed PolitiFact for publishing a page showing the "report cards" for some of the people it fact checks most often, asking our readers whether that page makes it look like PolitiFact doesn't keep score.

Today, when we tested the link, it returned one of PolitiFact's 404 "page not found" errors.

We don't know whether PolitiFact removed the page to make it look less like PolitiFact keeps score. But if the page was permanently removed (instead of being temporarily inaccessible) it's not a good look for Pulitzer Prize-winning PolitiFact.

If PolitiFact removed the page, at least we've got an Internet Archive version for our readers. Our archived version is from Dec. 28, 2018.






Wednesday, November 7, 2018

Remember when the Bush brothers negotiated NAFTA?

We've written about this before, but it's useful for communication purposes to dedicate a post to this memorable PolitiFact pictorial flub:

PolitiFact: "PolitiFact is not biased--here's why" Pt. 1

In an article titled "PolitiFact is not biased--here's why" PolitiFact Editor Angie Drobnic Holan offers four points as evidence PolitiFact is not biased. This series deals with each of the four.

PolitiFact:

1. We fact-check inaccurate statements, not political parties.

We are always on the lookout for bad information that needs correcting. We don’t have any concern about which party it comes from or who says it. If someone makes an inaccurate statement, it gets a negative rating on our Truth-O-Meter: Mostly False, False or Pants on Fire.
If we at PolitiFact Bias were to publish a story making an assertion, we would certainly try to produce some type of palpable evidence in support. We find PolitiFact's article striking for its lack of evidence in support of the claim in the title.

Let's assume for the sake of argument that it's true PolitiFact fact checks inaccurate statements and not political parties. We find both assertions questionable, but we can set that aside for the moment.

What stops a biased fact checker from allowing factors like confirmation bias to guide its selection of fact checks to reflect an ideological bias? This is an obvious objection to the first part of Holan's argument, but her article completely fails to acknowledge it. If Holan assumes that PolitiFact has no bias and therefore no confirmation bias can result, then her argument begs the question (circular reasoning: PolitiFact is not biased because PolitiFact is not biased).

If Holan isn't using circular reasoning then she's simply not addressing the issue in any relevant way. Fact-checking inaccurate statements and not political parties does nothing to show a lack of bias.


The Elephant in the Room (a foreshadowing pun)

In early 2011 Eric Ostermeier of the University of Minnesota did a study of PolitiFact's ratings. Ostermeier found Republicans were receiving worse treatment in PolitiFact's ratings. Ostermeier noted that PolitiFact's descriptions of its methodology offered no assurance at all that the skew in its ratings was unaffected by selection bias. In other words, was unrepresentative sampling responsible for making it appear that Republicans lie more?

Ostermeier posed an important question that PolitiFact has never satisfactorily addressed:
The question is not whether PolitiFact will ultimately convert skeptics on the right that they do not have ulterior motives in the selection of what statements are rated, but whether the organization can give a convincing argument that either a) Republicans in fact do lie much more than Democrats, or b) if they do not, that it is immaterial that PolitiFact covers political discourse with a frame that suggests this is the case.

The evidence says PolitiFact's story selection is biased

While developing our own research approaches to PolitiFact's ratings, we came up with an observation that we say strongly shows PolitiFact is guilty of selection bias.

Imagine PolitiFact used only its editorial judgment of whether a statement seemed so false that it was worthy of a fact check and was completely blind to political party and ideology.

We say that regardless of whether one party lies more, the results should prove pretty close to proportional. If 40 percent of PolitiFact's ratings of Republicans come out "Pants on Fire" or "False" then the same should hold true of Democrats. If Republicans lie more that should end up reflected in the number of ratings, not in the proportions.

PolitiFact as much as admitted to selection bias in the early days. PolitiFact founding editor Bill Adair said PolitiFact tried to do a roughly equal number of fact checks for Republicans and Democrats. That makes no fewer than two criteria for selecting a story, and one of them has nothing to do with whether the statement appeared false. Trying to fact check Republicans and Democrats equally will skew the proportions (unless the parties lie equally and PolitiFact's sample is effectively random), as the sketch below illustrates.
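Here is a minimal simulation sketch of that quota effect. All of the numbers are our own illustrative assumptions, not PolitiFact data: one party makes false statements three times as often as the other, the checker sees only a noisy "looks false" signal and is blind to party, but must fill an equal quota of checks per party.

import random

random.seed(0)

def make_statements(n, false_rate):
    # Each statement is a pair (actually_false, looks_false_score); false
    # statements tend to score higher, but the signal is noisy.
    statements = []
    for _ in range(n):
        actually_false = random.random() < false_rate
        looks_false = random.gauss(1.0 if actually_false else 0.0, 0.7)
        statements.append((actually_false, looks_false))
    return statements

def false_share_of_checks(statements, quota):
    # Check the quota of most suspicious-looking statements, then report
    # what share of the checked statements really were false.
    checked = sorted(statements, key=lambda s: s[1], reverse=True)[:quota]
    return sum(actually_false for actually_false, _ in checked) / quota

party_a = make_statements(1000, 0.30)  # assumed to make false statements often
party_b = make_statements(1000, 0.10)  # assumed to make them far less often

# The equal quota forces the checker deep into Party B's mostly true pool,
# so Party B's confirmed-false share lands well below Party A's even though
# every individual check is honest and party-blind.
print(f"Party A: {false_share_of_checks(party_a, 200):.0%} of checked claims false")
print(f"Party B: {false_share_of_checks(party_b, 200):.0%} of checked claims false")

Under these assumptions the two parties' confirmed-false shares land far apart even though each individual check is performed honestly; the equal-numbers quota, not the checking, produces the gap.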

In Ostermeier's research, Republicans' statements were 39 percent "Pants on Fire" or "False" while Democrats' statements were 12 percent "Pants on Fire" or "False." That's strong evidence of selection bias.

Note: We have not tracked these numbers through the present. Perhaps PolitiFact is closer to rating claims proportionally now than it was in Adair's time. If it is, then PolitiFact could present that as evidence it is blind to ideology when it chooses which claims to check.


Until PolitiFact answers Eric Ostermeier's question it is unsafe to conclude that PolitiFact lacks bias.

PolitiFact: "PolitiFact is not biased--here's why" (Intro) (Updated)

On Nov. 6, 2018, PolitiFact published an article declaring itself "not biased," suggesting with the tail end of the title ("here's why") that it could support the declaration with evidence.

We welcome PolitiFact's better-late-than-never response to its critics. But we find the proffered reasoning incredibly weak. If the argument from PolitiFact Editor Angie Drobnic Holan addresses any item from our list of arguments helping show PolitiFact's leftward lean, it does so obliquely at best.

At the risk of using a pile driver to squash a gnat, we will address each of Holan's arguments in a series of posts. As we complete each part of the series, we will add a hotlink to the corresponding argument in the list from Holan and PolitiFact below.

Just click a claim from PolitiFact to see our answers to Holan's arguments.


1. We fact-check inaccurate statements, not political parties.

2. We follow the facts, not fact-check count formulas.

3. We make mistakes sometimes, but we correct our errors promptly.

4. Reader support allows us to stay independent.

 

PolitiFact Editor Angie Drobnic Holan

 

Update, With Conclusion (Nov. 9, 2018)

We have completed adding links to articles debunking each of PolitiFact's supposed reasons supporting its claim of being unbiased.

In summary, we think the purpose of Holan's article was not to put forward a serious argument. The article was ad copy designed to support an appeal for reader support.

It was PolitiFlack. Item No. 4 emphasizes the point.


 

Saturday, November 3, 2018

PolitiFact's Liberal Tells for $400, Alex

When PolitiFact released the results of a language inventory it commissioned on itself, we were not surprised that the researchers found no clear evidence of biased language. PolitiFact's bias is mostly found in its choice of stories accompanied by bias in the execution of the fact checks.

But ...

On Oct. 31, 2018 PolitiFact Editor Angie Drobnic Holan published an article on the top election issues for 2018 and promptly stepped in it:
PolitiFact has been monitoring and fact-checking the midterm campaigns of 2018 in races across the country. We’ve seen common themes emerge as the Democrats and Republicans clash. Here’s a look at what we’ve found to be the top 10 storylines of the 2018 contests. (We provide short summaries of our fact-checks here; links will take you to longer stories with detailed explanations and primary sources.)

1. Fear of immigration
We'll explain to Holan (and the audience) the right way to identify immigration as an election issue without employing biased language:
1. Immigration
It's pretty easy.

Use "Fear of immigration" and the language communicates a lean to the left. Something like "Inadequate border security" might communicate the opposite (no danger of that from PolitiFact!).

Others from Holan's list of 10 election topics may also qualify as biased language. But this one is the most obvious. "Fear of immigration" is how liberals imagine conservatives reach the conclusion that securing the border and controlling immigration count as good policy.

PolitiFact's claim to non-partisanship is a gloss.

Tuesday, October 30, 2018

PolitiScam: It's Not What You Say, It's How PolitiFact Frames It

PolitiFact's Oct. 29, 2018 fact check of President Trump gave us yet another illustration of PolitiFact's inconsistent application of its principles.

On the same day a shooting at a Pittsburgh synagogue left multiple people dead, Mr. Trump justified not canceling his campaign appearances by saying that terrorizing acts should not alter daily business. Trump used the New York Stock Exchange as his example, saying it opened the day after the Sept. 11, 2001 terrorist attacks.

But it didn't open the next day. Trump was flatly wrong.


PolitiFact:


Note that PolitiFact spins its description. PolitiFact says Trump claimed he did not cancel the political rally simply because the NYSE opened the day after Sept. 11. But the NYSE opening was simply an example of the justification Trump was using.

This case involving Trump carries a parallel to a fact check PolitiFact did in 2008 of then-presidential candidate Barack Obama. Both Trump and Obama made false statements. But PolitiFact found a way to read Mr. Obama's false statement favorably:


Obama claimed his uncle helped liberate Auschwitz. But Obama's uncle was never particularly close to Auschwitz. That most famous of the concentration camps was located in Poland, not Germany, and was liberated by troops from the Soviet Union.

One might well wonder how Obama received a "Mostly True" rating for a relatively outrageous claim.


PolitiFact Framing to the Rescue!

It was very simple for PolitiFact to rehabilitate Mr. Obama's claim about his uncle. The uncle was a real person, albeit an uncle in the broad sense, and he did serve with American troops who helped liberate a less-well-known concentration camp near Buchenwald, Germany.

PolitiFact explains in its summary paragraph:
There's no question Obama misspoke when he said his uncle helped to liberate the concentration camp in Auschwitz.

But even with this error in locations, Obama's statement was substantially correct in that he had an uncle — albeit a great uncle — who served with troops who helped to liberate the Ohrdruf concentration/work camp and saw, firsthand, the horrors of the Holocaust. We rate the statement Mostly True.
See? Easy-peasy. The problem? It's pretty much just as easy to rehabilitate the claim Trump made:
There's no question Trump misspoke when he said the NYSE opened the day after Sept. 11.

But even with his error about the timing, Trump was substantially correct that the NYSE opened as soon as it feasibly could following the Sept. 11 terrorist attacks. The NYSE opened the following week not far from where the twin towers collapsed.
PolitiFact only used two sources on the reopening of the NYSE, and apparently none that provided the depth of the Wall Street Journal article we linked. Incredibly, PolitiFact also failed to link the articles it used. The New York Times story it used was available on the internet. Instead, the sources have notes that say "accessed via Nexis."

All it takes to adjust the framing of these fact check stories is the want-to. Trump was off by a week. Obama was off by a country. Both had underlying points a fact checker could choose to emphasize.

These fact checkers do not have objective standards for deciding how to frame fact checks.


Related: "Lord knows the decision about a Truth-O-Meter rating is entirely subjective"

Monday, October 22, 2018

PolitiFact: One Standard For Me and Another For Thee 2

PolitiFact executed another of its superlative demonstrations of hypocrisy this month.

After PolitiFact unpublished its botched fact check about Claire McCaskill and the affordability of private aircraft, it published a corrected (?) fact check changing the rating from "False" to "Half True." Why "Half True" instead of "True"? PolitiFact explained it gave the "Half True" rating because the (Republican) Senate Leadership Fund failed to provide adequate context (bold emphasis added).
The Senate Leadership Fund says McCaskill "even said this about private planes, ‘that normal people can afford it.’"

She said those words, but the footage in the ad leaves out both the lead-in comment that prompted McCaskill’s remark and the laughter that followed it. The full footage makes it clear that McCaskill was wrapping up a policy-heavy debate with a private-aviation manager and with a riff using the airport manager’s words. In context, he was referring to "normal" users of private planes, as opposed to "normal" Americans more generally.

We rate the statement Half True.
Let's assume for the sake of argument that PolitiFact is exactly right (we don't buy it) in the way it recounts the problems with the missing context.

Assuming the missing context in a case like this makes a statement "Half True," how in the world does PolitiFact allow itself to get away with the shenanigan PolitiFact writer Jon Greenberg pulled in his article on Sen. Elizabeth Warren's DNA test?

Greenberg (bold emphasis added):
Trump once said she had as much Native American blood as he did, and he had none. At a July 5 rally in Montana, he challenged her to take a DNA test.

"I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian," Trump said.

Trump now denies saying that, but in any event, Warren did get tested and the results did find Native American ancestry.
Trump said those words, but Greenberg's version of the quote leaves out more than half of Trump's sentence, as well as comments that came before. The full quotation makes it clear that Trump's million dollar challenge was presented as a potential future event--a hypothetical, in other words. In context, Trump was referring to a potential future challenge for Warren to take a DNA test as opposed to making the $1 million challenge at that moment.

PolitiFact takes Trump just as much, if not more, out of context as the Senate Leadership Fund did with McCaskill.

How does that kind of boundless hypocrisy pass the sniff test? Are the people at PolitiFact that accustomed to their own stench?


Afters

PolitiFact's "In Context" presentation of Trump's million-dollar challenge to Sen. Warren, confirming what we're saying about PolitiFact's Jon Greenberg ignoring the surrounding context (bole emphasis in the original):
(L)et's say I'm debating Pocahontas. I promise you I'll do this: I will take, you know those little kits they sell on television for two dollars? ‘Learn your heritage!’ … And in the middle of the debate, when she proclaims that she is of Indian heritage because her mother said she has high cheekbones — that is her only evidence, her mother said we have high cheekbones. We will take that little kit -- but we have to do it gently. Because we're in the #MeToo generation, so I have to be very gentle. And we will very gently take that kit, and slowly toss it, hoping it doesn't injure her arm, and we will say: ‘I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian.’
See also: Fact Checkers for Elizabeth Warren

Wednesday, October 17, 2018

Washington Free Beacon: "PolitiFact Retracts Fact Check ..."

Full title:

PolitiFact Retracts Fact Check After Erroneously Ruling Anti-Claire McCaskill Ad ‘False’

We were preparing to post about PolitiFact's crashed-and-burned fact check of the (Republican) Senate Leadership Fund's Claire McCaskill attack ad. But we noticed that Alex Griswold did a fine job of telling the story for the Washington Free Beacon.

Griswold:
In the revised fact check published Wednesday, PolitiFact announced that "after publication, we received more complete video of the question-and-answer session between McCaskill and a constituent that showed she was in fact responding to a question about private planes, as well as a report describing the meeting … We apologize for the error."

PolitiFact still only ruled the ad was "Half True," arguing that the Senate Leadership Fund "exaggerated" McCaskill's remarks by showing them in isolation. In full context, the fact checker wrote, McCaskill's remarks "seem to refer to ‘normal' users of private planes, not to ‘normal' Americans more generally."
Griswold's article managed to hit many of the points we made about the PolitiFact story on Twitter.


For example:

New evidence to PolitiFact, maybe. The evidence had been on the World Wide Web since 2017.

PolitiFact claimed it was "clear" from the short version of the town hall video that the discussion concerned commercial aviation in the broad sense, not private aircraft. Somehow that supposed clarity vanished with the appearance of a more complete video.


Read the whole article at the Washington Free Beacon.


We also used Twitter to slam PolitiFact for its policy of unpublishing when it notices a fact check has failed. PolitiFact, as a matter of stated policy, archives the old fact check and embeds the URL in the new version of the fact check, so no good reason appears to exist for delaying availability of the archived version. It's as easy as updating the original URL for the bad fact check to redirect to the archive URL.

In another failure of transparency, PolitiFact's archived/unpublished fact checks eliminate bylines and editing or research credits along with source lists and hotlinks. In short, the archived version of PolitiFact's fact checks loses a hefty amount of transparency on the way to the archive.

PolitiFact can and should do better both with its fact-checking and its policies on transparency.


Exit question: Has PolitiFact ever unpublished a fact check that was too easy on a conservative or too tough on a liberal?

There's another potential bias measure waiting for evaluation.

Tuesday, October 16, 2018

Fact Checkers for Elizabeth Warren

Sen. Elizabeth Warren (D-Mass.) provided mainstream fact checkers a great opportunity to show their true colors. Fact checkers from PolitiFact and Snopes spun themselves into the ground trying to help Warren excuse her self-identification as a "Native American."

Likely 2020 presidential candidate Warren has long been mocked from the right as "Fauxcahontas" based on her dubious claims of Native American minority status. Warren had her DNA tested and promoted the findings as some type of vindication of her claims.

The fact checkers did their best to help.


PolitiFact

PolitiFact ran Warren's report past four experts and assured us the experts thought the report was legitimate. But the quotations from the experts don't tell us much. PolitiFact uses its own summaries of the experts' opinions for the statements that best support Warren. Are the paraphrases or summaries fair? Trust PolitiFact? It's another example showing why fact checkers ought to provide transcripts of their interactions with experts.

Though the article bills itself as telling us what we can and cannot know from Warren's report, it takes a Mulligan on mentioning Warren's basic claim to minority status. Instead it emphasizes the trustworthiness of the finding of trace Native American inheritance.

At least the article admits that the DNA evidence doesn't help show Warren is of Cherokee descent. There's that much to say in favor of it.

But more to the downside, the article repeats as true the notion that Trump had promised $1 million if Warren could prove Native American ancestry (bold emphasis added):
At a July 5 rally in Montana, he challenged her to take a DNA test.

"I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian," Trump said.

Trump now denies saying that, but in any event, Warren did get tested and the results did find Native American ancestry.
Just minutes after PolitiFact published the above, it published a separate "In Context" article under this title: "In context: Donald Trump's $1 million offer to Elizabeth Warren."

While we do not recommend PolitiFact's transcript as any kind of model journalism (it leaves out quite a bit without using ellipses to show the omissions), the transcript in that article is enough to show the deception in its earlier article (green emphasis added, bold emphasis in the original):
"I shouldn't tell you because I like to not give away secrets. But let's say I'm debating Pocahontas. I promise you I'll do this: I will take, you know those little kits they sell on television for two dollars? ‘Learn your heritage!’ … And in the middle of the debate, when she proclaims that she is of Indian heritage because her mother said she has high cheekbones — that is her only evidence, her mother said we have high cheekbones. We will take that little kit -- but we have to do it gently. Because we're in the #MeToo generation, so I have to be very gentle. And we will very gently take that kit, and slowly toss it, hoping it doesn't injure her arm, and we will say: ‘I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian.’ And let’s see what she does. I have a feeling she will say no. But we’ll hold that for the debates.
Note that a very minor expansion of the first version of the Trump quotation torpedoes claims that Trump had already pledged $1 million hinging on Warren's DNA test results: "We will say." So PolitiFact's first story dutifully leaves it out and reinforces the false impression that Trump's promise was not a hypothetical.

Despite clear evidence that Trump was speaking of a hypothetical future situation, PolitiFact's second article sticks with a headline suggesting an existing pledge of $1 million--though it magnanimously allows at the end of the article that readers may draw their own conclusions.

It's such a close call, apparently, that PolitiFact does not wish to weigh in either pro or con.

Our call: The fact checkers (strike that: liberal bloggers) at PolitiFact contribute to the spread of misinformation.

Snopes

Though we think PolitiFact is the worst of the mainstream fact checkers, the liberal bloggers at Snopes outdid PolitiFact in terms of ineptitude this time.

Snopes used an edited video to support its claim that it was "True" Trump pledged $1 million based on Warren's DNA test.



The fact check coverage from PolitiFact and Snopes so far makes it look like Warren will be allowed to skate on a number of apparently false claims she made in the wake of her DNA test announcement. Which mainstream fact-checker is neutral enough to look at Warren's suggestion that she can legitimately cash in on Trump's supposed $1 million challenge?

It's a good thing we have non-partisan fact checkers, right?


Afters

Glenn Kessler, the Washington Post Fact Checker

The Washington Post Fact Checker, to our knowledge, has not produced any content directly relating to the Warren DNA test.

That aside, Glenn Kessler has weighed in on Twitter. Some of Kessler's (re)tweets have underscored the worthlessness of the DNA test for identifying Warren as Cherokee.

On the other hand, Kessler gave at least three retweets for stories suggesting Trump had already pledged $1 million based on the outcome of a Warren DNA test.




So Kessler's not joining the other two in excusing Warren. But he's in on the movement to brand Trump as wrong even when Trump is right.

Monday, October 15, 2018

Taylor Swift's Candidates Lag in Polls--PolitiFact Hardest Hit?

We noted pop star Taylor Swift's election endorsement statement drew the selective attention of the fact checkers (strike that: left-leaning bloggers) at PolitiFact.

We've found it hilarious over the past several days that PolitiFact has mercilessly pimped its Swiftian fact check on Twitter and Facebook.

Now with polls showing Swift's candidates badly trailing the Republican counterparts we can only wonder: Is PolitiFact the entity hardest hit by Swift's failure (so far) to make a critical difference in putting the Democrats over the top?


The Biggest Problem with PolitiFact's Fact Check of Taylor Swift

The Swift claim PolitiFact chose to check was the allegation that Tennessee Republican Marsha Blackburn voted against the Violence Against Women Act. We noted that PolitiFact's choice of topic, given the fact that Swift made at least four claims that might interest a fact checker, was likely the best choice from the liberal point of view.

Coincidentally(?), PolitiFact pulled the trigger on that choice. But as we pointed out in our earlier post, PolitiFact still ended up putting its finger on the scales to help its Democratic Party allies.

It's true Blackburn voted against reauthorizing the Violence Against Women Act (PolitiFact ruled it "Mostly True").

But it's also true that Blackburn voted to reauthorize the Violence Against Women Act.

Contradiction?

Not quite. VAWA came up for reauthorization in 2012. Blackburn co-sponsored a VAWA reauthorization bill and voted in favor. It passed the House with most Democrats voting in opposition.

And the amazing thing is that the non-partisan fact checkers (strike that: liberal bloggers) at PolitiFact didn't mention it. Not a peep. Instead, PolitiFact began its history of the reauthorization of the VAWA in 2013:
The 2013 controversy
The Violence Against Women Act was two decades old in 2013 when Congress wrestled with renewing the funds to support it. The law paid for programs to prevent domestic violence. It provided money to investigate and prosecute rape and other crimes against women. It supported counseling for victims.

The $630 million price tag was less the problem than some specific language on non-discrimination.

The Senate approved its bill first on Feb. 12, 2013, by a wide bipartisan margin of 78 to 22. That measure redefined underserved populations to include those who might be discriminated against based on religion, sexual orientation or gender identity.
Starting the history of VAWA reauthorization in 2013 trims away the bothersome fact that Blackburn voted for VAWA reauthorization in 2012. Keeping that information out of the fact check helps sustain the misleading narrative that Republicans like Blackburn are okay with violence against women.

As likely as not that was PolitiFact's purpose.



Thursday, October 11, 2018

This Is How Selection Bias Works

Here at PolitiFact Bias we have consistently harped on PolitiFact's vulnerability to selection bias.

Selection bias happens, in short, whenever a data set fails to serve as representative. Scientific studies often use random selection, or careful approximations of it, to help achieve a representative sample and avoid the pitfall of selection bias.

PolitiFact has no means of avoiding selection bias. It fact checks the issues it wishes to fact check. So PolitiFact's set of fact checks is contaminated by selection bias.
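A minimal sketch of the pitfall, with numbers that are purely our own illustrative assumptions: estimate how often a hypothetical politician's statements are false, first from a random sample and then from a sample drawn because the statements looked like checkable whoppers.

import random

random.seed(1)

# Hypothetical record: 1,000 statements, 20 percent of them actually false.
statements = [random.random() < 0.20 for _ in range(1000)]

# A random sample yields a roughly representative estimate of the false rate.
random_sample = random.sample(statements, 100)
print(f"Random sample estimate: {sum(random_sample) / 100:.0%} false")

# An editor hunting for checkable whoppers draws mostly from the false pool;
# the resulting "report card" no longer represents the underlying record.
false_pool = [s for s in statements if s]
true_pool = [s for s in statements if not s]
editorial_sample = random.sample(false_pool, 80) + random.sample(true_pool, 20)
print(f"Editorial sample estimate: {sum(editorial_sample) / 100:.0%} false")

The second estimate says nothing about the politician's overall record; it mostly reflects how the sample was chosen. That is the contamination we mean.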

Is PolitiFact's selection bias influenced by its ideological bias?

We don't see why not. And Taylor Swift will help us illustrate the problem.


PolitiFact looked at Swift's claim that Sen. Marsha Blackburn voted against the Violence Against Women Act. That fact check comes packed with the usual PolitiFact nonsense, such as overlooking Blackburn's vote in favor of VAWA in 2012. But this time our focus falls on PolitiFact's decision to look at this Swift claim instead of others.

What other claims did PolitiFact have to choose from? Let's have a look at the relevant part of Swift's statement:
I cannot support Marsha Blackburn. Her voting record in Congress appalls and terrifies me. She voted against equal pay for women. She voted against the Reauthorization of the Violence Against Women Act, which attempts to protect women from domestic violence, stalking, and date rape. She believes businesses have a right to refuse service to gay couples. She also believes they should not have the right to marry. These are not MY Tennessee values.
 Now let's put the different claims in list form:
  • Blackburn voted against equal pay for women.
  • Blackburn voted against the Reauthorization of the Violence Against Women Act
  • Blackburn believes businesses have a right to refuse service to gay couples
  • Blackburn also believes they should not have the right to marry
PolitiFact says it checks claims that make it wonder "Is that true?"

The first statement regarding equal pay for women makes a great candidate for that question. Congress hasn't entertained a vote opposing equal pay for women (for equal work) in many years. Equal pay for equal work has been the law of the land since the 1960s. Lilly Ledbetter Fair Pay Act? Don't make me laugh.

The second statement is a great one to check from the Democratic Party point of view, for the Democrats made changes to the VAWA with the likely intent of creating voter appeals based on conservative opposition to those changes.

The third statement concerns belief instead of the voting record, so that makes it potentially more challenging to check. On its face, Swift's claim looks like a gross oversimplification that ignores concerns about constitutional rights of conscience.

The fourth statement, like the third, involves a claim about belief. Also, the fourth statement would likely count as a gross oversimplification. Conservatives opposed to gay marriage tend to oppose same-sex couples asserting every legal advantage that opposite-sex couples enjoy.

PolitiFact chose its best candidate for finding the claim "True" instead of one more likely to garner a "False" rating. It chose the claim most likely to electorally favor Democrats.

Routinely choosing facts to check on that type of basis may damage the election prospects of those on the wrong end of the partisan story selection. People like Sen. Blackburn.

It's a rigged system when employed by neutral and nonpartisan fact checkers who lean left.

And that's how selection bias works.


Tuesday, October 2, 2018

Again: PolitiFact vs PolitiFact

In 2013, PolitiFact strongly implied (it might opine that it "declared") that its "Lie of the Year" award went to President Obama's promise that, under his health care overhaul, the Affordable Care Act, people could keep the plans they liked.

In 2018, PolitiFact Missouri (with editing help from longtime PolitiFacter Louis Jacobson) suffered acute amnesia about its 2013 "Lie of the Year" pronouncements.


PolitiFact Missouri rates Republican Josh Hawley's claim that millions of Americans lost their health care plans "Mostly False."

Yet in 2013 it was precisely the loss of millions of health care plans that PolitiFact advertised as its reason for giving Mr. Obama its "Lie of the Year" award (bold emphasis added):
It was a catchy political pitch and a chance to calm nerves about his dramatic and complicated plan to bring historic change to America’s health insurance system.

"If you like your health care plan, you can keep it," President Barack Obama said -- many times -- of his landmark new law.

But the promise was impossible to keep.

So this fall, as cancellation letters were going out to approximately 4 million Americans, the public realized Obama’s breezy assurances were wrong.
Hawley tried to use PolitiFact's finding against his election opponent, incumbent Sen. Claire McCaskill (D-Mo.) (bold emphasis added):
"McCaskill told us that if we liked our healthcare plans, we could keep them. She said the cost of health insurance would go down. She said prescription drug prices would fall. She lied. Since then, millions of Americans have lost their health care plans."

Because of the contradiction between Hawley’s assertion and the promises of the ACA to insure more Americans, we decided to take a closer look.
So, despite the fact that PolitiFact says millions lost their health care plans and the breezy assurance to the contrary was wrong, PolitiFact says it gave Hawley's claim a closer look because it contradicts assurances that the ACA would insure more Americans.

Apparently it doesn't matter to PolitiFact that Hawley was specifically talking about losing health care plans and not losing health insurance completely. In effect, PolitiFact Missouri disavows any knowledge that the promise "if we liked our healthcare plans, we could keep them" was a false promise. The fact checkers substitute loss of health insurance for the loss of health care plans and give Hawley a "Mostly False" rating based on their own fallacy of equivocation (ambiguity).

A consistent PolitiFact could have performed this fact check easily. It could have looked at whether McCaskill made the same promise Obama made. And after that it could have remembered that it claimed to have found Obama's promise false along with the reasoning it used to justify that ruling.

Instead, PolitiFact Missouri delivers yet another outstanding example of PolitiFact inconsistency.



Afters:

Do we cut PolitiFact Missouri a break because it was not around in 2013?

No we do not.

Exhibit 1: Louis Jacobson, who has been with PolitiFact for over 10 years, is listed as an editor.

Exhibit 2: Jacobson, beyond a research credit on the "Lie of the Year" article we linked above, wrote a related fact check on the Obama administration's attempt to explain its failed promise.

There's no excuse for this type of inconsistency. But bias offers a reasonable explanation for this type of inconsistency.



Tuesday, September 25, 2018

Thinking Lessons

Our post "Google Doesn't Love Us Anymore" prompted a response from the pseudonymous "Jobman."

Nonsensical comments are normally best left unanswered unless they are used for instruction. We'll use "Jobman's" comments to help teach others not to make similar mistakes.

"Jobman" charged that our post misled readers in two ways. In his first reply "Jobman" offer this explanation of the first of those two allegedly misleading features:

This post is misleading for two reasons, 1. Because it implies that google is specifically down-ranking your website. (Yes, it still does, even if your little blurb at the bottom tries to tell otherwise. "One of the reasons we started out with and stuck with a Blogger blog for so long has to do with Google's past tendency to give priority to its own." and "But we surmise that some time near the 2016 election Google tweaked its algorithms in a way that seriously eroded our traffic" Prove this point)
We answered that "Jobman" contradicted his claim with his evidence.


Lesson One: Avoid the Non Sequitur

"Jobman" asserts that our post implies Google specifically downranked the "PolitiFact Bias" website. The first evidence he offers is our statement that in the past Google gave priority to its own. Google owns Blogger and could be depended on to rank a Blogger blog fairly quickly. What does that have to do with specifically downranking the (Blogger) website "PolitiFact Bias"? Nothing. We offered it only as a reason we chose and continued with Blogger. Offering evidence that doesn't support a claim is a classic example of a non sequitur.
  • Good arguments use evidence that supports the argument, avoiding non sequiturs.

Lesson Two: Looking Up Words You May Not Understand Can Help Avoid Non Sequiturs

"Jobman" offered a second piece of evidence that likewise counted as a non sequitur. We think "Jobman" doesn't know what the term "surmise" means. Not realizing that "surmise" means coming to a conclusion based on reasoning short of proof might lead a person to claim that one who claims to have surmised something needs to provide proof of that thing. But that's an obvious non sequitur for a person who understands that saying one "surmised" communicates the idea that no proof is offered or implied.
  • Make sure you understand the other person's argument before trying to answer or rebut it. 

Lesson Three: Understand the Burden of Proof

In debate, the burden of proof falls on the person asserting something. In non-debate contexts, it falls on anyone who wants another person to accept what they say. In the present case, "Jobman" asserted, without elaborating, that two parts of our post sent the message that Google deliberately downranked "PolitiFact Bias." It turns out he was wrong, as we showed above. But "Jobman" showed little understanding of the burden of proof concept with his second reply:
The evidence that I point to doesn't contradict what I say. Yes, that's my rebuttal. You haven't proven that It does contradict what I say. Maybe try again later?
Who is responsible for showing that what we wrote doesn't mean whatever "Jobman" thinks it means? "Jobman" thinks we are: if he thinks what we wrote means X, then it means X unless we can show otherwise. That's a classic case of the fallacy of shifting the burden of proof. The critic is responsible for supporting his own case before his target needs to respond.

"Jobman" added another example of this fallacy in his second reply:
Your title, "Google doesn't love us anymore" and contents of your post prove that you believe that Google somehow wants to push your content lower, yet you give no evidence for this.
"Jobman" says "Google doesn't love us anymore" means X (Google somehow wants to push our content lower). And "Jobman" thinks the burden rightly falls on us to show that "Google doesn't love us anymore" means ~X, such as simply saying Google downranked the site. "Jobman" thinks we are responsible for proving that Google somehow wants to push our content lower even if we already said that we did not think that is what Google did.

That's a criminal misunderstanding of the burden of proof.
  • Making a good argument involves understanding who bears the burden of proof.

Lesson Four: Strive For Coherence & Lesson Five: Avoid Creating Straw Men

In his second reply "Jobman" suggested that we brushed off our lack of evidence (evidence for a point we were not making!) by claiming we were not making that point.
Then, since you don't have any evidence, you try to brush it off and say "This post isn't about google targeting us" When every part of your post says otherwise.
With that last line we think perhaps "Jobman" meant to say "every part of your post says otherwise except for the part that doesn't." Even then, "Jobman" obviously overestimates how much of the post says otherwise.

His incoherence is palpable. And given that we specifically said we were not claiming Google specifically targeted the PolitiFact Bias site, a critic needs an incredibly good argument to claim that we were arguing the opposite of what we argued. "Jobman" does not have that. He has a straw man fallacy supported only by his own non sequiturs.
  • It's a good idea to review your argument to make sure you don't contradict yourself.
  • Resist the temptation to argue against a distortion of the other person's argument. That path leads to the straw man fallacy.

Lesson Three Review: Understand the Burden of Proof

The burden of proof falls on the one claiming something in the debate context, or on anyone who wants somebody else to believe something in everyday life.
When you claim that Google has made changes that have negatively impacted your website, you DO have to prove that. For now, I'll just dismiss your claim entirely until you provide evidence that google has made these changes, and that your website was previously ranked on the top of the list.
We said we surmised that Google's tweaking of its algorithms resulted in the downranking. As noted earlier, "Jobman" apparently thinks that claiming something while admitting it isn't proven obligates the claimant to prove the claim. By contrast, claiming to have proof carries with it the natural expectation that one may obtain that proof by asking. Recognizing when proof is claimed and when it isn't helps prevent mistakes in assigning the burden of proof.

In fact, the PFB post does offer evidence short of proof in the form of screenshots showing top-ranked searches from Bing and DuckDuckGo along with a much lower ranking from Google. Specific evidence of the Google downranking comes from our reports of past observations of a consistent top ranking. Evidence of Google tweaking its algorithms is not hard to find, so the argument in our post counted that as common knowledge for which the average reader would require no proof. And we could expect others to research the issue if they questioned it.

As for the promise to dismiss our claims for lack of proof, that is the prerogative of every reader, no matter the literature. Readers who trust us will tend to accept our claims about our Google rank. Others can judge based on our accuracy on other matters. Still others will use the "Jobman" method. That's up to the reader. And that's fine with us.
 

Lesson Five Review: Avoid Creating Straw Men

It was news to us that we posted the Bing and DuckDuckGo search results to prove Google is specifically biased against the PolitiFact Bias website. We thought we were showing that we rank No. 1 on Bing and DuckDuckGo while ranking much lower on Google.

We suppose "Jobman" will never buy that explanation:

Every single web indexing website in the history of the internet has had the purpose of putting forth the most relevant search results. You could prove that by literally googling anything, then saying "'X' Irrelevant thing didn't show up on the search results", but you compared search results of google and other search engines In order to convey the theme that google is somehow biased in their web searches because your website isn't at the top for theirs.
All search engines are biased toward their managers' vision of relevant search results. The bias at Bing and DuckDuckGo is friendlier to the PolitiFact Bias website than the bias at Google.

"Jobman" finished his second reply by telling us about ways we could improve our website's page rank without blaming Google for it. If that part of his comment was supposed to imply that we blame our website traffic on Google, that's misleading. 

Obviously, though, it's true that if Google gave us the same rank we get from Bing and DuckDuckGo we would probably enjoy healthier traffic. The bulk of our traffic comes from Google referrals, and we would expect a higher ranking to result in more of those referrals.

Like we said in the earlier PFB post, it comes down to Google's vision of what constitutes relevance. And clearly that vision, as the algorithm expresses it, is not identical to the ones expressed in the Bing and DuckDuckGo algorithms.

We did not and do not argue that Google targeted "PolitiFact Bias" specifically for downranking. Saying otherwise results in the creation of a straw man fallacy.




Note: "Jobman" has exhausted his reply privileges with the second reply that we quoted extensively above. He can take up the above argument using a verifiable identify if he wishes, and we will host comments (under other posts) he submits under a different pseudonym. Within limits.

Sunday, September 16, 2018

Google doesn't love us anymore

One of the reasons we started out with and stuck with a Blogger blog for so long has to do with Google's past tendency to give priority to its own.

It took us very little time to make it to the top of Google's search results for Web surfers using the terms "PolitiFact" and "bias."

But we surmise that some time near the 2016 election Google tweaked its algorithms in a way that seriously eroded our traffic. That was good news for PolitiFact, whose fact checking efforts we criticize and Google tries to promote.

And perhaps "eroded" isn't the right word. Our traffic pretty much fell off a cliff between the time Trump won election and the time Trump took office. And it coincided with the Google downranking that occurred while the site was enjoying its peak traffic.

We've found it interesting over the past couple of years to see how different search engines treat a search for "PolitiFact bias." Today's result from Microsoft's Bing search engine was a pleasant surprise. Our website was the top result, and the site was highlighted with an informational window.

The search result even calls the site "Official Site." We're humbled. Seriously.



What does the same search look like on Google today?

Ouch:



"Media Bias Fact Check"? Seriously?

Dan flippin' Bongino? Seriously?

A "PolitiFact" information box to the upper right?

The hit for our site is No. 7.

It's fair to charge that we're not SEO geniuses. But on the other hand we provide excellent content about "PolitiFact" and "bias." We daresay nobody has done it better on a more consistent basis.


DuckDuckGo




DuckDuckGo is gaining in popularity. It's a search engine that markets itself on not tracking users' searches. So we're No. 1 on Bing and DuckDuckGo but No. 7 on Google.
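Since we've been comparing those rankings by eyeball, here's a minimal sketch (in Python) of how one might tabulate the same comparison. The result lists are made-up placeholders rather than real search output, the rank_of helper is our own, and the domain politifactbias.com is assumed for illustration; in practice you would substitute the URLs copied from each engine's results page.

```python
# Minimal sketch: find where a domain appears in an ordered list of
# search results. The URL lists below are made-up placeholders, not
# real output; the domain "politifactbias.com" is assumed.
from urllib.parse import urlparse

def rank_of(domain, result_urls):
    """Return the 1-based position of the first result hosted on
    `domain` (or a subdomain of it), or None if it never appears."""
    for position, url in enumerate(result_urls, start=1):
        host = urlparse(url).netloc.lower()
        if host == domain or host.endswith("." + domain):
            return position
    return None

# Hypothetical result lists for the query "PolitiFact bias".
results = {
    "Bing": ["https://www.politifactbias.com/",
             "https://example.com/other"],
    "DuckDuckGo": ["https://www.politifactbias.com/",
                   "https://example.com/other"],
    "Google": [f"https://example.com/result-{i}" for i in range(1, 7)]
              + ["https://www.politifactbias.com/"],
}

for engine, urls in results.items():
    print(f"{engine}: No. {rank_of('politifactbias.com', urls)}")
# Prints: Bing: No. 1, then DuckDuckGo: No. 1, then Google: No. 7
```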

It's not that we think Google is deliberately targeting this website. Google has some kind of vision of what it wants to end up high in its rankings and designs its algorithms to reach toward that goal. Sites like this one are "collateral damage," a "disparate impact" of that design.