Thursday, October 27, 2016

Promises, promises

What's more useless than PolitiFact's trademarked "Truth-O-Meter"? How about its device for rating presidential promises, the Obameter?

Years ago, we pointed out an absurd rating: PolitiFact gave President Barack Obama a "Promise Kept" for staying in office while the nation achieved on its own what Obama had proposed achieving through a "Renewable Portfolio Standard." Obama promised to change the RPS. PolitiFact gave him credit for keeping the promise before it had been kept.

In this new case, we will partly defend the president, albeit without doing him any favors.


This "Obameter" item is focused on the $2,500 promise, as PolitiFact separately failed on the promise to sign a health care bill providing universal coverage.

Obama fulfilled this promise in its literal sense. "Up to $2,500 a year" covers everything equal to or below $2,500 per year. Even if Obama increased costs to families by $5,000 per year, that would fulfill his promise to decrease costs to families by up to $2,500 per year.

Absurdly, Obama could break this promise only by saving a typical family more than $2,500 per year.
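To make the literal reading concrete, here is a minimal sketch of the "up to" test (our own illustration with hypothetical numbers, not anything PolitiFact published):

    # Literal reading of "decrease costs by up to $2,500 a year":
    # any annual savings at or below $2,500 satisfies it, including
    # negative "savings" (that is, a cost increase).
    def keeps_literal_promise(annual_savings):
        return annual_savings <= 2500

    print(keeps_literal_promise(-5000))  # True: costs rose $5,000, yet the literal promise holds
    print(keeps_literal_promise(1200))   # True
    print(keeps_literal_promise(3000))   # False: broken only by saving more than $2,500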

Obama's promise was a classic political promise because it didn't really mean anything while at the same time sounding like a wonderful promise. The president implied the typical family would save about $2,500 per year under the legislation he promised. The promise paints operations like PolitiFact into a corner. They can either grade the promise on its implied meaning, which PolitiFact did, or else admit Obama's promise effectively promised nothing.

Obama delivered on the empty literal promise, unquestionably.

PolitiFact deserves partial credit for highlighting Obama's failure to achieve the promise he implied. But a fact-checker can help arm the public against misleading campaign rhetoric by explaining the deception of an "up to" clause.

PolitiFact did not do that.

Wednesday, October 26, 2016

Adding an annotation to PolitiFact's annotation of the third 2016 presidential debate

Is it news that fact-checkers are far from perfect?

Behold, a screen capture from PolitiFact's annotated version of the third presidential debate, hosted at Medium. PolitiFact says you can't see it unless you follow PolitiFact on Medium. If our readers can't see it without following PolitiFact, then maybe PolitiFact is right about that (we have our doubts, too):



PolitiFact highlights Trump's claim that Clinton wants open borders. Hovering over an asterisk in the sidebar brings up a window showing PolitiFact's comment. PolitiFact says it rated Trump's claim that Clinton wants open borders "False."

Click the link and you eventually end up on PolitiFact's website, at its fact check of Trump's claim that Clinton wants open borders, where the claim is rated "Mostly False."


There's no editor's note announcing a change in the rating, so we assume that no issue of timing excuses PolitiFact for falsely reporting its own finding.

PolitiFact. The best of the best. Right?

Thursday, October 20, 2016

Demolition=construction? Yup, says PolitiFact

Bless PolitiFact's heart. Those fact-checking journalists just don't seem to realize that they're having trouble setting aside their biases. Unless they do realize it and wantonly lie in their fact checks.

This is actually the third in our series on PolitiFact's debate-night blogging. We broke with the tradition of mentioning that in the title to bring attention to PolitiFact's fundamental error in the case we will examine.

During the third debate, Democratic presidential candidate Hillary Rodham Clinton said her Republican opponent, Donald Trump, used undocumented workers to construct the Trump Tower in Manhattan.

Let's let PolitiFact's Linda Qiu tell it:
Clinton: "He used undocumented labor to build the Trump Tower."

This is True. Between 1979 and 1980, Trump hired a contractor to demolish a Manhattan building to make way for the eventual Trump Tower. That contractor in turn hired local union workers as well as 200 undocumented Polish workers to meet the tight deadlines.
According to Qiu and PolitiFact, demolishing a building is constructing a building. Or at least the two are not different enough to make a difference in the rating. We assume that if Clinton had said Trump used undocumented workers to demolish the building that once stood where Trump Tower now stands, the claim could rate no higher than "True" on PolitiFact's "Truth-O-Meter." By "Truth-O-Meter" standards, then, one version of the claim is no more accurate than the other.

We can't pass up the opportunity to remind our readers that PolitiFact prides itself on paying careful attention to the way politicians use words:
Words matter – We pay close attention to the specific wording of a claim. Is it a precise statement? Does it contain mitigating words or phrases?
That's a joke, right?

Contrary to PolitiFact, "demolition" and "construction" do not carry the same meaning. The construction of a new building will typically not start until after the complete demolition of the building occupying the site of the proposed new construction. If undocumented workers demolished the building the Trump Tower replaced, then they finished their work before construction of the Trump Tower began. If they finished their work before construction began, then it is misleading at best to say they helped construct the Trump Tower.

How can a fact checker botch something that obvious?

More notes on PolitiFact's debate night blogging

We don't have time to go completely through PolitiFact's debate-night blogging, but we'll keep picking out a few gems for comment as the week winds down.

Republican presidential candidate Donald Trump said Democratic presidential candidate Hillary Rodham Clinton wants open borders. A WikiLeaks release offered Trump's claim some support.

Observe how PolitiFact rationalizes calling Trump's claim "Mostly False":
In a brief speech excerpt from 2013, Clinton purportedly says, "My dream is a hemispheric common market, with open trade and open borders, some time in the future with energy that is as green and sustainable."

But we don’t have more context about what Clinton meant by "open borders" because she has not released the full speech. Her campaign has said she was talking about clean energy across the hemisphere.

We rated Trump’s claim Mostly False.
What other context is necessary to understand Clinton's comment? "Hemispheric common market" is pretty clear. "Open trade" is pretty clear. "Open borders" is pretty clear, particularly in the context of "hemispheric common market" and "open trade."

PolitiFact eventually falls back on "he said, she said" journalism by citing the Clinton campaign's explanation of her remarks: "Her campaign has said she was talking about clean energy across the hemisphere." So it was just about having "open borders" so we could trade clean energy in this hemisphere?

Does that even make any sense?

What kind of clean energy gets traded from one nation to another? Wind? Solar? Clean energy proponents bemoan barriers to investment, but what does "open borders" have to do with that?

PolitiFact is using the abbreviated context as a "get out of jail free" card for Clinton. The excerpt of her speech provides enough context to find Trump's claim at least "Half True."
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
In what way does the definition not fit, other than PolitiFact not knowing for sure Trump's statement is only partially accurate, or not knowing the statement was taken out of context?

Forgive us for pretending that PolitiFact's definitions for its ratings are not ultimately subjective.


Wednesday, October 19, 2016

Notes on PolitiFact's debate night blogging

It's the night of the third presidential debate, and PolitiFact is doing its so-called fact checking thing.

Democratic presidential candidate Hillary Rodham Clinton says Republican presidential candidate Donald Trump is the first major party nominee in 40 years not to release his tax returns. PolitiFact rules this "Mostly True" because there's only one exception out of 22.

Nearly 5 percent of major-party presidential nominees over the past 40 years did not release their tax returns.

That's right. There have been only 22 major-party presidential nominations in the past 40 years. Jimmy Carter, Ronald Reagan, George H. W. Bush, Bill Clinton, George W. Bush and Barack Obama were each nominated twice, so this mighty precedent touches just 16 distinct candidates. Saying "40 years" makes it seem like more.

Using 16 nominees for the calculation edges the percentage up to over 6 percent.
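For readers who want to follow the arithmetic, here is a minimal back-of-the-envelope sketch (our own figuring, not PolitiFact's):

    # Back-of-the-envelope arithmetic for the "first in 40 years" claim.
    nominations = 22          # 11 elections (1976-2016) x 2 major-party nominations
    repeat_nominees = 6       # Carter, Reagan, G.H.W. Bush, B. Clinton, G.W. Bush, Obama
    distinct_candidates = nominations - repeat_nominees   # 16

    exceptions = 1            # the one nominee who did not release tax returns
    print(round(exceptions / nominations * 100, 1))          # ~4.5 percent ("nearly 5 percent")
    print(round(exceptions / distinct_candidates * 100, 2))  # 6.25 percent ("over 6 percent")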

We're actually a little surprised PolitiFact didn't give Clinton a "True" rating, considering that her claim that she released all her emails, though off by about 30,000, still received a "Half True" rating.





Tuesday, October 18, 2016

Current Affairs: "Why PolitiFact’s 'True/False' Percentages Are Meaningless"

We're not sure how we missed this gem from Aug. 8, 2016 in Current Affairs magazine.



The author, Nathan J. Robinson, was prompted by the slew of misleading stories featuring PolitiFact "data" from its "report cards":
Scores of media outlets have used PolitiFact’s numbers to damn Trump. The Washington Post has cited the “amazing fact” of Trump’s lie rate, with bar charts showing the comparative frequency of his falsifications. The Week counted only those things deemed completely “True,” and thus concluded that “only 1 percent of the statements Donald Trump makes are true.” Similar claims have been repeated in the US News, Reason, and The New York Times

But all of these numbers are bunk. They’re meaningless. They don’t tell us that lies constitute a certain percentage of Trump’s speech. In fact, they barely tell us anything at all.
Robinson does a great job of hitting nearly every problem with PolitiFact's rating system. The article is somewhat long, but worth the investment in time.

Sunday, October 16, 2016

The problem with the "best chess fact-check ever written"

PolitiFact's founding editor Bill Adair, now a journalism professor at Duke University, used Twitter to heap praise on a recent PolitiFact fact check:




Here's the version from Share the Facts:



A fact checker ought to notice the problem right away. Indeed, the average reader likely sees a big hint about the problem in the Share the Facts version. The key part of the fact check is outside the quotation marks denoting what Republican presidential candidate Donald Trump said.

We presume the fact check more than adequately shows that the United States boasts multiple Grandmaster level chess players. We question whether PolitiFact established as fact that Trump said the United States has no Grandmaster chess players.

Here's what Trump said, via PolitiFact (bold emphasis added):
Trump was in the midst of criticizing international trade agreements, including the Trans-Pacific Partnership. He said he supports the idea of bilateral agreements, saying that such deals would make it possible for the United States to threaten to withdraw, then renegotiate on more favorable terms before the agreement expired.

Trump went on to say that with multilateral pacts like the TPP, "you can't terminate -- there's too many people, you go crazy. It's like you have to be a grand chess master. And we don't have any of them."
Before fact-checking this statement from Trump, one must figure out what he meant. Was he saying that the United States has nobody evaluating its trade deals with skills parallel to a chess Grandmaster? Or was he saying the United States boasts no citizens who have attained Grandmaster rank in chess?

If Trump had added something like "Bobby Fischer was the last one," it would have gone a long way toward confirming that Trump meant literal chess players. But how can a fact checker justify assuming the "them" in Trump's statement refers to literal chess players and not figurative ones involved in international trade on behalf of the United States?

PolitiFact routinely finds its way toward favoring one interpretation over others without bothering to acknowledge the other possibilities and without justifying its choice.

It's one approach to fact-checking that fact-checkers ought to avoid.

Is this fact check the best one ever on chess? If it's the only one, then we suppose we won't argue Adair's claim. But it's not a good political fact check if we value fairness, accuracy, relevance, and non-partisanship.

Wednesday, October 12, 2016

Quoting quotations that aren't

A comparison of two recent PolitiFact ratings involving Democrat Tim Kaine and Republican Mike Pence

Fresh from our post about PolitiFact overlooking Democrat Tim Kaine's inaccurate version of something Republican Mike Pence said, we stumbled over another PolitiFact item showing PolitiFact applying a different standard.

Pence, during the vice-presidential debate, defended a Trump statement from an attack by Kaine. PolitiFact allowed that Pence's defense was valid since it addressed a flaw in Kaine's attack. But PolitiFact also charged that Pence misquoted Trump.


We will draw attention to the exact words PolitiFact uses, because when PolitiFact assures us in its statement of principles that "words matter," we expect PolitiFact to live up to the standards it applies to others.
Mike Pence was right to defend Donald Trump against critics who claim Trump characterized all Mexicans as rapists. But he's wrong to quote Trump as saying "many are good people."
In the case we evaluated hours ago, PolitiFact did not accuse Tim Kaine of misquoting Pence even though Kaine switched out a word from Pence's statement that changed the meaning.

How does PolitiFact judge when one person is quoting another?


Let's try a comparison between these two cases.

Kaine:

But Gov. Pence said, inarguably, Vladimir Putin is a better leader than President Obama.
Pence:
PENCE: He also said and many of them are good people. You keep leaving that out of your quote. And if you want me to go there, I’ll go there.
We used The New York Times' transcript of the vice-presidential debate because PolitiFact had three different versions of Pence's statement. The first one occurs in the article header and has an open quotation mark when Pence says "And":
"also said, ‘And many of them (Mexicans) are good people. You keep leaving that out of your quote."
The second one occurs in the body of the story and repeats the open quotation mark. But it adds a closed quotation mark after "people":
"He also said, ‘And many of them are good people.’ You keep leaving that out of your quote."
The third one occurs in PolitiFact's concluding paragraphs and has no quotation marks within the quotation:
"also said and many of them (Mexicans) are good people. You keep leaving that out of your quote."
The New York Times' version is correct, as is the third version from PolitiFact. The Times' version is correct because AP style reserves quotation marks for precise quotations, not paraphrases or summaries. It follows that AP style forbids a quotation within a quotation where the inner quotation is not exact. The New York Times follows that rule.

In practice, that means a person writing in AP style can justify using punctuation to create a quotation within a quotation only when the speaker clearly indicates it is intended as a quotation ("And I quote ...") or when the writer confirms the quotation meets the style guideline's demand for accuracy. To illustrate, the punctuation for "Paul said 'Put the dandelions on the plate'" should ordinarily occur only when it is known that Paul said "Put the dandelions on the plate." Otherwise, the writer ought to assume the speaker is paraphrasing or summarizing: Paul said put the dandelions on the plate.

PolitiFact, judging from its three different attempts at punctuation, had trouble figuring out what to do with Pence's statement. Punctuating Pence's sentence to make a quotation within a quotation would certainly lend an air of authority to its fact check. Anyone with eyes would clearly see from that version that Pence was trying to quote Trump precisely. So if Pence got the quotation wrong, then PolitiFact could justifiably criticize him for it.

Objective fact checkers do not assume quotation if paraphrase or summary better fits the context. If it is possible the words were meant as a paraphrase or summary of another source, then the punctuation should reflect it, as in the version The New York Times published.

Substandard standards

PolitiFact had before it two cases where a candidate spoke of something another person said, where the words used did not match the original words.

Kaine was off by just one word, but PolitiFact did not punctuate Kaine's statement to make a quotation within a quotation.

Pence changed a word and left out two words, but PolitiFact punctuated Pence's statement to make a quotation within a quotation.

Why the difference in treatment?

Kaine's substitution of "better" for "stronger" in his paraphrase of Pence substantially changed Pence's meaning, as we explained in our earlier post.

Pence's substitution of "many" for "some" slightly changed Trump's meaning. Pence's paraphrase was objectively better justified than Kaine's.

PolitiFact did not penalize Kaine for misquoting Pence, for its story did not treat Kaine's words as a quotation of Pence.

PolitiFact charged Pence with misquoting Republican presidential candidate Donald Trump.

Kaine received a "Mostly True" rating from PolitiFact.

Pence received a "Half True" rating from PolitiFact.

Pence's statement was more accurate, yet PolitiFact gave it a lower rating than Kaine's.

The only standard we can imagine that justifies these outcomes is the "Republicans lie more" standard. But an objective fact checker should reason from the outcome of numerous fact checks toward the conclusion "Republicans lie more," not reason from "Republicans lie more" to assigning lower fact check ratings for Republicans.

The latter represents a naked expression of ideological bias.

'Stronger'='Better'? Pretty much, says PolitiFact

PolitiFact continues to determinedly destroy whatever credibility it has outside its group of left-wing devotees.

Our latest example consists of PolitiFact's fact check of Tim Kaine from Oct. 5, 2016. Kaine said his vice-presidential debate opponent, Mike Pence, had said Vladimir Putin was a better leader than President Obama.

PolitiFact's cutesy-and-misleading video short captures the moment. Well, one of the moments:


Kaine used the same line twice. PolitiFact did well to report both instances, along with providing the broader context of the first instance:
At one point, Kaine said, "Hillary also has the ability to stand up to Russia in a way that this ticket does not. Donald Trump, again and again, has praised Vladimir Putin. … Gov. Pence made the odd claim — he said, inarguably, Vladimir Putin is a better leader than President Obama. Vladimir Putin has run his economy into the ground. He persecutes LGBT folks and journalists. If you don't know the difference between dictatorship and leadership, then you got to go back to a fifth-grade civics class."

Kaine hammered the point again later in the debate.

"Well, this is one where we can just kind of go to the tape on it. But Gov. Pence said, inarguably, Vladimir Putin is a better leader than President Obama."
It turned out Pence had said "stronger," not "better." Kaine had the wording of the quotation down precisely aside from that key word, making sure, both times he misquoted Pence, to get the word "inarguably" right.

Is it a big deal to get the key term wrong? Not so much, according to PolitiFact. PolitiFact rated Kaine's claim "Mostly True":
Pence did say something very similar -- but not exactly as Kaine said. Pence had said that Putin "has been a stronger leader in his country than Barack Obama has been in this country." However, "stronger" is not identical to "better." We rate the statement Mostly True.
Not identical? Certainly not. And certainly not in the context in which Kaine presented the claim. Remember the examples Kaine gave to show the oddness of Pence's claim?

Putin ran the Russian economy into the ground.

It's not "better" to run an economy into the ground, is it? But bucking the West and annexing Crimea despite Western sanctions takes strong leadership. Strong, yes. Better, no.

Persecutes LGBT folks and journalists

It's not "better" to persecute LGBT folks and journalists. But doing so while maintaining high public approval ratings (over 84 percent) shows strength of leadership. Strong, yes. Better, no.

PolitiFact's ratings game

Kaine fully exploited the difference between "stronger" and "better" the way a skilled liar would. But PolitiFact dropped his rating only to "Mostly True" because he literally changed the word Pence had used.

Kaine would have been taking Pence out of context even if he had quoted Pence correctly. His examples saw to that.

Eeny-Meeny-Miny-Moe (red emphasis added):
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.
Is it a critical fact that "stronger" and "better" have different meanings?

Did Kaine provide a context for "better" that differed from the context for "stronger" offered by Pence?

Is the statement "accurate" if the key word is the wrong word?

Do PolitiFact's definitions for its "Truth-O-Meter" ratings mean anything at all?

Make-Believe: A world where PolitiFact's definitions mean what they say

If PolitiFact's definition of "Mostly True" were taken literally, then Kaine's statement could not receive a "Mostly True" rating. Kaine's version of what Pence said used a different word from the one Pence had used. That makes Kaine's version inaccurate. If accurate and inaccurate do not mean the same thing, then "Mostly True" cannot fit Kaine's claim.

How about "Half True"? Kaine's paraphrase of Pence was not wildly off. "Stronger" and "Better" have some overlap in meaning, and otherwise Kaine got the words right. Kaine's statement could pass as "partially accurate." Kaine also took things out of context, which fits the description of "Half True."

And what about "Mostly False"? Kaine's statement could also qualify as having an element of truth. Most of the words he attributed to Pence were right, though he switched out "stronger" for "better." Was that change a critical fact, given how the terms differ in meaning? Arguably so. Kaine's statement thus also fits the definition of "Mostly False." Which rating fits better amounts to a subjective judgment. The definitions overlap, like the definitions of "stronger" and "better."

If PolitiFact's definitions were taken literally, Kaine's rating would be a subjective coin flip between "Half True" and "Mostly False." That PolitiFact can bend its definitions to apply the rating for accurate statements to inaccurate statements, like Kaine's, shows that PolitiFact puts even more subjectivity in its ratings than its fuzzy definitions demand.

Coincidentally, the Democrat gained the benefit this time. It's a pattern.

Saturday, October 8, 2016

The Weekly Standard: "Fact Checking the 'Fact Checkers'"

The Weekly Standard's "The Scrapbook" section this week highlighted the work of PolitiFact, the objective and nonpartisan fact checker that we so enjoy exposing for its lack of both objectivity and nonpartisanship.

The article reviews a few of PolitiFact's recent newsworthy problems, including the conflict of interest it showed in defending the Clinton Foundation with help from a grant from another nonprofit that shares a major donor.

It's worth a read. Here's a taste, from the conclusion:
PolitiFact publishes enough fact checks that it no doubt gets some right. But whether as a result of bias, incompetence, dubious financial incentives, or perhaps all of the above, PolitiFact has taken a wrecking ball to its reputation. It should be ignored altogether, but so long as PolitiFact remains a useful vehicle for applying a veneer of credibility to politicized judgments, the rest of the media will no doubt continue to cite it as an authority and use it as a cudgel.

Friday, October 7, 2016

PolitiFact and the Trump tax return promise

The liberal bloggers at PolitiFact have weighed in on whether Republican presidential candidate Donald Trump has broken his promise to release his tax returns. Trump, PolitiFact says, has broken his promise. We know PolitiFact says this thanks to the "False" rating it gave Trump's running mate, Mike Pence, for saying Trump has not broken his promise.



There's a problem for PolitiFact here. Pence is right, at least if PolitiFact is giving us the right version(s) of Trump's promise. PolitiFact provided no versions of the promise with any deadline attached.

A promise made with no deadline for keeping the promise is never broken, unless we count death as a deal breaker.

PolitiFact has apparently confused not keeping a promise with breaking a promise.

So long as Trump has not released his tax returns, he has not kept his promise to release his tax returns. But lacking a deadline for keeping the promise, until Trump has passed up every opportunity he will ever have to keep the promise, he has not broken the promise.

It's a simple matter of logic.

But, but, but, but ....


Trump implied that he would release his returns before election day!

Yes, perhaps so. It's an arguable point. But until election day rolls around without Trump having released his tax returns, Trump has not broken his promise. And Pence is right to say so.


It's a simple matter of logic.



So why does PolitiFact struggle so with simple matters of logic?

Thursday, October 6, 2016

A fact checker response to poll showing low trust in fact-checkers

On Oct. 1, 2016, we shared news of a Rasmussen Reports survey showing that most Americans do not trust fact-checking.

The Poynter Institute's* Alexios Mantzarlis tweeted about it promptly, but other than that we saw no response from fact checkers. Today, however, "The Week in Fact-Checking" newsletter (sent out by Mantzarlis and Jane Elizabeth) has a minor mention of the survey:
A Rasmussen poll says that most voters (who were asked this arguably "leading" question) don't trust media fact-checking, but a SurveyMonkey poll says many voters consulted a fact-checking site during the first presidential debate.
We agree the question Rasmussen posed was far from perfect. It placed trust in fact checkers in opposition to the idea that journalists skew the news. Asking respondents to rate their trust in fact-checkers on a scale would have given better information. Asking about trust in specific fact checkers would have given even better information.

That said, we don't see the SurveyMonkey poll as any kind of contradiction to the Rasmussen survey. SurveyMonkey found 7 percent visited a fact-checking site during the debate, 19 percent visited one after the debate, and 9 percent visited both during and after. Finally, 64 percent did not visit a fact-checking website.

With the SurveyMonkey poll we do not get information about whether people visiting fact-checking websites trust the websites they visit. We often visit fact-checking websites. It does not mean we trust them. And the same goes for people who did not visit fact-checking websites. Maybe they trust the websites but figure they know enough to judge the debate without getting help from a fact checker.

The Rasmussen survey is the best gauge we have at this point of public trust in fact checkers. While we look forward to better polling on the issue, Rasmussen gives us a fairly clear picture that the fact checkers have ground to make up in building trust with the public.

We do not think their current methods, particularly those of PolitiFact, will build trust from the public.

We're not surprised that the fact-checking community, and we count Elizabeth and Mantzarlis in that group, does not appear eager to address a broad lack of trust in its movement.


*The Poynter Institute owns PolitiFact through the Tampa Bay Times.

Saturday, October 1, 2016

Rasmussen Reports: Except for Clinton voters, people distrust fact checkers (Updated)

Hat tip to Power Line blog for highlighting the Rasmussen survey and thereby bringing it to our attention.

A couple of days ago, I emailed Jeff D. sharing what I felt was one of the good things to come out of this election season: "(T)he media have allowed the mask to slip as perhaps it never has before."

The same type of thinking was apparently happening at Power Line blog at about the same time, leading to a post about media credibility on Sept. 30, 2016:
If this year’s presidential election has a silver lining, it is the final demise of “mainstream media.” Which is not to say that liberal media are going away; they aren’t, of course. But liberal media’s claim to being mainstream–reliable, objective, fair, unlike fringe or partisan news sources–is gone forever. That is a good thing.
Making the good thing even better, the Power Line post shared some details about a new Rasmussen Reports survey showing that people do not trust media fact checkers. But there's an exception. Among Clinton voters, 59 percent trust the fact-checkers:
Eighty-eight percent (88%) of voters who support Trump in the presidential race believe news organizations skew the facts, while most Clinton backers (59%) trust media fact-checking. Among the supporters of Libertarian Gary Johnson and Green Party candidate Jill Stein, sizable majorities also don’t trust media fact-checking.

These findings are no surprise given that voters think it's far more likely reporters will try to help Clinton than Trump this election season
Rasmussen makes the percentages of Johnson and Stein supporters who trust fact checkers available to its platinum subscribers. We'd report the numbers if Rasmussen had published them.

This finding ought to serve as a wake-up call to media fact checkers. If a relatively slim majority of one party's voters place their trust in you while the others do not, there exists a fundamental problem of credibility. Yet credibility is the only currency for fact-checkers.

Though PolitiFact Bias is not widely read (yet), we think the survey shows most people see, at least to some extent, the same problems we see with PolitiFact and its fact-checking cohorts.

The fact checkers need to find out about this trust gap and figure out how to shrink it.

May we suggest they start by visiting PolitiFact Bias for a few ideas?


Update Oct. 2, 2016

We missed a version of the story from The Hill, which reports that less than a third of Americans trust fact checkers:
A survey of likely voters by Rasmussen Reports shows just 29 percent trust media fact-checking of candidates, while 62 percent believe news organizations twist the facts to help candidates whom they appear to support.
We would love to see a survey that compared public trust in each of the mainstream fact checkers.

So far, we haven't noticed any fact checker acknowledging this story. Perhaps that will change on Monday.