
Wednesday, November 14, 2018

PolitiFact misses obvious evidence in Broward recount fact check

On Nov. 13, 2018 PolitiFact's "PunditFact" brand issued a "Pants on Fire" rating to conservative Ken Blackwell for claiming Democrats and their allies were manufacturing voters in the Florida election recount.


The problem?

PolitiFact somehow overlooked obvious evidence reported in the mainstream media. The Tampa Bay Times, which owned PolitiFact before its transfer to the nonprofit Poynter Institute, published a version of the story:
Broward's elections supervisor accidentally mixed more than a dozen rejected ballots with nearly 200 valid ones, a circumstance that is unlikely to help Brenda Snipes push back against Republican allegations of incompetence.

The mistake — for which no one had a solution Friday night — was discovered after Snipes agreed to present 205 provisional ballots to the Broward County canvassing board for inspection. She had initially intended to handle the ballots administratively, but agreed to present them to the canvassing board after Republican attorneys objected.
The Times story says counting the 205 provisional ballots resulted in at least 20 illegal votes ending up in Broward County's vote totals.

The Times published its story on Nov. 10, 2018.

PolitiFact/PunditFact published its fact check on Nov. 13, 2018 (2:24 p.m. time stamp). The fact check contains no mention at all that Broward County included invalid votes in its vote totals.

Instead, PolitiFact reporter John Kruzel gives us the breezy assurance that neither he nor the state found evidence supporting Blackwell's charge.
Our ruling

Blackwell said, "Democrats and their allies (...) are manufacturing voters."

We found no evidence, nor has the state, to support this claim. Blackwell provided no evidence to support his statement.

We rate this Pants on Fire.
Inconceivable, you say?




Saturday, November 3, 2018

PolitiFact's Liberal Tells for $400, Alex

When PolitiFact released the results of a language inventory it commissioned of its own work, we were not surprised that the researchers found no clear evidence of biased language. PolitiFact's bias shows mostly in its choice of stories, accompanied by bias in the execution of the fact checks.

But ...

On Oct. 31, 2018 PolitiFact Editor Angie Drobnic Holan published an article on the top election issues for 2018 and promptly stepped in it:
PolitiFact has been monitoring and fact-checking the midterm campaigns of 2018 in races across the country. We’ve seen common themes emerge as the Democrats and Republicans clash. Here’s a look at what we’ve found to be the top 10 storylines of the 2018 contests. (We provide short summaries of our fact-checks here; links will take you to longer stories with detailed explanations and primary sources.)

1. Fear of immigration
We'll explain to Holan (and the audience) the right way to identify immigration as an election issue without employing biased language:
1. Immigration
It's pretty easy.

Use "Fear of immigration" and the language communicates a lean to the left. Something like "Inadequate border security" might communicate the opposite (no danger of that from PolitiFact!).

Others from Holan's list of 10 election topics may also qualify as biased language. But this one is the most obvious. "Fear of immigration" is how liberals imagine conservatives reach the conclusion that securing the border and controlling immigration count as good policy.

PolitiFact's claim to non-partisanship is a gloss.

Tuesday, October 30, 2018

PolitiScam: It's Not What You Say, It's How PolitiFact Frames It

PolitiFact's Oct. 29, 2018 fact check of President Trump gave us yet another illustration of PolitiFact's inconsistent application of its principles.

On the same day as a shooting at a Pittsburgh synagogue that left multiple people dead, Mr. Trump justified not canceling his campaign appearances by saying that acts of terror should not alter daily business. Trump used the New York Stock Exchange as his example, saying it opened the day after the Sept. 11, 2001 terrorist attacks.

But it didn't open the next day. Trump was flatly wrong.


PolitiFact:


Note that PolitiFact spins its description. PolitiFact says Trump claimed he did not cancel the political rally simply because the NYSE opened the day after Sept. 11. But the NYSE opening was simply an example of the justification Trump was using.

This case involving Trump carries a parallel to a fact check PolitiFact did in 2008 of then-presidential candidate Barack Obama. Both Trump and Obama made false statements. But PolitiFact found a way to read Mr. Obama's false statement favorably:


Obama claimed his uncle helped liberate Auschwitz. But Obama's uncle was never particularly close to Auschwitz. That most famous of the concentration camps was located in Poland, not Germany, and was liberated by troops from the Soviet Union.

One might well wonder how Obama received a "Mostly True" rating for a relatively outrageous claim.


PolitiFact Framing to the Rescue!

It was very simple for PolitiFact to rehabilitate Mr. Obama's claim about his uncle. The uncle was a real person, albeit an uncle in the broad sense, and he did serve with American troops who helped liberate a less-well-known concentration camp near Buchenwald, Germany.

PolitiFact explains in its summary paragraph:
There's no question Obama misspoke when he said his uncle helped to liberate the concentration camp in Auschwitz.

But even with this error in locations, Obama's statement was substantially correct in that he had an uncle — albeit a great uncle — who served with troops who helped to liberate the Ohrdruf concentration/work camp and saw, firsthand, the horrors of the Holocaust. We rate the statement Mostly True.
See? Easy-peasy. The problem? It's pretty much just as easy to rehabilitate the claim Trump made:
There's no question Trump misspoke when he said the NYSE opened the day after Sept. 11.

But even with his error about the timing, Trump was substantially correct that the NYSE opened as soon as it feasibly could following the Sept. 11 terrorist attacks. The NYSE opened the following week not far from where the twin towers collapsed.
PolitiFact used only two sources on the reopening of the NYSE, and apparently neither provided the depth of the Wall Street Journal article we linked. Incredibly, PolitiFact also failed to link the articles it used. The New York Times story it used was available on the internet. Instead, the sources carry notes that say "accessed via Nexis."

All it takes to adjust the framing of  these fact check stories is the want-to. Trump was off by a week. Obama was off by a country. Both had underlying points a fact checker could choose to emphasize.

These fact checkers do not have objective standards for deciding how to frame fact checks.


Related: "Lord knows the decision about a Truth-O-Meter rating is entirely subjective"

Monday, October 22, 2018

PolitiFact: One Standard For Me and Another For Thee 2

PolitiFact executed another of its superlative demonstrations of hypocrisy this month.

After PolitiFact unpublished its botched fact check about Claire McCaskill and the affordability of private aircraft, it published a corrected (?) fact check changing the rating from "False" to "Half True." Why "Half True" instead of "True"? PolitiFact explained it gave the "Half True" rating because the (Republican) Senate Leadership Fund failed to provide adequate context (bold emphasis added).
The Senate Leadership Fund says McCaskill "even said this about private planes, ‘that normal people can afford it.’"

She said those words, but the footage in the ad leaves out both the lead-in comment that prompted McCaskill’s remark and the laughter that followed it. The full footage makes it clear that McCaskill was wrapping up a policy-heavy debate with a private-aviation manager and with a riff using the airport manager’s words. In context, he was referring to "normal" users of private planes, as opposed to "normal" Americans more generally.

We rate the statement Half True.
Let's assume for the sake of argument that PolitiFact is exactly right (we don't buy it) in the way it recounts the problems with the missing context.

Assuming the missing context in a case like this makes a statement "Half True," how in the world does PolitiFact allow itself to get away with the shenanigan PolitiFact writer Jon Greenberg pulled in his article on Sen. Elizabeth Warren's DNA test?

Greenberg (bold emphasis added):
Trump once said she had as much Native American blood as he did, and he had none. At a July 5 rally in Montana, he challenged her to take a DNA test.

"I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian," Trump said.

Trump now denies saying that, but in any event, Warren did get tested and the results did find Native American ancestry.
Trump said those words, but Greenberg's version of the quote leaves out more than half of Trump's sentence, as well as comments that came before. The full quotation makes it clear that Trump's million dollar challenge was presented as a potential future event--a hypothetical, in other words. In context, Trump was referring to a potential future challenge for Warren to take a DNA test as opposed to making the $1 million challenge at that moment.

PolitiFact takes Trump just as much, if not more, out of context as the Senate Leadership Fund did with McCaskill.

How does that kind of boundless hypocrisy pass the sniff test? Are the people at PolitiFact that accustomed to their own stench?


Afters

PolitiFact's "In Context" presentation of Trump's million-dollar challenge to Sen. Warren, confirming what we're saying about PolitiFact's Jon Greenberg ignoring the surrounding context (bole emphasis in the original):
(L)et's say I'm debating Pocahontas. I promise you I'll do this: I will take, you know those little kits they sell on television for two dollars? ‘Learn your heritage!’ … And in the middle of the debate, when she proclaims that she is of Indian heritage because her mother said she has high cheekbones — that is her only evidence, her mother said we have high cheekbones. We will take that little kit -- but we have to do it gently. Because we're in the #MeToo generation, so I have to be very gentle. And we will very gently take that kit, and slowly toss it, hoping it doesn't injure her arm, and we will say: ‘I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian.’
See also: Fact Checkers for Elizabeth Warren

Wednesday, October 17, 2018

Washington Free Beacon: "PolitiFact Retracts Fact Check ..."

Full title:

PolitiFact Retracts Fact Check After Erroneously Ruling Anti-Claire McCaskill Ad ‘False’

We were preparing to post about PolitiFact's crashed-and-burned fact check of  the (Republican) Senate Leadership Fund's Claire McCaskill attack ad. But we noticed that Alex Griswold did a fine job of telling the story for the Washington Free Beacon.

Griswold:
In the revised fact check published Wednesday, PolitiFact announced that "after publication, we received more complete video of the question-and-answer session between McCaskill and a constituent that showed she was in fact responding to a question about private planes, as well as a report describing the meeting … We apologize for the error."

PolitiFact still only ruled the ad was "Half True," arguing that the Senate Leadership Fund "exaggerated" McCaskill's remarks by showing them in isolation. In full context, the fact checker wrote, McCaskill's remarks "seem to refer to ‘normal' users of private planes, not to ‘normal' Americans more generally."
Griswold's article managed to hit many of the points we made about the PolitiFact story on Twitter.


For example:

New evidence to PolitiFact, maybe. The evidence had been on the World Wide Web since 2017.

PolitiFact claimed it was "clear" from the short version of the town hall video that the discussion concerned commercial aviation in the broad sense, not private aircraft. Somehow that supposed clarity vanished with the appearance of a more complete video.


Read the whole article at the Washington Free Beacon.


We also used Twitter to slam PolitiFact for its policy of unpublishing when it notices a fact check has failed. Given that PolitiFact, as a matter of stated policy, archives the old fact check and embeds the URL in the new version, no good reason appears to exist to delay availability of the archived version. It's as easy as updating the original URL for the bad fact check to redirect to the archive URL.

In another failure of transparency, PolitiFact's archived/unpublished fact checks eliminate bylines and editing or research credits along with source lists and hotlinks. In short, the archived version of PolitiFact's fact checks loses a hefty amount of transparency on the way to the archive.

PolitiFact can and should do better both with its fact-checking and its policies on transparency.


Exit question: Has PolitiFact ever unpublished a fact check that was too easy on a conservative or too tough on a liberal?

There's another potential bias measure waiting for evaluation.

Tuesday, October 16, 2018

Fact Checkers for Elizabeth Warren

Sen. Elizabeth Warren (D-Mass.) provided mainstream fact checkers a great opportunity to show their true colors. Fact checkers at PolitiFact and Snopes spun themselves into the ground trying to help Warren excuse her self-identification as a "Native American."

Likely 2020 presidential candidate Warren has long been mocked from the right as "Fauxcahontas" based on her dubious claims of Native American minority status. Warren had her DNA tested and promoted the findings as some type of vindication of her claims.

The fact checkers did their best to help.


PolitiFact

PolitiFact ran Warren's report past four experts and assured us the experts thought the report was legitimate. But the quotations from the experts don't tell us much. PolitiFact uses its own summaries of the experts' opinions for the statements that best support Warren. Are the paraphrases or summaries fair? Trust PolitiFact? It's another example showing why fact checkers ought to provide transcripts of their interactions with experts.

Though the article bills itself as telling us what we can and cannot know from Warren's report, it takes a Mulligan on mentioning Warren's basic claim to minority status. Instead it emphasizes the trustworthiness of the finding of trace Native American inheritance.

At least the article admits that the DNA evidence doesn't help show Warren is of Cherokee descent. There's that much to say in favor of it.

But more to the downside, the article repeats as true the notion that Trump had promised $1 million if Warren could prove Native American ancestry (bold emphasis added):
At a July 5 rally in Montana, he challenged her to take a DNA test.

"I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian," Trump said.

Trump now denies saying that, but in any event, Warren did get tested and the results did find Native American ancestry.
Just minutes after PolitiFact published the above, it published a separate "In Context" article under this title: "In context: Donald Trump's $1 million offer to Elizabeth Warren."

While we do not recommend PolitiFact's transcript as any kind of model journalism (it leaves out quite a bit without using ellipses to show the omissions), the transcript in that article is enough to show the deception in its earlier article (green emphasis added, bold emphasis in the original):
"I shouldn't tell you because I like to not give away secrets. But let's say I'm debating Pocahontas. I promise you I'll do this: I will take, you know those little kits they sell on television for two dollars? ‘Learn your heritage!’ … And in the middle of the debate, when she proclaims that she is of Indian heritage because her mother said she has high cheekbones — that is her only evidence, her mother said we have high cheekbones. We will take that little kit -- but we have to do it gently. Because we're in the #MeToo generation, so I have to be very gentle. And we will very gently take that kit, and slowly toss it, hoping it doesn't injure her arm, and we will say: ‘I will give you a million dollars to your favorite charity, paid for by Trump, if you take the test and it shows you're an Indian.’ And let’s see what she does. I have a feeling she will say no. But we’ll hold that for the debates.
Note that a very minor expansion of the first version of the Trump quotation torpedoes claims that Trump had already pledged $1 million hinging on Warren's DNA test results: "We will say." So PolitiFact's first story dutifully leaves it out and reinforces the false impression that Trump's promise was not a hypothetical.

Despite clear evidence that Trump was speaking of a hypothetical future situation, PolitiFact's second article sticks with a headline suggesting an existing pledge of $1 million--though it magnanimously allows at the end of the article that readers may draw their own conclusions.

It's such a close call, apparently, that PolitiFact does not wish to weigh in either pro or con.

Our call: The fact checkers liberal bloggers at PolitiFact contribute to the spread of misinformation.

Snopes

Though we think PolitiFact is the worst of the mainstream fact checkers, the liberal bloggers at Snopes outdid PolitiFact in terms of ineptitude this time.

Snopes used an edited video to support its claim that it was "True" Trump pledged $1 million based on Warren's DNA test.



The fact check coverage from PolitiFact and Snopes so far makes it look like Warren will be allowed to skate on a number of apparently false claims she made in the wake of her DNA test announcement. Which mainstream fact-checker is neutral enough to look at Warren's suggestion that she can legitimately cash in on Trump's supposed $1 million challenge?

It's a good thing we have non-partisan fact checkers, right?


Afters

Glenn Kessler, the Washington Post Fact Checker

The Washington Post Fact Checker, to our knowledge, has not produced any content directly relating to the Warren DNA test.

That aside, Glenn Kessler has weighed in on Twitter. Some of Kessler's (re)tweets have underscored the worthlessness of the DNA test for identifying Warren as Cherokee.

On the other hand, Kessler gave at least three retweets for stories suggesting Trump had already pledged $1 million based on the outcome of a Warren DNA test.




So Kessler's not joining the other two in excusing Warren. But he's in on the movement to brand Trump as wrong even when Trump is right.

Monday, October 15, 2018

Taylor Swift's Candidates Lag in Polls--PolitiFact Hardest Hit?

We noted pop star Taylor Swift's election endorsement statement drew the selective attention of the fact checkers left-leaning bloggers at PolitiFact.

We've found it hilarious over the past several days that PolitiFact has mercilessly pimped its Swiftian fact check repeatedly on Twitter and Facebook.

Now, with polls showing Swift's candidates badly trailing their Republican counterparts, we can only wonder: Is PolitiFact the entity hardest hit by Swift's failure (so far) to make a critical difference in putting the Democrats over the top?


The Biggest Problem with PolitiFact's Fact Check of Taylor Swift

The Swift claim PolitiFact chose to check was the allegation that Tennessee Republican Marsha Blackburn voted against the Violence Against Women Act. We noted that PolitiFact's choice of topic, given the fact that Swift made at least four claims that might interest a fact checker, was likely the best choice from the liberal point of view.

Coincidentally(?), PolitiFact pulled the trigger on that choice. But as we pointed out in our earlier post, PolitiFact still ended up putting its finger on the scales to help its Democratic Party allies.

It's true Blackburn voted against reauthorizing the Violence Against Women Act (PolitiFact ruled it "Mostly True").

But it's also true that Blackburn voted to reauthorize the Violence Against Women Act.

Contradiction?

Not quite. VAWA came up for reauthorization in 2012. Blackburn co-sponsored a VAWA reauthorization bill and voted in favor. It passed the House with most Democrats voting in opposition.

And the amazing thing is that the non-partisan fact checkers liberal bloggers at PolitiFact didn't mention it. Not a peep. Instead, PolitiFact began its history of the reauthorization of the VAWA in 2013:
The 2013 controversy
The Violence Against Women Act was two decades old in 2013 when Congress wrestled with renewing the funds to support it. The law paid for programs to prevent domestic violence. It provided money to investigate and prosecute rape and other crimes against women. It supported counseling for victims.

The $630 million price tag was less the problem than some specific language on non-discrimination.

The Senate approved its bill first on Feb. 12, 2013, by a wide bipartisan margin of 78 to 22. That measure redefined underserved populations to include those who might be discriminated against based on religion, sexual orientation or gender identity.
Starting the history of VAWA reauthorization in 2013 trims away the bothersome fact that Blackburn voted for VAWA reauthorization in 2012. Keeping that information out of the fact check helps sustain the misleading narrative that Republicans like Blackburn are okay with violence against women.

As likely as not that was PolitiFact's purpose.



Thursday, October 11, 2018

This Is How Selection Bias Works

Here at PolitiFact Bias we have consistently harped on PolitiFact's vulnerability to selection bias.

Selection bias happens, in short, whenever a data set fails to represent the population it is drawn from. Scientific studies often use random selection to help achieve a representative sample and avoid the pitfall of selection bias.

PolitiFact has no means of avoiding selection bias. It fact checks the issues it wishes to fact check. So PolitiFact's set of fact checks is contaminated by selection bias.
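The statistical point is easy to demonstrate. Here is a minimal sketch (our illustration, not anything PolitiFact actually runs; the claims and numbers are invented) showing how a non-random choice of which claims to check can make two parties with identical accuracy look wildly different on the resulting "report card":

```python
import random

random.seed(42)

# Hypothetical population: each party makes 1,000 claims, and (by construction)
# both parties make false claims at exactly the same rate: 30 percent.
population = [
    {"party": party, "false": random.random() < 0.30}
    for party in ("D", "R")
    for _ in range(1000)
]

def report_card(claims):
    """Share of checked claims rated false, broken out by party."""
    card = {}
    for party in ("D", "R"):
        checked = [c for c in claims if c["party"] == party]
        card[party] = sum(c["false"] for c in checked) / len(checked)
    return card

# Representative selection: 200 claims chosen at random.
random_sample = random.sample(population, 200)

# Biased story selection: a checker drawn toward suspect claims from one party
# and toward safe claims from the other.
biased_sample = (
    [c for c in population if c["party"] == "R" and c["false"]][:40]
    + [c for c in population if c["party"] == "R" and not c["false"]][:10]
    + [c for c in population if c["party"] == "D" and not c["false"]][:40]
    + [c for c in population if c["party"] == "D" and c["false"]][:10]
)

print("True rates:       ", report_card(population))     # about 0.30 for both parties
print("Random sample:    ", report_card(random_sample))  # close to 0.30 for both
print("Biased selection: ", report_card(biased_sample))  # about 0.20 vs. about 0.80
```

Nothing about the individual ratings in the biased set needs to be wrong for the aggregate picture to mislead.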

Is PolitiFact's selection bias influenced by its ideological bias?

We don't see why not. And Taylor Swift will help us illustrate the problem.


PolitiFact looked at Swift's claim that Sen. Marsha Blackburn voted against the Violence Against Women Act. That fact check comes packed with the usual PolitiFact nonsense, such as overlooking Blackburn's vote in favor of VAWA in 2012. But this time our focus falls on PolitiFact's decision to look at this Swift claim instead of others.

What other claims did PolitiFact have to choose from? Let's have a look at the relevant part of Swift's statement:
I cannot support Marsha Blackburn. Her voting record in Congress appalls and terrifies me. She voted against equal pay for women. She voted against the Reauthorization of the Violence Against Women Act, which attempts to protect women from domestic violence, stalking, and date rape. She believes businesses have a right to refuse service to gay couples. She also believes they should not have the right to marry. These are not MY Tennessee values.
 Now let's put the different claims in list form:
  • Blackburn voted against equal pay for women.
  • Blackburn voted against the Reauthorization of the Violence Against Women Act
  • Blackburn believes businesses have a right to refuse service to gay couples
  • Blackburn also believes they should not have the right to marry
PolitiFact says it checks claims that make it wonder "Is that true?"

The first statement regarding equal pay for women makes a great candidate for that question. Congress hasn't had to entertain a vote that would oppose equal pay for women (for equal work) for many years. It's been the law of the land since the 1960s. Lilly Ledbetter Fair Pay Act? Don't make me laugh.

The second statement is a great one to check from the Democratic Party point of view, for the Democrats made changes to the VAWA with the likely intent of creating voter appeals based on conservative opposition to those changes.

The third statement concerns belief instead of the voting record, so that makes it potentially more challenging to check. On its face, Swift's claim looks like a gross oversimplification that ignores concerns about constitutional rights of conscience.

The fourth statement, like the third, involves a claim about belief. Also, the fourth statement would likely count as a gross oversimplification. Conservatives opposed to gay marriage tend to oppose same-sex couples asserting every legal advantage that opposite-sex couples enjoy.

PolitiFact chose its best candidate for finding the claim "True" instead of one more likely to garner a "False" rating. It chose the claim most likely to electorally favor Democrats.

Routinely choosing facts to check on that basis may damage the election prospects of the people unfairly targeted by partisan story selection. People like Sen. Blackburn.

It's a rigged system when employed by neutral and nonpartisan fact checkers who lean left.

And that's how selection bias works.


Tuesday, October 2, 2018

Again: PolitiFact vs PolitiFact

In 2013, PolitiFact strongly implied (it might prefer to say it "declared") that President Obama's promise that people could keep the health care plans they liked under his health care overhaul, the Affordable Care Act, received its "Lie of the Year" award.

In 2018, PolitiFact Missouri (with editing help from longtime PolitiFacter Louis Jacobson) suffered acute amnesia about its 2013 "Lie of the Year" pronouncements.


PolitiFact Missouri rates "Mostly False" Republican Josh Hawley's claim that millions of Americans lost their health care plans.

Yet in 2013 it was precisely the loss of millions of health care plans that PolitiFact advertised as its reason for giving Mr. Obama its "Lie of the Year" award (bold emphasis added):
It was a catchy political pitch and a chance to calm nerves about his dramatic and complicated plan to bring historic change to America’s health insurance system.

"If you like your health care plan, you can keep it," President Barack Obama said -- many times -- of his landmark new law.

But the promise was impossible to keep.

So this fall, as cancellation letters were going out to approximately 4 million Americans, the public realized Obama’s breezy assurances were wrong.
Hawley tried to use PolitiFact's finding against his election opponent, incumbent Sen. Claire McCaskill (D-Mo.) (bold emphasis added):
"McCaskill told us that if we liked our healthcare plans, we could keep them. She said the cost of health insurance would go down. She said prescription drug prices would fall. She lied. Since then, millions of Americans have lost their health care plans."

Because of the contradiction between Hawley’s assertion and the promises of the ACA to insure more Americans, we decided to take a closer look.
So, despite the fact that PolitiFact says millions lost their health care plans and the breezy assurance to the contrary was wrong, PolitiFact says it gave Hawley's claim a closer look because it contradicts assurances that the ACA would insure more Americans.

Apparently it doesn't matter to PolitiFact that Hawley was specifically talking about losing health care plans and not losing health insurance completely. In effect, PolitiFact Missouri disavows any knowledge that the promise "if we liked our healthcare plans, we could keep them" was a false promise. The fact checkers substitute loss of health insurance for the loss of health care plans and give Hawley a "Mostly False" rating based on their own fallacy of equivocation (ambiguity).

A consistent PolitiFact could have performed this fact check easily. It could have looked at whether McCaskill made the same promise Obama made. And after that it could have remembered that it claimed to have found Obama's promise false along with the reasoning it used to justify that ruling.

Instead, PolitiFact Missouri delivers yet another outstanding example of PolitiFact inconsistency.



Afters:

Do we cut PolitiFact Missouri a break because it was not around in 2013?

No we do not.

Exhibit 1: Louis Jacobson, who has been with PolitiFact for over 10 years, is listed as an editor.

Exhibit 2: Jacobson, beyond a research credit on the "Lie of the Year" article we linked above, wrote a related fact check on the Obama administration's attempt to explain its failed promise.

There's no excuse for this type of inconsistency. But bias offers a reasonable explanation for this type of inconsistency.



Wednesday, September 12, 2018

PolitiFact flubs GDP comparison between added debt and cumulative debt

Here at PolitiFact Bias we think big mistakes tell us something about PolitiFact's ideological bias.

If PolitiFact's big mistakes tend to harm Republicans and not Democrats, it's a pretty good sign that PolitiFact leans left. For that reason, much of what we do centers on documenting big mistakes.

Veteran PolitiFact fact checker Louis Jacobson gave us a whopper of a mistake this week in a Sept. 12, 2018 PunditFact fact check.

Before reading the fact check we had a pretty good idea this one was bogus. Note the caveat under the meter telling the reason why Scarborough's true numbers only get by with a "Mostly True" rating: The added debt was not purely the GOP's fault.

We easily found a parallel claim, this one from PolitiFact Virginia but with Trump as the speaker:

Trump's parallel claim was dragged down to "Half True" because there was plenty of blame to share for doubling the debt. In other words it was not purely Obama's fault.


A Meaningless Statistic?

Scarborough's statistic makes less sense than Trump's on closer examination. The point comes through clearly once we see how PolitiFact botched its analysis.

Scarborough said the GOP would create more debt in one year than was generated in America's first 200 years.

After quoting an expert who said percentage of GDP serves as a better measure than nominal dollars, PolitiFact proceeded to explain that testing Scarborough's claim using the percentage of GDP tells essentially the same story.  PolitiFact shared a chart based on data from the executive branch's Office of Management and Budget:



So far so good. The OMB is recognized as a solid source for such data. But then PolitiFact PolitiSplains (bold emphasis added):
The chart does show that, when looking at a percentage of GDP, Scarborough is correct in his comparison. Debt as a percentage of GDP in 2017 was far higher (almost 77 percent)  than it was in 1976 (about 27 percent).
Colossal Blunder Alert!

PolitiFact/PunditFact, intentionally or otherwise, pulled a bait and switch. Scarborough said the GOP would create more debt in one year than was generated in America's first 200 years. As PolitiFact recognized when comparing the nominal dollar figures, that comparison pits the cumulative deficit through a given year (which we call the debt) against the deficit added in a single year (which we call the deficit). It's a comparison of the debt in 1976, following PolitiFact's methodology for nominal dollars in the first part of the fact check, to the deficit for 2017.

But that's not what PolitiFact did when it tried to test Scarborough using percentage of GDP.

PolitiFact compared the debt in 1976 to the debt in 2017. That's the wrong comparison. PolitiFact needed to substitute the deficit in 2017 as a percentage of GDP for the debt in 2017 as a percentage of GDP. That substitution corresponds to Scarborough's argument.

The deficit in 2017 does not measure out to nearly 77 percent of GDP. Not even close.

The OMB reports the deficit for 2017 was 3.5 percent of GDP. That's less than 27 percent. It's also less than 77 percent.
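To make the bait and switch concrete, here is the comparison in a few lines (a sketch using the approximate percentage-of-GDP figures quoted above):

```python
# Approximate percentage-of-GDP figures from the OMB data cited in the fact check.
debt_1976 = 27.0      # cumulative federal debt at the end of 1976
debt_2017 = 77.0      # cumulative federal debt at the end of 2017
deficit_2017 = 3.5    # debt added in 2017 alone (the one-year deficit)

# PolitiFact's comparison: cumulative debt vs. cumulative debt.
print(debt_2017 > debt_1976)      # True, but it does not test Scarborough's claim

# Scarborough's actual comparison: one year of new debt vs. roughly 200 years
# of accumulated debt.
print(deficit_2017 > debt_1976)   # False: 3.5 percent of GDP is nowhere near 27
```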

Using the preferred measure for comparing deficit and debt numbers across time, Scarborough's claim fell flat. And PolitiFact failed to notice.

Testing Scarborough's number correctly as a percentage of GDP illustrates the worthlessness of his statistic. Instead of "Mostly True," PolitiFact could easily have issued a ruling more similar to the one it issued to Republican presidential candidate Mitt Romney when he correctly noted that our armed forces were shorter on ships and planes in 2012 than at times in the past.


Cheer up, PolitiFact. You'll be tarring the conservative Scarborough. So it's not a total loss.

Friday, August 31, 2018

False Stuff From Fact Checker (PolitiFact)

A funny thing happened when PolitiFact fact-checked a claim about a bias against conservative websites: PolitiFact did not fact check its topic.

No, we're not kidding. Instead of researching whether the claim was true, PolitiFact spent its time undermining the source of the claim. And PolitiFact even used a flatly false claim of its own toward that end (bold emphasis added):
The chart is not neutral evidence supporting Trump’s point, and it labels anything not overtly conservative as "left. In the "left" category are such rigorously mainstream outlets as the Associated Press and Reuters. The three big broadcast networks — ABC, NBC, CBS — are considered "left," as are the Washington Post and the New York Times. Other media outlets that produce a large amount of content every day, including CNN, NPR, Politico, USA Today, and CNBC, are labeled "left."
The statement we highlighted counts as hyperbole at best. On its face, it simply counts as a false statement exposed as such by the accompanying graphic:


If PolitiFact's claim was true then any outlet not labeled "left" would overtly identify itself as conservative. We can disprove PolitiFact's claim easily by simply looking down the line at the middle. If a media outlet straddles the line between left and right then that organization is not classified as "left." And if such media organizations do not overtly identify as conservative then PolitiFact's claim is false.

Overtly conservative? Let's go down the line:
And for good measure: The Economist, located on the right side of the chart. Is The Economist overtly conservative (See also Barron's, McClatchy)?

Did PolitiFact even bother to research its own claim? Where are the sources listed? Or did writer Louis Jacobson just happen to have that factoid rattling around in his cranium?

But it's not just Jacobson! The factoid gets two mentions in the fact check (the second one in the summary paragraph) and was okayed by editor Katie Sanders (recently promoted for obvious reasons to managing editor at PolitiFact) and at least two other editors from the PolitiFact "star chamber" that decides the "Truth-O-Meter" rating.

As we have asked before, how can a non-partisan and objective fact checker make such a mistake?

Inconceivable!

And how does a fact checker properly justify issuing a ruling without bothering to check on the fact of the matter?

Saturday, August 25, 2018

PolitiFact's Fallacious "Burden of Proof" Bites a Democrat? Or Not

We're nonpartisan because we defend Democrats unfairly harmed by the faulty fact checkers at PolitiFact.

See how that works?

On with it, then:

Oops.

Okay, we made a faulty assumption. We thought when we saw PolitiFact's liberal audience complaining about the treatment of Nelson that it meant Nelson had received a "False" rating based on Nelson not offering evidence to support his claim.

But PolitiFact did not give Nelson a "Truth-O-Meter" rating at all. Instead of the "Truth-O-Meter" graphic for the claim (there is none), PolitiFact gave its readers the "Share The Facts" version:



Republicans (and perhaps Democrats) have received poor ratings in the past where evidence was lacking, which PolitiFact justifies according to its "burden of proof" criterion. But either the principle has changed or else PolitiFact made an(other) exception to aid Nelson.

If the principle has changed that's good. It's stupid and fallacious to apply a burden of proof standard in fact checking, at least where one determines a truth value based purely on the lack of evidence.

But it's small consolation to the people PolitiFact unfairly harmed in the past with its application of this faulty principle.


Afters:

In April 2018 it looks like the "burden of proof" principle was still a principle.



As we have noted before, it often appears that PolitiFact's principles are more like guidelines than actual rules.

And to maintain our nonpartisan street cred, here's PolitiFact applying the silly burden of proof principle to a Democrat:


If "burden of proof" counts as one of PolitiFact's principles then PolitiFact can only claim itself as a principled fact checker if the Nelson exception features a principled reason justifying the exception.

If anyone can find anything like that in the non-rating rating of Nelson, please drop us a line.

Thursday, August 23, 2018

PolitiFact Not Yet Tired of Using Statements Taken Out Of Context To Boost Fundraising

Remember back when PolitiFact took GOP pollster Neil Newhouse out of context to help coax readers into donating to PolitiFact?

Good times.

Either the technique works well or PolitiFact journalists just plain enjoy using it, for PolitiFact Editor Angie Drobnic Holan's Aug. 21, 2018 appeal to would-be supporters pulls the same type of stunt on Rudy Giuliani, former mayor of New York City and attorney for President Donald Trump.

Let's watch Holan the politician in action (bold emphasis added):
Just this past Sunday, Rudy Giuliani told journalist Chuck Todd that truth isn’t truth.

Todd asked Giuliani, now one of President Donald Trump’s top advisers on an investigation into Russia’s interference with the 2016 election, whether Trump would testify. Giuliani said he didn’t want the president to get caught perjuring himself — in other words, lying under oath.

"It’s somebody’s version of the truth, not the truth," Giuliani said of potential testimony.

Flustered, Todd replied, "Truth is truth."

"No, it isn’t truth. Truth isn’t truth," Giuliani said, going on to explain that Trump’s version of events are his own.

This is an extreme example, but Giuliani isn’t the only one to suggest that truth is whatever you make it. The ability to manufacture what appears to be the truth has reached new heights of sophistication.
Giuliani, contrary to Holan's presentation, was almost certainly not suggesting that truth is whatever you make it.

Rather, Giuliani was almost certainly making the same point about perjury traps that legal expert Andrew McCarthy made in an Aug. 11, 2018 column for National Review (hat tip to Power Line Blog):
The theme the anti-Trump camp is pushing — again, a sweet-sounding political claim that defies real-world experience — is that an honest person has nothing to fear from a prosecutor. If you simply answer the questions truthfully, there is no possibility of a false-statements charge.

But see, for charging purposes, the witness who answers the questions does not get to decide whether they have been answered truthfully. That is up to the prosecutor who asks the questions. The honest person can make his best effort to provide truthful, accurate, and complete responses; but the interrogator’s evaluation, right or wrong, determines whether those responses warrant prosecution.
It's fair to criticize Giuliani for making the point less elegantly than McCarthy did. But it's inexcusable for a supposedly non-partisan fact checker to take a claim out of context to fuel an appeal for cash.

That's what we expect from partisan politicians, not non-partisan journalists.

Unless they're "non-partisan journalists" from The Bubble.





 

Worth Noting:

For the 2017 version of this Truth Hustle, Holan shared writing credits with PolitiFact's Executive Director Aaron Sharockman.

Tuesday, August 21, 2018

All About That Base(line)

When we do not publish for days at a time it does not mean that PolitiFact has cleaned up its act and learned to fly straight.

We simply lack the time to do a thorough job policing PolitiFact's mistakes.

What caught our attention this week? A fact check authored by one of PolitiFact's interns, Lucia Geng.



We were curious about this fact check thanks to PolitiFact's shifting standards on what counts as a budget cut. In this case the cut itself was straightforward: A lower budget one year compared to the preceding year. In that respect the fact check wasn't a problem.

But we found a different problem--also a common one for PolitiFact. At least when PolitiFact is fact-checking Democrats.

The fact check does not question the baseline.

The baseline is simply the level chosen for comparison. The Florida Democratic Party chose to compare the water management districts' collective 2011 budgets with their 2012 budgets and found the latter were about $700 million lower. Our readers should note that the FDP started making this claim in 2018, not 2012.

It's just crazy for a fact checker to perform a fact check without looking at other potential baselines. Usually politicians and political groups choose a baseline for a reason. Comparing 2011 to 2012 appears to make sense superficially. The year 2011 represents Republican-turned-Independent Governor Charlie Crist. The year 2012 represents the current governor, also a Republican, Rick Scott.

But what if there's more to it? Any fact checker should look at data covering a longer time period to get an idea of what the claimed cut would actually mean.

We suspected that 2010 and before might show much lower budget numbers. To our surprise, the budget numbers were far higher, at least for the South Florida Water Management District whose budget dwarfs those of the other districts.

From 2010 to 2011, Gov. Crist cut the SFWMD budget by about $443 million. From 2009 to 2010 Gov. Crist cut the SFWMD budget by almost $1.5 billion. That's not a typo.

The message here is not that Gov. Crist was some kind of anti-environmental zealot. What we have here is a sign that the water management district budgets are volatile. They can change dramatically from one year to the next. The big question is why, and a secondary question is whether the reason should affect our understanding of the $700 million Gov. Scott cut from the combined water management district budgets between 2011 and 2012.
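Questioning the baseline does not require anything fancy. Here is a minimal sketch of the exercise (the budget figures are placeholders we invented to roughly mirror the cuts described above, not the districts' actual numbers):

```python
# Hypothetical budget series in millions of dollars, shaped to mirror the
# approximate cuts described above (about $1.5 billion from 2009 to 2010,
# about $443 million from 2010 to 2011). These are illustrative placeholders.
budgets = {2009: 2500, 2010: 1000, 2011: 557, 2012: 400}

# Compute the year-over-year change for every candidate baseline, not just
# the one the claimant happened to pick.
years = sorted(budgets)
for prev, curr in zip(years, years[1:]):
    change = budgets[curr] - budgets[prev]
    print(f"{prev} -> {curr}: {change:+d} million")
```

Laid out that way, the volatility jumps off the page, and the fact checker knows which questions to ask.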

A fact checker who looked at the volatile changes in spending could then use that knowledge to ask officials at the water management districts questions that would help answer our two questions above. Geng listed email exchanges with officials from each of Florida's water management districts. But the fact check contains no quotations from those officials. It does not even refer to their responses via paraphrase or summary. We don't even know what questions Geng asked.

We did not contact the water management districts. But we looked for a clue regarding the budget volatility in the SFWMD's fiscal year 2011 projections for its future budgets. The agency expected capital expenditures to drop by more than half after 2011.

Rick Scott had not been elected governor at that time (October 2010).

This suggests that the water management districts had a budget cut baked into their long-term program planning, quite possibly strongly influenced by budgeting for the Everglades restoration project (including land purchases). If so, that counts as critical context omitted from the PolitiFact Florida fact check.

We flagged these problems for PolitiFact on Twitter and via email. As usual, the faux-transparent fact checkers responded with a stony silence and made no apparent effort to fix the deficiencies.

Aside from the hole in the story we felt the "Mostly True" rating was very forgiving of the Florida Democratic Party's blatant cherry-picking. And somehow PolitiFact even resisted using the term "cherry-picking" or any close synonym.



Afters:
The Florida Democratic Party, in the same tweet PolitiFact fact-checked, recycled the claim that Gov. Scott "banned the term 'Climate Change.'"

We suppose that's not the sort of thing that makes PolitiFact editors wonder "Is that true?"

Tuesday, August 7, 2018

The Phantom Cherry-pick

Would Sen. Bernie Sanders' Medicare For All plan save $2 trillion over 10 years on U.S. health care expenses?

Sanders and the left were on fire this week trying to co-opt a Mercatus Center paper by Charles Blahous. Sanders and others claimed Blahous' paper confirmed the M4A plan would save $2 trillion over 10 years.

PolitiFact checked in on the question and found Sanders' claim "Half True":


PolitiFact's summary encapsulates its reasoning:
The $2 trillion figure can be traced back to the Mercatus report. But it is one of two scenarios the report offers, so Sanders’ use of the term "would" is too strong. The alternative figure, which assumes that a Medicare for All plan isn’t as successful in controlling costs as its sponsors hope it will be, would lead to an increase of almost $3.3 trillion in national health care expenditures, not a decline. Independent experts say the alternative scenario of weaker cost control is at least as plausible.

We rate the statement Half True.
Throughout its report, as pointed out at Zebra Fact Check, PolitiFact treats the $2 trillion in savings as a serious attempt to project the true effects of the M4A bill.

In fact, the Mercatus report uses what its author sees as overly rosy assumptions about the bill's effects to estimate a lower boundary for the bill's very high costs, and then it offers reasons why the bill will likely greatly exceed those costs.

In other words, the cherry Sanders tries to pick is a faux cherry. And a fact checker ought to recognize that fact. It's one thing to pick a cherry that's a cherry. It's another thing to pick a cherry that's a fake.

Making Matters Worse

PolitiFact makes matters worse by overlooking Sanders' central error: circular reasoning.

Sanders takes a projection based on favorable assumptions as evidence that the favorable assumptions are reasonable. But a conclusion one reaches based on assumptions does not make the assumptions any more true. Sanders' claim suggests the opposite: that even when the Blahous paper says it is using unrealistic assumptions, the conclusions it reaches using those assumptions make the assumptions reasonable.

A fact checker ought to point it out when a politician peddles such nonsensical ideas.

PolitiFact made itself guilty of bad reporting while overlooking Sanders' central error.

Friday, July 6, 2018

PolitiFact: "European Union"=Germany

PolitiFact makes all kinds of mistakes, but some serve as better examples of ideological bias than others. A July 2, 2018 PolitiFact fact check of President Donald Trump serves as pretty good evidence of a specific bias against Mr. Trump:


The big clue that PolitiFact botched this fact check occurs in the image we cropped from PolitiFact's website.

Donald Trump states that the EU sends millions of cars to the United States. PolitiFact performs adjustments to that claim, suggesting Trump specified German cars and specifying that the EU sends millions of German cars per year. Yet Trump did not specify German cars and did not specify an annual rate.

PolitiFact quotes Trump:
At one point, he singled out German cars.

"The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions," Trump said.
Saying Trump "singled out German cars" counts as twisting the truth. Trump "singled out" German cars in the sense of offering two examples of German cars among the millions sent to the United States by the European Union.

It counts as a major error for a fact checker to ignore the clear context showing that Trump was talking about the European Union and not simply German cars of one make (Mercedes) or another (BMW). And if those German makes account for large individual shares of EU exports to the United States then Trump deserves credit for choosing strong examples.

It counts as another major error for a fact checker to assume an annual rate in the millions when the speaker did not specify any such rate. How did PolitiFact determine that Trump was  not talking about a monthly rate, or the rate over a decade? Making assumptions is not the same thing as fact-checking.

When a speaker uses ambiguous language, the responsible fact checker offers the speaker charitable interpretation. That means using the interpretation that makes the best sense of the speaker's words. In this case, the point is obvious: The European Union exports millions of cars to the United States.

But instead of looking at the number of cars the European Union exports to the United States, PolitiFact cherry picked German cars. That focus came through strongly in PolitiFact's concluding paragraphs:
Our ruling

Trump said, "The European Union … they send us Mercedes, they send us -- by the millions -- the BMWs -- cars by the millions."

Together, Mercedes, BMW and Volkswagen imported less than a million cars into the United States in 2017, not "millions."

More importantly, Trump ignores that a large proportion of German cars sold in the United States were also built here, using American workers and suppliers whose economic fortunes are boosted by Germany’s carnakers [sic]. Other U.S.-built German cars were sold as exports.

We rate the statement False.
That's sham fact-checking.

A serious fact check would look at the European Union's exports specifically to the United States. The European Automobile Manufacturers Association has those export numbers available from 2011 through 2016. From 2011 through 2013 the number was under 1 million annually. For 2014 through 2016 the number was over 1 million annually.

Data through September 2017 from the same source shows the European Union on pace to surpass 1 million units for the fourth consecutive year.


Does exporting over 1 million cars to the United States per year for three or four consecutive years count as exporting cars to the United States by the millions (compare the logic)?

We think we can conclude with certainty that the notion does not count as "False."

Our exit question for PolitiFact: How does a non-partisan fact checker justify ignoring the context of Trump's statement referring specifically to the European Union? How did the European Union get to be Germany?

Friday, June 22, 2018

PolitiFact Corrects, We Evaluate the Correction

PolitiFact corrected an error in one of its fact checks this past week, most likely in response to an email we sent on June 20, 2018.
Dear PolitiFact,

A recent PolitiFact fact check contains the following paragraph (bold emphasis added):
Soon after, in February 2017, Nehlen wrote on Twitter that Islam was not a religion of peace and posted a photo of a plane striking the World Trade Center with the caption, "9/11 would’ve been a Wonderful #DayWithoutImmigrants." In the following months, Nehlen also tweeted that "Islam is not your friend," implied that Muslim communities should be bombed and retweeted posts saying Bill and Hillary Clinton were murdering associates.

The hotlink ("implied") leads to an archived Twitter page. Unless I'm missing somelthing [sic], the following represents the best candidate as a supporting evidence:


Unless "Muslim no-go zones" represent typical Muslim communities, PolitiFact's summary of Nehlen's tweet distorts the truth. If a politician similarly omitted context in this fashion, would PolitiFact not mete out a "Half True" rating or worse?

If PolitiFact excuses itself from telling the truth where people accused of bigotry are involved, that principle ought to appear in its statement of principles.

Otherwise, a correction or clarification is in order. Thanks.
We were surprised to see that PolitiFact updated the story with a clarification within two days. And PolitiFact did most things right with the fix, which it labeled a "clarification."

Here's a checklist:
  1. Paid attention to the criticism
  2. Updated the article with a clarification
  3. Attached a clarification notice at the bottom of the fact check
  4. Added the "Corrections and Updates" tag to the article, ensuring it would appear on PolitiFact's "Corrections and Updates" page
Still, we think PolitiFact can do better.

Specifically, we fault PolitiFact for its lack of transparency regarding the specifics of the mistake.

Note what Craig Silverman, long associated with PolitiFact's owner, the Poynter Institute, said in an American Press Institute interview about letting readers know what changed:

News organizations aren’t the only ones on the internet who are practicing some form of journalism. There are a number of sites or blogs or individual bloggers who may not have the same standards for corrections. Is there any way journalists or anyone else can contribute to a culture of corrections? Where does it start?

SILVERMAN: Bloggers actually ended up doing a little bit of correction innovation. In the relatively early blogging days, you’d often see <strike>strikethrough</strike> used to cross out a typo or error. This was a lovely use of the medium, as it showed what was incorrect and also included the correct information after. In that respect, bloggers modelled good behavior, and showed how digital corrections can work. We can learn from that.

It all starts with a broad commitment to acknowledge and even publicize mistakes. That is the core of the culture, the ethic of correction.
We think Silverman has it right. Transparency in corrections involves letting the reader know what the story got wrong. In this case, PolitiFact reported that a tweet implied that somebody wanted to bomb Muslim communities. The tweet referred, in fact, to a small subset of Muslim communities known as "no-go zones"--areas where non-Muslims allegedly face unusual danger to their person and property.

PolitiFact explained its error like this:
This fact-check has been updated to more precisely refer to a previous Nehlen tweet
That notice is transparent about the fact the text of the fact check was changed and transparent about the part of the fact check that was changed (information about a Nehlen tweet). But it mostly lacked transparency about what the fact check got wrong and the misleading impression it created.

We think journalists, including PolitiFact, stand to gain public trust by full transparency regarding errors. Though that boost to public trust assumes that errors aren't so ridiculous and rampant that transparency instead destroys the organization's credibility.

Is that what PolitiFact fears when it issues these vague descriptions of its inaccuracies?

Still, we're encouraged that PolitiFact performed a clarification and mostly followed its corrections policy. Ignoring needed corrections is worse than falling short of best practices with the corrections.

Monday, June 18, 2018

PolitiFact Wisconsin: The Future is Now!

A May 2, 2018 fact check from PolitiFact Wisconsin uses projected numbers from the 2018-2019 budget year to assess a claim that Wisconsinites are now paying twice as much for debt service on road work as they were paying in 2010-2011 before Republican Scott Walker took over as Wisconsin's governor.


Democratic candidate for governor Kelda Helen Roys and her interviewer used a 22-23 percent figure to represent current spending on road work debt service in Wisconsin.

PolitiFact Wisconsin gave both a pass on their fudging of the facts, but lowered Roys' rating from "True" down to "Mostly True" because the numbers used were mere estimates:
The figure is projected to reach 20.9 percent during the second year of the current two-year state budget Walker signed, which is nearly doubling.

With the caveat that the figure for the current budget is an estimate, we rate Roys’ statement Mostly True.
We think that reasoning would work better as a fact check of Roys' claim if the estimated number represented what Wisconsin is paying now for debt service on its road work. Unless PolitiFact Wisconsin is saying the future is now, the estimate for budget year 2017-2018 would better fit the bill.

PolitiFact Wisconsin reported the 2017-2018 estimate as 20 percent but used the higher figure for the following budget year to judge Roys' accuracy.

And that was just one of three ways PolitiFact Wisconsin massaged the Democrat's statement into a closer semblance of the truth.

What is "Just Basic Road Repair and Maintenance"?

Roys claimed the debt service was "for just basic road repair and maintenance," which would apparently exclude new construction. PolitiFact tested her claim using the numbers for the transportation-related share of the budget (bold emphasis added):
In analyzing 2017-’19 two-year state budget enacted by Walker and the GOP-controlled Legislature, the bureau provided figures on the total of all transportation debt service as a percentage of gross transportation fund revenue -- in other words, what portion of transportation revenue for road work would be going to paying off debt.
PolitiFact's other truth-massage credited Roys with making clear that the debt service increase she spoke of was the debt service amount as a percentage of total spending on roads. Aside from the fact Roys talked about "just basic road repair and maintenance," she offered listeners no clue that she used the same measure PolitiFact Wisconsin used to fact check her claim.

The clue that likely drove PolitiFact to check the debt service as a percentage of road work expenses came from WisconsinEye senior producer Steve Walters, who conducted the interview of Roys. Walters referred no fewer than twice to a "22 to 23 percent" figure for debt service during the interview.

Since that number came from Walters, PolitiFact Wisconsin apparently felt no need to fact check its accuracy.

Does Some Road Construction Go Beyond 'Basic'?

We think the phrase "basic road repair and maintenance" may leave some members of the audience with the impression that more involved road work such as replacing bridges would balloon the cost of debt service even higher than described.

We found a page run by the Wisconsin Department of Transportation describing its road projects. Here's the description of one costing $9.6 million:
Description of work: The project consists of a full reconstruction of WIS 55 (Delanglade Street) from I-41 to Lawe Street in the city of Kaukauna. Improvements will include roundabouts at the intersections of I-41 ramps, Maloney/Gertrude, and County OO. New traffic signals will be installed at County J/WIS 55/WIS 96, and bike/pedestrian accommodations will be added throughout the project limits along WIS 55. Other work includes storm sewer, sanitary sewer, water main, sidewalks, retaining walls, street lighting, and incidentals.
It appears to us that PolitiFact Wisconsin simply assumed that all the described work rightly fits under Roys' description.

We're skeptical that such assumptions hold a rightful place among the best practices for fact checkers.

Summary


If we assume that Roys was talking about all expenses attached to road work, and also assume she was talking about the increase in estimated debt service in raw dollars, her estimate is off by only about 7 percent. In that case, PolitiFact Wisconsin did not really need to use future estimates to justify Roys' statement about how much Wisconsin is spending now. It could have just used the measure Roys described and rated that against the estimate for this year's spending.

But a fact checker could easily have justified asking Roys to define what she meant by "basic road repair and maintenance" and then using that definition to grade her accuracy. A better fact check would likely result.

We wonder if Roys would need to join the Republican Party to make that happen.

Thursday, June 14, 2018

Different Strokes for Different Quotes: What does "voted for tax cuts" really mean?

"What I find is it's hard for me to take critics seriously when they never say we do anything right. Sometimes we can do things right, and you'll never see it on that site."

-PolitiFact Editor Angie Drobnic Holan



Sometimes PolitiFact can do things right.

PolitiFact New York did something right recently that deserves mention because it's the correct way to journomalist:





PolitiFact added the Trump camp "did not get back to us with information supporting his claim, so we can't say for sure what he was talking about in his endorsement."

PolitiFact noted that Trump tweeted about the Tax Cuts and Jobs Act "four other times in May" but acknowledged Trump did not reference that law in the tweet it fact checked.

In our view this is the correct approach.

We think a persuasive argument could be made that Trump inaccurately implied Donovan was a Tax Cuts and Jobs Act supporter, but that argument belongs on the editorial page, not in a fact check. PolitiFact examined the claim Trump made without inventing assumptions about what he meant or what he was implying. In this case PolitiFact stuck to the facts.

Notwithstanding our longtime opposition to rating facts on a sliding scale, we think PolitiFact did this one right and we're happy to point it out.


The Other Guy

Readers may wonder "How could a fact checker screw this one up?" Donovan had a documented history of voting for tax cuts, and Trump's claim was not only unambiguous but also easy to check.

How could a serious fact checker get this wrong?







When the Washington Post's unabashed Trump basher/unbiased truthsayer tweeted that "fact checkers sometimes disagree" we were curious. PolitiFact rated Trump's tweet as accurate, while Kessler deemed the exact same tweet false. How can that be?

As it turns out the two fact checkers aren't disagreeing at all.

PolitiFact correctly identified the claim Trump made and ruled based on his actual words. Kessler invented a claim and then gave Trump a false rating for his own fantasy. The fact checkers aren't disagreeing because they're not checking the same claim.

Kessler says Trump's claim that Donovan "voted for tax cuts" is false because "Donovan voted against Trump's tax cut three times." For those of you who aren't experts in journalism or logic, voting against the Tax Cuts and Jobs Act does not negate the fact that Donovan has previously voted for other tax cuts.

As far as we can tell, Kessler offered no justification for calling Trump's claim false other than Donovan's opposition to the 2017 tax bill.

Kessler's reasoning here is flatly wrong. And if one wanted to treat Kessler with the same painful pedantry as he applies to Trump in his chart, one could note there's no such thing as "Trump's tax cuts" because only Congress can pass tax bills.

Petty word games aside, this "disagreement" among fact checkers affirms that our fact-divining betters are neither scientific agents of truth nor objective determiners of evidence. When a fact checker can substitute his own interpretation of what a person meant for that person's actual words, it counts as commentary, not an adjudication of facts.

Kudos to PolitiFact New York for taking the correct approach. Sometimes PolitiFact can do things right.