Thursday, December 19, 2019

PolitiFact disagrees with IG on IG report on CrossFire Hurricane

What to do when the Inspector General and PolitiFact disagree on what an IG report says?

Well, the IG has no Pulitzer Prize, so maybe trust the fact checkers?

PolitiFact, Dec. 11, 2019:

The investigation was not politically motivated

The IG report also dismissed the notion that the investigation was politically motivated.
Inspector General Michael Horowitz, during Senate testimony on Dec. 18, 2019 (The Epoch Times):
Then [Sen. Josh Hawley (R-Mo.)] asked, “Was it your conclusion that political bias did not affect any part of the Page investigation, any part of Crossfire Hurricane?”

“We did not reach that conclusion,” Horowitz told him. He added, “We have been very careful in connection with the FISA for the reasons you mentioned to not reach that conclusion, in part, as we’ve talked about earlier: the alteration of the email, the text messages associated with the individual who did that, and then our inability to explain or understand or get good explanations so we could understand why this all happened.”
We confirmed The Epoch Times' account via C-SPAN. The Epoch Times edits the exchange for clarity without altering its basic meaning. We invite readers to confirm it for themselves via an embedded clip (around 3:11)*:



Seriously, we count PolitiFact's Pulitzer as no kind of reasonable evidence supporting its reliability. Pulitzer juries do not fact check content before awarding prizes.

It seems clear PolitiFact committed the fallacy of argumentum ad ignorantiam. When the IG report repeatedly said it "found no testimonial or documentary evidence that these operations resulted from political bias or other improper considerations" or similar words, PolitiFact made the fallacious leap to conclude there was no political bias.

Pulitzer Prize-winning and IFCN-verified PolitiFact.

We need better measures of trustworthiness.


*Our embedded clip ended up shorter than we expected, for which we apologize to our readers. Find the full clip here.

Monday, December 16, 2019

A political exercise: PolitiFact chooses non-impactful (supposed) falsehood as its "Lie of the Year"

PolitiFact chose President Trump's claim that a whistleblower's complaint about his phone call with Ukrainian leader Volodymyr Zelensky got the facts "almost completely wrong."

We had deemed it unlikely PolitiFact would choose that claim as its "Lie of the Year," reasoning that it failed to measure up to the supposed criterion of carrying a high impact.

We failed to take into account PolitiFact's dueling criteria, explained by PolitiFact Editor Angie Drobnic Holan back in 2016:
Each year, PolitiFact awards a "Lie of the Year" to take stock of a misrepresentation that arguably beats all others in its impact or ridiculousness.
To be sure, "arguably beats all others in its impact" counts as a subjective criterion. As a bonus, PolitiFact offers itself an alternative criterion based on the "ridiculousness" of a claim.

Everybody who thinks there's an objective way to gauge relative "ridiculousness" raise your hand.

We will not again make the mistake of trying to handicap the "Lie of the Year" choice based on the criteria PolitiFact publicizes. Those criteria are hopelessly subjective and don't tell the real story.

It's simpler and more direct to predict the outcome based on what serves PolitiFact's left-leaning interests.


Thursday, December 12, 2019

William Barr, PolitiFact and the biased experts game

Is it okay for fact checkers to rely on biased experts for their findings?

Earlier this year, Facebook restricted distribution of a video by pro-life activist Lila Rose. Rose's group complained the fact check was biased. Facebook relied on the International Fact-Checking Network to investigate. The investigator ruled (very dubiously) that the fact check was accurate but that the fact checker should have disclosed the bias of experts it cited:
The failure to declare to their readers that two individuals who assisted Science Feedback, not in writing the fact-check but in reviewing the evidence, had positions within advocacy organizations, and the failure to clarify their role to readers, fell short of the standards required of IFCN signatories. This has been communicated to Science Feedback.
Perhaps it's fine for fact checkers to rely on biased experts so long as those experts do not hold positions in advocacy organizations.

Enter PolitiFact and its December 11, 2019 fact check of Attorney General William Barr.

The fact check itself hardly deals with the substance of Barr's claim that the "Crossfire Hurricane" investigation of possible collusion between the Trump campaign and Russia was started on the thinnest of evidence. Instead, PolitiFact sticks with calling the decision to investigate "justified" by the Inspector General's report while omitting the report's observation that the law sets a low threshold for starting an investigation (bold emphasis added).
Additionally, given the low threshold for predication in the AG Guidelines and the DIOG, we concluded that the FFG information, provided by a government the United States Intelligence Community (USIC) deems trustworthy, and describing a first-hand account from an FFG employee of a conversation with Papadopoulos, was sufficient to predicate the investigation. This information provided the FBI with an articulable factual basis that, if true, reasonably indicated activity constituting either a federal crime or a threat to national security, or both, may have occurred or may be occurring. For similar reasons, as we detail in Chapter Three, we concluded that the quantum of information articulated by the FBI to open the individual investigations on Papadopoulos, Page, Flynn, and Manafort in August 2016 was sufficient to satisfy the low threshold established by the Department and the FBI.
The "low threshold" is consistent with Barr's description of "thinnest of suspicions" in the context of prosecutorial discretion and the nature of the event that supposedly justified the investigation (the Papadoupolous caper)*.

But in this post we will focus on the experts PolitiFact cited.

Rosa Brooks

Rosa Brooks, professor of law and policy at Georgetown University, told us that Barr’s assessment that the suspicions were thin "appears willfully inaccurate."

"The report concluded precisely the opposite," she said. "The IG report makes it clear that the decision to launch the investigation was justified."
If PolitiFact were brazen enough, it could pick out Brooks as a go-to (biased) expert based on her Twitter retweets from Dec. 10, 2019.



Brooks' tweets also portray her as a Democrat voter. So does her pattern of political giving.

Jennifer Daskal

Jennifer Daskal, professor of law at American University, agreed. "Barr’s statement is at best a misleading statement, if not a deliberate distortion, of what the report actually found," she said.
Daskal's Internet history shows little to suggest she pre-judged her view on Barr's statement. On the other hand, it seems pretty plain she prefers the presidential candidacy of Pete Buttigieg (one example among several). Plus Daskal has tended to donate politically to Democrats.

Robert Litt

PolitiFact contacted Litt for his expert opinion but did not mention him in the text of the fact check.

We deem it unlikely PolitiFact tabbed Litt to counterbalance the leftward lean of Brooks and Daskal. Litt was part of the Obama administration, and his appointment carried an unusual political dimension. Litt failed his background check but was installed in the Clinton Justice Department in a roundabout way.

Litt, like Brooks and Daskal, gives politically to Democrats.


So what's the problem?

We think it's okay for PolitiFact to cite experts who lean left and donate politically to the Democratic Party. That's not the problem.

The problem is the echo-chamber effect PolitiFact achieves by choosing a small pool of experts all of whom lean markedly left. As we've noted before, that's no way to establish anything akin to an expert consensus. But it serves as an excellent method for excluding or marginalizing contrary arguments.

It's not like those are hard to find. It seems PolitiFact simply has no interest in them.



*It's worth noting that the information Papadopoulos shared with the Australian, Downer, came in turn from the mysterious Joseph Mifsud.

PolitiFact's "Lie of the Year" farce, 2019 edition

From the start, PolitiFact's "Lie of the Year" award has counted as a farce.

Why?

Because it has supposedly neutral and unbiased fact-checkers offering their collective editorial opinion on the most impactful falsehood from the past year.

How better to illustrate their neutrality than by offering an opinion?

PolitiFact's actions in choosing its "Lie of the Year" have borne out the farcical nature of the exercise, with farces including naming true statements as the "Lie of the Year" and the immortal ObamaCare bait and switch.

On With the Latest Farce

This year we quickly noticed that all of the nominated falsehoods received "Pants on Fire" ratings. That's a first. The nominees usually received either a "False" or a "Pants on Fire" rating in the past, with the ObamaCare bait and switch counting as the lone exception.

Next, we noticed that of the three nominations connected to Democratic Party politicians only one came from PolitiFact National. PolitiFact California and PolitiFact Texas each scored one of those nominations.

No state PolitiFact operation had a Republican subject nominated, and President Trump received three of the four.


Is This the Weakest Field Ever?

PolitiFact says it awards its "Lie of the Year" to "the most impactful significant falsehood."

We don't see much impact on PolitiFact's list of nominations. We'll go through and handicap the list based on PolitiFact's claimed criterion.

But first we remind readers that PolitiFact has a history of not limiting its choice to an item from its own list of nominations. This is a good year in which to pull that stunt.


Says Nancy Pelosi diverted "$2.4 billion from Social Security to cover impeachment costs."
— Viral image on Wednesday, October 9th, 2019 in a Facebook post
Anybody remember any real-world impact from this claim? We don't. 0




"The first so-called second hand information ‘Whistleblower’ got my phone conversation almost completely wrong."

— Donald Trump on Saturday, October 5th, 2019 in a tweet
How about this footnote from the Trump impeachment parade? The impact of this supposed falsehood (isn't it closer to opinion than a specific statement of fact?), if any, comes from its symbolic representation of the case for impeachment. The appeal of this choice comes from the ability of the media to clap itself on the back for its impeachment reporting. 5



Between 27,000 and 200,000 Wisconsinites were "turned away" from the polls in 2016 due to lack of proper identification.

— Hillary Clinton on Tuesday, September 17th, 2019 in a speech at George Washington University
Again, what was the real-world impact of Clinton's claim? If PolitiFact bothered to tie together the exaggerated election interference claims from the Democratic presidential candidates plus failed Democratic nominee for the governorship of Georgia, Stacey Abrams, then maybe PolitiFact could reasonably say the collected falsehoods carried some impact. 1



Originally "almost all models predicted" Dorian would hit Alabama.

— Donald Trump on Wednesday, September 4th, 2019 in a tweet
Although we're not aware of any real-world impact from this Trump tweet, this one had a pretty big impact in the world of journalism (not to be mistaken for the real world).

That may well prove enough to give this claim the win. The press made a huge deal of this presidential tweet, and the issue eventually led to accusations Trump broke the law by altering a weather report (not kidding).

Were the media correct that Trump inspired disproportionate worry in the state of Alabama? We would not expect the media to offer strong support for that proposition. Thanks, Washington Post, for providing us an exemplar of our expectations. 6



"The vast majority" of San Francisco’s homeless people "also come in from — and we know this — from Texas. Just (an) interesting fact."

— Gavin Newsom on Sunday, June 23rd, 2019 in an interview on "Axios on HBO"
Gavin Newsom isn't well known nationally, and his statement had negligible real-world impact, by our estimation. This nomination is another tribute to PolitiFact's difficulty in finding falsehoods from Democrats. If PolitiFact had rated any of the many claims from Stacey Abrams that she won the Georgia election then we might have had a legitimate contender from the Democrats. 1



U.S. tariffs on China are "not hurting anybody" in the United States.

— Peter Navarro on Sunday, August 18th, 2019 in an interview
This "gotcha" fact check ignores much of the context of the Navarro interview, especially Navarro's point about China's major devaluation of its currency. That aside, despite media attempts to trumpet the harm to American consumers Americans seem mostly okay with whatever harm they're supposedly receiving.

Tariffs and the trade war make up a big issue, but it's sad if this shallow fact check treatment had any real-world impact. 2



"Remember after the shooting in Las Vegas, (President Donald Trump) said, ‘Yeah, yeah, we’re going to ban the bump stocks.’ Did he ban the bump stocks? No."

— Kirsten Gillibrand on Sunday, June 2nd, 2019 in a Fox News town hall
Ah. The fact check that brought an end to Kirsten Gillibrand's hopes for the Democratic Party's presidential nomination.

Just kidding. Gillibrand never caught fire in the Democratic primary and there's no reason to suppose her statement criticizing Trump had anything to do with it. We doubt many are familiar with either Gillibrand or the fact check. Real world impact? Not so much. But at least PolitiFact National can take credit for this fact check of a Democrat. That's something. 0



Video shows Nancy Pelosi slurring her speech at a public event.
— Politics WatchDog on Wednesday, May 22nd, 2019 in a Facebook post
The video that misled thousands into thinking Speaker Pelosi slurred her speech in public. 1



"There has never been, ever before, an administration that’s been so open and transparent."
— Donald Trump on Monday, May 20th, 2019 in remarks at the White House
What was the context of this Trump claim?

PolitiFact used a short video snippet posted to Twitter as its primary source. PolitiFact offered no surrounding context. Is everyone good with that?

Can a potentially hyperbolic statement lifted out of context serve as the most significant falsehood of 2019?

This is PolitiFact we're talking about. 4



Says John Bolton "fundamentally was a man of the left."
— Tucker Carlson on Tuesday, September 10th, 2019 in a TV segment
Fired Trump national security adviser John Bolton might have better name recognition than Kirsten Gillibrand. Not that we'd bet money on it or anything. Carlson was offering the opinion that Bolton's willingness to use government power (particularly military power) marked him as a progressive.

So what? Even if we suppose that political alignment counts as a matter of fact, who cares? 1

Summary

Two Trump claims have a chance of ending up with the "Lie of the Year" award. But the weak field makes us think there's an excellent chance PolitiFact will do what it has done in the past by settling on a set of claims or a topic that failed to make its list of nominees.

PolitiFact's list offers few nominees with significant political impact.



Correction Dec. 17, 2019: We misquoted PolitiFact's description of its "Lie of the Year" criterion and had overlooked a second criterion PolitiFact claims to use (at least potentially). We fixed our use of the wrong word with a strikethrough and replaced it with the correct word ("significant" for "impactful").

Tuesday, December 3, 2019

PolitiFact ratings aren't scientific--or are they?

We stand bemused by PolitiFact's attempt to straddle the fence regarding its aggregated "Truth-O-Meter" ratings.

On the one hand, as PolitiFact Editor Angie Drobnic Holan told us a few weeks ago, "It’s important to note that we don’t do a random or scientific sample."

On the other hand, we have this eye-popping claim Holan sent out via email today (also published to the PolitiFact.com website):
Trump remains a unique figure in American politics and at PolitiFact. There’s no one we’ve fact-checked as many times who has been as consistently wrong. Out of 738 claims we’ve fact-checked as of this writing, more than 70% have been rated Mostly False, False or Pants on Fire. After four years of watching Trump campaign and govern, we see little to no increase in his use of true facts and evidence to support his arguments.
Even though it is supposedly important to note that PolitiFact doesn't do a random or scientific sample, Holan makes no mention of that important caveat in her email. Instead, she makes it appear to the reader as though Trump's "Truth-O-Meter" record offers a reasonable basis for judging whether Trump has increased "his use of true facts and evidence to support his arguments."
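
To illustrate why that caveat matters, here is a toy simulation (ours, not PolitiFact's; every number in it is a made-up assumption) of how "if it sounds wrong, we're even more eager to do it" story selection can hand a mostly accurate speaker a mostly false report card:

    # Toy sketch with hypothetical numbers: non-random story selection
    # can drive aggregate ratings regardless of a speaker's accuracy.
    import random

    random.seed(42)

    TRUE_SHARE = 0.70    # assume the speaker is accurate 70% of the time
    POOL_SIZE = 10_000   # claims the speaker makes
    PICK_TRUE = 0.15     # chance a true-sounding claim gets fact-checked;
                         # false-sounding claims always get checked here

    pool = [random.random() < TRUE_SHARE for _ in range(POOL_SIZE)]
    checked = [t for t in pool if random.random() < (PICK_TRUE if t else 1.0)]

    false_share = 1 - sum(checked) / len(checked)
    print(f"Actual false rate: {1 - TRUE_SHARE:.0%}")
    print(f"Report-card false rate: {false_share:.0%}")

On those made-up inputs, a speaker who is wrong only 30 percent of the time ends up with a report card reading roughly 74 percent false. The specific numbers don't matter; the point is that without random sampling, aggregated ratings tell you as much about the selection process as about the speaker.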

How would PolitiFact judge whether Trump was using more true facts to support his arguments other than by looking at unscientific fact checker ratings?

PolitiFact has made a pattern of this deception.

The occasional admission of non-random, unscientific story selection is perfunctory. It is a fig leaf.

PolitiFact wants readers to judge politicians based on its aggregated "Truth-O-Meter" ratings.

For some unknown reason, though maybe we can guess, the fact checkers think their collected ratings are pretty much accurate judges of character regardless of their departure from the scientific method. And that's why PolitiFact over its entire history has implicitly and explicitly encouraged readers to rely on its graphs and charts to judge politicians. And has steadfastly resisted the obligation to attach a disclaimer to those charts and graphs making the supposedly important point that the selection process is not random or scientific.

We call it an obligation because we view it as a requirement for journalists to avoid deliberately deceiving their audiences.

PolitiFact deceives its audience daily in this way without any visible repentance.


Wednesday, November 20, 2019

PolitiFact as Rumpelstiltskin.

“Round about, round about,
Lo and behold!
Reel away, reel away,
Straw into gold!”

PolitiFact's Nov. 19, 2019 fact check of something President Donald Trump said on the Dan Bongino Show gives us yet another example of a classic fact checker error, the mistake of interpreting ambiguous statements as clear statements.

Here's PolitiFact's presentation of a statement it found worthy of a fact check:
In an interview with conservative show host Dan Bongino, Trump said a false rendition of that call by House Intelligence chairman Adam Schiff, D-Calif., forced him to release the readout of that call.

"They never thought, Dan, that I was going to release that call, and I really had no choice because Adam Schiff made up a call," Trump said Nov. 15. "He said the president said this, and then he made up a call."

The problem with Trump’s statement is that Schiff spoke after the White House released the memo of the phone call, not before.
 Note that PolitiFact finds a timeline problem with Trump's claim.

But also note that Trump makes no clear statement regarding a timeline. If Trump said "I released the transcript after Schiff did his 'parody' version of the telephone call," then he would have established an order of events. Trump's words imply an order of events, but it is not a hard logical implication (A, therefore B).

PolitiFact treats the case exactly like a hard implication.

Here's why that's the wrong approach.

First, significant ambiguity should always slow a fact-checker's progress toward an interpretation.

Second, Trump gave a speech on Sept. 24, 2019 that announced the impending release of the transcript (memorandum of telephone conversation). The "transcript" was released on Sept. 25. Schiff gave his "parody" account of the call the next day, on Sept. 26. And Trump responded to Schiff's "parody" version of his call on Sept. 30 during an event honoring the late Justice Antonin Scalia:
Adam Schiff — representative, congressman — made up what I said.  He actually took words and made it up.  The reason is, when he saw my call to the President of Ukraine, it was so good that he couldn’t quote from it because it — there was nothing done wrong.  It was perfect.
PolitiFact's interpretation asks us to believe that Trump either forgot what he said on Sept. 30 or else deliberately decided to reverse the chronology.

What motive would support that decision? Is one version more politically useful than the other?

It's not uncommon for people to speak of "having no choice" based on an event subsequent to that choice. The speaker means that the choice would have had to take place eventually.

When a source makes two claims that touch the same subject but differ in content, the following rule applies: Use the clearer statement to make sense of the less clear statement.

Fact checkers who manufacture certitudes out of equivocal language give fact-checking a bad name.

They are Rumpelstiltskins, trying to spin straw into gold.


Afters

We would draw attention to a parallel highlighted at (Bryan's) Zebra Fact Check last month.

During a podcast interview Hillary Clinton used equivocal language in saying "they" were grooming Democratic Party presidential hopeful Tulsi Gabbard as a third-party candidate to enhance Trump's chances of winning the 2020 election.

No fact checker picked out Clinton's claim for a fact check. And that's appropriate, because the identity of "they" was never perfectly clear. Clinton probably meant the Russians, but "probably" doesn't make it a fact.

In that case, the fact checkers picked on those who interpreted Clinton to mean the Russians were grooming Gabbard (implicitly finding that Clinton's ambiguity clearly meant "Republicans").

Fact checkers have no business doing such things.

Until fact checkers can settle on a consistent approach to their craft, we justifiably view it as a largely subjective enterprise.

Monday, November 18, 2019

We want Bill Adair subjected to the "Flip-O-Meter"

It wasn't that long ago we reported on Bill Adair's article for Columbia Journalism Review declaring "Bias is good" along with a chart indicating fact-check journalism has more opinion to it than either investigative reporting or news analysis.

Yet WRAL, in announcing its new partnership with PolitiFact North Carolina, quoted Adair saying PolitiFact is unbiased:
“What is important about PolitiFact is not just that it’s not biased,” Adair said, “but that we show our work and that we show all of our sources.”
Naturally we cannot allow that to pass. We used WRAL's contact form to reach out to the writer of the article, Ashley Talley.

We pointed out the discrepancy between what Talley reported from Adair and what Adair wrote for Columbia Journalism Review. We suggested somebody should fact check Adair.

Next we'll be contacting Paul Specht of PolitiFact North Carolina over this quotation:
“One thing I love about PolitiFact is that the format is very structured and it's not up to me to decide what is or isn't true,” said Paul Specht, WRAL’s PolitiFact reporter who has been covering local, state and national politics for years. “It's up to me to go do the research and then it's up to the research to tell us what is true.”
We're not sure how that's supposed to square with Adair's declaration from a few years ago that "Lord knows the decision about a Truth-O-Meter rating is entirely subjective."

What changed?

In addition to its "Truth-O-Meter" PolitiFact publishes "Flip-O-Meter" items.

We'd like to see Adair on the Flip-O-Meter.

Friday, November 15, 2019

PolitiFact editor: "It’s important to note that we don’t do a random or scientific sample"

As we have mentioned before, we love it when PolitiFact's movers and shakers do interviews. It nearly guarantees us good material.

PolitiFact Editor Angie Drobnic Holan appeared on Galley by CJR (Columbia Journalism Review) with Mathew Ingram to talk about fact-checking.

During the interview Ingram asked about PolitiFact's process for choosing which facts to check (bold emphasis added):
MI
One question I've been asking many of our interview guests is how they choose which lies or hoaxes or false reports to fact-check when there are just so many of them? And do you worry about the possibility of amplifying a fake news story by fact-checking it? This is a problem Whitney Phillips and Joan Donovan have warned about in interviews I've done with them about this topic.
ADH
Great questions! We use our news judgement to decide what to fact-check, with the main criteria being that it’s a topic in the news and it’s something that would make a regular person say, "Hmmm, I wonder if that’s true." If it sounds wrong, we’re even more eager to do it. It’s important to note that we don’t do a random or scientific sample.
It's important, Holan says, to note that PolitiFact does not do a random or scientific sample when it chooses the topics for its fact check stories.

We agree wholeheartedly with Holan's statement in bold. And that's an understatement. We've been harping for years on PolitiFact's failure to make its non-scientific foundations clear to its audience. And here Holan apparently agrees with us by saying it's important.

How important is it?

PolitiFact's statement of principles says PolitiFact uses news judgment to pick out stories, and also mentions the "Is that true?" standard Holan mentions in the above interview segment. But what you won't find in PolitiFact's statement of principles is any kind of plain admission that its process is neither random nor scientific.

If it's important to note those things, then why doesn't the statement of principles mention it?

At PolitiFact, it's so important to note that its story selection is neither random nor scientific that a Google search of the politifact.com domain for "random" AND "scientific" turns up nothing about its method for story selection in three pages of hits.

And despite commenters on PolitiFact's Facebook page commonly interpreting candidate report cards as representative of all of a politician's statements, Holan insists "There's not a lot of reader confusion" about it.

If there's not a lot of reader confusion about it, why say it's important to note that the story selection isn't random or scientific? People supposedly already know that.

We use the tag "There's Not a Lot of Reader Confusion" on occasional stories pointing out that people do suffer confusion about it because PolitiFact doesn't bother to explain it.

Post a chart of collected "Truth-O-Meter" ratings and there's a good chance somebody in the comments will extrapolate the data to apply to all of a politician's statements.

We say it's inexcusable that PolitiFact posts its charts without making their unscientific basis clear to readers.

They just keep right on doing it, even while admitting it's important that people realize a fact about the charts that PolitiFact rarely bothers to explain.

Monday, October 14, 2019

Remember back when it was False to say Nixon was impeached?

I remember reading a story years back about a tire company that enterprisingly tried to drum up business by sending out a team to spread roofing nails on the local roads.

Turns out there's a version of that technique in PolitiFact's fact-checking tool box.

Nixon was Never Impeached

Back on June 13th, 2019 PolitiFact's PunditFact declared it "False" that Nixon was impeached. PunditFact said "Nixon was never officially impeached." We're not sure what would count as "unofficially impeached." We're pretty sure it's the same as saying Nixon was not impeached.



But that was way back in June. Over three months have passed. And it's now sufficiently true that Nixon was impeached so that PolitiFact can spread the idea on Twitter and write an impeachment PolitiSplainer that refers multiple times to the Nixon impeachment.

Nixon was Impeached

Twitter
Edit: (if embed isn't working use hotlink above)

Is Nixon a good example to include with Johnson and Clinton (let alone Trump) if Nixon wasn't impeached?
More than anything, the procedural details are derived from historical precedent, from the impeachment of President Andrew Johnson in the 1860s to that of President Richard Nixon in the 1970s and President Bill Clinton in the 1990s.

Got it? The impeachment of President Nixon. Because Nixon was impeached, right?
Experts pointed to a variety of differences between the Trump impeachment process and those that went before.

The differences begin with the substance of the charges. All prior presidential impeachments have concerned domestic issues — the aftermath of the Civil War in Johnson’s case, the Watergate burglary and coverup under Nixon, and the Monica Lewinsky affair for Clinton.
Got it? Nixon was impeached over the Watergate burglary. Because Nixon was impeached, right?
The impeachments of both Nixon and Clinton did tend to curb legislative action by soaking up all the attention in Washington, historians say.
Obviously a fact-checker will not refer to "the impeachments of both Nixon and Clinton" if Nixon was not impeached. Therefore, Nixon was impeached. Right?
Some congressional Republicans have openly supported Trump’s assertion that the allegations against Trump are dubious. This contrasts with the Nixon impeachment, when "on both sides there was a pretty universal acknowledgement that the charges being investigated were very important and that it was necessary to get to the bottom of what happened," said Frank O. Bowman III, a University of Missouri law professor and author of the book, "High Crimes and Misdemeanors: A History of Impeachment for the Age of Trump."
Obviously a fact-checker will only draw a parallel to the Nixon impeachment if Nixon was impeached. Therefore Nixon was impeached. Right?
Trump is facing possible impeachment about a year before running for reelection. By contrast, both Nixon and Clinton had already won second terms when they were impeached. (Johnson was such an outcast within his own party that he would have been an extreme longshot to win renomination, historians say.)
Got it? Nixon and Clinton had already won second terms when they were impeached. Because Nixon was impeached, right?
On the eve of impeachment for both Nixon and Clinton, popular support for impeachment was weak — 38% for Nixon and 29% for Clinton, according to a recent Axios analysis. (There was no public opinion polling when Johnson was president.)
Got it? "On the eve of impeachment for both Nixon and Clinton," because a fact checker doesn't refer to the eve of the Nixon impeachment if there was no Nixon impeachment.

Is there a Christmas Eve if there's no Christmas?

That's six times PolitiFact referred to the Nixon impeachment in just one PolitiSplainer article. And about three months after PolitiFact's PunditFact said Nixon was not impeached.

Want a seventh? We've got a seventh:
During Nixon’s impeachment, "people counted on the media to serve as arbiters of truth," he said. "Obviously, we don’t have that now."
 "During Nixon's impeachment" directly implies Nixon was impeached. Seven.

We've been going in order, too.


(Nixon Wasn't Impeached)


But behold! Context at last!
The uncertainty about Senate process stems from the rarity of the process. Nixon resigned before the House could vote to send articles to the Senate, leaving just one precedent — Clinton's trial — in the past century and a half.
Admittedly, that's not PolitiFact saying "Nixon was not impeached." On the other hand, it's PolitiFact directly implying Nixon was not impeached. Blink and you might miss it amidst all the talk about the Nixon impeachment.

Can we get to eight after that bothersome bit of context?

Nixon was Impeached, Continued 

We can:
The impeachments of both Nixon and Clinton did tend to curb legislative action by soaking up all the attention in Washington, historians say.
We're curious which historians PolitiFact talked to who explicitly referred to the impeachment of Nixon. There are no quotations in the text of the PolitiSplainer that would support this claim about what historians say.

PolitiFact flirted with nine in the next paragraph. We're capping the count at eight.

In summary, we'll just say this: If there's a sense of "impeachment" that doesn't mean literally getting impeached by the House and standing trial in the Senate, then Jimmy Kimmel is entitled to that understanding when he says Nixon was the last president to be impeached.

Contrary to PolitiFact's framing, Kimmel was wrong not because Nixon was not impeached. Kimmel was wrong because President William J. Clinton was the last president to be impeached. There was never any need for PunditFact to focus on the fact Nixon wasn't impeached, unless it was to avoid emphasis on Clinton.

This all works out very well for PolitiFact. PolitiFact does what it can to spread the misperception Nixon was impeached. And then it can draw clicks to its PunditFact fact check showing that claim false.

Just like dropping roofing nails on the road.

Tuesday, September 3, 2019

Fact Check not at PolitiFact Illinois

One of the characteristics of PolitiFact that drags it below its competitors is its penchant for not bothering to fact check what it claims to fact check.

Our example this time comes from PolitiFact Illinois:

From the above, we judge that "Most climate scientists agree" that we have less than a decade to avert a worst case climate change scenario counts as the central claim in need of fact-checking. PolitiFact hints at the confusion it sows in its article by paraphrasing the issue as "Does science say time is running out to stop climate disaster?"

The fact is that time could be running out to stop climate disaster while at the same time (Democrat) Sean Casten's claim could count as entirely false. Casten made a claim about what a majority of scientists believe about a short window of opportunity to avoid a worst-case scenario. And speaking of avoidance, PolitiFact Illinois avoided the meat of Casten's claim in favor of fact-checking its watered-down summary of Casten's claim.

The Proof that Proves Nothing

The key evidence offered in support of Casten was a 2018 report by the United Nations Intergovernmental Panel on Climate Change.

The problem? The report offers no clear evidence showing a majority of climate scientists agree on anything at all, up to and including what Casten claims they believe. In fact, the report only mentions "scientist" or "scientists" once (in the Acknowledgments section):
A special thanks goes to the Chapter Scientists of this Report ...
A fact checker cannot accept that report as evidence of what a majority of scientists believe without strong justification. That justification does not occur in the so-called fact check. PolitiFact Illinois apparently checks the facts using the assumption that the IPCC report would not claim something if a majority of climate scientists did not believe it.

That's not fact-checking.

And More Proof of Nothing

Making this fact-checking farce even more sublime, PolitiFact Illinois correctly found the report does not establish any kind of hard deadline for bending the curve on carbon emissions (bold emphasis added):
Th(e) report said nations must take "unprecedented" actions to reduce emissions, which will need to be on a significantly different trajectory by 2030 in order to avoid more severe impacts from increased warming. However, it did not identify the hard deadline Casten and others have suggested. In part, that’s because serious effects from climate change have already begun.
So PolitiFact did not bother to find out whether a majority of scientists affirm the claim about "less than a decade" (burden of proof, anyone?) and moreover found the "less than a decade" claim was essentially false. We can toss PolitiFact's line about serious effects from climate change already occurring because Casten was talking about a "worst-case scenario."

PolitiFact Illinois rated Casten's claim "Mostly True."

Does that make sense?

Is it any wonder that Independents (nearly half) and Republicans (more than half) think fact checkers favor one side?


Afters

Also worth noting: Where does that "'worst-case scenario'" phrase come from? Does Casten put it inside quotation marks because he is quoting a source? Or is it a scare quote?

We confirmed, at least, that the phrase does not occur in the IPCC report that supposedly served as Casten's source.

We will not try to explain PolitiFact Illinois' lack of curiosity on this point.

Let PolitiFact Illinois do that.


Update Sept. 4, 2019: We originally neglected to link to the flawed PolitiFact Illinois "fact check." This update remedies that problem.

Sunday, September 1, 2019

PolitiFact founder: "Bias is good"

It wasn't even a year ago that PolitiFact pompously announced it isn't biased, but now PolitiFact founder Bill Adair has muddied the waters by announcing from his lofty perch at Duke University that bias is good.

Doubtless it is important to take Adair's words in context.

We'll certainly try.

Here's the Columbia Journalism Review headline:

Op-ed: Bias is good. It just needs a label.


In context so far: Adair appears to say bias is good if the reader understands it (hence the need for a label).

Adair repeated the same point in the article and then used a graphic to spell out what he's saying:


It's hard not to notice that Adair's graphic appears to concede what we have argued for years here at PolitiFact Bias. Fact-checking is not some kind of objective and scientific pursuit even if we set aside the subjective linear-scale truth ratings. Adair understands fact-checking contains more opinion than does "news analysis," with no other form of journalism closer to "opinion."

Unfortunately Adair does little to distinguish the desirable types of bias he's probably talking about--bias toward truth and democracy, for example--from unhealthy cognitive biases. But at least he gives clear guidance that journalists should appropriately label their work.

Now we just need to find the appropriate label at PolitiFact, right?

PolitiFact is not biased -- here’s why

Okay, great. No problem, right?

Seriously, we're not aware of any prominent acknowledgement of bias labeling at PolitiFact.com.

If such a thing existed, perhaps we should expect to find it on PolitiFact's statement of principles. But we get this instead:
Our ethics policy for PolitiFact journalists

PolitiFact seeks to present the true facts, unaffected by agenda or biases. Our journalists set their own opinions aside as they work to uphold principles of independence and fairness.
Anybody see an expression of the idea "bias is good" in there? We don't.

PolitiFact over its history has encouraged readers to take its biased reporting as objective reporting.

It deceived and continues to deceive its readers by the standard Adair advocates.

Wednesday, August 14, 2019

A PolitiFact gloss on the Michael Brown "murder"

We've been tracking evidence of PolitiFact's look-the-other-way stance on Democrats' campaign rhetoric on race. PolitiFact sees no need to issue a "Truth-O-Meter" rating when Democrats call President Trump a racist, for example.

Now, with Democratic presidential candidates like Kamala Harris and Elizabeth Warren asserting that Michael Brown was murdered, again we see PolitiFact reluctant to apply fact-checking to Democratic Party falsehoods.

Instead of issuing a "Truth-O-Meter" rating for either Democratic Party candidate over their Michael Brown statements, PolitiFact published an absurd PolitiSplainer article.

A Fox News article hits most of the points that we would have emphasized:
The fact-checking website PolitiFact again came under fire for alleged political bias Wednesday after it posted a bizarre article that refused to rule on whether Michael Brown was in fact "murdered" by police officer Darren Wilson in Ferguson, Mo. in 2014, as Democratic presidential candidates Kamala Harris and Elizabeth Warren falsely claimed last week.
Indeed, Fox News emphasizes the key expert opinion from the PolitiFact PolitiSplainer:
Jacobson quoted Jean Brown, a communications professor who focuses on "media representations of African Americans," as saying that the entire question of whether Warren and Harris spread a falsehood was nothing more than an "attempt to shift the debate from a discussion about the killing of black and brown people by police."
The Fox article quotes the Washington Examiner's Alex Griswold asking why the expert opinion from Brown was included in the fact check.

We suggest that the quotation represents the reasoning PolitiFact used in deciding not to issue "Truth-O-Meter" ratings for Harris or Warren.

PolitiFact, per the Joe Biden gaffe, seems interested in truth, not facts.

Sunday, August 4, 2019

Highlights of PolitiFact's Reddit AMA from August 2, 2019

PolitiFact newbie Daniel Funke, former fact check reporter for the International Fact-Checking Network, represented PolitiFact for a Reddit AMA on Aug. 2, 2019.

We always look forward to public Q&A sessions with PolitiFact staff, for they nearly always provide us with material.

Funke stuck with PolitiFact boilerplate material for the most part, even channeling Bill Adair with his answer about PolitiFact's response to critics who suggest PolitiFact is biased.

Funke's chief error, in our view, was his repetition of a false PolitiFact public talking point:
As far as corrections: We're human beings, so we do make mistakes from time to time. That's why we have a corrections process. You can read our full corrections policy, but the bottom line is that we fix the wrong information and note it. If we give a new rating to a fact-check, we archive the old version so people can see exactly what we changed. Everything that gets a correction or an update gets tagged - see all tagged items.
We've pointed out dozens and dozens of mistakes at PolitiFact, and though we've prompted PolitiFact to fix quite a few of them, the majority of the time PolitiFact ignores the critique and doesn't bother to fix anything. We tried to get PolitiFact Georgia not to interpret "pistol" as a synonym for "handgun" because revolvers count as handguns but do not count as pistols. No go. The mistake remains enshrined in PolitiFact's "database" of facts. And Funke's recent mistake in using a number PolitiFact found wanting as the deficit figure handed off from Bush to Obama still hasn't been fixed. Nor do we expect PolitiFact to break tradition by fixing it.

PolitiFact fixes mistakes if and only if PolitiFact feels like fixing the mistakes.

So Funke is wrong about the bottom line at PolitiFact. The PolitiFact "database" has more than its share of bad information.

As for archiving the old version of a fact check when the rating changes, contrary to what Funke says, readers can't necessarily find the archived version. Here's an example from 2017. The new version contains no link to the old version. A reader would have to figure out how PolitiFact structures its URLs to track down the archived version (assuming there is one).

Finally, Funke repeats the falsehood that "Everything that gets a correction or an update gets tagged," complete with a link to the very incomplete list of corrected items. PolitiFact does not use tags on many of its articles, particularly those that do not feature a rating. Corrections on those articles do not get tagged and do not appear on the list of corrections. Moreover, PolitiFact simply neglects to tag corrected fact checks on occasion.

Apparently it's too much to ask that PolitiFact staffers know what they're talking about when they describe PolitiFact's corrections process.

Saturday, August 3, 2019

PolitiFact: The true half of Cokie Roberts' half truth is President Trump's half truth

Pity PolitiFact.

The liberal bloggers at PolitiFact may well see themselves as neutral and objective. If they see themselves that way, they are deluded.

Latest example:


PolitiFact's Aug. 3, 2019 fact check of President Trump finds he correctly said the homicide rate in Baltimore is higher than in some countries with a significant recent history of violence. But it wasn't fair of Trump to compare a city to a country for a variety of reasons, experts said.

So "Half True," PolitiFact said.

The problem?

Here at PolitiFact Bias we apparently remember what PolitiFact has done in the past better than PolitiFact remembers it. We remembered PolitiFact giving (liberal) pundit Cokie Roberts a "Half True" for butchering a comparison of the chance of being murdered in New York City with the chance of being murdered in Honduras.




Roberts was way off on her numbers (to the point of being flatly false about them, we would say), but because she was right that the chance of getting murdered is greater in Honduras than in New York City, PolitiFact gave Roberts a "Half True" rating.

We think if Roberts' numbers are wrong (false) and her comparison is "Half True" because it isn't fair to compare a city to a country, then Roberts seems to deserve a "Mostly False" rating.

That follows if PolitiFact judges Roberts by the same standard it applies to Mr. Trump.

But who are we kidding?

PolitiFact often fails to apply its standards consistently. Republicans and conservatives tend to receive the unfair harm from that inconsistency. Mr. Trump, thanks in part to his earned reputation for hyperbole and inaccuracy, tends to receive perhaps more unfair harm than anybody else.

It is understandable that fact checkers allow confirmation bias to influence their ratings of Mr. Trump.

It's also fundamentally unfair.

We think fact checkers should do better.

Thursday, August 1, 2019

That Time PolitiFact Used Facebook to Amplify a Misleading Message on Fiscal Responsibility


We wrote about PolitiFact's awful fact check of a tweet that used deficit numbers at the start and end of presidential terms in office to show it's wrong to think that Democrats cause deficits.

PolitiFact's Facebook page took the misleading nature of that fact check and amplified it to the max with a false headline:


Contrary to the headline, the fact check does not tell how the past five presidents affected the deficit. Instead, the fact check pretends to address the accuracy of a tweet that suggests deficit numbers at the start and end of presidential administrations tell us which party causes deficits. That use of deficit numbers serves as an exceptionally poor metric, a fact PolitiFact barely hints at in giving the tweet a "Mostly True" rating.

The tweet falsely suggests those deficit numbers give us a reliable picture of party fiscal responsibility (and the way presidents affect the deficit), and PolitiFact amplifies those misleading messages.

It's almost like they think that's their job.

Tuesday, July 30, 2019

PolitiFact's Inconsistency on True-But-Misleading Factoids

People commonly mislead other people using the truth. Fact checkers have recognized this with various kinds of "True but False" designations. But the fact checkers tend to stink at applying consistent rules to the "True but False" game by creating examples in the "True but False but True" genre.

PolitiFact created a classic in the "True but False" genre for Sarah Palin (John McCain's pick for vice presidential nominee) years ago. Palin made a true statement about where U.S. military spending ranks worldwide as a percentage of GDP. PolitiFact researched the ways in which that truth misled people and gave Palin a "Mostly False" rating.

On July 29, 2019, PolitiFact gave a great example of the "True but False but True" genre with a fact check of a tweet by Alex Cole (side note: This one goes on the report card for "Tweets" instead of a report card for "Alex Cole"):


PolitiFact rated Cole's tweet "Mostly True." But the tweet has the same kind of misleading features that led PolitiFact to give Palin a "Mostly False" rating in the example above. PolitiFact docked Palin for daring to compare U.S. defense spending as a percentage of GDP to very small countries as well as those experiencing strife.

But who thinks the deficit at the start and end of an administration serves as a good measure of party fiscal discipline?

Yet that's the argument in Cole's tweet, and it gets a near-total pass from PolitiFact.


And this isn't even one of those situations where PolitiFact focused on the numbers to the exclusion of the underlying argument. PolitiFact amplified Cole's argument by repeating it.

Note PolitiFact's lead:
A viral post portrays Democrats, not Republicans, as the party of fiscal responsibility, with numbers about the deficit under recent presidents to make the case.
PolitiFact sends out the false message that the above argument is "Mostly True."

That's ridiculous. For starters, the deficit is best measured as a percentage of GDP. Also, presidents do not have great control over the rise and fall of deficits. PolitiFact pointed out that second factor but without giving it the weight it should have had in undercutting Cole's argument. After all, the tweet suggests the presidents drove deficit changes without any hint of any other explanation.
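
For a concrete example (our arithmetic; the GDP figures are rounded approximations we supply, not numbers from the tweet or the fact check), the same nominal deficit carries very different fiscal weight in different-sized economies:

    # Our illustration: the FY2009 deficit discussed in the fact check
    # (~$1.41 trillion) measured against 2009's roughly $14.4 trillion
    # GDP, and against a hypothetical economy twice that size.
    deficit = 1.41e12
    for label, gdp in [("2009 GDP (approx.)", 14.4e12),
                       ("hypothetical GDP twice as large", 28.8e12)]:
        print(f"${deficit / 1e12:.2f}T deficit vs {label}: "
              f"{deficit / gdp:.1%} of GDP")

That works out to roughly 9.8 percent of GDP in the first case and 4.9 percent in the second: the same dollar figure, half the fiscal weight. Comparing raw deficit numbers across administrations ignores exactly that kind of scaling.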

Yes, this is the same fact-checking operation that laughably assured us back in November 2018 that "PolitiFact is not biased."

PolitiFact could easily have justified giving Cole the same treatment it gave Palin. But it did not. And this type of scenario plays out repeatedly at PolitiFact, with conservatives getting the cold shoulder from PolitiFact's star chamber.

Whether or not the liberal bloggers at PolitiFact are self-aware to the point of seeing their own bias, it comes out in their work.


Afters

Hilariously, in this article PolitiFact dinged the deficit tweet for using a figure of $1.2 trillion for the end of the George W. Bush presidency:
"(George W.) Bush 43 took it from 0 to 1.2 trillion." This is in the ballpark. Ignoring the fact that he actually started his presidency with a surplus, Bush left office in 2009 with a federal deficit of roughly $1.41 trillion.
Why is it funny?

It's funny because one of the PolitiFact articles cited in this one prefers the $1.2 trillion figure over the $1.4 trillion figure:

The Great Recession hit hard in 2008 and grew worse in 2009. In that period, the unemployment rate doubled from about 5 percent to 10 percent. With Democrats in charge of both houses of Congress and the White House, Washington passed a stimulus package that cost nearly $190 billion, according to the Congressional Budget Office. That included over $100 billion in new spending and a somewhat smaller amount in tax cuts, about $79 billion in fiscal year 2009.

George W. Bush was not in office when those measures passed. So a more accurate number for the deficit he passed on might be closer to $1.2 trillion.
But it's just fact-checking, so inaccuracy is okay so long as it's in the service of a desirable narrative.


Monday, July 29, 2019

Reporting on the Mueller Report from the Liberal Bubble

PolitiFact's treatment of all things Mueller fits well with its left-leaning reputation.

A PolitiFact fact check from July 24, 2019 serves as our example.


We would first draw the reader's attention to the way PolitiFact altered Rep. Ratcliffe's claim. Ratcliffe said Mueller did not follow the special counsel rules. Not following rules may take place through omission or by elaborating on what the rules stipulate. But PolitiFact says Ratcliffe claimed Mueller broke the rules.

We think it's fairly clear that elaborating on the rules counts as failing to follow the rules. It's less clear that elaborating on the rules counts as breaking the rules.

So right off the bat, PolitiFact is spinning Ratcliffe's claim into a straw man that is more easily attacked.

Missing the Point?

Rep. Ratcliffe was repeating a point pretty familiar to conservatives, that the Mueller report failed to follow the special prosecutor statute because Mueller punted on deciding whether to recommend prosecution for obstruction of justice. Conservative pundit and legal expert Andrew McCarthy, for example, has written on the topic.

It's hard to see how PolitiFact's fact check addresses a position like McCarthy's.

PolitiFact contacted three legal experts for comment. But only Mark Osler (University of St. Thomas) was quoted on Ratcliffe's key issue:
Federal regulations say, "At the conclusion of the Special Counsel's work, he or she shall provide the Attorney General with a confidential report explaining the prosecution or declination decisions reached by the Special Counsel."

"It clearly includes declinations, which is taking no action," Osler said.
We humbly submit to the expert Osler that a declination is not merely a lack of action. Declination, in context, is a decision not to prosecute. An explanation of the Special Counsel's decision not to prosecute meets the requirements of the statute. But an unexplained decision not to decide whether to prosecute should not meet the requirements even though it is lack of action.

And, hypothetically, filing no report at all would count as "taking no action," yet it plainly would not satisfy the statute.

A July 24, 2019 article in the Washington Post helps make clear that Mueller pretty much declined to spell out why he declined to recommend prosecution for obstruction of justice:
John Yoo, a former top official in the George W. Bush Justice Department, said he found Mueller’s explanation “rather vague and somewhat mysterious,” and that he may have felt he should defer to the attorney general.

“Like everyone else, I have been trying to infer why he did what he did,” Yoo said.

But Mueller offered little elaboration on his reasoning as he was pressed Wednesday by lawmakers in both parties.
Again, the declination description required in the statute concerns the decision not to prosecute, not the decision not to explain the decision not to prosecute. Lack of action is not an explanation.

PolitiFact's Big Whiff

PolitiFact showed the true quality of its fact-checking by apparently knowing nothing about widely-published reasoning like McCarthy's. It's the Bubble!

Check out this faux pas in PolitiFact's summary:
We found no legal scholar who agreed with Ratcliffe.
PolitiFact could not find articles by Andrew McCarthy?

Couldn't find the comments by David Dorsen in this Newsweek article?

Couldn't find this piece by Alan Dershowitz for The Hill?

Trust fact checkers? Why?

Thursday, July 25, 2019

Ocasio-Cortez, PolitiFact and the parking lot

My, how PolitiFact beclowns itself.

A number of media outlets have noted PolitiFact's fact check of the claim Rep. Alexandria Ocasio-Cortez cried about an empty parking lot. We like the account from Amanda Prestigiacomo at the Daily Wire:
Politifact is at it again! The left-wing fact-checker purporting to be unbiased made a mockery of themselves (again) with their latest rating concerning socialist Rep. Alexandria Ocasio-Cortez (D-NY). Politifact, creating their best damage control for the freshman congresswoman, rated the claim that Ocasio-Cortez cried in front of an empty parking lot for a photo-op as "false" because it was, in fact, a "road" with some parked cars, not a parking lot, that the elected Democrat cried in front of.
The hilarity of the story took an exponential leap when PolitiFact Editor Angie Drobnic Holan took to Twitter in defense of PolitiFact:
Holan's defense falls flat because the original story with the "parking lot" language was using humor to make a point. Ocasio-Cortez had nothing to look at that should reasonably produce the emotional response she wore for the camera. There were no children or refugees in view. At most, she would have been able to see signs of the tents set up to house illegal immigrants.

Crying at the sight of distant tents is a little like breaking down upon seeing a hospital. Because of all the suffering that happens in hospitals. But that type of response is uncommon, right?

PolitiFact's reporting leaves doubt as to whether a person at the fence could see tents:
Daniel Borunda, a reporter for the El Paso Times who was at the rally on the same day, told PolitiFact that the tent complex was "visible in the distance several hundred yards away" from the fence.
Certainly PolitiFact's reporting seems intended to produce the impression one could see tents from the entryway fence. But Borunda's quotation is cut off and instead of relying on Borunda we end up relying on PolitiFact for the information. We think it likely PolitiFact fudged the facts.

There's good reason to suspect Borunda did not claim tents were visible from the fence. The Google Maps image, for example, shows great distance and a number of buildings between the entry area and the section of the complex where the tents were set up. Click through to the map and explore for yourself.

Two buildings that appear round from above sit to the east of the main ICE building. The tents for the tent city (along with a sloped-roof structure that does not appear in the Google image) sit just south of those buildings in aerial photographs from the time.

It's worth pointing out that the photo we just linked shows a line of buildings and a parking lot between the tent city and Ocasio-Cortez's reported location.

This image from later the same year (September 2018) shows the growth of the tent city stretching south and east from its original location--farther from Ocasio-Cortez's vantage point and likewise with a view punctuated by intervening buildings and trees. And that was after Ocasio-Cortez made her visit (June 21, 2018).

Did any photographers take pictures of the tent city from outside the fence at Ocasio-Cortez's location? We'd love to see them, if they exist.


Afters

We took our analysis one step further. The images of Ocasio-Cortez, along with PolitiFact's reporting, appear to place her between two sections of wall outside the border compound. A road and a sidewalk run between the sections of wall, and it appears the north wall features a sliding fence/gate that officials may use to block the roadway.

If we're correct about the location, that puts the south part of the wall between Ocasio-Cortez and any view of the tent city.

We put a cluster of red dots where we believe Ocasio-Cortez stood.

The fall of the shadows in the photographs of Ocasio-Cortez suggests the pictures were taken in the morning, if our placement is correct. An image posted at the end of a Time Magazine story supports our analysis (showing the line of trees to the left of the roadway, along with the utility poles). Facing the road leaves the tent city directly to the left of the protesters pictured, behind a wall and out of sight.


Update July 27, 2019: A kind reader pointed out an excess of "not" in the paragraph beginning with "Holan's defense." We took it out. Our thanks to the kind reader.

Friday, July 19, 2019

PolitiFact Wisconsin: "Veteran" and "service member" mean the same thing

A funny thing happened when PolitiFact examined Democratic presidential candidate Tulsi Gabbard's claim the Trump administration deports service members.

Instead of ruling on whether the Trump administration was deporting service members, PolitiFact Wisconsin decided to look at whether the Trump administration was deporting non-naturalized service veterans.

Therefore "service members" are the same thing as non-naturalized service veterans?

We wish we were kidding. But read PolitiFact's summary conclusion. PolitiFact equates "service members" with "veterans" as though it's the most natural thing in the world, and doesn't even mention citizenship status:
Our ruling

Gabbard said at the same time Trump talks about supporting veterans, "he is deporting service members who have volunteered to serve this country."

The Trump administration expanded the grounds under which people, including veterans, can be deported, which some blame for more veterans being forced to leave the country. That said, GAO documents make clear the issue existed before Trump took office -- something that wasn’t acknowledged in Gabbard’s claim.

Our definition for Mostly True is "the statement is accurate but needs clarification or additional information." That fits here.
PolitiFact does mention citizenship issues in the body of the story. It opens, for example, with a frame emphasizing military service and illegal immigration:
Military matters and illegal immigration.

Both are hot-button issues for voters in the 2020 presidential election, though for different reasons.

U.S. Rep. Tulsi Gabbard of Hawaii, a Democratic presidential hopeful and major in the Hawaii Army National Guard, linked them when she spoke July 11, 2019 at the League of United Latin American Citizens convention in Milwaukee.
In the quotation PolitiFact Wisconsin provided, Gabbard did nothing to explicitly link military service with illegal immigration. The journalist (or reader) would have to infer the connection. And PolitiFact Wisconsin failed to link to a transcript of Gabbard's speech, linking us instead to a Journal Sentinel news report that supplies no additional context for Gabbard's remarks.

Intentional Spin?

We see evidence suggesting PolitiFact Wisconsin applied intentional spin in its story to minimize the misleading nature of Gabbard's statement.

In context, Gabbard referred to "lip service" Trump offers to "our veterans, to our troops," but PolitiFact lops off "to our troops" in its headline and deck material. That truncated version of Gabbard's statement makes it appear reasonable to assume Gabbard was talking about veterans and not active service members.

Put simply, PolitiFact manipulated Gabbard's statement to help make it match the interpretation PolitiFact's liberal bloggers gave it in the story. PolitiFact not only chose not to deal with the obvious way Gabbard's statement might mislead people, but also chose not to transparently disclose that decision to its readers.

Principles Forsaken

PolitiFact's statement of principles is a sham. Why? Because PolitiFact applies the principles so haphazardly that we might as well call the result situational ethics. The ideology of the claimant appears to serve as one of the situational conditions driving the decision as to which principle to apply in any given case.

In Gabbard's case, she made a statement that could easily be interpreted in a way that makes it false, and PolitiFact often uses that as the justification for a harsh rating. In its statement of principles, PolitiFact says it takes into account whether a statement is literally true (or false). It also says PolitiFact takes into account whether the statement is open to interpretation (bold emphasis added):
The three editors and reporter then review the fact-check by discussing the following questions.
• Is the statement literally true?
• Is there another way to read the statement? Is the statement open to interpretation?
• Did the speaker provide evidence? Did the speaker prove the statement to be true?
• How have we handled similar statements in the past? What is PolitiFact’s jurisprudence?
PolitiFact effectively discarded two of its principles for the Gabbard fact check.

We say that a fact-checking organization that does not apply its principles consistently cannot receive credit for consistent non-partisanship or fairness.

With PolitiFact, "words matter" sometimes.



Afters

We've always been open to legitimate examples showing PolitiFact's inconsistency causing unfair harm to liberals or Democrats.

The examples remain few, in our experience.