Tuesday, January 16, 2018

PolitiFact goes partisan on the "deciding vote"

When does a politician cast the "deciding vote"?

PolitiFact apparently delivered the definitive statement on the issue on Oct. 6, 2010 with an article specifically titled "What makes a vote 'the deciding vote'?"

Every example of a "deciding vote" in that article received a rating of "Barely True" or worse (PolitiFact now calls "Barely True" by the name "Mostly False"). And each of the claims came from Republicans.

What happens when a similar claim comes from a Democrat? Now we know:


Okay, okay, okay. We have to consider the traditional defense: This case was different!

But before we start, we remind our readers that cases may prove trivially different from one another. It's not okay, for example, if the difference is that this time the claim came from a woman, or that this time the case is from Florida rather than Georgia. Using trivial differences to justify a ruling represents the fallacy of special pleading.

No. We need a principled difference to justify the ruling. Not a trivial difference.

We'll need to look at the way PolitiFact justified its rulings.

First, the "Half True" for Democrat Gwen Graham:
Graham said DeSantis cast the "deciding vote against" the state's right to protect Florida waters from drilling.

There’s no question that DeSantis’ vote on an amendment to the Offshore Energy and Jobs Act was crucial, but saying DeSantis was the deciding vote goes too far. Technically, any of the 209 other people who voted against the bill could be considered the "deciding vote."

Furthermore, the significance of Grayson’s amendment is a subject of debate. Democrats saw it as securing Florida’s right to protect Florida waters, whereas Republicans say the amendment wouldn’t have changed the powers of the state.

With everything considered, we rate this claim Half True.
Second, the "Mostly False" for the National Republican Senatorial Committee (bold emphasis added):
The NRSC ad would have been quite justified in describing Bennet's vote for either bill as "crucial" or "necessary" to passage of either bill, or even as "a deciding vote." But we can't find any rationale for singling Bennet out as "the deciding vote" in either case. He made his support for the stimulus bill known early on and was not a holdout on either bill. To ignore that and the fact that other senators played a key role in completing the needed vote total for the health care bill, leaves out critical facts that would give a different impression from the message conveyed by the ad. As a result, we rate the statement Barely True.
Third, the "False" for Republican Scott Bruun:
(W)e’ll be ridiculously lenient here and say that because the difference between the two sides was just one vote, any of the members voting to adjourn could be said to have cast the deciding vote.
The Bruun case doesn't help us much. PolitiFact said Bruun's charge about the "deciding" vote was true but only because its judgment was "ridiculously lenient." And the ridiculous lenience failed to get Bruun's rating higher than "False."  So much for PolitiFact's principle of rating two parts of a claim separately and averaging the results.

Fourth, we look at the "Mostly False" rating for Republican Ron Johnson:
In a campaign mailer and other venues, Ron Johnson says Feingold supported a measure that cut more than $500 billion from Medicare. That makes it sound like money out of the Medicare budget today, when Medicare spending will actually increase over the next 10 years. What Johnson labels a cut is an attempt to slow the projected increase in spending by $500 billion. Under the plan, guaranteed benefits are not cut. In fact, some benefits are increased. Johnson can say Feingold was the deciding vote -- but so could 59 other people running against incumbents now or in the future.

We rate Johnson’s claim Barely True.
We know from earlier research that PolitiFact usually rated claims about the ACA cutting Medicare as "Mostly False." So this case doesn't tell us much, either. The final rating for the combined claims could end up "Mostly False" if PolitiFact considered the "deciding vote" portion "False" or "Half True." It would all depend on subjective rounding, we suppose.
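
Since we brought up PolitiFact's averaging principle, here is a minimal sketch of why "subjective rounding" decides the outcome. The numeric scale is entirely our own assumption (PolitiFact publishes no such numbers); we simply map the six ratings to integers and average two of them:

    # Hypothetical numeric scale for PolitiFact's ratings; our assumption,
    # not anything PolitiFact publishes.
    SCALE = {"True": 5, "Mostly True": 4, "Half True": 3,
             "Mostly False": 2, "False": 1, "Pants on Fire": 0}
    NAMES = {v: k for k, v in SCALE.items()}

    def average_rating(part1, part2):
        """Average two ratings; a half-step result leaves the rounding
        direction to the fact checker."""
        mean = (SCALE[part1] + SCALE[part2]) / 2
        if mean in NAMES:
            return NAMES[mean]
        return f"between {NAMES[int(mean)]} and {NAMES[int(mean) + 1]}"

    print(average_rating("Half True", "Mostly False"))  # between Mostly False and Half True
    print(average_rating("False", "Mostly False"))      # between False and Mostly False

Either pairing lands midway between two ratings, with "Mostly False" within rounding distance of both. That is why the combined rating tells us little about how PolitiFact judged the "deciding vote" portion.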

Note that PolitiFact Florida cited "What makes a vote 'the deciding vote'?" for its rating of Gwen Graham. How does a non-partisan fact checker square Graham's "Half True" rating with the ratings given to Republicans? Why does the fact check not clearly describe the principle that made the difference for Graham's more favorable rating?

As far as we can tell, the key difference comes from party affiliation, once again suggesting that PolitiFact leans left.


After the page break we looked for other cases of the "deciding vote."

Thursday, January 11, 2018

PolitiFact Texas transparency update

In a post we titled "PolitiFact Texas unpublished" we described how PolitiFact Texas published an article declaring Gov. Greg Abbott broke his promise to provide leadership training for principals of Texas schools. PolitiFact Texas published that article on Jan. 4, 2018. On Jan. 9, 2018, PolitiFact Texas announced it had taken down (unpublished) the article.


We checked, and the article was gone from the PolitiFact Texas website. It was still available, and remains available, at the website of the Austin American-Statesman, one of PolitiFact's partners for PolitiFact Texas.

And on Jan. 11, 2018, PolitiFact republished the old version of the story along with an updated version on the same page. PolitiFact Texas' actions make the temporarily unpublished version of the story look like a normal update, though the careful reader may notice that the second update followed just one week after the first.

We think that's goofy.

Why do we think it's goofy? If the unpublished version of the update was just a normal update later updated with new information, then why unpublish it in the first place? It makes no sense.

In its current form, the story offers readers no hint at all that PolitiFact Texas unpublished the story for two days. Instead, PolitiFact Texas offers its readers this explanation:
Back when he was campaigning to be governor of Texas, Greg Abbott called for training school principals to be better leaders.

Legislative proposals to get such trainings off the ground floundered, however, leading us to rate this Abbott vow a Promise Broken.

But that was before the Texas Education Agency alerted us to other efforts focused on bolstering school leaders.

We decided to look afresh at progress on this promise.
Does PolitiFact Texas downplay the timing of its two updates or what? PolitiFact Texas does mention the date in an editor's note (not labeled as an editor's note) as part of an introduction to the earlier update:
The Abbott-O-Meter update below was posted Jan. 4, 2018. It's eclipsed by the update above:
If it's in italics maybe it automatically counts as an editor's note?

The finished product shows PolitiFact Texas rating a fulfilled promise as broken about a week before reversing itself to declare the promise fulfilled, with no admission of any error and an incomplete description of what occurred.

PolitiFact thought it had erred, but was mistaken to think so?

We don't quite buy that.

Tuesday, January 9, 2018

PolitiFact Texas unpublished (Updated)

It seems like only yesterday we were praising PolitiFact Texas and W. Gardner Selby for taking an important step toward full transparency by making their interviews of experts available to readers.

Now it has come to our attention that PolitiFact Texas has followed PolitiFact National's lead in unpublishing stories when it decides they are defective.

A publisher may have legitimate reasons for unpublishing a story. But in the interest of transparency, organizations should not totally remove the defective work from public view. They should archive the story and keep it available both before and after the needed changes take effect.

Lately PolitiFact disappears the entire story and only posts a link to the archived version after republishing a reworked version.

If there's a good excuse for that doughnut hole in transparency we are not aware of it.


We also disapprove of PolitiFact only communicating its decision to unpublish the item on Twitter. That's transparency only for the Twitterverse. Readers deserve better than that.



Update Jan. 10, 2018

We found what is apparently the original version of the Abbott-O-Meter ruling and archived it at the Internet Archive.

Facebook comments show the dire need for the PolitiFact Bias website

Over the past few days, we received a number of comments on our Facebook page that help show the dire need for our work.

We are not identifying the person by name, though we expect the comments are easy to find on our page. It's a public page, and the comments were posted in response to our public posts. In short, it's public.

We discourage any attempt to harass this person or make contact with them against their wishes.

Beyond that, we offer thanks for the comments because we can use them to help educate others. We're using quotation marks but correcting errors without making them obvious. So the quotations are not always verbatim.

We're spotlighting these comments because they are so typical of our critics.


Saturday, January 6, 2018

More "True But False" fact-checking from PolitiFact

PolitiFact has always had a tendency to skew its fact-checking in favor of liberals and Democrats. But with speak-from-the-hip President Trump in the White House, PolitiFact has let its true blue colors show like perhaps never before.

A Jan. 5, 2018 fact check from PolitiFact's John Kruzel rates two true statements from President Trump "False." Why would a fact checker rate two true statements "False"? That's a good question. And it's one that the fact check doesn't really answer. But it's worth fisking the fact check for whatever nods it makes toward justifying its conclusions.

Framing the fact check


President Trump tweeted that he had not authorized any White House access for Michael Wolff, the author of the book "Fire and Fury," and that he had not spoken to Wolff for the book.


Right off the bat, PolitiFact frames Trump's claim as a denial that Wolff had access to the White House. With the right evidence, PolitiFact might have a case interpreting Trump's statement that way. But pending that justification, PolitiFact leads with a red flag hinting that it is more interested in its own interpretation of Trump's words than in the words Trump used.

If Trump had meant to indicate Wolff had no access at all to the White House, he could have tweeted that in under 140 characters. Like so:
Wolff had Zero access to White House. I never spoke to him. Liar! Sad!!!!!!!!!!!!!!!!
See? Under 90 characters, including the multiple exclamation points.
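
It's easy to verify the count; a quick check (using our invented tweet text from above, not anything Trump wrote) does the job:

    hypothetical_tweet = ("Wolff had Zero access to White House. "
                          "I never spoke to him. Liar! Sad!!!!!!!!!!!!!!!!")
    print(len(hypothetical_tweet))  # 85 characters, well under the old 140 limit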

Most people understand that when a writer or speaker burdens a potentially simple statement with more words, the extra words are supposed to mean something. For example, if somebody says "I never spoke to Wolff for book" rather than "I never spoke to Wolff," it strongly hints that the speaker spoke to Wolff, just not for the book.

Can PolitiFact explain away the importance of all those words Trump used?

Leading with the fact checker's opinion

From the first, we have said PolitiFact mixes its opinion in with its supposedly objective reporting. PolitiFact and Kruzel put opinion high in the mix in the introduction to the story (bold emphasis added):
The Trump administration has scrambled to control damaging headlines based on Michael Wolff’s Fire and Fury: Inside the Trump White House, which was rushed to shelves Jan. 5 over threats from President Donald Trump’s attorneys.

For his part, Trump sought to undermine Wolff’s credibility by calling into question the author’s access to the administration’s highest levels.
Is Kruzel an objective reporter or a prosecuting attorney telling the jury that the accused has a motive?

Kruzel dedicates his first two paragraphs to the creation of a narrative based on Trump's desire to attack Wolff's credibility. As we proceed, we should stay alert for cues Kruzel might offer the reader about Wolff's credibility. Will Kruzel allow any indication that Wolff deserves skepticism? Or perhaps present Wolff as credible by default?

Dissecting Trump's tweet or ignoring what it says?

Trump's tweet:
Kruzel comments:
We decided to dissect Trump’s tweet by sifting through what’s known about Wolff’s White House access. We can’t know everything that goes on behind the scenes, but even the public record shows that Trump’s statement is inaccurate.
This had better be good, given that the headline offers a skewed impression of Trump's tweet.

Kruzel defeats a straw man

PolitiFact and Kruzel deal first with the issue of White House access. Whereas Trump said he authorized no access for Wolff, PolitiFact creates a straw contradiction by pointing out some might believe Trump was saying Wolff had no access to the White House at all.

How we wish we were kidding (though this is by no means a first for PolitiFact):
Wolff’s access to the White House

Trump’s tweet could give the impression that Wolff was denied access to the White House entirely. But as Trump’s own press secretary has acknowledged, the author had more than a dozen interactions with administration officials at 1600 Pennsylvania Avenue.
What if, instead of fact-checking people's false impressions, fact checkers explained to people the correct impression? But that's not PolitiFact's way. PolitiFact dedicates its fact check to showing that a misinterpretation of Trump's claim proves he wasn't telling the truth.

Kruzel concludes the first section:
So, while it may be the case that Trump did not personally grant Wolff access, his own press secretary says the author had access to administration officials at the White House.
Our summary so far:
  1. PolitiFact finds "it may be the case" that Trump did not authorize Wolff's access to the White House (as Trump said)
  2. No indication from PolitiFact that Wolff should be regarded as anything other than reliable
  3. Proof that the misinterpreted version of Trump's statement is false (straw man defeated)

Kruzel defeats another straw man

With the first straw man defeated, PolitiFact and Kruzel deal with the burning question of whether Trump spoke to Wolff at all.

Yes, you read that correctly. The fact check focuses on whether Trump spoke to Wolff, not on whether Trump spoke to Wolff "for book."
Did Wolff and Trump talk?

To the casual reader, Trump’s tweet could give the impression that he and Wolff never spoke — but that’s far from the case.
Never fear, casual reader! PolitiFact is here for you as it is for no other type of reader. And if PolitiFact has to create and destroy a straw man or two to keep from helping you improve your reading comprehension, then so be it.

Kruzel follows immediately with his conclusion (explaining after that the details behind the defeat of the straw man):
While it may be the case that Trump never talked to Wolff with the express understanding that their discussion would later be incorporated into a book, the two men certainly spoke, though the length and nature of their conversations is not entirely clear.
And we review and add to our summary:
  1. PolitiFact finds "it may be the case" that Trump did not authorize Wolff's access to the White House (as Trump said)
  2. Still no indication from PolitiFact that Wolff should be regarded as anything other than reliable
  3. PolitiFact proves that the misinterpreted version of Trump's first claim is false (first straw man defeated)
  4. PolitiFact finds "it may be the case" that Trump did not talk to Wolff for the book (as Trump said)
  5. PolitiFact proves that the misinterpreted version of Trump's second claim is false (second straw man defeated)

Whatever one thinks of Trump, that's awful fact-checking

Trump made two claims that were apparently true according to PolitiFact's investigation, but because casual readers might think Trump meant something other than what he plainly said, PolitiFact rated the statements "False."


That approach to fact-checking could make virtually any statement false.

Is Wolff reliable? Who cares? PolitiFact is interested in Trump's supposed unreliability.

This PolitiFact fact check ought to serve as a classic example of what to avoid in fact-checking. Instead, PolitiFact's chief editor Angie Drobnic Holan edited the piece. And a PolitiFact "star chamber" of at least three editors reviewed the story and decided on the rating without seeing anything amiss with what they were doing.

Welcome to the "True but False" genre of fact-checking.

You can't trust these fact checkers.

Thursday, January 4, 2018

No Underlying Point For You!

PolitiFact grants Trump no underlying point on his claim about the GOP lock on a senate seat



The NBC sitcom "Seinfeld" featured an episode focused in part on the "Soup Nazi." The "Soup Nazi" was the proprietor of a neighborhood soup shop who would refuse service in response to minor breaches of etiquette, often with a shouted "No soup for you!"

PolitiFact's occasional refusal to allow for the validity of an underlying point reminds us of the "Soup Nazi" and gives rise to our new series of posts recognizing PolitiFact's occasional failure to credit underlying points.

PolitiFact's statement of principles assures readers that it takes a speaker's underlying point into account (bold emphasis added):
We examine the claim in the full context, the comments made before and after it, the question that prompted it, and the point the person was trying to make.
We see credit for the speaker's underlying point on full display in this Feb. 14, 2017 rating of Sen. Bernie Sanders, who sought the Democratic presidential nomination in 2016 (bold emphasis added):
Sanders said, "Before the Affordable Care Act, (West Virginia’s) uninsured rate for people 19 to 64 was 29 percent. Today, it is 9 percent."

Sanders pointed to one federal measurement, though it has methodological problems when drilling down to the statistics for smaller states. A more reliable data set for West Virginia’s case showed a decline from 21 percent to 9 percent. The decline was not as dramatic as he’d indicated, but it was still a significant one.

We rate the statement Mostly True.
Sanders' point was the decline in the uninsured rate owing to the Affordable Care Act, and we see two ways to measure the degree of his error. First, Sanders used the wrong baseline for his calculation, 29 percent instead of 21 percent. That represents a 38 percent exaggeration. Second, we can compare the size of the decline Sanders claimed (29 percent down to 9 percent, a 20-point drop) with the actual decline (21 percent down to 9 percent, a 12-point drop). That calculation results in a percentage error of 67 percent.
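
For readers who want to check our arithmetic, here is the whole calculation; the numbers come straight from the fact check:

    claimed_baseline, actual_baseline, endpoint = 29, 21, 9

    # Exaggeration of the baseline itself: (29 - 21) / 21
    baseline_error = (claimed_baseline - actual_baseline) / actual_baseline
    print(f"{baseline_error:.0%}")  # 38%

    # Exaggeration of the decline: 20 claimed points vs. 12 actual points
    claimed_drop = claimed_baseline - endpoint  # 20
    actual_drop = actual_baseline - endpoint    # 12
    drop_error = (claimed_drop - actual_drop) / actual_drop
    print(f"{drop_error:.0%}")  # 67%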

PolitiFact, despite an error of at least 38 percent, gave Sanders a "Mostly True" rating because Sanders was right that a decline took place.

For comparison, Donald Trump tweeted that former associate Steve Bannon helped lose a senate seat Republicans had held for over 30 years. The GOP had held the seat for a mere 21 years. Using 31 years as the smallest number of years greater than 30, Trump exaggerated by about 48 percent. And PolitiFact rated his claim "False":
Trump said the Senate seat won by Jones had been "held for more than thirty years by Republicans." It hasn’t been that long. It’s been 21 years since Democrat Howell Heflin retired, paving the way for his successor, Sessions, and Sessions’ elected successor, Jones. We rate the statement False.
Can the 10 percentage point difference in exaggeration by itself move the needle from "Mostly True" to "False"? (We sketch the arithmetic below.)
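
Here's that sketch, with 31 as our stand-in for "more than thirty years":

    claimed_years, actual_years = 31, 21
    trump_error = (claimed_years - actual_years) / actual_years
    print(f"{trump_error:.0%}")  # 48%

    sanders_error = (29 - 21) / 21
    print(f"{trump_error - sanders_error:.0%}")  # 10%, about 10 percentage points apart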

Was Trump making the point that the GOP had controlled that senate seat for a long time? That seems undeniable. Is 21 years a long time to control a senate seat? That likewise appears undeniable. Yet Trump's underlying point, in contrast to Sanders', was apparently a complete non-factor when PolitiFact chose its rating.

We say that inconsistency is a bad look for a non-partisan fact checker.

On the other hand, we might predict this type of inconsistency from a partisan fact checker.

Wednesday, January 3, 2018

'(PolitiFact's) rulings are based on when a statement was made and on the information available at that time'

PolitiFact Texas issued a "False" rating to Gov. Greg Abbott on Nov. 16, 2017, finding it "False" that Texas had experienced its lowest unemployment rate in 40 years.

PolitiFact Texas was also rating Abbott's claim that Texas led the nation the previous month (September?) in job creation. But we will focus on the first part of the claim, because that rating centers on PolitiFact's principle of basing its rulings on the timing of a statement:
Timing – Our rulings are based on when a statement was made and on the information available at that time.
Our interest in this item was piqued when we found it linked at PolitiFact's "Corrections and updates" page. We went looking for the correction and found this:
UPDATE, Nov. 17, 2017: Two days after Abbott tweeted his claim about the Texas jobless rate, the federal government reported that the state had a 41-year record low 3.9 percent jobless rate in October 2017.
The release of government statistics confirmed the accuracy of Abbott's claim if he was talking about October.

PolitiFact Texas' update reminded us of a PolitiFact rating from March 18, 2011. Actor Anne Hathaway said the majority of Americans support gay marriage. PolitiFact rated her claim "Mostly True" based on polling released after Hathaway made her claim. Note how PolitiFact foreshadowed its unprincipled decision (bold emphasis added):
(P)ublic opinion on gay marriage is shifting quickly. How quickly? Let's just say we're glad we waited a day to publish our item.
I covered PolitiFact's failure to follow its principles back when the Hathaway incident happened. But in the Abbott case, PolitiFact was consistent with its principles.

Or was it?

What information was available at the time?

When Hathaway made her claim, no poll unequivocally supported her claim, and we had no reason to think the actor was in any position to have insider pre-publication information about new polling. But upon reading PolitiFact Texas' fact check of Abbott, we were left wondering whether Abbott might know the government numbers before they were released to the public.

PolitiFact Texas did not address that issue, noting simply that the unemployment rates for October were not yet released. We infer that PolitiFact Texas presumed the Bureau of Labor Statistics (BLS) figures were not available to government leaders in Texas. As for us, we had no idea whether the BLS made statistics available to state governments but thought it was worth exploring.

Our search quickly led us to a Nov. 17, 2017 article at the Austin American-Statesman. That's the same Austin American-Statesman that has long partnered with PolitiFact to publish content for PolitiFact Texas.

The article, by Dan Zehr, answered our question:
It’s common and appropriate for state workforce commissions to share “pre-release” data with governors’ offices and other officials, said Cheryl Abbot, regional economist at the Bureau of Labor Statistics Southwest regional office. However, she said, the bureau considers the data confidential until their official release.
Zehr's article focused on a dilemma: Was Abbott talking about the October numbers (making him guilty of breaching confidentiality), or was he just wrong based on the number from September 2017? Zehr reported the governor's office denied that Abbott was privy to the October numbers before their official release.

We think Zehr did work that PolitiFact Texas should have either duplicated or referenced. PolitiFact Texas apparently failed to rule out the possibility that Abbott referred to the official October numbers based on the routine early sharing of such information with state government officials.

For the sake of argument, let's assume Abbott's office told Zehr the truth

PolitiFact Texas' fact check based its rating on the assumption Abbott referred to unemployment numbers for September 2017. That agrees with Zehr's reporting on what the governor's office said it was talking about.

If Abbott was talking about the September 2017 numbers, was his statement false, as PolitiFact Texas declared?

Let's review what Abbott said.

PolitiFact (bold emphasis added):
It’s commonplace for a governor to tout a state’s economy. Still, Greg Abbott of Texas made us wonder when he tweeted in mid-November 2017: "The Texas unemployment rate is now the lowest it’s been in 40 years & Texas led the nation last month in new job creation."
And let's review what PolitiFact found from the Bureau of Labor Statistics:
(W)e fetched Bureau of Labor Statistics figures showing that the state’s impressive 4 percent jobless rate for September 2017 tied the previous record low since 1976. According to the bureau, the state similarly had a 4 percent unemployment rate in November and December 2000, 17 years ago. The state jobless rate in fall 1977, 40 years ago, hovered at 5.2 percent.
According to PolitiFact's research, what is the lowest unemployment rate in Texas over the past 40 years? The answer is 4 percent. That rate occurred three times over the 40-year span, including September 2017. Yet by PolitiFact Texas' reasoning (and Zehr's reasoning, too), it is false for Abbott to call the September 2017 rate the lowest in the past 40 years.
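
A minimal sketch of the dispute, using only the rates PolitiFact itself cited (other years presumably ran higher):

    # Texas jobless rates cited in the fact check (seasonally adjusted).
    rates = {"fall 1977": 5.2, "Nov 2000": 4.0, "Dec 2000": 4.0, "Sept 2017": 4.0}

    low = min(rates.values())
    print(low)                         # 4.0
    print(rates["Sept 2017"] == low)   # True: September 2017 ties the 40-year low

A tie is not a miss, and the fact check's own numbers show it.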

We say PolitiFact Texas (and Zehr) were wrong to suggest Abbott was simply wrong about the unemployment rate in Texas.

Ambiguous isn't the same as wrong

Certainly Gov. Abbott might have expressed himself more clearly. Abbott had the option of saying "The Texas unemployment rate is lower now than it has been in 40 years" if he believed that was the case. Such phrasing would tell his audience that no matter what the unemployment rate over the past 40 years, the current rate is lower.

Alternatively, Abbott might have said "The Texas unemployment rate is as low now as it has been in 40 years." That phrasing would clue his audience that the present low unemployment rate had been achieved at least once before during the past 40 years.

Abbott's phrasing fell somewhere between the two alternatives we created. What he said hinted that the September 2017 rate was lower than it had been in 40 years but did not say so outright. His words were compatible with the September 2017 rate merely equaling the lowest rate of the past 40 years, but fell short of telling the entire story.

Kind of like PolitiFact Texas fell short of telling the entire story.

Though we took note of it on Twitter, we will again take the opportunity to recognize PolitiFact Texas and W. Gardner Selby as PolitiFact's best exemplars of transparency with respect to expert interviews. PolitiFact Texas posted the relevant portions (so far as we can tell!) of its interview of Cheryl Abbot. PolitiFact Texas has done similarly in the past, and we have encouraged PolitiFact (and other fact checkers) to make it standard practice.

Selby's interview shows him asking Cheryl Abbot to confirm his reading of the unemployment statistics. Selby's question was mildly leading, keeping Abbot to the topic of whether the low September 2017 unemployment rate had been equaled twice in the past 40 years. A different approach might have clued Selby in to the same valuable information Dan Zehr reported: Gov. Abbott may have had access to the confidential October figures, and his statement may prove correct for that month once the BLS releases the numbers.

It's notable that in the interview Abbot said that the numbers from September 2017 were the "lowest" in 40 years (bold emphasis added):
(T)he seasonally adjusted unemployment rate for Texas in September 2017 matched those of November and December 2000, all being the lowest rates recorded in the State since 1976.
Selby did not use the above quotation from Abbot. Perhaps he did not want his audience confused by the fact that Abbot used the same term ("lowest") that Abbott used.

In our view, Gov. Abbott was at least partially correct if he was talking about September 2017 and correct if he was talking about October 2017.

PolitiFact Texas should have covered both options more thoroughly.

Monday, January 1, 2018

PolitiFact's worst 17 fact checks of 2017

PolitiFact had a terrific year churning out horrible fact-checking content. In 2016 we published a list of PolitiFact's 11 worst fact checks. This year we're expanding that number to 17. Some of the selections show off PolitiFact's tendency to check statements of opinion. Some of them focus on other methodological blunders. And some make our list based on their potential impact on public debate.

Let's get to it.

PolitiFact's worst 17 fact checks of 2017


Note: With this post we are experimenting for the first time with multiple pages for a single post. We anticipate having to update the post a number of times to ensure proper formatting. Once that is done, we will remove this message and consider the post completed.

Or, we'll decide the errors require too much work to correct and use a format like our summary post from last year.