
Tuesday, January 16, 2018

PolitiFact goes partisan on the "deciding vote"

When does a politician cast the "deciding vote"?

PolitiFact apparently delivered the definitive statement on the issue on Oct. 6, 2010 with an article specifically titled "What makes a vote 'the deciding vote'?"

Every example of a "deciding vote" in that article received a rating of "Barely True" or worse (PolitiFact now calls "Barely True" by the name "Mostly False"). And each of the claims came from Republicans.

What happens when a similar claim comes from a Democrat? Now we know:


Okay, okay, okay. We have to consider the traditional defense: This case was different!

But before we start, we remind our readers that cases may prove trivially different from one another. It's not okay, for example, if the difference is that this time the claim came from a woman, or this time the case is from Florida rather than Georgia. Using trivial differences to justify the ruling represents the fallacy of special pleading.

No. We need a principled difference to justify the ruling. Not a trivial difference.

We'll need to look at the way PolitiFact justified its rulings.

First, the "Half True" for Democrat Gwen Graham:
Graham said DeSantis casted the "deciding vote against" the state's right to protect Florida waters from drilling.

There’s no question that DeSantis’ vote on an amendment to the Offshore Energy and Jobs Act was crucial, but saying DeSantis was the deciding vote goes too far. Technically, any of the 209 other people who voted against the bill could be considered the "deciding vote."

Furthermore, the significance of Grayson’s amendment is a subject of debate. Democrats saw it as securing Florida’s right to protect Florida waters, whereas Republicans say the amendment wouldn’t have changed the powers of the state.

With everything considered, we rate this claim Half True.
Second, the "Mostly False" for the National Republican Senatorial Committee (bold emphasis added):
The NRSC ad would have been quite justified in describing Bennet's vote for either bill as "crucial" or "necessary" to passage of either bill, or even as "a deciding vote." But we can't find any rationale for singling Bennet out as "the deciding vote" in either case. He made his support for the stimulus bill known early on and was not a holdout on either bill. To ignore that and the fact that other senators played a key role in completing the needed vote total for the health care bill, leaves out critical facts that would give a different impression from message conveyed by the ad. As a result, we rate the statement Barely True.
Third, the "False" for Republican Scott Bruun:
(W)e’ll be ridiculously lenient here and say that because the difference between the two sides was just one vote, any of the members voting to adjourn could be said to have cast the deciding vote.
The Bruun case doesn't help us much. PolitiFact said Bruun's charge about the "deciding" vote was true but only because its judgment was "ridiculously lenient." And the ridiculous lenience failed to get Bruun's rating higher than "False."  So much for PolitiFact's principle of rating two parts of a claim separately and averaging the results.

Fourth, we look at the "Mostly False" rating for Republican Ron Johnson:
In a campaign mailer and other venues, Ron Johnson says Feingold supported a measure that cut more than $500 billion from Medicare. That makes it sound like money out of the Medicare budget today, when Medicare spending will actually increase over the next 10 years. What Johnson labels a cut is an attempt to slow the projected increase in spending by $500 billion. Under the plan, guaranteed benefits are not cut. In fact, some benefits are increased. Johnson can say Feingold was the deciding vote -- but so could 59 other people running against incumbents now or in the future.

We rate Johnson’s claim Barely True.
We know from earlier research that PolitiFact usually rated claims about the ACA cutting Medicare as "Mostly False." So this case doesn't tell us much, either. The final rating for the combined claims could end up "Mostly False" if PolitiFact considered the "deciding vote" portion "False" or "Half True." It would all depend on subjective rounding, we suppose.

Note that PolitiFact Florida cited "What makes a vote 'the deciding vote'?" for its rating of Gwen Graham. How does a non-partisan fact checker square Graham's "Half True" rating with the ratings given to Republicans? Why does the fact check not clearly describe the principle that made the difference for Graham's more favorable rating?

As far as we can tell, the key difference comes from party affiliation, once again suggesting that PolitiFact leans left.


After the page break we looked for other cases of the "deciding vote."

Friday, December 22, 2017

Beware, lest Trump & PolitiFact turn your liberal talking point into a falsehood!

PolitiFact gave President Donald Trump a "False" rating for claiming the GOP tax bill had effectively repealed the Affordable Care Act.


We figured there was a good chance that defenders of the ACA had made the same claim.

Sure enough, we found an example from the prestigious left-leaning magazine The Atlantic. The Google preview tells the story, as does the story's URL, though the story's title tames things a little: "The GOP's High-Risk Move to Whack Obamacare in Its Tax Bill."

The key "repeal" line came from an expert The Atlantic cited in its story (bold emphasis added):
“Make no mistake, repealing the individual mandate is tantamount to repealing the Affordable Care Act,” said Brad Woodhouse, campaign director for Protect Our Care, an advocacy group supportive of the ACA.
Would Woodhouse receive a "False" rating from PolitiFact if it rated his statement?

Would The Atlantic receive a "False" rating from PolitiFact?

Would PolitiFact even notice the claim if it wasn't coming from a Republican?



Afters (other liars who escaped PolitiFact's notice)

"GOP tax bill is just another way to repeal health care." (Andy Slavitt, USA Today)

"Republican tax bill to include Obamacare repeal" (Christian Science Monitor)

"Republicans undermine their own tax reform bill to repeal Obamacare" (Salon)

"Another Obamacare repeal effort doesn't actually have to be in the tax cuts bill, says the guy heading up popular vote loser Donald Trump's Office of Management and Budget." (Daily Kos)


Tuesday, December 19, 2017

PolitiFact's "Pants on Fire" bias--2017 update (Updated)

What tale does the "Truth-O-Meter" tell?

For years, we at PolitiFact Bias have argued that PolitiFact's "Truth-O-Meter" ratings serve poorly to tell us about the people and organizations PolitiFact rates on the meter. But the ratings may tell us quite a bit about the people who run PolitiFact.

To put this notion into practice, we devised a simple examination of the line of demarcation between two ratings, "False" and "Pants on Fire." PolitiFact offers no objective means of distinguishing between a "False" rating and a "Pants on Fire" rating. In fact, PolitiFact's founding editor, Bill Adair (now on staff at Duke University) described the decision about the ratings as "entirely subjective."

Angie Drobnic Holan, who took over for Adair in 2013 after Adair took the position at Duke, said "the line between 'False' and 'Pants on Fire' is just, you know, sometimes we decide one way and sometimes decide the other."

After searching in vain for dependable objective markers distinguishing the "Pants on Fire" rating from the "False" rating, we took PolitiFact at its word and assumed the difference between the two is subjective. We researched the way PolitiFact applied the two ratings as an expression of PolitiFact's opinion, reasoning that we could use the opinions to potentially detect PolitiFact's bias (details of how we sorted the data here).

Our earliest research showed that, after PolitiFact's first year, Republicans were much more likely than Democrats to have a false claim rated "Pants on Fire" instead of merely "False." Adair has said that the "Pants on Fire" rating was treated as a lighthearted joke at first--see this rating of a claim by Democrat Joe Biden as an example--and that probably accounts for the unusual results from 2007.

In 2007, the lighthearted joke year, Democrats were 150 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2008, Republicans were 31 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2009, Republicans were 214 percent more likely to receive a "Pants on Fire" rating for a false statement (not a typo).

In 2010, Republicans were 175 percent more likely to receive a "Pants on Fire" rating for a false statement (again, not a typo).

We published our first version of this research in August 2011, based on PolitiFact's first four years of operation.

In 2011, Republicans were 57 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2012, Republicans were 125 percent more likely to receive a "Pants on Fire" rating for a false statement.

Early in 2013, PolitiFact announced Adair would leave the project that summer to take on his new job at Duke. Deputy editor Angie Drobnic Holan was named as Adair's replacement on Oct. 2, 2013.

In 2013, the transition year, Republicans were 24 percent more likely to receive a "Pants on Fire" rating for a false statement.

Had Republicans started to curb their appetite for telling outrageous falsehoods?

In 2014, Republicans were 95 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2015, Republicans were 2 percent (not a typo) more likely to receive a "Pants on Fire" rating for a false statement.

In 2016, Republicans were 17 percent more likely to receive a "Pants on Fire" rating for a false statement.

In 2017, Democrats were 13 percent more likely to receive a "Pants on Fire" rating for a false statement.

Had Republicans gotten better than Democrats at reining in their impulse to utter their false statements in a ridiculous form?

We suggest that our data through 2017 help confirm our hypothesis that the ratings tell us more about PolitiFact than they do about the politicians and organizations receiving the ratings.






Do the data give us trends in political lying, or separate journalistic trends for Adair and Holan?

We never made any attempt to keep our research secret from PolitiFact. From the first, we recognized that PolitiFact might encounter our work and change its practices to decrease or eliminate the appearance of bias from its application of the "Pants on Fire" rating. We did not worry about it, knowing that if PolitiFact corrected the problem it would help confirm the problem existed  regardless of what fixed it.

Has PolitiFact moderated or fixed the problem? Let's look at more numbers.

The "Pants on Fire" bias

From 2007 through 2012, PolitiFact under Adair graded 29.2 percent of its false claims from the GOP "Pants on Fire." For Democrats the percentage was 16.1 percent.

From 2014 through 2017, PolitiFact under Holan graded 26 percent of its false claims from the GOP "Pants on Fire" and 21.9 percent for Democrats.

It follows that under Adair PolitiFact was 81.4 percent more likely to give a "Pants on Fire" rating to a false GOP statement than one for a Democrat. That includes the anomalous 2007 data showing a strong "Pants on Fire" bias against Democrats.

Under Holan, PolitiFact was just 18.7 percent more likely to give a "Pants on Fire" rating to a false GOP statement than one from a Democrat.
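
For readers who want to check the arithmetic, here is a minimal sketch (our own illustration, not anything PolitiFact publishes) of how those "more likely" figures follow from the two percentages above:

```python
# Minimal sketch: derive the "X percent more likely" figures from the
# published "Pants on Fire" shares. The inputs come from the paragraphs above.

def more_likely(gop_pof_share, dem_pof_share):
    """How much more likely a false GOP claim was to draw 'Pants on Fire'
    than a false Democratic claim, expressed in percent."""
    return (gop_pof_share / dem_pof_share - 1) * 100

# Adair era (2007-2012): 29.2 percent of false GOP claims vs. 16.1 percent for Democrats
print(round(more_likely(29.2, 16.1), 1))  # -> 81.4

# Holan era (2014-2017): 26 percent vs. 21.9 percent
print(round(more_likely(26.0, 21.9), 1))  # -> 18.7
```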

Story selection bias

While tracking the percentage of false ratings given a "Pants on Fire" rating, we naturally tracked the sheer number of times PolitiFact issued false ratings (either "False" or "Pants on Fire"). That figure speaks to PolitiFact's story selection.

From 2007 through 2012, PolitiFact under Adair found an average of 55.3 false claims per year from Republicans and 25.8 false claims per year from Democrats. That includes 2007, when PolitiFact was only active for part of the year.

From 2014 through 2017, PolitiFact under Holan found an average of 81 false claims per year from Republicans and 16 false claims per year from Democrats.

Under Holan, the annual finding of false claims by Republicans increased by nearly 58 percent. At the same time, PolitiFact's annual finding of false claims by Democrats fell by 38 percent.

Update Jan. 1, 2018: GOP false claims reached 90 by year's end.


One might excuse the increase for the GOP by pointing to staff increases. But the same reasoning serves poorly to explain the decrease for the Democrats. Likewise, increased lying by Republicans does not automatically mean Democrats decreased their lying.

Did the Democrats as a party tend strongly toward greater truth-telling? With the notable blemish of a greater tendency to go "Pants on Fire" when relating a falsehood?

Conclusion

We suggest that changes in PolitiFact's practices more easily make sense of these data than do substantial changes in the truth-telling patterns of the two major U.S. political parties. When Adair stepped down as PolitiFact's editor, a different person started running the "star chamber" meetings that decide the "Truth-O-Meter" ratings and a different set of editors voted on the outcomes.

Changing the group of people who decide subjective ratings will obviously have a substantial potential effect on the ratings.

We suggest that these results support the hypothesis that subjectivity plays a large role in PolitiFact's rating process. That conclusion should not surprise anyone who has paid attention to the way PolitiFact describes its rating process.

Has Holan cured PolitiFact of liberal bias?

We recognized from the first that the "Pants on Fire" bias served as only one measure of PolitiFact's ideological bias, and one that PolitiFact might address. Under Holan, the "Pants on Fire" bias serves poorly to demonstrate a clear ideological bias at PolitiFact.

On the other hand, PolitiFact continues to churn out anecdotal examples of biased work, and the difficulty Holan's PolitiFact has in finding false statements from Democrats compared to Adair's PolitiFact suggests our data simply show something of a trade-off.

When we started evaluating PolitiFact's state operations, such as PolitiFact Georgia, we noticed that lopsided numbers of false statements were often accompanied by a higher percentage of "Pants on Fire" statements from the party receiving many fewer false ratings. We hypothesized a compensatory bias might produce that effect when the fact checkers, consciously or unconsciously, encourage the appearance of fairness.

PolitiFact, after all, hardly needs to grade false Republican statements more harshly to support the narrative that Republicans lie more when it is finding, on average, five times more false statements from Republicans than Democrats.


We doubt not that defenders of PolitiFact can dream up some manner of excusing PolitiFact based on the "fact" that Republicans lie more. But we deeply doubt that any such approach can find a basis in empirical evidence. Subjective rating systems do not count as empirical evidence of the rate of lying.


In addition to empirically justifying the increase in GOP falsehoods, defenders will need to explain the decrease in Democratic Party falsehoods implied in PolitiFact's ratings. Why, with a bigger staff, is PolitiFact having a more difficult time finding false statements from Democrats than it did when Adair was steering the ship?

If Truth-O-Meter data were ostensibly objective, it would make sense to question the reliability of the data given the differing trends we see for PolitiFact under Adair and Holan.

Given PolitiFact's admissions that its story selection and ratings are substantially subjective, it makes sense for the objective researcher to first look to the most obvious explanation: PolitiFact bias. 

 

Notes on the research method

Our research on the "Pants on Fire" bias looks at partisan elected officials or officeholders as well as candidates and campaign officials (including family members participating in the campaign). We exclude PolitiFact ratings where a Republican attacked a Republican or a Democrat attacked a Democrat, reasoning that such cases may muddy the water in terms of ideological preference. The party-on-party exclusions occur rarely, however, and do not likely affect the overall picture much at all.

In the research, we use the term "false claims" to refer to claims PolitiFact rated either "False" or "Pants on Fire." We do not assume PolitiFact correctly judged the claims false.

Find the data spreadsheet here.
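
For anyone who wants to replicate the sort, here is a minimal sketch of the filtering logic described above. The column names ("party", "rating", "target_party") are hypothetical placeholders of our own; the spreadsheet linked above may be organized differently.

```python
# Minimal sketch of the sorting rules described in the notes above.
# Column names are hypothetical; adapt them to the actual spreadsheet.

FALSE_RATINGS = {"False", "Pants on Fire"}

def keep(row):
    """Keep partisan claims rated 'False' or 'Pants on Fire', excluding
    party-on-party attacks (Republican vs. Republican, Democrat vs. Democrat)."""
    if row["rating"] not in FALSE_RATINGS:
        return False
    if row.get("target_party") == row["party"]:
        return False  # exclude intra-party attacks
    return True

def pants_on_fire_share(rows, party):
    """Percentage of a party's false claims that drew 'Pants on Fire'."""
    false_claims = [r for r in rows if keep(r) and r["party"] == party]
    pof = [r for r in false_claims if r["rating"] == "Pants on Fire"]
    return 100 * len(pof) / len(false_claims) if false_claims else 0.0
```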


Afters

We have completed alternative versions of our charts with the data for Donald Trump removed, and we'll publish those separately from this article at a later time. The number of false claims from Republicans went down from 2015-2017 but with PolitiFact still issuing far more false ratings to Republicans. The "Pants on Fire" percentages were almost identical except for 2016. With Trump removed from the data the Republicans would have set an all-time record for either party for lowest percentage of "Pants on Fire" claims.

These results remain consistent with our hypothesis that PolitiFact's "False" and "Pants on Fire" ratings reflect a high degree of subjectivity (with the former perhaps largely influenced by story selection bias).



Update Dec. 19, 2017: Added intended hyperlink to explanations of the research and the notable Biden "Pants on Fire."
Update Dec. 21, 2017: Corrected date of previous update (incorrectly said Dec. 12), and updated some numbers to reflect new PolitiFact ratings of Donald Trump through Dec. 21, 2017: "13 percent"=>"10 percent", "87.3 claims per year"=>"80.5 claims per year", "23.8"=>"26.1" and "8.7"=>"19.2." The original 87.3 and 23.8 figures were wrong for reasons apart from the new data. We will update the charts once the calendar year finishes out. Likewise the 8.7 figure derived in part from the incorrect 23.8.

Update Jan. 1, 2018: Changed "10 percent" back to "13 percent" to reflect updated data for the whole year. "80.5 claims per year" updated to "81 claims per year." We also changed "26.1" to "26" and "8.7" to "18.7." The latter change shows that we neglected to make the "8.7" to "19.2" change we announced in the description of the Dec. 21, 2017 update, for which we apologize.

Thursday, December 7, 2017

Another partisan rating from bipartisan PolitiFact

"We call out both sides."

That is the assurance that PolitiFact gives its readers to communicate to them that it rates statements impartially.

We've pointed out before, and we will doubtless repeat it in the future, that rating both sides serves as no guarantee of impartiality if the grades skew left whether rating a Republican or a Democrat.

On December 1, 2017, PolitiFact New York looked at Albany Mayor Kathy M. Sheehan's claim that simply living in the United States without documentation is not a crime. PolitiFact rated the statement "Mostly True."


PolitiFact explained that while living illegally in the United States carries civil penalties, it does not count as a criminal act. So, "Mostly True."

Something about this case reminded us of one from earlier in 2017.

On May 31, 2017, PolitiFact's PunditFact looked at Fox News host Gregg Jarrett's claim that collusion is not a crime. PolitiFact rated the statement "False."


Upon examination, these cases prove very similar in every respect but the ratings.

Sheehan defended Albany's sanctuary designation by suggesting that law enforcement need not look at immigration status because illegal presence in the United States is not a crime.

And though PolitiFact apparently didn't notice, Jarrett made the point that Special Counsel Mueller was put in charge of investigating non-criminal activity (collusion). Special counsels are typically appointed to investigate crimes, not to find out whether a crime was committed.

On the one hand, Albany police might ask a driver for proof of immigration status. The lack of documentation might lead to the discovery of criminal acts such as entering the country illegally or falsifying government documents.

On the other hand, the Mueller investigation might investigate the relationship (collusion) between the Trump campaign and Russian operatives and find a conspiracy to commit a crime. Conspiring to commit a crime counts as a criminal act.

Sheehan and Jarrett were making essentially the same point, though collusion by itself doesn't even carry a civil penalty like undocumented immigrant status does.

So there's PolitiFact calling out both sides. Sheehan and Jarrett make almost the same point. Sheehan gets a "Mostly True" rating. Jarrett gets a "False."

That's the kind of non-partisanship you get when liberal bloggers do fact-checking.



Afters

Just to hammer home the point that Jarrett was right, we will review the damning testimony of the  three impartial experts who helped PunditFact reach the conclusion that Jarrett was wrong.
Nathaniel Persily at Stanford University Law School said one relevant statute is the Bipartisan Campaign Reform Act of 2002.

"A foreign national spending money to influence a federal election can be a crime," Persily said. "And if a U.S. citizen coordinates, conspires or assists in that spending, then it could be a crime."
The conspiracy to commit the crime, not the mere collusion, counts as the crime.

Next:
Another election law specialist, John Coates at Harvard University Law School, said if Russians aimed to shape the outcome of the presidential election, that would meet the definition of an expenditure.

"The related funds could also be viewed as an illegal contribution to any candidate who coordinates (colludes) with the foreign speaker," Coates said.
Conspiring to collect illegal contributions, not mere collusion, would count as the crime. Coates also offered the example of conspiring to commit fraud.
Josh Douglas at the University of Kentucky Law School offered two other possible relevant statutes.

"Collusion in a federal election with a foreign entity could potentially fall under other crimes, such as against public corruption," Douglas said. "There's also a general anti-coercion federal election law."
The corruption, not the mere collusion, would count as the crime.

How PolitiFact missed Jarrett's point after linking the article he wrote explaining what he meant is far beyond us.

Monday, November 6, 2017

PolitiFact gives the 8 in 10 lie a "Half True."

We can trust PolitiFact to lean left.

Sometimes we bait PolitiFact into giving us examples of its left-leaning tendencies. On November 1, 2017, we noticed a false tweet from President Barack Obama. So we drew PolitiFact's attention to it via the #PolitiFactThis hashtag.



We didn't need to have PolitiFact look into it to know that what Obama said was false. He presented a circular argument, in effect, using the statistics for people who had chosen an ACA exchange plan to mislead the wider public about their chances of receiving subsidized and inexpensive health insurance.


PolitiFact identified the deceit in its fact check, but used biased supposition to soften it (bold emphasis added):
"It only takes a few minutes and the vast majority of people qualify for financial assistance," Obama says. "Eight in 10 people this year can find plans for $75 a month or less."

Can 8 in 10 people get health coverage for $75 a month or less? It depends on who those 10 people are.

The statistic only refers to people currently enrolled in HealthCare.gov.
The video ad appeals to people who are uninsured or who might save money by shopping for health insurance on the government exchange. PolitiFact's wording fudges the truth. It might have accurately said "The statistic is correct for people currently enrolled in HealthCare.gov but not for the population targeted by the ad."

In the ad, the statistic refers to the ad's target population, not merely to those currently enrolled in HealthCare.gov.

And PolitiFact makes thin and misleading excuses for Obama's deception:
(I)n the absence of statistics on HealthCare.gov visitors, the 8-in-10 figure is the only data point available to those wondering about their eligibility for low-cost plans within the marketplace. What’s more, the website also helps enroll people who might not have otherwise known they were eligible for other government programs.
The nonpartisan fact-checker implies that the lack of data helps excuse using data in a misleading way. We reject that type of excuse-making. If Obama does not provide his audience the context allowing it to understand the data point without being misled, then he deserves full blame for the resulting deception.

PolitiFact might as well be saying "Yes, he misled people, but for a noble purpose!"

PolitiFact, in fact, provided other data points in its preceding paragraph that helped contextualize Obama's misleading data point.

We think PolitiFact's excuse-making influences the reasoning it uses when deciding its subjective "Truth-O-Meter" ratings.
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.
FALSE – The statement is not accurate.
In objective terms, what keeps Obama's statement from deserving a "Mostly False" or "False" rating?
His statement was literally false when taken in context, and his underlying message was likewise false.

About 10 to 12 million are enrolled in HealthCare.Gov ("Obamacare") plans. About 80 percent of those receive the subsidies Obama lauds. About 6 million persons buying insurance outside the exchange fail to qualify for subsidies, according to PolitiFact. Millions among the uninsured likewise fail to qualify for subsidies.
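
For illustration, here is a rough back-of-the-envelope sketch using only the figures in the preceding paragraph (taking 11 million as a midpoint for exchange enrollment), before even counting the uninsured the ad also targets:

```python
# Rough sketch using only the figures cited above; the 11 million figure is
# our midpoint of the "10 to 12 million" enrollment estimate.

exchange_enrollees = 11_000_000          # HealthCare.gov enrollment, midpoint
subsidized = 0.80 * exchange_enrollees   # about 80 percent receive subsidies
off_exchange_buyers = 6_000_000          # buy outside the exchange, no subsidy

# Share of individual-market buyers (on- and off-exchange) receiving subsidies,
# before adding in the uninsured the ad also targets.
share = subsidized / (exchange_enrollees + off_exchange_buyers)
print(f"{share:.0%}")  # roughly 52 percent -- well short of "8 in 10"
```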

Surely a fact-checker can develop a data point out of numbers like those.

But this is what happens when non-partisan fact checkers lean left.


Correction Nov. 6, 2017: Removed "About 6 million uninsured do not qualify for Medicaid or subsidies" as it was superseded by reporting later in the post.

Friday, October 20, 2017

PolitiFact and the principle of inconsistency

In October, six days apart, PolitiFact did fact checks on two parallel claims, each asserting the existence of a particular law. One, by U.S. Senate candidate Roy Moore, was found "False." The other, by a Saturday Night Live cast member, was found "Mostly True."



Moore asserted that an act of Congress made it "against the law" to fail to stand for the playing of the national anthem. PolitiFact confirmed the existence of the law Moore referenced, but noted that it merely offered guidance on proper etiquette. It did not provide any punishment for improper etiquette.

SNL's Colin Jost said a Texas law made it illegal to own more than six dildos. PolitiFact confirmed a Texas law made owning more than six "obscene devices" illegal. PolitiFact found that a federal court had ruled that law unconstitutional in 2008.

Both laws exist. The one Moore cited carries no teeth because it describes proper etiquette, not a legal requirement backed by government police power. The one Jost cited lacks teeth because the Court voided it.

How did PolitiFact and PolitiFact Texas justify their respective rulings?

PolitiFact (bold emphasis added):
Moore said NFL players taking a knee during the national anthem is "against the law."

Moore's basis is that a law on the books describes patriotic etiquette during the national anthem. But his statement gives the false impression the law is binding, when in fact it’s merely guidance that carries no penalty. Additionally, legal experts told us the First Amendment protects the right to kneel during the national anthem.

We rate this False.
PolitiFact Texas (bold emphasis added):
Jost said: "There is a real law in Texas that says it’s illegal to own more than six dildos."

Such a cap on "obscene devices" has been state law since the 1970s though it’s worth clarifying that the law mostly hasn’t been enforced since federal appeals judges found it unconstitutional in 2008.

We rate the claim Mostly True.
From where we're sitting, the thing PolitiFact Texas found "worth clarifying" in its "Mostly True" rating of Jost closely resembles in principle one of the reasons PolitiFact gave for rating Moore's statement "False" (neither law is binding, but for different reasons). As for the other rationale backing the "False" rating, from where we're sitting Jost equaled Moore in giving the impression that the Texas law is binding today. But PolitiFact Texas did not penalize Jost for offering a misleading impression.

We call these rulings inconsistent.

Inconsistency is a bad look for fact checkers.


Update Oct. 23, 2017: We appreciate Tim Graham highlighting this post at Newsbusters.

Wednesday, October 18, 2017

Surprise! Another way PolitiFact rates claims inconsistently

When we saw PolitiFact give a "Mostly False" rating to the claim state spending in Oklahoma had reached an all-time high, it piqued our curiosity.

PolitiFact issued the "Mostly False" rating because the Oklahoma Council of Public Affairs used nominal dollars instead of inflation-adjusted dollars in making its claim.
The Oklahoma Council of Public Affairs said that spending this year is on track to be the highest ever. While the raw numbers show that, the statement ignores the impact of inflation, a standard practice when comparing dollars over time. Factoring in inflation shows that real spending was higher in 2009 to 2011.

When population and economic growth are added in, spending has been higher over most of the past decade.

The statement contains an element of truth but it ignores critical facts that would give a different impression. We rate this claim Mostly False.
Considering the claim was arguably "Half True" based on nominal dollars, we wondered if PolitiFact's ruling was consistent with similar cases involving the claims of Democrats.

Given our past experience with PolitiFact, we were not surprised at all to find PolitiFact giving a "Half True" to a Democratic National Committee claim that U.S. security funding for Israel had hit an all-time high. There was one main difference between the DNC's claim and the one from the Oklahoma Council of Public Affairs: The one from the DNC was false for either nominal dollars or inflation-adjusted dollars (bold emphasis added).
The ad says "U.S. security funding for Israel is at an all-time high." Actually, it was higher in one or two years, depending whether you use inflation-adjusted dollars. In addition, the ad oversells the credit Obama can take for this year’s number. The amount was outlined by a memorandum signed in 2007 under President George W. Bush. On balance, we rate the claim Half True.
Awesome!

That's not just inconsistent, it's PolitiFinconsistent!


Note

The fact check that drew our attention was technically from PolitiFact Oklahoma, but was perpetrated by Jon Greenberg and Angie Drobnic Holan, both veterans of PolitiFact National.

Sunday, October 15, 2017

PolitiFact: LeBron James is Colin Kaepernick

Yes, we confess to using a strange title for this post.

Yet as far as we can tell, that is what PolitiFact is saying with a Facebook post from earlier today:



A fact check on Colin Kaepernick's shirt? We followed the link. We did find a rating of Kaepernick's misattribution of a quotation to Winston Churchill. But there was nothing about his shirt.

The linked page was titled "All Sports statements."

But we remembered seeing a fact check related to a sports figure that wasn't on that page.

It was a fact check of a Photoshop that changed the text on LeBron James' shirt.


So ... LeBron James is Colin Kaepernick?

We wonder how PolitiFact handles Facebook corrections transparently.

Friday, October 13, 2017

Yet more sweet PolitiLies

Today, with a fresh executive order from President Donald Trump ending subsidies insurance companies had received from the Obama and Trump administrations, PolitiFact recycled a PolitiSplainer it published on July 31, 2017.

The story claimed to explain what it meant when Trump threatened to end an insurance company bailout:


We have no problem with the bulk of the story*, except for one glaring omission. PolitiFact writer John Kruzel somehow left out the fact that a court ruling found the payments to insurance companies were not authorized by the Affordable Care Act. The Court suspended its injunction to leave time for an appeal, but time ran out on the Obama administration and now any such appeal is up to the Trump administration.

Anyone want to hold their breath waiting for that to happen?

That makes the lead of Kruzel's story false (bold emphasis added):
President Donald Trump warned lawmakers he would cut off billions in federal funding that insurance companies receive through Obamacare if Congress fails to pass new health care legislation.
If the ACA legislation does not authorize the spending the insurance companies received, then the insurance companies do not receive their funding "through Obamacare."

How does a "nonpartisan" fact checker miss out on a key fact that is relatively common knowledge? And go beyond even that to misstate the fact of the matter?

Maybe PolitiFact is a liberal bubble?




*At the same time, we do not vouch for its accuracy.

Thursday, October 12, 2017

PolitiFact defends itself for money

PolitiFact has a terrible record when it comes to defending itself from public criticism. So it surprised us to see PolitiFact's Jon Greenberg take to Twitter highlighting an article he wrote defending against criticism he received from Mark Hyman.



The destination article by Greenberg failed to link to the attacking video. So persons wishing to see for themselves would just have to take Greenberg's word or else try to find the video through their own effort.

We found the video with a little effort. We could not find it through the association with Sinclair Broadcasting that Greenberg advertised. We found it by connecting the "Mark Hyman" mentioned in Greenberg's self-defense to BehindtheHeadlines.net.



We found a number of things striking about Greenberg's article.

 Greenberg:
Sinclair, which has faced criticism for a clear conservative point of view, published a video commentary last week saying we fabricated data related to a fact-check we published on Sen. Ted Cruz, R-Texas. Cruz claimed, "Two-thirds of the (Sandy disaster relief) bill had nothing to do with Sandy."
First, we were overpoweringly bemused by PolitiFact, which has faced criticism for a clear liberal point of view, mentioning that Sinclair has received criticism for leaning right. If the accusations against Sinclair are worth mentioning, then what of those against PolitiFact?

More importantly, Greenberg made a logical leap with his claim the article says "we fabricated data related to a fact-check we published on Sen. Ted Cruz, R-Texas." That simply isn't in Hyman's video or the transcript. The closest to that occurs at the end of the video, when Hyman refers to his two other criticisms of PolitiFact:
On our website are two other segments [here, here] that show PolitiFact fabricating info and presenting false claims.
While it is possible to read the statement as a suggestion PolitiFact fabricated information in its fact check of Cruz, it may also be read to simply say the other two segments show PolitiFact fabricating information and(/or) presenting false claims. In his tweet Greenberg was more specific, fabricating the claim that Hyman accused him of making up "numbers."

Back to Greenberg's article:
We found that the bulk of the federal money went to states hit hardest by Sandy.

Sinclair executive Mark Hyman countered, saying that "billions were not for emergency relief. Or for Sandy."
Is this a titanic battle of straw men or what?

Hyman's beef with PolitiFact was its supposed suggestion that virtually all of the Sandy relief bill went to pay for relief from Sandy's impact. The quotation Hyman used occurred in the Washington Post version of the Cruz fact check but not in the one PolitiFact published. But it isn't hard to see the same idea presented in PolitiFact's fact check. If the money went to states hardest hit by Sandy, PolitiFact apparently reasoned, then it was for relief from superstorm Sandy.

That's bad reasoning, and worth exposing.

Is Mark Hyman a "Sinclair executive"? We think Greenberg botched the reporting on this one [Pre-publication update: PolitiFact fixed this after we did some Twitter needling]. Hyman's biography (dated October 2017) says he stepped down from an executive position in 2005 and mentions no resumption of a similar post.


That straw man again

Greenberg:
That could be true, but that isn’t what Cruz claimed. He said the lion’s share of the money had no connection to Sandy.

That’s a bold assertion, and nothing Sinclair presented actually supports it.
We must forgive Jon Greenberg for focusing on Hyman's failure to show Cruz was right. A fact checker cannot be expected to notice that Hyman did not try to defend Cruz and did not mention Cruz in his critique of PolitiFact.

In debate terms, Greenberg conceded that Hyman may have a sound premise.


Pictorial interlude/foreshadowing

How about a different straw man?

Greenberg:
It’s a simple question of math and scale. Sinclair’s report said that $16 billion went to the Housing and Urban Development Department. It then gave two examples of that money going to Chicago (to upgrade sewer and water systems) and Springfield, Mass. (to boost development in tornado-damaged low-income neighborhoods). Together, the two grants add up to $85 million.

Those dollars amount to one half of 1 percent of the money HUD got after the storm.
Greenberg omits that Hyman explicitly said he was merely giving two examples among many. It was disingenuous, and a straw man fallacy, for Greenberg to total the amount from Hyman's two examples and use the total to amplify PolitiFact's point that the bulk of the spending went to disaster relief for damages wrought by Sandy. In a way, Greenberg actually proves Hyman's point after the fact.

A point unresponsive to Hyman's charge

This is what happens with straw man arguments. We see arguments advanced that have nothing to do with what the other person was arguing.

Greenberg:
As we reported, HUD granted $12.8 billion to the places hit hardest by Sandy, namely New Jersey, New York and New York City. That represents about 80 percent of the HUD total, the opposite of Cruz’s claim that two-thirds had nothing to do with Sandy.
As Hyman wasn't defending Cruz, Greenberg wastes his words.

A paragraph hinting at what might have been ...

We nominate this next segment as Greenberg's best paragraph:
There are valid reasons to debate what qualifies as emergency relief and what is non-emergency spending. We noted that distinction in our report, as well as that the Sandy appropriation bill was a leaky bucket. The money, for example, could be spent on disasters in 2011, 2012 and 2013. We also highlighted that it takes years to spend many of those billions of dollars, especially when they go to roads, bridges, tunnels and other infrastructure.
The above is the sensible person's response to Hyman's editorial. Hyman charged that PolitiFact made it look like nearly all the money from the Sandy relief bill went for emergency relief. Greenberg's right that PolitiFact made these points in its fact check. Hyman's case, then, is certainly not a slam-dunk.

... and then back into the weeds of falsehood and obfuscation

Greenberg:
The Sinclair report concluded by saying that PolitiFact "is fabricating info and presenting false claims." That is simply not true. Our reporting is accurate, and we list all of our sources.
Greenberg leaves out the context of Hyman's conclusion, as we pointed out above. As a result, Greenberg leaves his readers the misleading impression that his article refutes Hyman's concluding claim. That claim most obviously refers to two other segments about PolitiFact that Greenberg does not address in his article. On what basis does he call those charges false?

Making matters worse for PolitiFact, Greenberg's article contains inaccurate reporting and fails to list all its sources (documented above).

The Ulterior Motive

Why did PolitiFact defend itself from Hyman's video attack when we've been enthusiastically targeting PolitiFact for years while receiving a fairly dedicated silence in response?

The image we inserted above foreshadowed the answer, specifically the blue hotlink in the middle of Greenberg's article encouraging readers to "STAND UP FOR FACTS AND SUPPORT POLITIFACT!" (all caps in the original)

The link leads to a page where readers can sign up to join PolitiFact's "Truth Squad," which helps financially support PolitiFact.

Greenberg's story is PolitiFact's version of those ubiquitous emails politicians send out to spur their constituents to give them money. My opponent is forming a Super PAC! Send $8 to show you support Candidate X and getting rid of money in politics!

PolitiFact chose the attack from Sinclair because it could attach the attack to a company with deep pockets, scaring its supporters into giving it money.

Check out the email we got from Executive Director Aaron Sharockman:

Friends,
The Sinclair Broadcasting Group, the nation's largest owner of television stations, is attacking PolitiFact for a recent fact-check we published about federal funding related to superstorm Sandy.
Sinclair, which has faced criticism for a clear conservative point of view, published a video commentary last week saying we fabricated data related to a fact-check we published on Sen. Ted Cruz, R-Texas. Cruz claimed, "Two-thirds of the (Sandy disaster relief) bill had nothing to do with Sandy."
We found that the bulk of the federal money went to states hit hardest by Sandy.
Sinclair executive Mark Hyman countered, saying that "billions were not for emergency relief. Or for Sandy."
That could be true, but that isn’t what Cruz claimed. He said the lion’s share of the money had no connection to Sandy.
That’s a bold assertion, and nothing Sinclair presented actually supports it.
Will you help PolitiFact fight for the truth?
It’s a simple question of math and scale. Sinclair’s report said that $16 billion went to the Housing and Urban Development Department. It then gave two examples of that money going to Chicago (to upgrade sewer and water systems) and Springfield, Mass. (to boost development in tornado-damaged low-income neighborhoods). Together, the two grants add up to $85 million.
Those dollars amount to one half of 1 percent of the money HUD got after the storm.
As we reported, HUD granted $12.8 billion to the places hit hardest by Sandy, namely New Jersey, New York and New York City. That represents about 80 percent of the HUD total, the opposite of Cruz’s claim that two-thirds had nothing to do with Sandy.
There are valid reasons to debate what qualifies as emergency relief and what is non-emergency spending. We noted that distinction in our report, as well as that the Sandy appropriation bill was a leaky bucket. The money, for example, could be spent on disasters in 2011, 2012 and 2013. We also highlighted that it takes years to spend many of those billions of dollars, especially when they go to roads, bridges, tunnels and other infrastructure.
The Sinclair report concluded by saying that PolitiFact "is fabricating info and presenting false claims." That is simply not true. Our reporting is accurate, and we list all of our sources.
Whenever we can, we let the numbers do the talking and in the case of Cruz’s statement, the numbers spoke loud and clear. He said two-thirds had nothing to do with Sandy. The dollars show that the bulk of the money went to the places hit hardest by Sandy.
Yours truly,

Aaron Sharockman
Executive Director
Isn't that precious? Sharockman sends out Greenberg's article under his own name! You'd think Sharockman could give Greenberg the credit for writing the thing, right?

Cheap Stunt, Poorly Executed


Anyway, PolitiFact makes it clear that it didn't answer Hyman as part of its supposedly firm commitment to transparency. PolitiFact answered Hyman to propel an appeal for money.

It only compounds our amusement that PolitiFact left out the fact that gives that appeal whatever urgency it might have. Maybe PolitiFact didn't want to credit HBO's John Oliver for it. Who knows? Regardless, Oliver reported (our link goes via The Hill) that Sinclair Broadcasting Group requires its local news programs to air Hyman's commentaries, among others (whether on all of its stations or only some, we do not know). So Hyman's videos have greater reach than the modest Alexa rankings we see for BehindtheHeadlines.net (a little ahead of PFB in the 3.1 millions) would suggest.

All in all, we call this a cheap stunt poorly executed.

Nice work, PolitiFact. It makes a good bookend with your PolitiFact Evangelism and Revival Tour.

Monday, October 9, 2017

PolitiFact does racial profiling (Updated)

On Oct. 6, 2017 PolitiFact confirmed Newsweek's report that white men commit the majority of mass shootings (bold emphasis added):
As details about the Las Vegas shooter’s identity emerged, media outlets noted some of the characteristics fit neatly within a familiar profile of prior mass shooting perpetrators.

Newsweek, for instance, ran a story with the headline, "White men have committed more mass shootings than any other group." The article builds on this claim, stating that 54 percent of mass shootings carried out since 1982 were done so by white males.
The "Mostly True" rating awarded to this claim counts as extremely dunderheaded. PolitiFact even explains why in the text of the fact check, but skips its common practice of rating "meaningless statistics" harshly on the "Truth-O-Meter" even when reported accurately.

That, for example, is why Donald Trump received a "Half True" rating for correctly stating that the number of Hispanics in poverty went up under the Obama administration. PolitiFact said the statistic meant little because the Hispanic poverty rate went down during the same span.

In this case also, the rate counts as the key statistic. But PolitiFact's numbers showed that whites were no more than proportionally represented in the statistics (bold emphasis added):
Newsweek's claim is literally accurate. But it's worth noting the imprecision of this data, and the percentage of mass shootings by white men is lower than their share of the male population, according to Mother Jones.
Newsweek, for its part, allows a liberal expert to expound on the racial resentment factors that might explain the white male penchant for shooting up the town. And Newsweek follows that with the admission that maybe the sheer abundance of white people might help explain the data (bold emphasis added):
The high number of white men committing mass shootings is also explained, at least in part, by the fact white people make up a majority of the U.S. population (63 percent) and men are more likely to commit violent crime in general: In the U.S., 98 percent of mass shootings and 90 percent of all murders are committed by men.
Newsflash: There's no need to look for special explanations for the high number of whites committing mass shootings unless they are committing more than their share. And they aren't, according to the numbers PolitiFact used.

How did a statement just as flawed as Trump's garner a "Mostly True" rating?

That one's not hard. We know the ratings are substantially (if not entirely) subjective and that PolitiFact staffers are like everybody else: They're biased.  And their bias trends left.

But this fact check we found particularly egregious because it helps inflame racial conflict, albeit illogically. And we find it hard to imagine that the folks at PolitiFact did not realize, before publishing, that the fact check would feed that illogical thinking.

Here's a smattering of commentary from the comments at PolitiFact's Facebook page. We won't offer up the names because our purpose is to shame PolitiFact, not the people PolitiFact helped mislead.
"Because they listen to news media outlets that tell them they will be minorities in 20 years and that immigrants are taking away their jobs and women are threatening to take away their masculinity through economic means via education and most of them are stupid enough to believe it."
"White males, directly or indirectly, are responsible for far more than we realize!! We need to own what we've done and are currently doing!!"
"When you factor in the native genocide attacks and racially driven attacks, of course. Leave those out and they still are."
"I'm white so I'm not proud of it and I'd rather not admit it, but yes, probably. We probably lead in serial killers too."
"Add in serial killers and it gets worse."
"It shows 54% of mass shooting are done by whites, that there is a great chance of possibility that the next one can follow the same pattern."
"The point is that people like to pass laws based on statistics. For instance, there is, in effect, a Muslim ban and that was based on the assumption that it would make America safer because, supposedly and erroneously, Arab Muslims are a danger. Facts like the ones posted here negate that logic and show the racist intentions and biases behind actions like the travel ban."
"testosterone.....white priviledge [sic].....dangerous outcome?"
"Yes. And most of them are the ones with all of the guns... The REPUBLICANS."
"The real terrorist threat to the U.S.: extreme right wing white males."
Thanks, PolitiFact, for helping to bring us the truth in politics. Or something.


Update Oct. 9, 2017: PolitiFact reposted its fact check to Facebook. We'll take the opportunity to supplement our selection of comments from people buying into the deception.


"it seems white men are more motivated to commit mass shootings the [sic] people of color so there you have it."
"The focus is the right-wing NRA narrative that implies either minorities or radical Islamic terrorists are the biggest threat to safety rather than angry/crazy white guys with guns."
"well thankfully if the trend continues [white majority shrinking?--ed.] that won't be the case and we can stop sending worthless thoughts and prayers all the time."
"the facts say that even after you adjust for the per capita rates white males still do far more then their fair share of the mass shootings. The "Mostly True" rating is only because there's some debate over what really qualifies as a mass shooting for statistical purposes. RTFA."
"Ban white men."

Maybe PolitiFact will post the article again soon so we can update with even more comments.


Sunday, October 8, 2017

A mainstream media fact-checking scandal continues

Somewhere along the line, mainstream fact-checkers like PolitiFact had an epiphany about cutting funds from future baseline projections.

In the days of yore, when it was trendy for Republicans to decry the Affordable Care Act for cutting Medicare, PolitiFact said a cut from a budget baseline wasn't really a cut.

PolitiFact Virginia, June 2012 (bold emphasis added):
American Crossroads says (Sen. Tim) Kaine promoted a $500 billion cut to Medicare.
The Affordable Care Act contains about $564 billion in cost-savings measures for Medicare over 10 years. But the definition of a cut means there would be a reduction in spending. That’s not the case here. Medicare spending will continue to expand. The law will slow the projected rate of growth.
Now in the age of science and science-y fact-checking, PolitiFact has discovered that cutting funds from a future spending baseline is, in fact, a cut.

PolitiFact, October 2017 (bold emphasis added):
The Senate Budget Committee has a point that Medicare spending will be going up, just not as fast as it would under the status quo. It also has a point that more modest cuts sooner could stave off bigger cuts later. (Experts have often told us that it’s presumptuous to assume significant economic growth impacts before they materialize.)
But we don’t find it unreasonable for Schumer to call cumulative reductions to Medicare and Medicaid spending in the hundreds of billions of dollars "cuts."
That, friends and neighbors, is a major-league flip-flop. Zebra Fact Check documented it more extensively with a post on July 20, 2017. I pointed out the discrepancy on Twitter to the guilty parties, FactCheck.org, PolitiFact and the Washington Post Fact Checker. If they took note of the criticism, apparently each has decided that there is nothing amiss with the inconsistency.

Don't miss this tree on account of the forest

We would draw attention back to one detail in PolitiFact's rating of Sen. Charles Schumer (D-NY):
(W)e don’t find it unreasonable for Schumer to call cumulative reductions to Medicare and Medicaid spending in the hundreds of billions of dollars "cuts."
Why did PolitiFact put "cuts" in quotes? If it was a word Schumer had used, then okay, no problem. But PolitiFact says it has no problem with Schumer using "cuts" to describe decreases to projected spending when Schumer used the term "guts," not "cuts."

When Wisconsin's Tommy Thompson said the Affordable Care Act gutted Medicare, PolitiFact Wisconsin had a big problem with him using "gut" instead of "cut":
The health care law slows Medicare’s growth but spending would still rise significantly, and some new services are added.

The changes do not promise to hold seniors harmless, but Medicare is not being gutted.

We rate the claim False.
See PolitiFact Wisconsin's fact check to appreciate the degree to which it emphasized a distinction between "guts" and "cuts."

For fact-checking Schumer in 2017, the words are merely synonyms.

The inconsistency on cuts from a baseline occurs routinely from the mainstream fact checkers.
It's a scandal. And we shouldn't be the only ones emphasizing that point.




Correction Oct. 9, 2017: Changed "put 'cuts' in parentheses" to "put 'cuts' in quotes."

Saturday, October 7, 2017

Miami New Times: "How PolitiFact Got Its "Fake News" Tag Wrong on Occupy Democrats"

This is not a post highlighting PolitiFact's left-leaning bias.

This is a post serving to remind us that PolitiFact often operates in a slipshod manner.

The Miami New Times came out with a story on Oct. 2, 2017 rightly panning PolitiFact for miscategorizing "Occupy Democrats" as fake news.

Sure, Occupy Democrats publishes false stories. But the folks at PolitiFact (and others in the fact-checking clique) bang the drum reminding everybody that fake news is the publication of deliberately false stories. That's made-up stuff, not just mistakenly or stupidly wrong.

Despite that, PolitiFact listed Occupy Democrats on its page of fake news sources.
(W)hen New Times asked why PolitiFact had classified the liberal Facebook-based news empire Occupy Democrats as "Fake News" in its Fake News Almanac, the site admitted Occupy Democrats should never have been on the list in the first place. The misclassification highlights the difficulty of judging fake news, even for the pros, and raises questions about the reliability of the almanac.
The New Times obtained a predictable excuse from PolitiFact's Joshua Gillin:
(Gillin said) the site should not have been included in the almanac because the majority of its posts reviewed by PolitiFact were not designated as fake news, and the two that were deemed fake news date to 2016. For a whole site to be classified as fake news, he said, it must regularly make a "deliberate attempt to mislead."
PolitiFact took Occupy Democrats off the list after getting challenged on its inclusion. But PolitiFact gains one advantage by making its fake news almanac an embed instead of a published article: it cancels its responsibility to issue a correction notice.

Obviously there's neither a need nor a responsibility to note corrections made to embedded content, right? Right?

Just call it the embedded content loophole.

Friday, September 22, 2017

Joy Behar lies 100 percent of the time. It's from PolitiFact.

Of course the title of this post is intended solely to draw attention to its content. We do not think Joy Behar lies 100 percent of the time, no matter what PolitiFact or Politico say.

For the record, Behar's PolitiFact file as of Sept. 19, 2017:


As we have noted over the years, many people mistakenly believe  PolitiFact scorecards reasonably allow one to judge the veracity of politicians and pundits. We posted about Behar on Sept. 7, 2017, noting that she apparently shared that mistaken view.

PolitiFact surprised us by fact-checking Behar's statement. The fact check gave PolitiFact the opportunity to correct Behar's core misperception.

Unfortunately, PolitiFact and writer Joshua Gillin blew the opportunity.

A representative selection of statements?


Critics of PolitiFact, including PolitiFact Bias, have for years pointed out the obvious problems with treating PolitiFact's report cards as a means of judging general truthfulness. PolitiFact does not choose its statements in a way that would ensure a representative sample, and an abundance of doubt surrounds the accuracy of the admittedly subjective ratings.

Gillin's fact check rates Behar's conclusion about Trump's percentage of lies "False," but he succeeds in tap-dancing around each of the obvious problems.

Let Fred Astaire stand aside in awe (bold emphasis added):
It appeared that Behar was referring to Trump’s PolitiFact file, which tracks every statement we’ve rated on the Truth-O-Meter. We compile the results of a person's most interesting or provocative statements in their file to provide a broad overview of the kinds of statements they tend to make.
Focusing on a person's most interesting or provocative statements will never provide a broad overview of the kinds of statements they tend to make. Instead, that focus will provide a collection of the most interesting or provocative statements the person makes, from the point of view of the ones picking the statements. Gillin's statement is pure nonsense, like proposing that sawing segments from a two-by-four will tend to help lengthen the two-by-four. In neither case can the method allow one to reach the goal.

Gillin's nonsense fits with a pattern we see from PolitiFact. Those in charge of PolitiFact will occasionally admit to the problems the critics point out, but PolitiFact's daily presentation obscures those same problems.

Gillin sustains the pattern as his fact check proceeds.

When is a subjective lie an objective lie?


In real life, the act of lying typically involves an intent to deceive. In PolitiFact's better moments, it admits the difficulty of appearing to accuse people of lying. In a nutshell, it's very dicey to state as fact a person was lying unless one is able to read minds. But PolitiFact apparently cannot resist the temptation of judging lies, or at least the temptation of appearing to make those judgments.

Gillin (bold emphasis added):
Behar said PolitiFact reported that "95 percent of what (Trump) says is a lie."

That’s a misreading of Trump’s file, which notes that of the 446 statements we’ve examined, only 5 percent earned a True rating. We’ve rated Trump’s statements False or Pants On Fire a total of 48 percent of the time.

The definitions of our Truth-O-Meter ratings make it difficult to call the bulk of Trump’s statements outright lies. The files we keep for people's statements act as a scorecard of the veracity of their most interesting claims.
Is Gillin able to read minds?

PolitiFact's fact checks, in fact, do not provide descriptions of reasoning allowing it to judge whether a person used intentionally deceptive speech.

PolitiFact's report cards tell readers only how PolitiFact rated the claims it chose to rate, and as PolitiFact's definitions do not mention the term "lie" in the sense of willful deception, PolitiFact ought to stick with calling low ratings "falsehoods" rather than "lies."

Of course Gillin fails to make the distinction clear.

We are not mind readers. However ...

Though we have warned about the difficulty of stating as fact that a person has engaged in deliberate deception, there are ways one may reasonably suggest it has occurred.

If good evidence exists that a party is aware of information contradicting that party's message and the party continues to send that same message anyway, it is reasonable to conclude that the party is (probably) lying. That is, the party likely engages in willful deception.

The judgment should not count as a matter of fact. It is the product of analysis and may be correct or incorrect.

Interviews with PolitiFact's principal figures often make clear that judging willful deception is not part of their fact-checking process. Yet PolitiFact has a 10-year history of blurring the lines around its judgments, ranging from the "Pants on Fire" rating ("Liar, liar, pants on fire!") for "ridiculous" claims, to articles like Gillin's that skip opportunities to achieve message clarity in favor of billows of smoke.

In between the two, PolitiFact has steadfastly avoided establishing a habit of attaching appropriate disclaimers to its charts and graphs. Why not continually remind people that the graphs only cover what PolitiFact has rated after judging it interesting or provocative?

We conclude that PolitiFact wants to imply that some politicians habitually tell intentional falsehoods while maintaining its own plausible deniability. In other words, the fact checkers want to judge people as liars under the deceptive label of nonpartisan "fact-checking" but with enough wiggle room to help shield it from criticism.

We think that is likely an intentional deception. And if it is intentional, then PolitiFact is lying.

Why would PolitiFact engage in that deception?

Perhaps it likes the influence it wields on some voters through the deception. Maybe it's just hungry for click$. We're open to other explanations that might make sense of PolitiFact's behavior.