Tuesday, June 28, 2022

PolitiFact: How can we rig this abortion fact check to help President Biden? Part II

Lo and behold, PolitiFact made changes to the fact check we critiqued in our previous post.

Recall that we lodged three main criticisms of PolitiFact's "Mostly True" confirmation of President Biden's claim the Supreme Court's Dobbs decision made the United States an outlier among developed nations.

  1. PolitiFact cherry-picked its pool of "developed nations."
  2. It misidentified "Great Britain" as a member of the G7, enabling it to ignore a Northern Ireland spanner in the works.
  3. It falsely stated members of the G7, except the United States, "have national laws or court decisions that allow access to abortion, with various restrictions."

PolitiFact partially fixed the second problem.

Fixing the second problem without fixing the third problem magnifies the third problem. And PolitiFact, again, failed to follow its own corrections policy.

Let's start with the "clarification" notice and work from there:

CLARIFICATION, June 27, 2022: This story has been clarified to reflect that the United Kingdom, which contains Northern Ireland, is a G-7 nation. It has also been updated to describe current abortion laws in Northern Ireland.

Note the "clarification" notice announces a clarification and an update.

What does PolitiFact's statement of principles prescribe for clarifications and updates?

Clarification:

Oops! PolitiFact's statement of principles offers no procedure for doing a clarification!

This either means PolitiFact is following its principles (because the principles allow it to do whatever it wants), or it means PolitiFact isn't really following a principle at all.

Update:

Updates – From time to time, we add additional information to stories and fact-checks after they’ve published, not as a correction but as a service to readers. Examples include a response from the speaker we received after publication (that did not change the conclusion of the report), or breaking news after publication that is relevant to the check. Updates can be made parenthetically within the text with a date, or at the end of the report. Updated fact-checks receive a tag of "Corrections and updates."

We think it clear that this policy calls for newly added "update" material within the original text to occur with clear cues to the reader where the material was added ("parenthetically within the text with a date"). Otherwise, the new material occurs at the end of the item after the update notice.

It's easier to find PolitiFact updates done incorrectly than ones done correctly. But this example shows an update done the right way:

 

The method shown communicates clearly to readers how the article changed.

This is infinitely more transparent than PolitiFact's common practice of an update notice at the end saying, in effect, "We changed stuff in the story above on this date."

Understood correctly, PolitiFact corrected its story. It fixed its mistake in misleadingly identifying Great Britain as a member of the G7. And the "update" was not new information. It was information PolitiFact should have included originally but mistakenly did not.

The fact check continues to do readers a disservice by failing to inform them that the U.K. in 2019 forced Northern Ireland, via special legislation, to permit abortion. That still-missing fact contradicts PolitiFact's claim that members of the G7 other than the United States "have national laws or court decisions that allow access to abortion, with various restrictions." The legislation forcing Northern Ireland to permit legal abortion was not a national law, nor was it a court decision.

The law is specific to Northern Ireland.

We'll end with an image we created for Twitter. It's an image from the Internet Archive Wayback Machine, using its comparison feature. Text highlighted in blue was changed from the original text, and we added red lines under the part of PolitiFact's fact check that remains false.



Saturday, June 25, 2022

PolitiFact: How can we rig this abortion fact check to help President Biden?

Day by day, it's amazing to watch the kind of material PolitiFact publishes as supposedly "not biased" fact-checking.

From yesterday, June 24, we have this (red X added):


In what respect was President Biden claiming the United States counts as an outlier among "developed nations"? Here's how PolitiFact presented the president's statement:
"With this decision, the conservative majority of the Supreme Court shows how extreme it is, how far removed they are from the majority of this country," Biden said a couple hours after the ruling was released on June 24. "They have made the United States an outlier among developed nations in the world."

How far removed is the Supreme Court from the majority in this country? If we start with the amount of legal education, the gulf between them does seem obvious. But Biden surely meant the Court's attitude toward abortion, even though the Court was faced with ruling on what U.S. law says about abortion, and not how to transform popular American attitudes into national law.

And how had the Court "made the United States an outlier among developed nations in the world"? PolitiFact has published items showing it at least understands that abortion law varies widely among comparably developed nations. If it's "Half True" that "'39 out of 42 (countries) in Europe have more restrictive abortion laws' than Mississippi," then how is PolitiFact supposed to pull Biden's fat out of the fire on this one?

PolitiFact called in the Spin Team (our term!) of Madison Czopek and Tom Kertscher. The team developed a PolitiFantastic interpretation of Biden's claim:

While the high court’s decision leaves in place state laws that permit abortion, it removes the national right to an abortion — something that is widely guaranteed by laws or court rulings in other developed nations.

So ... President Biden was just saying that the Court made the United States, home of the 10th Amendment, an outlier on the basis of its newfound lack of an explicit or de facto national abortion access law?

But what other developed nations have something like the 10th Amendment in their constitutions? If they lack such a thing, then isn't this a pointless exercise? The U.S. is an outlier because of the structure of its federalist system, not simply because of the Court.

Damn those torpedoes! PolitiFact's going full speed ahead!

Would you believe ...

... no mention of 'federalism'?

PolitiFact came oh-so-close to unraveling the mystery of the missing federal law setting abortion policy nationally in the United States:

The high court ruled 6-3 to uphold a restrictive Mississippi law and 5-4 to reverse Roe, with the majority opinion saying "the Constitution makes no express reference to a right to obtain an abortion." The decision ended nearly 50 years of federally protected access to abortion and returned power to individual states to set their own laws.

That means access to abortion varies widely in the U.S.

 Correct, PolitiFact. It's almost like the European Union that way.

In the EU, Malta is the only country where abortion is still completely prohibited. But Poland has a near-total ban in place, and many EU states have a range of legal barriers to abortion, such as mandatory counselling, waiting periods between a request and the abortion, third party consent, low upper time limits and limited legal grounds that force many women to travel to other countries, and all the increased restrictions on travel due to Covid-19, especially where access to abortion pills and self-managed abortion have not been made available.

 It's going to be really something when these fact checkers discover the concept of federalism.

... 'developed nations' don't need to be restricted to just the G7?

PolitiFact cleverly removed Malta and Poland from the "developed nations" list by restricting that list to the G7:

Developed nations consisting of the world’s leading economies are sometimes referred to as the G7, or the Group of Seven, which includes the U.S. and six other industrialized nations. Unlike the U.S., those six have national laws or court decisions that allow access to abortion, with various restrictions.

Would you believe PolitiFact has no consistent history of confining "developed nations" to the G7?

... 'Great Britain' as a G7 nation?

Would you believe the G7's membership rolls contain no nation called "Great Britain," even though PolitiFact includes "Great Britain" on its list?

Would you believe that Great Britain isn't the same thing as the United Kingdom with respect to abortion law?

Great Britain technically refers to the main British island, made up of England, Scotland and Wales. "Great Britain" is sometimes used as a synonym for the United Kingdom, but if PolitiFact meant it that way, then it should have pointed out that the U.K. did not liberalize abortion law in Northern Ireland until it passed special legislation in 2019.

"U.K." literally means "The United Kingdom of Great Britain and Northern Ireland."

So the United Kingdom was the G7 outlier until 2019. The G7 went with no outlier for a couple of years before the Supreme Court struck down the Roe v. Wade precedent.

Why did PolitiFact miss all this stuff?

Probably because PolitiFact is biased. Very probably.

Tuesday, June 14, 2022

PolitiFact issues another pathetic appeal for donations

The pressure PolitiFact feels to raise money from individual donors has resulted in a distinct story genre. PolitiFact's lying "PolitiFact is not biased" article from 2018 serves as a fine example, with PolitiFact skirting the facts about itself to sell its biased fact-checking to would-be donors.

In 2022, PolitiFact looked to do whiny Taylor Lorenz one better with its mewling "PolitiFact reporters face online harassment; we keep fact-checking anyway."

 In the alleged interest of transparency, PolitiFact tells us it receives vicious attacks. But it does not relate the specifics because that might give them undue attention!

In other words, it's not about transparency. It's about money.

And it's about twisting the truth.

The self-pity party, penned by PolitiFact Editor-in-Chief Angie Drobnic Holan, begins:

Fact-checking journalism has never been exactly easy. 

Our normal work days include overcoming a series of obstacles: We field questions from readers seeking the facts on topics large and small. We dig for hard-to-find information over the internet and track down experts to interview over the phone. We reach out to press secretaries and spokespeople, seeking their comments and insights. We write the fact-checks and take them through a rigorous editing process with many revisions. We list and link to all of our sources so readers can review our work for themselves. We energetically debate ratings on our Truth-O-Meter.

The "rigorous editing process" counts as a howler. Significant errors often slide past three(!) PolitiFact editors as well as the writer. Zebaa Fact Check documented one such example from May 25, 2022, when PolitiFact rated a false statement (and bankrupt argument) from a Democrat "Mostly True."

The claim that they "energetically debate ratings on our Truth-O-Meter" counts as fluff. The Truth-O-Meter debates are not public. You'll just have to take their word for it. Such transparency! And if the vote of three editors gets split, they're not going to tell you how the vote went down.

Too transparent?

In the interest of transparency, we're not linking to any specific examples

Hilariously, PolitiFact trumpets its transparency in detailing the attacks it has suffered while also taking pains not to identify any specific examples of attacks it has suffered. Instead, PolitiFact names two of its attackers (Dan Bongino! And Ron DeSantis' press secretary Christina Pushaw!). As to what the evildoers did, PolitiFact cannot mention anything specific for fear of amplifying the evil message. But trust PolitiFact when it says the behavior "can only be described as online harassment and intimidation."

To assure our readers of our high-quality content we need more smoke!

PolitiFact offends most deeply with its sanctimonious attitude. Just check this out:

The actions of these anti-journalism forces are deeply concerning to everyone who cares about the independent practice of fact-finding. Disparagement of individual journalists has become an occupational hazard for PolitiFact’s staff and among journalists at media organizations around the country.
Again, PolitiFact offered no specific description of the actions undertaken by "anti-journalism forces." Bongino supposedly misrepresented PolitiFact's findings (no specifics, sorry!). Pushaw published inquiries from journalists, ridiculed them and then did something else bad (PolitiFact hints).

Deeply concerning to everyone who cares about the independent practice of fact-finding? Maybe if PolitiFact had divulged some of the specific facts we would have reason for real concern.

More sanctimony: They say we made mistakes--but we didn't!

PolitiFact (bold emphasis added):

Reporters were singled out individually as being unfit for their jobs. They’ve been vilified for not having advanced credentials or specialized academic degrees. (Conversely, they’ve also been criticized as out-of-touch elites.) They’ve been told over and over that they should be fired for incompetence. In reality, their credentials are entirely appropriate for journalism, their reporting was factually valid, and the published fact-checks were solid and without error. For the record, they’re in zero danger of being fired.
Again, in the interest of transparency, PolitiFact tells us that the charges of error it will not describe were themselves wrong, and PolitiFact was right all along!

This is from an organization that, its internal "Truth-O-Meter" debates aside, typically avoids commenting on charges of error. Indeed, the fact checkers at PolitiFact often seem quite happy to stand behind obviously false reporting.

Of course the point is obviou$

Our readers remain a huge source of solace in the face of online attacks. Many of them support us financially with monthly donations. They share our reports online and publicly comment that they appreciate our work. Others contact us directly with simple words of support sent via email or on social media. We are here to serve those who seek fact-based, civil expression. Disagreement is fine. Personal, ugly attacks are shameful — and won’t discourage us from our work.
PolitiFact's article is best viewed as an appeal for money.

If people want to give money to a left-leaning fact checker, give to one that plausibly hews to standards of integrity, like FactCheck.org. FactCheck.org resisted joining the ludicrous "normal tourist visit" media smear of Rep. Andrew Clyde (R-Ga.). PolitiFact went all in (and couldn't even punctuate the supposed quotation correctly).

Name-calling aside, PolitiFact deserves heaps of criticism. It stinks at fact-checking. The worst of the mainstream fact-checkers seems to see itself in no need of improvement except improved cash flow.

Not. Very. Objective.

Sunday, June 12, 2022

Enter Strawman: PolitiFact uses interpretive follies to downgrade Republican claim

PolitiFact routinely applies uncharitable interpretation to reach nonsensical conclusions with its trademarked "Truth-O-Meter" ratings. Rep. Glenn Grothman (R-Wisc.) received that treatment from PolitiFact Wisconsin on June 10, 2022.

Grothman said the proposed loan forgiveness plan would primarily benefit the wealthy. And he went on to emphasize the program occurs while low income Americans struggle.

PolitiFact did a fair job at first of presenting Grothman's words:

"Nearly 60% of all student loan debt is held by the rich and upper-middle class," he said in a May 21, 2022 newsletter. "So, by forgiving student loan debt, we would be handing the wealthy a financial windfall while low income Americans suffer further from inflation and rising costs."

DALL-E image of "Enter Strawman"
Enter Strawman

But then the twisting began:
For the purposes of this fact-check, we’re going to look at the portion of the claim about who holds student loan debt, and whether or not forgiveness would help low-income people.
Out of the blue, PolitiFact Wisconsin jumps to the conclusion that Grothman is saying loan forgiveness extended to low-income persons would do little to help them.

PolitiFact simply declines to consider that Grothman might mean that executing a policy that most benefits the wealthy makes little sense during an inflation crunch that's particularly hurting lower-income Americans. PolitiFact confirms Grothman was right that the policy would tend to benefit the wealthy. But by pursuing its far-fetched interpretation of Grothman's claim, PolitiFact ends up defeating a straw man:
(Grothman) misfires a bit in suggesting that loan forgiveness would not matter much to low-income people. For college graduates in lesser-paying jobs, it might make a huge difference in terms of their finances.

"Suggesting." That's PolitiFact-ese for "We made it up."

Grothman wasn't saying loan forgiveness would not help lower income people who received it. He was saying loan forgiveness mostly would benefit the wealthy when lower income people are the ones in need of relief.

But try to tell that to a liberal blogger wearing the "fact checker" label.

Thursday, June 2, 2022

PolitiFact's ongoing double standard on correlation versus causation

PolitiFact does not advertise the fact that it applies standards inconsistently.

But it could do so without misleading people.

The liberal bloggers at PolitiFact, who pass themselves off as objective fact checkers, presented us with a new example the other day.

President Biden passed correlation off as causation. "Mostly True," said PolitiFact: Because, you know, the correlation was there.

PolitiFact's reasoning:

A key study backs Biden up. But the reality is millions of assault weapons and large-capacity magazines remained in circulation during the ban, and that makes it hard to tease out the law’s impact. 

We rate this claim Mostly True.

Of course it's child's play to come up with an example where somebody factually claimed a correlation and got PolitiDinged for it.

The Facebook post did not directly claim causation any more than did President Biden.



PolitiFact confirmed the claimed correlation, but guess what? There was no proof the higher mask usage caused the deaths! So, "False."

PolitiFact's reasoning:

A Facebook post said there’s a "‘positive correlation’ between higher mask usage and COVID-19 deaths."

The post was referencing a study that reviewed data from 35 European countries and found that in places where mask usage was higher, COVID-19 deaths were also higher. But the study’s author said there was no cause-and-effect found.  

Critics of the study said masking protocols were issued in response to high rates of transmission. So it would be expected that deaths would occur while masking would be in place. 

Public health officials recommend masking as one way to help reduce transmission.

We rate this claim False.

Parallel cases. Both claims asserted a correlation. In both cases PolitiFact substantially confirmed the correlation but noted that correlation does not prove causation. Nearly polar opposite ratings resulted. 

That's how PolitiFact rolls.

Wednesday, June 1, 2022

Literally false and the underlying point is false, therefore "Mostly True"

 Have we mentioned PolitiFact is biased?

Check out this epic fail from the liberal bloggers at PolitiFact (red x added):


PolitiFact found it "Mostly True" that most of the "killers" to which Sen. Chris Murphy (D-Conn.) referred tend to be 18, 19 years old.

What's wrong with that?

Let us count the ways.

In reviewing the context, Sen. Murphy was arguing that raising the age at which a person may buy a gun would reduce school shootings. Right now that threshold stands at 18 in most states and for most legal guns, with certain exceptions.

If, as Murphy says, most school shootings come from 18 and 19-year-olds then a law moving the purchase age to 21 could potentially have quite an effect.

"Tend To Be"="Tend To Be Under"?

But PolitiFact took a curious approach to Murphy's claim. The fact checkers treated the claim as though Murphy was saying the "killers" (shooters) were 20 years old or below.

That's not what Murphy said, but giving his claim that interpretation counts as one way liberal bloggers posing as objective journalists could do Murphy a favor.

When PolitiFact checked Murphy's stats, it found half of the shooters were 16 or under:

When the Post analyzed these shootings, it found that more than two-thirds were committed by shooters under the age of 18. The analysis found that the median age for school shooters was 16.

So, using this criteria [sic], Murphy is correct, even slightly understating the case.

See what PolitiFact did, there?

Persons 16 and under are not 18, 19 years old. Not the way Murphy needs them to be 18, 19 years old.

If Murphy can change a law that makes it illegal for most shooters ("18, 19 years old") to buy a gun, that sounds like an effective measure. But persons 17 and under typically can't buy guns as things stand. So, for the true majority of shooters Murphy's law (pun intended?) wouldn't change their ability to buy guns. Rather it would simply remain illegal as it is now.

To emphasize, when PolitiFact found "the median age for school shooters was 16," that effectively means that most school shooters are 17 or below. That actually contradicts Murphy's claim that most are aged 18 or 19. We should expect that most are below the age of 17, in fact.

If Murphy argues for raising the age for buying a gun to 21 based on most shootings coming from persons aged 16 or younger, that doesn't make any sense. It doesn't make sense because it would not change anything for the majority of shooters. They can't buy guns now or under Murphy's proposed law.

Calculated Nonsense?

By spouting mealy-mouthed nonsense, Murphy succeeded in laying out a narrative that gun control advocates would like to back. Murphy makes it seem that raising the gun-buying age to 21 might keep most school shooters from buying their guns.

As noted above, the facts don't back that claim. It's nonsense. But if a Democratic senator can get trusted media sources to back that nonsense, well then it becomes a compelling Media Narrative!

Strict Literal Interpretation

Under strict literal interpretation, Murphy's claim must count as false. If most school shooters are 16 years old or younger, then the existence of even one 17-year-old shooter makes his claim false. Half plus one makes a majority every time.

Murphy's claim was false under strict literal (hyperliteral) interpretation.

Normal Interpretation

Normal interpretation is literal interpretation, but taking things like "raining cats and dogs" the way people (literally) understand them normally. We've reviewed how normal interpretation should work in this case. To support a legitimate argument for a higher gun-buying age, Murphy needs to identify a population that the legislation would reasonably affect. The ages Murphy named (18, 19) meet that criterion. And, because Murphy used some language indicative of estimation ("tend to be"), we can even reasonably count 20-year-olds in Murphy's set.

Expanding his set down to 17 doesn't make sense because changing the gun purchase age from 18 to 21 has no effect on a 17-year-old's ability to purchase a gun at 17.

But shootings by 18-, 19- and 20-year-olds combined cannot make up "most" of the school shootings if the median age for the shooters is 16 and at least one shooter was either 17 or over 20.

Murphy's claim was false given normal (literal) interpretation.

Biased Interpretation

PolitiFact used biased interpretation. The fact checkers implicitly said Murphy meant most of the shootings came from people under the age of 18 or 19, even though that makes nonsense of Murphy's argument.

PolitiFact's biased interpretation enhanced a misleading media narrative attractive to liberals.

Coincidence?

Nah. PolitiFact is biased to the left. So we see them do this kind of thing over and over again.

So it's not surprising when PolitiFact rates a literally false statement from a Democrat as "Mostly True."


Correction June 1, 2022: Fixed a typo (we're=we've)

Thursday, March 17, 2022

PolitiFact's "Pants on Fire" bias in 2021

As we noted in our post about the "Pants on Fire" research for 2020, we have changed the way we do the research.

PolitiFact revamped its website in 2020, and the update made it next to impossible to reliably identify which of PolitiFact's various franchises were responsible for a fact check. Instead of focusing on PolitiFact National, it makes more sense to lump all of PolitiFact together. But the new approach has a drawback. The new evaluations represent an apples-to-oranges comparison to the old evaluations.

To deal with that problem, we went back and did PolitiFact's entire history since 2007 using the new method.

With the research updated using the new method, we are now able to compare the new research with the old method.

Spoiler: Using the new method, PolitiFact was 2.66 times more likely to rate a claim it viewed as false as "Pants on Fire" when the claim came from a Republican than when it came from a Democrat. That's PolitiFact's third-highest bias figure of all time, though PolitiFact National, considered separately, has exceeded that figure at least three times.

 

Method Comparison: New vs. Old 

Our new graph shows the old method, running from 2007 through 2019, along with the new method graphed from 2007 through 2021.


The black line represents the old method. The red line represents the new.

The numbers represent what we term the "PoF bias number," an expression of how much more likely it is that PolitiFact will give a claim it regards as false a "Pants on Fire" rating for a Republican than for a Democrat. So, for 2009 under the old method (black line), the GOP was 3.14 times more likely to have one of its supposedly false statements rated "Pants on Fire."
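The PoF bias number is simple arithmetic: each party's share of false-rated claims that drew "Pants on Fire," expressed as a ratio. A minimal sketch in Python (the counts below are hypothetical, chosen only to illustrate the calculation; they are not PolitiFact's actual tallies):

```python
def pof_bias(gop_false, gop_pof, dem_false, dem_pof):
    """Ratio of the Republican "Pants on Fire" share to the Democratic share.

    Each *_false argument counts claims rated "False" or "Pants on Fire";
    each *_pof argument counts the subset rated "Pants on Fire."
    Values above 1.0 mean a claim deemed false was more likely to draw
    the harshest rating when it came from a Republican.
    """
    return (gop_pof / gop_false) / (dem_pof / dem_false)

# Hypothetical counts: 33 of 100 GOP false claims rated "Pants on Fire"
# versus 21 of 200 Democratic false claims.
print(round(pof_bias(100, 33, 200, 21), 2))  # → 3.14
```

A value of 1.00 would mean both parties' false-rated claims drew "Pants on Fire" at the same rate, which is why a line hovering near 1.00 reads as little measured bias.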

As our research has documented, PolitiFact has never offered an objective means of determining the ridiculousness of a claim viewed as false. The "Pants on Fire" rating, to all appearance, has to qualify as a subjective judgment. In other words, the rating represents PolitiFact's opinion.

In 2017, under the old method, the bias number dropped to 0.89, showing a bias against Democrats for that year at PolitiFact National. On average over time, of course, Republicans were significantly more likely to have their false claims regarded as "ridiculous" by PolitiFact.

Notably, the new method (red line) shows a moderating effect on PolitiFact's "Pants on Fire" bias from 2008 through 2014. The red line hovers near 1.00 for much of that stretch. After 2015 the red line tends to run higher than the black line, with the notable exception of 2019.

Explaining the Numbers?

We found two correlations that might help explain the patterns we see in the graphs.

PolitiFact changes over time. From 2007 through 2009, PolitiFact National did nearly every rating. Accordingly, the red and black lines track very closely for those years. But in 2010 PolitiFact added several franchises in addition to PolitiFact Florida. Those franchises served to moderate the PoF bias number until 2015, when we measured hardly any bias at all in the application of PolitiFact's harshest rating.

After 2015, a number of franchises cut way back on their contributions to the PolitiFact "database" and a number ceased operations altogether, such as PolitiFact New Jersey and PolitiFact Tennessee. And in 2016 PolitiFact added eight new state franchises (in alphabetical order): Arizona, Colorado, Illinois, Nevada, New York, North Carolina, Vermont and West Virginia.

The Franchise Shift

We made graphs to help illustrate the franchise shift. PolitiFact has had over 20 franchises over its history, so we'll divide the graph into two time segments to aid the visualization.

First, the franchises from 2010 through 2015 (click for larger view):

We see Florida, Texas, Rhode Island and Wisconsin established as consistent contributors. Tennessee lasts one year. Ohio drops after four years. Oregon drops after five and New Jersey after three.

Next, the franchises from 2016 through 2022 (click for larger view):


We omitted minor contributions from PolitiFact Georgia in 2016 (12 fact checks) and 2017 (2). The orange bar near the top of 2016 is six states combined (hard to make out in the columns after 2016).

Note that the contributions are skinny, except for the one from Wisconsin. But even Wisconsin cut its output compared to the previous graph. We have a correlation suggesting that the participation of different state franchises impacted the bias measure.

But there's another correlation.

Republicans Lie More! Democrats Lie Less!

Liberals like to explain PolitiFact ratings that look bad for Republicans by saying that Republicans lie more. Seriously, they do that. But we found that spikes--especially recent ones--in the "Pants on Fire" bias measure were influenced by PolitiFact's spiking reluctance to give Democrats a "Pants on Fire" rating.

That correlation popped out when we created a graph showing the percentage of false statements given the "Pants on Fire" rating by party. The graph for Republicans stays pretty steady between 20 and 30 percent. The graph for Democrats fluctuates wildly, and the recent spikes in the bias measure correlate with very low percentages of "Pants on Fire" ratings for Democrats.


As is always the case, our findings support the hypothesis that PolitiFact applies its "Pants on Fire" rating subjectively, with Republicans receiving the bulk of the unfair harm. And in this case Republicans receive the bulk of the unfair harm through PolitiFact's avoidance of rating Democrat claims "Pants on Fire."

Do Democrats lie less? We don't really know. We suspect not, given the number of Democrat whoppers PolitiFact allows to escape its notice (such as this recent gem--transcript). We think PolitiFact's bias explains the numbers better than the idea Democrats lie less.



Notes on the PolitiFact franchise numbers: As we noted from the outset, PolitiFact's revamped website made it all but impossible to identify which franchise was responsible for which fact check. So how did we get our numbers?

We mostly ignored tags such as "Texas" or "Wisconsin" and looked for the names of staffers connected to the partnered newsroom. This was a fallible method because the new-look website departs from PolitiFact's old practice of listing any staffers who helped write or research an article. The new site only lists the first one mentioned from the old lists. And it has long been the case that staffers from PolitiFact National would publish fact checks under franchise banners. So our franchise fact check numbers are best taken as estimates.

Saturday, March 12, 2022

Yes, Virginia, state franchise "star chambers" are still a thing

As I noted over at Zebra Fact Check, PolitiFact is saying the people who decide a "Truth-O-Meter" rating have years of PolitiFact experience.

It doesn't appear true. In the past, PolitiFact admitted that state franchises were expected to supply their own board of editors to determine ratings, with PolitiFact supplying additional editors as needed.

It seems that's still the case. But where are the years of experience supposed to come from?



Thursday, March 3, 2022

A handful of baloney from PolitiFact

"At PolitiFact, we wrote "Principles of the Truth-O-Meter" to help guide our work. Words matter was the first principle."

--Neil Brown, Poynter Institute President 



"PolitiFact, thy name is Hypocrisy."

--PolitiFact Bias, longtime PolitiFact critics


What is a "handful"?

What is a "handful"? We could go to a dictionary for a definition. Or we could go to a higher source, such as the fact checkers at PolitiFact.

PolitiFact does the Youngkin handful

"Vaxxed and Relaxed" (@PorterPints) on March 1, 2022 highlighted a PolitiFact fact check of a "handful" claim made by Gov. Glenn Youngkin (R-Va.). Youngkin said Virginia is one of "a handful" of states that taxes veterans' retirement benefits.

In the text of the fact check, PolitiFact informs us that 15 out of 50 states is certainly more than a handful:

Virginia is one of three states that fully tax military pensions. Twelve more states tax the pensions at reduced rates, which is what Youngkin wants to do in Virginia.

All told, 15 states tax military pensions. That’s a minority, but certainly more than the "handful" Youngkin describes.

We rate Youngkin's claim Half True.

So, thanks to PolitiFact we know that the upper boundary for a "handful" is 14 or less, or perhaps 28% or less of a total if we use percentages.

PolitiFact does the Summers handful

Not long after "Vaxxed and Relaxed" tweeted about the Youngkin "handful," we found another PolitiFact fact check of a "handful" claim, with this claim coming from Democrat Paul Summers.

In this fact check, PolitiFact taught us that 34 out of 66, or perhaps a 51 percent majority, clearly falls below the upper boundary for a "handful" (bold emphasis added):

Early in that year, two of the five incumbent Supreme Court justices stepped aside, reportedly after failing to gather enough political support among party activists on the Democratic Executive Committee. The Democratic nominees wound up being the only candidates on the ballot and were elected to full eight-year terms.

That was clearly a case where, as Summers states, a majority of the committee – 34 of the 66 members, or a "handful of party officials" if you will – was able to choose Supreme Court justices.

PolitiFact, then, has determined that a "handful" has an upper boundary of 14 or less and also an upper boundary no less than 34. Or, by percentage, an upper boundary of 28% and an upper boundary of no less than 51%.
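To make the contrast concrete, here is the arithmetic in a few lines of Python (our own sketch; the counts come from the two fact checks quoted above):

```python
# Youngkin fact check: 15 of 50 states tax military pensions,
# and PolitiFact rules that 15 is "certainly more than" a handful.
youngkin_pct = int(15 / 50 * 100)
print(youngkin_pct)  # 30

# Summers fact check: 34 of 66 committee members still count as a
# "handful of party officials" on PolitiFact's reading.
summers_pct = int(34 / 66 * 100)
print(summers_pct)  # 51
```

By PolitiFact's own rulings, 30 percent of a total is too big to be a "handful" while 51 percent is not.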

In short, PolitiFact hilariously contradicted itself regarding the matter of the word "handful."

But just out of idle curiosity, what does the dictionary say?

Huh.

Moral of the story

It's folly for a fact checker to try to place definite numerical boundaries around indefinite terms. Claims that include such terms serve as poor fact check fodder.

Pre-publication update:

We note that Matt Palumbo somewhat pre-empted us on this story with a March 2, 2022 item. We will publish our version anyway, as the research locating the Summers "handful" fact check was original with us. We're entitled to publish on the website the same comparison we made on Twitter on March 1, 2022.

Monday, January 31, 2022

PolitiFact doesn't know the meaning of 'and'?

Well, well, well.

I've had to focus on things other than PolitiFact Bias posts lately, but PolitiFact and its owner, the Poynter Institute, have pulled me out of semi-retirement with an extraordinary clunker of a fact check.

Behold:


PolitiFact's fact check defies logic and establishes itself as an early leader in the "Worst Fact Check of the Year" contest.

The "False" conclusion fails because it rests on a failure to understand the simple logic of "and." The conclusion would work for the logic of "or." But Hannity said "and" not "or."

If Bill says the square is green or red, then the square confirms Bill's statement if it is red. Likewise, the square confirms Bill's statement if it is green. Bill would be right either way.

But if Bill says the square is green and red, then it's a different ballgame.

In the second case, the square confirms Bill's statement only if it is both green and red. So a square that is half green and half red could confirm Bill's statement. A square that is simply green would contradict Bill's statement. The same goes for a square that is red and not green.
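The same point in a few lines of Python (our illustration, not PolitiFact's; the `is_green` and `is_red` flags are hypothetical stand-ins for the square's coloring):

```python
def claim_or(is_green: bool, is_red: bool) -> bool:
    """'The square is green or red' -- true if either color is present."""
    return is_green or is_red

def claim_and(is_green: bool, is_red: bool) -> bool:
    """'The square is green and red' -- true only if both colors are present."""
    return is_green and is_red

# A purely green square satisfies the "or" claim but not the "and" claim.
print(claim_or(True, False))   # True
print(claim_and(True, False))  # False

# Only a square showing both colors satisfies the "and" claim.
print(claim_and(True, True))   # True
```

Swapping "and" for "or" changes which evidence counts as confirmation, which is exactly the swap the fact check makes.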

This is extraordinarily basic logic and PolitiFact doesn't get it. 

Observe how PolitiFact looks to prove Hannity false:

The claim ignored that both Trump and Reagan made similar vows to nominate women to the Supreme Court, then followed through on those promises. Other presidents in history have also considered race and religion as they have made their picks.

We rate Hannity’s claim False.

So, by analogy, PolitiFact says that if Trump and Reagan both nominated green squares, then Trump and Reagan each nominated squares that were both green and red.

That's 2+2=5 territory.

Making this even more hilarious, PolitiFact's parent company, the esteemed Poynter Institute, chose to highlight this fact check at the main site. In the title, Poynter's headline writer substituted a comma for the "and," masking the error of logic for those who do not read the "fact check."