Showing posts with label Louis Jacobson. Show all posts

Sunday, June 23, 2024

Snopes (indirectly) calls out PolitiLie

On twiX (X, formerly known as Twitter) I learned today that Snopes now calls it false that Donald Trump called neo-Nazis and white supremacists "very fine people."

Better late than never! And extra sweet in that we have yet another opportunity to pit fact checkers against each other. That's because PolitiFact, despite initially dodging the issue with an "In Context" feature, later dropped the context and helped spread the "very fine people" lie:

As president in 2017, Trump said there were "very fine people, on both sides," in reference to neo-Nazis and counterprotesters in Charlottesville, Va.

It's notable that the hotlink in the quotation goes right to PolitiFact's "In Context" article, suggesting the neutral-ish article was a dog whistle telling readers, yes, Trump called neo-Nazis and white supremacists "very fine people." 

The more recent article, which superficially gave President Biden a "False" rating for saying no U.S. president before Trump was racist, was actually giving Biden a "True" rating for saying Trump is a racist. PolitiFact used the remarks about Charlottesville in making its case that Trump is a racist.

Also of note, my Zebra Fact Check project reported this case as an error to PolitiFact back in July 2020. When PolitiFact made no correction I eventually reported the case to the International Fact-Checking Network as a failure to uphold its standards. Again, no change resulted in PolitiFact's reporting.

The Poynter Institute owns both the International Fact-Checking Network and PolitiFact, of course.

Thursday, February 22, 2024

PolitiFact's how-to primer on improperly fact-checking an analogy

There's so much wrong with this Feb. 22, 2024 PolitiFact fact check that I'm bound to go way beyond the scope of the title.


How To Wrongly Fact Check an Analogy

PolitiFact's summary/quotation of Trump's statement counts as reasonably accurate. He drew an analogy between the fine imposed in the New York fraud case and the political persecution of Alexei A. Navalny, who notably opposed Vladimir Putin in Russia.

At its most basic level, the analogy says Navalny and Trump were treated unfairly in court over politics. But PolitiFact affords nearly zero attention to the basic comparison. Instead, PolitiFact focuses on differences as though differences can erase similarities.

Karl Malden's nose remains Karl Malden's nose even if you put it on Emma Stone. And Emma Stone with Karl Malden's nose is Emma Stone having a point of similarity with Karl Malden.

PolitiFact classed Trump's statement (wrongly, we think) as hyperbole, but then justified revoking Trump's license for hyperbole because "we determined there were enough factual elements at play to rate his statement on the Truth-O-Meter."

We're not sure how that's supposed to work. As we noted on X, PolitiFact could use a similar approach to find a vegetarian "Pants on Fire" for comparing vegetarian bacon to regular bacon. The bacon example came straight from a dictionary definition of "analog."

Here We Go Again: "Experts"

Part of PolitiFact's schtick comes from its interviews of experts. Typically the pool of experts leans left, and often has a record of giving politically to Democrats. For some reason PolitiFact doesn't see that as a mark against its own credibility.

Let's take a look, shall we?

Harley Balzer
Highly partisan political giving. One of the most partisan records we've encountered, and that's really saying something.

Erik Herron
We found no political giving under Herron's name. But we did find an X post by Herron that appears to acknowledge the legitimacy of analogies where the comparison may seem strained.


Ric Simmons 
Simmons (employed at The Ohio State University) has two political donations listed. One was to Democrat Joe Biden and the other to the anti-Trump group "The Lincoln Project."

Scott Gehlbach
Gehlbach's partisan political giving fails to challenge that of Balzer, but it's solidly behind Democrats with the exception of one nonpartisan figure, now a (liberal) judge in the Wisconsin court system.

Stephen Sestanovich
Sestanovich has six donations, minimum $250, all going to Democrats.

Kathryn Hendley
Hendley has only one political donation listed, with a Democrat listed as the recipient of the $200 gift.

Mark Osler
Osler has given exclusively to Democrats, with six donations in the range of $50-$250.

What are the chances a fact checker can find seven expert sources and six out of seven have given exclusively to Democrats? It's as though PolitiFact intentionally seeks out Democrats to serve as its experts.

Of course, the mere fact that the experts give to Democrats should not discredit their expertise. But PolitiFact simply uses the experts to underscore that the Navalny case is different than the Trump case. We don't need experts to prove that, and as we pointed out above, differences are irrelevant to the similarities. The former cannot erase the latter.

PolitiFingers on the Scale

As if distracting from the point of Trump's argument and using partisan experts weren't enough, we have PolitiFingers on the scale of this fact check.

PolitiFact omits all mention of two significant aspects of the fraud case against Trump. Both aspects tend to support the Navalny analogy.

First, the trial judge found that Trump's fraud did not damage anyone financially. That makes the prosecution and the judgment unusual. The fine represents higher conjectured interest charges from lower valuations of Trump properties. We doubt such a basis has ever before been used in the United States to support a fraud penalty.

USAToday:

(Gregory) Germain, the Syracuse professor, said the government did a good job of showing Trump inflated the value of his properties, but noted that sophisticated financial institutions didn't require a third-party appraisal like they do for a typical mortgage on a home.

"There are no cases like it," Germain said.


An Associated Press story makes a related point:

And though the bank offered Trump lower interest rates because he had agreed to personally guarantee the loans with his own money, it’s not clear how much better the rates were because of the inflated figures. The bank never complained, and it’s unclear how much it lost, if anything. Bank officials called to testify couldn’t say for sure if Trump’s personal statement of worth had any impact on the rates.

“This sets a horrible precedent,” said Adam Leitman Bailey, a New York real estate lawyer who once successfully sued a Trump condo building for misrepresenting sales to lure buyers.

Second, PolitiFact's fact check misrepresents the ease of appealing the ruling. 

CNBC:

Former President Donald Trump is gearing up to fight a massive fine in the New York business fraud case that threatens to erase most of the cash he says he has on hand.

But first, he has to secure a bond — and that might not be so easy.

Why doesn't PolitiFact tell you any of that?

Because they're biased.

They make sure there are no observations from a conservative such as Andrew C. McCarthy.

Afters:

PolitiFact is on a real tear against Trump early in 2024. It's almost like they're trying to retroactively make true their false claims about Trump's "Truth-O-Meter" record.

In fact it was Louis Jacobson, listed first on the byline of PolitiFact's fact check, who recently endured two corrections from Slate after it published an interview with him. Jacobson made two flatly false claims about Trump's record on the "Truth-O-Meter."


One wonders whether publicly making false claims about Trump should disqualify Jacobson from working on fact checks involving Trump.

Note: Huh--Looks like Slate botched its editor's note: "It has also been updated to clarify that among major politicians frequently fact-checked by PolitiFact, Trump has the highest percentage of Pants on Fire ratings." I gave them the example of Michele Bachmann, who has had 72 "Truth-O-Meter" ratings.

Hmm. Looks like it's time for another correction request, if there's no clear justification for that claim.

Sunday, December 10, 2023

Example Umpteen Showing How PolitiFact Goes Easier on Democrats

We only wish we had the time and money needed to document as much as 10 percent of PolitiFact's flawed and biased work.

We've documented a number of times PolitiFact's penchant for ignoring its central principle for grading numbers claims. PolitiFact's founding editor Bill Adair declared that the most important part of a numbers claim is its underlying point. But PolitiFact will ignore the underlying point at the drop of a hat if it will benefit a Democrat.

Newsom vs Haley

Newsom and "per capita" interstate migration

Democratic governor Gavin Newsom, defending himself from the charge that California is losing population while Florida gains population, said, "Per capita, more Floridians move to California than Californian's moving to Florida." PolitiFact rated the claim "Mostly True."

What's the underlying point of Newsom's claim? Does it address California's population loss compared to Florida's population gain?

No. Newsom's claim instead distracts from the issue with a pretty much meaningless statistic. Experts PolitiFact cited in the fact check underscored that fact. Note this line from PolitiFact's summary:

Experts gave varying answers about whether the margin was statistically significant, but they agreed that the slim differences make this argument technical, and not necessarily meaningful.

So, PolitiFact effectively ignored Newsom's underlying point (distracting from Sean Hannity's question) and gave him nearly full credit for telling the truth about a meaningless statistic.

Haley and ship counts as a measure of military strength

Contrast PolitiFact's treatment of Newsom to its treatment of Republican presidential candidate Nikki Haley. Haley said China is building up its military, and illustrated her claim by noting China has the largest naval fleet in the world. PolitiFact said she was right with her numbers, but faulted her for her underlying point. "Half True!"


PolitiFact's summary recounts the objections of the experts it interviewed:

Numerically, she’s on target with both countries’ ship counts. But experts say that simply counting ships omits context about a country’s true military capabilities. 

Ship counts ignore overall ship size, specific warfighting capabilities, and overall geographic reach, all of which are metrics where the United States maintains an edge over China.

It's worth noting that Haley made no claim about China's navy possessing more power than the U.S. navy. So why are tonnage and military capability relevant in rating the claim she made?

They're not. But PolitiFact has its excuse for giving Haley a lowball rating compared to the favor they did Newsom. PolitiFact focuses on Haley's underlying point and gives a poor rating for a true claim. PolitiFact ignores Newsom's underlying point and gives him a favorable rating for a claim that might not even be true (check the fine print).

It's part of the baseless narrative PolitiFact weaves: Republicans lie more.

The truth? PolitiFact is biased, and proves it repeatedly with examples like these.

Friday, June 9, 2023

PolitiFact vs the strawman version of Nikki Haley's claim about teen girls and suicide

 Strawmen are always in season at PolitiFact, particularly when a Republican speaks.

Broken quotations always count as a red flag worth investigating. PolitiFact's blurb is especially worth investigating because the key part of it doesn't come from Republican presidential candidate Nikki Haley: "is a reason why."

So, did Haley say it was a reason?

Here's the relevant part of CNN's transcript of its Haley town hall (bold highlights added):

(APPLAUSE) TAPPER: So, woke, the word woke used to be used by progressives to talk about an awareness of inequities and historical inequities, but obviously it means something else to conservatives criticizing it. What does it mean to you? How do you define woke?

HALEY: There's a lot of things. I mean, you want to start with biological boys playing in girl sports. That's one thing. The fact that we have gender pronoun classes in the military now, I mean, all of these things that are pushing what a small minority want on the majority of Americans, it's too much. It's too much. I mean, the idea that we have biological boys playing in girls' sports, it is the women's issue of our time. My daughter ran track in high school. I don't even know how I would have that conversation with her. How are we supposed to get our girls used to the fact that biological boys are in their locker rooms? And then we wonder why a third of our teenage girls seriously contemplated suicide last year. We should be growing strong girls, confident girls.

Then you go and you talk about building a strong military. How are you going to build the morale in a strong military when you're doing gender pronoun classes? Why is it that --

(APPLAUSE)

HALEY: Why is it that you have, you know, kids undergoing critical race theory where if a little girl's in kindergarten if she's -- goes into kindergarten if she's white, you're telling her she's bad. If she's brown or black, you're telling her she's never going to be good enough and she's always going to be a victim. All of these things have gone to where they are pushing, you know, and transgender, the whole issue of the transgender, it's not that people don't think in America you should live the way you want to live. I want everybody to live the way they want to live, but stop pushing your views on everybody else. That's the problem, is there starting to push everything on the rest of us. 

Considering the context, it makes sense to conclude Haley says pushing woke ideology on kids has contributed to the higher suicide rate and that she used biological boys in girls' locker rooms as an example of forcing woke ideology in school.

PolitiFact, however, focuses on its distortion of Haley's argument. Even though Haley did not say that having biological boys with the girls in the girls' locker room caused a higher suicide rate, PolitiFact insists that is what Haley claimed.

PolitiFact asked Haley's campaign to comment, apparently sending a "Have you stopped beating your wife" inquiry to the campaign:

When we asked Haley’s campaign spokesperson to cite research that supported her claim, he sent a statement by Haley that did not answer the question: "We have to grow strong girls, and that is being threatened right now. Whether it’s biological boys going into girls’ locker rooms or playing in girls’ sports, women are being told their voices don’t matter. If you think this kind of aggressive bullying isn’t part of the problem, you're not paying attention."

Note that the campaign's response accords well with our interpretation of Haley's remarks.

But PolitiFact sets its trap, asking for evidence specific to boys in the girls' bathroom, with the plan in mind to invoke its fallacious "burden of proof" criterion and find Haley's supposed claim "False."

Teen girls today are experiencing rising rates of suicidal ideation. However, there is no research that suggests this is being caused by the presence of trans athletes in locker rooms.

Research points to other causes, including feelings of isolation or loneliness, feeling like a burden on others, difficulty navigating parental and family relationships and pressures from constant exposure to social media.

Pushing woke ideology on teen girls could not possibly contribute to feelings of isolation or loneliness, feeling like a burden on others, difficulty navigating parental and family relationships and pressures from constant exposure to social media. Right?

Fact checkers have no business putting their own spin on the words of others. Or that would be the case if the modern fact checker weren't in the business of crafting narratives instead of telling the truth.

PolitiFact invented a claim for Haley, committing a straw man fallacy, then smacked down its straw man based on the fallacy of appeal to silence. Those two fallacies in PolitiFact's hands add up to a "False" rating for Haley.

 

Thursday, December 1, 2022

More PedantiFact: PolitiFact vs. Kevin McCarthy

 Fact checkers supposedly don't fact check opinions.

PolitiFact fact checks opinions. Real Clear Politics has kept a running study of how often a set of top fact checkers rates opinions or predictions (among other things). PolitiFact has paced the group.

We expect Real Clear Politics will get around to adding this Nov. 30, 2022 PolitiFact fact check to the list:

 

Why do we think McCarthy was expressing an opinion?

In other words, why do we have the opinion that McCarthy was expressing an opinion?

We're intentionally giving away the answer, of course. "I think" counts as one of the classic ways of marking one's statement as an opinion.

Why does PolitiFact ignore such an obvious clue?

We think it's likely PolitiFact was looking to build a narrative. By overlooking that McCarthy was expressing opinion and focusing on one part of his statement to the exclusion of another, PolitiFact was able to support that narrative under the guise of fact-checking.

PolitiFact supports the narrative that Donald Trump counts as a racist. Facts don't matter in pursuit of that narrative.

PolitiFact quotes McCarthy correctly, and we'll highlight the part that PolitiFact decided to omit from its fact-checking focus even though it's the only part that McCarthy stated as fact:

"I think President Trump came out four times and condemned him and didn't know who he was," McCarthy said.

That drew real-time pushback from a reporter, who said, "He didn't condemn him or his ideology." McCarthy responded, "The president didn't know who he was."

For PolitiFact, it isn't important whether Trump knew who Nick Fuentes was. It's important that Fuentes is a white nationalist, and important to link Fuentes to Trump in a way that reinforces the narrative that Trump is a racist. Toward that end, PolitiFact ignores the claim Trump did not know who Fuentes was and focuses on the supposed lack of condemnation.

We would argue that Trump saying he did not know Fuentes counts as a condemnation, when we consider the context.

PolitiFact argues the opposite, albeit without any real argument in support:

A look at Trump’s statements during the week between the Nov. 22 dinner and McCarthy’s press availability Nov. 29 show that McCarthy was wrong. Specifically, Trump did not condemn Fuentes on four occasions; instead, Trump said in four statements that he did not know who Fuentes was.

PolitiFact implicitly says that it does not count as a condemnation to profess ignorance of Fuentes' identity.

Here's why that's wrong.

Trump was implying that if he had known who Fuentes was, Fuentes would not have been welcome at dinner. Hardly anything could be more obvious, particularly given the context that Trump went on record condemning neo-Nazis and white nationalism.

We can even source Trump's quotation through PolitiFact, albeit the fact checkers do an excellent job of not drawing attention to it:

"And you had people -- and I’m not talking about the neo-Nazis and the white nationalists -- because they should be condemned totally. But you had many people in that group other than neo-Nazis and white nationalists. Okay?"

So the fact checkers, though they have reason to know Trump condemned white nationalism, leave that out of a fact check focusing on whether Trump condemned white nationalism. That's context fit for suppression.

The facts don't matter when liberal bloggers posing as unbiased fact checkers want to promote a narrative.

Thursday, September 22, 2022

PolitiSpin: Biden says he cut the debt by $1.5 trillion? Half True!

We kiddeth not when we call PolitiFact a collection of liberal bloggers posing as non-partisan fact checkers.

U.S. debt hasn't gone down at all under Biden, as PolitiFact admits. PolitiFact cited a September 2022 estimate saying the deficit, not the debt, would decrease by about $1.7 trillion compared to FY2021, but that leaves a deficit of almost $1 trillion that will increase the debt by that same amount.

So, how does a left-leaning fact checker go about making a false statement seem like a partially true statement that leaves out important details or takes things out of context?

Watch and learn, wannabe liberal bloggers who covet the "fact checker" label:

"We’ve also reduced the debt and reduced the debt by $350 billion my first year," Biden said. "This year, it's going to be over $1.5 trillion (that we’ve) reduced the debt."

Biden has a point that his administration has presided over smaller deficits than were seen under the Trump administration, based on Congressional Budget Office estimates. But Biden’s remark leaves out important context. The debt had risen because of a temporary phase of unusual federal spending.

No Reduction of U.S. Debt

It's simple. Declare that when Biden says he reduced the debt by $1.5 trillion he's actually making a valid point about reducing the deficit and therefore reducing the growth of the debt. Then imply that the problem with Biden's claim isn't using "debt" instead of "deficit" but that he has left out the fact that most of the deficit reduction happened as old COVID programs stopped shelling out so much federal money.

We're probably not supposed to point out that PolitiFact omits all mention of Mr. Biden's student loan forgiveness program. The CBO said, on the page PolitiFact cited for its deficit figure, loan forgiveness actions in September 2022 could substantially affect deficit figures for FY2022.

That's what a liberal blogger will leave out that a nonpartisan fact checker will mention.

What About Biden's Underlying Point?

PolitiFact has reliably (?) informed us that the most important aspect of a numbers claim comes from the speaker's underlying point. If the numbers are off but the main point stands, a favorable "Truth-O-Meter" rating may result.

 Adair:

(W)e realized we were ducking the underlying point of blame or credit, which was the crucial message. So we began rating those types of claims as compound statements. We not only checked whether the numbers were accurate, we checked whether economists believed an office holder's policies were much of a factor in the increase or decrease.

It turns out in the Biden fact check PolitiFact found Mr. Biden was taking credit for the non-existent debt reduction:

During a Sept. 18 interview with CBS’ "60 Minutes," President Joe Biden touted his administration’s efforts to rein in federal debt.

We judge that if PolitiFact believed Biden was touting "his administration's efforts to rein in federal debt" then it regards his debt-reduction claim as an effort to take credit for that supposed reduction.

So, was it a Biden administration effort that reduced the deficit (not the debt) by $1.5 trillion compared to FY2021?

PolitiFact (bold emphasis added):

Spending programs passed earlier in the pandemic began expiring this year, meaning federal outlays have declined. The Committee for a Responsible Federal Budget, a nonprofit public policy group, has estimated that more than 80% of the $1.7 trillion reduction in the deficit can be explained by expiring or shrinking COVID-19 relief.

We calculate that as $1.35 trillion out of the $1.7 trillion, leaving Biden with the potential to claim credit for as much as $350 billion of the deficit reduction. Giving the president credit for the entire amount results in an estimated exaggeration (minimum) of 329 percent (($1.5 trillion − $0.35 trillion)/$0.35 trillion).

So Biden claimed debt reduction that was not debt reduction and exaggerated his administration's share of the deficit reduction by over three times its actual amount. Therefore, according to PolitiFact, what he said was half true.
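The exaggeration figure is easy to verify. Here is a minimal sketch of the arithmetic, using the rounded dollar amounts quoted above and treating the CRFB's "more than 80%" as a flat 80 percent floor:

```python
# Deficit-reduction figures, in trillions of dollars, from the sources cited above.
total_reduction = 1.7             # estimated FY2022 deficit reduction vs. FY2021
covid_share = 0.80                # CRFB: "more than 80%" from expiring COVID-19 relief
covid_portion = total_reduction * covid_share        # ~1.36 (the post rounds to $1.35T)
potential_credit = total_reduction - covid_portion   # ~0.34 (the post rounds to $0.35T)

claimed = 1.5                     # Biden's claimed "$1.5 trillion" of debt reduction
# Exaggeration relative to the post's rounded $0.35T potential-credit figure.
exaggeration = (claimed - 0.35) / 0.35
print(f"{exaggeration:.0%}")      # → 329%
```

The calculation confirms the ratio reported in the post: crediting Biden with the full $1.5 trillion overstates the administration's potential share by roughly 329 percent.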

The 'Slowing the Rate of Growth' Excuse

PolitiFact cleverly, or perhaps stupidly, excuses Biden's use of "debt" instead of "deficit" by interpreting the claim to mean slowing the growth of the debt. PolitiFact could argue precedent for that approach, for claims about "cutting Medicare" or "cutting Medicaid" tend to receive "Half True" ratings or worse (worse tends to happen if the claimant is a Republican).

The problem? Biden got the "Half True" while exaggerating the numbers in his favor for purposes of claiming credit. And that's with PolitiFact helping out by not mentioning the potential cost of his student loan bailout proposal. The Penn Wharton budget model (University of Pennsylvania) estimated costs of over $500 billion for 2022.

That figure would wipe out the administration's potential share of $350 billion of deficit reduction.

The "slowing the rate of growth" excuse doesn't come close to justifying a "Half True" rating.

We have here another strong entry from PolitiFact for the Worst Fact Check of 2022.

Wednesday, June 1, 2022

Literally false and the underlying point is false, therefore "Mostly True"

 Have we mentioned PolitiFact is biased?

Check out this epic fail from the liberal bloggers at PolitiFact (red x added):


PolitiFact found it "Mostly True" that most of the "killers" to which Sen. Chris Murphy (D-Conn.) referred tend to be 18, 19 years old.

What's wrong with that?

Let us count the ways.

In reviewing the context, Sen. Murphy was arguing that raising the age at which a person may buy a gun would reduce school shootings. Right now that threshold stands at 18 in most states and for most legal guns, with certain exceptions.

If, as Murphy says, most school shootings come from 18 and 19-year-olds then a law moving the purchase age to 21 could potentially have quite an effect.

"Tend To Be"="Tend To Be Under"?

But PolitiFact took a curious approach to Murphy's claim. The fact checkers treated the claim as though Murphy was saying the "killers" (shooters) were 20 years old or below.

That's not what Murphy said, but giving his claim that interpretation counts as one way liberal bloggers posing as objective journalists could do Murphy a favor.

When PolitiFact checked Murphy's stats, it found half of the shooters were 16 or under:

When the Post analyzed these shootings, it found that more than two-thirds were committed by shooters under the age of 18. The analysis found that the median age for school shooters was 16.

So, using this criteria [sic], Murphy is correct, even slightly understating the case.

See what PolitiFact did, there?

Persons 16 and under are not 18, 19 years old. Not the way Murphy needs them to be 18, 19 years old.

If Murphy can change a law that makes it illegal for most shooters ("18, 19 years old") to buy a gun, that sounds like an effective measure. But persons 17 and under typically can't buy guns as things stand. So, for the true majority of shooters Murphy's law (pun intended?) wouldn't change their ability to buy guns. Rather it would simply remain illegal as it is now.

To emphasize, when PolitiFact found "the median age for school shooters was 16" that effectively means that most school shooters are 17 or below. That actually contradicts Murphy's claim that most are aged 18 or 19. We should expect that most are below the age of 17, in fact.

If Murphy argues for raising the age for buying a gun to 21 based on most shootings coming from persons aged 16 or younger, that doesn't make any sense. It doesn't make sense because it would not change anything for the majority of shooters. They can't buy guns now or under Murphy's proposed law.
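The median-age reasoning can be illustrated with a toy example. The ages below are invented for illustration only (they are not the Washington Post's data), chosen simply so the median comes out to 16, matching the figure PolitiFact cited:

```python
import statistics

# Hypothetical shooter ages, invented for illustration; the median is 16.
ages = [13, 14, 15, 16, 16, 17, 18, 19, 21]
assert statistics.median(ages) == 16

# A median of 16 means at least half the shooters are 16 or younger,
# so shooters aged 18-20 cannot form a majority.
aged_18_to_20 = [a for a in ages if 18 <= a <= 20]
assert len(aged_18_to_20) <= len(ages) / 2

# Raising the purchase age from 18 to 21 changes legality only for ages 18-20;
# shooters under 18 already cannot legally buy a gun.
print(f"{len(aged_18_to_20)} of {len(ages)} shooters affected")  # → 2 of 9 shooters affected
```

Whatever ages one plugs in, a median of 16 guarantees the 18-to-20 group is a minority, which is the point the post makes against Murphy's framing.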

Calculated Nonsense?

By spouting mealy-mouthed nonsense, Murphy succeeded in laying out a narrative that gun control advocates would like to back. Murphy makes it seem that raising the gun-buying age to 21 might keep most school shooters from buying their guns.

As noted above, the facts don't back that claim. It's nonsense. But if a Democratic senator can get trusted media sources to back that nonsense, well then it becomes a compelling Media Narrative!

Strict Literal Interpretation

Under strict literal interpretation, Murphy's claim must count as false. If most school shooters are 16 years old or younger, then the existence of just one 17-year-old shooter makes his claim false. Half plus one makes a majority every time.

Murphy's claim was false under strict literal (hyperliteral) interpretation.

Normal Interpretation

Normal interpretation is literal interpretation, but taking things like "raining cats and dogs" the way people (literally) understand them normally. We've reviewed how normal interpretation should work in this case. To support a legitimate argument for a higher gun-buying age, Murphy needs to do so by identifying a population that the legislation would reasonably affect. The ages Murphy named (18, 19) meet that criterion. And, because Murphy used some language indicative of estimation ("tend to be") we can even reasonably count 20 years of age in Murphy's set.

Expanding his set down to 17 doesn't make sense because changing the gun purchase age from 18 to 21 has no effect on a 17-year-old's ability to purchase a gun at 17.

But combining the shootings from 18, 19 and 20 year-olds cannot make up "most" of the school shootings if the median age for the shooters is 16 and at least one shooter was either 17 or over 20.

Murphy's claim was false given normal (literal) interpretation.

Biased Interpretation

PolitiFact used biased interpretation. The fact checkers implicitly said Murphy meant most of the shootings came from people under the age of 18 or 19, even though that makes nonsense of Murphy's argument.

PolitiFact's biased interpretation enhanced a misleading media narrative attractive to liberals.

Coincidence?

Nah. PolitiFact is biased to the left. So we see them do this kind of thing over and over again.

So it's not surprising when PolitiFact rates a literally false statement from a Democrat as "Mostly True."


Correction June 1, 2022: Fixed a typo (we're=we've)

Saturday, March 12, 2022

Yes, Virginia, state franchise "star chambers" are still a thing

As I noted over at Zebra Fact Check, PolitiFact is saying the people who decide a "Truth-O-Meter" rating have years of PolitiFact experience.

It doesn't appear true. In the past, PolitiFact admitted that state franchises were expected to supply their own board of editors to determine ratings, with PolitiFact supplying additional editors as needed.

It seems that's still the case. But where are the years of experience supposed to come from?



Wednesday, December 15, 2021

Lindsey Graham out of context

Here we go again. PolitiFact has had quite a run in 2021 when it comes to taking Republicans' claims out of context.

This latest one forced me to set aside other projects that have crow(d)ed out PolitiFact Bias posts.


Did Sen. Graham say the CBO says the "Build Back Better" Act would amount to $3 trillion in deficit spending?

He did say that, but PolitiFact took it out of context.

PolitiFact explained to its readers that Graham was talking about a modified version of the "Build Back Better" Act (bold emphasis added):

Graham said the CBO predicted the Build Back Better Act would add $3 trillion to deficits over 10 years.

He’s referring to a bill that’s not the Build Back Better Act. At Graham’s request, the CBO looked at the impact of extending the temporary programs in the bill for a full 10 years. That is an assessment of a hypothetical situation, not the bill at hand. 

We rate this claim False.

What's the problem with PolitiFact's reasoning?

It was clear in context that Graham was talking about the CBO's scoring of permanent versions of the bill's temporary provisions. The Fox News interviewer, Chris Wallace, made that clear at the outset of the interview (bold for the portion PolitiFact may have relied on for its quotation of Graham):

WALLACE: You commissioned the Congressional Budget Office to project how much Build Back Better will cost over the 10 years, assuming that the programs that are in it, the spending programs that are in it, go on for 10 years and are not as in the case with child care just for one year.

GRAHAM: Right.

WALLACE: The CBO found, instead of adding $200 billion to the deficit, it will add $3 trillion to the deficit. But, Senator, the White House says that that's fake because if the programs are extended, they'll find ways to pay for them.

GRAHAM: Well, give me a plan to pay for them then. President Biden said the bill was fully -- fully paid for. Vice President Harris said it was paid for. Schumer, Pelosi, Secretary of Treasury Yellen. The CBO says it's not paid for. It's $3 trillion of deficit spending. It's not $1.75 trillion over 10 years, it's $4.9 trillion.
We doubt PolitiFact's headline version of Graham's statement qualifies as proper application of AP style for quotations. But the main point is that, in context, Graham would be understood to be talking about the added cost of making the temporary measures permanent. And PolitiFact affirms what Graham says about that CBO projection.

So how does Graham warrant a "False" rating if he wasn't trying to fool people into thinking the new CBO scoring was for the version of the bill with the temporary provisions?

PolitiFact's Twist on the Committee For a Responsible Budget

Also of note, PolitiFact's fact check takes the Committee For a Responsible Budget out of context, using a part of one of its articles to make Graham look out of line for citing the CBO's scoring of the bill with the temporary provisions made permanent:

Modified means the CBO scored a bill that’s different from the one on the table.

"These estimates do not reflect what is actually written in the Build Back Better Act nor its official cost for scorekeeping purposes," the deficit hawk group Committee for a Responsible Federal Budget wrote. "Lawmakers may choose to allow some provisions to expire, to extend some as written, and to modify some."

That's exactly what the Committee said, but it was in the context of explaining the CBO's alternative scoring and comparing that scoring to the Committee's own alternative scoring of "Build Back Better" with its temporary provisions made permanent (highlights for the portion PolitiFact cherry picked):

Importantly, these estimates do not reflect what is actually written in the Build Back Better Act nor its official cost for scorekeeping purposes. Lawmakers may choose to allow some provisions to expire, to extend some as written, and to modify some. To offset the cost of extending these provisions as President Biden has committed, they would need to more than double current offsets in the bill. Extending programs without these offsets would substantially increase in the debt. $3 trillion of new debt would increase debt to over 116 percent of Gross Domestic Product in 2031, up from 107.5 percent under current law.

The Build Back Better Act relies on a substantial amount of short-term policies and arbitrary sunsets to reduce its cost, raising the possibility of deficit-financed extensions in future years. A more robust and fiscally responsible package would not rely on these gimmicks to achieve deficit neutrality.

The second paragraph in particular aligns well with Sen. Graham's criticism of "Build Back Better."

PolitiFact hid that also from its readers, along with the fact that Graham was obviously talking about the CBO's scoring of temporary provisions made permanent.

Such fact-checking is no better than lying.

Sunday, August 8, 2021

PolitiFact attack on DeSantis attacks a straw man

PolitiFact's supposed fact check of Gov. Ron DeSantis (R) of Florida did not fact check what DeSantis said. Instead it attacked a straw man version of DeSantis' words.

The tag we use on these kinds of stories here at PolitiFact Bias is "altered claims." It's a relatively common occurrence. We just don't have time to document them all.

The problem sticking out like a sore thumb yet invisible to PolitiFact? DeSantis didn't say anything about what's driving the coronavirus surge. Look for yourself. Here's PolitiFact's account of what DeSantis said, with our highlights of DeSantis' actual words:

DeSantis unloaded on Biden during an Aug. 4 news conference in Panama City, Fla. 

"He’s imported more virus from around the world by having a wide open southern border. You have hundreds of thousands of people pouring across every month," DeSantis said. "You have over 100 different countries where people are pouring through. Not only are they letting them through — they're then farming them out all across our communities across this country. Putting them on planes, putting them on buses."

DeSantis doubled down in a fundraising letter later that day: "Joe Biden has the nerve to tell me to get out of the way on COVID while he lets COVID-infected migrants pour over our southern border by the hundreds of thousands. No elected official is doing more to enable the transmission of COVID in America than Joe Biden with his open borders policies."

See? There's not a word from DeSantis about what's driving the current coronavirus surge.

Perhaps the fact checkers somehow derived the core of their fact check based on the news report they cited in the story (WPTV):

DeSantis accused Biden of accelerating the pandemic through lax security at the U.S.-Mexico border.

But again, DeSantis didn't say anything about accelerating the pandemic. He said Biden's border policy was "helping to facilitate" the spread of covid-19:

(")And so he's not shutting down the virus, he's helping to facilitate it in our country."

"Facilitate" is not the same word as "accelerate." They don't mean the same thing.

"Accelerate" is not the same word as "drive." They don't mean the same thing.

In like manner, "facilitate" doesn't mean the same thing as "drive." 

It's irresponsible and wrong for journalists to play the telephone game with key terms.

The fact check's conclusion derives almost entirely from PolitiFact's straw man focus:

DeSantis said Biden has driven the current coronavirus surge because he "imported more virus from around the world by having a wide open southern border." 

The available evidence shows that coronavirus hot spots tend to be clustered either far from the border or on the water, whereas the entire land border with Mexico has fairly low rates. The hotspot locations tend to correlate with low rates of vaccination among the public. 

In addition, the U.S. does not have a "wide open" border. Most people who are encountered are turned away under a Trump-era policy that Biden continued. 

We rate the statement False.

DeSantis did not say Biden has driven the current coronavirus surge. DeSantis said Biden had done more than any other elected official to facilitate the spread of covid. PolitiFact's experts affirmed that border crossings under Biden represent a valid concern. PolitiFact never bothered comparing Biden's border policy to that of any other elected official (Gov. Cuomo, maybe?).

PolitiFact put two other (post-publication note: we deal with one of them!) elements in its fact check that we find worthy of note.

'Hotspot Locations Tend to Correlate With Low Rates of Vaccination'

That sentence was a fact check of Biden, albeit carried out with a carelessness that totally undermines its validity.

Let's take a look at the map of "hotspots" PolitiFact provided.

 


Now take a look at the Johns Hopkins map (as of Aug. 8, 2021--archived version doesn't show the map) showing vaccine percentages by state (fully vaccinated, top; at least one dose, bottom):

 



The claim, made by President Biden and repeated by PolitiFact, deserved far more scrutiny than it got (look at Nebraska and Nevada, just for starters).

PolitiFact supposedly relied on The New York Times to support the notion that low vaccination rates explain the surge's current pattern:

There’s also a more plausible explanation for the coronavirus surge’s current pattern: Case rates are higher in places with lower rates of vaccination. 

An analysis by the New York Times found that at the end of July, counties with vaccination rates below 30% had coronavirus case rates well over double the case rates in counties with at least 60% vaccination. And five of the six least-vaccinated states — Alabama, Arkansas, Georgia, Louisiana, and Mississippi — are all squarely within the geographical quadrant of the country that has the highest case rates.

PolitiFact's claim relies on specious reasoning, given that the Times conducted nothing like a controlled experiment. The Times showed some charts of test results in high-vaccinated counties compared to low-vaccinated counties. But a vaccinated person is more likely to dismiss mild illness as something other than covid and skip testing. Unvaccinated people would be more likely to get tested and artificially bump the percentage for positive tests in counties with low vaccination percentages.
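The testing-bias point can be illustrated with a toy simulation. All of the numbers below are hypothetical, chosen only to show the mechanism: two counties with the same true infection rate can report very different case rates if willingness to get tested differs between vaccinated and unvaccinated residents.

```python
# Toy model (hypothetical numbers): two counties with identical true
# infection rates, where unvaccinated residents are assumed more likely
# to seek out a test when mildly ill.
def reported_case_rate(pop, true_infection_rate, vax_share,
                       test_prob_vaxed, test_prob_unvaxed):
    infected = pop * true_infection_rate
    # The share of infections that get detected depends on who gets tested.
    detected = infected * (vax_share * test_prob_vaxed
                           + (1 - vax_share) * test_prob_unvaxed)
    return detected / pop * 100_000  # reported cases per 100,000 residents

# Same true infection rate (2%) in both counties.
high_vax = reported_case_rate(100_000, 0.02, 0.70, 0.30, 0.60)
low_vax  = reported_case_rate(100_000, 0.02, 0.20, 0.30, 0.60)

print(round(high_vax), round(low_vax))  # low-vax county "looks" worse
```

Under these made-up assumptions the low-vaccination county reports roughly 38% more cases per capita despite an identical underlying infection rate, which is the confound the blog says the Times comparison ignores.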

We'd say that a fact checker who fails to realize this perhaps belongs in another line of work.

Instead of building a straw man out of DeSantis' claim, PolitiFact would have served the public better by doing a serious examination of Biden's implied claim that vaccination effectively provides a significant degree of immunity against covid--to the point where vaccinated persons do not need to worry much about passing the virus on to others (vaccinated and unvaccinated alike).

How does Iceland fit with PolitiFact's rubberstamping of Biden's claim, for example?

From the Brussels Times (bold emphasis added):

About one month ago, the country became the first in Europe to lift all its domestic restrictions, however, on 12 July, it faced a sharp spike in COVID-19 cases for the first time since October, registering 355 new infections, despite over 70% of the total population being vaccinated.

Three-quarters of these were among vaccinated people, and most were linked to the Delta variant of the virus, according to the health authorities. The last such spike in the country had been in late October.

How will mainstream media fact checkers wean themselves from preferring narratives instead of checking facts?

Tuesday, June 8, 2021

PolitiFact turns incoherent Obama statement into "Half True" claim

 Behold:

Remember President Obama the constitutional scholar?
 
Here, the constitutional scholar makes the ability of 30 percent of the U.S. population to control a majority of Senate seats conditional on filibuster reform.

It's a completely preposterous argument, yet somehow PolitiFact arranges the tea leaves so they spell out "Half True."

As for what Obama got wrong, PolitiFact admits it only obliquely (bold emphasis added):

In the transcript of the interview with Klein, this passage about the filibuster included a link to a Washington Post analysis of the differences between population and representation in the Senate. However, the Post article doesn’t precisely support what Obama said. 

...

While the article’s conclusion is generally consistent with Obama’s point, it doesn’t have anything to do with the filibuster or the 60-vote threshold to end one. Rather, the article looked at representation throughout the entire chamber.

PolitiFact tries to make it "Obama's point" that the Senate can magnify the power of small populations. But that wasn't really Obama's point. Obama was arguing for filibuster reform.

There is no filibuster reform that changes that basic feature of the Senate. Obama's argument doesn't even count as coherent.

PolitiFact makes a great show of explicating Obama's claim that "30 percent of the population potentially controls the majority of Senate seats." But that's true regardless of the filibuster. We could keep 1,000 people in each of 49 states and have everybody else move to Alaska. That would give a tiny percentage of the U.S. population a supermajority of Senate seats.
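The thought experiment is easy to check with arithmetic. The 1,000-per-state figure is the post's own hypothetical, and the U.S. population is rounded to 330 million for illustration:

```python
# Hypothetical scenario from the post: 1,000 people remain in each of
# 49 states and everyone else moves to Alaska.
us_population = 330_000_000           # rounded, for illustration only
small_state_pop = 1_000 * 49          # residents of the 49 tiny states
senate_seats_controlled = 49 * 2      # two senators per state, 98 of 100

share_of_population = small_state_pop / us_population * 100

print(f"{share_of_population:.4f}% of the population")
print(f"{senate_seats_controlled} of 100 Senate seats")
```

A vanishingly small fraction of the population would control a filibuster-proof supermajority, with or without the filibuster, which is the post's point: the malapportionment comes from the Constitution, not from Senate procedure.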

So what? There's no argument for filibuster reform in there.

One might use the above scenario to argue for changing the Constitution itself to make it more democratic. But we would hope that somebody would remember that the undemocratic features in the U.S. Constitution were put there deliberately, specifically because the framers considered democracy in the form of popular rule an exceptionally bad form of government. That's why they set up a republic with a federalist system dividing up political power in a variety of ways.

Watch PolitiFact argue Obama's point was something other than filibuster reform (bold emphasis added):

(W)e crunched the numbers from the 2020 Census and concluded that Obama’s overall point had merit but that he misstated the details.

In particular, Obama said that states with a small percentage of the population could control "the majority of Senate seats." Given today’s partisan tendencies in each state, controlling an actual majority of seats would not be feasible for that small a percentage. However, a small percentage of the population could control enough seats to successfully wield the filibuster, which effectively gives them control over whether a majority can pass legislation.

As illustrated above, a small percentage of the population could potentially wield a supermajority in the Senate. It has nothing to do with the filibuster, and the need for filibuster reform was Obama's point.

Check out PolitiFact's summary version of Obama's point:

Obama said, "The filibuster, if it does not get reformed, still means that maybe 30% of the population potentially controls the majority of Senate seats."

In the Senate’s current makeup, senators representing 29% to 39% of the U.S. population would be sufficient to mount a filibuster and block a vote on legislation, in a sense controlling what can be passed in the chamber.

In the first paragraph PolitiFact relates what Obama actually said. In the second paragraph PolitiFact translates what he said into something completely different. "Majority of Senate seats" turns magically into the number of seats needed to successfully filibuster.

Obama's argument was elaborate window-dressing for the real and truthful argument for filibuster reform: "If we change the filibuster we can pass more of the legislation we want to pass." That statement could earn a "True" from PolitiFact, eh?

It was completely ridiculous for Obama to try to suggest filibuster reform would affect the constitutional ability of small-population states to potentially control a majority of Senate seats. The one is independent of the other. That leaves Obama's true point, the supposed need for filibuster reform, without any coherent support.

It was nice of PolitiFact to overlook that fact in rating Obama's spurious argument "Half True."

It's flatly false that the filibuster, reformed or not, allows a minority population to control a majority of Senate seats. That's a feature of the Constitution, not the filibuster.

A constitutional scholar ought to know that.


Correction June 8, 2021: Removed a redundant "the" from "and the the need for filibuster reform." Hat tip to the the Eye Creatures.

Sunday, March 7, 2021

Layers of Editors: How fast is PolitiFact's stupidity growing?

Uh-oh! PolitiFact's incompetence unfairly harmed a Democrat again! This time it was hapless Joe Biden who ended up with the short straw by PolitiFact's blinkered judgment.

PolitiFact explained that over the past 10 years the number of Hispanics increased by about 10 million, while the number of Asian Americans went up by 5.2 million.

Why is an increase, on average, of 520,000 per year a faster increase than about 1 million per year?

PolitiFact explains, sort of:

Biden said "the fastest-growing population in the United States is Hispanic." That’s incorrect: The fastest-growing group is Asian Americans, with Hispanics ranking second. Hispanics did record the largest numerical increase in population of any group between 2010 and 2019, but that’s a different measure than "fastest growing."

Instead of recognizing more than one measure of "fastest-growing," PolitiFact arbitrarily accepts one measure while rejecting the other.

But an increase of 1 million per year on average is a rate of growth, and arguably more useful than measuring rate of growth as a percentage of an existing population.

We pointed out on Twitter that PolitiFact's reasoning would suggest that a one foot tall tree that doubles in size is growing faster than a 50 foot tall tree that grows two feet during the same span of time.

Sure, the first tree may surpass the second tree in size if it keeps doubling year after year. But that will never happen unless the first tree's growth in inches per year starts to exceed the second tree's.

Never.

And the math works similarly for population growth. Unless Asian Americans start adding more population in absolute numbers than do Hispanics, the number of Hispanics will forever be greater than the number of Asian Americans. Forever. In fact, Asian Americans will not start closing the gap between the two populations until they start adding more people in raw numbers rather than merely in terms of percentage.
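The two competing measures of "fastest-growing" are easy to compute side by side. The decade increases come from the fact check (about 10 million Hispanics, 5.2 million Asian Americans); the 2010 base populations below are round, Census-style figures used only for illustration:

```python
# Two ways to measure "fastest-growing," using the fact check's decade
# increases and illustrative round 2010 base populations.
hispanic_2010, hispanic_gain = 50_500_000, 10_000_000  # base illustrative
asian_2010, asian_gain       = 14_700_000, 5_200_000   # base illustrative

# Measure 1: percentage growth (the measure PolitiFact accepted).
hispanic_pct = hispanic_gain / hispanic_2010 * 100   # ~19.8%
asian_pct    = asian_gain / asian_2010 * 100         # ~35.4%

# Measure 2: absolute growth per year (the measure PolitiFact rejected).
hispanic_per_year = hispanic_gain / 10               # ~1,000,000 per year
asian_per_year    = asian_gain / 10                  # ~520,000 per year

# The gap between the two populations widens under these rates, even
# though the smaller group grows faster in percentage terms.
gap_2010 = hispanic_2010 - asian_2010
gap_2020 = (hispanic_2010 + hispanic_gain) - (asian_2010 + asian_gain)
print(asian_pct > hispanic_pct, gap_2020 > gap_2010)
```

Both statements are true at once: Asian Americans grow faster as a percentage while Hispanics add more people and pull further ahead, which is why treating only one measure as legitimate is arbitrary.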

So who do these fact checkers think they are?


Update March 8, 2021: Added the link to the PolitiFact "fact check" in the second paragraph.

Monday, February 22, 2021

PolitiFact's "In Context" deception (Updated)

In (a) perfect world, fact checkers would publish "In Context" features that simply offer surrounding context with objective explanatory notes.

This ain't no perfect world.

The PolitiFact "In Context" articles tend to serve as editorials, just like its fact checks. Two "In Context" articles from the past year (actually one from 2021 and one from 2019) will serve as our illustrative examples.

The Vaccine Supply

President Biden said "It’s one thing to have the vaccine, which we didn’t have when we came into office, but a vaccinator; how do you get the vaccine into someone’s arm?"

Instead of using context to figure out what Mr. Biden meant or perhaps intended to say, PolitiFact offered that he was not saying there was no vaccine when he took office because elsewhere in the speech he said there were 50 million vaccine doses when he took office ("we came into office, there (were) only 50 million doses that were available"):

You can judge his meaning for yourself, but it’s clear to us that Biden didn’t mean there were no vaccines available before he took office.
So Mr. Biden could have meant anything except that there were no vaccines available when he took office? Oh thank you, Pulitzer Prize-winning fact checkers!

The fact checkers at CNN at least made a game attempt to make heads or tails out of Mr. Biden's words:

Biden made a series of claims about the Covid-19 vaccine situation upon his January inauguration. He said early at the town hall that when "we came into office, there was only 50 million doses that were available." Moments later, he said, "We got into office and found out the supply -- there was no backlog. I mean, there was nothing in the refrigerator, figuratively and literally speaking, and there were 10 million doses a day that were available." Soon after that, he told Cooper, "But when you and I talked last, we talked about -- it's one thing to have the vaccine, which we didn't have when we came into office, but a vaccinator -- how do you get the vaccine into someone's arm?"

Facts First: Biden got at least one of these statistics wrong -- in a way that made Trump look better, not worse, so Biden's inaccuracy appeared accidental, but we're noting it anyway. A White House official said that Biden's claim about "10 million doses a day" being available when he took office was meant to be a reference to the 10 million doses a week that were being sent to states as of the second week of Biden's term, up from 8.6 million a week when they took over.

CNN's "Facts First" went on to explain that the Trump administration released all vaccine reserves to the states instead of holding back the second doses recommended by the manufacturers. CNN also pointed out that the Biden administration continued that same policy.

The CNN account makes it appear Mr. Biden uttered an incoherent mixture of statistics. PolitiFact didn't even make an attempt in its article to figure out what Biden was talking about. PolitiFact simply discounted the statement Biden made that seemed to contradict his dubious claim about the availability of 50 million vaccine doses when he took office.

PolitiFact's "In Context" article looks like pro-Biden spin next to the CNN account. And we thought of another "In Context" article where PolitiFact used an entirely different approach.

Very Fine People

PolitiFact used Mr. Biden's statement about "50 million doses" to excuse any inaccuracy Biden may have communicated by later saying the vaccine cupboard was bare when he took office.

But PolitiFact's "In Context" article about the circumstances of President Trump's reference to "very fine people," published April 26, 2019, made no similar use of Mr. Trump's same-speech clarification "and I’m not talking about the neo-Nazis and the white nationalists -- because they should be condemned totally."

With Biden, readers got PolitiFact's assurance that he wasn't saying there were no vaccine doses when he took office, even though he used words to that effect.

With Trump, readers were left with PolitiFact's curiosity as to what the context might show (bold emphasis added):

We wanted to look at Trump’s comments in their original context. Here is a transcript of the questions Trump answered that addressed the Charlottesville controversy in the days after it happened. (His specific remarks about "very fine people, on both sides" come in the final third of the transcript.)

Not only did PolitiFact fail to use the context to defend Trump from the charge that he was calling neo-Nazis "fine people," about a year later (July 27, 2020) PolitiFact made that charge itself, citing its own "In Context" article in support:

• As president in 2017, Trump said there were "very fine people, on both sides," in reference to neo-Nazis and counterprotesters in Charlottesville, Va.
Making the situation that much more outrageous, PolitiFact declined to correct the latter article when we sent a correction request. PolitiFact remained unmoved after we informed the International Fact-Checking Network about its behavior.

Is PolitiFact lucky or what that its owner, the Poynter Institute, also owns the International Fact-Checking Network?

This is how PolitiFact rolls. PolitiFact uses its "In Context" articles to editorially strengthen or weaken narratives, as it chooses.

It's not all about the facts.


Correction: We left out an "a" in the first sentence and also misstated the timing of the two articles our post talks about. Both errors are fixed using parenthetical comments (like this).

Friday, January 29, 2021

PolitiFact miscounts American deaths during WW2?

When a PolitiFact fact check's subject matter involves math, we (figuratively!) smell blood in the water.

This item came from the PolitiFact article "Joe Biden's inauguration in extraordinary times, fact-checked," published Jan. 20, 2021. Notably, PolitiFact has only done one Truth-O-Meter rating on claims from President Joe Biden since mid-December. That's assuming PolitiFact's page showing Biden's fact checks is accurate.

As it turned out, PolitiFact was right that Biden was "close to accurate." But PolitiFact made a significant methodological blunder in reaching its conclusion. The mistake appears right away in PolitiFact's explanation for its judgment:

As Biden was speaking, the Johns Hopkins University coronavirus tracker was reporting 402,269 deaths in the United States. That is just shy of the 405,399 U.S. deaths during World War II, according to the Congressional Research Service. With the seven-day moving average of coronavirus deaths reaching 3,015 on Inauguration Day, the four-year World War II total was due to be matched by the coronavirus either on Jan. 20 or 21, less than a year after the virus reached the United States.

PolitiFact reports incorrectly in the second sentence of the above paragraph. The Congressional Research Service source document does not give a total for all the American lives lost in World War II. It gives a total for the number of military personnel lost during the war (bold emphasis added):

This report provides U.S. war casualty statistics. It includes data tables containing the number of casualties among American military personnel who served in principal wars and combat operations from 1775 to the present. It also includes data on those wounded in action and information such as race and ethnicity, gender, branch of service, and cause of death. The tables are compiled from various Department of Defense (DOD) sources.

The total PolitiFact used omits more than 10,000 civilian casualties, including nearly 10,000 from the U.S. civilian merchant marine. We don't see where Biden limited his statement to military personnel.

PolitiFact went on to suggest Biden would be right by extrapolating the numbers forward for a full year since the U.S. started to log covid deaths. But doing that turns Biden's claim into a prediction. PolitiFact supposedly does not fact check predictions. Going on the facts alone, Biden was off by more than 10,000 deaths. PolitiFact made his error appear considerably smaller by using a flawed approach to its fact check.
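The size of the discrepancy follows directly from the numbers in the fact check; the civilian figure below is the post's "more than 10,000" estimate, treated as approximate:

```python
# Figures from the fact check and the post.
covid_deaths_at_speech = 402_269   # Johns Hopkins tally, Inauguration Day
ww2_military_deaths = 405_399      # Congressional Research Service total
ww2_civilian_deaths = 10_000       # post's "more than 10,000," approximate

# PolitiFact's comparison used military deaths only.
gap_military_only = ww2_military_deaths - covid_deaths_at_speech

# Counting civilian deaths, the shortfall in Biden's claim is larger.
ww2_total_deaths = ww2_military_deaths + ww2_civilian_deaths
gap_all_deaths = ww2_total_deaths - covid_deaths_at_speech

print(gap_military_only, gap_all_deaths)
```

Against the military-only total the claim misses by about 3,000; against the fuller total it misses by more than 13,000, which is why the choice of baseline matters to the rating.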

It's what we call PolitiFact's "Rubberstamps for Democrats" program. We argue that the tendency to award lazy favorable ratings to Democrats (and not Republicans) counts as one evidence of PolitiFact's political bias.

Wednesday, December 9, 2020

Does PolitiFact use consistent standards? No.

PolitiFact misleads when it tells its readers "we are applying the same standards to both sides." PolitiFact's methodology leaves open myriad ways to put fingers on the scale. The scale has fingerprints all over it.

In this article we'll focus on yet another example of uneven application of standards. We'll look at two PolitiFact fact checks in the category of health care, one from a Republican and one from a Democrat.


The Republican



On Nov. 30, 2020 PolitiFact published a fact check of Sen. Kelly Loeffler (R-Ga.) looking at her claim that her healthcare plan would protect Americans with preexisting conditions. PolitiFact issued a "False" judgment on Loeffler's claim.

Why the "False" rating?

PolitiFact's subheading suggested a lack of proof led to the rating: "No proof that Kelly Loeffler will ensure protections for preexisting conditions." 

Aside from the lack of proof, PolitiFact noted that Loeffler's plan proposed using something like high risk pools to help people get their preexisting conditions covered. PolitiFact's "If Your Time is Short" story summary gave Loeffler credit for protections that fall short of those offered by the Affordable Care Act (second bullet):

If Your Time is short

  • The GOP Georgia senator’s new plan offers no details on how protections for people with preexisting health conditions would be ensured.

  • Two provisions in the plan indicate protections will be less than those provided by the Affordable Care Act, experts say.

 

Why did the protections in Loeffler's plan count for nothing on PolitiFact's "Truth-O-Meter"? The special insurance groups designed for those with preexisting conditions couldn't even budge the rating up to "Mostly False"? Did PolitiFact assume that when Loeffler said "Americans" she meant "all Americans"? If so, that rationale failed to find its way into the fact check.

The Democrat

People these days tend to know (using that term advisedly) that President Obama's "You can keep your plan" pledge received PolitiFact's "Lie of the Year" in 2013. They've tended to forget, with help from PolitiFact, that the claim never received a Truth-O-Meter rating below "Half True." PolitiFact rated Obama's claim twice, in 2009 and in 2012. Both times it received a "Half True" rating. 

We'll use the 2012 rating to see how PolitiFact's application of standards compared to the ones it used for Loeffler.


PolitiFact's summary paragraphs encapsulate its reasoning:

Obama has a reasonable point: His health care law does take pains to allow Americans to keep their health plan if they want to remain on it. But Obama suggests that keeping the insurance you like is guaranteed.

In reality, Americans are not simply able to keep their insurance through thick and thin. Even before the law has taken effect, the rate of forced plan-switching among policyholders every year is substantial, and the CBO figures suggest that the law could increase that rate, at least modestly, even if Americans on balance benefit from the law’s provisions. We rate Obama’s claim Half True.

PolitiFact says Obama has a reasonable point. PolitiFact made no mention in its fact check of Loeffler to detect whether she had a reasonable point that her health care plan offered protections for preexisting conditions. Is that the same standard?

PolitiFact says Obama "suggested" that keeping one's preferred insurance is guaranteed. That might parallel the assumption that Loeffler was saying her plan guarantees coverage for preexisting conditions. PolitiFact's ruling suggests it made that assumption, though the fact check does not say so specifically. But if Obama was similarly making a guarantee, how did he skate with a "Half True" instead of the "False" rating Loeffler's claim received? Is that the same standard?

And speaking of guarantees, remember that PolitiFact docked Loeffler for not having proof that her plan would cover (all?) those with preexisting conditions. What proof did Obama's plan offer? Apparently none, as PolitiFact noted a Congressional Budget Office assessment saying the ACA would accelerate forced churn of insurance plans. Is that the same standard?

We say the same standard did not apply to both. If Loeffler's "False" stems from her leading people to falsely believe her plan guarantees coverage for preexisting conditions then Obama's similar misleading would seem to equally earn a "False" rating. Or, both Loeffler and Obama could receive a "Half True" rating.

That they received quite different ratings shows the application of differing standards.

Sunday, September 6, 2020

Viva Frei: PolitiFact is Fake News

Rest assured, PFB readers, the recent lack of new content at PolitiFact Bias has nothing at all to do with improved work at PolitiFact. PolitiFact stinks as badly as ever. We just don't have the time right now to devote to publishing.

But it was worth taking a moment to highlight a video blog by Viva Frei, a Canadian neighbor who happened to notice some problems at PolitiFact.

Frei hits PolitiFact over a story on cash bail, and over a fact check of the claim that Speaker Nancy Pelosi (D-Calif.) broke the law when she tore up the copy of the State of the Union address Trump delivered to her before Congress.


Frei certainly caught PolitiFact grading a different claim than it claimed to fact check on the bail issue. Only the United States and the Philippines have money bail systems dominated by private commercial bond companies. A good number of other countries have money bail systems, and the claimant, Gavin Newsom, did not bother with that kind of specificity. The "Mostly True" rating could not apply for that reason alone.

Enjoy the video! And hat tip to reader "Brian" for bringing the video to our attention.