Showing posts with label Inconsistency. Show all posts

Sunday, December 10, 2023

Example Umpteen Showing How PolitiFact Goes Easier on Democrats

We only wish we had the time and money needed to document as much as 10 percent of PolitiFact's flawed and biased work.

We've documented a number of times PolitiFact's penchant for ignoring its central principle for grading numbers claims. PolitiFact's founding editor Bill Adair declared that the most important part of a numbers claim is its underlying point. But PolitiFact will ignore the underlying point at the drop of a hat if it will benefit a Democrat.

Newsom vs Haley

Newsom and "per capita" interstate migration

Democratic governor Gavin Newsom, defending himself from the charge that California is losing population while Florida gains population, said "Per capita, more Floridians move to California than Californian's moving to Florida." PolitiFact rated the claim "Mostly True."

What's the underlying point of Newsom's claim? Does it address California's population loss compared to Florida's population gain?

No. Newsom's claim instead distracts from the issue with a pretty much meaningless statistic. Experts PolitiFact cited in the fact check underscored that fact. Note this line from PolitiFact's summary:
Experts gave varying answers about whether the margin was statistically significant, but they agreed that the slim differences make this argument technical, and not necessarily meaningful.
So, PolitiFact effectively ignored Newsom's underlying point (distracting from Sean Hannity's question) and gave him nearly full credit for telling the truth about a meaningless statistic.

Haley and ship counts as a measure of military strength

Contrast PolitiFact's treatment of Newsom to its treatment of Republican presidential candidate Nikki Haley. Haley said China is building up its military, and illustrated her claim by noting China has the largest naval fleet in the world. PolitiFact said she was right with her numbers, but faulted her for her underlying point. "Half True!"


PolitiFact's summary recounts the objections of the experts it interviewed:

Numerically, she’s on target with both countries’ ship counts. But experts say that simply counting ships omits context about a country’s true military capabilities. 

Ship counts ignore overall ship size, specific warfighting capabilities, and overall geographic reach, all of which are metrics where the United States maintains an edge over China.

It's worth noting that Haley made no claim about China's navy possessing more power than the U.S. navy. So why are tonnage and military capability relevant in rating the claim she made?

They're not. But PolitiFact has its excuse for giving Haley a lowball rating compared to the favor they did Newsom. PolitiFact focuses on Haley's underlying point and gives a poor rating for a true claim. PolitiFact ignores Newsom's underlying point and gives him a favorable rating for a claim that might not even be true (check the fine print).

It's part of the baseless narrative PolitiFact weaves: Republicans lie more.

The truth? PolitiFact is biased, and proves it repeatedly with examples like these.

Tuesday, June 28, 2022

PolitiFact: How can we rig this abortion fact check to help President Biden? Part II

Lo and behold, PolitiFact made changes to the fact check we critiqued in our previous post.

Recall that we lodged three main criticisms of PolitiFact's "Mostly True" confirmation of President Biden's claim the Supreme Court's Dobbs decision made the United States an outlier among developed nations.

  1. PolitiFact cherry-picked its pool of "developed nations."
  2. It misidentified "Great Britain" as a member of the G7, enabling it to ignore a Northern Ireland spanner in the works.
  3. It falsely stated members of the G7, except the United States, "have national laws or court decisions that allow access to abortion, with various restrictions."

PolitiFact partially fixed the second problem.

Fixing the second problem without fixing the third problem magnifies the third problem. And PolitiFact, again, failed to follow its own corrections policy.

Let's start with the "clarification" notice and work from there:

CLARIFICATION, June 27, 2022: This story has been clarified to reflect that the United Kingdom, which contains Northern Ireland, is a G-7 nation. It has also been updated to describe current abortion laws in Northern Ireland.

Note the "clarification" notice announces a clarification and an update.

What does PolitiFact's statement of principles prescribe for clarifications and updates?

Clarification:

Oops! PolitiFact's statement of principles offers no procedure for doing a clarification!

That either means that PolitiFact is following its principles because the principles allow it to do whatever it wants, or else it means that PolitiFact isn't really following a principle.

Update:

Updates – From time to time, we add additional information to stories and fact-checks after they’ve published, not as a correction but as a service to readers. Examples include a response from the speaker we received after publication (that did not change the conclusion of the report), or breaking news after publication that is relevant to the check. Updates can be made parenthetically within the text with a date, or at the end of the report. Updated fact-checks receive a tag of "Corrections and updates."

We think it clear that this policy calls for newly added "update" material within the original text to occur with clear cues to the reader where the material was added ("parenthetically within the text with a date"). Otherwise, the new material occurs at the end of the item after the update notice.

It's easier to find PolitiFact updates done incorrectly than ones done correctly. But this example shows an update done the right way:


The method shown communicates clearly to readers how the article changed.

This is infinitely more transparent than PolitiFact's common practice of an update notice at the end saying, in effect, "We changed stuff in the story above on this date."

Understood correctly, PolitiFact corrected its story. It fixed its mistake in misleadingly identifying Great Britain as a member of the G7. And the "update" was not new information. It was information PolitiFact should have included originally but mistakenly did not.

The fact check continues to do readers a disservice by failing to inform them that the U.K. in 2019 forced Northern Ireland, via special legislation, to permit abortion. That still-missing fact contradicts PolitiFact's claim that members of the G7 other than the United States "have national laws or court decisions that allow access to abortion, with various restrictions." The legislation forcing Northern Ireland to permit legal abortion was not a national law, nor was it a court decision.

The law is specific to Northern Ireland.

We'll end with an image we created for Twitter. It's an image from the Internet Archive Wayback Machine, using its comparison feature. Text highlighted in blue was changed from the original text, and we added red lines under the part of PolitiFact's fact check that remains false.



Thursday, June 2, 2022

PolitiFact's ongoing double standard on correlation versus causation

PolitiFact does not advertise the fact that it applies standards inconsistently.

But it could do so without misleading people.

The liberal bloggers at PolitiFact, who pass themselves off as objective fact checkers, presented us with a new example the other day.

President Biden passed correlation off as causation. "Mostly True," said PolitiFact: Because, you know, the correlation was there.

PolitiFact's reasoning:

A key study backs Biden up. But the reality is millions of assault weapons and large-capacity magazines remained in circulation during the ban, and that makes it hard to tease out the law’s impact. 

We rate this claim Mostly True.

Of course it's child's play to come up with an example where somebody factually claimed a correlation and got PolitiDinged for it.

The Facebook post did not directly claim causation any more than did President Biden.



PolitiFact confirmed the claimed correlation, but guess what? There was no proof the higher mask usage caused the deaths! So, "False."

PolitiFact's reasoning:

A Facebook post said there’s a "‘positive correlation’ between higher mask usage and COVID-19 deaths."

The post was referencing a study that reviewed data from 35 European countries and found that in places where mask usage was higher, COVID-19 deaths were also higher. But the study’s author said there was no cause-and-effect found.  

Critics of the study said masking protocols were issued in response to high rates of transmission. So it would be expected that deaths would occur while masking would be in place. 

Public health officials recommend masking as one way to help reduce transmission.

We rate this claim False.

Parallel cases. Both claims asserted a correlation. In both cases PolitiFact substantially confirmed the correlation but noted that correlation does not prove causation. Nearly polar opposite ratings resulted. 

That's how PolitiFact rolls.

Saturday, August 14, 2021

PolitiFact's shell game with claim selection

There they go again.

We've pointed out the bias inherent in PolitiFact's choices about what parts of a claim to rate. And they're at it again at PolitiFact, this time at PolitiFact Wisconsin:

PolitiFact Wisconsin based its "Pants on Fire" judgment solely on the source of the money.

  • Cost: about $50k (true)
  • Source of funds: tax dollars (false)
  • Rock considered a symbol of racism by some (true)

So guess where PolitiFact puts its story focus? Take it away, PF:

For this fact-check, we’ll be focusing on her claim that Wisconsin taxpayers were on the hook for the rock removal.
So PolitiFact didn't consider the amount spent on the rock removal or the reason it was moved.

Totally legit? No. It's one of the easy avenues for bias to enter fact-checking, which some people hilariously believe is strictly the telling of facts.

We've brought up in the past the "Mostly True" rating Barack Obama received during the Democratic presidential primaries when he claimed his uncle had helped liberate Auschwitz.

Here's that set of claims, for comparison:

  • Uncle among Allied troops liberating concentration camp (true/truish)
  • Auschwitz: (false--Soviet troops liberated Auschwitz)

In Obama's case, PolitiFact downplayed a claim it could have chosen to make the focus of its fact check. Instead, it prioritized everything else in the claim to justify the "Mostly True" rating.

To avoid that manifestation of bias, a fact checker needs to employ the same standards consistently. Picking and choosing story focus counts as yet another subjective aspect of fact check ratings.

It's a scam. And it's a lie to call it unbiased.

Yet that's what PolitiFact does.

Obama could have received a "Pants on Fire" rating with a story focus on whether his uncle liberated Auschwitz.

Campos-Duffy could have received a "Mostly True" with a story focus taking her whole claim into account and giving her credit for the true elements.

And we want these people partnering with Facebook to help decide what gets throttled down?


Updated seconds after publication to tag the PolitiFact writer Laura Schulte.

Thursday, August 5, 2021

PolitiFact has it both ways on 'vaccination'

PolitiFact's July 30, 2021 fact check confirming as "Mostly True" that Gen. George Washington "mandated smallpox vaccines for the Continental Army" surprised us.

It surprised us because it was barely six months ago (Dec. 15, 2020) that PolitiFact effectively told us that immunity acquired from having COVID-19 did not count as any sort of vaccine.

In December 2020, President Donald J. Trump said (bold emphasis added):

I think that the vaccine was our goal. That was number one because that was the way — that was the way it ends. Plus, you do have an immunity. You develop immunity over a period of time, and I hear we’re close to 15 percent. I’m hearing that, and that is terrific. That’s a very powerful vaccine in itself."

For some reason, PolitiFact concluded Trump was saying 15 percent natural immunity could confer herd immunity. But Trump was obviously saying that immunity acquired via means other than the new vaccines would contribute toward herd immunity. PolitiFact gave the impression that claim was false, basically by suggesting natural immunity doesn't count as a vaccine:

Is 15% natural immunity among the American population anywhere close to a "powerful vaccine," as Trump alleges? 

No, said the experts. And there’s nothing "terrific" about that level of infection within the community.

We doubt the experts were primarily at fault for misinterpreting Trump's statement, by the way. PolitiFact likely insinuated its misleading narrative in the questions it posed to its chosen list of experts.

PolitiFact's July 2021 fact check reversed on viewing naturally acquired immunity as a vaccine.

The smallpox vaccine didn’t exist when Washington was commander in chief of the Continental Army, but the point remains: he ordered the inoculation of troops against smallpox by the means that was then available, variolation.

So, even though vaccines were not invented until after the Revolutionary War, PolitiFact found it "Mostly True" that Washington mandated vaccinations for the Continental Army.

Variolation, by the way, simply meant intentionally infecting people with smallpox. It was the same virus, but it tended to cause less severe illness.

It's just another reminder that PolitiFact "fact checks" largely count as subjective exercises.


Note: We also wrote about the fact check of Trump back in January 2021.

Note 2: We doubt scientists have a solid idea why variolation was effective.

Monday, February 22, 2021

PolitiFact's "In Context" deception (Updated)

In (a) perfect world, fact checkers would publish "In Context" features that simply offer surrounding context with objective explanatory notes.

This ain't no perfect world.

The PolitiFact "In Context" articles tend to serve as editorials, just like its fact checks. Two "In Context" articles from the past year (actually one from 2021 and one from 2019) will serve as our illustrative examples.

The Vaccine Supply

President Biden said "It’s one thing to have the vaccine, which we didn’t have when we came into office, but a vaccinator; how do you get the vaccine into someone’s arm?"

Instead of using context to figure out what Mr. Biden meant or perhaps intended to say, PolitiFact offered that he was not saying there was no vaccine when he took office because elsewhere in the speech he said there were 50 million vaccine doses when he took office ("we came into office, there (were) only 50 million doses that were available"):

You can judge his meaning for yourself, but it’s clear to us that Biden didn’t mean there were no vaccines available before he took office.
So Mr. Biden could have meant anything except for there were no vaccines available when he took office? Oh thank you, Pulitzer Prize-winning fact checkers!

The fact checkers at CNN at least made a game attempt to make heads or tails out of Mr. Biden's words:

Biden made a series of claims about the Covid-19 vaccine situation upon his January inauguration. He said early at the town hall that when "we came into office, there was only 50 million doses that were available." Moments later, he said, "We got into office and found out the supply -- there was no backlog. I mean, there was nothing in the refrigerator, figuratively and literally speaking, and there were 10 million doses a day that were available." Soon after that, he told Cooper, "But when you and I talked last, we talked about -- it's one thing to have the vaccine, which we didn't have when we came into office, but a vaccinator -- how do you get the vaccine into someone's arm?"

Facts First: Biden got at least one of these statistics wrong -- in a way that made Trump look better, not worse, so Biden's inaccuracy appeared accidental, but we're noting it anyway. A White House official said that Biden's claim about "10 million doses a day" being available when he took office was meant to be a reference to the 10 million doses a week that were being sent to states as of the second week of Biden's term, up from 8.6 million a week when they took over.

CNN's "Facts First" went on to explain that the Trump administration released all vaccine reserves to the states instead of holding back the second doses recommended by the manufacturers. CNN also pointed out that the Biden administration continued that same policy.

The CNN account makes it appear Mr. Biden uttered an incoherent mixture of statistics. PolitiFact didn't even make an attempt in its article to figure out what Biden was talking about. PolitiFact simply discounted the statement Biden made that seemed to contradict his dubious claim about the availability of 50 million vaccine doses when he took office.

PolitiFact's "In Context" article looks like pro-Biden spin next to the CNN account. And we thought of another "In Context" article where PolitiFact used an entirely different approach.

Very Fine People

PolitiFact used Mr. Biden's statement about "50 million doses" to excuse any inaccuracy Biden may have communicated by later saying the vaccine cupboard was bare when he took office.

But PolitiFact's "In Context" article about the circumstances of President Trump's reference to "very fine people," published April 26, 2019, made no similar use of Mr. Trump's same-speech clarification "and I’m not talking about the neo-Nazis and the white nationalists -- because they should be condemned totally."

With Biden, readers got PolitiFact's assurance that he wasn't saying there were no vaccine doses when he took office, even though he used words to that effect.

With Trump, readers were left with PolitiFact's curiosity as to what the context might show (bold emphasis added):

We wanted to look at Trump’s comments in their original context. Here is a transcript of the questions Trump answered that addressed the Charlottesville controversy in the days after it happened. (His specific remarks about "very fine people, on both sides" come in the final third of the transcript.)

Not only did PolitiFact fail to use the context to defend Trump from the charge that he was calling neo-Nazis "fine people," about a year later (July 27, 2020) PolitiFact made that charge itself, citing its own "In Context" article in support:

• As president in 2017, Trump said there were "very fine people, on both sides," in reference to neo-Nazis and counterprotesters in Charlottesville, Va.
Making the situation that much more outrageous, PolitiFact declined to correct the latter article when we sent a correction request. PolitiFact remained unmoved after we informed the International Fact-Checking Network about its behavior.

Is PolitiFact lucky or what that its owner, the Poynter Institute, also owns the International Fact-Checking Network?

This is how PolitiFact rolls. PolitiFact uses its "In Context" articles to editorially strengthen or weaken narratives, as it chooses.

It's not all about the facts.


Correction: We left out an "a" in the first sentence and also misstated the timing of the two articles our post talks about. Both errors are fixed using parenthetical comments (like this).

Sunday, February 9, 2020

PolitiFact's charity for the Democrats

PolitiFact is partial to Democrats.

Back in 2018 we published a post that lists the main points in our argument that PolitiFact leans left. But today's example doesn't quite fit any of the items on that list, so we're adding to it:

PolitiFact's treatment of ambiguity leans left

When politicians make statements that may mean more than one thing, PolitiFact tends to see the ambiguity in favor of Democrats and against Republicans.

That's the nature of this example, updating an observation from my old blog Sublime Bloviations back in 2011.

When a politician says "taxes" and does not describe in context which taxes they are talking about, what do they mean?

PolitiFact decided the Republican, Michele Bachmann, was talking about all taxes.

PolitiFact decided the Democrat, Marcia Fudge, was talking about income taxes.

Based on the differing interpretations, Bachmann got a "False" rating from PolitiFact while Fudge received a "True" rating.

That brings us to the 2020 election campaign and PolitiFact's not-really-a-fact-check article "Fact-checking the Democratic claim that Amazon doesn't pay taxes."

The article isn't a fact check as such because PolitiFact skipped out on giving "Truth-O-Meter" ratings to Andrew Yang and Sen. Elizabeth Warren. Both could easily have scored Bachmannesque "False" ratings.


Yang and Warren both said about the same thing, that Amazon paid no taxes.

Various news agencies have reported that Amazon paid no federal corporate income taxes in 2017 and 2018. But news reports have also made clear that Amazon paid taxes other than federal corporate income taxes.


Of course neither Yang nor Warren will receive the "False" rating PolitiFact bestowed on Bachmann for a comparable error. PolitiFact treated both their statements as though they restricted their claims to federal corporate income tax.

Is it true that despite making billions of dollars, Amazon pays zero dollars in federal income tax?

Short answer: Amazon’s tax returns are private, so we don’t know for sure what Amazon pays in federal taxes. But Amazon’s estimates on its annual 10-K filings with the U.S. Securities and Exchange Commission are the closest information we have on this matter. They show mixed results for the past three years: no federal income tax payments for 2017 and 2018, but yes on payments for 2019.

That's the type of impartiality a Democrat can usually expect from PolitiFact. They do not need to specify what kind of taxes they are talking about. PolitiFact will interpret their statements charitably. 

Afters

It's worth noting that PolitiFact admitted not knowing whether Amazon paid federal income taxes in 2017 and 2018 ("we don’t know for sure what Amazon pays in federal taxes"). And PolitiFact suspends its "burden of proof" criterion yet again for Democrats.


Feb. 10, 2020: Edited to remove a few characters of feline keyboard interference.

Saturday, January 25, 2020

We republished this item because we neglected to give it a title when it was first published.

Forgetting the title results in a cumbersome URL, making it a good idea to republish it.

So that's what we did. Find the post here.

Saturday, August 3, 2019

PolitiFact: The true half of Cokie Roberts' half truth is President Trump's half truth

Pity PolitiFact.

The liberal bloggers at PolitiFact may well see themselves as neutral and objective. If they see themselves that way, they are deluded.

Latest example:


PolitiFact's Aug. 3, 2019 fact check of President Trump finds he correctly said the homicide rate in Baltimore is higher than in some countries with a significant recent history of violence. But it wasn't fair of Trump to compare a city to a country for a variety of reasons, experts said.

So "Half True," PolitiFact said.

The problem?

Here at PolitiFact Bias we apparently remember what PolitiFact has done in the past better than PolitiFact remembers it. We remembered PolitiFact giving (liberal) pundit Cokie Roberts a "Half True" for butchering a comparison of the chance of being murdered in New York City compared to Honduras.




Roberts was way off on her numbers (to the point of being flatly false about them, we would say), but because she was right that the chance of getting murdered is greater in Honduras than in New York City, PolitiFact gave Roberts a "Half True" rating.

We think that if Roberts' numbers are wrong (false), and her comparison counts as only "Half True" because it isn't fair to compare a city to a country, then Roberts deserves a "Mostly False" rating.

That follows if PolitiFact judges Roberts by the same standard it applies to Mr. Trump.

But who are we kidding?

PolitiFact often fails to apply its standards consistently. Republicans and conservatives tend to receive the unfair harm from that inconsistency. Mr. Trump, thanks in part to his earned reputation for hyperbole and inaccuracy, tends to receive perhaps more unfair harm than anybody else.

It is understandable that fact checkers allow confirmation bias to influence their ratings of Mr. Trump.

It's also fundamentally unfair.

We think fact checkers should do better.

Tuesday, July 30, 2019

PolitiFact's Inconsistency on True-But-Misleading Factoids

People commonly mislead other people using the truth. Fact checkers have recognized this with various kinds of "True but False" designations. But the fact checkers tend to stink at applying consistent rules to the "True but False" game by creating examples in the "True but False but True" genre.

PolitiFact created a classic in the "True but False" genre for Sarah Palin (John McCain's pick for vice presidential nominee) years ago. Palin made a true statement about how U.S. military spending ranks worldwide as a measure of GDP. PolitiFact researched the ways in which that truth misled people and gave Palin a "Mostly False" rating.

On July 29, 2019, PolitiFact gave a great example of the "True but False but True" genre with a fact check of a tweet by Alex Cole (side note: This one goes on the report card for "Tweets" instead of a report card for "Alex Cole"):


PolitiFact rated Cole's tweet "Mostly True." But the tweet has the same kind of misleading features that led PolitiFact to give Palin a "Mostly False" rating in the example above. PolitiFact docked Palin for daring to compare U.S. defense spending as a percentage of GDP to very small countries as well as those experiencing strife.

But who thinks the deficit at the start and end of an administration serves as a good measure of party fiscal discipline?

Yet that's the argument in Cole's tweet, and it gets a near-total pass from PolitiFact.


And this isn't even one of those situations where PolitiFact focused on the numbers to the exclusion of the underlying argument. PolitiFact amplified Cole's argument by repeating it.

Note PolitiFact's lead:
A viral post portrays Democrats, not Republicans, as the party of fiscal responsibility, with numbers about the deficit under recent presidents to make the case.
PolitiFact sends out the false message that the above argument is "Mostly True."

That's ridiculous. For starters, the deficit is best measured as a percentage of GDP. Also, presidents do not have great control over the rise and fall of deficits. PolitiFact pointed out that second factor but without giving it the weight it should have had in undercutting Cole's argument. After all, the tweet suggests the presidents drove deficit changes without any hint of any other explanation.

Yes, this is the same fact-checking operation that laughably assured us back in November 2018 that "PolitiFact is not biased."

PolitiFact could easily have justified giving Cole the same treatment it gave Palin. But it did not. And this type of scenario plays out repeatedly at PolitiFact, with conservatives getting the cold shoulder from PolitiFact's star chamber.

Whether or not the liberal bloggers at PolitiFact are self-aware to the point of seeing their own bias, it comes out in their work.


Afters

Hilariously, in this article PolitiFact dinged the deficit tweet for using a figure of $1.2 trillion for the end of the George W. Bush presidency:
"(George W.) Bush 43 took it from 0 to 1.2 trillion." This is in the ballpark. Ignoring the fact that he actually started his presidency with a surplus, Bush left office in 2009 with a federal deficit of roughly $1.41 trillion.
Why is it funny?

It's funny because one of the PolitiFact articles cited in this one prefers the $1.2 trillion figure over the $1.4 trillion figure:

The Great Recession hit hard in 2008 and grew worse in 2009. In that period, the unemployment rate doubled from about 5 percent to 10 percent. With Democrats in charge of both houses of Congress and the White House, Washington passed a stimulus package that cost nearly $190 billion, according to the Congressional Budget Office. That included over $100 billion in new spending and a somewhat smaller amount in tax cuts, about $79 billion in fiscal year 2009.

George W. Bush was not in office when those measures passed. So a more accurate number for the deficit he passed on might be closer to $1.2 trillion.
But it's just fact-checking, so inaccuracy is okay so long as it's in the service of a desirable narrative.


Friday, July 12, 2019

PolitiFact Unplugs 'Truth-O-Meter' for Elizabeth Warren

We seem to be seeing an increase of fact check stories from PolitiFact that do not feature any "Truth-O-Meter" rating. One of the latest pleads that it simply did not have enough information to offer a rating of Democratic presidential candidate Elizabeth Warren's claim that the U.S. Women's National Team (soccer) pulls in more revenue while receiving less pay than the men.

But look at the low-hanging fruit!


The women on the USWNT are not doing equal or better work than the men if the women cannot beat the men on the pitch. The level of competition is lower for women's soccer. And Warren's introduction to her argument is not an equal pay for equal work argument. It is an argument based on market valuation aside from the quality of the work.

It's reasonable to argue that if the women's game consistently creates more revenue than the men's game then the women deserve more money than the men.

That's not an equal pay for equal work argument. Not by any stretch of the imagination.

It was ridiculous for Warren to make that stretch in her tweet and typical of left-leaning PolitiFact to ignore it in favor of something it would prefer to report.

Did that principle of burden of proof disappear again?

PolitiFact's statement of principles includes a "burden of proof" criterion that PolitiFact uses hypocritically: it dings politicians who make claims they don't back up, while allowing PolitiFact to rate those claims "False" even when PolitiFact has not shown them false.

The principle pops out of existence at times. Note what PolitiFact says about its evidence touching Warren's claim:
Ultimately, the compensation formulas are too variable — and too little is known about the governing documents — for us to put Warren’s claim on the Truth-O-Meter.
So instead of the lack of evidence leading to a harsh rating for Warren, in this case it leads to no "Truth-O-Meter" rating at all.

Color us skeptical that PolitiFact could clear up the discrepancy if it bothered to try.


Afters

Given Warren's clear reference to "equal pay for equal work," we should expect a fact checker to note that women who compete professionally in soccer cannot currently field a team that would beat a professional men's team.

Not a peep from PolitiFact.

Women's national teams do compete against men on occasion. That is, they do practice scrimmages against young men on under-17 and under-15 teams. And the boys tend to win.

But PolitiFact is content if you don't know that. Nor does its audience need to know that the U.S. Women's National Team's success makes no kind of coherent argument for equal pay for equal work.

Wednesday, May 29, 2019

More Deceptive "Principles" from PolitiFact

PolitiFact supposedly has a "burden of proof" principle that it uses to help judge political claims. If a politician makes a claim and supporting evidence doesn't turn up, PolitiFact considers the claim false.

PolitiFact Executive Director Aaron Sharockman expounded on the "burden of proof" principle on May 15, 2019 while addressing a gathering at the U.S. Embassy in Ethiopia:
If you say something, if you make a factual claim, online, on television, in the newspaper, you should be able to support it with evidence. And if you cannot or will not support that claim with evidence we say you're guilty.

We'll, we'll rate that claim negatively. Right? Especially if you're a person in power. You make a claim about the economy, or health, or development, you should make the claim with the information in your back pocket and say "Here. Here's why it's true." And if you can't, well, you probably shouldn't be making the claim.
As with its other supposed principles, PolitiFact applies "burden of proof" inconsistently. PolitiFact often telegraphs its inconsistency by publishing a 'Splainer or "In Context" article like this May 24, 2019 item:


PolitiFact refrains from putting Milano's statement on its cheesy "Truth-O-Meter" because PolitiFact could not figure out if her statement was true.

Now doesn't that sound exactly like a potential application of the "burden of proof" criterion Sharockman discussed?

Why isn't Milano "guilty"?

In this case PolitiFact found evidence Milano was wrong about what the bill said. But the objective and neutral fact-checkers still could not bring themselves to rate Milano's claim negatively.

PolitiFact (bold emphasis added):
Our conclusion

Milano and others are claiming that a new abortion law in Georgia states that women will be subject to prosecution. It actually doesn’t say that, but that doesn’t mean the opposite — that women can’t be prosecuted for an abortion — is true, either. We’ll have to wait and see how prosecutors and courts interpret the laws before we know which claim is accurate. 
What's so hard about applying principles consistently? If somebody says the bill states something and "It actually doesn't say that" then the claim is false. Right? It's not even a burden of proof issue.

And if somebody says the bill will not allow women to be prosecuted, and PolitiFact wants to use its "burden of proof" criterion to fallaciously reach the conclusion that the statement was false, then go right ahead.

Spare us the lily-livered inconsistency.

Tuesday, October 2, 2018

Again: PolitiFact vs PolitiFact

In 2013, PolitiFact strongly implied (it might say it "declared") that its "Lie of the Year" award went to President Obama's promise that, under his health care overhaul, the Affordable Care Act, people could keep the plans they liked.

In 2018, PolitiFact Missouri (with editing help from longtime PolitiFacter Louis Jacobson) suffered acute amnesia about its 2013 "Lie of the Year" pronouncements.


PolitiFact Missouri rates Republican Josh Hawley's claim that millions of Americans lost their health care plans "Mostly False."

Yet in 2013 it was precisely the loss of millions of health care plans that PolitiFact advertised as its reason for giving Mr. Obama its "Lie of the Year" award (bold emphasis added):
It was a catchy political pitch and a chance to calm nerves about his dramatic and complicated plan to bring historic change to America’s health insurance system.

"If you like your health care plan, you can keep it," President Barack Obama said -- many times -- of his landmark new law.

But the promise was impossible to keep.

So this fall, as cancellation letters were going out to approximately 4 million Americans, the public realized Obama’s breezy assurances were wrong.
Hawley tried to use PolitiFact's finding against his election opponent, incumbent Sen. Claire McCaskill (D-Mo.) (bold emphasis added):
"McCaskill told us that if we liked our healthcare plans, we could keep them. She said the cost of health insurance would go down. She said prescription drug prices would fall. She lied. Since then, millions of Americans have lost their health care plans."

Because of the contradiction between Hawley’s assertion and the promises of the ACA to insure more Americans, we decided to take a closer look.
So, despite the fact that PolitiFact says millions lost their health care plans and that the breezy assurance to the contrary was wrong, PolitiFact says it gave Hawley's claim a closer look because it contradicted assurances that the ACA would insure more Americans.

Apparently it doesn't matter to PolitiFact that Hawley was specifically talking about losing health care plans and not losing health insurance completely. In effect, PolitiFact Missouri disavows any knowledge that the promise "if we liked our healthcare plans, we could keep them" was a false promise. The fact checkers substitute loss of health insurance for the loss of health care plans and give Hawley a "Mostly False" rating based on their own fallacy of equivocation (ambiguity).

A consistent PolitiFact could have performed this fact check easily. It could have looked at whether McCaskill made the same promise Obama made. And after that it could have remembered that it claimed to have found Obama's promise false along with the reasoning it used to justify that ruling.

Instead, PolitiFact Missouri delivers yet another outstanding example of PolitiFact inconsistency.



Afters:

Do we cut PolitiFact Missouri a break because it was not around in 2013?

No we do not.

Exhibit 1: Louis Jacobson, who has been with PolitiFact for over 10 years, is listed as an editor.

Exhibit 2: Jacobson, beyond a research credit on the "Lie of the Year" article we linked above, wrote a related fact check on the Obama administration's attempt to explain its failed promise.

There's no excuse for this type of inconsistency. But bias offers a reasonable explanation for this type of inconsistency.



Saturday, August 25, 2018

PolitiFact's Fallacious "Burden of Proof" Bites a Democrat? Or Not

We're nonpartisan because we defend Democrats unfairly harmed by the faulty fact checkers at PolitiFact.

See how that works?

On with it, then:

Oops.

Okay, we made a faulty assumption. When we saw PolitiFact's liberal audience complaining about its treatment of Nelson, we assumed Nelson had received a "False" rating for failing to offer evidence to support his claim.

But PolitiFact did not give Nelson a "Truth-O-Meter" rating at all. Instead of the "Truth-O-Meter" graphic for the claim (there is none), PolitiFact gave its readers the "Share The Facts" version:



Republicans (and perhaps Democrats) have received poor ratings in the past where evidence was lacking, which PolitiFact justifies according to its "burden of proof" criterion. But either the principle has changed or else PolitiFact made an(other) exception to aid Nelson.

If the principle has changed that's good. It's stupid and fallacious to apply a burden of proof standard in fact checking, at least where one determines a truth value based purely on the lack of evidence.

But it's small consolation to the people PolitiFact unfairly harmed in the past with its application of this faulty principle.


Afters:

In April 2018 it looks like the "burden of proof" principle was still a principle.



As we have noted before, it often appears that PolitiFact's principles are more like guidelines than actual rules.

And to maintain our nonpartisan street cred, here's PolitiFact applying the silly burden of proof principle to a Democrat:


If "burden of proof" counts as one of PolitiFact's principles, then PolitiFact can claim to be a principled fact checker only if the Nelson exception rests on a principled reason justifying the exception.

If anyone can find anything like that in the non-rating rating of Nelson, please drop us a line.

Sunday, March 18, 2018

PolitiFact: It's 'Half True' and 'Mostly True' that President Obama doubled the debt

Twitterer Ely Brit (@RealElyBritt) tweeted out this comparison of past PolitiFact ratings on March 17, 2018:


For the Trump fact check, PolitiFact came down hard on The Donald for placing blame too squarely on President Obama when Congress controls the federal government's purse strings.

On the other hand, it's hard to see how Sen. Paul eases up on placing the blame, unless he gets a bipartisan pass for blaming President Bush for the earlier debt increase.

Apart from that, neither Trump nor Paul received a "True" rating because Congress shares the blame for government spending.

Right, PolitiFact?


O-kay, then.



Correction 3/18/2018: Corrected transposed misspellings of Ely Brit's name. Our apologies to Brit. 
Correction 3/18/2018: Commenter YuriG pointed out that I (Bryan) used "deficit" in the headline, conflicting with the content of the post. Changed "deficit" to "debt" in the headline. Our thanks to YuriG for taking the time to point out the problem.

Saturday, February 10, 2018

How we made our meme mocking PolitiFact

Earlier this week we noticed PolitiFact making yet another hypocritical declaration. PolitiFact has ruled it misleading to use "cuts" to refer to reductions to a future projected spending baseline. In many cases a budget might increase year by year but the legislature "cuts" spending by slowing its increase.

In the past, we've pointed out how PolitiFact tended to rate Republicans "Mostly False" for claiming the Affordable Care Act cut Medicare by hundreds of billions of dollars. When President Donald J. Trump and the Republican Congress tried the same thing with Medicaid in 2017, PolitiFact discovered that the claim was "Half True" on the few occasion(s?) it noticed the Democrats' ubiquitous claim, and then quickly lost interest.

Fast forward to 2018, and PolitiFact published a fact check of a Trump statement about protests over the United Kingdom's National Health Service, its universal care program. PolitiFact treated Trump unfairly by rating him on something he did not say, but what really knocked our socks off was a sentence PolitiFact reeled off in its summary:
While the NHS has lost funding over the years, the march that took place was not in opposition to the service, but a call to increase funding and stop austerity cuts towards health and social care.
The problem? You guessed it! Spending has gone up for the NHS pretty consistently. The fact checkers at Britain's Full Fact even did a fact check in January 2018 relating to NHS funding. It only reported spending going up.
Spending on the NHS in England has increased in real terms by an average of around 1% a year since 2010. Since the NHS was established spending increases have averaged 4% per year.
So the NHS hasn't "lost funding" except against a baseline of projected future spending. The austerity "cuts" PolitiFact reports are a slowing of the growth in future spending.

PolitiFact is making a claim it has rated "Half True" and worse in the past.

We don't appreciate that type of hypocrisy from a supposedly non-partisan and objective fact checker. So we went to work on a meme.

First, we looked at PolitiFact's list of stories with the "Medicare" tag. We knew we'd find stories reporting on budget cuts to a baseline. And from those stories we looked for one with a summary that would fit the present case. It didn't take long. We found a "Half True" rating from PolitiFact Ohio that fit the bill:


"So-called cut reflects savings from slowing growth in spending." Doesn't that sound much better than "cutting Medicare"? Hurrah! It's savings!

Our next step was to replace the text and image to the left of the "Truth-O-Meter" graphic. We decided to pin the blame on PolitiFact Editor Angie Drobnic Holan instead of on the intern who wrote and researched the fact check. Holan had good reason to know PolitiFact's history on rating cuts to a future baseline.

We took Holan's image from her Twitter account.

We replaced the text with Holan's name and the outrageous quotation from the Trump fact check.

We credited our faux fact check to "PolitiFact National" on the day the Trump fact check came out. We skipped the em-dash this time since it takes a few extra steps.

And we put a big "PARODY" watermark on the whole thing to make clear we're not trying to trick anybody. The point is to mock PolitiFact for its inconsistency.

Our finished product:


Seriously: It's ridiculous for a national fact-checking service to do such a poor job of reporting consistently. Holan is the chief editor, and she doesn't notice this clear problem? She let the intern down by not catching it. And how long will it take to correct the problem? Eternity?

PolitiFact's past work on budget cuts is already so chaotic that one more miss hardly matters. We don't expect anything to change. PolitiFact will go right on giving readers a slanted view of budget cuts.

For that matter, we expect the other two of America's "elite three" fact checkers to independently follow the same misleading pattern PolitiFact uses. That's what happens when all three lean left.

Monday, February 5, 2018

Does "lowest" mean something different in Georgia than it does in Texas?

Today PolitiFact National, posing as PolitiFact Georgia, called it "Mostly True" that Georgia has the lowest minimum wage in the United States.

Georgia law sets the minimum wage at $5.15 per hour, the same rate Wyoming uses, and the federal minimum wage of $7.25 applies to all but a very few Georgians. PolitiFact National Georgia hit Democrat Stacey Evans with a paltry "Mostly True" rating:
Evans said Georgia "has the lowest minimum wage in the country."

Georgia’s minimum wage of $5.15 per hour is the lowest in the nation, but Wyoming also has the same minimum wage.

Also, most of Georgia’s workers paid hourly rates earn the federal minimum of $7.25.

Evans’ statement is accurate but needs clarification or additional information. We rate it Mostly True.
Sounds good. No problem. Right?

Eh. Not so fast.

Why is it okay in Georgia for "lowest" to reasonably cover a two-way tie with Wyoming, while in Texas using "lowest" where there's a three-way tie earns the speaker a "False" rating?



How did PolitiFact Texas justify the "False" rating it gave the Republican governor (bold emphasis added)?
Abbott tweeted: "The Texas unemployment rate is now the lowest it’s been in 40 years & Texas led the nation last month in new job creation."

The latest unemployment data posted when Abbott spoke showed Texas with a 4 percent unemployment rate in September 2017 though that didn't set a 40-year record. Rather, it tied the previous 40-year low set in two months of 2000.

Abbott didn’t provide nor did we find data showing jobs created in each state in October 2017.

Federal data otherwise indicate that Texas experienced a slight decrease in jobs from August to September 2017 though the state also was home to more jobs than a year earlier.

We rate this claim False.
A tie goes to the Democrat, apparently.

We do not understand why it is not universally recognized that PolitiFact leans left.



Correction/clarification Feb. 5, 2018:
Removed unneeded "to" from the second paragraph. And added a needed "to" to the next-to-last sentence.


Tuesday, January 16, 2018

PolitiFact goes partisan on the "deciding vote"

When does a politician cast the "deciding vote"?

PolitiFact apparently delivered the definitive statement on the issue on Oct. 6, 2010 with an article specifically titled "What makes a vote 'the deciding vote'?"

Every example of a "deciding vote" in that article received a rating of "Barely True" or worse (PolitiFact now calls "Barely True" by the name "Mostly False"). And each of the claims came from Republicans.

What happens when a similar claim comes from a Democrat? Now we know:


Okay, okay, okay. We have to consider the traditional defense: This case was different!

But before we start, we remind our readers that cases may prove trivially different from one another. It's not okay, for example, if the difference is that this time the claim came from a woman, or this time the case is from Florida and not Georgia. Using trivial differences to justify a ruling represents the fallacy of special pleading.

No. We need a principled difference to justify the ruling. Not a trivial difference.

We'll need to look at the way PolitiFact justified its rulings.

First, the "Half True" for Democrat Gwen Graham:
Graham said DeSantis casted the "deciding vote against" the state's right to protect Florida waters from drilling.

There’s no question that DeSantis’ vote on an amendment to the Offshore Energy and Jobs Act was crucial, but saying DeSantis was the deciding vote goes too far. Technically, any of the 209 other people who voted against the bill could be considered the "deciding vote."

Furthermore, the significance of Grayson’s amendment is a subject of debate. Democrats saw it as securing Florida’s right to protect Florida waters, whereas Republicans say the amendment wouldn’t have changed the powers of the state.

With everything considered, we rate this claim Half True.
Second, the "Mostly False" for the National Republican Senatorial Committee (bold emphasis added):
The NRSC ad would have been quite justified in describing Bennet's vote for either bill as "crucial" or "necessary" to passage of either bill, or even as "a deciding vote." But we can't find any rationale for singling Bennet out as "the deciding vote" in either case. He made his support for the stimulus bill known early on and was not a holdout on either bill. To ignore that and the fact that other senators played a key role in completing the needed vote total for the health care bill, leaves out critical facts that would give a different impression from message conveyed by the ad. As a result, we rate the statement Barely True.
Third, the "False" for Republican Scott Bruun:
(W)e’ll be ridiculously lenient here and say that because the difference between the two sides was just one vote, any of the members voting to adjourn could be said to have cast the deciding vote.
The Bruun case doesn't help us much. PolitiFact said Bruun's charge about the "deciding" vote was true but only because its judgment was "ridiculously lenient." And the ridiculous lenience failed to get Bruun's rating higher than "False."  So much for PolitiFact's principle of rating two parts of a claim separately and averaging the results.

Fourth, we look at the "Mostly False" rating for Republican Ron Johnson:
In a campaign mailer and other venues, Ron Johnson says Feingold supported a measure that cut more than $500 billion from Medicare. That makes it sound like money out of the Medicare budget today, when Medicare spending will actually increase over the next 10 years. What Johnson labels a cut is an attempt to slow the projected increase in spending by $500 billion. Under the plan, guaranteed benefits are not cut. In fact, some benefits are increased. Johnson can say Feingold was the deciding vote -- but so could 59 other people running against incumbents now or in the future.

We rate Johnson’s claim Barely True.
We know from earlier research that PolitiFact usually rated claims about the ACA cutting Medicare as "Mostly False." So this case doesn't tell us much, either. The final rating for the combined claims could end up "Mostly False" if PolitiFact considered the "deciding vote" portion "False" or "Half True." It would all depend on subjective rounding, we suppose.

Note that PolitiFact Florida cited "What makes a vote 'the deciding vote'?" for its rating of Gwen Graham. How does a non-partisan fact checker square Graham's "Half True" rating with the ratings given to Republicans? Why does the fact check not clearly describe the principle that made the difference for Graham's more favorable rating?

As far as we can tell, the key difference comes from party affiliation, once again suggesting that PolitiFact leans left.


After the page break we looked for other cases of the "deciding vote."

Thursday, January 4, 2018

No Underlying Point For You!

PolitiFact grants Trump no underlying point on his claim about the GOP lock on a senate seat



The NBC sitcom "Seinfeld" featured an episode focused in part on the "Soup Nazi." The "Soup Nazi" was the proprietor of a neighborhood soup shop who would refuse service in response to minor breaches of etiquette, often with a shouted "No soup for you!"

PolitiFact's occasional refusal to allow for the validity of an underlying point reminded us of the "Soup Nazi," and it gives rise to our new series of posts documenting PolitiFact's occasional failure to recognize underlying points.

PolitiFact's statement of principles assures readers that it takes a speaker's underlying point into account (bold emphasis added):
We examine the claim in the full context, the comments made before and after it, the question that prompted it, and the point the person was trying to make.
We see credit for the speaker's underlying point on full display in this Feb. 14, 2017 rating of Sen. Bernie Sanders, who had sought the Democratic nomination for president of the United States (bold emphasis added):
Sanders said, "Before the Affordable Care Act, (West Virginia’s) uninsured rate for people 64 to 19 was 29 percent. Today, it is 9 percent."

Sanders pointed to one federal measurement, though it has methodological problems when drilling down to the statistics for smaller states. A more reliable data set for West Virginia’s case showed a decline from 21 percent to 9 percent. The decline was not as dramatic as he’d indicated, but it was still a significant one.

We rate the statement Mostly True.
Sanders' point was the decline in the uninsured rate owing to the Affordable Care Act, and we see two ways to measure the degree of his error. Sanders used the wrong baseline for his calculation, 29 percent instead of 21 percent. That represents a 38 percent exaggeration. Or we can look at the difference in the change from that baseline to reach Sanders' (accurate) 9 percent figure. That calculation results in a percentage error of 67 percent.

PolitiFact, despite an error of at least 38 percent, gave Sanders a "Mostly True" rating because Sanders was right that a decline took place.

For comparison, Donald Trump tweeted that former associate Steve Bannon helped lose a senate seat Republicans had held for over 30 years. The GOP had held the seat for a mere 21 years. Using 31 years as a number greater than 30 years, Trump exaggerated by about 48 percent. And PolitiFact rated his claim "False":
Trump said the Senate seat won by Jones had been "held for more than thirty years by Republicans." It hasn’t been that long. It’s been 21 years since Democrat Howell Heflin retired, paving the way for his successor, Sessions, and Sessions’ elected successor, Jones. We rate the statement False.
Can the 10 percentage point difference by itself move the needle from "Mostly True" to "False"?
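For readers who want to check the arithmetic, each exaggeration figure above follows the same formula: (claimed − actual) / actual. Here's a quick sketch (our own illustration, not anything PolitiFact publishes):

```python
def exaggeration(claimed, actual):
    """Relative overstatement of a figure, as a percentage:
    (claimed - actual) / actual * 100."""
    return (claimed - actual) / actual * 100

# Sanders' baseline: claimed 29 percent uninsured vs. the more reliable 21 percent.
print(round(exaggeration(29, 21)))          # 38

# Or compare the claimed decline (29 -> 9) with the actual decline (21 -> 9).
print(round(exaggeration(29 - 9, 21 - 9)))  # 67

# Trump's "more than thirty years" (at least 31) vs. the actual 21 years.
print(round(exaggeration(31, 21)))          # 48
```

By either yardstick, Sanders' overstatement lands in the same neighborhood as Trump's, which is exactly the inconsistency the two ratings fail to reflect.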

Was Trump making the point that the GOP had controlled that senate seat for a long time? That seems undeniable. Is 21 years a long time to control a senate seat? That likewise appears undeniable. Yet Trump's underlying point, in contrast to Sanders', was apparently a complete non-factor when PolitiFact chose its rating.

We say that inconsistency is a bad look for a non-partisan fact checker.

On the other hand, we might predict this type of inconsistency from a partisan fact checker.

Thursday, December 7, 2017

Another partisan rating from bipartisan PolitiFact

"We call out both sides."

That is the assurance that PolitiFact gives its readers to communicate to them that it rates statements impartially.

We've pointed out before, and we will doubtless repeat it in the future, that rating both sides serves as no guarantee of impartiality if the grades skew left whether rating a Republican or a Democrat.

On December 1, 2017, PolitiFact New York looked at Albany Mayor Kathy M. Sheehan's claim that simply living in the United States without documentation is not a crime. PolitiFact rated the statement "Mostly True."


PolitiFact explained that while living illegally in the United States carries civil penalties, it does not count as a criminal act. So, "Mostly True."

Something about this case reminded us of one from earlier in 2017.

On May 31, 2017, PolitiFact's PunditFact looked at Fox News host Gregg Jarrett's claim that collusion is not a crime. PolitiFact rated the statement "False."


Upon examination, these cases prove very similar in everything but the ratings.

Sheehan defended Albany's sanctuary designation by suggesting that law enforcement need not look at immigration status because illegal presence in the United States is not a crime.

And though PolitiFact apparently didn't notice, Jarrett made the point that Special Counsel Mueller was put in charge of investigating non-criminal activity (collusion). Special counsels are typically appointed to investigate crimes, not to find out whether a crime was committed.

On the one hand, Albany police might ask a driver for proof of immigration status. The lack of documentation might lead to the discovery of criminal acts such as entering the country illegally or falsifying government documents.

On the other hand, the Mueller investigation might investigate the relationship (collusion) between the Trump campaign and Russian operatives and find a conspiracy to commit a crime. Conspiring to commit a crime counts as a criminal act.

Sheehan and Jarrett were making essentially the same point, though collusion by itself doesn't even carry a civil penalty like undocumented immigrant status does.

So there's PolitiFact calling out both sides. Sheehan and Jarrett make almost the same point. Sheehan gets a "Mostly True" rating. Jarrett gets a "False."

That's the kind of non-partisanship you get when liberal bloggers do fact-checking.



Afters

Just to hammer home the point that Jarrett was right, we will review the damning testimony of the three impartial experts who helped PunditFact reach the conclusion that Jarrett was wrong.
Nathaniel Persily at Stanford University Law School said one relevant statute is the Bipartisan Campaign Reform Act of 2002.

"A foreign national spending money to influence a federal election can be a crime," Persily said. "And if a U.S. citizen coordinates, conspires or assists in that spending, then it could be a crime."
The conspiracy to commit the crime, not the mere collusion, counts as the crime.

Next:
Another election law specialist, John Coates at Harvard University Law School, said if Russians aimed to shape the outcome of the presidential election, that would meet the definition of an expenditure.

"The related funds could also be viewed as an illegal contribution to any candidate who coordinates (colludes) with the foreign speaker," Coates said.
Conspiring to collect illegal contributions, not mere collusion, would count as the crime. Coates also offered the example of conspiring to commit fraud.
Josh Douglas at the University of Kentucky Law School offered two other possible relevant statutes.

"Collusion in a federal election with a foreign entity could potentially fall under other crimes, such as against public corruption," Douglas said. "There's also a general anti-coercion federal election law."
The corruption, not the mere collusion, would count as the crime.

How PolitiFact missed Jarrett's point after linking to the article he wrote explaining what he meant is far beyond us.