Sunday, July 14, 2019

PolitiFact Texas punches "spin cycle" for Julián Castro

Is it possible left-leaning fact checkers still do not realize how their ideologies affect their work?

Consider PolitiFact Texas.


See what PolitiFact Texas did there?

It presents an accurate hybrid paraphrase quotation of Castro asserting that Section 1325 of U.S. immigration law was put into place in 1929 by a segregationist. And promptly spins the Castro claim into the innocuous-but-loaded "When did it become a crime to cross the U.S.-Mexico border?"

Hilariously, the fact check spends most of its time examining facts other than when it became a crime to cross the border. Instead, it focuses on whether Section 1325 was enacted in 1929 (finding it was not) and whether the legislator who wrote the legislation was a segregationist.

PolitiFact Texas used nine paragraphs to address the segregationist past of Sen. Coleman Livingston Blease, the man who composed the language of an immigration bill in 1929.

Castro was evidently trying to make the point that he wanted to repeal a racist piece of legislation, racist because a segregationist wrote it. Castro was using the genetic fallacy on his audience. PolitiFact took no note of it, instead playing along by fact-checking whether Blease was a segregationist and finding a politically active expert to opine that the Blease-authored legislation was aimed at immigration from Mexico.

Such background does not help establish when it became a crime (at least under certain conditions) to cross the U.S.-Mexico border. It's background information that just happens (?) to support the subtext of Castro's claim.

As for Castro's implication that it was Blease who implemented the policy--as though a U.S. senator has that kind of power--well, that's just not the sort of thing that interests PolitiFact Texas.

In the end, PolitiFact Texas found it false that Blease authored the section of the immigration law Castro mentioned.

But why should that stand in the way of a favorable "Mostly True" rating?

PolitiFact's summary conclusion (bold emphasis added):
Castro said Section 1325 immigration policy, which makes it a crime to enter the country illegally, was "put into place in 1929, by a segregationist."

Technically Blease — a white supremicist [sic] who advocated for segregationist policies and lynching —  was not the author of the statute on illegal entry into the United States as it exists in today’s immigration code. 

But it was the first policy criminalizing all unlawful entry at the nation's southern border, and is considered the foundation of the 1952 policy that  evolved into today's Section 1325.

We rate this claim Mostly True.
Technically false, therefore "Mostly True."

That's how PolitiFact rolls. That's how PolitiFact Texas rolls.

It's spin, not fact-checking. Castro did not assert that the criminalization policy started in 1929. Castro asserted that Section 1325 was put into place in 1929.

Fact checkers should prove capable of noticing the difference. And keeping the spin out of their fact checks.

Friday, July 12, 2019

PolitiFact Unplugs 'Truth-O-Meter' for Elizabeth Warren

We seem to be seeing an increase in fact check stories from PolitiFact that do not feature any "Truth-O-Meter" rating. One of the latest pleads that it simply did not have enough information to offer a rating of Democratic presidential candidate Elizabeth Warren's claim that the U.S. Women's National Team (soccer) pulls in more revenue while receiving less pay than the men.

But look at the low-hanging fruit!


The women on the USWNT are not doing equal or better work than the men if the women cannot beat the men on the pitch. The level of competition is lower for women's soccer. And Warren's introduction to her argument is not an equal pay for equal work argument. It is an argument based on market valuation aside from the quality of the work.

It's reasonable to argue that if the women's game consistently creates more revenue than the men's game then the women deserve more money than the men.

That's not an equal pay for equal work argument. Not by any stretch of the imagination.

It was ridiculous for Warren to make that stretch in her tweet and typical of left-leaning PolitiFact to ignore it in favor of something it would prefer to report.

Did that principle of burden of proof disappear again?

PolitiFact's statement of principles includes a "burden of proof" principle that PolitiFact uses to hypocritically ding politicians who make claims they don't back up, all while allowing itself to give those politicians ratings such as "False" even when PolitiFact has not shown the claim false.

The principle pops out of existence at times. Note what PolitiFact says about its evidence touching Warren's claim:
Ultimately, the compensation formulas are too variable — and too little is known about the governing documents — for us to put Warren’s claim on the Truth-O-Meter.
 So instead of the lack of evidence leading to a harsh rating for Warren, in this case it leads to no "Truth-O-Meter" rating at all.

Color us skeptical that PolitiFact could clear up the discrepancy if it bothered to try.


Afters

Given Warren's clear reference to "equal pay for equal work," we should expect a fact checker to note that women who compete professionally in soccer cannot currently field a team that would beat a professional men's team.

Not a peep from PolitiFact.

Women's national teams do compete against men on occasion. That is, they do practice scrimmages against young men on under-17 and under-15 teams. And the boys tend to win.

But PolitiFact is content if you don't know that. Nor does its audience need to know that the U.S. Women's National Team's success makes no kind of coherent argument for equal pay for equal work.

Thursday, June 27, 2019

Selection Bias, Magnified

How PolitiFact uses inconsistent application of principles to help Democrats, starring Beto O'Rourke


PolitiFact Bias has repeatedly pointed out how PolitiFact's selection bias problem serves as a trap for its left-leaning journalists (that likely means somewhere between most and all of them). Left-leaning journalists are likely to fact-check claims that look suspicious to left-leaning journalists.

But beyond that, left-leaning journalists may suffer the temptation of looking at statements through a left-leaning lens. Fact-checking a Democrat may lead to confirmation bias favoring the Democrat's statement. The journalist may, perhaps unconsciously, emphasize evidence confirming claims from liberal sources, or cut the fact-finding process short after finding enough to supposedly confirm what the Democrat said.

When Democratic presidential hopeful Beto O'Rourke claimed to have received more votes than any Democrat in the history of Texas, PolitiFact Texas fact-checked the claim and found it "True."

Note that the fact check was written by long-time PolitiFact staffer Louis Jacobson. PolitiFact National employs Jacobson.

It is literally true that O'Rourke received more votes than any Democrat ever has in the state of Texas. But literal truth is rarely the benchmark for fact checkers. In this case, we immediately noticed a problem with O'Rourke's claim of the sort that typically causes fact-checkers to find fault: As the number of voters in Texas grows, the number of raw votes received shrinks in significance. Measuring the percentage of the total vote (48.3 percent for O'Rourke) or the percentage of registered voters (about 25.6 percent) offers a more complete picture of a candidate's electoral strength in a given state.

For comparison, President Jimmy Carter won Texas in 1976 with 2,082,319 votes. Carter's percentage of the vote was 51.1 percent. His percentage of registered voters was 31.2 percent. It follows that Carter's performance in Texas was stronger than O'Rourke's even though Carter received about half as many votes as O'Rourke received.
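
Since the comparison rests on simple arithmetic, here is a minimal sketch (in Python) of how it works. The percentages and Carter's raw vote total are the figures cited above; the O'Rourke raw total is an outside approximation supplied only for illustration (roughly double Carter's, as noted), and the denominators are simply back-calculated from the cited percentages rather than taken from official records.

def implied_denominator(votes, share_pct):
    """Back out the denominator implied by a raw vote count and a percentage share."""
    return round(votes / (share_pct / 100))

candidates = {
    # name: (raw votes, share of total vote %, share of registered voters %)
    "O'Rourke (2018)": (4_045_632, 48.3, 25.6),  # raw-vote figure is an approximation, not cited above
    "Carter (1976)": (2_082_319, 51.1, 31.2),
}

for name, (votes, vote_pct, reg_pct) in candidates.items():
    print(f"{name}: {votes:,} votes = {vote_pct}% of roughly "
          f"{implied_denominator(votes, vote_pct):,} cast and "
          f"{reg_pct}% of roughly {implied_denominator(votes, reg_pct):,} registered voters")

Carter's smaller raw total still represents the larger share of both denominators, which is exactly the point a raw-vote comparison obscures.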

We pointed out the problem to a PolitiFact Texas employee on Twitter. PolitiFact elected not to update the story to address O'Rourke's potentially misleading point about his electoral strength.

But PolitiFact is justified in resisting the efforts of conservatives to "work the refs," right? Who other than right-wing zealots would think of trying to put the number of votes in context like we did?

Try the BBC, for starters. BBC noted that Hillary Clinton received the most presidential votes in history, then promptly tempered that statement of fact with a caveat:
So the proportion of Clinton votes might be more illuminating than simply how many votes she earned.
Indeed. And even PolitiFact Texas devoted more than one paragraph to the context O'Rourke had left out. Yet PolitiFact had the left-leaning sense not to let that missing information interfere with the "True" rating it bestowed on O'Rourke.
Our ruling

O’Rourke said that in 2018 when he ran for senator, "young voter turnout in early voting was up 500%. We won more votes than any Democrat has in the history of the state of Texas."

His assertion about young voter turnout is backed up by an analysis of state election data by the firm TargetSmart. And he’s correct that no Democrat has ever won more raw votes in a Texas statewide election than he has, an accomplishment achieved through a combination of his own electoral success, a pro-Democratic environment in 2018, and Texas’ rapid population growth in recent years.

We rate his statement True.
PolitiFact does not count the missing information as significant, even though it was apparently significant enough to mention in the story.

Partial review of PolitiFact's rating system:
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.

HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
If O'Rourke's statement did not need clarification or additional information, such as the growing number of voters in Texas, then why did PolitiFact provide that clarifying information?

These gray area "coin flips" between ratings offer yet another avenue for left-leaning fact-checkers to express their bias.

PolitiFact has never revealed any mechanism in its methodology that would address this weakness.

Thursday, June 13, 2019

Transparency: How to access PolitiFact's page of corrected or updated fact checks

It has long amused us here at PolitiFact Bias how difficult PolitiFact makes it for readers to navigate to its page of corrections and updates. There are pretty much three ways to navigate to the page.


Someone could link to it directly using the page URL.

This is the method PolitiFact uses to make finding the page seem easy-peasy in tweets or other messages. Works great!



The reader could use a search engine to find it.

No, not the search function at the PolitiFact website. That will not get you there.

We're talking about a search engine like Google or DuckDuckGo. Search politifact + corrections + and + updates and reaching the page is a snap.


The reader could navigate to the page from PolitiFact's homepage. Maybe. 

This is the amusing part. We've already noted that using the "search" function at the PolitiFact website won't reach its dedicated page of corrected and updated fact checks (other corrections and updates do not yet end up there, unfortunately).

And without a guide such as the one that follows, most people browsing PolitiFact's website would probably never stumble over the page.

How To Do It

Step 1: On the homepage, move the cursor to the top menu bar and hover over "Truth-O-Meter" to trigger the drop-down menu
Step 2: Move the cursor down that menu to "By Subject," click on "By Subject"
Step 3: On the "Subjects" page, move the cursor to the alphabet menu below the main menu, hover over "c," click "c"
Step 4: Move the cursor to the subjects listed under "c," move cursor to hover over "Corrections and Updates," click "Corrections and Updates"

Done! What could be easier?

The key? Knowing that PolitiFact counts "Corrections and Updates" as a category of "statements" defined by PolitiFact as Truth-O-Meter stories. The list of corrections and updates consists only of fact checks. Corrections or updates of explainer articles, promise ratings and flip-flop ratings (etc.) do not end up on PolitiFact's page of corrections and updates.

What you'll find under "c" at PolitiFact.com



Afters


When I (Bryan) designed the Zebra Fact Check website, I put the "Corrections" link on the main menu.



It's not all about criticizing PolitiFact. It's also about showing better and more transparent ways to do fact-checking.

This isn't exactly rocket science. Anybody can figure out that putting an item on the main menu makes it easy to find.

There is reason to suspect that PolitiFact is less than gung-ho about publicizing its corrections and updates.

Wednesday, May 29, 2019

More Deceptive "Principles" from PolitiFact

PolitiFact supposedly has a "burden of proof" principle that it uses to help judge political claims. If a politician makes a claim and supporting evidence doesn't turn up, PolitiFact considers the claim false.

PolitiFact Executive Director Aaron Sharockman expounded on the "burden of proof" principle on May 15, 2019 while addressing a gathering at the U.S. Embassy in Ethiopia:
If you say something, if you make a factual claim, online, on television, in the newspaper, you should be able to support it with evidence. And if you cannot or will not support that claim with evidence we say you're guilty.

We'll, we'll rate that claim negatively. Right? Especially if you're a person in power. You make a claim about the economy, or health, or development, you should make the claim with the information in your back pocket and say "Here. Here's why it's true." And if you can't, well, you probably shouldn't be making the claim.
As with its other supposed principles, PolitiFact applies "burden of proof" inconsistently. PolitiFact often telegraphs its inconsistency by publishing a 'Splainer or "In Context" article like this May 24, 2019 item:


PolitiFact refrains from putting actress Alyssa Milano's statement on its cheesy "Truth-O-Meter" because PolitiFact could not figure out if her statement was true.

Now doesn't that sound exactly like a potential application of the "burden of proof" criterion Sharockman discussed?

Why isn't Milano "guilty"?

In this case PolitiFact found evidence Milano was wrong about what the bill said. But the objective and neutral fact-checkers still could not bring themselves to rate Milano's claim negatively.

PolitiFact (bold emphasis added):
Our conclusion

Milano and others are claiming that a new abortion law in Georgia states that women will be subject to prosecution. It actually doesn’t say that, but that doesn’t mean the opposite — that women can’t be prosecuted for an abortion — is true, either. We’ll have to wait and see how prosecutors and courts interpret the laws before we know which claim is accurate. 
What's so hard about applying principles consistently? If somebody says the bill states something and "It actually doesn't say that" then the claim is false. Right? It's not even a burden of proof issue.

And if somebody says the bill will not allow women to be prosecuted, and PolitiFact wants to use its "burden of proof" criterion to fallaciously reach the conclusion that the statement was false, then go right ahead.

Spare us the lily-livered inconsistency.

Friday, May 17, 2019

PolitiFact gives "policy trajectory" a "False" rating

Earlier this week PolitiFact Executive Director Aaron Sharockman said PolitiFact does not rate opinions or predictions.

Also this week, PolitiFact Health Check, the PolitiFact partnership with Kaiser Health News, apparently contradicted Sharockman's claim.

Behold:


If it looks like PolitiFact is fact-checking a prediction, that's because PolitiFact is fact-checking a prediction.

More than one of PolitiFact's pool of four experts apparently saw it exactly that way (bold emphasis added):
But, Adler said, the structure of Trump’s claim — promising what his administration "will" do, rather than commenting on what it has done — leaves open the possibility of taking other steps to keep preexisting condition protections in place.

That’s true, other experts acknowledged. So far, the White House has postponed a legislative push until after the 2020 election — leaving a vacuum if the courts do wipe out the health law.
History shows that PolitiFact will not allow mere expert opinion to stand in the way of the hoped-for narrative. When experts offer opinions that do not fit comfortably with PolitiFact's conclusion, PolitiFact ignores them.

Hilariously, PolitiFact's conclusion uses language strongly suggesting an awareness that it is rating a prediction:
Our ruling

Trump said his administration will "always protect patients with preexisting conditions."

The White House’s policy trajectory does exactly the opposite.
What is "policy trajectory" if it is not a projection of what will happen tomorrow based on Trump administration policy today?

PolitiFact Health Check is not fact-checking Trump. It is rating a pledged policy position. PolitiFact could potentially address such cases with an expansion of its ratings of executive promises ("Trump-O-Meter" etc.).

But this so-called "fact check" makes Sharockman a liar if it isn't corrected somehow.

Monday, May 6, 2019

PolitiFact unfairly harms Joe Biden

On May 6, 2019, PolitiFact fact-checked a claim from Democratic Party presidential hopeful (and frontrunner) Joe Biden.

Biden said he was "always" labeled as one of the most liberal Democrats in Congress.

PolitiFact rated Biden's claim "False." Perhaps the rating is fair. But PolitiFact's would-be paraphrase of Biden's claim, below, treats Biden unfairly.


We think there's room for one to count as a "staunch liberal" without always counting as one of the most liberal.

PolitiFact, for purposes of its headline, changed Biden's claim from one to the other. In terms of its messaging, PolitiFact offers the opinion that Biden does not count as a staunch liberal.

We think fact checks should stick to the facts and not make headlines out of their opinions. PolitiFact's opinion, trumpeted above its fact check, unfairly harmed Biden.


Note: We have always said that PolitiFact's problems go beyond left-leaning bias. PolitiFact represents fact-checking done poorly. The bad fact-checking unfairly harms right and left, with the right getting the worst of it.

Thursday, May 2, 2019

Prescient Sen. Warren, or Gullible PolitiFact?

PolitiFact claims it is "True" Democratic presidential hopeful Sen. Elizabeth Warren saw the financial crisis of 2008 coming.


After announcing in the deck that it was true Warren saw the financial crisis coming, PolitiFact did what it does on occasion: It gave us a fact check that offered hardly any evidence in support of its conclusion.

Having written about this some on Twitter and Facebook already, I can tell our readers that some liberals aren't going to want to admit that Warren claimed she saw a particular crisis coming. But the crisis she supposedly foresaw wasn't merely a crisis for some number of subprime borrowers facing foreclosure. It wasn't just a crisis of predatory lenders preying on people.

It was the crisis that saw many banks failing, lenders not lending and millions losing their jobs.


PolitiFact left no doubt it understood Warren was saying she foresaw that particular crisis, leading with the following:
Democratic Sen. Elizabeth Warren warned about the financial crisis of the 2000s before it happened, she claimed during a CNN town hall where she pitched herself as the best option for president in the 2020 election.
PolitiFact provided no reasonable evidence to show Warren saw that crisis coming. But it still somehow reached the conclusion it was true Warren saw the 2008 financial crisis coming.

We'll review the evidence PolitiFact quoted, the evidence PolitiFact linked and finally look at evidence PolitiFact did not bother to mention.

Sifting the Would-be Evidence 

We start with PolitiFact's presentation of Warren's claim (bold emphasis added):
Warren, a former Harvard Law School professor, told an audience of college students that her whole life’s work has been "about what's happening to working families.

"And starting in the early 2000s, the crisis was coming. I was waving my arms, ringing the bell, doing everything I could. I said families are getting cheated all over this country," Warren said April 22 in Manchester, N.H. "It started when the mortgage companies targeted communities of color. They targeted seniors. They targeted Latinos. They came in and sold the worst possible mortgages and stripped wealth out of those communities, and then took those products across the nation. I went everywhere I could. I talked about it to anyone who would listen, a crisis is coming."

But nobody wanted to listen, Warren said, "so the crisis hit in 2007, 2008, and just took us down."
Warren said she saw a crisis coming. It was the one that hit in 2007, 2008 and "just took us down." She supposedly saw that coming.

The next paragraph from PolitiFact constitutes a non-sequitur (logical fallacy) that characterizes the whole of the fact check:
We confirmed that Warren did raise the alarm about the looming housing and financial crisis. She spoke about debt, financial lending practices and other factors affecting families and the economy years before the financial crisis peaked in 2008.
Does talking about debt, lending practices and other factors affecting the economy and families mean that one has raised the alarm about a looming financial crisis? We say it doesn't unless one says something specific about a looming financial crisis that suitably matches the one we had in 2008.


This cupboard is bare.

We'll hunt through every quotation PolitiFact used and survey every article PolitiFact linked in support of Warren. We cannot quote these sources exhaustively because of copyright issues. But we'll give our readers far more than PolitiFact gave its readers.

PolitiFact:
Warren’s presidential campaign cited several blog posts and comments to media outlets in 2005 and 2006, and Warren’s 2003 book, "The Two-Income Trap," co-authored with her daughter, Amelia Warren Tyagi, as examples of Warren warning about subprime lending and an imminent housing crisis.
PolitiFact does not quote from the listed blog posts (we'll get to those later). Instead PolitiFact leads its presentation of evidence with a quotation from the book it mentions in the same paragraph:
"In the overwhelming majority of cases, subprime lenders prey on families that already own their own homes, rather than expanding access to new homeowners. Fully 80 percent of subprime mortgages involve refinancing loans for families that already own their homes," Warren said in the book. "For these families, subprime lending does nothing more than increase the family's housing costs, taking resources away from other investments and increasing the chances that the family will lose its home if anything goes wrong."
There is no warning of any crisis in that paragraph. There's a warning about borrowing money from the more expensive subprime market. But that warning makes sense regardless of the possibility of an impending financial crisis.

At the risk of understatement, we find PolitiFact's Exhibit A in support of Warren's claim underwhelming.

For Exhibit B, PolitiFact trotted forth part of a 2004 PBS interview:
"I think what the landscape shows is the middle class is under assault in a way that has not happened before in our history," Warren said. "Stagnant wages, rising costs, wildly rising debt. It's in everyone's interest to turn that back around."
Again, there is no warning of any crisis resembling the 2008 financial crisis. Instead, Warren bemoans the fact that the middle class is supposedly under assault. She mentions wages, costs and debt but doesn't tie them together into any type of specific threat.

 PolitiFact cited The New York Times as its Exhibit C:
Professor Warren of Harvard believes that disaster lurks as homeowners borrow against their homes to forestall bankruptcy. When the stock market tumbled five years ago, people in trouble could sell stocks to stay afloat, she said. But home equity doesn't work the same way. As she put it, "You can't sell a part of your home like you could a stock in the stock market bubble."
Like the two preceding exhibits, Exhibit C does not offer any warning of a crisis, unless we count the personal crisis faced by homeowners facing foreclosure. That is the subject of the article and the group facing lurking disaster. The article's kicker quote--from Warren--helps cinch the case.

PolitiFact's Exhibit D consists of comments from a representative of the conservative Housing Center at the American Enterprise Institute.

How did co-director Ed Pinto support Warren's claim that she saw the financial crisis coming?

PolitiFact:
Warren was "substantially correct" in her assessment that home prices were going up rapidly relative to incomes (particularly for households with a one wage earner), said Ed Pinto, co-director of the Housing Center at the American Enterprise Institute, a conservative think tank.
If Warren was right that home prices were going up rapidly compared to incomes then that means there was an impending crisis? One that matches the financial crisis of 2008?

Please, where is the logic (and wouldn't we love to see the interview questions PolitiFact posed to the experts it cited!)?

With its Exhibit E, PolitiFact teases us with a subheader designed to foster (false) hope: "Consumer advocate groups credit Warren for alerting about the crisis"

Now we're getting somewhere?

PolitiFact:
"I remember (Warren) talking about credit card abuses and how they were harming families," [Deborah] Goldstein said. "People were using credit to manage basic daily expenses."

Goldstein said that her group, also concerned about the imminent financial crisis, in the early 2000s communicated with Warren on what could be done about it.
Summing up, we have a secondary source--an interest group that agrees with Warren--saying it was concerned about an imminent financial crisis and "communicated with Warren on what could be done about it."

PolitiFact offers nothing from the Center for Responsible Lending that supports its subheader.

Exhibit F gives us yet another empty endorsement of Warren:
"I'd give then-professor Warren the credit for banging the drum and ringing the bell early on unfair financial practices," said Ed Mierzwinski, senior director of the Federal Consumer Program at the U.S. Public Interest Research Group. Warren was the "No. 1 go-to academic expert" in the mid-to-late '90s and 2000s in the debate over changes to the bankruptcy code, he said.
Credit where it's due: If "banging the drum and ringing the bell early on unfair financial practices" was the same as "banging the drum and ringing the bell early on the 2008 financial crisis" then we'd have something. Hearsay, perhaps, lacking documented evidence in support, but at least hearsay would be something.

Exhibits A-F add up to nothing.

PolitiFact goes on to list some of Warren's accomplishments, as though its list somehow contributes to the case that Warren warned about the 2008 financial crisis (we don't see it).


Quoting the unquoted Warren

PolitiFact used a number of hotlinks, presenting them as though they support Warren's claim but without quoting from them (and most often not even paraphrasing or summarizing them).

We'll go through them in the order PolitiFact used them.

Talking Points Memo: "Is Housing More Affordable?"

Sen. Warren wrote a short blog post on Dec. 12, 2005 criticizing an article on home mortgages by David Leonhardt. Warren appeared to dispute Leonhardt's too-rosy picture of housing affordability.

We don't see anything reasonably taken as a warning about a future national financial crisis. It's hard to even pick out a quotation carrying a hint of that suggestion (please read it for yourself, link above).
(B)y picking the reference point as the early 1980s rather than the 1970s or the late 1980s, the NYT is benchmarking off the worst housing market in the second half of the 20th Century. Because inflation was out of control and mortgage rates were stratospheric, home buying was curtailed and housing markets suffered. Is that what we want to hold up as the model for comparison?
We find Warren's presentation well short of apocalyptic.

Talking Points Memo: "Middle Matters"

The next link, from May 26, 2005, leads to an even shorter four paragraph blog entry. Warren warns about pressure on the middle class:
The middle class is being carved up as the main dish in a corporate feast.  Strugging with flat incomes and rising costs for housing, health care, transportation, child care and taxes (yes, taxes), these folks are under a lot of financial strain.  And big corporate interests, led by the consumer finance industry, are devouring families and spitting out the bones.
Warning that the middle class may not always be with us thanks to a smorgasbord of costs serves as a weak foreshadowing of an impending financial crisis. Indeed, that crisis threatened some of the entities Warren blamed for pressuring the middle class.

Talking Points Memo: "Is Housing More Affordable?"

The third blog post PolitiFact linked was the same as the first.

PolitiFact's sidebar source list links three blog posts from Talking Points Memo but the text of the fact check contains three hotlinks referring to "several blog posts."

We'll take this space to note that the titles Warren chose for her blog posts seem pretty tame if she's going all out to warn people about an impending financial crisis ("I went everywhere I could. I talked about it to anyone who would listen, a crisis is coming.")

Talking Points Memo: "Foreclosures Up, Mortgage Brokers Keep on Selling"

The fourth link (third blog post), from April 21, 2006, did contain a warning. It noted that home foreclosures were up and suggested the housing bubble nationally was perhaps close to popping:
So why aren’t the mortgage lenders cutting back now? The problem, says my friend, is that no single bank or investment house owns those mortgages any more. They have passed them along to huge securitized pools, held in diverse ownership. That means a lot less oversight to be sure the big picture on lending makes any sense. And besides, the Army keeps on offering high returns, at least in the short run.

Nationally foreclosures are up 7% this quarter. That’s well behind Boston’s big numbers, but Boston was a leader during the boom. Will it now lead in the bust?
Note: Warren used "Army" in her post to describe the abundance of mortgage sellers.

Housing bubbles that burst do not routinely lead to a financial crisis like the one in 2008. As for the lending market making sense, it likely would have made more sense in the early 2000s if Republicans and Democrats alike had resisted the temptation to interfere in those markets by pushing and incentivizing lax lending standards. Government regulation was one of the problems leading to the crisis.

Note to Sen. Warren: If you're trying your best to warn people about an impending crisis, try emphasizing that idea in the titles you choose for the articles carrying the warning, with something like "Impending Crisis Looms." The technique makes it look like an idea you're trying to emphasize.


Judgment on PolitiFact

PolitiFact used quotations from Warren that did not support her claim to justify calling her claim "True." We think that speaks to PolitiFact's incompetence and secondarily to PolitiFact's leftward tilt.


Judgment on Sen. Warren

Thanks to a commenter at PolitiFact's Facebook page, we found stronger evidence supporting Warren's claim than PolitiFact was able to find. The commenter recalled seeing Warren on PBS sounding some kind of warning. When we found a search result from before 2008, we reviewed the text of an interview with Warren. The last line from Warren was exactly the type of evidence needed to find some truth in her claim:
But they don't see an economic threat to the banks from these massive bankruptcies?

Right now, they think that everyone can keep feeding and that there are still plenty of families to gobble up before they all head over the cliff, financially. But I have to tell you, the numbers are worrisome.
Based on this answer alone, we think Warren could reasonably receive a "Half True" rating. She described a risk of mass foreclosures that would threaten banks. That's short of describing the extent of the 2008 financial crisis, but at least she described one of the basic elements that helped lead to that crisis.

On the other hand, we saw little in the historical record to justify Warren's claim that she vigorously tried to broadcast a warning about an impending national financial crisis.

Perhaps Warren made other statements that would reasonably support her claim. But PolitiFact's fact check was our focus.  It was mostly an accident that we did a better job than PolitiFact at finding evidence supporting Warren.

Sunday, April 28, 2019

Great Moments in PolitiFact History I

PolitiFact published an impeachment PolitiSplainer on April 26, 2019 and pushed it on Twitter and Facebook.

PolitiFact emphasized that the last time a president was impeached was 20 years ago. The featured image? President Nixon.



Was Nixon president 20 years ago?

Was Nixon impeached?

Nice work, PolitiFact. That ought to help inflate the number of people who mistakenly believe that Nixon was impeached. And it keeps a Clinton from having his picture appear next to deck material announcing that it has been 20 years since a president was last impeached.

Thursday, April 25, 2019

Bernie Sanders + PolitiFact + Equivocation = "True"

When PolitiFact plucks a truth from a bed of untruth (or vice-versa) we call it "Tweezers" and tag the example with the "tweezers or tongs" tag.

But every once in a while PolitiFact goes beyond tweezing to pretend that the tweezed item and the bed of untruth were both true.

And that's the case with a PolitiFact Vermont fact check of Democratic Party presidential candidate Sen. Bernie Sanders (I-Vt.).

It's true, as Sanders said, that people in jail in Vermont may vote. Except perhaps those in jail convicted of voter fraud or other crimes that may run afoul of Vermont's constitutional stipulation that voters must maintain "quiet and peaceable behavior."

The problem occurs in the middle of Sanders' claim. Vermont's 1793 Constitution as originally adopted (like its 1777 Constitution before it) limited voting to men above a certain age. So it's just not true that it said "everybody can vote."

PolitiFact might have solved the problem in the fact check header by shortening the quotation with an ellipsis. Like this: "In my own state of Vermont ... people in jail can vote." That statement pretty much counts as true if we assume that everyone in jail is of quiet and peaceable behavior by Vermont's definition.

Unfortunately, the text of PolitiFact Vermont's fact check reinforces the false middle of Sanders' claim instead of either explicitly excluding it or providing accurate context. The fact check does not mention that Vermont's early constitution did not allow women to vote. Nor does it let on that men had to attain a certain age to vote.

Given those omissions, we count it a major victory that PolitiFact noted the stipulation that voters must be of "quiet and peaceable behavior"--not that the potential exceptions affected PolitiFact's rating of Sanders' claim:
Our ruling

Sanders said: "In my own state of Vermont, from the very first days of our state’s history, what our Constitution says is that everybody can vote. That is true. So people in jail can vote."

It’s true that Vermont felons can vote from prison today, and we can’t find anything to suggest that hasn’t always been the case in the state. Though it seems quite possible that the efforts being made today to allow them to cast ballots hasn’t always been made.

The Vermont Constitution requires people to be of "quiet and peaceable behavior," but otherwise places no restrictions on who can vote. And Sanders said prisoners "can" vote, not that they always have voted.

We rate this claim True.
 Though PolitiFact claims Vermont's constitution places no restrictions on who can vote (other than "quiet and peaceable behavior"), the fact is that Vermont places a number of restrictions on who can vote:
§ 42. [VOTER'S QUALIFICATIONS AND OATH]

Every person of the full age of eighteen years who is a citizen of the United States, having resided in this State for the period established by the General Assembly and who is of a quiet and peaceable behavior, and will take the following oath or affirmation, shall be entitled to all the privileges of a voter of this state:

You solemnly swear (or affirm) that whenever you give your vote or suffrage, touching any matter that concerns the State of Vermont, you will do it so as in your conscience you shall judge will most conduce to the best good of the same, as established by the Constitution, without fear or favor of any person.

Every person who will attain the full age of eighteen years by the date of the general election who is a citizen of the United States, having resided in this State for the period established by the General Assembly and who is of a quiet and peaceable behavior, and will take the oath or affirmation set forth in this section, shall be entitled to vote in the primary election.
PolitiFact's fact check fairly overflows with misinformation and ends up calling the false parts of Sanders' statement true.

Is this why we have fact checkers or what?

Monday, April 15, 2019

PolitiFact Bias fails to win a Pulitzer Prize for its Eighth Straight Year

Sad news: PolitiFact Bias failed to win a Pulitzer Prize in 2019. That makes eight years in a row PolitiFact Bias has failed to win a Pulitzer.

But there's an upside.

Pulitzer Prize-winning PolitiFact has failed to win a Pulitzer for 10 straight years, beating our streak by two years.

We track these numbers, by the way, because PolitiFact tries to use its Pulitzer Prize from 2009 as a type of mark of excellence endorsing the quality of its fact-checking.

We call that a crock. We've documented that Pulitzer juries do not fact check entries submitted for Pulitzer Prize consideration. And PolitiFact's set of entries in 2009 included its preposterous ruling that it was "Mostly True" that Barack Obama's uncle helped liberate Auschwitz.

We created this video a few years ago to commemorate PolitiFact's long-running failure to repeat its Pulitzer Prize success from 2009.

We still think it's funny. It's funnier every year, in fact.

Tuesday, April 9, 2019

PolitiFact: 'Tweets' is to blame!


As if we needed a new reason to condemn the utility of PolitiFact's "report card" featurettes.

PolitiFact, once a candidate has accumulated 10 or more "Truth-O-Meter" ratings, features a page showing a graphic display of the distribution of those ratings. Here's a typical one:


We've always objected to these "report cards" because PolitiFact allows selection bias to serve as the basis for its datasets. In other words, the set of statements making up the basis for the graph is not representative. PolitiFact makes no effort to ensure that it is representative.

Today we ran across a fresh reason for regarding the report cards as unrepresentative. A PolitiFact fact check found that a charge against President Trump, that he was calling illegal immigrants "animals," was "False." PolitiFact's fact check notes that Democratic presidential candidates Kirsten Gillibrand and Pete Buttigieg retweeted the falsehood along with comments condemning Trump's supposed choice of words. Trump, it turned out, was referring to gang members and not ordinary illegal immigrants.

By blaming the falsehood on "Tweets," PolitiFact need not sully the report cards of Gillibrand and Buttigieg.



Bad, naughty "Tweets"!

Good, virtuous Gillibrand. Just look at that report card! No "False" and no "Pants on Fire."


The likewise good and virtuous Buttigieg does not yet have enough Truth-O-Meter ratings to qualify for a report card. He has just one "True" rating and one "Half True" rating. And you can rest assured that when Buttigieg does have a report card graphic, his retweet of a falsehood about Trump will not appear on his record. "Tweets" gets the blame instead.

With just a glance at "Tweets'" report card, one can tell "Tweets" is less virtuous than either Gillibrand or Buttigieg.


Except we're kidding because comparing the report cards is a worthless exercise.

We've repeatedly called on PolitiFact to add disclaimers to its "report cards" informing readers that the report cards serve as no useful guide in deciding which candidate to support.

We believe PolitiFact resists that suggestion because it wants its worthless report cards to influence voters. Don't vote for naughty "Tweets." Vote for virtuous Kirsten Gillibrand. Or virtuous Pete Buttigieg. It's from PolitiFact. And it's science-ish.

Thursday, April 4, 2019

The Worst of PolitiFact's April 2, 2019 Reddit AMA

As we mentioned in a Feb. 2, 2019 post, we love it when PolitiFact folks do interviews. It's a near guarantee of generating material worth posting. In celebration of "International Fact-Checking Day," PolitiFact Executive Director Aaron Sharockman and PolitiFact Editor Angie Drobnic Holan conducted a Reddit "Ask Me Anything" event.

I asked PolitiFact to describe why it advocates transparency while keeping the identities and votes of its "Star Chamber" secret. PolitiFact's "Star Chamber" votes on each "Truth-O-Meter" rating. The majority vote rules, though PolitiFact claims it achieves unanimity for most votes. My question wasn't answered (no great surprise there).

Most of the interactions were boilerplate answers to boilerplate questions. But there were a few items of special interest.


Observing the PolitiFact Code?


Though Holan flatly said "Everything that gets a correction or an update gets tagged (see all tagged items)," we were ready with two recent cases contradicting her claim. And we let that cat out of the bag.


PolitiFact makes statements giving readers the impression that it scrupulously follows its code of principles. In fact, PolitiFact loosely follows its code of principles, as in this example. How do Holan and Sharockman not know this?

One of the examples we used was corrected on approximately March 16, 2019. Most of the uncertainty about the correction date comes from PolitiFact Virginia's decision not to mark the date of the correction. As of April 3, 2019, PolitiFact Virginia had not added the "Corrections or Updates" tag and the story did not appear on PolitiFact's supposed list of all of its corrected or updated stories.


Mythical Truth-O-Meter Consistency

One participant asked a question suggesting PolitiFact does not rate statements consistently (suggesting contemporary ratings of Trump make past ratings look far too harsh). Sharockman implied PolitiFact has kept its system consistent over the years:
But beyond the sheer volume [of Trump ratings--ed.], the standards we use to use [sic] to issue our ratings really hasn't [sic] evolved in the 11 years we've been doing this. In that sense, a Pants on Fire in 2009 should still be a Pants on Fire claim today, and vice versa.
There are two big problems with Sharockman's claim. First, PolitiFact itself announced a change to its rating methodology back in 2012.

Second, PolitiFact has admitted that its ratings are pretty much subjective. Sharockman's chosen example, the dividing line between "False" and "Pants on Fire," is perhaps the most sensational example of that subjectivity. How does Sharockman not know that?


The Vast Right-Wing Conspiracy Against PolitiFact?

Someone (not me!) asked "Who fact checks you [PolitiFact--ed.]?"

We found Sharockman's response fascinating and very probably false:


Sharockman's answer paints PolitiFact as the focus of a concentrated group of hostile editors. With all those people combing PolitiFact material for hours on end, it's amazing that PolitiFact makes mistakes so rarely.

Right?

But is there any evidence at all supporting Sharockman's supposition that "a lot of people are reading everything we write looking for mistakes"? We at PolitiFact Bias announced long ago that we could not do a thorough job vetting PolitiFact's body of work:
As PolitiFact expands its state operations, the number of stories it produces far exceeds our capacity to review and correct even just the most egregious examples of journalistic error or bias.  We aim to encourage an army of Davids to counteract the mistakes and bias in PolitiFact's stories.
Who else could Sharockman have had in mind? Media Matters For America? The (defunct) Weekly Standard?

(We asked Sharockman via Twitter the other day whom he had in mind but received no immediate reply.)

We suspect Sharockman of Trumpian exaggeration. He knows at least some people look at some of PolitiFact's work for errors. So to convey his point he turns that into "a lot of people" combing "everything" PolitiFact publishes for errors. It's likely the only organization combing over PolitiFact's entire body of work looking for errors is PolitiFact itself.

And look how many times it fails anyway, without swallowing the fiction that the errors we catch represent the entire number.

***

Holan and Sharockman are politicians advocating for PolitiFact. It appears we cannot trust PolitiFact to hold its own to account.

Friday, March 15, 2019

Remember Back When PolitiFact was Fair & Balanced?

PolitiFact has leaned left from the outset (2007).

It's not uncommon to see people lament PolitiFact's left-leaning bias along with the claim that once upon a time PolitiFact did an even-handed job on its fact-checking.

But we've never believed the fairy tale that PolitiFact started out well. It's always been notably biased to the left. And we just stumbled across a PolitiFact fact check from 2008 that does a marvelous job illustrating the point.


It's a well-known fact that nearly half of U.S. households pay no net federal income tax, right?

Yet note how the fact checker, in this case PolitiFact's founding editor Bill Adair, frames President Obama's claim:
In a speech on March 20, 2008, Obama took a different approach and emphasized the personal cost of the war.

"When Iraq is costing each household about $100 a month, you're paying a price for this war," he said in the speech in Charleston, W.Va.
Hold on there, PolitiFact.

How can the cost of the war, divided up per family, rightly get categorized as a "personal cost" when about half of the families aren't paying any net federal income tax?

If the fact check were serious about the personal cost, then it would look at the differences in tax burdens. Families paying a high amount of federal income tax would pay far more than the price of their cable bill. And families paying either a small amount of income tax or no net income tax would pay much less than the cost of their cable service for the Iraq War (usually $0).

PolitiFact stuffs the information it should have used to pan Obama's claim into paragraph No. 8, where it is effectively quarantined with parentheses (parentheses in the original):
(Of course, Obama's simplified analysis does not reflect the variations in income tax levels. And you don't have to write a check for the war each month. The war costs are included in government spending that is paid for by taxes.)
President Obama's statement was literally false and highly misleading as a means of expressing the personal cost of the war.

But PolitiFact couldn't or wouldn't see it and rated Mr. Obama's claim "True."

Not that much has changed, really.


Afters (for fun)

The author of that laughable fact check is the same Bill Adair later elevated to the Knight Chair for Duke University's journalism program.

We imagine Adair earned his academic throne in recognition of his years of neutral and unbiased  fact-checking even knowing President Obama was watching him from behind his desk.

Wednesday, March 13, 2019

Gender Pay Gap Shenanigans from PolitiFact Virginia

PolitiFact has a long history of botching gender wage gap stories, often horrifically.

PolitiFact Virginia's March 13, 2019 treatment of the subject does nothing to improve the reliability of PolitiFact's reporting on the topic.


It's not true that women earn 80 percent of the pay men earn doing the same job, though Democrats proclaim otherwise from time to time. And that's probably what Rep. Bobby Scott did, using "similar" in its role as a synonym for "same."

PolitiFact was apparently very eager to use the technique of charitable interpretation--most likely because Scott is a Democrat. Republicans rarely receive the benefit of that feature of competent fact-checking from PolitiFact.

We're partial to using charitable interpretation when appropriate, but PolitiFact Virginia ends up running data through the confirmation bias filter in its effort to bail out Scott.

We'd judge Scott's use of the term "similar" as an ambiguity. PolitiFact Virginia calls it "nuance":
Scott’s statement, however, is nuanced. He says women get 80 percent pay for doing "similar" jobs as white men, which is different than saying the "same" job as men.
PolitiFact Virginia apparently skipped the step of checking the thesaurus to see if the terms "similar" and "same" may be used interchangeably. They can. The two terms have overlapping meanings, in fact.

We find it notable that PolitiFact Virginia set aside the usual PolitiFact practice of relying on explanations from spokespeople representing the figure being fact-checked.

Scott's staff said he got his numbers from sources relying on Census Bureau data.

PolitiFact Virginia:
Stephanie Lalle, Scott’s deputy communications director, told us the congressman got the statistic from separate reports published in late 2018 by the Institute for Women’s Policy Research, and the National Partnership for Women and Families.

Both reports said the statistic comes from the U.S. Census Bureau. The latest gender-gap statistics from the Bureau show in 2017 women earned 80.5 percent of what men made - the same percentage as in 2016.
PolitiFact Virginia's determination to defend Scott's statement leads it to spout statistical mumbo-jumbo. Based on apparently nothing more than Scott's "nuanced" use of the term "similar," PolitiFact Virginia tried to reverse-engineer an explanation of his statistic to replace the explanation offered by Scott's staff.

What if "similar" meant broad classes of jobs, and data from the Bureau of Labor Statistics showed white men making more than women in those classes of jobs?

PolitiFact Virginia thought it was worth a shot:
Women out-earned men in three occupations: wholesale and retail buying; frontline supervisor of construction trades and extraction workers; and, as we mentioned, dining room and cafeteria attendants and bartender helpers.

Fact-checking Scott, however, requires a deeper dive. The percentages we just discussed compare the full-time weekly earnings of all women to all men in these occupations. Scott, in his statement, compared the earnings of all women to white men in similar jobs.

The BLS’s data set that compares gender pay by specific jobs does not sort men and women by race. It does, however, categorize the jobs into 29 broad fields of work and, in each of those fields, breaks down women and men by sex.

Overall in 2018, women earned 78.7 percent less than white men in the same areas of work. The comparison of women’s pay to white men’s produces a bigger gender gap than the comparison to all men. That’s because white males tend to earn more than black males.

White men out-earned women in all 29 fields of work.
Note that PolitiFact Virginia isn't really showing its work. And what it does show contains appalling mistakes.

Let's break it down piece by piece.

Piece by Piece, Step by Step


"The percentages we just discussed compare the full-time weekly earnings of all women to all men in these occupations."

Do the percentages compare the full-time weekly earnings of all women to all men in those occupations? It's hard to tell from PolitiFact's linked source document. If author Warren Fiske was talking about Table 18, as we believe, then the fact check should refer to Table 18 by name.

Looking at Table 18, it seems Fiske reasoned improperly. The table mentions 121 groups of occupations, but most of the occupations nested under the list headers have no estimate of a gender wage gap, showing a dash instead of a number. In the notes at the bottom of the table, BLS warns that a dash means "no data or data that do not meet publication criteria." That makes it improper to extrapolate the listed results into a nationally representative number. Nor should a fact-checker assume that the subject of the fact check had such creative reasoning in mind.

In short, using numbers from Table 18 to support Scott represents unjustifiable cherry-picking.

"Scott, in his statement, compared the earnings of all women to white men in similar jobs."

We cannot find any citation in PolitiFact Virginia's fact check that offers data addressing the racial aspect of Scott's claim. Without that data, how can the fact checker reach a reasonable conclusion about the claim?

"The BLS’s data set that compares gender pay by specific jobs does not sort men and women by race."

That's bad news for this fact check. As noted above, without the data on race there's no checking the claim.

"It does, however, categorize the jobs into 29 broad fields of work and, in each of those fields, breaks down women and men by sex."

Breaking down men and women by sex is not the same as breaking them down by race, so this "however" doesn't point the way to a solution to the problem.

"Overall in 2018, women earned 78.7 percent less than white men in the same areas of work."

The fact checkers probably meant 78.7 cents to the dollar compared to men, not 78.7 percent less (about 21 cents to the dollar). But we still do not know the source of the race-based claim. We'd love to see a clear clue from PolitiFact Virginia regarding the specific source of this claim.
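
The arithmetic difference between the two phrasings is easy to check. A minimal sketch in Python, using the 78.7 figure from the fact check:

mens_dollar = 1.00
ratio = 78.7 / 100

as_much = mens_dollar * ratio      # "78.7 percent as much": about 79 cents per dollar
less = mens_dollar * (1 - ratio)   # "78.7 percent less": about 21 cents per dollar

print(f'"78.7 percent as much" works out to ${as_much:.3f} per dollar')
print(f'"78.7 percent less" works out to ${less:.3f} per dollar')

Only the first phrasing matches the roughly 80-cents-on-the-dollar statistic PolitiFact Virginia cites elsewhere in its fact check.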

"The comparison of women’s pay to white men’s produces a bigger gender gap than the comparison to all men. That’s because white males tend to earn more than black males."

This part we follow. But it doesn't help us understand how PolitiFact can claim that white men out-earned women in all 29 fields of work in the BLS data.

"White men out-earned women in all 29 fields of work."

Based on what? We just went through PolitiFact's argument step by step. There's no reasoning in the fact check to justify it, and we can't find any citation that appears to lead to a justifying document.

Summary

PolitiFact failed to offer information justifying its key data point ("White men out-earned women in all 29 fields of work"). It failed to show how cherry-picking that information, even if legitimately sourced, would justify Scott's statement in the context of "fair pay" legislation. And it simply blundered with the claim that women earned 78.7 percent less than white men (in the same areas of work or otherwise).

Making the bad news worse, we could write another article about the problems in PolitiFact Virginia's wage gap story without repeating the same points.


Update March 14, 2019

After we contacted PolitiFact Virginia on March 13, 2019 it corrected the "78.7 percent less" mistake.

PolitiFact Virginia attached no editor's note to the story indicating either a correction or clarification.

Note this from PolitiFact's policy on corrections:
Errors of fact – Errors of fact that do not impact the rating or do not change the general outlook of the fact-check receive a mark of correction at the bottom of the fact-check.

The text of the fact-check is updated with the new information. The correction states the correct information that has been added to the report. If necessary for clarity, it repeats the incorrect information. Corrected fact-checks receive a tag of "Corrections and updates."

Typos, grammatical errors, misspellings – We correct typos, grammatical errors, misspellings, transpositions and other small errors without a mark of correction or tag and as soon as they are brought to our attention.
So we're supposed to believe writing "earned 78.7 percent less" instead of "78.7 percent as much" counts as one of the following:
  • typo
  • grammatical error
  • misspelling
  • transposition
  • other small error
Who buys it?

Monday, March 4, 2019

The underlying point saves the day for a Bernie Sanders falsehood?

For some reason there are people who believe that if a fact checker checks both sides, then the fact checker must be neutral.

We've kept pointing out that checking both sides is no guarantee of nonpartisanship. It's a simple matter to rate both sides while giving harsher ratings to one side--or softer ratings to the other.

Latest case in point: Democratic presidential candidate Bernie Sanders.

Sanders claimed that the single-payer health care system in Canada offers "quality care to all people without out of pocket expenses."

PolitiFact found that the Canadian system does not eliminate out-of-pocket expenses (contradicting Sanders' claim).

And then PolitiFact gave Sanders' claim a "Half True" rating.

Seriously. That's what PolitiFact did.


PolitiFact's summary is remarkable for not explaining how Sanders managed to eke out a "Half True" rating for a false statement. PolitiFact describes what's wrong with the statement (how it's false) and then proclaims the "Half True" ruling:
Sanders said, "In Canada, for a number of decades they have provided quality care to all people without out-of-pocket expenses. You go in for cancer therapy, you don't take out your wallet."

So long as the care comes from a doctor or at a hospital, the Canadian system covers the full cost. But the country’s public insurance doesn’t automatically pay for all services, most significantly, prescription drugs, including drugs needed to fight cancer.

Out-of-pocket spending is about 15 percent of all Canadian health care expenditures, and researchers said prescription drugs likely represented the largest share of that.

The financial burden on people is not nearly as widespread or as severe as in the United States, but Sanders made it sound as though out-of-pocket costs were a non-issue in Canada.

We rate this claim Half True.
See?

PolitiFact says Sanders made it sound like Canadians do not pay out of pocket at all for health care. But Canadians do pay a substantial share out of pocket; therefore, making it sound like they don't is "Half True."

Republicans, don't get the idea that you can say something PolitiFact describes as false in its fact check and then skate with a "Half True" rating on the "Truth-O-Meter."

Friday, March 1, 2019

PolitiFact Tweezes Green New Deal Falsehoods

In our post "PolitiFact's Green New Deal Fiction Depiction" we noted that PolitiFact had decided a Democrat who posted a falsehood-laden FAQ about the Green New Deal on her official congressional website would escape a negative rating on PolitiFact's "Truth-O-Meter."

At the time we noted that PolitiFact's forbearance held benefits for Democrats and Republicans alike:
Many will benefit from PolitiFact's apparent plan to give out "Truth-O-Meter" mulligans over claimed aspects of the Green New Deal resolution not actually in the resolution. Critics of those parts of the plan will not have their attacks rated on the Truth-O-Meter. And those responsible for generating the controversy in the first place by publishing FAQs based on something other than the actual resolution also find themselves off the hook.
We were partly right.

Yes, PolitiFact let Democrats who published a false and misleading FAQ about the Green New Deal off the hook.

But apparently PolitiFact has reserved the right to fault Republicans and conservatives who base their criticisms of the Green New Deal on the false and misleading information published by the Democrats.

PolitiFact Florida tweezed out such a tidbit from an editorial written by Sen. Rick Scott (R-Fla.):


False? It doesn't matter at all that Ocasio-Cortez said otherwise on her official website? There is no truth to it whatsoever? And Ocasio-Cortez gets no "False" rating for making an essentially identical claim on her website?

This case will get our "tweezers or tongs" tag because PolitiFact is once again up to its traditional shenanigan of tweezing out one supposed falsehood from a background of apparent truths:
Sen. Rick Scott, R-Fla., outlined his opposition to the Democrats’ Green New Deal in a Feb. 25th Orlando Sentinel op-ed:

"If you are not familiar with it, here’s the cliff notes version: It calls for rebuilding or retrofitting every building in America in the next 10 years, eliminating all fossil fuels in 10 years, eliminating nuclear power, and working towards ending air travel (to be replaced with high-speed rail)."

...

Let’s hit the brakes right there -- do the Democrats want to end air travel?
See what PolitiFact did, there?

Scott can get three out of four points right, but PolitiFact Florida will pick on one point to give Scott a "False" rating and build for him an unflattering graph of "Truth-O-Meter" ratings shaped by PolitiFact's selection bias.


The Jestation Hypothesis

How does PolitiFact Florida go about discounting the fact that Ocasio-Cortez claimed on her website that the Green New Deal aimed to make air travel obsolete?

The objective and neutral fact checkers give us the Jestation Hypothesis. She must have been kidding.

No, really. Perhaps the idea came directly from one of the three decidedly non-neutral experts PolitiFact cited in its fact check (bold emphasis added):
"It seems to me those lines from the FAQ were lighthearted and ill-considered, and it’s not clear why they were posted," said Sean Hecht, Co-Executive Director, Emmett Institute on Climate Change and the Environment at UCLA law school.
Hecht's FEC contributions page is hilariously one-sided.

Does anyone need more evidence that the line about making air travel obsolete was just a joke?
"No serious climate experts advocate ending air travel -- that's simply a red-herring," said Bledsoe, who was a climate change advisor to the Clinton White House.
Former Clinton White House advisor Bledsoe is about as neutral as Hecht. The supposed "red-herring," we remind readers, was published on Ocasio-Cortez's official House of Representatives website.

The neutral and objective fact-checkers of PolitiFact Florida deliver their jestational verdict (bold emphasis added):
Scott wrote in an op-ed that the Democrats’ Green New Deal includes "working towards ending air travel."

The resolution makes no mention of ending air travel. Instead, it calls for "overhauling transportation systems," which includes "investment in high-speed rail." Scott seized on a messaging document from Democrats that mentioned, perhaps in jest, getting rid of "farting cows and airplanes." But we found no evidence that getting rid of airplanes is a serious policy idea from climate advocates.
Apparently it cannot count as evidence that Democrats have advocated getting rid of airplanes if a popular Democratic Party representative publishes this on her website:
The Green New Deal sets a goal to get to net-zero, rather than zero emissions, at the end of this 10-year plan because we aren’t sure that we will be able to fully get rid of, for example, emissions from cows or air travel before then. However, we do believe we can ramp up renewable manufacturing and power production, retrofit every building in America, build the smart grid, overhaul transportation and agriculture, restore our ecosystem, and more to get to net-zero emissions.
Oh! Ha ha ha ha ha! Get it? We may not be able to fully get rid of emissions from cows or air travel in only 10 years! Ha ha ha!

So the claim was quite possibly a joke, even if no real evidence supports that idea.

But it's all PolitiFact needs to give a Republican a "False" rating and the Democrat no rating at all for saying essentially the same thing.

This style of fact-checking undermines fact checkers' credibility with centrists and conservatives, as well as with discerning liberals.



Afters

There was one more expert PolitiFact cited apart from the two we noted were blatantly partisan.

That was "David Weiskopf, climate policy director for NextGen Climate America."

Here's a snippet from the home page for NextGen Climate America:


So basically neutral, right?

PolitiFact Florida "fact checker" (liberal blogger) Amy Sherman seems to have a special gift for citing groups of experts who skew hilariously left.


Wednesday, February 27, 2019

PolitiFact's sample size deception

Is the deception one of readers, of self, or of both?

For years we have criticized as misleading PolitiFact's selection-bias-contaminated charts and graphs of its "Truth-O-Meter" ratings. Charts and graphs look science-y and authoritative. But when the data set is not representative (selection bias) and the ratings are subjective (PolitiFact admits it), the charts serve no good function other than to mislead the public (if that even counts as a good function).

One of our more recent criticisms (September 2017) poked fun at PolitiFact using the chart it had published for "The View" host Joy Behar. Behar made one claim, PolitiFact rated it false, and her chart made Behar look like she lies 100 percent of the time--which was ironic because Behar had used PolitiFact charts to draw false generalizations about President Trump.

Maybe our post helped prompt the change and maybe it didn't, but PolitiFact has apparently instituted some sort of policy on the minimum number of ratings it takes to qualify for a graphic representation of one's "Truth-O-Meter" ratings.

Republican Rep. Matt Gaetz (Fla.) has eight ratings. No chart.

But on May 6, 2018, Gaetz had six ratings and a chart. A day later, on May 7, Gaetz had the same six ratings and no chart.

For PolitiFact Florida, at least, the policy change went into effect in May 2018.

But it's important to know that this policy change is a sham that merely hides the central problem with PolitiFact's charts and graphs.

Enlarging the sample size does not eliminate the problem of selection bias. There's essentially one exception to that rule, which occurs in cases where the sample encompasses all the data--and in such cases "sample" is a bit of a misnomer in the first place.

What does that mean?

It means that PolitiFact, by acting as though a small sample size is a good enough reason to refrain from publishing a chart, is giving its audience the false impression that enlarging the sample size without eliminating the selection bias yields useful graphic representations of its ratings.
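A minimal simulation (ours, using made-up numbers) illustrates why a bigger sample doesn't help. If the process for choosing which statements to rate is tilted toward one kind of statement, the tilt persists no matter how many ratings accumulate:

import random

random.seed(0)

# Hypothetical population of 1,000 statements, half of them actually false.
population = ["False"] * 500 + ["True"] * 500

# Biased selection: false statements are four times as likely to be chosen for a rating.
weights = [4.0 if s == "False" else 1.0 for s in population]

for n in (10, 100, 1000, 10000):
    sample = random.choices(population, weights=weights, k=n)
    print(f"n = {n:6d}   share rated False: {sample.count('False') / n:.2f}")

# The share stays near 0.80 at every n instead of the population's true 0.50.
# A bigger biased sample is still biased; it just looks more authoritative.

Under these assumed numbers, roughly four of every five ratings come out "False" at every sample size. The chart gets more data behind it, but it never gets any closer to representing the speaker's actual record.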

If PolitiFact does not realize what it is doing, then those in charge are dangerously ignorant (in terms of improving public discourse and promoting sound reasoning).

If PolitiFact realizes what it is doing wrong and does it regardless, then those in charge are acting unethically.

Readers who can think of any other option (apart from some combination of the ones we identified) are encouraged to offer suggestions in the comments section.



Afters


When do the liberal bloggers at PolitiFact think their sample sizes are big enough to allow for a chart?

Sen. George Allen has 23 ratings and a chart. So the threshold falls somewhere above eight and no higher than 23.

Tennessee Republican Marsha Blackburn has 10 ratings and a chart. We conclude that PolitiFact thinks 10 ratings warrant a chart (yes, we found a case with 9 ratings and no chart).

Science. And stuff.


Thursday, February 21, 2019

PolitiFact's Magic Math on 'Medicare For All' (Updated)

Would you believe that PolitiFact bungled an explainer article on the Democratic Party's Medicare For All proposal?

PolitiFact's Feb. 19, 2019 PolitiSplainer, "Medicare For All, What it is, what it isn't" stumbled into trouble when it delved into the price tag attached to government-run healthcare (bold emphasis added):
How much would Medicare for All cost?

This is the great unknown.

A study of Medicare for All from the libertarian-oriented Mercatus Center at George Mason University put the cost at more than $32 trillion over 10 years. Health finance expert Kenneth Thorpe at Emory University looked at Sanders' earlier version during the 2016 campaign and figured it would cost about $25 trillion over a 10-year span.

Where would the money come from?

Sanders offered some possibilities. He would redirect current government spending of about $2 trillion per year into the program. To that, he would raise taxes on income over $250,000, reaching a 52 percent marginal rate on income over $10 million. He suggested a wealth tax on the top 0.1 percent of households.
PolitiFact introduces the funding issue by mentioning two estimates of the spending M4A would add to the budget. But when explaining how Sanders proposes to pay for the new spending, PolitiFact claims Sanders would "redirect current government spending" to cover about $20 trillion of the $25 trillion-to-$32 trillion increase from the estimates.

Superficially, the idea sounds theoretically possible. If the defense budget were $2 trillion per year, for example, then one could redirect that money toward the M4A program and it would cover a big hunk of the expected budget increase.

But the entire U.S. defense budget is less than $1 trillion per year. So what is the supposed source of this funding?

We looked at the document PolitiFact linked, sourced from Sanders' official government website.

We found no proposal from Sanders to "redirect current government spending" to cover M4A.

We found this (bold emphasis added):
Introduction

Today, the United States spends more than $3.2 trillion a year on health care. About sixty-five percent of this funding, over $2 trillion, is spent on publicly financed health care programs such as Medicare, Medicaid, and other programs. At $10,000 per person, the United States spends far more on health care per capita and as a percentage of GDP than any other country on earth in both the public and private sectors while still leaving 28 million Americans uninsured and millions more under-insured.
Nothing else in the linked document comes anywhere near supporting PolitiFact's claim of $2 trillion per year "redirected" into M4A.


It's a Big ($2 trillion per year) Mistake

We do not think Sanders was proposing what PolitiFact said he was proposing. The supposed $2 trillion per year--$20 trillion over 10 years--cannot help pay for the $25 trillion cost Thorpe estimated. Nor can it help pay for the $32 trillion cost that Charles Blahous (Mercatus Center) estimated.

Why? The reason is simple.

Both of those estimates pertained to costs added to the budget by M4A.

In other words, current government spending on healthcare programs was already accounted for in both estimates. The estimates deal specifically with what M4A would add to the budget on top of existing costs.

Need proof? Good. The expectation of proof is reasonable.

Mercatus Center/Blahous
M4A would add approximately $32.6 trillion to federal budget commitments during the first 10 years of its implementation (2022–2031).
Clear enough? The word "add" shows that Blahous is talking about costs over and above current budget commitments to government health care programs.

Page 5 of the full report features an example illustrating the same point:
National health expenditures (NHE) are currently projected to be $4.562 trillion in 2022. Subtracting the $10 billion decrease in personal health spending, as calculated in the previous paragraph, and crediting the plan with $83 billion in administrative cost savings results in an NHE projection under M4A of $4.469 trillion. Of this, $4.244 trillion in costs would be borne by the federal government. Compared with the current projection of $1.709 trillion of federal healthcare subsidy costs, this would be a net increase of $2.535 trillion in annual costs, or roughly 10.7 percent of GDP.

Performing similar calculations for each year results in an estimate that M4A would add approximately $32.6 trillion to federal budget commitments during the period from 2022 through 2031, with the annual cost increase reaching nearly 12.7 percent of GDP by 2031 and continuing to rise afterward.
The $1.709 trillion in "federal healthcare subsidy costs" represents expected spending under Medicare, Medicaid and other federally supported health care programs. That amount is already accounted for in Blahous' estimate of the added cost of M4A.
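Spelling out the arithmetic from Blahous' own 2022 example: $4.244 trillion (federal cost under M4A) minus $1.709 trillion (federal health spending already projected under current law) equals $2.535 trillion in added cost. The current spending is subtracted before the added cost is tallied, so counting that same $1.709 trillion again as a way to pay for the added cost would count the same dollars twice.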

Kenneth Thorpe

Thorpe's description of his estimate doesn't make perfectly clear that he is estimating added costs. But his mention of the Sanders campaign's estimate of $1.377 trillion per year provides the contextual key (bold emphasis added):
The plan is underfinanced by an average of nearly $1.1 trillion per year. The Sanders campaign estimates the average annual financing of the plan at $1.377 trillion per year between 2017 and 2026. Over the same time period, we estimate the average financing requirements of $2.47 trillion per year--about $1.1 trillion more on average per year over the same time period. 
When we look at the estimate from the Sanders campaign we find that the $1.377 trillion estimate pertained to added budget costs, not the gross cost of the plan.

Friedman's Table 1 makes it plain, listing $13,773 billion (about $13.8 trillion) as "added public spending":


It's magical math to take the approximately $27 trillion in "continued government spending"--money already committed to existing programs--and use it to pay down the $13.8 trillion in "new public spending." Yet that is what PolitiFact seems to suggest: that Sanders would use current spending to pay for his program's new spending.

Again, we do not think that is what Sanders suggests.

The liberal bloggers at PolitiFact simply botched the reporting.

Our Eye On Corrections

We think this PolitiBlunder clearly deserves a correction and apology from PolitiFact.

Will it happen?

Great question.

As part of our effort to hold PolitiFact (and the International Fact-Checking Network) accountable, we report PolitiFact's most obvious errors through the recommended channels to see what happens.

In this case we notified the author, Jon Greenberg, via Twitter about the problem. We also used Twitter (along with Facebook) to try to draw PolitiFact's attention to the mistake. When those outreach efforts drew no acknowledgement, we did as PolitiFact recommends and emailed a summary of the problem to "truthometer@politifact.com."

Should PolitiFact continue to let the error stand, we will report the error to the International Fact-Checking Network (under the auspices of the Poynter Institute, just like PolitiFact) and track whether that organization will hold PolitiFact to account for its mistakes.

We will update this section to note future developments or the lack thereof.


Update Feb. 27, 2019

A full week after we started informing PolitiFact of the mistake in its Medicare PolitiSplainer, the bad reporting in the financing section remains unchanged.