Showing posts with label Obviously it's Subjective. Show all posts

Thursday, June 8, 2023

Have politicians discovered asbestos pants? (Update/Correction)

We think PolitiFact's "Truth-O-Meter" rating system offers ample evidence of PolitiFact's bias.

Why?

1) It's an admittedly subjective rating system.

2) Rating patterns differ widely at different franchises.

3) Fundamentally similar fact checks may conclude with very different ratings.

And to those three reasons we add a fourth, newly ascendant in the data we collect:

4) "Pants on Fire" seems to be going extinct/extinguished.

Have politicians discovered asbestos pants?

Through June 8, 2023, the number of "Pants on Fire" ratings given to politicians totals five. Five.

Correction 6/22/2023: We apparently made a careless error in transcribing the number of Pants on Fire ratings given to party politicians during the first half (or so) of 2023. The correct number was two, not five. The corrected number only strengthens our point that "Pants on Fire" numbers have fallen off a cliff. Yes, the chart is wrong as well in reporting five in 2023.


From 2007 through 2009, PolitiFact was just starting out, which helps explain the low numbers during that period. In 2010 state franchises such as PolitiFact Texas and PolitiFact Florida started to contribute heavily to the number of ratings, including "Pants on Fire" ratings.

The era of Bill Adair's directorship was in full flower through 2013. We see the three-year spike of GOP "Pants on Fire" ratings and a rise followed by a slow decline in Democratic Party "Pants on Fire" ratings.

Current Editor-in-Chief Angie Drobnic Holan took over from Adair, and under Holan we observe a decline in "Pants on Fire" ratings for Democrats. We see the same for Republicans, notwithstanding notable election-year spikes in 2016 and 2020.

So far, the year 2023 stands out for its exceptionally low numbers.

"Republicans Lie More!"

Oh, please!

As a catchall excuse for weird PolitiFact data, that just won't cut it. It doesn't work as an excuse for PolitiFact's selection bias problem. It doesn't explain PolitiFact's biased application of "Pants on Fire" ratings, and it can never explain why "Pants on Fire" ratings given to both political parties have declined over time.

So, what's the explanation?

The simplest explanation boils down to money. PolitiFact gets paid for its role as the falsehood-sniffing dog for social media censors. The most recent page of "Pants on Fire" ratings on PolitiFact's website is filled with "Pants on Fire" ratings given for social media claims, with not one given to a party officeholder, candidate, appointee or the like. Not one. On the previous page there's one for Donald Trump, given back in May.

That suggests PolitiFact now takes a greater interest in its social media role than in holding politicians accountable. To be fair, however, PolitiFact can still manipulate political messaging effectively by giving poor ratings to messages Republicans are likely to share. Rating one social media claim, no matter who it's from, can justify stuffing a sock in the social media mouth that would repeat it.

An alternative explanation? Politicians, both Democrat and Republican, are lying less.

It will be fun to see whether fact checkers try to take credit for making politicians more truthful without any sound basis for that claim.


Monday, December 16, 2019

A political exercise: PolitiFact chooses non-impactful (supposed) falsehood as its "Lie of the Year"

PolitiFact chose President Trump's claim that a whistleblower's complaint about his phone call with Ukrainian leader Volodymyr Zelensky got the facts "almost completely wrong."

We had deemed it unlikely PolitiFact would choose that claim as its "Lie of the Year," reasoning that it failed to measure up to the supposed criterion of carrying a high impact.

We failed to take into account PolitiFact's dueling criteria, explained by PolitiFact Editor Angie Drobnic Holan back in 2016:
Each year, PolitiFact awards a "Lie of the Year" to take stock of a misrepresentation that arguably beats all others in its impact or ridiculousness.
To be sure, "arguably beats all others in its impact" counts as a subjective criterion. As a bonus, PolitiFact offers itself an alternative criterion based on the "ridiculousness" of a claim.

Everybody who thinks there's an objective way to gauge relative "ridiculousness" raise your hand.

We will not again make the mistake of trying to handicap the "Lie of the Year" choice based on the criteria PolitiFact publicizes. Those criteria are hopelessly subjective and don't tell the real story.

It's simpler and more direct to predict the outcome based on what serves PolitiFact's left-leaning interests.


Monday, February 5, 2018

Does "lowest" mean something different in Georgia than it does in Texas?

Today PolitiFact National, posing as PolitiFact Georgia, called it "Mostly True" that Georgia has the lowest minimum wage in the United States.

Georgia law sets the minimum wage at $5.15 per hour, the same rate Wyoming uses, and the federal minimum wage of $7.25 applies to all but a very few Georgians. PolitiFact National, posing as PolitiFact Georgia, hit Democrat Stacey Evans with a paltry "Mostly True" rating:
Evans said Georgia "has the lowest minimum wage in the country."

Georgia’s minimum wage of $5.15 per hour is the lowest in the nation, but Wyoming also has the same minimum wage.

Also, most of Georgia’s workers paid hourly rates earn the federal minimum of $7.25.

Evans’ statement is accurate but needs clarification or additional information. We rate it Mostly True.
Sounds good. No problem. Right?

Eh. Not so fast.

Why is it okay in Georgia for "lowest" to reasonably reflect a two-way tie with Wyoming, while in Texas using "lowest" where there's a three-way tie earns the speaker a "False" rating?



How did PolitiFact Texas justify the "False" rating it gave the Republican governor (bold emphasis added)?
Abbott tweeted: "The Texas unemployment rate is now the lowest it’s been in 40 years & Texas led the nation last month in new job creation."

The latest unemployment data posted when Abbott spoke showed Texas with a 4 percent unemployment rate in September 2017 though that didn't set a 40-year record. Rather, it tied the previous 40-year low set in two months of 2000.

Abbott didn’t provide nor did we find data showing jobs created in each state in October 2017.

Federal data otherwise indicate that Texas experienced a slight decrease in jobs from August to September 2017 though the state also was home to more jobs than a year earlier.

We rate this claim False.
 A tie goes to the Democrat, apparently.

We do not understand why it is not universally recognized that PolitiFact leans left.



Correction/clarification Feb. 5, 2018:
Removed unneeded "to" from the second paragraph. And added a needed "to" to the next-to-last sentence.


Tuesday, December 26, 2017

Not a Lot of Reader Confusion VIII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. PolitiFact Editor Angie Drobnic Holan says she doesn't notice much of that sort of thing. This series looks to help acquaint Holan and others with the evidence.


Oh, look. Another journal article using PolitiFact's report card data to judge the veracity of a politician (bold emphasis added):
Political fact-checking organizations and the mainstream media reported extensively on Trump’s false statements of fact and unsubstantiated generalizations. And they noted that he made such statements in staggering proportions. For example, by April of 2017, Politifact assessed 20% of Trump’s statements as mostly false, 33% as false, and 16% as what it called “pants on fire” false— cumulatively suggesting that the vast majority of the time Trump was making either false or significantly misleading statements to the public.
That's from The Resilience of Noxious Doctrine: The 2016 Election, the Marketplace of Ideas, and the Obstinacy of Bias.

The article, by Leonard M. Niehoff and Deeva Shah, appeared in the Michigan Journal of Race and Law.

The authors should be ashamed of themselves for making that argument based on data subject to selection bias and ideological bias.

On the bright side, we suppose such use of PolitiFact data may successfully model the obstinacy of bias.

We recommend to the authors this section of their article:
Confirmation bias, discussed briefly above, is another common type of anchoring bias. Confirmation bias describes our tendency to value facts and opinions that align with those we have already formed. By only referencing information and viewpoints that affirm previously held beliefs, people confirm their biased views instead of considering conflicting data and ideas.
 




Correction: Fixed link to Noxious Doctrine paper 1838PST 12/26/2017-Jeff

Friday, December 1, 2017

Not a Lot of Reader Confusion VII

We say that PolitiFact's graphs and charts, including its PunditFact collections of ratings for news networks, routinely mislead readers. PolitiFact Editor Angie Drobnic Holan says she doesn't notice much of that sort of thing.

We're here to help.

This comes from the lead edge of December 2017 and PolitiFact's own Facebook page:


Somebody introduced a subjective PolitiFact chart in answer to a call for a scientific study showing the unreliability of Fox News. So far as we can tell, the citation was offered in earnest.

We predict that no number of examples short of infinity will convince Holan that we are right and she is wrong. At least publicly. Privately, maybe.

Friday, November 10, 2017

'Not a lot of reader confusion' VI

PolitiFact editor Angie Drobnic Holan has claimed she does not notice much reader confusion regarding the interpretation of PolitiFact's "report card" charts and graphs.

This series of posts is designed to call shenanigans on that frankly unbelievable claim.

Rem Rieder, a journalist of some repute, showed himself a member of PolitiFact's confused readership with a Nov. 10, 2017 article published at TheStreet.com.
While most politicians are wrong some of the time, the fact-checking website PolitiFact has found that that [sic] Trump's assertions are inaccurate much more frequently than those of other pols.
When we say Rieder showed himself a member of PolitiFact's confused readership, that means we're giving Rieder the benefit of the doubt by assuming he's not simply lying to his readers.

As we have stressed repeatedly here at PolitiFact Bias, PolitiFact's collected "Truth-O-Meter" ratings cannot be assumed to reliably reflect the truth-telling patterns of politicians, pundits or networks. PolitiFact uses non-random methods of choosing stories (selection bias) and uses an admittedly subjective rating system (personal bias).

PolitiFact then reinforces the sovereignty of the left-leaning point of view--most journalists lean left of the American public--by deciding its ratings by a majority vote of its "star chamber" board of editors.

We have called on PolitiFact to attach disclaimers to each of its graphs, charts or stories related to its graphs and charts to keep such material from misleading unfortunate readers like Rieder.

So far, our roughly five years of lobbying have fallen on deaf ears.

Monday, November 6, 2017

PolitiFact gives the 8 in 10 lie a "Half True."

We can trust PolitiFact to lean left.

Sometimes we bait PolitiFact into giving us examples of its left-leaning tendencies. On November 1, 2017, we noticed a false tweet from former President Barack Obama. So we drew PolitiFact's attention to it via the #PolitiFactThis hashtag.



We didn't need to have PolitiFact look into it to know that what Obama said was false. He presented a circular argument, in effect, using the statistics for people who had chosen an ACA exchange plan to mislead the wider public about their chances of receiving subsidized and inexpensive health insurance.


PolitiFact identified the deceit in its fact check, but used biased supposition to soften it (bold emphasis added):
"It only takes a few minutes and the vast majority of people qualify for financial assistance," Obama says. "Eight in 10 people this year can find plans for $75 a month or less."

Can 8 in 10 people get health coverage for $75 a month or less? It depends on who those 10 people are.

The statistic only refers to people currently enrolled in HealthCare.gov.
The video ad appeals to people who are uninsured or who might save money by shopping for health insurance on the government exchange. PolitiFact's wording fudges the truth. It might have accurately said "The statistic is correct for people currently enrolled in HealthCare.gov, but not for the population targeted by the ad."

In the ad, the statistic refers to the ad's target population, not merely to those currently enrolled in HealthCare.gov.

And PolitiFact makes thin and misleading excuses for Obama's deception:
(I)n the absence of statistics on HealthCare.gov visitors, the 8-in-10 figure is the only data point available to those wondering about their eligibility for low-cost plans within the marketplace. What’s more, the website also helps enroll people who might not have otherwise known they were eligible for other government programs.
The nonpartisan fact-checker implies that the lack of data helps excuse using data in a misleading way. We reject that type of excuse-making. If Obama does not provide his audience the context allowing it to understand the data point without being misled, then he deserves full blame for the resulting deception.

PolitiFact might as well be saying "Yes, he misled people, but for a noble purpose!"

PolitiFact, in fact, provided other data points in its preceding paragraph that helped contextualize Obama's misleading data point.

We think PolitiFact's excuse-making influences the reasoning it uses when deciding its subjective "Truth-O-Meter" ratings.
HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.
MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.
FALSE – The statement is not accurate.
In objective terms, what keeps Obama's statement from deserving a "Mostly False" or "False" rating?
His statement was literally false when taken in context, and his underlying message was likewise false.

About 10 to 12 million are enrolled in HealthCare.gov ("Obamacare") plans. About 80 percent of those receive the subsidies Obama lauds. About 6 million persons buying insurance outside the exchange fail to qualify for subsidies, according to PolitiFact. Millions among the uninsured likewise fail to qualify for subsidies.

Surely a fact-checker can develop a data point out of numbers like those.
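For anyone who wants the arithmetic spelled out, here is a minimal sketch in Python using only the figures cited above (the 11 million figure is our midpoint of the 10-to-12-million range). The point is simply that the "8 in 10" share shrinks once the denominator includes the people the ad actually addresses, and it would shrink further still if the uninsured were counted:

enrolled = 11_000_000               # midpoint of the 10-12 million enrolled in HealthCare.gov plans
subsidized = enrolled * 0.80        # "8 in 10" applies only to current enrollees
off_exchange = 6_000_000            # buyers outside the exchange who fail to qualify, per PolitiFact

share = subsidized / (enrolled + off_exchange)
print(f"Subsidized enrollees: {subsidized:,.0f}")                    # 8,800,000
print(f"Share of enrollees plus off-exchange buyers: {share:.0%}")   # about 52%, not 80%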

But this is what happens when non-partisan fact checkers lean left.


Correction Nov. 6, 2017: Removed "About 6 million uninsured do not qualify for Medicaid or subsidies" as it was superseded by reporting later in the post.

Wednesday, September 6, 2017

PolitiFact & Roy Moore: A smorgasbord of problems

When PolitiFact unpublished its Sept. 1, 2017 fact check of a claim attacking Alabama Republican Roy Moore, we had our red flag to look into the story. Taking down a published story itself runs against the current of journalistic ethics, so we decided to keep an eye on things to see what else might come of it.

We were rewarded with a smorgasbord of questionable actions by PolitiFact.

Publication and Unpublication

PolitiFact's Sept. 1, 2017 fact check found it "Mostly False" that Republican Roy Moore had taken $1 million from a charity he ran to supplement his pay as Chief Justice of the Supreme Court of Alabama.

We have yet to read the original fact check, but we know the summary thanks to PolitiFact's Twitter confession issued later on Sept. 1, 2017:


We tweeted criticism of PolitiFact for not making an archived version of the fact check immediately available and for not providing an explanation for those who ended up looking for the story only to find a 404 page-not-found error. We think readers should not have to rely on Twitter to know what is going on with the PolitiFact website.

John Kruzel takes tens of thousands of dollars from PolitiFact

(a brief lesson in misleading communications)

The way editors word a story's title, or even a subheading like the one above, makes a difference.

What business does John Kruzel have "taking" tens of thousands of dollars from PolitiFact? The answer is easy: Kruzel is an employee of PolitiFact, and PolitiFact pays Kruzel for his work. But we can make that perfectly ordinary and non-controversial relationship look suspicious with a subheading like the one above.

We have a parallel in the fact check of Roy Moore. Moore worked for the charity he ran and was paid for it. Note the title PolitiFact chose for its fact check:

Did Alabama Senate candidate Roy Moore take $1 million from a charity he ran?

 "Mostly True." Hmmm.

Kruzel wrote the fact check we're discussing. He did not necessarily compose the title.

We think it's a bad idea for fact-checkers to engage in the same misleading modes of communication they ought to criticize and hold to account.


Semi-transparent Transparency

For an organization that advocates transparency, PolitiFact sure relishes its semi-transparency. On Sept. 5, 2017, PolitiFact published an explanation of its correction but rationed specifics (bold emphasis added in the second instance):
Correction: When we originally reported this fact-check on Sept. 1, we were unable to determine how the Senate Leadership Fund arrived at its figure of "over $1 million," and the group didn’t respond to our query. The evidence seemed to show a total of under $1 million for salary and other benefits. After publication, a spokesman for the group provided additional evidence showing Moore received compensation as a consultant and through an amended filing, bringing the total to more than $1 million. We have corrected our report, and we have changed the rating from Mostly False to Mostly True.
PolitiFact included a table in its fact check showing relevant information gleaned from tax documents. Two of the entries were marked as for consulting and as an amended filing, which we highlighted for our readers:


Combining the two highlighted entries gives us $177,500. Subtracting that figure from the total PolitiFact used in its corrected fact check, we end up with $853,375.

The Senate Leadership Fund PAC (Republican) was off by a measly 14.7 percent and got a "Mostly False" in PolitiFact's original fact check? PolitiFact often barely blinks over much larger errors than that.

Take a claim by Sen. Brad Schneider (D-Ill.) from April 2017, for example. The fact check was published under the "PolitiFact Illinois" banner, but PolitiFact veterans Louis Jacobson and Angie Drobnic Holan did the writing and editing, respectively.

Schneider said that the solar industry accounts for 3 times the jobs from the entire coal mining industry. PolitiFact said the best data showed solar with a 2.3-to-1 job advantage over coal, terming 2.3 "just short of three-to-one" and rating Schneider's claim "Mostly True."

Schneider's claim was off by over 7 percent even if we generously count anything from 2.5 up as rounding to 3.
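Both error figures are easy to reproduce. Here is a minimal sketch in Python, under our reading that each percentage is measured against the speaker's claimed value:

def pct_error(claimed, supported):
    # relative error of a claim, measured against the claimed value
    return abs(claimed - supported) / claimed * 100

# Senate Leadership Fund: claimed "over $1 million"; the pre-correction evidence supported $853,375
print(f"{pct_error(1_000_000, 853_375):.1f}%")   # 14.7%

# Schneider: claimed 3-to-1; best data showed 2.3-to-1; credit anything from 2.5 up as rounding to 3
print(f"{pct_error(2.5, 2.3):.1f}%")             # 8.0%, i.e. over 7 percent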

How could an error of under 15 percent have dropped the rating for the Senate Leadership Fund's claim all the way down to "Mostly False"?

We examine that issue next.

Compound Claim, Or Not?

PolitiFact recognizes in its statement of principles that sometimes claims have more than one part:
We sometimes rate compound statements that contain two or more factual assertions. In these cases, we rate the overall accuracy after looking at the individual pieces.
We note that if PolitiFact does not weight the individual pieces equally, we have yet another area where subjective judgment might color "Truth-O-Meter" ratings.

Perhaps this case qualifies as one of those subjectively skewed cases.

The ad attacking Moore looks like a clear compound claim. As PolitiFact puts it (bold emphasis added), "In addition to his compensation as a judge, 'Roy Moore and his wife (paid themselves) over $1 million from a charity they ran.'"

PolitiFact found the first part of the claim flatly false (bold emphasis added):
He began to draw a salary from the foundation in 2005, two years after his dismissal from the bench, according to the foundation’s IRS filings. So the suggestion he drew the two salaries concurrently is wrong.
Without the damning double dipping, the attack ad is a classic deluxe nothingburger with nothingfries and a super-sized nothingsoda.

Moore was ousted as Chief Justice of the Alabama Supreme Court, where he could have expected a raise to as much as $196,183 per year by 2008. After that ouster, Moore was paid a little over $1 million over a nine-year period, counting his wife's salary for one year, which works out to well under $150,000 per year on average. On what planet is that not a pay cut? With the facts exposed, the attack ad loses all coherence. Where is the "more" that serves as the theme of the ad?
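A quick check of that average, sketched in Python from the post's own figures (the nine-year total is the $853,375-plus-$177,500 reconstruction above):

charity_total = 853_375 + 177_500    # roughly $1.03 million over the period
years = 9                            # includes one year of his wife's salary
judicial_salary = 196_183            # the pay Moore could have expected by 2008

average = charity_total / years
print(f"Average charity pay: ${average:,.0f} per year")                           # about $114,500, well under $150,000
print(f"Gap vs. the judicial salary: ${judicial_salary - average:,.0f} per year")  # roughly $81,600 less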

We think the fact checkers lost track of the point of the ad somewhere along the line. If the ad was just about what Moore was paid for running his charity while not doing a different job at the same time, it's more neutral biography than attack ad. The main point of the attack ad was Moore supplementing his generous salary with money from running a charitable (not-for-profit) organization. Without that main point, virtually nothing remains.

PolitiFact covers itself with shame by failing to see the obvious. The original "Mostly False" rating fit the ad pretty well regardless of whether the ad correctly reported the amount of money Moore was paid for working at a not-for-profit organization.

Assuming PolitiFact did not confuse itself?

If PolitiFact denies making a mistake by losing track of the point of the ad, we have another case that helps amplify the point we made with our post on Sept. 1, 2017. In that post, we noted that PolitiFact graded one of Trump's claims as "False" based on not giving Trump credit for his underlying point.

PolitiFact does not address the "underlying point" of claims in a consistent manner.

In our current example, the attack ad on Roy Moore gets PolitiFact's seal of "Mostly" approval only by ignoring its underlying point. The ad actually misled in two ways, first by saying Moore was supplementing his income as a judge with income from his charity when the two sources of income were not concurrent, and second by reporting the charity income while downplaying the period of time over which that income was spread. Despite the dual deceit, PolitiFact graded the claim "Mostly True."

"The decision about a Truth-O-Meter rating is entirely subjective"

Cases like this support our argument that PolitiFact tends to base its ratings on subjective judgments. This case also highlights a systemic failure of transparency at PolitiFact.

We will update this item if PolitiFact surprises us by running a second correction.



Afters

On top of the problems we described above, PolitiFact neglected to tag its revised/republished story with the "Corrections and Updates" tag it says it uses for all corrected or updated stories.

PolitiFact has a poor record of following this part of its corrections policy.

We note, however, that after we pointed out the problem via Twitter and email PolitiFact fixed it without a long delay.

Sunday, July 30, 2017

"Not a lot of reader confusion" IV

PolitiFact editor Angie Drobnic Holan has famously defended PolitiFact's various "report card" graphs by declaring she does not observe much reader confusion. Readers, Holan believes, realize that PolitiFact fact checkers are not social scientists. Equipped with that understanding, people presumably only draw reasonable conclusions from the graphed results of PolitiFact's "entirely subjective" trademarked "Truth-O-Meter" ratings.

What planet do PolitiFact fact checkers live on, we wonder?

We routinely see people using PolitiFact data as though it was derived scientifically. Jeff spotted a sensational example on Twitter.
Here's an enlarged view of the chart to which Jeff objected:


How did the chart measure the "actual honesty" of the four presidential primary candidates? Just in case it's hard to read, we'll tilt it 90 degrees and zoom in:


That's right. The chart uses PolitiFact's subjective ratings, despite the even more obvious problem of selection bias, to measure the candidates' "actual honesty."

The guy to whom Jeff replied, T. R. Ramachandran, runs a newsletter that gives us terrific (eh) information on politics. Comprehensive insights & stuff:

It's not plausible that the people who run PolitiFact do not realize that people use their offal (sic) data this way. The fact that PolitiFact resists adding a disclaimer to its ratings and charts leads us inexorably toward the conclusion that PolitiFact really doesn't mind misleading people. At least not to the point of adding the disclaimer that would fix the bulk of the problem.

Why not give this a try, PolitiFact? Hopefully it's not too truthful for you.




Monday, February 20, 2017

PolitiFact's "Truth-O-Meter": Floor wax, or dessert topping?

The different messages coming from PolitiFact founder Bill Adair and current PolitiFact managing editor Amy Hollyfield in recent interviews reminded me of a classic Saturday Night Live sketch.

In one interview (Pacific Standard), Adair said deciding PolitiFact's "Truth-O-Meter" ratings was "entirely subjective."

In the other interview (The Politic), Hollyfield gave a different impression:
There are six gradations on our [Truth-O-Meter] scale, and I think someone who’s not familiar with it might think it’s hard to sort out, but for people who’ve been at it for so long, we’ve done over 13,000 fact checks. To have participated in thousands of those, we all have a pretty good understanding of what the lines are between “true” and “mostly true,” or “false” and “pants on fire.”
If PolitiFact's "star chamber" of editors has a good understanding of the lines of demarcation between the ratings, that suggests objectivity, right?

Reconciling these statements about the "Truth-O-Meter" seems about as easy as reconciling New Shimmer's dual purposes as a floor wax and a dessert topping. Subjective and objective are polar opposites, perhaps even more so than floor wax and dessert topping.

If, as Hollyfield appears to claim, PolitiFact editors have objective criteria to rely on in deciding on "Truth-O-Meter" ratings, then what business does Adair have claiming the ratings are subjective?

Can both Adair and Hollyfield be right? Does New Shimmer's exclusive formula prevent yellowing and taste great on pumpkin pie?

Sorry, we're not buying it. We consider PolitiFact's messaging about its rating system another example of PolitiFact's flimflammery.

We think Adair must be right that the Truth-O-Meter is primarily subjective. The line between "False" and "Pants on Fire" as described by Hollyfield appears to support Adair's position:
“False” is simply inaccurate—it’s not true. The difference between that and “pants on fire” is that “pants on fire” is something that is utterly, ridiculously false. So it’s not just wrong, but almost like it’s egregiously wrong. It’s purposely wrong. Sometimes people just make mistakes, but sometimes they’re just off the deep end. That’s sort of where we are with “pants on fire.”
Got it? It's "almost like" and "sort of where we are" with the rating. Or, as another PolitiFact editor from the "star chamber" (Angie Drobnic Holan) memorably put it: "Sometimes we decide one way and sometimes decide the other."


Afters

Though PolitiFact has over the years routinely denied that it accuses people of lying, Hollyfield appears to have wandered off the reservation with her statement that "Pants on Fire" falsehoods on the "Truth-O-Meter" are "purposely wrong." A purposely wrong falsehood would count as a lie in its strong traditional sense: A falsehood intended to deceive the audience. But if that truly is part of the line of demarcation between "False" and "Pants on Fire," then why has it never appeared that way in PolitiFact's statement of principles?

Perhaps that criterion exists only (subjectively) in Hollyfield's mind?


Update Feb. 20, 2017: Removed an unneeded "the" from the second paragraph

Monday, November 21, 2016

Great Moments in the Annals of Subjectivity (Updated)

Did Republican Donald Trump win the electoral college in a landslide?

We typically think of a "landslide" as an overwhelming victory, and there's certainly doubt whether Trump's margin of victory in the electoral college unequivocally counts as overwhelming.

"Overwhelming" itself is hard to pin down in objective terms.

So that's why we have PolitiFact, the group of liberal bloggers that puts "fact" in its name and then proceeds to publish "fact check journalism" based on subjective "Truth-O-Meter" judgments.

When RNC Chairman Reince Priebus (and Trump's pick for his chief of staff) called Trump's electoral college victory a "landslide," PolitiFact Wisconsin's liberal bloggers sprang into action to do their thing (bold emphasis added):
Landslide, of course, is not technically defined. When we asked for information to back Priebus’ claim, the Republican National Committee merely recited the electoral figures and repeated that it was a landslide.
If "landslide" is not technically defined then what fact is PolitiFact Wisconsin checking? Is "landslide" non-technically defined to the point one can judge it true or false?

PolitiFact Wisconsin follows typical PolitiFact procedure in collecting expert opinions about whether Priebus' use of "landslide" matches its non-technical definition. One of the 10 experts PolitiFact consulted said Trump's margin was "close" to a landslide. PolitiFact said the other nine said it fell short, so PolitiFact ruled Priebus' claim "False."
Priebus said Trump’s win was "an electoral landslide."

But aside from the fact Trump lost the popular vote, his margin in the Electoral College isn’t all that high, either. None of the 10 experts we contacted said Trump’s win crosses that threshold.

We rate Priebus’ claim False.
One has to marvel at expertise sufficient to say whether the use of a term meets a non-technical definition.

One has to marvel all the more at fact checkers who concede that a term has a mushy definition ("not technically defined") and then declare that some use of the term fails to cross "that threshold."

What threshold?

One of the election experts said if Trump won by a landslide then Obama won by an even greater landslide.

RollCall, 2015:
In 2006, Democrats won back the House; two years later, President Barack Obama won by a landslide.
LA Times, 2012:
Obama officially wins in electoral vote landslide.
NPR, 2015:
President Obama won in a landslide.
NYU Journalism, 2008:
Obama Wins Landslide Victory, Charts New Course for United States.
So because Obama did not win by a landslide, one cannot claim Trump won by a landslide? Is that it?

It is folly for fact checkers to try to judge the truth of ambiguous claims. PolitiFact often pursues that folly, of course, and in the end simply underscores what it occasionally admits: The ratings are subjective.

Finding experts willing to participate in the folly does not reduce the magnitude of the folly. This would have been a good subject for PolitiFact to use in continuing its Voxification trend. PolitiFact might have produced an "In context" article to talk about electoral landslides and how experts view the matter. But trying to corral the use of a term that is traditionally hard to tame simply makes a mockery of fact-checking.


Jeff Adds (Dec. 1, 2016):

Add this to a long list of opinions that PolitiFact treats as verifiable facts, including these two gems:

- Radio host John DePetro opined that the Boston Marathon bomber was buried "not far" from President John Kennedy. PolitiFact used their magical powers of objective divination to determine the unarguable demarcation of "not far."

- Rush Limbaugh claimed "some of the wealthiest Americans are African-Americans now." Using the divine wizardry of the nonpartisan Truth-O-Meter, PolitiFact's highly trained social scientists were able to conjure up a determinate definition of what "wealthiest" means, and specifically which people were included in the list.

Reasonable people may discount Trump's claim of a "landslide" victory assuming the conventional use of the term, but it's not a verifiable fact that can be confirmed or dismissed with evidence. It's an opinion.

The reality is that the charlatans at PolitiFact masquerade as truthsayers when they do little more than contribute to the supposed fake news epidemic by shilling their own opinions as unarguable fact. They're dangerous frauds whose declaration of objectivity doesn't withstand the slightest scrutiny.

Wednesday, November 9, 2016

Another day, another deceptive PolitiFact chart

On election day, PolitiFact helpfully trotted out a set of its misleading "report card" graphs, including an updated version of its comparison between Democrat Hillary Clinton and Republican Donald Trump.

What is the point of publishing such graphs?

The graphs make an implicit argument to prefer the Democratic Party nominee in the general election. See how much more honest she is! Or, alternatively, see how the Republican tells many falsehoods!

The problem? This is the same PolitiFact deception we have pointed out for years.

The chart amounts to a political ad, making the claim Clinton is more truthful than Trump. But to properly support that conclusion, the underlying data should fairly represent typical political claims from Clinton and Trump--the sort of representativeness scientific studies achieve by sampling claims at random.

In the same vein, a scientific study would allow its ratings to be verified. It would permit that by using a carefully defined set of rating criteria, so that one could duplicate the results by independently repeating the fact check and reaching the same conclusions.

Yet none of that is possible with these collected "Truth-O-Meter" ratings.

Randomly selected stories aren't likely to grip readers. So editors select the fact-checks to maximize reader interest and/or serve some notion of the public good.

So much for a random sample.

And trying to duplicate the ratings by following objective scientific procedure counts as futile. PolitiFact founder Bill Adair recently confirmed this yet again with the frank admission that "the decision about a Truth-O-Meter rating is entirely subjective."

So much for objectively verifying the results.

PolitiFact passes off graphs of its opinions as though they represent hard data about candidate truthfulness.

This practice ought to offend any conscientious journalist, and that should go double for any conscientious fact-checking journalist.

We have called for PolitiFact to include some type of disclaimer each time it publishes this type of item. Such disclaimers happen only on occasion. The example embedded in this post contains no hint of a disclaimer.

Wonder why Republicans and Trump voters do not trust mainstream media fact-checking?

Take a look in the mirror, PolitiFact.

Saturday, May 14, 2016

PolitiFact's throne of lies



PolitiFact editor Angie Drobnic Holan told some whoppers on May 13, 2016.

We take a dim view here of fact checkers making stuff up. Here's what Holan had to say in her opinion piece about the 2016 election:
Our reporting is not "opinion journalism," because our sole focus is on facts and evidence. We lay out the documents and sources we find; we name the people we interviewed. The weight of evidence allows us to draw conclusions on our Truth-O-Meter that give people a sense of relative accuracy. The ratings are True, Mostly True, Half True, Mostly False, False and Pants on Fire. Readers may not agree with every rating, but there should be no confusion as to why we rated the statement the way we did.
Holan's spouting hogwash.

"Our reporting is not 'opinion journalism,' because our sole focus is on facts and evidence." Rubbish. PolitiFact establishes the direction of its fact checks based on its interpretation of a subject's claim. PolitiFact tends to have a tin ear for hyperbole and jest, and that's just the tip of PolitiFact's iceberg of subjectivity. The "Truth-O-Meter" rating system is unavoidably subjective. Perhaps Holan's qualifier "Our reporting" is intended as weasel-words avoiding that fact. Is the "Truth-O-Meter" rating PolitiFact's "reporting"? If not, Holan weasels her way to some wiggle room.

"We lay out the documents and sources we find; we name the people we interviewed." Okay, fine, but what if PolitiFact's uncovering of documents and sources was biased? What if PolitiFact conveniently finds a professional "consensus" on a key issue based on a handful of experts who lean left? Is that an objective process? Isn't that process easily led by subjective opinions? What if PolitiFact unexpectedly overlooks/ignores a report from the Congressional Budget Office when it typically counts CBO reports as the gold standard?

"The weight of evidence allows us to draw conclusions on our Truth-O-Meter that give people a sense of relative accuracy." As we pointed out above, the "Truth-O-Meter" ratings are impossible to apply objectively. For example, PolitiFact judges between "needs clarification," "leaves out important details" and "ignores critical facts." Those criteria do not at all lend themselves to objective demarcation. Where does one of them end and the next one begin? Republican presidential candidate Mitt Romney's 2012 Jeep ad certainly had more than "an element of truth" to it, yet PolitiFact gave the ad its harshest rating, "Pants on Fire," supposedly reserved, if the definitions mean anything, for statements that do not possess an element of truth. The "weight of evidence" is decided by the subjective impressions of PolitiFact editors, not by meeting objective criteria.

"The ratings are True, Mostly True, Half True, Mostly False, False and Pants on Fire." Yes, those are the ratings. If we focus just on this statement we could give Holan a "True" rating. That reminds us of another subjective aspect of PolitiFact's process. PolitiFact decides whether to consider a whole statement or just part of a statement when it applies its ratings. Is that decision based solely on facts and evidence? Of course not. It's just another part of PolitiFact's subjective judgment.

"Readers may not agree with every rating, but there should be no confusion as to why we rated the statement the way we did." From a paragraph full of howlers, this may be the biggest howler. PolitiFact editors say they're often divided on what rating to give. They say it's one of the hardest parts of the job. John Kroll, who worked at the Cleveland Plain Dealer, PolitiFact's original partner for PolitiFact Ohio, said the decision between one rating and another often amounted to a coin flip. It's outrageous for Holan to sell the narrative that the ratings are driven only by the facts.

Holan smells of beef and cheese.

Friday, November 20, 2015

PolitiFact gives Bernie Sanders "Mostly True" rating for false statement

When Sen. Bernie Sanders (I-Vt.) said more than half of America's working blacks receive less than $15 per hour, PolitiFact investigated.

It turns out less than half of America's working blacks make less than $15 per hour:
(H)alf of African-American workers earned less than $15.60. So Sanders was close on this but exaggerated slightly. His claim is off by a little more than 4 percent.
PolitiFact found that half of African-American workers earned more than $15 per hour. That makes Sanders' claim false. PolitiFact said Sanders "exaggerated slightly." PolitiFact said he was "off by a little more than four percent." PolitiFact said he was "not far off."
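The size of the exaggeration is simple arithmetic. A minimal sketch in Python using the figures above:

median_wage = 15.60      # half of African-American workers earned less than this, per PolitiFact
sanders_cutoff = 15.00   # Sanders' "less than $15 per hour"

gap = median_wage - sanders_cutoff
print(f"{gap / sanders_cutoff:.1%}")   # 4.0% of Sanders' figure
print(f"{gap / median_wage:.1%}")      # 3.8% of the actual median

Either denominator lands in the neighborhood of the "little more than 4 percent" PolitiFact cited.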

Euphemisms aside, Sanders was wrong. But PolitiFact gave Sanders a "Mostly True" rating for his claim.

Here's a reminder of PolitiFact's definition for its "Mostly True" rating:
Mostly True – The statement is accurate but needs clarification or additional information.
Sanders' statement wasn't accurate. So how does it even begin to qualify for the "Mostly True" rating the way PolitiFact defines it?

The answer, dear reader, is that PolitiFact's definitions don't really mean anything. PolitiFact's "Star Chamber" panel of editors gives the rating they see fit to give. If the definitions conflict with that ruling then the definitions bend to the will of the editors.

Subjective-like.



Update 22:25 11/23/15: Added link to PF article in 4th graph - Jeff

Wednesday, December 24, 2014

PolitiFact editor explains the difference between "False" and "Pants on Fire"

During an interview for a  "DeCodeDC" podcast, PolitiFact editor Angie Drobnic Holan explained to listeners the difference between the Truth-O-Meter ratings "False" and "Pants on Fire":



Our transcript of the relevant portion of the podcast follows, picking up with the host asking why President Barack Obama's denial of a change of position on immigration wasn't rated more harshly (bold emphasis added):
ANDREA SEABROOK
Why wouldn't that be "Pants on Fire," for example?

ANGIE DROBNIC HOLAN
You know, that's an interesting question.

We have definitions for all of our ratings. The definition for "False" is the statement is not accurate. The definition for "Pants on Fire" is the statement is not accurate and makes a ridiculous claim. So, we have a vote by the editors and the line between "False" and "Pants on Fire" is just, you know, sometimes we decide one way and sometimes decide the other. And we totally understand when readers might disagree and say "You rated that 'Pants on Fire.' It should only be 'False.'" Or "You rated that 'False.' Why isn't it 'Pants on Fire'?" Those are the kinds of discussions we have every day ...
One branch of our research examines how PolitiFact differentially applies its "Pants on Fire" definition to false statements by the ideology of the subject. Holan's description accords with other statements from PolitiFact regarding the criteria used to distinguish between "False" and "Pants on Fire."

Taking PolitiFact at its word, we concluded that the line of demarcation between the two ratings is essentially subjective. Our data show that PolitiFact National is over 70 percent more likely to give a Republican's false statement a "Pants on Fire" rating than a Democrat's false statement.
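To be clear about what that comparison measures, here is a minimal sketch in Python with made-up counts (hypothetical, not our data) showing how a "more likely" figure of that kind is computed: among a party's statements rated either "False" or "Pants on Fire," take the share rated "Pants on Fire," then compare the two shares:

def pof_share(pants_on_fire, false_only):
    # share of a group's false statements that drew "Pants on Fire"
    return pants_on_fire / (pants_on_fire + false_only)

# HYPOTHETICAL counts, for illustration only
gop = pof_share(34, 66)   # 0.34
dem = pof_share(20, 80)   # 0.20
print(f"{gop / dem - 1:.0%} more likely")   # 70% with these example counts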

We don't necessarily agree with PolitiFact's determinations of what is true or false, of course. What's important to our research is that the PolitiFact editors doing the voting believe it.

Holan's statement helps further confirm our hypothesis regarding the subjective line of demarcation between "False" and "Pants on Fire."

We'll soon publish an update of our research, covering 2014 and updating cumulative totals.

Tuesday, February 11, 2014

"The Heart of PolitiFact"

We find it fundamentally dishonest the way PolitiFact treats its trademark "Truth-O-Meter" as though it's some sort of machine that objectively measures the truth content of political statements.

The opposite's true.

Have a look at PolitiFact's "About PolitiFact" page (bold emphasis added):
The heart of PolitiFact is the Truth-O-Meter, which we use to rate factual claims.

The Truth-O-Meter is based on the concept that – especially in politics - truth is not black and white.

PolitiFact writers and editors spend considerable time researching and deliberating on our rulings ...

We then decide which of our six rulings should apply.
There's no machinery, only the machinations of biased reporters and editors.  The "Truth-O-Meter" is just the vehicle for the label they put on their decision.  The eventual rulings are even worse than the "coin flip" described by John Kroll, mentioned in our previous post.  The "Truth-O-Meter" is akin to the "Wheel of Fortune," given the level of subjectivity inherent in the system.

The heart of PolitiFact?



Friday, December 7, 2012

Dustin Siggins: The Most Overlooked "Lie of the Year"

Persistent PolitiFact critic Dustin Siggins wrote up a piece over at Red Alert Politics asking why the GOP's supposed War on Women was left out of PolitiFact's Lie of the Year contenders. Siggins makes some solid points and it's well worth the read, but his big get was his interview with PolitiFact editor Bill Adair. Adair's response to Siggins was typical, and by typical, I mean comically inconsistent with reality.

Siggins quotes Adair:
"We rate the ‘Lie of the Year’ as the boldest statement or the statement with the biggest reach. Obviously, it’s subjective,” he said. “We didn’t do a fact-check on a statement that there was a War on Women. It was an opinion, and we don’t fact-check opinions. People used it as a sum-up of a variety of aspects of the 2012 campaigns, but it was an overall opinion, not a statement of policy fact.”
Siggins makes the case that the War on Women meme was a policy statement, and points out its wide-reaching impact on the election. You should read Siggins' argument in his own words and in its entirety. For us though, the rest of Adair's response is a howler.
"It was an opinion, and we don’t fact-check opinions."
This statement is from the same guy that gave Mitt Romney a Pants on Fire rating for saying "We're inches away from no longer having a free economy." (Note: Romney actually earned three Pants on Fire ratings for that same claim, something to keep in mind when PolitiFact pimps out their "report cards.") What about Rick Perry's opinion that Barack Obama is a socialist? Bill Adair worked on that one too. Oops! PolitiFact calls both of those statements "hyperbole" (coincidentally, PolitiFact claims to have a policy against rating hyperbole as well). I guess hyperbole doesn't count as opinion.

Unfortunately, Adair doesn't fill us in on the objective metric PolitiFact used when they gave Obama a Half True for his claim that Romney's cuts to education would be "catastrophic." Of course, when Obama claimed that his tax plan only asked millionaires to "pay a little more," PolitiFact "decided that 'a little more' is an opinion, not a checkable fact."

"Catastrophic" = Verifiable fact. "A little more" = Opinion.

The most hilarious part of this is that Adair evades the most obvious problem. The Pants on Fire label itself is entirely subjective. The rating is predicated on a claim being "ridiculous." To this day, Adair has never offered up an objective definition of what makes a claim "ridiculous." So the bottom line is PolitiFact doesn't check opinions, but they do use opinions to assign ratings of fact. (Read Bryan's study on the Pants on Fire/False issue here.)

Adair doesn't clarify the issue by adding yet another version of the Lie of the Year criteria:
"We rate the ‘Lie of the Year’ as the boldest statement or the statement with the biggest reach."
Last year, Angie Drobnic Holan explained the Lie of the Year was a claim PolitiFact rated "that played the biggest role in the national discourse." Which is it?

Regardless, it's hard to imagine some of PolitiFact's finalists even being in the top 20 claims that fit either definition. In what world does Jack Markell's (who?) claim that "Mitt Romney likes to fire people" rank as a "bold" statement with "the biggest reach," let alone one that played "the biggest role in the national discourse"? Of course, don't waste your time looking for any administration comments on Benghazi in the top ten. The reality is that the finalists for PolitiFact's Lie of the Year exemplify the problems of PolitiFact's selection bias. I've previously said that I suspect the LOTY is predetermined, and a grab bag of nine ratings is thrown in for looks. The competition PolitiFact selected this year does nothing to change my mind.

Finally, I'll give Adair credit for the most honest thing I've ever heard him say about his body of work thus far:
"Obviously, it’s subjective”
Subjective indeed.

Siggins argues the "War on Women" campaign from the Democrats meets key aspects of Adair's criteria and makes a fine Lie of the Year candidate. He makes a good argument that's worth reading.