
Monday, June 11, 2018

PolitiFact 2016: Despite No Evidence Supporting Our Conclusion, It's "Half True" Donald Trump Doesn't Believe in Equal Pay for Equal Work


Our part-time effort to hold PolitiFact accountable allows many problems to slip through the cracks. Sometimes our various research projects bring a problematic fact check to our attention.

Case in point:


PolitiFact's Nov. 2, 2016 "fact check" found Democratic presidential nominee Hillary Clinton's claim "Half True" despite finding no evidence supporting it other than a fired campaign organizer's complaint of gender discrimination.

We must be exaggerating, right?

We challenge anybody to find concrete evidence of Trump's disbelief in equal pay in PolitiFact's "fact check" apart from the allegation we just described.


In fact, PolitiFact's fact check makes it look like the fact checkers have difficulty distinguishing between the raw pay gap and differences in pay stemming from discrimination. Trump comes across looking like he makes that distinction. PolitiFact comes across looking like it interprets Trump's insistence on that distinction as support for Clinton's claim.

Take for example this unsupportive piece of supporting evidence PolitiFact received from the Clinton campaign:
Clinton’s campaign pointed to another August 2015 interview, in which CNN’s Chris Cuomo asked Trump if he would pass equal pay legislation.

Trump said he was looking into it "very strongly."

"One of the problems you have is you get to have an economy where it's no longer free enterprise economy," Trump said.

Trump said he favored the concept, but that it’s "very complicated."

"I feel strongly -- the concept of it, I love," Trump said. "I just don't want it to be a negative, where everybody ends up making the same pay. That's not our system. You know, the world, everybody comes in to get a job, they make -- people aren't the same."
Trump says he favors the concept (PolitiFact's paraphrase) and in Trump's own words he "loves" the concept. Trump cautions that everybody might end up making the same pay. Does that sound like equal pay for equal work? Is all work equal?

If anything, the evidence Clinton offered against Trump undermined her own case.

Making this fact check even more bizarre, PolitiFact's summary fails to reference its best evidence of Trump's disbelief in equal pay--the went-nowhere gender discrimination case:
Our ruling

Clinton said Trump "doesn't believe in equal pay."

Trump’s campaign website does not have a stipulated stance on equal pay for men and women, but his campaign says he supports "equal pay for equal work." Trump has said men and women doing the same job should get the same pay, but it’s hard to determine what’s "the same job," and that if everybody gets equal pay, "you get away from capitalism in a sense."

Trump has also said pay should be based on performance, not gender -- so he does appear to favor uniform payment if performance is alike.

Clinton’s statement is partially accurate but leaves out important details or takes things out of context. We rate it Half True.
Put bluntly, PolitiFact put nothing in its summary in support of Clinton's claim.

Noting that "Trump's campaign website does not have a stipulated stance on equal pay for men and women" counts as an argument from silence. Making matters worse for Clinton, the Trump campaign breaks its silence to endorse the concept of equal pay for equal work.

PolitiFact claims it places the burden of proof on the one making the claim, in this case Hillary Clinton. The evidence suggests PolitiFact instead placed the burden of proof on the Trump campaign.

PolitiFact makes a snippet mosaic out of Trump's statements that appear to show that he doesn't believe men and women should make equal pay regardless of whether they do equal work. Is that supposed to serve as evidence Trump does not believe in equal pay for equal work?

In the end, PolitiFact gave Clinton a "Half True" rating despite finding no real evidence in support of her claim and an abundance of evidence contradicting it.


Afters I

The 2016 discrimination complaint from Elizabeth Mae Davidson was never litigated and was dropped earlier this year.

Afters II

PolitiFact's description of the gender wage gap in the Clinton fact check puts it somewhat at odds with some of its own past fact checks. Note this from the Clinton fact check:
We’ve detailed key issues about the gender wage gap in our PolitiFact Sheet, but a consistent argument is that women earn 77 cents on the dollar that men earn.

(...)

The Institute for Policy Women’s Research says discrimination is a big factor for why the gender wage gap still persists. Experts consider "occupational segregation" another reason for the wage gap, which means women more often than men work in jobs that pay low and minimum wages.
If you've got a "big factor" and "another reason," which should you expect to have the greater effect? The "big factor," right? Some of PolitiFact's past fact checks have correctly cast serious doubt on that proposition. PolitiFact's summary article on the gender wage gap (the same "PolitiFact Sheet" referenced above in the Clinton fact check) features a fine example (bold emphasis added):
THE BIG PICTURE

Just before Obama took office in 2009, the Department of Labor released a study because, as a deputy assistant secretary explained it, "The raw wage gap continues to be used in misleading ways to advance public policy agendas without fully explaining the reasons behind the gap." The study by CONSAD Research Corp. took into account women being more likely to work part-time for lower pay, leave the labor force for children or elder care, and choose work that is "family friendly" with fuller benefit packages over higher pay. The study found that, when factoring in those variables, the gap narrows to between 93 cents and 95 cents on the dollar.
We would remind readers that the CONSAD study is not saying that gender discrimination accounts for the remaining 5 to 7 cents on the dollar. It estimates that a gap of 5 to 7 cents on the dollar is not explained by a combination of women's occupational and family choices. Those aren't the only factors influencing the raw wage gap.

So roughly two-thirds or more of the raw wage gap is explained by the job choices women make, leaving 5 to 7 cents on the dollar unexplained, with part of that residual perhaps attributable to gender discrimination.
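As a rough back-of-the-envelope illustration (our own sketch, not a figure from either report, assuming the 77-cents-on-the-dollar raw figure PolitiFact cites alongside CONSAD's 93-to-95-cent adjusted range), the arithmetic works out like this:

# Rough sketch of the wage-gap arithmetic discussed above.
# Assumes the 77-cents-on-the-dollar raw figure and CONSAD's
# 93-to-95-cent adjusted range; illustrative inputs, not data
# pulled from either report.
raw_gap = 100 - 77                    # raw gap: 23 cents on the dollar
for adjusted in (93, 95):             # CONSAD's adjusted range
    unexplained = 100 - adjusted      # residual gap after controls
    explained_share = (raw_gap - unexplained) / raw_gap
    print(f"{adjusted}-cent case: about {explained_share:.0%} of the raw gap explained")
# Output: roughly 70% to 78% of the raw gap explained by measured choices,
# leaving 5 to 7 cents on the dollar unexplained (not necessarily discrimination).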

Did PolitiFact not make that clear?

Afters III

In the search for some charitable explanation for PolitiFact's fact-checking, I had to consider the possibility that Clinton's claim is literally correct: Trump does not believe in equal pay if "equal pay" means paying everyone equally regardless of the job or the quality of the work.

Using that interpretation of "equal pay" would make Clinton's claim literally true but at the same time a whopper of deceit. PolitiFact appeared to take Clinton to mean "equal pay for equal work" except possibly when it used Trump's statements in support of Clinton's claim.

If it was PolitiFact's position that Clinton meant Trump does not believe men and women should earn the same regardless of the job or the work performed, then it should have said so clearly.

Either way, PolitiFact's fact check looks incoherent.

Wednesday, January 3, 2018

'(PolitiFact's) rulings are based on when a statement was made and on the information available at that time'

PolitiFact Texas issued a "False" rating to Gov. Greg Abbott on Nov. 16, 2017, finding it "False" that Texas had experienced its lowest unemployment rate in 40 years.

PolitiFact Texas was also rating Abbott's claim that Texas led the nation last month (September?) in job creation. But we will focus on the first part of the claim, for that rating centers on PolitiFact's principle that it bases its rulings on the timing of a statement:
Timing – Our rulings are based on when a statement was made and on the information available at that time.
Our interest in this item was piqued when we found it linked at PolitiFact's "Corrections and updates" page. We went looking for the correction and found this:
UPDATE, Nov. 17, 2017: Two days after Abbott tweeted his claim about the Texas jobless rate, the federal government reported that the state had a 41-year record low 3.9 percent jobless rate in October 2017.
The release of government statistics confirmed the accuracy of Abbott's claim if he was talking about October.

PolitiFact Texas' update reminded us of a PolitiFact rating from March 18, 2011. Actor Anne Hathaway said the majority of Americans support gay marriage. PolitiFact rated her claim "Mostly True" based on polling released after Hathaway made her claim. Note how PolitiFact foreshadowed its unprincipled decision (bold emphasis added):
(P)ublic opinion on gay marriage is shifting quickly. How quickly? Let's just say we're glad we waited a day to publish our item.
I covered PolitiFact's failure to follow its principles back when the incident happened. But in this case PolitiFact was consistent with its principles.

Or was it?

What information was available at the time?

When Hathaway made her claim, no poll unequivocally supported her claim, and we had no reason to think the actor was in any position to have insider pre-publication information about new polling. But upon reading PolitiFact Texas' fact check of Abbott, we were left wondering whether Abbott might know the government numbers before they were released to the public.

PolitiFact Texas did not address that issue, noting simply that the unemployment rates for October were not yet released. We infer that PolitiFact Texas presumed the BLS statistics were not available to government leaders in Texas. As for us, we had no idea whether the BLS made statistics available to state governments but thought it was worth exploring.

Our search quickly led us to a Nov. 17, 2017 article at the Austin American-Statesman. That's the same Austin American-Statesman that has long partnered with PolitiFact to publish content for PolitiFact Texas.

The article, by Dan Zehr, answered our question:
It’s common and appropriate for state workforce commissions to share “pre-release” data with governors’ offices and other officials, said Cheryl Abbot, regional economist at the Bureau of Labor Statistics Southwest regional office. However, she said, the bureau considers the data confidential until their official release.
Zehr's article focused on a dilemma: Was Abbott talking about the October numbers (making him guilty of breaching confidentiality), or was he just wrong based on the number from September 2017? Zehr reported the governor's office denied that Abbott was privy to the October numbers before their official release.

We think Zehr did work that PolitiFact Texas should have either duplicated or referenced. PolitiFact Texas apparently failed to rule out the possibility that Abbott referred to the official October numbers based on the routine early sharing of such information with state government officials.

For the sake of argument, let's assume Abbott's office told Zehr the truth

PolitiFact Texas' fact check based its rating on the assumption Abbott referred to unemployment numbers for September 2017. That agrees with Zehr's reporting on what the governor's office said it was talking about.

If Abbott was talking about the September 2017 numbers, was his statement false, as PolitiFact Texas declared?

Let's review what Abbott said.

PolitiFact (bold emphasis added):
It’s commonplace for a governor to tout a state’s economy. Still, Greg Abbott of Texas made us wonder when he tweeted in mid-November 2017: "The Texas unemployment rate is now the lowest it’s been in 40 years & Texas led the nation last month in new job creation."
And let's review what PolitiFact found from the Bureau of Labor Statistics:
(W)e fetched Bureau of Labor Statistics figures showing that the state’s impressive 4 percent jobless rate for September 2017 tied the previous record low since 1976. According to the bureau, the state similarly had a 4 percent unemployment rate in November and December 2000, 17 years ago. The state jobless rate in fall 1977, 40 years ago, hovered at 5.2 percent.
According to PolitiFact's research, what is the lowest unemployment rate in Texas over the past 40 years? The answer is 4 percent. That rate occurred three times over the 40-year span, including September 2017. But by PolitiFact Texas' reasoning (and Zehr's reasoning, too), it is false for Abbott to claim September 2017 as the lowest in the past 40 years.

We say PolitiFact Texas (and Zehr) were wrong to suggest Abbott was simply wrong about the unemployment rate in Texas.

Ambiguous isn't the same as wrong

Certainly Gov. Abbott might have expressed himself more clearly. Abbott had the option of saying "The Texas unemployment rate is lower now than it has been in 40 years" if he believed that was the case. Such phrasing would tell his audience that no matter what the unemployment rate over the past 40 years, the current rate is lower.

Alternatively, Abbott might have said "The Texas unemployment rate is as low now as it has been in 40 years." That phrasing would clue his audience in that the present low unemployment rate had been achieved at least twice during the past 40 years.

Abbott's phrasing was somewhere in between the two alternatives we created. What he said hinted that the September 2017 rate was lower than it had been in 40 years but did not say so outright. His words were compatible with the September 2017 rate equaling the lowest in the past 40 years, but fell short of telling the entire story.

Kind of like PolitiFact Texas fell short of telling the entire story.

Though we took note of it on Twitter, we will again take the opportunity to recognize PolitiFact Texas and W. Gardner Selby as PolitiFact's best exemplars of transparency with respect to expert interviews. PolitiFact Texas posted the relevant portions (so far as we can tell!) of its interview of Cheryl Abbot. PolitiFact Texas has done similarly in the past, and we have encouraged PolitiFact (and other fact checkers) to make it standard practice.

Selby's interview shows him asking Cheryl Abbot to confirm his reading of the unemployment statistics. Selby's question was mildly leading, keeping Abbot to the topic of whether the low September 2017 unemployment rate had been equaled twice in the past 40 years. A different approach might have clued Selby in to the same valuable information Dan Zehr reported: Gov. Abbott may have had access to the confidential October figures, and his statement may prove correct for that month once the BLS releases the numbers.

It's notable that in the interview Abbot said that the numbers from September 2017 were the "lowest" in 40 years (bold emphasis added):
(T)he seasonally adjusted unemployment rate for Texas in September 2017 matched those of November and December 2000, all being the lowest rates recorded in the State since 1976.
Selby did not use the above quotation from Abbot. Perhaps he did not want his audience confused by the fact that Abbot used the same term Gov. Abbott did.

In our view, Gov. Abbott was at least partially correct if he was talking about September 2017 and correct if he was talking about October 2017.

PolitiFact Texas should have covered both options more thoroughly.

Thursday, July 6, 2017

PolitiFact Texas uses tongs (2016)

Our "tweezers or tongs" tag applies to cases where PolitiFact had a choice of a narrow focus on one part of a claim or a wider focus on a claim with more than one part.

The tweezers-or-tongs option allows a fact checker to exercise bias by using the true part of a statement to boost the rating, or by ignoring the true part of the statement to drop the rating.

In this case, from 2016, a Democrat got the benefit of PolitiFact Texas' tongs treatment:

So, it was true that Texas law requires every high school to have a voter registrar.

But it was false that the law requires the registrar to get the children to vote once they're eligible.

PolitiFact averages it out:
Saldaña said a Texas law requires every high school to have a voter registrar "and part of their responsibility is to make sure that when children become 18 and become eligible to vote, that they vote."

A 1983 law requires every high school to have a deputy voter registrar tasked with giving eligible students voter registration applications. Each registrar also must make sure submitted applications are appropriately handled.

However, the law doesn’t require registrars to make every eligible student register; it's up to each student to act or not. Also, as Saldaña acknowledged, registrars aren’t required to ensure that students vote.

We rate this statement Half True.
There are dozens of examples where PolitiFact ignored what was true in favor of emphasizing the false. It's just one more way the PolitiFact system allows bias to creep in.

Here's one for which PolitiFact Pennsylvania breaks out the tweezers:


Sen. Toomey (R-Penn.) correctly says the ACA created a new category of eligibility. That part of his claim does not figure in the "Half True" rating.

We doubt that PolitiFact has ever created an ethical, principled and objective means for deciding when to ignore parts of compound claims.

Certainly we see no evidence of such means in PolitiFact's work.

Saturday, July 1, 2017

PolitiFact absurdly keeps "True" rating on false statement from Hillary Clinton

Today we were alerted to a story from earlier this week detailing a New York Times correction.

Heavy.com, from June 30, 2017:
On June 29 The New York Times issued a retraction to an article published on Monday, which originally stated that all 17 intelligence organizations had agreed that Russia orchestrated the hacking. The retraction reads, in part:
The assessment was made by four intelligence agencies — the Office of the Director of National Intelligence, the Central Intelligence Agency, the Federal Bureau of Investigation and the National Security Agency. The assessment was not approved by all 17 organizations in the American intelligence community.”
It should be noted that the four intelligence agencies are not retracting their statements about Russia involvement. But all 17 did not individually come to the assessment, despite what so many people insisted back in October.
The same article went on to point out that PolitiFact had rated "True" Hillary Rodham Clinton's claim that 17 U.S. intelligence agencies found Russia responsible for hacking. That despite acknowledging it had no evidence backing the idea that each agency had reached the conclusion based on its own investigation:
Politifact concluded that 17 agencies had, indeed, agreed on this because “the U.S. Intelligence Community is made up of 17 agencies.” However, the 17 agencies had not independently made the assessment, as many believed. Politifact mentioned this in the story, but still said the statement was correct.
We looked up the PolitiFact story in question. Heavy.com presents PolitiFact's reasoning accurately.

It makes for a great example of horrible fact-checking.

Clinton's statement implied each of the 17 agencies made its own finding:
"We have 17 intelligence agencies, civilian and military, who have all concluded that these espionage attacks, these cyberattacks, come from the highest levels of the Kremlin, and they are designed to influence our election."
It's very easy to avoid making that implication: "Our intelligence agencies have concluded ..." Such a phrasing fairly represents a finding backed by a figure representing all 17 agencies. But when Clinton emphasized that the 17 agencies "all" reached the same conclusion, she implied independent investigations.

PolitiFact ignored that false implication in its original rating and in a June 2017 update to the article in response to information from former Director of National Intelligence James Clapper's testimony earlier in the year:
The January report presented its findings by saying "we assess," with "we" meaning "an assessment by all three agencies."

The October statement, on the other hand, said "The U.S. Intelligence Community (USIC) is confident" in its assessment. As we noted in the article, the 17 separate agencies did not independently come to this conclusion, but as the head of the intelligence community, the Office of the Director of National Intelligence speaks on behalf of the group.

We stand by our rating.
PolitiFact's rating was and is preposterous. Note how PolitiFact defines its "True" and "Mostly True" ratings:
TRUE – The statement is accurate and there’s nothing significant missing.
MOSTLY TRUE – The statement is accurate but needs clarification or additional information.
It doesn't pass the sniff test to assert that Clinton's claim about "17 agencies" needs no clarification or additional information. We suppose that only a left-leaning and/or unserious fact-checking organization would conclude otherwise.

Thursday, January 5, 2017

Evidence of PolitiFact's bias? The Paradox Project II

On Dec. 23, 2016, we published our review of the first part of Matthew Shapiro's evaluation of PolitiFact. This post will cover Shapiro's second installment in that series.

The second part of Shapiro's series showed little reliance on hard data in any of its three main sections.

Top Five Lies? Really?

Shapiro's first section identifies the top five lies, respectively, for Trump and Clinton and looks at how PolitiFact handles his list. Where does the list of top lies come from? Shapiro evidently chose them. And Shapiro admits his process was subjective (bold emphasis added):

It is extremely hard to pin down exactly which facts PolitiFact declines to check. We could argue all day about individual articles, but how do you show bias in which statements they choose to evaluate? How do you look at the facts that weren’t checked?

Our first stab at this question came from asking which lies each candidate was famous for and checking to see how PolitiFact evaluated them. These are necessarily going to be somewhat subjective, but even so the results were instructive.

It seems to us that Shapiro leads off his second installment with facepalm material.

Is an analysis data-driven if you're looking only at data sifted through a subjective lens? No. Such an analysis gets its impetus from the view through the subjective lens, which leads to cherry-picked data. Shapiro's approach to the data in this case wallows in the same mud in which PolitiFact basks with its ubiquitous "report card" graphs. PolitiFact gives essentially the same excuse for its subjective approach that we see from Shapiro: Sure, it's not scientific, but we can still see something important in these numbers!

Shapiro offers his readers nothing to serve as a solid basis for accepting his conclusions based on the Trump and Clinton "top five lies."

Putting the best face on Shapiro's evidence, yes PolitiFact skews its story selection. And the most obvious problem from the skewing stems from PolitiFact generally ignoring the skew when it publishes its "report cards" and other presentations of its "Truth-O-Meter" data. Using PolitiFact's own bad approach against it might carry some poetic justice, but shouldn't we prefer solid reasoning in making our criticisms of PolitiFact?

The Rubio-Reid comparison

In his second major section, Shapiro highlights the jaw-dropping disparity between PolitiFact's focus on Marco Rubio, starting with Rubio's 2010 candidacy for the Senate, and its focus on Sen. Harry Reid, a long-time senator as well as majority leader and minority leader during PolitiFact's foray into political fact-checking.

Shapiro offers his readers no hint regarding the existence of PolitiFact Florida, the PolitiFact state franchise that accounts in large measure--if not entirely--for PolitiFact's disproportional focus on Rubio. Was Shapiro aware of the different state franchises and how their existence (or non-existence) might skew his comparison?

We are left with an unfortunate dilemma: Either Shapiro knew of PolitiFact Florida and decided not to mention it to his readers, or else he failed to account for its existence in his analysis.


The Trump-Pence-Cruz muddle

Shapiro spends plenty of words and uses two pretty graphs in his third major section to tell us about something that he says seems important:
One thing you may have noticed through this series is that the charts and data we’ve culled show a stark delineation between how PolitiFact treats Republicans versus Democrats. The major exceptions to the rules we’ve identified in PolitiFact ratings and analytics have been Trump and Vice President-elect Mike Pence. These exceptions seem important. After all, who could more exemplify the Republican Party than the incoming president and vice president elect?
Shapiro refers to his observation that PolitiFact tends to use more words when grading the statements of Republicans. Except PolitiFact uses words economically for Trump and Pence.

What does it mean?

Shapiro concludes PolitiFact treats Trump like a Democrat. What does that mean, in turn, other than that PolitiFact does not use more words than average to justify its ratings of Trump (yes, we are emphasizing the circularity)?

Shapiro, so far as we can tell, does not offer up much of an answer. Note the conclusion of the third section, which also concludes Shapiro's second installment of his series:
In this context, PolitiFact’s analysis of Trump reinforces the idea that the media has [sic] called Republicans liars for so long and with such frequency the charge has lost it sting. PolitiFact treated Mitt Romney as a serial liar, fraud, and cheat. They attacked Rubio, Cruz, and Ryan frequently and often unfairly.

But they treated Trump like they do Democrats: their fact-checking was short, clean, and to the point. It dealt only with the facts at hand and sourced those facts as simply as possible. In short, they treated him like a Democrat who isn’t very careful with the truth.
The big takeaway is that PolitiFact's charge that Republicans are big fat liars doesn't carry the zing it once carried? But how would cutting down on the number of words restore the missing sting? Or are PolitiFact writers bowing to the inevitable? Why waste extra words making Trump look like a liar, when it's not going to work?

We just do not see anything in Shapiro's data that particularly recommends his hypothesis about the "crying wolf" syndrome.

An alternative hypothesis

We would suggest two factors that better explain PolitiFact's economy of words in rating Trump.

First, as Shapiro pointed out earlier in his analysis, PolitiFact fact-checked many of Trump's claims multiple times. Is it necessary to go to the same great lengths every time when one is writing essentially the same story? No. The writer has the option of referring the reader to the earlier fact checks for the detailed explanation.

Second, PolitiFact plays to narratives. PolitiFact's reporters allow narrative to drive their thinking, including the idea that their audience shares their view of the narrative. Once PolitiFact has established its narrative identifying a Michele Bachmann, Sarah Palin or a Donald Trump as a stranger to the truth, the writers excuse themselves from spending words to establish the narrative from the ground up.

Maddeningly thin

Is it just us, or is Shapiro's glorious multi-part data extravaganza short on substance?

Let's hope future installments lead to something more substantial than what he has offered so far.

Monday, January 2, 2017

CPRC: "Is Politifact really the organization that should be fact checking Facebook on gun related facts?"

The Crime Prevention Research Center, on Dec. 29, 2016, published a PolitiFact critique that might well have made our top 11 if we had noticed it a few days sooner.

Though the title of the piece suggests a general questioning of PolitiFact's new role as one of Facebook's guardians of truth, the article mainly focuses on one fact check from PolitiFact California, rating "Mostly True" the claim that seven children die each day from gun violence.

The CPRC puts its strongest argument front and center:
Are 18 and 19 year olds “children”?

For 2013 through 2015 for ages 0 through 19 there were 7,838 firearm deaths.  If you exclude 18 and 19 year olds, the number firearm deaths for 2013 through 2015 is reduced by almost half to 4,047 firearm deaths.  Including people who are clearly adults drives the total number of deaths.

Even the Brady Campaign differentiates children from teenagers.  If you just look at those who aren’t teenagers, the number of firearm deaths declines to 692, which comes to 0.63 deaths per day.
This argument cuts PolitiFact California's fact check to the quick. Instead of looking at "children" as something to question, the fact-checkers let it pass with a "he said, she said" caveat (bold emphasis added):
These include all types of gun deaths from accidents to homicides to suicides. About 36 percent resulted from suicides.

Some might take issue with Speier lumping in 18 year-olds and 19 year-olds as children.

Gun deaths for these two ages accounted for nearly half of the 7,838 young people killed in the two-year period.
Yes, some might take issue with lumping 18-year-olds and 19-year-olds in as children, particularly when a quick check of Merriam-Webster reveals how the claim stretches the truth. The distortion maximizes the emotional appeal of protecting "children."

Merriam-Webster's definition No. 2:
a :  a young person especially between infancy and youth
b :  a childlike or childish person  
c :  a person not yet of age
"A person not yet of age" provides the broadest reasonable understanding of the claim PolitiFact California checked. In the United States, persons 18 and over qualify as "of age."

Taking persons 18 and over out of the mix all by itself cuts the estimate nearly in half. Great job, PolitiFact California.
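For readers who want to check the per-day arithmetic, here is a minimal sketch using the CPRC figures quoted above and a three-year window (2013 through 2015); the labels and variable names are ours:

# Minimal sketch of the per-day arithmetic using the CPRC figures quoted above.
days = 3 * 365                  # 2013 through 2015, ignoring the leap day
figures = {
    "ages 0-19 (all 'young people')": 7838,
    "ages 0-17 (18- and 19-year-olds excluded)": 4047,
    "non-teenagers only": 692,
}
for label, deaths in figures.items():
    print(f"{label}: about {deaths / days:.2f} firearm deaths per day")
# ages 0-19 -> about 7.16 per day (the "seven children a day" figure)
# ages 0-17 -> about 3.70 per day
# non-teens -> about 0.63 per day (CPRC's figure)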

Visit CPRC for more, including the share of "gun violence" accounted for by suicide and justifiable homicide.

Monday, December 26, 2016

Bill Adair: Do as I say, not as I do(?)

One of the earliest criticisms Jeff and I leveled against PolitiFact was its publication of opinion-based material under the banner of objective news reporting. PolitiFact's website has never, so far as we have found, bothered to categorize its stories as "news" or "op-ed." Meanwhile, the Tampa Bay Times publishes PolitiFact's fact checks in print alongside other "news" stories. The presentation implies the fact checks count as objective reporting.

Yet PolitiFact's founding editor, Bill Adair, has made statements describing PolitiFact fact checks as something other than objective reporting. Adair has called fact-checking "reported conclusion" journalism, as though one may employ the methods of the op-ed writer from Jay Rosen's "view from nowhere" and end up with objective reporting. And we have tried to publicize Adair's admission that what he calls the "heart of PolitiFact," the "Truth-O-Meter," features subjective ratings.

As a result, we are gobsmacked that Adair effectively expressed solidarity with PolitiFact Bias on the issue of properly labeling journalism (interview question by Hassan M. Kamal and response by Adair; bold emphasis in the original):
The online media is still at a nascent stage compared to its print counterpart. There's still much to learn about user behaviour and impact of news on the Web. What are the mistakes do you think that the early adopters of news websites made that can be avoided?

Here's a big one: identifying articles that are news and distinguishing them from articles that are opinion. I think of journalism as a continuum: on one end there's pure news that is objective and tells both sides. Just the facts. On the other end, there's pure opinion — we know it as editorials and columns in newspaper. And then there's some journalism in the middle. It might be based on reporting, but it's reflecting just one point of view. And one mistake that news organisations have made is not telling people the difference between them. When we publish an opinion article, we just put the phrase 'op-ed' on top of an article saying it's an op-ed. But many many people don't know what that means. And it's based on the old newspaper concept that the columns that run opposite the editorial are op-ed columns. The lesson here is that we should better label the nature of journalism. Label whether it's news or opinion or something in between like an analysis. And that's something we can do better when we set up new websites.
Addressing the elephant in the room, if labeling journalism accurately is so important and analysis falls between reporting and op-ed on the news continuum, why doesn't PolitiFact label its fact checks as analysis instead of passing them off as objective news?


Afters

The fact check website I created to improve on earlier fact-checking methods, by the way, separates the reporting from the analysis in each fact check, labeling both.

Friday, December 23, 2016

Evidence of PolitiFact's bias? The Paradox Project I

Matt Shapiro, a data analyst, started publishing a series of PolitiFact evaluations on Dec. 16, 2016. It appears at the Paradox Project website as well as at the Federalist.

Given our deep and abiding interest in the evidence showing PolitiFact's liberal bias, we cannot resist reviewing Shapiro's approach to the subject.

Shapiro's first installment focuses on truth averages and disparities in the lengths of fact checks.

Truth Averages

Shapiro looks at how various politicians compare using averaged "Truth-O-Meter" ratings:
We decided to start by ranking truth values to see how PolitiFact rates different individuals and aggregate groups on a truth scale. PolitiFact has 6 ratings: True, Mostly True, Half-True, Mostly False, False, and Pants on Fire. Giving each of these a value from 0 to 5, we can find an “average ruling” for each person and for groups of people.
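For illustration only, here is a minimal sketch of the averaging Shapiro describes. The sample ratings are made up, and since Shapiro's description does not say whether "True" maps to 0 or to 5, the direction of the scale below is our assumption:

# Sketch of Shapiro's "average ruling" method: map each Truth-O-Meter
# rating to a value from 0 to 5 and average per person. The direction
# of the scale (True = 0) is our assumption; the sample is made up.
RATING_VALUES = {
    "True": 0,
    "Mostly True": 1,
    "Half-True": 2,
    "Mostly False": 3,
    "False": 4,
    "Pants on Fire": 5,
}

def average_ruling(ratings):
    return sum(RATING_VALUES[r] for r in ratings) / len(ratings)

sample = ["Mostly True", "Half-True", "False", "Pants on Fire"]
print(round(average_ruling(sample), 2))   # 3.0 on the 0-to-5 scale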
Unlike many (not all) past attempts to produce "Truth-O-Meter" averages for politicians, Shapiro uses his averages to gain insight into PolitiFact:
Using averages alone, we already start to see some interesting patterns in the data. PolitiFact is much more likely to rate Republicans as their worst of the worst “Pants on Fire” rating, usually only reserved for when they feel a candidate is not only wrong, but lying in an aggressive and malicious way.
Using 2012 Republican presidential candidate Mitt Romney as his example, Shapiro suggests bias serves as the most reasonable explanation of the wide disparities.

We agree, noting that Shapiro's insight stems from the same type of inference we used in our ongoing study of PolitiFact's application of its "Pants on Fire" rating. But Shapiro disappointed us by defining the "Pants on Fire" rating differently than PolitiFact defines it. PolitiFact does not define a "Pants on Fire" statement as an aggressive or malicious lie. It is defined as "The statement is not accurate and makes a ridiculous claim."

As our study argued, the focus on the "Pants on Fire" rating serves as a useful measure of PolitiFact's bias given that PolitiFact offers nothing at all in its definitions to allow an objective distinction between "False" and "Pants on Fire." On the contrary, PolitiFact's principals on occasion confirm the arbitrary distinction between the two.

Shapiro's first piece of evidence is pretty good, at least as an inference to the best explanation. But it's been done before and with greater rigor.

Word Count

Shapiro says disparities in the word counts for PolitiFact fact checks offer an indication of PolitiFact's bias:
The most interesting metric we found when examining PolitiFact articles was word count. We found that word count was indicative of how much explaining a given article has to do in order to justify its rating.
While Shapiro offered plenty of evidence showing PolitiFact devotes more words to ratings of Republicans than to its ratings of Democrats, he gave little explanation supporting the inference that the disparities show an ideological bias.

While it makes intuitive sense that selection bias could lead toward spending more words on fact checks of Republicans, as when the fact checker gives greater scrutiny to a Republican's compound statement than to a Democrat's (recent example), we think Shapiro ought to craft a stronger case if he intends to change any minds with his research.


Summary

Shapiro's analysis based on rating averages suffers from the same types of problems that we think we addressed with our "Pants on Fire" study: Poor averages for Republicans make a weak argument unless the analysis defuses the excuse that Republicans simply lie more.

As for Shapiro's examination of word counts, we certainly agree that the differences are so significant that they mean something. But Shapiro needs a stronger argument to convince skeptics that greater word length for fact checks of Republicans shows liberal bias.


Update Dec. 23, 2016: Made a few tweaks to the formatting and punctuation, as well as adding links to Shapiro's article at the Paradox Project and the Federalist (-bww).


Jeff adds:

I fail to see how Shapiro contributes anything worthwhile to the conversation, and he certainly doesn't offer anything new. Every criticism of PolitiFact in his piece has been written about in depth before and, in my view, written much better.

Shapiro's description of PolitiFact's "Pants on Fire" rating is flatly wrong. The definition is published at PolitiFact for anyone with an interest in looking it up. Shapiro asserts that a "Pants on Fire" rating "requires the statement to be both false and malicious" and is "usually only reserved for when they feel a candidate is not only wrong, but aggressively and maliciously lying." This is pure fiction. Whether this indicates sloppiness or laziness I'm not sure, but in any event mischaracterizing PolitiFact's ratings only gives fuel to PolitiFact's defenders. Shapiro's error at the very least shows an unfamiliarity with his subject.

Shapiro continues the terribly flawed tradition of some conservative outlets, including the Federalist, where his article was published, by attempting to find clear evidence of bias by simply adding up PolitiFact's ratings. Someone with Shapiro's skills should know this is a dubious method.

In fact, he does know it:
This method assumes this or that article might have a problem, but you have to look at the “big picture” of dozens of fact-checks, which inevitably means glossing over the fact that biased details do not add up to an unbiased whole.
That's all well and good, but then Shapiro goes on to ask his readers to accept that exact same method for his own study. He even came up with his own chart that simply mirrors the same dishonest charts PolitiFact pushes.

At first blush, his "word count" theory seems novel and unique, but does it prove anything? If it is evidence of something, Shapiro failed to convince me. And I'm eager to believe it.

Unfortunately, it seems Shapiro assumes what his word count data is supposed to prove. Higher word counts do not necessarily show political bias. It's entirely plausible those extra words were the result of PolitiFact giving someone the benefit of the doubt, or granting extra space for a subject to explain themselves. Shapiro makes his assertion without offering evidence. It's true that he offered a few examples, but unless he scientifically surveyed the thousands of articles and confirmed the additional words are directly tied to justifying the rating, he could reasonably be accused of cherry-picking.

“When you’re explaining, you’re losing,” may well be a rock solid tenet of lawyers and politicians, but as data-based analysis it is unpersuasive.

We founded this website to promote and share the best criticisms of PolitiFact. While we doubt it matters to him or the Federalist, Shapiro's work fails to meet that standard. 

Shapiro offers nothing new and nothing better. This is a shame because, judging from his Twitter feed and previous writings, Shapiro is a very bright, thoughtful and clever person. We hope his next installments in this series do a better job of exposing PolitiFact's bias.

We've been criticizing and compiling quality critiques of PolitiFact for six years now. Documenting PolitiFact's bias is the main reason for this site's existence. We're exceedingly predisposed to accept and promote good evidence of PolitiFact's flaws.

If your data project can't convince two guys who started a website called PolitiFact Bias and who devote countless hours of their free time preaching to people that PolitiFact is biased, then perhaps your data project isn't very convincing.

Tuesday, December 20, 2016

PolitiFact Wisconsin don't need no stinkin' evidence

Is a fact check a fact check if it doesn't bother checking facts?

PolitiFact Wisconsin brings this question to the fore with its Dec. 16, 2016 fact check of former Democratic senator Russ Feingold. Feingold said Social Security was pretty much invented at the University of Wisconsin-Madison, and that's where President Franklin Delano Roosevelt got the idea.

PolitiFact agreed, giving Feingold's claim a "True" rating:


But a funny thing happened when we looked for PolitiFact Wisconsin's evidence in support of Feingold's claims. The fact check omits any such evidence, if it exists.

Let's review what PolitiFact offered as evidence:
When we asked Feingold spokesman Josh Orton for backup, he pointed to several Wisconsinites and people tied to the University of Wisconsin-Madison — where Feingold graduated in 1975 — who were influential in developing Social Security.
PolitiFact went on to list four persons with UW-Madison connections (among many) who were influential in bringing Social Security to pass in the United States.

Then PolitiFact Wisconsin summarized its evidence, with help from an unbiased expert from UW-Madison:
Current UW-Madison professor Pamela Herd agreed that Wisconsinites tied to the university were key figures in the development of Social Security.

"There were a lot of people involved in the creation of this program, but some of the most important players were from Wisconsin," said Herd, an expert on Social Security.
Okay, got it? Now on to the conclusion:
Feingold said that the idea for Social Security "was basically invented up on Bascom Hill, my alma mater here; that's where Franklin Roosevelt got the idea."

Historical accounts show, and an expert agrees, that officials who helped propose and initially operate Social Security had deep ties to UW-Madison.

We rate Feingold’s statement True.
And there you have it. Fact-checking.

If officials who helped propose and initially operate Social Security had deep ties to UW-Madison, then Social Security was basically invented at UW-Madison. And that's where President Roosevelt got the idea. "True."

Where was PolitiFact when Al Gore claimed to have taken the initiative in creating the Internet?

Seriously: PolitiFact Wisconsin's fact check produces no solid or unequivocal evidence supporting one of Feingold's claims and completely ignores fact-checking the other (why?). Yet Feingold's claims receive a "True" rating?

What happened to comparing the content of the federal Social Security Act to its precursor from UW-Madison? What happened to looking at where Roosevelt got his ideas about providing social insurance?

That's not fact-checking. That's rubber-stamping.


Afters:

The silver lining from PolitiFact Wisconsin's fact check comes from its links to the Social Security Administration website, which offer facts instead of supposition about the history of Social Security.

PolitiFact Wisconsin did a stellar job of keeping inconvenient facts from the Social Security website out of its fact check.

Sunday, December 18, 2016

Fact-checking the wrong way, featuring PolitiFact

Let PolitiFact help show you the right way to fact check by avoiding its mistakes

Fake and skewed news do present a problem for society. Having the best possible information allows us to potentially make the best possible decisions. Bad information hampers good decision-making. And the state of public discourse, including the state of the mainstream media, makes it valuable for the average person to develop fact-checking skills.

We found a December 11, 2016 fact check from PolitiFact that will help us learn better how to interpret claims and make use of expert sources.

The interpretation problem

PolitiFact believed it was fact-checking Republican Reince Priebus' claim that there was no report available saying Russia tried to interfere with the 2016 presidential election:


Was Priebus saying there was no "specific report" saying Russia tried to "muddy" the election? Here's how PolitiFact viewed it:
"Let's clear this up. Do you believe -- does the president-elect believe that Russia was trying to muddy up and get involved in the election in 2016?" Meet the Press host Chuck Todd asked on Dec. 11, 2016.

"No. 1, you don't know it. I don't know it," Priebus said. "There's been no conclusive or specific report to say otherwise."

That’s wrong. There is a specific report.

It was made public on Oct. 7, 2016, in the form of a joint statement from the Department of Homeland Security and the Office of the Director of National Intelligence. At the time, the website WikiLeaks was releasing a steady flow of emails stolen from the Democratic National Committee and top Hillary Clinton adviser John Podesta.

"The U.S. Intelligence Community (USIC) is confident that the Russian Government directed the recent compromises of e-mails from U.S. persons and institutions, including from U.S. political organizations," the statement said. "These thefts and disclosures are intended to interfere with the U.S. election process."
Based on the context of Priebus' appearance on "Meet the Press," we think PolitiFact botched its interpretation. NBC's Chuck Todd went back and forth with Priebus for a number of minutes on the nature of the evidence supporting the charge of Russian interference with the 2016 U.S. presidential election. The main topic was recent news reports suggesting Russia interfered with the U.S. election to help Republican candidate Donald Trump. Todd's question after "Let's clear this up" had little chance of clearing up that point. Priebus would not act unreasonably by interpreting Todd's question to refer to interference intended to help the Republican Party.

But the bigger interpretation problem centers on the word "specific." Given the discussion between Todd and Priebus about the epistemological basis for the Russian hacking angle, including "You don't know it and I don't know it" in the immediate context, both "conclusive" and "specific" in Priebus' reply address the nature of the evidence.

"Conclusive" means incontrovertible, not merely featuring a conclusion. "Specific" means including a specific evidence or evidences, and therefore would refer to a report showing evidences, not merely a particular (second definition as opposed to the first) report.

In short, PolitiFact made a poor effort at interpreting Priebus in the most sensible way. Giving conservatives short shrift in the interpretation department occurs routinely at PolitiFact.

Was the report PolitiFact cited incontrovertible? PolitiFact offered no argument to that effect.

Did the report give a clear and detailed description of Russia's attempt to influence the 2016 election? Again, PolitiFact offered no argument to that effect.

PolitiFact's "fact-checking" in this case amounted to playing games with the definitions of words.

The problem of the non-expert expert

PolitiFact routinely cites experts without investigating their partisan leanings, without reporting them, or both. Our current case gives us an example of that, as well as a case of giving the expert a platform to offer a non-expert opinion:
Yoshiko Herrera, a University of Wisconsin political scientist who focuses on Russia, called that letter, "a pretty strong statement." Herrera said Priebus’ comment represents a "disturbing" denial of facts.

"There has been a specific report, and politicians who wish to comment on the issue should read and comment on that report rather than suggest there is no such report or that no work has been done on the topic," Herrera said.
What relevant expertise does a political scientist focused on Russia bring to the evaluation of statements on issues specific to U.S. security? Even taking for granted that the letter Herrera talks about was objectively "a pretty strong statement," Herrera has no obvious qualification that lends weight to her opinion. An expert on international intelligence issues might lend weight to that opinion by expressing it.

The same goes for PolitiFact's second paragraph quoting Herrera. The opinion in this case gains some weight from Herrera's status as a political scientist (the focus on Russia counts as superfluous), but her implicit opinion that Priebus made the error she cautions about does not stem from Herrera's field of expertise.

Note to would-be fact checkers: Approach your interviews with experts seeking comments that rely on their field of expertise. Anything else is fluff, and you may end up embarrassing the experts you cite by presenting as expert opinion material that does not reflect their expertise.

Was Herrera offering her neutral expert opinion on Priebus' comment? We don't see how her comments rely on her expertise. And reason exists to doubt her neutrality.

Source: FEC.

Yoshiko Herrera's FEC record of political giving shows her giving exclusively to Democrats, including a modest string of donations to the campaign of Hillary Rodham Clinton.

Did PolitiFact give its readers that information? No.

The wrap

Interpret comments fairly, and make sure you only quote expert sources when their opinion comes from their area of expertise. Don't ask an expert on political science and Russia for an opinion that requires a different area of expertise.

For the sake of transparency, I advocate making interview material available to readers. Did PolitiFact lead Herrera toward the conclusion a "specific report" exists? Or did Herrera offer that conclusion without any leading? An interview transcript allows readers to answer that question.

PolitiFact has announced that it plans to start making interview materials available as a standard practice. Someday? Somewhere over the rainbow?



Afters

Since the time we started this evaluation of PolitiFact's fact check, U.S. intelligence agencies have weighed in with comments hinting that they possess specific evidence showing a Russian government connection to election-related hacking and information leaks. But even these new reports do not contradict Priebus until the reports include the clear and detailed evidence of Russian meddling--from named and reliable sources.

Tuesday, December 13, 2016

Does changing from "True" to "Half True" count as a correction? Clarification? Update? Anything?

The use and abuse of the PolitiMulligan

We've pointed out before PolitiFact's propensity to correct or update its stories on the sly, contrary to statements of journalistic ethics (including its own statement of principles) regarding transparency.

Thanks to PolitiFact, we have another example in the genre, where PolitiFact California, instead of announcing a correction or update, simply executed a do-over on one of its stories.

On July 28, 2016, PolitiFact ruled it "True" that vice-presidential candidate Mike Pence had advocated diverting federal money from AIDS care services to "conversion therapy." But Timothy P. Carney, writing for the right-leaning Washington Examiner, had published an item the day before explaining why the evidence used by Pence's critics did not wash.

I wrote about PolitiFact California's faulty fact check on July 29, 2016 at Zebra Fact Check.

On Dec. 2, 2016, PolitiFact partly reversed itself, publishing a new version of the fact check with a "Half True" rating replacing the original "True" rating.

To be sure, the new item features a lengthy editor's note explaining the reason for the new version of PolitiFact California's fact check. But readers should note that PolitiFact completely avoids any admission of error in its explanation:
EDITOR’S NOTE: On July 28, 2016, PolitiFact California rated as True a statement by Democratic Lt. Gov. Gavin Newsom that Republican Indiana Governor and now Vice President-Elect Mike Pence "advocated diverting taxpayer dollars to so-called conversion therapy." We based that ruling on a statement Pence made in 2000 on his congressional campaign website, in which Pence says "Resources should be directed toward those institutions which provide assistance to those seeking to change their sexual behavior." Subsequently, our readers and other fact-checking websites examined the claim and made some points that led us to reconsider the fact check. Readers pointed out that Pence never explicitly advocated for conversion therapy in his statement and that he may have been pushing for safer sex practices. Pence’s words are open to other interpretations: Gay and lesbian leaders, for example, say his statement continues to give the impression that he supported the controversial practice of conversion therapy when his words are viewed in context with his long opposition to LGBT rights. Taking all of this into account, we are changing our rating to Half True and providing this new analysis.

PolitiFact California’s practice is to consider new evidence and perspectives related to our fact checks, and to revise our ratings when warranted.
While we credit PolitiFact California for keeping an archived version of its first attempt available for readers, we find PolitiFact's approach a bit puzzling.

First of all, this case involves no "new evidence and perspectives." Carney's July 27 article ought to have pre-empted the flaw in PolitiFact California's original July 28 fact check, and Zebra Fact Check highlighted the problem again two days later: A fact checker needs to account for the difference in wording between "changing sexual behavior" and "changing sexual preference." Also noted was PolitiFact California's failure to explain the immediate context of the smoking-gun quotation it used to convict Pence: the Ryan White Care Act.

PolitiFact California made two major mistakes in its fact-checking. First, it failed to pay due attention to the wording of Pence's statement. Second, it failed to consider the context.

The two major errors resulted in no admission of error. And PolitiFact California's do-over fails to even show up on PolitiFact's list of stories that were updated or corrected.

As for the new "Half True" rating? If "changing their sexual behavior" in the context of the Ryan White Care Act is open to interpretation as "changing their sexual orientation," then we claim as our privilege the interpretation of "Half True" as "False."

In other words, PolitiFact California, creative interpretation is no substitute for facts.


Afters


So apparently it is an update. Just not the type of update that PolitiFact includes on its "Corrections and Updates" page.

Friday, December 9, 2016

PolitiFact agrees to disagree with itself on budget cuts

Bless your heart, PolitiFact.

PolitiFact has lately started to wrap up its "Obameter" feature, rating whether President Obama delivered on the set of campaign promises PolitiFact tracked.

One recent item caught our eye, as Obama earned a "Compromise" rating for partially delivering on a re-election campaign promise to cut $1 trillion to $1.5 trillion in spending.

Veteran PolitiFact fact checker Louis Jacobson wrote this one.

Jacobson and PolitiFact received some great advice from an expert, then proceeded to veer into the weeds:
"Like anything else in budgeting, it's all about the baseline," said Steve Ellis, vice president of Taxpayers for Common Sense. "It's a cut relative to what?"

The most obvious way to judge how much spending went down on Obama's watch is to start with how much spending was expected in 2012, when he made the promise, and then compare that to the amount of spending that actually materialized.
Huh? What happened to the incredibly obvious method of measuring how much was spent in 2012 when he made his promise and then looking at how much was spent at the end of his term in office?

Jacobson's Dec. 5, 2016 Obameter story doesn't even acknowledge the method we're pointing out, yet Jacobson appeared well aware of it when he wrote a budget cut fact check way back in 2014:
First, while the ad implies that the law is slicing Medicare benefits, these are not cuts to current services. Rather, as Medicare spending continues to rise over the next 10 years, it will do so at a slower pace would [sic] have occurred without the law. So claims that Obama would "cut" Medicare need more explanation to be fully accurate.
Jacobson faulted a critic of Obama's health care law for using "cuts" to describe slowing the growth of future spending. Yet Jacobson finds that deceptive method "the most obvious way" to determine whether Obama delivered his promised spending cut.
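To illustrate the difference with hypothetical numbers of our own choosing: suppose spending stood at $3.5 trillion in 2012 and the 2012 baseline projected $4.0 trillion in spending for 2016. If actual 2016 spending comes in at $3.8 trillion, the baseline method credits Obama with a $200 billion "cut" even though spending rose by $300 billion over the term. Comparing actual spending at the start and end of the term shows an increase; comparing against the projection manufactures a "cut."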

But at least there's a happy ending to the discrepancy. The National Republican Senatorial Committee had "the most obvious way" counted against it (as deceptive), while President Obama receives the benefit when PolitiFact uses the Republicans' supposedly deceptive method to rate the president's promise on cutting spending.

There's nothing wrong with favoring the good guys over the bad guys, right?


Inside Baseball Stuff

In terms of fact-checking, we noted a particularly interesting feature of Louis Jacobson's rating of President Obama's promise of cutting spending by $1 trillion to $1.5 trillion: Obama was promising that spending cut over and above one he was already claiming to have achieved. Though PolitiFact's presentation makes this part of Obama's statement obvious, PolitiFact does not bother to confirm the claim.


Consideration of context serves as a fact checker's primary tool for interpreting claims. If Obama saved $1 trillion before promising equal or greater savings in his second term, then the means he used to achieve the first savings is the means we should expect him to use to fulfill the second-term promise, unless he specifies otherwise.

We failed to find any mainstream fact check addressing Obama's claim of saving $1 trillion in 2011:
PRESIDENT OBAMA: Well, I-- I have to tell you, David, if-- if you look at my track record over the last two years, I cut spending by over a trillion dollars in 2011.
If Obama did not save $1 trillion in the first place, he cannot fulfill a promise to cut "another" $1 trillion. At best he can fulfill part of the promise: to cut $1 trillion.

Louis Jacobson and PolitiFact did not notice?

Checking whether Obama saved that $1 trillion in 2011 should have served as a prerequisite for rating Obama's second-term promise to save another $1 trillion or more. Fact checkers could then assume Obama would save the second $1 trillion under the same terms as the first.

Monday, December 5, 2016

Handicapping PolitiFact's "Lie of the Year" for 2016

A full plate of stuff to write about has left me a little behind in getting to PolitiFact's list of finalists for its "Lie of the Year" award--the award that makes it even more obvious that PolitiFact does opinion journalism, since judging the importance of a "lie" requires subjective judgment.

I clipped an image of the most important part of the electronic ballot:



One thing jumped out right away. Typically the list of finalists includes about 10 specific fact checks from a number of sources. This year's menu includes only four specific fact checks, two each from Hillary Clinton and Donald Trump. PolitiFact rounds out the menu with two category choices, the whole 2016 election and the "fake news" phenomenon that, without much hint of irony, has galvanized the mainstream press to make even greater efforts to recapture its (legendary?) role as the gatekeeper of what people ought to know and accept.

Given PolitiFact's recent tendency to select a "Lie of the Year" made up of multiple finalists, these changes make a great deal of sense. We've already pointed out one of the advantages PolitiFact gains from this approach. Having a multi-headed hydra as the winner allows PolitiFact to dodge criticisms of its choice. Oh, that hydra head got lopped off? No worries. These others continue to writhe and gnash their teeth.

Without further ado, I'll rate the chances of the six listed finalists. Doubtless my co-editor Jeff D. will weigh in at some point with his own comments and predictions.

Clinton "never received nor sent any email that was marked as classified" on her private email server while acting as Secretary of State

Of the specific claims listed, this one probably had the biggest impact on the election. Clinton made this claim a key part of her ongoing defense of her use of the private email server. When FBI Director James Comey contradicted this part of Clinton's story, it cinched one of Clinton's key negatives heading into the 2016 election. This one would serve as a pretty solid choice for "Lie of the Year." The main drawback of this selection stems from liberal denial of Clinton's weakness as a presidential candidate. This choice might generate some lasting resentment  from a significant segment of PolitiFact's liberal fanbase, some of whom will insist Clinton was telling the truth.


Clinton says she received Comey's seal of approval regarding her truthfulness about the email server

This item gave us a notable case where a major political figure made a clear and pretty much indefensible statement that was quickly publicized as such. Was this one politically significant? I think journalists were a bit shocked that Clinton made this unforced error. But I doubt voters regarded this case as anything other than a footnote to Clinton's earlier dissembling about her email server.

Trump claims "large-scale voter fraud"

Talk about awkward!

Trump was pilloried by the mainstream press along with pundits and politicians aplenty for his statements calling the presidential election results into doubt. But the political importance of this one gets complicated by liberal challenges to the election results in states where Trump's margin of victory was not particularly narrow (Michigan, Pennsylvania, and Wisconsin). Why challenge the results if they were not skewed by some form of large-scale fraud? This selection also suffers from the nature of the evidence. Trump received the rating not because it is known that no large-scale voter fraud took place in 2016, but because of a lack of evidence supporting the claim.

Donald Trump said he was against the war in Iraq

This one counts as the weakest of the specific fact checks on the list. PolitiFact and its fact-checking brethren built a flimsy case that Trump had supported the Iraq War. Making this one by itself the "Lie of the Year" would invite some well-founded challenges from the mainstream and conservative press.


"The entire 2016 election, when falsehoods overran the facts"

Now things get interesting! Could PolitiFact opt for a "Lie of the Year" even more generalized than "campaign statements by Donald Trump," which won in 2015? And does PolitiFact have the ability to objectively quantify this election's overrunning of the facts compared to elections in the past? And could PolitiFact admit that falsehoods overran the facts despite proclamations that fact-checking enjoyed a banner year? If falsehoods overrun the facts while fact checkers enjoy a banner year, then what will journalists prescribe to remedy the situation? More of what hasn't worked?

This choice will likely have good traction with PolitiFact's editors if they see a way toward picking this one while avoiding the appearance of admitting failure.


The fake news phenomenon(?)

Fake news has been around a good while, but it's the new hotness in journalistic circles. If mainstream journalism can conquer fake news, then maybe the mainstream press can again take its rightful place as society's gatekeepers of information! That idea excites mainstream journalists.

This surprise nominee has everything going for it. Fake news is fake by definition, so who can criticize the choice? It's total journalistic hotness, as noted. And the choice represents a call to action, opposing fake news, in symphony with a call that is already reverberating in fact-checking circles.

Is it a lame choice? Yes, it's as lame as all get out. I doubt journalists even have a clue about the impact of fake news, not to mention the role fact checkers play in supporting false news memes that liberals favor.


Summary

Clinton's claim that she never sent or received material marked as classified on her private server is the favorite according to the norms established in the award's early years. But the fake news choice serves as the clear favorite in terms of PolitiFact's sympathy with its Democratic-leaning readership and its promotion of its own sense of mission. I expect the latter to prevail.


Jeff Adds:

I don't see much in Bryan's take to disagree with. You can dismiss any claim relating to Trump right off the bat. Giving the award to Trump would neither shock people who hate him nor upset people who love him (who presumably already have a low regard for liberal fact checkers). It would be a yawner of a pick that would fail to generate buzz.

The Clinton pick would be a favorite in any other year. Because Clinton has already lost the election and her status on the left has diminished, handing her the award wouldn't do her any harm, but it would provide PolitiFact with a bogus token of neutrality ("See! We call Democrats liars too!"). Meanwhile, the resulting outrage of PolitiFact's devoted liberal fanbase would generate plenty of clicks, and typically that's what the Lie of the Year has been about. It's true they would temporarily upset the faithful, but we've seen this exact scenario play out before with little consequence. Historically, PolitiFact seems motivated by clicks (and even angry liberal clicks will do, not to mention they keep the "we upset both sides" charade going).

But the Fake News pick is the obvious favorite here. It's the hottest of hot topics in journalism circles, and PolitiFact sees itself on the front lines in the war against opposing viewpoints, er, "unfacts." It's already trying to rally the troops and wants to be seen as a beacon of truthiness in a sea of deceit.

It's been my view that while PolitiFact formerly cared primarily about generating buzz, since Holan's ascension [Angie Drobnic Holan replacing Bill Adair as chief editor--bww] they've behaved more and more like political activists. The Clinton choice would get more clicks, but I'd bet on Fake News being this year's rallying cry for PolitiFact's army of Truth Hustlers.

Viva la Factismo!


Monday, November 21, 2016

Great Moments in the Annals of Subjectivity (Updated)

Did Republican Donald Trump win the electoral college in a landslide?

We typically think of a "landslide" as an overwhelming victory, and there's certainly doubt whether Trump's margin of victory in the electoral college unequivocally counts as overwhelming.

"Overwhelming" itself is hard to pin down in objective terms.

So that's why we have PolitiFact, the group of liberal bloggers that puts "fact" in its name and then proceeds to publish "fact check journalism" based on subjective "Truth-O-Meter" judgments.

When RNC Chairman Reince Priebus (and Trump's pick for his chief of staff) called Trump's electoral college victory a "landslide," PolitiFact Wisconsin's liberal bloggers sprang into action to do their thing (bold emphasis added):
Landslide, of course, is not technically defined. When we asked for information to back Priebus’ claim, the Republican National Committee merely recited the electoral figures and repeated that it was a landslide.
If "landslide" is not technically defined then what fact is PolitiFact Wisconsin checking? Is "landslide" non-technically defined to the point one can judge it true or false?

PolitiFact Wisconsin follows typical PolitiFact procedure in collecting expert opinions about whether Priebus' use of "landslide" matches its non-technical definition. One of the 10 experts PolitiFact consulted said Trump's margin was "close" to a landslide. PolitiFact said the other nine said it fell short, so PolitiFact ruled Priebus' claim "False."
Priebus said Trump’s win was "an electoral landslide."

But aside from the fact Trump lost the popular vote, his margin in the Electoral College isn’t all that high, either. None of the 10 experts we contacted said Trump’s win crosses that threshold.

We rate Priebus’ claim False.
One has to marvel at expertise sufficient to say whether the use of a term meets a non-technical definition.

One has to marvel all the more at fact checkers who concede that a term has a mushy definition ("not technically defined") and then declare that some use of the term fails to cross "that threshold."

What threshold?

One of the election experts said if Trump won by a landslide then Obama won by an even greater landslide.

RollCall, 2015:
In 2006, Democrats won back the House; two years later, President Barack Obama won by a landslide.
LA Times, 2012:
Obama officially wins in electoral vote landslide.
NPR, 2015:
President Obama won in a landslide.
NYU Journalism, 2008:
Obama Wins Landslide Victory, Charts New Course for United States.
So Obama did not win by a landslide, and therefore one cannot claim Trump won by a landslide? Is that it?

It is folly for fact checkers to try to judge the truth of ambiguous claims. PolitiFact often pursues that folly, of course, and in the end simply underscores what it occasionally admits: The ratings are subjective.

Finding experts willing to participate in the folly does not reduce the magnitude of the folly. This would have been a good subject for PolitiFact to use in continuing its Voxification trend. PolitiFact might have produced an "In context" article to talk about electoral landslides and how experts view the matter. But trying to corral the use of a term that is traditionally hard to tame simply makes a mockery of fact-checking.


Jeff Adds (Dec. 1, 2016):

Add this to a long list of opinions that PolitiFact treats as verifiable facts, including these two gems:

- Radio host John DePetro opined that the Boston Marathon bomber was buried "not far" from President John Kennedy. PolitiFact used its magical powers of objective divination to determine the unarguable demarcation of "not far."

- Rush Limbaugh claimed "some of the wealthiest Americans are African-Americans now." Using the divine wizardry of the nonpartisan Truth-O-Meter, PolitiFact's highly trained social scientists were able to conjure up a determinate definition of what "wealthiest" means, and specifically which people were included in the list.

Reasonable people may discount the claim of a "landslide" victory, given the conventional use of the term, but it's not a verifiable fact that can be confirmed or dismissed with evidence. It's an opinion.

The reality is that the charlatans at PolitiFact masquerade as truthsayers when they do little more than contribute to the supposed fake news epidemic by shilling their own opinions as unarguable fact. They're dangerous frauds whose declaration of objectivity doesn't withstand the slightest scrutiny.

Wednesday, November 9, 2016

Another day, another deceptive PolitiFact chart

On election day, PolitiFact helpfully trotted out a set of its misleading "report card" graphs, including an updated version of its comparison between Democrat Hillary Clinton and Republican Donald Trump.

What is the point of publishing such graphs?

The graphs make an implicit argument to prefer the Democratic Party nominee in the general election. See how much more honest she is! Or, alternatively, see how many falsehoods the Republican tells!

The problem? This is the same PolitiFact deception we have pointed out for years.

The chart amounts to a political ad, making the claim that Clinton is more truthful than Trump. But to properly support that conclusion, the underlying data should fairly represent typical political claims from Clinton and Trump--the sort of representativeness scientific studies achieve by randomizing data collection.

In the same vein, a scientific study would allow for verification of its ratings by using a carefully defined rating system. One could then duplicate the results by independently repeating the fact checks and reaching the same conclusions.

Yet none of that is possible with these collected "Truth-O-Meter" ratings.

Randomly selected stories aren't likely to grip readers. So editors select the fact-checks to maximize reader interest and/or serve some notion of the public good.

So much for a random sample.

And trying to duplicate the ratings by following an objective, scientific procedure is futile. PolitiFact founder Bill Adair recently confirmed this yet again with the frank admission that "the decision about a Truth-O-Meter rating is entirely subjective."

So much for objectively verifying the results.

PolitiFact passes off graphs of its opinions as though they represent hard data about candidate truthfulness.

This practice ought to offend any conscientious journalist, and that should go double for any conscientious fact-checking journalist.

We have called for PolitiFact to include some type of disclaimer each time it publishes this type of item. Such disclaimers happen only on occasion. The example embedded in this post contains no hint of a disclaimer.

Wonder why Republicans and Trump voters do not trust mainstream media fact-checking?

Take a look in the mirror, PolitiFact.

Friday, November 4, 2016

The Daily Caller: "PolitiFact Used Doctored Clinton Foundation Memo On Its HIV/AIDS Program"

The conservative website The Daily Caller has wound up in a bit of a feud with PolitiFact. The Caller ran an article criticizing the Clinton Foundation. PolitiFact did a fact check of the Caller in response. And the Caller has responded blow for blow.

This one'll leave a mark:
High-ranking officers with the Clinton Foundation gave PolitiFact a doctored version of a 2008 memo lauding its HIV/AIDS program presumably to defend against congressional charges that the charity distributed ‘watered down’ drugs to poor patients on the African continent, according to new information acquired by The Daily Caller News Foundation Investigative Group.

The altered memo went to Politifact Sept. 21, 2016, three days after TheDCNF published a story entitled, “Clinton Foundation AIDS Program Distributed ‘Watered-Down’ Drugs To Third World Countries.” (RELATED: EXCLUSIVE: Clinton Foundation AIDS Program Distributed ‘Watered-Down’ Drugs To Third World Countries)
The Caller also achieved a minor miracle by getting PolitiFact's Aaron Sharockman to respond.
Steps were reportedly taken to verify the authenticity of the Clinton Foundation document, Aaron Sharockman, executive director of PolitiFact, claimed in a statement to TheDCNF.

He told TheDCNF that the memo “was provided to us by the Clinton Foundation in response to our questions,” adding that PolitiFact “verified its authenticity through emails sent and received at that time.”
Read the whole thing here.



Correction/clarification Nov. 6, 2016: "The Daily Caller has wound up a bit of a feud"=>"The Daily Caller has wound up in a bit of a feud"

Thursday, November 3, 2016

PolitiFact: "Mostly False" that many are paying more for health care than for mortgage or rent

Sometimes reading a PolitiFact fact check is like being whisked off to Wonderland for a conversation with the Mad Hatter.

Case in point: Donald Trump says that, for the first time in history, many Americans are paying more for health care than for their rent or mortgage. PolitiFact finds it "Mostly False."


This rating drew our attention right away because of the ambiguity of Trump's claim. How does a fact check go about making a truth determination about "many instances"? How many is "many"? And if pinning down "many" poses a challenge, how does one go from there to finding out whether it's happening for the first time in history?

After getting into the text of the fact check, we found ourselves struggling to control our laughter over the way PolitiFact approached the problem.