
Saturday, December 28, 2024

Why does PolitiFact struggle with simple stuff?

 PolitiFact Bias started out and continues as an effort to improve PolitiFact.

We understand PolitiFact's liberal bloggers disliking criticism. But c'mon, it's for your own good. And why the struggle with simple stuff?

Moments ago, I was dipping into some search results relating to a potential research project. While reviewing a PolitiFact story, I noticed it had an update notice.




"An update," I think. "I wonder what was updated?"

So I looked through the story for the update. Then I looked for the update again.

Then I cheated and tried for an Internet Archive comparison. The oldest archived page was already updated, so that was initially a dead end.

I looked through the story yet again for the update, without finding it.

What does (did) PolitiFact's statement of principles say about updates?

Updates – From time to time, we add additional information to stories and fact-checks after they’ve published, not as a correction but as a service to readers. Examples include a response from the speaker we received after publication (that did not change the conclusion of the report), or breaking news after publication that is relevant to the check. Updates can be made parenthetically within the text with a date, or at the end of the report. Updated fact-checks receive a tag of "Corrections and updates."

The update announcement at the top should have featured the date it was added. And, as PolitiFact's supposed principles state, the story should have had a "Corrections and Updates" tag added. There's no such tag.

My attempt to access PolitiFact's archived statement of principles reminded me that PolitiFact's website update might hide older pages from ordinary Internet Archive searches. So I went to PolitiFact's main page as archived on the date of the article. The article was highlighted on the main page, and clicking on it took me to the page as archived on March 29, 2019.

The page had no update announcement. 

Now we're cookin'.

Comparing March 29 to April 1 revealed five added paragraphs from (liberal blogger) Jon Greenberg.
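For readers who want to replicate this kind of before-and-after digging, the Internet Archive offers a public CDX API that lists every capture of a URL, even when a site redesign keeps old pages out of the Wayback Machine's ordinary search box. Below is a minimal Python sketch; the fact-check URL in the usage line is illustrative, not the actual page discussed above.

import json
import urllib.parse
import urllib.request

# The Internet Archive's public CDX endpoint lists every capture of a URL.
CDX = "https://web.archive.org/cdx/search/cdx"

def list_snapshots(url, year=None):
    query = CDX + "?url=" + urllib.parse.quote(url) + "&output=json"
    if year:
        query += "&from=" + str(year) + "&to=" + str(year)
    with urllib.request.urlopen(query) as resp:
        data = resp.read()
    rows = json.loads(data) if data.strip() else []
    if not rows:
        return
    header, entries = rows[0], rows[1:]  # first row is the column header
    for entry in entries:
        record = dict(zip(header, entry))
        # Each capture is viewable at web.archive.org/web/<timestamp>/<original>
        print(record["timestamp"], record["original"])

# Illustrative usage: list the 2019 captures of a hypothetical fact-check URL.
list_snapshots("politifact.com/factchecks/2019/mar/29/example/", year=2019)

Comparing any two captures from the resulting list shows exactly what changed between dates.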

So, PolitiFact updated the story and did not inform its readers of the specifics of the update. That has the effect of a stealth edit, which counts as a no-no in journalism.

This Is So Minor! Who Cares?

Yes, this case is fairly minor. But following published principles, in journalism as in anything else, should count as standard practice. Inconsistent application of principles empties the term "principles" of the meaning it ought to have.

As to who cares, the public ought to care because journalism organizations have principles to establish their trustworthiness. Following the principles provides evidence of trustworthiness. Failing to follow principles offers evidence of untrustworthiness.

The International Fact-Checking Network, in its supposed role in holding fact-checking organizations accountable, also ought to care. But I could send a correction request to PolitiFact asking to have this problem corrected, and PolitiFact probably would not bother to fix it. I say that based on past experience. Moreover, after PolitiFact failed to fix the error for weeks, I could send this example to the International Fact-Checking Network as an example of PolitiFact failing to scrupulously follow its corrections policy, and the IFCN would ignore it (see here).

Meanwhile, the IFCN (owned, like PolitiFact, by the Poynter Institute) will continue to assure the public that fact-checking orgs like PolitiFact that are "verified" by the IFCN scrupulously follow their corrections policies.

These journalists who want our trust are telling us falsehoods.

Why wouldn't it be better to fix stories so that they live up to published principles? If they don't have time to follow principles on corrections and updates (among other things), should we expect them to have time to live up to their principles in reporting and fact-checking?

We believe we haven't been able to help PolitiFact or the IFCN much because they don't want any help.

Sunday, February 18, 2018

PolitiFact partially unveils spectacularly transparent description of its fact-checking process

"The Week in Fact-Checking," an update on the latest fact-checking news posted at the Poynter website, alerted us to the fact that PolitiFact has updated its statement of principles:
PolitiFact made their methodology more transparent, in keeping with other fact-checkers around the world. (And ICYMI,  PolitiFact has moved its headquarters to Poynter, earning a not-for-profit designation.)
We were surprised we had missed PolitiFact's welcome improvement to its methodological transparency. So we visited PolitiFact.com to check it out.

So ... where is it?

PolitiFact created multiple pages of transparent new content and apparently neglected to equip its website with internal links leading readers to the new content.

Clicking "About Us>>Our Process" on the main menu takes the reader to PolitiFact's 2013 statement of principles.

Clicking "Our Process" on the footer takes the reader to PolitiFact's 2013 statement of principles

There's no apparent way to use PolitiFact's main page to find the new even-more-transparent(!) statement of principles.

But people can see PolitiFact's latest extreme transparency through the Poynter.org website. Or maybe via links posted to Twitter. We haven't noticed any yet, but it's possible.

So there's that.

The new material was published on Feb. 12, 2018. As of Feb. 18, 2018, PolitiFact.com still funneled readers to its 2013 statement of principles.

We see that as illustrative of the PolitiFact bubble. PolitiFact judges its transparency according to its belief that it has published a new statement of principles. Those outside the PolitiFact bubble, unaware of the new statement of principles thanks to PolitiFact's oversight, likely do not take the same view of PolitiFact's transparency.

Why are those outside the bubble so ignorant of PolitiFact's extreme transparency?

Friday, February 15, 2013

PolitiFact Oregon: Making pretzels out of PolitiFact's principles

Remember PolitiFact's principles?

No worries.  PolitiFact doesn't either.  At least not enough to update its statement of principles when editor Bill Adair adds to them.

PolitiFact originally published its statement of principles on Feb. 21, 2011.

On Jan. 25 last year, in "Tuning the Truth-O-Meter," Adair wrote:
About a year ago, we realized we were ducking the underlying point of blame or credit, which was the crucial message. So we began rating those types of claims as compound statements. We not only checked whether the numbers were accurate, we checked whether economists believed an office holder's policies were much of a factor in the increase or decrease.

We give a lot of Half True ratings because the numbers are often right, but experts repeatedly tell us that the policies of a single executive have a relatively small impact in a big and complex economy.
Going back one year before Jan. 25 last year, we get to approximately January of 2011--not at all far from the time PolitiFact published its statement of principles.

With the credit/blame issue missing from its statement of principles, who can blame PolitiFact Oregon for ignoring it?

We will.  Fact checkers ought to avoid inconsistency in their rulings.

PolitiFact Oregon uncritically accepts the underlying argument

PolitiFact Oregon fact checked a claim by Jeff Merkley (D-Ore.) in support of the Violence Against Women Act.
(S)upporters like Sen. Jeff Merkley, D-Ore.,  have been talking about the law’s benefits.

Here’s what Merkley said Feb. 7 during a conference call with reporters: "Since 1994 when VAWA was first passed, incidents of domestic violence have dropped more than 50 percent."

That seems to be a pretty strong selling point and as the bill moves toward a final vote in the Senate it’s something that will be repeated and emphasized during debate.
The statistic is only a strong selling point for the VAWA if the VAWA has a substantial effect on the decrease in domestic violence.  That's Merkley's underlying argument.  PolitiFact Oregon fact checks only the statistic and implicitly accepts the underlying argument without any critique at all, giving Merkley a "True" rating for his statement.

There's no question Merkley was crediting the VAWA for the change.  PolitiFact notes Merkley was "talking about the law's benefits" before breathlessly reporting that it "seems to be a pretty strong selling point."

That's the way you do the fact check if you're biased toward Merkley's point of view.  And unwilling to let your standards for fact checking get in the way.

Friday, July 19, 2019

PolitiFact Wisconsin: "Veteran" and "service member" mean the same thing

A funny thing happened when PolitiFact examined Democratic presidential candidate Tulsi Gabbard's claim the Trump administration deports service members.

Instead of ruling on whether the Trump administration was deporting service members, PolitiFact Wisconsin decided to look at whether the Trump administration was deporting non-naturalized service veterans.

Therefore "service members" are the same thing as non-naturalized service veterans?

We wish we were kidding. But read PolitiFact's summary conclusion. PolitiFact equates "service members" with "veterans" as though it's the most natural thing in the world, and doesn't even mention citizenship status:
Our ruling

Gabbard said at the same time Trump talks about supporting veterans, "he is deporting service members who have volunteered to serve this country."

The Trump administration expanded the grounds under which people, including veterans, can be deported, which some blame for more veterans being forced to leave the country. That said, GAO documents make clear the issue existed before Trump took office -- something that wasn’t acknowledged in Gabbard’s claim.

Our definition for Mostly True is "the statement is accurate but needs clarification or additional information." That fits here.
PolitiFact does mention citizenship issues in the body of the story. It opens, for example, with a frame emphasizing military service and illegal immigration:
Military matters and illegal immigration.

Both are hot-button issues for voters in the 2020 presidential election, though for different reasons.

U.S. Rep. Tulsi Gabbard of Hawaii, a Democratic presidential hopeful and major in the Hawaii Army National Guard, linked them when she spoke July 11, 2019 at the League of United Latin American Citizens convention in Milwaukee.
In the quotation PolitiFact Wisconsin provided, Gabbard did nothing to explicitly link military service with illegal immigration. The journalist (or reader) would have to infer the connection. And PolitiFact Wisconsin failed to link to a transcript of Gabbard's speech, linking us instead to the Journal Sentinel's news report that fails to supply any additional context to Gabbard's remarks.

Intentional Spin?

We see evidence suggesting PolitiFact Wisconsin applied intentional spin in its story to minimize the misleading nature of Gabbard's statement.

In context, Gabbard referred to "lip service" Trump offers to "our veterans, to our troops," but PolitiFact lops off "to our troops" in its headline and deck material. That truncated version of Gabbard's statement makes it appear reasonable to assume Gabbard was talking about veterans and not active service members.

Put simply, PolitiFact manipulated Gabbard's statement to help make it match the interpretation PolitiFact's liberal bloggers gave it in the story. PolitiFact not only chose not to deal with the obvious way Gabbard's statement might mislead people, but also chose not to transparently disclose that decision to its readers.

Principles Forsaken

PolitiFact's statement of principles is a sham. Why? Because PolitiFact applies the principles so haphazardly that we might as well call the result situational ethics. The ideology of the claimant appears to serve as one of the situational conditions driving the decision as to which principle to apply in any given case.

In Gabbard's case, she made a statement that could easily be interpreted in a way that makes it false. And PolitiFact often uses that as the justification for a harsh rating. In its statement of principles, PolitiFact says it takes into account whether a statement is literally true (or false). It also says PolitiFact takes into account whether the statement is open to interpretation (bold emphasis added):
The three editors and reporter then review the fact-check by discussing the following questions.
• Is the statement literally true?
• Is there another way to read the statement? Is the statement open to interpretation?
• Did the speaker provide evidence? Did the speaker prove the statement to be true?
• How have we handled similar statements in the past? What is PolitiFact’s jurisprudence?
PolitiFact effectively discarded two of its principles for the Gabbard fact check.

We say that a fact-checking organization that does not apply its principles consistently cannot receive credit for consistent non-partisanship or fairness.

With PolitiFact, "words matter" sometimes.



Afters

We've always been open to legitimate examples showing PolitiFact's inconsistency causing unfair harm to liberals or Democrats.

The examples remain few, in our experience.

Friday, August 4, 2017

PolitiFact editor: Principles developed "through sheer repetition"

PolitiFact editor Angie Drobnic Holan this week published her ruminations on PolitiFact's first 10 years of fact-checking.

Her section on the development of PolitiFact's principles drew our attention (bold emphasis added):
We also have made big strides in improving methodology, the system we use for researching, writing and editing thousands of fact-checks, more than 13,000 and counting at PolitiFact.com.

Through sheer repetition, we’ve developed principles and checklists for fact-checking. PolitiFact’s Principles of the Truth-O-Meter describes in detail our approach to ensuring fairness and thoroughness. Our checklist includes contacting the person we’re fact-checking, searching fact-check archives, doing Google and database searches, consulting experts and authors, and then asking ourselves one more time what we’re missing.
The line to which we added bold emphasis doesn't really make any sense. One develops principles and checklists by experience and adaptation, not by "sheer repetition." Sheer repetition results in repeating exactly the procedures one started out with.

PolitiFact's definitions for its "Truth-O-Meter" ratings appear on the earliest Internet Archive page we could load: September 21, 2007, featuring the original definition of "Half True" that PolitiFact not-so-smoothly dumped around 2011. So the definitions do not appear to have resulted from "sheer repetition."

The likely truth is that PolitiFact developed an original set of principles based on what probably felt like careful consideration at the time. And as the organization encountered difficulties it tweaked its process.

Does the contemporary process count as "big strides" in improving PolitiFact's methodology?

We're not really seeing it.

When PolitiFact won its 2008 Pulitzer Prize for National Reporting, one of the stories among the 13 submitted was a "Mostly True" rating for Barack Obama's claim that his uncle had helped liberate Auschwitz. Auschwitz was liberated by the Soviet army. Mr. Obama's uncle was not part of the Soviet army. A false claim received a "Mostly True" rating.

This week, PolitiFact California issued a "Mostly True" rating for the claim that a National Academy of Sciences study found undocumented immigrants commit fewer crimes than native-born Americans. If PolitiFact had looked at the claim in terms of raw numbers, it would likely prove true. Native-born Americans, after all, substantially outnumber undocumented immigrants. Such a comparison means very little, of course.

PolitiFact California simply overlooked the fact that the study looked at immigrants generally, not undocumented immigrants. We wish we were kidding. We are not kidding:
We started by checking out the 2015 National Academy of Sciences study Villaraigosa cited. It found: "Immigrants are in fact much less likely to commit crime than natives, and the presence of large numbers of immigrants seems to lower crime rates." The study added that "This disparity also holds for young men most likely to be undocumented immigrants: Mexican, Salvadoran, and Guatemalan men."
While the latter half of the paragraph hints at data specific to undocumented immigrants, we should note two important facts. First, measuring crime rates for Guatemalan immigrants in general serves as a poor method for gauging the criminality of undocumented Guatemalan immigrants. The same goes for immigrants from other nations. Second, PolitiFact California presents this information as though it came from the NAS study. In fact, the NAS study was summarizing the findings of a different study.

Neither study reached conclusions specific to undocumented immigrants, for neither used data permitting such conclusions.

Yet PolitiFact California found the following statement "Mostly True" (bold emphasis added):
"But going after the undocumented is not a crime strategy, when you look at the fact that the National Academy of Sciences in, I think it was November of 2015, the undocumented immigrants commit less crimes than the native born. That’s just a fact."
False statement, "Mostly True" rating.

If PolitiFact has learned anything over the past 10 years, it is that it can largely get away with passing incompetent fact-checking and subjective ratings on to its loyal readers.

Sunday, April 2, 2017

Angie Drobnic Holan: "Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

PolitiFact, thy name is Hypocrisy.

The editors of PolitiFact Bias often find themselves overawed by the sanctimonious pronouncements we see coming from PolitiFact (and other fact checkers).

Everybody screws up. We screw up. The New York Times screws up. PolitiFact often screws up. And a big part of journalistic integrity comes from what you do to fix things when you screw up. But for some reason that concept just doesn't seem to fully register at PolitiFact.

Take the International Fact-Checking Day epistle from PolitiFact's chief editor Angie Drobnic Holan:
Find news organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency. (We adhere to those principles at PolitiFact and at the Tampa Bay Times, so if you’re reading this, you’ve made a good start.)
The first sentence qualifies as great advice. The parenthetical sentence that follows qualifies as a howler. PolitiFact adheres to principles of truthfulness, fairness and transparency?

We're coming fresh from a week where PolitiFact published a fact check that took conservative radio talk show host Hugh Hewitt out of context, said it couldn't find something that was easy to find, and (apparently) misrepresented the findings of the Congressional Budget Office regarding the subject.

And more to the issue of integrity, PolitiFact ignores the evidence of its failures and allows its distorted and false fact check to stand.

The fact check claims the CBO finds insurance markets under the Affordable Care Act stable, concluding that the CBO says there is no death spiral. In fact, the CBO said the ACA was "probably" stable "in most areas." Is it rightly a fact checker's job to spin the judgments of its expert sources?

PolitiFact improperly cast doubt on Hewitt's recollections of a New York Times article where the head of Aetna said the ACA was in a death spiral and people would be left without insurance:
Hewitt referred to a New York Times article that quotes the president of Aetna saying that in many places people will lose health care insurance.

We couldn’t find that article ...
We found the article (quickly and easily). And we told PolitiFact the article exists. But PolitiFact's fact check still makes it look like Hewitt was wrong about the article appearing in the Times.

PolitiFact harped on the issue:
In another tweet, Hewitt referenced a Washington Post story that included remarks Aetna’s chief executive, Mark Bertolini. On the NBC Meet the Press, Hewitt referred to a New York Times article.
We think fact checkers crowing about their integrity and transparency ought to fix these sorts of problems without badgering from right-wing bloggers. And if they still won't fix them after badgering from right-wing bloggers, then maybe they do not qualify as "organizations that have a demonstrated commitment to the ethical principles of truthfulness, fairness, independence and transparency."

Maybe they're more like liberal bloggers with corporate backing.



Correction April 3, 2017: Added a needed apostrophe to "fact checkers job."

Sunday, May 20, 2012

PolitiFact Virginia takes mere months to correct obvious misjudgment

It's "CORRECTION" time at PolitiFact? No, not quite.

It's "UPDATE" time at PolitiFact? Er, not exactly.

It's "Editor's Note" time! We love these!
Editors Note: On Dec. 26, 2011, PolitiFact Virginia rated as Mostly True a statement by Democrat Tim Kaine that Republican George Allen, during his term in the U.S. Senate from 2001-2007, helped turn the largest budget surplus in U.S. history into the largest deficit.

Our ruling was largely based on raw federal budget numbers dating back to the 1930s. The Allen campaign recently told us that our rating did not give enough credence to what two economists said in the original story: The best way to compare deficits through history is to express them as a percentage of the Gross Domestic Product at the time.

We took a new look at the fact-check and concluded the Allen campaign is right. So we are changing our rating to Half True because there is still validity to Kaine’s claim, but his numbers need context. 
PolitiFact Virginia published the above on May 15, nearly six months after giving Kaine his inflated grade.

Note that this ruling change does not come as a result of new information.  Everything was there in the story, and the PolitiFact Virginia team just failed to put the pieces together.  Note also that PolitiFact avoids calling this a correction in the editor's note.  Let's review the "Principles of PolitiFact and the Truth-O-Meter":
When we find we've made a mistake, we correct the mistake.
  • In the case of a factual error, an editor's note will be added and labeled "CORRECTION" explaining how the article has been changed.
  • In the case of clarifications or updates, an editor's note will be added and labeled "UPDATE" explaining how the article has been changed.
  • If the mistake is significant, we will reconvene the three-editor panel. If there is a new ruling, we will rewrite the item and put the correction at the top indicating how it's been changed.
If PolitiFact Virginia had committed a factual error, then it would publish an editor's note labeled "CORRECTION."  The note does not contain that word, therefore by PolitiFact's principles it committed no factual error by calling Kaine's claim "Mostly True."

Neat!

It gets even more confusing with the next bullet point.  If there's no factual error but just a clarification or update then we should see the label "UPDATE" along with the explanation of the change.  We don't see that label either.

Apparently PolitiFact Virginia just skipped the first two bullet points and went right for the third.  Reconvene if the mistake is significant and rewrite the item with the (non-correction) correction at the top.  So we have a mistake significant enough to require a rewrite with no admission of a mistake in accordance with PolitiFact's principles.  A mistake is implied by the new ruling with the rewrite, of course.

Seriously, if PolitiFact follows its principles on the matter of corrections in such a haphazard way, what makes anyone think it applies its other principles consistently?

Incidentally, it is clear that Republican George Allen endured the harm from the mistake, while Democrat Tim Kaine reaped the benefit.

Friday, December 16, 2011

New feature: The (annotated) Principles of PolitiFact and the Truth-O-Meter

Jeff and I continually run across cases where PolitiFact applies its standards unevenly, contradicts them or simply ignores them.  But simply mentioning it on a case-by-case basis doesn't quite carry the impact to the reader that we might hope.  After all, most readers don't happen to look at every case we mention.

To help communicate the degree to which PolitiFact fails to keep to its principles (and, to be sure, to give us an added opportunity to express our snarky sides), we added a new page:  The (annotated) Principles of PolitiFact.  The page takes the text of PolitiFact's "Principles of PolitiFact and the Truth-O-Meter" and adds our commentary regarding the application--or lack thereof--of those principles.

We'll be adding scads of links to provide examples of the failures we point out.

The page will always be a work in progress, reflecting our growing body of work examining PolitiFact.

Wednesday, July 6, 2016

PolitiFact's Hillary hilarity

PolitiFact's Pretzel of Play-Doh Principles


FBI Director James Comey's announcement on July 5, 2016 made clear that former secretary of state Hillary Clinton gave false reports of her handling of top secret and sensitive emails.

That created a problem for PolitiFact. PolitiFact had rated "Half True" Clinton's claim that she neither sent nor received email marked as classified--at least not marked that way when it was sent. And PolitiFact had issued that rating just two days earlier, on July 3, 2016 (Update July 7, 2016: PolitiFact has deleted the original story from its site, so find the archived version here).



Time for some steam-shoveled CYABS, courtesy of PolitiFact.

A scant two days later, Lauren Carroll, yes, the same Lauren Carroll from the screen-captured byline, published an explanation of sorts, along the same lines as the editor's note that now accompanies the original story:
(After this fact-check published, FBI Director James Comey released details of the FBI's investigation into Hillary Clinton's use of a private email server. This claim will remain rated Half True, because we base our rulings on when a statement was made and on the information available at that time. But the FBI investigation clearly undercuts Clinton’s defense if she makes a similar claim again. You can read more about the findings of the FBI investigation here.)
Since we're talking about PolitiFact, "clearly undercuts" means if PolitiFact knew three days earlier what we all know now, Clinton would have received a mere "Mostly False" rating.

Let's expose the BS for what it is.

Note that PolitiFact declares that Clinton's false claim from July 3 will keep its "Half True" rating. PolitiFact invokes one of its Play-Doh principles to justify the decision:
Timing – Our rulings are based on when a statement was made and on the information available at that time.
We imagine the justification may appear legitimate to some. It is not legitimate.

It's reasonable to base a ruling on information available at the time when somebody makes a claim like Anne Hathaway affirming "the majority of Americans now support gay marriage." Obviously a fact checker can't judge the truth of that statement based on a poll published after the claim was made. Or on a poll whose findings were within the margin of error. Well, PolitiFact did both, but our readers get the point: How could Anne Hathaway justify her claim ahead of the poll, assuming its results were outside the margin of error? If it took two years after her statement for the majority to materialize, should we expect fact-checkers to make corrections at that late date? No. That would be silly. And it's almost as silly three days later.

The case with Clinton is far different.

Hathaway could not have a justified belief that a majority favored gay marriage back on March 15, 2011. PolitiFact could have justified calling Hathaway's claim false (it received a "Mostly True" rating).

Clinton, in contrast, had the very best position available to know whether she sent or received emails marked as classified. She had every reason to know the truth back in 2009-2013 as she served as secretary of state.

Clinton's is not the sort of case where PolitiFact's timing issue makes sense. The fact was established weeks ago that Clinton received classified emails. Apparently the only reason PolitiFact gave Clinton credit for a half-truth is because Clinton lied. The excuse occurs in Lauren Carroll's PolitiSplaining:
"I never received nor sent any material that was marked classified," Clinton said July 2, after Clinton was interviewed by the FBI as part of its investigation. "And there is a process for the review of material before it is released to the public, and there were decisions made that material should be classified. I do call that retroactively classifying."

Clinton’s statements like this left open the question of whether she sent or received classified information that was inappropriately left unlabeled — or that Clinton, as head of the department, failed to recognize and deal with information that should have been classified. Because of that obfuscation, we rated her claim Half True.
See, the evidence said Clinton's claim was false, but Clinton insisted it was true, obfuscating the facts. So PolitiFact had to give Clinton a "Half True" because of the obfuscation.

Got it?

The "Timing" principle makes up only half  of PolitiFact's pretzel of Play-Doh principles. PolitiFact has another (ill-advised, in our opinion) principle relevant to this case:
Burden of proof – People who make factual claims are accountable for their words and should be able to provide evidence to back them up. We will try to verify their statements, but we believe the burden of proof is on the person making the statement.
How does that principle work in practice? Ask Senator Harry Reid (D-Nev.). Reid, while serving as Senate Majority Leader, accused 2012 Republican presidential candidate Mitt Romney of not paying any taxes. PolitiFact found no evidence to support Reid's claim and so rated it "Pants on Fire." PolitiFact did not have access to Romney's tax returns showing that Reid was wrong. Rather, PolitiFact used the opinions of tax experts to decide the question.

Don't ask us why Reid's insistence he was right failed to net him a "Half True." Sometimes PolitiFact is so unfair.

The burden of proof principle should have applied in Clinton's case. Was there evidence supporting Clinton's claim? The only way to know was to have access to Clinton's emails. But Clinton made sure that happened only in part. PolitiFact ended up having to take Clinton at her word to give her that "Half True" rating.

In conclusion, don't buy PolitiFact's BS that it's basing the enduring "Half True" for Clinton on some type of real principle. Even if the wording of the principles doesn't change, the principles change in meaning to fit the need of the moment.

It's the type of thing that gives fact-checking, and PolitiFact, a bad name.



Updated this item July 6, 2016 with some grammar and formatting tweaks. 

Wednesday, July 27, 2016

The biased "True" rating for Michelle Obama's claim the White House was built by slaves

Over the years we've built up quite a bit of evidence of PolitiFact's bias, based largely on PolitiFact's inconsistent application of standards.

A PolitiFact fact check of Michelle Obama's speech to the Democratic National Convention supplies yet another strong example.


One could easily read Obama's statement to mean that the White House was built exclusively with slave labor. That was not the case, as the text of the fact check concedes. Not telling a significant part of the story often leads to PolitiFact rating a true claim "Mostly True" or worse.

Not this time (bold emphasis added):
Obama said the White House "was built by slaves." Strictly speaking, the White House was not exclusively built by slaves; it was built by a combination of slaves, free blacks and whites. But slaves were significantly involved in the construction of the White House, so we have no quarrel with the way Obama worded her claim. We rate it True.
Obama's claim was imprecise and people might be misled by it. However, PolitiFact has no problem with the way she worded her claim.

That's tossing principles on the scrap heap, not that PolitiFact is consistent enough in applying its principles that they deserve the term "principles."

Need a comparison? There are many. How about this one?


PolitiFact's summary draws the perfect contrast:
Trump has a point here, but he should have used different words to make it. We rate his claim Mostly False.
PolitiFact has no problem with Obama's word choice. But Trump should have used different words to make his valid point.

These two political figures are not being judged according to the same standard.

Fact-checking. This is why so many cannot take PolitiFact's brand of fact-checking seriously. The great mystery is why the folks at PolitiFact think it is okay to check facts this way.

If they know it is not okay and yet do it anyway, well, that puts the problem in a different light.


Afters:

We often hear the excuse from PolitiFact's defenders that PolitiFact always justifies its ratings.

To those people, we ask: would you have accepted this explanation from PolitiFact?
Trump said Chevrolet in Japan "does not exist." Strictly speaking, there are some Chevrolet vehicles in Japan though the number is relatively small compared to the more popular makes. Since the number of Chevrolets is so small we have no problem with the hyperbolic way Trump worded his claim. We rate it True.
Always justifying the rating does not help if the justifications do not follow consistent principles.

Sunday, May 17, 2020

Malleable principles at PolitiFact Pennsylvania

We don't look at every PolitiFact fact check. Not by a long shot. But when we do, we often find problems.

We did not start reading PolitiFact Pennsylvania's fact check of Republican Mike Turzai looking for problems. It came on our radar because we were updating our "Pants on Fire" bias research. We noticed the fact check had tags for "National" and for "Pennsylvania." State tags do not normally occur on stories with the "National" tag.

But we had to give this one a closer look. It rated Turzai "False" for claiming children are not at risk from COVID-19 unless they have underlying medical issues. It seemed worth looking at since children seem substantially less affected than adults by the novel coronavirus.

It didn't take us long to notice that a study PolitiFact used to justify its rating was published on May 11, 2020:
Turzai was on the right track when he said that children in poor health who contract the coronavirus are at risk of becoming seriously ill. And it’s true that children are far less susceptible than adults. But his claim that other children are totally safe is incorrect, according to a study published recently in the medical journal JAMA Pediatrics.
And the rest of the justification came from an announcement made on May 11, 2020:
Publication of the study came the same day New York City officials announced that a growing cluster of children sickened with the coronavirus have developed a serious condition called pediatric multi-symptom inflammatory syndrome.
So, what's the problem?

PolitiFact's statement of principles stipulates it will judge claims based on information available when the claim was made. Turzai made his claim in a video released on May 9, 2020. Both sources of PolitiFact's rebuttal information came from May 11, 2020. The "False" ruling goes directly against PolitiFact's statement of principles (bold emphasis added):
The burden of proof is on the speaker, and we rate statements based on the information known at the time the statement is made.
The fact check's summary paragraph emphasizes that the ruling's justification came from the two sources identified above, both coming to light on May 11, 2020.
Our ruling

Speaking about the coronavirus, Turzai said children are "not at risk unless they have an underlying medical issue." A new study and a growing number of gravely ill children in New York City prove otherwise. We rate this statement False.
Once again, PolitiFact acted out of step with its own principles. In this case a Republican received unfair harm as a result.

That's the tendency we see from left-leaning PolitiFact.



Afters

Unlike PolitiFact, we were not sure upon reading Turzai's claim what risk to children he meant. Risk of death? Risk of contracting the disease and/or carrying and spreading it? Risk of suffering severe illness upon contracting COVID-19?

PolitiFact settled on the last of those, without discussion. We think understanding him to mean the risk of death has equal justification.


Tuesday, June 28, 2022

PolitiFact: How can we rig this abortion fact check to help President Biden? Part II

Lo and behold, PolitiFact made changes to the fact check we critiqued in our previous post.

Recall that we lodged three main criticisms of PolitiFact's "Mostly True" confirmation of President Biden's claim the Supreme Court's Dobbs decision made the United States an outlier among developed nations.

  1. PolitiFact cherry-picked its pool of "developed nations."
  2. It misidentified "Great Britain" as a member of the G7, enabling it to ignore a Northern Ireland spanner in the works.
  3. It falsely stated members of the G7, except the United States, "have national laws or court decisions that allow access to abortion, with various restrictions."

PolitiFact partially fixed the second problem.

Fixing the second problem without fixing the third problem magnifies the third problem. And PolitiFact, again, failed to follow its own corrections policy.

Let's start with the "clarification" notice and work from there:

CLARIFICATION, June 27, 2022: This story has been clarified to reflect that the United Kingdom, which contains Northern Ireland, is a G-7 nation. It has also been updated to describe current abortion laws in Northern Ireland.

Note the "clarification" notice announces a clarification and an update.

What does PolitiFact's statement of principles prescribe for clarifications and updates?

Clarification:

Oops! PolitiFact's statement of principles offers no procedure for doing a clarification!

That either means PolitiFact is following its principles because the principles allow it to do whatever it wants, or else it means that PolitiFact isn't really following a principle.

Update:

Updates – From time to time, we add additional information to stories and fact-checks after they’ve published, not as a correction but as a service to readers. Examples include a response from the speaker we received after publication (that did not change the conclusion of the report), or breaking news after publication that is relevant to the check. Updates can be made parenthetically within the text with a date, or at the end of the report. Updated fact-checks receive a tag of "Corrections and updates."

We think it clear that this policy calls for newly added "update" material within the original text to occur with clear cues to the reader where the material was added ("parenthetically within the text with a date"). Otherwise, the new material occurs at the end of the item after the update notice.

It's easier to find PolitiFact updates done incorrectly than ones done correctly. But this example shows an update done the right way:

 

The method shown communicates clearly to readers how the article changed.

This is infinitely more transparent than PolitiFact's common practice of an update notice at the end saying, in effect, "We changed stuff in the story above on this date."

Understood correctly, PolitiFact corrected its story. It fixed its mistake in misleadingly identifying Great Britain as a member of the G7. And the "update" was not new information. It was information PolitiFact should have included originally but mistakenly did not.

The fact check continues to do readers a disservice by failing to inform them that the U.K. in 2019 forced Northern Ireland, via special legislation, to permit abortion. That still-missing fact contradicts PolitiFact's claim that members of the G7 other than the United States "have national laws or court decisions that allow access to abortion, with various restrictions." The legislation forcing Northern Ireland to permit legal abortion was not a national law, nor was it a court decision.

The law is specific to Northern Ireland.

We'll end with an image we created for Twitter. It's an image from the Internet Archive Wayback Machine, using its comparison feature. Text highlighted in blue was changed from the original text, and we added red lines under the part of PolitiFact's fact check that remains false.



Tuesday, November 1, 2016

More PolitiFact climate change shenanigans, featuring PolitiFact Wisconsin

One of PolitiFact's more reliable bends to the left occurs on the issue of climate change. The arbiters of truth, for example, class Republicans as climate change deniers if they do not go on record affirming man-made climate change. So much for PolitiFact's burden of proof criterion, right?

Hypocrites.

This related example comes from PolitiFact Wisconsin, checking on a claim from Senate candidate Russ Feingold that his Republican opponent does not believe humans contribute to climate change.


PolitiFact Wisconsin's approach to the fact check resembles the incompetent methods used by other iterations of the PolitiFact family. A Zebra Fact Check critique of PolitiFact's past fact check foreshadows the problems with PolitiFact Wisconsin's fact check of Feingold:
First, interpret an unclear statement according to a more clear statement by the same source.  Second, in judging what a person thinks in the present place greater weight on more recent statements.
PolitiFact Wisconsin does not apply these commonsense principles.

PolitiFact Wisconsin's evidences, in chronological order

2010
Johnson: "I absolutely do not believe in the science of man-caused climate change. It’s not proven by any stretch of the imagination."

2014
"There are other forces that cause climate to change," Johnson told Here and Now’s Robin Young. "So climate does change and I don’t deny that man has some effect on that. It certainly has a great deal of effect on spoiling our environment in many different ways."

But Johnson softened his view as soon as the next sentence: "I’ve got a very open mind, but I don’t have the arrogance that man can really do much to affect climate."
2015
Johnson votes against a proposed amendment to a bill touching the Keystone Pipeline. The amendment would have described the sense of the Senate on the issue of anthropogenic climate change, including the ideas that humans "significantly" affect the climate and that climate change increases the severity of extreme weather events (such as hurricanes).

2015
"Man-made global warming remains unsettled science. World-renowned climate experts have raised serious objections to the theories behind these claims. I believe it is a bad idea to impose a policy that will raise taxes on every American, will balloon energy prices and will hurt our economic competiveness (sic) – especially on such uncertain predictions."
2016
"Listen, man can affect the environment; no doubt about it," he said. "The climate has always changed, it always will. … The question is, how much does man cause changes in our environment, changes in our climate, and what we could possibly even do about it?"

Assessing PolitiFact Wisconsin's evidences

Following the principles mentioned above, Johnson's clearest statements on humans having some role in climate change come from the 2014 and 2016 quotations. In 2014, Johnson said he does not deny humans have a role in climate change. In 2016, Johnson said there is "no doubt about it" that man can affect the environment. Johnson's clearest statements on the subject directly contradict Feingold's claim.

Our principles also guide us toward giving a preference to more recent statements. Therefore, we consider the 2015 climate change amendment for some sign that Johnson denied a human role in climate change.

Is there a worse proof of a legislator's specific views on a topic than their willingness to vote in favor of a "sense of the Senate" amendment? Particularly when that amendment does not feature language narrowly tailored to suit the question?

Would Johnson have voted in favor of the amendment if he believed there was good evidence that undefined "climate change" causes an increase in severe weather events? Who knows? We don't. But if you're PolitiFact Wisconsin you can simply assume the answer is "no" and call it fact-checking.

PolitiFact concludes:
Johnson did not support a Senate amendment to acknowledge a man-made role in climate change and expressed skepticism each of the few times he acknowledged humans might contribute. He has acknowledged at times that humans can play a role but downplayed how significant that role might be.

For a statement that is accurate but needs additional clarification, our rating is Mostly True.

PolitiFact's conclusion consists of spin.

The Senate amendment was not simply about "a man-made role in climate change." It stipulated a significant role as well as a worsening effect on severe weather.

When Johnson said humans play a role in climate change, he did not express skepticism about whether humans play a role. He expressed skepticism about the extent of that role. They're not the same thing, and skepticism about the latter does not contradict Johnson's recognition that humans play some role in affecting the climate. PolitiFact says Johnson said humans "can" play a role. But that's just more spin. Johnson did not simply say humans "can" play a role. He said humans do play a role, and he said he does not deny humans play a role.

If Johnson says humans play some role in causing climate change, that statement cannot support Feingold's claim that Johnson does not believe humans play any role in climate change.

Johnson's statement cannot reasonably justify the "Mostly True" rating with which PolitiFact Wisconsin gifted Feingold. The statements could reasonably justify "False" or "Mostly False" ratings if PolitiFact's definitions for its ratings meant something.


PolitiFact's continued inability to apply simple logic in the course of its fact checks continues to boggle our minds. At the same time, we're not surprised. This is the type of error that results when left-leaning journalists rate the truth of political statements on a subjective scale.

Thursday, July 3, 2014

PolitiFact's compound problem

Why PolitiFact's rating of Steve Doocy was unfair


After criticizing PunditFact's failure to own up to its mistakes in a post this Wednesday past, we promised an example of how PolitiFact applies its rule for compound claims inconsistently.

What is a compound claim?

 

A compound claim is a claim that asserts more than one truth.  For example:
  • The car is a red Chrysler
The statement makes two assertions of truth:  The car is red, and the car is a Chrysler.

In its statement of principles, PolitiFact says it divides compound claims into segments, grades the segments separately, then rates the overall accuracy:
We sometimes rate compound statements that contain two or more factual assertions. In these cases, we rate the overall accuracy after looking at the individual pieces.
As is normal with PolitiFact, these principles are more like guidelines.  We'll look at the Doocy rating and compare it to another recent PolitiFact rating, this one looking at a statement from liberal columnist Sally Kohn.

Doocy:
"NASA scientists fudged the numbers to make 1998 the hottest year to overstate the extent of global warming."


PolitiFact rated Doocy's claim "Pants on Fire."

Kohn:
"Hobby Lobby provided this (birth control) coverage before they decided to drop it to file suit."


No, wait.  The above quotation is the one PolitiFact said it was checking.  But the actual sentence went on a bit longer (bold emphasis added):

"Hobby Lobby provided this (birth control) coverage before they decided to drop it to file suit, which was politically motivated."

PolitiFact rated Kohn's claim "Mostly True."

With the amputated ending restored, it's easy to see the parallel between the two claims.  Both Doocy and Kohn make assertions of fact, followed by judgments of motivation.  Doocy's claim arguably reports the results of the numbers-fudging rather than asserting that the scientists were motivated to achieve a particular end, but that point isn't necessary to show PolitiFact's inconsistency.

Given the similarity between the two claims, why did PolitiFact treat Doocy's compound claim as a unitary claim and Kohn's as a two-part compound claim?

Slanted.
We suggest a two-part theory.  Treating Doocy's statement as a compound claim might result in a "Mostly False" or better rating for a claim skeptical of human-caused climate change.  Liberals wouldn't like that.  Treating Kohn's claim as a unitary claim, or even dealing with her evidence-free claim of a political motivation for the Hobby Lobby lawsuit, harms the narrative liberals prefer on that topic.

In short, PolitiFact acted inconsistently because of political bias.  That's the theory.  If anybody has a better one, feel free to leave a comment.

The failure to consistently apply its principles provides avenues for the biases of PolitiFact's staffers to suffuse its fact checks.  This is just one example among many.


Additional note on the Kohn fact check

I can't figure out why PolitiFact fact checked Kohn if it wasn't intended to implicitly support her charge that the Hobby Lobby suit was not based on a sincere religious objection.  PolitiFact said "We can’t determine if politics motivated the company."  Without that charge, who cares if Hobby Lobby covered morning-after pills before it decided to bring a suit against the administration?  Despite its disclaimer, PolitiFact goes out of its way to make a circumstantial case supporting Kohn's charge:
The Greens re-examined the company’s health insurance policy back in 2012, shortly before filing the lawsuit. A Wall Street Journal story says they looked into their plan after being approached by an attorney from the Becket Fund for Religious Liberty about possible legal action over the federal government’s contraceptives requirement.

That was when, according to the company’s complaint, they were surprised to learn their prescription drug policy included two drugs, Plan B and ella, which are emergency contraceptive pills that reduce the chance of pregnancy in the days after unprotected sex. The government does not consider morning-after pills as abortifacients because they are used to prevent eggs from being fertilized (not to induce abortions once a woman is pregnant). This is not, however, what the Green family believes, which is that life begins at conception and these drugs impede the survival of fertilized eggs.
We can't determine PolitiFact's motivation for doing this fact check, but ... you get the picture.

Additional additional note:

Somehow, PolitiFact neglected to include the following information from its implicit concurrence with Kohn's attack on the Hobby Lobby's owners, the Green family:
54.  Hobby Lobby's insurance policies have long explicitly excluded--consistent with their religious beliefs--contraceptive devices that might cause abortions and pregnancy-termination drugs like RU-486.
This is from a court document PolitiFact cited in its fact check of Kohn.  PolitiFact used the next item from the document, No. 55, out-of-context against Hobby Lobby.  That was Hobby Lobby's admission that it unwittingly covered two morning-after drugs that may cause abortion.  No. 54 just wouldn't have fit Kohn's narrative, would it?

Jeff Adds: (7-5-2014) It's worth noting that both the Doocy and Kohn ratings were edited by Aaron Sharockman, so the inconsistency cannot be explained by the different journalistic styles of two people.


Update 7/8/2014:

Here's another recent case of the same compound problem, this time featuring Hillary Clinton (bold emphasis added):
"It’s very troubling that a salesclerk at Hobby Lobby who needs contraception, which is pretty expensive, is not going to get that service through her employer’s health care plan because her employer doesn’t think she should be using contraception," Clinton said.
No worries, Mrs. Clinton.  PolitiFact will just focus on the first part of the claim.  It's not really a fact checker's job to point out that Clinton's claim conflicts with Hobby Lobby's willingness to cover 16 kinds of contraception, right?  Nor should we consider Hobby Lobby's religious objection to paying for certain types of contraception.


Edit 7/5/2014: Added links to PolitiFact's Doocy and Kohn ratings - Jeff
Edit 7/5/2014:  Corrected some misspellings, including Mr. Doocy's name.

Friday, November 15, 2019

PolitiFact editor: "It’s important to note that we don’t do a random or scientific sample"

As we have mentioned before, we love it when PolitiFact's movers and shakers do interviews. It nearly guarantees us good material.

PolitiFact Editor Angie Drobnic Holan appeared on Galley by CJR (Columbia Journalism Review) with Mathew Ingram to talk about fact-checking.

During the interview Ingram asked about PolitiFact's process for choosing which facts to check (bold emphasis added):
MI
One question I've been asking many of our interview guests is how they choose which lies or hoaxes or false reports to fact-check when there are just so many of them? And do you worry about the possibility of amplifying a fake news story by fact-checking it? This is a problem Whitney Phillips and Joan Donovan have warned about in interviews I've done with them about this topic.
ADH
Great questions! We use our news judgement to decide what to fact-check, with the main criteria being that it’s a topic in the news and it’s something that would make a regular person say, "Hmmm, I wonder if that’s true." If it sounds wrong, we’re even more eager to do it. It’s important to note that we don’t do a random or scientific sample.
It's important, Holan says, to note that PolitiFact does not do a random or scientific sample when it chooses the topics for its fact check stories.

We agree wholeheartedly with Holan's statement in bold. And that's an understatement. We've been harping for years on PolitiFact's failure to make its non-scientific foundations clear to its audience. And here Holan apparently agrees with us by saying it's important.

How important is it?

PolitiFact's statement of principles says PolitiFact uses news judgment to pick out stories, and also mentions the "Is that true?" standard Holan mentions in the above interview segment. But what you won't find in PolitiFact's statement of principles is any kind of plain admission that its process is neither random nor scientific.

If it's important to note those things, then why doesn't the statement of principles mention it?

At PolitiFact, it's so important to note that its story selection is neither random nor scientific that nothing in three pages of Google hits on the politifact.com domain for "random" AND "scientific" has anything to do with PolitiFact's method for story selection.

And despite commenters on PolitiFact's Facebook page commonly interpreting candidate report cards as representative of all of a politician's statements, Holan insists "There's not a lot of reader confusion" about it.

If there's not a lot of reader confusion about it, why say it's important to note that the story selection isn't random or scientific? People supposedly already know that.

We use the tag "There's Not a Lot of Reader Confusion" on occasional stories pointing out that people do suffer confusion about it because PolitiFact doesn't bother to explain it.

Post a chart of collected "Truth-O-Meter" ratings and there's a good chance somebody in the comments will extrapolate the data to apply to all of a politician's statements.

We say it's inexcusable that PolitiFact posts its charts without making their unscientific basis clear to readers.

They just keep right on doing it, even while admitting it's important that people realize a fact about the charts that PolitiFact rarely bothers to explain.

Sunday, September 11, 2016

PolitiFact Florida flip-flops on subjectivity of congressional ineffectiveness

PolitiFact's patina of reliability--perceived mostly by liberals--relies on people not paying close attention.

PolitiFact Florida gives us our latest example of unprincipled fact checking.

The conservative American Future Fund ran an ad attacking Florida Democrat Patrick Murphy, who is running for the Senate against incumbent Republican Marco Rubio. The ad said Murphy had been rated one of the nation's least effective congressmen:
In the ad, American Future Fund says, "Patrick Murphy was named one of America's least effective congressmen."
It's completely true that Murphy was named one of America's least effective congressmen. InsideGov produced a set of rankings, and Murphy was rated one of the least effective.

Flip-flop

PolitiFact Florida rated the ad's claim "Mostly False" because InsideGov's system for rating effectiveness fails to take enough factors into account:
The main problem with this ranking is it’s based on a single measure: the percentage of bills sponsored by each member over their time in office that went on to pass committee. That’s not a sufficient way to rate the effectiveness of a member of Congress.

Congressional experts have repeatedly told us that there are many other ways to evaluate the effectiveness of a member beyond getting a sponsored bill passed in committee.
Got it? A reliable measure of congressional effectiveness needs to take more factors into account.

But when PolitiFact Florida decided during the Democratic primary not to rate Democrat Alan Grayson over a similar claim about Murphy based on the same InsideGov ranking, the fact checkers took a different approach to the issue:
We’re not going to rate Murphy’s effectiveness as a legislator, because that’s a subjective measure.
To be fair to PolitiFact Florida, without doing it any favors: its flip-flop started within the Grayson fact check itself, which pointed out that a single measurement criterion is not enough to rate effectiveness.

It apparently does not occur to the folks at PolitiFact Florida that if effectiveness is subjective then it doesn't matter how many criteria one uses. One is as good as a billion.

Congressional effectiveness vs. Trump-caused bullying in schools

We can't help but compare PolitiFact Florida's rating of American Future Fund to the "Mostly True" rating PolitiFact gave Democratic presidential nominee Hillary Clinton for her claim about a "Trump Effect" on our schoolkids.

Both AFF and Clinton attributed their claims to a third-party source (InsideGov and "parents and teachers," respectively).

The AFF claim was literally accurate; Clinton's less so (Zebra Fact Check found no anecdote from the source Clinton named to match her claim).

Both claims were credited to dubious sources (AFF's to the simplistic InsideGov ratings, Clinton's to a handful of anecdotes--23, estimated--from an unscientific poll of nearly 2,000 teachers).

AFF received a "Mostly False" rating. Clinton received a "Mostly True" rating.

We suggest there is no one set of nonpartisan principles that would allow PolitiFact to justify both ratings. The disparity in these ratings shows unevenly applied principles, or else a lack of principles. The conservative AFF correctly said an untrustworthy source made a certain claim and received a "Mostly False" rating. The liberal candidate semi-correctly said an untrustworthy source made a certain claim and received a "Mostly True" rating.

It doesn't pass the sniff test.

Wednesday, May 29, 2019

More Deceptive "Principles" from PolitiFact

PolitiFact supposedly has a "burden of proof" standard that it uses to help judge political claims. If a politician makes a claim and supporting evidence doesn't turn up, PolitiFact considers the claim false.

PolitiFact Executive Director Aaron Sharockman expounded on the "burden of proof" principle on May 15, 2019 while addressing a gathering at the U.S. Embassy in Ethiopia:
If you say something, if you make a factual claim, online, on television, in the newspaper, you should be able to support it with evidence. And if you cannot or will not support that claim with evidence we say you're guilty.

Well, we'll rate that claim negatively. Right? Especially if you're a person in power. You make a claim about the economy, or health, or development, you should make the claim with the information in your back pocket and say "Here. Here's why it's true." And if you can't, well, you probably shouldn't be making the claim.
As with its other supposed principles, PolitiFact applies "burden of proof" inconsistently. PolitiFact often telegraphs its inconsistency by publishing a 'Splainer or "In Context" article like this May 24, 2019 item:


PolitiFact refrains from putting actress Alyssa Milano's statement about Georgia's new abortion law on its cheesy "Truth-O-Meter" because PolitiFact could not figure out whether her statement was true.

Now doesn't that sound exactly like a potential application of the "burden of proof" criterion Sharockman discussed?

Why isn't Milano "guilty"?

In this case PolitiFact found evidence Milano was wrong about what the bill said. But the objective and neutral fact-checkers still could not bring themselves to rate Milano's claim negatively.

PolitiFact (bold emphasis added):
Our conclusion

Milano and others are claiming that a new abortion law in Georgia states that women will be subject to prosecution. It actually doesn’t say that, but that doesn’t mean the opposite — that women can’t be prosecuted for an abortion — is true, either. We’ll have to wait and see how prosecutors and courts interpret the laws before we know which claim is accurate. 
What's so hard about applying principles consistently? If somebody says the bill states something and "it actually doesn't say that," then the claim is false. Right? It's not even a burden-of-proof issue.

And if somebody says the bill will not allow women to be prosecuted, and PolitiFact wants to use its "burden of proof" criterion to fallaciously reach the conclusion that the statement was false, then go right ahead.

Spare us the lily-livered inconsistency.

Sunday, June 21, 2020

Does PolitiFact deliberately try to cite biased experts? (Updated)

If there's one thing PolitiFact excels at, it's finding biased experts to quote in its fact checks.

Sometimes an identifiable conservative turns up among the experts, but PolitiFact favors majority rule when it surveys a handful of them. It seems to us that PolitiFact lately suppresses the appearance of dissent by not bothering to find a representative sample of experts.

How about a new example?



For this fact check on President Trump's criticism of President Obama, PolitiFact cited three experts in support of its "Truth-O-Meter" ruling.

Two out of the three were appointed to Mr. Obama's "Task Force on 21st Century Policing." All three have FEC records showing they donate politically to Democrats.

The first two on the list, in fact, specifically donated to Mr. Obama's presidential campaign. Does that make them the perfect experts to comment on Mr. Trump's criticism of Mr. Obama?

Seriously, isn't this set of experts exactly the last sort of thing a nonpartisan fact-checking organization that declares itself "not biased" should do?

As bad as its selection of experts looks, the real problem with the fact check comes when PolitiFact arbitrarily decides that when Trump said "fix this," he meant "police reform" as a whole. Plenty of things can fit under "police reform," and PolitiFact proves it by citing how "the Justice Department did overhaul its rules to address racial profiling."

Other evidence supposedly showing Trump wrong was the task force's (non-binding!) set of recommendations. The paucity of the evidence comes through in PolitiFact's summary:
The record shows that is not true. After the fatal shooting of Michael Brown in Ferguson and related racial justice protests, Obama established a task force to examine better policing practices. The Obama administration also investigated patterns or practices of misconduct in police departments and entered into court-binding agreements that require departments to correct misconduct.
So putting together a task force to make recommendations on police reform is trying to "fix this."

And, for what it's worth, the fact check offered no clear support for its claim "The Obama administration also investigated patterns or practices of misconduct in police departments." PolitiFact included a paragraph describing what the administration supposedly did, but that paragraph did not reference any of its experts and did not cite either by link or by name any source backing the claim.

Mr. Trump was not specific about what he meant by "fix this." That type of ambiguity does not grant fact-checkers license for free interpretation; it makes the statement nearly impossible to fact check fairly. Put simply, a fact checker has to have a pretty clear idea of what a claim means in order to fact check it adequately. Trump may have had in mind his administration's move to create a record of police behavior that would make it hard for officers with poor records to move to a different police department after committing questionable conduct. It's hard to say.

Here's Mr. Trump's statement with some context:
Donald Trump: (11:32)
Under this executive order departments will also need a share of information about credible abuses so that offers with significant issues do not simply move from one police department to the next, that's a problem. And the heads of our police department said, "Whatever you can do about that please let us know." We're letting you know, we're doing a lot about it. In addition, my order will direct federal funding to support officers in dealing with homeless individuals and those who have mental illness and substance abuse problems. We will provide more resources for co-responders, such as social workers who can help officers manage these complex encounters. And this is what they've studied and worked on all their lives, they understand how to do it. We're going to get the best of them put in our police departments and working with our police.

Donald Trump: (12:33)
We will have reform without undermining our many great and extremely talented law enforcement officers. President Obama and Vice President Biden never even tried to fix this during their eight-year period.
We can apparently credit the Obama administration with talking about doing some of the things Trump directed via executive order.

In PolitiFact's estimation, that seems to fully count as trying to actually do them.

And PolitiFact's opinion was backed by experts who give money to Democratic Party politicians, so how could it be wrong?


Update June 21, 2020:


The International Fact-Checking Network Code of Principles

In 2020 the International Fact-Checking Network beefed up its statement of principles, listing more stringent requirements organizations must meet to achieve "verified" status in adhering to its Code of Principles.

The requirements are so stringent that we can't help but think they portend lower standards for applying those standards.

Take this, for example, from the form explaining to organizations how to demonstrate their compliance (bold emphasis added):
3. The applicant discloses in its fact checks relevant interests of the sources it quotes where the reader might reasonably conclude those interests could influence the accuracy of the evidence provided. It also discloses in its fact checks any commercial or other such relationships it has that a member of the public might reasonably conclude could influence the findings of the fact-check.
Is there a way to read the requirement in bold that would relieve PolitiFact of the responsibility of disclosing that every one of the experts it chose for this fact check has an FEC record showing support for Democratic Party politics?

If there is, then we expect that IFCN verification will continue, as it has in the past, to serve as a deceitful fig leaf creating the appearance of adherence to standards fact checkers show little interest in following.

We doubt any number of code infractions could make the Poynter-owned IFCN suspend the verification status of Poynter-owned PolitiFact.

Note: Near the time of this update we also updated the list of story tags.



Edit 2050 PDT 6/21/20: Changed "a" "to" and "police" to "of" "for" and "officers" respectively for clarity in penultimate sentence of paragraph immediately preceding Trump 11:32 quote - Jeff

Tuesday, February 22, 2011

Keeping up appearances at PolitiFact

(crossposted from Sublime Bloviations)

Yesterday PolitiFact published a piece by editor Bill Adair apparently intended to reassure readers that PolitiFact is, well, politifair in the way it does business.

Given Adair's recent history of expressing indifference to the public's perception of bias at PolitiFact, this is a significant development. Eric Ostermeier probably deserves a great deal of the credit for putting PolitiFact on the defensive. Ostermeier published a study of PolitiFact's results suggesting the strong possibility of selection bias and called for PolitiFact to make its selection process transparent.

Though Ostermeier's name might as well have been "Voldemort" for the purposes of Adair's article, the piece probably serves as Adair's response to Ostermeier's call.

How does the answer measure up?
Editor's Note: We've had some inquiries lately about how we select claims to check and make our rulings. So here's an overview of our procedures and the principles for Truth-O-Meter rulings.
The editor's note is about half true. PolitiFact didn't just have inquiries. It found itself criticized by a serious researcher who made a good case that PolitiFact ought to be viewed as having a selection bias problem unless it could allay the concern by making its methods transparent. The editor's note isn't exactly transparent.

Adair's off to a great start!