Sunday, November 13, 2016

PolitiFact's "many" problems

On Nov. 3, 2016, we brought some focus to the "Mostly False" rating PolitiFact gave Donald Trump for saying many Americans were paying more for health care than for their mortgage or rent.

PFB co-editor Jeff D. today reminded me about an "Afters" section I added to a post from Sept. 1, 2016:
PolitiFact exaggerated the survey evidence supposedly supporting Clinton by claiming "many" teachers blamed Trump for increasing bullying and harassment:
Many of these teachers, unsolicited, cited Trump’s campaign rhetoric and the accompanying discourse as the likely reason for this behavior.
The Zebra Fact Check investigation suggests PolitiFact was misled about the number of teachers saying Trump was responsible for increasing bullying or harassment. Out of almost 2,000 teachers participating in the survey, 849 answered the question about bullying or biased language, and of those, 123 mentioned Trump. Only a fraction of those placed any kind of blame on Trump for anything. We would generously estimate that 25 teachers blamed Trump for something (not necessarily bullying or harassment) in answering that question. This implies that, to PolitiFact, "many" can be less than 1.25 percent of 2,000.
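The fractions above are easy to check. This is a quick sketch using the counts quoted from the Zebra Fact Check investigation (the round figure of 2,000 total participants is the post's own approximation):

```python
# Survey counts as cited above (Zebra Fact Check's reading of the SPLC survey)
total_surveyed = 2000       # approximate number of participating teachers
answered_question = 849     # answered the bullying/biased-language question
mentioned_trump = 123       # of those, mentioned Trump at all
blamed_trump = 25           # generous estimate: blamed Trump for something

print(f"{mentioned_trump / answered_question:.1%} of question respondents mentioned Trump")
print(f"{mentioned_trump / total_surveyed:.1%} of all surveyed teachers mentioned Trump")
print(f"{blamed_trump / total_surveyed:.2%} of all surveyed teachers blamed Trump for something")
```

Even taking the most generous denominator (only the 849 who answered the question) and the most generous numerator (all 123 who mentioned Trump at all), "many" tops out at about 14.5 percent of respondents to that one question, and 1.25 percent of the surveyed teachers under the estimate above.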
That's right. The hypocritical liberal bloggers at PolitiFact said "many" teachers cited Trump's rhetoric as the likely reason for bullying and/or harassing behavior. PolitiFact shoveled that to its readers as a fact, though the data showed a small fraction of the surveyed teachers offered that opinion. Then, about a month later, PolitiFact said Trump's statement about health care costs was "Mostly False."

Could PolitiFact be right in both cases?

That seems like a stretch.

Wrong in both cases?

That's more likely.


  1. Going with the numbers that were presented, it seems like about 15% of the teachers who answered the survey mentioned Trump when talking about bullying. That is significant.

    1. In your opinion, what hypothesis would that "significant" figure support?

      Bear in mind the skewed sample the SPLC used.

  2. Unknown/ConnecticutHeartthrob,

    I don't see where the definition of "significant" was at issue. I did ask what hypothesis would be supported by the "significant" statistical finding.

    I don't suppose you have an answer to that question?

    Apologies for not getting to your comments earlier. There were quite a few in the moderation folder for which I received no email notification.

  3. ConnecticutHeartthrob wrote:

    **The hypothesis is that there are a measurable number of student bullies mimicking Trump's behavior.

    This is because the percentage of teachers who were polled about it is outside the margin of error.**

    We've said it before, but it's useless calculating a margin of error for data beset by selection bias.

    Apparently people don't want to hear it.

    **Non-probability sampling does not permit the computation of a margin of sampling error in the same way that probability sampling does.**

    **Over the years, the margin of sampling error has typically been provided to give readers a sense of a poll’s overall accuracy. That simple idea requires some critical assumptions, however: It presumes that the sample was chosen completely at random, that the entire population was available for sampling and that everyone sampled chose to participate in the survey. It also assumes that respondents understood the questions and that they answered in the desired way. For pre-election surveys, it assumes that pollsters have accurately defined and selected the population of likely voters.**


Thanks to commenters who refuse to honor various requests from the blog administrators, all comments are now moderated. Pseudonymous commenters who do not choose distinctive pseudonyms will not be published, period. No "Anonymous." No "Unknown." Etc.