Tuesday, June 16, 2015

Reader confusion on PolitiFact's Trump report card?

We have it from PolitiFact editor Angie Drobnic Holan that not much reader confusion exists about PolitiFact's candidate and network report cards. Readers supposedly know the report cards do not represent a scientific attempt to gauge the truthfulness of candidates.

We find Holan's claim laughable.

We often see PolitiFact's readers expressing their confusion on Facebook. The Donald Trump presidential candidacy, predictably accompanied by PolitiFact's highlighting of Trump's PolitiFact "report card," offers us yet another opportunity to survey the effects of PolitiFact's deception:
  • "65% False or Pants on Fire.
    He's a FoxNews dream candidate"
  • "admittedly, 14 is a small sample size but 0% true!"
  • "Only 14 % Mostly True ??? 86 % from Half true to Pants on Fire - FALSE !!! YES - THIS IS THE KIND OF PERSON WE NEED IN THE WHITE HOUSE."
  •  "0% true? Now, why doesn't that surprise me?"
  •  "True zero percent of the time - zero. Great word, zero. Sums up this man's essence well I think"
  • "I believe this is the worst record I've seen?"
  • "Wow 0% true"
  • "Not one single true . sounds right"
  • "Profile of a habitual LIAR !"
  • "Just what we need another republican that can't tell the truth !!"
  • "His record is better than I expected, but still perfectly abysmal."
  • "I realize that Politifact doesn't check everything politicians say, but zero percent true? That's outrageous."
  • "You'd think the GOP would disqualify him for never telling the whole truth."
  • "Zero percent "true"!"
  • "He should do well with a record like that, lol!"
  • "Not a great record in the accuracy department."
  • "Never speaks the truth!!!!"
  • "This looks like the record of a pathological liar."
Occasionally a Facebook commenter gives us hope:
People who believe these percentages are the fools. It's a well known thing called "selection bias". It refers to the FACT they have not checked every statement he has made just a very few. The percentages given are from those their [sic] selected. It's a well known way to lie about someone and is commonly used by left wing media and is meant to inflame their base to believe everything they read that agrees with their own beliefs.
"There's not a lot of reader confusion out there." Right. Sure.

PolitiFact Florida tweezes Governor Scott

We've noted PolitiFact's inconsistency in judging claims narrowly or broadly, using the tag "Tweezers or Tongs" to identify our stories highlighting those inconsistencies.

Our example this time comes from PolitiFact Florida, which has looked at Florida Governor Rick Scott's claims about environmental spending, working to ensure that people do not believe Florida is spending a record amount on environmental issues generally, even if it's true that the state is spending a record amount to revitalize the Everglades and its natural springs.

The latest rating, a "Pants on Fire" given to Scott for repeating the "record funding" claim, drew our attention thanks to the lack of context PolitiFact offered. Supposedly Scott made the claim at a June 2 economic summit attended by, among others, a number of GOP presidential hopefuls.

PolitiFact Florida noted that Scott made his claim in the company of a number of other claims.
As he boasted about the state’s record during his economic summit for GOP presidential contenders in Orlando June 2, Scott reeled off a bunch of statistics about Florida’s budget and economy including this one: "If you care about the environment, we've got record funding."
We confirmed that by locating Scott's speech on C-SPAN. We created a clip that provides the context of Scott's remarks.

We have two main questions.

What led PolitiFact Florida to tweeze out that one statement of Scott's from the many it could have fact-checked?

Should we consider the issue adequately fact-checked when PolitiFact Florida's story publishes no comparative spending totals and fails to separate state funding from federal funding?

We don't think so.

Saturday, June 6, 2015

PolitiFact RI presents Lincoln Chafee's "scorecard"

Do PolitiFact's "report cards" mislead?

You betcha!

PolitiFact Rhode Island gave us former Rhode Island governor Lincoln Chafee's scorecard on June 3, 2015. As is often the case, the PolitiFact story highlighting the "scorecard" omitted any caution to readers that the results represent nothing like a social science experiment. And the story includes suggestive sentence pairings like this one (bold emphasis added):
His campaign promises showed a greater measure of constancy. Among the promises we rated, the Linc-O-Meter showed a grinning governor half the time, with 16 of 32 promises kept.
With the sentence we emphasized, PolitiFact Rhode Island implies to its readers that the "scorecard" allows them to generalize about Chafee's campaign promises, not just the ones PolitiFact RI subjected to its brand of fact-checking.

With almost a full day's worth of comments (27) up at PolitiFact's Facebook page, we already have a few that seem to show reader confusion.

"The data base is simply too small to be useful."

Even with a larger database, the story selection process would invite a parallel objection.

"Guess telling the truth is not his strong suit."

"That looks like a terrible set of ratings. Easily poor enough for me to disqualify him as a candidate."

"At least he doesn't have any Pants on Fire."

PolitiFact admits the "report cards" do not represent a social science experiment, though the admission is not conspicuous enough to keep readers from making comments such as those above. People see the graph, and many believe it shows some type of general truthfulness profile.

PolitiFact refuses to see the extent to which the "report cards" mislead readers, for whatever reason.

Bias, maybe?

Image from PolitiFact.com

Thursday, June 4, 2015

A report card from PolitiFact Texas

Do PolitiFact's "report cards" mislead people?

You betcha.

PolitiFact's June 4, 2015, item on Rick Perry's report card continues the pattern. There's no warning that PolitiFact does not select stories randomly. Readers just get the eye-candy bar graph that implies a profile of Perry's general reliability.


PolitiFact editor Angie Drobnic Holan says "There's not a lot of reader confusion out there." Yet on PolitiFact's Facebook page we have comments like these:

"72 percent of his statements are not believable?"

 "15% of the time he is telling the truth."

 "only about 50% lies.??.not bad for a republican.."

 "Pretty weak when pants on fire is nearing your mostly true and true remarks."

 "Appears he has a real problem with dealing in true statements"

Those comments hint strongly at confusion. Why can't PolitiFact see it?

Wednesday, June 3, 2015

Draw conclusions with caution, Data Lounge

We say PolitiFact's report cards mislead people.

PolitiFact counters with "There's not a lot of reader confusion out there."

Who's right? We turned to the Data Lounge (a discussion board) for help:

Fact-Checking Site Finds Fox News Only Tells the Truth 18 Percent of the Time

Huh. We wonder what fact-checking site that might be?
Punditfact, a branch of Politifact, has put together profiles for CNN, MSNBC and Fox News detailing just how honest each of these networks are. And while it’s obviously not a completely comprehensive profile (it would be nearly impossible to fact check every single thing said on each network) it’s a decent measure of the honesty of each. And what do you know, Pundifact found Fox News to have only told the truth 18 percent (15 of 83) of the time for the statements they checked. And even of that 18 percent, only 8 percent of what they said was completely “True.” The other 10 percent was rated as “Mostly True.” A staggering 60 percent (50 of 83) comments were found to be either “Mostly False,” “False,” or “Pants on Fire.” The other 22 percent were rated “Half True.” Essentially well over half of what Punditfact has fact-checked on Fox News has been a lie and only 18 percent has been deemed factual. To compare, CNN was found to have been honest about 60 percent of the time, while only having 18 percent of their comments found to be false. As for MSNBC, they were found to have been honest about 31 percent of the time, while 48 percent of the comments they had fact-checked were deemed untrue. So while MSNBC’s numbers aren’t exactly worth bragging about, they’re still far better than the “fair and balanced” Fox News.
Is this one of the careful conclusions PolitiFact cautions its readers to draw about report card data?

It's the same story at Forward Progressives. Literally.

PolitiFact does not randomize its story selection. Its ratings contain at least some subjectivity (we say it's a high degree of subjectivity). Conclusions like these from Data Lounge don't follow. The report card stories mislead people.
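The selection effect is easy to illustrate with a toy simulation. Below is a minimal sketch in Python, with invented numbers (none of the figures come from PolitiFact's data): a hypothetical speaker who tells the truth 70 percent of the time ends up with a "report card" showing only about 54 percent True, simply because dubious-sounding claims get checked more often.

```python
import random

random.seed(42)

# Hypothetical illustration only: assume a speaker whose statements are
# actually true 70% of the time, and a fact-checker that is twice as
# likely to select a statement for checking when it sounds dubious.
# All numbers here are invented for the sketch, not PolitiFact data.
TRUE_RATE = 0.70          # speaker's actual share of true statements
PICK_WEIGHT_TRUE = 1.0    # relative chance a true statement gets checked
PICK_WEIGHT_FALSE = 2.0   # relative chance a false statement gets checked
N_STATEMENTS = 10_000

statements = [random.random() < TRUE_RATE for _ in range(N_STATEMENTS)]

# Non-random story selection: dubious-sounding (false) statements are
# oversampled, mimicking a fact-checker drawn to "checkable" claims.
checked = [
    s for s in statements
    if random.random() < (PICK_WEIGHT_TRUE if s else PICK_WEIGHT_FALSE) / PICK_WEIGHT_FALSE
]

actual_true_share = sum(statements) / len(statements)
report_card_true_share = sum(checked) / len(checked)

print(f"Actual share of true statements:     {actual_true_share:.0%}")
print(f"'Report card' share of true ratings: {report_card_true_share:.0%}")
# Typical output: actual ~70% true, report card ~54% true -- the scorecard
# understates truthfulness purely because of how statements were selected.
```

The gap between the two numbers comes entirely from the selection step, before any question of subjective ratings even enters the picture.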

PolitiFact editor: "There's not a lot of reader confusion out there"

In May 2015, the American Press Institute published a long article by Mark Stencel titled "How U.S. politics adapts to media scrutiny." The article made a number of observations about fact checking and included a subsection on the misuse of fact checks.

We contributed a comment noting that fact checkers commit one of the worst abuses of fact checks by publishing aggregated fact check ratings for individuals and groups. Regular readers know Jeff and I have harped on that issue for years.

Stencel, to our delight, drew comments from PolitiFact editor Angie Drobnic Holan and embedded them in his reply:
As you said, I don't think any fact-checkers I know suggest their fact-checks are meant to be a random sample. But I just asked PolitiFact editor Angie Drobnic Holan to explain the "report cards" that appear on that site. As she put it, the person-by-person and group-by-group tallies PolitiFact offers are a "snapshot of what we've chosen to fact-check," presented as a "reader service" that helps PolitiFact's audience navigate their reporting.

"I think people understand that the Truth-O-Meter is not a scientific instrument," Holan added, noting that PolitiFact's editors also provide a page on the site where they explain their methods and principles (http://bit.ly/1AbmKCz). "There's not a lot of reader confusion out there," she said. And the feedback and metrics she sees tell her it's a popular feature.
We replied that readers regularly respond on Facebook to PolitiFact's "report card" articles in ways that indicate they are misled. We asked if we're supposed to believe Holan doesn't know that.

Stencel didn't reply to our second comment. And we don't blame him. He's not responsible for what PolitiFact does, after all.

His article on responses to fact checking has some good material in it, and his comment has given us a useful new tag to use when we highlight the way PolitiFact misleads its readers with "report card" stories.