Sunday, October 28, 2012

PFB Semi-Smackdown: Kossack "Brash Equilibrium"

Ordinarily, we use the "PFB Smackdown" feature to critique liberal criticisms of PolitiFact.  This "Semi-Smackdown" deals with something different, a misrepresentation of this blog along with yet another irresponsible attempt to use the ratings of mainstream fact checkers to draw conclusions about the persons whose statements they rate.

Our pseudonymous subject goes by "Brash Equilibrium."  Brash goes to the trouble of adding the Washington Post Fact Checker Glenn Kessler's "Pinocchios" together with PolitiFact's "Truth-O-Meter" ratings and then calculates confidence intervals for various sets of ratings, based on the apparent assumption that the selection of stories is essentially random.
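In outline, that procedure maps each rating onto a numeric scale, averages the results, and attaches a sampling-based confidence interval. Here is a minimal sketch of that kind of calculation, with invented counts and an invented 0-to-1 "malarkey" scale (Brash's actual scaling may differ):

```python
# Minimal sketch of rating aggregation with a confidence interval,
# assuming the ratings are a random sample. All counts are invented.
import math

# Hypothetical Truth-O-Meter ratings mapped to 0 (True) ... 1 (Pants on Fire).
ratings = [0.0] * 30 + [0.25] * 40 + [0.5] * 50 + [0.75] * 45 + [1.0] * 20

n = len(ratings)
mean = sum(ratings) / n
var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
se = math.sqrt(var / n)

# The 95% interval is valid only if story selection is close to random.
print(f"mean = {mean:.3f}, 95% CI = ({mean - 1.96 * se:.3f}, {mean + 1.96 * se:.3f})")
```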

And there's this (bold emphasis added):
If instead we believed like a moderate conservative that the true comparison was reversed - that is, if we believed that Obiden spewed 17% more malarkey than Rymney - then it suggests that the fact checkers's [sic] average bias is somewhere between 16% and 54% for the Democrats, with a mean estimated bias of 34%.

It seems unrealistic to me that PolitiFact and The Fact Checker are on average that biased against the Republican party, even subconsciously. So while I think it's likely that bias could inflate the difference between the Republicans and Democrats, I find it much less likely that bias has reversed the comparison between the two tickets. Of course, these beliefs are based on hunches. Unlike politifactbias.com's rhetoric and limited quantitative analysis, however, it is based on good estimates of the possible bias, and our uncertainty in it.
It's hard to believe Brash put much time into any investigation of our rhetoric and analysis, considering his estimates ignore one of our favorite pet peeves, selection bias.  It's a waste of time calculating confidence intervals if the data set exhibits a significant degree of selection bias.  Giving our site more than a cursory read should have informed Brash on that point.
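A toy simulation makes the point. When the story selection over-samples one kind of claim, the confidence interval comes out narrow and confidently wrong (every number below is invented):

```python
# Toy illustration of selection bias defeating a confidence interval.
import random

random.seed(42)
TRUE_RATE = 0.40  # hypothetical true share of false claims for a party

# Population of claims: 1 = false claim, 0 = true claim.
population = [1] * 4000 + [0] * 6000

# Biased editor: false claims are three times as likely to be selected.
sample = [c for c in population if random.random() < (0.30 if c else 0.10)]

n = len(sample)
p = sum(sample) / n
se = (p * (1 - p) / n) ** 0.5
print(f"sampled rate = {p:.3f} (true rate = {TRUE_RATE})")
print(f"95% CI = ({p - 1.96 * se:.3f}, {p + 1.96 * se:.3f})  # precise, and wrong")
```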

Our case against PolitiFact is based on solid survey data showing a left-of-center ideological tendency among journalists, an extensive set of anecdotes showing mistakes that more often unfairly harm conservatives, and our own study of PolitiFact's bias based on its ratings.

Our study does not have a significant selection bias problem.

Brash's opinion of PolitiFact Bias consists of an assertion without any apparent basis in fact.

Brash:
We need more large-scale fact checking institutions that provide categorical rulings like The Fact Checker and PolitiFact. The more fact checker rulings we have access to, the more fact checker rulings we can analyze and combine into some (possibly weighted) average.
How often have we said it?

Lacking a control for selection bias, the aggregated ratings tell us about PolitiFact and The Fact Checker, not about the subjects whose statements they grade.

We need fact checkers who know how to draw the line between fact and opinion.  And critics who know enough to whistle a foul when "fact checkers" cross the line and conflate the two.



Correction Oct 28, 2012, 11:30 a.m.:  The second paragraph was originally published in this post minus its true beginning, "It seems unrealistic to me that." 

20 comments:

  1. My name is Benjamin Chabot-Hanowell, also known as Brash Equilibrium. Here is my rebuttal to your "semi-smackdown":

    http://www.malarkometer.org/1/post/2012/10/politifactbiascom-and-i-fling-poo-at-one-another.html#.UJAJS280WSo

  2. Hi, Brash/Benjamin (noted that you prefer "Brash").

    What you've posted isn't a rebuttal. It refutes nothing of what I wrote about you.

    Your key line is where you ask what PFB would have you do. The answer is simple: Don't waste your time and recognize that we're right that candidate report cards and aggregated ratings tell you about the fact checkers rather than the candidates.

    You haven't addressed the selection bias problem. And you can't even begin to do that without a baseline against which to measure bias (selection and reporting bias). And you can't get the baseline without apparently contradicting your assertion that we'll never have fact checkers who can separate fact from opinion.

    Calculating confidence intervals is certainly a fine technique of science. But the bottom line is still garbage in, garbage out.

    You wrote that we "actually argue that it's all bias!"

    It's not very scientific to write flatly false statements about us. Check the About/FAQ page.

    Replies
    1. > What you've posted isn't a rebuttal. It refutes nothing of what I wrote about you.

      Okay, so that's your thesis statement?

      > Your key line is where you ask what PFB would have you do. The answer is simple: Don't waste your time and recognize that we're right that candidate report cards and aggregated ratings tell you about the fact checkers rather than the candidates.

      The answer actually isn't so simple. I've given you a counter-argument. Could you address my counter-argument in a way that doesn't simply restate your original argument? Otherwise this will fail the rational debate test.

      > You haven't addressed the selection bias problem.

      You're right! Neither have you. I don't address the selection bias problem because I have no quantitative evidence to tell me what kind of selection bias there is so that I can alter my sampling distribution accordingly. In the absence of such information, I simply calculate confidence intervals. The output can be interpreted however you wish. The input is, however, not garbage.

      > And you can't even begin to do that without a baseline against which to measure bias (selection and reporting bias).

      Yes, I agree.

      > And you can't get the baseline without apparently contradicting your assertion that we'll never have fact checkers who can separate fact from opinion.

      You're actually addressing the problem in the wrong way and misunderstanding what I'm saying. What we must do is measure the influence of political belief on fact checker rulings. To do that, assemble a team of fact checkers of widely varying political beliefs. Regularly give them political position surveys. Regularly analyze the effect of political beliefs on aggregate rulings. Use those results to come up with correction factors.
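      In sketch form, with entirely invented numbers (no such panel exists yet), the correction-factor idea might look like this:

      ```python
      # Hypothetical sketch: regress each checker's average ruling on a
      # survey-based ideology score, then remove the fitted ideology effect.
      import numpy as np

      # Ideology from position surveys (-1 = left, +1 = right) and each
      # checker's mean ruling of one party's claims (0 = true ... 1 = false).
      ideology = np.array([-0.8, -0.5, -0.2, 0.0, 0.3, 0.6, 0.9])
      mean_ruling = np.array([0.72, 0.66, 0.61, 0.58, 0.54, 0.50, 0.45])

      slope, intercept = np.polyfit(ideology, mean_ruling, 1)
      print(f"estimated ideology effect (slope): {slope:.3f}")

      # Correct every checker back to a hypothetical neutral (ideology = 0).
      corrected = mean_ruling - slope * ideology
      print("corrected mean rulings:", np.round(corrected, 3))
      ```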

      If you want to measure the influence of political attitudes in only one direction (say, toward the left), you don't even need to employ that diverse a fact-checking team, because there is a lot of political-attitude variation among liberals, and likewise among conservatives.

      > Calculating confidence intervals is certainly a fine technique of science. But the bottom line is still garbage in, garbage out.

      You have insufficiently argued that it is GIGO. You've insufficiently addressed my commentary about how to deal with our tremendous uncertainty about the direction and strength of bias among fact checkers. Calculating is a fine technique. Of science. Which is what I'm doing. Care to join me?

      > You wrote that we "actually argue that it's all bias!" It's not very scientific to write flatly false statements about us.

      Well, that's because you wrote that fact checker rulings tell us about the fact checkers, not about the subjects of the fact checking. If that isn't an argument that all we see in fact checker rulings is bias, I don't know what is!

    2. (forgive the typos)

    3. "You wrote that fact checker rulings tell us about the fact checkers, not about the subjects of the fact checking. If that isn't an argument that all we see in fact checker rulings is bias, I don't know what is!"

      That more or less sums up your problem. You too easily leap to assumptions. We know that if the story selection process for PolitiFact is not random then it is highly likely to show a selection bias. The aggregate rulings for PF give us information about the stories PF selects. Ergo, the aggregate ratings tell us about PF's selection bias, telling us something about the people at PF who do the selecting (it tells us what they select, which one could in principle compare to a random selection to measure ideological bias, if any).

      The leap toward assuming that we think all of PF's ratings and/or story selections are biased is unwarranted and, of course, unscientific.

      "To do that, assemble a team of fact checkers of widely varying political beliefs. Regularly give them political position surveys. Regularly analyze the effect of political beliefs on aggregate rulings. Use those results to come up with correction factors."

      So where's the baseline? What are you measuring if you don't have one? How do you "measure the effect of political beliefs on aggregate rulings" if you have no baseline? You claimed that the PFB study doesn't address the selection bias problem. But it does, actually. And moreover it provisionally addresses the problem of bias in PF's ratings. PF's accuracy doesn't really matter with the PFB study since the study results simply rely on the way PF applies an apparently subjective rating category. Inaccuracy isn't an issue with judgments of subjective things.


      "You have insufficiently argued that it is GIGO."

      You've conceded the selection bias problem. Game over.

    4. "...if the story selection process for PolitiFact is not random then it is highly likely to show a selection bias."

      But you don't know how strong that bias is. You don't even know what direction it is in. You have reason to suspect there is liberal bias. That is all.

      "The aggregate rulings for PF give us information about the stories PF selects."

      And about the statements that individuals and groups make.

      "Ergo, the aggregate ratings tell us about PF's selection bias,"

      And about the statements that individuals and groups make.

      " telling us something about the people at PF who do the selecting."

      Yeah, but you don't actually know how strong that selection bias is. You've provided me with no estimate of its size or even direction.

      "The leap toward assuming that we think all of PF's ratings and/or story selections are biased is unwarranted and, of course, unscientific."

      You call yourselves PolitiFact Detractors. You seriously want to tell me that you don't think they're highly suspect? That's what your whole game is about. Don't try and sugarcoat it. You want to discredit PolitiFact. If you want to do that, I've suggested ways you can actually measure the effects you want to measure. Go do that instead of tabulating Pants on Fire and False scores by party.

      "So where's the baseline? What are you measuring if you don't have one?"

      I'm arguing that going on a quest for a "fair and balanced" baseline is not the best way to spend our time. Instead, we measure the effect of political attitudes on fact checker rulings (e.g., do they tend to rate as false things that go against their beliefs, but rate as true things that agree with their beliefs)? Yes? Okay, how often? How would that affect the malarkey score? Does it make the malarkey score completely worthless as a measure of factuality because political affiliation alone explains fact checker rulings? If yes, the malarkey score is a measure of fact checker bias, nothing more. If not, we can adjust malarkey scores for political bias. Can we do this with current fact checker data? NOPE! We need more information.

      "How do you "measure the effect of political beliefs on aggregate rulings" if you have no baseline?"

      Well, I spoke too soon. For the purposes of statistical models, the statistical baseline is perfect centrism (centrism, by the way, potentially introduces another kind of bidirectional bias that we haven't even discussed).

      "You claimed that the PFB study doesn't address the selection bias problem. But it does, actually."

      No, it doesn't. It estimates differences in Pants on Fire and False rulings across parties and in two panels (before and after CQ). You don't actually measure the effect of selection bias. The bias you're measuring, if it exists, could be a complex mix of straight up partisan bias, absent of selection bias, and partisan selection bias.

      "And moreover it provisionally addresses the problem of bias in PF's ratings."

      Yes, very provisionally. In a subset of categories. Doesn't say much about the other 3/5's of the Truth-O-Meter!

      "PF's accuracy doesn't really matter with the PFB study since the study results simply rely on the way PF applies an apparently subjective rating category. Inaccuracy isn't an issue with judgments of subjective things."

      I don't know why this is relevant. Please clarify.

      "You've conceded the selection bias problem."

      Where did I do that? Oh, by agreeing that there is possibly selection bias? Yeah. That really means that I agree with you that you've categorically proven that PolitiFact is not to be trusted. (Oh, wait, that's not really your argument. Except it is.)

    5. "But you don't know how strong that bias is. You don't even know what direction it is in. You have reason to suspect there is liberal bias. That is all."

      That's a bit like saying that I don't know how strongly the tide is coming in and I don't know which direction it's going, though I have reason to suspect. Once you look at the reasons, there's good evidence that the tide is going in one direction.

      "And about the statements that individuals and groups make."

      No. Not without a bunch of assumptions, such as the assumption that ratings are accurate and the assumption that the selection process is close to random.

      "You've provided me with no estimate of its size or even direction."

      Nor, with equal relevance, have I served you tea and cookies. As a scientist you don't trust data collected without a check on selection bias. You don't assume that there is no such bias unless somebody proves the reverse (fallacy of argumentum ad ignorantiam).

      "You seriously want to tell me that you don't (think) they're highly suspect?"

      You seriously want to tell me how you think you can backtrack from "it's all bias!" to "they're highly suspect" and I won't notice? PolitiFact's ratings are highly suspect because they routinely make boneheaded errors, and those errors do the greatest harm to conservatives. I've described for you the three lines of evidence. Yes, PolitiFact is highly suspect. No, it's not all bias.

      "For the purposes of statistical models, the statistical baseline is perfect centrism (centrism, by the way, potentially introduces another kind of bidirectional bias that we haven't even discussed)."

      With that method you end up measuring relative bias (your baseline shifts with the population doing the ratings). You can accomplish that by surveying readers. See the "AllSides" project. The PFB study does not measure relative bias.

      "You don't actually measure the effect of selection bias."

      That's just it. I don't need to. The study design makes selection bias effectively irrelevant. It's unlikely those who designed the rating system knew of its full utility in exposing their ideological bias, or else they wouldn't have bothered. There are some cases for state operations where some of the results are initially counterintuitive, but that's simply an opening for a corollary explanatory hypothesis (which I've got).

      "Doesn't say much about the other 3/5's of the Truth-O-Meter!"

      How likely is it that a group reverses its ideology depending on the situation? It's fair to estimate the bias in other aspects of fact checking (selection, ratings) based on the findings of the PFB study.

      "Please clarify"

      I explained to you how I eliminated a variable in the PFB study (accuracy of ratings isn't important to the PFB study; there's no need to assume accuracy in order to make sense of the results).

      "(Oh, wait, that's not really your argument. Except it is.)"

      *sigh*

      Read the FAQ and wait 24 hours before offering a reply.

  3. Oh, one more thing, Brash:

    "6. "Our study does not have a significant selection bias problem."

    I highly doubt that. That PFB.com makes this assumption about its research, which relies heavily on blog entries in which it re-interprets a limited subset of PolitiFact rulings, makes me as suspicious of it as it is suspicious of PolitiFact."

    Our study is not based on blog entries. It is based on PolitiFact's ratings (as I stated in my reply). It does not have a significant selection bias problem because I used virtually all of PF's relevant ratings (Pants on Fire and False ratings) in the research, discarding a small set of data that I judged would skew the results (such as claims where a Democrat attacks another Democrat).

    I'll expand on the giving of advice: Don't waste your time calculating confidence intervals for aggregated fact checking when the fact checkers aren't randomly selecting their stories and where you have no means of verifying the accuracy of their ratings. And stop making false statements about us.

    Replies
    1. > It is based on PolitiFact's ratings (as I stated in my reply).

      Well, whatever it is based on, it doesn't provide a statistical estimate of the strength (or direction) of bias among fact checkers.

      > It does not have a significant selection bias problem because I used virtually all of PF's relevant ratings.

      Let me get this straight. You've re-analyzed all 6,274 PolitiFact rulings since it was founded and statistically analyzed the aggregate differences between your ratings and PolitiFact's? Also, your rulings aren't subject to partisan bias? Please clarify.

      > Pants on Fire and False ratings

      Oh, okay, so your sample is biased, and your analysis is incomplete.

      > such as claims where a Democrat attacks another Democrat, for example

      That would bias your assessment of PolitiFact's liberal bias.

      > Don't waste your time calculating confidence intervals for aggregated fact checking when the fact checkers aren't randomly selecting their stories and where you have no means of verifying the accuracy of their ratings. And stop making false statements about us.

      It's pretty clear that I'm not making false statements about your site. It's also clear that calculating confidence intervals (and inferential statistics in general) is valuable. It's also clear that my results can be interpreted in several ways, as I concede.

    2. Brash wrote:

      "Oh, okay, so your sample is biased, and your analysis is incomplete."

      The sample isn't biased, as you'd know if you had read the study. Since you haven't read the study, your sample is biased.

      Stop making statements about things you don't know about, please.

    3. I read the study. I still contend that you can't tell from that study whether you are measuring bias or true differences.

      Here. Let me address some of the problems with your study.

      1. You use only False and Pants on Fire rulings. So already you're focusing on only 2 out of the 5 categories. My analyses include all Truth-O-Meter categories. The resulting estimates of leftist bias are smaller than yours. I admit that for now my estimate of bias is based on a comparison of Obiden versus Rymney. A full party comparison is forthcoming. Then we'll see a fuller picture of the potential leftist bias. I'm betting it will be smaller than your estimate. Even if it isn't, there are still a bunch of other problems with your study. Such as...

      2. This gem comes from your study: "Though neutral judgment ought to result in approximately equal proportions of unfair “Pants on Fire” ratings, Republicans drew that designation about 74% more often than Democrats. The numbers show a clear bias against Republicans and conservatives."

      You assume that in order for a fact checking outlet to be unbiased, its ratings must in aggregate be about equal between the two parties. Wow, that's a pretty stringent condition for there to be no bias, especially if there are real differences in factuality between the two parties! This is why I say that you can't discern bias from true difference in your study. You just can't do it.

      3. Regarding the bounce in liberal bias once PolitiFact separated from CQ. You know what else happened around that time? Obama came into office and, not much later, the debates about health care reform began. So there are two competing, not mutually exclusive, explanations for the increased "bias" (which could also be an increase in the actual difference between the two parties, or both). While that trend does give me pause (just like all of the other valuable whistle-blowing you guys have done), it is inconclusive evidence about the strength and direction of bias. That said, if the increase was in bias rather than real differences, then the bias increased by almost two times. Rest assured, I will file that away for future reference. I am very passionate about detecting bias in fact checker rulings. I just don't think you've done it very well.

      4. As you already mentioned, you disregarded cases when Democrats were making comments about other Democrats. How did you convince yourself that wouldn't bias your results? At Malark-O-Meter, I've presented evidence that incumbents may spew less malarkey than challengers, perhaps because there is more at stake for challengers. Maybe there is a similar effect when members of the same party wage war against each other. So... how many Pants on Fire and False rulings did you discard by making this decision?

      In sum, your study cannot discern bias from real differences, and it may have introduced some biases of its own.

    4. Now that you've supposedly read the study, you've also succeeded in missing its point.

      1) Yes, I use only two of PF's ratings. And I gave a reason for that. Did you notice what it was?

      2) "You assume that in order for a fact checking outlet to be unbiased, its ratings must in aggregate be about equal between the two parties." No, I don't. If you read the study you apparently didn't pay attention to what you were reading. The key point is that PF offers no objective method for distinguishing between "False" and "Pants on Fire." See #1.

      3) "it is inconclusive evidence about the strength and direction of bias." I stipulate in the study the limits of its value. Did you skip that part? Why criticize me on a point I'm not making? The evidence is very strong for the specific measurement of bias, which is the bias in the application of the "Pants on Fire" rating compared to the "False" rating. It's a better measure of ideological bias because it is not readily explained by one party lying more than the other (see #2).

      4) "How did you convince yourself that wouldn't bias your results?" It might. But the numbers of such statements are simply too small to significantly move the overall percentages. Plus I applied an intentional bias in the reverse (imperfectly, in retrospect, but the imperfections probably constitute a wash).

      "(H)ow many Pants on Fire and False rulings did you discard by making this decision?"

      Maybe two dozen, and I did the same thing for both sides. The reason for it should be obvious: We should expect ideological bias to play a murkier role in intra-party contests because the ideological differences are smaller. Excluding those data should yield a better measure of ideological bias in the remainder. The only real question of bias is whether I applied the exclusions evenly for both parties, and even if there's a bias in that department, we're still protected from large error by the small number of cases involved. Even if I messed up every single one of them it wouldn't move the numbers that much (use your math skills on that one).
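      Or, spelled out with the rough figures above (about two dozen exclusions against roughly 400 kept claims; both numbers are approximate):

      ```python
      # Worst-case bound on the effect of the exclusions. Figures are the
      # rough ones from this thread, not exact counts.
      kept, excluded = 400, 24

      # Even if every excluded claim had been misjudged in the same
      # direction, the share of the pool at stake is at most:
      print(f"maximum possible shift: {excluded / (kept + excluded):.1%}")  # ~5.7%
      ```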

      In sum, you have a habit of saying things where you just don't know what you're talking about.

      Explain to me, if you can, how we objectively separate "ridiculous" false claims from merely false ones. Once you've succeeded in that, we can take seriously your assertion that the PFB study cannot draw distinctions between real difference and those produced by subjective bias.
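      And here, for anyone who wants the comparison from #3 made concrete, is a sketch with invented counts (not the study's actual figures):

      ```python
      # Sketch of the core comparison: among each party's (False + Pants on
      # Fire) ratings, what share is "Pants on Fire"? All counts invented.
      import math

      rep_pof, rep_false = 70, 130   # hypothetical Republican ratings
      dem_pof, dem_false = 25, 105   # hypothetical Democratic ratings

      n1, n2 = rep_pof + rep_false, dem_pof + dem_false
      p1, p2 = rep_pof / n1, dem_pof / n2

      # Two-proportion z-test on the "Pants on Fire" share.
      p_pool = (rep_pof + dem_pof) / (n1 + n2)
      se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
      print(f"PoF share: R = {p1:.2f}, D = {p2:.2f} (ratio {p1 / p2:.2f}), z = {(p1 - p2) / se:.2f}")
      ```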

  4. 1) “Yes, I use only two of PF's ratings. And I gave a reason for that. Did you notice what it was?”
    And
    2) No, I don't [assume that…ratings must in aggregate be about equal between the two parties]…The key point is that PF offers no objective method for distinguishing between "False" and "Pants on Fire."
    Well, by choosing the most subjective category, you have yet again potentially inflated the apparent bias of the organization as a whole. BTW, Malark-O-Meter ranks False and Pants on Fire statements equally, yet still finds statistically significant differences between Obiden and Rymney. Of course, you think PF is highly biased, so I guess that finding is as worthless to you as inferential statistics.
    3) "I stipulate in the study the limits of its value. Did you skip that part?"
    All studies are of limited value. Just because you admit that yours is also limited doesn't mean you aren't trying to convince me that there is strong bias among PolitiFact's journalists based on inconclusive evidence.
    4) "Why criticize me on a point I'm not making?"
    Because your website is called PolitiFactBias.com, because you allege that PolitiFact is very liberally biased, and because (as I've reminded you of a couple of times) you argue that fact checker rulings tell us about the fact checkers, not about the subjects of fact checking. In short, because you -are- making the point that PF is biased and that we should intensely distrust its rulings. I make the more measured argument that bias and professionalism aren’t mutually exclusive.
    5) The evidence is very strong for the specific measurement of bias, which is the bias in the application of the "Pants on Fire" rating compared to the "False" rating.
    Just because you find differences between the two parties in how often they are rated as Pants on Fire compared to how often they are rated False, and just because you assert that Pants on Fire vs. False ratios are an indicator of bias, doesn't mean that's what it is. And even if you were right that it is an indicator of bias, I still find differences between at least a subset of Democrats and Republicans despite the fact that I treat Pants on Fire and False rulings the same. And yet you level the same criticism against me that you level against PolitiFact. Why? Because it is clear that you think ALL of PolitiFact's rulings (oh, and Glenn Kessler’s rulings) are highly liberally biased, not just Pants on Fire rulings. Which, of course, follows yet more indirectly from the evidence you provide.
    6) "'How did you convince yourself that wouldn't bias your results?' It might. But the numbers of such statements are simply too small to significantly move the overall percentages. Plus I applied an intentional bias in the reverse (imperfectly, in retrospect, but the imperfections probably constitute a wash)."
    How many statements are we talking? Have you measured the effect? If you'd like, I can do the analysis for you once I get the data set together, complete with confidence intervals surrounding the selection bias effect of your decision.
    7) "Maybe two dozen, and I did the same thing for both sides."
    Okay, I update my beliefs on that decision and concede that point.
    8) "In sum, you have a habit of saying things where you just don't know what you're talking about."
    I've conceded one point. The rest stand.
    9) "Explain to me, if you can, how we objectively separate "ridiculous" false claims from merely false ones."
    I've already said I agree with you that the Pants on Fire category is problematic. But focusing on it may skew your results. Furthermore, your study doesn't tell us the strength (or even the type) of overall bias in PolitiFact rulings. At best, it tells us about one category of rulings that could easily be reclassified as False rulings. And yet you've built a site dedicated to exposing PolitiFact as nothing more than a liberal tool.

    Replies
    1. Brash,

      Again, please adopt the practice of allowing 24 hours to elapse before you post your replies.

      1,2) "I guess that finding is as worthless to you as inferential statistics."
      Your methods are amenable to explanations other than ideological bias. Not worthless, but close since you leave so many factors unaccounted for. The scientific ideal is one variable.

      3) You're switching topics. If the study doesn't suggest a false conclusion (it doesn't) then you have no business criticizing it on that point (see #4).

      4) In essence you seem to admit that you're conflating the site and the study with your criticism. And you're oversimplifying our view of PF, in effect creating a straw man. Straw men are tiresome. Please stop it.

      5) Another straw man. We do not reason from the conclusion to the evidence, and you're misrepresenting the reasoning in the study. Please stop it.

      6) About 400 statements for the high-quality data group (direct party affiliation minus about six intra-party claims).

      7,8) "The rest stand."

      9) "(F)ocusing on (Pants on Fire) may skew your results"

      The opposite's true. The narrow focus gives me a high probability of isolating the effects of subjective (ideological) bias. A wider focus introduces more variables, such as the possibility of one party objectively lying more. The result, as you note, creates a restriction on the conclusions one may draw from the data. But that's not news. I identify the restrictions in the presentation of the study.

      "And yet you've built a site dedicated to exposing PolitiFact as nothing more than a liberal tool."

      *sigh*
      Please read the FAQ again and refrain from replying for about 24 hours.

    2. "Again, please adopt the practice of allowing 24 hours to elapse before you post your replies."

      Are you trying to convince me that I need to think more clearly before I write, or do you seriously want me to give you some more time between comments? If the former, sorry, not convinced. If the latter, my apologies, I'll honor your request.

      "Your methods are amenable to explanations other than ideological bias."

      That's what I've been saying all along.

      "Not worthless, but close since you leave so many factors unaccounted for."

      Like what? Estimates of selection and rating bias strength and direction that we don't have? I present a portrait of fact checker rulings that is useful for estimating the likely strengths of biases, and the potential size of the real differences, until we have a better handle on the unknowns. I'd call that far from worthless.

      "The scientific ideal is one variable."

      No, the scientific ideal is an explanatory model that is as simple as possible but no simpler.

      "You're switching topics."

      Well, I'm going beyond your study to the purpose of your site, which is much broader, and which you legitimize with your research study. So my switching of topics is justified, as is my conflation.

      "And you're oversimplifying our view of PF, in effect creating a straw man. Straw men are tiresome. Please stop it."

      I'm responding to the ways that you have described PF, which is that their rulings are -more revealing- of PolitiFact's bias than they are of the statements they rate. Your site and your study concentrate entirely on liberal bias, ignoring centrist bias, for which there is about as much evidence as partisan bias. It's not a straw man I'm addressing. It is your opinion about PF, which you use one limited quantitative study to support.

      "About 400 statements for the high-quality data group (direct party affiliation minus about six intra-party claims)."

      400 statements that you excluded, or that were included in the study? If we're talking six intra-party claims, then yeah, that effect is probably pretty small (do not interpret this as a concession that your study tells us much about the strength and type of PolitiFact bias).

      "The opposite's true. The narrow focus gives me a high probability of isolating the effects of subjective (ideological) bias."

      Isolating the effects of subjective (ideological) bias. Yeah, that sounds like overestimating an effect to me!

      "I identify the restrictions in the presentation of the study."

      You suggested that I read your study in the context of my rebuttal against your comments on my analysis of all categories in the Truth-O-Meter. So again, maybe I'm conflating your study with the site as a whole, but that's only because you do, too.

      And, I've read your FAQ. It says a lot about how you think PolitiFact is liberally biased, and possibly deliberately so (although one of you waffles on that). The FAQ doesn't mention anything about how the bias you've measured is limited to the most subjective category in the Truth-O-Meter, and it pretty much calls out PolitiFact as part of the liberal media establishment.

      Please wait however long you want to respond.

    3. "Are you trying to convince me ..."

      I'm not concerned about convincing you. I hope that by taking more time between your posts you will think more clearly. It also allows me to respond more easily (without becoming exasperated over your repeated straw men and false statements).

      "I'd call that far from worthless."

      We differ on that point. Let me know when you've found a non-trivial use for your far-from-worthless method.

      "No, the scientific ideal is an explanatory model that is as simple as possible but no simpler."

      Again, you're switching topics. The topic was the number of relevant variables you allow to linger unknown and unmeasured in your system. The PFB study, in contrast, has one. Studies based on the examination of one variable are the scientific ideal.

      "(M)y switching of topics is justified, as is my conflation."

      Rubbish. "You, too" is a fallacy, and the fact is I don't.

      "I'm responding to the ways that you have described PF, which is that their rulings are -more revealing- of PolitiFact's bias than they are of the statements they rate. Your site and your study concentrate entirely on liberal bias, ignoring centrist bias, for which there is about as much evidence as partisan bias. It's not a straw man I'm addressing. It is your opinion about PF, which you use one limited quantitative study to support."

      Yet you wrote: "I make the more measured argument that bias and professionalism aren’t mutually exclusive."

      That assertion has nothing to do with the justification you just cited. We defend PF from attacks based on the personal bias of staffers (such as their voting records) because we think the relevant measure is the quality of the work. PF produces poor quality fact checking compared to its cousins in the field. The problem is a system that encourages an institutional bias that in turn hampers accuracy and consistency. That's all over our site. It's a straw man to suggest we think PF is uniformly biased, radically untrustworthy or that we ignore centrist bias (we recognize that journalists are biased center-left in the main and further left mostly on social issues, and the potential for ratings designed to make things look fair). We would like for your arguments to forsake the use of straw men.

      You also said we only have one piece of research in support. The introduction to the PFB study mentions other research we've done that isn't published. So our opinions cannot take those studies into account--is that it? Sorry, that's another straw man/false statement. It's tiresome, and I want you to stop it.

      "About 400 statements for ...?"

      Group A involved slightly over 400 claims, not counting about six intra-party claims excluded from the totals.

      "Isolating the effects of subjective (ideological bias). Yeah, that sounds like overestimating an effect to me!"

      If you have thought of a reasonable way to explain PolitiFact's difference between "False" and "Pants on Fire" on an objective basis then I'd love to hear about it. Otherwise it sounds like you're overestimating the security of your foundation for ridicule.

      "(M)aybe I'm conflating your study with the site as a whole, but that's only because you do, too."

      You do, I don't. This latter branch of the discussion stemmed from your criticism of the PFB study. I did not, IIRC, suggest you read the study. I suggested that you had not read it. Though perhaps one could take that as an implicit request to read it. In any event, I'm simply looking forward to seeing you make fewer false statements about the study and the PFB site.

      "The FAQ doesn't mention anything about ..."

      You shouldn't assume that the published study is the only research we've done. The "research" link goes to a page, not to the research paper, and the paper itself mentions some of the other research.

    4. "Let me know when you've found a non-trivial use for your far-from-worthless method."

      I've already found several! I'm flabbergasted you haven't adopted them!

      "...number of relevant variables you allow to linger unknown and unmeasured in your system. The PFB study, in contrast, has one."

      Actually 2. And you exclude 4 others, which might confound your results.

      "Studies based on the examination of one variable are the scientific ideal."

      No, they're not.

      "Rubbish. "You, too" is a fallacy, and the fact is I don't."

      Which fallacy is that exactly? And, yes, you do, come on.

      "If you have thought of a reasonable way to explain PolitiFact's difference between "False" and "Pants on Fire" on an objective basis then I'd love to hear about it."

      That's not relevant to my argument that you've potentially overestimated the bias, and I haven't claimed there is an objective difference.

      RE: how PFB.com acknowledges that journalists make things seem fair and are center-left.

      Well, the Pants on Fire study is the one quant study you've actually posted, thus the only one I can assess. And while the FAQ pays lip service to the possibility that PF isn't always biased against Republicans, the majority of the site's content revises fact checks from a conservative point of view. So I fail to see how I've mischaracterized your site. Its political bent is much more obvious than, for example, PF's, and its agenda is evidenced by more than your voting records.

      Anyway, to review:

      I argued in my OP that you use mainly rhetorical methods and limited quantitative analysis to demonstrate a liberal bias in PF, and that my estimates of the possible partisan bias (if it exists) are better because they are based on sound inferential statistics. You argued that I mischaracterized your opinions. I beg to differ. You also suggested that my inferences are nearly worthless because they don't account for selection bias. I countered that we have insufficient information to formulate a prior probability distribution of report card composition. Yet we can -still- estimate potential biases from the resulting probability distributions. Astoundingly, you don't see the value in that to your agenda. Just as astoundingly, you don't see the value in my suggestions about how to estimate the effect of political attitudes on professional fact checker rulings.

      We've also had a side debate about the merits of your Pants on Fire study and the purpose of your site. I argued that your study doesn't tell us much about the strength or type of bias that exists at PF. You...basically agree with me on that? I go on to explain that your site is clearly motivated by the desire to expose liberal bias at PF. And I maintain that, whatever you're trying to do at this site, you haven't come up with quantitative estimates of PF's bias (partisan or otherwise) that allow you to state your uncertainty in that judgment. At Malark-O-Meter, I have done that.

      Suggestions:

      (1) Cite my research as evidence of the possible liberal (and centrist) bias among fact checkers.
      (2) Shift your focus toward developing better quantitative methods to statistically estimate the strength and type of bias.

    5. 1) "I've already found several!"

      Please name the best one for me.

      2) "Actually 2. And you exclude 4 others, which might confound your results."

      You're invited to name them.

      3) "No, they're not." (referring to limiting studies to one variable)

      Yes, they are.
      http://explorable.com/controlled-variables.html

      3) "yes, you do, come on." (supposedly I conflate conclusions from the PFB study with broader conclusions stated at PFB generally).

      If I do, then you can give an example. Please do so.

      4) "That's not relevant to my argument that you've potentially overestimated the bias, and I haven't claimed there is an objective difference."

      What's my estimate of the bias? Or, should I say, what's your estimate of my estimate of the bias? Perhaps you should argue that you've potentially overestimated my estimate of the bias. Kind of puts your criticism in perspective, doesn't it?

      5) "Well, the Pants on Fire study is the one quant study you've actually posted, thus the only one I can assess."

      Right, but you wrote that it was the only one on which we based our conclusions. Remember? ("It's not a straw man I'm addressing. It is your opinion about PF, which you use one limited quantitative study to support.") That's the type of false statement I want you to stop making.

      5) "And while the FAQ pays lip service to the possibility that PF isn't always biased against Republicans, the majority of the site's content revises fact checks from a conservative point of view."

      We're interested in highlighting the best criticisms of PolitiFact. We think the best ones more often come from the right. Feel free to suggest ones from the left you think we should include. It should not have escaped you that we call all "Pants on Fire" ratings unfair. In short, we're not paying lip service to the idea that Democrats as well as Republicans are unfairly harmed by PolitiFact's poor fact checking. That's the type of false statement I'd like for you to stop making.


      6) "I fail to see how I've mischaracterized your site."
      That's disappointing.

      7) "Its political bent is much more obvious than, for example, PF's, and its agenda is evidenced by more than your voting records."

      I'm supposing you've figured out that JD is a Republican.

      Seriously, aside from the fact that PF tries to hide its ideological bent and we do not, what is the empirical evidence of our supposed ideological agenda? Are we exposing non-errors when we criticize PF from the right? Are we ignoring a huge vein of excellent PF criticism from the left?

      I'll tend to your summary of events by this weekend, probably. Don't rush your replies.

  5. "[PFB's] political bent is much more obvious than, for example, PF's, and its agenda is evidenced by more than your voting records."

    Oh noes! He figured out that we're biased! Somehow he uncovered our secret right-leaning bent that we keep hidden by discussing it on our FAQ page! We're ruined!

  6. "Suggestions:

    (1) Cite my research as evidence of the possible liberal (and centrist) bias among fact checkers.
    (2) Shift your focus toward developing better quantitative methods to statistically estimate the strength and type of bias."

    Countersuggestion: Wait 24 hours prior to replying.


    That's your last warning. Each time you fail to heed the warning, your post is eligible to contribute toward a strike count. Make three inaccurate statements (which I shall document) and your posting privilege is rescinded.

    Wait the 24 hours each time and you can keep posting as much doggerel as you please. Your choice.

    You get a free pass on your post above.

    Tu quoque
    http://www.infidels.org/library/modern/mathew/logic.html#tuquoque


Thanks to commenters who refuse to honor various requests from the blog administrators, all comments are now moderated. Pseudonymous commenters who do not choose distinctive pseudonyms will not be published, period. No "Anonymous." No "Unknown." Etc.