The Problem With Michael LaCour’s Rebuttal

By Neuroskeptic | June 1, 2015 4:12 am

The hottest story in science over the past couple of weeks has been the accusations of fraud against UCLA political science PhD student Michael LaCour.

The allegations were posted online on May 19th and they concern one of LaCour’s papers, published in Science, called When contact changes minds: An experiment on transmission of support for gay equality. On May 28th the paper was retracted at the request of LaCour’s co-author, Donald Green, but LaCour stands by the data and disagrees with the retraction.

There have been lots of twists and turns in this case – LaCour has admitted lying about some aspects of the data collection. In this post, however, I’ll focus on the data and on LaCour’s rebuttal to the original accusations, which he posted on May 29th.

LaCour’s key data are measures of attitudes towards gay marriage, using a 0-100 scale called a ‘feeling thermometer.’ LaCour measured this at baseline and then at subsequent timepoints.

According to the accusers, led by David Broockman, LaCour’s baseline feeling thermometer data are statistically indistinguishable from a large existing gay marriage feeling thermometer dataset called CCAP. The implication is that LaCour faked his data by randomly selecting datapoints from CCAP.

The critics showed histograms of the two baseline datasets in LaCour and Green (2014) and of the CCAP thermometer. It can be seen that they’re virtually identical and a statistical test confirms this at p = 0.4776, no significant difference.
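
As a rough illustration of why a test like this cannot tell a genuine replication apart from a resampled copy, here is a minimal sketch in Python. The data are made up (this is not the actual CCAP or LaCour data); one sample is simply drawn from the other:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)

    # Hypothetical 0-100 'feeling thermometer' responses.
    ccap = rng.choice(101, size=5000)    # stand-in for the CCAP sample
    study = rng.choice(ccap, size=1000)  # a 'survey' resampled from CCAP

    # A two-sample Kolmogorov-Smirnov test (the kind of comparison used in
    # the Broockman critique) finds no significant difference, as expected.
    stat, p = ks_2samp(ccap, study)
    print(f"KS statistic = {stat:.4f}, p = {p:.3f}")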


In his rebuttal, LaCour disputes this, and implies that the critics are themselves guilty of intentional misrepresentation. He writes that they

Selected the incorrect variable from CCAP, they then further manipulate this variable to make the distribution look more like that in LaCour and Green (2014).

When the correct variable is used, the distributions between the CCAP thermometer and the LaCour and Green (2014) thermometer are statistically distinguishable.

Selecting the incorrect variable may have been an oversight, but further manipulating that variable to make the distribution look more like LaCour and Green (2014) is a curious and possibly intentional “error.”

But to my mind, his objections are very weak. LaCour says that Broockman et al. used the CCAP variable ‘gaytherm’ whereas they should have used one called ‘pp gays t’.

The only difference between them, however, is that in ‘gaytherm’ some missing responses are coded as 50 (i.e. the midpoint of the scale). The ‘further manipulation’ LaCour decries also amounted to replacing missing data with 50s.
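
In code terms, the whole dispute reduces to a one-line recoding. Here is a minimal sketch (hypothetical values; the column names follow the variables quoted above, with underscores added to make them valid identifiers):

    import numpy as np
    import pandas as pd

    # Hypothetical CCAP-style responses, with NaN marking a missing answer.
    ccap = pd.DataFrame({"pp_gays_t": [0, 25, np.nan, 50, 100, np.nan, 80]})

    # 'gaytherm' is the same item with missing responses recoded to the scale
    # midpoint - the very substitution LaCour calls 'further manipulation'.
    ccap["gaytherm"] = ccap["pp_gays_t"].fillna(50)
    print(ccap)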

LaCour says that his data are statistically distinguishable from ‘pp gays t’ and he presents a histogram of the ‘correct’ CCAP variable:

Yet the only difference between the two CCAP versions is that this one contains fewer 50s. LaCour writes that the distributions are “quite different”, but there is only one difference: the 50s. Everything else is identical.

LaCour claims that there is “a modal spike at 100 in the CCAP data, no such spike exists in LaCour and Green (2014)”, but the very same spike at 100 is clearly visible in his own data; it just looks smaller because the spike at 50 is even bigger!


Overall, I’d say that these results are fully consistent with the theory that LaCour’s data were taken from CCAP, with missing items replaced with 50s. That would have been a natural way to handle the missing items, because the CCAP dataset itself makes the same substitution in the variable called ‘gaytherm’.

As far as I can see, LaCour has failed to refute this central criticism of Broockman et al.

  • Sean Lamb

    The other possibility – shocking though this might be – is that feeling thermometers on same-sex marriage might be reproducible.

    No wait – this is social sciences – if an effect is reproducible it must have been faked!

    • Wouter

      That’s an interesting point you’re raising: what should the data look like to be regarded as authentic? And where are the boundaries of such an assessment?

      • Neuroskeptic

        Broockman et al. compared the LaCour data to the CCAP and to five other gay marriage feeling thermometer datasets. LaCour’s data were much more similar to CCAP than they were to any of the others. So there’s nothing inherent to the feeling thermometer that produces data like that.

        Also, it is known that attitudes to gay marriage are changing in the USA. So we would not expect studies conducted one or two years apart to produce the same distribution. It’s not a static, exactly reproducible distribution.

        • Sean Lamb

          At the moment we have two extraordinary scenarios to choose between:

          1. A graduate student carried out a survey with a price-tag of $750,000 and yet seems unable to produce the raw data or a funding source
          2. A graduate student faked an email in 2013 and then proceeded to direct 50 canvassers to 1000s of houses with a secret plan to fake an entire data-set 18 months down the track.

          Nothing internal to the dataset convinces me yet that #2 HAS to be the case. Although if LaCour doesn’t ‘fess up to UCLA about what actually transpired, I am guessing that is the version everyone will assume is true. At the very least, LaCour will go down as one of the most grandiose scientific fakers of all time.

          • Chris

            Your #2 isn’t actually a possibility, and even if it were, “Got in over his head and realized he had to start making stuff up” was the most likely explanation until it became clear that LaCour has likely done this before.

          • Sean Lamb

            Actually Chris, the possibility I am looking at is that the “smoking gun” email was tampered with slightly and that there was a survey carried out by uSamp.


          • Chris

            If there was a survey carried out by Usamp, that’s easy enough to prove.

          • Sean Lamb

            Michael LaCour presumably could, since if he did a survey he must have some correspondence concerning it.

            The rest of us can’t – assuming that USamp’s hypothetical client is shy – as he who pays the piper calls the tune.

            If LaCour can’t come up with a survey partner his academic career is toast, so it will be interesting to see what happens.

          • zaphos

            NYT’s interview with LaCour says, I believe in reference to Usamp: “He now says that, in fact, he did not end up using that survey company but another one.” The article doesn’t say what company, though.

            @cskovron on twitter says he worked with LaCour on surveys in 2013: “So, about the umich qualtrics link in the LaCour report. That’s to my account. I’ve taken down the survey. ML and I were collaborating in April/May 2013. We really sent out mail offering an iPad. UCLA IRB approved it. We got some real data! 38 whole responses! So clearly that wasn’t going to work. ML came up with the idea for the uSamp panel. But once he allegedly got it running for wave 1 he stopped returning my calls and emails and kicked me off the project. Now we know why! His timeline misrepresents the pilot.”

          • Sean Lamb

            Quite, zaphos, but perhaps LaCour might change his mind on the issue of uSamp. Cskovron seems, peculiarly, to be both implying that the uSamp project was fictitious and simultaneously resentful that he was shut out of it! Which is a rather interesting piece of cognitive dissonance.

            Obviously no one is going to believe anything LaCour says now, so it will be a matter for the UCLA research integrity committee to determine what took place here.

            Let’s hope LaCour doesn’t do a Nikolai Bukharin and fall on his sword for the good of the cause, but rather cooperates fully with the UCLA authorities. But I am not going to hold my breath on that.

          • zaphos

            In fairness to cskovron, if you look at more of the tweets on his timeline he sounds not so dissonant but rather reasonably glad in retrospect about being shut out, in light of recent events: “I thought it was just a coauthorship conflict. Now I know it’s much more. He actually did me a favor by kicking me out before the fraud.”

          • Sean Lamb

            Well, he could hardly say anything different, could he?
            However, my judgement – and I am aware this is a completely subjective matter – is that he appears to be still simmering with barely repressed resentment.

          • Aporia27

            Why should anyone seriously entertain the possibility that *other people* were tampering with things, and not LaCour? Let’s consider that:
            1. He lied about other things in the experiment: funding and incentives given to participants
            2. His explanation for why he destroyed the raw data doesn’t make any sense
            3. The uSamp employee he claimed to work with doesn’t exist
            4. He has lied about other things, like a teaching award he supposedly got. When a journalist discovered that the “Emerging Instructor Award” didn’t exist, LaCour changed his CV and then claimed he hadn’t changed it in over a year, though of course an archived version is still available.
            He’s also lied about other funding on his CV (also removed but also archived).

            Why on earth should I think that, instead of LaCour lying, other people were tampering with things? Of all the options that you’ve presented, LaCour lying seems the most likely.

          • Todd

            The canvassers were already going to go door to door as part of their outreach effort. LaCour and Green simply offered to work with them to collect data on the effects (varying the sexuality of the canvasser in a natural experiment). So, #2 is plausible – the canvassers did their job; however, LaCour fabricated the data, either because the original data were never collected (i.e., there appears to be no evidence of a survey at Qualtrics, uSamp has no record of working with LaCour, etc.) or because the data that were collected did not support the underlying theory. The fake emails and the evidence of other fabrications suggest that the former is the most likely explanation. As Green stated, this is surprising because the most difficult canvassing work was already taking place – all LaCour had to do was collect the data from the field experiment and report the results, whatever they might be. Instead, it appears he took many shortcuts, or as Bart Simpson said: “I only lied because it was the easiest way to get what I wanted.”

          • Neuroskeptic

            One possibility is that LaCour did run the survey, but the results were ‘bad’ (i.e. not sexy enough to get into Science, i.e. negative) so he decided to throw them out and make them up.

            I think this is unlikely but it can’t be ruled out.

          • Todd

            Agreed, this is a possibility. Unfortunately, the reason underlying the likely fabrication doesn’t really change the outcome – the study would still be retracted and LaCour’s reputation would suffer. The other difficulty is that LaCour has already admitted to misrepresenting certain things (and other fabrications have come out), so I have a hard time trusting anything he says. I mean, why should we believe him?

        • angel farts

          best argument I’ve heard so far

          • Sean Lamb

            Except the thermometer isn’t about gay marriage; it is about feelings towards gay people in general, presumably with 50 being neutral.
            I think a common response is to go “wtf” and leave it at 50.
            So what the LaCour and CCAP datasets demonstrate is that the rates of “wtf” are fairly stable over the 12 months separating the two surveys. Vavreck, who ran the CCAP survey, was his supervisor, so it is likely he just used her methodology and so got similar results.

      • angel farts

        yes, I would love to see a consensus on that question

    • Dennis

      Reproducing the finding is not the same as getting the exact same data. “LaCour’s data” look like they *exactly* covary with the noise in the CCAP data.

  • matus

    “It can be seen that they’re virtually identical and a statistical test confirms this at p = 0.4776, no significant difference.” Eh, WHAT??! The p-value doesn’t confirm anything. The test tells us that the hypothesis that the distributions are identical could not be rejected. We can’t say whether this is because the test didn’t have sufficient power or because the distributions are actually identical.

    Overall, I don’t think the analyses of the accusers are adequate for the claim they are making. They should have specified the two models explicitly and measured the relative probability of each model. The population model should be obtained from the CCAP and other datasets. The other model is given by randomly sampling from CCAP (I guess that would be some extension of the hypergeometric distribution).

    • Neuroskeptic

      The sample size in this case is several thousand, so it has plenty of power.

      Creating a population model by merging the other datasets would just create a mess IMO, because the studies are not random samples from a general population. Each study samples its own population (e.g. each has a different recruitment strategy, geographical location, and year).

      Still, if you did do that, I am 99.9% sure that LaCour’s data would be much more consistent with the CCAP than the general population model. Just from eyeballing it.

      • matus

        The distribution tests require a higher sample size than, say, a standard t-test – that’s why they are not recommended. Several thousand was the case for the CCAP. The LaCour study had just around 1,000. It would be interesting to know the power of the test.

        In this particular case, I don’t trust my or anyone’s eyeballing abilities. Eyeballing for correlations or group comparisons is ok. Eyeballing normality from histograms can be done with some practice. But eyeball-comparing two histograms with irregular highly-peaked bimodal distributions is IMO questionable – to say the least.

        • Tim

          You can do a lot better than an eyeball comparison. I extracted the data from the Broockman critique and tested two hypotheses:
          a) Study 1 and CCAP are independently drawn from an identical underlying distribution; and

          b) Study 1 is drawn from CCAP.
          Case (a) is implausible, but case (b) would conclusively demonstrate fraud. The two cases leave distinct statistical signals in the difference of the two data sets. Setting A to be the ratio of the number of CCAP samples to Study 1 samples, the variance of
          CCAP – A*Study1
          should be
          CCAP*(1 + A) in case (a), or
          CCAP*A in case (b).
          If I compute the distribution of
          (CCAP – A*Study1)/sqrt(CCAP*(1 + A))
          I get a variance that is less than 1 (0.88 to be exact). I am doing a quick approximation and mixing Gaussian and Poisson distributions for this, but the result should hold in a more careful analysis. This is positive evidence of fraud (albeit only at the 1-sigma level; 0.88 is not *that* surprising given the assumption of independent realizations). Computing the variance of
          (CCAP – A*Study1)/sqrt(CCAP*A)
          gives a value of 1.07. This means that the data are more consistent with outright fraud (drawing Study 1 from the CCAP) than with the very implausible scenario of drawing Study 1 from the *identical* distribution used to construct the CCAP. It’s even more damning than the K-S argument the Broockman paper used.
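
          For anyone who wants to check the logic, here is a quick simulation of the two normalizations under scenario (b). This is only a sketch with simulated counts (not my extracted data), and note that var(z_a) = var(z_b) * A/(1+A) by construction:

            import numpy as np

            rng = np.random.default_rng(0)

            # A parent 0-100 distribution with spikes at 0, 50 and 100.
            probs = np.full(101, 1.0)
            probs[[0, 50, 100]] = 10.0
            probs /= probs.sum()

            n_ccap, n_study = 5000, 1000
            A = n_ccap / n_study

            # Scenario (b): Study 1 is drawn from the CCAP responses themselves.
            ccap_resp = rng.choice(101, size=n_ccap, p=probs)
            study_resp = rng.choice(ccap_resp, size=n_study)

            ccap = np.bincount(ccap_resp, minlength=101).astype(float)
            study = np.bincount(study_resp, minlength=101).astype(float)

            diff = ccap - A * study
            mask = ccap > 0
            z_a = diff[mask] / np.sqrt(ccap[mask] * (1 + A))  # case (a) scaling
            z_b = diff[mask] / np.sqrt(ccap[mask] * A)        # case (b) scaling

            # In case (b) data, z_b is the correctly scaled score (variance ~1
            # in expectation), while var(z_a) is smaller by the factor A/(1+A).
            print(z_a.var(), z_b.var())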

          • Neuroskeptic

            That sounds like a really promising approach – you should do a more formal take on that analysis!

          • Tim

            It’s a cute statistical point that deserves its own post. There is a subtle difference between one distribution being drawn from another and two distributions being drawn from the same large parent distribution. When you open a textbook and it has a section on whether two distributions are the same (the K-S test, for example), it generally refers to the latter case.

      • Sam Wang

        I think any objection to this point is nitpicking. The distributions are pretty obviously identical – anyone can see that by examining the noise. If I were to pick a test, it would be a chi-square score – simplicity is best.

        In addition to the statistical test, one must step back and ask: why is there no documentary evidence that a survey was done at all? Qualtrics and uSamp both say they didn’t do the work. This case seems pretty well settled by now.

        • Neuroskeptic

          I agree, but somehow I fear this won’t be the end of the story. In my view, if LaCour was going to confess, he’d have done so by now.

          I think he will try to drag this out, unless and until someone offers him another shot at prominence (and ideally a book deal) for ‘repenting’ and telling the truth.

          I’d not be at all surprised if five years from now LaCour is trying to present himself as an expert on scientific fraud! We’ll have to wait and see.

          • CPO_C_Ryback

            Excuse me — “political science” is a “science?”

            When did that happen? Is that “science” now “settled?”

            IMHO, this entire kerfuffle is akin to when lifetime politicians claim they have “evidence-based” solutions. And as Harry Truman noted, that’s the time to go home and make sure all the doors are locked securely.

            Not buying any of this for one nano-second.

        • Sean Lamb

          Sam – I think the point this nay-sayer is making is that the fact the distributions are “identical” doesn’t actually tell us anything.

          Suppose you were to do a survey of the heights of 4,000 Californians and looked at the distribution.
          Then a year later I did another survey of the heights of 1,000 Californians. What is to stop someone like Neuroskeptic coming along and saying I must have fabricated my study, because it showed the exact same distribution as your study?

          The point is that the fact the two distributions are similar only adds a very small amount of evidential weight to the claim that the study is a fabrication.

          In point of fact, neither Qualtrics nor uSamp has made any public comment whatsoever that I am aware of. However, Donald Green seems certain Qualtrics didn’t do the survey, and as a co-author he is in an excellent position to know. He has made no comment regarding uSamp.

          • Sam Wang

            That’s still not right, statistically speaking. It is not the base distribution that is the key tell, but the noise riding on top of the distribution. So your analogy is not correct. The narrow question before us is how to test for fluctuations in random-counting noise, which do not reflect properties of the distribution. That is what Broockman et al. pointed out.

          • Sean Lamb

            You may find that much of the “noise” sits at multiples of 10.

          • Neuroskeptic

            I actually agree with Sean about the ‘noise’: in theory we can’t say with certainty what is noise and what is signal. E.g. the little “hump” around 25 is probably not random, but arises because some people feel they approve of gay marriage “about a quarter”. It’s probably no coincidence that that hump peaks at 25 rather than (say) 27.

            However, the fact remains that LaCour’s data are far more similar to CCAP than to any other feeling thermometer dataset. And none of those other datasets are anywhere near as similar to each other, or to CCAP.

          • Sam Wang

            P.S. This gets back to my point about chi-square. If the parent distributions were the same, but one sampled from it twice (which is your example of Californian heights), then the chi-square divided by the number of bins would be approximately 1. In this case, that number is far less than 1: the agreement is too good to be the result of chance fluctuation.

            Incidentally, this is why such arguments should not be carried out in purely narrative form, i.e. discussions on this thread. At some point statistical tests are essential. Unfortunately, this plays into the hands of obfuscators like LaCour.
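
            To make the chi-square suggestion concrete, here is a minimal simulation of the “sampled twice from the same parent” null, with hypothetical counts rather than the real histograms:

              import numpy as np

              rng = np.random.default_rng(1)

              # Two surveys drawn independently from the same parent distribution.
              probs = rng.dirichlet(np.full(101, 5.0))
              c1 = rng.multinomial(5000, probs).astype(float)
              c2 = rng.multinomial(1000, probs).astype(float)

              # Standard two-sample binned chi-square: under the same-parent
              # null, chi2 over the number of occupied bins is approximately 1.
              # A histogram copied or resampled from the other would drive the
              # statistic far below 1 - agreement too good for chance.
              num = (np.sqrt(1000 / 5000) * c1 - np.sqrt(5000 / 1000) * c2) ** 2
              den = c1 + c2
              occupied = den > 0
              chi2 = np.sum(num[occupied] / den[occupied])
              print(chi2 / occupied.sum())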

          • Sean Lamb

            “Unfortunately, this plays into the hands of obfuscators like LaCour.”

            I would suggest you are probably inflating the role of Twitter and blogs in this issue. These issues – in a properly functioning scientific community – are decided by institutional oversight committees. That is where LaCour will have to justify himself; he doesn’t owe Sam Wang any explanation whatsoever.

            What I found surprising in D. Broockman’s slightly narcissistic accounts of his personal journey in this matter is that not one person seems to have advised him to take this matter to the appropriate authority. In the case of LaCour it would have been the UCLA Office of Research Policy and Compliance or the Vice Chancellor of Research.

          • Neuroskeptic

            Why are they the appropriate authority to go to in the first instance?

            By going public, Broockman et al. have ensured that, yes, the official integrity committees are informed, but not before the important issues have been debated openly.

            Why should the accusations be private when the paper itself was published for the world to see?

          • Sean Lamb

            Correct – it is probably only an obligation on people who seek to maintain a career in academic institutions.
            This is the UCLA policy in this regard:
            “Confidentiality. To the extent possible, UCLA and all participants in a Research Misconduct Proceeding should limit disclosure of the identity of Respondents and Complainants to those who need to know, provided that this limit is consistent with a thorough, competent, objective and fair Research Misconduct Proceeding and with the law. Except as may otherwise be prescribed by applicable law and University policy, and as necessary to conduct a Research Misconduct Proceeding, confidentiality must be maintained (e.g., through the use of redaction) for any records or evidence from which Research subjects may be identified.”

            The rest of us can do as we please. The one advantage playing by the rules provides is that you are covered by qualified privilege.

          • Neuroskeptic

            “To the extent possible, UCLA and all participants in a Research Misconduct Proceeding…”

            Right. But there is no Research Misconduct Proceeding (yet); this is all just informal discussion.

          • Sean Lamb

            ha – not up to your usual standard of disingenuousness.
            But really the point I was trying to make is that people who think they can prove this online are suffering from statistical (and other kinds of) hubris.

            In my view the most likely scenario is that the LA Gay and Lesbian Center were about to engage in some kind of monitoring to see how their advocacy was working anyway, and LaCour managed to get a research component tacked on. So he didn’t pay for the surveys or even have access to the raw data.

            Several people ended up feeling hard done by, and apparently they had the ear of David Fleischer or David Fleischman (or whoever he is), and the rest is history. But it is fascinating to see them all line up to plunge their stilettos into poor LaCour’s back. Even funnier to see them proclaim they are going to do the research again – and do it “properly”. I guess they had better make sure the effect is a bit smaller than LaCour reported (so they can still call him a cheat) but big enough to be newsworthy – although I expect Science won’t touch it this time. They had also better make sure their baseline is radically different from the CCAP.

            I should add I have “Jason Peterson”. He has been taken into witness protection and is being kept in a safe house at the moment.

          • Neuroskeptic

            A creative theory but it has a number of slight problems…

            1) If that were true, why doesn’t LaCour say so?

            2) Why doesn’t anyone *else* say so? Since in this theory, other people know the truth and have the raw data.

            3) Why are the data indistinguishable from CCAP and why is LaCour unwilling to admit it? (And what about all of the other problems that Broockman et al. noted?)

          • Sean Lamb

            All good questions, NS, unfortunately the more pressing issue for me at the moment is what the hell am I going to do with “Jason Peterson”?

            He is being debriefed in the safe house at the moment but he is plainly terrified and keeps muttering “Forget it, Jake. It’s Chinatown” over and over. He can’t stay in the safe house; I have two defectors who worked as technicians in the Merck-Litton Bionectics HIV trials in the late 70s coming in Thursday week.

            I was wondering maybe if someone from the LA Gay and Lesbian Center could get in touch and we might be able to arrange handing “Jason Peterson” over for a reasonable financial consideration?

          • Neuroskeptic

            “Jason Peterson” could try seeking asylum in Moscow.

  • Uncle Al

    Psychology obtains moment by moment “necessary” conclusions. Situational ethics said “gay is good.” The immediate result was gay, lesbian…LGBT, LGBTI, LGBTQQ, MSGI, GSD, SGL, GLBTA, GSM, MSM, FABGLITTER, LGBTQ+… The only obscene category remaining is eusexuality, for refusing to swim in the cesspool. That is discrimination. Discrimination must be ended. Zero tolerance!

  • Richard Williams

    The non-existent $793K in grants totally destroys his credibility. He admits they didn’t exist but offers no explanation for claiming that they did. I would have been more impressed if he had pulled off the study just by offering a couple of iPads as prizes.

    • practiCalfMRI

      And the absence of IRB approval, let’s not forget that! He applied retroactively and his IRB declined to act since the study had been completed (apparently). They recommended he contact Science about the lack of IRB. Apparently that never happened. Incompetence or deception?

  • JB

    All the concern over methodology is moot when you consider LaCour said he paid $100 to over 10,000 people. Where’d the money come from?
    Figures lie and liars figure.
    He’s toast at Princeton.
    The proverbial “10-foot pole” rule is in effect now.

    • Felonious Grammar

      Don’t all test subjects get paid?

      • Neuroskeptic

        Not always. I once ran a study where participants had a chance (about 1 in 500) to win £100. The others got nothing. Surprisingly, it had a really good response rate!


  • RealTalkfromNYC

    LaCour is showing signs of narcissism. It’s amazing to see a trapped narcissist squirm like this. There is a real train-wreck quality to this entire situation and, frankly, many of us are enjoying it, even if you don’t want to admit it openly or even to yourselves.

    • chomps

      My MIL is a narcissist. It is one of my favorite things in the world to watch one squirm.


  • Mark Rouleau

    To stray from statistics for a moment: if false accusations have been made about Michael LaCour’s academic integrity, this is actionable defamation per se. Clearly this is a stain on his professional integrity and should, in a rational world, hurt his future in many professions and in academia. Thus, if Mr. LaCour really desires to clear his name, he will bring a defamation action against those who have publicly accused him of academic and intellectual fraud. Given the type of charge, I believe it would be incumbent upon the accusers to establish the truth of their charges, and with civil discovery they would be able to obtain the raw data that LaCour claims to have produced. The truth would be public one way or the other. Hiding behind closed doors and privilege only perpetuates lies, deception and fraud.

    • Neuroskeptic

      If everything that Broockman, Kalla, et al. (including myself) have said is false, then LaCour could sue.

      But if what we’ve said is false, he wouldn’t need to sue us, because he could just reveal all of the raw data and the evidence proving that the studies did take place as described, and then we’d all look extremely silly and recant. Well, I can’t speak for others, but I know I would.


      • Mark Rouleau

        The court system has rules of evidence and the right to obtain discovery (forcing evidence from the other side). It has financial incentives and rewards for prevailing. The reason that LaCour won’t sue is because he knows that he would be proven to be a fraud in a court of law. In the court of public opinion all kinds of garbage is allowed to pass for truth.

        • Sean Lamb

          There are defenses against libel other than truth. Generally, if you haven’t been acting with malice or recklessness, you are OK. The only people who might be open to court action would be B, K & A, and they probably don’t have enough between them to pay off the court costs and the legal bills. Depending on what took place, suing the LA Gay and Lesbian Center might be an option, but could ML bring himself to do that?

          As for Neuroskeptic, all he has said is that he thought LaCour’s rebuttal was weak. If it ever came to court, his lawyer could just submit that his client was so hopeless at statistics that he couldn’t even recognize situations where you were interested in Type I errors versus those where you wanted to estimate the Type II error rate, and that would be the end of the matter.

          • Neuroskeptic

            Why on earth would LaCour sue the LAGLC?

            Wait, are you still hanging onto your theory that the LAGLC has had the raw data all along?

          • Sean Lamb

            Sorry, I thought we were exploring scenarios in which LaCour could take legal action? I have no idea what took place, although I think it unlikely it is the massive data fabrication people believe. But it is clear LaCour doesn’t have any raw data or any confidence that uSamp/Instantly will acknowledge him. But there are dozens of possible scenarios that might have taken place that could have left LaCour exposed. For example: suppose the money had been provided to the LAGLC for the provision of mental health services to young people, and the LAGLC had decided the best way to improve the mental health of young people was to campaign for marriage equality (and lefties are pretty good at that kind of logic). That would leave all parties to the survey in a position of Mutually Assured Destruction in terms of a charge of misappropriation of public money.

            There are all kinds of possibilities. All I know is I still have “Jason Peterson” in my safe house and he is beginning to annoy the other occupant – he brought in a whole tranche of documents from the Pentagon relating to Crawford Sams and the release of bubonic plague in North Korea. He tells me that “Jason Peterson” is going around melodramatically quoting Primo Levi: “Here there is no why.” So that his roommate now refers to him as Jason Poser.

          • Mark Rouleau

            Those defenses are only available in instances where the defamed/libeled/slandered person is a public figure. It would be reasonable to believe that LaCour is a limited public figure regarding this publication. The fact is that even if damages were not available, LaCour would be able to establish the TRUTH, if it were true, and the defendants would have to establish their affirmative defense of Freedom of Speech granting them those protections.

      • CPO_C_Ryback

        Please. You must be joking. Think of cases like John Hinckley, Jr., and two sets of psychologists, giving two entirely different versions.

        LaCour would need a rock-solid database to stand a chance in court. And he does not. And since he isn’t IBM or Ford Motor, no lawyer will take his case because they need to be paid in cash, upfront.

        • Mark Rouleau

          No, CPO_C_Ryback, there is no joke. Apparently you know little about forensic evidence. The “rock solid” database either is or is not there. Your comparison to “psychologists” is like comparing art to mathematics. Art is incredibly subjective, while most sane people would agree that math problems (or at least the majority of them) have a correct answer that is objective and repeatable. So too is evidence of this nature. There are digital fingerprints (metadata) all over the data showing when it was generated (entered), what terminal it was entered on, etc.

          And on the other point: in case you haven’t noticed, this research is really more about a political/social agenda than it is about hard science. Lambda Legal has funded a lot of litigation relating to this general political/social issue, as have other stakeholders in this debate. I think that you need to take the blinders off to recognize that if his “data” is real and is “true”, there are more than enough stakeholders who would be more than happy to fund the litigation to prove it so. On the other hand, if it is fraudulent and made up, those same stakeholders will simply argue that it’s too hard to prove the truth and that he doesn’t have the funds to do it, rather than simply admit that he fabricated evidence to support a political/social viewpoint.

          “In the 23-page document, political science graduate student Michael LaCour of the University of California (UC), Los Angeles, attacks the methods and motives of researchers who raised questions about his research, but confirms that he lied about some funding sources and the incentives used to attract participants. And he admits that he destroyed the data used to produce the study, claiming that action was required by a UC Los Angeles institutional review board (IRB) in order to protect the privacy of participants.”

          How convenient to “destroy” the data used to produce the study. Believe me, unless they wiped and reformatted the drive, there is evidence. He should be able to get some of the participants to come forward to support his claims. Then add the admitted lie about funding and you have someone who is willing to lie.

          LaCour will not sue because he would risk being shown in open court to be an academic fraud. There are already questions about some of his other research as well.

          • CPO_C_Ryback

            Hey, Chuckles — think. You ask 100 poly-sci PhDs — you get 100 different answers. Duh.

  • daqu

    “LaCour measured this at baseline and then at subsequent timepoints.”

    Of course at this point, we have no idea what LaCour actually did. We know only what he says he did. And he has already been found to have been lying about some things he claimed to have done.

    So as good scientists, let’s stick to the known facts, and state what LaCour _claims_ to have done, not what he supposedly has done.



