More on Publication Bias in Money Priming

By Neuroskeptic | April 23, 2016 6:53 am


Does the thought of money make people more selfish? Last year, I blogged about the theory of ‘money priming’, the idea that mere reminders of money can influence people’s attitudes and behaviors. The occasion for that post was a study showing no evidence of the claimed money priming phenomenon, published by psychologists Rohrer, Pashler, and Harris. Rohrer et al.’s paper was accompanied by a rebuttal from Kathleen Vohs, who argued that 10 years of research and 165 studies establish that money does exert a priming effect.

Vohs wrote: First, compared to neutral primes, people reminded of money are less interpersonally attuned. They are not prosocial, caring, or warm. They eschew interdependence. Second, people reminded of money shift into a professional, business, and work mentality.

Now, a new set of researchers have entered the fray with a rebuttal of Vohs.

British psychologists Vadillo, Hardwicke, and Shanks write that

When a series of studies fails to replicate a well-documented effect, researchers might be tempted to use a “vote counting” approach to decide whether the effect is reliable – that is, simply comparing the number of successful and unsuccessful replications. Vohs’s (2015) response to the absence of money priming effects reported by Rohrer, Pashler, and Harris (2015) provides an example of this approach. Unfortunately, vote counting is a poor strategy to assess the reliability of psychological findings because it neglects the impact of selection bias and questionable research practices.

We show that a range of meta-analytic tools indicate irregularities in the money priming literature discussed by Rohrer et al. and Vohs, which all point to the conclusion that these effects are distorted by selection bias, reporting biases, or p-hacking. This could help to explain why money-priming effects have proven unreliable in a number of direct replication attempts in which biases have been minimized through preregistration or transparent reporting.

Essentially, Vadillo et al. say that simply counting the “votes” of the 165 mostly positive studies, as Vohs does, misses the fact that the literature is biased. To demonstrate this, they present a funnel plot, a tool used in meta-analysis to look for evidence of publication bias. The key points here are the blue circles, red triangles, and purple diamonds, which represent the studies in Vohs’ rebuttal.

[Figure 1a from Vadillo et al. (2016): funnel plot of the money priming studies]

Here we see an ‘avalanche’ of blue, red and purple money priming experiments clustered just outside the grey funnel. This funnel represents null results (no money priming), so the studies just outside it are ones in which significant evidence for money priming was found, but only just (i.e. p-values were just below 0.05). This is evidence of publication bias and/or p-hacking. The original avalanche plot, by the way, was created by Shanks et al. from a different social priming dataset.
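To get a feel for what such a funnel plot shows, here is a minimal sketch in Python. The numbers are simulated, not Vadillo et al.’s data; the crude “selection” step simply nudges non-significant results just past the p = .05 contour to mimic the avalanche pattern:

```python
# Minimal funnel-plot sketch with simulated (not Vadillo et al.'s) data.
# Each study contributes an effect size and a standard error; the dashed
# lines mark where an effect would just reach p = .05, so points hugging
# those lines from the outside are "only just" significant.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_studies = 60
se = rng.uniform(0.1, 0.5, n_studies)     # standard errors (small samples = big SE)
effect = rng.normal(0.0, se)              # true effect assumed to be zero

# Crude selection bias: push non-significant results just past the threshold
z = effect / se
effect = np.where(np.abs(z) < 1.96, np.sign(effect) * 1.96 * se * 1.05, effect)

se_grid = np.linspace(0.01, 0.55, 100)
plt.scatter(effect, se, alpha=0.6)
plt.plot(1.96 * se_grid, se_grid, "k--", label="p = .05 contour")
plt.plot(-1.96 * se_grid, se_grid, "k--")
plt.gca().invert_yaxis()                  # large, precise studies at the top
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.legend()
plt.show()
```

In an unbiased literature with no true effect, most points should scatter symmetrically inside the funnel; a pile-up just outside the significance contour is the telltale sign.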

Vadillo et al. also show an alternative visualization of the same data. The plot below shows the distribution of z-scores, which are related to p-values. This shows an extreme degree of “bunching” to one side of the p=0.05 “wall” (which is arbitrary, remember) separating significant from non-significant z-scores. It’s as if the studies had just breached the wall of significance and were pushing through it:

[Figure from Vadillo et al. (2016): distribution of z-scores]
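The same bunching can be sketched directly in z-score space. Again, these are made-up numbers, not the actual money-priming studies; with unbiased reporting the z-scores should be spread smoothly rather than piled up just past the threshold:

```python
# Sketch of the z-score view: z = effect / SE for each study, with the
# (arbitrary) significance "wall" at z = 1.96. Simulated values only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
z_selected = rng.uniform(1.96, 2.6, 120)   # a pile-up just past significance
z_other = rng.normal(0.0, 1.0, 30)         # a few null results that got through

z_scores = np.concatenate([z_selected, z_other])
plt.hist(z_scores, bins=30)
plt.axvline(1.96, color="k", linestyle="--", label='the p = .05 "wall"')
plt.xlabel("z-score")
plt.ylabel("Number of studies")
plt.legend()
plt.show()
```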

Vadillo et al. say that study preregistration would have helped prevent this. I agree completely. Preregistration is the system in which researchers publicly announce which studies they are going to run, what methods they will use, and how they will analyze the data, before carrying them out. This prevents negative results from disappearing without a trace or being converted into positive findings by tinkering with the methods.

It’s important to note, though, that in criticizing Vohs for “vote counting”, Vadillo et al. are not saying that we should simply ignore large numbers of studies. Hand-waving dismissal of large amounts of evidence is a hallmark of pseudoscience, not rigorous science. What Vadillo et al. did was show, by meta-analysis, that Vohs’ large dataset has anomalies that make it untrustworthy. In other words, the 165 “votes” were not ignored; rather, they were shown to be the result of ballot-stuffing.

Vadillo MA, Hardwicke TE, & Shanks DR (2016). Selection bias, vote counting, and money-priming effects: A comment on Rohrer, Pashler, and Harris (2015) and Vohs (2015). Journal of Experimental Psychology: General, 145(5), 655-663. PMID: 27077759

  • Anonymous

    “This prevents negative results from disappearing without a trace(…)”

    I wonder if this only applies when a format like “Registered Reports” is used. That format implies (if I understood it correctly) that there is a) pre-registration and, importantly, b) a commitment to publish no matter what the results are.

    I think just doing a) pre-registration means researchers can still bury null results (please correct me if I am wrong). Maybe the pre-registration information is, or will become, available in some way or form, but I don’t see how that will give researchers practically useful information for taking null results into account.

    • Nick

      Publication bias is a two-way street. Not only do journals have to commit to publishing the results (no matter what the outcome), but researchers have to commit to writing them up (no matter what the outcome).

      One problem is that there is going to be asymmetry of enforcement here. If I pre-register a study, send in my results, and the journal (e.g., due to there being a new editor for whom pre-registration is not a priority) decides, “Nah, too dull”, then you will hear about it all over the Internet (which might even make something happen). On the other hand, if I fail to send in my pre-registered results, about the worst I can expect is the journal editor politely asking if I’ve finished yet, to which the response “mañana” will usually be sufficient.

      • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

        Yes and no. If the preregistration were public (as it should be), everyone would be able to see that you had preregistered a study 3 years ago and had omitted to publish the results. They could therefore take account of this, e.g. in a meta-analysis. They could, for instance, assume that all unpublished studies were null, and see whether the inclusion of these presumed-null studies altered the conclusions.

        Essentially, under the current system, null results can “vanish without a trace”.

        Under preregistration some results might vanish (you can’t *force* people to publish something) but they would leave a trace.
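        As a toy illustration of that sensitivity check (made-up numbers and a simple fixed-effect pooling, not any real money-priming meta-analysis):

```python
# Hypothetical sensitivity check: pool the published effects, then re-pool
# after adding preregistered-but-unpublished studies assumed to be null.
import numpy as np

def fixed_effect_pool(effects, ses):
    """Inverse-variance-weighted (fixed-effect) pooled estimate and its SE."""
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    pooled = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# Made-up published studies: effect sizes and standard errors
pub_effects, pub_ses = [0.45, 0.38, 0.52, 0.41], [0.20, 0.18, 0.22, 0.19]

# Preregistered studies that never appeared: assume effect = 0, typical SE
null_effects, null_ses = [0.0, 0.0, 0.0], [0.20, 0.20, 0.20]

print("Published only:      ", fixed_effect_pool(pub_effects, pub_ses))
print("With presumed nulls: ", fixed_effect_pool(pub_effects + null_effects,
                                                 pub_ses + null_ses))
```

        If the pooled estimate survives the presumed nulls, the conclusion is robust to the missing studies; if it shrinks toward zero, the vanished results matter.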

        • Nick

          True, but given how reluctant people currently are to call out other researchers for some of the most egregious malpractice imaginable, I don’t think that many people are going to end up massively shamed for this, when “we never finished” is always a (perhaps the most) plausible reason.

          • Anonymous

            Thanks to both of you for your replies and thoughts. I worry that pre-registration will give a false sense of doing something about publication bias, when in practice it does very little, if anything, about it.

            For instance, do researchers who perform a meta-analysis check all the pre-registration databases? Are these databases even searchable? I worry that the answer to both questions is “no”… More concretely, “AsPredicted” seems to keep pre-registrations private (for their reasons see https://aspredicted.org/messages/private_forever.php).

            From what I currently understand, pre-registration only helps with things like p-hacking and selective reporting of outcomes. If I have understood things correctly, the “Registered Report” format is the only thing that *will* actually help with publication bias. Since publication bias is a very serious problem, I wonder why not all journals offer this format… In fact, I think it is scandalous that they don’t.

          • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

            Thanks for the comment.

            “do researchers who perform a meta-analysis check all the pre-registration databases?”

            At the moment I think the answer is generally no; however, this is because there are few preregistered studies right now. Once the number of preregistrations grows, I think meta-analysts will start to use them.

          • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

            True, but we are getting more willing to do that. PubPeer is making headway.

  • Денис Бурчаков

    Recently a colleague of mine spoke about another facet of publication bias. He said: “If you want to publish, then first choose a journal and design your study according to the journal’s standards and expectations.”

    • OWilson

      A cardinal rule of business.

      “Know your audience”.

      • Денис Бурчаков

        I agree, but perhaps there is a slight difference between selecting the proper journal to reach the people who care about your research, and selecting research methods to comply with a journal’s practices? From my point of view, the design of the research protocol should be guided by the nature of the question at hand, and that is a question about research, not publishing. Of course, journal guidelines and theme (both stated and presumed) help to make the work more relevant to the audience, but should they be the very first question? Maybe I am wrong, but this idea can lead to more system hacking, of which there is already plenty.

        • OWilson

          Sorry, I spoke in such general terms that I did not adequately address your specific concerns about scientific journals, which are addressed in the article.

          I was merely responding to your colleague’s proposition: “If you want to publish,….” with my own, based on long years of writing consulting reports for paying clients.

          To be effective, a report, essay, or thesis (or even a comment here :) should be written with your specific audience in mind.

          • Денис Бурчаков

            This is very much true. Nevertheless, I suppose that while journals should guide researchers in choosing a language, they should not limit other options, specifically the design of the experiment. This limitation is a bias, and there is already enough bias in scientific research. My point is idealistic, because journals are the children of publishers, and publishers are business-minded institutions. The very fabric of this system is prone to bias, and there is no other system on the horizon. Better communication is nice, but when the only goal of this communication is to publish something that other people will cite, the process ceases to be science.


  • http://joseduarte.com Joe Duarte

    Relatedly, can we agree that suppressing null findings (when reporting supportive findings) is fraud?

    It appears to be considered fraud in the physical and life sciences, pretty much all sciences outside of social psychology and maybe a cluster of related fields.

    I mean in cases where the supportive and null/unsupportive findings tested the *same hypothesis*, which is almost always the case when social psychologists suppress nulls.

    It’s fine to discard a hypothesis that bore no fruit, fashion a new hypothesis, run new studies, and write up the new hypothesis and (all) the results of research on that hypothesis. There’s no ethical obligation to mention the earlier, discarded hypotheses (some scientists might mention them anyway if it makes for a good story/journey). There’s no deception in not mentioning them, since your paper is only talking about the new hypothesis and the research that explored it.

    But in many cases social psychologists take one hypothesis, run ten different studies testing that hypothesis (like “money priming makes people more conservative”), and only disclose and write up the five that “worked”. They don’t even disclose the *existence* of the five that didn’t work – it’s a miracle that Hal Pashler got Caruso, Vohs, and Waytz to admit that they had run several more studies than they disclosed in their paper, and that they were all nulls. That they admitted it in writing, and that the admission is sitting there in a journal article (the Rohrer et al. paper), is amazing.

    I think that admission/article will be an enormously awkward and fascinating artifact in the history of science, one that future historians of science will treat as a stunning illustration of the state of social psychology in the early 21st century. Especially given that *nothing happened* to the guilty parties. The journal **didn’t even retract their original, false, fraudulent paper**. It’s still sitting there, in the scientific literature, like any normal paper. They have to retract, given all the data they withheld that undermined the paper’s thrust, but in social psychology, journals seem more like tenure devices than actual scientific journals, and they almost never retract anything, even if it’s fraudulent.

    I like the Encyclopedia.com definition: “By today’s standards, omission of data that inexplicably conflicts with other data or with a scientist’s proposed interpretation is considered scientific fraud.”

    (http://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/scientific-fraud)

    So, Neuroskeptic, do you agree that suppressing nulls (pertaining to the same hypothesis) is fraud? If not, what’s your fraud framework? Do the perps have to think to themselves “this is fraud” in order for it to be fraud?

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      It’s a good question. I agree that suppressing null results while publishing similar but positive ones should be classed as fraud. However, it currently isn’t – in fact it is tolerated and even expected in many academic fields.

      So I don’t think it would be fair to say that Caruso, Vohs, and Waytz are guilty of fraud. They did nothing worse than what many others are doing. However I would say that their selective publication is grossly bad practice, and that we ought to change the rules so that doing the same thing in future is considered fraud.

      • Omnes Res

        If that is fraud, what do you call it when you basically run a replication of one of your previous studies but get the opposite results, and instead of reporting those results you p-hack the data set into 4 new publications, brag about the papers in a blog post, and then don’t respond to emails when researchers have questions about the data? Cf. Brian Wansink.

      • freecell0sd

        That would mean that the same conduct becomes less worthy of condemnation as ‘fraud’ depending on how low the standards are in the field one is working in – “Considering I’m a money launderer, this really shouldn’t be classed as fraud”.

        I’m not sure that’s the best approach to moral matters, especially when the actions of researchers affect those outside of their field. Misleadingly presented results can end up harming people who had no say over the low standards some researchers want to see applied to their actions.

        Different groups can desire different standards. When privileged groups try to adopt low standards for themselves in ways that harm others, it’s important that we refuse to play along. We should judge the actions of individuals within those groups regardless, and also condemn those who helped to promote low standards.
