More on Publication Bias in Money Priming

By Neuroskeptic | April 23, 2016 6:53 am


Does the thought of money make people more selfish? Last year, I blogged about the theory of ‘money priming’, the idea that mere reminders of money can influence people’s attitudes and behaviors. The occasion for that post was a study showing no evidence of the claimed money priming phenomenon, published by psychologists Rohrer, Pashler, and Harris. Rohrer et al.’s paper was accompanied by a rebuttal from Kathleen Vohs, who argued that 10 years of research and 165 studies establish that money does exert a priming effect.

Vohs summarized the claimed effects as follows: first, compared to neutral primes, people reminded of money are less interpersonally attuned. They are not prosocial, caring, or warm. They eschew interdependence. Second, people reminded of money shift into a professional, business, and work mentality.

Now, a new set of researchers have entered the fray with a rebuttal of Vohs.

British psychologists Vadillo, Hardwicke, and Shanks write that

When a series of studies fails to replicate a well-documented effect, researchers might be tempted to use a “vote counting” approach to decide whether the effect is reliable – that is, simply comparing the number of successful and unsuccessful replications. Vohs’s (2015) response to the absence of money priming effects reported by Rohrer, Pashler, and Harris (2015) provides an example of this approach. Unfortunately, vote counting is a poor strategy to assess the reliability of psychological findings because it neglects the impact of selection bias and questionable research practices.

We show that a range of meta-analytic tools indicate irregularities in the money priming literature discussed by Rohrer et al. and Vohs, which all point to the conclusion that these effects are distorted by selection bias, reporting biases, or p-hacking. This could help to explain why money-priming effects have proven unreliable in a number of direct replication attempts in which biases have been minimized through preregistration or transparent reporting.

Essentially, Vadillo et al. say that simply counting the “votes” of the 165 mostly positive studies, as Vohs does, misses the fact that the literature is biased. To demonstrate this, they present a funnel plot, a meta-analytic tool used to look for evidence of publication bias. The key points here are the blue circles, red triangles and purple diamonds, which represent the studies in Vohs’s rebuttal.

[Figure 1a from Vadillo et al. 2016: funnel plot of the money priming literature]

Here we see an ‘avalanche’ of blue, red and purple money priming experiments clustered just outside the grey funnel. The funnel represents null results (no money priming), so the studies just outside it are ones in which significant evidence for money priming was found, but only just (i.e. p-values just below 0.05). This is evidence of publication bias and/or p-hacking. The original ‘avalanche’ plot, by the way, was created by Shanks et al. from a different social priming dataset.
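To make the funnel logic concrete, here is a minimal sketch in Python. The studies, effect sizes and standard errors below are invented for illustration; this is not Vadillo et al.’s code or data. The idea: under the null hypothesis, about 95% of studies should land inside the region |effect| < 1.96 × SE, so a pile-up just outside that boundary is exactly the suspicious “avalanche” pattern.

```python
def z_score(effect, se):
    """z statistic for a study with the given effect size and standard error."""
    return effect / se

def funnel_region(effect, se, crit=1.96):
    """Classify where a study lands relative to the 95% null funnel."""
    z = abs(z_score(effect, se))
    if z < crit:
        return "inside"          # non-significant (a "null" result)
    elif z < crit + 0.5:
        return "just outside"    # barely significant -- the 'avalanche' zone
    return "well outside"

# Hypothetical studies: (effect size, standard error) -- made-up numbers.
studies = [(0.10, 0.30), (0.45, 0.22), (0.62, 0.30), (1.50, 0.25)]
for effect, se in studies:
    print(effect, se, round(z_score(effect, se), 2), funnel_region(effect, se))
```

An unbiased literature should show studies scattered both inside and outside the funnel; a literature where most studies sit in the “just outside” band is the red flag.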

Vadillo et al. also show an alternative visualization of the same data. The plot below shows the distribution of z-scores, which are related to p-values. This shows an extreme degree of “bunching” on one side of the p = 0.05 “wall” (which is arbitrary, remember) separating significant from non-significant z-scores. It’s as if the studies had just breached the wall of significance and were pushing through it:

[Figure from Vadillo et al. 2016: distribution of z-scores in the money priming literature]

Vadillo et al. say that study preregistration would have helped prevent this. I agree completely. Preregistration is the system in which researchers publicly announce which studies they are going to run, what methods they will use, and how they will analyze the data, before carrying them out. This prevents negative results from disappearing without a trace or being converted into positive findings by tinkering with the methods.
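The z-score/p-value relationship behind that histogram can be sketched with the standard normal CDF; a hypothetical snippet (not from the paper):

```python
import math

def p_from_z(z):
    """Two-sided p-value corresponding to a z-score under the standard normal."""
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # normal CDF at |z|
    return 2 * (1 - phi)

# The p = .05 "wall" sits at |z| = 1.96: z-scores bunched just above
# 1.96 are p-values bunched just below .05.
for z in (1.90, 1.96, 2.00, 2.10):
    print(f"z = {z:.2f}  ->  p = {p_from_z(z):.4f}")
```

So a histogram of z-scores piled up just past 1.96 and a histogram of p-values piled up just under 0.05 are two views of the same bias.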

It’s important to note, though, that in criticizing Vohs for “vote counting”, Vadillo et al. are not saying that we should simply ignore large numbers of studies. The hand-waving dismissal of large amounts of evidence is characteristic of pseudoscience, not rigorous science. What Vadillo et al. did was show, by meta-analysis, that Vohs’s large dataset contains anomalies that make it untrustworthy. In other words, the 165 “votes” were not ignored; rather, they were shown to be the result of ballot-stuffing.
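Why vote counting fails under selection bias can be shown with a toy simulation (all parameters invented, purely illustrative): even when the true effect is exactly zero, a published record filtered for significance will “vote” unanimously in favour of the effect.

```python
import random

random.seed(1)  # reproducible toy example

def run_study(n=30):
    """Simulate one two-group study of a truly null effect; return its z-score."""
    a = sum(random.gauss(0, 1) for _ in range(n)) / n  # group A mean
    b = sum(random.gauss(0, 1) for _ in range(n)) / n  # group B mean
    se = (2 / n) ** 0.5  # standard error of the difference in means
    return (a - b) / se

all_z = [run_study() for _ in range(2000)]
# The "file drawer": only significant positive results get written up.
published = [z for z in all_z if z > 1.96]

print(f"studies run: {len(all_z)}, studies published: {len(published)}")
print(f"vote count among published studies: {len(published)} for, 0 against")
```

A handful of chance “hits” survive the filter, and a vote count over the published record alone declares the (nonexistent) effect established — which is exactly why the meta-analytic checks above matter.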

Vadillo MA, Hardwicke TE, & Shanks DR (2016). Selection bias, vote counting, and money-priming effects: A comment on Rohrer, Pashler, and Harris (2015) and Vohs (2015). Journal of Experimental Psychology: General, 145(5), 655-663. PMID: 27077759

  • Anonymous

    “This prevents negative results from disappearing without a trace(…)”

    I wonder if this only applies when a format like “Registered Reports” is used. That format implies (if I understood it correctly) that there is a) pre-registration and, importantly, b) that the results will be published no matter what they are.

    I think just doing a) pre-registration means researchers can still bury null results (please correct me if I am wrong). Maybe the pre-registration information is, or will become, available in some form, but I don’t see how that will give researchers practically useful information for taking null results into account.

    • Nick

      Publication bias is a two-way street. Not only do journals have to commit to publishing the results (no matter what the outcome), but researchers have to commit to writing them up (no matter what the outcome).

      One problem is that there is going to be asymmetry of enforcement here. If I pre-register a study, send in my results, and the journal (e.g., due to there being a new editor for whom pre-registration is not a priority) decides, “Nah, too dull”, then you will hear about it all over the Internet (which might even make something happen). On the other hand, if I fail to send in my pre-registered results, about the worst I can expect is the journal editor politely asking if I’ve finished yet, to which the response “mañana” will usually be sufficient.

      • Neuroskeptic

        Yes and no. If the preregistration were public (as it should be), everyone would be able to see that you had preregistered a study three years ago and had not published the results. They could therefore take account of this, e.g. in a meta-analysis: they could, for instance, assume that all unpublished studies were null, and see whether including these presumed-null studies altered the conclusions.

        Essentially, under the current system, null results can “vanish without a trace”.

        Under preregistration some results might vanish (you can’t *force* people to publish something) but they would leave a trace.

        • Nick

          True, but given how reluctant people currently are to call out other researchers for some of the most egregious malpractice imaginable, I don’t think that many people are going to end up massively shamed for this, when “we never finished” is always a (perhaps the most) plausible reason.

          • Anonymous

            Thanks to both of you for your replies and thoughts. I worry that pre-registration will give a false sense of doing something about publication bias when, in practice, it does very little, if anything, to address it.

            For instance, do researchers who perform a meta-analysis check all the pre-registration databases? Are these databases even searchable? I worry that the answer to both questions is a “no”… More concretely “aspredicted” seems to keep pre-registrations private (for their reasons see

            From what I currently understand, pre-registration only helps with things like p-hacking and selective reporting of outcomes. If I understood things correctly, the “Registered Report” format is the only thing that *will* actually help with publication bias. Since publication bias is a very serious problem, I wonder why not all journals offer this format… In fact, I think it is scandalous that they don’t.

          • Neuroskeptic

            Thanks for the comment

            “do researchers who perform a meta-analysis check all the pre-registration databases?”

            At the moment I think the answer is generally no, but that is because there are few preregistered studies right now. Once the number of preregistrations grows, I think meta-analysts will start to use them.

          • Neuroskeptic

            True, but we are getting more willing to do that. PubPeer is making headway.

  • Денис Бурчаков

    Recently a colleague of mine spoke about another facet of publication bias. He said: “If you want to publish, then first choose a journal and design your study according to journal’s standards and expectations”.

    • OWilson

      A cardinal rule of business.

      “Know your audience”.

      • Денис Бурчаков

        I agree, but perhaps there is a slight difference between selecting the proper journal to reach the people who care about your research, and selecting research methods to comply with a journal’s practices? In my view, the design of a research protocol should be guided by the nature of the question at hand, and that question is about research matters, not publishing. Of course, a journal’s guidelines and theme (both stated and presumed) help make the work more relevant to the audience, but should they be the very first consideration? Maybe I am wrong, but this idea can lead to more system-hacking, of which there is already plenty.

        • OWilson

          Sorry I spoke in such general terms, which did not adequately address your specific trade publication issues, which are addressed in the article.

          I was merely responding to your colleague’s proposition: “If you want to publish,….” with my own, based on long years of writing consulting reports for paying clients.

          To be effective, a report, essay, thesis, (or even a comment here :) should be made with your specific audience in mind.

          • Денис Бурчаков

            This is very much true. Nevertheless, I suppose that while journals should guide researchers in choosing a language, they should not limit other options, specifically the design of the experiment. Such a limitation is a bias, and there is already enough bias in scientific research. My point is idealistic, because journals are children of publishers, and publishers are business-minded institutions. The very fabric of this system is prone to bias, and there is no other system on the horizon. Better communication is nice, but when the only goal of that communication is to publish something that other people will cite, the process ceases to be science.





About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

