How (Not) To Fix Social Psychology

By Neuroskeptic | January 18, 2013 9:37 am

British psychologist David Shanks has commented on the Diederik Stapel affair and other recent scandals that have rocked the field of social psychology, in a piece titled Unconscious track to disciplinary train wreck.

Lots of people are weighing in on this debate for the first time at the moment, but their initial reactions often fall prey to misunderstandings that can stand in the way of meaningful reform – misunderstandings that more considered analysis has already exposed.

For example, Shanks writes:

[despite claims that] social psychology is no more prone to fraud than any other discipline, but outright fraud is not the major problem: the biggest concern is sloppy research practice, such as running several experiments and only reporting the ones that work.

It’s true that fraud is not the major issue, as I and many others have said. But bad practice, such as p-value fishing, is in no way “sloppy”, as Shanks calls it. Running multiple experiments and reporting only the ones that “work” is a sensible and effective strategy for getting positive results; that’s why so many people do it. And so long as scientists are required to produce such findings to get publications and grants, it will continue.
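To see just how effective the strategy is, here is a minimal simulation (purely illustrative, not from Shanks's piece or the studies discussed): even when the true effect is zero, running five experiments and reporting whichever one "works" pushes the false-positive rate from 5% to roughly 1 − 0.95⁵ ≈ 23%.

```python
import random

random.seed(42)

def experiment(alpha=0.05):
    """One experiment on a true null effect: comes out
    'significant' (p < alpha) purely by chance."""
    return random.random() < alpha

def report_best_of(k, alpha=0.05):
    """Run k null experiments and report a positive finding
    if ANY of them happens to reach significance."""
    return any(experiment(alpha) for _ in range(k))

trials = 100_000
honest = sum(experiment() for _ in range(trials)) / trials
fishing = sum(report_best_of(5) for _ in range(trials)) / trials

print(f"False-positive rate, one experiment reported:  {honest:.3f}")
print(f"False-positive rate, best of five reported:    {fishing:.3f}")
```

The simulated rates come out near 0.05 and 0.23 respectively: selective reporting quadruples the odds of a publishable "effect" that isn't there.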

Behavior is the product of rewards and punishments, as a great psychologist said. We need to change the reinforcement schedule, not berate the rats for pressing the lever.

Earlier, Shanks writes that evidence of unconscious influences on human behaviour – a popular topic in Stapel’s work and in social psychology generally –

is easily obtained because it usually rests on null results, namely finding that people’s reports about (and hence awareness of) the causes of their behaviour fail to acknowledge the relevant cues. Null results are easily obtained if one’s methods are poor.

Thus journals have in recent years published extraordinary reports of unconscious social influences on behaviour, including claims that people are more likely to take a cleansing wipe at the end of an experiment in which they are induced to recall an immoral act [etc]…

…failures to replicate the effects described above have been reported, though often papers reporting such failures are rejected out of hand by the journals that published the initial studies. I await with interest the outcome of efforts to replicate the recent claim that touching a teddy bear makes lonely people more sociable.

Here Shanks first says that null results can easily result from poorly conducted experiments, and then criticizes journals for not publishing null results that represent failures to replicate prior claims! But null replications are very often rejected precisely because a reviewer says, like Shanks, “This replication was just poorly conducted; it doesn’t count.” Shanks (unconsciously, no doubt) replicates the problem in his own article.

So what to do? Again, it’s a systemic problem. So long as we have peer-reviewed scientific journals, and peer review takes place after the data are collected, it will be open to reviewers to spike results they don’t like – generally, although not always, null ones. If reviewers had to judge the quality of a study before they knew what it was going to find, as I’ve suggested, this problem would be solved.

Other people have great ideas of their own for fixing science. The problem is structural, not a failing on the part of individual scientists, and it is not limited to social psychology.

  • Andrew Wilson

    I can't describe how much I want the savior of modern psychology to be Skinner :)


    The problem with science is that it is filled with people who see it as just another way to earn a living instead of being driven by a need to know.

    And when you see it as just another job, manipulating results so you get to keep that job or advance yourself is no more than a logical consequence.

    It's an unforeseen side effect of mass education.

    There will be no way to improve on that except to reinstate balloting or some other selection system for higher education to weed out the untalented and uncaring.

  • Y.

    You've discussed this before, but I think one of the solutions is pre-registration of all studies, like the way it's done with clinical studies now. It's not perfect; for example, some entries have vague outcome measures. But pre-registration certainly reduces the potential for cherry picking and statistical shenanigans. I don't understand why there isn't more of a push for that from journals and funding agencies.

  • C

    Maybe the problem is most obvious in social psychology just because the experiments and results are easy to understand and judge. I know that questionnaires with leading questions are bad, but I would not know a bad western blot from a good one.

  • Neuroskeptic

    Y: Yes, very true; and actually pre-peer review is pretty much a form of registration. But even registration with the current system of peer review would be a big step forward.

  • Neuroskeptic

    C: Me too, but some people are amazingly good at spotting dodgy blots! Almost every known case of Western blot manipulation has been caught out by a reader paying attention. I guess if your entire job is making & looking at blots, you get a feel for what's right.

    In fact there's software (I think) that can do it automatically by spotting bands that are too similar to each other, indicating copy pasting, and also spotting 'hard edges'.
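    [Editor's note: the detection idea described in this comment – flagging image regions that are too similar to be independent – can be sketched in a few lines. This is an illustrative toy, not any actual forensic tool: slide a window over a grayscale image, normalize each patch, and rank pairs of patches by correlation; non-overlapping pairs scoring near 1.0 suggest copy-pasting.]

```python
import numpy as np

def patch_correlations(img, size=8, stride=8):
    """Return all patch pairs from a 2-D grayscale array, sorted by
    normalized cross-correlation. Near-1.0 scores between distinct
    patches are a red flag for duplicated (copy-pasted) regions."""
    patches = []
    h, w = img.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            p = img[y:y + size, x:x + size].astype(float).ravel()
            p -= p.mean()                      # remove brightness offset
            norm = np.linalg.norm(p)
            if norm > 0:                       # skip constant patches
                patches.append(((y, x), p / norm))
    scores = []
    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            (a, pa), (b, pb) = patches[i], patches[j]
            scores.append((float(pa @ pb), a, b))
    return sorted(scores, reverse=True)

# Toy "blot" with one deliberately duplicated band:
img = np.random.default_rng(0).random((24, 24))
img[0:8, 8:16] = img[0:8, 0:8]                 # simulate copy-pasting
best, loc1, loc2 = patch_correlations(img)[0]
print(best, loc1, loc2)                        # top pair is the duplicate
```

    On this toy image the top-scoring pair is exactly the copied band, with correlation ≈ 1.0, while unrelated patches score far lower.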

  • C

    If there is in fact a market for blot fraud detection software, wouldn't that suggest fraud is not entirely uncommon?
    I guess my bigger point was that psychology research is easy for non-experts to criticize.

  • Anonymous

    Well remember, berating scientists to follow good research practices and punishing the ones who don't does count as a way of shifting incentives and stuff… Not a permanent solution, but might as well use it for now.

  • Neuroskeptic

    Anon: True… but the only punishment that really matters is publications. There are many scientists who everyone knows are up to some dodgy schemes but who get away with it because of their impressive CVs…

  • David Shanks

    My use of the term 'null result' was a little ambiguous, as you rightly highlight. The sort of study at issue typically finds some influence on behaviour (thinking about soccer hooligans worsens performance on a knowledge test) together with a null result concerning awareness of this influence. The replication typically finds a null result regarding the influence itself (eg fails to find evidence that thinking about soccer hooligans worsens performance). These are of course very different things.

  • Neuroskeptic

    Hi David, thanks very much for the comment.

  • mount analogue

    I can't describe how much I don't want the savior of modern psychology to be Skinner :(




About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

