In which I inadvertently create an outlier…

By Ed Yong | February 1, 2011 9:11 am

Much as I try to write for a broad audience, I’m pretty sure that none of my readers is a monkey. Or a bacterium. Or a protein. However, I can be very confident that at least some of my readers are human. That presents an interesting dilemma when I write about fields like psychology, because they involve subjects that are capable of reading and learning about experiments and research areas within that field.

Take this study I covered last year:

In a university physics class, Akira Miyake from the University of Colorado used [a simple writing test] to close the gap between male and female performance. In the university’s physics course, men typically do better than women but Miyake’s study shows that this has nothing to do with innate ability. With nothing but his fifteen-minute exercise, performed twice at the beginning of the year, he virtually abolished the gender divide and allowed the female physicists to challenge their male peers.

The exercise is designed to affirm a person’s values, boosting their sense of self-worth and integrity, and reinforcing their belief in themselves. For people who suffer from negative stereotypes, this can make all the difference between success and failure.

… Miyake recruited 283 men and 116 women who were taking part in the university’s 15-week introductory course to physics. He randomly divided them into two groups. One group picked their most important values from a list and wrote about why these mattered to them. The other group – the controls – picked their least important values and wrote about why these might matter to other people.

The task worked. During the rest of the semester, the students sat for four exams that made up most of their final grade. Among the control group, who wrote about other people’s values, men outperformed women by an average of ten percentage points. But among the students who affirmed their own values, the gender gap largely disappeared. Their final grades reflected this shrunken divide: if the women took Miyake’s exercise, far more got Bs and far fewer got Cs.

Last night, a reader called Nickolas left the following comment on the post:

Holy Crap!!! I’m going to the University of Colorado at Boulder right now and I had to take this exact same survey! I was wondering why my friend told me that he had to write about why values were important to other people compared to me writing about my values. I’m ruining the results by reading this!!! OMG!@#$^@!#$$111!

This is brilliant or awful, depending on how you look at it. It makes me wonder how prevalent this sort of thing is. How often do people who take part in psychological experiments have an inkling about what they’re letting themselves in for, because they’ve heard about it in the media (especially since most psychological studies are done with university students in developed countries)?

And if the coverage of such fields increases, are science writers inadvertently biasing the experiments of the future? I’d love to hear opinions from psychologists and people who have taken part in psychological experiments…

CATEGORIZED UNDER: Neuroscience and psychology

Comments (18)

  1. The university itself had a press release, so the local newspaper also covered the same story, as did the student newspaper.

    Presumably, the PR had the cooperation of the researchers, since they were quoted, so they knew exactly what the risks were.

    Researchers just have to be a bit more clever. The “gorilla walks through a basketball game” video no longer works, because so many people know to look for the gorilla. But researchers have now found they can have a player walk off the court without anyone noticing, since they’re watching for the gorilla…

  2. QoB

    Yes, I wondered about this. As an undergrad and Masters student I took part in a few studies (e.g. on pattern recognition and musical skills, game theory, and biological motion recognition), and having studied all those things at some point (in the latter case, in the class immediately beforehand), I wonder how my results turned out.

  3. When I was a graduate student, a researcher in the department was running a series of experiments. He was interviewed by a science reporter from the local newspaper about his research, but only on the condition that the piece be published a couple of weeks later, as the experiment series wasn’t done yet.

    The reporter (or more likely their editor) decided that it was more important to fill a page the same Friday rather than wait for weeks and ignored the agreement. As a result the researcher had to cancel half the experimental series – many of his subjects would be likely to have read the piece – and schedule a new series six months later with subjects from outside that newspaper coverage.

    That reporter was never welcome at the department again.

  4. Good experimental design when this is a risk includes manipulation checks, etc, to look for these problems. It’s surprisingly hard to consistently fake the ‘right’ pattern in a well designed survey; not impossible, just hard, and often detectable.

    @John Timmer: I’ve never laughed so hard as after I was watching the gorilla video as a non-naive observer, happily seeing the gorilla, but then missing the huge changes in background colour and people, only to have it all pointed out at the end. Best double fake out ever :)

  5. Liz

    I often wonder how I’d react to taking part in a psychological test. I’m sure I’d inadvertently try to second-guess the experimenter and this would surely affect the results. I think that’s a problem just as much as the possibility of someone having read about an experiment before.

    It would be interesting to find out as part of certain psychological experiments what the subjects *thought* they were taking part in and why…

  6. I’ve had to excuse myself from psychology experiments in the past because I already knew the protocol. (Although once, in undergrad, I participated in an experiment employing the dictator game, and went along despite having done the relevant game theory. I needed the money!!).

    There are, of course, experiments for which this is a much larger problem than usual: the famous “gorillas in our midst” study probably can’t be repeated on any Western campus. (Indeed, the authors themselves had to go to extraordinary lengths to conduct their experiment. They had to run all the trials in parallel because they couldn’t risk word spreading).

    Luckily I’ve not had this problem myself…

  7. I’m sorry, but any experimenter who fails to take these confounders into account, especially one who is working with University Students doing psychology experiments for course credit, is either a newbie or an idiot. Keep writing.

  8. Just to clarify, I’m not going to stop covering psychological studies because of this. I was amused and/or intrigued.

    Also, I wonder just how convoluted the gorillas study will get in the future. How many serial fake-outs can you get away with? Half-way through, a gorilla walks on, everyone changes clothes, gender and species, the court is replaced by a volcano and instead of basketball, everyone is suddenly playing chess. Smug student: “I *know* about the gorilla…”

  9. You know that testing WEIRDs really only exacerbates the problem. Not that too many experimenters would have a problem generalizing their findings to all humanity nonetheless.

  10. Can’t speak for monkey or bacterium, but…

    *raises pseudopod*

    Protozoan reader here!!!

    heh heh

  11. adam

    I’m sure this happens all the time. One of the biggest populations in psychological tests are – psychology undergraduates! You think a good chunk of them haven’t heard of the theory behind most of the tests?

  12. My favorite part is the participant’s, er, punctuational exuberance? :)

  13. Jen

    I was required to take part in some psychology experiments when I was taking Intro Psych in undergrad. A couple successfully duped me about what they were actually testing, but one I figured out right away. I was watching a supposedly live video of a person in another room who was being exposed to louder and louder sounds. I was supposed to keep watching and explain what my reaction was to seeing this person being exposed to apparently painful levels of sound. The problem was I had just read an article the previous day that happened to remark on how loud (in decibels) jet engines were, and the levels reported in the experiment far exceeded that.

    When the grad student asked me what my reaction to the video was, I said “Nothing, because it’s probably just a grad student who’s acting, you wouldn’t actually hurt someone like that in an experiment.” She laughed and said I was right, and I was excused from the rest of the experiment (but I still got credit, woo).

  14. I agree with John’s comment (#1) – I’m sure the researchers are aware that by publishing the methodology of (or the idea behind) a study before the data collection phase is complete, they’re potentially influencing their results, especially when the subjects are interested in and likely to be reading around the field.

  15. I think proper debrief procedures would alert any experimenter to the possibility that the participant was aware of the “real” purpose of the experiment, so to speak. (Luckily, the experiments I conduct with undergrad participants get at automatic, unconscious mechanisms!)

  16. There’s a similar problem in the law when we’re trying to select juries. With 24/7 news, it is very difficult to find someone who is truly ignorant of the media coverage of criminal events. So we ask them, “would your knowledge influence your ability to hear the evidence and come to a fair and impartial decision?” Now what idiot is going to answer “yes” unless they want to get out of jury duty? And if they WANT jury duty, as a trial lawyer you get nervous about why they want it…

    and these aren’t experiments — people’s lives depend on the outcomes!

  17. Don’t lose sleep over this!

    a) Psychology research, at least where I am, is required to have debriefing procedures which tell students what the research was actually about after they do the experiment. Because of this, and because people doing the research often tend to have friends at the same institution who might have done the research (“what were you up to this morning?” “like, I just did this wacky experiment with a gorilla that wasn’t there but then it was!”), people guessing what it’s about probably happens reasonably often.

    b) Most people participating in psychology research are not only WEIRD but are studying the very field the research is in – it’s reasonably common that people put two and two together.

    c) Psychology research very commonly has procedures for throwing out outlier data, anyway, because plenty of students don’t take the research seriously – at one institution I was at, a lot of students seemed to think that we weren’t doing real research but were just demonstrating what it’s like for their benefit. And, I mean, I once fell asleep while doing a particularly boring ECG study! Basically, the worst you could say if someone read about a study on your blog is that the researcher has wasted an hour, and they’ll waste hours anyway.

    d) My own research (and probably most psychology research outside of social psychology) was like Jason’s – I wasn’t trying to trick the participants, I was just trying to get them to say whether two things were the same or different. So it didn’t really matter if they knew what was going to happen in the experiment (beyond that they shouldn’t know what specific hypotheses I had about how the results will go).

  18. Steve Pratt

    @Jakob pretty much stole my line…that’s just WEIRD!

