Looking Askance At Cognitive Neuroscience

By Neuroskeptic | May 30, 2013 5:04 am

Yesterday, I read a paper that, to my mind, embodies what’s wrong with cognitive neuroscience: Changes in the Amygdala Produced by Viewing Strabismic Eyes

I have no wish to attack the authors of the piece. This post is rather unfair on them: their paper is no worse than a hundred others, it’s just a clear case of a widespread disease. My own research over the years has certainly not been immune.

So I’m not claiming to be without sin, but equally, someone has to cast the first stone at the elephant in the room.

First a summary. The authors showed 31 volunteers two sets of photos, all of which focused on the eye region of human faces. Half of the pictures showed people with healthy eyes, while half featured eyes suffering from strabismus (aka “cross-eye” or “walleye”), a defect in which the two eyes are not aligned properly, and seem to point in the wrong directions.

All this took place in an MRI scanner, where the volunteers’ neural activation was measured using fMRI. The results showed that looking at strabismus caused increased brain activity, compared to normal eyes, in areas such as the amygdala, hippocampus and fusiform gyrus.

Interpreting this finding, the authors write:

Because the amygdala is the fundamental structure in the processing of negative, fearful, and aversive emotions, the results of this study strongly suggest that healthy individuals are reacting in a negative fashion to strabismus.

This sentence reveals a philosophical confusion.

Firstly, the premise that amygdala activation = negative emotion is a serious oversimplification. The amygdala is activated by almost any emotionally meaningful stimulus, compared to an unemotional baseline. While it’s true that it is most strongly activated by negative stimuli, amygdala activation alone cannot be read as negative. It might be positive, or just surprising.

This is an example of the dangers of reverse inference – trying to infer psychological events from neural activations. It’s a very common problem in fMRI. As is also very common, the authors selectively draw conclusions from those areas of the brain that most readily fit their theory. The hippocampus was activated more strongly than the amygdala, but this area is less ‘emotional’, so is hardly discussed.
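
To see why reverse inference is so weak, here is a minimal back-of-the-envelope sketch in Python. All the numbers are invented for illustration, not taken from this or any other study; the point is only that even a region which responds to most negative stimuli leaves the ‘negative emotion’ reading far from certain, because it also responds to other salient stimuli:

    def posterior_negative(p_act_given_neg, p_act_given_other, p_neg):
        """P(negative emotion | amygdala activation), by Bayes' rule."""
        p_other = 1 - p_neg
        p_act = p_act_given_neg * p_neg + p_act_given_other * p_other
        return p_act_given_neg * p_neg / p_act

    # Hypothetical rates: activation in 90% of negative states, but also in
    # 60% of other emotionally salient states (surprise, novelty, arousal).
    print(posterior_negative(p_act_given_neg=0.9, p_act_given_other=0.6, p_neg=0.3))
    # ~0.39: even with the activation observed, 'negative emotion' is more likely false than true.

Unless we know how selectively the amygdala responds to negative emotion, relative to everything else that activates it, the activation map on its own cannot settle the psychological question.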

So the paper’s conclusion, that healthy individuals react in a negative fashion to strabismus, doesn’t follow from the fMRI data. However, the conclusion is true anyway, because, look:

[Two photos side by side: strabismic eyes on the left, normal eyes on the right.] Which looks better?

A glance at those pictures tells you more about how strabismus is perceived than any amount of brain scanning. ‘How do people react to strabismus?’ is a psychological question, not a neurobiological one.

I’m sure the photo on the left does activate the negative emotion circuits of the brain (whatever they are) more than the one on the right. But I know that because I know its psychological effects, not the other way around.

Now, 30 out of the 31 volunteers’ amygdalae lit up in response to the strabismus images, but only 23 of them reported feeling an emotional response on a questionnaire. The authors note this, and suggest that the ‘missing’ 7 people might have denied feeling anything in an effort to be politically correct.

This is the popular argument that when brain and behaviour seem to disagree, ‘the neural data is more sensitive’ – it can reveal what behaviour conceals. But what if it’s just less specific? That’s not talked about.
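
A quick toy calculation makes the ‘less specific’ alternative concrete. The rates below are assumptions for illustration only, not estimates from the paper: a marker that fires for almost any salient stimulus will ‘detect’ emotion in more people than report it, with nobody concealing anything.

    n = 31
    reported_negative = 23       # take the questionnaire answers at face value
    sensitivity = 0.95           # assume the marker fires for nearly all genuine negative reactions
    false_positive_rate = 0.85   # ...but also for most non-negative, merely salient reactions

    expected_activations = (sensitivity * reported_negative
                            + false_positive_rate * (n - reported_negative))
    print(round(expected_activations))  # ~29, close to the observed 30, no hidden feelings required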

So this paper tells us nothing about strabismus.

However, it might change the way we think about strabismus. The authors write:

This study demonstrates for the first time the organic effect of strabismus on the observer… Strabismus correction surgery can improve quality of life by improving interpersonal relationships by virtue of its organic effect on both parties.

The implication is that other people’s aversion is as much an ‘organic’ feature of strabismus as the misaligned eyeball itself. One that only surgery can correct.

Maybe it is, but these data don’t tell us. It might be (say) that the unpleasantness we feel is aversion to the unknown, and if our society were only better educated about strabismus, we’d be more comfortable with it. There are lots of other possibilities.

Talking about something in a neurobiological way sends the message that this is a neurobiological issue. In this way, many fMRI papers serve to spread the idea that this is an issue that only neuroscience can solve and, therefore, create a demand for more fMRI studies. The authors of this paper are victims of this mentality, a widespread confusion about what neuroscience is for.

fMRI is a great way to approach neuroscientific questions. It’s a bad (and terribly expensive) way to do psychology. This study is about psychology, and should not have involved an MRI scanner.

Berberat J, Jaggi GP, Wang FM, Remonda L, & Killer HE (2013). Changes in the Amygdala Produced by Viewing Strabismic Eyes. Ophthalmology. PMID: 23706702

  • Y.

    What’s the problem? Just look at the pre-registration record to find out the a-priori regions of interest. Oh, wait… there is no record. Too bad.

  • djlewis

    You left out a crucial part of this story! The authors are not neuroscientists but ophthalmologists, or researchers in that area, and the journal where this was published is Ophthalmology, where there is little if any prior neuroscience research. It therefore seems likely that the referees and editors had little or no neuroscience experience.

    Thus, you are drawing unwarranted conclusions about the field of neuroscience. I am not saying your conclusions aren’t largely true, but this piece does not constitute much, if any, evidence for them. In fact, given the circumstances, this article probably represents the attitudes and biases of the scientifically educated public more than those of neuroscience, which is a different, though in many ways equally disturbing, conclusion.

    So, not only did you not do *your* “research” on this paper, you committed many of the same sins that you accuse the authors and the field of. A much more complex and nuanced conversation is really needed here, and obviously blog comments are not the place for it. But if we critics are to succeed in placing neuroscience in its proper scientific and intellectual place, we need to keep our own house in order.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      I’m aware of course that this was published in Ophthalmology and did consider mentioning that fact, but I decided it was irrelevant: these problems are widespread.

      They occur in many other journals, most notably in psychiatry but sometimes in ‘pure’ neuroscience as well. Otherwise reverse inference (for instance) wouldn’t have been discussed at such length already.

      I completely agree with you that this paper is closer, in many ways, to a lay understanding of neuroscience than to a ‘neuroscientific’ understanding – the problem is exactly that, that many neuroscience papers seem to be written by laypeople rather than neuroscientists…

      And in fact if you look at it, they often are: many fMRI studies are conducted by people whose training was in medicine or psychology or something else that has little to do with fMRI.

      Such people are frequently funnelled into doing neuroimaging, something they’re not trained to do, and may not have any desire to do, by a system in which it’s seen as “better” than other approaches to research.

      I don’t want to set up a divide between ‘real’ neuroscientists who liked it ‘before it was cool’, and ‘bandwagon jumpers’. But there is at least a ring of truth to that, isn’t there?

      • Y.

        I’m with you neuroskeptic; about every 2nd paper in Neuroimage or in HBM is of this pseudo-neuroscience variety.

        • djlewis

          If this is true, Y, then why didn’t Neuroskeptic pick one of those many, “every 2nd” papers to critique?

          I think “real” neuroscientists (trained & published in that field) are a lot more circumspect about their interpretive claims. They have to be — today’s editors and reviewers in “real” neuroscience journals would not allow what we see in the Ophthalmology paper.

          • Y.

            Sorry, but I consider all the recent “mind reading” stuff published in the Godly Nature and Science no more valid than the above study. Same with diagnosing psychiatric disorders, detecting lies or predicting future behavior using brain scans. These things get published in highly reputable, neuroscience or general science journals. I think you’re overly optimistic to think that the “real” neuroscience field has in any way come to grips with sloppy statistical practices and shoddy interpretation.

          • stefanopj

            The old “no true Scotsman” is alive and well in the neuroimaging field. Here are some examples that were published in the last year in neuroscience journals, by neuroscientists, that fall into the same category as the strabismus one:

            http://intl-scan.oxfordjournals.org/content/early/2012/06/07/scan.nss007.full
            http://www.biomedcentral.com/1471-2202/13/54/
            http://www.sciencedirect.com/science/article/pii/S1053811913001390

            The last one was published in NeuroImage, a very well regarded journal in the field.

          • djlewis

            I looked at the last one, on food labeling — all I have to say is Yikes! Looks like standards are not uniformly rising, as Neuroskeptic (whoever he is) says. That would have been a good one for this blog, though it probably requires a bit more nuance and finesse in the critique — but not a lot.

          • Zachary Stansfield

            In fairness to NS, he can’t cover every dodgy paper, and there are more than enough of them.

            Also, he didn’t state that “the standards are uniformly rising”, but that “neuroscience has been raising its standards… [although] the problem hasn’t disappeared…”.

            Admittedly, this leaves room for the no-true-Scotsman argument above.

            I happen to agree with you on this point dj: some of the most highly-cited, highly-reputable journals partake in publishing studies that represent hogwash, rather than meaningful substance.

            This may not change until the day when journal editors unilaterally accept that “we can’t publish studies that draw strong, generalized conclusions from tiny samples of people who demonstrate statistically-ambiguous variations in brain activity and dress these results up as a profound neural signal”.

        • PubPeer

          If this is true it would be interesting to see the details of some of your comments on these papers. You could post them on PubPeer, where the authors and others in the field could contribute to the discussion.

      • djlewis

        I think you’ve missed the real story here. There’s plenty of marginal neuroscience in the actual field, but at least editors and reviewers no longer allow the really bad stuff and particularly not the ridiculous unsupportable conclusions as in the Ophthalmology paper. It was too easy a target.

        The real story, IMHO, is that a bunch of smart, accomplished authors and (presumably) reviewers and editors in an unrelated field have managed to subvert all standards and publish this piece of pseudo-neuroscience.

        There’s no room to explain why in detail, but I find this to be a somewhat different phenomenon, obviously related, but at least as alarming as what’s happening in neuroscience itself. It’s not just the popular press anymore, and those looking for a fast buck on a slickly titled paperback.

        • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

          Actually… I think you are on to something.

          Dodgy neuroimaging used to be common in neuroscience. In recent years neuroscience has been raising its standards of methodology and interpretation.

          But the problem hasn’t disappeared – it has spread outwards, into the growing borderlands of ‘non-neuroscience neuroimaging’.

          I will think about this some more.

  • Psyclic

    Hmmmm! Ophthalmologists’ research indicates ophthalmic surgery is required…..! Now THERE’s a surprise.

  • Lisa Sansom

    Makes me think of this great article and the advice to “trust the results, but not the conclusion”: http://bigthink.com/devil-in-the-data/trust-the-results-not-the-conclusions

  • practiCalfMRI

    No idea what this means for psychology, but comparing fMRI data across two groups of pictures is fraught with complexity. Did the authors also see differences in frontal eye field signal, for example? If not, why not? It’s hard enough to maintain fixation with a normal pair of eyes, but with strabismus there is an even stronger likelihood of saccading between two focal points. Different head motion then immediately becomes a new concern, in addition to the different (bottom-up) visual processing that may be happening. I’m not surprised that there are differences between the two groups’ signals, but interpreting those differences is going to require eye tracking data and checks for other systematic differences, whether or not there is an emotional response.

    • DS

      Agreed. I have a hard time looking at the strabismus eyes without moving my eyes back and forth – a problem that does not exist when I view the normal eyes.

  • Pingback: Looking Askance At Cognitive Neuroscience : Neu...

  • Pingback: Are we hard-wired to focus on the bad news? | ADD . . . and-so-much-more

  • Pingback: 2013-05-31 Spike activity « Mind Hacks

  • Zachary Stansfield

    A good question to follow-up might be: “What avenues of discovery remain for cognitive neuroscience, while relying upon current methodologies?”

    It’s easy to point to all of those interesting questions we’d like to answer, but it’s much harder to note the puzzles that we can actually solve. Add to this the fact that today there are probably way more neuroscientists than there are meaningful neuroscientific questions, and we are left with a bit of a conundrum.

    What is there for everyone to do? Is there any real solution, or must we simply wait for the pop neuroscience bubble to implode?

    • Wouter

      I find this to be a somewhat short-sighted point of view. It’s quite hard to determine the boundaries of one particular methodology, since the possibilities greatly rely on the researchers, hypotheses and analysis techniques. Lately, there’s also been an increase in studies using multiple methodologies; it’s close to impossible to predict what we can and cannot uncover. And then I’m not even taking into account the growing contributions of neural (network) modeling.

      Additionally, having a surplus of neuroscientists will only increase the competitiveness, which has its downsides but also its upsides when it comes to innovation and clever ideas.

      • Zachary Stansfield

        I have to disagree with you here.

        The possibilities of any particular methodology are most certainly bounded by the limitations of that methodology itself. Moreover, the use of multiple methodologies supports this point: we turn to more than one technique when a single method alone is insufficient, and this is itself an innovation in the use of methods. Even so, does this imply that a study which uses multiple techniques, each of which is insufficient to answer the question of interest, will be much better? Perhaps. But often, if the methods suffer from similar limitations, we may not gain much additional insight.

        When it comes to increased competition you note this “has its downsides” (which I agree with), but it’s not clear how competition promotes innovation and cleverness above and beyond what is otherwise achievable. Who has ever shown this to be true?

        Competition can provide motivation, but that motivation isn’t specifically directed at “innovation”. Often, competition drives the sorts of innovations we hope to avoid: stolen ideas, sabotaged research, infighting, etc. Most prominently, the publish-or-perish model of competition distorts scientific goals away from their true aims (discovery and validation of knowledge) and promotes less useful outcomes (publication of premature and often invalid data).

  • Pingback: Science stuff I like to read in the weekend! | From experience to meaning...

  • FND Hope

    I was recently involved in fMRI research at the NIH. They are trying to make a connection between movement, pictures of men and women, and their emotions. I could see how the entire test was flawed from the beginning. How could they not?
    It was entirely geared to get the response they wanted. How is this science, when most research is pointed in a biased direction from the moment the theory is produced? Not many researchers go into a protocol wondering how to prove themselves wrong.
    Science ceases to be science when there is no longer a desire to search for the truth.

  • csmedic

    The author of this article might want to drop in on any Psych 101 class, where first-year undergrads are taught that everything psychological is ultimately biological; unless of course the author is a dualist (a view incompatible with modern neuroscience).

    • Zachary Stansfield

      This is a non-sequitur and also happens to miss the entire point.

      Of course everything psychological has a biological basis, but this doesn’t tell us whether we can measure the important underlying biological processes.

      Broadly speaking, cognitive neuroscience lacks the methods to appropriately assess the biology that undergirds many of the psychological processes that are being investigated. Thus, we are forced to use our knowledge about psychological processes to try and fill in the details about the meaning of not-very-informative “biological” measurements. This strategy appears to lend support to the physiological findings, when in reality it just inserts circularity. The same brain activation patterns could be associated with a multitude of other psychological states, and there is no data available to falsify this null hypothesis.

      • csmedic

        It is not a non-sequitur at all. The article said:

        “[fMRI] a bad (and terribly expensive) way to do psychology. This study is about psychology, and should not have involved an MRI scanner.”

        To which I replied that psychology is, at its bottom level, biology. Biology which is effectively studied with imaging techniques such as fMRI. Correct me if I am wrong, but what you are saying in your reply is that such techniques are not focused correctly, or are perhaps too blunt, for conducting research of the type in the article. If that is what you are trying to say then I see your point and am in agreement. But this wasn’t evident to me from my initial read-through of the article.

        • Zachary Stansfield

          Yes, you are correct that is exactly what I am saying. It is also, I believe, what NS was trying to point out (e.g. your above quote)–or at least, it is what he should have been trying to point out.

          Much of psychology is not really worth studying at the biological level given current technology.

          In part, this is because many psychological ideas are really just statistical constructs extracted from group data and so we should question anyone who wants to find “the biological correlate of a statistical abstraction”.

          There will, of course, be a great many psychological states which could be studied at the biological level. However, in a large number of cases, the methods employed will not provide useful information about the biological/physiological factors associated with these mental states. If this is the case, and particularly if we can predict such an outcome beforehand, then it’s a waste of time and resources to collect “biological data” when conducting these experiments. The data themselves are useless because their importance to the state itself is equivocal.

          This research paper is a good example of the latter issue. The data don’t tell us anything about the biological state–their meaning is ambiguous. The authors ignore this fact, and instead draw an appealing yet logically invalid inference from these data in order to support their psychological claims. In effect, they are doing psychology, but for added flavor they [insert cool, but meaningless "bio-measure" here]. Delete the inserted bio-measure and we are left with nothing but a poorly designed psychological study.

          • csmedic

            Thanks for the clarification. Perhaps I was too quick to infer that Neuroskeptic was saying that examining the brain has no place in psych research.

  • Pingback: What’s Wrong With Cognitive Neuroscience

  • Pingback: Strabismus | Find Me A Cure
