Brains In Motion Are Bad For Neuroscience

By Neuroskeptic | August 7, 2012 1:11 pm

A new paper in Human Brain Mapping reports on: Functional magnetic resonance imaging movers and shakers: Does subject-movement cause sampling bias?

Head movement is a well-known problem that can badly degrade the quality of neuroimaging data, introducing spurious signals and obscuring real ones. It’s an issue for all brain scanning research, but according to Wylie and colleagues, the authors of this paper, it’s especially serious for studies comparing patients to healthy controls.

The authors got 34 people with multiple sclerosis (MS) to perform some simple cognitive tasks during fMRI scanning. They found that the harder the task was, the more the patients moved around during the scan; and in those with more severe MS, the correlation between difficulty and motion was even stronger. In healthy people, harder tasks only caused slightly more motion.

And the more people moved, the less brain activity was recorded, probably because movement degraded the data quality.

fMRI data associated with severe head movement is frequently discarded. In discarding these data, it is often assumed that head-movement is a source of random error, and that data can be discarded from subjects with severe movement without biasing the sample. We tested this assumption by examining whether head movement was related to task difficulty and cognitive status among persons with multiple sclerosis (MS).

[In the MS patients] there was a linear increase in movement as task difficulty increased that was larger among subjects with lower cognitive ability. Analyses of the signal-to-noise ratio (SNR) confirmed that increases in movement degraded data quality. Similar, though far smaller, effects were found in healthy control subjects. Therefore, discarding data with severe movement artifact may bias multiple sclerosis samples such that only those with less-severe cognitive impairment are included in the analyses. However, even if such data are not discarded outright, subjects who move more will contribute less to the group-level results because of degraded SNR.

Oh dear. fMRI researchers use two main ways to deal with motion – correction, and rejection. Either you try to take account of movement and analyze the data, or you just throw out the results from people who move a lot. Most people use a combined approach, chucking out the really heavy movers and then using correction on the rest.
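To make “correct and reject” concrete, here is a minimal sketch of the rejection half, in the spirit of the framewise displacement (FD) summary popularized by Power and colleagues: sum the volume-to-volume change across the six realignment parameters, converting rotations to millimeters on an assumed 50 mm head radius. The function names and the 0.5 mm cutoff are illustrative assumptions, not anything from the paper; in a real pipeline the six parameters come from your motion-correction step.

```python
import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    """Power-style framewise displacement from realignment parameters.

    motion_params: (T, 6) array, one row per volume: 3 translations (mm)
    followed by 3 rotations (radians). Rotations are converted to arc
    length on a sphere of radius head_radius_mm.
    """
    params = np.asarray(motion_params, dtype=float).copy()
    params[:, 3:] *= head_radius_mm          # radians -> mm of arc length
    diffs = np.abs(np.diff(params, axis=0))  # volume-to-volume change
    # FD is zero for the first volume, then the sum over the 6 parameters
    return np.concatenate([[0.0], diffs.sum(axis=1)])

def flag_heavy_mover(fd, mean_fd_cutoff=0.5):
    """The 'reject' half of correct-and-reject: True if a subject's
    mean FD exceeds an (arbitrary, study-specific) cutoff in mm."""
    return float(np.mean(fd)) > mean_fd_cutoff
```

The point of the paper is precisely that whatever cutoff you pick here, the subjects it removes are not a random subset.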

This paper however shows that both techniques have problems. Despite motion correction, data from heavy movers has a lower signal to noise ratio, so if you include them, they will “dilute” your sample. However, if you chuck them, that’ll also introduce bias, because heavy movement is not random – people with more severe MS move more, so by excluding heavy movers, you’ll be excluding severe cases.

Informally, every neuroimaging researcher knows that some people move more than others. Learning to spot likely “movers” and avoid wasting money on scanning them is a fine art. In my experience just about every “patient” population moves, on average, more than healthy controls, and children and the elderly move more than young adults.

I’m not sure there’s an ideal solution but perhaps the best approach is to run all analyses (at least) twice, once including everyone, regardless of movement, and then again, with strict movement exclusion criteria. Results consistent across both analyses are probably solid.
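That run-it-both-ways idea can be sketched in a few lines. Assume we already have one activation summary and one mean framewise-displacement value per subject; all names and the 0.5 mm cutoff here are hypothetical, and a real group analysis would of course involve more than a mean:

```python
import numpy as np

def group_effect(activation, include):
    """Toy group-level estimate: mean activation over included subjects."""
    return float(np.mean(np.asarray(activation, dtype=float)[include]))

def dual_analysis(activation, mean_fd, fd_cutoff=0.5):
    """Run the analysis twice: once with everyone regardless of movement,
    once with strict motion exclusion. Returns both estimates so their
    consistency can be checked."""
    activation = np.asarray(activation, dtype=float)
    mean_fd = np.asarray(mean_fd, dtype=float)
    everyone = np.ones_like(mean_fd, dtype=bool)
    low_motion = mean_fd <= fd_cutoff
    return {
        "all_subjects": group_effect(activation, everyone),
        "low_motion_only": group_effect(activation, low_motion),
        "n_excluded": int((~low_motion).sum()),
    }
```

If the two estimates diverge, that divergence is itself informative: it suggests the excluded heavy movers differ systematically from the rest, which is exactly the sampling bias the paper warns about.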

Wylie GR, Genova H, DeLuca J, Chiaravalloti N, & Sumowski JF (2012). Functional magnetic resonance imaging movers and shakers: Does subject-movement cause sampling bias? Human Brain Mapping. PMID: 22847906

  • Nitpicker

Interesting. It's not terribly surprising, but that only makes it more important that someone finally did this comparison formally.

    Fortunately, there are new developments in online motion correction on the horizon that may help alleviate such problems. Of course, these methods must also be validated to see that they don't distort results.

  • http://petrossa.wordpress.com/ petrossa

    Needs a hefty upgrade of computing power to implement that kind of motion correction.

    Not sure if old(er) machines can easily be upgraded.

    All in all it is getting to be a mighty effort for little gain.

Other methods of non-invasive function tracking should be researched further, rather than trying to get this system, flawed from the ground up, working.

There must by now be a valid way to track actual neural signals using similar inference-based machines, rather than double inference: first inferring oxygenation, and then inferring activity from that.

    Worth looking into.

  • Nitpicker

I don't think it is such a massive computational problem; at least, there are already commercial options available for it. How well it works (and what gain to expect) I can't tell you, but I hope to know more in a few years' time.

    As for the indirectness of fMRI that is certainly a problem that we will face. But I doubt we will face it in this decade.

  • DS

    Some thoughts.

Independent prospective correction of motion (measure the subject's motion by some means other than MRI, then update the gradient magnitudes to get a 2D slice where you want it relative to the brain) could prove to be a good means of correcting two kinds of motion-related noise and systematic error: spin-history effects and the usual inherent brain contrast effects. Whether it succeeds will depend upon the accuracy and precision of the motion measurement. There could potentially be less stability in Nyquist ghosts, but that would probably be a good trade overall. But independent prospective correction of motion won't solve the motion-related problems of time-varying contrast due to non-homogeneous receive fields and pseudo-motion-related main field drift. I have to wonder: if the persistence of main field drift means that retrospective motion correction algorithms, with their inherent inaccuracies, will still have to be used, then will much be gained overall? Yes, drift could be ameliorated, but probably at considerable cost, and MRI manufacturers are unlikely to be motivated to fix such an fMRI-specific problem.

    Being able to check two motion-related problems off the list of contaminants would be great! I have little knowledge of the efficacy of independent prospective correction of motion so I am anxious to see how it pans out.

    Petrossa's point is a good one, of course, but direct imaging of brain activity (let's assume that means electrical activity or changes in current density) is a really really tough problem. For now we have the tools we have and the nagging suspicion that our best efforts to fix problems with these tools may not be good enough.

  • Anonymous
  • http://www.blogger.com/profile/07387300671699742416 practiCal fMRI

    @Nitpicker, DS has mentioned some of the problems with prospective (online) motion correction. The issues are many and complex. And as it happens I have a draft blog post on them that I'll try to expedite tomorrow.

As for the motion problem: it's by far the biggest limitation facing fMRI. Forget the number of teslas (the field strength), or even the SNR. Motion is the real limit, and it contaminates the method in many, many ways. Keep an eye on my blog and on MathematiCal Neuroimaging for more in the coming months….

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    Thanks for some great comments.

    practiCal fMRI: That's really interesting to hear you say that; it's not the impression one would get from the community as a whole, where there's loads of excitement over stronger magnets, little attention paid to motion beyond “Correct and Reject” – or such is my experience…

    Do you think people have got seduced by the allure of stronger magnets and are overlooking older, less 'sexy' stuff like motion?

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

Anonymous: Looks like pretty good work, but I've only skimmed it; I will probably post about it soon. Thanks!

  • http://www.blogger.com/profile/07387300671699742416 practiCal fMRI

    @Neuroskeptic: “it's not the impression one would get from the community as a whole, where there's loads of excitement over stronger magnets, little attention paid to motion beyond “Correct and Reject” – or such is my experience”

    Golly. Where to start. Putting my amateur psychologist's hat on for a moment, I think this is understandable albeit unacceptable for neuroimaging as a whole. First of all, the folks who are concerned with acquisition issues, including pulse sequences, RF coil design, etc. aren't often intimately involved with (or knowledgeable about) the real-world uses of their inventions. Likewise, those involved with post-processing steps such as routines for 'data de-noising' may not understand the many complexities of the data the acquisition people have provided for them. So how the humble neuroscientist is then supposed to make sense of what's in the sausage is anyone's guess!

    The excitement follows naturally from this compartmentalization of the efforts. For example, the pulse sequence guys get papers and win fame from inventing fancy new methods. It isn't necessarily their job to also validate their inventions. (Some will disagree, but I tend to think that others should be the critics because (a) it's very difficult to call your own baby ugly, and (b) it leaves those capable of inventing new methods free to do more of it, some of which could be really useful!)

    Turning cynic for a sec, there are also Large Financial Incentives to be a cheerleader. Try getting someone to give you $7 million for a 7 T scanner if you tell them up front you're going to try to find ways to show that it's crap. As Chris Rock puts it, “I ain't sayin' it's right. But I understand.”

    As for the seduction of stronger magnets and other New New Stuff (like massively accelerated pulse sequences), this is another typical human condition. We tend to get excited about the possibilities and it's not until people take the time to work through the issues that robust, validated methods result. It's all a part of the process, for better or worse. Thus, I fully expect some to be able to get remarkable data out of high fields, fancy sequences, etc. But they will be the most circumspect and will treat an fMRI experiment as a particle physicist or a geneticist treats an experiment: as a laborious chain of careful steps, not as a single button that one pushes to get results.

So the future is bright, just not as bright as some might have us believe. We need to take the time to validate what we've got. That stuff tends not to be terribly sexy – just important. I think the tide has turned, though, and validation is becoming in vogue if not sexy. The 2011 paper by van Dijk (which you blogged about) and the 2011 Power paper have shown that the issues are too big to be ignored.

  • n

@practiCal: Thanks for your comments. This is very interesting, and I'm looking forward to your post. I am keen to see how such online methods hold up in practice, but of course they shouldn't be seen as a panacea. I share your scepticism about high fields, at least as long as we have no better way to control for motion artifacts and other sources of noise. Sure, high fields are theoretically superior, but with smaller voxels the deleterious effects of motion artifacts become increasingly worse.

  • DS

Another potential benefit of independent prospective correction of motion would be the decoupling of the temporal interpolation (often used in fMRI) and motion correction.

    With retrospective correction this decoupling is not possible. Nevertheless folks have been proceeding as though it were. It would be really great to decouple these problems.

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

practiCal fMRI – Sounds about right! I think that the past few years (voodoo correlations etc.) have shown that neuroimaging is self-correcting: we eventually spot mistakes and then stop making them. Of course, by that point we've probably made a bunch of new mistakes based on the latest technology…

    I often feel as though we're getting it all backwards, we should sort out the methods first, and only then apply them.

    But maybe the only way to sort out the methods is to apply them and realize that some applications are wrong.

  • DS

    Neuroskeptic

I agree that we should sort out the methods first. I don't agree that the only way to do that is by applying them first and then “realizing that some applications are wrong”.

If one considers the scanner plus the associated algorithms for generating the image data, which becomes the input to statistical analysis, as a device, then that device should have an associated error (error in the image magnitude and phase) that is independent of the application. That error should be reported. The problem, as I see it, is that nobody is doing due diligence with respect to estimating the error of the device. In fact, I don't think anybody is doing so at all.

About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.
