When Data Filtering Introduces Bias (fMRI Edition)

By Neuroskeptic | September 6, 2012 4:19 pm

A couple of months ago I blogged about a paper showing that ‘filtering’ of EEG data can create spurious effects.

Now, we read about another form of bias that filters can introduce, this time for fMRI: Filtering induces correlation in fMRI resting state data.

Australian neuroscientists Catherine Davey and colleagues consider temporal filtering of fMRI data in studies looking at correlation (brain functional connectivity).

Because both very fast and very slow fluctuations in the fMRI signal are probably caused by artefacts rather than by interesting brain activity, it’s common to apply a band-pass filter to extract the medium-frequency changes that are of most interest (roughly 0.01 to 0.1 Hz).
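To give a rough feel for what such a band-pass step does, here’s a toy pure-Python sketch – subtracting a long moving average (which removes slow drift) from a short one (which removes fast noise). This is only an illustration of the idea; real fMRI pipelines use proper frequency-domain or FIR/IIR filters, not this crude scheme:

```python
# Crude band-pass: short moving average drops high frequencies,
# long moving average estimates slow drift; their difference keeps
# the mid-frequency fluctuations in between.

def moving_average(x, window):
    """Centred moving average; returns a list the same length as x."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        seg = x[lo:hi]
        out.append(sum(seg) / len(seg))
    return out

def crude_bandpass(x, short_win=3, long_win=31):
    smooth = moving_average(x, short_win)  # removes fast noise
    drift = moving_average(x, long_win)    # estimates slow drift
    return [s - d for s, d in zip(smooth, drift)]
```

A constant (pure drift-like) input comes out as zero, while a mid-frequency oscillation passes through largely intact – the defining behaviour of a band-pass filter.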

However, while this filtering is very useful, Davey et al. show that it can – ironically – create artefacts of its own. Here’s the data from one volunteer, scanned during a simple task and then analyzed in four different ways:

Without filtering (A) there’s a huge amount of ‘connectivity’ – too much to be realistic. This is why filtering is important.

But filtering without correcting for the effects of the filter actually makes things worse (B). It solves one problem at the cost of creating another: those pesky autocorrelations. The authors say, however, that they’ve calculated a way to correct for filter-induced correlations (D), and that this gives more realistic results. They recommend that this correction be used in future connectivity studies, but don’t go into much detail about what it means for the existing literature.
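The autocorrelation point is easy to demonstrate for yourself: low-pass-filter pure white noise and its samples stop being independent. Here’s a toy sketch in plain Python (nothing to do with the authors’ actual pipeline – just a moving average standing in for a generic low-pass filter):

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

random.seed(0)
# White noise: by construction, no correlation between neighbouring samples.
noise = [random.gauss(0, 1) for _ in range(5000)]

# A 5-point moving average, i.e. a simple low-pass filter.
w = 5
filtered = [sum(noise[i:i + w]) / w for i in range(len(noise) - w + 1)]

print(round(lag1_autocorr(noise), 2))     # near 0: raw noise is uncorrelated
print(round(lag1_autocorr(filtered), 2))  # near 0.8: the filter put it there
```

Neighbouring filtered samples share most of their averaging window, so they’re forced to resemble each other – correlation that was never in the data. Any significance test that assumes independent samples will then be fooled.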

Perhaps data ‘filtering’ is a misleading term. It implies that all you’re doing is removing the unwanted noise, leaving pristine, crystal clear data, a bit like a water filter. Mmm. What could go wrong? In fact mathematical ‘filters’ can put stuff into the data as well as take it out, so should we stop using that word and just call them what they are: modifications?

Davey CE, Grayden DB, Egan GF, & Johnston LA (2012). Filtering induces correlation in fMRI resting state data. NeuroImage. PMID: 22939874

CATEGORIZED UNDER: bad neuroscience, fMRI, methods, papers
  • Anonymous

    In these pictures, doesn't the filtered/uncorrected map just look like the filtered/corrected one with a different threshold level (i.e. looking at the yellow stuff instead of the red)? Your picture doesn't show a scale bar, so I don't know if this is picking out the less or more active stuff, but I'm not sure this is showing that applying filters introduces “stuff” (and that correcting can take it away).

  • DS

    It is well known that causal filters always have consequences for the data. For temporal data, causal filtering introduces temporal shifts.

    This temporal shifting plays a role in many aspects of the acquisition of MRI data (gradient propagation delays, etc) as well as the reconstruction (Nyquist ghosting, etc).

    Thanks to the authors of this paper for pointing out the costs to fMRI analysis due to filtering.

  • http://www.blogger.com/profile/07387300671699742416 practiCal fMRI

    See also Satterthwaite et al.: http://www.sciencedirect.com/science/article/pii/S1053811912008609

    In part of the paper they look at filtering from a movement perspective, although the bulk of the work involves confound regression.

    From their Discussion:

    “Finally, the effect of motion does not appear to reside uniquely in one frequency domain; motion tends to increase signal magnitude across the frequency spectrum in rsfc-MRI data.”

    Their findings support the contention that filtering may be more kill than cure. Filtering cardiac and respiratory effects is one thing, but perhaps the costs are too high and alternative strategies (confound regression) are cleaner? I'm way out of my field so shall defer to wiser heads!

  • Nitpicker

    Agree with Anon – the corrected maps look suspiciously like the uncorrected ones with a more rigorous threshold. I haven't read the paper yet, so perhaps this is akin to what they are doing. It would still matter how precisely the appropriate threshold is determined, so this may not just be reinventing the wheel.

  • Anonymous

    As a practicing shrink with training at Duke and Johns Hopkins, I'm thrilled to see you express the skepticism I have held regarding using scans to localize human behavior. To me this is the new phrenology, where researchers rush to claim that this or that symptom or emotion is explained by areas that light up on functional scans, ergo, “reside” there. Hey, Einstein wanna-be, the brain is so highly interconnected the notion of localization is almost laughable. Your comments remind us how artificial these studies are.

  • Anonymous


    As a shrink do you think that your practice is much more than phrenology?

    Just asking.

  • Nitpicker

    “I'm thrilled to see you express the skepticism I have held regarding using scans to localize human behavior”

    I don't believe NS's post actually does that here. Your comment would have been more relevant in the previous post comparing managers to non-managers. This paper doesn't invalidate fMRI as a technique, only how some data are analyzed. It doesn't even invalidate the conclusions of all fMRI analyses, but focuses on one particular type of analysis (resting-state connectivity).

    Anyhow, more critically, what a lot of people from outside the fMRI community do not realize is that only a few people still expect to “localize human behavior”. Most modern neuroimaging papers aim to test either how and what information is represented within brain regions, or to establish models of how different regions interact in a wider network to produce behavior. Few experiments still only look for “brain centers that respond to X”, and those that do usually don't do very well in peer review.

  • Catherine Davey

    Hi guys. I'm Catherine Davey, the author of the paper you're all discussing. What a treat to see the discussion the paper sparked! I just wanted to comment on the observation that the corrected figures look like they've just been generated using a different threshold. The third figure – labelled 'filtered, corrected' in the blog – is indeed generated using a different threshold. The crucial point is that, once temporal filtering has been applied, the degrees of freedom remaining in the time series are reduced, which changes the distribution of the correlation test statistic. This in itself doesn't mean that filtering is bad per se, but rather that it needs to be accounted for when testing results for significance. The fourth figure, labelled 'filtered, corrected using intrinsic autocorrelation modelling', employs a unique threshold for each voxel to account for both the autocorrelation introduced by filtering and any naturally occurring autocorrelation – both sources reduce the effective degrees of freedom in the resulting correlation test statistic.

    I hope this clears up any confusion. Please feel free to ask me more questions about the article.
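[Ed.: for readers wondering what "reduced degrees of freedom" means in practice, one textbook way to quantify it is a Bartlett-style effective-sample-size correction, which shrinks N according to the autocorrelation of the two series being correlated. This is a generic illustration of the concept, not the specific correction derived in the paper:]

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def effective_n(x, y, max_lag=20):
    """Bartlett-style effective sample size for correlating x with y:
    N_eff = N / (1 + 2 * sum_k rho_x(k) * rho_y(k))."""
    n = len(x)
    s = sum(autocorr(x, k) * autocorr(y, k) for k in range(1, max_lag + 1))
    return n / (1 + 2 * s)

random.seed(1)
a = [random.gauss(0, 1) for _ in range(3000)]
b = [random.gauss(0, 1) for _ in range(3000)]

w = 5  # low-pass both series with a 5-point moving average
fa = [sum(a[i:i + w]) / w for i in range(len(a) - w + 1)]
fb = [sum(b[i:i + w]) / w for i in range(len(b) - w + 1)]

print(round(effective_n(a, b)))    # close to the raw N of 3000
print(round(effective_n(fa, fb)))  # far smaller: fewer independent samples
```

The filtered series carry far fewer independent samples than their length suggests, so a significance threshold computed with the raw N would be too lenient – which is exactly why the corrected maps use a stricter, autocorrelation-aware threshold.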

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    Hi Catherine, great to have your comments!

    That's what I thought you'd done from reading the paper… but I wasn't sure… so thanks for the clarification.




About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

