Another Scuffle In The Coma Ward

By Neuroskeptic | January 28, 2013 7:22 pm

It’s not been a good few weeks for Adrian Owen and his team of Canadian neurologists.

Over the past few years, Owen’s made numerous waves, thanks to his claim that some patients thought to be in a vegetative state may, in fact, be at least somewhat conscious, and able to respond to commands. Remarkable if true, but not everyone’s convinced.

A few weeks ago, Owen et al were criticized over their appearance in a British TV program about their use of fMRI to measure brain activity in coma patients. Now, they’re under fire from a second group of critics over a different project.

The new bone of contention is a paper published in 2011 called Bedside detection of awareness in the vegetative state. In this report, Owen and colleagues presented EEG results that, they said, show that some vegetative patients are able to understand speech.

In this study, healthy controls and patients were asked to imagine performing two different actions: moving their hand, or their toe. Owen et al found that it was possible to distinguish between the ‘hand’ and ‘toe’-related patterns of brain electrical activity. This was true of most healthy control subjects, as expected, but also of some – not all – patients in a ‘vegetative’ state.
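The logic of such ‘command-following’ tests is simple to sketch: classify each trial's EEG as ‘hand’ or ‘toe’, then ask whether the number classified correctly could plausibly have arisen by guessing. Here's a toy version (my numbers are hypothetical, not the study's) using a one-sided exact binomial test of the kind the paper relied on:

```python
# Toy sketch (hypothetical counts): did a 'hand'-vs-'toe' classifier
# beat the 50% chance level? A one-sided exact binomial test, which
# assumes the trials are independent -- the crux of the later dispute.
from math import comb

def binom_p_onesided(k, n):
    """P(X >= k) for X ~ Binomial(n, 0.5): the chance of doing at
    least this well by guessing on every trial."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

n_trials, n_correct = 100, 62      # hypothetical numbers
p = binom_p_onesided(n_correct, n_trials)
print(f"accuracy {n_correct / n_trials:.0%}, p = {p:.4f}")
```

With 62 of 100 trials correct, p comes out around 0.01 – significant at the usual threshold, provided the independence assumption holds.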

The skeptics aren’t convinced, however. They reanalyzed the raw EEG data and claim that it just doesn’t prove anything.

In the critics' figure, EEG activity from a healthy control is “clean” and generally normal. In the coma patient, however, the data's a mess. It's dominated by large slow delta waves – in healthy people, you only see those during deep sleep – and there are also a lot of muscle artefacts, which show up as a ‘thickening’ of the lines.

These artefacts don't come from the brain at all; they're just muscle twitches. Crucially, the location and power of these twitches varied over time (as muscle spikes often do).

This wouldn’t necessarily be a problem, the critics say, except that the statistics used by Owen et al didn’t control for slow variations over time – that is, for correlations between consecutive trials (non-independence). If you do take account of these, there’s no statistically significant evidence that the ‘hand’ and ‘toe’ EEG patterns can be distinguished in any patient.
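The independence point is easy to demonstrate with a toy simulation (entirely my own illustration, not the critics' reanalysis): if trial outcomes drift together in blocks – as slowly varying artefacts would make them – a binomial test that assumes independent trials rejects far more often than its nominal 5% level, even when true accuracy is exactly at chance.

```python
# Toy simulation (my illustration, not the published reanalysis):
# outcomes are at chance overall, but correlated in blocks of 10
# consecutive trials, mimicking a slow drift. The naive binomial
# test's false-positive rate then far exceeds its nominal 5%.
import random
from math import comb

def binom_p_onesided(k, n):
    """P(X >= k) under Binomial(n, 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

def correlated_outcomes(n, block=10):
    """Chance-level trial outcomes, but each block of consecutive
    trials shares a single coin flip: extreme non-independence."""
    out = []
    while len(out) < n:
        out += [random.random() < 0.5] * block
    return out[:n]

random.seed(0)
n_trials, n_sims, alpha = 100, 500, 0.05
hits = sum(
    binom_p_onesided(sum(correlated_outcomes(n_trials)), n_trials) < alpha
    for _ in range(n_sims)
)
print(f"false-positive rate ≈ {hits / n_sims:.2f} (nominal {alpha})")
```

In this extreme case the test 'detects' above-chance performance in well over a quarter of purely random runs – which is why correcting for non-independence can push borderline results over the line.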

However, in their reply, Owen’s team say that:

their reanalysis only pushes two of our three positive patients to just beyond the widely accepted p=0.05 threshold for significance – to p=0.06 and p=0.09, respectively. To dismiss the third patient, whose data remain significant, they state that the statistical threshold for accepting command-following should be adjusted for multiple comparisons… but we know of no groups in this field who routinely use such a conservative correction with patient data, including the critics themselves.

I have to say that, statistical arguments aside, the EEGs from the patients just don’t look very reliable, largely because of those pesky muscle spikes. A new method for removing these annoyances has just been proposed… I wonder if that could help settle this?

Goldfine A, Bardin J, Noirhomme Q, Fins J, Schiff N, & Victor J (2013). Reanalysis of “Bedside detection of awareness in the vegetative state: a cohort study”. The Lancet, 381 (9863), 289-291. DOI: 10.1016/S0140-6736(13)60125-7

CATEGORIZED UNDER: bad neuroscience, EEG, papers
  • Anonymous

It's not the first time these labs have been criticized for fancy analyses of bad data…

    http://www.sciencemag.org/content/334/6060/1203.4.full

  • Anonymous

Looks like a pretty typical muscle artefact; I think this tends to reflect a build-up of muscle *tension*. The location this shows up in the EEG doesn't vary so much – mostly directly over the temporalis muscles. In normal subjects I often see these types of artefacts fluctuate over time; for example, people tend to show these artefacts early on, but as they relax they tend to reduce. Or vice versa, as they get bored.

    Anyway, when it comes to removing muscle artefacts, ICA is pretty good at picking them out, so that might be another route to try.
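For what it's worth, here is a minimal sketch of the ICA route the comment above suggests, using scikit-learn's FastICA on two simulated channels (a toy example of my own, not anyone's actual pipeline):

```python
# Minimal sketch (my toy example, not any lab's pipeline): use ICA to
# separate a broadband "muscle-like" component from a slow "brain-like"
# oscillation in two simulated channels, then reconstruct without it.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 1000)
brain = np.sin(2 * np.pi * 3 * t)             # slow 3 Hz oscillation
muscle = 0.5 * rng.standard_normal(t.size)    # broadband EMG-like noise

# Two channels, each a different mixture of the same two sources
X = np.c_[brain + 0.8 * muscle, 0.6 * brain + muscle]

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)                # one estimated source per column

# The muscle-like component is the one with low lag-1 autocorrelation
# (broadband noise); the smooth oscillation is highly autocorrelated.
lag1 = [np.corrcoef(s[:-1], s[1:])[0, 1] for s in sources.T]
sources[:, int(np.argmin(lag1))] = 0          # reject the artefact component
cleaned = ica.inverse_transform(sources)      # back to channel space
```

In real data one would inspect each component's scalp topography and spectrum before rejecting anything, but the principle is the same.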

  • http://petrossa.me/ petrossa.me

    This is not unlike a religion, one hopes for life after coma. And just like any religion anything goes to 'prove' that. As one sees maria in a dirty window, so one sees 'life' in the fMRI/EEG what have you.

What is 100% sure is that it's inhumane to keep the living dead around. Inhumane for the families, who hope against hope, and for the corpse, which deserves the right to a humane end of life, not to slowly rot away as a nice test object for curious professionals.

In the very rare cases where a coma patient does recover, they never recover intact, and would have been better off dead than mentally/physically handicapped, waiting for death to release them.

    There should be an ethical committee that sets a maximum 'keep hoping' term, after which the living corpse can be euthanised and everybody can go on with their lives.

Having seen a guy recover from coma and become some shuffling empty husk, barely able to communicate, I'd go for about a month of (semi) vegetative state and then pull the plug.

  • http://www.blogger.com/profile/08099485960661603080 Matt Craddock

    Hmm, not quite sure I follow the opening paragraph of the Owen group's reply.

“One obvious problem with this argument is that, if a permutation test were used for all of the patients, half of them would only produce 36 permutations that could contribute to the test. It is accepted statistical practice that at least 1000 permutations are required to draw valid conclusions.2,3”

This seems like a misunderstanding. Normally, the number of possible permutations is astronomically large, so it would be impractical to enumerate them all to calculate exact probabilities. In such cases, a random sample of the possible permutations is considered acceptable, and 1000 is a decent rule of thumb to start off with. But if there are only 36 possible permutations, it's easy to calculate exact p values, although the minimum obtainable p value would then be 1/36 ≈ 0.028.
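The arithmetic here is easy to check: an exact permutation test enumerates every distinct relabelling of the data, so with only K possible permutations the smallest attainable p value is 1/K (for K = 36, about 0.028). A toy exact test, with made-up numbers of my own that give 20 distinct relabellings:

```python
# Toy sketch (my own made-up numbers, not the study's): an exact
# permutation test on a difference in group means, enumerating every
# distinct relabelling. The smallest attainable p is 1 / n_perms.
from itertools import permutations

def exact_perm_p(values, labels):
    """Exact one-sided p for the difference in group means."""
    def stat(lab):
        a = [v for v, l in zip(values, lab) if l == 1]
        b = [v for v, l in zip(values, lab) if l == 0]
        return sum(a) / len(a) - sum(b) / len(b)

    observed = stat(labels)
    perms = set(permutations(labels))          # all distinct relabellings
    count = sum(stat(lab) >= observed for lab in perms)
    return count / len(perms), len(perms)

values = [3.1, 2.9, 3.3, 1.2, 1.0, 1.4]       # hypothetical trial scores
labels = [1, 1, 1, 0, 0, 0]
p, n_perms = exact_perm_p(values, labels)
print(n_perms, p)   # 20 distinct relabellings, so minimum p = 1/20 = 0.05
```

Here the observed labelling gives the largest possible statistic, so the exact p is 1/20 = 0.05 – the floor for this design, just as 1/36 would be the floor for 36 permutations.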

  • Anonymous
  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    I don't have time to read that thread but it doesn't look like you're promoting my views over there!

  • Anonymous

    fair enough. thanks for looking.

  • Adam

In my opinion, this was one of the most devastating parts of the Owen lab response:

    “Moreover, Goldfine and colleagues' suggestion that our patient data violate the independence requirement of the binomial test is based on an assumption that the patient group should be treated as homogeneous. To make their point, they show that, across the patient group, there seems to be a violation of independence—ie, a U-shaped histogram of p values. Although this might be the case across the group as a whole, it is certainly not the case when the data are inspected on an individual patient basis. It is widely accepted, even by Goldfine and colleagues,5—10 that a significant minority of patients (about 17%4) who are diagnosed as being in the vegetative state nevertheless retain some level of conscious awareness and are able to follow commands detected by fMRI. By extension then, this group is clearly not at all homogeneous—that is to say, some are likely to be truly vegetative, whereas others might appear to be vegetative behaviourally, but are in fact covertly aware. It makes little sense, therefore, to group all of our vegetative state patients together in the way suggested by Goldfine and colleagues, because the (known) majority of truly vegetative patients will water down the covertly aware subgroup, rendering the latter more difficult to detect using any statistical method. Indeed, when we applied the same test for independence used by Goldfine and colleagues to each patient dataset individually, rather than as a group (ie, using the standard working hypothesis that all patients are different), we found that all three of our positive patients pass the assumption of independence—ie, one-tailed histograms. By Goldfine and colleagues' own test, therefore, our use of the binomial method is validated in these positive individuals.”

    And there's also the fact that the Goldfine method failed to detect command-following in 60% of *healthy* volunteers.

    link: http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)60126-9/fulltext

  • http://www.blogger.com/profile/05902787152096233280 trisbek

    Hi guys,

    I have posted free all the articles and comments about this here:

    https://sites.google.com/site/trisbek/lancet-affaire

    Best,

    Tristan.

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    Thanks!

  • http://www.blogger.com/profile/00352370977403424332 Jonathan Victor

    I'm one of the authors of the re-analysis article. I just want to mention that interested readers can find further comments a propos the response letter of Cruse et al. (and the above comments concerning the U-shaped histogram, multiple comparisons, and detection rate in normals) on my lab website, at
    http://www-users.med.cornell.edu/~jdvicto/pdfs/gfbdnh12_add.pdf

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    Thanks!

About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.
