Google Glass is cool. But could it be philosophically dangerous?
Sixty years ago, Ludwig Wittgenstein famously wrote:
Where does this idea come from? It is like a pair of glasses on our nose through which we see whatever we look at. It never occurs to us to take them off.
Newly discovered papers have shed light on a fascinating episode in the history of neuroscience: Weighing brain activity with the balance
The story of the early Italian neuroscientist Dr Angelo Mosso and his ‘human circulation balance’ is an old one – I remember reading about it as a student, in the introductory bit of a textbook on fMRI – but until now, the exact details were murky.
In the new paper, Italian neuroscientists Sandrone and colleagues report that they’ve unearthed Mosso’s original manuscripts from an archive in Milan.
A 48-year-old woman woke up one morning without knowing where she was. She recognized her husband and finally realized that she was at home, but reported that she felt that all surroundings appeared ‘strange’ to her. She did not report any changes in the shape of furniture, rooms and people, but complained that voices and noises were ‘dinosaurs shouts’, or were made by ‘prehistorical beasts’…
After arriving at the hospital, she continued to complain that the surrounding sounds were made by dinosaurs, even adding that these were of the meat-eating type. She was not confused, she knew that she was in the hospital, and she reported the exact date.
Is it always a good thing to know your limitations?
Over at Scientific American, Samuel McNerney writes about the dangers of learning about common human cognitive biases. The problem is that it’s easy to find out about, say, confirmation bias, and think “Well, it affects other people, but now I know about it, I am immune to it” – and then proceed exactly as you did before, suffering the bias but now with misplaced confidence in your abilities.
I fear that a similar thing is at work in science, in the form of the Limitations Section.
The context is that in Britain, charities and other advocates for people with mental illness have become fond of pointing to famous people, past and present, who suffered from a psychiatric disorder.
Last year, I blogged about a new and very pretty way of displaying the data about the human ‘connectome’ – the wiring between different parts of the brain.
But there are many beautiful ways of visualizing the brain’s connections, as neuroscientist Daniel Margulies and colleagues in Leipzig discuss in a colourful paper showcasing these techniques.
In a short blog post last week, Thomas Insel, director of the National Institute of Mental Health (NIMH), announced that the organization would be “re-orienting its research away from DSM categories”.
Suppose neuroscientists faced absolutely no financial or ethical constraints. What would that allow us to do? What kind of hitherto-intractable questions would we be able to answer?
Prescriptions of antipsychotic (aka neuroleptic) drugs in North American children and adolescents have been rising rapidly in recent years. But why?
Gabrielle Carlson of Stony Brook Children’s Hospital offers her thoughts in a brief paper: The Dramatic Rise in Neuroleptic Use In Children: Why Do We Do It and What Does It Buy Us?
Four years ago, neuroscientists became aware of an ominous-sounding manuscript entitled “Voodoo Correlations In Social Neuroscience”. This piece was eventually published under a more prosaic name but it still hit home, with nearly 500 citations so far.
To me, this paper marked the start of a new era of ‘critical’ (in the proper sense of thoughtful discussion and reflection) neuroscience, with fMRI researchers becoming more aware that fundamental statistics are as important as ever, despite the amazing technical advances and novel techniques of the 1990s and 2000s.
Now London neuroscientist James Kilner has reminded us that the ‘voodoo problem’ applies not only to fMRI but also to EEG and MEG, methods for measuring the brain’s electromagnetic activity: Bias in a common EEG and MEG statistical analysis and how to avoid it
The problem, in essence, is one of selecting values out of a random population. If you apply a selection criterion to lots of random variables and pick out only the highest (or lowest) values, then any statistical test you run on those picked values will be biased, precisely because they were selected for being extreme. It sounds simple, but in a complex data analysis it’s surprisingly easy to select and test without realizing it.
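To see how strong this bias can be, here is a minimal sketch in Python with NumPy (the numbers of repeats and measurements are arbitrary illustrations, not taken from Kilner’s paper). Every measurement is pure noise with a true mean of zero, yet always keeping the largest one produces what looks like a substantial effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 repeats: draw 20 pure-noise "measurements", keep only the largest.
draws = rng.standard_normal((10_000, 20))
selected = draws.max(axis=1)

print(draws.mean())     # close to 0: the true population mean
print(selected.mean())  # well above 0 (around 1.9): a spurious "effect"
```

The selected values are not special in any way; they look large only because the selection step guaranteed they would.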
In the case of EEG and MEG, the recorded data come from anywhere between 20 and 250 sensors or electrodes placed around the head. It is not clear in advance, however, which sensors are most ‘interesting’ in any given experiment.
A common practice is to focus on the electrode at which the largest electrical or magnetic response (ERP) is seen to a given stimulus, but Kilner shows that this is dangerous unless care is taken to make the selection criteria independent of the subsequent analysis. Selection itself is fine, but ‘double dipping’ or ‘circular’ testing of the same things that were used as selection criteria (e.g. testing the size of the ERP at the electrode where that ERP is largest) is problematic.
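The contrast between circular and independent selection can be sketched in a toy simulation (Python/NumPy; the channel and trial counts are made-up illustrations, not from the paper). With pure-noise data, selecting the channel with the largest response and then testing that same channel on the same trials inflates the false-positive rate far above the nominal 5%, whereas selecting on one half of the trials and testing on the other keeps it near 5%:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_channels, n_trials = 2000, 64, 40
fp_circular = fp_independent = 0

for _ in range(n_sims):
    # Pure noise: no real effect at any channel.
    data = rng.standard_normal((n_channels, n_trials))

    # Circular: select AND test the best channel on all trials.
    best = np.argmax(data.mean(axis=1))
    t = data[best].mean() / (data[best].std(ddof=1) / np.sqrt(n_trials))
    fp_circular += t > 1.685  # approx one-tailed 5% critical value, df=39

    # Independent: select on the odd trials, test on the even trials only.
    best = np.argmax(data[:, 1::2].mean(axis=1))
    half = data[best, 0::2]
    t = half.mean() / (half.std(ddof=1) / np.sqrt(half.size))
    fp_independent += t > 1.729  # approx one-tailed 5% critical value, df=19

print(fp_circular / n_sims)     # grossly inflated, far above 0.05
print(fp_independent / n_sims)  # close to the nominal 0.05
```

The split-half approach sacrifices half the data for testing, but in exchange the test is valid: the trials used for selection carry no information about the trials used for testing.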
Kilner, J. (2013). Bias in a common EEG and MEG statistical analysis and how to avoid it. Clinical Neurophysiology. DOI: 10.1016/j.clinph.2013.03.024