A new paper warns: 'All that glitters is not BOLD'. This title seems designed to worry neuroscientists, because the blood oxygenation-level dependent (BOLD) phenomenon is what allows fMRI scanning to detect brain activity.
Or is it? Writing in Scientific Reports, Finnish neuroscientists Ville Renvall, Cathy Nangini and Riitta Hari argue that BOLD isn’t always central to the fMRI signal.
Anaesthesia and Intensive Care (AIC) is an Australian medical journal. The latest issue, just published online, contains a remarkable – and possibly even unique – pair of Letters. These letters take the form of apologies for the distress caused by the publication of an article – I do not know of any similar cases in science.
An article in Science has been getting a lot of attention this week: Nano-Imaging Feud Sets Online Sites Sizzling
It’s about the ‘stripey nanoparticles’ debate, which I covered a few weeks back. Back in 2004, Francesco Stellacci and his colleagues published a paper claiming to have observed stripes on the surface of certain very small objects. In the years since, they have expanded on this claim in numerous further papers. However, a number of scientists argue that the stripes aren’t real – these critics have published their arguments mainly on blogs (e.g.).
The Science piece describes two controversies. Controversy #1 is the scientific question of the reality of those stripes. That is not the topic of this post.
In Part 1 of this post, I covered an emerging story of conflicts of interest within the American Psychiatric Association (APA). The controversy concerns a new “Computerized Adaptive Test” (CAT) that can be used to assess the severity of depression – a ‘dimensional’ measure.
I said that Part 2 would look at the test itself. But I’ve decided to split this further. In this post, I’ll be looking at the ‘practical’ aspects of the CAT. In Part 3 I’ll examine the science and statistics behind it.
After all the criticisms, the street protests and the scholarly debates, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders was finally published by the American Psychiatric Association (APA) in May 2013. And then… well, that was it. The launch itself was something of an anticlimax – as I predicted in 2010, “When DSM-5 does arrive… it will be a non-event. By then the debates will have happened.”
But now a strange story is emerging that could reignite the controversy.
Lately I’ve been investigating (apparent) plagiarism in various areas of scientific publication. It’s quite interesting how many different ways there are to put together an unoriginal paper. No two cases are alike, but I have noticed some patterns.
The past few years have seen many neuroscientists becoming interested in ‘hyperscanning’. Rather than contenting themselves with scanning just one brain at a time, hyperscanners simultaneously measure activity from two (or even more) people, using techniques such as fMRI and EEG.
A new paper brings worrying news for neuroscientists using fMRI to study memory:
Across-subject reliabilities were only poor to fair… for novelty encoding paradigms, the interpretation of fMRI results on a single subject level is hampered by its low reliability. More studies are needed to optimize the retest reliability of fMRI activation for memory tasks.