Cosmic Dopamine: On “Neuroquantum Theories of Psychiatric Genetics”

By Neuroskeptic | March 27, 2017 12:05 pm

Back in 2015, I ran a three-part series of posts (1, 2, 3) on Dr Kenneth Blum and his claim to be able to treat what he calls “Reward Deficiency Syndrome” (RDS) with nutritional supplements.

Today my interest was drawn to a 2015 paper from Blum and colleagues, called “Neuroquantum Theories of Psychiatric Genetics: Can Physical Forces Induce Epigenetic Influence on Future Genomes?”.

Read More

CATEGORIZED UNDER: drugs, mental health, papers, select, Top Posts

The Misuse of Meta-Analysis?

By Neuroskeptic | March 24, 2017 5:33 pm

Over at Data Colada, Uri Simonsohn argues that The Funnel Plot is Invalid.

Funnel plots are scatter diagrams widely used to visualize possible publication bias in the context of a meta-analysis. In a funnel plot, each data point represents one of the studies in the meta-analysis. The x-axis shows the effect size reported by the study, while the y-axis shows the standard error of that effect size, which falls as the sample size rises (roughly in proportion to 1/√n).

[Figure: an example funnel plot]

In theory, if there is no publication bias, the points should form a symmetrical, triangular “funnel” pointing upwards. An asymmetric funnel is taken as evidence of publication bias: typically, it is read as showing that small studies which happen to find a large effect get published, while other small studies, which happen to find no effect, remain in the proverbial file drawer.
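To make that concrete, here is a minimal sketch using simulated data (my own illustration, not taken from any real meta-analysis). All the specifics are assumptions chosen for the demo: a single true effect of 0.2, sample sizes between 20 and 400, and a crude filter under which small studies only get “published” when their result looks statistically significant. The resulting funnel comes out lopsided.

```python
# Minimal sketch with simulated data (not from any real meta-analysis):
# every study estimates the same true effect, but small studies only
# appear if their result happens to look "significant".
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_studies = 200
true_effect = 0.2                       # assumed true effect, same for every study

# Random sample sizes; standard error shrinks roughly as 1/sqrt(n).
sample_sizes = rng.integers(20, 400, size=n_studies)
standard_errors = 1 / np.sqrt(sample_sizes)
observed_effects = rng.normal(true_effect, standard_errors)

# Crude publication filter: large studies always appear, small studies
# only appear if z > 1.96 (i.e. a "significant" positive result).
z_scores = observed_effects / standard_errors
published = (sample_sizes > 150) | (z_scores > 1.96)

plt.scatter(observed_effects[published], standard_errors[published], s=12)
plt.gca().invert_yaxis()                 # precise studies at the top, by convention
plt.axvline(true_effect, linestyle="--") # the single underlying effect size
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.title("Simulated funnel plot under publication bias (asymmetric)")
plt.show()
```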

However, Simonsohn points out that we can only infer publication bias from a funnel plot if we assume that there is no “real” correlation between the effect size and the sample size of the included studies. This assumption, he says, is probably false, because researchers might choose larger samples to study effects that they predict to be smaller:

The assumption is false if researchers use larger samples to investigate effects that are harder to detect, for example, if they increase sample size when they switch from measuring an easier-to-influence attitude to a more difficult-to-influence behavior. It is also false if researchers simply adjust sample size of future studies based on how compelling the results were in past studies… Bottom line. Stop using funnel plots to diagnose publication bias.
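Simonsohn’s scenario is easy to demonstrate. In this minimal simulation (again my own sketch, not his code, with made-up numbers and a rough power-analysis rule of thumb), nothing goes unpublished, yet because researchers plan larger samples for the effects they expect to be smaller, effect size and standard error end up correlated, which is exactly the asymmetry a funnel plot would flag as publication bias.

```python
# Minimal sketch of Simonsohn's point (my own simulation, not his code): there
# is no file drawer, but researchers use bigger samples for effects they
# expect to be smaller, so the funnel still comes out asymmetric.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_studies = 200

# Heterogeneous true effects across studies (assumed range 0.1 to 0.6).
true_effects = rng.uniform(0.1, 0.6, size=n_studies)

# Researchers (imperfectly) anticipate each effect and power the study for it:
# smaller expected effect -> larger planned sample -> smaller standard error.
anticipated = np.maximum(true_effects + rng.normal(0, 0.05, size=n_studies), 0.05)
sample_sizes = np.clip((2.8 / anticipated) ** 2, 20, 2000)  # rough 80%-power rule
standard_errors = 1 / np.sqrt(sample_sizes)

# Every study is "published" - nothing is filtered out here.
observed_effects = rng.normal(true_effects, standard_errors)

# Yet effect size and standard error are correlated, which is exactly the
# asymmetry a funnel plot would read as publication bias.
r, p = pearsonr(observed_effects, standard_errors)
print(f"correlation between effect size and standard error: r = {r:.2f} (p = {p:.1g})")
```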

In my view, Simonsohn is right about this “crazy assumption” behind funnel plots – but the problem goes deeper.

Simonsohn’s argument applies whenever the various different studies in a meta-analysis are studying different phenomena, or at least measuring the same phenomenon in different ways. It’s this variety of effects that could give rise to a variety of predicted effect sizes. Simonsohn uses this meta-analysis about the “bilingual advantage” in cognition as an example, noting that it “includes quite different studies; some studied how well young adults play Simon, others at what age people got Alzheimer’s.”

Simonsohn’s conclusion is that we shouldn’t draw a funnel plot for a meta-analysis like this, but should we be doing a meta-analysis like this in the first place?

Is meta-analysis an appropriate tool for synthesizing evidence from methodologically diverse studies? Can we really compare apples and oranges and throw them all into the same statistical juicer?

The “comparing apples and oranges” debate around meta-analysis is an old one, but I think that researchers today often gloss over this issue.

For instance, in psychology and neuroscience there seems to be a culture of “thematic meta-analysis”, in which a meta-analysis is used to “sum up” all of the often diverse research addressing a particular theme. I’m not sure that meta-analysis is the best tool for this. In many cases, it would make more sense to rely on a narrative review, that is, simply to write about the various studies.

We also see the phenomenon of “meta-analytic warfare”: one side in a controversy will produce a meta-analysis of the evidence, their opponents will reply with a different one, and so on back and forth. These wars can go on for years, as the two sides accuse each other of wrongly including or excluding certain studies. My concern is that the question of which studies to include has no right answer in the case of a “theme” meta-analysis, because a theme is a vague concept, not a clearly defined grouping.

CATEGORIZED UNDER: science, select, statistics, Top Posts

Unethical “Stem Cell” Therapy for Autism In India?

By Neuroskeptic | March 17, 2017 4:29 pm

I just read a concerning paper about an experimental stem cell treatment for children with autism.


Read More

The Incredible Lesion-Proof Brain?

By Neuroskeptic | March 15, 2017 8:43 am

How much damage can the brain take and still function normally? In a new paper, A Lesion-Proof Brain?, Argentinian researchers Adolfo M. García et al. describe the striking case of a woman who shows no apparent deficits despite widespread brain damage.

Read More

CATEGORIZED UNDER: papers, select, Top Posts

Don’t Blame Trump’s Brain

By Neuroskeptic | March 13, 2017 2:24 pm


The past year has seen the emergence of a new field of neuroscience: neuroTrumpology. Also known as Trumphrenology, this discipline seeks to diagnose and explain the behaviour of Donald Trump and his supporters through reference to the brain.

Read More

The Ethics of Citation

By Neuroskeptic | March 12, 2017 2:04 pm

Earlier this week, Jordan Anaya asked an interesting question on Twitter:

This got me thinking about what we might call the ethics of citation.

Read More

CATEGORIZED UNDER: ethics, papers, science, select, Top Posts

Getting High Off Snakebites?

By Neuroskeptic | March 9, 2017 6:18 am

In a curious case report, Indian psychiatrists Lekhansh Shukla and colleagues describe a young man who said he regularly got high by being bitten by a snake.

Read More

CATEGORIZED UNDER: animals, drugs, papers, placebo, select, Top Posts

Ben Carson and the Power of the Hippocampus

By Neuroskeptic | March 7, 2017 2:27 pm

“I could take the oldest person here, make a little hole right here on the side of the head, and put some depth electrodes into their hippocampus and stimulate. And they would be able to recite back to you, verbatim, a book they read 60 years ago.”

So said Ben Carson, the U.S. Secretary of Housing and Urban Development, yesterday. Carson is known for his unorthodox claims, such as his attempt to rewrite the Egyptology textbooks, but this time, as he’s a former neurosurgeon himself, he might be thought to be on safer ground.

Sadly not.

Read More

Brain Activity At The Moment of Death

By Neuroskeptic | March 3, 2017 2:48 pm

What happens in the brain when we die?

Read More

CATEGORIZED UNDER: EEG, papers, select, Top Posts

Why Scientists Shouldn’t Replicate Their Own Work

By Neuroskeptic | February 25, 2017 3:15 pm

Last week, I wrote about a social psychology paper which was retracted after the data turned out to be fraudulent. The sole author on that paper, William Hart, blamed an unnamed graduate student for the misconduct.

Now, more details have emerged about the case. On Tuesday, psychologist Rolf Zwaan blogged about how he was the one who first discovered a problem with Hart’s data, in relation to a different paper. Back in 2015, Zwaan had co-authored a paper reporting a failure to replicate a 2011 study by Hart & Albarracín. During the peer review process, Hart and his colleagues were asked to write a commentary that would appear alongside the paper.

Read More

CATEGORIZED UNDER: papers, science, select, Top Posts