The past year has seen the emergence of a new field of neuroscience: neuroTrumpology. Also known as Trumphrenology, this discipline seeks to diagnose and explain the behaviour of Donald Trump and his supporters through reference to the brain.
Earlier this week, Jordan Anaya asked an interesting question on Twitter:
Why do we blame the media for reporting on bad studies but we don’t blame scientists for citing bad studies?
— Omnes Res (@OmnesResNetwork) March 6, 2017
This got me thinking about what we might call the ethics of citation.
In a curious case report, Indian psychiatrists Lekhansh Shukla and colleagues describe a young man who said he regularly got high by being bitten by a snake.
“I could take the oldest person here, make a little hole right here on the side of the head, and put some depth electrodes into their hippocampus and stimulate. And they would be able to recite back to you, verbatim, a book they read 60 years ago.”
So said Ben Carson, the U.S. Secretary of Housing and Urban Development, yesterday. Carson is known for his unorthodox claims, such as his attempt to rewrite the Egyptology textbooks, but this time, as he’s a former neurosurgeon himself, he might be thought to be on safer ground.
Last week, I wrote about a social psychology paper which was retracted after the data turned out to be fraudulent. The sole author on that paper, William Hart, blamed an unnamed graduate student for the misconduct.
Now, more details have emerged about the case. On Tuesday, psychologist Rolf Zwaan blogged about how he was the one who first discovered a problem with Hart’s data, in relation to a different paper. Back in 2015, Zwaan had co-authored a paper reporting a failure to replicate a 2011 study by Hart & Albarracín. During the peer review process, Hart and his colleagues were asked to write a commentary that would appear alongside the paper.
Although the use of the Rorschach to diagnose mental illness is mostly a thing of the past, research on the test continues. Last week, two new papers were published on the Rorschach blots, including a fractal analysis of the images themselves and a brain scanning study using fMRI.
A peculiar new paper proposes the idea of “connecting two spinal cords as a way of sharing information between two brains”. The author is Portuguese psychiatrist Amílcar Silva-dos-Santos and the paper appears in Frontiers in Psychology.
Neuroskeptic became suspicious about the three unrelated papers – about food chemistry, heart disease, and the immune system and cancer – after scanning them with plagiarism software. After alerting the journals, two issued formal retractions for the papers – but neither specified plagiarism as the reason.
These three retractions represent the fruits of a personal project (or perhaps it was a quixotic quest) I carried out last year. Over the space of four months, I reported about 30 cases of plagiarism in review papers to various journals, with the help of Turnitin plagiarism detection software.
Every case I reported was a serious one. The percentage of unoriginal text ranged from 44% to 90%, with an average of about 65%. What’s more, I didn’t count overlap with the authors’ own work (i.e. self-plagiarism), as this is sometimes seen as less serious. Likewise, I only looked at review papers, because plagiarism is arguably less serious in experimental papers, where the data are new.
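The "percentage of unoriginal text" figures above can be illustrated with a toy version of the overlap check that tools like Turnitin perform. This is a minimal sketch, not how Turnitin actually works (commercial tools use document fingerprinting against enormous corpora): it simply reports what share of a manuscript's word 5-grams also appear in a given source text. All function names here are my own invention for illustration.

```python
# Toy text-overlap check: what fraction of a manuscript's word 5-grams
# also appear in a source document? (Illustrative only; real plagiarism
# detectors use fingerprinting against large corpora.)

def ngrams(text, n=5):
    """Set of word n-grams in a text, case-insensitive."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_percent(manuscript, source, n=5):
    """Percentage of the manuscript's word n-grams found in the source."""
    m = ngrams(manuscript, n)
    if not m:
        return 0.0
    return 100.0 * len(m & ngrams(source, n)) / len(m)
```

On this toy measure, a manuscript that opens with nine words lifted verbatim from a source and then continues in its own words would already score around 50% – which puts the 44–90% range I found in perspective.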
Yet despite the severity of the problems I reported, most journals never replied to my emails. A few did acknowledge my concerns, and promised to investigate, but nearly a year later, only three papers have been retracted. I don’t know of any expressions of concern or corrections either.
Eventually I got tired of being ignored, and abandoned my one-man crusade against copy-and-paste.
This leaves 27 papers that I know for a fact to be largely plagiarized still sitting in the scientific literature – and there must be thousands more out there. If I had to estimate the proportion of review papers that contain severe plagiarism, I’d put it at something like 10-15%. Maybe we should call them recycle papers instead of reviews?
I’m not sure how to proceed with this project. I’m happy to share my list of offending papers with anyone who thinks they can do something useful with it, and I may decide to publish it at some point. But will this achieve anything? Journals are meant to uphold the standards of science. If they don’t care about plagiarism, what can anyone else hope to do?
Here’s why I did it:
Plagiarists steal opportunity from their honest peers. In science, for instance, jobs, promotions and funding are assigned largely on the basis of the publication records of the candidates. There are not enough of these things to go around. So whenever a plagiarist wins one of these prizes on the strength of their unfairly inflated record, someone else misses out.
This is why I don’t like plagiarists. I don’t take pleasure in anyone’s ‘downfall’, but I look at it this way: for every disgraced plagiarist, an honest researcher gets a job, or gets funded, or gets promoted.