Touted as a revolutionary new way of measuring depression, the CAT-DI is a kind of computerized questionnaire that assesses depressive symptoms by asking a series of questions about how the user is feeling. Unlike a standard questionnaire, however, the CAT-DI is adaptive: it picks which question to ask next based on previous responses.
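To illustrate the general idea of adaptive testing (this is a minimal sketch of a generic item-response-style approach, not the CAT-DI's actual algorithm, which ATT has not published in code form; the item texts, difficulty values, and update rule here are invented for illustration):

```python
# Sketch of adaptive item selection: ask the unasked question whose
# "difficulty" is closest to the current severity estimate, then nudge
# the estimate up or down depending on the answer. Real computerized
# adaptive tests use proper item response theory models, not this
# crude update rule.

def next_item(items, asked, severity_estimate):
    """Return the unasked item whose difficulty best matches the estimate."""
    candidates = [i for i in items if i["id"] not in asked]
    return min(candidates, key=lambda i: abs(i["difficulty"] - severity_estimate))

def update_estimate(estimate, response, step=0.5):
    """Crude update: move the estimate up after a 'yes', down after a 'no'."""
    return estimate + step if response else estimate - step

# Hypothetical item bank (difficulties on an arbitrary severity scale)
items = [
    {"id": 1, "text": "Do you sometimes feel sad?",        "difficulty": -1.0},
    {"id": 2, "text": "Do you feel hopeless most days?",   "difficulty":  0.0},
    {"id": 3, "text": "Have you thought about self-harm?", "difficulty":  1.5},
]

estimate, asked = 0.0, set()
for response in [True, True]:  # two hypothetical 'yes' answers
    item = next_item(items, asked, estimate)
    asked.add(item["id"])
    estimate = update_estimate(estimate, response)
print(estimate)  # 1.0 after two 'yes' responses
```

The point is only that each answer changes which question comes next: after a "yes" to the mid-severity item, the test moves on to a more severe one rather than working through a fixed list.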
The CAT-DI’s creators have said that the commercial release of the product (and related CATs) is under consideration. They’ve formed a company, Adaptive Testing Technologies (ATT). This commercial aspect has led to fierce controversy over the past few weeks, with accusations of conflicts of interest against some very senior figures in American psychiatry. It was this aspect of the story that I focused on previously.
Now, I’m finally going to delve into the statistics to find out: does it really work?
The abstract to a scientific paper is a brief summary of the content. The start of an abstract, in turn, serves to introduce the subject of the research.
This is fine for most kinds of science, but in the case of psychology (and parts of neuroscience) it can produce some rather odd results. In these fields, the topic of much research is everyday human behaviours and experiences. How do you introduce something that everyone already knows about? How do you make the commonplace sound like a scientific problem?
Well… it’s hard. Hard to do it without sounding like a cross between Spock and Captain Obvious. So that’s often what ends up happening. I thought I’d compile some of my favourite examples of this genre into one compendium of wisdom. Presenting… the Psychology Abstract’s Introduction to Life.
A paper just out in the journal Psychological Science announces: "Women Can Keep the Vote: No Evidence That Hormonal Changes During the Menstrual Cycle Impact Political and Religious Beliefs"
This eye-catching title heads up an article that’s interesting in more ways than you’d think.
The shape of a newborn baby’s brain can predict its later cognitive development, according to a new study from New York neuroscientists Marisa Spann and colleagues.
A very interesting report from a group of French neurosurgeons sheds light on the neural basis of consciousness and dreams.
I support proposals in psychology and political science to allow preregistration to be done in an open way. I just wouldn’t want preregistration to be required, indeed the concept of preregistration would seem to me to be just about impossible to apply in the analysis of public datasets such as we use in political science.
What Gelman is saying is that preregistration – getting scientists to publicly announce what experiments they will conduct ahead of time, to defeat publication bias – would not be possible in the case of reanalysis studies. Rather than collecting new data, such research consists of taking a new look at old data. There is widespread concern that, because such studies can't be preregistered, they would become denigrated or even unpublishable were preregistration to become the norm.
Now, reanalysis is immensely valuable (even I do it), and I’ve yet to meet anyone who wants it abolished. Luckily, I do not think that the rise of preregistration would threaten such studies, even if they were unpreregisterable.
But in this post I want to go further than that – or, maybe, off the deep end – and say: maybe they could be preregistered.
A couple of months ago, I became aware of an organization called Publication Integrity and Ethics (PIE). In three posts (1,2,3) I explored some interesting facts about this group, and about a related organization, Open Access Publishing London (OAPL).
PIE say that their mission is to “promote and maintain a better and a healthier publishing environment through a new set of ethical rules and guidelines”. OAPL, who manage some 50 academic journals, say that they were “the first global publishing house to adopt the PIE Guidelines.” …They are also the only global publishing house whose Director is a relative of PIE’s Director.