In Part 1 of this post, I covered an emerging story of conflicts of interest within the American Psychiatric Association (APA). The controversy concerns a new “Computerized Adaptive Test” (CAT) designed to measure the severity of depression – a ‘dimensional’ measure.
I said that Part 2 would look at the test itself. But I’ve decided to split this further. In this post, I’ll be looking at the ‘practical’ aspects of the CAT. In Part 3 I’ll examine the science and statistics behind it.
To recap, the CAT is a software program developed by University of Chicago statistician Robert Gibbons, with the help of colleagues including David Kupfer, who headed the development of the DSM-5 manual. (N.B. I am here using “CAT” to refer to the CAT-DI – Depression Inventory. Gibbons et al have a family of other CATs for other mental health symptoms, at different stages of development.)
The CAT is essentially a self-report questionnaire – it estimates the severity of depression by asking people how they feel. However, unlike a simple pen-and-paper questionnaire, the CAT adaptively chooses which questions to ask, based on the subject’s responses to the previous ones. There’s a bank of hundreds of questions, but any given subject only has to answer around 12 of them. In a paper announcing the results of pilot studies, Gibbons et al say this allows for quick and accurate measurement.
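Adaptive tests of this kind are typically built on item response theory (IRT): after each answer, the software re-estimates the respondent’s severity and then asks whichever remaining question is most informative at that estimate. As a rough illustration only – the CAT-DI’s actual item bank, model, and parameters are unpublished, so every number and modelling choice below is hypothetical – here is a minimal sketch of that selection loop:

```python
import math

# Hypothetical item bank (NOT the CAT-DI's). In a two-parameter logistic
# (2PL) IRT model, each item has a discrimination (a) and difficulty (b).
ITEM_BANK = [
    {"a": 1.2, "b": -1.0},
    {"a": 0.8, "b": -0.5},
    {"a": 1.5, "b": 0.0},
    {"a": 1.0, "b": 0.5},
    {"a": 1.3, "b": 1.0},
    {"a": 0.9, "b": 1.5},
]

def prob_yes(theta, item):
    """2PL probability of endorsing an item at severity level theta."""
    return 1.0 / (1.0 + math.exp(-item["a"] * (theta - item["b"])))

def information(theta, item):
    """Fisher information of an item at theta: a^2 * p * (1 - p)."""
    p = prob_yes(theta, item)
    return item["a"] ** 2 * p * (1 - p)

def estimate_theta(responses):
    """Crude maximum-likelihood estimate of theta by grid search."""
    grid = [i / 10.0 for i in range(-40, 41)]  # theta from -4 to +4
    def loglik(theta):
        total = 0.0
        for item, endorsed in responses:
            p = prob_yes(theta, item)
            total += math.log(p) if endorsed else math.log(1.0 - p)
        return total
    return max(grid, key=loglik)

def run_cat(answer_fn, n_items=3):
    """Adaptively administer n_items, re-estimating severity each step."""
    theta = 0.0
    responses = []
    remaining = list(ITEM_BANK)
    for _ in range(n_items):
        # Ask the unasked item that is most informative at the current theta
        item = max(remaining, key=lambda it: information(theta, it))
        remaining.remove(item)
        responses.append((item, answer_fn(item)))
        theta = estimate_theta(responses)
    return theta

# Simulated respondent who endorses only the easier (lower-difficulty) items
severity = run_cat(lambda item: item["b"] < 0.3)
print(round(severity, 1))
```

The point of the sketch is that the questions themselves are the least of it: without the item parameters (the `a` and `b` values fitted in the pilot studies), you cannot reproduce either the question-selection step or the final score.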
How will this work in practice? This is unclear at present. Gibbons has formed a company, Psychiatric Assessment Inc. (also known as Adaptive Testing Technologies) and has issued founder’s shares to Kupfer, amongst others. Their website describes the CAT, but doesn’t describe how to get access to it, and doesn’t mention prices at all. Nonetheless, the fact that a company has been formed, and shares issued, suggests that profit is on the table.
If so, this might be a problem.
My fundamental concern is that the CAT could end up being closed-source; a ‘black box’. The questions that the patient answers are just the front end. The core of the system is the set of algorithms that decide which questions to ask, and then calculate the score, which would be displayed to the patient or their doctor.
Various published papers have outlined how the CAT works, but (as far as I can see) the key details are missing – the full item bank, and the various parameters, derived from the pilot studies, that determine how each question is handled. In other words, no-one can go off and program their own replication of the CAT. And if someone wants to check whether the CAT has any bugs, say, they can’t.
A conventional questionnaire, by contrast, is (by its nature) open source. If there’s a misprint, you can see it. If there’s a question that doesn’t make sense in your context, you can delete it. You can study it, research it, and modify it to your satisfaction. Copyright prevents you from publishing your own modification of many questionnaires, but you could still use them. In other words, with an old-fashioned questionnaire, you know what you’re getting, and if you don’t like it, you can change it.
The black box, ‘secret formula’ approach that CAT appears to be heading towards is problematic – but by no means unprecedented. Neuroskeptic readers may remember CNS Response and their EEG-based depression assessment, and the MDDScore blood test for depression – to name just two. Both of these rely on secret equations.
The oldest and by far the most successful of this genre is not from psychiatry at all. The Bispectral Index can be used to monitor the depth of anaesthesia. You hook it up to the patient’s head (it’s literally a box, although not always a black one) and it uses a secret algorithm to judge their state of consciousness based on their brain activity.
All of these cases have common problems from the perspective of you, the doctor using them (and by extension, the patients):
- You can’t be sure how well the technology works and what its limitations are. You just have to trust the manufacturers – who of course, have a conflict of interest.
- User innovation is impossible. There might be an easy way to improve the system or make it better suit your needs – but you can’t.
- You’re paying money purely for the right to do something, not for the ability to do it. (The hardware involved in all of the cases I mentioned is simple; if it weren’t for the secret algorithms, it would be possible to implement these tests at low or zero cost.)
On this last point, you might object: doesn’t an inventor have a right to make money from his or her invention? In a free market, shouldn’t people be able to market the fruits of their labor?
Perhaps, but the CAT is no product of capitalism: it was developed using public money. Robert Gibbons has received $4,958,346 in National Institutes of Health (NIH) grants since 2002. The project title: Computerized Adaptive Testing – Depression Inventory. Robert Gibbons is no John Galt.
Maybe I’m jumping the gun here. No-one is monetizing the CAT yet… but if someone does, then the NIH would effectively have been providing start-up funds for a commercial enterprise. Eventually, CAT might become available on Medicare or Medicaid, in which case the American taxpayer would, outrageously, be paying for the privilege of using a product that they paid for in the first place.
But this hasn’t happened yet. Perhaps Psychiatric Assessment Inc. will turn into a nonprofit and the CAT will end up being free. How useful would it be? Find out in Part 3.