About that New Antidepressant Study

By Neuroskeptic | February 24, 2018 7:52 am

A new Lancet paper about antidepressants caused quite a stir this week. Headlines proclaimed that “It’s official – antidepressants work”, “Study proves anti-depressants are effective”, and “Antidepressants work. Period.”


The truth is that while the Lancet paper is a nice piece of work, it tells us very little that we didn’t already know, and it has a number of limitations. The media reaction to the paper is frankly bananas, as we’ll see below.

Here’s why the new study doesn’t tell us much new. The authors, Andrea Cipriani et al., conducted a meta-analysis of 522 clinical trials looking at 21 antidepressants in adults. They conclude that “all antidepressants were more effective than placebo”, but the benefits compared to placebo were “mostly modest”. Using the Standardized Mean Difference (SMD) measure of effect size, Cipriani et al. found an effect of 0.30, on a scale where 0.2 is considered ‘small’ and 0.5 ‘medium’.
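To make the SMD figure concrete, here is a minimal sketch (with made-up numbers, not data from the paper) of how a standardized mean difference is computed: the difference between group means, divided by a common standard deviation.

```python
# Standardized Mean Difference (Cohen's d with a shared SD).
# Hypothetical illustration only -- not data from Cipriani et al.
# Suppose drug patients improve by 10.0 points on a depression scale,
# placebo patients by 8.2 points, and scores vary with SD = 6.0.
mean_drug, mean_placebo, sd = 10.0, 8.2, 6.0

smd = (mean_drug - mean_placebo) / sd
print(f"SMD = {smd:.2f}")  # SMD = 0.30, 'small' by Cohen's conventions
```

On this reading, a ‘modest’ SMD of 0.30 means the average drug-treated patient improves less than a third of a standard deviation more than the average placebo patient.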

The thing is, “effective but only modestly” has been the established view on antidepressants for at least 10 years. Just to mention one prior study, the Turner et al. (2008) meta-analysis found the overall effect size of antidepressants to be a modest SMD=0.31 – almost exactly the same as the new estimate.

Cipriani et al.’s estimate of the benefit of antidepressants is also very similar to the estimate found in the notorious Kirsch et al. (2008) “antidepressants don’t work” paper! Almost exactly a decade ago, Irving Kirsch et al. found the effect of antidepressants over placebo to be SMD=0.32, a finding which was, inaccurately, greeted by headlines such as “Anti-depressants ‘no better than dummy pills‘”.

The very same newspapers are now heralding Cipriani et al. as the savior of antidepressants for finding a slightly smaller effect…


I’m not criticizing Cipriani et al.’s study, which is a huge achievement. It’s the largest antidepressant meta-analysis to date, including an unparalleled number of difficult-to-find unpublished studies (although both Turner et al. and Kirsch et al. did include some). It includes a broader range of drugs than previous work, although it’s not quite comprehensive: there are no MAOIs, for instance, and in general older drugs are under-represented.

Even so, Cipriani et al. meta-analyzed the evidence on all of the most commonly prescribed drugs, and they were able to produce a comparative ranking of the different medications in terms of effectiveness and side-effects, which is likely to be useful.

It’s important to bear in mind, however, that the meta-analysis only included ‘acute’ trials of about 8 weeks’ duration. This is a big limitation, because a lot of people take antidepressants for much longer than that (I’ve been on mine for about 10 years, as long as the Kirsch et al. paper has been around). The absence of long-term antidepressant trials isn’t Cipriani et al.’s fault: there just aren’t very many of them out there, unfortunately.

Another caveat is that a meta-analysis is only as good as the data that goes into it, and one concern that hangs over pretty much all antidepressant trials is the issue of unblinding, which I’ve blogged about before. According to some people, all of the benefits of antidepressants might just be a placebo effect, driven by people who feel side-effects and then assume that the drug must be working, making them happier. I don’t subscribe to this view but there is very little good evidence either way.

Overall, there are no big surprises here. The new paper confirms what we already knew about antidepressants, and the media confirmed what we knew about the media.

  • Bernard Carroll

    Good comments, NS. PhRMA never got serious about studying clinically meaningful subtypes of “depression”, so most data in the meta-analysis just bear on a weak construct called “major depression.” Add to that corruption in the trials and a secular rise in placebo response rates, and an effect size of around 0.3 is about as good as we can expect. For MAOIs in atypical depression and for tricyclics in melancholic depression the effects are considerably greater. For instance, NNT (number needed to treat) is around 10 for SSRI drugs in current trials, compared with around 3 for TCAs in early classic studies.

    Bottom line: this analysis by Cipriani et al just bears on current muddled diagnostic and treatment practice, not on the true potential of at least some antidepressant drugs.
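The NNT figures in the comment above can be unpacked with a quick sketch: NNT is the reciprocal of the absolute difference in response rates. The response rates below are hypothetical, chosen only to reproduce the NNT ≈ 10 and NNT ≈ 3 values quoted above.

```python
# NNT (number needed to treat) = 1 / absolute risk reduction.
# Response rates here are hypothetical, picked to match the NNTs quoted above.

def nnt(p_drug: float, p_placebo: float) -> float:
    """Patients you must treat for one extra responder over placebo."""
    return 1 / (p_drug - p_placebo)

# A ~10-point response-rate gap, typical of modern SSRI trials:
print(round(nnt(0.50, 0.40)))  # 10

# A ~33-point gap, as claimed for classic TCA studies in melancholia:
print(round(nnt(0.70, 0.37)))  # 3
```

The smaller the drug-placebo gap in response rates, the more patients must be treated to produce one extra responder.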

  • Joar Guterstam

    The last caveat you mention, about the effect possibly being caused by unblinding due to side-effects, was addressed recently by Hieronymus et al. They found no support for that hypothesis: https://www.ncbi.nlm.nih.gov/m/pubmed/29155804/?i=2&from=hieronymus%20mega

    • rthorat

      That the authors of that study think it is meaningful is pretty shocking. You simply cannot draw the conclusions they do. For one thing, side effects are massively underreported in SSRI trials, so a comparison of side effects to responders is going to be flawed from the start. There are other flaws with the logic as well.

      Anyone who has taken an SSRI knows that unblinding is inevitable, and in the cases where unblinding has been measured, it is very high. I don’t remember precise figures, but I believe ~80% of patients and ~90% of the doctors correctly guessed group assignment. Not much analysis has been done of this, but I suspect the real numbers are even worse. My suspicion is that pretty much everyone unblinds, but some of the placebo responders may believe they are on the drug due to their recovery, and a few of the non-responders on the drug may believe they are on placebo (though this is less likely, due to side effects). The truth is these studies are totally unblinded. And it’s not surprising that drug companies are completely unwilling to do any blind analysis.

      • sweatpants

        I’m curious, any explanation of why doctors could guess the group better than patients themselves?

        • rthorat

          They know the side effect profiles of the drugs better. They are experts. They can probably also tell the difference between a placebo patient who thinks he has a dry mouth because the possibility was read to him before the trial and an actual drug group patient who has real dry mouth. Those would be my guesses.

        • Ozzo

          Probably the facial changes from the sudden-onset bruxism :)

      • Joar Guterstam

        Thanks for your input. I’m not saying that their study is the final word on this hypothesis, but to my knowledge it is the best and largest study so far to address it. In my view, it is more convincing than the suspicions of an anonymous commenter on the internet who does not cite any evidence at all.

        • rthorat

          That’s fine. For what it’s worth, my statement is not based on any specific evidence because it’s based on simple logic. They are using a poor dataset in ways that are illogical and claiming it proves their viewpoint, when it does not at all.

          I could do a point by point breakdown of all the ways their methodology is flawed, but there is not really the space here. There are many flaws, including their assumption that side effects primarily occur within the first two weeks (not all of them do). But the biggest by far is their assumption that side effect reporting is accurate. For example, sexual dysfunction is reported for less than 5% of patients in SSRI trials, yet real world data demonstrates the rate is more realistically 50-80% (or even higher). If only 10% of people with a particular side effect are reporting it, then any conclusions you draw based on that data are going to be wrong. In fact, you would expect to not find any association between side effects and recovery – as they did here.

          The reality is that it is impossible to measure blind breaking in this way. They are attempting to measure it indirectly using inaccurate data. The only accurate way to measure it would be directly – ask patients whether they are on drug or placebo and then do an analysis based on that data. Unfortunately, that data is rarely collected, and when it has been collected it shows the blind is routinely broken. This blind breaking problem has been known for decades, yet this data is still rarely, if ever, collected. It’s pretty obvious why.

          • Joar Guterstam

            Speaking of logic, the fact that adverse reactions are underreported to some degree does not explain why the ones actually reported had no impact on the therapeutic outcome, unless you really do suggest that 100% of all patients experience side effects to the degree that they’re sure they’re in the active treatment group (a very strong statement that would require extraordinary evidence).

            I also can’t help mentioning some of the huge problems with the analytic approach you are suggesting. For instance, the patients who feel they’ve benefitted a lot from the treatment will probably think they’ve been allocated to active treatment. But that fact doesn’t mean that it was all a placebo response in the first place (if it did, medications like insulin, penicillin etc. would also be questioned on the grounds that the blind may be broken by their efficacy).

          • rthorat

            The reality is that virtually everyone in the active treatment group does experience side effects and does break the blind. Studies that have looked into this have found that blind breaking is rampant, and I suspect that if you analyzed it in more detail you would find it is nearly universal in the active treatment group, though the data does not seem to exist, as far as I know. It does not require extraordinary evidence, in my opinion. I have known many people who have taken SSRIs and they all experienced very noticeable side effects. That is the norm. There are a small number of people who experience little or no side effects – they are typically fast metabolizers of the particular drug they are taking and are not getting a normal dose as a result. But they are in the single digits percentage-wise for any particular SSRI. The fact is that adverse effects are not just underreported by some amount, they are underreported by orders of magnitude, and virtually everyone will experience some of them.

            If virtually everyone on the drug experiences side effects, for which I believe there is ample evidence, then it is easy to answer your first question. The reason adverse effects seem to have no impact in the study is that whether someone reported adverse effects is somewhat random, and by taking a random slice of the whole you will end up with a similar result. That is pretty close to what the researchers found.

            Also, we have had good evidence for quite some time that not only do adverse effects not increase improvement, but they have a negative impact. Many SSRIs show negative dose-response curves, though they are mainly flat (which is a red flag in itself). Take Zoloft, for instance. One of the trials submitted for FDA approval showed 50mg as superior to all the other formulations, with decreasing effectiveness as the dose goes up. The reason for this is simple: 50mg is a small enough dose to break the blind enough to generate separation from placebo. But as you increase the dose, you increase the intensity of side effects and risk decreasing effectiveness as a result of the adverse impact of those side effects. Over time, the drug companies learned to titrate the higher doses upward to reduce those side effects. When they did that, they increased the effectiveness of the higher doses. The data shows that low doses of SSRIs produce enough side effects to break the blind, while higher doses begin to produce intolerable side effect intensity.

            As for your second paragraph: indeed, many patients taking penicillin today would break the blind because they get better from the drug. But…that’s only because we already know the efficacy of penicillin. If we were testing a new drug with no track record, patients would not necessarily break the blind in this way, and even if they technically did, it would be irrelevant. Why? Because we measure the effectiveness objectively, not subjectively. Patients breaking the blind in a trial of antibiotics is not a big problem because the infection is physical, not mental, and it is measurable by objective standards. Whether the infection resolves is not a judgment call by the clinician – it is a matter of fact. In contrast, when dealing with mental health, everything is a subjective assessment by the physician. If the blind is broken, it is not just a problem, but a crisis for the trial, because it colors the subjective assessment of the physician, and we have a lot of data that shows it does indeed color their judgment.

            The same also applies for insulin, of course. We can objectively measure whether it is effective. In contrast to that, things like ibuprofen for headache are a serious placebo problem, though ibuprofen does not have the same propensity for side effects until you get to much higher doses, so I suspect there is much less blind breaking.

          • Joar Guterstam

            For almost all of your statements I see a virtual [citation needed] tag. I’ve met hundreds of patients taking SSRIs, and many of them have explicitly denied any important side effects. In several of the trials I’ve worked with (not in this particular field), I’ve been very surprised when breaking the randomization code, since patients with a number of side effects (and sometimes also therapeutic effects) turned out to have gotten placebo. Therefore, the hypothesis that the antidepressant effects of SSRIs are due to unblinding is not plausible to me.

            In all, it seems we have very different prior views of the probabilities involved here. So perhaps we should just “agree to disagree”. Thanks for the discussion!

          • rthorat

            No problem, I enjoy the discussion. Of course, if we were to have a real debate, we would both need citations for our positions. Then we could debate the merits of those citations. But this is not really the space for that.

            I will note that elsewhere in the comment thread I did mention that I believe some number of placebo responders will believe they are on drug because they will imagine mild side effects and coupled with their improvement will assume they are in the drug group. I believe this is nearly all the patients who do not unblind in these studies, though concrete evidence does not exist either way.

            Also, I need to mention that unblinding is a large source of bias of these trials, though it is not the only one. I mentioned elsewhere here there are probably a dozen or so indicators or causes of bias. I listed some.

            I will leave you with this discussion of unblinding and bias by Peter Gotzsche: https://www.youtube.com/watch?v=GNpGe5r0jI0.

          • metamorphosisfour

            Rthorat, as a co-author of the Hieronymus paper, I feel the need to clarify a few things.

            i) I agree that the reporting of adverse events in clinical trials is most often poor and incomplete. In the paper, we mention that it’s possible that we have missed a relation between outcome and side effects, if the side effects were not registered. However, with the material readily available, and no or very few studies actively asking about blinding, we did the best we could. I do think our patient-level analysis is more accurate than meta-analyses at the study level, and that our paper is enough to refute the Kirsch hypothesis.

            ii) In another recent paper (Näslund 2017) we also investigated whether increased anxiety levels, based on ratings of individual items of the HDRS scale, predicted outcome, and found no such evidence. As you can see in that paper, the overlap between reported AEs and increased ratings on the scale was incomplete.

            iii) If antidepressant outcomes really were only due to unblinding, how come we so often see a significant difference between two antidepressants in head-to-head trials without placebo, even when their side effect profiles are very similar?

            iv) Regarding dose-response, we have another paper (Hieronymus 2016) refuting the claim that the dose-response curve for (at least three) SSRIs is flat. Please continue discussing if you don’t agree – I enjoy receiving criticism.

          • metamorphosisfour

            To add one thing about our study – we did in fact not only look at the first two weeks of treatment; in the supplement you will find a sensitivity analysis for six weeks as well. Re: the Götzsche video, he consistently ignores evidence that runs in a direction other than the one he wants. Moncrieff did indeed find that antidepressants are better than active placebos, but in a post-hoc analysis she eliminated the one study that disproved her claim that they aren’t. If the logic about slopes and the placebo group improving just one week later were right, then in a few months both groups would be infinitely happy (but as he points out himself, this is not a good way of looking at the data). Regarding suicide risk, he talks about his own unpublished data, but ignores e.g. the Jakobsen 2017 meta-analysis he called “the best ever”, which found fewer suicides in SSRI- than placebo-treated patients (and much other research, and real-life data). He published his own meta-analysis of healthy duloxetine-treated women and claimed that “antidepressants double the risk of suicide” (yes, this was an actual headline) while in his material there were zero suicides and even zero cases of suicidal ideation in both groups.

          • rthorat

            In my experience, it is not Götzsche who ignores evidence that runs in another direction. That is all I will say about that – others can make their own analysis. What you say about Moncrieff is misleading, as Götzsche himself explained (and so did Moncrieff). One study was eliminated because it had poor methodology, impossible results, and evidence of bias.

            Both groups are not infinitely happy one week later. The slopes obviously flatten, as can be seen in virtually any of the data (including the Hieronymus 2016 data you referenced).

            I am inclined to disbelieve your accusations RE: Götzsche and suicide, but I will look up your claims. He has stated he did the best he could with the suicide data as it is, but it is all fraudulent, so the real rates are hard to know. That is consistent with what I have seen elsewhere.

            It is also consistent with recent revelations in the Dolin vs GSK trial. Do you have any response to the revelation that Paroxetine increases suicidality by a factor of 8.9? Do you have any response to the 22 suicides of patients taking Paroxetine vs 0 suicides of patients taking placebo? Or the fact that it was originally reported there were 2 placebo suicides but it turns out they occurred during the washout period? GSK’s fraud is on record. So is Lilly’s and Pfizer’s. They all played fraud games with the suicide data. For reference, here are the 22 people who committed suicide on Paroxetine during clinical trials (one was a murder-suicide): https://www.baumhedlundlaw.com/wp-content/uploads/2017/04/dolin-exhibit-347-img.gif

          • metamorphosisfour

            First, there were other studies in the Moncrieff analysis that also had poor methodology – but they remained.

            Second, I ask you to have a look again at what you write about Götzsche’s conclusions: “it is all fraudulent”, “his data is an underestimate”, “it is all consistent with”. No, I’m sorry, but unpublished data and speculation prove nothing. Why do you think he hasn’t published his FDA suicide data? And no, the link between insomnia (the most common duloxetine adverse event, which he states predicts suicide) and suicide is extremely weak.

            With that said, of course there are numerous unfortunate examples of people taking SSRIs and committing suicide. SSRIs have been taken by roughly 100 million people (my estimate), so it’s really not hard to find cases like the ones you have. People commit suicide on antidepressants, but most importantly, a majority of suicides occur in patients with no contact with psychiatry – this is the real problem that has to be solved. My overall point is, case reports aside, you are less likely to find any correlation between antidepressants and suicide on a larger scale, and in the real world, suicide rates have dropped with increasing antidepressant prescription. Also, do you know what happened the year after the FDA started with the black box warning? Suicide rates for children and adolescents increased to record levels. Lastly – if you don’t believe the industry-sponsored research – have a look at independent trials. Suicides are extremely uncommon.

          • rthorat

            First: As Moncrieff noted, generally all studies of that timeframe had at least somewhat poor methodology. You cannot produce any analysis if you eliminate all the studies with poor methodology. The rejected study was rejected because its methodology was prone to bias AND the results indicated bias was present. Also, the poor methodology risks biasing the studies in favor of the active drug. Thus, it is not fatal to an analysis where the active drug was found to have no benefits anyway, because the risk of bias does not run in the opposite direction (drug trial sponsors are not going to intentionally bias a trial against their own drug). A lot of people I talk to have a hard time understanding that methodological issues in clinical trials often only run in one direction. But this is true: methodology is usually considered poor because it risks introducing bias, and bias will typically only run in one direction – the direction of the interests of the sponsor. Of course, it is desirable to have cleaner data to analyze, but this is the data we have. And pharma stopped doing active placebo studies for obvious reasons. Their stated reason is illogical and obviously bogus, but I digress.

            Second: You seem to imply that Gotzsche is being dishonest and/or not accurately portraying the data he obtained. Otherwise, why the criticism for not publishing? I do not know why he has not published it, but I do know he is a rather busy man. But I find it an extraordinary implication that the co-founder of the world’s pre-eminent organization for the evaluation of the rigor of trial data would misrepresent data in his analysis. Of course, no one is perfect, but I find it hard to believe that Gotzsche would misrepresent any of this data, particularly given that anyone can request the same data themselves and make their own analysis. If people are so concerned that he has not published the data, why don’t they request the data and do their own analysis? There is good reason why Gotzsche states that the suicide data is fraudulent: because we know it is. We know for a fact that the data submitted for fluoxetine, sertraline, and paroxetine were all fraudulent. We know that suicidal events during the washout period were counted as placebo events and that suicidal events in the drug group that occurred after the study ended were also counted as placebo suicides. That is fraud. And all three of those common SSRIs did it. We also know that all of them hid suicidal ideation by labeling it “emotional lability”. And they did so post hoc in some instances, after the blind was broken. That is fraud.

            I find your response to the paroxetine suicide data lacking. Of course, suicide is a rare event, but it does occur. And even if SSRIs were wonder drugs, some people taking them would commit suicide. But that is not what we have here. We have 22 suicides, not out of 100 million, but out of only thousands. There were roughly 4x as many people in those trials on paroxetine as on placebo, so you would expect to see 5-6 placebo suicides if paroxetine does not cause suicide, but there were in fact no placebo suicides. 22-0. I’m having a hard time calculating the odds ratio on that because my computer doesn’t divide by zero. But you can quantify the attempted suicides. And when paroxetine was originally submitted for approval, through some great fraudulent data manipulation, GSK claimed an odds ratio of 2.6. It was low enough that they were able to convince the FDA it was okay. Years later, the FDA found some of their deception and the odds ratio was revised to 6.7. At trial, David Ross, formerly of the FDA, testified that the most up-to-date data indicates the odds ratio is 8.9. That is astounding. GSK’s own clinical trial data indicates that patients randomized to paroxetine have a 9x greater risk of attempting suicide. Ross also testified that in his practice he does not prescribe paroxetine because he believes it is both unsafe and ineffective. Which is a strong statement, considering that Cipriani places paroxetine toward the top of the list for efficacy. This information about paroxetine and suicides should be alarming to anyone. I think it demonstrates serious criminal behavior by GSK. And it boggles the mind that paroxetine is still approved by the FDA. And I think it represents serious negligence for any provider to prescribe it, given the data.

            Lastly about your data points. No, in the real world, suicides have not dropped with increasing antidepressant prescriptions (in the real world the number of people disabled by depression has risen dramatically with increasing prescriptions of antidepressants). In the real world, population level data can be misleading because suicide rates are heavily dependent on external factors. In some Western countries, suicides have dropped, though they are on an upward trend again. In other countries, such as Japan, suicides have increased dramatically. In fact, Japan is one of the heaviest prescribers of SSRIs and they saw a dramatic increase in suicides along with a sharp increase in prescriptions. But the population level data is difficult or impossible to interpret and should generally not be used. As for the black box warning: that misleading claim was pushed by Robert Gibbons, a man who seems to commit fraud in every study he publishes. I remain amazed that any journal would still publish him or that he is still employed by any respectable academic institution, given the numerous examples of fraud publicly available. The obvious answer to why Gibbons can still publish is that he is very connected to many pharmaceutical companies who have great pull with the journals.
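The 22-vs-0 arithmetic in the comment above can be sketched numerically. A zero cell makes a raw odds ratio undefined, which is conventionally handled with the Haldane-Anscombe correction (adding 0.5 to every cell). The patient totals below are hypothetical, chosen only to match the ~4:1 allocation ratio mentioned above, so the resulting number is purely illustrative.

```python
# 2x2 table: suicides vs. no suicides, drug vs. placebo.
# Only the 22-vs-0 counts and the ~4:1 allocation ratio come from
# the comment above; the denominators (8000 / 2000) are hypothetical.
drug_events, drug_n = 22, 8000
placebo_events, placebo_n = 0, 2000

# Expected placebo suicides under equal risk (the "5-6" figure above):
print(drug_events * placebo_n / drug_n)  # 5.5

# The zero cell makes the raw odds ratio undefined, so apply the
# Haldane-Anscombe correction: add 0.5 to every cell of the table.
a = drug_events + 0.5                  # drug, event
b = drug_n - drug_events + 0.5         # drug, no event
c = placebo_events + 0.5               # placebo, event
d = placebo_n - placebo_events + 0.5   # placebo, no event
print(round(a * d / (b * c), 1))       # ~11.3 with these made-up totals
```

The corrected odds ratio is extremely sensitive to the 0.5 added to the empty placebo cell, which is one reason a zero-event arm is hard to summarize with a single number.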

          • metamorphosisfour

            Re: Moncrieff, you can also not discard antidepressants based on five heterogenous trials on different drugs, but that’s another problem. Whether the result of the one study is unlikely or not depends on your predefined opinion about antidepressants – as is evident here and in our discussion above, if you believe antidepressants work, Moncrieff et al. is (bad) proof that they do. If you believe they are useless – it’s (equally bad) proof that they don’t.

            Re: Götzsche, yes, I do believe he is being dishonest, and I have given several examples of this. Another one: he states that many thousand people die from antipsychotics. I do not disagree – they have terrible side effects – but the one study he gets his numbers from is a meta-analysis in old people with dementia. The thing about old people with dementia is, they are very likely to die, especially when they are psychotic and handed one of the more horrible antipsychotics such as haloperidol. This creates a huge inflation in mortality when he applies these numbers to the whole population receiving antipsychotics, regardless of age or condition. Does he ever mention this problem? No. Does he report studies finding vastly different mortality rates for different antipsychotics? No. As you mention, with the expertise he has, you certainly expect more from the guy. I do not, however, accuse him of lying regarding the suicide data; I’m simply stating that I have no way of checking it, and I won’t take his word for it. Mind you – the world’s pre-eminent organization for the evaluation of the rigor of trial data specifically distanced itself from his views on psychiatric drugs (http://www.cochrane.org/news/statement-cochrane).

            You talked about logic, so I’m gonna have to point out where I think your logic fails regarding suicides. There have been sickening cases of fraud in some trials; we can agree on that. You want to make the point that because of this, all or most trials are fraudulent, or at least at very high risk of being so. This means that we cannot trust data on suicides. But you then also use that same data to argue that suicides are more common with SSRIs. I believe this is called the “principle of explosion”, according to which any statement can be proven from a contradiction. In this case I believe I am holding a dialetheist point of view, meaning I do think antidepressants can both cause suicidal behaviour in rare cases, and protect against it, as for example in Näslund 2018, BJP. If there were any more cases of fraud, I am pretty sure that they would have been discovered by now, as the incentives for finding them are huge. I have checked myself in the many trials and confidential study reports that I have access to, and found nothing more than what has already been reported (sorry for referencing unpublished data but I think you understand why). Again, I do however agree with your last point, that paroxetine should be avoided if possible.

            About the real world – I recommend for example Gusmao 2013, PLOS ONE. For almost every European country listed where long-term data is available, suicides go down with increased prescription. It’s hard not to see it. Of course you can find some examples in the other direction – however Japan is not one of them. There was an increase in suicides between 1997-2004, but since 2009, suicides have fallen back to 1997 levels (https://www.nippon.com/en/files/h00158en_fig011.png). I believe antidepressant prescription levels have remained quite similar during at least the latter period. Nakagawa 2007 J Clin Psych also talks about this, though I haven’t had time to look more closely at their data. A good example that doesn’t necessarily prove my point is USA – where the increase in suicides is likely attributable to the opioid epidemic – now that’s a pharma scandal if any. And of course – the real-world data merely shows correlations, and everyone except Robert Whitaker knows that you cannot infer causality from correlations. But if antidepressants frequently caused suicides – we would absolutely have seen at least a small increase when the prescription goes from 0 to >10 % of the population. This is my point.

            Regarding Whitaker and Gibbons, I do have to agree with Whitaker here that Gibbons did not present his data in a very good way, this is actually new criticism for me so I am grateful to hear that. However, this does not preclude that the number of suicides increased after the FDA warning. See McCain 2009 Pharmacy and Therapeutics for example. I know, he relies partly on Gibbons, but there are some more important points there although I do not agree with every word. And yet again, to end in some agreement with you, regarding Gibbons FDA data the criticism is fair.

          • rthorat

            i) I disagree that this data can refute the Kirsch hypothesis for reasons already stated. I do not doubt that you did the best you could with the available data. But I do not think that providing a good analysis of bad data can make good science. I just don’t think the analysis has any relevance due to the flaws in the underlying data.

            ii) I have no comment because I am not familiar with the paper at this point.

            iii) Outcomes are not ONLY due to unblinding, but rather bias in general. Unblinding is just one form of bias – a prominent one. Many of those head to head trials have strange, often inconsistent results – an indicator of bias. I will answer your question about why we see different results for similar drugs in head to head trials with another question: why do we see different results for citalopram and escitalopram? It’s all but admitted that they are the same drug, with the same pharmacological properties in terms of serotonin reuptake inhibition. Yet one is supposedly superior to the other, and it’s an amazing coincidence the one that came second is superior. It’s just bias in the trials – one of the dirty secrets being that data is often manipulated after unblinding. A general problem I see with the papers analyzing unblinding and side effects is that the patients are suffering the side effects, but the clinicians are the raters and they are the ones likely unblinded. Severity of side effects are unlikely to have much effect on clinicians, who are experts on the side effect profiles of the drugs and unblind at very high rates. If nearly 100% of patients/raters in the treatment group unblind (and I believe the number approaches that), you will not find any signal that links side effect number or severity to unblinding. Also, I should note that if you look at the data for the new Cipriani study that has been in the news, the efficacy of various antidepressants lines up pretty good with their side effect profiles. That is, the shorter half-life, higher side effect drugs seem to have higher efficacy. Venlafaxine->Paroxetine->Sertraline->Fluoxetine. Those line up in the same order for half-life and efficacy. It’s pretty well known that shorter half-life equals greater “potency” equals great side effects (and the study data confirm this). So, the more side effects a particular SSRI causes, the higher its efficacy is in that meta-analysis. 
That’s consistent with more blind breaking due to more side effects. (This contradicts the dose response curve information somewhat, but as I said, interpreting that data is complicated). Amitriptyline tops the list for efficacy. It’s not an SSRI, so a half life comparison is not valid, but it’s a drug notorious for its side effects. Mirtazapine comes in second. It is also not an SSRI, but also notorious for its side effects and horrible withdrawal.
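            The half-life/efficacy ordering claimed above can be checked as a simple rank comparison. A minimal sketch, assuming approximate textbook elimination half-lives for the parent compounds and taking the commenter's claimed efficacy ordering at face value (neither set of numbers is taken from the Cipriani paper itself):

```python
# Sketch: does the claimed efficacy ordering of these four drugs track their
# elimination half-lives? Half-life values are approximate textbook figures
# for the parent compound (fluoxetine's active metabolite norfluoxetine has
# a much longer half-life still); the efficacy order is the commenter's
# reading of Cipriani et al., not data extracted from the paper.

half_life_h = {
    "venlafaxine": 5,    # approximate hours
    "paroxetine": 21,
    "sertraline": 26,
    "fluoxetine": 96,
}

# Claimed ordering, most to least effective.
efficacy_order = ["venlafaxine", "paroxetine", "sertraline", "fluoxetine"]

def spearman_rho(xs, ys):
    """Spearman rank correlation for two rank lists without ties."""
    n = len(xs)
    d2 = sum((x - y) ** 2 for x, y in zip(xs, ys))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

drugs = efficacy_order
efficacy_rank = [drugs.index(d) + 1 for d in drugs]  # 1 = most effective
halflife_rank = [sorted(drugs, key=lambda d: half_life_h[d]).index(d) + 1
                 for d in drugs]                     # 1 = shortest half-life

print(spearman_rho(efficacy_rank, halflife_rank))  # 1.0: orderings coincide
```

            With only four drugs and no ties, a perfect rank agreement (rho = 1.0) is easy to obtain by chance, so this confirms the orderings match but says nothing about why.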

            iv) The issue of whether there is a dose-response curve for SSRIs is far too large to cover here, and the study you reference has a lot to unpack (I did read it and have some thoughts, but there are a lot of complexities). I will just say a few things. One, the curves appear to have changed over time, most probably as a result of a change in trial methods. Many of the original trials for drugs like fluoxetine and sertraline did not titrate up to the higher doses. In those trials, the response curve is inverted. For example, the main positive study submitted to the FDA for approval of sertraline showed the lowest dose as the most effective and effectiveness falling with each dose increase. This is likely because side effects for higher doses were very severe when patients were started immediately on the higher dose. There are a lot of complexities analyzing what is going on there, though. Another issue with all of these studies is that it is known that dropouts due to side effects increase as dose increases. These dropouts are not random. They are patients who are less likely to improve and, in fact, may deteriorate on the drug. At lower doses these patients may stay in the trial, but improve less or deteriorate. At higher doses these patients disappear, skewing the remaining patients toward those who tolerate the drug. This is a huge problem for any analysis of dose response curves.

            Even accepting that the data in Hieronymus (2016) is correct, it shows very minimal differences between doses on the primary endpoint. They may achieve significance, but they are small in magnitude. And on the HAM-D scale, they are not significant, which brings up something very important. I believe the use of the depressed mood item as a primary endpoint in this meta-analysis (and others) is scientifically unsound, for several reasons. A) We can assume there is a reason the HAM-D was created and psychiatry did not just determine depression based on the response to this single question. Therefore, we should be skeptical about retroactively deciding this single question is a better determinant of who is depressed. B) The decision to adopt this single question as the best indicator was made retroactively by cherry picking from existing data. All the data we have on SSRIs from past trials was literally mined to see if that data could tell a better story than what the HAM-D was telling. That is the definition of cherry picking. The primary endpoint in those trials was the HAM-D score, but it was not telling a very convincing story for those who believe SSRIs work. So the data was sliced up until it was found that the depressed mood item made SSRIs look better. It was then proposed that future analyses should switch to this cherry picked metric. I don’t think I need to describe how unsound this is from a statistical perspective. C) This is the most fatal of all the problems. Why does the depressed mood item separate SSRIs from placebo even more than the HAM-D? The answer is simple: if your effects are a result of bias, then the HAM-D item most vulnerable to bias is the one that asks whether you have depressed mood. A biased clinician is far more likely to fudge the ratings for the depressed mood item than any other item on the questionnaire.
            So, if all the mountains of evidence that indicate bias is widespread are correct, then you would expect the depressed mood item to be more skewed than the HAM-D as a whole. And that is what you see. The “superiority” of this single item in showing antidepressant efficacy is itself consistent with the theory that SSRI efficacy is a result of bias.

          • metamorphosisfour

            I will try to answer in a condensed way: i) The Kirsch hypothesis is based on similar data but with blunt methodology – if you can prove one thing with it, you can also disprove it. ii) Ok. iii) Another massive case of “references needed”. Or you can have a look at the Cipriani trial again – clomipramine and desvenlafaxine certainly didn’t fare very well and they can have equally nasty side effects as amitriptyline. iv) The problem you mentioned with dropouts would skew the curve in disfavour of higher doses. The problem you mentioned with titration studies likely reflects that more severely ill patients could require higher doses, or at least the investigators think so and raise the dose. This is why it’s important to only include fixed-dose studies in your analysis, which is what we did here.
            Further: A) Yes, we mention exactly this in the paper. B) Cherry picking would consist in us looking at depressed mood only, and ignoring or not presenting results for other items. This is not correct. I do agree, as mentioned here and in the paper, that exploratory post hoc analyses should be read with caution. This however doesn’t preclude that the data is there, and that it shows a consistent and strong effect on study participants’ mood, which is what an antidepressant is supposed to do. Effects of this magnitude are beyond what might be explained by bias. C) I would really like to see your citation for that. I do not think the answer is simple – it rarely is in psychiatry. There are other items that are equally subjective, where there is no or little effect of SSRIs. Depending on which drug you look at, there is not always a big reduction in depressed mood (benzos, for example, do not reduce depressed mood but still show an “antidepressant” effect because they affect the sleep and agitation items on the HDRS).

          • rthorat

            i) Kirsch might have used a similar overall data set, but his work has no reliance on the poor side effect reporting data. Yes, there are many other problems with the research data, as Kirsch has noted, but in general those problems would put the data at risk of bias in favor of the drugs and not the other way around. iii) There are many reasons why a particular drug may not fit neatly in the “side effects = efficacy” hypothesis. For example, clomipramine is an old drug, and the trials may be lacking in methodology or lacking in bias because they were not performed by a party with a vested interest, in contrast to other drugs. Also, I see that clomipramine has a very high dropout rate, which could skew the data in any number of ways. Desvenlafaxine is another example of one of those curious drugs. It is essentially the same drug as venlafaxine, yet its efficacy appears much lower. That doesn’t make any logical sense, and is an indication there are serious problems with the trial data for one of those drugs, if not both. I will also state explicitly here what I hinted at before: the drugs with higher side effects have higher potency, and thus they could appear in the efficacy order because they are in fact more efficacious due to their greater potency. So, what I am saying is that the order of efficacy in that list is consistent with both my position and yours. It doesn’t conflict with either. iv) I disagree that dropouts would skew the curve in disfavor of higher doses. I believe the opposite is true. Exploring that would take a lot of space. As for fixed-dose studies, it is my understanding that patients in those studies are still titrated up to the fixed dose. The dose they are assigned is predetermined, but they will begin at a lower dose and work up to the predetermined dose. As far as I can tell, this has been standard procedure for quite some time, but was not done often in the early days of SSRI trials.
            B) It would be fine to report the results of looking at the depressed mood item alone, so long as you included the caveats you mentioned. But it is not acceptable in my opinion to make that cherry picked statistic the primary endpoint of the meta-analysis and base the discussion and conclusion on that data point. It is even worse when the data that should be the primary endpoint shows no statistical significance, as it did in the study. This is not to say that HAM-D is perfect – there are plenty of valid critiques of HAM-D out there, and I agree with many of them (the benzo/sedation thing is one problem you mention). I also take exception to the assertion that effects of that magnitude are beyond what might be explained by bias. As Gotzsche has explained, one of his researchers found that biased raters in all studies skew their ratings by an average of 36%. If only 10% of raters in SSRI trials unblinded, with such small effects observed, that 10% unblinding could explain all the efficacy. And far more than 10% unblind – there is no doubt of that. I am not saying that all the bias is caused by unblinding, as we have solid evidence there are many other ways that bias enters the data. But unblinding is a substantial part of it. C) I don’t think I need a citation to state the obvious, which is that if a rater is biased and predisposed to skew scores in favor of an antidepressant under study, then the most likely item where he would introduce bias is the most straightforward question that asks if the patient has a depressed mood. That seems like common sense to me. And it also seems like it would be something that is impossible to prove empirically. I could not even begin to imagine how one could create a scenario where you could prove or disprove it.

          • metamorphosisfour

            i) If we agree on the poor side effect reporting, I have pointed you to the study where we explore side effects as measured on the HDRS scale. Does it alone prove the case with 100% certainty? Does it do that together with the Hieronymus article first mentioned? No, but it is still more precise than the Kirsch data. iii) I say let’s agree to disagree on that one – I do have to point out the many ifs and buts in your argument however, and that we found absolutely no effect of side effect severity in the mentioned Hieronymus paper. iv) You will have to explain the skewing, I don’t understand your argument. Titrating depends on whether the target dose is high or low. B) What Götzsche presents is a (in my opinion, and I think this boils down to opinions) very unlikely case. Are all raters inherently evil, and fake good results for profit? Can you bribe the 10000+ people involved in the studies mentioned in Cipriani et al? I think, like most conspiracy theories, that it seems very, very improbable. I’ve yet to see these 10% that apparently fare very well in trials, and I’ve spent many hours looking at trial data. I’m also having a hard time seeing how the 36% study directly applies to all antidepressant trials. And I have yet to hear an explanation for the many failed antidepressant trials. Why on earth would Merck spend billions on Aprepitant trials, only to see it fail and lose another tens of billions in profit? If Big Pharma are faking their results – they certainly suck at doing so. Side note: a funny example of Götzsche’s bias – in his meta-analysis of psychotherapies for suicidality, he claims the unblinding effect in psychotherapy vs. waitlist is nonexistent. I could not agree less. You know when you are receiving psychotherapy. Regarding our study I accept your opinion, which again is an opinion and not a factual or statistical inconsistency, but I disagree with it.
            C) The depressed mood item scoring indicates that the rater has to question the patient, or listen to him/her. Most symptoms are scored in a similar way and are thus equally likely (or unlikely) to be skewed. But I do certainly agree with your last point – we will never be able to get 100% proof for this. I think many points in our interesting discussion do end in similar conclusions.

          • rthorat

            I think some of the topics are discussed enough, but I wanted to comment on a few. First, about the Gotzsche data on biased raters. You ask if all raters are inherently evil and fake good results, but you don’t have to be evil to bias your ratings, you only have to be human. You don’t have to “bribe” anyone. I find it naive in the extreme that anyone would think that human beings who are hired by a drug company and whose salary and income stream depend on keeping that drug company happy would not be skewed by that relationship. We know they would. Much of the bias is probably even subconscious, but it is there, as the data Gotzsche presents shows. It shows that across all medical trials, biased raters on average produce a 36% skewed result. That can be applied to antidepressants, and in fact, given the highly subjective nature of antidepressant trials, 36% is likely a low estimate. As for the 10000+ people involved in the trials used by Cipriani, others here have discussed at length the flaws in those trials, including that a huge number of them were ghostwritten (among many other problems). And for an explanation for the failed antidepressant trials…do I really need to give one? There are many reasons why a trial may fail to be sufficiently biased to produce the desired result – I don’t think I need to list them here. But are you really implying that the huge number of trials that fail to find an effect are actually an indication that the positive trials are legitimate? Implying that the failure to find an effect is actually an indicator that the effect found in other trials is real? That is a bizarre argument. As for the claim that Gotzsche says unblinding in psychotherapy vs waitlist is non-existent: I read several meta-analyses by Gotzsche and could not find such a claim. It does not sound like something one of the world’s leading experts on placebo would say.
I did find a statement that properly blinding psychotherapy trials is really difficult, and not all that important in most contexts. As Irving Kirsch has said – it’s not that important whether psychotherapy results are from placebo effects because there are no substantial side effects to psychotherapy. Psychotherapy is not going to cause prolongation of the QT interval. It’s not going to cause withdrawal effects. It’s not going to cause weight gain. It’s not going to cause birth defects or autism. Now, if we are evaluating whether a particular method of psychotherapy works, it’s important to compare it to placebo or to another method. But otherwise, it is not important at all. Finally, you state that the rater has to question the patient, and therefore scores are unlikely to be skewed. I fail to see how this is true. In fact, it is this interaction that can further tip the rater into breaking the blind, and also is an opportunity for the rater to coach the participant in a certain direction.

            I would also like to respond to another comment that I cannot find anymore, perhaps it was deleted. Again, I will say I find your belief that Gotzsche is dishonest to be extraordinary. Each time you have presented an example of Gotzsche’s “dishonesty” I have shown that you are mistaken. Here again, your example is wrong. You state that he uses mortality data in those over 65 years old to inflate mortality for everyone. But he does not. In fact, his death estimates are based only on patients over age 65. Quoting from the book, “I have deliberately been conservative, and have not factored in deaths occurring in those under 65.” You can see all this data in chapter 14 of his latest book. Along with only estimating deaths for those 65 and older, he also lists a number of caveats, and makes clear that these are merely estimates based on the sparse data available.

            You also include talk about the “principle of explosion” regarding suicides and you state that if there was any more fraud we would know about it. This is naive. The only reason we know about the fraud we do know is based on court cases. These cases have taken decades to go through courts, and the companies involved have sought at every turn to keep the data sealed, and were successful for many years. The reason to believe that there is more fraud is because literally every drug that has received scrutiny has been shown to have fraudulent data regarding suicides. Every one. And the three most widely prescribed SSRIs – fluoxetine, sertraline, and paroxetine – have all been shown to have fraudulent suicide data. It defies simple logic to think that all the me-too drugs that followed are suddenly clean and do not have the same suicide problems as the drugs they were based on. No, the most reasonable expectation is that we simply have not been given a chance to see the data yet. Also, there is no doubt that the main mechanism of action of these drugs is itself responsible for the suicidal ideation and that it is an inherent feature of this drug class (just as sexual dysfunction is). But that is another long discussion.

            In response to the population data on suicide, such as Gusmao 2013. Again, anyone who knows anything about this data knows there are dozens and dozens of confounding factors, none of which you can control for. They know that population level suicide data is useless in this context. Particularly when we have actual suicide data from trials. Those 22 suicides on paroxetine and 0 on placebo sure don’t seem to be lowering the suicide rate. What the Gusmao data show is only a correlation – a correlation that is ridiculous and falls apart as soon as you leave the EU for your data. It does not impress me at all that you can pick one small dataset in Europe and cherry pick it to prove your point. Everyone knows antidepressant prescriptions have been increasing for decades. Everyone knows suicides in Europe have been decreasing (at least up until recently). From there, it is easy to just grab some data and declare the correlation meaningful. Easy, but stupid. And I find it ironic that after citing Gusmao 2013, only a few sentences later you claim that Robert Whitaker is unable to understand causation vs. correlation (which is a false and silly claim). In reality, suicide rates in Western Europe and the United States began falling around the early to mid-1980s – before fluoxetine was even approved. It would be 10+ years before enough people would be prescribed SSRIs to have any noticeable effect on the suicide rate, yet the rate was already falling and continued falling at a similar pace until it leveled off in the 2000s. In the United States, the rate began climbing again and has reached 30 year highs, despite the increasing number of SSRI prescriptions, completely obliterating any theory about SSRIs lowering the suicide rate. At the same time, in Japan, SSRI prescriptions were climbing dramatically while societal suicide rates continued to rise dramatically. They have leveled off in recent years, but remain much higher than before SSRIs were introduced.
            Looking at charts of suicide rate plotted with SSRI rates, there is simply no correlation in many countries. New Zealand is another example. I could go on and on. While the number of suicide deaths related to opioids has increased (roughly doubled), that amounts to only a couple thousand more suicides per year in the US, far less than the increase we have seen. A quick back-of-the-envelope calculation indicates that opioid suicides are less than 1/5 of the total increase in the US. Not to mention that if the theory is true, suicide rates should continue to fall with the ever-increasing SSRI rates. The whole thing just falls apart on critical examination, so I will not waste much more time on useless population level data, where the problem of correlation vs. causation could probably be an academic case study.
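            The back-of-the-envelope arithmetic above can be made explicit. A rough sketch, assuming approximate CDC-scale totals (around 29,000 US suicides per year circa 2000, around 45,000 by the mid-2010s) together with the commenter's own “couple thousand more” opioid-related suicides per year – all three inputs are assumptions for illustration, not figures stated in the thread:

```python
# Rough check of the claim that the rise in opioid-related suicides accounts
# for less than 1/5 of the total increase in US suicides. All inputs are
# assumed round numbers: CDC-scale annual totals, plus the commenter's
# "couple thousand more" opioid-related suicides per year.

total_then = 29_000      # approximate annual US suicides, ca. 2000 (assumed)
total_now = 45_000       # approximate annual US suicides, mid-2010s (assumed)
opioid_increase = 2_000  # "couple thousand more per year" (assumed)

total_increase = total_now - total_then   # 16,000 more suicides per year
fraction = opioid_increase / total_increase

print(fraction)  # 0.125: roughly an eighth of the total rise
```

            On these assumed inputs the opioid component is roughly an eighth of the total rise, consistent with the “less than 1/5” claim; different input assumptions would shift the fraction proportionally.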

            As a final note on Gusmao 2013, I will note that Gotzsche mentions this study in his book. He is quite dismissive. This is the entire quote that references Gusmao 2013 as a footnote: “Some studies are involuntarily comical. For example, a study of trends in use of antidepressants and suicides claimed that there was a clear protective effect from the drugs when it was obvious by looking at the graphs that there wasn’t.”

            “Gibbons did not present his data in a very good way” is the ultimate understatement. He intentionally presented it in a fraudulent and misleading way, to convey the opposite of what the data actually showed. Gibbons has a history of doing this over and over. Throughout all this he kept his position at the University of Chicago and his position at NIMH, which is itself demonstrative of what is going on here – that a known fabricator would face no professional repercussions at all.

            Finally, on the subject of McCain 2009, I will only say the following: 1) as you state, he relied on Gibbons’s fraudulent study, 2) he relies on autopsy data, which has been shown repeatedly to be unreliable (it swings wildly based on changes in autopsy rates and procedures), and 3) he relies on population level data, which has also been shown to be unreliable for this purpose.

          • metamorphosisfour

            I do think my last comment was deleted, for some reason. I will try to provide a condensed answer:

            Blinding: Investigators are usually not hired by the drug company, at least not in big multi-center trials. They are also often not paid directly, but receive money for their clinic/university. You did not explain why the 36% or even a higher number would apply directly to antidepressant trials. There was no indication of sponsorship bias in the Cipriani paper, which unfortunately is the best proof we have here, but as they mentioned, this is poorly reported. Again, this is the one argument on both our sides that comes down to belief, as there is no direct, absolute proof in one direction or the other, only speculation. You can call me extremely naïve, it doesn’t matter.

            Ghostwriting: It is a vast misunderstanding, stemming from a tweet by David Healy, that Cipriani based his findings merely on ghostwritten papers. Of course, as in any other meta-analysis, they have gone through the clinical study reports.

            Failed trials: Yes, you need to give an explanation. You cannot merely state that a comment is “bizarre” and move on. I do not argue, for instance, that failed SSRI trials would prove that SSRIs are beneficial – that would indeed be ridiculous. I argue that if it’s easy to fake a trial, there is no credible explanation for the many recent failed trials. The patent for SSRIs has expired and the pharmaceutical companies are in dire need of a new cash cow. Aprepitant would have been perfect in many ways.

            Götzsche: The claim about no blinding effect in psychotherapy is from “Cognitive behavioural therapy halves the risk of repeated suicide attempts: systematic review”. The direct quote is: “The authors of the Cochrane review classified most of trials as having high risk of bias for blinding of participants. However, we do not agree that this is an important problem with the trials we reviewed.” So essentially, Götzsche overrode Cochrane methodology to provide his own assessment. He argues that “any bias related to this outcome [suicide attempts] would be expected to be small compared with the size of the effect we found” and that “our outcome, a new suicide attempt, is pretty objective”, but for antidepressants, he writes “It was difficult to know whether the suicide risk was also increased in adults, as there has been massive under-reporting and even fraud in the reporting of suicides, suicide attempts and suicidal thoughts in the placebo-controlled trials”. Again, I do not disagree that there have been cases of fraud, but for God’s sake, do treat both groups evenly when comparing them. He concludes “it is not clear why any bias would be expected to exaggerate the effect of cognitive behavioural therapy compared to the psychological support or therapies that were given to control group patients.” If only he had treated antidepressant research like this! Also, I put his data in a forest plot and ALL of the positive effect by CBT could be explained by publication bias. Are there any forest plots in his review? No. I also fail to see how you have “disproven” my other claims about him – except the antipsychotics estimate, where I do stand partially corrected. From one of his presentations, I understood that he had used his estimate on the whole population. However, there are still too many caveats; as mentioned, the mortality in dementia is high, and there are huge differences in mortality between different antipsychotic drugs.
            See for example Tiihonen 2017, Schizophrenia Research, just to mention one. Even if he mentions the caveats in his book, this is not how the data is presented in other media.

            On psychotherapy having no side effects: No, just no. http://journals.sagepub.com/doi/full/10.1080/00048670903107559 and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4219072/ to name a few examples. Do we still need psychotherapy, even if it has side effects? Of course.

            On me being naïve: You can repeat it as much as you want, it doesn’t change the fact that there is no proof for your claim that “everything” is fraudulent. You are wrong in that only these three drugs have received scrutiny. CSRs, individual case reports and previously confidential data can be accessed by any researcher in the field who submits a claim, just like David Healy did. It is really, really hard to hide so many deaths in a trial – the fact that we know about the ones that you mention is indirect proof of my claim, not yours. The other SSRIs, except escitalopram, were developed in the same era, so an equal number of decades have passed. I also did not claim that no other serotonergic drug will ever cause any suicides – I think you are deliberately misunderstanding me here. But I do claim that even if there are rare cases of suicide in trials, the mean effect is beneficial, or neutral in the younger population. You still haven’t commented on our article which found exactly this. You are also mentioning the 22-0 suicide ratio in paroxetine while accusing me of cherry-picking. I concur. If I remember correctly, the mean SSRI exposure was also about four times longer than placebo exposure in Healy’s article about this. I might come back to this however, have to check it. And yet again, if you trust Götzsche, you should also trust “the best meta-analysis” as chosen by him, which showed half the suicide rate in SSRI treated patients.

            Moving on to the real-life data: Again you accuse me of cherry-picking when I analyse all of Europe, while the USA and Japan (where, again, you are wrong) supposedly “completely obliterate” my theory. Nice. Also, the several thousand opioid deaths are a very likely explanation for the whole of the increase, not just part of it. With that said, the USA’s prescription rates for antidepressants, ADHD medication, as well as most other drugs, are likely far too high, especially if you weigh in that poor people with no health care are very often untreated. You further accuse me of trying to prove causation, which I clearly stated I do not do, and then you try to do the same thing yourself. You also accuse me of believing that there should be a linear correlation between SSRI prescriptions and fewer suicides, which again, I think is a deliberate misunderstanding so that you can have an argument. The fact that Götzsche has probably looked at this article upside down does also not change that when one line goes up, the other one goes down.

            Lastly, about Gibbons: I already stated that his fraud, if that’s what we want to call it, did not change my conclusion about the number of suicides rising after the FDA black box warning. Also, your last claims are again suffering from a massive [citation needed] – without references they cannot be assessed, let alone disproven. If McCain found absolutely no proof for the suicide claim, even a big variation in methodology would not make this data useless.

          • rthorat

            Blinding: It does not matter that investigators are not directly hired by the pharma company. Their employer is, and the same incentives apply. The 36% number applies directly to antidepressants because it is an average across all medical trials. Of course, there is a lot of uncertainty because some areas will show more bias than others in trials, but if anything, given the highly subjective nature of antidepressant trials, they would be more prone to bias than the average, so 36% is a reasonably conservative figure to use. You can disagree, but I will not budge on that judgment.

            Ghostwriting: Yes, they went through the CSRs, but the point is that a huge number of the studies were performed by the pharma companies, then secretly published as being performed by some independent researcher. Suffice it to say that studies performed with this kind of deception at heart are not going to be the most unbiased studies. This is without going into all the other problems with these studies (washout periods, etc).

            Failed trials: I never intended to claim that trials are “faked” in the sense that everything is just made up (although in some cases data has clearly been altered after blinds are broken). For the most part, trial data is “faked” by using methodologies with known flaws that will bias the results – I have named many of those elsewhere, but some big ones are the unblinding problem and the use of patients who are not drug naive in combination with washout periods. There are many other flaws – for example, suicide counting is stopped 24 hours after the study drug is withdrawn, but withdrawal effects that lead to suicide can last weeks or months. Some studies that have looked beyond that 24 hour period have found many more suicides. One could write an entire book about the flaws in these trials that lead to bias. But what I do not think I need to do is provide an explanation for each time one of these trials failed to produce a biased result. Random luck, maybe? A patient mix that was high on the number of drug naive patients, therefore leading to fewer problems with withdrawal in the placebo group? Raters who were for whatever reason less biased than usual? Who knows.

            Gotzsche: I read that meta-analysis, and you are misunderstanding it. Gotzsche is correct in his analysis and his departure from Cochrane methodology. Why? Because Cochrane methodology is based on analysis of subjective criteria, such as HAM-D measurements, for example. But Gotzsche, as he states, is reviewing suicide attempts, a largely objective measure. The risk of bias with objective measures is much, much lower. It is difficult to bias something when it has objective data behind it, rather than subjective. His quote that the bias is expected to be small compared to the effect is perfectly logical. He is saying even if there is bias, given that the measures are objective, that bias must certainly be small compared to the effect size they found, and therefore it does not challenge the findings. As a side note, the trials do actually show that you can bias the results on objective measures like suicide by withholding suicide attempts from the data. But this bias all runs in the opposite direction of the effect found, so it also does not challenge the findings. When you then quote Gotzsche on antidepressants saying that it is difficult to know the true effect in adults, you seem to think he is having it both ways. But he is not. What he is referring to here is just what I said: the complete removal of suicides and suicide attempts from drug trials. I have provided several examples of this occurring. And Gotzsche has provided many more, where suicides from trials were reported one way at one time and another way at another time. Suicides seem to appear and disappear based on how much of the underlying data has been released to the public. Gotzsche’s position on this is not at all a contradiction with his meta-analysis position because in both cases the only risk of bias is in favor of no effect, but his analysis indeed found a large effect, in spite of any possible bias running the other direction.

            Psychotherapy Side Effects: I included the modifier “substantial” side effects because I had a feeling you would say psychotherapy has side effects. In reality, these “side effects” are just word games. Psychotherapy has “effects”. You cannot label subjective life events as “side effects”, it is silly and misleading. For example, the study you linked talks about patients divorcing their spouse. But who can make any kind of judgment about whether this is desirable or undesirable? If one has an abusive spouse is it a side effect when the patient divorces them as a result of psychotherapy? Going down that route is absurd. When I talk about side effects, what I mean is actual physical effects as a result of the treatment. Yes, you can receive bad psychotherapy or malpractice, but the end result of those events will be captured in the effect sizes. Whereas with pharmaceuticals, side effects may not be captured in the effect sizes. Whether a patient has a prolonged QT interval will likely not show up in the effect sizes of a small, short trial in any way. But it is dangerous nonetheless, and should be considered when evaluating the benefits of a treatment. The paper you linked listed some “side effects” of psychotherapy as “…treatment failure and deterioration of symptoms, emergence of new symptoms, suicidality, occupational problems or stigmatization, changes in the social network or strains in relationships, therapy dependence, or undermining of self-efficacy.” Those are not side effects. They are just life. And whether those events are positive or negative for the patient will be reflected in the effect sizes of the trials. It is important to know that psychotherapy may not work for everyone and can even have negative effects for some patients. But calling these results “side effects” and making them sound similar to side effects from medicine is not appropriate and serves to obscure and confuse.

            On the naive thing: I don’t think it’s productive to argue further, as I disagree and don’t think either of us will be persuaded. But I do object to labeling my use of the 22-0 numbers for paroxetine as cherry picking. First of all, this is one of the major SSRIs, one of the most effective according to the Cipriani data. And the dataset those numbers come from is the entire clinical trial dataset conducted by GSK. That is a full dataset. Yes, it is just one SSRI, but that is because it is the only SSRI for which we have received a full accounting. Paroxetine was developed in 1975 and FDA approved for MDD in 1992. It was 2017 before we discovered the true data behind suicides in these trials. That is 25 years. The order of SSRI approval is fluoxetine->sertraline->paroxetine->citalopram->escitalopram. We know from court cases and other efforts that the suicide data for the first three are false. To my knowledge, the underlying data allowing us to make any kind of judgment on the newer drugs – citalopram and escitalopram – has never been analyzed. You can argue all you want that we would know if the other drugs, including related classes, are fraudulent, but the fact is it took 25 years to get the information on paroxetine, and it’s still not clear if we fully know all the fraud in the fluoxetine and sertraline cases, or whether we have an incomplete picture. Fluoxetine was approved 30 years ago. We only know about paroxetine because of one specific court case – and for esoteric legal reasons that case was highly improbable and a major challenge, which is probably why similar cases have not been tried against the other SSRIs.

            Real-life data: Again, you seem to think I am cherry picking, but what I am doing is just disproving your argument. When you argue that SSRIs lower the suicide rate based on correlative data, you need to show that data from all or substantially all countries proves this correlation – and even then it is tenuous. But to disprove your theory, I need only show that more countries than random chance would allow do not in fact show the correlation. I did that by showing the US, Japan, New Zealand, etc. I do not need to go through the whole dataset and prove what I am saying, because I am not making any affirmative argument: I am merely poking holes in yours. To end this point, I will link you to a chart on Wikipedia showing trends in suicide rates in various countries. While the Netherlands, Norway, Sweden, Switzerland, and the United States show a steady decrease in suicide rate during a period when SSRI prescriptions were increasing, in all but Norway that trendline began before fluoxetine appeared on the market and continued unchanged through the period. Also, Japan does not match this trend at all, with a sharp downward trend, followed by a large spike, then a leveling out. New Zealand does not match this trend either. Finally, if the chart extended further forward in time, you would see a rise in the rate in some of these countries, even as SSRI prescriptions increased. This is the case in the US. And as I stated before, the rise in opioid-related suicides seems to be roughly 2,000 per year, while the rise in overall suicides per year seems to be 10,000 or more. This is not just an opioid-related increase.

            To summarize the discussion about suicides and population level data: I have zero interest in population level studies because they are likely to be inaccurate. They are an indirect way of proving or disproving causation. You do not need to look at such indirect data, with its serious flaws, when we have the most direct data: the clinical trials themselves. And those trials say SSRIs increase the suicide rate. Again, it is clear from the trials that fluoxetine, sertraline, and paroxetine increase suicidality, in both adults and children. In the case of paroxetine, there is a 9x increase in suicidal events, and 22-0 in actual suicides in trials. In the case of citalopram and its evergreen partner escitalopram, I could not find good suicide data.

            Also, lest you think the problem with hiding suicides only exists in company sponsored trials, may I bring to your attention the TADS trials. Run by NIMH, the investigators in this pediatric trial of fluoxetine clearly and intentionally hid the suicide problems caused by fluoxetine in the trials. Eye-opening reading here: https://www.madinamerica.com/2012/02/the-real-suicide-data-from-the-tads-study-comes-to-light/.

          • metamorphosisfour

            Blinding: Of course it matters if the investigators have a direct financial interest in the outcome of the research. Most of them don’t. And for the third time, you haven’t explained why the 36% should apply to all, or most, relevant trials. I agree about the subjective nature – that is one of many reasons why it is hard to find an antidepressant effect: the current definition of depression leaves a lot to subjectivity (frankly, this explains most of the problems with antidepressants, including people receiving them unnecessarily). Anyway, to try to end this argument, I agree with you that there might be factors adding to overestimation of a clinical effect, and we need to be critical. But there are many other factors explaining why we fail to see an effect – those need to be taken into account as well. You can read, for example, Montgomery’s “The failure of placebo-controlled trials”. It argues from the perspective that you want to increase the effect – so you might not agree with it in that way – but, together with all the mentioned parameters such as the nature of the HDRS, low doses, and short durations, it explains some of the factors behind the many apparent failures of antidepressants. On a (hopefully short) side note – what do you think about the efficacy of antidepressants for anxiety disorders, or PMDD?

            Ghostwriting: I agree that this is an honesty problem. But whoever authored the papers, the data would be the same. Ghostwriting is important to assess when you are reading pharma’s own conclusions and implications, the point of it was to add weight to these.

            Bias: Some of those studies have also found the opposite. Statistically, if there is no true effect and positive studies are declared at p=0.05, then for every positive study there should be about nineteen that show no effect. I’m not saying I would buy a drug with an 18/1 fail ratio. I’m saying there’s a lot of that explaining of yours left to be done. But that’s not important. I will come back to what is.
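            To put a number on that: under a true null effect with a p=0.05 threshold, roughly one trial in twenty comes out “positive” by pure chance. A minimal simulation (illustrative only, not tied to any real trial data):

```python
import random

random.seed(42)

# Under a true null effect, a trial is declared "positive" by chance
# with probability alpha = 0.05, i.e. roughly nineteen null results
# are expected for every false positive.
alpha = 0.05
n_trials = 100_000
false_positives = sum(random.random() < alpha for _ in range(n_trials))

ratio = (n_trials - false_positives) / false_positives
print(f"null results per false positive: {ratio:.1f}")
```

            Over many simulated trials the ratio lands close to 19:1, which is the chance baseline that any real portfolio of trial results should be compared against.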

            Götzsche: That the bias runs in the (apparently!) opposite direction is NOT an argument to ignore it. And you are not understanding the contradiction – if antidepressant trials had not been assessed with bias in mind, we would not have known many of the negative things. The most objective Cochrane methodology, developed in part by Götzsche himself, showed that there was a risk of bias. Götzsche would never allow this to be ignored in any other analysis. And you are entirely missing my point on publication bias: OF COURSE he finds good results – and the fact that he is not aware of (at least part of) the explanation is beyond me.

            Psychotherapy: This is where I want to come back to bias. Replace “psychotherapy” with “antidepressants” in your first sentences, and you have a really nice Big Pharma-style explanation of why side-effect reporting for the latter can be skewed too (I’m not saying the arguments are true). You do pick out the sillier parts of both reviews. Every psychologist I have spoken to would say that some patients experience anxiety from therapies. For many therapies, the whole point is to expose yourself to anxiety-provoking things, e.g. CBT for specific phobias. And yes, all negative life events will be recorded as adverse events in trials. This is one of many problems in judging the results of those. And another example of bias would be judging suicides as “just life” in psychotherapy but directly related to medication in drug trials. And no, many of them, including therapy dependence or undermining of self-efficacy, would not be reflected in smaller efficacy. Last, QT prolongation is probably the worst example you could choose. Very detailed ECG information, including outliers, and reports on individual patients are presented in CSRs, especially since this was raised as a concern.

            Naïvety: Yeah, again we agree it’s down to belief. Just wanted to say i) that there have indeed been lawsuits against citalopram regarding suicide (if I remember correctly they did not win?) and ii) the standard of clinical drug trials has changed immensely from 1975 to where we are now. There is no way you can judge newer trials the way you judge 40+-year-old ones. We are discussing paroxetine, the worst among the worst. Let’s remember that.

            Real-life data: Again (:D) I think you are misunderstanding me. I have not argued that there is a linear correlation. I have not claimed causality in the sense that more SSRI prescriptions should equal lower suicide rates. I did however claim that the lack of an opposite correlation is remarkable if you want to view SSRIs as frequently causing suicide. Let me show you: Last year, about 1 million people used antidepressants in Sweden and we had about 10 million inhabitants. Also, there were about 1,100 suicides. Further, only about 1/3 of these had been in contact with psychiatry during the last 6 months. 1/2 hadn’t seen a doctor at all during the same period and are thus very unlikely to have a valid antidepressant prescription. That means at most about 400 suicides could possibly be attributed to antidepressants. Yes, of course I know that the numbers could be skewed in a number of ways, or not reflect the whole truth, or differ from country to country. But this is also in a scenario where all patients seeing a psychiatrist are prescribed an antidepressant, and no patient seen by a psychiatrist commits suicide because of their underlying illness (kinda unlikely). Even this extreme estimate would mean that only 0.04% of antidepressant-treated people committed suicide. This is what I mean by real-life data. You can show me a GSK report claiming 22 out of 22 paroxetine-treated patients committed suicide – for the sake of my argument, and not taking into account the tragedy of these events, I don’t care.
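            The back-of-envelope arithmetic above can be made explicit (a sketch using the rough figures quoted in this comment, not official statistics):

```python
# Worst-case Sweden estimate, using the commenter's approximate figures.
antidepressant_users = 1_000_000   # people on antidepressants last year
suicides = 1_100                   # total suicides that year

# At most ~1/3 of suicides had psychiatric contact in the prior 6 months;
# assume (an extreme assumption) every one of them was on an antidepressant.
max_attributable = suicides * (1 / 3)          # ~367, rounded up to ~400
upper_bound_rate = 400 / antidepressant_users  # worst-case fraction

print(f"worst-case rate: {upper_bound_rate:.2%}")  # 0.04%
```

            Even the deliberately extreme upper bound works out to 0.04% of treated people, which is the figure the comment relies on.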

            Children and suicide: Let me make this clear that it’s an entirely different topic than antidepressants vs. suicide in adults. I know about TADS. Due to my current research I must however abstain from commenting further on the issue. Yes, I know it’s a super boring answer and it’s possible you won’t believe me like I didn’t believe other unpublished research, but saying what I think might spoil any future conclusions from my side.

          • stmccrea

            The thing is, they should be using active placebos in psych drug trials, but they don’t. In fact, they often use a “washout period” in which they actually REMOVE people who have a placebo response from the trials! To say that this biases the results in favor of the drug is a gross understatement. AND the drug companies rarely if ever publish studies that don’t support their drug, and we don’t even know how many such negative studies there are. How do you think the results would look if ALL studies were considered, including those which showed a NEGATIVE response or NO response to the drug?

      • https://personalpeacegarden.wordpress.com/ PersonalPeaceGarden

        If the majority of participants unblind, why do you think there is typically a high percentage of placebo responders in antidepressant trials? Is it just due to spontaneous resolution of depression?

        • rthorat

          Yes, it is virtually all spontaneous resolution. Peter Gotzsche has said this repeatedly, and the data backs him up.

          For example, if you look at the response curves for placebo and drug in these trials, they are almost identical, with the drug only slightly larger in magnitude. If you believe the data (which I don’t), then the drug only barely separates itself from placebo by 8 weeks. But if you wait until the 9th week, the placebo group will be at the same spot as the drug group was in the 8th week. The dirty secret is that even if you believe the drugs work, at best all you can say is that they bring the reduction in depression symptoms forward by about one week in an 8-week trial.

          It gets worse, though. Drug companies do not do long-term trials, both because of cost and because they are not required to do so. But studies that have looked at long-term use find that people on SSRIs do worse in the long term than people not taking them. The response curves from 8-week trials begin to flatten out as you go beyond 8 weeks. Most likely, the curve for the drug group would begin going in the wrong direction at some point.

          • Ozlander Amit

            Very interesting and high quality discussion rthorat. Common sense: you go on a not so blind trial and whether you are on antidepressant or placebo, you have some hope for a few weeks or so that you will get better. Mostly I have read that there aren’t many studies beyond 8 weeks, but you seem to know what you are talking about. What is depression anyway? A convenient catch all word for the miseries of modern life brought on by neoliberal capitalism and global hegemony!

          • stmccrea

            Studies have also shown that placebo response still occurs even when people are told ahead of time that they are receiving an inert substance. Goes to show how strong an effect expectation has on mental/emotional and sometimes even physiological phenomena.

    • https://personalpeacegarden.wordpress.com/ PersonalPeaceGarden

      Thanks for the link. Interesting article…

  • Bernard Carroll

    I was also puzzled by the apparent strength of agomelatine in the report. In an earlier meta-analysis, Cipriani and colleagues reported that it was close to useless. See https://www.ncbi.nlm.nih.gov/pubmed/23999482

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Yes, this surprised me also.

      The new paper also finds that reboxetine is better than placebo, albeit the least effective of all 21 antidepressants. But an earlier meta-analysis found it didn’t work at all.

      • http://www.eiko-fried.com Eiko Fried

        Hey Barney & NS, maybe this can be explained by bias? I didn’t look at prior studies in too much detail, but in the current study there seemed to be no correction at all for bias, whereas it seems pretty common in meta-analyses to adjust for bias in the statistical analyses. Since the new study found small to moderate bias in >80% of all trials, this might explain the differences.

        • Bernard Carroll

          Yeah, Eiko… we really need to hear from Dr. Cipriani.

      • O Privire Sceptică

        I think this is explained by bias. If you look at the supplementary material http://www.thelancet.com/cms/attachment/2119023008/2088154696/mmc1.pdf at page 190 (of the pdf) there are analyses adjusted for small-study effects and reboxetine is no longer better than placebo (OR=1.06, 95% CI: 0.86–1.31). Also, at page 192 there are analyses adjusting for sponsorship and once again reboxetine is not significantly better than placebo (OR=1.30, 95% CI: 0.94–1.76). So maybe the authors should have included some adjusted estimates in the main article, not just the unadjusted ones.
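        The logic behind reading those adjusted estimates, as a sketch: an odds ratio is only significantly better than placebo when the entire 95% confidence interval sits above 1.

```python
# An odds ratio (OR) compares the odds of response on drug vs placebo.
# If the 95% CI includes 1, the drug is not statistically
# distinguishable from placebo. ci_high is kept in the signature for
# readability even though only the lower bound decides the question.
def significantly_better_than_placebo(ci_low: float, ci_high: float) -> bool:
    return ci_low > 1.0  # whole interval above 1 => better than placebo

# Reboxetine's adjusted estimates quoted from the supplementary material:
print(significantly_better_than_placebo(0.86, 1.31))  # small-study adjusted -> False
print(significantly_better_than_placebo(0.94, 1.76))  # sponsorship adjusted -> False
```

        Both adjusted intervals straddle 1, which is why the adjusted analyses no longer show reboxetine beating placebo.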

        • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

          Thanks – I didn’t spot that.

          Reboxetine – the antidepressant that isn’t one :-(

  • PersonalPeaceGarden

    I was struck by how much more effective a given drug was in a head-to-head comparison when it was the novel drug than when it was the control drug. This suggests to me that despite the authors’ efforts to include as much unpublished data as possible, there is still a strong possibility that publication bias is affecting the results.

    • Chris King

      ^^^best read of the paper… of course, that’s what I found to be most disturbing and consistent as well.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      yes, I can’t see any benign explanation for that

      • metamorphosisfour

        The comparator is almost always underpowered in terms of dose. Or you choose a comparator that you know is worse in a certain area. If you want to do a non-inferiority study of tolerance – choose duloxetine or a TCA. If you want to show your drug has less withdrawal symptoms – compare to venlafaxine. Etc.

    • rthorat

      This has been known for quite some time and it is indeed one of the biggest indicators of massive bias. One of many.

    • http://www.scimed.pt João Cerqueira

      If patients feel that the drug is new, revolutionary and more effective than those on the market, they may be conditioned to report a greater improvement than would otherwise be expected.

      • dmoerman

        Actually, it’s more important that the clinician thinks the drug is new and more effective than others available.

        • http://www.scimed.pt João Cerqueira

          Yes … and if the study is not performed correctly in regard to concealment, the physician can create expectations in the patient that influence the outcome.

    • Dane Parker

      Right? But what’s strange is how poorly the newer drugs on the market tend to rate, per user reviews. User reviews are not exactly a scientific measure, yet it still strikes me as odd.

  • Costa Vakalopoulos

    A couple of points relating to probable underestimation of positive treatment effects. As a clinician, I find therapy is complex at times. Long-term untreated depression is a risk factor for early cognitive decline, with some evidence to suggest SSRIs are protective. The effects of antidepressants might be improved by augmenting the SSRI, e.g. with mood stabilizers, particularly for non-responders or patients with poor sleep hygiene.

  • Boomer12k

    You can’t tell by me…I was on two of them at different times last decade…Paxil, and another one…. you cannot tell me they work, just because a person is sitting there quiet… the Paxil made me feel like a hollow, wooden door, with a hole in my chest. The other made me feel like I was on MT EVEREST in the morning…I had the chills so bad… both were bad for a person with a thyroid issue…which I had….They are NOT so much “anti-depressants” as they are ANTI-EMOTION PILLS….and they have SIDE EFFECTS… some mimic DEPRESSION….
    My emotional technique has done more for me over the last nine years, than ANY DRUG…. if you are interested in such a technique…I hope you will give my work a look-see….I am an Emotional Researcher with the problems myself. Since 2009, I have done the technique on my negative emotions over 3285 times, successfully. Even this morning on Stress and Anxiety. Visit my blog, and learn more ABOUT my technique. Look at each section. Click VERSIONS on the top menu bar….

    • http://www.mazepath.com/uncleal/EquivPrinFail.pdf Uncle Al

      Paxil is a nursing home favorite. Not only does it implode old people, it is violently addictive. The only thing worse than Paxil is Paxil withdrawal. It’s a miracle for warehousing the pre-deceased as they are ground into medical Accounts Receivable while quietly snuggling with their bedsores.

  • http://russwilson.coffeecup.com/ RustyRiley

    Unfortunately, something few have commented on is the relative unacceptability of almost all of them – “haven’t we been there!” – you’re a very rare case, having stuck it out for 10 years! How effective can any drug be if people won’t take it, or keep on taking it?


  • rthorat

    In my opinion, this study and others like it are garbage in, garbage out. Almost none of these studies have any rigor and are rife with bias. There are probably a dozen or more signs of bias. The reality is these drugs do not help depression much (if any) at all – their effect is to numb the emotions in most people. But rather than helping with depression, they just make people apathetic to their depression. Some people think that’s great. Others think it’s not so great. And they also make people kill themselves – but that is a whole other story.

    Just off the top of my head, I can name several indicators of bias: 1) the drugs do worse as comparators than when they are novel, 2) physician ratings are much higher than patient ratings, 3) the trials are essentially all unblinded, 4) patients are not drug naive and many on placebo go through withdrawal during the trial…there are more, but you get the point. Anyone who does not know the history of Prozac & Zoloft approval should read up on it. Very interesting stuff – and…depressing.

    • Ozlander Amit

      Very true. Are you on twitter rthorat? I would like to follow you.

      • rthorat

        I have an account on Twitter, but I do not tweet. Same username, I believe.

        • https://www.earthmedresearch.org Earth Med Research

          Mysterious Man, I am working on a paper for peer-reviewed publication criticizing antidepressants, and my criticisms and conclusions are almost exactly the same as yours. I would love it if you would reach out to my email directly (david at earthmedresearch dot org). Someday I would love it if you would do some informal peer review before I submit it.

          I see you have also had the misfortune of encountering some of the same folk I have over on SBM. I’m impressed by your ability to not let those guys’ inflated opinions of their “science” have the last word. Lol. I’m also working on an original sensitivity analysis looking at a serious source of confounding in MMR-autism studies. I bet you’d like it. You’re someone sharp whom I very much want as part of my nonprofit’s online community.


  • http://www.mazepath.com/uncleal/EquivPrinFail.pdf Uncle Al

    SSRIs dry your mouth (bottled water!), dry your eyes, cause ejaculatory incompetence (Dapoxetine is an Official good thing!), plus anorgasmia in both sexes. SSRIs fill temporal ullage with side effects for people who are in bad company when alone.

    Intellectual puberty allows one to sublimate suicidal depression into homicidal rage, entertaining the rest of us and affording opportunities for promotion.

  • Peter van Trappen

    For years I took several kinds of antidepressants; all of them have side effects, so read the instructions before you take them. Most antidepressants were in the first place prescribed for other kinds of illness. This is about major depressions, so if the antidepressants really are effective, the rate of suicides must be lower; that, for me, is the objective number that is easy to check.

  • Atlanta_Girl

    As someone who has lived with depression & major depressive disorder for over 30 years – not one doctor who suggested and/or prescribed antidepressants ever discussed my diet, social network, time outside and exercise, or looked at my vitamin D, B and magnesium levels beforehand.

    2 decades of various antidepressants that ranged from making me numb to suicidal – I will never take them again. I was blessed to be part of the study for transcranial magnetic stimulation – amazing how transformative it was, faster than meds, without side effects, and lasting more than 5 years… I wish more people would explore these options.


  • Ken Gillman

    There are a great many more problems about the validity of the RCTs that make up all meta-analyses performed, about which I have written recently on my website.

    As has been correctly stated, this study, although ‘better’, isn’t really any different and adds nothing substantive to what we already know. Unfortunately, none of the discussion I have seen so far contains much critical analysis, so let me quote a short paragraph from what I have written.
    “Hackneyed as this old computer programmer’s phrase may be, it is obligatory to start by repeating it “garbage in, garbage out”. In layman’s language, you cannot make a silk purse out of a sow’s ear, nor build a castle on sand.
    On two occasions I have been asked, “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
    Charles Babbage, Passages from the Life of a Philosopher”

    • Bernard Carroll

      I agree with Ken Gillman’s critique. It is a cogent critique of the recent, much debated meta-analysis of antidepressant drug trials last week in Lancet. https://psychotropical.info/lancet-21-antidepressants-meta-analysis/ … They do see things clearly Down Under. Bottom line: did it really move the ball down the field? Gillman says no.

  • Jean-Claude St-Onge

    There are very many weaknesses with this meta-analysis (MA): no NNTs; no reports on harms; 78% sponsored by the pharmaceutical industry; four authors declared COIs (how many of the authors of the original studies had COIs?); way too short (around 8 weeks); 73% moderate risk of bias, 9% high risk; no info about run-in periods (which allow participants to be cherry-picked); many trials had no info about randomization and allocation concealment; no evaluation of global functioning; one of the primary outcomes is response rate, a very weak criterion for judging efficacy; likely no active placebo; a so-called «considerable amount of unpublished data»… for only 8 drugs out of 21 (were clinical study reports included, how many persons were covered by these unpublished studies, and can you trust pharma to hand over negative studies?).
    The Lancet article admits: «certainty of evidence is moderate to low», and then again: «the effect sizes were mostly modest».
    Mostly modest and likely overestimated. A study in the Am. J. of Psychiatry in 2005 estimated that articles published by authors with a COI were 4.9 times more likely to conclude the drug was superior to placebo than articles by authors who declared no COI (Roy H. Perlis et al.).
    The authors of The Lancet article say the «great majority» had MODERATE to severe depression as measured by the HAMD (mean 25.7, standard deviation 3.97). Whatever criticism can be made of the HAMD, this statement seems to be incorrect: according to the American Psychiatric Association, 8–13 is mild depression, 14–18 moderate, 19–22 severe, and 23+ very severe. So the people in this MA were very, very severely depressed.
    Many MAs (Kirsch, Fournier, many quoted by Dr. Joanna Moncrieff) show that efficacy for mild to moderate depression is no better than placebo. And the great majority of ADs are prescribed to people who have mild to moderate depression. According to Kirsch, for very, very severe depression (28+), the mean difference on the HAMD between AD and PBO is 4.36, slightly more than the 3 points considered clinically significant by NICE (Kirsch, I., Zeitschrift für Psychologie, 222(3), 2014, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4172306/).
    Undurraga and the well-known psychiatrist Ross Baldessarini (107 RCTs) found an NNT of 8.7 for SSRIs, 10.2 for SNRIs, and 6.2 for TCAs (more dangerous in overdose) (Undurraga, J., Baldessarini, R., 2012, Table 2, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3280655/). Of course, these are means. These numbers are close to Zimmerman’s.
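    For readers unfamiliar with the measure: NNT is the reciprocal of the absolute risk reduction, i.e. how many patients must be treated for one additional responder over placebo. A sketch with illustrative response rates (the quoted NNTs come from Undurraga and Baldessarini’s pooled data, not from these made-up inputs):

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
def nnt(response_drug: float, response_placebo: float) -> float:
    arr = response_drug - response_placebo  # absolute risk reduction
    return 1 / arr

# Hypothetical rates: 50% response on drug vs 38.5% on placebo
# yields an NNT near the 8.7 reported for SSRIs.
print(round(nnt(0.50, 0.385), 1))  # 8.7
```

    A larger drug–placebo gap gives a smaller NNT, which is why TCAs (NNT 6.2) look more effective per patient treated than SNRIs (NNT 10.2) on this measure.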
    The MA points out correctly that depressive symptoms tend to improve spontaneously. Peter Gotzsche has shown that there is no significant difference by week 8 between placebo and AD (see the comment by rthorat). Moreover, a study comparing the efficacy of psychotherapy placed 340 persons on a wait list for 10 weeks; they gained 4 points on the HAMD (Rutherford, B.R., Roose, S.P., July 2013, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3628961/). Furthermore, a 3-year study carried out in the US showed that 50.7% of people with depression, anxiety and drug abuse remit without any treatment (Sareen, J., et al., 2013, http://jamanetwork.com/journals/jamapsychiatry/fullarticle/211213). Of course, it does not mean they don’t need help. Some of the same researchers did another, 1-year study in the Netherlands and found a greater number of remitters (some had residual symptoms).
    Moreover, ADs may worsen your depression. In the Andrews MA, the relapse rate for placebo is 24.7% versus 48.7% for SSRIs (Andrews, P. et al., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3133866).
    If ADs show efficacy in a minority, it is most likely not by virtue of their antidepressive qualities. It is by virtue of their activation effect (some clinicians adjust the dose to «create a controlled hypomania») or, conversely, their sedative effects. They numb emotions, bad and in many instances good, as a New Zealand study has shown (C. Cartwright, «Long-term antidepressant use: patient perspectives of benefits and adverse effects», Patient Preference and Adherence, 2016).
    And what about overdiagnosis, which does not seem to be the case for this MA? See the recent Thombs study from McGill, and R. Mojtabai, who determined that 61.4% of people, almost all on ADs, did not meet the DSM so-called «criteria» (R. Mojtabai, Psychother Psychosom, 2013).
    The Guardian article reporting on this MA (https://www.theguardian.com/science/2018/feb/21/the-drugs-do-work-antidepressants-are-effective-study-shows) quotes Dr. Geddes, one of the coauthors, saying: «we don’t have very precise treatments for depression». True for ADs; false for psychotherapy, at least for mild to moderate depression, and with no ADRs as long as you don’t end up in the hands of a charlatan.
    The Guardian piece quotes Dr. Cipriani, who says 80% stop their ADs within one month, yet according to those who defend their efficacy, it generally takes a few weeks (roughly 6 to 8) before ADs ameliorate depression. But it takes just a few days for ADRs to manifest in many users.
    J.-C. St-Onge, MA Phil, PhD Economic Sociology
    Author: Tous fous? L’influence de l’industrie pharmaceutique sur la psychiatrie.

  • Pingback: Scientists Gave Monkeys Ayahuasca and It Helped Their Depression

  • Pingback: Do a million more patients need to be taking antidepressants? ⋆ health.10ztalk.com

  • Ozzo

    I think personalised medicine will benefit patients in the future. MAOA and COMT polymorphisms, I believe, are a big part of why all the benefits I experienced were marred by the outright denial of head zaps, vertigo and now tinnitus on withdrawal. And there are others: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3357564/
    I suspect exercise causes more neurogenesis than these pills do, and frankly I can't wait for the efficacy trials on it.

  • Pingback: More Criticism of the Antidepressant Study Published in The Lancet | Mad In Brasil

  • stmccrea

    In addition to unblinding, the authors fail to address selective publication, as they allowed the drug companies to choose which studies they submitted. Publication bias is hugely important, as real science has to look at ALL data, and doesn’t allow us to bury trials when we don’t like the results. In the end, the overall AVERAGE effect of antidepressants does not appear to be great.

    The other issue that is not addressed is that these are averages. It seems likely from my experience that a small subset of people experience fairly dramatic benefits, and a small subset of people experience dire consequences, at least when taking SSRIs. Perhaps our focus should be on figuring out which people actually DO benefit from SSRIs and which should avoid them, instead of wasting our time making global pronouncements like “antidepressants work” and “antidepressants don’t work.”
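    The point that a modest average can mask a mix of strong responders and non-responders is easy to demonstrate with a toy simulation. The numbers below (a 20% responder fraction with a large 1.5 SD benefit, everyone else with none) are invented for illustration, not taken from any of the studies discussed:

    ```python
    import random
    import statistics

    random.seed(42)

    def simulate_trial(n=1000, responder_frac=0.2, responder_effect=1.5):
        """Toy model: a minority of patients gets a large drug-specific
        benefit, the rest get none. Returns a rough standardized mean
        difference (SMD) between the drug arm and the placebo arm."""
        # Placebo arm: symptom change is pure noise around zero.
        placebo = [random.gauss(0, 1) for _ in range(n)]
        # Drug arm: only 'responders' receive the extra benefit.
        drug = [random.gauss(responder_effect, 1)
                if random.random() < responder_frac
                else random.gauss(0, 1)
                for _ in range(n)]
        # Standardize by the SD of the combined sample
        # (a crude stand-in for a pooled SD).
        pooled_sd = statistics.pstdev(placebo + drug)
        return (statistics.mean(drug) - statistics.mean(placebo)) / pooled_sd

    smd = simulate_trial()
    print(f"Average SMD despite a subset of strong responders: {smd:.2f}")
    ```

    Under these assumptions the trial-level SMD comes out in the "modest" 0.2-0.3 range even though one patient in five improved dramatically, which is why average effect sizes alone cannot settle who should or should not be prescribed these drugs.
    
    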

  • @nsmartinworld

    Ah, but how do the drugs compare to more physical activity and social interaction?

  • Pingback: The Lancet Story on Antidepressants, Part 1 | Faith Seeking Understanding

  • Tom Luckey

    As a layman with a degree in psychology, I have three questions about the study. 1: How many head-to-head subjects were there? 2: In a comparative meta-analysis, wouldn't studies designed to respond to confounds and anomalies, by definition and necessity, be designed with a higher degree of heterogeneity? 3: One possible confound implied in the discussion is the degree to which placebo and control groups are being treated with cognitive therapies, meditation, exercise, or behavioral interventions in addition to the active treatment. Head-to-head reporting would inherently be biased by operational definitions and conflicts of interest; could that account for a high percentage of the heterogeneity, as somewhat confusingly reported, and possibly for the outcome?

  • Robert Grant

    No decent clinician prescribes one antidepressant and then never changes it. All of this research is extremely artificial and superficial when we place it in the context of actually treating and helping patients. While average effects and average differences tell us something about effectiveness, they tell us next to nothing about a patient's actual experience with a class of medications. One person may not tolerate medication A, and have a mild effect from medication D, but then have nearly complete remission with medication G. Studies should start as comparisons, but studies of the efficacy of a class of drugs should be naturalistic: e.g., do patients treated under a naturalistic and rational algorithm find a greater level of improvement than placebo after, say, 6 months? Another issue is that of compassion in prescribing. Even if effects are modest, or even small, shouldn't patients have that small amount of possible relief, if they want it, out of pure compassion?

  • Pingback: Solving the Mystery of Depression - STORYBROOKELIFE.COM

  • Pingback: A Psychiatrist on SSRIs and Tapering Off of Antidepressants | Blog Of Nature

  • Pingback: We Need New Ways of Treating Depression - MatthewPGomez.com





About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

