New Review of FDA antidepressant drug trials suggests antidepressants only “marginally better” than placebo – Ineffectiveness of antidepressants called “jaw-dropping”

Medscape
By Deborah Brauser
August 24, 2010

A new review of 4 meta-analyses of efficacy trials submitted to the US Food and Drug Administration (FDA) suggests that antidepressants are only “marginally efficacious” compared with placebo and that the analyses “document profound publication bias that inflates their apparent efficacy.”

In addition, when the researchers also analyzed the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial, “the largest antidepressant effectiveness trial ever conducted,” they found that “the effectiveness of antidepressant therapies was probably even lower than the modest one reported…with an apparent progressively increasing dropout rate across each study phase.”

“We found that out of the 4041 patients initially started on the SSRI [selective serotonin reuptake inhibitor] citalopram in the STAR*D study, and after 4 trials, only 108 patients had a remission and did not either have a relapse and/or drop out by the end of 12 months of continuing care,” lead study author Ed Pigott, PhD, a psychologist with NeuroAdvantage LLC in Clarksville, Maryland, told Medscape Medical News.

Sustained Benefit “Jaw-Dropping”

“In other words, if you’re trying to look at sustained benefit, you’re only looking at 2.7%, which is a pretty jaw-dropping number,” added Dr. Pigott.
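For readers tracing the arithmetic behind that figure, it appears to follow directly from the two numbers Dr. Pigott cites (108 sustained remissions among the 4041 patients who entered the trial):

\[ \frac{108}{4041} \approx 0.0267 \approx 2.7\% \]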

Overall, “the reviewed findings argue for a reappraisal of the current recommended standard of care of depression,” write the study authors.

“I believe there are likely some people where [antidepressants] are truly beneficial beyond placebo. The problem right now is that we simply have no way of knowing who those people are,” noted Dr. Pigott. “My hope is that this kind of analysis creates ‘more oxygen’ for looking at other kinds of approaches to treatment.”

The study was published in the August issue of Psychotherapy and Psychosomatics.

When registering new drug application trials with the FDA, drug companies must prespecify the primary and secondary outcome measures, the investigators report. “Prespecification is essential to ensure the integrity of a trial and enables the discovery of when investigators selectively publish the measures that show the outcome the sponsors prefer following data collection and analysis, a form of researcher bias known as HARKing or ‘hypothesizing after the results are known’,” they write.

For this article, Dr. Pigott and his team reviewed the following meta-analyses:

  1. Rising and colleagues (reviewed all efficacy trials for new drugs between 2001 and 2002)
  2. Turner and colleagues (reviewed 74 past trials of 12 antidepressants)
  3. Kirsch and colleagues, 2002 (reviewed 47 trials of 6 FDA-approved antidepressants)
  4. Kirsch and colleagues, 2008 (reviewed depression severity and efficacy in 35 trials)

The researchers also sought to reevaluate the methods and findings of STAR*D, a randomized, controlled trial of patients with depression. Its prespecified primary outcome measure was the Hamilton Rating Scale for Depression (HRSD), whereas the Inventory of Depressive Symptomatology–Clinician-Rated (IDS-C30) served as the secondary measure for identifying remitted and responder patients.

“STAR*D was designed to identify the best next-step treatment for the many patients who fail to get adequate relief from their initial SSRI trial,” the study authors write.

“When I first read about STAR*D’s step 1 phase, it just seemed biased to me,” explained Dr. Pigott. “I thought of it as the ‘tag, you’re healed’ research design. Patients who were scored as having a remission during the first 4 to 6 weeks of up to 14 weeks of acute care treatment were counted as remitted, taken out of the subject pool, and put into the follow-up care phase. In other words, they didn’t have the ability to have a relapse. But as most people know, depression ebbs and flows.

“So what made me want to continue to follow this study was that it became clear that the only way that people were really going to be able to evaluate the antidepressants’ effectiveness was to wait for the publication of the follow-up findings,” he added. “After their major final summary study was published, I felt as though the results weren’t really being portrayed in a manner that was consistent with the study’s prespecified criteria.”

High Dropout, Low Remission Rates

In addition to reporting on low efficacy of antidepressants compared with placebo, the 4 meta-analyses “also document a second form of bias in which researchers fail to report the negative results for the prespecified primary outcome measure submitted to the FDA, while highlighting in published studies positive results from a secondary or even a new measure, as though it was their primary measure of interest,” the investigators write.

For example, they note, the meta-analysis from Rising and colleagues found that studies with favorable outcomes were almost 5 times more likely to be published and that over 26% of primary outcome measures were left out of journal articles. Turner and colleagues found that antidepressant studies with favorable outcomes were 16 times more likely to be published than those with unfavorable outcomes.

In reanalyzing the STAR*D methods, the researchers found that the high dropout rate resulted in frequently missed exit HRSD and IDS-C30 interviews. The revised statistical analysis plan therefore replaced the IDS-C30 with the Quick Inventory of Depressive Symptomatology–Self-Report (QIDS-SR), which was administered at each visit.

“Even with the extraordinary care of STAR*D, only about one fourth of patients achieved remission in step 1 [and] the dropout rate was slightly larger than the success rate,” the study authors write. Steps 2 through 4 each showed progressively lower success rates and higher dropout rates.

Of the 4041 patients at the study’s initiation, 370 (9.2%) dropped out within 2 weeks, and only 1854 patients (45.9%) obtained remission “using the lenient QIDS-SR criteria.” Of these, 670 dropped out within a month of their remission, and only 108 “survived continuing care” and underwent the final assessment.

Dr. Pigott described reanalyzing STAR*D as being “a bit like an onion. Each time we thought we understood the results, we found another layer. It wasn’t until about a year and a half ago that we discovered that the secondary outcome measure, the QIDS-SR, was not originally supposed to be used as a research measure. What was particularly disconcerting to me was that in their summary article, they basically used the QIDS-SR to report all of the results, which clearly had an inflationary effect on the outcome.”

He also noted that STAR*D did not have a placebo design. “Because the patients knew they were receiving the active medication, I would have expected a higher remission rate than what you’d find normally in a placebo-controlled study.”

“The inescapable conclusion from the STAR*D results is that we need to explore more seriously other forms of treatment (and combination thereof) that may be more effective. This effort will require developing new service delivery models to ensure that as treatments are identified, they are widely implemented,” the investigators conclude.

Read entire article here:  http://www.medscape.com/viewarticle/727323
(Free registration required)