PureInsight | March 2, 2008
Many individuals who work in health care - and perhaps quite a few who
don't - will have noticed a general thrust in the direction of what is
known as "evidence-based medicine." But I, for one, have serious
reservations about it since it can be affected by "publication bias."
The original definition ran: "The practice of evidence-based medicine
means integrating individual clinical expertise with the best available
external clinical evidence from systematic research." It's remarkable,
however, just how often the clinical expertise part is forgotten.
Having said that, the individuals who forget the importance of clinical
experience do often seem to be academics who don't actually see
patients. I suppose it's only natural that if you don't have much
clinical experience, you're not going to give it that much credence.
Even if we focus on the science and on the studies, evidence-based
medicine is still fraught with difficulties. One of these difficulties
is publication bias, which is the phenomenon in which "positive"
studies tend to be more readily published than "negative" ones.
Such shenanigans are well known in medical research, and can give a
much-skewed impression of a drug's effectiveness and of its safety.
The Jan. 17 issue of the New England Journal of Medicine
carried an interesting article, which sought to identify publication
bias in the area of antidepressant medication. The researchers
assessed a total of 74 studies that had been registered with the
U.S. Food and Drug Administration (FDA).
Some of these studies had been published, but many had not (details
below). The researchers obtained the unpublished studies using various
means, including invoking the Freedom of Information Act.
Analyzing the 74 studies, the researchers found that:
- Thirty-eight had positive results, and all but one of these had been published.
- Thirty-six had negative results, and 22 of these had not been published.
- Of the 36 negative studies, 11 had been published, but in a way that
conveyed a positive outcome (this is not publication bias, by the way;
it is just plain bias).
This meant that of all the published studies, 94 percent appeared to
have positive findings. However, FDA analysis revealed that only 51
percent of studies were genuinely positive.
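Those two percentages follow directly from the trial counts above. As a back-of-the-envelope check (the variable names here are mine, not the researchers'):

```python
# Trial counts from the FDA registry analysis described above.
positive_total = 38        # trials with positive results
positive_published = 37    # all but one positive trial was published

negative_total = 36        # trials with negative results
negative_unpublished = 22
negative_spun_positive = 11  # published so as to convey a positive outcome

# Published studies: 37 positive + 14 negative that made it into print.
published_total = positive_published + (negative_total - negative_unpublished)

# Of those, the ones that *appeared* positive to a reader of the literature.
appeared_positive = positive_published + negative_spun_positive

apparent_rate = appeared_positive / published_total          # 48 / 51
true_rate = positive_total / (positive_total + negative_total)  # 38 / 74

print(f"Apparent positive rate in the literature: {apparent_rate:.0%}")
print(f"Actual positive rate across all trials:   {true_rate:.0%}")
```

That works out to 48 of 51 published studies (94 percent) looking positive, against 38 of 74 registered trials (51 percent) actually being so.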
Overall, publication bias meant that the drugs appeared about a third
more effective than if all the trials had been taken into consideration.
The lead author of this study, Dr. Erick Turner, said: "The bottom line
for people considering an antidepressant, I think, is that they should
be more circumspect about taking it." That sounds like good advice to
me. But I'd add that the data also suggests that doctors might be a bit
more circumspect about prescribing.
This is not the first time that there's been evidence of publication
bias in the area of antidepressants. A previous analysis suggests the
same situation existed with antidepressant use in children.
A Lancet review found that while published studies support the use of a
variety of antidepressants in childhood depression, unpublished data
shows that, in general, risks of treatment such as an enhanced tendency
toward suicidal behavior seem to have been significantly underplayed.
All this stuff about the selective publication of data on antidepressant medication makes pretty depressing reading.
1. Sackett DL, et al. Evidence based medicine: What it is and what it isn't: It's about integrating individual clinical expertise and the best external evidence. British Medical Journal, 1996; 312(7023): 71-72.
2. Turner EH, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 2008; 358(3): 252-60.
3. Whittington CJ, et al. Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet, 2004; 363(9418).