TUESDAY, July 12, 2005 (HealthDay News) -- Americans who've grown skeptical of the hyping of the latest medical breakthrough got some validation this week: A new review found one out of every three highly cited studies published in influential medical journals is either refuted or seriously weakened by subsequent research.
Some notable examples: the initial finding that hormone-replacement therapy reduced older women's heart disease risk (a larger trial later found that HRT actually raised cardiovascular and cancer risk); the promise of vitamin E in preventing heart attack (it doesn't); and the news that vitamin A supplements cut breast cancer risk (again, a larger, randomized trial showed no effect).
None of this means the initial, promising trial shouldn't have been published or publicized, stressed Dr. John Ioannidis, author of the review appearing in the July 13 issue of the Journal of the American Medical Association.
"This is largely the scientific process at work," said Ioannidis, an associate professor of medicine at the Institute for Clinical Research and Health Policy Studies at Tufts-New England Medical Center, in Boston. "There is nothing wrong with publishing promising evidence."
Ioannidis, who is also chairman of the department of hygiene and epidemiology at the University of Ioannina School of Medicine in Greece, said the message for the public is "to not lose your trust [in science], but don't expect that everything you hear about will be true forever."
In his analysis, Ioannidis examined the long-term validity of 49 of the most widely cited studies published between 1990 and 2003 in three of the world's most prestigious "generalist" medical journals -- the Journal of the American Medical Association, The Lancet, and the New England Journal of Medicine -- as well as leading "specialty" journals such as Circulation, Diabetes and the Journal of the National Cancer Institute.
He then tracked whether the main findings from the 49 studies held true over time. Thirty-nine of the studies were randomized -- meaning that participants were randomly assigned to one of two or more treatment arms in a clinical trial. Large, randomized trials conducted prospectively (i.e., unfolding over time rather than reviewing past data) are considered the gold standard of clinical research.
According to Ioannidis, "five out of six non-randomized studies and nine out of 39 randomized studies were contradicted or found to have stronger effects compared with subsequent studies on the same topic." Smaller studies were more likely to be refuted than larger ones, and findings from non-randomized trials were often overturned by the results of a larger, randomized trial.
In total, 16 percent of the studies were contradicted outright by later research, and the findings of another 16 percent were considerably weakened by later, more reliable data.
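The arithmetic behind the "one out of every three" figure can be checked from the counts quoted above. (A quick sketch; note that the quoted subgroups -- 6 non-randomized plus 39 randomized studies -- sum to 45 rather than 49, because, per the underlying JAMA paper, 45 of the 49 highly cited studies claimed an effective intervention.)

```python
# Counts quoted in the article: 5 of 6 non-randomized studies and
# 9 of 39 randomized studies were contradicted or found to have had
# initially stronger effects than later research supported.
overturned = 5 + 9   # 14 studies contradicted or weakened
total = 6 + 39       # 45 studies that claimed an effective intervention

print(f"{overturned}/{total} = {overturned / total:.1%}")  # 31.1%, roughly one in three

# The 14 split evenly -- 7 contradicted and 7 weakened -- which is
# consistent with the article's two "16 percent" figures.
print(f"7/{total} = {7 / total:.1%}")  # 15.6%, reported as 16 percent
```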
Ioannidis said his goal was to reach an "objective estimate" of the pace of change in current medical theory. "I think it is healthy thinking to acknowledge that scientific knowledge is not immutable and final, but is likely to change over time," he said.
He added that none of his findings cast doubt on the integrity of most medical journals.
"These journals have high scientific standards, robust editorial processes, and they are independent," Ioannidis said. "We just have to accept that contradictions will occur, and that we should not consider that science is cut in stone."
Editors at the New England Journal of Medicine agreed. In a group statement, they said the conclusion of Ioannidis' review is "widely appreciated: while many published studies are subsequently confirmed, some are not. This is how medical and scientific research progresses. A single study is not the final word, and that is an important message."
But do the media -- aided by drug companies eager to capitalize on positive findings -- too often hype and oversimplify the results of new medical research?
"This may be a problem occasionally," Ioannidis said. "The general public should be sensitized to the fact that scientific knowledge has limitations, it is never final and things may change down the road."
Jeff Trewhitt, a spokesman for the Pharmaceutical Research and Manufacturers of America, representing the drug industry, agreed that "there's always new information as the product is introduced to a larger patient population." But he also believes "the evidence is clear that in most situations, new data tend to confirm the initial findings."
Trewhitt also noted that, besides having to pass through the journal editorial process prior to publication, clinical trials are also designed "to meet the tough testing standards at the FDA."
"We believe we are not routinely seeing data that would change the indicated use of the product as approved by the FDA," he added.
Ioannidis did have one criticism for major journals, however: their tendency to favor "positive" findings (where a therapy was shown to be effective) over "negative" ones (where a therapy's effectiveness was cast in doubt).
"There's nothing wrong with publishing promising evidence," he said. "What is wrong is not publishing less promising evidence and studies with 'negative' results when [the studies] are equally well-designed and conducted as those that find impressive effects."
Still, Ioannidis seemed reassured by the findings. "Scientific advances do happen, medical progress is moving at a fast speed," he said. In interpreting medical news that could affect their health, Ioannidis advises people to "make decisions about it, try to find out what the limitations and caveats might be, and try and get a sense from your physician or physicians about the surrounding uncertainty. Critical thinking is always useful."
For a better understanding of the clinical trials process, visit the National Institutes of Health.