Handle with Care: Why Today’s Health News Often Becomes Tomorrow’s Retractions
We’ve all seen it play out hundreds of times: a drug, food or habit is trumpeted as the way to lower the risk of cancer or heart disease, only to be walked back a month later by another study. The reasons can be diverse, including a flawed hypothesis, bad data or misleading conclusions, but at the center is the study design itself. An observational study may yield very different findings from an experimental one, while the gold standard, a randomized controlled trial (RCT), can be extremely costly and difficult to conduct. The resulting patchwork of research requires professional analysis and a wait-and-see approach until findings are confirmed by follow-up studies. We share some expert insights to help you view new studies with healthy skepticism, along with the realization that some of the most important medical breakthroughs of recent years have been discovered in just this way.
Did You Know?
1,400: number of scientific papers retracted each year (Sources: Vaccine Journal, August 2018; Centers for Disease Control; Harvard Health)
50%: percentage of scientific studies confirmed in follow-up studies (Source: Healthy Aging Project, University of Colorado Boulder)
Researchers agree that a randomized, controlled trial is the most reliable way to establish cause and effect. In a drug study, for instance, a population is randomly divided into a group that receives the drug and a group that does not. If the trial is properly designed and controlled, any difference in outcomes between the groups can be measured and credibly attributed to the treatment itself, as illustrated in the simple simulation below. The methodology is prized in evidence-based medicine because it can show that an association is causal rather than the product of chance or confounding. The approach has powerful real-world applications, as seen in the Women’s Health Initiative (WHI), one of the nation’s largest-ever health projects.
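To make that logic concrete, here is a minimal sketch, not drawn from the WHI or any real trial, of how random assignment lets the difference in outcomes speak for the treatment. The participant count, baseline risk and drug effect are all invented purely for illustration.

```python
# Illustrative sketch only (hypothetical numbers): simulate a randomized trial in
# which an imaginary drug lowers the annual risk of a bad outcome from 10% to 7%.
import random

random.seed(42)
N = 10_000  # total participants

# Randomly assign each participant to treatment or control (the "randomized" part).
participants = [random.choice(["treatment", "control"]) for _ in range(N)]

# Simulate outcomes under the assumed risks: 7% with the drug, 10% without.
def had_event(group):
    risk = 0.07 if group == "treatment" else 0.10
    return random.random() < risk

events = {"treatment": 0, "control": 0}
counts = {"treatment": 0, "control": 0}
for group in participants:
    counts[group] += 1
    events[group] += had_event(group)

for group in ("treatment", "control"):
    rate = events[group] / counts[group]
    print(f"{group}: {events[group]}/{counts[group]} events ({rate:.1%})")

# Because assignment was random, the two groups are alike on average in every other
# respect, so the gap in event rates can be credibly attributed to the drug itself.
```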
Begun in 1993 with more than 161,000 women enrolled, the WHI included a randomized, controlled clinical trial designed to test whether long-term hormone therapy could prevent heart disease, hip fractures and other conditions in postmenopausal women, whose average age was over 60. Previous observational studies had strongly suggested that hormone therapy was protective, and it was routinely recommended for women years after menopause. What happened next was stunning.
In 2002, the trial was halted three years earlier than planned as evidence mounted that estrogen-plus-progestin therapy significantly raised a woman’s chances of developing cardiovascular disease, stroke and breast cancer. Millions of women stopped taking hormone therapy, and the trial has been credited with preventing an estimated 15,000-20,000 breast cancer cases each year since the results were made public. Numerous follow-up studies were conducted to dig deeper into the surprising data; while they showed that hormone therapy may still be reasonable for short-term management of menopausal symptoms in younger women, it is no longer routinely recommended years after menopause to prevent chronic disease.
Similarly, Vitamin E supplements, once thought to reduce the risk of heart disease, were found to provide no such benefit and may actually increase the risk of heart disease at higher doses. Consequently, the American Heart Association now advises that the best source of Vitamin E is food, not supplements.
The biggest takeaway from both initiatives: randomized, controlled trials are essential for determining whether the association between an intervention and a disease is truly causal.
Nutrition studies have also come under increased scrutiny, especially with the recent revelation of erroneous data published by high-profile researcher Dr. Brian Wansink, founder of the Food and Brand Lab at Cornell University. Numerous papers have been retracted as the lab’s propensity for data dredging, running exhaustive analyses on data sets to cherry-pick interesting, media-friendly findings, came to light. This practice, seen with some frequency in food and nutrition research, may be part of why contradictory headlines seem to be the norm; the short simulation below shows how easily it can manufacture "findings" out of nothing.
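Here is a minimal sketch of why data dredging is so treacherous, using entirely made-up variables and subjects: when many comparisons are run on pure noise, roughly one in twenty will look "significant" at the usual 5% threshold even though no real effect exists.

```python
# Illustrative sketch of data dredging (hypothetical data): test 100 invented
# "food and habit" variables where there is truly nothing to find.
import random
import statistics

random.seed(0)
N_SUBJECTS = 200
N_VARIABLES = 100
Z_CUTOFF = 1.96  # |z| > 1.96 corresponds roughly to p < 0.05, two-sided

false_positives = 0
for _ in range(N_VARIABLES):
    # Two groups drawn from the SAME distribution: no real effect exists.
    group_a = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    group_b = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = (statistics.variance(group_a) / N_SUBJECTS +
          statistics.variance(group_b) / N_SUBJECTS) ** 0.5
    if abs(diff / se) > Z_CUTOFF:
        false_positives += 1  # a "significant" result produced by chance alone

print(f"'Significant' differences found in pure noise: {false_positives} of {N_VARIABLES}")
```

Run enough of these comparisons and report only the hits, and a headline-ready "discovery" is almost guaranteed, which is exactly the practice described above.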
As the adage goes, data can be tortured until it says what the researcher wants to hear. That’s why your physician will always be the best source for making sense of the tremendous amount of health data released each day…so please ask!
Testing by Design
The most commonly used research models include:
Randomized controlled trial (RCT): a carefully planned experiment, like the WHI, that introduces a treatment or exposure to study its effect on real patients; includes safeguards such as random assignment that reduce the potential for bias and allow for comparison between intervention and control groups.
Observational studies: researchers observe the effect of a risk factor, diagnostic test, treatment or other intervention without trying to change who is or isn’t exposed to it. These include cohort studies, which follow a group of people linked in some way (e.g., by birth year), and longitudinal studies, in which data are gathered from the same subjects repeatedly over years or even decades. An example is the Framingham Heart Study, now in its third generation, which has provided much of our current understanding of how diet, exercise and medications affect heart disease.
Case-control study: compares people with an existing health problem to a control group without it, seeking to identify factors or exposures associated with the illness. This design is less reliable than an RCT or a cohort study because a statistical association alone cannot prove causality.
Meta-analysis: a thorough examination of numerous valid studies on a topic that uses statistical methods to combine their results and report them as if they were one large study (see the sketch after this list). This approach is cost-effective but not as definitive as an RCT, because the individual studies were not designed identically.
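For readers curious about the mechanics of combining studies, here is a minimal sketch of one common pooling approach, a fixed-effect, inverse-variance-weighted average. The three studies and all of their numbers are hypothetical, invented purely for illustration.

```python
# Illustrative sketch of a fixed-effect meta-analysis: three hypothetical studies,
# each reporting an effect estimate (e.g., a log risk ratio) and its standard error.
# Inverse-variance weighting gives more precise (larger, tighter) studies more say.
import math

# (effect estimate, standard error) -- made-up numbers for illustration only
studies = [
    (-0.20, 0.10),   # larger study, modest benefit
    (-0.35, 0.25),   # small study, bigger apparent benefit
    ( 0.05, 0.30),   # small study, no apparent benefit
]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```

Because each study is weighted by its precision, the pooled estimate sits closest to the most precise study, which is part of why a meta-analysis can sharpen the evidence but cannot fully substitute for a well-designed RCT.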