Featured article in the Winter 2015 Issue of Nutrition Close-Up; written by Tia M. Rains, PhD
Imagine that you’ve just identified a substance with preliminary evidence suggesting it is effective in preventing a particular disease. In this case, let’s pretend that the condition is type 2 diabetes (T2D) and that the substance is an extract. To test whether the extract prevents the onset of T2D, you would conduct a randomized, controlled intervention trial (RCT). Individuals at risk for developing T2D (e.g., those with prediabetes) would be recruited and, upon meeting the prespecified entry criteria, randomized to receive a capsule containing either the extract or an identical-looking placebo.
Neither the participants nor the study coordinators would know which treatment each participant was receiving. Subjects would be instructed to take the capsule every day for a set period of time (such as 3 or 4 years) and otherwise maintain their normal lifestyle behaviors (consistent diet and exercise habits). At regular visits throughout that period, participants would return to the clinical facility and be tested for the presence of T2D. All in all, little effort would be required of the participants (i.e., take a daily pill and report to a clinic several times per year).
Now imagine that it’s not an extract, but rather a food or dietary pattern. To test whether that prevents the onset of diabetes, you would still conduct an RCT in individuals at risk for developing T2D. However, imagine how different the logistics would be. Participants would need to consume the prescribed food or adhere to the dietary pattern every day for the same 3- to 4-year time frame, including weekends, vacations, and holidays. To maintain energy balance, participants would need to be trained in how to substitute the food for other foods they typically consume. To encourage compliance, study foods would be provided to each participant, requiring them to store large quantities at home, with assurances that other family members would not dip into the supply.
Participants would also be required to maintain the same physical activity and body weight over this period (since changes in body weight influence T2D risk), despite the dietary change. To confirm that body weight is maintained, participants would have to visit the clinic more frequently for weigh-ins than they would for T2D testing alone. They might also be instructed to complete a daily study diary to encourage compliance with the treatment.
In the end, the effort required of participants to test a food or dietary pattern is far greater than in the “capsule” scenario (which is more like a pharmaceutical trial). Participants would be much more likely to drop out, given the added burden, or at least to adhere less vigilantly to study instructions (perhaps consuming far less than an effective dose to bring about meaningful change in any clinical endpoint). Moreover, the cost of the food or dietary pattern study would be astronomical.
Although overly simplistic and for illustrative purposes only, the latter scenario reflects what the field of nutrition has acknowledged as a major challenge to proving cause-and-effect relationships between dietary exposures and disease.1 In the absence of RCT data, observational studies are often the next best type of scientific evidence. The prospective cohort study, in particular, is hailed as the strongest form of observational evidence: participants are evaluated over long periods of time, and diet and lifestyle data are collected prior to the onset of disease, so subjects are less likely to modify their dietary patterns following a diagnosis or the appearance of disease risk factors. But we all know that observational data can only identify diet-disease associations, not prove cause-and-effect relationships.
In some instances where RCT data are not available, however, observational evidence is being used as an equivalent substitute, even though such data are not designed to test cause and effect. Proponents of this approach acknowledge as much, but submit that, given the importance of diet-disease relationships, we should rely on whatever data are available for the sake of public health.1 Yet observational data have not always been right; subsequent RCTs have often shown no effect or an opposite effect (e.g., hormone replacement therapy; B-vitamins, homocysteine, and heart disease; antioxidant vitamins and heart disease).2 These reversals lead to radical changes in recommendations that, very often, result in mass confusion.
No nutritionist would dispute that researching diet-disease relationships is challenging. However, it seems it is time to re-evaluate the limitations of all types of evidence and come to agreement on how best to draw diet-disease conclusions from the data that are available.
Tia Rains, PhD is Senior Director, Nutritional Research & Communications at Egg Nutrition Center.
1. Maki KC, Slavin JL, Rains TM, Kris-Etherton PM. Limitations of observational evidence: implications for evidence-based dietary recommendations. Adv Nutr. 2014;5:7-15.
2. Blumberg J, Heaney RP, Huncharek M, Scholl T, Stampfer M, Vieth R, Weaver CM, Zeisel SH. Evidence-based criteria in the nutritional context. Nutr Rev. 2010;68:478-84.