Featured article in the Winter 2016 Issue of Nutrition Close-Up; written by Mitch Kanter, PhD, executive director of The Egg Nutrition Center
A few recent articles that appeared in technical journals and the lay press seemed to collectively make the following arguments: 1) methods employed to conduct nutrition research are often flawed, leading to erroneous conclusions, and 2) nutrition studies funded by industry sources are really, really flawed, thus leading to biased, invalid results. As one who has spent the better part of the past quarter century facilitating industry-sponsored nutrition research, I will submit that there may be a kernel of truth in both of these statements. That said, I bristle at the notion that they are absolutely true; that nutrition research in general, and studies funded by industry in particular, should somehow be made to wear a scarlet letter.
Are some nutrition studies inherently flawed? Peter Whoriskey recently addressed this in a Washington Post article, pointing out that “relying on observational studies has drawn fierce criticism from many in the field, particularly statisticians”.1 This is because, as Whoriskey points out, the overwhelming majority of observational studies fail to be replicated by randomized controlled trials. And, to be frank, Whoriskey has a point. The unfortunate fact is that human nutrition research is very difficult to do well. It is impossible to control every aspect of people’s lives, such as the amount of sleep they get, how much they exercise, their stress levels, their food consumption, and so on.2 Further, it is difficult (and prohibitively expensive) to recruit enough human subjects who will allow themselves to live under highly controlled conditions for long periods of time. As a result, many experimental nutrition trials are underpowered and not well controlled, making definitive conclusions difficult.2 The alternative is observational studies, and the means by which nutrition information is collected from subjects in observational databases is marginal at best. Therefore, it is incumbent upon health professionals, the media, and others who utilize nutrition research for various purposes to be aware of the shortcomings of different study designs in nutrition science, to read studies carefully, and to temper hyperbolic headlines based on weak or preliminary data.3
With respect to the role industry plays in contributing to nutrition research, I can only address experiences I have had sponsoring industry-funded studies at the companies in which I have worked. Some contend that industry research is agenda-driven, but this doesn’t mean that most industry-funded studies aren’t well done or transparent. At every company where I have worked, the industry source and the university jointly signed a contract giving assurances that the researchers may publish their data regardless of study outcome. In fact, the Egg Nutrition Center (ENC) strongly encourages researchers with whom we work to publish their data, whether the results favor our product or not. It is our strong belief that the preponderance of evidence on any issue will ultimately carry the day. If a food or nutrient truly has a biological impact—good or bad—no single study is going to definitively prove the point.
As a rule, ENC seeks out quality researchers with a track record in their field. We currently have ongoing studies with researchers at more than 30 U.S. universities, large and small. And we select the studies we fund with the aid of a Scientific Advisory Panel made up of six highly respected nutrition research experts. At times I am dismayed when I read articles challenging the integrity of the latest nutrition findings because “the study was done with industry money, so how much of it can we really believe?” This “guilt by association” assessment not only denigrates the safeguards that responsible industry partners enact in an effort to ensure research quality but, more importantly, it impugns the credibility of the university scientists with whom we work. Suggesting that a study funded by industry is inherently flawed implies that those who carried out the study have somehow compromised their integrity.
In the future, when reviewing nutrition research, I would hope that the reader would not simply reject “the worth” of a trial because it is funded by an industry source, but instead think about a number of issues: Are the study methods appropriate? Does the researcher have a track record? Were appropriate statistics applied? Do the data corroborate or refute prior similar studies? Are the conclusions novel? Do they go against conventional wisdom?
There are scores of factors that make a study valid or not.2 It is our hope that the peer review process would address many of these points as a study winds its way through the publication process. But once a study appears in the public domain, it is ultimately up to the reader to place stock in the results or not. Blowing out of proportion the results of a small animal trial or an underpowered human study is counterproductive and only adds to confusion about nutrition. But so too does rejecting good science based solely on the funding source. When it comes to interpreting the results of nutrition research, an open mind and the ability to decipher good from bad science are the best tools that a health professional can possess.
1. Whoriskey P. The science of skipping breakfast: How government nutritionists may have gotten it wrong. Washington Post. August 10, 2015.
2. Maki KC, Slavin JL, Rains TM, Kris-Etherton PM. Limitations of observational evidence: implications for evidence-based dietary recommendations. Adv Nutr. 2014;5:7-15.
3. Slavin JL. The challenges of nutrition policymaking. Nutr J. 2015;14:15.