For 40 years, the Centers for Disease Control and Prevention has been asking people what they eat in an attempt to understand the connections between what we consume and how our bodies feel. And for 40 years, the agency may have been doing it wrong.
The limitations of the CDC data "make it exceedingly difficult to discern temporal patterns in caloric intake that can be related to changes in population rates of obesity."
That's the claim in a new study published in the online journal PLOS ONE. The researchers probed the CDC's National Health and Nutrition Examination Survey, which has interviewed Americans about the foods they eat and their lifestyles since 1971. From the survey, we learned things about nutrition that now seem so fundamental—that diet and exercise choices are linked to body weight, that cholesterol is linked to heart disease, and so on.
But here's the problem, according to the authors: All of that data was compiled by asking people to recall what they ate.
"Nutrition surveys frequently report a range of energy intakes that are not representative of the respondents' habitual intakes," the authors write. "And estimates of EI [energy intake] that are physiologically implausible (i.e., incompatible with survival) have been demonstrated to be widespread." Men and women have been found to underreport calories by between 12 percent and 20 percent, and are more likely to selectively underreport eating the bad stuff, such as fat and sugar.
Translation: We can't trust human memory as the source of our nutrition data, because people can underreport what they eat to an absurd degree. Their self-reports documented amounts of food that could not possibly support their survival. "In no survey did at least 50 percent of the respondents report plausible EI [energy intake] values," the authors report.
The NHANES survey does contain many, many objective measures such as physical examinations and blood work (for instance, it found elevated levels of lead in Americans' blood, which led to the decreased use of the metal in gasoline and soda cans). But it's not as if the CDC can monitor everything a person eats. Nor is it really feasible to do large-scale experiments on nutrition—that is, separate people into control and experimental groups, have everyone eat the exact same things except for one variable, and then compile this data over decades.
Granted, in recent years, the CDC has revised its methodology. Since 2001, it has folded NHANES into the "What We Eat in America" program, which records food intake in a more controlled manner.
There's even reason to believe that as time went on, and as the survey raised awareness of obesity, people's answers became more skewed. The authors explain:
"There is strong evidence that the reporting of 'socially undesirable' (e.g., high fat and/or high sugar) foods has changed as the prevalence of obesity has increased. Additionally, research has demonstrated that interventions emphasizing the importance of 'healthy' behaviors may lead to increased misreporting as participants alter their reports to reflect the adoption of the 'healthier' behaviors independent of actual behavior change."
The survey indicates that health behavior influences responses to future surveys. And that's bad science. All in all, the authors conclude that the limitations of the NHANES data "make it exceedingly difficult to discern temporal patterns in caloric intake that can be related to changes in population rates of obesity."
It doesn't mean our nutrition assumptions are wrong. It just means we haven't proven them, because our methods have been flawed.