On January 14th I published a critique of an editorial by Thomas Sherman and Adriane Fugh-Berman on the Hastings Center for Bioethics blog. A few days later they were moved to respond.
The issue at hand was the retraction of the notorious Séralini rat study.
Sherman and Fugh-Berman held that the retraction was the result of industry pressure and that the retraction didn’t cite reasons that fell within accepted guidelines for retraction [pdf]. I quote them at length to avoid misrepresenting them.
According to the Committee on Publication Ethics, a group that advises medical editors and publishers on ethical issues, particularly how to handle cases of research and publication misconduct:
Journal editors should consider retracting a publication if:
- they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)
- the findings have previously been published elsewhere without proper cross-referencing, permission or justification (i.e. cases of redundant publication)
- it constitutes plagiarism
- it reports unethical research
There are hundreds of studies that should be permanently removed from the scientific literature, but the Séralini study is not one of them. The FCT retraction announcement very clearly states: “Unequivocally, the Editor-in-Chief found no evidence of fraud or intentional misrepresentation of the data” – and then goes on to say, incredibly, that the study is being withdrawn because the journal’s own review of the primary data show that the results are inconclusive.
Inconclusive? Until a hypothesis is proven, all results are inconclusive.
It would have been perfectly appropriate for the journal to have written an editorial expressing its concerns. Instead, it seems the editors may have succumbed to industry pressure to do the wrong thing.
. . . The retraction of the Séralini study is a black mark on medical publishing, a blow to science, and a win for corporate bullies.
My response was threefold. First, I agreed that the retraction had a political element, but argued that it did not appear to be a response to industry pressure.
Second, Sherman and Fugh-Berman had ignored Séralini’s own ethical lapses. The two that I pointed out were his unusual and manipulative press embargo on the study and his decision to allow the rats to die from massive tumors rather than euthanize them. I did not bring up the conflict of interest represented by the study’s funding: Séralini wrote in his book that he funneled industry money through CERES to obscure the funding sources for the study, yet he disclosed no conflicts of interest in the paper. That seems like a major no-no to me.
Third, Sherman and Fugh-Berman had thrown around a lot of innuendo about conflicts of interest. While conflicts of interest raise red flags and call for heightened scrutiny, they do not justify jumping to conclusions. Instead, they should be seen as presenting a hypothesis which should be tested. Sherman and Fugh-Berman say, “The quality of Séralini’s work aside, the process by which his paper was retracted reeks of industry pressure.” But how can you judge whether the retraction can be confidently attributed to industry pressure if you put the quality of Séralini’s work aside?
TIME: Does Organic Food Turn You into a Jerk? (Short answer: yes)
The Atlantic: Does Organic Food Make You a Judgmental Jerk? Maybe
Jezebel: Study Suggests that Eating Organic Foods Contributes to Moral Depravity
Pacific Standard: Get Stressed, Stop Organics, Become A Better Person
I somehow missed it in May when it made the rounds, but the TIME piece came up in a Facebook conversation the other day, just after I had written about bad health reporting and chicken nuggets. So I was primed. Just seeing the headline I knew. I knew. I knew it was going to be another article with a linkbait headline that over-interpreted the study. That turned out to be the best-case scenario.
My two questions when reading an article like this are:
“Did the study even demonstrate what the journalist says it says?”
“Did the study even test its own hypothesis in any meaningful way?” (hint: the answer is almost always in the controls)
You can take a guess what happened.
And a new study shows that organic foodies’ humane regard for the well-being of animals makes some people rather snobbish. The report, published last week in the Journal of Social Psychological & Personality Science, notes that exposure to organic foods can “harshen moral judgments.” Which, to us, sounds like a nice way of saying that organic-food seekers are arrogant.
. . . Eskine and his team showed research subjects photographs of food, ranging from überorganic fruits and vegetables to fattening brownies and baked goods. He then gauged the primed eaters’ moral fiber with stories that warranted judgment, like one about a lawyer who lurks in an ER to try to persuade patients to sue for their injuries.
Reacting to the events on a numbered scale, the organic-food participants were more judgmental than those in the comfort-food category. They were also more reluctant when asked to volunteer time to help strangers, the study found, offering only 13 minutes vs. the brownie eaters’ 24 minutes. It’s like the group had already fulfilled its moral-justice quota by buying organic, so it felt all right slacking off in other ethics-based situations. Eskine labeled it “moral licensing.”
The writer, Nick Carbone, has told us that people who seek organic food are arrogant and snobbish. Is that what the study he has described shows? No. It shows that anybody, not just organic shoppers, can become more judgmental and stingy when exposed to pictures of organic food. It shows that it is the exposure to images of food that triggers this, not “organic foodies’ humane regard for the well-being of animals.”
This would be an interesting observation, if it had been demonstrated in the literature that organic shoppers were in fact more judgmental and stingier. It would provide a clue as to causality. But the entire underlying premise is never addressed. It’s not like there is no literature on the subject. Or that you can’t find any research to support the premise. No one even tried. Not even the authors of the paper.
The paper references the literature on how different foods can affect people’s moral bearings. But it does not look at the literature on the moral or ethical attitudes of organic consumers in order to establish the premise that organic consumers are more judgmental and stingy. The hypothesis they are testing is that exposure to images of organic food could influence people’s levels of empathy. Do they even succeed at that? I would say no.
First let’s look at what the study did.
Sixty-two Loyola University undergraduates (37 females, 25 males) participated in the present experiment for course credit and were randomly assigned to one of three food conditions (organic, comfort, control) in a between-subjects design. Told that they were participating in two unrelated studies (a consumer research survey about food desirability and a separate moral judgment task), participants were first given a packet containing four counterbalanced pictures of food items from one of the following categories: organic foods with organic food labels (apple, spinach, tomato, carrot), comfort foods (ice cream, cookie, chocolate, brownie), or control foods (oatmeal, rice, mustard, beans). Participants also rated each food item on a 7-point scale (1 = not at all desirable to 7 = very desirable) to help corroborate the cover story as well as provide information about their personal food preferences.
. . . Participants next received a packet containing six counterbalanced moral transgressions describing second cousins engaging in consensual incest, a man eating his already-dead dog, a congressman accepting bribes, a lawyer prowling hospitals for victims, a person shoplifting, and a student stealing library books. Each moral judgment was indicated on a 7-point scale (1= not at all morally wrong to 7 = very morally wrong). As with previous research (Eskine et al., 2011), all judgments were averaged into a single score.
After next answering demographic questions, participants were told “that another professor from another department is also conducting research and really needs volunteers.” They were informed that they would not receive course credit or compensation for their help and were asked to indicate how many minutes (out of 30) they would be willing to volunteer.
“On a scale of 1 to 7, the organic people were like 5.5 while the controls were about a 5 and the comfort food people were like a 4.89.” The organic people also only offered to volunteer for a mere 13 minutes, as compared with the control group’s 19-minute offer and the happy comfort-food group’s 24-minute commitment.
Before we move on to why they fail to test their hypothesis, I want to highlight a missed opportunity in their use of the data. If they really wanted to show something about organic consumers specifically, and not just Loyola undergrads in general, they would have calculated the correlation between the strength of subjects’ preference for organic foods and their response to the moral challenges. That might have told us something about organic consumers’ moral orientations. But they didn’t, and it wouldn’t have mattered anyway, since there were no controls in this experiment to begin with.
Wait, what about the control group of oatmeal, rice, mustard and beans? Those were meant as a control in a comparison of organic fruits and vegetables vs. non-organic desserts. That would be fine if there were one variable of moral superiority, organic versus non-organic, but there are two, the other being fruits and vegetables versus desserts. As the test was designed, we have no way of knowing whether it was the moral halo of fruits and vegetables or the moral halo of organic that produced the result. And as anyone who has ever shopped at Trader Joe’s can tell you, produce isn’t the only type of organic food. There are plenty of organic desserts and snacks. In fact, organic junk food is a bigger segment of the organic market than produce.
If the study had included proper controls, we could compare the difference in response between an organic carrot and a conventional carrot. We could compare the difference between organic oatmeal and conventional oatmeal. But as designed, we can’t compare anything meaningful.
The correct comparison would have been organic produce vs. conventional produce, organic neutral foods vs. conventional neutral foods, and organic desserts vs. conventional desserts. If those had been the categories, if they had calculated the correlation of preference for organic food with moral response, and if the study group had been larger than 62 students, it might have told us something interesting. But it didn’t.
The study took about five minutes to read and about eight seconds to see the flaws. The fact that these poorly designed, underpowered studies are reported on at all drives me crazy. It’s even more infuriating that they are misrepresented instead of debunked. The fact that they absolutely litter the health sections of reputable publications is all the more maddening because interesting and significant papers are routinely ignored.
But before we let organic consumers off the hook too fast, note that the first commenter on the TIME version of the story went out of her way to make Eskine’s point.
To label people that eat organic food as “Jerks” is completely ridiculous. I am a proud supporter of organic food and will be till the day that i die. Calling someone a jerk because they eat organic food is childish. There is one thing that this article did get right about the organic community. We do congratulate ourselves for our moral and environmental decisions, because we are doing the right thing. Choosing all organic foods shows that you care about your health and the environment.
You can’t make this stuff up.
Wholesome Foods and Wholesome Morals? Organic Foods Reduce Prosocial Behavior and Harshen Moral Judgments | Kendall J. Eskine | 2013
Final draft [pdf]
Organic purchasing motivations and attitudes: are they ethical? | M.G. McEachern, P. McClean | 2002
The relationship between high-fat dairy consumption and obesity, cardiovascular, and metabolic disease | M Kratz, T Baars, S Guyenet
An overview of the last 10 years of genetically engineered crop safety research | A Nicolia, A Manzo, F Veronesi, D Rosellini | 2013
This story on the non-chickenness of Chicken Nuggets had been persisting as a featured story in my RSS reader, so despite my better judgment, I finally gave in and clicked.
Chicken nuggets: Call ’em tasty, call ’em crunchy, call ’em quick and convenient. But maybe you shouldn’t call them “chicken.”
So says deShazo, a professor of pediatrics and medicine at the University of Mississippi Medical Center. In a paper published in The American Journal of Medicine, deShazo and his colleagues report on a small test they conducted to find out just what’s inside that finger food particularly beloved by children. Their conclusion?
“Our sampling shows that some commercially available chicken nuggets are actually fat nuggets,” he tells The Salt. “Their name is a misnomer,” he and his colleagues write. The nuggets they looked at were only 50 percent meat — at best. The rest? Fat, blood vessels, nerve, connective tissue and ground bone.
Now, this was an informal test. To conduct their chicken “autopsy,” the researchers went to two different national fast-food chains near their health center in Jackson, Miss., and ordered chicken nuggets over the counter.
This isn’t surprising, and I wouldn’t be at all surprised if it turns out to be representative, but what the hell are “scientists” doing reporting an informal test to the public? That is the antithesis of science. If you throw a few nuggets under a microscope and find something interesting, then do some science before reporting it to the public.
It doesn’t rise to the level of rigor that Mythbusters or Alton Brown would give this issue.
The informal test they did suggested the need for a real test that could produce an actual meaningful result. This nonsense is a big reason the public feels jerked around, since what ‘science’ tells us seems to change every other week. It reminds me of the nutrition professor who wrote up his N=1 Twinkie Diet experience. I really don’t understand the abysmal level of health reporting in this country, especially at organizations like NPR and CNN, which have resources and reputations to look after.
This piece has gotten fairly wide circulation and deservedly so. I have a few quibbles and observations.
1. You really need to disentangle biotech seeds from the problems relating to the pesticide use associated with specific seeds before you explain how they are related. To someone who isn’t already on top of the issues, they are hopelessly conflated in this piece.
The local differences over glyphosate are feeding the long-running debate over biotech crops, which currently account for roughly 90 percent of the corn, soybeans and sugar beets grown in the United States.
While regulators and many scientists say biotech crops are no different from their conventional cousins, others worry that they are damaging the environment and human health. The battle is being waged at the polls, with ballot initiatives to require labeling of genetically modified foods; in courtrooms, where lawyers want to undo patents on biotech seeds; and on supermarket shelves containing products promoting conventionally grown ingredients.
This is the opposite problem from what Amy Harmon was criticized for in her citrus greening piece. Many felt that she did not provide enough context. I disagreed with that criticism. I thought Harmon was wise not to attach a giant boilerplate rehash of the entire GMO debate before moving on to tell the story that she had chosen to tell. Balancing the proper amount of background necessary for clarity and context is tricky.
2. Strom’s choice to use the term ‘biotech’ without ever using ‘GMO’ is an interesting and loaded choice. I’m not entirely sure what to make of it. Is there a move afoot at The Times to tell these stories in a less polarizing way? Not enough data. Stay tuned.
3. I’m sure that this story will fuel Monsanto Derangement Syndrome, but it’s not obvious to me that there are any clear policy takeaways other than the need to fund independent ag research at our public universities, so that farmers get the information they need to make good choices.