Three Years Later, Coauthor of “Blinded with Science” Paper Has Some Ironic Reflections


Earlier this week, BuzzFeed published a detailed investigation of a prominent food psychologist who massaged and manipulated data to produce media-friendly results. You’ve probably heard of some of Cornell University professor Brian Wansink’s studies. There was the one with the “bottomless” soup bowl that refilled itself while subjects ate, to study portion control; the one about characters on cereal boxes making eye contact with kids from grocery-store shelves; and so on. Several of Wansink’s papers have been retracted for issues like duplicated material or unreliable data. More of them have been corrected after publication.

Reading about Wansink, I felt a sinking familiarity at the title of one of his papers: “Blinded with science: Trivial graphs and formulas increase ad persuasiveness and belief in product efficacy.” I wrote about it back in October 2014. The paper found that consumers reading about a drug believe that drug is more effective if they see a graph or formula, even if the graph or formula gives them no new information.

This paper hasn’t been retracted. But the irony is hard to escape—it’s about using the appearance of science to convince people of something. “Graphs equal truthiness,” lead author Aner Tal told me in 2014.

Tal isn’t at Cornell anymore, but I tracked him down and asked what he thought about the recent developments.

Tal was a postdoc in Wansink’s group when the paper was published. Tal stressed that he designed, ran and analyzed this study himself; Wansink was the second author. And the “Blinded with science” paper has received some scrutiny, Tal says. Another researcher asked for one of his datasets and did an independent analysis of it. He found the same results, Tal says. “It’s as true as it was the day it was published.”

He also says that the “Blinded with science” study was designed to test a specific hypothesis. By contrast, the BuzzFeed article describes other studies where researchers gathered lots of data first, then analyzed it many different ways until they found a “hypothesis” that held up statistically. (If diners pay half price for a buffet, does it affect how they feel afterward? What if you only look at men, or women? What about people who sit close to the buffet, or order soda, or eat alone?) “But that’s not how science is supposed to work,” writes Stephanie M. Lee in her BuzzFeed article. With enough poking and prodding at the data, you might eventually find a result that looks statistically significant, just by luck—not because there’s a real effect. If somebody else tries to repeat the experiment, they won’t get the same result.

“I still believe in that work,” Tal says of his study. He’s planning to do follow-up experiments.

In 2014, Tal said one lesson of his research was that scientists and journalists should make sure to convey the uncertainty in results. Otherwise, people may be blinded by a “halo of scientific validity.” That warning turned out to be all too relevant for Wansink’s research group.

“I also highly, highly encourage people to run replications and extensions of this work,” Tal says now. “I do believe that is how science should work, with findings confirmed by independent researchers.”

Image: from Tal & Wansink (2014).
