Negative Research Results? There's a Cure

Andrew J. Vickers, PhD


February 10, 2011

NEW YORK - According to a meta-analysis recently presented at the annual meeting of the Association for Clinician Researchers, failing to submit a paper for publication is of proven benefit for negative research results. Primary author Anthony Brown, an endocrinologist from the University of Connecticut, Storrs, reported the results of an extensive meta-analysis comparing a number of different treatment options. "Although some have claimed that a carefully argued discussion section, or a hard-hitting press release, can ameliorate the effects of negative results, the data are quite clear," said Dr. Brown. He also stated: "In head-to-head comparisons, nothing beats not having a paper in the first place." In an interview, Eileen Williams, senior author, concurred: "Researchers have long worried whether burying discomfiting data in a desk drawer was really effective. Our research should really end the debate; what other people don't know can't harm you."

Participants expressed a sense of relief that common practice could finally be placed on a firm evidence-based footing. Eric James, Chair of Nuclear Medicine at Yale University, New Haven, Connecticut, said that, previously, all researchers had to go on was anecdote: "I've previously told colleagues about personal experiences when our research came back with the wrong results and we didn't publish. But it is so much more effective to cite hard data showing that putting pressure on the statistician isn't half as effective as pretending you never did the study in the first place." Dr. Irene Jenkins, a researcher who has mentored countless clinical researchers at Johns Hopkins University, Baltimore, Maryland, agrees. Dr. Jenkins stated: "I often have junior faculty come to me distressed over a nonsignificant P value. I generally advise one of two basic approaches: Either you can insist to co-authors that negative results simply aren't publishable, or you can describe them as 'difficult to interpret,' explaining that you need time -- lots and lots of time -- to consider the data carefully in the light of existing research. Of course, this was just a judgment call on my part. Now I have clear evidence supporting what I have been doing for years."

One aspect of Dr. Brown's meta-analysis that generated particular interest was the finding that countries differed in their tendency to avoid publishing high P values. Several papers in the literature review clearly indicated that although US researchers sometimes published negative results, the Chinese journals almost always reported statistically significant findings. "This is just one more way in which we are falling behind the Chinese," said Dr. Brown.

Nonetheless, some investigators expressed reservations about the new findings. John Waldin, a doctoral candidate at McGill University, Montreal, Quebec, Canada, said that he and coworkers had not found failure to submit statistically superior to alternative strategies for dealing with negative results. According to Waldin, techniques such as selective data reporting, unplanned subgroup analysis, and data dredging can be equally effective -- especially if done carefully. That said, Waldin did concede that his doctoral advisor had told him that his findings were probably not worth publishing, given that his P values were all > .05.

Meanwhile, Dr. Brown is all set to submit his work for publication, and is confident that he has a good chance of acceptance by a high-impact journal: "Many of the best journals agree with us that the American public needs to be protected from high P values. Unless the research is about something we don't like, of course."

With apologies to The Onion
