Retractions in Medicine: The Tip of the Iceberg

Ivan Oransky; Stephen E. Fremes; Paul Kurlansky; Mario Gaudino

Eur Heart J. 2021;42(41):4205-4206. 

In 1983, in the aftermath of what was then considered one of the most significant cases of scientific fraud ever, The New York Times reported that 82 papers by John Darsee, formerly of Harvard and Emory, had been retracted (available at https://www.nytimes.com/1983/06/14/science/notorious-darsee-case-shakes-assumptions-about-science.html). That idea persisted: ~30 years later, Nature said that >80 of Darsee's papers had been withdrawn.[1]

In truth, just 17 papers by Darsee have ever been retracted (available at: http://retractiondatabase.org/RetractionSearch.aspx#?auth%3dDarsee%252c%2bJohn%2bR). That may seem surprising, given how high-profile the case was, but we have learned in the decades since that thousands—or even tens of thousands—of papers that should have been retracted have not been.

Last year, there were >2300 retractions, up from just 38 in the year 2000 (Figure 1).[2] Even accounting for the growth in papers published, the rate has increased dramatically. There are far more eyeballs on papers today, including the eyeballs of sleuths who find image manipulation, plagiarism, duplication suggestive of paper mills, statistical anomalies, and other issues (available at: https://retractionwatch.com/2018/06/17/meet-the-scientific-sleuths-ten-whove-had-an-impact-on-the-scientific-literature/). Take the example of John Carlisle, an anaesthetist whose work spotting data too good to be true, and randomization issues, has led to scores of retractions, including one in the New England Journal of Medicine (available at: https://www.npr.org/sections/health-shots/2018/06/13/619619302/errors-trigger-retraction-of-study-on-mediterranean-diets-heart-benefits).

Figure 1.

Retractions in the health sciences by year since 2000. Data are from the Retraction Watch Database at retractiondatabase.org.

Still, retractions remain a relatively infrequent occurrence, on the order of four in 10 000—0.04%—published papers.[3] While the rate may appear to be plateauing or even declining in some fields,[4] the amount of time that retractions typically take means that the data lag.

Regardless of the true rate, however, it is clear that far more papers should be retracted than are being retracted. We can say this with confidence because of several factors. First, it is not unusual for journals—as in the Darsee case—to fail to retract papers despite official requests for retraction from universities or government agencies following findings of misconduct.[5] Second, sleuths with good track records for accuracy routinely complain—with good reason—that only a fraction of the papers they flag to journals are ever acted on (available at: https://www.the-scientist.com/news-opinion/eye-for-manipulation-a-profile-of-elisabeth-bik-65839).

While it is impossible to know with certainty just how much of the literature is flawed enough to be retracted, we may derive some clues. A 2009 systematic review and meta-analysis of surveys found that 2% of researchers admitted to committing misconduct[6] and a 2016 study found that a very similar percentage of a large sample of papers included evidence of deliberate image manipulation.[7] Two percent is, of course, much larger than 0.04%.

What explains this discrepancy? Here, too, we can point to a number of factors (see Table 1). One is the 'publish or perish' system, which rewards publication in journals nearly exclusively and leads to fierce pushback against any criticism of papers, particularly critiques that could lead to retraction. Another is the lawyers hired by authors, who, as Nature has acknowledged, can slow down the retraction process or grind it to a halt.[8] And the content management systems used by publishers still struggle with retraction workflows.

Times are, however, changing. It has become more difficult for authors, institutions, and journals to ignore critiques on sites such as PubPeer.com. Some journals are hiring research integrity managers whose role is to investigate allegations about published papers, and to nip problematic manuscripts in the bud before they are even published (available at: https://www.statnews.com/2018/11/21/research-misconduct-journals-hiring-research-integrity-czars/). Those moves are a tacit acknowledgment that peer review, for all of its strengths, is not the Good Housekeeping seal of approval that many journals would like us to believe it is.

A small number of journals have implemented 'retract and replace' policies designed to encourage authors to correct significantly flawed papers but not lose a publication entirely. 'If the error is judged to be unintentional, the underlying science appears valid, and the changed version of the paper survives further review and editorial scrutiny, then retraction with republication of the changed paper, with an explanation, allows full correction of the scientific literature', according to the International Committee of Medical Journal Editors (available at: http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/corrections-and-version-control.html). While the approach has had its growing pains, including a replaced paper that then had to be retracted (available at: https://retractionwatch.com/2017/10/20/retract-replace-retract-beleaguered-food-researcher-pulls-article-jama-journal/) and metadata practices that can confuse databases and other repositories,[9] the goal is a laudable one.

In other cases, researchers are taking courageous steps, coming forward about their errors or even about misconduct in their labs. Whether they know it or not, such bravery need not mean a bump in the road for their careers.[10] Just as we should trust journals that retract papers, and be wary of those that do not, we should be more confident in work by researchers who do what is necessary to correct the record than in work by those who ignore critiques. That includes checking reference lists for retracted papers, which is now possible thanks to efforts such as a partnership between Zotero and Retraction Watch, which one of us (I.O.) co-founded (available at: https://www.zotero.org/blog/retracted-item-notifications/).

There is, however, no heuristic—number of retractions, journal title, institutional prestige, Impact Factor, or any other metric—that can replace the good old-fashioned way of assessing a paper: Reading it.
