Crowdsource the Integrity of Published CV Research? Critics and Editors Debate

October 03, 2014

LONDON, UK — Does the current system for publishing medical research encourage good people to do bad things and then do a poor job of protecting the evidence base from the resulting harm? A recent opinion piece calls for reforms to how journals and institutions respond to discredited published research and invites a broad spectrum of the scientific community to participate in maintaining the integrity of its publications[1].

Not all of its ideas are new, its tone and arguments are often personal, and observers would dispute whether the current system is sound, broken, or somewhere in between. The article is also, at another level, one of the latest salvos in a dispute between its authors and at least one journal[2] that has played out over more than a year, only recently showing signs of resolution[3,4]. Its proposals drew a mixture of responses from physician editors contacted by heartwire.

The controversy stems from 2011, when renowned researcher Dr Don Poldermans was fired from his institution for fraud relating in part to his studies of perioperative beta-blocker use, as covered by heartwire. The fraudulent data found their way into European guidelines, prompting contentious allegations, especially about the extent of mortality attributable to the supposedly tainted recommendations[5].

The affair eventually led to the current article, published September 21, 2014 in the European Heart Journal by Dr Graham D Cole and Prof Darrel P Francis (National Heart and Lung Institute, Imperial College London, UK), an extensively revised combination of a two-part series originally planned for the same journal. Part 1 of the series had found its way online before the journal quickly pulled it down, a drama in itself that was covered in detail by the blog CardioBrief[5] and described from the perspectives of the journal's editor[2] and of Cole and Francis themselves[1,3].

The Cole-Francis revision now brings a crowdsourcing twist to recommendations for improving data integrity in medical publishing, calling on examples from their own roles in the European-guidelines controversy. The new article proposes that the research, publishing, and readership communities share responsibility for keeping errors out of the literature and for catching misconduct before either can affect clinical practice.

"Multiple Potential Guardians"

"For research failure to evoke enduring harm with global reach, multiple potential guardians must assist, actively or passively," according to Cole and Francis. "Aviation professionals have pioneered systems to identify, examine, and improve practice from professional failures. Medicine is now entering this path in clinical practice but must do the same for research," they write.

Our current system may be forcing good people to do bad things.

"Our current system may be forcing good people to do bad things," Cole said in an email to heartwire. "When problems arise in a published paper, perhaps neither institutional staff nor journal editors are the ideal people to judge its scientific validity. All have an inescapable conflict of interest. It would be better to have that judgment issued by people whose sole focus is the welfare of future patients and who do not feel any reputational link with the research."

Francis said by email: "Readers want journals filled with reliable science, but this will not happen by magic. We all need to work for it. Our article describes how everyone can and should help. We owe this to our patients."

Assumptions of Truthfulness: What Do Editors Say?

Currently, "the integrity of clinical research relies on three key elements: the researcher, the research institution, and the medical journal," Dr Catherine M Otto (University of Washington, Seattle) emailed heartwire. She is editor in chief of the journal Heart, which ran a meta-analysis, coauthored by Cole and Francis, that played a role in the European-guidelines controversy[6].

Editors are not in a good position to determine fraud.

"When research is submitted to a medical journal, editors start from the premise that the authors are truthful and that the research has received appropriate approval and oversight from the author's research institution. Peer review ensures the presented research is relevant, important, and of high scientific caliber," she said.

"Editors are not in a good position to determine fraud," JAMA Internal Medicine editor Dr Rita Redberg (University of California San Francisco) observed by email. "Part of the assumption is that when authors are submitting papers they are honest." And they are, most of the time, she said.

Fraudulence or Misconduct Uncommon?

"I must admit, all editors are embarrassed from time to time when something that they've published turns out to have some errors," acknowledged Dr Anthony N DeMaria (University of California, San Diego). But manuscripts tainted by "fraudulence or academic misconduct" that survive the peer-review process "remain relatively uncommon," DeMaria, who was editor in chief of the Journal of the American College of Cardiology from 2002 until earlier this year, told heartwire.

"My sense of the medical literature is that it does a reasonably good job in ensuring that published material is high quality." Editors now trust that a manuscript's content is accurate. "I think that trust is justified," despite the occasional violation, he said.

I believe that the system tends to be generally self-correcting.

"If somebody publishes something that is just downright wrong, sooner or later, hopefully sooner, other studies that attempt to confirm it will not be able to confirm it. And so I believe that the system tends to be generally self-correcting."

But Francis pointed to his own experiences in bringing attention to effects of discredited research on guidelines[3], which did not lead to the retractions that he sought. "We should not assume coauthors, institutions, or editors will speak out definitively when trial reports are unreliable."

Problems and Proposals

Cole and Francis outline alleged weaknesses with the current medical publishing system, including peer review, and make some broad proposals for countering them. The shortcomings and proposed corrections include, among others:

•   Falsification of data. That can happen, for example, by "suppressing nonfitting measurements or selecting patients to support a hypothesis."

•   "Russian-doll publication" of overlapping data sets, which "can occur innocently when experts understandably report growing cohorts."

•   Accountability that is too narrow. For example, "Coauthors are far better placed than readers, editors, or even institutions to identify misconduct. If we made them all share the consequences when research is misconducted, they would try harder to prevent it."

•   Don't let a culpable researcher's institution off the hook: "We should not trust an institution to swiftly correct science its workers have seriously misreported," Cole and Francis write. "We must make institutions fear not discovery of misconduct but slowness to retract trial reports that are false."

•   Involvement by readers. "Even when merely reading papers, we [the clinical community] should not accept claims that lie so far outside the range of plausibility that they are likely to be incorrect."

•   Data transparency. "Publish all the data, all the time," they write. Patient-level data can be included in online supplements or centralized databases so that there can be universal oversight and to facilitate after-the-fact inquiries. "Openness also protects all of us from any temptation to edit data to match expectations."

Data Access and Data Verification

The Cole-Francis article, according to Otto, "raises several important questions about publishing reliable clinical science. In addition, they provide several detailed examples of where things went wrong. However, they do not go far enough in providing concrete constructive suggestions for how the clinical research community can remedy these issues."

"The journal editor and peer reviewers cannot directly verify the truth of the presented data. Data repositories with access to the entire research data set will allow more detailed postpublication peer review but will not eliminate the problem of scientific misconduct," Otto said.

"It would be nice, in the best of all possible worlds, if journals would require that the authors submit all their raw data, including patient-consent forms, and that the journal would then review all the raw data, all the measurements, et cetera, to ensure that everything was absolutely accurate," DeMaria said.

"But we don't live in the best of all possible worlds. If you take a journal receiving, say, 100 manuscripts a week, it's just not logistically possible to review all of the raw data."

Some journals make a limited effort to verify some parts of manuscripts, according to Redberg. They may, for example, check whether the methods and end points match what was noted in the trial's registration.

"I think that is common. [But] we wouldn't generally go into the primary data unless there were a reason to." A similar situation applies to conflict-of-interest disclosures. "We ask everyone, but we can't check on everyone," she said.

"You can't put the responsibility all on any one party; we all have a stake and a responsibility in ensuring accurate data—clinicians, readers, editors, researchers, academic institutions."

"We Have to . . . Promote a Culture of Honesty"

Commenting for heartwire by email, Prof Thomas F Lüscher (University Hospital Zurich, Switzerland), editor in chief of the European Heart Journal, said "supervisors and chairmen/women have the obligation to teach fellows good scientific publishing," echoing his own paper on the topic[7]. "We have to ensure and promote a culture of honesty and precision. This is the main pillar of scientific discovery."

Indeed, Lüscher et al's editorial earlier this year explained that the original version of the Cole and Francis paper was retracted within hours of appearing online because it had inadvertently been published without appropriate review. Francis, however, has previously pointed out to heartwire that it took three years after Poldermans's dismissal to review the guidelines incorporating that fraudulent data.

Lüscher said, "I am also concerned about the increasing retractions of manuscripts, particularly from high-impact journals. Therefore, statistical plausibility has to be checked, and we do this for every manuscript we seriously consider." Editorial reviewers, too, "should consider the option of inappropriate data collection in their critique, if applicable."

DeMaria agrees that inconsistencies, implausible confidence intervals, and other technical issues with the data sometimes emerge in manuscripts. The system, he said, "will never be perfect, and things perhaps could be better. If there were greater resources to review more original data or to have people assigned who can go over every single table, number by number, to ensure that the data are completely consistent, then things would be better. And maybe that's something we ought to hope for in the future."

"All Authors Accountable"

"We should commend Cole and Francis for suggesting that all authors be held accountable for the validity of research publications, along with other members of the research group," Otto said.

"In addition, I suggest that the author's research institution is equally responsible for the integrity of research publication and should equally bear the consequences. Research institutions often are remiss in their responsibility to provide an environment in which it is unthinkable to perform poorly designed, inadequately documented, plagiarized, or fraudulent research."

According to Redberg, "I think it certainly is the responsibility of an institution that supports someone who is accused of fraud to make their own investigation and act accordingly."

Coauthor accountability played heavily into the Poldermans affair and the questions it raised about the impact of fraudulent data on the guidelines. Coauthors on some of the now-discredited research papers included prominent names, notably Prof Jeroen Bax (Leiden University Medical Center, the Netherlands), president-elect of the European Society of Cardiology[8]. Moreover, he and Poldermans held leadership positions in the development of the society's guidelines on perioperative beta-blocker use.

Research institutions often are remiss in their responsibility to provide an environment in which it is unthinkable to perform poorly designed, inadequately documented, plagiarized, or fraudulent research.

Coauthor accountability should be assessed on a case-by-case basis, according to DeMaria. Sometimes, he said, "the poor coauthor is dragged into it," unknowingly and innocently. "That's a really tough one. Now I suppose that [Cole and Francis] could argue that you should never let your name be put on anything that you didn't personally assure yourself was carried out perfectly," he said. But "there's got to be trust in the system."

As for publications based on multicenter trials, "to me it doesn't seem reasonable that you could hold every investigator in every center responsible for an error or misconduct that might have happened at any one of those centers."

On the other hand, DeMaria noted, future coauthors working on the same research at the same institution can become aware of an error or other issue with the data. "Oftentimes those people will remain silent, when in fact they're in a perfect position to bring forth the fact that there's an error." To his knowledge, that has happened "a number of times, actually, and those papers typically don't get published, and so nobody knows about it."

Lüscher said he agrees "that coauthors should really read the articles they're involved in and approve the latest version. We [at the European Heart Journal] are now installing a system whereby every author has to officially approve the submitted version and indicate his contribution."

Cole and Francis declared no conflicts of interest.