Fraud Is Easy
Robert M. Califf, MD: Hello. I'm Rob Califf from Duke University, and I'm here at the European Society of Cardiology with Tony DeMaria. Tony just finished his term as editor of the Journal of the American College of Cardiology (JACC), a major journal.
I happen to have had some experience with irreproducible data and journal retractions; in fact, I think, Tony, I may hold the record for overseeing journal retractions. At Duke, we had a cancer researcher who was totally discredited, and there were over 20 manuscripts we had to completely or partially retract.
What was your experience at JACC with this sort of thing?
Anthony N. DeMaria, MD: The kind of experience you had is actually the easy case. When something is clearly false or fabricated, life gets easy: that's retractable.
But there are so many other problems with the medical literature that are more in the gray zone: Did you get institutional review board (IRB) approval? Did you adhere to what the IRB wanted? Is there some plagiarism or reduplication? Did you not get informed consent from the patients? Are you unable to reproduce the data? All of these things raise questions about the validity of the manuscript, but whether they rise to the level of warranting retraction is not easy to assess.
Dr Califf: One thing I found when this fellow came under my purview—
Dr DeMaria: Was this before or after 60 Minutes?
Dr Califf: This is the one that precipitated my appearance on 60 Minutes.
That was quite an experience to go through. I contacted all the coauthors from the papers that he was on, and I found that even though many people knew that there was something wrong, nobody had done anything about it.
Have you had experience with coauthors, or did you correspond only with the primary author?
Dr DeMaria: It's a major problem. As an editor, you don't get the raw data. Even if everybody submitted the raw data, you're not really equipped to review it; there's a limit on your ability to actually verify the data. So you depend on coauthors; you depend on institutions. If a question was raised, we always went to the author first and asked, "How do you respond to this?" If there were still major questions, we'd go to the coauthors and to the institution.
But no coauthor or institution wants to have their own reputation sullied by fabricated data, or even incorrect data. A lot of times, it's not willful fraud; it's just that people make mistakes. In all my years as editor, maybe once or twice at most did an institution respond, "Wow, you know, we've checked it out, and everything's fine."
Dr Califf: Institutions are very reluctant to do it, but on the other side, we had a paper at JAMA that we said should be retracted, and they didn't do it. There were questions raised in peer review that we thought were answered incorrectly. The peer review was confidential and, as an institution, we had no right to see it. The first author was under siege legally and didn't let us see the reviews. So there's an article out there that we thought should be retracted, but that JAMA kept in play.
Misconduct Is Common
Dr DeMaria: There are some interesting statistics about these things. The data indicate that around 15% of authors know of misconduct on the part of a colleague of theirs. And about the same percentage say they know of misconduct of a colleague with whom they publish papers. Then about 2% or 3% admit to manipulating data themselves. From those numbers, presumably 10%-15% of the literature is off a little bit.
As editors, we're not very sharp at detecting it, because there are not that many retractions. And the most incredible thing, Rob, is that retracted articles continue to be cited.
Dr Califf: That's amazing. As papers and analyses have gotten more complicated, you wouldn't expect the clinicians who enroll the patients to necessarily be able to interpret the multivariable statistical analysis. It actually turns out that, technically, you're responsible for your part of the paper; you don't have to vouch for the validity of the other people's work. I hadn't known that, but it became a big issue when I had to deal with our problems at Duke. Ultimately, we had to look at each part of the paper and go to the person who had contributed to that part.
Our particular case involved genetic studies with many coauthors, and obviously they can't all repeat the analysis.
Dr DeMaria: When I was editor at JACC, we had experiences with multicenter studies where there were irregularities with one of the centers, but not with the others. So the other centers would argue that if you eliminate the data from the dubious center, all of their data are clearly well founded, and the results still hold true. Presumably, a retraction should occur only if it's going to lead to some false treatment, or if it's going to adversely affect future research.
But if it's duplicate data, or if the person didn't get informed consent, the data may still be true. It's an interesting dilemma: If someone submitted an article that had the cure for cancer, but they didn't get informed consent, would you withhold that information?
Repeat and Verify
Dr Califf: That's a classic ethics dilemma, and there are different opinions about what one should do.
We're now in this formal reproducible science era, and there are recommendations and standards about replicating results before you submit them, and about keeping audit trails (even in basic science labs), that didn't use to exist; if there's a question about the results, you can go back and see who manipulated the data. The National Institutes of Health (NIH) is actually giving R01 grants to reproduce someone else's research, like a tax audit. What do you think about that?
Dr DeMaria: It's very interesting and important, but every medical editor strives for novelty. If you ask the editor of any journal, "What's the most important attribute of a manuscript?" I believe that they will say, "It's got to be new. I don't want to publish the second, or third, or fourth article on something." Confirmatory studies often have difficulty getting published. Negative studies have to be extremely strong to get published, because the reviewer will say, "These data don't support the accepted findings."
I think confirmatory studies are very important, and when I was at JACC, we published a lot of them, but you have more difficulty getting them accepted by the highest-tier, competitive journals.
Dr Califf: Given all these problems with journals—the fact that they're not equipped to know whether the data are right or not, and they're looking for novelty, which encourages people to stretch what they're doing and maybe disregard less novel parts of their research—why can't we just be like the physicists and put it on the Web and let people have at it?
Dr DeMaria: Obviously, we could do that, but let's not be authors; let's be readers. Now you're a reader, and there's all this medical literature coming at you. Our readers at JACC used to tell me that it's like trying to drink from a fire hose; there's so much out there, and people want things prioritized. What's the best information? What do I absolutely need to know, because I only have X amount of time every day to read? That's an important function of the peer review process.
One thing I came to appreciate over the years, which is generally (but not invariably) true, is that articles in the lower-tier journals more often have some little flaw, some consideration that raises a question, than articles in the top-tier journals. The top-tier manuscripts are pretty accurate, and that's why readers tend to go there.
Dr Califf: Looking back on your time as an editor, in terms of reproducible science and veracity of data, what's the main advice you would give a young person beginning to write articles now?
Dr DeMaria: In terms of reproducibility of data, I'd be very cautious about submitting data that I had any questions about. Before I even started a study, I'd make sure that it was adequately powered. So many people start studies and use convenience samples, but they're underpowered, and they come to some conclusion that shortly thereafter is overturned. I'd get somebody seasoned and experienced—somebody like yourself—and I'd ask them to review the data to see whether they can pass muster with a highly skeptical individual (not that you're skeptical).
Dr Califf: To close, I would say that we're entering an era where data are going to be in the cloud; they are going to be available to a lot of people, and transparency is going to be demanded. The worst thing that could happen to you as a young investigator would be to publish something and then have someone else discover that you had overlooked critical data, or you had done the analysis incorrectly. There's a very high premium on doing things right the first time.
Dr DeMaria: Yes, it's important to do things right, but errors occur. I remember one of our junior faculty and I collaborated on an abstract. We sent it in on seven dogs, maybe eight, or something like that. The abstract was accepted, and we said A = B. Well, we did seven more dogs. Turns out the answer changed.
So the young guy said, "We've got to retract this abstract." And I said, "No, we're going to change the presentation." The first slide said A = B (the original conclusion), and then underneath, A ≠ B, because once we doubled the sample size, the answer was totally different.
Errors are made; that doesn't mean that it's fraud, as long as it's corrected. If a retraction is for fabrication, that's serious business. If it's for an honest error—well, very few of us are perfect.
Dr Califf: Right. Let's end by saying that whether it's clinical medicine, research, or life in general: It's not whether you make errors—we all do—it's how you deal with the errors that really distinguishes a person.
Dr DeMaria: I agree.
Dr Califf: Thanks, Tony.
© 2015 WebMD, LLC
Cite this: Journal Retractions: From Innocent Errors to Outright Fraud - Medscape - Jan 27, 2015.