COMMENTARY

Ioannidis: Most Research Is Flawed; Let's Fix It

John P. A. Ioannidis, MD, DSc

June 25, 2018

Eric J. Topol, MD: Hello. I am Eric Topol, editor-in-chief of Medscape. With me today is Professor John Ioannidis from Stanford, who I have been dying to have a chat with for a long time. I'm so glad we could get together. John, welcome.

John P. A. Ioannidis, MD, DSc: Thank you, Eric. It's a great pleasure to chat with you.

Becoming the 'Conscience of Biomedicine'

Topol: I have been following your work and career for a number of years. You are the "contrarian of medicine." I say that in a positive way.

Until I finally had the chance to do this interview with you, I did not know some of your background. You were a math prodigy in high school, you received the National Award in Greece, and you are the son of two physician researchers. You seem like you were made for this role you have, in terms of the conscience of biomedicine. How did you get your roots in this model that you really espouse?

Ioannidis: I was exposed to a lot of science early on. I loved lots of different aspects of the scientific method and scientific discipline I found in mathematics, biology, bench research, clinical research, and clinical epidemiology. I was always very unfocused and I wanted to try my hand at different types of research.

I realized that I was making errors again and again in almost everything that I was trying. I started realizing that other people were also making errors—in the lab, the clinic, and in published literature. Errors are common. They are human. Some of them are probably more common than they should be.

Topol: You got to the point where you estimated that 90% of medical research is flawed.[1,2] That gets depressing, right?

Ioannidis: One can see it as the glass half empty or half full, or 10% full or maybe a little bit more. Medicine has made tremendous progress and is still making progress. One can focus on that.

The question is, how can we improve the efficiency of what we are doing? And how can we decrease the error rate? How can we less frequently be misled and send our best people down blind alleys?

If we see the positive message that we can identify problems and get rid of them, that is very optimistic.

It's Not Just Biomedical Research

Topol: You have been on a crusade and have hit on almost every discipline: genetics, psychology, neuroscience, clinical trials, drug companies, the whole lot. Most recently, I noticed you even went after economics.[3] Is there anything that you have not worked over?

Ioannidis: The great fun and opportunity when working on meta research—or research on research—is that one very quickly realizes that research methods and research practices, and the way they are applied or transformed, are pretty similar across very different disciplines.

The scientific method is pretty unique. There is heterogeneity in the way that different disciplines have preference for some aspects of it or how exactly to operationalize it, but we can learn a lot by comparing notes. If you look at different fields, you realize that some of the big problems we face in biomedicine may have been solved in other fields pretty easily and may be a done deal.

Vice versa, one could probably transplant some good ideas from biomedical disciplines to other fields. The concepts are similar and the manifestations are different. Obviously the consequences are different, because in medicine it is about lives and people dying because of suboptimal information.

Evidence-ish-Based Medicine

Topol: The wide-angle lens that you have applied is important. It is much more than medicine, and I give you a lot of credit for identifying these common threads.

The problem we have in medicine, though, is this evidence basis, which as you have really proven over the years is so shaky and tenuous. We are trying to make decisions for patients and select treatments and tests and whatnot. What are we going to do since most of the evidence is baseless?

Ioannidis: Some evidence is reliable. There is a gradient. We have very strong evidence for some treatments, interventions, and policies and we need to do something because of it. If we don't, it would be really stupid.

This is not just for interventions but for risk factors. Even in observational epidemiology, no one would deny that smoking is horrible and is going to kill 1 billion people unless we get rid of it. We don't need randomized trials to prove that.

But, of course, there is the other end of the gradient where there is a lot of unreliable evidence. A lot of evidence is very tenuous. We need to train people to understand what the limitations are, what the caveats are, how much they can trust or distrust what they read or what they see, and what they are being called to do. Then make them ask for better evidence.

There is no reason why we should continue to live with suboptimal evidence. Clinicians and clinical researchers should be at the forefront because they realize on a daily basis that they don't have evidence they can trust. They can create questions to try to get the type of evidence they need.

Thoughts on PREDIMED

Topol: This brings up something that just happened. One area that you have tackled is nutritional science. The Mediterranean diet was studied in PREDIMED, the largest randomized trial of a diet with hard outcomes. It was published in 2013 in the New England Journal of Medicine, and now NEJM has retracted it and republished it[4] on the same day. It had all sorts of irregularities. What is your take on this? It is right up your alley in terms of flawed science.

Ioannidis: Nutrition is clearly a mess, and I have long advocated that we can fix some of that mess by running large-scale, long-term, randomized trials with clinical endpoints. PREDIMED was a trial that tried to do that. It was pretty much the exception compared with all of that irreproducible mess of nutritional epidemiology. I was very happy to see it published. I was very excited that at last we are making some progress.

But unfortunately, PREDIMED seemed to take the path of observational epidemiology in publishing zillions of papers with results that were far more tenuous, and I think what we saw in the retraction was a signal that the data had major flaws. Clearly, the retraction was the right thing to do. However, even after the retraction, I don't feel that we have seen the whole story.

I think that the problem detected by statistical analysis was that the baseline characteristics were too similar. The correction that led to the re-publication does not explain away the fact that this cannot happen by chance; even if a whole village was randomized as an entity instead of on an individual basis, or some couples were randomized together rather than as individuals, that should not have produced the pattern that was detected by testing the baseline characteristics.
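The anomaly described here can be illustrated with a small simulation. The following is a hedged sketch in Python (NumPy and SciPy assumed), with entirely made-up numbers, not the method actually applied to PREDIMED: under genuine individual-level randomization, p-values from comparing baseline variables between two arms are roughly uniform on [0, 1], so a large excess of p-values near 1 signals arms that are implausibly well matched.

```python
# Hypothetical sketch (not the actual PREDIMED re-analysis). All counts
# and distributions below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_VARS, N_PER_ARM = 200, 300  # hypothetical baseline variables / patients per arm

def baseline_pvalues(overmatched):
    """Return t-test p-values for N_VARS simulated baseline variables."""
    pvals = []
    for _ in range(N_VARS):
        a = rng.normal(0.0, 1.0, N_PER_ARM)
        b = rng.normal(0.0, 1.0, N_PER_ARM)
        if overmatched:
            # Force the two arm means to coincide, mimicking baseline
            # characteristics that are "too similar" to arise by chance.
            pooled = np.concatenate([a, b]).mean()
            a = a - a.mean() + pooled
            b = b - b.mean() + pooled
        pvals.append(stats.ttest_ind(a, b).pvalue)
    return np.array(pvals)

honest = baseline_pvalues(False)
suspect = baseline_pvalues(True)
# If p-values are uniform, about 10% should exceed 0.9.
print("genuine randomization, fraction of p > 0.9:", round(float((honest > 0.9).mean()), 2))
print("over-matched arms,     fraction of p > 0.9:", round(float((suspect > 0.9).mean()), 2))
```

Under proper randomization the first fraction hovers around 0.10, while forcing the arm means to match pushes nearly every p-value toward 1; this is the kind of anomaly that baseline-distribution checks of published trials are designed to flag.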

My strong belief is that PREDIMED is a seriously flawed trial. I cannot trust it any longer. I love olive oil. But I'm sorry—I cannot trust it. I think there are major problems beyond the retraction. We are looking at some of that and hopefully we will publish some evidence showing that there are deeper problems than that.

Topol: That is really important because I have been influenced by prior studies, like the Lyon Heart study,[5] a smaller trial, albeit for secondary prevention, that was fairly well done. But this is why it is such an opportune time to talk to you. A very high-profile journal, NEJM, retracts and republishes an article on the same day. Something is wrong with our system of evidence, right?

Ioannidis: Clearly, and I think that just republishing a trial with seemingly the same results is not going to fix it. In the case of PREDIMED, I would argue that one would have to obtain all of their old data—not the clean data, the raw data—before arbitration for an independent committee to analyze.

If this were to happen, my bet would be that the effect sizes would shrink or even go away. I would hate to see that. I would like to bet against my own prediction. But there are some very serious problems when we trust trials that have no transparency. They have no openness. They are not willing to share. They are not willing to go through re-analysis. They are not willing to have some independent scrutiny on what is going on. This is still [true for] the majority of randomized trials being published—in NEJM and in other journals as well.

Topol: Wouldn't you have thought that the editors of NEJM, particularly due to this unprecedented thing, would have raked over these data and raked over the investigators as to getting to transparency and truth?

Ioannidis: I would have hoped so, and I still hope that they will allow some further probing into this trial. It would be a lost opportunity if we don't learn more because I think it is just the tip of the iceberg. Far more is going on and, in a way, PREDIMED may be the most honest compared with other trials that may be less honest.

Intellectual Conflict of Interest

Topol: That is saying a lot right there.

You have emphasized in some of your writings the intellectual conflict of interest. I think that is important. For the most part, people don't really understand bias and the fact that so many careers are tagged to a particular belief system and pursuit. One critique of that is, "John's role is to be the take-down artist and that is an intellectual conflict." How do you respond to that charge?

Ioannidis: Yes, I think that I am biased. I think this is unavoidable and people should take that for granted when they read my work and then when they read other scientists' work. We all have some priors, and sometimes it is possible to track these priors based on what we have published.

I don't think it is wrong to have opinions or hypotheses. I don't think it is wrong even to have beliefs. To be honest, when I launch a new project, I try to be as open as possible to all types of outcomes. If anything, my biases are more towards getting nonsignificant results. If I get significant results, even if it is without biases, I have to ask myself, "Why did I get that? Could it be that I was wrong? Could it be that I need to go back and recheck the process and find some errors?" Sometimes, I have found errors in the process, hopefully early enough before publishing.

What makes a scientist is an acknowledgement that he or she can be biased. We have to watch out for that possibility in whatever we do.

Preprints

Topol: That is really a terrific answer. One of the things I was surprised about, because you usually come out on the negative side of almost everything, is preprints. You are pretty positive on preprints.[6] Tell us why that is the case.

Ioannidis: Preprints are one opportunity to disseminate research broadly and in an earlier fashion, and to open that research to criticism at an early stage to the entire scientific community. One might argue, "Well, there is no peer review here." I am a strong supporter of the need for peer review, but peer review is suboptimal. Of papers that get submitted, it probably substantially improves about 20%, makes about 5% worse, and 75% probably don't change much other than just linguistically or stylistically.

If we could have a system where information is available early on to the entire scientific community to scrutinize, comment on, and to make suggestions for improvement before we have the "definitive" paper, I think this is a good thing. People just need to realize that this is just early dissemination of information and it has to be taken with an extra level of caution.

Topol: I am a big fan but the only concern I've had is that some disciplines, particularly in artificial intelligence, now consider this as the final submission. Many papers are being submitted without any intent of trying to go through the peer-review process—not that peer review is so great, but at least there is another layer of independent assessment.

Can We Get This on Track?

Topol: I think it is remarkable that you have taken on the role of the conscience of the field, and your work has been so impactful—the number-one article cited in PLoS Medicine and so many other journals as well. Now that you have exposed the problems, where do you go from here? Do more of the same? How do you get this thing on track?

Ioannidis: My wish is not to expose problems. Obviously, there are lots of problems, so it is not a big deal to point out another one, but there is no end to them. My wish is to try to fix problems. I want to make sure that the work I do and the work that others do who work with me takes the direction towards solutions rather than just identifying the issues.

Much of the work I have been doing at the Meta-Research Innovation Center at Stanford over the past 4 years has focused on identifying solutions. It may not be easy to document solutions and find evidence to support them. Much like interventions in medicine or any other specialty, we need evidence about proposed solutions. One may come up with lots of ideas, but some may be horrible, some may be neutral and not really make any difference, and some may work.

The good thing is that scientists in general want to get science into better shape. I don't think that people want to hide things under the carpet; so many scientific communities do come up with solutions, implement them, test them out, and see major improvements in the credibility and transparency of their work. It is an issue of making sense of the options, prioritizing them, testing them out, getting rid of the false leads, and making further progress.

One-percent improvement because of adopting a better scientific process across science is tremendous progress. It could translate to tens of millions of lives saved.

Topol: Wow, terrific point. I noted that algorithms using artificial intelligence are now being used to screen papers' statistics. This may be one of the many ways we can fix these issues.

John, I want to thank you—not just for this interview, but for the bigger role you have played in medicine. You have taught us so much. You have really had a fantastic effect, like a wake-up call, and it has not just been one time; it is all the time. Every time I see something that is seriously flawed or possibly flawed, I think about you. Thank you for all of your effort and your continued pursuit of evidence that is real, and for excellence in research. Good luck, and continued success to you and your colleagues at Stanford.

Ioannidis: Thank you, Eric, it was a great pleasure to talk with you, and I hope we will have more good news next time.

Topol: Excellent. Thank you for joining us and thank all of you at Medscape and our audience for joining us in this series of some of the most interesting people in all of medicine.
