Clinical Trials: A 250-Year-Old Interpretation

Henry R. Black, MD; George A. Diamond, MD

November 13, 2013

A Blow to Evidence-Based Medicine?

Dr. Black: Does this mean you think that this worshipping of what has been called evidence-based medicine might be a little premature, or maybe even incorrect?

Dr. Diamond: It is overly optimistic. There is no question that it is a desirable goal, but I would guess that 90% of our decisions are made in the absence of evidence. They are based on consensus opinion and on community standard, not on firm observational evidence.

Even when we have observational evidence, it's often biased. Observational registries are being used more often because clinical trials cost so much; registries come at much lower cost and require far fewer resources. But the problem is that they are subject to verification bias.

If we believe that some new treatment works -- let's say a new stent for use in percutaneous coronary intervention procedures -- then we are going to refer most of our patients to that treatment on the basis of that belief. If we always refer patients to the new treatment, then the observational registry would be grossly biased in favor of the stent, and you wouldn't learn anything from analyzing those data.
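
To make the referral mechanism concrete, here is a minimal simulation sketch in Python (all numbers are hypothetical and chosen only for illustration, not drawn from any real registry). It assumes the new stent has no true effect and that believers preferentially refer lower-risk patients to it; the naive registry comparison still favors the stent.

# Minimal simulation of belief-driven referral bias in an observational registry.
# Hypothetical numbers; the point is the mechanism, not the magnitudes.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Baseline risk of a bad outcome varies across patients.
baseline_risk = rng.uniform(0.02, 0.30, size=n)

# Assume the new stent has NO true effect on outcomes.
# Physicians who believe in it preferentially refer lower-risk patients to it.
p_new_stent = np.clip(0.9 - 2.0 * baseline_risk, 0.05, 0.95)
gets_new_stent = rng.random(n) < p_new_stent

# Outcomes depend only on baseline risk (true treatment effect = 0).
bad_outcome = rng.random(n) < baseline_risk

print(f"Event rate with new stent:   {bad_outcome[gets_new_stent].mean():.3f}")
print(f"Event rate with old therapy: {bad_outcome[~gets_new_stent].mean():.3f}")
# The registry 'favors' the new stent even though it does nothing, because
# belief-driven referral confounded treatment choice with baseline risk.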

Dr. Black: I am seeing the word "Bayesian" used more and more in analyses of large databases, and I am very concerned about some of the conclusions being drawn. Whether they are Bayesian or non-Bayesian, they just don't make sense. They don't fit what someone who was part of a consensus would say, and it's very disturbing.

There have been some recent attempts to look at relationships in patients with kidney disease where, yes, you can draw a nice graph, but the logic is incorrect. For example, in the recent study of mortality in people with chronic kidney disease, a systolic pressure of 120 mm Hg carried the same risk as a systolic pressure of 180 mm Hg.[2] I don't think that's right, and I don't think anyone who has really dealt with those patients would agree with that finding.

We have to be much more careful (and thanks to you, I feel justified in saying this) in accepting a lot of what people tell us is evidence. I think it's often incorrect.

Dr. Diamond: Your intuition is very correct on that point, as is your statement that you "don't think that's right." As a Bayesian, I would simply ask you to quantify it. To what degree do you not think it's right? Do you give it only a 5% chance of being right, or a 45% chance? Then incorporate that prior belief into the analysis of the same data.

If every investigator or study group reporting a clinical trial also reported their prior beliefs, then we could interpret their analysis in light of that prior. When there are differences of opinion, it is very likely that those differences don't relate to the interpretation of the observations; they relate to differences in our beliefs about the prior likelihood that the hypothesis was true.
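
The arithmetic behind this is just Bayes' rule: posterior odds equal prior odds multiplied by the Bayes factor that the data supply. A minimal Python sketch (the Bayes factor of 4 is hypothetical, chosen only for illustration) shows how the same evidence moves a 5% prior and a 45% prior to very different posteriors:

# The same data (summarized as a Bayes factor), combined with different prior
# beliefs, yield different posterior probabilities that the hypothesis is true.

def posterior_probability(prior_prob: float, bayes_factor: float) -> float:
    """Update P(hypothesis is true) via posterior odds = prior odds * Bayes factor."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

bayes_factor = 4.0  # hypothetical: the data are 4 times more likely if the hypothesis is true

for prior in (0.05, 0.45):
    print(f"prior = {prior:.0%} -> posterior = {posterior_probability(prior, bayes_factor):.0%}")
# prior = 5%  -> posterior = 17%
# prior = 45% -> posterior = 77%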

Dr. Black: It sounds as if that kind of information should be collected before the trial starts.

George, I appreciate you sharing this information with us. I find it very persuasive. I am becoming concerned, even though I'm a trialist, about what is happening with so-called evidence-based medicine. To throw away expert consensus as not being important is a very serious mistake. Thank you very much.
