When Do We Need Randomized Controlled Trials, and When Can We Manage Without Them?

Vinay Prasad, MD, MPH


August 07, 2018

Recently I watched a debate unfold on Twitter over whether we should run a randomized controlled trial (RCT) of the inferior vena cava (IVC) filter for patients with a venous thromboembolism (VTE) and a contraindication to anticoagulation.[1] Like witnessing a car accident, I was horrified but could not turn away. The debate made me realize that we have to clarify what RCTs are for and when you can get away without them. There are two basic prerequisites.

First, randomized trials are for interventions that are thought, or hoped, to offer benefits. We randomize participants to interventions that might leave them better off. Every so often a doctor says, "We didn't need a randomized trial to know that smoking is harmful." Yes, sure; bravo. That's true. We also didn't need a randomized trial to know that drinking a glass of battery acid, or being shot, is not good for you. I didn't need a randomized trial to know that I would prefer not to fall off my bicycle or to be kicked by a horse. But, spoiler alert: We don't conduct randomized trials to demonstrate that some actions result in net harm. We run them when we think, but do not know, that an intervention will leave us better off.

Second, randomized trials are run for interventions that, at best, offer modest to medium benefits. If an intervention is overwhelmingly beneficial, as unambiguous as flipping a light switch, we don't ask for RCTs. Consider wearing a parachute when jumping out of an airplane: an intervention that turns a nearly 100% chance of death into a nearly 100% chance of survival. No one would ask for an RCT.

The reason why we talk so much about RCTs in medicine is because medicine lives in the "RCT zone." Most of what we do involves offering interventions thought to benefit our patients, and most of what we offer confers a modest benefit, at best.

Data support the idea that medical treatments rarely have large benefits. Researchers studied every medical practice in the Cochrane database and found that only 1 in 80,000 practices had a very large, consistent benefit on all-cause mortality—extracorporeal membrane oxygenation (ECMO) for neonates.[2]

Moreover, for years, docs have circulated a list of practices with overwhelming benefit but lacking RCTs.[3] Yet the list remains just a few hundred items long. Contrast that with the hundreds of thousands of interventions and practices in biomedicine, and you realize that parachutes are few and far between.

But that does not stop experts from pretending that their "pet intervention" is a parachute. My resident, Michael Hayes, MD, studied articles claiming that a medical intervention was like a parachute. First, only half of these concerned a binary endpoint of colossal significance, such as death or having a child.[4] Many "parachutes" concerned endpoints of lesser importance, like preserving teeth.

Second, about half of the claimed parachute practices had been tested in RCTs, a fairly clear sign that the profession didn't really consider them true parachutes. In these cases, roughly a third of the trials were positive, a third negative, and a third mixed; not exactly a ringing endorsement. In short, parachutes are rare, and the analogy is overused in medicine.


Now let's turn to the IVC filter. In the recent, well-done paper in JAMA Network Open, Turner and colleagues[5] found that the IVC filter was associated with increased risk for death when used in patients with venous thromboembolism and a contraindication to anticoagulation—the one group where guidelines consistently recommend use.

The authors found, after adjusting for a common problem in observational data called immortal time bias, that IVC filters were associated with increased mortality (hazard ratio, 1.18; 95% CI, 1.13-1.22; P < .001). Of course, like all observational studies, this one has limitations. But we have no RCTs to guide us on this question.

In fact, to date there are three randomized trials of IVC filter placement that included at least 100 participants.[6,7,8] None show a mortality benefit, even with extended follow-up. All three excluded patients with a contraindication to anticoagulation. And one trial used an odd methodology, actively screening patients for symptoms of pulmonary embolism rather than letting clots present on their own.[9]

In other words, the available evidence base for IVC filters is lousy. All observational studies have limitations; in this case, the patients who got filters may differ from those who did not, and those differences, rather than the filter itself, may be responsible for any findings. We may not be able to adjust for all of these factors. Moreover, the relevant RCTs are small and limited, and none show a clear benefit for filters in any setting. And there are no randomized trials in patients with a contraindication to anticoagulation, a common reason why filters are placed.

So if you have a device that, at best, has a modest benefit, has no reliable testing to date, and has conflicting observational data, what should you do? There is only one right answer: TEST IT IN A RANDOMIZED TRIAL. And yet, the author of an accompanying editorial, Eric Secemsky, MD, "does not believe a randomized trial is feasible."[10] In comments to Medscape Medical News, he asked, "Which clinician would risk a patient who had had a large clot being randomized to nothing?"[10]

Which clinician? The answer is, any clinician who wants to know whether this invasive, costly, potentially harmful device actually makes patients better off. If one truly accepted the argument that we cannot test interventions that might have modest effect sizes, it would mean that no one could perform any placebo-controlled randomized trial. Drugs and devices would flood the market, and we would have no idea which, if any, help patients. Frankly, the position is intellectually bankrupt.

I would go as far as saying that this is the core problem of our profession. We become so seduced by our practices that we become unable to test them. The history of medicine is replete with examples of professionals, experts, the best of the best, being similarly reluctant to test their interventions. When randomized trials are eventually performed, against the odds, the proponents often find that the results are not what they expected. In cardiology, there is perhaps no greater example than the CAST trial and, more recently, the ORBITA study.

The bottom line is that we need RCTs for interventions that you hope, but do not know, benefit patients. Interventions with modest effect sizes at best—where your optimism and bias may cloud a clear assessment of the benefit—must be proven in RCTs. The IVC filter falls into this camp, and so does much of the medicine we practice. While I wish we were in the parachute business, they are few and far between.

