COMMENTARY

In Defense of Digoxin -- No, It's Not a Killer Drug

John M. Mandrola, MD

July 26, 2019

Medical topics worth writing about transcend the details of a drug or device and point to a broader lesson. The medical establishment's turn against digoxin illustrates this perfectly.

My senior colleague uses a poison symbol for digoxin on his teaching slides. Stanford electrophysiologist Mintu Turakhia wrote in a leading journal that "perhaps, it's time we leave foxglove in the garden and in the history books—and out of the medicine cabinet."[1] Yale cardiologist Harlan Krumholz wrote in a recent tweet that "the major benefit of dig may [be] achieved by not using it."

Not only do I disagree with this sentiment concerning one specific drug, but I see a broader lesson on the dangers of being swayed by observational studies.

The core flaw of any observational study, or meta-analysis of observational studies of a therapy, is that a clinician chose to use (or not use) the drug or device. Multiple factors influenced that choice. Some of these factors (patient age, weight, ejection fraction) can be put in a database and adjusted for; other deciding factors (frailty, depression, lack of family support) cannot. The beauty of a randomized controlled trial (RCT) is that treatment assignment is random, which balances both measured and unmeasured confounding factors.
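To make the point concrete, here is a minimal simulation of my own (not from the study, with made-up numbers): a truly neutral drug is prescribed preferentially to sicker patients, and sickness alone drives mortality. The naive observational comparison shows apparent harm; the randomized comparison does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured severity of illness: sicker patients are more likely to
# receive the drug and more likely to die; the drug itself does nothing.
severity = rng.normal(size=n)

# Observational world: clinicians give the drug preferentially to sicker patients.
obs_drug = rng.random(n) < 1 / (1 + np.exp(-severity))

# RCT world: treatment is assigned by coin flip, independent of severity.
rct_drug = rng.random(n) < 0.5

# Outcome depends only on severity; the drug is truly neutral.
death = rng.random(n) < 1 / (1 + np.exp(-(severity - 1.5)))

def risk_ratio(treated):
    """Crude risk of death in treated vs untreated patients."""
    return death[treated].mean() / death[~treated].mean()

print(f"Observational risk ratio: {risk_ratio(obs_drug):.2f}")  # well above 1.0
print(f"Randomized risk ratio:    {risk_ratio(rct_drug):.2f}")  # about 1.0
```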

The New Study

A new and clever analysis[2] of the DIG trial—the only randomized outcome trial of digoxin—strongly suggests that the mountain of observational data on digoxin use is too biased to yield any firm conclusions.

The original DIG trial assigned 6800 patients with systolic heart failure and sinus rhythm to digoxin or placebo.[3] The primary endpoint, all-cause mortality, was nearly identical in both groups. Digoxin-treated patients had a 28% lower rate of admission to the hospital for heart failure (P < .001).

The goal of this new analysis was to test the idea that prescription bias[4] explains why many observational studies associate digoxin with increased mortality while the DIG trial showed no effect. Prescription bias is a form of selection bias—for example, sicker patients get digoxin, and it is their being sicker, not the digoxin, that "causes" their worse outcomes.

The authors took advantage of the fact that nearly half (44%) of the patients enrolled in the DIG trial were on digoxin before the trial started. Once enrolled, these patients were randomized to continued use of digoxin or placebo.

Now the authors had two tests of digoxin: the main trial, in which drug use was random, and an observational cohort, in which a clinician chose to use or not use the drug. If there was no bias in the use of digoxin before the trial, or if statistical adjustments truly evened out baseline differences, then the effect of digoxin should be neutral, as it was in the main trial.

It was not neutral. Not even close.

Three Findings

The 44% of patients enrolled in the main trial who were on digoxin before randomization had much higher rates of heart failure symptoms, signs, and medication use compared with the 56% of patients not previously treated with digoxin.

Not surprisingly, then, mortality was significantly higher in the patients on digoxin vs those not on digoxin before randomization (40% vs 31%; hazard ratio [HR], 1.36; 95% confidence interval [CI], 1.25 - 1.47; P < .001) regardless of ultimate treatment assignment—placebo or active arm. Crucially, statistical adjustments for baseline factors in the pretreated patients reduced, but did not eliminate, their higher hazard ratio for death.

Further evidence for prescription bias came when the authors analyzed heart failure hospitalizations. In the randomized trial, the digoxin-treated group had significantly fewer admissions for heart failure, but in the observational comparison, the opposite was seen: Patients previously on digoxin had higher hospital admissions for heart failure than those not previously on digoxin (adjusted HR, 1.47; 95% CI, 1.33 - 1.61; P < .001).

Comments

These data provide strong evidence for digoxin prescription bias: Sicker patients receive the drug. Despite adjustments, mortality and heart failure admissions were higher in patients whose clinicians chose to prescribe digoxin, even if they landed in the placebo arm of the trial. 

I've seen few better examples of how observational studies trick us. Namely, important prognostic variables that influence a clinician's decision to use digoxin remain unmeasured. Turakhia said it well: "Ultimately, no amount of statistical machination, however brilliant, can overcome the problem of unidentified confounders."[1]
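The same toy setup sketched above shows why adjustment falls short. Suppose prescribing depends on one recorded factor (call it an ejection fraction class) and one unrecorded factor (call it frailty); both names are hypothetical labels for the simulation, not variables from the DIG dataset. "Adjusting" by stratifying on the recorded factor narrows the apparent harm of a neutral drug but does not remove it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Two drivers of prescribing: one measured, one never recorded.
measured = rng.integers(0, 2, n)     # e.g., an EF class captured in the database
unmeasured = rng.normal(size=n)      # e.g., frailty, absent from the database

# The (neutral) drug goes more often to sicker patients on both dimensions.
drug = rng.random(n) < 1 / (1 + np.exp(-(measured + unmeasured)))

# Mortality also depends on both factors, but not on the drug.
death = rng.random(n) < 1 / (1 + np.exp(-(measured + unmeasured - 2.0)))

def risk_ratio(mask):
    """Risk of death in drug vs no-drug patients within a subgroup."""
    return death[drug & mask].mean() / death[~drug & mask].mean()

crude = risk_ratio(np.ones(n, dtype=bool))
# "Adjustment" by stratifying on the measured factor only.
adjusted = np.mean([risk_ratio(measured == k) for k in (0, 1)])

print(f"Crude risk ratio:                  {crude:.2f}")    # clearly above 1.0
print(f"Measured-only adjusted risk ratio: {adjusted:.2f}")  # lower, but still above 1.0
```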

The authors of the latest analysis made another striking observation in their discussion. The higher adjusted HR for mortality (about 1.22) for their observational comparison of prior vs no prior digoxin is nearly identical to the higher HR for death in numerous published meta-analyses of digoxin observational studies.[5,6,7,8,9] It's almost as if the degree of prescription bias is the same in all the nonrandomized studies of digoxin—whether it's used for atrial fibrillation (AF) or heart failure.

I have more. Three divergent post hoc (hence observational) analyses of the AFFIRM trial[10] further illustrate the unreliability of digoxin observational studies. AFFIRM compared the strategy of rate control to rhythm control in patients with AF. Digoxin use was left to clinicians' discretion. One analysis of AFFIRM found that digoxin use was associated with increased mortality,[11] another found no association,[12] and yet another found that digoxin was associated with lower mortality.[13]

Some caveats: The DIG trial excluded patients with AF and enrolled patients in the 1990s, before the era of β-blocker use. There are no RCTs of digoxin use in patients with AF. And a post hoc analysis of DIG, centering on the subset of patients who had serum drug concentrations measured, found that when levels ranged between 0.5 and 0.9 ng/mL, digoxin use was associated with reduced death and hospital admission from heart failure.[14] Thus, for the DIG trial to have external validity, cautious use of the drug plus monitoring of levels is required.

These caveats, plus the fact that digoxin can be used for AF and/or heart failure, complicate matters. While I agree that we have better first-line drugs for both conditions, digoxin helps selected patients.

The problem with the current anti-digoxin climate is that clinicians may be afraid to use a potentially useful drug. Consider that two older RCTs[15,16] demonstrated that withdrawal of digoxin in patients with heart failure increased the risk for worsening symptoms.

The authors of this elegant study have done clinicians two favors. The first was to expose prescription bias as the fatal flaw in digoxin observational studies. The second, far more important, was to reiterate the timeless lesson of science—correlation does not equal causation.

If you suggest we not use digoxin because it causes harm, you need to show me more than biased observational studies. Bring me an RCT.
