Novel Oral Anticoagulants vs Warfarin: The Truth is Relative

John Mandrola


December 18, 2013

The makers of novel anticoagulant (NOAC) drugs have done well. A neutral observer might think these drugs are the next penicillin. The ads are everywhere, no medium spared. Influential thought leaders are out in force, exerting their influence. Compared with warfarin, novel anticoagulants have been sold as both superior and more convenient—and oh, how Americans love easy.

The problem, of course, is that when the free samples run out, patients and third-party payers are left asking the question: Are these drugs worth the added expense?

The answer depends on how you define value and superiority.

I had originally set out in this post to explain how two recently published meta-analyses of novel anticoagulant trials had once and for all demonstrated the drugs' superior safety and efficacy compared with warfarin.

But that is not what I found. Not at all. Rather, I made a discovery:

In the measures that matter for patients with atrial fibrillation, hard outcomes like stroke, bleeding, and mortality, dabigatran (Pradaxa, Boehringer Ingelheim), rivaroxaban (Xarelto, Bayer Pharma/Janssen Pharmaceuticals), apixaban (Eliquis, Pfizer/Bristol-Myers Squibb), and edoxaban (Lixiana, Daiichi-Sankyo) perform almost identically to warfarin. Yet the drugs are priced and promoted as if they are special, more valuable.

Not only do I intend to prove that NOAC drugs are clinically equivalent to warfarin, I hope to convince you that the simple math that follows could be used to improve the quality of all evidence-based medical decisions.

The story begins with two recent meta-analyses:

Dr Christian Ruff (Brigham and Women's Hospital, Boston, MA) and colleagues pooled the four phase 3 randomized clinical trials of novel anticoagulants vs warfarin in patients with nonvalvular AF. Publishing in the Lancet (and summarized on heartwire), these researchers report significant reductions in the relative risks of stroke, intracranial hemorrhage (ICH), and mortality. The authors emphasize a halving of the relative risk of ICH.

Dr Saurav Chatterjee (Brown University, Providence, RI) and colleagues studied the risk of ICH in AF patients treated with either novel anticoagulants or warfarin. They used the phase 3 randomized clinical trials that compared the three FDA-approved novel anticoagulants with warfarin (edoxaban is still investigational). Publishing in JAMA Neurology, they also reported that novel anticoagulant therapy reduced the relative risk of ICH by 50%.

The key word is relative:

It is true—not a lie—that, relative to warfarin, the novel anticoagulants looked favorable when judged by the small fraction of patients who actually had strokes or ICH. But that is not a useful way to explain the trade-offs to a patient in the exam room, and it is not a useful way for doctors to interpret clinical evidence.

An AF patient who has accepted the net benefits of anticoagulation (an important decision in and of itself) wants to know something simple: what is the risk of an event on a novel anticoagulant vs warfarin? That's how they judge value. It's also how payers judge value.

Here, it is critical to look at absolute, not relative, risks and benefits of each drug, because most AF patients treated with either drug never experience an event at all.

Absolute numbers are truth:

According to both meta-analyses, the most significant relative risk reduction was observed for ICH. Let's look at the raw numbers from the JAMA Neurology paper. (Numbers from the Lancet paper and this 2012 meta-analysis are nearly identical.)

From Figure 1: There were 31 830 patients treated with NOAC drugs and 25 661 treated with warfarin. There were 186 ICH events in the NOAC group and 317 in the warfarin group. The absolute risk for ICH was 0.58% with NOAC drugs and 1.24% with warfarin. (There were 131 fewer ICH events in the NOAC group, but because the two groups differ in size, raw counts alone are not directly comparable; the absolute risks are what matter.) The absolute difference between the two groups was a mere 0.65%. Said another way: for roughly 153 of every 154 patients treated, there was no difference between NOAC drugs and warfarin.
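The author's subtraction and division can be sketched in a few lines of Python, using the raw counts quoted above from the JAMA Neurology figure. (The exact number needed to treat depends on where one rounds; carrying full precision, it lands near 154.)

```python
# Absolute-risk arithmetic for the ICH comparison,
# using the raw counts from Figure 1 of the JAMA Neurology meta-analysis.
noac_events, noac_n = 186, 31830   # ICH events / patients on NOAC drugs
warf_events, warf_n = 317, 25661   # ICH events / patients on warfarin

noac_risk = noac_events / noac_n   # absolute risk on NOACs, ~0.58%
warf_risk = warf_events / warf_n   # absolute risk on warfarin, ~1.24%
arr = warf_risk - noac_risk        # absolute risk reduction, ~0.65%
nnt = 1 / arr                      # number needed to treat, ~154

print(f"NOAC ICH risk:       {noac_risk:.2%}")
print(f"Warfarin ICH risk:   {warf_risk:.2%}")
print(f"Absolute difference: {arr:.2%}")
print(f"NNT to prevent one ICH: {nnt:.0f}")
```

The same three steps — divide events by patients in each arm, subtract, invert — work for any trial that reports raw counts.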

That means we can tell an AF patient similar to the nearly 60 000 enrolled in these randomized clinical trials that he or she has a 99.4% chance of not having an ICH on a NOAC drug and a 98.8% chance of not having one on warfarin.

Is this clinically superior?

You don't believe me yet. I know; this discovery had me running around like Archimedes, too. Let's perform the same simple math on the stroke-prevention numbers.

From Figure 1 of the Lancet meta-analysis: There were 29 312 patients treated with NOAC drugs and 29 229 patients treated with warfarin. There were 911 stroke or systemic-embolism events in the NOAC group and 1107 in the warfarin group. The absolute risk of an event was 3.1% on a NOAC drug and 3.8% on warfarin. The reduction in absolute risk was 0.7%. In this case, roughly 146 of every 147 patients treated with a NOAC drug received no benefit over warfarin. Again, our AF patient has a 96.9% chance of not having an embolic event on a NOAC drug and a 96.2% chance of not having one on warfarin.
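The stroke-prevention arithmetic is the same calculation wrapped in a small helper (a minimal sketch with the Lancet Figure 1 counts quoted above; the helper name is mine, not from either paper):

```python
def absolute_comparison(events_a, n_a, events_b, n_b):
    """Return (risk_a, risk_b, absolute risk reduction, NNT) for two trial arms."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    arr = risk_b - risk_a            # absolute risk reduction of arm A vs arm B
    return risk_a, risk_b, arr, 1 / arr

# Stroke or systemic embolism, Figure 1 of the Lancet meta-analysis
noac_risk, warf_risk, arr, nnt = absolute_comparison(911, 29312, 1107, 29229)

print(f"Event-free on NOAC:     {1 - noac_risk:.1%}")  # ~96.9%
print(f"Event-free on warfarin: {1 - warf_risk:.1%}")  # ~96.2%
print(f"Absolute reduction:     {arr:.1%}")            # ~0.7%
```

Carrying full precision, the number needed to treat comes out near 147; rounding the risks first gives slightly different figures, which is why published back-of-envelope NNTs vary by a few patients.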

When asked to comment, Dr Chatterjee noted, "In a patient at high risk of ICH, an option of possibly cutting the risk by half will carry definite significance, and future research endeavors should be directed at identifying such patients." Dr Ruff concurred and cautioned against focusing on a single outcome. "What matters is what happens to patients overall. NOACs offer the potential of a more effective and safer anticoagulation option with reductions in stroke, ICH, and mortality. A net benefit would combine all of those outcomes."

I agree with Dr Chatterjee that ICH is perhaps the most serious complication of anticoagulation, and reducing it is important. But I reiterate: for the patient who is trying to decide whether to pay up to 50 times more for a drug purported to prevent devastating brain bleeding, the fact that there is a greater than 99% chance of no incremental benefit is central to decision making. My point in saying it that way is to be clear. When only relative risk reductions are emphasized in scientific writing, it would be easy to get the impression that one has a 50% lower risk of ICH on a NOAC drug. That's not the case. I live in the real world. Believe me: relative risk reductions confuse caregivers and patients alike.

Dr Ruff's emphasis on net clinical benefit is well founded. Patients with AF and risk factors for stroke are often burdened with other medical problems, like hypertension, diabetes, arthritis, immobility, dementia, and vascular disease. AF is only one of their problems. The trade-off of stroke reduction with an anticoagulant is accepting the risk of bleeding. In the Lancet meta-analysis, the risk of major bleeding was not significantly different, but the risk of gastrointestinal bleeding was higher with NOAC drugs by an absolute 0.5%. Though it is true a brain bleed is worse than a GI bleed, the larger point remains: less than a 1% net difference.


The two approaches to anticoagulation in patients with AF have been studied head-to-head in thousands of patients in trials that measured hard outcomes. Strokes, bleeds, and deaths are easy to count. Division is easy. So is subtraction.

In the outcomes that matter to the patient who sits across from us, the two classes of drugs perform nearly identically—that is, if you count greater than 99% the same.

This doesn't mean novel anticoagulants are bad drugs or that I recommend stopping them. It simply means they are clinically equivalent to warfarin. And, therefore, at the current premium, these drugs are grossly overvalued.

To be fair, NOAC drugs have some practical advantages, like convenience, lack of dietary interactions, and fewer drug-drug interactions. And not all patients do well with warfarin. For these patients, NOAC drugs may be an alternative. (See footnote.) What's more, if one is willing to pay for convenience and absolute differences of less than 1%, then that is his or her choice.

The larger message:

This is not just an important story about atrial-fibrillation therapy. The number-needed-to-treat (NNT) message extends to all evidence-based medical decision making. To achieve the highest-quality decisions, caregivers should understand and communicate absolute risks and benefits. Journal editors should look askance at studies that emphasize relative risk reductions.

I challenge you to apply this method to all clinical studies. Such raw data are usually presented in the first or second table of published studies. All you need is a calculator and the strength to ignore the hype.



Some experts have even suggested trouble with warfarin may be predictable. Dr Gregory Lip (University of Birmingham, UK) and colleagues have validated a simple scoring system (the SAMe-TT2R2 score) that uses easily measured clinical factors and might predict either good or bad INR control. A patient with a high SAMe-TT2R2 score might warrant additional interventions, including consideration of a novel anticoagulant.

