Many New Cancer Drug Approvals Based on 'Fragile' Data

David J. Kerr, CBE, MD, DSc, FRCP, FMedSci


September 11, 2019

This transcript has been edited for clarity.

I'm David Kerr, professor of cancer medicine from the University of Oxford. I'd like to pick up today on a nice article that was published recently in Lancet Oncology by my old friend, Ian Tannock.[1] You may have seen us [discussing on Medscape] the European Society for Medical Oncology conference highlights from last year. We relatively grumpy old men mumbled, complained, and muttered about how low the bar was being set for the introduction of new cancer medicines in terms of relative efficacy.

Ian's gone a step further in this rather nice paper. He's introduced the concept of a fragility index. This was a new statistical tool to me, and basically it's a measure of statistical stability. For too long, we as a scientific medical community have been utterly obsessed with a P value of .05 as being a statistical god at whose feet we must worship.

The fragility index tells us how many patients would have to change camp, as it were, crossing from responders to nonresponders or from progressors to nonprogressors, in order for the trial to lose statistical significance. Ian and colleagues looked back between 2014 and 2018 and identified 36 randomized trials that led to new drug approvals by the US Food and Drug Administration. Because the methodology only works for a 1:1 randomization, they analyzed a subset of 17 studies. In nine of these 17, they found a fragility index equivalent to 1% of the trial population or less. That means that if only 1% of patients changed from good to bad, from responding to nonresponding, the trial would lose statistical significance. A very small number. But perhaps even more worrying, in five of these 17 trials they found that "the number lost to follow-up was more than the fragility index."
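As a sketch, the calculation described here can be reproduced in a few lines of code. The counts below are invented for illustration and the function name is my own; the idea, following the fragility-index literature, is to flip patients one at a time from nonevent to event in the arm with fewer events, recomputing Fisher's exact test each time, until the P value climbs to .05 or above.

```python
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Number of patients who must switch from nonevent to event
    (in the arm with fewer events) before the two-sided Fisher exact
    P value for the 2x2 table rises to alpha or above."""
    # Work on the arm with fewer events, per the usual definition.
    if events_a > events_b:
        events_a, n_a, events_b, n_b = events_b, n_b, events_a, n_a
    flips = 0
    while events_a <= n_a:
        table = [[events_a, n_a - events_a],
                 [events_b, n_b - events_b]]
        _, p = fisher_exact(table)
        if p >= alpha:
            return flips
        events_a += 1  # one more patient "crosses camp"
        flips += 1
    return flips

# Hypothetical 1:1 trial: 1/100 events vs 10/100 events (P ~ .01).
# Flipping just two patients erases the significance.
print(fragility_index(1, 100, 10, 100))  # prints 2
```

A fragility index of 2 in a 200-patient trial is 1% of the population, exactly the threshold the paper highlights: a nominally significant result that two patients' outcomes could overturn.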

Patients are lost to follow-up for many reasons, but if any unconscious bias were operating, you can see how, just by certain patients being omitted from the final trial analysis, a potentially negative trial could be swung to positive. Such is the fragility of the statistical observation.

I found it an interesting read. It rather reinforced our view that the trials being used to get new cancer medicines onto the streets and into the clinics rest on a rather fragile, narrow evidence base. It's an interesting new tool, and one that regulators should in some way take notice of.

Have a look at it and see what you think. Is this something that we as a clinical community should be considering when we make decisions as to whether—yes or no—we should introduce new drugs into our compendium? Should it in any way inform any discussions we have with the individual patients we see in our clinic? I'd be terribly interested in your thoughts on all of this.

For the time being, Medscapers, ahoy. Thank you.


