The Trial Had a Positive Outcome -- Now What?

David J. Kerr, CBE, MD, DSc, FRCP, FMedSci


November 02, 2016


Hello there. I am David Kerr, professor of cancer medicine at the University of Oxford. I would like to talk not about a specific trial or piece of science, but rather about a very nice review[1] in the New England Journal of Medicine (NEJM) by Stuart Pocock and Gregg Stone, distinguished biostatisticians who have a long history of contributing to clinical trials, particularly in the cardiovascular field.

This is part of a longer ongoing series in the NEJM, "The Changing Face of Clinical Trials." This particular "episode" is about positive trials, and they pose the question: The primary outcome is positive; is that good enough? They walk us, very carefully, through the issues and items that we need to look out for when we interpret positive data. Turning to the summary of the article: did the trial meet its prespecified significance level, usually a P value of .05? A simple binary response (yes or no, it did or it did not) is not good enough.

Clearly, in these days of strongly evidence-based medicine, we must have a much better feel for the quality of the trial data, its power, subset analyses, whether the results are consistent across important subgroups, and so on. Being hung up on a P value of .05 is entirely arbitrary. We know that setting the threshold at .05 means there is a 5% chance of a false-positive result when the treatment has no true effect; that is, we could declare a benefit where none exists. If we are looking for a P value that would give us undeniable evidence that the new treatment is good, we should set the threshold much lower. For example, P = .001 would give us a greater degree of confidence in the benefits of the new therapy.
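That 5% figure is easy to demonstrate with a small simulation (my own illustration, not from the Pocock and Stone article): if we repeatedly run a two-arm trial in which the treatment truly does nothing, about 1 in 20 of those null trials will still cross the .05 threshold, whereas almost none cross .001.

```python
import random
from statistics import NormalDist

def false_positive_rate(alpha, n_trials=20000, n_per_arm=100, seed=1):
    """Fraction of simulated null trials (no real treatment effect)
    whose two-sided z-test P value falls below `alpha`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # Both arms drawn from the same distribution: the null is true.
        a = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        b = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        diff = sum(a) / n_per_arm - sum(b) / n_per_arm
        se = (2 / n_per_arm) ** 0.5          # known sigma = 1 in each arm
        z = diff / se
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        if p < alpha:
            hits += 1
    return hits / n_trials

print(false_positive_rate(0.05))    # close to 0.05
print(false_positive_rate(0.001))   # close to 0.001
```

The simulation recovers the thresholds almost exactly, which is the point: the false-positive rate is whatever we choose alpha to be, and .05 is a convention, not a law of nature.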

It is a great article. It is thoughtfully written and the sort of thing, even to oldsters like myself, that acts as a useful refresher. To the young docs coming through, I recommend that you read it. For those of you who are involved in journal clubs and in thinking through the evidence base, it gives a very nice structured way of looking at trial design, how [data are] reported, and whether we can actually believe [the results]. "Believe" is too strong a word, but we do not want to overinterpret trial results that are primarily positive.

The final section is "Do the Findings Apply to My Patients?" Again, this is the old chestnut of the applicability of a highly selected patient population [to clinical practice], which is the case for almost all clinical trials. There are age cut-offs, biochemical cut-offs, hematologic cut-offs, and so on. Is this trial applicable to my patients? Again, the wider the trial entry and the more the patient demographics link to our own patients, the more comfortable we would be in applying results of that positive trial and potentially changing practice or altering how we deliver treatment.

Have a look at it. It is beautifully written and very clear. All of the examples come from cardiovascular medicine, but it is very easy indeed to see how the same framework could be applied to the interpretation of oncology results.

I'm in a reflective, philosophical mood this week. Thanks for listening, as always. I would be very interested in any of your comments. For the time being, Medscapers, ahoy.

