Faster, Cheaper, Clinical Trials: Are We There Yet?

Robert A Harrington, MD; Gregg W Stone, MD


December 31, 2014


Something's Gotta Give

Robert A Harrington, MD: I'm Bob Harrington from Stanford University. I am here with Gregg Stone, from Columbia University and the Cardiovascular Research Foundation. We are having a great meeting at the European Society of Cardiology (ESC). The growth of this meeting is impressive, and they are doing many innovative things.

We heard about some terrific trials on the first day. You and I both like trials, but we also recognize that the way we do trials can't continue. They cost too much money. That leads us to consider how we might do clinical trials more effectively and efficiently. There was a lot of talk at this meeting about how to do that.

Gregg W Stone, MD: It's a real problem. We agree that randomized trials are the pinnacle of science. They are the only way to eliminate unmeasured confounders, which is the major issue in nonrandomized studies. There are many reasons for patients to be treated one way or the other, and it is impossible to collect all of them in a case report form.

What have people talked about? The problem with randomized trials is that they are very time consuming and expensive. They are not necessarily generalizable, because only a small percentage of patients (usually the less sick patients) are entered into randomized trials. They are certainly not perfect, but if you have a large-scale, adequately powered, randomized trial, you can usually believe the results. What can we do to make them quicker to enroll, cheaper, and faster? People have come up with two different ideas. One is not to rely on hard clinical end points, but instead to use surrogate end points and biomarkers: for example, lowering cholesterol or controlling hemoglobin A1c instead of reducing mortality.

Dr Harrington: Those were in the news with the proprotein convertase subtilisin-kexin type 9 (PCSK9)-inhibitor cholesterol-lowering drugs.

Surrogate End Point or Intermediate Outcome

Dr Stone: The concept of surrogate end points has been around forever but has fallen into disfavor in the past several years because of some notable discrepancies between the direction of a biomarker and the outcomes of patients. The other option is to do more "comparative-effectiveness research," which is a simple way of saying, "Let's mine large databases and do nonrandomized comparisons to see what we can glean from megadata."

Dr Harrington: Those are two of the hot topics that we hear about in the hallway. Do we really need these big clinical outcome studies? Why aren't intermediate outcomes good enough? For me, one of the great examples was the SOLID-TIMI 52 trial[1] with an interesting agent, an Lp-PLA2 [lipoprotein-associated phospholipase A2] inhibitor (darapladib). It had already struck out in STABILITY,[2] a trial that our group was involved in. Now we have a second population, and again there is no effect. We had a biologic hypothesis that tracks along with epidemiologic evidence, a nice scientific approach to inhibiting something.

Dr Stone: There was even a surrogate end point. It was one of the IBIS trials,[3] and it showed reduction of necrotic core with darapladib, and necrotic core is at the heart of thin-cap fibroatheromas that lead to acute coronary syndromes.

Dr Harrington: So, it's just natural to assume it should have worked?

Dr Stone: It should have worked, but we haven't firmly linked necrotic core volume as a very strong hyperacute surrogate to future acute coronary syndromes, even though there is obviously a relationship. Maybe that is the problem.

Dr Harrington: Isn't part of the issue the terminology we use? I prefer the term "intermediate outcome." When you say "surrogate," you are saying that the marker lies along the causal pathway and that intervening on that marker changes the clinical outcome in a corresponding way. If you think about an intermediate outcome becoming a surrogate by those criteria, very few things rise to the level of a true surrogate.

Dr Stone: Exactly. It's very difficult. There are formal criteria (the Prentice criteria[4] and the Hughes criteria[5]) under which the surrogate must explain 100% of the effect of the intervention. It has to be perfectly mechanistically related.

Dr Harrington: In striving to reduce trial size or complexity, people have wanted us to use some of these intermediate markers. That is a mistake. You can't replace clinical outcomes.

Type 1 and Type 2 Errors in Underpowered Trials

Dr Stone: On the other hand, look at the difficulties that the pharmaceutical industry has been having over the past several years. Trial after trial, the phase 3 trials are negative. Part of the problem is that these trials are often based on phase 2 trials: dose-ranging studies that look for efficacy using clinical end points in underpowered samples. Investigators are looking for reductions in ischemia and bleeding. Type 1 and type 2 errors are rampant, but if they get lucky and see what they want to see, they go ahead to phase 3. Many of those phase 3 trials are going to be negative.
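This statistical point is easy to make concrete with a small simulation. The sketch below is not from the discussion and uses made-up event rates purely for illustration; it runs many hypothetical two-arm trials with binary end points and a simple pooled two-proportion z-test, showing that with no true effect about 1 trial in 20 is still "positive" (type 1 error), while a real but modest effect is usually missed at phase 2 sample sizes (type 2 error).

```python
import math
import random

def trial_is_positive(n_per_arm, p_control, p_treatment, rng):
    """Simulate one two-arm trial with binary end points; return True if a
    two-sided pooled z-test on the event proportions reaches p < 0.05."""
    events_c = sum(rng.random() < p_control for _ in range(n_per_arm))
    events_t = sum(rng.random() < p_treatment for _ in range(n_per_arm))
    p_c, p_t = events_c / n_per_arm, events_t / n_per_arm
    pooled = (events_c + events_t) / (2 * n_per_arm)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    if se == 0:
        return False
    return abs(p_c - p_t) / se > 1.96  # two-sided alpha = 0.05

def positive_rate(n_per_arm, p_control, p_treatment, n_sims=5000, seed=1):
    """Fraction of simulated trials declared 'positive'."""
    rng = random.Random(seed)
    hits = sum(trial_is_positive(n_per_arm, p_control, p_treatment, rng)
               for _ in range(n_sims))
    return hits / n_sims

# No true effect (10% event rate in both arms): roughly 1 in 20 small
# trials still comes up "positive" purely by chance (type 1 error).
print("false-positive rate:", positive_rate(100, 0.10, 0.10))

# A real but modest effect (10% -> 7% event rate) with 100 patients per
# arm: power is low, so most such trials miss it (type 2 error).
print("power at n=100/arm:", positive_rate(100, 0.10, 0.07))
```

A phase 2 program that runs several such underpowered comparisons and advances whichever one looks best is effectively selecting on noise, which is one reason the subsequent phase 3 trial so often fails to confirm the signal.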

Dr Harrington: That is a great point. I have often said over the past few years that we need to conduct more intense biologic experiments early, with a much more sophisticated systems-biology or systems-pharmacology approach to understanding the biology. When you want to assess clinical outcomes, you have to use well-designed, well-powered, sufficiently sized trials. I often ask people who are doing phase 2 trials, "Are you going to go forward if you don't see anything?" They always say, "Yes, because we know it's too small." I say, "Okay, but don't get too excited if you see something, either." If you are willing to continue no matter what the result, you have to apply the same caution to a positive signal.

Dr Stone: It's very different from 30 or 40 years ago, when there was a very rigorous stepwise approach to understanding the science first. You wanted to understand everything about the mechanism of an agent, or even the pathophysiologic process, before you started human experiments and large outcomes trials. Now it's almost the opposite. We want to get to phase 3. We want a positive study. We often don't even understand the mechanism. Some of the large outcomes trials have been positive and have shown a reduction in mortality. For example, ticagrelor [Brilinta, AstraZeneca] in PLATO[6] vs prasugrel [Effient, Lilly/Daiichi-Sankyo] in TRITON-TIMI 38[7]: two potent ADP antagonists, one of which reduced mortality and one of which didn't. Is that the play of chance, or is there a mechanism unique to ticagrelor? Some people say, "Who cares? We reduced mortality. That's what we should be doing."

Dr Harrington: That's an interesting example of a trial that we were involved in. There may be mechanistic reasons, but we just don't understand them yet.

Dr Stone: We may never understand them.

Can We Do Away With Randomization?

Dr Harrington: Let's go to the second topic, which is the notion in this era of big data that we don't need randomization. You made the case that if you are going to compare A and B, you must have randomization to balance the playing field and reduce or eliminate confounders as much as possible. However, many people don't agree with us on this.

Dr Stone: It's a real problem. Let me give you an example. For years we have known of the strong relationship between door-to-balloon time in ST-elevation myocardial infarction (STEMI) reperfusion therapy and mortality. After 10 years of the American Heart Association's (AHA) "Get With the Guidelines" and the American College of Cardiology's (ACC) door-to-balloon initiatives in the United States, we reduced door-to-balloon time by more than 40 minutes, and we saw a reduction in mortality.[8] That was great until about 4 years ago. Over the past 4 years we have continued to reduce door-to-balloon time in the United States by about another 30 minutes, with no change in mortality.

What happened to our observational hypothesis? Did it used to be right, and now it is no longer right? There may have been confounders explaining why patients with longer door-to-balloon times have higher mortality rates. Maybe they have pulmonary edema or ventricular arrhythmias. Perhaps they require more up-front care before they can get to the cath lab. You don't capture all these things in administrative databases. I strongly believe that we have to be very careful with nonrandomized data. We all go in with preconceived notions, and if nonrandomized data show us what we already believe to be true (we have a good hypothesis, and then we see it confirmed), we adopt it.

Dr Harrington: A great study came out on blood transfusions around the time of acute coronary syndrome.[9] The conclusion was that blood transfusion is not so bad after all and that the prior research was wrong. But that's the wrong conclusion. The real conclusion ought to be that we don't know which is correct. We need a randomized trial. That is the only way to settle this.

Dr Stone: We almost had a good randomized trial on that question. We could have a sophisticated discussion about instrumental variables and ways to extract pseudorandomization or Mendelian randomization from nonrandomized data. A fantastic study published in the New England Journal of Medicine[10] looked at the age of transfused bank blood.

It turns out that patients who received older blood, essentially by chance, had higher mortality and more postoperative complications. Because the age of the blood was allocated essentially at random, the study, although not formally randomized, suggested that there is something harmful about receiving old blood.

Dr Harrington: It opened up a whole area of research on the depletion of nitric oxide as blood is stored. It sounds as though we agree on that: if you really want to compare A vs B, you have to do randomization.

Mining Registries and Health Records

You and I are both interested in the Swedish approach. Can we do that in the United States? Can we randomize within a registry to make clinical trials more efficient? Can we get the data efficiently but still have randomization?

Dr Stone: The question is how we can do large-scale randomized trials with meaningful clinical end points (which we have already said we need) less expensively and more quickly. Those issues are related, though not identical. We have to look at where we spend money in randomized trials. Do we need to continue to spend that money, and where can we save? For example, could we use end points, such as mortality, that are relatively simple to adjudicate?

You could even argue that you don't need to adjudicate that. You could probably believe death records. You could believe hospital records. On the other hand, as you and I both know painfully well, if you want to use myocardial infarction, which is actually diagnosed with a biomarker, it can be extraordinarily difficult. You have to look at creatine kinase-myocardial band (CK-MB) and troponin trends, and ECGs in relation to chest pain and the clinical syndrome that brought the patient to the hospital. It is the same with stent thrombosis. If you care about these end points, they don't lend themselves very well to large national registries, whether those in Sweden or those of the ACC, the Society of Thoracic Surgeons, or the National Cardiovascular Data Registry. In large simple trials, mortality, which in some settings is becoming easier to ascertain, is the end point that really matters. Maybe that is what we need to focus on.

Dr Harrington: The other thing that excites me these days is figuring out ways to harness the power of the electronic health record and pull out those data in a way that eliminates the need for monitoring and filling out case-report forms. If you can get your data from the electronic health record, it doesn't obviate the need for adjudication of end points, but it might help us get closer to where we want to be.

Is Simpler Better?

Dr Stone: We spend way too much money on monitoring. It can be tens of millions of dollars in these big trials. If an occasional patient's age is off, or the record of peripheral vascular disease or family history is wrong, that doesn't change the outcome of the trial. Where you don't want to be off is on the clinical end points. And if randomization works, you have to show that it works: that treatment was properly allocated and the groups were balanced.

Dr Harrington: Find the end points, and make sure they are the right ones.

Dr Stone: Exactly, and simple end points are easier and less expensive. But you have to pay sites something, both to motivate them and because there is work involved. You want them to actively screen and identify patients. In that respect, it's timely that we are at the European Society of Cardiology meeting. Clinical research is alive and thriving here. The fact that the Europeans and many other cultures around the world really value clinical research is palpable. It's something that we're losing in the United States.

Dr Harrington: More than 11,000 abstracts were submitted.

Dr Stone: It's pretty unbelievable.

Dr Harrington: It reflects the vibrancy of the meeting, and it has prompted some good discussion here today on how we might improve clinical research.

