Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.
In This Week’s Podcast
For the week ending June 2, 2023, John Mandrola, MD, comments on the following news and feature stories.
First, Some Listener Feedback
Within minutes of the podcast dropping last Friday, an astute listener informed me that I misspoke about the PARADISE MI trial of sacubitril/valsartan. I said it was vs valsartan alone, but that is wrong; it was against ramipril. I knew that, but I got excited. So, thanks for the correction.
The context here was that, other than PARADIGM-HF, sacubitril/valsartan has not impressed in other outcome trials. We should repeat PARADIGM-HF. The US Food and Drug Administration (FDA) should have stuck to its norm of requiring two randomized controlled trials (RCTs).
The big story last week, though, was publication of the ELAN (AY-LON) trial. I called it Ee-Lan, which is another mispronunciation from an American. Sorry to the ELAN authors.
ELAN was a super special RCT, not because of the treatments tested, or the effect size, but the way in which the trial was conducted and presented.
The question of early vs later initiation of oral anticoagulants (OAC) after an atrial fibrillation (AF)-related embolic stroke is an important one. ELAN found that early vs late initiation reduced the composite primary outcome of recurrent ischemic stroke, systemic embolism, major extracranial bleeding, symptomatic intracranial hemorrhage, or vascular death within 30 days. The hazard ratio (HR) was 0.70, a 30% reduction.
But Mandrola, you forgot to tell us if this was a positive trial. What were the confidence interval (CI) and P-value? Was it significant?
That’s the provocative thing. The ELAN authors make no such proclamation. The CI ranged from 0.44 to 1.14. They did no statistical test. There is no P-value.
The conclusions read that the incidence of the primary outcome with early vs late initiation was estimated to be from 56% lower to 14% higher. Period. It’s darn amazing.
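The translation between the hazard-ratio scale and the plain-language percentages in that conclusion is simple arithmetic; here is a minimal sketch in Python, using only the figures quoted above:

```python
# Convert the ELAN hazard ratio and its 95% CI (figures quoted above)
# into the plain-language phrasing the authors used.
hr, ci_low, ci_high = 0.70, 0.44, 1.14

point_reduction = (1 - hr) * 100       # 30% lower
best_case = (1 - ci_low) * 100         # 56% lower
worst_case = (ci_high - 1) * 100       # 14% higher

print(f"Point estimate: {point_reduction:.0f}% reduction, "
      f"compatible with anywhere from {best_case:.0f}% lower "
      f"to {worst_case:.0f}% higher")
```

This is why a CI crossing 1.0 need not be read as "no effect": the same interval contains both a large benefit and a modest harm.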
They encouraged doctors to be mature. To not rely on the dumb dichotomy of positive or negative results. Instead, they wrote that this trial was designed to estimate the treatment effects of early vs later initiation of OAC as well as the precision of this estimate. Their protocol paper said:
Although we propose a different analytic approach to that often seen in clinical trials, this should not hinder interpretation of trial data or their clinical utility. We also believe that the complexity of managing patients with AF early on after AIS [acute ischemic stroke] precludes simplified dichotomous decision-making and necessitates some leeway for individual decision-making.
This makes me tingle with delight. It supports my view that guideline writers should resist the urge to put therapies in those colored boxes of recommendations. Rather, they should review the literature, and doctors would be responsible for melding best evidence with clinical judgment and patient preferences on an individual basis.
This view brought strong disagreement from a European academic whom I respect greatly. She wrote to me:
I have to disagree when you say that colored boxes are not useful in guidelines because they oversimplify decisions.
I believe that we have to remember what was the driver of starting these recommendations in the first place.
It is a way to standardize treatment and “protect” patients from physicians who don’t keep up with updates and may make decisions without taking all the information/evidence into account.
Moreover, it helps clinicians’ discussions with payers about reimbursement.
Not everyone is able (or takes the time) for critical appraisal of published data and the potential arbitrariness of this new approach may hinder evidence-based decisions.
If we want to disseminate knowledge and widely implement simple treatment strategies we have to keep it simple.
My response to this decidedly majority view is first to thank the listener for listening and taking the time to write.
Yes, I agree that there is a lot of bad care out there. But I would rebut the argument that colored boxes protect patients from poorly informed doctors by noting that for every case in which simplified recommendations help a doctor make a good decision, they also give profit-driven doctors license to overtreat.
Here in the United States, from my vantage point, a therapy needs only to appear in those boxes, even as a IIB/level of evidence (LOE) C recommendation (the lowest level), to be used by American doctors to justify overuse of implantable cardioverter defibrillators (ICDs), AF ablation, occlusion devices, and stents, both peripheral and coronary. It doesn't matter that an expensive therapy gets only a IIB/LOE C. If it's in the guidelines, it's fair game to be abused in a profit-driven system.
As for the matter of using guidelines for payers' decisions, this is extremely complex, not least because of the competing interests. To be fair, I don't have an easy rebuttal to using guidelines for coverage, but why not apply similar logic? For every inappropriate procedure that simplified recommendations prevent, they can also be used to overtreat. European guidelines haven't seemed to resolve the differences in procedure use between, say, Germany and Portugal.
One more interesting bit of feedback. Professor David Cohen, whom you should be following on Twitter, wrote to say that, in 2013, the New England Journal of Medicine published a trial (STREAM) that had a moderate sample size and an exploratory statistical approach.
They found that pre-hospital lytic therapy vs going straight to percutaneous coronary intervention (PCI) led to a 14% lower relative risk of the major adverse cardiac event (MACE) endpoint. The CI ranged from 0.68 (a 32% reduction) to 1.09 (a 9% increase). But they still calculated a P-value and declared "no difference."
The discussion was even weirder. Here they sort of explained that “factors” such as getting adequate funding, and a global shift to primary PCI, and the capacity for pre-hospital randomization, induced them to do a smaller trial without a primary statistical hypothesis.
They went on to write that the upper limit of the CI, 1.09, essentially excludes more than a 9% increase in MACE. And, although this was not a noninferiority (NI) trial, most NI trials use margins that allow a greater than 9% increased risk, implying that this result could be considered NI.
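That reasoning can be made concrete in a few lines. The 1.20 margin below is a hypothetical illustration (no NI margin was pre-specified in STREAM); the CI bound is the one quoted above:

```python
# Informal noninferiority check: does the upper CI bound fall inside
# a margin that an NI trial might plausibly have used?
# NOTE: the 1.20 margin is hypothetical, not from the STREAM paper.
rr_upper = 1.09        # upper bound of STREAM's 95% CI, quoted above
ni_margin = 1.20       # hypothetical NI margin (20% allowed excess risk)

# The CI excludes any excess risk greater than 9%, so it clears any
# margin more permissive than that.
meets_margin = rr_upper <= ni_margin
print(f"Upper bound {rr_upper} vs margin {ni_margin}: "
      f"noninferior under this margin = {meets_margin}")
```

The same check would pass for any hypothetical margin above 1.09, which is the authors' point.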
I don’t know about you all, but this is an intensely interesting concept to me, because what STREAM and ELAN authors seemed to be saying is: We are doing this trial, we are pretty sure we can’t enroll enough patients, and we expect wide CIs, but we will leave it up to the user of evidence to apply the results.
I like the idea of letting doctors interpret evidence without made-up thresholds of significance, but I also would not want this “exploratory” approach to enable underpowered trials. When I look at the CIs of ELAN and STREAM, it would not have taken many more randomly assigned patients to increase the precision of the effect size.
I know I am spending a lot of time on this idea of trial interpretation, but it is super-important for the entire evidence-based approach to patient care.
MONITOR HF Trial
I’m going to need to be careful here. I mean no personal malice toward the authors of the MONITOR HF trial, or Prof Angermann who wrote the editorial, or the editors and peer reviewers of the Lancet, but this is a really difficult story to tell.
MONITOR HF is a Dutch RCT testing the CardioMEMS pulmonary artery (PA) monitoring device in 348 patients with heart failure (HF).
Half the patients were randomly assigned to have their HF managed with the wireless sensor in the PA. Half were treated with standard of care.
The primary endpoint was a subjective one measuring quality of life (QOL): the Kansas City Cardiomyopathy Questionnaire (KCCQ).
KCCQ is patient-reported. It includes a bunch of questions that ask patients to assess things like physical limitations, symptoms, self-efficacy, QOL, social limitations, and emotional well-being.
The responses are on a Likert scale, which assigns numbers to each answer.
MONITOR HF was not powered to detect clinical outcomes. The authors tell us in the first sentence of the methods: “This was an open-label RCT....” In other words, one group got a super-fancy wireless and invasive monitor. The other group got nothing.
The difference in mean change in KCCQ overall summary score at 12 months was 7.13 points, which was statistically significant (95% CI 1.51-12.75; P = 0.013).
So, yes, it was positive.
Also positive were hospitalizations for HF (HHF): a 44% reduction. Of course, the absolute risk reduction (ARR) was minimal, a delta of ≈ 0.3 HHF per patient-year. Cardiovascular (CV) death and all-cause death were similar in both arms.
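The gap between the relative and absolute framing is worth making concrete. The baseline rate below is a hypothetical round number (the podcast quotes only the 44% relative reduction and the ≈0.3 absolute delta):

```python
# Relative vs absolute risk reduction, with a hypothetical baseline rate.
baseline_rate = 0.68                 # HHF per patient-year (hypothetical)
relative_reduction = 0.44            # the 44% reduction quoted above

treated_rate = baseline_rate * (1 - relative_reduction)
absolute_delta = baseline_rate - treated_rate

print(f"{baseline_rate:.2f} -> {treated_rate:.2f} HHF per patient-year, "
      f"an absolute delta of {absolute_delta:.2f}")
```

The same 44% relative reduction would yield a much smaller absolute delta at a lower baseline rate, which is why relative risk alone can flatter a therapy.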
The authors concluded:
Hemodynamic monitoring substantially improved quality of life and reduced heart failure hospitalizations in patients with moderate-to-severe heart failure treated according to contemporary guidelines.
These findings contribute to the aggregate evidence for this technology and might have implications for guideline recommendations and implementation of remote pulmonary artery pressure monitoring.
The accompanying editorial comprised nine paragraphs. One paragraph in the middle discussed the stomping elephant in the room, namely that MONITOR HF was an open label trial that measured a totally qualitative endpoint.
But Dr Angermann dismisses this fatal (and I mean fatal) flaw by arguing that PA pressures and BNP values were nominally lower in the CardioMEMS arm.
She then spends three paragraphs lauding the positive findings: a) It’s consistent with previous trials, and b) it’s positive for CardioMEMS in a different healthcare setting (in the Netherlands rather than the US). Quote:
Together, findings provide robust information for health-care providers, regulatory agencies, and payers about the potential of pulmonary artery pressure-guided heart failure management.
This is the ninth paragraph:
I declare personal fees from Abbott for serving as the Chair of the Steering Committee for the (MEMS-HF) study, which evaluated the CardioMEMS pulmonary artery pressure monitoring technology (manufactured by Abbott). I also disclose consulting fees and speaker honoraria and reimbursement of travel costs from Abbott.
Before I tell you my comments, let me also add that my Altmetric Chrome extension reveals that 71 news outlets carried this trial with glowingly positive headlines. More than 100 Tweeters sent it out, almost none with any criticism.
Comments. Critical appraisal guru Sanjay Kaul sent out a sober and wise tweet. He contrasted MONITOR HF with GUIDE-HF, a trial three times the size with a proper blinded arm that found no difference in KCCQ.
This story is as depressing as it gets for me. I believe in medicine; I welcome innovation. I’ve seen lives saved and extended by advances in medicine. We need innovation.
But this is terrible. You have a totally broken trial. You cannot assess QOL when one arm gets an invasive treatment, and the other arm gets nothing. It would be like a pain trial wherein one group is told their injection has strong doses of morphine and the other arm is told it gets nothing. But that’s not the worst part.
The worst part is that this is spun by the authors and the editorialist as positive. Media covers it as positive.
It contributes to the marketing power of an expensive device that shows little to no evidence of efficacy. Recall that GUIDE-HF failed to meet its primary endpoint, but gained approval based on a post-hoc pre-COVID sub-analysis that barely met significance.
Let me make a broader point about medical conservatism. My friend Andrew Foy and I have talked about this. Medical conservatism is basically about a vision. Like Sowell’s conflict of visions, between constrained and unconstrained visions.
The medical conservative vision is one of skepticism, that most stuff doesn't work or works marginally. True breakthroughs, like insulin and antibiotics, are rare. We believe skepticism is most consistent with true scientific thinking.
But the alternative vision is one of optimism. This isn't all bad; optimism is necessary for innovation. Like Sowell's unconstrained visionaries, optimists believe that if we keep trying hard enough, we can find the cure for all that ails humanity.
If it were a fair fight, with no camp having its hand on the scale, the conflicting skeptical and optimistic visions would likely come to a draw. Empirically, we skeptics probably have the stronger argument. Most people don't want low-value extra care.
A therapy that increases lifespan by 2 to 4 weeks wouldn’t move the needle (and that’s generous because most new heart therapies can’t look at mortality because it would make no difference). A study without a proper placebo arm would be dismissed. The skeptical vision would win.
But MONITOR HF shows how this isn’t close to a fair fight.
The optimists have industry backing. They have the dollars. The dollars from industry not only support researchers and editorialists, but they also support the professional societies, and they infuse the guidelines, which are translated to standard of care.
Walk through the expo of any cardiology meeting and you can see the influence. One company at the American College of Cardiology (ACC) meeting had a NASCAR stock car with drug labeling on it.
CardioMEMS is a perfect example. A skeptic has no chance. Not only does the device make money for the company, but each time it transmits data, it creates a bill.
It’s a cash machine for all the “stakeholders.”
Profit would be fine if the device improved quality of life or extended life, but, obviously, anyone who looks at the data without the spin would know it doesn't do those things.
Now consider that the company that makes the device contributes to professional societies. It will be written into the guidelines. It’s not a fair fight.
I picked on CardioMEMS, but there are many other examples: Sacubitril/valsartan, left atrial appendage occlusion, early AF ablation, vascular screening, and the list goes on.
My friends, read A Conflict of Visions by Thomas Sowell. Tell me I am wrong. Send me a note, convince me not to be cynical.
The Story of Intensive Treatment of Hypertension in Hospitalized Older Patients
The longer you practice medicine in a hospital, the more cases of harm you will see from overzealous treatment of high blood pressure (BP) readings in older patients.
The standard scenario: an older person is in the hospital, often for some non-cardiac condition. He or she is anxious. Maybe in pain. Maybe they have missed their morning meds.
A nurse records a BP of 210/90. BP is a vital sign. The nurse is concerned and calls the doctor, who then orders a potent BP-lowering drug, sometimes by vein. Two hours later, the patient falls in the bathroom and breaks a hip, suffers pneumonia after the hip surgery, and subsequently dies.
This is pathos, not logos. But despite these stories, not much has changed. I, and likely you too, still get calls tempting us to treat the abnormal vital sign, high BP.
Tim Anderson and colleagues at Beth Israel Deaconess in Boston and the University of California, San Francisco, have attempted to turn all this pathos into logos. JAMA Internal Medicine has published their large retrospective observational cohort study of 66,000 veterans older than 65 years.
Their goal was to study the association of intensive treatment of high BP readings with clinical outcomes in the hospital. All these patients were admitted for non-cardiac reasons.
They defined a BP intervention as receipt of one or more doses of an intravenous antihypertensive of any class, or oral doses of an antihypertensive class the patient was not taking prior to hospitalization.
All these patients had elevated BP readings in the first 2 days after admission.
The authors now had two groups, non-random; one got intensive treatment for BP and one did not.
They attempted to do a trial emulation. One assumption was that there is great heterogeneity in who gets treatment for high BP readings. Read, it is sort of random.
They then defined an exposure window of the first 48 hours. Read, time zero.
The study outcome was a composite of many things: inpatient mortality, acute kidney injury, stroke, myocardial injury, BNP elevation, and transfer to intensive care unit.
Since these were non-random groups, the authors used propensity matching with something called an overlap weighting approach. It’s over my pay grade, but it appears to be robust. The results are unlikely to surprise anyone who works in a hospital:
First, patients who received early intensive BP treatment continued to receive more BP-lowering meds during the remainder of the hospital stay; 6 vs 1.6 meds for those who did not receive early treatment.
Intensive BP treatment was associated with statistically significant 28% higher odds of experiencing a primary outcome (OR, 1.28; CI, 1.18-1.39).
Intravenous BP lowering had the strongest association with a primary outcome (90%).
These associations were consistent across subgroups.
The authors concluded:
These findings do not support the treatment of elevated inpatient BPs without evidence of end organ damage, and they highlight the need for randomized clinical trials of inpatient BP treatment targets.
Note the causal language in those verbs.
Comments. Lead author Dr. Anderson had a nice thread on Twitter, and it provoked many comments. (By the way, this is an advantage of social media such as Twitter: it can serve as the discussion section of a study. That used to occur in one room at a meeting but now occurs globally.)
The momentum of the commentariat was that we do great harm when we treat isolated high BP readings in older patients admitted for non-cardiac reasons. We should stop doing it. Less is more in this case.
I get that. I have seen the harm, the pathos.
In the early years of this podcast, I would have leaped to the conclusion of most commenters that this study confirms our prior beliefs (based on experience) that intensive treatment of BP in the hospitalized elderly is bad. Right out of Bayes' theorem: prior belief plus strong association = even stronger belief.
But now, after spending many podcasts on the dangers of making causal conclusions from non-randomized data, and just weeks ago, covering a highly imperfect attempt by super-experts to emulate RCTs with observational data, I am more cautious.
Non-random association studies are always susceptible to selection bias. Two clinicians (a nurse who calls the doctor and a doctor who treats) decided to treat some veterans and not others.
The authors suggest that this is likely random — because of the massive heterogeneity in who gets treated. And they bolster any natural variation with propensity matching.
So, perhaps, this is a stronger-than-typical non-random comparison. But it is still possible that sicker patients received intervention and their inherent sickness is what drove the association. The authors do special sensitivity analyses to argue that it is unlikely to be confounding. Okay. Maybe.
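For readers curious about the overlap-weighting idea, here is a minimal simulated sketch (not the authors' code; the data are invented): each treated patient is weighted by 1 minus the propensity score and each untreated patient by the score itself, which concentrates the comparison on patients who plausibly could have been managed either way.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical simulation: sicker patients (higher severity x) are more
# likely to be treated AND more likely to have a bad outcome.
x = rng.normal(size=n)                   # illness severity (confounder)
e = 1 / (1 + np.exp(-x))                 # propensity to be treated
t = rng.binomial(1, e)                   # treatment assignment
# Outcome depends on severity but NOT on treatment (true effect = 0).
y = 2 * x + rng.normal(size=n)

# Naive comparison is confounded: treated patients simply look sicker.
naive = y[t == 1].mean() - y[t == 0].mean()

# Overlap weighting: treated get weight 1 - e, untreated get weight e.
w = np.where(t == 1, 1 - e, e)
ow = (np.average(y[t == 1], weights=w[t == 1])
      - np.average(y[t == 0], weights=w[t == 0]))

print(f"naive difference:        {naive:+.2f}")
print(f"overlap-weighted diff.:  {ow:+.2f}")
```

In this toy setup the naive comparison shows a large spurious harm while the overlap-weighted estimate lands near the true effect of zero. Of course, that works here because the simulation's only confounder is measured; the unmeasured-sickness worry raised above is exactly what no weighting scheme can fix.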
Anil Makam, a super-smart academic internist pushed back on my caution by saying:
There's no clinical equipoise here or expectation that bringing a number down will change any of these outcomes for noncardiac hospitalizations; more of one-sided analysis of the harms of an errant practice.
And of course, when I wear my doctor hat, I agree. Proponents of anything aggressive bear the burden of proof. Pathos tells me that this association is real. But when I wear my neutral Martian science-adjudicator hat, I have trouble knowing whether this is one of those trial emulations that would align with a proper trial.
The answer would be to do a trial. Elders with a systolic BP above a certain number could be randomly assigned to receive intensive BP lowering or not. Then you can follow them for outcomes.
That's not likely to occur, so we are left with our experiences (pathos) and this bit of data. I conclude by remaining more afraid of aggressive treatment of a number than of leaving it alone, but I am not sure these data move that belief much.
© 2023 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Jun 02, 2023 This Week in Cardiology Podcast - Medscape - Jun 02, 2023.