Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.
In This Week’s Podcast
For the week ending January 20, 2023, John Mandrola, MD comments on the following news and feature stories.
This podcast, cardiology journals, meetings, and news sites often feature the big four classes of drugs for heart failure (HF): renin angiotensin system (RAS) inhibitors, mineralocorticoid receptor antagonists (MRA), beta blockers, and SGLT2 inhibitors. Devices, too, get lots of attention. But gosh, the core of helping folks with heart failure and congestion is loop diuretics.
The question of which of the big two, furosemide or torsemide, is best has long been debated. Basic furosemide is fine for newly diagnosed, diuretic-naïve patients, but once patients get sick enough to be hospitalized, I’ve often wondered if one of my secret tricks, torsemide, is better. And to know that, you need a randomized controlled trial (RCT).
The Journal of the American Medical Association (JAMA) has published the TRANSFORM-HF trial — a pragmatic RCT of furosemide vs torsemide in patients with HF. The primary endpoint was all-cause mortality, with five secondary outcomes.
I discussed this trial on the November 11, 2022 #TWICPodcast after the study was presented at the American Heart Association meeting. It’s a big trial and worth a quick re-look now.
The stand-out feature was its pragmatic nature. Patients were recruited from 60 centers after a hospital stay for HF.
About 1400 patients were in each arm.
Dose and frequency of the randomly assigned therapy during hospitalization and at discharge were determined by the treating clinician, but the trial did offer suggested guides for dosing.
Changes in dose and frequency of the randomly assigned therapy after discharge were at the discretion of the patient’s usual outpatient clinicians.
After discharge from the hospital, no further study-specific follow-up visits were scheduled; patients were followed by phone.
The choice of endpoint, all-cause death, is curious, right? I say that because I wonder why they thought one loop diuretic over another would have an effect size so large that it could move mortality. I mean, there hasn’t been a mortality reduction in a primary endpoint from an HF therapy since the RALES trial in 1999.
Well, the rate of death in the two arms was nearly identical. And there were a lot of deaths: 26% in each arm.
So, you might wonder, well, what about hospitalizations for HF (HHF)? No, you don’t get to know that. Why? Because the authors measured the correct endpoint — total hospitalizations. HHF are one small component of hospitalizations, and if you are a 75-year-old person, you don’t care what kind of hospital admission is prevented, you just don’t want to be admitted.
The TRANSFORM-HF authors did it right: they measured total hospital admissions and found no statistically significant difference.
I won’t tell you about subgroups, because without even a trace of a signal in the main results, it would be foolhardy to look at one subgroup.
Before we conclude that the old stand-by furosemide is where we should start, there were some limitations in this trial, most of which occurred because it was a pragmatic trial. Pragmatic trials, by definition, have limitations.
First, they reached the target number of primary outcome events (deaths) even though the sample size was half what was planned. This limits the interpretation of outcomes like hospital admissions.
I already discussed the second limitation, the choice of endpoint. All-cause death is a great endpoint because there’s no bias in adjudication (see also the last two weeks’ discussions of FOURIER), but it’s pretty far-fetched to think a loop diuretic choice is disease-modifying enough to move death.
Another limitation was diuretic discontinuation and crossovers. Again, this isn’t one of those trials with strict protocols. Pragmatic trials are designed to simulate what happens in real practice. That’s good because the results generalize well, but it’s bad because there’s a lot of noise that might hide signal.
That said, I think these data tell us there isn’t a big, obvious signal, and thus no reason to favor torsemide over furosemide in general.
But that is not the main message. I believe the main message is the tension between how to produce evidence. The authors show us a roadmap for doing a pragmatic trial that collects outcomes from regular practice. That is a positive.
But the noise (the broad entry criteria spanning all types of HF, the dropouts, the crossovers, the varying dosing, the ability to look only at big endpoints) is something we have to think about.
I spend a lot of time each week critiquing the generalizability of highly selective and highly controlled RCTs. The question is always, is my patient in my clinic similar to those in the trials? Does the evidence from a trial with oodles of nurse coordinators apply to my patient who can’t find a ride to clinic?
Well, at least in a standard tightly controlled RCT, one can get an average effect for that group of patients in that environment.
Final comment: Please do read the editorial from mastermind Michelle Kittleson. Of all the debates I have lost, Dr. Kittleson most thoroughly shredded me. Kittleson has an excellent Twitter feed and a brand new book, called Mastering the Art of Patient Care.
She makes another really important point: It is absolutely crucial to publish all trials, even those with non-significant results. Why? Because if the only trials that get published are those with positive results, what do you think happens to meta-analyses?
Adverse Events During Hospital Admissions
The New England Journal of Medicine (NEJM) published a retrospective observational study from Harvard researchers who studied adverse events during a hospital admission. The authors frame their study by citing 1991 data on adverse events. They say lots has changed for the good. Reducing central line infections has been a shining star. But we now have electronic data, and we need more of a contemporary look at adverse events during a hospital admission.
I feel old because I’ve practiced through this entire time course. Things are better, but every month when the committee I sit on reviews cases, I think we could do better. And when I look back at my work, I think I could do better. This work is so darn humbling.
Before I tell you a single thing about this random sample of 11 Massachusetts hospitals, I want to set out two important facts about the modern age:
One is that you have to be very ill to be admitted to an inpatient service.
The second is related: the best way to avoid having an adverse event during a hospital admission is to not be sick enough to be admitted. And that’s the thing, the better we get at keeping people alive longer, the sicker the patients we will see in the hospital. 25-year-old men don’t get urinary retention after an ablation; 80-year-old men do.
Here are the topline data.
In a random sample of 2800 admissions during 2018, they found at least one adverse event in 24%.
Of these, nearly 1 in 4 were judged to be preventable, and nearly a third of these had a severity level of serious or higher.
In all, nearly 7% of admissions had at least one preventable adverse event.
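As a rough back-of-envelope check on these topline numbers (this assumes, purely for illustration, that preventability is spread evenly across affected admissions; the study tallied events directly):

```python
# Back-of-envelope arithmetic for the topline numbers above.
admissions_with_event = 0.24   # ~24% of admissions had at least one adverse event
preventable_fraction = 0.25    # ~1 in 4 events judged preventable

# If preventability were spread evenly across affected admissions,
# the share of admissions with a preventable event would be roughly:
approx = admissions_with_event * preventable_fraction
print(f"{approx:.0%}")  # prints "6%", in the ballpark of the reported ~7%
```

The small gap between this crude 6% and the reported ~7% is expected, since some admissions had more than one event.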
Adverse drug events were the most common adverse events (accounting for nearly 40% of all events).
Second most common were surgical or other procedural events (30.4%), followed by patient-care events such as falls and pressure ulcers.
Healthcare-associated infections were about 10% of all events.
Here is a copy-and-paste from the conclusions: “These findings underscore the importance of patient safety and the need for continuing improvement.”
Also, “Our approach may have resulted in conservative estimates of event rates.”
Comments. While I applaud the effort to improve patient care, I really dislike this paper. I don’t know any of the authors, and mean them no malice, but this sort of analysis does the work of caring for extremely sick patients a real disservice.
First of all, consider the methodology. Nurses reviewed an electronic record for things they identified as an adverse event. In hindsight.
Nowhere in that review is a picture of the patient. Or the context of the care, or the family. This is a fatal flaw in my opinion.
When we review cases each month at my hospital, here is what happens. A physician with 30-plus years of inpatient experience, who worked in our hospital, screens cases to bring to the committee. Most “quality referrals” don’t pass muster for review. Of those that are reviewed by a committee of physician peers, a scant minority rise to the level of further inquiry. That doesn’t mean we can’t do better; it means that when we are judged by practicing clinicians, major issues are rare. Experienced clinician judges realize the sharpness of hindsight and the foolishness of thinking you’d do better without the knowledge of hindsight.
Another fatal flaw comes from a look at the data table listing the actual complications. The most common adverse event from a drug reaction was hypotension. Again, no context. How in the world would one ever “maximize the rapid initiation of guideline directed medical therapy” in patients with HF without encountering low blood pressure (BP)? The only way to avoid low BP with a drug is to stop prescribing drugs.
In the supplement, one low BP episode was labeled a fatal event after surgery. Again, we don’t know if this was a life-saving attempt at surgery, no context. Retrospective chart reviews lack context.
Here is another example from the appendix of a serious and potential adverse event: “Patient triggered for low BP and newly developed acute kidney injury with creatinine of 2.0. Physician’s note states that both are likely due to overdiuresis.”
This is nuts! If you haven’t over-diuresed a patient, you can’t be treating HF patients.
Then there are falls. You hate for a fall to occur in a patient in the hospital. But how do we prevent such things? One way is to hook up bed alarms that shriek with ventricular fibrillation-inducing decibels. These audible shocks can go off when a patient shifts in the bed. Healing is nearly impossible.
There are ways to improve hospital care:
One is by hiring enough staff.
Another is retaining the staff you have. That’s hard in this care environment. Hospitals can’t do any good if they go bankrupt. This situation is no one human’s fault. It’s a policy failure. I like to say, we get what we tolerate. And voters seem to tolerate lousy policy.
Another way to improve outcomes is to reduce the burden and distractions on caregivers. Those who are taking care of patients want to do it well, but right now, they have to serve two masters: one is the patient; the other is the middle managers making sure that all the boxes are checked. Again, that is a policy failure. We get what we tolerate.
Every day, every clinician I have ever spent any amount of time with works really hard at delivering the best care possible. Motivation is not the problem. Insufficient algorithms are not the problem. Documentation is never the problem.
The problem is a) the system, and b) the notion that healthcare can be free of errors. We must, and I mean must, have a discussion of what our tolerance for error is. If the answer is zero, there will be tremendous harm.
The sponsors of the study were the Controlled Risk Insurance Company and the Risk Management Foundation of the Harvard Medical Institutions.
Finally: Just in case, by accident I am sure, an NEJM editor happens by this podcast: we love your journal, and we understand the business model of attention and clicks. So, we have no problem spotting you a few head scratchers like the one about MI care during the Boston Marathon. To err is human, after all.
But please, when you are sent flawed observational studies that can harm those who actually do the work at the bedside of the infirm, resist the urge. Please. Always. Stand strong for the people who attend the sick and vulnerable, at all hours of the day and night.
The World Hypertension League has published a consensus document that is worth a mention. Their document calls for urgent strengthening of regulations regarding the accuracy of automated blood pressure monitoring devices (BPMDs). It turns out that most automated BPMDs that are marketed for sale globally have not undergone adequate validation testing to ensure clinical accuracy.
Of course, I agree with such a commonsense policy, and the reason to talk about this topic is scale. Few problems could be more relevant to cardiovascular (CV) disease than hypertension. But at the core of tackling this problem is accurate and precise measurement of blood pressure.
I know an office that is going to start a BP quality control program. Middle managers are going to begin looking back at electronic health records, to see how well doctors are doing with BP control of their patients.
But there may be a problem with measurement. That’s because, in this office, I am told, patients have to first struggle to find a parking space, then run a gauntlet of paperwork, then face questions such as have you been exposed to Mpox or been to Uganda in the last week, and then finally, they have their BP taken by an associate who is also asking them questions to fill in boxes for middle management.
The obvious worry I have with much of hypertension management is the measurement of BP. A person records this one BP, manually, by the way, and will place that number in the chart. It is often much higher than the blood pressures that the patient brings in from home.
A hypertension specialist at a major referral center told me how they take BP measurements. Consider the difference. The patient enters a room, sits at a table, both feet on the floor, arm at heart level on the table. An automated cuff is placed on the patient’s arm.
The patient sits quietly. The attendant leaves the room. Five blood pressures are taken automatically, every minute. The first one is thrown out, the next four are averaged.
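For illustration only (the function and the numbers are mine, not from any guideline, but the rule matches the protocol just described: discard the first reading and average the rest):

```python
def automated_office_bp(readings):
    """Average automated office BP: drop the first of the one-minute
    readings, then average the remaining ones.
    `readings` is a list of (systolic, diastolic) pairs."""
    if len(readings) < 2:
        raise ValueError("need at least two readings")
    kept = readings[1:]  # the first reading is discarded
    sys_avg = round(sum(s for s, _ in kept) / len(kept))
    dia_avg = round(sum(d for _, d in kept) / len(kept))
    return sys_avg, dia_avg

# Five one-minute readings; the first (often the highest) is dropped.
print(automated_office_bp([(152, 90), (138, 84), (136, 82), (134, 82), (132, 80)]))
# prints (135, 82)
```

Note how the averaged number (135/82 in this made-up example) is meaningfully lower than the first reading (152/90), which is exactly the white-coat effect the protocol is designed to wash out.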
The digital era has created a population of caregivers and patients obsessed with numbers and surrogates. I have no doubt that systolic BP is an important surrogate, but the World Hypertension League is right to emphasize the absolute crisis in obtaining accurate and reproducible numbers.
When we record falsely high blood pressures, caregivers intent on looking good are going to prescribe more tablets. And that can put elders at risk for falls.
My message is simple: before thinking about hypertension treatment, we should always think first about BP measurement accuracy. If any BP experts are listening, please let me know your thoughts.
Left Main CA PCI vs CABG — Again
Here we go again. The Journal of the American College of Cardiology: Cardiovascular Interventions (JACC-Intervention) has published a big “real-world” examination of revascularization strategies for left main coronary artery (LMCA) disease in Ontario, Canada.
This study used clinical and administrative databases, which are quite good and allow researchers to look at all patients having LM procedures without participation bias. But, of course, this was not a randomized comparison.
Over the past decade or so, about 1300 patients had LM percutaneous coronary intervention (PCI) and 21,000 had coronary artery bypass grafting (CABG).
The primary outcome was late mortality and major adverse cardiac events (MACE).
The two groups differed quite a bit in baseline characteristics, so the authors used propensity matching to find about 1000 patients in each arm that looked similar on a spreadsheet.
They confirmed that there was no difference in in-hospital deaths, but over the 7 years, death came to 46% in the CABG arm vs 65% in the PCI arm. The hazard ratio (HR) was 1.63, a 63% higher death rate with PCI.
MI favored CABG (MIs were 2.2 times more likely with PCI); stroke favored PCI (HR 0.61). And, of course, repeat revascularization was 3 times more likely after PCI.
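As a quick aside on reading those hazard ratios (a trivial arithmetic sketch; the function name is mine, not from the paper):

```python
def hr_to_pct_change(hr):
    """Express a hazard ratio as a percent change in the event rate."""
    return round((hr - 1) * 100)

print(hr_to_pct_change(1.63))  # 63, i.e., a 63% higher death rate with PCI
print(hr_to_pct_change(0.61))  # -39, i.e., a 39% lower stroke rate with PCI
```

Remember that a hazard ratio describes relative, not absolute, risk; the absolute gap depends on the baseline event rate in the comparator arm.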
The authors concluded, “After matching, there was no difference in early mortality but improved late survival and freedom from major adverse cardiac and cerebrovascular events with CABG.”
I highlight this paper, briefly, because it’s another great example of confounding — that is, sicker patients get PCI. How do I know that? Here is the trick: look at the main Kaplan Meier (KM) curve in Figure 2. The death curves begin separating early, within a year.
No matter how you feel about mortality signals in LM revascularization (do you favor the EXCEL analysis that showed higher mortality at 5 years with PCI, NOBLE that showed no difference, or the big TIMI-group meta-analysis that showed no difference?), you know that if CABG had an advantage, it would take years to have an effect.
Early separation of KM curves proves confounding in this case. Now, the authors did an extensive number of statistical adjustments, most of which are over my head, but always remember, you can only adjust for that which makes a data sheet. But the doctor or doctors making the decision at the bedside, in the real world, are using far more than what’s in a data sheet.
That, my friends, is the beauty of randomization. It balances all the measured and unmeasured confounders.
I agree with my friend Milos Milojevic, who wrote the excellent editorial. What is needed to answer this question is another very large, definitive LMCA disease trial. This could possibly be done in a registry-based system, similar to all those SWEDEHEART studies.
A final note: my thinking is coming around on this. Two more points:
If you take the average patient, CABG is clearly the best option. But I also have no doubt that the iterations in PCI allow for care of the non-average patient who may benefit from not having to go to surgery.
Second, though, come on; look at REVIVED-BCIS2. Medical therapy is amazing. The Yusuf meta-analysis was in another era. We really need an LM revascularization vs medical therapy trial. That would finally make us believe COURAGE and ISCHEMIA.
© 2023 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Jan 20, 2023 This Week in Cardiology Podcast - Medscape - Jan 20, 2023.