Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.
In This Week’s Podcast
For the week ending May 5, 2023, John Mandrola, MD, comments on the following news and features stories.
A quick note on listener feedback. Regarding the observational study on lead extraction in patients with cardiac devices and endocarditis that I heavily criticized last week, one colleague asked me, John, why do journals publish so many flawed studies?
Recall that this was a hopelessly confounded, non-random comparison wherein sicker patients surely did not get lead extraction, which produced a strong association between lead extraction and better survival.
My answer involves economics – that is, incentives.
Journals get attention. Authors get publications, which help them advance in the field.
Industry can also win from these studies. In this case, the flawed analysis promotes lead extraction, and the companies that make extraction tools stand to profit.
Legacy media also wins because it can cover these studies and garner attention. Rarely does a news article say that the study it covered is too flawed to draw conclusions from.
This is a core reason for the #TWICPodcast. Incentives will continue to drive the publication of flawed studies. Consumers of medical literature — you and me — need to keep up our skills of critical appraisal. If there were only great studies and cautious conclusions, there’d be no need for this podcast. You could just read the study’s conclusions, and the editorials, and guidelines.
Another comment I received referred to my lengthy dive into a trial emulation paper in JAMA. Another colleague said, John, I don’t know if it was me nodding off, but I could not follow your discussion. It was too complicated.
I worried about that, because this is the most important story of the year. My three-sentence summary:
Some of the top data scientists in the world set out to emulate already published randomized controlled trials (RCTs) from observational data sets.
Success at such an endeavor could change everything about medical evidence because observational data is much easier to come by than RCTs.
Their results were mixed: sometimes they could closely emulate an RCT result; sometimes they could not.
Perhaps take another listen to last week's episode. I will talk more about this project. For now, I think RCTs remain the only standard for knowing what works and what does not.
Beta Blockers Post-MI
In my talks on critical appraisal, I have a chapter on whether trials should have an expiration date.
The classic example is implantable cardioverter defibrillators (ICDs) for nonischemic cardiomyopathy. Herein, the trials that enrolled patients in the 1990s and early 2000s, like SCD-HeFT, found that ICDs reduced mortality when implanted in patients with heart failure (HF). But then background therapy improved, rates of sudden cardiac death fell, and a decade later, DANISH showed that ICDs had no effect on mortality. It’s the same therapy but it’s a different time.
Quality measures and therapeutic fashion hold that we use beta-blockers in patients after myocardial infarction (MI). Technically speaking, this is an evidence-based practice. But. The trials that found beta-blockers reduced adverse outcomes in post-MI patients were done decades ago, before the reperfusion era.
The condition of being post-MI is clearly different now than it was 20 years ago, so the question is whether the drugs would have the same benefit.
Journalist Sue Hughes covered a Swedish observational study published in Heart that drew from a nationwide database. The sample size of post-MI patients between 2005 and 2016 was more than 46,000. They excluded patients with left ventricular (LV) dysfunction and HF.
Patients were allocated to two groups according to beta-blocker treatment. The primary outcome was a composite of all-cause mortality, MI, unscheduled revascularization and hospitalization for heart failure.
Overall, at the index date 1 year after MI, 34,253 patients (78.5%) were receiving beta-blockers and 9365 (21.5%) were not.
As I have said many times on this podcast, these are non-randomized comparisons. A doctor, not randomization, chose to use or not use beta-blockers.
So, the authors did statistical adjustments to attempt to match the two groups.
After these adjustments, they found absolutely no difference in the primary composite outcome: hazard ratio (HR) 0.99, with a tight 95% confidence interval (CI) of 0.93-1.04.
The authors concluded: “Evidence from this nationwide cohort study suggests that beta-blocker treatment beyond 1 year of MI for patients without heart failure or LV systolic dysfunction was not associated with improved cardiovascular outcomes.”
Comments. This was a particularly strong association study. Large numbers, a robust national database, statistical adjustments. And. The lack of association of beta-blockers vs no beta-blockers in post MI patients comports with what you would expect in an era where MIs are stopped by reperfusion therapy. In other words, the finding of this nonrandom comparison is plausible.
Yet I think we have to be careful not to make causal conclusions. When clinicians, rather than randomization, decide on a treatment, there can be confounding variables.
The good news is that the authors tell us that there are now multiple (five) RCTs that will test beta-blockers in the post-MI state. BETAMI from Norway, REDUCE-SWEDEHEART from Sweden, DANBLOCK from Denmark, another from Spain, and ABYSS from France.
In a few years, the use of beta-blockers in patients after MI who do not have LV dysfunction or HF will be one of the most studied interventions in all of cardiology.
I lead with this topic because trial settings are hugely relevant to translating evidence into practice. The post-MI setting has changed so much.
I will bet any of you a large cappuccino that these studies will be neutral. That is because, once you reperfuse an MI and prevent the acute occlusion from causing LV dysfunction, event rates are so low that beta-blockers will be hard-pressed to lower them further.
Kudos to all the investigators who are studying this topic. I wish some of the money and time would be spent repeating PARADIGM-HF. Gosh it would be great to know if sacubitril/valsartan vs valsartan at the same dose comes out positive.
Declining Trust in Medicine: Statins and BMI
A Stanford research group, publishing in JAMA Network Open, reported a qualitative study of 10,000 statin-related discussions on Reddit over the last decade. They used a complicated artificial intelligence (AI) program — one that I do not exactly understand — to cluster these discussions into thematic groups. They seemed surprised by their findings. I wasn’t.
The first observation was that the raw number of discussions has increased over time.
Another observation was that the AI program identified thematic groups — ketogenic diets, diabetes, supplements, statin side effects, statin hesitancy, clinical trial appraisal, pharma bias, and red yeast rice and statins, for example.
The authors were able to conduct something called a sentiment analysis and noted that most discussions had a neutral or negative sentiment.
In the news article, senior author Fatima Rodriguez made some interesting comments to journalist Sue Hughes.
"Some of the themes were surprising to us. While we expected discussion on side effects, we were surprised to see so much discussion refuting the idea that increased levels of LDL were detrimental. There were also large amounts of posts about statin use being correlated with COVID outcomes. Our findings show how widespread this misinformation is.
"As a preventative cardiologist I spend a lot of my time trying to get patients to take statins, but patients often rely on social media for information, and this can contain a lot of misinformation.
"We need to understand all sorts of patient engagement and use the same tools to combat this misinformation. We have a responsibility to try and stop dangerous and false information from being propagated."
Comments. Obviously, this qualitative study with novel machine learning methods is not an RCT that will change practice tomorrow. But it sheds light on an important topic — trust in the medical profession.
Statin drugs are the most rigorously studied medical intervention, bar none. More patients have been enrolled in statin trials than any other drug. Most statins are generic. They are making few people rich anymore.
Statin drugs have been incredibly consistent in delivering 20% to 25% reductions in future cardiac events. The meta-analyses of these trials show almost no heterogeneity.
When studied with proper placebo controls and blinding, statin drugs have minimal to no side effects.
Yes, it is true, statin trials were done over 3 to 5 years and, as Rod Hayward has well described, there are unknown unknowns of taking the drugs over 10, 20, or 30 years. But that could be said of almost any drug.
Here is my question then: How can it be that such universally helpful drugs, with minimal to no side effects and low cost, carry so much negative baggage?
Think about that question before answering it. Here is my answer: It is the fault of the medical profession. This paper reveals the symptoms of distrust of us.
In my opinion, we are increasingly distrusted because of our approach. It is not the fault of social media platforms, nor those who use social media to speculate.
My theory is that instead of promoting a culture of critical appraisal we (the medical profession) take a paternalistic, holier-than-thou approach to certain things. Think guideline directed medical therapy writ large on the American people.
We saw it during the pandemic. Instead of being transparent about the vast uncertainty, we decreed and mandated things, without evidence. We leaned on expert opinion when we should have used the uncertainty to promote a culture of randomized trials. It could have been a revolution in epistemology. We could have said, gosh, this is a once-in-a-lifetime pathogen, and here is how we are going to learn more about it. We are going to randomize different approaches.
Unlike Professor Rodriguez, I do not try to convince patients to take a statin or any medical treatment. I see almost all medical treatment as preference sensitive. Especially preventive medicine.
The approach I take is to be an advisor. Even as an electrophysiologist, many patients take the opportunity to ask about statins. I am shocked at how few of them have been shown the risk calculator so as to understand their risk on and off the drug.
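As an illustration of what that calculator conversation looks like in numbers, here is a minimal sketch; the baseline 10-year risks are hypothetical, and the 25% relative risk reduction is the ballpark figure from the statin trial meta-analyses discussed above:

```python
# Illustrative arithmetic only. Baseline risks are hypothetical examples of
# what a risk calculator might output; the 25% relative risk reduction (RRR)
# is the approximate figure from statin trial meta-analyses.

def on_statin_risk(baseline_10yr_risk: float, rrr: float = 0.25) -> float:
    """10-year risk on treatment, given baseline risk and relative risk reduction."""
    return baseline_10yr_risk * (1 - rrr)

def absolute_risk_reduction(baseline_10yr_risk: float, rrr: float = 0.25) -> float:
    """Absolute difference between off-drug and on-drug risk."""
    return baseline_10yr_risk - on_statin_risk(baseline_10yr_risk, rrr)

for baseline in (0.05, 0.10, 0.20):  # hypothetical calculator outputs
    arr = absolute_risk_reduction(baseline)
    nnt = 1 / arr  # number needed to treat over the same horizon
    print(f"baseline {baseline:.0%} -> on statin {on_statin_risk(baseline):.1%}, "
          f"ARR {arr:.1%}, NNT ~{nnt:.0f}")
```

The point of the sketch is that the same 25% relative reduction means very different absolute benefits for low-risk and high-risk patients, which is exactly why the decision is preference sensitive.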
“My doctor said I need to take this to lower my cholesterol.” In that sentence, in the attitude of that sentence, is a grand metaphor for why we have lost trust.
It’s exactly why there are online discussions questioning basic things about medicine. Because people know they will get the “you need to take this pill to lower your cholesterol” line from their doctor.
These are all probabilistic decisions that can be discussed at the level of your patient. That doesn’t happen enough. It doesn’t happen in guideline documents, in quality measures and in the public sphere.
The American medical profession treats people as if they are stupid. And it makes me mad.
Take diet. We know that eating pure sugar and processed food is bad, but we don’t know the first thing about the heart-healthiest diet. We probably never will. And we should just say that.
People would trust us more if we acted like humble advisors rather than pompous professors.
One more trust shredder: People read about our debates over body mass index (BMI) as a measure of obesity.
A normal person thinks: how can they argue about something so tangential to one of the nation’s top health problems? Normal people know the problem isn’t how we measure obesity; it’s how we fix it.
I say good on the Stanford team for publishing these observations. We need more of it, so someday, the medical profession will better understand how the public sees us. Combating misinformation begins in our clinics and medical schools.
Access to TAVI

The Canadian Journal of Cardiology has published a study looking at access to transcatheter aortic valve implantation/replacement (TAVI/TAVR).
This topic is a hot potato. Here is why. When done in high- or moderate-risk surgical patients, TAVI can be a net positive for patients. (You know how I feel about low-surgical risk patients: we need a lot more data).
Yet, clearly, older patients with aortic stenosis (AS) who have good insurance and reside next to big centers in big cities have a greater chance of getting TAVI.
So, access to TAVI is a good thing, right? Yes, but there is a downside. TAVI takes a long time to master. High-volume operators and high-volume centers will perform better than low-volume centers. So, the downside or risk of expanding TAVI to, say, more rural areas, is that some of this access is to low-volume centers.
The Canadian study was an observational retrospective cohort study from Ontario and New York State (NY) and the goal of the authors was to explore the variations in access to TAVI in two areas.
One area had high access (NY) and the other had low access (Ontario). The populations are pretty close: 15 vs 19 million. Canada has a much different system — universal public health care vs NY State’s mostly fee-for-service, mixed-insurance model.
The primary outcomes were 30-day in-hospital mortality and all-cause readmission after TAVI. To get at these outcomes, the authors compared observed vs expected outcomes (what NY patients’ outcomes would have been had they been treated in Ontario).
17,000 patients had TAVI in 36 hospitals in NY; 5000 patients had TAVI in only 11 hospitals in Ontario.
In Ontario, access to TAVI increased from 18 per million in 2012 to 87 per million in 2018.
In NY, access to TAVI increased from 32 per million to 220 per million.
There was almost a 3-fold higher use of TAVI in NY State compared with Ontario over this time span.
30-day mortality was 3.1% in Ontario vs 2.5% in NY.
With adjustment, this translated to an observed-expected ratio of 0.70 (95% CI, 0.54-0.92) for NY patients. Of interest, when this indirect standardization was done for readmissions, there was no difference. Why is that? A clue perhaps.
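For readers unfamiliar with indirect standardization, here is a toy sketch of the observed vs expected calculation; the risk strata, rates, and counts below are invented for illustration and are not the study's data:

```python
# Toy illustration of indirect standardization (observed vs expected),
# the general method this study used. All numbers are invented.

# Stratum-specific 30-day mortality rates in the reference population
# (here standing in for Ontario).
reference_rates = {"low": 0.01, "medium": 0.03, "high": 0.08}

# Hypothetical NY cohort: patient counts per risk stratum, and the
# deaths actually observed in that cohort.
ny_counts = {"low": 500, "medium": 300, "high": 200}
ny_observed_deaths = 21

# Expected deaths if NY patients had experienced the reference
# (Ontario) stratum-specific rates.
expected_deaths = sum(ny_counts[s] * reference_rates[s] for s in ny_counts)

oe_ratio = ny_observed_deaths / expected_deaths
print(f"expected deaths: {expected_deaths:.0f}, O/E ratio: {oe_ratio:.2f}")
# An O/E ratio below 1.0 means fewer deaths were observed than the
# reference rates would predict.
```

Note that the ratio is only as trustworthy as the risk strata used to compute the expected count; any confounder not captured in those strata, such as the sicker-patient problem discussed below, distorts it.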
The conclusion – “Having greater access to TAVR may be associated with improved outcomes, potentially because of intervention earlier in the trajectory of the disease.”
Comments. Whenever I teach critical appraisal, I start by asking the reader to consider what a study is for. Sometimes it is for marketing. Sometimes it is to answer a question.
When you read the authors’ discussion — I mean, when you read between the lines, you get the sense this was a call to improve access to TAVI in Canada.
“It is an important area for further work to understand if the differences we observed among our cohorts are indeed due to fewer periprocedural issues. If so, this may be due to the New York cohort being earlier in their disease progression by having greater access and therefore shorter wait times,” the authors say.
The issue of access to TAVI (access, that is, to high-volume, skilled operators) is a vital question. The problem is that this study has way too many limitations to answer it.
They had no information on wait times or events in NY. (Recall that they used observed vs expected outcomes.)
Further, as with all non-random comparisons they cannot exclude confounders. For example, outcomes may have looked worse in Canada because they operated on sicker patients.
I worry about confounding because readmissions were not different. If this were a true association — NY is better — why would NY not also have fewer readmissions?
While I am sure the authors are well-intentioned, studies like this are ill-advised. The methods of this analysis are simply too limited to persuade anyone.
Better access to TAVI is a laudable goal in NY or Canada or anywhere. Using flawed analyses as a means to that goal is problematic because it reduces trust in the scientific method.
I wish accomplished academics would resist the urge to do these sorts of analyses, and I wish journal editors and reviewers would refuse to publish them. For every flawed analysis, especially those with outsized conclusions, trust in medical science is diminished.
Improving access to TAVI is not necessarily a universal good, if that access comes from low-volume centers. This is an important area of study, but it will take stronger methods than used in this paper.
© 2023 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: May 5, 2023 This Week in Cardiology Podcast - Medscape - May 05, 2023.