Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.
In This Week’s Podcast
For the week ending March 11, 2022, John Mandrola, MD comments on the following news and feature stories.
Two Announcements
First, This Week in Cardiology will take a week off as I am traveling to Denmark to speak. Second, I want to lead with a thanks for the comments and kind reviews on Apple Podcasts. Such things really help others find us.
AF Screening
Yet another trial has been published on atrial fibrillation (AF) screening, and yet another trial has failed to show that AF screening benefits patients. Circulation published the VITAL-AF trial from researchers at Harvard. It was a test of point-of-care screening with a handheld single-lead electrocardiogram (ECG) at primary care visits. The question was simple, and the primary outcome was the incidence of newly diagnosed AF.
This was a pragmatic cluster-randomized trial in which the authors randomly assigned 16 clinics to AF screening with a KardiaMobile device during the vital signs check or to usual care. Cluster-randomized trials are nifty and deserve a word. Here, randomization occurs at the clinic level, not the patient or clinician level. Cluster-randomized trials are useful for studying methods or approaches to patient care as opposed to evaluating specific effects of a medicine. These types of trials would have been perfect during the pandemic for studying system-wide questions, such as those drawing on National Provider Identifier registries.
Inclusion criteria enriched the cohort with patients likely to have AF, that is, those over age 65 years, which is smart.
Cluster randomization of that many clinics produced big numbers of patients: about 15,000 in each group.
The primary outcome of newly diagnosed AF occurred in 1.72% of individuals in the screening arm vs 1.59% in the control arm at 1 year (risk difference [RD] 0.13%, P=0.38); see the quick arithmetic sketch below for a sense of scale.
The authors did sensitivity analyses, such as intent-to-treat vs as-treated comparisons and multivariable analyses, and these showed similar results.
This is a solidly null result. Not even close to significance. As such, the proportion of individuals with new AF who were initiated on oral anticoagulants was not different. The authors rightly concluded: “Screening for AF using a single-lead ECG at primary care visits did not affect new AF diagnoses among all individuals aged 65 years or older compared to usual care.”
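For a sense of scale, here is a rough back-of-envelope on the primary outcome numbers. This is a sketch only: the per-arm count of roughly 15,000 is approximate, and the 0.13% difference did not reach statistical significance.

```python
# Back-of-envelope arithmetic on the VITAL-AF primary outcome,
# assuming roughly 15,000 patients per arm as stated above.
n_per_arm = 15_000            # approximate size of each arm
rate_screening = 0.0172       # newly diagnosed AF, screening arm
rate_control = 0.0159         # newly diagnosed AF, usual-care arm

new_af_screening = rate_screening * n_per_arm    # ~258 diagnoses
new_af_control = rate_control * n_per_arm        # ~239 diagnoses
risk_difference = rate_screening - rate_control  # 0.0013, i.e., 0.13 percentage points

# If the 0.13% difference were real (it was not statistically significant),
# this is roughly how many patients would be screened per extra diagnosis:
number_needed_to_screen = 1 / risk_difference    # ~770

print(f"Extra AF diagnoses per {n_per_arm:,} screened: "
      f"{new_af_screening - new_af_control:.0f}")
print(f"Patients screened per extra diagnosis: {number_needed_to_screen:.0f}")
```

In other words, even taken at face value, point-of-care screening yielded on the order of one extra AF diagnosis for every several hundred patients screened.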
However, I am sad to report that the authors resorted to spin. In the results section, in the paragraph right after reporting the null results, the authors report the results by age, and in patients older than 85 years, the results favored screening. The problem, of course, is that this was the smallest group, representing only 1200 of the 15,000 individuals.
Spin is language that distracts from the nonsignificant primary endpoint, and this is clearly spin. In the news story, the spin made the first and second paragraphs as well as the title of the piece. I did an Altmetric search on this paper. Altmetric is a measure of attention that looks at news coverage, tweets, blogs, and the like. The paper was picked up by seven news outlets, and every one led with something like: handheld ECGs may be most effective in the oldest adults. It is interesting that Twitter, of all things, seemed to emphasize the negative results more accurately.
Four Comments:
First, congratulations to the research group. Researchers who conduct trials to answer important questions deserve kudos regardless of the results. I wish that were the norm. Why didn’t they say: we thought a single-lead super-nifty ECG used in primary care clinics might help identify more AF, but it didn’t. And that is important to know.
I wish that were enough. Knowing what doesn’t work represents a huge advance in biomedicine. Then there would be less incentive to dip into small subgroups to find “positive findings.”

Second, I am not surprised by this finding. Why? Because of the control arm. I mean, if a patient cannot expect a clinician to pick up new AF, that would be a pretty low standard. Modern blood pressure cuffs can alert for an irregular rhythm. And if a patient gets even a modicum of an exam (a palpated pulse, a stethoscope outside the clothes), a trained clinician ought to be able to feel or hear an irregularly irregular rhythm.
The KardiaMobile device, and various smart watches, have their greatest utility in diagnosing symptomatic palpitations. A patient feels something and records the rhythm. This often avoids the need for medical-grade monitoring. It is cool. But as a screening tool, intermittent recordings face a real challenge. Recall that most asymptomatic AF is paroxysmal. A device that checks the rhythm once or twice daily stands a good chance of missing intermittent episodes.
I think we have enough data on AF screening as a means to reduce stroke. I wonder if the next frontier of AF screening is to determine whether picking up extra AF leads to important behavior changes, such as more exercise, improved diet, sleep hygiene, adherence to meds, and the like. In an ideal world, a new AF diagnosis ought to lead to attention to these cardiometabolic risk factors, and doing so could have huge effects on overall quality of life.
CTA vs ICA
The New England Journal of Medicine has published a large pragmatic randomized controlled trial (RCT) led by a German group in Berlin. It’s called the DISCHARGE Trial.
The group compared coronary computed tomographic angiography (CTA) vs invasive coronary angiography (ICA) as an initial imaging strategy to guide treatment of patients with stable chest pain who had an intermediate pretest probability of obstructive disease.
The primary outcome was a good one: cardiovascular (CV) death, myocardial infarction (MI), or stroke over 3.5 years. They also looked at procedure-related complications and angina.
About 3500 patients were randomly assigned at multiple centers in Europe. The average age was 60 years, 56% were women, and more than one-third had nonanginal chest pain. Only one-third had undergone stress testing. This was a superiority trial, powered to show that CTA reduced clinical outcomes compared with ICA.
Now the results:
Major adverse cardiac events (MACE) occurred in 2.1% of the CTA group and 3.0% of the ICA group (P = 0.10).
The 30% relative reduction and 0.9% absolute risk reduction did not reach statistical significance (a quick arithmetic check follows these results).
Major procedure-related complications occurred in 0.5% of the CTA group vs 1.9% of the ICA group.
There was no significant difference in angina during the last month of follow-up.
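Here is the quick arithmetic behind those relative and absolute differences. This is a sketch only; the comparison did not reach statistical significance, so the implied number needed to treat is purely illustrative.

```python
# Arithmetic behind the 30% relative and 0.9% absolute reductions quoted above
# (illustrative only; the difference did not reach significance, P = 0.10).
mace_cta = 0.021   # 2.1% MACE over 3.5 years, CTA-first group
mace_ica = 0.030   # 3.0% MACE over 3.5 years, ICA-first group

absolute_reduction = mace_ica - mace_cta            # 0.009 -> 0.9 percentage points
relative_reduction = absolute_reduction / mace_ica  # 0.30  -> the quoted 30%

# If the difference were real, roughly this many patients would need a
# CTA-first strategy over 3.5 years to prevent one MACE event:
implied_nnt = 1 / absolute_reduction                # ~111

print(f"Absolute reduction: {absolute_reduction:.1%}")
print(f"Relative reduction: {relative_reduction:.0%}")
print(f"Implied NNT over 3.5 years: {implied_nnt:.0f}")
```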
The Kaplan-Meier curves of the primary endpoint begin to separate at 6 to 12 months. Think about that: a diagnostic test (not an intervention) begins to reduce hard outcomes at 6 months.
Only 22% of the patients in the CTA arm ended up having angiography. The frequency of coronary revascularization procedures was 24% lower in the CTA group than in the ICA group, 14.2% vs 18.0%.
The authors write this line in their discussion: “Our trial confirmed the safety of a CT-first strategy and showed results that were similar to those with ICA.”
Media coverage led with this line: “CTA appears preferable to standard cath-based angiography for the initial evaluation of most stable, intermediate-risk patients with angina-like symptoms, researchers say, based on their study conducted at centers across Europe.”
Comments. First, I love pragmatic trials. They often have better external validity or generalizability because they simulate normal practice patterns. Digging into the methods and supplements, this appears to be an internally valid trial. Loss to follow-up, for instance, is low. But some of the core questions and the interpretation perplex me.
I don’t know what European norms are, but here in the United States, the initial evaluation of patients with typical angina, atypical angina, and noncardiac chest pain is stress testing. Functional testing. Not ICA. I can’t write in the chart that a 61-year-old has atypical chest pain and haul them off for cath. No payer would allow it.
The peer-to-peer reviewer would ask: Doc, what did the stress test show? And to be honest, isn’t this correct? If you have an intermediate probability of coronary artery disease and a normal, low-risk functional test, what are the indications for defining the anatomy? The better pragmatic question for me would be CTA vs stress testing.
The second perplexing thing is the notion that an imaging test can reduce MACE. The authors of DISCHARGE powered this study to test the superiority of an imaging test for reducing MACE. They also had to know that patients with a pretest probability of 36% would have low event rates. They expected CTA to reduce MACE by 40%.
I am just a regular electrophysiologist, but don’t we already know that revascularization (which is an actual intervention) did not reduce MACE at all in COURAGE, BARI-2D, and ISCHEMIA, and those were higher-risk patients? If interventions don’t reduce MACE in higher-risk patients, how can we expect an imaging test to reduce MACE?

Third point, and here Sanjay Kaul from Cedars-Sinai helped me out: the narrative of this trial seems to be that CTA did not reduce MACE, but it was as good as ICA with fewer complications. That is not technically the correct interpretation of a superiority trial. Instead, we say: CTA was not shown to be superior to ICA. You can qualify this by noting that the event rates were lower than expected, which reduced the power to detect real differences if there were any. Kaul says the results are best called inconclusive, and words like similar, comparable, and as good as don’t apply.
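To put a rough number on the power point, here is a back-of-envelope calculation using the observed event rates and approximately 1750 patients per arm. This is my own approximation, not the trial's prespecified power analysis, which assumed a much larger treatment effect.

```python
# Rough post-hoc power approximation for DISCHARGE, using a standard
# two-proportion z-test formula. Assumptions (mine, not the trial's):
# ~1750 patients per arm and the observed event rates of 2.1% vs 3.0%.
from math import sqrt
from scipy.stats import norm

n_per_arm = 1750
p_cta, p_ica = 0.021, 0.030

diff = p_ica - p_cta
se = sqrt(p_cta * (1 - p_cta) / n_per_arm + p_ica * (1 - p_ica) / n_per_arm)
z_alpha = norm.ppf(0.975)          # two-sided alpha of 0.05

power = norm.cdf(diff / se - z_alpha)
print(f"Approximate power to detect 2.1% vs 3.0%: {power:.0%}")  # roughly 40%
```

Under those back-of-envelope assumptions, the power to detect the difference actually observed comes out around 40%, which fits Kaul's framing of the result as inconclusive.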
I worry a lot about the expansion of CTA. Yes, it is useful for diagnosing left main disease, but the problem, at least in the United States, is that the presence of coronary artery disease (CAD) begins a cascade of downstream intervention.
And so often the CAD is incidental. A patient with AF gets a CTA to rule out CAD. He is found to have left anterior descending artery disease and, boom, he gets a stent, despite having no angina, despite being on a statin, and despite the results of ISCHEMIA. Now he faces dual or triple antithrombotic therapy. The problem was AF.
Having looked at the functional vs anatomic studies for diagnosing CAD, I am not convinced CTA should dominate. You get a ton of information from functional testing: exercise capacity, arrhythmia issues, and degree of ischemia.
Reducing Stroke After TAVR
One of the most feared complications of transcatheter aortic valve replacement (TAVR) is stroke. While rates of post-TAVR stroke have decreased somewhat, they have remained relatively flat, at around 2%.
Actually, I am quite surprised that there aren’t more strokes with this procedure, given what actually goes on in the proximal aorta and valve. I don’t mean to sound unscientific, but the new valve is sort of squished into a fibrotic, calcific native valve, and you’d expect tons of debris to go northward to the brain. In fact, debris is thought to be the main mechanism of stroke. So it makes sense that a cerebral protection device, placed as a sort of filter to catch debris before it hits the brain, would reduce stroke. Strengthening the plausibility argument is the fact that studies show these devices actually catch debris. However, small RCTs and observational studies have failed to show a reduction in stroke or mortality.
A meta-analysis of six studies of cerebral embolic protection devices (EPDs) found: “We found no evidence of difference between patients with and without CPD [RR 0.70 (95% CI 0.40-1.21)] for the primary composite outcome of stroke and mortality at 30 days.”
And a large registry-based observational study published last year found no association between EPD use for TAVR and in-hospital stroke in its primary instrumental variable analysis, and only a modestly lower risk of in-hospital stroke in a secondary propensity-weighted analysis.
Now to the research letter in JACC: Cardiovascular Interventions. The TAVR team at Cleveland Clinic used the readmissions database between 2018 and 2019 to study the association between EPD and stroke-related mortality after TAVR. This is administrative claims data.
They selected patients who developed stroke during the index hospitalization and compared patient characteristics, treatment, and outcomes between those who had stroke after TAVR with EPD vs without EPD. The primary endpoint was mortality, and the total sample size was 136,000 TAVR recipients.
This is obviously a non-randomized comparison. There were 10,000 patients who had the device and 126,000 who did not, which is already a seriously unbalanced comparison. Here is the key data point, although it is not their endpoint:
The rate of stroke was nearly the same: 1.85% in those with the protection device and 1.94% in those without it.
These are the results:
Patients with stroke after TAVR with EPD had significantly lower in-hospital mortality (6.3%) than those with stroke after TAVR without EPD (11.8%).
This difference persisted after adjustment. Despite this massively lower rate of death in the hospital, the 30-day mortality was not significantly different.
This paper offers a good lesson in critical appraisal. To be totally fair, the authors spend the next paragraph telling readers to be cautious in their interpretation, because, despite the large total sample, the number of patients who had a stroke (about 1.9%) is small. That led to wide confidence intervals, and despite a halving of mortality rates (6% vs 12%), the P-value barely made significance at 0.049.
Think about that. You cut mortality by roughly 6 percentage points in absolute terms and barely reach significance. That means there is a lot of noise.
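To see where the noise comes from, here is a rough back-of-envelope using the rates reported above. The counts are my own approximations derived from the stated percentages, not figures taken from the paper.

```python
# Implied event counts from the percentages reported above
# (approximations only, derived from the stated rates).
n_epd, n_no_epd = 10_000, 126_000                  # TAVR recipients with and without EPD
stroke_rate_epd, stroke_rate_no_epd = 0.0185, 0.0194
mortality_epd, mortality_no_epd = 0.063, 0.118     # in-hospital mortality among stroke patients

strokes_epd = stroke_rate_epd * n_epd              # ~185 strokes
strokes_no_epd = stroke_rate_no_epd * n_no_epd     # ~2444 strokes

deaths_epd = mortality_epd * strokes_epd           # ~12 deaths
deaths_no_epd = mortality_no_epd * strokes_no_epd  # ~288 deaths

# With only about a dozen deaths in the EPD-stroke group, a handful of events
# in either direction swings the comparison, hence the wide confidence
# intervals and a P-value that barely clears 0.05.
print(f"EPD arm: ~{strokes_epd:.0f} strokes, ~{deaths_epd:.0f} deaths")
print(f"No-EPD arm: ~{strokes_no_epd:.0f} strokes, ~{deaths_no_epd:.0f} deaths")
```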
The authors also note the limitations of administrative claims: you don’t get data on the degree of neurologic impairment, imaging findings, or details of the procedure.
But my friends, there are more obvious reasons this is noise rather than signal. Embolic protection devices work by reducing stroke. If you reduce stroke enough, you could possibly reduce death, but it would have to be a heck of a reduction in stroke to lower death rates, because of competing causes of death: stroke is only one of many ways a person can die. Recall that in LAAOS III, surgical left atrial appendage closure at the time of heart surgery led to a large and statistically robust reduction in stroke but had no effect on mortality, because old people have many ways to die. That does not mean stroke reduction isn’t an important endpoint. It is.
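A purely hypothetical example, with made-up numbers, shows why competing causes of death blunt the mortality signal from any stroke-prevention device.

```python
# Hypothetical illustration of the competing-risk point. None of these
# numbers come from a trial; they are made up to show the arithmetic.
overall_mortality = 0.03             # assume 3% mortality after a procedure
share_deaths_from_stroke = 0.20      # assume 1 in 5 of those deaths is stroke-related
relative_stroke_reduction = 0.50     # assume the device halves stroke-related deaths

stroke_mortality = overall_mortality * share_deaths_from_stroke   # 0.6%
other_mortality = overall_mortality - stroke_mortality            # 2.4%
new_overall_mortality = other_mortality + stroke_mortality * (1 - relative_stroke_reduction)

print(f"Mortality without the device: {overall_mortality:.1%}")
print(f"Mortality with the device:    {new_overall_mortality:.1%}")
# Even halving stroke-related deaths only moves mortality from 3.0% to 2.7%,
# a difference most trials are far too small to detect reliably.
```

The point of the sketch: a device would need both a very large stroke reduction and a large stroke share of total deaths before a mortality difference becomes plausible.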
In this study, the stroke rates were the same whether or not an EPD was used. I don’t see how you can reduce death if the stroke rates are the same. The authors say it might be because the device catches large debris, but if that were true, would we not have seen a similar signal in the many studies done before?
Here is another reason this is likely spurious. My structural partner says they don’t use a lot of EPD because of cost. The margins for TAVR are razor thin and adding another $2000 is not feasible without compelling reasons.
So the selection of an EPD is not random. Big wealthy centers like the Cleveland Clinic use these devices more than community programs do. Is there any reason to doubt that the patients who have TAVR at the Cleveland Clinic are perhaps healthier than those in community hospitals? Dr David Cohen, an academic interventional cardiologist, had a nice quote on Twitter this week about observational studies: “Some of them are right. I just don't know which ones.”
Here I would argue this one is highly likely to be wrong, due to bias. But here’s the thing: a major journal publishes it. The media cover it. Readers get the topline. Regardless of the limitations, readers see big-name people publishing data with a positive topline. That is how therapeutic fashions built on unproven therapies become ensconced. My advice to you all is this: when you read a study, always ask yourself whether it is marketing or science.
Writing in Medicine
Medscape Medical News has a story on the Yale Internal Medicine Residency Writing Workshop. The idea is to give residents the tools they need to craft meaningful narratives about the human experience surrounding medicine and to give physicians the agency to tell their stories. The program began within Yale's internal medicine residency but has now grown to include residents from many specialties and other programs.
I love writing. I have advocated for young people to write more than they do. And personally, I regret not starting to write earlier in my life. What makes medicine so special are the stories. The human stories—both the joyous and the tragic.
So, on the one hand, I welcome such programs. At minimum I hope such efforts lead to improved notes in the chart, which the electronic health record has turned into unreadable gibberish.
To any of the young listeners out there, please learn to tell a brief story in the chart. Forget the form, the syntax, the grammar. These are far less important than telling us something important about the person or the event. Write something that makes us think a human is caring for this person.
Cautionary notes:
Yes, there is a craft to writing well. But the craft part is overemphasized. Far more important is simply putting butt to chair and starting. You will learn the craft as you go.
Three simple Mandrola rules:
Write (mostly) short sentences.
Reduce jargon and words that end in -ion. A word like revascularization gives me a rash.
Pick up a journal like JAMA or Annals or Health Affairs and read a health policy piece; then don’t write like that.
Read writing books instead. Bird by Bird by Anne Lamott, On Writing Well by William Zinsser, and Roy Peter Clark’s How to Write Short are three of my favorites. Clark’s analysis of Tom Petty’s song Free Fallin’ is an example of beautiful short writing and is worth the price of the book. Strunk and White is not my favorite.
As a middle-aged person, one who may need medical care soon, I would rather have a knowledgeable doc than a great writer for a doc. Training is one of the only times you will have dedicated time to learn from master clinicians. I would not want writing to be a major focus. Learning how to be a clinician is the most important thing. But you can learn to write gradually without compromising your once-in-a-lifetime chance to learn how to help people. Writing well is not a sprint. It is a marathon.
Finally, in this day and age, privacy has never been more important. I see a lot of stories on social media that are clear violations of privacy. Don’t post this: “We did our first XYZ procedure today.” That identifies the patient, and it doesn’t matter if they gave you permission. You shouldn’t be asking for permission.
© 2022 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Mar 11, 2022 This Week in Cardiology Podcast - Medscape - Mar 11, 2022.