Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.
In This Week’s Podcast
For the week ending August 11, 2023, John Mandrola, MD, comments on the following news and feature stories.
Ways of Knowing—Observational Studies vs RCTs
First, I want to say thanks for the ratings and comments. I really appreciate the feedback. Keep it coming even when you disagree with me.
In my first semester at Hobart College, before there was an Internet, I took a course called “Ways of Knowing.” Little did I know how relevant this would be to the translation of medical evidence.
While we doctors need evidence to separate us from palm readers, the question remains: how much can we rely on the published evidence, when so much of it, in cardiology at least, stems from non-random comparisons, the so-called observational studies?
Robert Yeh, MD, from Beth Israel in Boston, has focused much of his research in the “ways of knowing” category. He wrote an important editorial in Circulation that I would like to discuss briefly. The title is “Bringing the Credibility Revolution to Observational Research in Cardiology.”
This is one of his early sentences:
Fueled by the ubiquity of data, an explosion of medical journals, and the unchecked incentive to publish, cardiovascular observational research has descended more deeply into a credibility crisis.
The crux of the problem, Yeh argues, is bias. Specifically, a bias I speak about regularly on #TWICpodcast, that is, confounding by indication, or treatment selection bias. You know, healthier patients get one treatment and that is why it looks better.
Yeh goes on to describe something I see every week:
Medical observational studies often pose incompletely specified questions, they address differences between groups with statistical adjustments, and they avoid causal language, even though the entire effort has a clear causal intent.
Everyone in the scientific endeavor knows the flaws in these studies, but one way of disguising the flaws is to remove causal language from the paper. Effects between treatments are called “associations,” and verbs like influence and impact are removed.
He cites Harvard epidemiologist Miguel Hernan, who has argued that this avoidance of causal language in observational studies is tantamount to deception, and that this deception is not only disingenuous, but harmful to science.
Hernan writes in the American Journal of Public Health:
The proscription against the C-word is harmful to science because causal inference is a core task of science, regardless of whether the study is randomized or nonrandomized.
He argues that scientists need to stop treating “causal” as a dirty word. He uses the example of wine drinking and 10-year risk of coronary artery disease (CAD). Say the hazard ratio (HR) is 0.8, meaning that people who drink one glass of wine have a 20% lower risk of CAD. Of course, Hernan argues, this is confounded, because wine drinkers may do other things that reduce CAD risk.
The 0.8 risk ratio, he argues, is a biased confounded measure of the causal effect of wine on heart disease. But. We knew this before doing the study. Saying this is a confounded effect is not a scientific statement. It is a logical one. It can never be proven wrong. It’s the same as saying “you can die in the next 5 years.”
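To make the confounding point concrete, here is a minimal simulation, with made-up numbers of my own (not from Hernan's paper), showing how a healthy-lifestyle confounder alone can produce an apparently protective risk ratio even when wine has no effect at all:

```python
# A minimal simulation (illustrative only, made-up numbers) of confounding:
# a "healthy lifestyle" trait makes people both more likely to drink wine
# and less likely to develop CAD, while wine itself has NO effect.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

healthy = rng.random(n) < 0.5                        # unmeasured healthy-lifestyle trait
wine = rng.random(n) < np.where(healthy, 0.6, 0.2)   # healthy people drink wine more often
cad = rng.random(n) < np.where(healthy, 0.05, 0.10)  # healthy lifestyle halves 10-year CAD risk

crude_rr = cad[wine].mean() / cad[~wine].mean()
print(f"Crude risk ratio, wine vs none: {crude_rr:.2f}")  # well below 1.0, purely from confounding

# Stratifying on (that is, adjusting for) the confounder recovers the true null effect
for h in (True, False):
    m = healthy == h
    rr = cad[wine & m].mean() / cad[~wine & m].mean()
    print(f"  Within healthy={h}: risk ratio ~ {rr:.2f}")  # ~1.0 in each stratum
```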
One reaction is to completely ditch causal language in observational studies, but this does not solve the tension between causation and association; it just sweeps it under the rug.
The scientific goal of this example is to know whether modifying wine intake influences CAD, in other words, the causal effect of wine on heart disease. It is not the association between wine and heart disease that’s important.
Indeed, this is the goal of the hundreds of observational studies that I have covered here. Of course, it would be better to do a randomized controlled trial (RCT), but many things, like wine drinking over a decade, would be impossible to study in an RCT.
Hernan and Yeh go on to argue that eliminating the causal-associational ambiguity may help improve the quality of observational research.
The first step is asking better causal questions, meaning don’t just look for an association between drinking one glass of wine and CAD 10 years on. Rather, think about how you would design an RCT to answer the question.
A helpful approach is to define the causal effect as what would have been observed in a hypothetical trial in which individuals in the population had been randomly assigned to drinking one glass of wine vs no wine for some period (say, 10 years).
That is not possible, but estimating a causal effect in an observational study then becomes a matter of emulating that hypothetical RCT. Such an approach forces researchers to consider the intervention over a defined time period, say, wine drinking from age 55 onward.
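In practice, the emulation starts by writing down the protocol of the hypothetical trial explicitly before touching the data. A sketch for the wine example might look like this (the components are the standard ones in target trial emulation; the specifics are my own illustration):

```python
# Illustrative only: the protocol elements of the hypothetical ("target") trial
# are specified up front, then the observational analysis tries to emulate each one.
target_trial_protocol = {
    "eligibility": "adults age 55 with no prior CAD and no contraindication to alcohol",
    "treatment_strategies": ["one glass of wine daily", "no wine"],
    "assignment": "randomized at baseline (emulated by adjustment for confounders)",
    "time_zero": "the date eligibility is first met and a strategy is started",
    "outcome": "incident CAD",
    "follow_up": "10 years",
    "causal_contrast": "intention-to-treat and/or per-protocol effect",
}
```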
The second step is doing better confounding adjustments. Hernan argues that if the goal is only associational, little to no adjustments are needed because confounding is a given.
But if causality is the goal, as it should be, because that is all there is in knowing, “we need to think carefully about which variables can be confounders.”
This is where I get a bit confused, because it is hard to know all the confounders.
Hernan acknowledges what I have often said: There is no guarantee that a causal model incorporates all the confounders, and hence, there is no guarantee that an estimate can be causally interpreted.
His answer is great: Yes, there is no guarantee that we can infer causality, but we can only have an informed scientific discussion if we first acknowledge the causal goal of the analysis.
I like this. Do the analysis. Then argue about the results.
Yeh takes it further in his analysis, writing that the bar is high to emulate a trial; that is, to reduce or eliminate confounding.
Yeh argues that there are techniques you can use to balance these variables.
Falsification endpoints, that is, outcomes that are not expected to differ between groups, are one way to sort out confounding.
Sensitivity analyses for the frequency and magnitude of possible confounding factors can also be used.
Instrumental variables, regression discontinuity designs, and difference-in-differences studies are other techniques that could help differentiate high- vs low-quality observational studies.
All of which we should be doing, he argues, along with reducing the number of low-quality observational studies.
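As one concrete example of such a sensitivity analysis, consider the E-value of VanderWeele and Ding (an illustration I am adding here, not a method specified in the editorial). It quantifies how strong an unmeasured confounder would have to be, on the risk-ratio scale, to fully explain away an observed association:

```python
# E-value (VanderWeele & Ding, 2017): the minimum strength of association an
# unmeasured confounder would need with both treatment and outcome, on the
# risk-ratio scale, to fully explain away an observed risk ratio.
import math

def e_value(rr: float) -> float:
    rr = max(rr, 1 / rr)               # for protective effects, use the reciprocal
    return rr + math.sqrt(rr * (rr - 1))

# For the observed risk ratio of 0.8 in the wine example above:
print(round(e_value(0.8), 2))          # ~1.81: a fairly modest confounder could explain it
```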
One problem I have (and I would like to hear listeners' opinions on this) is with the recent paper from Wang and colleagues reporting the RCT-DUPLICATE Initiative. This was a top group of causal inference scientists choosing 32 trials to try to emulate using observational databases, and the correlation was decent but far from perfect.
So the problem raised by Dr David Cohen still stands: Some observational studies are correct; I just don't know which ones.
It makes me quite nervous to make treatment decisions based on non-randomized data, but I think my frame is now different. Let me know what you think.
And as an example, let’s discuss a new paper on digoxin in atrial fibrillation (AF).
Digoxin for Atrial Fibrillation
Whenever we talk about digoxin and AF, I think the discrimination faced by digoxin is unfair. I am sure of it. And a well-done study published last month lends some support to my thesis that digoxin can help patients with AF. The study fits well right after the discussion about Dr. Yeh’s proposal to improve the quality of observational studies.
The Canadian Journal of Cardiology (CJC) published this retrospective cohort claims database study of patients who were discharged from the hospital with a diagnosis of AF.
James Brophy and Lyne Nadeau attempted to emulate a trial comparing outcomes among those patients discharged on beta blockers, digoxin, or both.
The primary outcome was in-hospital mortality or repeat cardiovascular (CV) hospitalization.
They studied more than 14,000 patients using the Truven Health Analytics MarketScan database of US commercial and Medicare claims.
Most were discharged on beta blockers, about 12,000; about 400 were on digoxin alone; and 1500 were on both.
Follow-up time was 1 year.
After propensity adjustment for baseline differences — obviously patients discharged on digoxin were sicker — they found no significant harm from digoxin alone or combined with beta blockers.
Using the beta-blocker alone group as the control, the HR for the digoxin group was 1.24 with the confidence interval (CI) ranging from 0.85 (15% lower) to 1.84 (84% higher), which was nonsignificant.
For the combined group, the HR was 1.09 and CI from 0.90-1.31. So, digoxin was not harmful in this observational study.
Comments. I know. It is observational. And the authors list the limitations. But they used a target trial emulation. They have a time-zero. No immortal time bias. They did their best to adjust, knowing full well that sicker patients get digoxin.
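For those who want to see the general idea behind such adjustment, here is a minimal sketch of propensity-score weighting. The toy data, covariates, and model below are hypothetical and are not the CJC authors' actual code, variables, or outcome model.

```python
# A minimal sketch of inverse probability of treatment weighting (IPTW) with a
# propensity score. Hypothetical toy data; not the CJC authors' analysis.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(72, 9, n),
    "heart_failure": rng.integers(0, 2, n),
    "ckd": rng.integers(0, 2, n),
})

# Sicker patients are more likely to get digoxin (confounding by indication)
logit = -3 + 0.02 * df["age"] + 1.0 * df["heart_failure"] + 0.5 * df["ckd"]
df["digoxin"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# 1) Model the probability of treatment given baseline covariates
covariates = ["age", "heart_failure", "ckd"]
ps_model = LogisticRegression().fit(df[covariates], df["digoxin"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2) Weight each patient by the inverse probability of the treatment received;
#    this balances the measured covariates across treatment groups
df["iptw"] = np.where(df["digoxin"], 1 / df["ps"], 1 / (1 - df["ps"]))

# 3) A weighted outcome model (Cox, logistic, etc.) would then estimate the
#    treatment effect. Only measured confounders are balanced, which is the
#    residual-confounding caveat discussed above.
```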
The ideal way to study this is in RCT form. But neither digoxin nor standard beta blockers have a Novo Nordisk or Lilly to promote them and fund a study. But there are clues. I have a whole lecture defending digoxin.
Paul Dorian and Paul Angaran wrote a nice editorial on the observational study. It’s worth a read.
Some main points:
Yes, digoxin use has been associated with increased mortality in observational studies, but these are mostly confounded by indication — sicker patients get the digoxin.
The evidence for that statement comes from a worthy meta-analysis by Ziff and colleagues in the British Medical Journal. They meta-analyzed just about every study ever done on digoxin. It was massive.
The key finding in their analysis is that the more you adjust for baseline variables, the less the mortality hazard from digoxin. And in RCTs, there are no digoxin hazards.
You all recall the DIG trial in heart failure (HF). A major RCT. No difference in mortality, the primary endpoint.
In the DIG trial, a secondary endpoint was hospitalizations for HF (HHF). And digoxin reduced HHF by a statistically significant 28%, as much as some of our favored drugs now — say SGLT2 inhibitors.
And perhaps the strongest evidence for selection bias in digoxin observational studies came from a brilliant analysis of the DIG trial by Davila and colleagues. You should look this paper up. It is really clever.
You know the results of the DIG RCT. No difference in mortality and big reductions in HHF in the digoxin arm.
Davila studied the outcomes of patients who, in the screening process, were already on digoxin. Some were randomly assigned to stay on digoxin and others to placebo. But prior digoxin use itself made up an observational comparison. Right? Those on digoxin and those not on digoxin before the trial.
They then counted up what happened to these patients in the trial regardless of what group they were put in.
Patients on digoxin before the trial had a 22% higher death rate and a 47% higher HHF rate. Nearly the opposite of the main trial result.
This just about proves that sicker patients get digoxin and that is why observational studies have found mortality signals.
Two more things to consider about the beta blocker vs digoxin decision in patients with AF. You know that beta blockers are one of the pillars of guideline-directed medical therapy in patients with HF with reduced ejection fraction (HFrEF). Well, most of the patients in the seminal trials were stable outpatients in sinus rhythm.
Now look at the famous Lancet meta-analysis by Kotecha and colleagues.
They looked at mortality in the seminal beta-blocker HF trials and found that the drugs did indeed reduce death in patients in sinus rhythm but not in patients with HFrEF and AF.
Their conclusion: Beta blockers should not be used preferentially over other rate-control medications and not regarded as standard therapy to improve prognosis in patients with concomitant heart failure and atrial fibrillation.
The second thing to remember about beta blockers vs digoxin is the small RCT of digoxin vs bisoprolol, the 2020 RATE-AF trial.
In patients with permanent AF, digoxin performed equally well in controlling symptoms of HF. At 12 months, 8 of 20 secondary outcomes significantly favored digoxin over beta blockers. Adverse events were fewer with digoxin.
CTO-PCI
The issue of chronic total occlusion (CTO) comes up often. Estimates of CTO prevalence range from 15% to 50% in patients with multivessel CAD.
A recent meta-analysis in this area purports to show encouraging results, but its limitations preclude drawing any conclusions. But first, some background thoughts.
Since I am sort of simple-minded, I will go back to basics on CTOs for a moment.
It sounds terrible to have a chronic total occlusion of a coronary artery, but it’s not as terrible as one would imagine for a couple of reasons.
One is that the body has an amazing ability to grow collateral vessels to the area. Say the right coronary artery is occluded; you can see collaterals from the proximal right, circumflex, or left anterior descending arteries. Collaterals may not provide perfect supply to the inferior wall, and angina may occur at high demand, but it's still decent blood flow.
The second reason a CTO may not be awful is that it often supplies myocardium that has already infarcted and scarred. Opening the vessel, therefore, would do little to help, because scar is scar.
In days of old, there were only two choices with CTO – optimal medical therapy or surgery. A surgeon might bypass a CTO if they were in there bypassing other vessels. But surgery alone for a single CTO was uncommon to rare.
Now, though, doctors have amazing tools. It has become possible — albeit very hard — to open a CTO and recanalize a previously occluded vessel.
The question is, why do this? In cardiology there are basically two reasons to do things: better outcomes and better quality of life (QOL).
We have already established, in oodles of trials (COURAGE, BARI-2D, ISCHEMIA), that there is little to gain in adding a percutaneous coronary intervention (PCI) to optimal medical therapy in patients who have stable CAD, so it's hard to argue for opening a CTO on an outcome basis.
That leaves symptoms. Most patients with CTOs have angina, sometimes limiting, because the collaterals I spoke of are inadequate. Doing things to improve symptoms is totally normal and legitimate. We do AF ablation to improve QOL.
But if you are going to intervene for symptom control, you have to have proper evidence of efficacy and the procedure has to be safe enough that harms don’t exceed benefits on average.
CTO-PCI is a super complex decision then. You have four main components.
Efficacy. Will it improve symptoms? For this, there is only one way to know: you need a proper blinded control. Yes, a sham procedure. You cannot study a subjective endpoint when one group gets the procedure and the other group gets only tablets. That's unscientific.
The second component in the decision is feasibility. Is the lesion even doable? That’s a judgement call and it depends on the skills of the operator.
The third and fourth components are short and long term harms. Short term harms approach 3% and some of the complications are serious, such as coronary perforation. But there are also long-term harms, such as bleeding from dual antiplatelet therapy, and stent thrombosis.
Now to the meta-analysis. Boy oh boy, it is problematic. The second sentence of the whole report is misleading.
This meta-analysis of 7 trials, including 2500 patients, found that successful chronic total occlusion revascularization was associated with improved quality of life parameters of patients compared with patients receiving optimal medical therapy or after failed chronic total occlusion revascularization.
No. There were not seven trials. There were only three. The other four studies were observational.
Before I say a word more about this paper, a learning point here is that you should be very cautious in using summary effects from meta-analyses that combine observational studies and RCTs.
Another fatal flaw with this study is that they compared outcomes in patients with “successful” CTO-PCI to “failed” CTO-PCI or medical therapy.
This is ridiculous. You gain very little by looking at outcomes after the fact and only including the good ones. This, by the way, was one of the fatal flaws of the seminal trials of Watchman vs warfarin. One co-primary endpoint of PREVAIL was stroke/systemic embolism, excluding the first seven days. Which patient ever gets to exclude the time of, or immediately after, a procedure? Why this is allowed in medical science, I have no idea.
The third flaw in this meta-analysis — as if the first two are not enough — is that looking at subjective symptoms after an unblinded procedure borders on unscientific.
The ORBITA trial demonstrated the absolute need for proper controls when doing single vessel PCI.
To the proponents of CTO-PCI: I understand the issues. There are probably patients who will benefit. CTOs come in many different varieties.
But given the higher risk and the lack of outcomes data, the onus is on you all to show that doing high-risk procedures can benefit some types of patients.
If I were commissioner of health, there would be reimbursement for this procedure only if the patient was in an RCT. That RCT would have a proper sham control. It would measure outcomes and QOL and it would follow patients for at least 3 years.
This is a solvable problem, but not with flawed observational studies, not with unblinded trials, and surely not with flawed meta-analyses.
BP Measurement
JAMA Internal Medicine has published an important study from a group at Johns Hopkins; the first author is Junichi Ishigami, MD.
You might think the matter of measuring blood pressure (BP) is too basic. Boring. Or, we already know this. Well, you would be wrong, because medical harm from mistreatment of BP, especially in the elderly, is too common. I bet I see one to two cases per week of syncope in the elderly due to hypertension (HTN) therapy run amok.
The Johns Hopkins group did something simple. They took about 200 patients in a clinic and did four sets of triplicate BP measurements, using an appropriately sized BP cuff, a too-small cuff, and a too-large cuff, in random order.
Compared with the BP using the correct size cuff:
Using too large a cuff resulted in significantly lower BP readings.
Using too small a cuff resulted in significantly higher BP readings.
They concluded that miscuffing resulted in strikingly inaccurate BP measurements.
I know you already knew this. Maybe. But I think it is important and worth a public service announcement.
Please spread the word. Let’s work to normalize good BP measurement. I sent a PDF of the paper to our powers that be for circulation to all caregivers. Also, kudos to the authors and JAMA-IM for publishing such an important and well-done study.
A Potential Breakthrough in CV Prevention With GLP-1 Agonists
I’ve covered the GLP-1 agonists often. Well, we are going there again. One more time, GLP-1 is a gut hormone released in response to food intake; it acts as a satiety signal, stimulates insulin release, inhibits glucagon secretion, and slows gastric emptying.
GLP-1 has other effects too, like natriuresis, diuresis, BP reduction, and reduction of inflammation. Diabetes trials have found that these drugs reduce cardiovascular (CV) events.
SELECT is a massive RCT sponsored by Novo Nordisk to study the effect of semaglutide on CV outcomes in patients with body mass index greater than 27 and established CV disease. Emphasis on established disease. You had to have prior MI, prior stroke, or symptomatic peripheral artery disease to participate. This is a big one, and the company has released positive top line results.
The N was more than 17,000, semaglutide vs placebo.
Primary outcome was time to first occurrence of CV death, myocardial infarction (MI), or stroke.
The company announced this week that the trial met its primary endpoint. And they gave some encouraging details.
They found a significant 20% relative risk reduction. And all three components of the primary outcome contributed to the reduction. That’s good, especially CV death being lower.
The results also showed a level of safety and patient tolerance for weekly 2.4-mg injections of semaglutide that was consistent with previous reports on the agent.
Comments. This has to be considered good news. Of course, we need to look at the details. We need to consider the degree of absolute risk reduction. The adverse effects. But it is a new chapter on secondary prevention. It also may make us think differently about overweight and obesity.
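To see why the absolute numbers matter, here is some back-of-the-envelope arithmetic. The control event rate below is purely hypothetical, because event rates were not part of the top-line release.

```python
# How a 20% relative risk reduction translates into absolute risk reduction (ARR)
# and number needed to treat (NNT). The control event rate is hypothetical;
# SELECT's actual event rates were not in the top-line announcement.
control_event_rate = 0.08        # hypothetical: 8% of placebo patients have a primary event
relative_risk_reduction = 0.20   # the reported 20% RRR

treated_event_rate = control_event_rate * (1 - relative_risk_reduction)  # 6.4%
arr = control_event_rate - treated_event_rate                            # 1.6 percentage points
nnt = 1 / arr                                                            # ~63 patients treated to prevent one event

print(f"ARR = {arr:.1%}, NNT ~ {round(nnt)}")
```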
Right now, it seems that obesity is somewhat normalized, which, medically speaking, seems wrong, because obesity and the cardiometabolic issues that go along with it seem like something medical professionals should address, as we do HTN or smoking.
Now you have an outcomes study that shows, in patients with established heart disease, that a drug that induces substantial weight loss actually improves outcomes.
Weight loss, therefore, is medical therapy.
Of course, I don't want to sound dim; there are other potential ways in which GLP-1 agonism may reduce CV events, but weight loss surely seems the most likely causal factor.
Another question I will have is whether a motivated patient can achieve the same success with lifestyle interventions. I have seen patients, and surely you have too, who have a small MI or stroke and become totally transformed into a beacon of health. By losing weight, gaining fitness, improving diet, does one gain the same outcomes as taking a drug that induces nausea and decreased gastric emptying?
© 2023 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Aug 11, 2023 This Week in Cardiology Podcast - Medscape - Aug 11, 2023.