About 15 years ago, cardiologist Dr Michael S. Lauer memorably remarked, as he waited on a malpractice verdict, that when it came to truth seeking, he trusted scientific peer review over courtroom jury review.[1] He looks prescient today, as an influx of criminal cases against cardiologists for allegedly unnecessary coronary interventions (lawyers call them stent cases) has uncritically adopted pieces of conventional medical wisdom as empirical, scientific facts. We express no opinion here on the accused doctors' individual guilt or innocence. Our concern is that convictions in such cases are being won with scientific claims that would not stand up to the rigor of peer review.
The stent cases began ≈10 years ago, when federal investigators realized that they could use a suspected provider's angiograms to "roll back the tape" in search of unnecessary interventions, as though an angiogram were crime scene surveillance video. Sometimes the prosecutors' suspicions came from a traditional whistleblower or patient complaint. But increasingly, they came from big data: a review of Centers for Medicare & Medicaid Services or other administrative claims data, looking for providers who seemed to be performing abnormally high numbers of interventions. The government can subpoena any such "outlier" provider under the Health Insurance Portability and Accountability Act for a copy of every angiogram taken over some period (eg, 5 years) by some group of its doctors (usually the busiest ones).
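To make the screening step concrete, here is a minimal sketch of one way such an outlier review might work. The provider rates, field names, and the leave-one-out z-score cutoff are all our illustrative assumptions; nothing below reflects the government's actual code, data, or thresholds.

```python
from statistics import mean, stdev

# Hypothetical interventions per 1000 encounters, by provider; "E" is the
# planted outlier. A real claims review would risk-adjust and use far more data.
rates = {"A": 12.0, "B": 14.5, "C": 11.8, "D": 13.2, "E": 41.0}

def is_outlier(provider: str, cutoff: float = 3.0) -> bool:
    """Leave-one-out z-score: compare a provider only against its peers."""
    peers = [r for p, r in rates.items() if p != provider]
    return (rates[provider] - mean(peers)) / stdev(peers) > cutoff

print([p for p in rates if is_outlier(p)])  # ['E'] under these made-up numbers
```

The leave-one-out comparison is a deliberate choice in this sketch: it keeps an extreme provider from inflating the very peer statistics used to judge it.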
The government then searches the subpoenaed angiograms for fraud using what could be called the 70/30 Rule. It hires individual cardiologists to review sometimes hundreds of angiograms and to give an opinion on the degree of stenosis in each procedure in which a patient received a stent. When a reviewer believes that a stented lesion was <30% blocked where the treating doctor recorded a blockage of ≥70%, the government has probable cause to charge the doctor, and potentially the hospital and its administrators, with federal felonies punishable by decades in prison.
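Stated as a decision rule, the screen reduces to a single comparison per stented lesion. The sketch below is our paraphrase of the rule as it surfaces in these cases, not code used by any party, and the data layout and sample readings are assumed.

```python
def flags_for_prosecution(charted_pct: float, reread_pct: float) -> bool:
    """True when the chart records >=70% stenosis but the re-read saw <30%."""
    return charted_pct >= 70.0 and reread_pct < 30.0

# Illustrative (charted, re-read) pairs, in percent stenosis.
cases = [(80.0, 25.0), (70.0, 65.0), (90.0, 85.0)]
print([flags_for_prosecution(c, r) for c, r in cases])  # [True, False, False]
```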
Where did the 70/30 Rule come from? It did not come from a peer-reviewed study or from a commonly accepted treatise. Rather, the 70/30 Rule seems to have originated in a belief among some practitioners that 2 honest cardiologists, viewing the same angiogram, will not disagree about the extent of a blockage by more than 10 to 20 percentage points. According to this conventional wisdom, if a defendant cardiologist describes a vessel as 70% blocked and a reviewing cardiologist describes it as 30% blocked, the 40-percentage-point difference of opinion between them is too great to be the result of honest disagreement, so the defendant must have lied.
The 70/30 Rule appears throughout stent cases, invoked by cardiologists testifying for the government and repeated by prosecutors, judges, and even defense lawyers who have not checked its accuracy. It has been adopted by 3 of the 13 US Federal Courts of Appeals, the only 3 to have considered the issue so far, meaning that it may carry the force of legal precedent in the federal courts of 12 states.[2–4]
Yet the 70/30 Rule has little empirical support. Research scientists studying variability in coronary angiography have long known of "the problem of reproducibility of visual angiographic interpretation of lesion severity," calling the lack of agreement between readers "disturbingly high as well as remarkably consistent."[5] Cardiologists, in our experience, hold strong views about their own correctness and are thus often willing to accept the notion that there is an objectively right answer to visual angiographic interpretation. That hypothesis, however, has been repeatedly tested and found to be false in studies dating back to the 1970s.
We have identified numerous examples in published studies in which 2 cardiologists disagreed on the size of a blockage by >40 percentage points (full list available from the authors). In one of the most troubling examples, Leape et al[5] used panels of cardiologists at Duke University Hospital to review a large batch of angiograms performed by New York cardiologists and to record the severity of stenosis by vessel. In 43 of 643 cases, the Duke panel found no reportable disease (stenosis of ≤25%), whereas the New York cardiologists found a >50% blockage. In 8 cases, the New York cardiologists found no reportable disease, but the Duke panel found a >50% blockage. In 11 cases, 1 group found no reportable disease, and the other found that the vessel was completely occluded. In all, there were at least 51 cases in which variability exceeded 40 percentage points.
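The arithmetic behind the 51 figure, made explicit below; the 11 total-occlusion cases presumably fall within the 2 larger groups, because a complete occlusion is itself a >50% blockage.

```python
# Tallying the discordant groups reported by Leape et al, as summarized above.
duke_clear_ny_over50 = 43  # Duke: no reportable disease; New York: >50% blockage
ny_clear_duke_over50 = 8   # New York: no reportable disease; Duke: >50% blockage
total_reviewed = 643

discordant = duke_clear_ny_over50 + ny_clear_duke_over50
print(discordant, f"{discordant / total_reviewed:.1%}")  # 51, about 7.9% of cases
```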
Prosecutors, confronted with studies like that by Leape et al,[5] have correctly noted that these reports are older and that none involved digital angiography. But there is no reason to assume that interobserver variability in coronary angiography ended when digital angiography began. Certainly, there is no empirical basis for that assumption: no published study shows that the once-persistent variability is now gone.
Indeed, we can attest that variability remains. In defending cardiologists investigated in stent cases involving digital angiograms, we have regularly seen qualified experts disagree with one another on the size of a blockage by >40 percentage points. We have seen this not only for single angiograms, which could be dismissed as one-off flukes, but also for batches of ≥10 suspected angiograms: instances in which a government expert identified ≥10 angiograms as showing lesions of <30% and multiple other reviewers concluded that every one of those angiograms showed a lesion of >70%.
Why is there so much variability? Presumably, none of the doctors in the published studies described above were committing fraud. Instead, we believe the variability speaks to the inherent difficulty of interpreting angiograms. Reviewers may consider different reference segments or different "worst view" projections, or they may simply dismiss a narrowing as artifact, catheter-induced spasm, or another anomaly. Yet such reader-dependent assessments are fundamentally at odds with the black-and-white questions more common to criminal law: Was the substance in the backpack heroin? Was a statement true or false? Did the victim live or die?
We do not doubt that some cardiologists commit fraud or that fraud should be severely punished when it is proven. Fee-for-service medicine creates incentives for unnecessary procedures. Our concern is with the method of proof. The wiretap, the eyewitness statement, and the incriminating document—traditional criminal evidence—are fine sources of proof for detecting and prosecuting fraud. They may be supplemented by expert review, preferably by independent, blinded panels of reviewing cardiologists.
But disagreements between individual reviewers, by themselves, are insufficient. When empirical evidence has shown a piece of conventional wisdom to be false, we should be mindful not to repeat it, especially not in a court proceeding against a doctor facing years or decades in prison.
The government has never lost a stent case. But before another doctor is jailed on the basis of the 70/30 Rule, updated studies should test whether significant interobserver variability persists in the age of digital angiography. If such testing confirms that it does, as our own experience suggests, that will present an opportunity to begin the slow grind of correcting a mistaken factual precedent in the courts. But until science has demonstrated that digitization has solved the problem of variability in coronary angiography, we urge cardiologists to stop making definitive claims in cases against other doctors, whether about the extent of blockages or about the extent to which honest doctors could reasonably disagree.
Circulation. 2019;140(25):2051-2053. © 2019 American Heart Association, Inc.