HHS Seeks to Prevent Discrimination by Clinical Algorithms

Kerry Dooley Young

November 01, 2022

The US Department of Health and Human Services (HHS) intends to add language to a federal rule to make it clear that physicians could be held accountable for decisions made while relying on clinical algorithms that discriminate on the basis of a patient's race.

The intent, HHS said, is to make physician practices and hospitals take a closer look at the clinical decision support algorithms they use. Overreliance on algorithms could lead to violations of Section 1557 of the Affordable Care Act, which prohibits discrimination on the basis of race, color, national origin, sex, age, or disability.

In the proposed rule, HHS said its intent is not to prohibit or hinder the use of clinical algorithms but rather to make clear that discrimination that occurs through their use is prohibited.

HHS said it expects its Office for Civil Rights to address future cases of discrimination involving algorithms with an emphasis on voluntary compliance, much as it has handled past cases of gender-based discrimination.

The agency concedes that checking for discrimination in algorithms is "a complex and evolving area that may be challenging" for clinicians. The American Medical Association (AMA) is among the groups that have asked HHS to put the plan on hold and to gather more information from the medical community about how to address discrimination in algorithms.

Many questions remain about what would happen in cases in which physicians are found to have used algorithms improperly or to have based decisions on flawed ones, according to Sharona Hoffman, JD, co-director of the Law-Medicine Center at the Case Western Reserve University School of Law in Cleveland.

"For example, what are healthcare providers expected to do in order to detect discrimination? Will they be expected to conduct studies to determine if women or minorities are disadvantaged by particular algorithms?" Hoffman told Medscape Medical News. "Will healthcare providers become so concerned about liability that they will stop purchasing and using algorithms?"

The algorithm proposal is part of a broader draft rule addressing discrimination that HHS unveiled in July.

Racism "Hidden in Plain Sight"

In the medical community, awareness has grown that algorithms and other clinical decision support tools can harm patients.

"For Black, Hispanic, and Asian people whose hearts, lungs, bones, brains, bladders, and kidneys have been judged differently for years because of race-based algorithms that are 'hidden in plain sight,' this new avenue for recourse is much needed," Aletha Maybank, MD, MPH, AMA's chief health equity officer and group vice president, and her co-authors wrote recently in Health Affairs.

For many years, the standard estimated glomerular filtration rate (eGFR) equation treated an input of "African American" as indicating a higher baseline serum creatinine concentration, reporting a higher eGFR for a Black patient than for a non-Black patient with the same lab values. The adjustment rested on the unsubstantiated idea that Black people have more creatinine in their blood at baseline, as Medscape has reported.
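To make the mechanism concrete, here is a minimal illustrative sketch of the 2009 CKD-EPI creatinine equation, the eGFR formula in common use at the time. The article does not name a specific equation, so the choice of CKD-EPI and its published coefficients is an assumption made here for illustration:

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """CKD-EPI 2009 creatinine equation, including the race coefficient
    later removed in the 2021 revision. Coefficients are the published
    values; this sketch is for illustration, not clinical use."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race term: same labs, ~16% higher reported eGFR
    return egfr

# Identical labs, age, and sex -- the race flag alone moves the estimate:
print(round(egfr_ckd_epi_2009(1.4, 60, female=False, black=False)))  # 54
print(round(egfr_ckd_epi_2009(1.4, 60, female=False, black=True)))   # 63
```

Because lower eGFR values are what trigger referrals, disease staging, and transplant listing, that built-in upward shift could delay care for Black patients with identical kidney function.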

And in 2021, the American Academy of Pediatrics (AAP) retired guidelines used to diagnose urinary tract infections in children aged 2 to 24 months after researchers argued that the recommendations improperly raised the threshold for testing Black children for this condition, thereby putting them at risk for untreated illness. In May, AAP published a policy statement that kicked off its examination of clinical guidelines and policies that include race as a biological proxy.

Clinicians often mistakenly view algorithms and clinical decision-making tools as purely objective scientific aides that can better guide treatment decisions. But the researchers who build these tools can embed their own prejudices and misconceptions in them, according to Lundy Braun, PhD, a researcher at Brown University in Providence, Rhode Island.

"Algorithms didn't just come out into the world by themselves," Braun said. "They were brought into the world by human beings."

Steps in a Long Journey

In 2021, the National Kidney Foundation (NKF) and the American Society of Nephrology recommended the adoption of a new eGFR equation from which the faulty race adjustment for Black patients had been removed.
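The equation the two societies recommended is the 2021 CKD-EPI creatinine refit. As a rough sketch of what changed relative to the 2009 version above, again using the published coefficients and for illustration only, the race term is simply gone and the remaining constants were refit:

```python
def egfr_ckd_epi_2021(scr_mg_dl: float, age: int, female: bool) -> float:
    """CKD-EPI 2021 creatinine equation: the race coefficient is removed
    and the constants refit, so one set of labs yields one answer."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr

# The same 60-year-old man with creatinine 1.4 mg/dL from the earlier
# sketch now gets a single estimate regardless of race:
print(round(egfr_ckd_epi_2021(1.4, 60, female=False)))  # 58
```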

"There is a groundswell to drive health equity in this country, and I think we have a lot of work to do," Joseph A. Vassalotti, MD, chief medical officer at NKF, said. "These steps are definitely positive steps, but we have to be honest that these are just steps in a long journey."

In comments on the proposed rule, the American Academy of Family Physicians (AAFP) recommended that HHS warn physicians about civil rights violations stemming from faulty algorithms and work with them to establish new policies, rather than focus on punishment.

In a September 28 comment on the proposed rule, the AAFP said clinicians should not be expected to evaluate algorithms on their own, urging that responsibility be shared with the vendors that make the tools.

If used correctly, algorithms and other forms of artificial intelligence have the potential to improve the care of patients and to help ease the burden on primary care specialists, according to Steven Waldren, MD, chief medical informatics officer at AAFP.

"The market is going to continue to move in this direction" of increased use of artificial intelligence in medicine, Waldren said. "We can't just put our head in the sand and say, 'They're not quite ready, so let's not do anything.' "

The AMA and America's Health Insurance Plans (AHIP) separately told HHS to drop its current proposal on algorithms, and they urged the agency to work with the medical community on the issue.

In an October 3 comment on the proposed rule, James L. Madara, MD, AMA's chief executive officer, said, "Questions of assignment of liability seem at best exceedingly premature and at worst highly detrimental to continued innovation in this space."

AHIP had the same view, arguing for a need to first figure out how best to detect problems in the design of algorithms that can cause harm to patients.

"There is unanimity in the healthcare system that we must ferret out and mitigate such biases, but we are still in the early stages of being able to do so," wrote Jeanette Thornton, executive vice president of policy and strategy at AHIP, in a comment to HHS.

Kerry Dooley Young is a freelance journalist based in Miami Beach. Follow her on Twitter @kdooleyyoung.
