Helping people. It's really that simple.
While people who work in hospitals earn wages, money is hardly the only driver of good work. Most everyone in a hospital knows that the job of helping sick people has a deeper meaning than making money.
I believe one of the main reasons the rise of administrative burdens correlates with burnout is that clinicians who work at the bedside despise anything that makes it harder to help people get well.
Now let's talk about hospital accreditation.
The idea behind accreditation is sound. The public deserves to know that their hospital meets a standard of quality. The ultimate measure of quality is outcomes, say, mortality. The problem is that the quality of hospital care is but one of many factors affecting outcomes. Severity of illness, social supports, and patient age are surely as important. So instead of outcomes, the current way to accredit hospitals is to measure surrogate markers of quality, such as structural factors and process measures.
Hospital accreditation is high stakes. The Centers for Medicare & Medicaid Services, the largest payer of healthcare in the United States, has mandated hospital accreditation by approved groups. A multimillion-dollar industry has evolved to meet these needs.[1] Although state agencies and other entities can accredit hospitals, one group is dominant: The Joint Commission (JC), an independent nonprofit, controls more than 80% of the market for accreditation. In short, hospitals pay fees and travel expenses to their umpires.
The mission of the JC (formerly the Joint Commission on Accreditation of Healthcare Organizations [JCAHO]) is "to continuously improve healthcare for the public by evaluating hospitals and inspiring them to excel in providing safe and effective care of the highest quality and value."
The Study
This prompts the question: Does the JC deliver on its mission? A group of researchers led by Ashish Jha, MD, MPH, from Harvard Medical School in Boston, Massachusetts, set out to find the answer.[2]
They used a Medicare database that included more than 4 million hospital admissions during 2014–2017 for 15 common medical conditions and six common surgical procedures. The primary outcomes were risk-adjusted mortality, readmission at 30 days, and patient experience scores. Three groups of hospitals were compared: those accredited by the JC (n = 2847), those accredited by other independent entities (n = 490), and those reviewed by state survey agencies (n = 1063).
The authors first compared outcomes between accredited hospitals (JC plus other entities) and hospitals surveyed by the state. Mortality rates did not statistically differ. Readmissions at 30 days were slightly lower for medical conditions in the accredited hospitals but not for surgical conditions.
Next, the authors compared outcomes between JC-accredited hospitals and non–JC-accredited hospitals. Here there were no differences in mortality or 30-day readmissions.
Finally, the authors looked at patient experience scores based on standardized questionnaires completed between 48 hours and 6 weeks after discharge. They found few differences across the categories assessed, but where differences existed, state-surveyed hospitals and non–JC-accredited hospitals scored better than JC-accredited hospitals.
Comments
Nurses, doctors, hospital administrators, and, yes, patients, too, owe these researchers a debt of gratitude. While one cannot make causal claims from observational studies, these findings, showing a lack of benefit from JC-branded accreditation, can start a conversation about using evidence to guide policy.
One of the great advances of modern medicine is that clinicians now use evidence to guide therapy. Well-meaning interventions that make biologic sense don't always work. Hormone replacement therapy for postmenopausal women to reduce cardiac events[3] and antiarrhythmic drugs to suppress premature ventricular contractions after myocardial infarction[4] were interventions that made sense but failed when tested in randomized controlled trials.
Health policy interventions should face the same test. I would argue that because policy actions can affect many times more patients than a single medical or surgical therapy, it is vital that these actions be tested, no matter how well-meaning or obvious they seem.
Indeed, the requirement that anointed groups, such as the JC, accredit hospitals is a massive policy intervention. Readers who have experienced JC surveys know this is not hyperbole.
JC visits create serious distractions. People normally charged with clinical duties become focused on pleasing capricious surveyors from the JC. Hospitals can be sanctioned for a nurse or doctor drinking coffee too close to a patient care area. I once argued with a surveyor about the nonsense of using process measures as surrogates for true outcomes. I will never do that again. The hubris of nonclinicians given power over clinicians is now burned into my brain.
Disrupting hospital flow patterns and personnel would be fine if that brand of accreditation caused better outcomes. Thus far, we don't have evidence that it does.
The cost of JC accreditation also deserves mention. Basic economics dictates that the price of a good or service should reflect its value. JC accreditation fees would be justified if accreditation delivered on its mission to improve outcomes. This study suggests it does not. If this lack of benefit is confirmed in other studies, then we must conclude that the JC is just another organization benefiting financially from regulatory capture. Sadly, the US healthcare system overflows with these; think American Board of Internal Medicine–branded maintenance of certification.
Given the disparities in access to care across the United States, forcing institutions to pay fees to rent-seeking regulators is a moral outrage.
Yet perhaps the strongest reason to require evidence for policy interventions is the possibility of harm. How much did the JC's enforcement of the American Pain Society's "Pain as the Fifth Vital Sign" initiative contribute to the opioid crisis? It's hard to know exactly, but its role was enough for David Baker, MD, MPH, of the JC to write a "lessons learned" editorial[5] in JAMA in 2017.
Clinicians who use evidence to guide practice should stand against non–evidence-based policy interventions. When medicines or surgeries don't benefit patients, we stop using them. If the JC's brand of accreditation can't show benefit, then it too needs to be de-adopted.
© 2018 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: John M. Mandrola. Joint Commission Accreditation: Mission Not Accomplished - Medscape - Oct 25, 2018.