In the past 60 years, modern medical ethics has reshaped medical practice in ways that many doctors may not even realize, and these changes have generally been for the better.
In the 1930s, medicine was a paternalistic profession. Doctors gave advice, and patients were expected to follow it. Patients had few rights; they could even be enrolled in experiments without their knowledge, a practice that was widely condoned.
After World War II, the world learned of the horrors committed by German doctors in the concentration camps, who conducted deadly experiments on subjects who had no say. Efforts to right these wrongs, enshrined in the Nuremberg Code, signaled the beginning of modern medical ethics.
It took several decades, however, for these changes to work their way through the system and become accepted norms. In 1972, word of the Tuskegee syphilis experiment in Alabama appeared in the media. Since 1932, researchers in a US Public Health Service observational study of the disease had been enrolling African American men infected with syphilis under the guise of offering them free healthcare.
Even though penicillin became the standard treatment for syphilis in 1947, it was withheld from the Tuskegee subjects. They were lied to and given placebos. As a result, many of them died of syphilis, and many of their wives and children became infected.
The Rise of Patient Autonomy and Transparency
News of Tuskegee and other exploitative experiments in the United States prompted researchers in the 1980s to put strict limits on how research subjects are treated and to heighten peer-based oversight.
When skepticism of authority emerged in the 1960s and 1970s, medicine put new emphasis on patients' right to know what was being done to them and to have a say in the clinical process. Medical values changed: protecting patients from unwelcome news, such as a cancer diagnosis, gave way to forthright honesty, unless patients explicitly stated that they didn't want to know.
This new ethical standard, which still prevails today, holds that physicians do not automatically know what is best for the patient; in fact, patients are usually the best judges of their own interests.
The Concept of Futility Emerges
Also in the 1960s and 1970s, hospitals began to routinely use ventilators, feeding tubes, and other technology to keep patients alive. Life support, however, rarely helped dying patients recover, and it eventually had to be removed.
It was up to hospital-based doctors, often in intensive care units, to decide when to remove life support. This put them in the uncomfortable position of deciding who should live and who should die. To deal with these sobering choices, the concept of futility emerged as a way to describe medical interventions that have little prospect of altering a patient's ultimate clinical outcome.
Deciding what counts as futile care, and when life support should be removed, can pit doctors against families, and family members against each other. These battles have played out in several high-profile legal cases involving patients in persistent vegetative states, most memorably Karen Ann Quinlan (1976), Nancy Cruzan (1990), and Terri Schiavo (2005).
Dialysis and Allocation of Resources
Dialysis, another new technology of the 1960s, could save many lives, but at the time it was a scarce and expensive resource. Doctors had the unwelcome task of selecting which patients would receive it. Bioethicists, practitioners of what was then a new profession, emerged to help make these determinations.
The bioethicists developed new ethical principles to guide allocation. "Beneficence" described the provider's obligation to support the well-being of the patient. Then, as more people received dialysis, providers faced another ethical challenge: dialysis itself could harm quality of life, and in some patients the harms could outweigh the benefits. Providers thus had to balance beneficence against an obligation of "nonmaleficence," the duty to avoid harm.
Medicare began covering dialysis for patients of all ages in 1972, for-profit companies rose to meet the demand, and the need to allocate dialysis units slowly faded away. Allocation of scarce resources ceased to be an issue for dialysis, but it remains a pressing ethical issue in other areas, such as organs for transplantation and vaccinations. And most assuredly, it will become an issue in more areas.
Today, the principles and values of medical ethics have achieved a great deal of acceptance within the medical community. The field can be roughly divided into four areas: hospital ethics, ethics at private practices, clinical research ethics, and ethics in public health.
Hospital Ethics
Much of modern ethics concerns itself with inpatient matters. Many of the most pressing ethical issues, such as withdrawal of treatment for dying patients and informed consent for procedures, primarily take place in the hospital.
This is why almost every large hospital today has an ethics committee. Hospital ethics committees, however, are a relatively new phenomenon.
One of the first hospital ethics committees was mandated by the New Jersey Supreme Court in its 1976 decision on the right-to-die case of Karen Ann Quinlan. In a situation that made headlines across the country, the hospital faced a momentous ethical decision but lacked the expertise to make it. The court found that the hospital needed a panel of physicians and others dedicated to such questions.
At first, hospital ethics committees were rare, but their numbers took off in the following decade. Growth was accelerated by a new requirement from the Joint Commission, the hospital accreditor, that all hospitals have some mechanism for ethics review. In 1983, only 1% of US hospitals had an ethics committee; by 2001, more than 90% did. [1]
Hospital ethics committees bring together professionals from a variety of disciplines, including doctors, nurses, chaplains, social workers, ethicists, and lawyers. Their work is mainly advisory. Functions include developing hospital policies on key issues, such as end-of-life care; educating staff on ethical issues; retrospective review of cases; and overseeing clinical ethical consults in the hospital.
Ethics committees often defer to ethics consultants, usually known as "medical ethicists," to help doctors resolve conflicts with patients or their families, as well as many other ethical issues in the hospital. Medical ethicists are often healthcare providers with training in bioethics. Their work requires one-on-one communication, tactful negotiation, and a firm grasp of the issues.
A recent survey found that 100% of hospitals with more than 400 beds had ethics consultants, and 81% of smaller hospitals had them. [1]
In the hospital, doctors, other clinicians, patients, and families can choose whether to consult an ethicist. Physicians who do often appreciate having a second opinion when making a difficult ethical decision.
Doctors in general are split on the use of ethicists. According to a 2006 survey of doctors at a Florida hospital, 72.2% of those who did not use ethicists thought it was their responsibility to resolve issues with patients or their families on their own, and 90.8% of those who used ethicists believed in shared decision-making and the need to consider alternate points of view. [2]
Ethics in Private Practices
Although most hospitals provide medical ethicists for doctors to consult, relatively few private practices have them on staff. Physicians in small practices must fend for themselves, even though they, too, face many ethical issues.
Whereas physicians in hospitals face high-profile issues, such as end-of-life decisions or the use of novel experimental treatments, private practices face such issues as cultural sensitivity, professional responsibility, distribution of resources, and time constraints on appointments.
Physicians in small practices face challenges similar to those in large organizations when it comes to finding time for ethical decision-making. Physicians in large organizations often must see a set number of patients each day, while those in small practices may feel pressure to generate more revenue. Both groups face the practical problem of carving out the time needed to deal with the ethical dimension of their work.
Clinical Research Ethics
Obtaining informed consent from research participants is the single greatest ethical issue in medical research. Investigators are asking participants to take a risk, primarily to benefit someone else. Potential participants often sign up because they think they're going to get a new cure, even though they will often see little or no benefit, at least in early trials.
Clinical research is a closely monitored activity. Under federal law, the organization doing the research is required to set up an institutional review board (IRB), a group of peers who are not supposed to benefit financially if the study is successful.
The IRB sets criteria for its deliberations and decisions, such as weighing the risks and benefits of a particular experiment, making sure potential participants know their options, and overseeing the informed consent process.
Owing in large part to IRBs, research institutions take their ethical obligations very seriously. They sometimes ask for help from professional ethicists to make sure their informed consent processes are ethically sound.
Informed consent documents must thoroughly describe the risks and benefits, but they can't be so long that research participants can't easily read through them, and they can't use terms that the participants don't understand. Increasingly, visual and electronic aids are used to supplement written consent and improve comprehension by research participants.
Ethics in Public Health
A great deal of medical ethics has to do with public health. This is territory riven with ethical pitfalls, involving such issues as preventing disease, prolonging life, and promoting health through organized efforts.
Public health authorities deal with such problems as flu epidemics, drug abuse, providing mental health services, monitoring children for health issues at birth, and the cost of healthcare. All of these areas have to do with resource allocation, which is a seminal issue in modern medical ethics.
Public health issues traditionally exist at the government level, but they are increasingly arising at the practice level. Examples include countering the opioid epidemic, counseling on the safe use of guns, urging the use of bike helmets, and making sure patients are vaccinated.
Vaccination
In the case of vaccinations, some parents refuse the measles vaccine for their children, citing baseless claims that it causes autism.
At first blush, these parents appear to be merely exercising their individual rights, a value Americans hold sacred, but the overriding concern is societal: unvaccinated children can spread the disease to others. Requiring vaccinations thus becomes a matter of "protecting the herd."
Politicization of Firearms
In 2011, Florida enacted a law that barred doctors from discussing the dangers of gun ownership with their patients. Doctors who disobeyed the law could be censured, fined, or lose their licenses.
Besides Florida, 14 other states considered similar bills, but none passed. An appeals court overturned the Florida law in 2017, ruling that it violated doctors' right to free speech.
Now that doctors are free to talk about gun safety, how should they handle the issue? Studies show they are more effective when they avoid a disapproving tone and instead focus on suicide prevention, the risks of keeping guns in the home, and safe storage practices. [3]
Cost and Utilization of Healthcare Resources
Cost is another issue increasingly turning up in physician practices. With the emergence of accountable care organizations and other modes of value-based payment, physicians are under greater pressure to keep costs in check without harming quality.
Some physicians are adopting the concept of stewardship as an ethical way to balance cost-savings with quality of care. Although patient advocacy comes first, doctors can also consider whether it is worthwhile to provide services that are only marginally beneficial to their patients.
Stewardship is also a growing need for patients. As patients increasingly pay for services out-of-pocket, the high costs of marginally beneficial services can threaten their financial well-being.
Physicians can get help in deciding when to forgo marginal services by consulting Choosing Wisely, an initiative in which medical specialty societies identify overused tests and treatments. Doctors and their patients can use these lists to hold conversations about the costs and benefits of medical services.
Discussions of Political Views
With physicians under pressure to take a stand on various issues, how much should they voice their views on these issues to their patients?
Physicians have a great deal of influence over patients' views on healthcare policy issues, whether advocating the use of health savings accounts, defending the Affordable Care Act, or calling for a single-payer healthcare system.
Doctors may even bring up issues that directly affect the medical profession rather than patients; such issues include tort reform, opposition to maintenance of certification, or the need to reduce physician burnout.
Physicians do have a right to raise these issues; for problems such as inadequate healthcare coverage, political action may be the only remedy. But they should be prepared to lose some patients because of their views.
Physicians also need to exercise discretion about their causes. If the patient does not appear interested in the pitch, or actually disagrees with it, the doctor should drop the subject. Maintaining a good doctor-patient relationship is more important than scoring political points.