Participation of Women in Clinical Trials of Drug Therapies: A Context for the Controversies

Marianne N. Prout, MD, MPH, Susan S. Fish, PharmD, MPH

Historical Framework of Clinical Trials and Women's Participation

Clinical trials of pharmaceuticals evolved during the twentieth century in the United States in response to problems with the development, manufacturing, and marketing of drugs, as summarized in Table 2.[14,15,16,17,18,19] In the early 1900s, medicinals were hawked directly to the public by unlicensed practitioners and promoted by unregulated companies. The initial Food and Drug Act set standards of quality and purity for drugs, protecting consumers from toxins and ensuring that listed ingredients were actually included. Later revisions, prompted by more than 100 deaths from elixir of sulfanilamide, which contained a toxic solvent, required safety testing of new drugs before marketing.

Modern methods for clinical trials were developed using new tools from statistics and pharmacology during the 1930s and 1940s. Comparisons of effects between groups of individuals edged out case reports. Placebos and blinding were introduced to reduce investigator and subject bias. Randomization was introduced in human research after initial use in agricultural experiments.

As medical research increased, ethical issues captured attention. What were the headlining problems in medical research, and how did they contribute to the restriction of women in clinical research? The most infamous case was the US Public Health Service (USPHS) Syphilis Study (commonly known as the Tuskegee Syphilis Study), in which a cohort of African American men with tertiary syphilis, receiving no therapy, was followed from 1932 until 1972, long after the effectiveness of penicillin had been established. Although this was a study of only men, the public attention given to clinical research conducted in an unethical manner (coercion, deception, no informed consent) continues to affect the willingness of African Americans, both women and men, to participate in research. In another case, institutionalized children were deliberately exposed to hepatitis without the complete knowledge or informed consent of their parents. During testing of oral contraceptive pills, clinic patients were given pills or placebos without being informed of the research intent of their "treatment." Henry Beecher[20] discussed such problem cases in a well-publicized speech and article in 1966, bringing to light many ethical issues involved in clinical research in the United States.

The response was worldwide: the Helsinki Declaration, initially adopted by the World Medical Association in 1964, set standards for the ethical treatment of human subjects; its multiple revisions reflect the evolution of clinical research and of research ethics. In 1966, the USPHS required local review of the ethical issues of its research proposals through a system of Institutional Review Boards (IRBs). The publicity given to the Syphilis Study precipitated Congressional action: the creation of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in 1974 and legislation for the protection of human subjects in 1981.[21,22]

One of the 3 ethical principles for the conduct of clinical research that the National Commission delineated is the principle of justice. One way that this principle of distributive justice is applied is through the equitable selection of research subjects; if women are going to "reap the benefits of the research," ie, use the new drugs or devices, they have the right and the responsibility to participate in the research. However, because this Commission and the subsequent federal regulations were a response to the syphilis study, special regulations were promulgated in 1975 to provide extra protections for vulnerable subjects undergoing research.[23]

Vulnerable subjects of research were defined as children, mentally disabled people, educationally disadvantaged persons, prisoners, and pregnant women. In the latter case, the fetuses, not the pregnant women, were considered to be the actual vulnerable population. The most certain way to protect fetuses was to exclude all potential fetuses from research, thus excluding all women of childbearing age, even those who used contraception or who stated that they were not having sex with men.

Equally important events involved drugs used in therapy, not only in research studies. Thalidomide use by pregnant women resulted in more than 10,000 children with birth defects worldwide. Even though thalidomide had not been approved for use in the United States, the disaster focused public and political attention on the approval of new drugs, prompting amendments to the FDA regulations that established processes for new drug applications (NDAs) and approvals. Drug use by pregnant women was again questioned after the discovery of an association between maternal use of diethylstilbestrol (DES) to prevent miscarriage in the 1940s and 1950s and vaginal adenocarcinomas in the daughters of treated women.[24]

In 1977, the FDA recommended that premenopausal women capable of becoming pregnant be excluded from early drug trials. Thus excluded from such trials were all women using reliable methods of contraception, women whose male partners had had vasectomies or used condoms, and women who were "single." Although the FDA guidelines pertained only to the early phases of drug development, in practice the participation of women in all phases was affected.[25]

Clinical trials emerged over the past century as a method for medical research, and their importance and potential impact on health have grown with the expansion of pharmacologic agents. The regulatory framework promotes clinical trials of drug therapies by requiring safety and efficacy data for drug approval. This framework also prescribes specific phases for drug development: preclinical; phase 1 (safety, dose-finding); phase 2 (safety, effect testing); phase 3 (efficacy and safety in randomized, controlled trials); and phase 4 (postmarketing surveillance for safety and effectiveness in "real world" use).

Although premenopausal women could enter phase 3 trials in theory, the appropriate doses and uses of drugs are generally established in earlier phases. When women are excluded from early-phase drug trials, any specific dosing requirements for women will remain undiscovered until much later in the drug development process, if ever. In addition, because most drugs fail early trials, exclusion of women from early phases may limit identification of drugs that are useful specifically for women.[12,15,26]
