Clinical Trials and Ethics, or Lack Thereof

Jin Komerska is a second-year biochemistry major with plans to pursue a career in the medical field. Jin is interested in the historical and social factors that affect medicine, and she will be spending next year in Vietnam and New Zealand, learning about public health, development, and attitudes toward science across cultures.

Clinical trials play a pivotal role in the release of new drugs for medical use. Though the emphasis placed on testing treatments through this type of study is relatively new, researchers have long had guiding principles that direct research involving humans. In 1025, the Persian physician and astronomer Avicenna completed The Canon of Medicine, which provided precise guidelines for testing and evaluating new drugs. He advised researchers against basing their conclusions on results that could not be reproduced, cautioned readers to watch for both main and side effects of drugs, and urged that testing be done on humans when attempting to draw conclusions about a drug's effect in humans [1]. Despite these early directions, clinical trials quickly became messy, and frequently unethical, once they came into common use. Drug manufacturers often employed unreliable or immoral practices while promoting new treatments, and it was only after many decades of unreliable data and major ethical breaches that laws were passed requiring highly regulated clinical tests of drugs before their release. These laws established regulations that drastically reduced the freedom left to individual researchers, and they arose mainly from the exposure of unethical practices in researchers' experiments and conclusions.

Cover of Avicenna's The Canon of Medicine, titled Canon Medicinae.

Early studies that resembled modern clinical trials were mostly informal and regulated only at the discretion of the researcher. Formal testing of drugs was neither required nor commonplace, and even after the American Medical Association began assessing drugs for certification in 1905, receiving approval of a treatment before marketing it was merely optional. Lawmakers and members of the public made several attempts to restrict the freedom with which drug manufacturers and promoters could make claims about treatments, but these were largely unsuccessful. One attempt, the 1906 Pure Food and Drug Act, banned the misbranding and contamination of drugs, but it provided no authority to inspect drugs or to require studies showing their effectiveness, rendering the act essentially useless [2]. Another notable failure of the 1906 act was that, while it banned false claims about a drug's ingredients, it still permitted false claims about the drug's therapeutic effects. A series of legal cases between 1911 and the 1930s found similar loopholes in the law [3].

The first truly effective step toward regulating the release of new drugs took place in 1931 with the creation of the Food and Drug Administration (FDA). The federal government tasked this new agency with verifying the safety and effectiveness of drugs—as well as their correct labeling—and the creation of the FDA finally provided the authority to mandate regulation and to investigate drugs that did not conform to safety standards [4].

The first major clinical trial scandal to which the FDA responded involved the infamous “Elixir Sulfanilamide” released by the well-respected drug firm the Massengill Company. The company had previously released tablet and capsule forms of its sulfanilamide drug but wanted to develop a liquid form. In 1937, it released Elixir Sulfanilamide—sulfanilamide dissolved in diethylene glycol—without testing the preparation for toxicity, and over 100 people died painful deaths as a result of taking the liquid drug [5]. The FDA investigated the Massengill Company and found it guilty of misbranding a drug. Because no laws yet required testing new treatments before their release, the FDA was unable to convict the company of causing the deaths; nevertheless, the case provided the first prominent instance of a company being successfully prosecuted for releasing a harmful new drug. The Elixir Sulfanilamide incident also acted as a catalyst for a new law requiring that the FDA certify the safety of a drug before its release, making it a landmark case both for its prosecution and for its effect on future regulations.

Large and small bottles of liquid sulfanilamide.

Despite the progress made in the 1930s, World War II brought a host of new challenges for medical research ethics, both domestically and internationally. Most prominently, consent became a widespread concern after the war, largely due to accounts of torture and murder carried out by Nazi doctors in the name of medical experimentation. Nazi doctors forced subjects to endure inhumane environments, such as extreme cold or oxygen deprivation, as well as painful medical conditions, and the experiments typically caused death or permanent damage to the subjects [6]. After the war, a judicial proceeding known as the Doctors Trial charged several Nazis with crimes related to the medical experiments, and a ten-point code called the Nuremberg Code was released to set international guidelines for protecting human research subjects. The first of the ten points called for subjects’ voluntary consent, which—despite being described decades earlier—had never before been explicitly required for all experiments on humans [7].

This new emphasis on consent was a significant milestone in the protection of human subjects, but World War II also left behind a problematic set of beliefs about medical research in the United States. The war spurred a push for more medical research that could benefit war efforts, and while previous medical studies had tended to benefit the subjects themselves, research during the war primarily benefited strangers. These factors alone were not inherently dangerous to human subjects, but “the common understanding that experimentation required the agreement of the subjects… was often superseded by a sense of urgency that overrode the issue of consent” [8]. Americans acknowledged the importance of consent, but their sense of urgency and their (incorrect) belief that their own country was incapable of committing medical atrocities often led them to cut corners or ignore the principles of voluntary and informed consent, despite the newly written code of medical research ethics that emerged from World War II.

These dangerous sentiments persisted in the United States for nearly two decades until Dr. Henry Beecher, an anesthesiologist, published an eye-opening article highlighting 22 cases that represented the wide variety of ethical breaches common in recent clinical research. Beecher’s article drew attention to unnecessary risks imposed on patients in several clinical trials and to researchers’ failure to disclose the details of treatment to subjects. One case described a study in which “liver cancer cells were injected into 22 human subjects as part of a study of immunity to cancer… The subjects (hospitalized patients) were merely told they would be receiving ‘some cells’—… the word cancer was entirely omitted” [9]. The frequency of unethical practices, even at renowned medical school clinics, alarmed the public.

Just a few years later, the public uncovered yet another medical research scandal. In 1972, news spread about a study that had followed a group of African American men with syphilis for forty years. The study’s title, “Untreated Syphilis in the Male Negro,” indicated that researchers made no attempt to treat the men’s syphilis, even as new treatments became available over the course of the study [10]. Similar, though less publicized, studies involved bribing prisons, psychiatric hospitals, and orphanages with critical resources in order to receive the institutions’ consent to use prisoners, patients, and orphans in studies, without asking for consent from the subjects themselves [11]. These studies often employed painful procedures and coerced—rather than voluntary—consent, if any at all. Some studies also involved attempts to infect subjects with diseases, reminiscent of the practices that the Nuremberg Code and other guidelines were meant to prevent.

The discovery of these inhumane experiments, as well as Beecher’s article exposing common ethical issues within clinical studies, demonstrated the need for medical research reform. Researchers, lawmakers, and the public alike began to realize that the United States needed to implement and enforce policies specifically addressing American medical research, rather than relying on international guidelines such as the Nuremberg Code.

Because of these realizations, the 1970s saw a surge in the formation of departments and committees dedicated to protecting research participants. The federal government worked to pass laws that specifically outlined rules for involving human subjects in American medical research, and by 1974, lawmakers had passed the US National Research Act. The Belmont Report of 1979 followed soon after, explicitly laying out updated requirements for conducting medical research on humans [12]. In 1981, the US Department of Health and Human Services released an updated policy for working with human subjects, based on suggestions in the Belmont Report. The policy has four subparts: subpart A, which contains 24 sections and lays out the general policies for protecting human research subjects, is now known as the “Common Rule” for medical research, while subparts B, C, and D provide more comprehensive details on conducting research with particularly vulnerable populations such as pregnant women, prisoners, and children [13]. The law also provides for review committees, called Institutional Review Boards (IRBs), that review research proposals involving human subjects so that potential ethical breaches can be addressed and prevented before research begins [14].

Since the 1981 enactment of the Common Rule, a variety of amendments and additional laws have been developed to provide further protection for subjects. Various national, as well as international, committees have formed around the topics of biomedical research, bioethics, and clinical trials, and there are clear guidelines that must be followed for all medical studies involving human participants [15]. IRBs are commonly used by amateur and professional researchers alike, and there are a variety of mandated trainings for completing research on humans. These provisions give hope for a safe and ethical era of medical research and suggest that ethics and participant safety are now truly a priority in clinical trials. However, the history of human participation in clinical studies should serve as a reminder of the dangers of prioritizing scientific advancement or personal goals over patient safety and comfort.


[1] David Machin and Peter M. Fayers, Randomized Clinical Trials: Design, Practice and Reporting (Hoboken: John Wiley and Sons, 2010), 16.

[2] Shein-Chung Chow and Jen-Pei Liu, Design and Analysis of CLINICAL TRIALS: Concepts and Methodologies, 2nd ed. (Hoboken: John Wiley & Sons, 2004), 3.

[3] Ibid.

[4] Ibid.

[5] Peter Temin, “The Origin of Compulsory Drug Prescriptions,” The Journal of Law and Economics 22, no. 1 (1979): 91-105.

[6] George J. Annas and Michael A. Grodin, “The Nuremberg Code,” in Ezekiel J. Emanuel et al., eds., The Oxford Textbook of Clinical Research Ethics (Oxford: Oxford University Press, 2008), 136-139.

[7] Ibid.

[8] David J. Rothman, Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making (New York: BasicBooks, 1991), 30.

[9] Ibid., 74.

[10] Susan M. Reverby, “‘Normal Exposure’ and Inoculation Syphilis: A PHS ‘Tuskegee’ Doctor in Guatemala, 1946-1948,” Journal of Policy History 23, no. 1 (2011): 7.

[11] Ibid., 12.

[12] Arun Bhatt, “Evolution of Clinical Research: A History Before and Beyond James Lind,” Perspectives in Clinical Research 1, no. 1 (2010): 6-10.

[13] Public Welfare: Protection of Human Subjects, 45 C.F.R. § 46 (1981).

[14] Ibid.

[15] A Vijayananthan and O Nawawi, “The importance of Good Clinical Practice guidelines and its role in clinical trials,” Biomedical Imaging and Intervention Journal 4, no. 1 (2008): 1-5.

Further reading:

Bull, John P. “A Study of the History and Principles of Clinical Therapeutic Trials.” MD Thesis, University of Cambridge, 1951.

Daemmrich, Arthur. “A Tale of Two Experts: Thalidomide and Political Engagement in the United States and West Germany.” Social History of Medicine 15, no. 1 (April 2002): 137-159.

Jones, David S., Christine Grady, and Susan E. Lederer. “‘Ethics and Clinical Research’ – The 50th Anniversary of Beecher’s Bombshell.” New England Journal of Medicine 374 (June 2016): 2393-2398.