
2 Bioethics: exploring ethics in multiprofessional healthcare

2.2 Historical overview

The birth of bioethics as a discipline is rooted in major coincidental social and biomedical developments occurring in the Western world, especially in the field’s central birthplace, the United States. Biomedicine has made advances over the last five decades that would have been unbelievable and even unforeseeable at the beginning of the 20th century. Innovations such as pacemakers, organ transplantation, dialysis, ventilators, and in vitro fertilization (“test tube babies”) were introduced in the 1960s and 1970s, to name a few. Alongside the emerging technology, bioethics began to develop as a discipline as questions about the moral dimensions of these incredible new medical possibilities came to seem inevitable. Bioethics was an interdisciplinary field right from the beginning, even though philosophy and theology in particular played foundational roles in its creation (Jonsen 1988, 34–58, 65–84).

Callahan (2012, xv) summarizes bioethics as having a number of cultural roots, ranging from an ambivalence about technology to the upheavals of the 1960s that included suspicion of any established institutions. Bioethics developed in a time of societal democratization that involved harsh criticism of past authorities, including those in the medical profession. The 1960s and 70s introduced the “hippie culture” as well as the civil rights movement in the U.S. and a push for women’s rights throughout the Western democracies. And not to forget the obvious: the recent history of that era was overshadowed by World War II, from the inhumane horrors and human experimentation of the Nazi concentration camps to the atomic bombs of Hiroshima and Nagasaki.

Both the moral use of technology and the moral righteousness of the medical profession came under heavy criticism from the public after World War II. The United Nations General Assembly adopted the Universal Declaration of Human Rights2 in 1948, signifying the beginning of a new era of moral regulation and public concern about governments and authorities.

2 See https://www.un.org/en/universal-declaration-human-rights/.

The Nuremberg Code had likewise been established in its final form in 1947, declaring research subjects’ right to informed consent (see Bulger 2007). The Nuremberg Code was an international document, but it did not initially carry the force of law in most places and was, therefore, blatantly violated on many occasions (Bulger 2007, 81). The time may not have been ripe for the ethos of these codes right after they were published, yet in the following decades the emerging field of bioethics would build on the ethos and heritage that these documents have come to signify: open society, individual rights, and freedom of thought and religion.

The skepticism of the zeitgeist, in turn, provoked questions about who should hold the legitimate authority to make ethically complex decisions, such as whether a critically ill patient’s life support should be terminated.

Physicians would have unquestionably made such decisions in earlier times, but since the physician’s authority was now contested, questions arose about who was to be the new, legitimate decision maker. The unavoidable question “who should decide?” was the central content of bioethical conversations during the 1960s and 1970s (Callahan 2005).

The context of clinical care and healthcare is now a major field of influence for today’s bioethicists, but Rothman (1991, 10) writes the story of bioethics as having begun in the laboratory rather than in the examining room. Whistleblowers published exposés in the 1960s about practices in human experimentation, revealing stark conflicts of interest in instances where patients’ well-being had been sacrificed to researchers’ ambitions. Scandals unsurprisingly followed these accounts. The result was the formation of an entirely new system of governance for human experimentation, introducing formal structures of oversight (institutional review boards) and putting new emphasis on the role of the research subjects themselves through the then-emerging principle of informed consent (ibid., 70–100).

The same dynamics later spread to clinical care or, in Rothman’s (1991) terms, to the “bedside.” The latter half of the 20th century saw bioethics spreading fast in North American healthcare institutions as clinical ethics committees were established and a novel job title emerged in hospital wards: the clinical bioethicist. The extent to which the field had spread and become established can be seen in the fact that the Joint Commission’s Accreditation Manual for Hospitals stipulated in 1992 that, in order to gain accreditation, U.S. hospitals were from then on required to have a “mechanism(s) for the consideration of ethical issues in the care of patients and to provide education to caregivers and patients on ethical issues in health care” (see Heitman 1995, 412–413; original source Joint Commission 1992, 156).

It is clear that bioethics emerged as a response to changing times. The emancipated attitudes of the 1960s and 70s also brought a critique of paternalism as an attitude in medicine. Physicians’ ethics and laypeople’s conceptions of ethical behavior had simply grown too far apart due to the changing attitudes of the times; Veatch (2005, 208) calls this expanding moral distance “the dissonance between physician ethics and other ethics.” Autonomy, meaning patients’ right to make decisions for themselves concerning their treatment, entered medical language during the decades following the emergence of bioethics in the 60s. Only then was the patient’s autonomy established as a basic ethical principle for medical care in many Western societies. Today, autonomy is so taken for granted that it is hard to even imagine that before the critical challenge to physicians’ ethics in the 1950s, “physicians intentionally withheld grave diagnoses from patients; they did research on them without informing them; they sterilized some patients whom they thought were not worthy of being parents; they routinely kept critically and terminally ill patients alive against the wishes of patients; they refused to perform sterilizations, abortions, and provide contraceptives if they thought patients shouldn’t have them; they allocated scarce resources in controversial and nondemocratic ways” (ibid.). So radical was the push from bioethicists and from society that paternalism eventually had to give way to a demand for patients’ right to decide for themselves while receiving adequate and truthful information about their medical conditions from their doctors. After such a profound change, it is hard to remember that before the establishment of the principles of autonomy and informed consent, “physicians’ authority over their patients was complete and absolute” (Sher & Kozlowska 2018, 35).

It was, thus, in the historical, social and political context of the post-World War II era that bioethics emerged, “beginning as an amorphous expression of concern about the untoward effects of advances in biomedical science and gradually forming into a coherent discourse and discipline” (Jonsen 1998, xiii). Bioethics grew out of an Anglo-Saxon cultural ethos that emphasizes individual rights and interests, with the central value that institutions bear responsibility to individuals rather than the other way round; in other words, it represented “the moral triumph and vindication of an open society” (Jennings and Moreno 2011, 269). Since the field’s emergence, bioethics has participated in societal efforts to create new kinds of social and governmental structures that keep conversations about ethics alive in healthcare arenas, from institutional review boards to clinical ethics committees. Having first started as a critique of the establishment and authority, bioethics itself grew to bring a new era of authorities and establishments into being.