
Ethical Issues of Animal and Human Experimentation in the Development of Medical Devices





Subrata Saha

Clemson University

Pamela S. Saha

Clemson University

Introduction

Clinical Trials
The Reason for Clinical Trials • Dilemmas Presented by Clinical Trials • The Need for Double-Blind Trials

Animal Experimentation
Animal Testing • The Need for Animal Research • Regulations/Guidelines Related to Animal Research • The Public Debate


The number and sophistication of new medical devices are transforming modern medicine at a rate never experienced before. These developments have saved the lives of many patients and drastically improved their quality of life. Artificial joint replacements alone have transformed the field of orthopaedic surgery: over 250,000 total hip and total knee replacements are implanted annually in the United States. Ventricular assist devices (VADs) have extended by years lives that would otherwise certainly have ended at the point the device was needed. An estimated two to three million artificial or prosthetic parts, manufactured by hundreds of different companies, are implanted in America each year. The massive production of such devices is not only big science but big business.

New medical devices, however, require thorough testing for safety and efficacy as well as submission for approval by the federal Food and Drug Administration (FDA) before being put on the market for public use. Testing of a new product takes considerable time and expense and is not without problems beyond those of a technical nature. This paper considers the ethical questions that arise when the demands of science, economics, and progress are not entirely compatible with issues raised about the rights and obligations toward human beings and animals.

One of the first levels of testing where ethical debate is prominent is the stage at which the biomedical scientist must use animal subjects.1 Over the last 30 years there has been a growing debate over whether the use of experimental animals is even appropriate. Some animal rights activists have made violent protests and vandalized research facilities where animal experimentation takes place. Increased public sensitivity did help promote an effort by the government, as well as the scientific community, to regulate the use of animal subjects and to educate the public about the importance of such research to the health and well-being of both animals and humans. As a result, efforts to promote the humane use of experimental animals in advancing knowledge of the biomedical sciences are to be supported.

Inevitably, human beings are involved during the final stages of testing medical devices or systems. While this raises numerous ethical concerns and has stirred much discussion, clinical trials are necessary: the alternative would mean an end to learning anything new for the betterment of medical science, and the continued use of unsupported practices based on conjecture.2

Consequently, engineers involved with the development and design of medical technology need to become familiar with various aspects of clinical trials and animal research, as well as the ethical issues that they raise. The conduct of human experimentation is not normally part of the training of an engineer,3 nor are the complications presented by ethical concerns a traditional part of engineering education.4 In this chapter, various ethical concerns are examined that are essential elements in the animal testing and clinical trials of any new treatment modality.

Clinical Trials

The Reason for Clinical Trials

Clinical trials are designed to ascertain the effectiveness and safety of a new medical device as compared with established medical practice. This form of rigorous scientific investigation in a controlled environment for the assessment of new treatment modalities is superior to the forum of private opinion and individual chance-taking. Holding the practice of medicine at the status quo and discontinuing all innovative work would be the only way to prevent exposing patients to some unforeseen risks in new treatments that come with the promise of improved care. Yet even this would not protect patients from uncontrolled experimentation, as conditions change even if the practice of medicine stands still. The changing effectiveness of antibiotics, and the increase in antibiotic-resistant strains through uninformed overuse of such drugs, is a perfect example of how even standard care held stationary can lead to a decline rather than a status quo in the level of medical care. If we wish to continue to seek out new ways to conquer disease, or even to maintain the current level of care, then we are essentially forced to decide on a manner in which human beings are to shoulder the risks involved.

Total mastectomies for Stage I and II breast tumors and extracranial-intracranial anastomosis for internal carotid atherosclerosis were once routinely practiced. However, randomized trials have brought the necessity of these surgical procedures under closer scrutiny. Certain procedures still in continual use, such as the now twenty-year-old practice of obstetric ultrasonography, are losing support in the battle to cut medical costs because they do not have the benefit of randomized clinical trials to prove their value. Answers are needed to questions concerning health and economic risks and benefits, and the burden of false positive and negative results, especially in this time of multiple options in a profit-driven atmosphere, along with dwindling resources and government aid to the sick, old, and poor.5

A clinical trial is the most reasonable means to test a device as well as to control risks and prevent abuse of human subjects, for the following six major reasons: (1) A limited number of closely monitored subjects in a controlled environment are in a safer situation than the same subjects would be unobserved within the larger population, where the same kinds of risks are imposed by uncertainty about the efficacy of new devices. (2) Clinical trials give conclusive answers to important medical inquiries that otherwise could only be answered by guessing. Medical decisions based on proven results are certainly superior to those dependent on untested clinical opinion. One author has stated that “trials were introduced because personal opinion was so notoriously fragile, biased, and unreliable.”6 Risks would indeed be greater and the potential for harm magnified if doctors had to make decisions in an environment of general uncertainty, with the available medical products unsupervised by review boards. In an uncontrolled situation, the individual practitioner can be influenced by motives of economic profit, the need to appear knowledgeable and abreast of the new, or by high-pressure salesmanship. In the zeal to argue for individual rights in clinical trials, worry should be placed on how those same rights are threatened in an environment without objective controls. (3) A society that restricts and oversees the advancement of medicine through regulated human experimentation will prevent the subjection of people to needless risks brought on by a plurality of devices that may cause harm and offer only maintenance of the status quo as a benefit. There is a plenitude of redundant consumer goods on the market today, e.g., soft drinks that offer multiple ways to relieve one condition—thirst. However, in medicine the concerns are different. The onslaught of numerous types of drugs to relieve nausea during pregnancy should be limited to the testing of a few medicines that offer the greatest benefit. Control of the marketplace in medicine through government or private testing of the few most promising forms of treatment is a much safer environment than an extensive supermarket full of many possibilities and many potential risks. (4) Clinical trials advance expedient corroboration of medical theories so that research can be channeled in directions that show meaningful results. Promising research is rapidly pinpointed, and harmful products are removed from the hiding place of private opinion and promotional tactics. (5) Clinical trials are the answer to the moral imperative to thoroughly test all new medical devices. Such testing is deficient without a controlled study on human subjects. No researcher can state confidently that a product is safe and effective for human use without such a test. (6) All clinical trials are evaluated by specialized committees formed for the objective of supervising the ethical conduct of investigators using human beings as subjects. In this manner, individual rights are protected and ethical guidelines effectively imposed, in a manner superior to what could be expected in the isolated world of private clinical practice, with its competition for patients and pressures from manufacturers.

Dilemmas Presented by Clinical Trials

The Problem of Informed Consent

One of the most controversial issues generated by clinical trials is that of informed consent.7 Informed consent protects certain human rights, such as the patient’s freedom to decide what risks to take with his/her own body, the right to the truth from the doctor in the doctor-patient relationship, and a just distribution of goods in accordance with a standard of equity and access to redress for undeserved harm. These values cannot be sacrificed for any sort of anticipated benefit from research. “The loss of such values is so harmful that benefits become meaningless.”8

Clinical Trials and the Doctor-Patient Relationship

Another issue currently debated is the conflict of clinical trials with the therapeutic obligation. Some authors argue that we must face the fact that if we are to expand the knowledge needed for obtaining high-quality treatment, we must sacrifice our therapeutic obligation. Others take a more apprehensive view of clinical trials, stating that trials on healthy subjects are condemned by the Nuremberg Code, the Tokyo Declaration, and the Helsinki Declaration of the World Medical Association.9

However, outright condemnation of experimentation on healthy human subjects ignores the vital need for progress in preventive as well as remedial medical treatment. For example, the development and use of vaccines carry minimal but real risks to their recipients. Yet few dispute that vaccine research using human subjects is morally justified and may even be compulsory, despite the reality that persons can and do die from experimental as well as FDA-approved vaccines. The control of crippling and deadly diseases, and their eventual elimination (i.e., smallpox), is due to the study and implementation of vaccines. Those who support clinical trials say that a validated medical practice is a far better alternative, for both the individual and society as a whole, than subjection to treatments whose effectiveness is not validated by controlled trials.

One article offers several ideas for the elicitation of informed consent and assignment to randomized groups. The best model suggested for general use begins with the selection of eligible patients, who are pre-randomized and given the entire protocol with an explanation of the benefits and risks of all the options; consent is then sought, with the patient knowing his or her group assignment.10

Proposed deviations from the above standard could be defended before review boards; for example, investigations that would be impeded by the patient knowing his or her group’s assignment but that promise the patient major benefits. However, the patient should be informed of any such stipulations and of the relative risks and benefits.
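As a concrete illustration, the pre-randomized consent flow just described might be sketched as follows. This is a minimal sketch of our own; the function name, field names, and data layout are illustrative assumptions, not taken from the cited article.

```python
import random

# A hypothetical sketch of the pre-randomized consent model: eligible
# patients are assigned to an arm first, shown the entire protocol
# (including their assignment), and only then asked for consent.

def prerandomized_consent(patients, arms=("treatment", "control"), seed=0):
    rng = random.Random(seed)  # fixed seed keeps the illustration reproducible
    enrolled = []
    for patient in patients:
        arm = rng.choice(arms)               # 1. pre-randomize before consent
        disclosure = {                       # 2. explain the entire protocol,
            "patient": patient,              #    including the assignment itself
            "assigned_arm": arm,
            "options_explained": list(arms),
        }
        if patient.get("consents", False):   # 3. consent sought with full knowledge
            enrolled.append(disclosure)
    return enrolled

cohort = [
    {"name": "P1", "consents": True},
    {"name": "P2", "consents": False},
    {"name": "P3", "consents": True},
]
enrolled = prerandomized_consent(cohort)
print(len(enrolled))  # only consenting patients are enrolled: 2
```

The key property of the model is visible in the ordering: the assignment exists before consent is sought, so a patient who declines does so knowing exactly which arm was offered.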

Standards of research require further consideration, as a consensus needs to be achieved. For example, in order to test the effectiveness of embryonic pig cells transplanted into the brain for the treatment of Parkinson’s disease, a control group is required in which surgical boring of a hole into the skull occurs without the addition of any cells. This is to remove any placebo effect, or any other unforeseeable effect, of a mock surgical procedure on Parkinson’s disease. However, simply boring a hole does subject a human being to a procedure with significant risk and little basis for expecting a beneficial outcome. The design of scientific studies needs to weigh the risks to the human subjects against the need for scientific purity. Such opinions have been voiced by Arthur Caplan, director of the University of Pennsylvania’s Center for Biomedical Ethics, who recently told the Boston Herald that “striving for scientific accuracy is a commendable goal, but asking someone to have a hole drilled in their head for no purpose is putting science ahead of the subject’s interest.”

There has been much effort, particularly in recent years, toward the protection of individual rights. This is not surprising in this age of rising autonomy and declining paternalism. This concern has led to the consideration of whether or not treatment should be changed mid-course in an experiment due to the “appearance” of a tendency toward one result as opposed to another. At some point in collecting data, the experimenter may begin to surmise a particular outcome or even come to expect it (although he or she may not be certain). In these circumstances, the question arises as to whether the individual subject should be granted the hypothetical benefit of the “good guess,” or whether the guess should be treated as merely a guess and, hence, a gamble. Granting that benefit would mean that the treatment given the individual might not be one that has been proven valid; in fact, such a choice implies that the best treatment is a perpetual gamble. Nonetheless, treatment methods that maximize the good guess have been presented.11

Another issue that has surfaced from the debate on informed consent is the idea of the “therapeutic misconception,” which refers to an unyielding expectation of personalized care on the part of patients during a clinical trial. This finding shows a need for better communication between the patient and the doctor. However, such “false expectations” could be due to a fundamental trust on the patient’s part: a belief that, through cooperation with the physician in research, personal health needs will eventually be met according to the best available data at any given time. Perhaps this notion should be addressed during a clinical trial.12

A local newspaper reported that some companies are enticing private physicians, with significant fees, to register patients for their studies. What used to be the domain of academic researchers motivated by the drive for new discoveries, fame, and career advancement is now a multibillion-dollar industry, with numerous companies working with thousands of doctors in private practice and having a profound impact on the doctor-patient relationship. Often the patient is unaware that significant amounts of money are involved in his or her recruitment for a study.13 In addition to the long-debated problems of human research, the influence of a growing industry invites the need for continued inspection and control, not only for the prevention of abuse of human subjects but also because of the possible effect such recruiting methods could have on scientific integrity.

The Need for Double-Blind Trials

The double-blind study has magnified the issues concerning clinical trials already mentioned. By design, this type of trial means that neither the physicians nor the patients involved are given full information about the experiment. The double-blind study is the best safeguard against biased results; indeed, its purpose is to bring about a treatment plan based on objective fact rather than on biased personal belief or guesswork. Even well-intentioned and honest researchers can fall victim to seeing their data too subjectively. For instance, an investigator may have a vested interest in the study because of personal prestige, financial gain, or merely normal enthusiasm and faith in his own work.14 These represent obstacles to objective results in research that is not blinded. The originator and/or sponsor of a project may suffer from a conflict of interest when either becomes directly involved in a study. The double-blind approach protects against such conflict. Although some investigators have argued that it is unethical for them to deprive control subjects of their product because they believe so strongly in its efficacy, personal conviction of the usefulness of one’s own work is not a good ethical reason for failing to thoroughly test a new device for safety and effectiveness. The moral imperative to thoroughly test is a basic concept to be followed to achieve good biomedical engineering.15
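The blinding mechanism can be sketched in a few lines. In this illustrative sketch (ours, not a production protocol), an independent coordinator maps opaque kit codes to arms, so neither the treating physician nor the patient can tell device from sham; the code-to-arm key stays sealed until the trial ends. All names here (kit codes, arm labels) are assumptions for illustration.

```python
import random

# Minimal double-blind assignment sketch: subjects and clinicians see only
# opaque codes; the sealed `key` maps each code to its arm.

def assign_double_blind(subject_ids, arms=("device", "sham"), seed=42):
    rng = random.Random(seed)
    # unique opaque codes, one per subject
    codes = [f"KIT-{n:04d}" for n in rng.sample(range(10000), len(subject_ids))]
    key = {code: rng.choice(arms) for code in codes}  # sealed: code -> arm
    blinded = dict(zip(subject_ids, codes))           # visible: subject -> code
    return blinded, key

blinded, key = assign_double_blind(["S1", "S2", "S3", "S4"])
# During the trial only `blinded` circulates; `key` is opened only at unblinding.
```

Separating the two mappings is the design point: anyone holding `blinded` alone, whether investigator or subject, cannot reconstruct the assignments.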

The biomedical engineer must face the current debate over the use of medical technology in accordance with the ethical, professional, and scientific imperative to thoroughly test his or her innovation. It is now time for biomedical engineers to stand up and support the sound moral use of clinical trials for the purposes of (1) scientific credibility of the biomedical engineering field, (2) protection of the individual from improper medical care due to unsupervised, unethical market-place forms of human research, and (3) the promotion of medical care based on sound reasoning and scientific fact. These principles are at the heart of the professionalism required of biomedical engineering research.16

Animal Experimentation

Animal Testing

Animal experimentation is an important step in the development of new implants and devices to determine their safety and effectiveness prior to their use in humans. There is no current alternative to the use of animal models to evaluate the biocompatibility of new materials and the response by a host.17 However, animal rights groups have raised important ethical considerations and have challenged the scientific community to justify the use of animals in research. The scientific community should respond and debate these issues. They should help to educate the public and increase understanding of the need for such research. A more proactive approach in this matter is imperative. It is also important that efforts be made by scientists to demonstrate to the public that animal research is done humanely, sparingly, and only when alternatives do not exist.18

The Need for Animal Research

Biomedical engineering has brought remarkable advancement to the field of medicine in a very short time frame. Just in the last quarter of this century we have seen achievements such as (1) pacemakers, (2) total joint replacements, (3) artificial hearts, (4) CAT scan machines, and (5) improved surgical techniques made possible through the use of fiber optics, lasers, and ultrasonic devices. Along with these remarkable advances came the need for closer monitoring of the safety and effectiveness of inventions before their release for public use. More regulations, standards, and testing protocols were devised to ensure that each new product is subjected to a uniform system of scrutiny and procedure for approval. As part of that process, the testing of a product in animals is a likely necessary step. For example, the NIH Guidelines for the Physicochemical Characterization of Biomaterials outline a stepwise process for testing blood-contacting devices progressively in animals. A possible hierarchy of testing, starting from in vitro and moving to in vivo systems, may be as follows:19

Cell culture cytotoxicity (mouse L929 cell line)

Hemolysis (rabbit or human blood)

Mutagenicity [human or other mammalian cells or Ames test (bacterial)]

Systemic injection acute toxicity (mouse)

Sensitization (guinea pig)

Pyrogenicity (limulus amebocyte lysate [LAL] or rabbit)

Intracutaneous irritation (rat, rabbit)

Intramuscular implant (rat, rabbit)

Blood compatibility (rat, dog)

Long-term implant (rat, rabbit, dog, primate)
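The hierarchy above can be captured as an ordered table. The test names and animal models below come from the list itself; the code organization and the `next_test` helper are our own illustration of how the stepwise progression might be tracked.

```python
# The in vitro -> in vivo hierarchy listed above, as an ordered table.
BIOCOMPATIBILITY_HIERARCHY = [
    ("cell culture cytotoxicity", ["mouse L929 cell line"]),
    ("hemolysis", ["rabbit blood", "human blood"]),
    ("mutagenicity", ["human/mammalian cells", "Ames test (bacterial)"]),
    ("systemic injection acute toxicity", ["mouse"]),
    ("sensitization", ["guinea pig"]),
    ("pyrogenicity", ["limulus amebocyte lysate (LAL)", "rabbit"]),
    ("intracutaneous irritation", ["rat", "rabbit"]),
    ("intramuscular implant", ["rat", "rabbit"]),
    ("blood compatibility", ["rat", "dog"]),
    ("long-term implant", ["rat", "rabbit", "dog", "primate"]),
]

def next_test(completed):
    """Return the first (test, models) pair not yet completed, or None."""
    for name, models in BIOCOMPATIBILITY_HIERARCHY:
        if name not in completed:
            return name, models
    return None

# A device that has cleared the first three in vitro screens moves on to
# the first whole-animal test in the sequence:
print(next_test({"cell culture cytotoxicity", "hemolysis", "mutagenicity"})[0])
# -> systemic injection acute toxicity
```

The ordering matters: cheaper in vitro screens come first, so failures are caught before any animal is used, which is also the ethical point of the hierarchy.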

Animal testing is a necessary means of evaluating a device in the internal environment of the living system. The suggestion that it can be replaced by computer modeling could only be made by one unaware of how little is known about in vivo conditions.20 The nearly inscrutable chemical pathways, the complex milieu of high chemical, mechanical, and electrical activity, and the incalculable number of interactions inside the living organism cannot possibly be modeled theoretically today. Add to this the fact that normal conditions differ from disease states such as thrombosis or inflammation, and we find ourselves still further removed from the prospect of using simulation methods. Yet to rule out both long- and short-term failure of these products, they must be tested sufficiently to exclude risks of toxicity and even possible carcinogenic effects over many years. In response to questions raised about animal research, the American Association for the Advancement of Science issued a resolution in 1990 that supports the use of animals in research while emphasizing that experimental animals be treated humanely.

Regulations/Guidelines Related to Animal Research

A biomedical engineer must become familiar with the regulations governing animal research. The National Research Council published its Guide for the Care and Use of Laboratory Animals as early as 1963, and most recently in 1996.

In addition to becoming familiar with this Guide, it is important to be aware of all applicable federal, state, and local laws, regulations, and policies such as the Animal Welfare Regulations and the Public Health Service Policy on Humane Care and Use of Laboratory Animals.22

The general objective is to have a cogent reason for conducting a particular experiment: it must be important to the health and well-being of humans or animals.23 It must be demonstrated that alternative means would not achieve the necessary goal. The choice of species must be shown to be essential, and it must be shown that species lower on the evolutionary ladder would not be suitable. An effort must be made to minimize the number of animals required and to prevent unnecessary duplication of tests. Pain and discomfort must be minimized unless essential to the conduct of the experiment. Standards must be followed concerning (1) space allotment, (2) type of confinement, (3) availability of food and water, (4) care in transit, and (5) periods of exercise. The experiment must have clear limits and be performed under the close supervision of appropriately trained individuals. Provisions for veterinary care, environmental conditions, and euthanasia must also be made. If palliation of discomfort or pain must be withheld, a detailed explanation must be made available.

In 1985, Congress passed the Food Security Act, which amended the Animal Welfare Act of 1966. This required that research institutions have a committee of no fewer than three persons who are qualified to monitor animal care and practices in experimentation.17 These persons are to represent society’s interest in the proper treatment of animal subjects. The committee must include one veterinarian and one other person not connected with the institution. This committee must inspect all animal research and care facilities at the institution twice a year. The institution should in turn keep a report of each inspection on file for 3 years and report any violation of legal standards on the treatment of animals to federal officials. The committee must review specific areas of treatment of experimental animals, including pain management, veterinary care, and pre- and post-surgical care. A single animal must not be subjected to major survival surgery more than once unless scientifically justified. Dogs must be exercised, and environmental conditions must be conducive to the psychological well-being of primates. Note that policies of the National Science Foundation now extend to all vertebrate animals, which is broader than the Animal Welfare Act, which did not include (1) rats, (2) mice, (3) farm animals, and (4) birds.
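The committee requirements set out in the 1985 amendment (at least three qualified members, including one veterinarian and one person unaffiliated with the institution, plus two facility inspections per year) amount to a simple checklist. The sketch below is our own illustration of that checklist; the field names are assumptions, not regulatory language.

```python
# Hedged sketch of the 1985 committee requirements as a compliance check:
# >= 3 members, one veterinarian, one unaffiliated member, >= 2 inspections/year.

def committee_meets_requirements(members, inspections_per_year):
    if len(members) < 3:
        return False
    has_vet = any(m.get("role") == "veterinarian" for m in members)
    has_outsider = any(not m.get("affiliated", True) for m in members)
    return has_vet and has_outsider and inspections_per_year >= 2

committee = [
    {"name": "A", "role": "veterinarian", "affiliated": True},
    {"name": "B", "role": "scientist", "affiliated": True},
    {"name": "C", "role": "community member", "affiliated": False},
]
print(committee_meets_requirements(committee, inspections_per_year=2))  # True
```

Dropping any one element (the outside member, the veterinarian, or an inspection) makes the check fail, which mirrors the statute's insistence that each requirement stands on its own.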

The Public Debate

The debate about animal experimentation is marred by extremes on both sides of the issue.24 Scientists have been physically threatened and assaulted and their laboratories vandalized by extremist animal rights groups.25 Unwarranted charges that scientists only conduct animal research for personal gain and that these experiments are cruel and unnecessary demonstrate the extent of misinformation that exists for the public.

The actions of irresponsible activists must not obscure the serious philosophical issues raised by animal rights advocates. Some of them argue that there exists no justification for the claim that animals can be exploited for human benefit. This line of reasoning holds that animals have rights equal to those of humans and that, as animals cannot give informed consent for an experiment, they should be precluded from use. This represents a radical change that would have consequences beyond the use of animals in science. When animals are used as beasts of burden, for example, should this be considered slavery? What of the use of animals for food, clothing, or even simply in sport? One author noted that a 1990 study showed that although 63% of literature advocating animal rights addresses only their use in research, such use annually is only 0.003% of the number of animals used as food.26 What about protection against pests in the home as well as in agriculture? Even the ownership of pets may be called into question, as a case of imprisonment. Of all possible targets for basing a case against human exploitation of animals, the use of them in medical experiments is the least worthy. Scientists not only make the least use of animals but do so under strict regulation and with all attempts being made to minimize their use. If we are seriously going to elevate animals to a moral standing equal to that of human beings, then the research laboratory is not the appropriate place to start. The burden of explaining how all future interaction with the rest of the animal kingdom should take place lies with those who make such a claim.

The other extreme, however, is the claim by some scientists that animals have no moral standing or intrinsic value. A 1988 report by the National Research Council on the Use of Laboratory Animals in Biomedical and Behavioral Research says, “Our society does, however, acknowledge that living things have inherent value.”27 This is consistent with society’s tendency to treat animals differently than inanimate objects. Even a mouse merits higher consideration than a stone. That animals have value, moral standing, or even rights is not equivalent to the suggestion that they are equal to human beings in their value, standing, and rights.

Humans are not the only beneficiaries of research on animals. Veterinary medicine would suffer without such research and the care of animals has significantly improved as a result of responsible animal experimentation. Today, human psychoactive drugs are being used for pets with behavioral problems.28


While there is continued consensus within the scientific community on the need for animal research, there is also increased sensitivity and awareness of the need for humane treatment of animals and of the intrinsic value and moral standing of non-human animals. This has led to improved guidelines and regulation on such use, as well as much-needed discussion on the purpose and ethics of animal research. While the debate will certainly continue, the use of violence, slander, and media sensationalism should be discouraged, as it will only lead to further polarization of extreme views. Constructive and responsible discussion will help bring about a consensus on this very important matter, which concerns the health and welfare of animals and humans alike.


References

An, Y. H. and Friedman, R. J. Animal Models in Orthopaedic Research, CRC Press, Boca Raton, FL, 1998.

Saha, P. and Saha, S. Clinical trials of medical devices and implants: Ethical concerns, IEEE Eng. Med. Biol. Mag., 7, 85-87, 1988.

Saha, P. and Saha, S. Ethical responsibilities of the clinical engineer, J. Clin. Eng., 11, 17-25, 1986.

Saha, P. and Saha, S. The need of biomedical ethics training in bioengineering, In Biomedical Engineering I: Recent Developments, S. Saha, Ed., Pergamon Press, New York, 369-373, 1982.

Bracken, M. B. Clinical trials and the acceptance of uncertainty, British Med. J., 294, 1111-1112, London, 1987.

Vere, D. W. Problems in controlled trials: A critical response. J. Med. Ethics, 9(2), 85-89, 1983.

Kaufmann, C. L. Informed consent and patient decision making: Two decades of research, Soc. Sci. Med., 21, 1657-1664, 1983.

Dyck, A. J. and Richardson, H. W. The moral justification for research using human subjects, In Biomedical Ethics and the Law, J. M. Humber and R. F. Almeder, Eds., Plenum Press, New York, 243-259.

Arpaillange, P., Dion, S., and Mathé, G. Proposal for ethical standards in therapeutic trials, British Med. J., 291, 887-889, 1985.

Kopelman, L. Randomized clinical trials consent and the therapeutic relationship, Clin. Res., 31(1), 1-11, 1983.

Meier, P. Terminating a trial: The ethical problem, Clin. Pharmacol. Therap., 25, 637-640, 1979.

Appelbaum, P., Lidz, C. W., Benson, P., et al. False Hopes and Best Data: Consent to Research and the Therapeutic Misconception, Hastings Center Report, 17(2), 20-24, 1987.

Eichenwald, K. and Kolata, G. Drug trials threaten doctors’ credibility, Anderson Independent-Mail, Sunday, May 16, 1999, 13A.

Saha, S. and Saha, P. Bioethics and applied biomaterials, J. Biomed. Mat. Res.: Appl. Biomat., 21(A-2), 181-190, 1987.

Saha, S. and Saha, P. Biomedical ethics and the biomedical engineer: A review, Crit. Rev. Biomed. Eng., 25(2), 163-201, 1997.

Mappes, T. A. and Zembaty, J. S. Biomedical Ethics, 2nd ed., McGraw-Hill, New York, 1986.

Saha, P. and Saha, S. Ethical Issues on the Use of Animals in the Testing of Medical Implants, J. Long-Term Effects of Med. Impl., 1(2), 127-134, 1991.

Lukas, V. and Podolsky, M. L. The Care and Feeding of an IACUC, CRC Press, Boca Raton, FL, 1999.

Vale, B. H., Wilson, J. E., and Niemi, S. M. Animal models, In Biomaterials Science, Academic Press, San Diego, CA, 240, 1996.

Malakoff, D. Alternatives to animals urged for producing antibodies, Science, 284, 230, 1999.

National Research Council, Guide for the Care and Use of Laboratory Animals, 2nd ed., 1996.

Rollin, B. E. and Kesel, M. D. The Experimental Animal in Biomedical Research, CRC Press, Boca Raton, FL, 1, 1990.

Saha, S. and Saha, P. Biomedical ethics and the biomedical engineer: A review, Crit. Rev. Biomed. Eng., 25(2), 163-201, 1997.

Saha, S. and Saha, P. Biomedical engineering and animal research, BMES Bulletin, 16(2), 22, 1992.

Kaiser, J. Activists ransack Minnesota labs, Science, 284, 410-411, 1999.

Conn, P. M. and Parker, J. Animal rights: Reaching the public, Science, 282, 1417, 1998.

Herzog, H. A. Jr. Informed Opinions on Animal Use Must Be Pursued, ILAR News, Institute of Laboratory Animal Resources, 31(2), Spring 1989.

Bunk, S. Market Emerges for Use of Human Drugs on Pets, The Scientist, 1 & 10, April 12, 1999.

Bronzino, J. D. Regulation of Medical Device Innovation, in The Biomedical Engineering Handbook: Second Edition, J. D. Bronzino, Ed., CRC Press LLC, Boca Raton, FL, 2000.

Beneficence, Nonmaleficence, and Technological Progress



Joseph D. Bronzino

Trinity College/Biomedical Engineering Alliance for Connecticut (BEACON)


Defining Death: A Moral Dilemma Posed by Medical Technology


Active Versus Passive Euthanasia • Involuntary and Nonvoluntary Euthanasia • Should Voluntary Euthanasia Be Legalized?


Two moral norms have remained relatively constant across the various moral codes and oaths that have been formulated for health-care deliverers since the beginnings of Western medicine in classical Greek civilization, namely beneficence—the provision of benefits—and nonmaleficence—the avoidance of doing harm. These norms are traced back to a body of writings from classical antiquity known as the Hippocratic Corpus. Although these writings are associated with the name of Hippocrates, the acknowledged founder of Western medicine, medical historians remain uncertain whether any, including the Hippocratic Oath, were actually his work. Although portions of the Corpus are believed to have been authored during the sixth century BC, other portions are believed to have been written as late as the beginning of the Christian Era. Medical historians agree, though, that many of the specific moral directives of the Corpus represent neither the actual practices nor the moral ideals of the majority of physicians of ancient Greece and Rome.

Nonetheless, the general injunction, “As to disease, make a habit of two things—to help or, at least, to do no harm,” was accepted as a fundamental medical ethical norm by at least some ancient physicians. With the decline of Hellenistic civilization and the rise of Christianity, beneficence and nonmaleficence became increasingly accepted as the fundamental principles of morally sound medical practice. Although beneficence and nonmaleficence were regarded merely as concomitant to the craft of medicine in classical Greece and Rome, the emphasis upon compassion and the brotherhood of humankind, central to Christianity, increasingly made these norms the only acceptable motives for medical practice. Even today the provision of benefits and the avoidance of doing harm are stressed just as much in virtually all contemporary Western codes of conduct for health professionals as they were in the oaths and codes that guided the health-care providers of past centuries.

Traditionally, the ethics of medical care have given greater prominence to nonmaleficence than to beneficence. This priority was grounded in the fact that, historically, medicine’s capacity to do harm far exceeded its capacity to protect and restore health. Providers of health care possessed many treatments that posed clear and genuine risks to patients but that offered little prospect of benefit. Truly effective therapies were all too rare. In this context, it is surely rational to give substantially higher priority to avoiding harm than to providing benefits.

The advent of modern science changed matters dramatically. Knowledge acquired in laboratories, tested in clinics, and verified by statistical methods has increasingly dictated the practices of medicine. This ongoing alliance between medicine and science became a critical source of the plethora of technologies that now pervades medical care. The impressive increases in therapeutic, preventive, and rehabilitative capabilities that these technologies have provided have pushed beneficence to the forefront of medical morality. Some have even gone so far as to hold that the old medical ethic of “Above all, do no harm” should be superseded by the new ethic that “The patient deserves the best.” However, the rapid advances in medical technology capabilities have also produced great uncertainty as to what is most beneficial or least harmful for the patient. In other words, along with increases in ability to be beneficent, medicine’s technology has generated much debate about what actually counts as beneficent or nonmaleficent treatment. To illustrate this point, let us turn to several specific moral issues posed by the use of medical technology (Bronzino, 1992; 1999).

Defining Death: A Moral Dilemma Posed by Medical Technology

Supportive and resuscitative devices, such as the respirator, found in the typical modern intensive care unit provide a useful starting point for illustrating how technology has rendered medical morality more complex and problematic. Devices of this kind allow clinicians to sustain respiration and circulation in patients who have suffered massive brain damage and total permanent loss of brain function. These technologies force us to ask: precisely when does a human life end? When is a human being indeed dead? This is not the straightforward factual matter it may appear to be. All of the relevant facts may show that the patient’s brain has suffered injury grave enough to destroy its functioning forever. The facts may show that such an individual’s circulation and respiration would permanently cease without artificial support. Yet these facts do not determine whether treating such an individual as a corpse is morally appropriate. To know this, it is necessary to know or perhaps to decide on those features of living persons that are essential to their status as “living persons.” It is necessary to know or decide which human qualities, if irreparably lost, make an individual identical in all morally relevant respects to a corpse. Once those qualities have been specified, deciding whether total and irreparable loss of brain function constitutes death becomes a straightforward factual matter. Then, it would simply have to be determined if such loss itself deprives the individual of those qualities. If it does, the individual is morally identical to a corpse. If not, then the individual must be regarded and treated as a living person.

The traditional criterion of death has been irreparable cessation of heart beat, respiration, and blood pressure. This criterion would have been quickly met by anyone suffering massive trauma to the brain prior to the development of modern supportive technology. Such technology allows indefinite artificial maintenance of circulation and respiration and, thus, forestalls what once was an inevitable consequence of severe brain injury. The existence and use of such technology therefore challenges the traditional criterion of death and forces us to consider whether continued respiration and circulation are in themselves sufficient to distinguish a living individual from a corpse. Indeed, total and irreparable loss of brain function, referred to as “brainstem death,” “whole brain death,” and, simply, “brain death,” has been widely accepted as the legal standard for death. By this standard, an individual in a state of brain death is legally indistinguishable from a corpse and may be legally treated as one even though respiratory and circulatory functions may be sustained through the intervention of technology. Many take this legal standard to be the morally appropriate one, noting that once destruction of the brain stem has occurred, the brain cannot function at all, and the body’s regulatory mechanisms will fail unless artificially sustained. Thus mechanical sustenance of an individual in a state of brain death is merely postponement of the inevitable and sustains nothing of the personality, character, or consciousness of the individual. It is merely the mechanical intervention that differentiates such an individual from a corpse, and a mechanically ventilated corpse is a corpse nonetheless.

Even with a consensus that brainstem death is death and thus that an individual in such a state is indeed a corpse, hard cases remain. Consider the case of an individual in a persistent vegetative state, the condition known as “neocortical death.” Although severe brain injury has been suffered, enough brain function remains to make mechanical sustenance of respiration and circulation unnecessary. In a persistent vegetative state, an individual exhibits no purposeful response to external stimuli and no evidence of self-awareness. The eyes may open periodically and the individual may exhibit sleep-wake cycles. Some patients even yawn, make chewing motions, or swallow spontaneously. Unlike the complete unresponsiveness of individuals in a state of brainstem death, a variety of simple and complex responses can be elicited from an individual in a persistent vegetative state. Nonetheless, the chances that such an individual will regain consciousness virtually do not exist. Artificial feeding, kidney dialysis, and the like make it possible to sustain an individual in a state of neocortical death for decades. This sort of condition and the issues it raises were exemplified by the famous case of Karen Ann Quinlan. James Rachels (1986) provided the following description of the situation created by Quinlan’s condition:

In April 1975, this young woman ceased breathing for at least two 15-minute periods, for reasons that were never made clear. As a result, she suffered severe brain damage, and, in the words of the attending physicians, was reduced to a “chronic vegetative state” in which she “no longer had any cognitive function.” Accepting the doctors’ judgment that there was no hope of recovery, her parents sought permission from the courts to disconnect the respirator that was keeping her alive in the intensive care unit of a New Jersey hospital.

The trial court, and then the Supreme Court of New Jersey, agreed that Karen’s respirator could be removed. So it was disconnected. However, the nurse in charge of her care in the Catholic hospital opposed this decision and, anticipating it, had begun to wean her from the respirator so that by the time it was disconnected she could remain alive without it. So Karen did not die. Karen remained alive for ten additional years. In June 1985, she finally died of acute pneumonia. Antibiotics, which would have fought the pneumonia, were not given.

If brainstem death is death, is neocortical death also death? Again, the issue is not a straightforward factual matter. For it, too, is a matter of specifying which features of living individuals distinguish them from corpses and so make treatment of them as corpses morally impermissible. Irreparable cessation of respiration and circulation, the classical criterion for death, would entail that an individual in a persistent vegetative state is not a corpse and so, morally speaking, must not be treated as one. The brainstem death criterion for death would also entail that a person in a state of neocortical death is not yet a corpse. On this criterion, what is crucial is that brain damage be severe enough to cause failure of the body’s regulatory mechanisms.

Is an individual in a state of neocortical death any less in possession of the characteristics that distinguish the living from cadavers than one whose respiration and circulation are mechanically maintained? Of course, it is a matter of what the relevant characteristics are, and it is a matter that society must decide. It is not one that can be settled by greater medical information or more powerful medical devices. Until society decides, it will not be clear what would count as beneficent or nonmaleficent treatment of an individual in a state of neocortical death.

Euthanasia

A long-standing issue in medical ethics, which has been made more pressing by medical technology, is euthanasia, the deliberate termination of an individual’s life for the individual’s own good. Is such an act ever a permissible use of medical resources? Consider an individual in a persistent vegetative state. On the assumption that such a state is not death, withdrawing life support would be a deliberate termination of a human life. Here a critical issue is whether the quality of a human life can be so low or so great a liability to the individual that deliberately taking action to hasten death or at least not to postpone death is morally defensible. Can the quality of a human life be so low that the value of extending its quantity is totally negated? If so, then Western medicine’s traditional commitment to providing benefits and avoiding harm would seem to make cessation of life support a moral requirement in such a case.

Consider the following hypothetical version of the kind of case that actually confronts contemporary patients, their families, health-care workers, and society as a whole. Suppose a middle-aged man suffers a brain hemorrhage and loses consciousness as a result of a ruptured aneurysm. Suppose that he never regains consciousness and is hospitalized in a state of neocortical death, a chronic vegetative state. He is maintained by a surgically implanted gastrostomy tube that drips liquid nourishment from a plastic bag directly into his stomach. The care of this individual takes seven and one-half hours of nursing time daily and includes (1) shaving, (2) oral hygiene, (3) grooming, (4) attending to his bowels and bladder, and so forth.

Suppose further that his wife undertakes legal action to force his care givers to end all medical treatment, including nutrition and hydration, so that complete bodily death of her husband will occur. She presents a preponderance of evidence to the court to show that her husband would have wanted this result in these circumstances.

The central moral issue raised by this sort of case is whether the quality of the individual’s life is sufficiently compromised by neocortical death to make intentional termination of that life morally permissible. While alive, he made it clear to both family and friends that he would prefer to be allowed to die rather than be mechanically maintained in a condition of irretrievable loss of consciousness. Deciding whether the judgment in such a case should be allowed requires deciding which capacities and qualities make life worth living, which qualities are sufficient to endow it with value worth sustaining, and whether their absence justifies deliberate termination of a life, at least when this would be the wish of the individual in question. Without this decision, the traditional norms of medical ethics, beneficence and nonmaleficence, provide no guidance. Without this decision, it cannot be determined whether termination of life support is a benefit or a harm to the patient.

An even more difficult type of case was provided by the case of Elizabeth Bouvia. Bouvia, who had been a lifelong quadriplegic sufferer of cerebral palsy, was often in pain, completely dependent upon others, and spent all of her time bedridden. Bouvia, after deciding that she did not wish to continue such a life, entered Riverside General Hospital in California. She desired to be kept comfortable while starving to death. Although she remained adamant during her hospitalization, Bouvia’s requests were denied by hospital officials with the legal sanction of the courts.

Many who might believe that neocortical death renders the quality of life sufficiently low to justify termination of life support, especially when this agrees with the individual’s desires, would not arrive at this conclusion in a case like Bouvia’s. Whereas neocortical death completely destroys consciousness and makes purposive interaction with the individual’s environment impossible, Bouvia was fully aware and mentally alert. She had previously been married and had even acquired a college education. Televised interviews with her portrayed a very intelligent person who had great skill in presenting persuasive arguments to support her wish not to have her life continued by artificial means of nutrition. Nonetheless, she judged her life to be of such low quality that she should be allowed to choose to deliberately starve to death. Before the existence of life support technology, maintenance of her life against her will might not have been possible at all and at least would have been far more difficult.

Should Elizabeth Bouvia’s judgment have been accepted? Her case is more difficult than the case of a patient in a chronic vegetative state because, unlike such an individual, she was able to engage in meaningful interaction with her environment. Regarding an individual who cannot speak or otherwise meaningfully interact with others as nothing more than living matter, as a “human vegetable,” is not especially difficult. Seeing Bouvia this way is not easy. Her awareness, intelligence, mental acuity, and ability to interact with others mean that although her life is one of discomfort, indignity, and complete dependence, she is not a mere “human vegetable.”

Despite the differences between Bouvia’s situation and that of someone in a state of neocortical death, the same issue is posed. Can the quality of an individual’s life be so low that deliberate termination is morally justifiable? How that question is answered is a matter of what level of quality of life, if any, is taken to be sufficiently low to justify deliberately acting to end it or deliberately failing to extend it. If there is such a level, the conclusion that it is not always beneficent or even nonmaleficent to use life-support technology must be accepted.

Another important issue here is respect for individual autonomy. The cases of Bouvia and the hypothetical instance of neocortical death discussed above both concern voluntary euthanasia, that is, euthanasia voluntarily requested by the patient. A long-standing commitment, vigorously defended by various schools of thought in Western moral philosophy, is the notion that competent adults should be free to conduct their lives as they please as long as they do not impose undeserved harm on others. Does this commitment entail a right to die? Some clearly believe that it does. If one owns anything at all, surely one owns one’s life. In the two cases discussed above, neither individual sought to impose undeserved harm on anyone else, nor would satisfaction of their wish to die do so. What justification can there be then for not allowing their desires to be fulfilled?

One plausible answer is based upon the very respect for individual autonomy at issue here. A necessary condition, in some views, of respect for autonomy is the willingness to take whatever measures are necessary to protect it, including measures that restrict autonomy. An autonomy-respecting reason offered against laws that prevent even competent adults from voluntarily entering lifelong slavery is that such an exercise of autonomy is self-defeating and has the consequence of undermining autonomy altogether. By the same token, an individual who acts to end his own life thereby exercises his autonomy in a manner that places it in jeopardy of permanent loss. Many would regard this as justification for using the coercive force of the law to prevent suicide. This line of thought does not fit the case of an individual in a persistent vegetative state because his/her autonomy has been destroyed by the circumstances that rendered him/her neocortically dead. It does fit Bouvia’s case though. Her actions indicate that she is fully competent, and her efforts to use medical care to prevent the otherwise inevitable pain of starvation are themselves an exercise of her autonomy. Yet, if allowed to succeed, those very efforts would destroy her autonomy as they destroy her. On this reasoning, her case is a perfect instance of limitation of autonomy being justified by respect for autonomy and of one where, even against the wishes of a competent patient, the life-saving power of medical technology should be used.

Active Versus Passive Euthanasia

Discussions of the morality of euthanasia often distinguish active from passive euthanasia in light of the distinction made between killing a person and letting a person die, a distinction that rests upon the difference between an act of commission and an act of omission. When failure to take steps that could effectively forestall death results in an individual’s demise, the resultant death is an act of omission and a case of letting a person die. When a death is the result of doing something to hasten the end of a person’s life (giving a lethal injection, for example), that death is caused by an act of commission and is a case of killing a person. When a person is allowed to die, death is a result of an act of omission, and the motive is the person’s own good, the omission is an instance of passive euthanasia. When a person is killed, death is the result of an act of commission, and the motive is the person’s own good, the commission is an instance of active euthanasia.

Does the difference between passive and active euthanasia, which reduces to a difference in how death comes about, make any moral difference? It does in the view of the American Medical Association. In a statement adopted on December 4, 1973, the House of Delegates of the American Medical Association asserted the following (Rachels, 1978):

The intentional termination of the life of one human being by another—mercy killing—is contrary to that for which the medical profession stands and is contrary to the policy of the American Medical Association (AMA).

The cessation of extraordinary means to prolong the life of the body where there is irrefutable evidence that biological death is imminent is the decision of the patient and immediate family. The advice of the physician would be freely available to the patient and immediate family.

In response to this position, Rachels (1978) answered with the following:

The AMA policy statement isolates the crucial issue very well; the crucial issue is “intentional termination of the life of one human being by another.” But after identifying this issue and forbidding “mercy killing,” the statement goes on to deny that the cessation of treatment is the intentional termination of a life. This is where the mistake comes in, for what is the cessation of treatment in those circumstances (where the intention is to release the patient from continued suffering), if it is not “the intentional termination of the life of one human being by another?”

As Rachels correctly argues, when steps that could keep an individual alive are omitted for the person’s own good, this omission is as much the intentional termination of life as taking active measures to cause death. Not placing a patient on a respirator due to a desire not to prolong suffering is an act intended to end life as much as the administration of a lethal injection. In many instances the main difference between the two cases is that the latter would release the individual from his pain and suffering more quickly than the former. Dying can take time and involve considerable pain even if nothing is done to prolong life. Active killing can be done in a manner that causes death painlessly and instantly. This difference certainly does not render killing, in this context, morally worse than letting a person die. Insofar as the motivation is merciful (as it must be if the case is to be a genuine instance of euthanasia) because the individual is released more quickly from a life that is disvalued than otherwise, the difference between killing and letting one die may provide support for active euthanasia. According to Rachels (1978), the common rejoinder to this argument is the following:

The important difference between active and passive euthanasia is that in passive euthanasia the doctor does not do anything to bring about the patient’s death. The doctor does nothing and the patient dies of whatever ills already afflict him. In active euthanasia, however, the doctor does something to bring about the patient’s death: he kills the person. The doctor who gives the patient with cancer a lethal injection has himself caused his patient’s death; whereas if he merely ceases treatment, the cancer is the cause of death.

According to this rejoinder, in active euthanasia someone must do something to bring about the patient’s death, and in passive euthanasia the patient’s death is caused by illness rather than by anyone’s conduct. Surely this is mistaken. Suppose a physician deliberately decides not to treat a patient who has a routinely curable ailment and the patient dies. Suppose further that the physician were to attempt to exonerate himself by saying, “I did nothing. The patient’s death was the result of illness. I was not the cause of death.” Under current legal and moral norms, such a response would have no credibility. As Rachels (1978) notes, “it would be no defense at all for him to insist that he didn’t do anything. He would have done something very serious indeed, for he let his patient die.”

The physician would be blameworthy for the patient’s death as surely as if he had actively killed him. If causing death is justifiable under a given set of circumstances, whether it is done by allowing death to occur or by actively causing death is morally irrelevant. If causing someone to die is not justifiable under a given set of circumstances, whether it is done by allowing death to occur or by actively causing death is also morally irrelevant. Accordingly, if voluntary passive euthanasia is morally justifiable in the light of the duty of beneficence, so is voluntary active euthanasia. Indeed, given that the benefit to be achieved is more quickly realized by means of active euthanasia, it may be preferable to passive euthanasia in some cases.

Involuntary and Non-Voluntary Euthanasia

An act of euthanasia is involuntary if it hastens the individual’s death for his own good but against his wishes. To take such a course would be to destroy a life that is valued by its possessor. Therefore, it is no different in any morally relevant way from unjustifiable homicide. There are only two legitimate reasons for hastening an innocent person’s death against his will: self-defense and saving the lives of a larger number of other innocent persons. Involuntary euthanasia does not fit either of these justifications. By definition, it is done for the good of the person who is euthanized, not for self-defense or to save innocent others. No act that qualifies as involuntary euthanasia can be morally justifiable.

Hastening a person’s death for his own good is an instance of non-voluntary euthanasia when the individual is incapable of agreeing or disagreeing. Suppose it is clear that a particular person is sufficiently self-conscious to be regarded as a person but cannot make his wishes known. Suppose also that he is suffering from the kind of ailment that, in the eyes of many persons, makes one’s life unendurable. Would hastening his death be permissible? It would be if there were substantial evidence that he has given prior consent. This person may have told friends and relatives that under certain circumstances efforts to prolong his life should not be undertaken or continued. He might have recorded his wishes in the form of a Living Will (below) or on audio- or videotape. Where this kind of substantial evidence of prior consent exists, the decision to hasten death would be morally justified. Such a case would be virtually one of voluntary euthanasia.

But what about an instance in which such evidence is not available? Suppose the person at issue has never had the capacity for competent consent or dissent from decisions concerning his life. It simply cannot be known what value the individual would place on his life in his present condition of illness. What should be done is a matter of what is taken to be the greater evil—mistakenly ending the life of an innocent person for whom that life has value or mistakenly forcing him to endure a life that he radically disvalues.

To My Family, My Physician, My Clergyman, and My Lawyer:

If the time comes when I can no longer take part in decisions about my own future, let this statement stand as testament of my wishes: If there is no reasonable expectation of my recovery from physical or mental disability,

I, ____________, request that I be allowed to die and not be kept alive by artificial means or heroic measures. Death is as much a reality as birth, growth, maturity, and old age—it is the one certainty. I do not fear death as much as I fear the indignity of deterioration, dependence, and hopeless pain. I ask that drugs be mercifully administered to me for the terminal suffering even if they hasten the moment of death.

This request is made after careful consideration. Although this document is not legally binding, you who care for me will, I hope, feel morally bound to follow its mandate. I recognize that it places a heavy burden of responsibility upon you, and it is with the intention of sharing that responsibility and of mitigating any feelings of guilt that this statement is made.



Witnessed by:

Living Will statutes have been passed in at least 35 states and the District of Columbia. For a Living Will to be a legally binding document, the person signing it must be of sound mind at the time the will is made and shown not to have altered his opinion in the interim between the signing and his illness. The witnesses must not be able to benefit from the individual’s death.

Should Voluntary Euthanasia be Legalized?

The recent actions of Dr. Kevorkian have raised the question: “Should voluntary euthanasia be legalized?” Some argue that even if voluntary euthanasia is morally justifiable, it should be prohibited by social policy nonetheless. According to this position, the problem with voluntary euthanasia is its impact on society as a whole. In other words, the overall disutility of allowing voluntary euthanasia outweighs the good it could do for its beneficiaries. The central moral concern is that legalized euthanasia would eventually erode respect for human life and ultimately become a policy under which “socially undesirable” persons would have their deaths hastened (by omission or commission). The experience of Nazi Germany is often cited in support of this fear. What began there as a policy of euthanasia soon became one of eliminating individuals deemed racially inferior or otherwise undesirable. The worry, of course, is that what happened there can happen here as well. If social policy encompasses efforts to hasten the deaths of people, respect for human life in general is eroded and all sorts of abuses become socially acceptable, or so the argument goes.

No one can provide an absolute guarantee that the experience of Nazi Germany would not be repeated, but there is reason to believe that its likelihood is negligible. The medical moral duty of beneficence justifies only voluntary euthanasia. It justifies hastening an individual’s death only for the individual’s benefit and only with the individual’s consent. To kill or refuse to save people judged socially undesirable is not to engage in euthanasia at all and violates the medical moral duty of nonmaleficence. As long as only voluntary euthanasia is legalized, and it is clear that involuntary euthanasia is not and should never be, no degeneration of the policy need occur. Furthermore, such degeneration is not likely to occur if the beneficent nature of voluntary euthanasia is clearly distinguished from the maleficent nature of involuntary euthanasia and any policy of exterminating the socially undesirable. Euthanasia decisions must be scrutinized carefully and regulated strictly to ensure that only voluntary cases occur, and severe penalties must be established to deter abuse.


References

Bronzino, J. D. Chapter 10, Medical and Ethical Issues in Clinical Engineering Practice. In: Management of Medical Technology, Butterworth, 1992.

Bronzino, J. D. Chapter 20 Moral and Ethical Issues Associated with Medical Technology. In: Introduction to Biomedical Engineering. Academic Press, 1999.

Rachels, J. “Active and Passive Euthanasia,” In: Moral Problems, 3rd ed., Rachels, J., (Ed.), Harper and Row, New York, 1978.

Rachels, J. Ethics at the End of Life: Euthanasia and Morality, Oxford University Press, Oxford, 1986.

Further Information

Daniels, N. Just Health Care. Cambridge University Press, Cambridge, 1987.

Dubler, N. N. and Nimmons, D. Ethics on Call. Harmony Books, New York, 1992.

Jonsen, A. R. The New Medicine and the Old Ethics. Harvard University Press, Cambridge, MA, 1990.

Murphy, J. and Coleman, J. The Philosophy of Law. Rowman and Allenheld, 1984.

Saha, S. and Saha, P. S. “Ethical Issues of Animal and Human Experimentation in the Development of Medical Devices.” The Biomedical Engineering Handbook: Second Edition. Ed. Joseph D. Bronzino. Boca Raton: CRC Press LLC, 2000.






Ethical Issues Associated with the Use of Medical Technology

Subrata Saha

Clemson University

Joseph D. Bronzino

Trinity College/Biomedical Engineering Alliance for Connecticut (BEACON)

Professional Ethics in Biomedical Engineering Daniel E. Wueste

A Variety of Norms Govern Human Contact • Professional Ethics and Ethics Plain and Simple • Professions • The Profession of Biomedical Engineering • Two Sources of Professional Ethics • Professional Ethics in Biomedical Engineering • Tools for Design and Decision in Professional Ethics • Professional Integrity, Responsibility, and Codes

Beneficence, Nonmaleficence, and Technological Progress Joseph D. Bronzino

Defining Death: A Moral Dilemma Posed by Medical Technology • Euthanasia

Ethical Issues of Animal and Human Experimentation in the Development of Medical Devices Subrata Saha, Pamela S. Saha

Clinical Trials • Animal Experimentation

Regulation of Medical Device Innovation Joseph D. Bronzino

Ethical Issues in Feasibility Studies • Ethical Issues in Emergency Use • Ethical Issues in Treatment Use • The Safe Medical Devices Act


Biomedical engineering is responsible for many of the recent advances in modern medicine. These developments have led to new treatment modalities that have significantly improved not only medical care, but the quality of life for many patients in our society. However, along with such positive outcomes new ethical dilemmas and challenges have also emerged. These include: (1) involvement of humans in clinical research, (2) definition of death and the issue of euthanasia, (3) animal experimentation and human trials for new medical devices, (4) patient access to sophisticated and high cost medical technology, and (5) regulation of new biomaterials and devices. With these issues in mind, this section discusses some of these topics. The first chapter focuses on the concept of professional ethics and its importance to the practicing biomedical engineer. The second chapter deals with the role medical technology has played in the definition of death and the dilemmas posed by advocates of euthanasia. The third chapter focuses on the use of animals and humans in research and clinical experimentation. The final chapter addresses the issue of regulating the use of devices, materials, etc. in the care of patients.

Since the space allocated in this Handbook is limited, a complete discussion of the many ethical dilemmas encountered by practicing biomedical engineers is beyond the scope of this section. Therefore, it is our sincere hope that the readers of this Handbook will further explore these ideas from other texts and articles, some of which are referenced at the end of the chapter. Clearly, a course on biomedical ethics should be an essential component of any bioengineering curriculum.

With new developments in biotechnology and genetic engineering, we need to ask ourselves not only “can it be done?” but also “should it be done?” As professional engineers we also have an obligation to educate the public and other regulatory agencies regarding the social implications of such new developments. It is our hope that the topics covered in this section can provide an impetus for further discussion of the ethical issues and challenges faced by the bioengineer during the course of his/her professional life.

Wueste, D. E. “Professional Ethics in Biomedical Engineering.” The Biomedical Engineering Handbook: Second Edition.

Ed. Joseph D. Bronzino

Boca Raton: CRC Press LLC, 2000


Professional Ethics in Biomedical Engineering

A Variety of Norms Govern Human Conduct

Professional Ethics and Ethics Plain and Simple

Professions

The Profession of Biomedical Engineering

Two Sources of Professional Ethics

Professional Ethics in Biomedical Engineering

Tools for Design and Decision in Professional Ethics

Professional Integrity, Responsibility, and Codes

Daniel E. Wueste

Clemson University

A Variety of Norms Govern Human Conduct

Various norms or principles govern our activities. They guide us and provide standards for the evaluation of conduct. For example, our conduct is governed by legal and moral norms. These two sets of norms overlap, but it is clear that the overlap is not complete. Morality requires some acts that are not legally required of us and vice versa. Not surprisingly, then, we speak of legality and morality, of legal obligations and moral obligations. This is appropriate primarily because (1) law and morality are distinct sources of obligation and (2) it is possible for legal and moral obligations to conflict. Once it is recognized that moral and legal obligations have distinct sources, it is easier to appreciate the nature of a conflict between them and work toward its resolution. The same thing can be said within the sphere of morality. Moral rules and principles can and do give rise to conflicting obligations. Here too, the way people speak is revealing. People speak of professional obligations and social responsibilities, as well as the duties of “ordinary morality.” Such talk is increasingly heard in the high-tech fields of biomedicine and biomedical engineering, for here the development and use of sophisticated technology intersect with the rights and interests of human beings in an especially profound way.

Professional Ethics and Ethics Plain and Simple

Talk of professional ethics presupposes a distinction between the constraints that arise “from what it means to be a decent human being” (Camenisch, 1981) and those that come with one’s role or attach to the enterprise in which one is engaged. Paul Camenisch calls the former “ethics plain and simple.” The latter are the elements of an occupational or role morality. They only apply to persons who occupy a specific role. The idea here is simple enough. A father, for example, has responsibilities that a man who is not a father does not have. So too, a college teacher, a cleric, or a police officer has responsibilities that persons not occupying these roles do not have.

A professional ethic is an occupational or role morality. It is like the law in several ways. For example, it is not a mere restatement of the norms of ordinary morality. Another point of similarity is that like the law, its scope is limited; it has a jurisdiction in the sense that what it requires or permits is role specific. Thus, for example, the norms of the lawyer’s professional ethic do not impose obligations on people who are not lawyers. Still another similarity is that a professional ethic may allow (or even require) acts that ordinary morality disallows or condemns. An explanation of this fact begins with the observation that, in general, as professionals do their work they are allowed to put to one side considerations that would be relevant and perhaps decisive in the ethical deliberations of nonprofessionals. So, for instance, an attorney is free to plead the statute of limitations as a bar to a just claim against his/her client or to block the introduction of illegally seized evidence in a criminal trial even though his/her client committed the alleged offense. Now one thing that might be said here is that the attorney is simply doing his/her job. The point would be well taken. However, it leaves something rather important unsaid, namely, that the conduct in question is obligatory for the attorney.

Thinking along lines such as these is not confined to the paradigm professions of law and medicine. For example, scientists have been known to claim that as scientists they are free to (indeed must) put to one side social, political, and moral concerns about the uses to which their discoveries may be put. Indeed, some have claimed that scientists are morally obligated not to forgo inquiry even if what emerges from it can be put to immoral or horrendous uses. Plainly, such an appeal to a professional ethic can excite controversy. And this reveals another respect in which a professional ethic (role morality) is like law: it is subject to moral critique. Indeed, it is not only subject to moral criticism; its validity depends on its being morally justified (i.e., justified in terms of “ethics plain and simple”).

One important way in which a professional ethic differs from law is that it is widely held that its standards are in some sense higher than those of ordinary morality. More is expected of professionals than nonprofessionals. They are expected to act on the basis of the knowledge that sets them apart and in doing so they are expected to put the interests of clients or patients ahead of their own interests. Clearly, if professionals are governed by special, higher standards, if the rights and duties of professionals differ from the rights and duties of nonprofessionals, then it makes a difference whether one’s occupation counts as a profession.


Professions

As it happens, there is no generally accepted definition of the term “profession”. However, several writers have suggested that some characteristics common to recognized professions are necessary or essential. The idea that these writers share is that with these characteristics in mind one can mark a serviceable distinction between professional and nonprofessional occupations.

Bayles [1989] maintains that three features of a profession are necessary: (1) extensive training, (2) that involves a significant intellectual component, and (3) that puts one in a position to provide an important service to society. To be sure, other features are common. For example: (1) the existence of a process of certification or licensing, (2) the existence of a professional organization, (3) monopoly control of tasks, (4) self-regulation, and (5) autonomy in work. But, according to Bayles, they are not essential. The crucial point in the argument that they are not essential is that a large (and growing) number of professionals work in organizations (e.g., HMOs or institutions such as hospitals) where tasks are shared and activities are directed and controlled by superiors. Two things are noteworthy here. First, Bayles’s analysis does not include normative features among those that distinguish professions from nonprofessions. Second, in his analysis an occupation may count as a profession though it lacks certain features common to most professions. While Bayles’s decision not to include normative features among the distinguishing features of professions has been criticized by several writers who insist that such features are indeed essential, the second point has met with widespread agreement.

It will be well to avoid the controversy respecting what is and what is not essential to a profession. Happily, a survey of the substantial literature on professions reveals agreement on several points that, taken together, provide a helpful picture of professions and professional activity. This picture is rather like a sketch made by a police artist who works with descriptions provided by various witnesses. It has five elements. The first is the centrality of abstract, generalized, and systematic knowledge in the performance of occupational tasks. The second element is the social significance of the tasks the professional performs; professional activity promotes basic social values. The third element is the claim to be better situated/qualified than others to pronounce and act on certain matters. This claim reaches beyond the interests and affairs of clients. Professionals (experts) believe that they should define various aspects of society, life, and nature, and we generally agree. For example, we defer to them (or at least our elected representatives do) in matters of public policy and national defense. Moreover, in certain settings, a hospital for example, it is simply impossible not to defer to the judgment of experts/professionals. The crucial premise can scarcely be doubted: in the contemporary world there is more to know (much of it having immediate practical application) than any one person is capable of knowing. The fourth element is that, on the basis of their expertise and the importance of the work that requires it, professionals claim that as practitioners they are governed by role-specific norms—a professional ethic—rather than the norms that govern human conduct in general. Now, it cannot be denied that particular applications of the relevant norms have been a source of controversy. However, since controversy of this sort generally presupposes the applicability and validity of the role-specific norms, it only serves to confirm the general point.
The final element of the composite picture is that now most professionals work in bureaucratic organizations/institutions. The romantic appeal of the model of the professional as solo-practitioner may incline one to balk at this. But romance must not be allowed to prevail over evidence. And in this case the evidence is substantial. In fact, some of it is arresting. For example, it was recently reported that eighty percent of recent medical school graduates are salaried employees of HMOs, clinics, or hospitals (US News and World Report, 1999).

The Profession of Biomedical Engineering

As we have seen, it is not an idle question whether biomedical engineering is a profession. If professionals are governed by special, higher standards, if the rights and duties of professionals differ from the rights and duties of nonprofessionals, then it makes a difference whether one’s occupation counts as a profession.

Since there is no standard definition, no set of necessary and sufficient conditions the satisfaction of which would be decisive, in answering the question we will have to proceed in a different way. It is suggested that we think in terms of characteristics shared by recognized professions that, taken together, constitute a composite picture akin to what a police sketch artist might draw. What is seen when one looks at this picture? In particular, does the picture match the reality of biomedical engineering practice? Looking at this composite picture of a profession it is seen that (1) abstract, generalized, and systematic knowledge is crucial to the performance of occupational tasks, (2) these tasks promote basic social values, (3) practitioners claim to be better situated/qualified than others to pronounce and act on certain matters, (4) the conduct of practitioners is governed by role-specific norms, and (5) most of the work done by practitioners is done within bureaucratic institutions. The fit between the reality of biomedical engineering practice and this composite picture of professions is tight. Indeed, looking to this picture to answer the question whether biomedical engineering is a profession, it can scarcely be doubted that the answer is yes.

One possible objection to this answer might be that there is no code of ethics for bioengineers, and thus bioengineering does not count as a profession because the fourth requirement is not satisfied. The objection could be met by pointing to a code for bioengineers, if there were one. But a better response is that the objection itself is misplaced. It is based on the mistaken assumption that there is a set of conditions the satisfaction of which is necessary and sufficient for the ascription of the term “profession” to an occupation. But the existence of a code of ethics is neither necessary nor sufficient for an occupation to count as a profession; what we are working with is a composite picture, not a definition. The objection is misplaced for a second reason as well. Even if there is no code of ethics for bioengineers, codes of ethics are well known in engineering. For example, the National Society of Professional Engineers and the Institute of Electrical and Electronics Engineers have codes of ethics. In addition, the National Committee on Biomedical Engineering (Australia) has produced a set of professional standards. Moreover, a professional ethic may develop by other than quasi-legislative means; in this respect it is like law, which has both legislative and customary forms. Thus, there is nothing here to impugn the claim that bioengineering is a profession.

Two Sources of Professional Ethics

It is important to be clear about the fact that a professional ethic may develop in more than one way. A professional ethic is a role morality. The norms of a profession’s role morality need not be expressly “legislated” by, for example, a professional organization, because most of them are implicitly legislated in practice. Thus, one way to identify the norms of a profession’s role morality is to reflect on the expectations one has respecting the conduct of one’s peers. The stable interactional expectancies of practice can constitute what amounts to a customary morality of a profession, an uncodified professional ethic. To be sure, these customary norms may be codified (and often are). However, just as in the case of customary law, codification is not necessary for their validity. Indeed, it is entirely appropriate to say that codification of some such norms (like some laws) is the result of the codifier’s recognition of their independent validity.

It should be noted here that the norms of a customary morality are valid only if they are accepted in practice. It is important, however, that while acceptance is necessary, it is not sufficient. It cannot be sufficient, because if it were, the idea that something is right simply because someone or some group believes it is right would have to be granted. And that, of course, is patently false. What, then, are the additional conditions for the validity of such norms? This is surely a fair question. However, answering it completely would lead us far afield. Consequently, a short answer will have to suffice. The case for the validity of a norm of customary morality (in other words, for its status as a moral norm) turns on whether, in addition to being accepted in practice, compliance with it has good consequences and does not infringe upon the rights of other persons. It will be noticed that these are precisely the sorts of considerations that persons charged with the task of rule making do, or at any rate should, regard as decisive in doing their work. In any case, whether the norms of a professional ethic arise in practice or are expressly “legislated” by a group, they are “dual aspect norms” (Wellman, 1985). They are in play for role agents who are trying to decide what action to take; they are also in play for others within and outside of the profession who observe or by other means become aware of deviation from (or conformity to) them and react accordingly.

Professional Ethics in Biomedical Engineering

It is clear that biomedical engineering has an ethical dimension. After all, human well-being is at stake in much if not most of what a biomedical engineer does. Indeed, error or negligence on the part of a biomedical engineer can result in unnecessary suffering or death. Of course, much the same can be said of other engineering fields. Yet, there is something distinctive here. The National Committee on Biomedical Engineering has identified three ways in which biomedical engineering differs from other branches of engineering. First, biomedical engineers work with biological materials that behave differently from and have different properties than the materials that most engineers work with. Second, preparation for a career as a biomedical engineer involves study of both engineering principles and the life sciences. Third, and most important for present purposes, is “the indirect and very often direct responsibility of biomedical engineers for their work with patients.” Such responsibility for the well-being of others is a clear indicator that a role has an ethical dimension.

Many things in biomedical engineering that fall under the rubric of professional ethics have to do with policies or procedures. For example, the development of

(1) methods for obtaining informed consent and criteria for justified departure from these methods,

(2) means for identifying subjects for clinical trials,

(3) criteria of thorough testing,

(4) standards to obviate or mitigate conflicts of interest, as well as mechanisms for their application and enforcement, and

(5) criteria for just distribution of scarce biomedical resources (expertise and technology).

All of these things (in this far from exhaustive list) fall under the rubric of professional ethics. However, they are institutional in the sense that they call for decisions about policies or procedures rather than individual action. Decisions in these areas are not decisions to be made by individual practitioners nor are they decisions to be taken case by case. They are decisions about structures of practice that require quasi-legislative activity. Relying on others with requisite expertise, and soliciting input from persons whose interests are at stake in biomedical engineering practices, biomedical engineers should work to develop structures of practice that satisfy legal requirements and ensure, as far as possible, that their professional practice manifests a commitment to safety and the promotion of human well-being. These carefully designed structures of practice should be part of the explicitly quasi-legislated portion of the professional ethic of biomedical engineering.

The question that arises naturally here is how to proceed in this undertaking. Precisely which of the dominant approaches in ethics—utilitarian, deontological, or aretaic—is best is a subject of vigorous debate among philosophers. But this issue will not be debated here. Instead, a sketch will be presented of an approach that can be employed in designing ethical structures of practice, and then, with one difference to be explained, used in making individual ethical decisions in one’s capacity as a professional.

Tools for Design and Decision in Professional Ethics

Multiple analyses or several independent judges are often relied on in making decisions. In general, this is done when something significant turns on the final decision. For example, physicians frequently call for a consultation and patients are encouraged to seek a second opinion before an invasive procedure is performed. Similarly, hospitals and universities rely on panels or commissions—an Institutional Review Board, for example—to make decisions about proposed research or other pressing issues. In such cases it is assumed, rightly, that relying on multiple modes of analysis or several judges is wise even though (1) they produce the same judgment in many—hopefully most—cases and, thus, appear to involve redundancy; and (2) in some cases they produce conflict that could have been avoided by relying on a single mode of analysis or single judge. Why is this a wise course? One part of the answer is that our confidence is bolstered when the same conclusion is reached by different trustworthy means or judges. Here redundancy is a value. The second part of the answer is that being open to conflicting opinions and analyses can help us to avoid errors that would occur otherwise. This will happen when the conflict prompts reexamination of the question that reveals facts previously overlooked or undervalued or mistaken analyses. It should be noted that the approach recommended here is not political; logical and evidentiary considerations rather than simple consensus justify the judgments it produces. Randy Barnett sums up the case for such an approach:

The virtue of adopting multiple or redundant modes of analysis is… twofold: (a) convergence (or agreement) among them supports greater confidence in our conclusions; and (b) divergence (or conflict) signals the need to critically reexamine the issue in a search for reconciliation. In sum, convergence begets confidence, divergence stimulates discovery. [Barnett, 1990]

In the context of professional ethics an approach of this sort would involve reliance on three modes of analysis: (1) utilitarian, (2) rights-based deontological, and (3) role-based institutional. A brief description of each mode of analysis is presented in the following paragraphs.

A utilitarian analysis begins with the assumption that rightness is a function of value and tells us that what is morally required of us is the production of the greatest amount of good possible in a situation for all of the affected parties. Utilitarian thinking leads quite directly to an embrace of the familiar principles of nonmaleficence and beneficence. Deontological analysis denies the essential connection between rightness and goodness asserted by utilitarianism. Unlike utilitarians, deontologists hold that some actions are intrinsically right and some actions are intrinsically wrong. More particularly, they insist that the fact that the consequences of an action are the best possible in a given situation does not show that the action is right. The most famous of deontological theories, that of Immanuel Kant, teaches that what is morally required of us is that wherever found, in ourselves or others, humanity is always treated as an end and never as a mere means to an end. In other words, persons have intrinsic worth (as Kant said, they have a dignity rather than a price) and must never be treated as if their value were merely instrumental (as if they were things). That is our duty; the other side of that coin is the right others have to receive that sort of treatment from us. What matters on this approach, then, is whether one is treating people as they deserve to be treated. Thus, deontological thinking leads directly to an embrace of the familiar principles of autonomy and justice. These two modes of analysis are alike in this: both proclaim independence from the ideas of right and wrong commonly accepted in a community; on both views morality is not something that is instituted (made); rather, it is discovered (not by empirical investigation, but by ratiocination). Thus, these modes of analysis contrast sharply with the third mode of analysis to be discussed here, namely, the role-based institutional mode.

Thinking in terms of role morality, rightness is a function of conformity with the stable interactional expectations—accepted norms of conduct—associated with a role. Responsibilities and rights are tied to the function of a role agent within an institution. A role morality is institutional in the quite basic sense that it is instituted, that is, brought into existence by human beings. Sometimes, of course, this is accomplished by quasi-legislative means. But such activity is neither necessary nor the most common means of creation. A role morality is implicit in practice; it is established through mutually beneficial interaction over time. It is a customary or conventional morality. (There is an analogy here to the law in its customary and legislative forms.) One final point, noted earlier, is that considerations of non-institutional morality play a critical role in the validation of the norms of a role morality. The familiar maxim in medicine, primum non nocere, as well as the implicit rule of lawmaking that lawmakers must promulgate the laws they make (no secret laws), are examples of principles that would be readily embraced by those thinking along these lines and validated by considerations of non-institutional morality.

We can summarize this brief description of the three modes of analysis in the following way. The key question for the utilitarian is one of maximal value; for the rights-based deontological thinker it is one of deserved or rightful treatment; with role morality it is one of conformity to established custom or practice.

The recommended approach for the design of ethical structures of practice is to use all three modes of analysis. The hope is that they will converge on the same result. When they do we can be confident that implementation of the principle or policy is justifiable. That is the case, for example, with the rules of professional practice requiring fidelity, confidentiality, privacy, and veracity.

Using this approach one hopes for convergence on the same result. That is to be expected in easy cases. But not all cases are easy cases. Our analyses may diverge rather than converge. What then? Barnett suggests that divergence “stimulates discovery.” The idea is that achieving convergence may be difficult, but at least sometimes it may be achieved on a second pass by retracing one’s steps—going through the analyses again, paying special attention to the input—or rethinking the analyses themselves, paying special attention to previously identified sources of difficulty. For example, utilitarian and deontological analyses may diverge because they deal with individual rights in different ways. Deontologists treat rights as trumps; utilitarians simply include them among the considerations that count in their calculations respecting likely consequences. It may be that the two modes of analysis diverge because of an erroneous assignment of weight to the rights that are in play. If so, convergence can be achieved by rethinking the weight assigned to the rights in the utilitarian calculations. It must be admitted, however, that divergence may not be eliminated by such means. What then? In anticipation of such cases, a presumption should be made in favor of one of the modes of analysis.

When one’s project is the design of ethical structures of practice, the presumption should be made in favor of rights-based deontological analysis, it being understood that utilitarian considerations may rebut the presumption favoring rights in some circumstances. When one’s project is not the design of decision devices (structures of practice), but deciding what one ought to do as a professional, a presumption should be made in favor of institutional responsibilities, i.e., professional ethics, it being understood that utilitarian or deontological considerations may rebut the presumption in favor of role responsibilities under some circumstances. The point of this presumption is that the burden of justification is properly placed on those who would depart from valid norms. Two things argue in favor of this: (1) a presumption in favor of institutional responsibilities (professional ethics) presupposes a justification of the sort provided by the convergence of the three modes of analysis and (2) making this presumption guards against the dangers of failing to take the professional ethic seriously and robbing the earlier work—constructing ethical structures of practice—of its point.
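The procedure just described (run all three analyses, treat convergence as confirmation, treat divergence as a signal to re-examine, and fall back on a presumption when divergence persists) can be summarized schematically. The sketch below is purely illustrative: the names Verdict, decide, and presumption are inventions for this example, not an established framework from the chapter, and genuine ethical deliberation obviously cannot be reduced to boolean verdicts.

```python
# Schematic sketch of the multi-mode decision procedure described above.
# All identifiers here are illustrative inventions, not an established
# ethical-analysis API; verdicts are crudely modeled as booleans.

from dataclasses import dataclass

@dataclass
class Verdict:
    mode: str          # "utilitarian", "deontological", or "role"
    permissible: bool  # the verdict reached under that mode of analysis

def decide(verdicts, presumption="deontological"):
    """Return (decision, rationale).

    Convergence of all modes supports confidence in the shared verdict.
    Divergence signals the need to reexamine the analyses; if it persists,
    the presumption favors the designated mode, which the other modes may
    rebut only with explicit justification.
    """
    answers = {v.mode: v.permissible for v in verdicts}
    if len(set(answers.values())) == 1:
        # "Convergence begets confidence" (Barnett).
        return next(iter(answers.values())), "convergence: high confidence"
    # "Divergence stimulates discovery": reexamine, then fall back on
    # the presumption in favor of the designated mode of analysis.
    return answers[presumption], (
        f"divergence: presumption favors {presumption} analysis")

# Example: utilitarian analysis approves an action; deontological and
# role-based analyses do not. For designing structures of practice, the
# presumption favors the rights-based deontological verdict.
verdicts = [Verdict("utilitarian", True),
            Verdict("deontological", False),
            Verdict("role", False)]
decision, why = decide(verdicts)
# decision is False; the rationale records that the presumption in favor
# of deontological analysis carried the day.
```

The presumption parameter captures the burden-of-justification point: departing from the favored mode's verdict is possible, but only by explicitly rebutting the presumption, never by silently ignoring it.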

Professional Integrity, Responsibility, and Codes

Professional ethics involves more than merely complying with the norms of a code of ethics. This is true for several reasons, not the least of which is that there may not be a code. But even if there is a code, for example, the NSPE code, or the code of the AMA or ABA, there is still much more to professional ethics than compliance with the norms of that code. The rules and principles of a code set out the criteria for distinguishing malpractice from minimally acceptable practice. They do not reach to, nor do they define, responsible practice. Indeed, they cannot do this, for responsible practice is more than doing one’s duty (thus we speak of responsibilities rather than duties); responsible practice involves discretion and judgment in an essential way. Moreover, it involves the integration of professional judgment (expertise) and moral judgment [Whitbeck, 1998]. Here the boundaries between fact and value are fluid or at any rate they vary much as boundaries marked by a river, the course of which changes over time.

It is perhaps best to conceive professional ethics as a call for responsible conduct on the part of practitioners. The call is justified because the integrity of individual practitioners is required for the integrity of a profession, which, in turn, is necessary to justify the trust of others essential to the success of professional practice in any area. There is much work to be done in making clear what the demands of responsible practice are and in maintaining integrity in practice (which is produced by adherence to the standards of responsible practice). There is no ethical algorithm; responsible judgment and action are essential in the development, interpretation, and application of the normative principles governing the profession of biomedical engineering.


References

Abbott, A. The System of Professions, Chicago: University of Chicago Press, 1988.

Barber, B. Some problems in the sociology of the professions. In: Lynn, K. S. and the editors of Daedalus, Ed. The Professions in America, Boston: Houghton Mifflin Company, 1965.

Barker, R. Recent work in the history of medical ethics and its relevance to bioethics. American Philosophical Association Newsletter on Philosophy and Medicine, 1996; 96 Fall (1): 90-96.

Barnett, R. Foreword: of chickens and eggs—the compatibility of moral rights and consequentialist analysis. Harv. J. L. & Pub. Pol’y 1989, 12:611.

Barnett, R. The virtues of redundancy in legal thought. Cleveland State Law Review 1990, 38:153.

Baum, R. Engineers and the public: sharing responsibilities. In: Professional Ethics and Social Responsibility, Wueste, D. E. Ed. Lanham, MD: Rowman and Littlefield, 1994.

Bayles, M. Professional Ethics, 2nd ed., Belmont CA.: Wadsworth, 1989.

Beauchamp, T. L. and Childress, J. F. Principles of Biomedical Ethics 2nd ed., New York: Oxford University Press, 1983.

Beer, J. J. and Lewis, W. D. Aspects of the professionalization of science. In: Lynn, K. S. and the editors of Daedalus, Ed. The Professions in America. Boston: Houghton Mifflin Company, 1965.

Behrman, J. N., Essays on Ethics in Business and the Professions, Englewood Cliffs, NJ: Prentice Hall, 1988.

Callahan, J. Professions, institutions, and moral risk. In: Professional Ethics and Social Responsibility, Wueste, D. E. Ed. Lanham, MD: Rowman and Littlefield, 1994.

Camenisch, P. Business ethics: On getting to the heart of the matter, Business and Professional Ethics Journal 1981; 1:59.

Emmet, D. Rules, Roles and Relations, Boston: Beacon Press, 1975.

Hardwig, J. Toward an ethics of expertise. In: Professional Ethics and Social Responsibility, Wueste, D. E. Ed. Lanham, MD: Rowman and Littlefield, 1994.

Hughes, E. C. Professions. In: The Professions in America, Lynn, K. S. and the editors of Daedalus, Ed. Boston: Houghton Mifflin Company, 1965.

National Committee on Biomedical Engineering, Professional Standards in Biomedical Engineering, The Institution of Engineers, Australia, 1983.

Saha, P. and Saha, S. Ethical responsibilities of the clinical engineer, J. Clin. Eng., 1986; 11(1):17.

Saha, P. and Saha, S. Clinical trials of medical devices and implants: Ethical concerns, IEEE Eng. in Med. and Bio. Magazine 1988 (June):85.

U. S. News and World Report, March 15, 1999, 12.

Wellman, C. A Theory of Rights, Totowa, NJ: Rowman and Allanheld, 1985.

Whitbeck, C. Ethics in Engineering Practice and Research, New York: Cambridge University Press, 1998.

Wueste, D. E. Professions, professional ethics and bioengineering, Critical Reviews in Biomedical Engineering, 1997 25(2):127.

Wueste, D. E. Role moralities and the problem of conflicting obligations. In: Professional Ethics and Social Responsibility, Wueste, D. E. Ed. Lanham, MD: Rowman and Littlefield, 1994.

Bronzino, J. D. “Beneficence, Nonmaleficence, and Technological Progress.” The Biomedical Engineering Handbook: Second Edition. Ed. Joseph D. Bronzino. Boca Raton: CRC Press LLC, 2000.

Natural Language Processing in Biomedicine

Stephen B. Johnson
Columbia University

Linguistic Principles

Applications in Biomedicine
Speech Systems • Lexical Systems • Syntactic and Semantic Systems • Discourse Systems
Natural language is the primary means of communication in all complex social interactions. In biomedical areas, knowledge and data are disseminated in written form, through articles in the scientific literature, technical and administrative reports, and hospital charts used in patient care. Much vital information is exchanged verbally, in interactions among scientists, clinical consultations, lectures, and in conference presentations. Increasingly, computers are being employed to facilitate the process of collecting, storing, and distributing biomedical information. Textual data is now widely available in an electronic format, through the use of transcription services and word processing. Important examples include articles published in the medical literature and reports dictated during the process of patient care (e.g., radiology reports and discharge summaries).

While the ability to access and review narrative data is highly beneficial to researchers, clinicians, and administrators, the information is not in a form amenable to further computer processing, for example, storage in a database to enable subsequent retrievals. At present, the most significant impact of the computer in medicine is seen in processing structured data, information represented in a regular, predictable form. This information is often numeric in nature, e.g., measurements recorded in a scientific study, or made up of discrete data elements, e.g., elements selected from a predefined list of diseases.

The techniques of natural language processing provide a means to bridge the gap between textual and structured data, allowing humans to interact using familiar natural language, while enabling computer applications to process data effectively.

Linguistic Principles

Natural language processing (or computational linguistics) is a branch of computer science concerned with the relationship between information as expressed in natural language (sound or text) and represented in formalisms that facilitate computer processing. Natural language analysis studies how to convert natural language input into a structured form, while natural language generation investigates how to produce natural language output from structured representations. Natural language processing investigates and applies scientific principles of linguistics through development of computer systems. One of the most important principles is that natural language is built up from several layers of structure, each layer defined as a set of restrictions on the previous layer [Harris 1991]:

TABLE 188.1 Layers of Linguistic Structure

Linguistic Layer    Description
Phonology           Mapping of sounds to phonemes (letters)
Morphology          Grouping of phonemes into morphemes (roots and inflections)
Lexicon             Combining of morphemes into words
Syntax              Combining of words into sentence structure
Semantics           Mapping of sentence structure into literal meanings
Pragmatics          Combining of sentence meanings into discourse meaning
These layers correspond roughly to the subfields of linguistics, and to the areas of study in natural language processing, which seeks to develop them into computer models. Knowledge about the rules of language structure at any or all of its levels is called competence. Most natural language processing applications in biomedicine do not even begin to approach the language competence of humans. For example, a transcription system may have knowledge about the sounds of a language but know nothing about the syntax of sentences. Similarly, a program that indexes scientific articles may know which terms to look for in the text, but have no ability to express how these terms are related to one another. However, it is important to emphasize that such applications may perform extremely useful tasks, even with very limited competence.

Many natural language processing applications exploit the fact that biomedical fields are restricted semantic domains, which means that natural language information associated with that field is focused on a narrow range of topics. For example, an article about cell biology is limited to discussions of cells and tissues, and is unlikely to mention political or literary issues. The natural language of a restricted semantic domain is called a sublanguage. Sublanguages tend to have specialized vocabularies and specialized ways of structuring sentences (e.g., the “telegraphic style” of notes written about patients in hospital charts) and ways of organizing larger units of discourse (e.g., the format of a technical report) [Grishman and Kittredge 1986, Kittredge and Lehrberger 1982].

These properties of sublanguages allow the use of methods of analysis and processing that would not be possible when processing the language of newspaper articles or novels. For example, a program that indexes medical articles can select index terms from a list of terminology known to be of interest to researchers; a speech recognition system can exploit the fact that only certain words can be uttered by a user in response to a given prompt; a system that analyzes clinical reports can look for predictable semantic patterns that are characteristic of the given domain.
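The term-indexing strategy described above can be sketched in a few lines: a program scans text for terms drawn from a list known to be of interest in the domain. The term list and the concept codes below are invented for illustration and are not taken from any actual terminology.

```python
# A minimal sketch of sublanguage-based indexing: scan a clinical sentence
# for terms drawn from a small, hypothetical domain terminology list.
# The terms and the concept codes are illustrative only.

TERMS = {
    "pleural effusion": "C0032227",   # hypothetical concept codes
    "pneumothorax": "C0032326",
    "cardiomegaly": "C0018800",
}

def index_terms(text: str) -> list[str]:
    """Return concept codes for known domain terms found in the text."""
    lowered = text.lower()
    return [code for term, code in TERMS.items() if term in lowered]

report = "Mild cardiomegaly. No pneumothorax or pleural effusion."
print(index_terms(report))  # all three codes, in dictionary order
```

Because the vocabulary is small and domain-specific, simple substring matching already does useful work; a general-purpose indexer for unrestricted text could not rely on such a short list.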

Applications in Biomedicine

There are four forms of natural language exchange possible between humans and the computer: (1) the human can supply information to the computer using natural language; (2) the human can retrieve information from the computer through natural language; (3) the computer can supply information to the human in the form of natural language output; and (4) the computer can request information from the human using natural language questions. Applications of natural language processing may support one or several of these modes of exchange. For example, an automated questionnaire for patients may generate multiple choice questions but be able to receive only numbered answers, while a database interface may be able to accept natural language questions submitted by a researcher and return natural language replies. Applications can be classified according to the levels of language competence embodied in their design: (1) speech systems, (2) lexical systems, (3) syntactic and semantic systems, and (4) discourse systems.

Speech Systems

Speech recognition systems [Grasso 1995] process voice data into phonemic representations (such as text), while speech synthesis systems generate spoken output from such representations. Speech processing applications may work only at this level of language competence, e.g., software that transcribes spoken input into text, or software that generates speech when visual information cannot be employed, such as on the telephone. Applications in this category include systems that capture structured endoscopic reports [Johannes and Carr-Locke 1992], and emergency room notes [Linn et al., 1992]. Applications may also employ speech technology as part of a larger system. Examples include interfaces to diagnostic systems [Landau et al., 1989], [Shiffman et al., 1992], and systems for taking patient histories [Johnson et al., 1992].

Lexical Systems

Lexical systems work with language at the level of words and terms (short sequences of words). A lexicon is a special type of database that provides information about words and terms, which may include pronunciation, morphology (roots and affixes), and syntactic function (noun, verb, etc.). The Specialist Lexicon provides information about medical terms and general English words [National Library of Medicine 1993]. A thesaurus groups synonymous terms into semantic classes, which are frequently organized into a hierarchical classification scheme. The Systematized Nomenclature of Medicine (SNOMED) classifies medical terms in several hierarchies, such as (1) topography, (2) morphology, (3) etiology, (4) function, (5) disease, (6) occupation, and (7) procedure [Rothwell et al., 1993]. Medical Subject Headings (MeSH) classifies terms used in medical literature [National Library of Medicine 1990]. The Metathesaurus of the Unified Medical Language System (UMLS) combines SNOMED, MeSH, and other thesauri [Lindberg 1993].

Semantic classes are used to index natural language documents to facilitate their retrieval from a database. For example, MeSH is used to index the medical literature in the MEDLINE database [Bachrach 1978]. Semantic classes are also used to define the set of data elements (controlled vocabulary) that can be processed by a computer application. Controlled vocabularies are used by diagnostic systems and clinical information systems—by the programs that collect data, store it in databases, and display it [Linarrson and Wigertz 1989].
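As a sketch of how a hierarchical thesaurus supports retrieval, the fragment below indexes documents under narrow headings and retrieves them through a broader heading by expanding the hierarchy. The mini hierarchy, heading names, and document identifiers are invented for illustration; a real system would use a full scheme such as MeSH.

```python
# Illustrative thesaurus-based retrieval: documents indexed under narrow
# headings are found via a broader heading by expanding the hierarchy.
# The hierarchy and document ids below are invented for illustration.

CHILDREN = {
    "Heart Diseases": ["Myocardial Infarction", "Arrhythmia"],
    "Myocardial Infarction": [],
    "Arrhythmia": ["Atrial Fibrillation"],
    "Atrial Fibrillation": [],
}

INDEX = {  # heading -> set of document ids indexed under it
    "Myocardial Infarction": {1, 4},
    "Atrial Fibrillation": {2},
}

def expand(heading: str) -> list[str]:
    """Return the heading plus all of its descendants."""
    result = [heading]
    for child in CHILDREN.get(heading, []):
        result.extend(expand(child))
    return result

def retrieve(heading: str) -> set[int]:
    """Documents indexed under the heading or any narrower heading."""
    docs: set[int] = set()
    for h in expand(heading):
        docs |= INDEX.get(h, set())
    return docs

print(sorted(retrieve("Heart Diseases")))  # [1, 2, 4]
```

Note that a query on the broad heading finds documents indexed only under its narrower descendants, which is the point of organizing semantic classes hierarchically.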

Syntactic and Semantic Systems

Lexical techniques can only approximate the meaning of an article or some text entered by the user of a computer application. One approach to obtaining a deeper understanding is to produce a representation of the syntactic structure of a sentence (e.g., determining the subject, the verb, and the object), and then map this structure into a representation of its meaning (such as predicate calculus). A computer program that analyzes the structure of sentences is called a parser, and uses a lexicon and a grammar (a formal representation of the syntactic rules of a language).
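A toy version of such a parser can make the idea concrete: a lexicon assigns syntactic categories, and a single grammar rule (NOUN VERB NOUN) yields a subject-verb-object analysis. The lexicon and rule below are hypothetical and vastly simpler than any real medical-language grammar.

```python
# A toy parser: the lexicon assigns categories, and the grammar rule
# S -> NOUN VERB NOUN yields subject, verb, and object.
# Lexicon entries are hypothetical, chosen only for illustration.

LEXICON = {
    "radiograph": "NOUN", "film": "NOUN", "infiltrate": "NOUN",
    "shows": "VERB", "demonstrates": "VERB",
}

def parse(sentence: str):
    """Parse 'NOUN VERB NOUN' sentences into a subject/verb/object frame."""
    words = sentence.lower().rstrip(".").split()
    tags = [LEXICON.get(w) for w in words]
    if tags == ["NOUN", "VERB", "NOUN"]:
        return {"subject": words[0], "verb": words[1], "object": words[2]}
    return None  # sentence not covered by this tiny grammar

print(parse("Radiograph shows infiltrate."))
```

A sentence outside the grammar's coverage simply fails to parse, which is why real systems need far larger lexicons and grammars with many rules.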

Representations of semantic content (or knowledge representation) vary, but most representations are variations of a structure called a frame, in which predefined slots are filled with information from natural language sentences. A number of systems process clinical reports into their own specialized frame representations, for example the Linguistic String Project [Sager et al., 1986], MedLEE [Friedman 1994], and SymText [Haug 1997]. The representation of “conceptual graphs” [Sowa 1984] has emerged as a potential standard for semantics and is used for medical language processing by a number of systems which include METEXA [Schroder 1992], MENELAS [Bouaud et al., 1997], and RECIT [Baud et al., 1992].
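The frame idea can be sketched as follows: a predefined set of slots is filled from a simple semantic pattern found in a sentence. The slot names, the regular-expression pattern, and the finding vocabulary below are invented for illustration and are not from any of the systems cited above.

```python
# A minimal sketch of a frame representation: predefined slots are filled
# from a simple semantic pattern. Slot names and the pattern are invented.

import re

def fill_finding_frame(sentence: str) -> dict:
    """Fill a 'finding' frame from sentences like 'no infiltrate in the left lobe'."""
    frame = {"finding": None, "site": None, "negated": False}  # predefined slots
    m = re.search(r"(no )?(infiltrate|effusion)( in the ([a-z ]+))?",
                  sentence.lower())
    if m:
        frame["negated"] = m.group(1) is not None
        frame["finding"] = m.group(2)
        frame["site"] = m.group(4)
    return frame

print(fill_finding_frame("No infiltrate in the right upper lobe."))
```

Even this crude pattern captures negation and anatomic site, two distinctions that a bag of index terms cannot represent, which is why frame-producing systems support deeper retrieval than lexical indexing.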

Information from the sentences of a discourse (e.g., an article or clinical report) combines in complex ways; for example, cross-references are made (e.g., using pronouns), and events are related temporally or causally. Understanding a discourse fully may require information about the context in which the natural language exchange occurs, or the background knowledge of a technical field. Systems based on syntax and semantics usually carry out an analysis of the discourse and context information as a subsequent stage. Other systems attempt to approximate the semantic analysis of sentences, using medical knowledge to guide the overall processing of a text by defining the possible sequences of topics and subtopics characteristic of the domain. Domains in which these methods have been attempted include: (1) the history and physical sections of a patient chart [Archbold and Evans 1989]; (2) echocardiography reports [Canfield et al., 1989]; (3) discharge summaries [Gabrieli and Speth 1987]; and (4) radiology reports [Ranum 1988].

Natural language is also a convenient mode for the output of applications, for example in producing clinical reports [Kuzmak and Miller 1983]. Some expert systems have used natural language to express their recommendations [Rennels et al., 1989], and to provide explanations of their reasoning process [Lewin 1991].


The design of controlled vocabularies is an important area of research [Cimino et al., 1989]. As yet, no comprehensive vocabulary for clinical medicine exists, and the effort to create one is beyond the means of any single institution. Indexing of articles and reports is currently a labor-intensive process, and quality control is a significant problem. Index terms are a poor approximation of the meaning of a text. Similarly, systems that capture clinical information through the entry of individual findings fall short of capturing the patient history. Natural language processing systems with greater understanding of syntax and semantics may help by providing richer information representations.

No natural language processing system currently covers the complete range of language interaction with significant competence at all levels, from speech to discourse. In the near future, the systems with the greatest practical success will specialize in performing selected language processing tasks in well-defined domains.

Defining Terms

Controlled vocabulary: A set of well-defined data elements intended for use in a computer application.

Frame representation: A structure for representing complex information, consisting of fixed, named "slots" which may contain data values or frames.

Grammar: A formal representation of language structure, usually syntactic structure.

Language competence: Knowledge about the rules of a language.

Lexicon: A formal compilation of the words of a language, with information about phonetics, morphology, syntax, and semantics (depending on requirements of computer applications).

Morphology: Field studying how sounds combine into morphemes, and how morphemes combine to form words.

Parser: A computer program that uses a grammar and lexicon to analyze sentences of a language, producing a syntactic or semantic representation.

Phonology: Field studying the rules that govern the sounds used in languages.

Pragmatics: Field studying how sentences are combined to form larger units of discourse, and how to represent meaning in the context of a situation.

Semantics: Field studying the relation between natural language sentences and formal representations of meaning.

Structured data: Data that has an explicit, unambiguous, and regular structure, making it amenable to computer processing.

Sublanguage: The natural language of a restricted semantic domain that is focused on a specific set of tasks or purposes, e.g., interpreting X-ray images.

Syntax: Field studying how words combine to form sentences.


References

Allen, J. 1995. Natural Language Understanding, Second Edition. Redwood City, CA: The Benjamin Cummings Publishing Company, Inc.

Archbold, A. and Evans, D. 1989. On the Topical Structure of Medical Charts. In: Proceedings of the 13th Annual SCAMC, p. 543-547. IEEE Computer Society Press, Washington, D. C.

Bachrach, C. A. and Chaen, T. 1978. Selection of MEDLINE contents, the development of its thesaurus, and the indexing process, Medical Informatics. 3(3): 237-254.

Baud, R. H., Rassinoux, A. M., and Scherrer, J. R. 1992. Natural Language Processing and semantical representation of medical texts. Meth. Inform. Med. 31(2):117-125.

Bouaud, J., Zweigenbaum, P., et al., 1997. A Semantic Composition Method Driven by Domain Knowledge Models. Twenty-First Annual Symposium of the American Medical Informatics Association.

Canfield, K., Bray, B., Huff, S., and Warner, H. 1989. Database capture of natural language echocardiography reports: A Unified Medical Language System approach. In: Proceedings of the 13th Annual SCAMC, p. 559-563. IEEE Computer Society Press, Washington, D. C.

Cimino, J. J., Hripcsak, G., Johnson, S. B., Friedman, C., Fink, D. J., and Clayton, P. D. 1989. Designing an Introspective, Multi-Purpose Controlled Medical Vocabulary. Proceedings of the 13th Annual SCAMC, p. 513-518. IEEE Computer Society Press, Washington, D. C.

Friedman, C., Alderson, P. O., Austin, H. M., Cimino, J. J., and Johnson, S. B. 1994. A general natural language text processor for clinical radiology. JAMIA 1(2): 161-174.

Gabrieli, E. and Speth, D. 1987. Computer Processing of Discharge Summaries. In: Proceedings of the 11th Annual SCAMC, p. 137-140. IEEE Computer Society Press, Washington, D. C.

Haug, P. J. et al., 1994. A Natural Language Understanding System Combining Syntactic and Semantic Techniques. Eighteenth Annual Symposium of the American Medical Informatics Association.

Grasso, M. A. Automated speech recognition in medical applications. MD Computing, 1995;12(1):16-23.

Grishman, R. and Kittredge, R., Eds. 1986. Analyzing Language in Restricted Domains: Sublanguage Description and Processing. Erlbaum Associates, Hillsdale, New Jersey.

Harris, Z. 1991. A Theory of Language and Information: A Mathematical Approach. Clarendon Press, Oxford.

Johannes, R. S. and Carr-Locke, D. L. 1992. The Role of Automated Speech Recognition in Endoscopic Data Collection. Endoscopy. 24(Suppl 2): 493-498.

Johnson, K., Poon, A., Shiffman, S., Lin, R., and Fagan, L. 1992. A history taking system that uses continuous speech recognition. In: Proceedings of the 16th Annual SCAMC, p. 757-761. IEEE Computer Society Press, Washington, D. C.

Kittredge, R. and Lehrberger, J., Eds. 1982. Sublanguage—Studies of Language in Restricted Semantic Domains. De Gruyter, New York.

Kuzmak, P. M. and Miller, R. A. 1983. Computer-aided generation of result text for clinical laboratory tests. In: Proceedings of the 7th Annual SCAMC, p. 275-278. IEEE Computer Society Press, Washington, D. C.

Landau, J. A., Norwich, K. N., and Evans, S. J. 1989. Automatic Speech Recognition—Can it Improve the Man-Machine Interface in Medical Expert Systems? Int. J. Biomed. Comp. 24(2): 111-117.

Lewin, H. C. 1991. HF-Explain: a natural language generation system for explaining a medical expert system. In: Proceedings of the 15th Annual SCAMC, p. 644-648. IEEE Computer Society Press, Washington, D. C.

Lindberg, D. A. B., Humphreys, B. L., and McCray, A. T. 1993. The Unified Medical Language System. In: Yearbook of Medical Informatics, van Bemmel, J. H., McCray, A. T., Eds., p. 41-51. International Medical Informatics Association, Amsterdam.

Linarrson, R. and Wigertz, O. 1989. The data dictionary—a controlled vocabulary for integrating clinical databases and medical knowledge bases. Meth Inform. Med. 28(2): 78-85.

Linn, N. A., Rubenstein, R. M., Bowler A. E., and Dixon, J. L. 1992. Improving the Quality of Emergency Room Documentation Using the Voice-Activated Word Processor: Interim Results. In: Proceedings of the 16th annual SCAMC, p. 772-776. McGraw Hill, New York.

National Library of Medicine. 1990. Medical Subject Headings (NTIS NLM-MED-90-01). National Library of Medicine. Bethesda, Maryland.

National Library of Medicine. 1993. The Specialist Lexicon. Natural language systems group, National Library of Medicine, Bethesda, MD.

Ranum, D. 1988. Knowledge Based Understanding of Radiology Text. In: Proceedings of the 12th Annual SCAMC, p. 141-145. IEEE Computer Society Press, Washington, D. C.

Rennels, G., Shortliffe, E., Stockdale, F., and Miller, P. 1989. A computational model of reasoning from the clinical literature. AI Magazine. 10(1): 49-57.

Rothwell, D. J., Palotay, J. L., Beckett, R. S., and Brochu, L., Eds. 1993. The Systematized Nomenclature of Medicine. SNOMED International. College of American Pathologists, Northfield, Illinois.

Sager, N., Friedman, C., and Lyman, M. 1987. Medical Language Processing—Computer Management of Narrative Data. Addison-Wesley, Reading, Mass.

Scherrer, J. R., Cote, R. A., and Mandil, S. H. 1989. Computerized natural language medical processing for knowledge representation. North Holland, Amsterdam.

Schroder, M. 1992. Knowledge-Based Processing of Medical Language: A Language Engineering Approach. In: Advances in Artificial Intelligence, 16th German conference on AI, p. 221-234. Springer Verlag, Berlin.

Shiffman, S., Lane, C. D., Johnson, K. B., and Fagan, L. M. 1992. The integration of a continuous speech recognition system with the QMR diagnostic program. In: Proceedings of the 16th Annual SCAMC, p. 767-771. IEEE Computer Society Press, Washington, D. C.

Spyns, P. 1996. Natural Language Processing in Medicine. Meth. Inform. Med. 35:285-301.

Sowa, J. F. 1984. Conceptual Graphs: Information Processing in Mind and Machine. Addison-Wesley, Reading, MA.

Van Bemmel, J. H. (Ed.). Meth. Inform. Med., 1998:4(5).

Further Information

For general information about the field of natural language processing, see [Allen 1995] and [Covington 1994]. Surveys of research in sublanguage can be found in [Kittredge and Lehrberger 1982] and [Grishman and Kittredge 1986]. A variety of papers on medical language processing are collected in [Scherrer et al., 1989] and [Van Bemmel 1998].

Geddes, L. A. “Historical Perspectives 5: Electroencephalography.” The Biomedical Engineering Handbook: Second Edition. Ed. Joseph D. Bronzino. Boca Raton: CRC Press LLC, 2000.

Historical Perspectives 5: Electroencephalography

Leslie A. Geddes
Purdue University

Historical Background • Commercial Production of EEG Machines

Historical Background

Hans Berger (1929) was the first to record electroencephalograms from human subjects. However, before then, it was well known that the brain produced electrical signals. In fact, in Berger’s first paper there is a short history of prior studies in animals. Interestingly, the first person to demonstrate the electrical activity of the brain did not make recordings. In 1875, Richard Caton in the United Kingdom used the Thomson (Kelvin) sensitive and rapidly responding reflecting telegraphic galvanometer to display the electrical activity of exposed rabbit and monkey brains. His report [Caton, 1875], which appeared in the British Medical Association Journal, occupied only 21 lines of a half-page column. In part, the report stated:

In every brain hitherto examined, the galvanometer has indicated the existence of electric currents. The external surface of the grey matter is usually positive in relation to the surface of a section through it. Feeble currents of varying direction pass through the multiplier [galvanometer] when the electrodes are placed on two points of the external surface, or one electrode on the grey matter, and one on the surface of the skull. The electric currents of the grey matter appear to have a relation to its function. When any part of the grey matter is in a state of functional activity, its electric current usually exhibits negative variation. For example, on the areas shown by Dr. Ferrier to be related to rotation of the head and to mastication, negative variation of the current was observed to occur whenever those two acts respectively were performed. Impressions through the senses were found to influence the currents of certain areas; e. g., the currents of that part of the rabbit’s brain which Dr. Ferrier has shown to be related to movements of the eyelids, were found to be markedly influenced by stimulation of the opposite retina by light.

No recordings of the movement of the spot of light on the scale of the Kelvin galvanometer have been found, perhaps because, at that time, telegraphic operators used to read the dots and dashes of the Morse code by watching the movements of the spot of light on the galvanometer scale. Nonetheless, Caton’s description clearly shows that he witnessed the fluctuating potentials that we now know exist. Also important is the fact that Caton was the first to report visual-evoked potentials.

Berger, a psychiatrist in Jena, Germany, was aware of the several prior electroencephalographic animal studies and had conducted experiments using dogs. The only recording devices available to him were the string galvanometer, developed by Einthoven [1903] for electrocardiography, and the capillary electrometer developed by Marey [1876]. Although there were a few mirror-type oscillographs available for recording waveforms from alternating current (50 to 60 Hz) generators and transformers, the sensitivity of such devices was very low, and they could not be used for bioelectric recording without a vacuum-tube amplifier.

The voltage appearing on the scalp produced by the brain is only about one-tenth that of the ECG detected with limb leads. To enable recording brain activity with the string galvanometer, the tension in the string was reduced, which increased the sensitivity but reduced the speed of response. This was the method used by Berger when he found that the capillary electrometer was unsatisfactory.


FIGURE HP5.1 (a) The Edelmann double-string galvanometer of the type used by Berger. On the left is a funnel for detecting the arterial or venous pulse and on the right is a device for detecting the heart sounds, all of which can be recorded along with the two channels of ECG and a time signal on 12-cm-wide photographic paper. (b) The phonocardiogram, venous-pulse record, and two recordings of the ECG. (From Zusatzapparate zum Elektrokardiographen, Siemens-Reiniger-Veifa, Berlin, ca. 1926.)

Apart from problems of the lack of sensitivity and speed of response in the recording apparatus, Berger faced severe electrode problems. Because of the low amplitude of the cortical signals, the electrodes had to be very stable, producing no voltages that could be seen on the electrocortical recordings. In other words, the electrode noise and stability had to be in the low-microvolt range. Zinc electrodes were popular at that time, and Berger used them in his first dog studies just after the turn of this century. For human use, he used zinc-plated needles (insulated down to the tip) and inserted them through existing trephine holes of skull defects so that the tip was epidural. The electrodes were sterilized with 10% formalin solution. Berger first used the single-string Edelmann galvanometer designed for electrocardiography. Later he used a two-string unit of the type shown in Fig. HP5.1 so that he could record the ECG along with the EEG. Such units also were equipped with devices to record the venous or arterial pulses and heart sounds. The photographic recording paper was 12 cm wide and 50 m in length.

The string galvanometer is a current-drawing device; therefore, a low electrode-subject resistance was necessary for adequate sensitivity. Berger stated that it was difficult to obtain a low resistance and reported a value of 1600 Ω for his needle electrodes when placed 5 to 6 cm apart with the tips in the epidural space. A high-resistance electrode pair would reduce the amplitude of the recorded activity. Later Berger used chlorided silver needle electrodes to help solve this problem.
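Berger's concern with electrode resistance can be illustrated with a back-of-the-envelope calculation: the electrode pair and the current-drawing galvanometer form a voltage divider, so higher electrode resistance delivers less of the scalp signal to the instrument. The 1600- and 7600-ohm figures are from the text; the galvanometer resistance used here is an assumed value chosen only for illustration.

```python
# Voltage-divider sketch of why electrode resistance mattered for a
# current-drawing galvanometer. Berger's electrode resistances are from
# the text; the galvanometer resistance is an ASSUMED illustrative value.

def fraction_delivered(r_electrodes: float, r_galvanometer: float) -> float:
    """Fraction of the scalp signal appearing across the galvanometer."""
    return r_galvanometer / (r_electrodes + r_galvanometer)

r_galv = 8000.0  # ohms; assumed, for illustration only
for r_elec in (1600.0, 7600.0):
    print(f"{r_elec:>6.0f} ohm electrodes -> "
          f"{fraction_delivered(r_elec, r_galv):.0%} of signal")
```

With these assumed numbers, raising electrode resistance from 1600 to 7600 Ω cuts the delivered signal from roughly 83% to about 51%, which is why Berger worked hard to keep electrode resistance low.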

Difficulties with the zinc-plated needle electrodes led Berger to develop very thin lead-foil electrodes, wrapped in flannel, and soaked in 20% NaCl solution. The combination of saline-soaked flannel and the use of a rubber bandage to hold the electrodes on the scalp permitted recording the EEG for hours without the saline evaporating. The resistance measured between a pair of such electrodes ranged from 500 to 7600 Ω depending on the size of the electrodes.

Berger’s final improvement to his recording equipment consisted of placing a capacitor in series with the electrodes and string galvanometer to block the steady potential difference due to slight electrochemical differences in the two lead electrodes. Recall that the amplitudes being recorded were in the range of tens of microvolts, and any steady difference in electrode potential would cause a steady deflection of the baseline of the recording. With the capacitor, this steady offset potential did not deflect the galvanometer baseline.

Describing his first studies with the zinc-plated needle electrodes, Berger stated [translation by Gloor, 1969]:

As a general result of these recordings with epidural needle electrodes I would consequently like to state that it is possible to record continuous current oscillations, among which two kinds of waves can be distinguished, one with an average duration of 90 ct, the other with one of 35 ct. The longer waves of 90 ct are the ones of larger amplitude; the shorter, 35 ct waves are of smaller amplitude. According to my observations there are 10 to 11 of the larger waves in one second, of the smaller ones, 20 to 30. The magnitude of the deflections of the larger 90 ct waves can be calculated to be about 0.00007 to 0.00015 V, that of the smaller 35 ct waves, 0.00002 to 0.00003 V. [The symbol ct was used for milliseconds.]
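Berger's duration figures can be checked against his counted rates: a duration expressed in ct (milliseconds) implies a frequency of 1000 divided by that duration. The short calculation below confirms that 90 ct waves recur about 11 times per second and 35 ct waves about 29 times per second, consistent with his counts of 10 to 11 and 20 to 30.

```python
# Consistency check on Berger's figures: convert wave durations in ct
# (milliseconds) to the number of waves per second.

def frequency_hz(duration_ct: float) -> float:
    """Frequency implied by a wave duration given in ct (milliseconds)."""
    return 1000.0 / duration_ct

print(f"90 ct -> {frequency_hz(90):.1f} waves per second")  # alpha range
print(f"35 ct -> {frequency_hz(35):.1f} waves per second")  # beta range
```

The 90 ct waves thus fall in what Berger would name the alpha range and the 35 ct waves in the beta range.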

In his first paper, Berger called the dominant low-frequency waves first order and the higher-frequency waves second order. In his second paper he stated:

For the sake of brevity I shall subsequently designate the waves of first order as alpha waves = α-w, the waves of second order as beta waves = β-w, just as I shall use “E. E. G.” as the abbreviation for the electroencephalogram and “E. C. G.” for the electrocardiogram.

The average duration of the waves reproduced in [his] Figure 1 is for the α-w = 120 ct and for the β-w = 30 to 40 ct.

Berger objected to the term electrocerebrogram to designate a record of the electrical activity of the brain. He stated:

Because for linguistic reasons, I hold the word “electrocerebrogram” to be a barbarism, compounded as it is of Greek and Latin components, I would like to propose, in analogy to the name “electrocardiogram,” the name “electroencephalogram,” which here for the first time was demonstrated by me in man.

Berger’s papers contain many EEGs from patients, but those from his son Klaus are discussed frequently. In fact, Klaus was used as a subject for electrode testing. For example, Berger wrote:

Klaus’ records were taken with every other possible type of electrodes: silver, platinum, lead electrodes, etc; also, different arrangements of these on the skin surface of the head were used. However, time and again it was found that the best arrangement was that with electrodes placed on the forehead and occiput. Of Klaus’ many records, I only want to show another small segment of a curve obtained in this manner [Fig. HP5.2]. In this instance head-band electrodes were applied to the forehead and


FIGURE HP5.2 Klaus at the age of 15. Double-coil galvanometer. Condenser inserted. Recording from forehead and occiput with head-band electrodes. (Top) The record obtained from the scalp; (bottom) time in 1/10 second.

occiput and were fixed with rubber bandages. From these head-band electrodes, records were taken with galvanometer 1 of the double-coil galvanometer; galvanometer 2 was set at its maximum sensitivity and was used as a control to make sure that no outside currents were entering the galvanometer circuit to disturb the examination. At that time I was still very distrustful of the findings I obtained and time and again I applied such precautionary measures. The record of galvanometer 2 ran as a completely straight line, without any oscillation.

From the lengthy discussions in Berger’s papers, it is easy to see that few believed that his recordings originated in the brain. To dispel some of the uncertainty, Berger usually recorded the ECG along with the EEG. Occasionally, he recorded heart sounds and the arterial pulse. In addition to the electrical activity of the heart, various critics proposed that Berger’s recorded activity was due to friction of blood in the cerebral arteries, pulsations of the brain and/or scalp, respiration, contraction of piloerector or skeletal muscle, and glandular activity. Berger dealt with all the potential artifacts, pointing out that their time course and frequency were different from those of the EEG, and showed that the EEG continued during transient slowing of the heart rate. Finally, he stated (perhaps in exasperation):

I therefore believe I have discussed all the principal arguments against the cerebral origin of the curves reported here which in all their details have time and again preoccupied me, and in doing so I have laid to rest my own numerous misgivings.

Among the first to publish an English-language verification of Berger’s observations were Jasper and Carmichael [1935], then at Brown University in the United States. Silver electrodes (1 to 2 cm in diameter), covered with flannel and soaked in saline, were connected to an amplifier/mirror-oscillograph system. They were able to confirm Berger’s findings and extend them, showing with a two-channel system used to record the EEG of a girl with a convulsive disorder that the alpha wave frequency was 10 per second on the left side of the head and 6 to 8 per second on the right side, one of the early indications that the EEG was altered by brain pathology. Figure HP5.3a shows one of the records obtained by Jasper and Carmichael.

Jasper later came to the Montreal Neurological Institute (McGill University) and created the Electrophysiological Laboratory, which was formally opened with a celebratory meeting held on February 24-26, 1939. In attendance were the world leaders in electrophysiology.

Clinical EEG at the Montreal Neurological Institute was inaugurated using a machine built by Andrew Cipriani. The inkwriter was of unique design and featured a strong magnetic field produced by an electromagnet. In this field was a circular coil coupled to an inkwriting pen. It is interesting to observe that this principle had been used by d’Arsonval [1891] with pneumatic coupling to a tambour that caused a pen to write on a rotating drum, as shown in Fig. HP5.4a. Later in the 1930s, this coil design was coupled to a conical diaphragm and became the first dynamic loudspeaker. The pen motor devised by Cipriani is sketched in Fig. HP5.4b. A small vane, affixed to the writing stylus, dipped into an oil chamber (dashpot) to provide damping. This four-channel instrument was in routine use when the author first came in contact with Jasper in the early 1940s; it was replaced in 1946 by a six-channel Model III Grass EEG.
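The dashpot on Cipriani's pen motor served the purpose damping serves in any moving-coil writer: the pen behaves roughly as a second-order mass-spring system, and too little damping makes it overshoot on fast deflections. A rough numerical sketch (all parameter values invented for illustration, not taken from the instrument):

```python
import math

def step_response_overshoot(zeta, wn=2 * math.pi * 50, dt=1e-5, t_end=0.2):
    """Peak fractional overshoot of x'' + 2*zeta*wn*x' + wn^2*x = wn^2
    (unit step input), integrated with semi-implicit Euler steps."""
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = wn * wn * (1.0 - x) - 2.0 * zeta * wn * v  # spring + damper forces
        v += a * dt
        x += v * dt
        peak = max(peak, x)
    return max(0.0, peak - 1.0)

print(step_response_overshoot(0.2))  # lightly damped: large overshoot
print(step_response_overshoot(0.7))  # near-critical: small overshoot
```

Adjusting the vane depth in the oil chamber amounts to tuning zeta toward the near-critical value that writes fast without ringing.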

Meanwhile, at Harvard University (Boston, Mass.), Gibbs, Davis, and Lennox [1935] were pursuing their interest in epilepsy. Recognizing the potential of the EEG for the diagnosis of epilepsy, they initiated a series of studies that would occupy them for decades. They recorded with an ink-writing telegraphic recorder called the Undulator. In December of 1935, they published their first paper on the EEG, which carried a footnote that read: “This paper is No. XVII in a series entitled Studies in Epilepsy.” Citing the Berger papers and that by Jasper and Carmichael, Gibbs et al. stressed the importance of direct-inking pens for immediate viewing of the EEG so that the effect of environmental factors could be identified immediately. They reported:

The method is exceedingly simple. Electrical contact is made to two points on the subject’s head. Except for the study of grand mal epileptic seizures we regularly employ as electrodes two hypodermic

Natural Language Processing in Biomedicine

FIGURE HP5.3 The first U.S. records of the EEG to confirm Berger’s report. (a) The records obtained by Jasper and Carmichael (1935). The first channel shows the alpha waves, and the second shows the electrical activity detected by electrodes on the leg above the knee. The second record shows alpha inhibition by illumination of the retina, and the third record shows return of the alpha waves when the light was extinguished. (b) The first records published by Gibbs et al. (1935) showing alterations in normal subjects by various types of sensory stimulation: eyes open and closed, problem solving, noise (rattle), and smelling ether. (Both by permission.)

needles, inserted one into the scalp at the vertex of the skull and the other into the lobe of the left ear.

Enough procaine hydrochloride is injected previously to insure the continued comfort of the subject.

Figure HP5.3b is a reproduction of the first EEG obtained by Gibbs et al. Soon Gibbs produced his well-known Atlas of Electroencephalography, which first appeared in 1941 and became the “bible” for training electroencephalographers.

The first recording equipment used by Gibbs et al. [1935] was built by Lovett Garceau. It consisted of a four-stage, single-sided, resistance-capacity-coupled amplifier made with high-gain, screen-grid tubes driving a direct-inking telegraphic recorder called the Undulator (U in Fig. HP5.5) obtained from the Western Union Telegraph Company. Fig. HP5.5 shows the circuit diagram. The overall high-frequency response extended to 25 Hz [Grass, 1984].
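Resistance-capacity coupling between stages is also what sets the low-frequency end of such an amplifier: each coupling network is a high-pass filter with a -3 dB corner at 1/(2πRC). A generic illustration (component values invented, not Garceau's):

```python
import math

def rc_corner_hz(r_ohm, c_farad):
    """-3 dB corner frequency of an RC high-pass coupling network."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# With a 1-MOhm grid resistor and a 0.1-uF coupling capacitor, the corner
# sits well below alpha frequencies, so the EEG band passes through.
print(rc_corner_hz(1e6, 0.1e-6))  # ~1.6 Hz
```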

Slightly earlier, in Germany, Toennies [1932] had developed a direct-inking recorder that he called the Neurograph. Fig. HP5.6 shows a picture of the instrument and a record of the human electrocardiogram and the response of canine eyes to light. Note that the chart is running in the opposite direction to conventional recordings. The time marks are 1/5 second.


FIGURE HP5.4 Electropneumatic inkwriter, described by d’Arsonval (1891) (a) and sketch of moving-coil inkwriter devised by Cipriani in the late 1930s for EEG at the Montreal Neurological Institute (b).


FIGURE HP5.5 The single-sided, resistance-capacity-coupled amplifier developed by Garceau to drive the Western Union inkwriting telegraphic recorder (Undulator) used by Gibbs et al. to record EEGs. (From Garceau et al. 1935. Arch Neurol Psychiatry. With permission.)


FIGURE HP5.6 Toennies Neurograph and recordings showing a 1-mV calibration, the human ECG, and electrical activity from the canine eye, time marks 1/5 s. (From Toennies [1932].)

Commercial Production of EEG Machines

The excellent collaboration between Gibbs and Grass in 1935 resulted in replacement of the Undulator telegraphic inkwriter with a robust, d’Arsonval-type inkwriter in early 1936. Meanwhile, the Grass Instrument Company had been founded (1935) to produce electrophysiologic equipment at the well-known address, 101 Old Colony Avenue, Quincy, Mass.; the address is the same today.

In 1937, Grass adopted folding chart paper and by 1939 was providing three-, four-, and six-channel EEGs. The Model III, shown in Fig. HP5.7, is the machine recognized by all, and it founded many EEG laboratories. It featured a knee-hole console with the pens at the right and ample viewing space for the record as it evolved. The chart speed was 30 mm/s, which ultimately became the standard.

At about the same time Grass was building EEG machines in Quincy, Mass., Franklin Offner, a research associate of Ralph Gerard at the University of Chicago, started his own company at 5320 North Kedzie Avenue, Chicago, to produce an EEG using the Crystograph, a high-speed, piezoelectric inkwriter described by Offner and Gerard [1936]. It consisted of two slabs of Rochelle salt crystal; three corners of each slab were clamped, and the fourth was free to move when a voltage was applied to electrodes on the crystals. The moving corners were mechanically linked by a slender brass belt that caused motion of the rod that carried the inkwriting stylus. The sinusoidal frequency response extended uniformly to 100 Hz; the chart speed was 25 mm/s, the same as for the ECG. However, a gear shift provided chart speeds above and below 25 mm/s.
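Chart speed matters because it determines how much paper one cycle of a wave occupies, and hence what the eye can resolve on an inkwritten record. A back-of-envelope sketch (the function name is mine):

```python
def mm_per_cycle(chart_speed_mm_s, freq_hz):
    """Paper length occupied by one cycle of a wave at a given chart speed."""
    return chart_speed_mm_s / freq_hz

print(mm_per_cycle(25, 10))   # 10-Hz alpha at 25 mm/s: 2.5 mm per cycle
print(mm_per_cycle(25, 100))  # at the 100-Hz limit: only 0.25 mm per cycle
```

At the upper end of the Crystograph's response, a cycle occupies a fraction of a millimeter, which is why higher chart speeds were provided for fast activity.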

The Crystograph was used with the first Offner EEG machines, and it was ideally suited for high-efficiency energy transfer from vacuum-tube amplifiers because of its high impedance. The differential amplifiers were housed in two 19-in relay racks, as shown in Fig. HP5.8a; the Crystograph rested on an adjacent table; a six-channel Crystograph is illustrated. Damping was adjusted by a series variable resistor to achieve an excellent response to a step function. The record shown in Fig. HP5.8a was produced by a 3-μV (peak-to-peak) square wave, showing the excellent transient response and the remarkably low internal noise level of the amplifiers.


FIGURE HP5.7 The Grass Model III EEG, circa late 1940s. (Courtesy Grass Instrument Co., Quincy, Mass.)

Although the Crystograph EEG produced elegant records, temperature and humidity played havoc with the Rochelle salt crystals, and Offner replaced it with the Dynagraph, shown in Fig. HP5.8b; the recorder was a low-impedance, robust, d’Arsonval-type moving-coil inkwriter.

Efficient coupling between the output amplifier stage and the moving-coil inkwriter was always a problem; Offner solved it in his Dynagraph. A synchronous vibrator sampled the signal from the preamplifier, amplified the pulses, and recombined them after passing through a step-down transformer, thereby providing an efficient impedance match between the amplifier output stage and the penwriter. Fig. HP5.8b is an illustration of the Offner Dynagraph, which featured a sit-down console for viewing the record. The antiblocking feature of this design is shown at the bottom of Fig. HP5.8b by a sine wave record in which a transient, 100 times larger than the recording, was presented; the recording was restored a fraction of a second after the transient. Offner later replaced the vacuum tubes with transistors, bringing out the first transistorized EEG.
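The step-down transformer works because an ideal transformer reflects the secondary load back to the primary multiplied by the square of the turns ratio, letting a high-impedance output stage drive a low-impedance pen coil efficiently. A sketch with invented example impedances (none taken from the Dynagraph):

```python
def turns_ratio_for_match(z_primary_ohm, z_secondary_ohm):
    """Primary:secondary turns ratio that reflects the secondary impedance
    up to the desired primary impedance (ideal-transformer model,
    Z_primary = n^2 * Z_secondary)."""
    return (z_primary_ohm / z_secondary_ohm) ** 0.5

# e.g. matching a 10-kOhm amplifier output to a 10-Ohm moving coil
print(turns_ratio_for_match(10_000.0, 10.0))  # ~31.6 : 1
```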

In the late 1940s, Warren Gilson, who was a pioneer in devising physiologic instruments, started production of EEGs in Madison, Wisc.; Fig. HP5.9 is a photograph of one of his later instruments (circa 1959).

It is noteworthy that following the Grass Model III EEG, all subsequent instruments were of the console type with the penwriter at the right end of the desktop of the console. Not easily identified in any of the foregoing figures are the two rotary switches for each channel, which could connect each side of the differential amplifier input to any of the 21 electrodes of the 10-20 system. Switching also was provided to apply a step-function calibrating signal to all channels simultaneously, the step function typically being produced by depressing a pushbutton or rotating a knob. In addition, a low-current ohmmeter was provided to permit measurement of the resistance of any pair of electrodes. Some EEG machines included a high-frequency filter for each channel to exclude muscle artifacts often seen when patients clenched their jaws.
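In modern terms, the pair of rotary switches per channel implemented a montage selector over the 21 electrodes of the 10-20 system. A hypothetical sketch (the electrode labels are the standard 10-20 names; the function and its checks are mine, not a description of any particular machine):

```python
# The 21 electrodes of the 10-20 system: 19 scalp positions plus the
# two ear (reference) electrodes A1 and A2.
ELECTRODES_10_20 = {
    "Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8",
    "T3", "C3", "Cz", "C4", "T4",
    "T5", "P3", "Pz", "P4", "T6", "O1", "O2",
    "A1", "A2",
}

def select_channel(positive, negative):
    """Validate one differential pair, as the two rotary switches per
    channel allowed: each side of the input goes to any one electrode."""
    if positive not in ELECTRODES_10_20 or negative not in ELECTRODES_10_20:
        raise ValueError("unknown electrode label")
    if positive == negative:
        raise ValueError("both switches set to the same electrode")
    return (positive, negative)

print(select_channel("F3", "C3"))  # one channel of a bipolar montage
```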

Electroencephalographs have changed little since their development in the post-World War II days. Transistors have replaced the vacuum tubes, but the chart speed and frequency response are the same as those established by the first manufacturers of EEG machines, although many new recording techniques have been introduced.


FIGURE HP5.8 The Offner EEG, which used the Crystograph recorder (a), and the Offner Dynagraph EEG (b), circa late 1940s. (Courtesy of Franklin Offner.)


FIGURE HP5.9 The Gilson EEG, circa 1950. (Courtesy of Warren Gilson.)


Caton R. 1875. The electric currents of the brain. Br Med J 12:278.

D’Arsonval A. 1891. Galvanograph et machines producant des courants sinusoidaux. Mem Soc Biol 3 (new series):530.

Einthoven W. 1903. Ein neues Galvanometer. Ann Phys 12 (suppl 4):1059.

Gibbs FA, Davis H, Lennox WG. 1935. The electroencephalogram in epilepsy and in conditions of impaired consciousness. Arch Neurol Psychiatry 34(6):1135.

Gloor P. 1969. Hans Berger on the electroencephalogram. EEG Clin Neurophysiol (suppl 28):350.

Grass AM. 1984. The Electroencephalographic Heritage. Quincy, Mass, Grass Instrument Co.

Jasper HH, Carmichael L. 1935. Special article: Electrical potentials from the intact human brain. Science 81:51.

Marey EJ. 1876. Des variations électriques des muscles et du coeur en particulier étudiées au moyen de l’électromètre de M. Lippmann. C R Acad Sci 82:975.

Offner F, Gerard RW. 1936. A high-speed crystal inkwriter. Science 84:209.

Toennies JF. 1932. Der Neurograph. Die Naturwiss 27(5):381.

Saha, S., Bronzino, J. D. “Ethical Issues Associated with the Use of Medical Technology.” The Biomedical Engineering Handbook: Second Edition. Ed. Joseph D. Bronzino. Boca Raton: CRC Press LLC, 2000.