Scientific research on humans includes investigation of the human body and behavior and of the effects of various medications and treatments. Research takes place in laboratories, in homes, and in medical offices and hospitals.
Controversy surrounds discussion of the permissible ways to do human research and the nature of the proper relationship between researchers and subjects.
Categorizing Human Research
A traditional distinction in the natural sciences (for example, physics and chemistry) is between pure research and applied research, which mirrors a similar distinction between pure science and applied science. Pure research is done only to advance knowledge, while applied research tries to find solutions to specific problems or apply scientific knowledge to the development of technology to use in the world.
Scientists doing human research do both, but the type of research most likely in a clinical setting is applied research. Medical researchers often test new drugs and treatments to find out whether they are safe and effective. Among such research trials, a commonly held distinction is between therapeutic and nontherapeutic research. Therapeutic research is intended (or at least hoped) to benefit some or all of the specific research subjects. Nontherapeutic research is intended to increase knowledge; it may benefit individuals in the future who take the medication or treatment in question, but it is not intended to benefit the specific research subjects. The subjects in therapeutic research will likely be suffering from a disease that the tested treatment might help or cure; this is generally not the case in nontherapeutic research.
Moral Problems with Previous Research
Some research done in the past is now universally condemned as having harmed and wronged its subjects. Prisoners of war and civilians were experimented on during the Second World War by various countries, notably Nazi Germany and Japan. The Nazi doctor Josef Mengele carried out immoral experiments on civilians, harming and killing them in the process.
A number of things were obviously immoral about the Nazi experiments, including that subjects were not free to refuse to participate and that researchers took few or no precautions against harming subjects during the research. In fact, some research seemed intent on measuring what happened when subjects were intentionally harmed. And some research seemed more the result of perverse and sadistic tendencies among Nazi researchers than of any meaningful scientific purpose.
Though not comparable to the Nazi research atrocities, the history of human research in the United States is not blameless. The infamous “Tuskegee Syphilis Study” of 1932–1972 deceived and harmed African-American males in Alabama. There are numerous other examples of research on human subjects in the United States now commonly considered to have been immoral. Common problems were the anticipated harming of subjects, subjecting subjects to risks without their informed consent, and outright deception of subjects about the nature of the research and whether they were receiving treatment.
As a result of these research abuses, various corrective measures were taken and standards created, including the Nuremberg Code, the Declaration of Helsinki, the Belmont Report, and the use of institutional review boards. The Nuremberg Code stresses the need to inform human subjects of the nature of the research, disclose any risk of harm, avoid unneeded risk to the subjects, and obtain their voluntary consent. The research must be justified by aiming to benefit society, and some levels of harm to subjects cannot be justified no matter how noble the aim. The Declaration of Helsinki carries on the tradition of the Nuremberg Code and adds that subjects and their rights must be treated with respect: the well-being of the subjects takes precedence over the goals of the research. In contrast to Nuremberg, Helsinki allows possible exceptions to the requirement of obtaining voluntary consent from the subjects themselves. Research might be done on minors or those with mental impairments, for example, where the population under study cannot give consent, but proxy consent must then be obtained from guardians. To safeguard the subjects, ethical review committees should be used.
The Belmont Report, issued in the United States in the 1970s, highlights three important principles to guide both medical care of patients (therapy) and biomedical research on subjects: respect for persons, beneficence, and justice. Respect for persons incorporated what is now known as respect for autonomy (freedom of choice), and beneficence (doing good, benefiting the subject) included what is now known separately as non-maleficence (refraining from harming the subject). Though the same principles guide how physicians treat patients and how researchers should treat subjects, the Belmont Report recognizes that “medical or behavioral practice” (therapy) is different from research. The former provides diagnosis, prevention, and therapy to benefit specific individuals, while research need only contribute to knowledge in general. Both contexts should be governed by the principles, including beneficence. But some critics believe the Belmont Report does not sufficiently clarify whether and how research must, or may not, benefit the specific research subject.
Conflicts Between Therapy and Research
Building on the work of the Belmont Report, the ethicists Beauchamp and Childress popularized the relevance of the principles of respect for autonomy, beneficence, non-maleficence, and (distributive) justice for both clinical practice and medical research. The common view became that the behavior of both physicians providing treatment and researchers conducting trials should be governed by these principles, even though therapy and research have different goals. So when trying to map out ethical research, thinkers adapted principles that were more commonly thought of as applying to medical practice, sometimes interpreting them slightly differently to make them fit.
To some extent it is understandable that one might try to use the same principles. Not all research trials are done in a lab by scientists; physicians treating patients may at the same time be carrying out clinical research trials on those patients. This is a common way to test the safety and efficacy of new medications and other medical treatments, and of course it is done only with the consent of the patients. Many trials done by physicians on their own patients fall under the category of therapeutic research because the patients have a disease the tested treatment might help or cure.
But it has by now become apparent to some thinkers that slightly different moral principles may be needed for health research than for healthcare practice. Some believe the two roles the provider is forced to assume may impose conflicting obligations. The role of therapist demands that the provider give a patient the best possible treatment. This is commonly held to be implied by the principle of beneficence, here “specific beneficence” because it applies to a specific patient. On the other hand, as a researcher, even in therapeutic research, the provider may be forced to give a patient less than the best possible treatment. (The duty of beneficence in the research context is usually interpreted to mean not that the researcher provide the best possible treatment for the specific subject, but only that the research in general be intended to benefit humanity.) Hence the provider as therapist will be at war with the provider as researcher.
This situation may come about in a common form of research known as a randomized controlled trial (RCT). In such a trial a new drug, for instance, is tested either against the existing standard of care or against a “dummy” pill, a placebo. (The placebo-controlled trial is considered the “gold standard” of such trials.) The research subject is not told which treatment they are getting, the new drug or the alternative. Now if the patient gets the placebo, and the placebo is in fact not as good as the new drug, then the patient gets less than the best treatment. If instead the trial pits the new drug against the existing standard of care, the patient receives one of the two; if the other is better, the patient again gets less than the best treatment. But recall that the physician, as therapist, is obligated always to give the patient the best treatment.
There seem to be two ways to look at this problem. One is to claim it is not a real problem because of the doctrine of “equipoise.” The other is to grant that there is a problem and to locate it in the fact that therapy is not research, even therapeutic research; the two endeavors should not be guided by the same set of moral principles, and as long as a physician tries to use patients for research trials there will be a conflict.
Equipoise is a state of balance between options, a neutrality or suspension of judgment because the evidence does not favor one side or the other. Some thinkers believe that if the physician claims equipoise (“theoretical” equipoise) or the medical community claims equipoise (“clinical” equipoise), then the physician is absolved of the charge of not providing the best available treatment when their patients are research subjects in a clinical trial. In such a trial, the physician personally and/or the medical community as a whole does not know whether the new drug, for instance, is really better or worse than the standard of care or than a placebo. So it is not as if the physician is intentionally giving the patient less than the best treatment. The patient may get the new drug or what it is tested against, and no one knows which one is better.
Others claim that citing equipoise does not solve the problem in all situations. If the drug is tested against a placebo, some of the patients will get the placebo; but there may have been a third alternative, the existing standard of care, which those patients would have received had they not been in the trial. The standard of care is likely better than the placebo, so patients who get the placebo in the trial instead of the standard of care are in fact not getting the best treatment the physician could provide. Furthermore, the new drug may turn out to be worse than the standard of care, meaning that even in a trial testing the new drug against the standard of care (instead of a placebo) the patient may not receive the best available treatment. But when a physician takes on a patient, the physician incurs an obligation to provide the best possible care, not to guide the patient into an experiment that risks the patient receiving less, or even being harmed by the treatment, in the hope of being lucky enough to get a cure.
The unsettling thought of a patient who, as a research subject, gets less than the standard of care is one reason the World Medical Association has called for using the standard of care instead of placebos in such clinical trials. Some thinkers believe equipoise is unrealistic to expect anyway: if the new drug is being tested on people, it must already have shown promise in earlier studies. How, then, can one suspend judgment about whether it is likely to be better?
Those who reject the equipoise argument believe the problem is that the goal of therapy is not the goal of research, even therapeutic research. The physician is in conflict between the two roles: the physician’s obligation is to the patient, while the researcher’s obligation is to gaining knowledge through the research. In therapeutic research a patient might hope to receive a newly discovered superior treatment, but that is not the goal of the research trial. There is nothing morally wrong about using placebos in research because research is not therapy; any help the research subject gets from the treatment is incidental (Miller and Brody).
The Therapeutic Misconception?
Researchers speak of the “therapeutic misconception.” This occurs when, despite cautions from the researcher that the goal of the research is primarily scientific knowledge and that the treatment the subject receives is not intended as therapy, many patients nevertheless come to believe that the treatment is going to help them. The patient may even come to believe that the primary purpose of the research is to treat them.
Physicians and researchers ponder studies showing that the therapeutic misconception is widespread. The irony is that researchers and ethicists wonder why this happens when physicians and researchers themselves speak of “therapeutic research,” and researchers are often physicians whose research subjects are their very own patients, who came to them for treatment. Is it any wonder that a subject with an illness, participating in medical research conducted by their own physician, hopes and begins to believe that the treatment will help them?
Could researchers be expecting too much of their subjects? Why would anyone volunteer as a research subject at all? Some people are genuinely altruistic and willing to accept considerable risk for the sake of science, and others value the small compensation that some research subjects receive for their time and trouble. Aside from these, the remaining motivation to participate is some version of the therapeutic misconception: if you are ill, why accept possibly significant risk except because the treatment just might cure the disease you have? The therapeutic misconception helps recruit research subjects.
New Principles of Medical Research
Some researchers and ethicists believe the four traditional principles of biomedical ethics are more suited to the physician-patient relationship than the researcher-subject relationship. For example, Emanuel, Wendler, and Grady propose the following as principles for research:
- The research must have scientific or social value
- The research must be scientifically valid
- The selection of subjects must be fair
- The research benefits and risks must be in a favorable ratio
- The research should be subject to independent review
- The research subjects must provide informed consent
- The research subjects must be treated with respect
Notice that compared with the traditional principles of biomedical ethics, “beneficence” as a principle has disappeared; in its place are requirements that the research be of value and that any risks be balanced by benefits, though it is debatable what counts as a benefit. No claim is made that the research benefit, or be intended to benefit, any particular research subject.
Other Controversies in Human Research
Several other controversies include:
- children as research subjects
- the ethics of stopping clinical trials early
- deception in research
- under-representation of populations in research, and
- research in developing countries
Minors as subjects: Controversy surrounds the use as research subjects of minors and other persons who cannot give fully informed consent. It is commonly held that this may be permissible if “assent” is obtained from such persons (assent to as much as they can understand) and consent is obtained from parents or other legal guardians. But some ethicists maintain that even this should be allowed only for therapeutic research aimed at helping the subjects. Nontherapeutic research on children or mentally impaired adults, aiming only at an advance in scientific knowledge while burdening the subjects with possibly significant risk of harm, is held to be morally impermissible.
Stopping trials early: Also controversial is the notion of prematurely stopping a research trial when the treatment being tested seems especially beneficial, harmful, or futile. Consider the following maxims sometimes followed:
- If the treatment early in the trial begins to be dramatically beneficial, stop the trial and give it to the control group (who are receiving placebos or the standard of care).
- If the treatment early in the trial begins to be drastically harmful or futile, stop the trial and save the treatment group from harm or possible harm.
Stopping trials early is controversial because there are different types of trials (early phase vs. later phases, flexible and multi-stage trials) and not all are designed to be monitored and assessed at interim points. Stopping a trial not designed for early stopping may mean making a decision, and applying it to the control group or the treatment group, on the basis of statistically insignificant data. The seemingly dramatic improvement or drastic deterioration in the treatment group could be due simply to chance.
Deception: Deception in research can occur when subjects are misled about the true purpose of the research. Some ethicists believe that no deception is ever justified in research, even though some striking results were obtained in earlier years through deception of research subjects (for example, the Milgram experiment). Others, however, believe that deception could be justified if, in obtaining informed consent, the researchers inform the subjects that the research might not be investigating what the subjects have been led to believe, and if deception is the only way to gather the data. Deception about the risk of harm, however, would not be justified, and the subjects should be debriefed afterward.
Fairness in representation: Some ethicists point out a need for more fairness in choosing the populations in which research is conducted, because minorities, women, and the impaired are usually under-represented. The result is fewer research findings pertaining to those groups, and consequently less sophisticated healthcare available to them. Development of drugs and other treatments for cardiovascular disease, for instance, should not focus just on white males. While there is widespread agreement on the legitimacy of this criticism, a possible reply is that much medical research is funded by pharmaceutical companies, which, like any other company, attempt to maximize profit and should be free to spend their research dollars in the biggest markets, where they think they will get the most payoff.
Developing countries: Research on populations in developing countries is controversial in several regards. First, critics charge that pharmaceutical giants from the industrialized countries exploit people in third-world countries by using them for research where standards are lax and the companies have no intention of providing the drugs at prices affordable to those populations. It’s a case of the first world using the populations of the third world as guinea pigs.
A second area of controversy, however, concerns research on treatments clearly intended for those third-world markets. The question arises of whether it is permissible to test possible treatments against a placebo or the local standard of care, rather than having to test them against the best available standard of care worldwide.
An issue here is that some ethicists, among them those of the World Medical Association, believe that in a therapeutic randomized controlled trial, testing a new treatment against a placebo is unfair to the research subjects getting the placebo, in that they could have received a current standard of care and been better off. So a new drug, for instance, should be tested against the current standard of care, not a placebo.
Trying to be nondiscriminatory and fair to third-world countries, such organizations call for every such test to be against the best available standard of care worldwide, not merely some local standard of care, which might be less effective. The problem is that in a third-world country the local standard of care may be little or nothing, clearly worse than the best available standard of care worldwide, yet the worldwide standard is unaffordable in that country. So if the new drug happens to lose to the worldwide standard of care, this proves nothing, since that population will never get that standard of care anyway. The new drug might nonetheless have bested the local standard of care and been a better, feasible alternative for that population; but unless the drug is tested against the local standard of care, this will never be known. So organizations such as the WMA, thinking they are taking the moral high ground to help third-world countries, actually prevent such countries from finding better, affordable treatments. Ethicists from third-world countries themselves make this criticism.
References
Ezekiel J. Emanuel, David Wendler, and Christine Grady, “What Makes Clinical Research Ethical?” Journal of the American Medical Association, 2000.
Benjamin Freedman, “Equipoise and the Ethics of Clinical Research,” New England Journal of Medicine, 1987.
Franklin G. Miller and Howard Brody, “Therapeutic Misconception in the Ethics of Clinical Trials,” The Hastings Center Report, 2003.