How Reliable Are Our Moral Intuitions?

Peter Singer

In bioethics as in other areas of ethical debate, arguments very often circle back to our intuitions—those almost automatic responses we have to whether something “feels” right or wrong. But where do these intuitions come from, and how much reliance should we place on them?

Some unusual recent research has cast new light on the role of intuitive responses in ethical reasoning. At Princeton University, Joshua Greene, a philosophy Ph.D. student, became interested in a set of dilemmas known in the philosophical literature as “trolley problems.” In the standard trolley problem, you are standing by a railroad track when you notice that a trolley (or handcar) with no one aboard is rolling down the track, heading for a group of five people. They will all be killed if the trolley continues on its present track. The only thing you can do to prevent these five deaths is to throw a switch that will divert the trolley onto a side track, where it will kill only one person. When asked what they should do in these circumstances, most people say that the trolley should be diverted onto the side track, thus saving four lives.

In another version of the problem, the trolley, as before, is about to kill five people. This time, however, you are not standing near the track, but on a footbridge above the track. You cannot divert the trolley. You consider jumping off the bridge, in front of the trolley, thus sacrificing yourself to save the imperiled people, but you realize that you are far too light to stop the trolley. Standing next to you, however, is a very large stranger. The only way you can stop the trolley from killing five people is to push this large stranger off the footbridge, in front of the trolley. If you push the stranger off, he will be killed, but you will save the other five. When asked what they should do in these circumstances, most people say that it would be wrong to push the stranger off the bridge.

Many philosophers have tried to justify our intuitions in these situations, but Greene was more concerned to understand why we have them. He thought that the roots of the differing judgments we make about the two situations may lie in our different emotional responses to the idea of causing a stranger’s death by throwing a switch on a railway track, and pushing someone to his or her death with our bare hands. So together with more senior colleagues at Princeton University’s Center for the Study of Brain, Mind, and Behavior and Department of Psychology, Greene used functional magnetic resonance imaging, or fMRI, to examine brain activity when people make moral judgments.

Greene predicted that people asked to make a moral judgment about “personal” violations like pushing the stranger off the footbridge would show increased activity in areas of the brain associated with the emotions, when compared with people asked to make judgments about relatively “impersonal” violations like throwing a switch. But he also made a more specific prediction: that the minority of subjects who do consider that it would be right to push the stranger off the footbridge would be giving this response in spite of their emotions, and therefore they would take longer to reach this judgment than those who say that it would be wrong to push the stranger off the footbridge, and also longer than they would take to reach a judgment in a case that did not arouse such strong emotional responses.

Greene’s predictions were confirmed. When people were asked to make judgments in the “personal” cases, the parts of their brains associated with emotional activity were more active than when they were asked to make judgments in “impersonal” cases. More significant, those who came to the conclusion that it would be right to act in ways that involve a personal violation, but minimize harm overall—for example, those who say that it would be right to push the stranger off the footbridge—took longer to form their judgment than those who said it would be wrong to do so.

Greene’s findings fit well in a broader evolutionary view of the origins of morality. For most of our evolutionary history, human beings have lived in small groups, and the same is almost certainly true of our pre-human primate and social mammal ancestors. In these groups, violence could only be inflicted in an up-close and personal way—by hitting, pushing, strangling, or using a stick or stone as a club. To deal with such situations, we have developed immediate, emotionally based responses to questions involving close, personal interactions with others. The thought of pushing the stranger off the footbridge elicits these emotionally based responses. Throwing a switch that diverts a trolley that will hit someone bears no resemblance to anything likely to have happened in the circumstances in which we and our ancestors lived. Hence the thought of doing it does not elicit the same emotional response as pushing someone off a bridge. So the salient feature that explains our different intuitive judgments concerning the two cases is that the footbridge case is the kind of situation that was likely to arise during the eons of time over which we were evolving; whereas the standard trolley case describes a way of bringing about someone’s death that has only been possible in the past century or two, a time far too short to have any impact on our inherited patterns of emotional response.

Greene and his colleagues have brought us closer to an understanding of the origins of ethics. Does this help us to resolve ethical dilemmas? No, not in the sense of giving us answers. But as I have said, in many fields of ethics, including life-and-death decision-making in medical ethics, we rely heavily on intuitions. For example, many American doctors are opposed to voluntary euthanasia, but they are willing to withhold life-support from a terminally ill patient. The former, they say, is killing, and therefore wrong, while the latter is only “letting nature take its course.” Greene’s research should make us more skeptical about such appeals to intuition. If our moral intuitions about more and less “personal” ways of killing vary because they derive from our evolutionary past, how can they justify the decisions we make today?

Reference

Greene, J.D., R.B. Sommerville, L.E. Nystrom, J.M. Darley, and J.D. Cohen, “An fMRI Investigation of Emotional Engagement in Moral Judgment,” Science 293 (2001): 2105–2108.


Peter Singer is DeCamp Professor of Bioethics at the University Center for Human Values at Princeton University. He is the author of Animal Liberation, Practical Ethics, How Are We to Live?, and Rethinking Life and Death.
