March 28, 2005: Features
The Right Thing
By Katherine Hobson ’94
Illustrations by P.J. Loughran
By the time we’re adults, most of us are pretty sure we muddle through life doing the right thing most of the time. Moral decisions can be complex, sure, but some things are cut and dried — so simple that you will probably answer these questions quickly and automatically, without much thought: Would it ever be OK to kill your baby? If a woman were being brutally attacked in the courtyard of your apartment building, would you go to her aid or, at the very least, call the police? Would you save someone’s life if it involved little sacrifice on your part?
Don’t be so sure you’d automatically do the “right” thing. Scholars in Princeton’s psychology department are examining why and how we make such moral judgments. Mixing philosophy, neurobiology and psychology, they are exploring how the human brain’s very organization influences these choices, and what that means for the moral judgments that most of us make unthinkingly.
You probably answered the question about the baby most easily. Yet consider this scenario ...
Enemy soldiers have taken over your village. They have orders to kill all remaining civilians. You and some of your townspeople have sought refuge in the cellar of a large house. Outside you hear the voices of soldiers who have come to search the house for valuables.
Your baby begins to cry loudly. You cover his mouth to block the sound. If you remove your hand from his mouth, his crying will summon the attention of the soldiers who will kill you, your child, and the others hiding out in the cellar. To save yourself and the others, you must smother and kill your child.
Is it appropriate to smother your child in order to save yourself and the other townspeople?
A group of researchers from the psychology department — postdoctoral fellow Joshua Greene *02, research staffer Leigh Nystrom, graduate student Andrew Engell, Professor John Darley, and Professor Jonathan Cohen — posed just this dilemma to 41 subjects. The so-called rational answer is clear: If you kill the baby, you and all the others will be saved; if you don’t, everyone (including the baby) will die. So the baby dies either way — simple math would favor smothering the child. But you probably understand why subjects’ responses were slow in coming and why there was little consensus — about half said it was OK to kill the baby, the other half said it wasn’t. Many factors influence the decision: a belief in utilitarianism; deep religious faith; a belief that some things, killing among them, are never justified; or simply a gut feeling.
The researchers were less interested in what the subjects decided than how. Using functional magnetic resonance imaging (fMRI), they monitored brain activity while the subjects were considering the dilemma. And in research published last fall in the journal Neuron, the researchers reported the patterns they found. When dilemmas like the crying-baby example were posed, areas of the brain involving emotion were activated. But so was an area involved in monitoring conflict — a region called the anterior cingulate cortex (or ACC) — as well as a region involved in abstract reasoning and cognitive control called the dorsolateral prefrontal cortex (or DLPFC).
The Princeton researchers hypothesized that tough personal moral conundrums involve a conflict between the social-emotional responses and higher cognitive processes. The presence of this conflict explains why people take a long time to answer these difficult moral questions as opposed to more clear-cut ones, such as “Is it OK for a mother who simply doesn’t want her infant to kill it?” Most people would say no, because whatever the mother might gain in convenience or happiness doesn’t offset the moral cost of killing her child. The time lag comes from the conflict between different parts of the brain, not because the cognitive process alone takes longer to work through the more complex scenario.
Researchers point to increased activity in the ACC — the signal that there is a conflict between an emotional response and higher processes that favor a different response — as evidence of the mental battle going on in these dilemmas. They also found that the area of the brain dealing with abstract reasoning — the DLPFC — was more active in response to these difficult dilemmas, which the authors interpret as a reflection of reasoning processes coming into conflict with the more automatic emotional responses. When people chose the “rational,” more utilitarian option — something that violates their intuition about how people ought to behave, but leads to the most good for the most people — they showed more DLPFC activity than when they went with their emotional response.
In analyzing those results, the researchers engaged in what they emphasize is speculation about why our brain might be set up to literally pit one side against the other. They suggest that the automatic aversion to hurting anyone close to us may have been inherited (either genetically or culturally) from our primate ancestors — it’s understandable, from an evolutionary perspective, to see why such an aversion would confer an advantage. (If you kill your child, you can’t pass along your genes.) The more abstract, unemotional calculations needed to overcome that initial response take place in the more recently evolved parts of the brain. It’s not that emotion is automatically going to lead you astray, just that it isn’t the final word in every situation. The automatic response may not always be the correct one.
Our current moral thinking may reflect the conditions that prevailed during most of our evolutionary history rather than the conditions we actually face today, says Greene, a postdoc with the Department of Psychology and the Center for the Study of Brain, Mind, and Behavior who received his Princeton doctorate in philosophy. We are often told, when confronted with a major decision, to “go with our gut.” Those intuitive judgments feel so right and come so easily. But where do they come from? (Hint: It’s not your gut.) Greene posits two scenarios, both adapted from Princeton philosopher and bioethics professor Peter Singer. First, you are driving along a sparsely traveled country road in your brand-new BMW convertible, which came equipped with expensive leather seats. You see a man covered in blood by the side of the road; you stop and he tells you that he had an accident while hiking and needs a lift to the hospital. He may lose his leg if you don’t take him; there is no one else around to help. But the blood will ruin your leather seats. Is it OK to leave him by the side of the road because you don’t want to spend the money to reupholster the seats? Obviously, the “right” thing to do is pretty clear: Take him to the hospital. Most would find a decision to leave him repugnant.
Yet Greene poses another scenario. You get a letter in the mail from a reputable international aid organization, asking for a donation of a few hundred dollars. The money, the letter says, will bring medical help to poor people in a country thousands of miles away. Is it OK to toss the letter and forget about it because you want to save money? According to Singer, these scenarios are morally equivalent; in both, people can easily be helped without undue sacrifice on your part. The only difference is proximity. Most people, says Greene, are inclined to think there must be “some good reason” why it’s not OK to abandon the hiker but is perfectly acceptable to throw away the appeal from the aid group. That decision would probably “feel right” to a lot of people, he says. Yet Greene writes, in an essay published in the October 2003 issue of Nature Reviews Neuroscience, that “maybe this pair of moral intuitions has nothing to do with ‘some good reason’ and everything to do with the way our brains happen to be built.” Because our ancestors didn’t have the capability to help anyone far away — or probably even realize that they existed — they didn’t face dilemmas like the aid-group plea. Our brains, Greene suggests, may be wired to respond to proximal moral dilemmas, not those that originate miles away.
Greene proposes four levels of moral decision-making. One is the basic instinct to protect and promote one’s own interests — someone swipes your food, you react angrily. Then there’s the human/empathetic response — seeing the hiker and helping him. One step higher is moral intuition based on cultural norms. If, for example, someone had grown up in a family that gave away 50 percent of its money to charity, that person might find responding to the mailed appeal a moral no-brainer, so to speak. Finally, there is the decision made by an individual who, “through his own philosophizing, has come to a conclusion that is largely independent of the conclusions come to by the local culture.”
Moral judgment, Greene says, is a balance of different considerations, possibly involving all of these levels; he emphasizes that this is his own “informed speculation.” “Sometimes the right thing to do is not obvious,” he says, “and sometimes it lines up with one of the competing interests in the moral marketplace.” One can see how a certain moral decision-making style might play out in different ethical dilemmas. A military policeman with a strong empathetic response to the abused detainees at Iraq’s Abu Ghraib prison — or who comes from a small town that valued compassion, or is an unusually independent thinker — might blow the whistle on the wrongdoing when his colleagues would not. To such an MP, the decision is as right and natural as the decision to pick up the hiker would be for everyone else.
The situation in Abu Ghraib, however, involves another element beyond simply making a moral judgment: standing up to authority and overcoming conformity. For more than 30 years, John Darley has been studying group processes and their effect on decisions. (Most recently, his work has focused on how and why we think people ought to be punished for different offenses.) His early research was prompted by an infamous crime still cited to show the coldhearted nature of a big city — the Kitty Genovese murder. In 1964, she was attacked several times over the course of 35 minutes in the courtyard of her apartment complex in Queens, New York. Many people heard her screams, yet no one helped or even called the police. Why did they fail to do “the right thing”? Darley’s assessment: Their conduct was “not admirable, but not unethical given the way they interpreted the situation.” What led him to that conclusion?
In 1973, Darley published the results of an experiment conducted using male Princeton undergraduates. Either in pairs or alone, they were given materials related to a supposed study on vision. After completing part of the phony experiment, they were told they’d have to go to another room for the rest of the study. To get there, they walked through a room where construction was going on. A workman (who was actually part of the experiment) led them through a maze of heavy equipment into another room where they were given the materials for the vision study and told not to communicate with each other. Designated seating varied — sometimes the pairs were placed facing each other, sometimes they were back-to-back. Four minutes after the experimenter left, there was a loud crash from the other room, followed by the exclamation “Oh, my leg!” and three seconds of groans. The scenario was devised so that one interpretation of the situation was that the worker needed help. Then again, maybe he was really all right.
Those who were alone responded to the crash 90 percent of the time. The facing pairs responded 80 percent of the time. However, of those pairs seated back-to-back, only 20 percent responded. Darley’s conclusion was that the pairs were being affected in different ways by two different processes: diffusion and definition. Darley’s hypothesis of diffusion of personal responsibility holds that the more bystanders to an emergency, the less likely that any one person will step in to help, because the responsibility does not fall solely on him; that was true of both the face-to-face and back-to-back groups. But for the face-to-face groups, the definition of what was going on — the mental processes that identified the crash and groans as an injury-causing accident — was aided by seeing the reaction of the other person, such as the initial flinch at the crash and the expression of concern. Those who weren’t facing each other had no such cues; they were only aware that the other person didn’t initially respond — so they didn’t respond either. These initial failures to respond falsely signaled to the other person that each thought nothing was wrong. They could not take cues from each other, compare their definitions of the situation, and see their own feelings reflected in the other person.
In another, similar experiment, instead of a staged crash, the room in which the participants were completing their questionnaires was gradually filled with smoke. Three of four lone subjects reported the smoke; only one-eighth of those tested in groups of three reported it. (Again, each took the initial inaction of the other participants as a sign that no one thought the smoke was dangerous.) This bystander effect explains more than why a lone pedestrian will leap into a river to save someone else, while a group of people may stand around looking helplessly at each other. If you always assume someone else will make the right decision, you may not speak up when ethical issues arise — such as when your colleagues are fudging the income statement at your company, or your military unit is engaged in questionable conduct. Conformity is a strong force. “You assume the conduct is proper unless you are censured,” says Darley.
That tendency to take cues from and conform to those around us is one of the factors behind the problems at Abu Ghraib, according to a Nov. 26, 2004, article in the journal Science by Princeton psychology professor Susan Fiske and two doctoral students in her department, Lasana Harris and Amy Cuddy. The researchers looked at a database of 322 meta-analyses (statistical summaries of findings across a number of individual studies) of research on the effects of social context, and identified a number of factors that made the behavior of those accused at the Iraqi prison predictable, says Fiske. Research has shown that certain circumstances — provocation, stress, uncomfortable weather conditions — like those present in Iraq can produce aggression in almost anyone. In addition, prejudice likely contributed to the conduct of the military police; the group of enemy prisoners was perceived as a hostile, contemptible “out-group” with little in common with their guards. “Given an environment conducive to aggression and prisoners deemed disgusting and subhuman, well-established principles of conformity to peers and obedience to authority may account for the widespread nature of the abuse,” the researchers wrote.
This doesn’t justify the behavior, and its prevention isn’t as simple as removing a few of the conditions that combine to produce it. For example, explains Fiske, “conformity and obedience and even aggression are really morally neutral — there are circumstances in which you want people to be aggressive.” Try having an army without strong social pressure to conform and obey, and you’ll have chaos. But there are clues from the research that suggest how this groupthink can be derailed. Getting to know people as individuals and not merely as part of a monolithic group can combat prejudice and thus abuse, the researchers write. Moreover, when even one person speaks up against a collective action, it may cause peers to reconsider what’s going on. Whether it’s a prison in Abu Ghraib, a mysterious smoke-filled room, or a corporation using dubious accounting procedures, research suggests that “if you feel there’s something terrible going on, it’s worth speaking up,” says Fiske.
So what sort of person does speak up? Greene speculates that such a person is someone with a strongly developed moral intuition, whether out of empathy, cultural belief, or independent reasoning. Darley says psychological distance from the pressure of the group also helps; the helicopter pilot who reported the My Lai massacre (the 1968 killing of several hundred civilians in South Vietnam by U.S. troops) to his superiors literally had a very different perspective on the events. And experience in a similar situation may help. The students in the 1973 experiment were told about the situation afterwards, and the response was immediately “Oh, I get it,” says Darley. They received a vivid demonstration of group processes in action, and that may have set them up to behave differently if they faced such a choice later in the real world.
It’s interesting that all these effects and cognitive processes go largely unnoticed by the person who is making the decision. Indeed, moral decisions feel inevitable. How many times have you read in the newspaper about a person who tried to save another at peril to himself saying, “Anyone would have done it — I’m not a hero”? In that case, doing the “right” thing came naturally — whether there was a battle raging between the emotional and cognitive parts of the brain, and whether the reason for the act was instinctual, empathetic, culturally conditioned, or independently arrived at. On the flip side, Fiske’s paper notes that even perpetrators of torture may not be aware that what they’re doing would horrify outsiders — they may see themselves as doing a “great service,” the researchers wrote.
Researchers at Princeton cannot yet fundamentally answer the questions of precisely why we act morally or immorally, but they are beginning to peel back the mysteries of evolution, neurobiology, social context, and consciousness that influence our decisions. And perhaps just the knowledge that there is often more to a moral decision than what “feels right” will prompt self-examination where there was none before.
Katherine Hobson ’94 covers medicine and science at U.S. News & World Report.