Picture this: You’ve worked hard all year. You’re burned out. Every atom in your brain and body is crying out for a relaxing vacation. Luckily, you and your partner have managed to save up $3,000. You propose a trip to Hawaii — those blue waves are calling your name!
Just one problem: Your partner refuses, arguing that you both should donate the money to charity instead. Think how many malaria-preventing bednets $3,000 could buy for kids in developing countries!
You might find yourself thinking: Why does my partner seem to care more about strangers halfway around the world than about me?
A philosopher would tell you that your partner may be a utilitarian or consequentialist, someone who thinks that an action is moral if it produces good consequences and that everyone equally deserves to benefit from the good, not just those closest to us. By contrast, your response suggests you’re a deontologist, someone who thinks an action is moral if it fulfills a duty — and we have special duties toward special people, like our partners, so we should prioritize our partner’s needs over a stranger’s.
According to research out of the Crockett Lab at Yale University, if you’re put off by the consequentialist’s anti–Hawaiian vacation response, you’re not alone. Neuroscientist Molly Crockett has conducted several studies to determine how we perceive different types of moral agents. She found that when we’re looking for a spouse or friend, we strongly prefer deontologists, viewing them as more moral and trustworthy than consequentialists.
In other words: When we’re looking for someone to date or hang out with, extreme do-gooders of the consequentialist variety need not apply. (It’s worth noting that deontologists can be hardcore do-gooders, too, just in their own very different way.)
Crockett’s studies raise a lot of questions: Why do we distrust consequentialists despite admiring their altruism? Are we right to distrust them, or should we try to override that impulse? And what does this mean for movements like effective altruism, which says we should devote our resources to causes that’ll do the most good for people, wherever in the world they might be?
I reached out to Crockett to discuss these issues. A transcript of our conversation, edited for length and clarity, follows.
In the past, it’s typically been philosophers who’ve investigated issues of morality and altruism, and they’ve focused a lot on sacrificial dilemmas.
The most famous one is the Trolley Problem: Should you make the active choice to divert a runaway trolley onto a side track, where it will kill one person, in order to save the five people it would otherwise kill? The consequentialist says yes, because you’re maximizing overall good and outcomes are what matter. The deontologist says no, because you have a duty not to kill anyone as a means to an end, and your duties matter.
In your studies, you do examine these types of sacrificial dilemmas, which involve doing harm. But you also examine “impartial beneficence” dilemmas, which involve doing good, and specifically the idea that we shouldn’t prioritize our family and friends when we do good. Why did you decide to study those dilemmas?
Studying impartial beneficence is really psychologically juicy, because it gets at the heart of a lot of the conflicts we face in our social relationships as the world becomes global and we think about how our actions are affecting people we’re never going to meet. Being a good global citizen now butts up against our very powerful psychological tendencies to prioritize our families and friends. So we wanted to study the social consequences people might experience as a result of having consequentialist views.
And what did you find?
When it comes to sacrificial dilemmas, we find that generally people strongly favor nonconsequentialist social partners. We trust people a lot more if they say it’s not okay to sacrifice one person to save many others.
When it comes to impartial beneficence dilemmas, we see the same pattern. The preference is not as strong, which I think makes sense because a helpful action tends to weigh less heavily on us psychologically than a harmful action. But we still see that when it comes to deciding who we’ll be friends or spouses with, we tend to prefer nonconsequentialists.
There was an exception in the impartial beneficence dilemmas, right? It turned out that when we’re looking for a political leader, we actually prefer the consequentialist. To me, it makes a ton of intuitive sense that we’d prefer different types of moral agents in different social roles. Were your results seen as surprising?
Well, what’s remarkable is that moral psychology up until now has mostly been about hypothetical cases involving strangers. But new research suggests that actually relational context is super important when it comes to judging the morality of others.
I’ve recently started collaborating with Margaret Clark at Yale, who’s an expert in close relationships. We’re testing some predictions that moral obligations are relationship specific.
Here’s a classic example: Consider a woman, Wendy, who could easily provide a meal to a young child but fails to do so. Has Wendy done anything wrong? It depends on who the child is. If she’s failing to provide a meal to her own child, then absolutely she’s done something wrong! But if Wendy is a restaurant owner and the child is a stranger who isn’t otherwise starving, then there’s no relationship creating a special obligation for her to feed the child.
Totally. Philosophy abhors inconsistency, and applying deontology in some cases and consequentialism in others might come off as inconsistent. But maybe it’s actually the most rational thing to apply different moral philosophies in different relational contexts.
In your study, the story you tell about why we prefer to marry or befriend deontologists is that, naturally, if I’m looking for someone to marry I’m going to want someone who’ll give me preferential treatment over a stranger in another country. But just to kick the tires on that story a bit: Is it possible that our preference comes about not because we want someone who’ll prioritize us but because being with radical do-gooders makes us feel crappy about ourselves — because we feel like immoral jerks compared to them?
That’s a fascinating question and something we haven’t tested empirically, but it would be very consistent with the Stanford psychologist Benoit Monin’s work on “do-gooder derogation.” He essentially showed exactly what you predict, which is that people feel less warm toward people who are extremely moral and altruistic. His studies showed that the extent to which people dislike vegetarians is related to their own feelings of moral conflict around eating animals.
Yeah, we don’t tend to love being around people who make us grapple with uncomfortable questions. Especially if they’re very in-your-face or self-righteous about it and you have to be around them all the time, like with a romantic partner.
Your study also refers to something called the “partner choice model.” Can you explain that a bit?
“Partner choice” is a mechanism through which traits evolve because they promote being chosen as a social partner. There’s a lot of work suggesting that our preferences for cooperation evolved through partner choice mechanisms, because people who were naturally more cooperative were more likely to be chosen as social partners. They reaped the benefits of being chosen, both through social capital and through reproduction, and then they passed those traits to the next generation.
My idea is that some of our moral intuitions might be explained through the same mechanism. Our deontological intuitions, to the extent that they signal to others that we’re better social partners, make us more likely to be chosen, and therefore they get passed on to the next generation.
Wait, unpack this evolutionary explanation a bit. By “through reproduction,” do you mean that parents with deontological views are more likely to rear their kids with deontological views?
Both that, and … This is more speculative, but to the extent that deontological moral intuitions have a genetic component, it could be passed on that way as well. Obviously there’s not going to be a gene for deontological intuitions. There’s not a one-to-one mapping between genetics and complex psychological traits. But to the extent that these traits arise from brain processes (and there’s a lot of evidence that they do), there may be a heritable component.
This reminds me of the neurophilosopher Patricia Churchland’s new book, Conscience, about the biological basis of morality. Churchland and I recently talked about how brain differences, which are underwritten by differences in our genes, shape our moral attitudes — and how those can be highly heritable. So genetics isn’t everything, but it is playing some role.
Absolutely. Broadly, my work is quite compatible with Churchland’s views.
I think the argument she makes is consistent with some of our empirical work showing that when people are deciding whether to benefit themselves by harming another person, their brain activity tracks with how blameworthy other people would find the harmful choice. Conscience might manifest as the brain predicting how other people would view our actions.
When you write about the implications of your studies, you talk specifically about effective altruism, a movement supported by Peter Singer, who’s probably the most influential utilitarian philosopher alive. You say the studies’ findings suggest that if you’re an effective altruist you’re going to face some stumbling blocks in terms of how people perceive you, which could impact the movement’s ability to grow. What can effective altruists do to mitigate the potential negative perception of them?
I think there are a few possibilities. Here’s one: We’ve shown in some other work that when people are judging the praiseworthiness of good deeds, they consider both the benefits those deeds bring about and how good it feels to perform them. If anything, our data suggests people weight how good it feels more strongly in judging praiseworthiness. So people might think a good deed that brings very little benefit but gives you a really warm fuzzy glow is actually more praiseworthy than one that feels detached and emotionless but brings about a lot of benefit.
Drawing on this insight, effective altruists might emphasize the personal satisfaction that can arise from donating to effective causes, and talk about their own personal experience with the movement in ways that convey what it means to them.
In my lab now, we’re starting to think a lot about narrative — how the stories we tell about our own and others’ behavior give rise to our sense of ourselves as moral beings, and how that can actually change our behavior over the long run. I think the effective altruism movement in some sense misses an opportunity to draw on the very powerful role that narratives play in shaping our psychology.
So, if I have a narrative about myself that emphasizes why having a more evidence-backed, cost-effective approach to giving actually makes me feel really good and gives me that glow, conveying that might get people more interested in my approach?
Potentially. Of course, conveying that may butt up against the “do-gooder derogation” effect. So you’d have to be careful about that.
I think this conversation just goes to show how much of a challenge it is to change moral behavior. There are so many different levers you can press to try to change behavior, but often they’re working at odds with one another. So if you press one, that inadvertently presses other levers that counteract its effect. It’s a complex system we’re dealing with.