Author Essays, Interviews, and Excerpts, Philosophy, Religion

A Theological Trolley Problem, A Guest Post from Ryan Darr

In The Best Effect, Ryan Darr describes the theological origins of consequentialism—the notion that we can morally judge an action by its effects alone. In this adaptation from the book’s introduction, Ryan explores why we are so fascinated by (and may need to reconsider!) consequentialist ethics today.

[Book cover with a purple background and an oil painting of a religious scene]

The moral puzzles that a culture considers particularly interesting or difficult offer a unique window into its ethical presuppositions. In the Anglophone world, no moral puzzle has captured more attention in recent decades than the trolley problem. The philosopher Philippa Foot first introduced a version of the problem in 1967 in order to gain insight into the moral permissibility of abortion. Since Foot’s original paper, the trolley problem has been applied to many other ethical problems, from pandemics to climate change. “Trolleyology,” moreover, has taken on a life of its own and is now often treated as if it were more interesting in its own right than the real-world problems it supposedly illuminates. While trolleyology is not without detractors, it is pervasive in the academy. A quick search for “the trolley problem” now brings up over seven thousand academic texts, approximately half of which were published in the last four years. In many universities, undergraduate students puzzle over trolleys in introductory courses in philosophy and ethics—and increasingly in psychology as well. Nor is the trolley problem confined to the academy. In the last several years, two books have been written on it for a popular audience. And it is increasingly making its way into popular culture, from the TV show The Good Place to the board game Trial by Trolley to the countless trolley problem memes.

For the uninitiated, the standard trolley problem goes like this. Consider the following two cases. In the first, a trolley is racing down a track out of control. You are standing nearby and realize that the trolley is about to hit and kill five people who are on the track. You also notice a lever beside the track. If you pull the lever, the trolley will be diverted onto a side track and avoid the five people. There is, however, one person on the side track who will be hit and killed. Do you pull the lever?

Now imagine a second case. Again, a trolley is racing down a track out of control, headed straight for five people. If you do nothing, all five will be killed. In this case, there is no lever and no side track. You are instead standing on a footbridge that passes over the track. Near you on the footbridge is a large man—so large, in fact, that his mass would be sufficient to stop the trolley. If you push him off the bridge, he will fall in front of the trolley and stop it. He will be killed, but the five people on the track ahead will be saved. Do you push the man?

While judgments on these cases vary, the most common by far is to answer the first in the affirmative and the second in the negative: pull the lever; do not push the man. The rationale in the first case seems pretty straightforward: it is better for one person to die than for five people to die. The second case also seems rather clear. To push a man off a bridge in front of a moving train is widely considered murder, something that is never permissible, even when it is done for a good cause. What, then, is the problem? The problem arises when we make the parallels between the two cases explicit. In both cases, five people will die if you do not intervene. In both cases, the intervention that saves the five leads to the death of one. No one else is involved, and no other morally significant consequences follow. It would seem, then, that the answer to the two cases should be the same. What can justify the divergent moral judgments in the two cases, especially at the cost of four lives?

These cases are ethically complex and compelling, but what interests me most is what we learn from their current intellectual and cultural prominence. In academia, their prominence highlights the continuing importance of the fundamental division between consequentialists, who make moral judgments solely on the basis of outcomes, and their non-consequentialist opponents. And the broader popularity suggests that this division reflects something in the wider culture. On this issue, scholarly debates in ethics are not out of touch with wider cultural views.

The popularity of the trolley problem does more than highlight an influential division between two competing ethical views. It also suggests that consequentialism has a certain advantage. Some philosophers argue that to even consider the trolley problem is to already take up a particular moral stance, one in which moral principles like those of justice can potentially be set aside when their results are undesirable. And the popularity of the trolley problem indicates that many now take precisely that stance.

But considering the trolley problem often does more than open up the possibility that common moral principles can be violated for the sake of better consequences. Philosophers sometimes argue that deontologists—that is, those who believe we have moral duties that are independent of consequences—are at least prima facie heartless, choosing moral principles at the cost of real-world harms. The claim is an attempt to push the burden of proof onto consequentialism’s opponents. It presses us to ask: Is the refusal to push the large man in front of the train just a prioritization of our own moral principles above human lives? If it is successful, we will begin to suspect that common moral principles are not standards of praiseworthy behavior but potential barriers to human compassion.

The trolley problem often functions to do just that: push the burden of proof onto the non-consequentialist. If you agree to pull the lever in the first case, then you presumably accept that the death of five people is worse than the death of one. The question, then, is why you would not do the same thing in the second case. The non-consequentialist begins to look irrational. She is forced to rest significant weight on the differences between the cases, which the consequentialist will argue are insignificant. The burden of proof now lies with the non-consequentialist.

We can see the results of this shift in the burden of proof in trolley problem studies. Researchers find that most people reject consistently consequentialist views. The idea of pushing an innocent man to his death, even to save lives, is, to most of us, morally repulsive. When researchers ask participants in studies what they would do in trolley cases, consistently consequentialist answers are relatively rare. Matters are different, however, when it comes to moral reasoning. When participants give reasons for their judgments, they struggle to resist consequentialist reasoning. Having relied on consequentialist logic to answer the first case, participants find themselves struggling to explain why they do not follow the same logic in the second. This, I think, is the most interesting insight we gain from the popular attention given to trolley cases: even among the majority who resist consequentialist conclusions, consequentialist reasoning is the only form of moral reasoning widely taken to be unproblematic.

How should we understand this result? Some might say this: participants share a view that certain actions are simply off the table for consideration. Researchers then force them to put those actions back on the table, and the participants struggle to articulate considerations about the morally inconsiderable. Others offer a very different explanation, drawn from the dual process theory of cognition. According to them, we have both a rational processing system and an emotional processing system. The rational processing system reaches consistently consequentialist conclusions. This rational system is what we use in the first trolley case. The emotional processing system, by contrast, reaches its judgments immediately and without rational reflection. When it is triggered—as it is in the second case by the thought of violently shoving a man to his death—it renders an immediate judgment about the act, which explains our revulsion at pushing the man. When we have to justify our negative answer to the second case, the rational processing system, which is consequentialist, struggles to justify the emotional judgment.

This explanation builds the consequentialism/deontology divide into the human brain, effectively naturalizing it. Yet while both consequentialism and deontology become natural results of human cognition in this theory, only consequentialism remains rational. And as compelling as it is, this explanation faces a serious objection. By reading current ethical categories into the very nature of cognition, we risk treating them as if they were universal and timeless. The problem is this: how can we reconcile such a view with the historical and cultural particularity of these categories? Consequentialism, after all, is a relatively recent invention, a product of early modern and Enlightenment philosophy. The standard consequentialism/deontology division is even more recent. Studying the history of ethics should make us wary of overly simple explanations.

On this point, however, our historical stories about consequentialism have failed us. Given the current sense that consequentialism is uniquely rational, it can be difficult to imagine that consequentialism had to be invented, and this difficulty has interfered with our stories of its origins. In the common imagination, utilitarianism—the “classical” form of consequentialism in which morality consists of maximizing happiness—is often seen as the paradigmatic secular ethic, the natural result of throwing off religious and cultural prohibitions. In this narrative, utilitarianism is what is left when we remove irrational moral inhibitions. The implicit assumption is that utilitarianism did not have to be invented, only liberated. While the common story is right to see classical utilitarianism as a secular reform project, the invention of consequentialism did not occur with the classical utilitarians. The invention of consequentialism is, in fact, more complicated—and more theological—than is often assumed. The story of its invention is the story told in this book.

Ryan Darr is a postdoctoral research associate in religion, ecology, and expressive culture at the Yale University Institute of Sacred Music and a lecturer in the Yale Divinity School. He is the author of The Best Effect: Theology and the Origins of Consequentialism—available now on our website or wherever good books are sold.