Tim Pool, Sam Seder, and Thanos: Is Lesser-Evil Voting Like Pushing People Onto Trolley Tracks? (UNLOCKED)
In a 2019 debate, Pool brought up the Trolley Problem (and Thanos!) to criticize lesser-evil voting. He's very confused. But we don't need to embrace consequentialism to reject his position.
In an article earlier this month in Jacobin, I mentioned a 2019 debate between Sam Seder and Tim Pool. In the part I was talking about, Pool asks Seder if he’s a consequentialist or a deontologist. Seder isn’t familiar with the vocabulary. Pool, who knows so much about philosophy I once saw him nod along while a guest affirmed that Plato was “basically” the founder of the Illuminati, expresses amazement in a bro, you don’t even know consequentialism and deontology? sort of way.
At this point, a different (and much more boring) version of Tim Pool might say something like, “Consequentialists think morality is only about bringing about good outcomes and avoiding bad ones, while deontologists affirm the existence of other kinds of moral considerations—ones that are about actions themselves rather than the consequences those actions generate.” But Tim Pool is Tim Pool and this sort of explanation would be, let’s say, out of character for him. And anyway, by the time he got all that out, Sam might try to gently steer them back to the original topic. So Tim doesn’t do that. Instead, he starts talking about Thanos.
Like Sam and Tim, I spend a lot of time talking about politics on podcasts and YouTube shows and debate stages, and in fact I’ve done exactly that with Tim (once) and with Sam (many times). And like Tim—and, by his own cheerful affirmation, very much unlike Sam—I’m interested in philosophy, both for its own sake and because I think it often connects in interesting ways with the political project I spend most of my time promoting.
…all of which is to say that I’m not unsympathetic to Tim’s desire to connect some political and philosophical dots during his conversation with Sam. But, Tim. Bro. There are ways to do it and ways not to do it and if you want to introduce some philosophy talk into a conversation about the 2020 election, randomly dropping a couple of vocab words and acting like it’s some great own that the great majority of political commentators won’t know what you’re talking about is…not the way to do it.
It’s the way Tim does it, though, and he follows it up by comparing Sam’s advocacy of lesser-evil voting to the Marvel Cinematic Universe supervillain Thanos being willing to snap half of humanity out of existence in order to prevent overpopulation and hunger. Both of these, see, are instances of consequentialist moral reasoning. Sam and Thanos are willing to do things that deontologists would think are bad (voting for evil politicians, casting fifty percent of the human race into non-existence) for the sake of preventing a bad outcome (the victory of a more evil politician, overpopulation and hunger). So, see, the same way Thanos is bad, Sam is bad!
This is, I’m just going to say right now, not actually a great explanation of the distinction. And if your goal is to make your audience more sympathetic to deontology, it’s not a great way of doing that either, because frankly it makes deontology sound pretty asinine.
Imagine that an unusually considerate murderer offers you the choice of being attacked with a gun or with a knife. Being attacked with a knife is bad! If you’re nonetheless willing to say “yep, let’s go with the knife” because you like your chances of fending off a knife attack more than your chances of literally dodging a bullet, are you engaged in consequentialist reasoning?
You’re certainly engaged in reasoning that takes into account the likely consequences of your decisions—but if that’s consequentialism, who the hell isn’t a consequentialist?
The main point I want to make today isn’t about lesser-evil voting. I have more abstract fish to fry. But for the record:
My own position these days is more or less Sam-like. More precisely, it’s the one once expressed to me by Sam’s late Majority Report co-pilot Michael Brooks. There could well be “high risk, high reward” situations in which running a third party candidate would make sense even if it increased the chance of the greater evil taking power. It’s just that, given the extreme weakness of the American socialist Left, those aren’t anything like the circumstances in which we find ourselves. Nothing much is to be gained through running candidates who maybe get 2% of the vote, so it makes more sense to vote defensively—for the metaphorical knife attack rather than the metaphorical shooting. When I’ve made that case in the past, I’ve particularly emphasized the importance of preserving the tiny bit of institutional power that’s been built up by the American working class.
Whether I’m right or wrong about that assessment, what I want to focus on here are the higher-order questions of how the debate works and how it intersects with deontology vs. consequentialism. If you disagree with me about lesser-evil vs. third-party voting, the most likely reason is that you have different empirical predictions about how the two electoral strategies would play out in practice. Maybe you think that Democrats will move to the left if leftists pressure them by withholding our votes. Or you think I’m being too pessimistic about the prospects for successful third party candidacies. In either case, we’re arguing about which consequences are most likely rather than how much we should care about consequences.
A different way to disagree with me would be to say that even if I was right about the likely consequences of Trump or Biden winning, voting for Biden would still be wrong because “the lesser of two evils is still evil” and it’s categorically wrong to vote to empower evil candidates. That’s the position Tim took during the debate with Sam, although by the time the election rolled around he’d actually reversed course and decided that (a) voting for the lesser evil is fine and (b) the lesser evil was Donald Trump! Thus ended Tim’s long strange evolution from Occupy Wall Street to his present position as a bog-standard right-winger who loves tax cuts and hates labor unions.
Going back to his earlier position, though, the first point I want to make is that you have to reject consequentialism to hold the “it’s categorically wrong to vote for the lesser evil” position, but not vice versa. It’s entirely possible—and easy!—to think that non-consequentialist moral considerations matter a great deal in many contexts, but that no such considerations outweigh the consequence-based case for lesser-evil voting.
In laying out his position in the debate with Sam, Tim mentions the Trolley Problem in a way that shows that, like most people who know it not from the original academic context but from the problem’s pop-cultural afterlife—NBC’s The Good Place, Trolley Problem memes on social media, etc.—he doesn’t quite know what the “problem” in the second word of that phrase is supposed to be.
Here’s a quick refresher, pulled from an obituary I wrote a few years ago for the problem’s originator, Judith Jarvis Thomson:
The prehistory of this philosophical puzzle goes back to Philippa Foot. In an essay crammed with examples intended to illustrate the complexities of an obscure idea in moral philosophy called the “doctrine of double effect,” she introduces the “driver of a runaway tram which he can only steer from one track to another.” If he does nothing, he’ll kill five workers doing repairs on the track. If he steers onto an alternate track, he’ll only kill one. Foot thought it was obvious that “we should say, without hesitation, that the driver should steer for the less occupied track.”
Over the decade and a half that I’ve taught philosophy, I’ve introduced dozens of introductory classes to this example, usually illustrating it with crude stick figure drawings on the chalkboard. When I’ve asked for a show of hands, I’ve never had a class where more than two or three students didn’t share Foot’s intuition that this would be the right thing to do. If that was all there was to it, no one would think there was a “problem” about the runaway trolley.
The complication comes in a 1976 paper by Judith Jarvis Thomson, “Killing, Letting Die, and the Trolley Problem.” As well as coining the phrase “the trolley problem,” she introduced the problem itself. It feels obvious that diverting the trolley would be the right thing to do, but what if we scrambled the example a bit? Add in a footbridge going over the trolley track and take away the alternate track with only one worker. Put two people on the footbridge. One of them, George, knows a lot about trolleys. (Perhaps he’s an engineer.) He knows the only way to stop a trolley in its tracks is with a sufficiently heavy weight. As luck would have it, the other man on the [bridge] is heavy enough to do the trick.
When I draw a new set of stick figures to illustrate that one on the board, hardly anyone raises their hand to agree that the right thing to do would be for George to push the man into the trolley’s path.
The intuitive wrongness of pushing the man onto the track is probably incompatible with consequentialism. There are moves consequentialists can make to try to get around having to endorse pushing him, but they all have at least a whiff of desperate ad hoc-ery to them. And once you further specify the circumstances—yes, you’re sure the weight will be enough to stop the trolley, yes, you’re sure word won’t spread about what happened, thus leading to people being afraid to stand on footbridges, etc.—it gets harder and harder for the consequentialist to avoid having to bite the bullet. At the end of the day, if only consequences matter, it’s probably not wrong to push the guy. So that’s probably enough to motivate rejecting consequentialism.
But remember that the intuitive wrongness of pushing him is only half of what’s at issue. The problem is how to reconcile the intuition that pushing him would be wrong with the intuition that steering onto the alternate track—or, in later versions, a bystander pulling a switch that diverts the train onto an alternate track—wouldn’t be similarly wrong, even though in both cases we’re faced with a tradeoff between bringing about one death and preventing five.
And whatever your favored solution to that problem, I have to say that voting for a candidate who will probably do all sorts of horrible things in order to prevent the election of his worse opponent sounds a whole lot more like the “diverting the trolley onto the second track” scenario than the “pushing the man” scenario, never mind the really grisly variation Thomson gives us later in the paper:
David is a great transplant surgeon. Five of his patients need new parts—one needs a heart, the others need, respectively, liver, stomach, spleen, and spinal cord, but all are of the same, relatively rare, blood-type. By chance, David learns of a healthy patient with that very blood-type. David can take the healthy specimen’s parts, killing him, and install them in his patients, saving them. Or he can refrain from taking the healthy specimen’s parts, letting his patients die.
There’s a vast and complicated literature on this stuff, but on the face of it, one reasonable place to start looking for a morally relevant difference between the footbridge and transplant cases on the one hand and the diverting-onto-the-second-track case on the other is the attitude of the intervener toward the victim. If the large man rolls off the tracks and thus survives, he’s just ruined your plan to save the five people on the track. Same deal if the “healthy specimen” manages to escape from your operating table. On the other hand, if the one guy on the second track manages to scramble out of the path of the oncoming train, the intervener would be massively relieved.
One way of understanding the difference—and there’s one paper where Thomson flirts with this as a possible solution to the problem—is to see the difference in Kantian terms. Kant was an extreme deontologist, and he thought his moral principle, the Categorical Imperative, didn’t allow for any exceptions. He provided a few different formulations of it, but the one Thomson has in mind is the Formula of Humanity:
Always treat humanity, whether in your own person or that of another, as an end in itself and not merely as a means to an end.
When I put that one on the board, I always end up circling “merely” at some point in the discussion, when students misinterpret Kant as having what would be the very silly position that it’s always wrong to use people as means to an end. Kant doesn’t think it’s wrong to ask your friend for a ride to the airport. He thinks it’s wrong to reduce people to the status of merely a means to an end, nothing more than tools you can use to accomplish your purposes—by, for example, pushing them onto trolley tracks.
Presidents kill people all the time. They order drone strikes, bombing raids, cruise missile attacks. It’s possible that, even if you elect the slightly less warmonger-y of the two major candidates in some election, the one you elect will kill someone the other would have spared, just because they’re two different people and there might be unpredictable differences in which strikes they sign off on. But your plan to save all the people the more hawkish candidate would have killed doesn’t require those deaths. You might, indeed, elect the less hawkish candidate and then hit the streets every day in anti-war protests, trying your best to ensure that people in whatever country even the comparatively dovish guy might bomb get off the trolley tracks unscathed.
For the record, my own moral position is something like the one Rawls calls “intuitionism” in A Theory of Justice and G.A. Cohen calls “radical pluralism,” which Cohen defends in his book Rescuing Justice and Equality. (In a footnote citing Frances Kamm, he also calls it “standard” deontology.) I don’t think there’s some single moral principle that rules the rest. There are various values that have to be balanced against each other and it’s all a huge mess. As Cohen says in Rescuing…, frustrating tradeoffs “are our fate.”
I do not say that such an intellectual predicament is satisfactory. But I do say that it is the predicament we are in. There are many attempts to escape it in the literature, and as many failures to do so.
One thing that means concretely is that of course avoiding sufficiently bad consequences can sometimes justify distasteful compromises but of course admitting that doesn’t mean we have to endorse the kind of sociopathically pure consequentialism that entails pushing large men onto trolley tracks or carving up healthy patients for their organs. Other kinds of moral considerations matter. Some of them are going to have a Kantian flavor. Even if you think (as I do) that Kant was wrong in a deep way about what morality is, the idea that there’s something objectionable about treating people as mere means is a deeply plausible claim about at least one part of what morality requires.
So: Is the Kantian solution to the Trolley Problem correct?
Maybe, maybe not. Thomson herself floated it but didn’t ultimately end up embracing it, and it might be worth doing a separate essay some Sunday just on the evolution of her thinking about this, but let’s put all that aside for now and leave it at “maybe, maybe not.”
Either way, voting for candidates who might order military interventions seems a lot more like diverting a train onto a secondary track than like pushing a man onto trolley tracks. In fact, in this case, we’re diverting the trolley onto a fog-covered track—although doing so with the reasonable hope that, if anyone is standing on the track, it at least won’t be as many as were standing on the first one.
I’ve focused today on the ethics of voting, but in the Jacobin article I was raising a broader issue:
Pool’s [accusation that Sam Seder had Thanos-like ethics] was an extreme example, but many people less foolish than him share the same assumption: that leftists are only motivated by utilitarian calculations about the overall amount of happiness or suffering in a given society, while caring about rights is the domain of libertarians and other right-wingers.
In the article, I say that leftists do care about rights. We just have a different conception of which rights matter in which ways.
But I’ll concede now that this was an oversimplification.
What I should have said is that a lot of the ideas that have been historically important to the Left—ideas about domination and freedom, equality and dignity—are a poor fit with pure consequentialism, and that many left-wing thinkers have rightly explained those ideas in terms of non-consequentialist conceptions of justice.
But, in my experience, many leftists find themselves attracted to pure consequentialism, as well as to another philosophical claim I’ve criticized in past entries on this Substack: skepticism about free will. They associate the idea that justice involves moral rights rather than just good and bad consequences with defenses of capitalist property rights, and they associate belief in free will with harsh retributive approaches to criminal justice on the one hand and “poor people should pull themselves up by their own bootstraps”-style victim-blaming about economic inequality on the other. They (rightly) reject all of these positions. They also (wrongly) associate free will denial and utilitarian morality with scientific materialism. The result of all of this is that they end up, in effect, spray-painting a hammer and sickle (or a DSA rose) on the exact combination of philosophical views advocated by Sam Harris.
As I hope to show next Sunday, that’s a huge mistake.