Contra Parfit’s ‘Against Egoism’
My summary of Chappell’s summary of his summary of Parfit. And my commentary. Or: you should be egoistical because people are myopic and selfish.
Derek Parfit was the greatest analytical philosopher of the 20th century. His magnum opus is like 2000 pages. Luckily, Richard Chappell, a philosophy professor, wrote an academic book summarizing his work. Then summarized said book on his blog.1 Here’s my summary of that blog. I also critique Parfit to save egoism. I’ve read none of the primary sources, but my post is shorter (maybe past the point of usefulness).
Against Egoism
Rational Egoists believe that being rational means being self-interested: if you do whatever makes your life go best, then you’re at your most reasonable.
Parfit says that this is wrong for two reasons:
Intuition
It’s intuitive that, sometimes, it can be rational to sacrifice your life for others.
Imagine a situation where you sacrifice your life to save Bob, where:
You know in advance that if you didn’t help him, you’d go on to live happily ever after with no remorse. So if you die to save him, it really is worse for you.
AND
You know everything about what will happen and think clearly and rationally, and then act morally (you’re a fully-informed rational actor).
Even in this case, most people find it intuitive that saving him is rational. But misguided Rational Egoists will still insist that your sacrifice is irrational, just because you put others’ interests over your own. Parfit says Rational Egoists are simply being unfair and dogmatic.
Theory
Compare these three principles:
A) What you prefer can never be intrinsically irrational, even if you, say, prefer a small benefit over a much greater one.
B) If you prefer a small benefit only because it happens now, over a much greater benefit later, you’re irrational.
C) If you prefer a small benefit only because it’s for you, over a much greater benefit to someone else, you’re irrational.
Rational Egoists only accept B. But we should treat B and C alike, due to the formal analogy between you and now (i.e. agent and temporal relativity).
If you weigh the interests of your current and future you the same (temporally neutral), but give more weight to yourself than to others (agent relative), then your theory is incompletely relative and less sound than if it is either fully relative or fully neutral, treating both these dimensions of variation alike.
Crucially, choices are made not only by certain people but also at certain times (think: your momentary self, i.e. the agent deliberating your choice, is different from your future selves that replace you later).
And so, you may ask why you should sacrifice
…your interests for others?
…your interests now for your future selves?
If you think there’s no reason to sacrifice your interests for others, parity of reasoning means there’s no reason to sacrifice your interests now for your future self.
Rational Egoists defend the need to be self-interested by appealing to objective features of normatively important phenomena like pain. Pain matters because of how it feels, not just when you feel it. This is a good defense of B, but Rational Egoists can’t appeal to it, because the analogous reasoning commits them to C as well. After all, pain matters because of how it feels, not merely who feels it.
So, we should either be more subjective or impartial. If you think it’s better to be impartial, rational egoism is wrong. QED.
Or so Parfit says. My two cents: I think the argument goes through, but it is more of an aspirational normative ideal. There is a kernel of truth to egoism, which is that, empirically, most people are just very egoistical and myopic. Most people won’t make more than trivial sacrifices either for others (cf. few people will donate more than trivial amounts of money to the poor, even though it would be immensely helpful and we can save a life for ~$5k by donating to GiveWell) or for their future selves (cf. people smoking or not saving for retirement). Do we really know of examples where fully informed and rational actors made great sacrifices without any social pressure or reward? We can never know if, say, there was ever an anonymous donor who did not (intend to) brag about their donations. When we do see instances of altruism, the sacrifices are often trivial.
Apparently Parfit was far more compassionate than most people, and that might have biased his philosophy against seeing any pragmatic value in egoism:
“The driving force behind Parfit’s moral concern was suffering. He couldn’t bear to see someone suffer—even thinking about suffering in the abstract could make him cry. He believed that no one, not even a monster like Hitler, could deserve to suffer at all. (He realized that there were practical reasons to lock such people up, but that was a different issue.)”2
“[Once when] World War I was brought up: “Suddenly in the middle of that discussion, Derek started to cry, really quite a bit. He was crying at the sadness of all those lives ended prematurely in the war.”3
Obviously, I’d hate to have to save egoism, but sadly we don’t live in a world of Parfitian saints. We live among myopic egoists who won’t save for retirement, much less save strangers. In such a world, pure altruism may be exploitable and strategically naive. So if you’re naturally inclined toward Parfit’s compassion, you might need a bit more egoism as a corrective on the margin, lest you risk being exploited.
Of course, maybe the selfish rest of us oughta locally try to be more like Parfit and be suspicious of our self-serving rationalizations. A two-level view might also be pragmatic: keep impartial altruistic aims for policy, institutions, and the big-picture direction of travel; keep modest egoistic permissions for bandwidth, boundaries, and choosing altruistic projects that don’t burn you out.

