I think utilitarianism can be useful, but I’m not persuaded by the bullet-biting impulse.
This is chiefly because the actual test of ethics *is* emotional and heuristic. There’s no scoresheet to check against; the universe does not indicate its displeasure with our moral choices, except insofar as we, collectively and/or individually, are displeased by them.
George Box's dictum, "All models are wrong, but some are useful," captures some of the risk of unnecessary bullet biting. Consider two analogies from physics.
1. While working on the laws of electromagnetism, James Clerk Maxwell realized the math would only work if he added a term to one of the equations, which he called the displacement current. Bullet biting for the win: this term, born of purely logical necessity (Maxwell's equations, like Newton's laws, are wrong, but much better than what we had before), was experimentally validated and allowed for the unification of electromagnetism and optics.
2. Newton's laws of motion are a staggering achievement; it's basically impossible to overstate their impact on physics, and on the world more broadly. They're also wrong, in ways that became apparent through use: they break down for extremal systems, e.g. the very large, very small, or very fast. Rather than "biting the Newtonian bullet," physicists developed and experimentally tested quantum mechanics and the special and general theories of relativity, which resolved many of the issues with Newton's laws, while leaving some blanks and open problems.
What's key in both examples is that it's our ability to test the theories that makes biting the bullet a good or bad choice. You cannot actually quantify and validate a "util" without some prior aesthetic and emotional commitment; there's no Michelson-Morley experiment to run to show the absence of ethical aether.
So I'll keep utilitarian calculus in the toolbox for reasonably well-defined scenarios, like setting Pigouvian taxes, but I won't worry much when I get absurd results out of it either (e.g. Bentham's Bulldog's belief that bees would be better off dead).
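The Pigouvian case is exactly the kind of well-defined scenario where the calculus is runnable: set the tax equal to the marginal external cost, and the private optimum shifts to the social one. A toy sketch (all numbers and function names here are made up for illustration):

```python
# Toy Pigouvian-tax sketch. Assumed setup: a firm's marginal benefit of
# output q is MB(q) = 10 - q, and each unit imposes an external cost of 4
# on bystanders. Untaxed, the firm produces until MB hits zero; a tax equal
# to the marginal external cost makes it stop where society would want.

def private_optimum(mb, marginal_cost, q_max=1000):
    # The firm produces every unit whose marginal benefit exceeds
    # its marginal cost (which includes any per-unit tax).
    q = 0
    while q < q_max and mb(q) > marginal_cost:
        q += 1
    return q

mb = lambda q: 10 - q
external_cost = 4.0

untaxed = private_optimum(mb, marginal_cost=0.0)        # firm ignores the externality
taxed = private_optimum(mb, marginal_cost=external_cost)  # tax = marginal external cost

print(untaxed, taxed)  # the taxed firm produces less, matching the social optimum
```

Here everything is quantified and checkable, which is precisely what's missing when the inputs are "utils" rather than dollars of external cost.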
Addendum:
1. I actually think you can represent any ethical framework as a utilitarian cost function, provided you define the state space appropriately and use indicator functions. So in one sense I’m purely a utilitarian, except that I use an ensemble model. But in a much more real sense, I won’t pretend to do all this computation in the minute-to-minute activity of life.
2. This essay by Linch, ranking ways of knowing, is a fun read, and roughly aligns with my beliefs. In particular, the notion of consilience is very valuable for building up a world model, though in this case, I don't think it gets you past the problem of testing ethics that I described. https://open.substack.com/pub/linch/p/which-ways-of-knowing-actually-work
3. This essay on ethical aggregation by Daniel Muñoz, "Each Counts for One," ends with the conjecture that "there is no convincing argument for, or against, the view that the numbers count—there is only a clash of basic intuitions," which is a more formal version of my issue with biting ethical bullets. https://open.substack.com/pub/bigifftrue/p/daniel-munoz-unc-chapel-hill-each
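The first addendum point can be made concrete. A hard deontic rule becomes an indicator function with a large (or infinite) penalty, and the "ensemble model" is just a weighted sum of framework-specific cost terms. A minimal sketch, with toy states and entirely made-up framework functions:

```python
# Sketch of "any ethical framework as a utilitarian cost function" via
# indicator terms. States, weights, and both cost functions are toy
# assumptions, not a serious moral model.

def deontic_cost(state):
    # Indicator function: a violated rule gets infinite cost, else zero.
    return float("inf") if state["lied"] else 0.0

def welfare_cost(state):
    # Classic utilitarian term: minimize the negative of total welfare.
    return -state["total_welfare"]

def ensemble_cost(state, weights=(1.0, 1.0)):
    # The "ensemble model": a weighted sum over frameworks.
    return weights[0] * deontic_cost(state) + weights[1] * welfare_cost(state)

honest = {"lied": False, "total_welfare": 5}
lie = {"lied": True, "total_welfare": 8}

# The lie yields more welfare, but the indicator term vetoes it outright.
best = min([honest, lie], key=ensemble_cost)
print(best["lied"])  # the honest state wins
```

Of course, the point of the addendum stands: nobody actually evaluates this sum minute to minute, and nothing in the formalism tells you where the weights come from.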
Would you kill your family to save 5 Chinese billionaires? 😂
Depends. How broadly do you define my family, and which 5 billionaires?
Spoken like a true utilitarian. I instinctively sympathize with utilitarianism, but shrug my shoulders at edge cases and am fine being inconsistent. I would pull the trolley lever to save five people, but not if one of them were a family member/close friend.
This is the silly edgelord utilitarianism of many thoughtful yet misguided teenagers. The lack of empathy that would be required to contemplate responding 'yes' to something like this would call into question why you even care about 'good' at all, let alone utilitarianism.
If you’re actually serious then why aren’t you currently euthanizing yourself next to a hospital and allowing your organs to save the lives of perhaps 5-10 strangers?