There’s been a recent flurry of social-media #discourse around ethical philosophy, mostly as applied to the idea of “longtermism,” which I think in turn is tied to the recent release of What We Owe the Future by William MacAskill. This mostly came to my attention via Matt Yglesias questioning the naming of the school of thought, and Freddie deBoer declaring that the underlying utilitarian philosophy is fatally flawed. It was a whole big Thing for a while, though, with seemingly everybody in my egghead quadrant of Twitter slinging Takes on utilitarian ideas.
As I said on Twitter, this was useful to me largely as a reminder that there are, in fact, topics I find more tediously unproductive than wrangling about quantum foundations. On the quantum side, at least some of the news hooks tend to involve genuinely clever proposals or technical advances that offer the prospect of doing something that will provide new information. The ethical stuff seems to just go around and around in the same well-worn circles.
The arguments against utilitarian ideas that numerous people have brought out this week are basically the same stuff I remember sitting through thirty-mumble years ago in Philosophy 101: following the logic to some extreme leads to the conclusion that you’re obliged to do something that seems morally repellent. It’s not wrong, but the thing is, you can play the same game with basically every philosophical system coherent enough to have logic you can follow to some extreme (usually involving at least one step that leads non-philosophers to say “Yeah, well, that’s a stupid thing to do…”). I definitely agree that “longtermism” can lead to some silly places, but so does everything else.
In the end, most of these arguments end up feeling like the Axiom of Choice situation in mathematics— that is, you can construct a coherent system based on any of these, but all of them will have some consequence that people who prefer a different choice of axiom consider unacceptable. (Accept the Axiom of Choice and you’re stuck with the Banach–Tarski paradox; reject it and you lose handy results like “every vector space has a basis.”) And, ultimately, that makes arguing about the axioms sort of frustrating and pointless— all of the choices are equally bad, but at the same time all of the choices are equally good.
On the specific question of the utilitarian aspects of this whole business, I end up feeling that while the flaws in the idea are well known, it hangs around in the popular consciousness for more or less the same reason that many working scientists cling to a kind of Popperian view of science as hypothesis falsification. In both cases, the core logic can’t be extended indefinitely, but there’s a useful and appealing heuristic in there: “Will this make more people happy than it makes unhappy?” is a not-terrible guide to making ethical decisions in the same way that “What would I do to falsify this hypothesis?” is a not-terrible way of deciding how to attack problems in science. I’m basically on board with what Yglesias elsewhere calls “Good Old-Fashioned Effective Altruism,” in that I agree it’s probably useful to think in a somewhat utilitarian way when deciding between throwing $100 to your alma mater or donating $100 to eradicating malaria or the like.
More broadly, I find myself getting frustrated with the widespread tendency to both seek and grant praise to people for “Asking the Big Questions,” because so many of those end up having this axiomatic character that makes them fundamentally unanswerable. I don’t think we do enough to celebrate people who spend time answering small questions, whether those are experiments to test some scientific theory, or tweaks to make public services and policies more effective. Those incremental steps of real progress are what actually add up to make a better future, and they’re largely independent of the endless circling around the “Big Questions” that ends up being treated as the more significant intellectual activity.
This was meant to be a quick post while I waited for my computer to grind through some basic operations, but both the writing and the grinding ended up dragging on longer than I wanted. I’ll probably revisit this at some point, and if you’d like to see that, here’s a button:
If you object to any of this, or just want to provide me with diversion from tedious administrative tasks, the comments will be open:
Utilitarianism is just fine for marginal changes. That’s why the fit with microeconomics is so good.