In my last post, I wrote about ethics and argued that utilitarianism is the logical underlying basis for all other types of ethics, including non-consequentialist ethics. This post is a response to David Friedman’s arguments about utilitarianism in his discussion of the morality of libertarianism in his book The Machinery of Freedom (some chapters of which can be found on his website).
The non-aggression principle
Friedman discusses the libertarian principle of non-coercion, aka the non-aggression principle: the principle that aggression and coercion are morally wrong.
This principle seems compelling as a general rule, because freedom seems likely to produce good outcomes. When people interact consensually, each person at the very least believes they’ll likely be better off for it. However, Friedman points out that there are exceptions to the rule which are convincing evidence that aggression or coercion is not always morally wrong.
Friedman gives the example of a madman just about to open fire on a crowd. Is it morally right to defend others by stopping the madman? Most people would say yes, but doing so would undeniably be aggression towards the madman. One might try to resolve this by using a principle of least aggression rather than a principle of non-aggression, where the goal is to reduce total aggression (by some measure) rather than simply to minimize one’s own. In this case, you’d be stopping more aggression than you’re causing, making it morally right. However, this cannot resolve situations where there is no initial aggressor.
If a fire has started that threatens to burn down a house, and the only way to put it out is to break into a neighbor’s house and take their fire extinguisher, doing so seems like a morally reasonable action. One might justify this by assuming that, if the neighbor could be asked, they would likely consent. But what if we knew they wouldn’t consent?
One might suppose that the action is morally right as long as you pay damages to the neighbor after the fact. This seems reasonable, but it isn’t clear that it follows from any principle of least aggression. We reach a dead end where the principle cannot justify actions that seem obviously morally justifiable. At the very least, the non-aggression principle alone is not a complete moral theory, but only a useful heuristic in most cases.
As I said in my last post, a principle like the non-aggression principle can really only be evaluated using utilitarianism: does the principle lead to more or less human well-being than other principles?
The thief
A naive utilitarian argument about the case of the fire and the extinguisher is that taking the extinguisher is morally right if the damages are less than the benefit. However, this principle, broadly applied, seems likely to lead to undesirable outcomes: it would mean that someone stealing from another is justified in doing so whenever they benefit more than the victim loses.
If every thief knew exactly what to steal such that every theft was a net positive transfer (even after considering the deadweight losses that a general policy like this would cause), this might work out fine. But in reality such knowledge doesn’t usually exist, and disputes about it seem likely to be expensive. While consensual trades are very likely to improve the well-being of both parties (because each knows their own preferences best), nonconsensual “trades” like the thief’s are inherently likely to be substantially less accurate. This is not only because of the knowledge problem, but also because of an incentive problem: a thief has little incentive to accurately estimate the cost to the victim, or to act on that estimate.
So while there is good reason to encourage consensual trades, it’s hard to justify why we should expect encouraging these kinds of non-consensual trades to make humanity better off.
Case vs Rule
One reason this seemingly straightforward use of utilitarianism produces what looks like a bad result is that the example leaves out important aspects of reality. Deciding whether a moral rule is a good one based on what it says about a contrived case like this is like deciding whether a box is structurally sound based on what happens if every molecule of air inside it randomly bounces towards the center at the same time. While this is theoretically possible, and might cause the box to collapse from the sudden loss of pressure on the inner walls, it is such an unlikely scenario that analyzing it has very little practical value for deciding how to design the box.
Friedman brings up a case he believes invalidates utilitarianism as an adequate sole moral rule.
You are the sheriff of a small town plagued by a series of particularly brutal murders. Fortunately, the murderer has left town. Unfortunately, the townspeople do not believe that the murderer has left, and will regard your assertion that he has as an attempt to justify your own incompetence in failing to catch him.
Feeling is running high. If no murderer is produced, three or four innocent suspects will get lynched. There is an alternative. You can manufacture evidence to frame someone. Once he has been convicted and hung, the problem will be gone. Should you do it?
This is a similar situation: within the contrived boundaries of the hypothetical, it seems clear that the utilitarian answer is “yes”, you should frame someone. However, in a more realistic context, I think the utilitarian answer would instead be “no”. This is another case where generalizing the answer from the contrived case into a rule for behavior in similar real-world situations is likely to lead to bad outcomes.
Since we are imperfect beings with imperfect knowledge, we should expect to make mistakes. The sheriff in the example above cannot know for sure that lynchings will happen. He cannot know for sure that his best efforts to convince the town would ultimately fail, that his attempt to frame another person will succeed, or even that he actually knows what happened with the real murderer. And beyond that, a scenario like this seems so rare that a rule made for it would be far more likely to be abused than used properly.
John Stuart Mill argued for freedom of speech on the basis that no one can ever be sure that they themselves are right or that another person is wrong. A similar logic holds for freedom of all kinds. We should usually let others live their lives as they wish, because those who would command others may be wrong about what’s best and have less incentive to correct themselves for someone else’s benefit.
When we discuss questions of morality, the ultimate goal is not to find the perfect rule set for deciding whether a hypothetical sheriff in a hypothetical town was acting morally. The goal should be more practical: to find a set of moral rules which are practical to apply and which are most likely to achieve the moral result you’re looking for.
So while, in the contrived case at hand, the morally right thing for the sheriff to do on utilitarian grounds is to frame someone, it seems clear that in most similar real-world cases the expected outcome of doing something like this is very negative. A policy of allowing sheriffs to make such judgments without consequences if they get caught is very likely to lead to worse outcomes whenever the sheriff is wrong in any of his many assumptions, or has ulterior motives or conflicts of interest.
Another way to think about this is that we know the dynamics of a system of consensual interaction (aka the free market) lead towards outcomes that improve total utility over time. We know of few systems that approach this level of economic efficiency with any degree of certainty. If we do discover some new non-consensual mechanism that consistently leads towards continual improvement, perhaps we could build from it a general moral policy that guides us towards behavior likely to have positive outcomes. But until then, using the tools we do have as well as we can seems like the most likely way to maximize total future human happiness.
Saving drowning babies
After writing the above, I watched a video on Peter Singer’s paper Famine, Affluence, and Morality. I think there is a simple and straightforward utilitarian rebuttal to Singer’s argument, and it takes the same form as my rebuttal to Friedman’s example above. A rule requiring everyone to spend all their disposable income (or more) on solving the world’s immediate ills is likely to be very detrimental to total future human utility, because it would reduce investment in longer-term solutions.
However, after this I thought of an extension to the classic “drowning baby” scenario. If one baby per minute starts drowning, and you’re the only one there to save them, are you obligated to spend as much time as possible saving babies? This seems like a reasonable candidate for defeating the utilitarian hypothesis. It might also be a reasonable candidate for showing that humans are fundamentally not driven to be completely morally good beings.
I think the second is more likely, and it seems fairly obvious that the self-interest of any human has no reason to be connected with the well-being of someone far removed from them. So we shouldn’t expect our natural tendencies to always be morally good in the context of humanity as a whole. And further, we shouldn’t hold someone to be morally “bad” for not making large sacrifices to help the world.
If every minute of your time spent saving babies were unequivocally a utilitarian improvement for humanity, and there were nothing else you could do to improve the total future utility of humanity more, it stands to reason that a utilitarian would believe that spending all your time saving babies is the morally best thing to do.
There are a couple of reasons this feels wrong. One is, as I said before, that we’re self-interested human individuals and we just don’t wanna. The other is that it feels unfair: why do I have to do this if other people aren’t going to? This gets us back to the question of what general rule we could create here that would be likely to actually maximize total future utility. If the person who happened to be near the drowning babies were compelled to save as many as possible, but people who weren’t around were not, then a lot of people would just make sure to stay far enough away from the drowning babies to escape the burden.
It does suggest that the morally good thing for individuals to do would be to enter into an agreement to share the burden of saving these babies evenly among all of humanity. This both feels more fair and seems more likely to actually bring about the outcome a utilitarian would deem optimal in the given scenario.
Conclusion
We should recognize that what we think is reasonable for an individual to do can be different from what will lead to the ethically best outcome. And both of these can be different from what it is reasonable to compel people to do (e.g. via government). This can be because of uncertainty around what is morally best, but also because unfairness to the individual can make it less likely that we get the best outcome (i.e. because they refuse to cooperate).
So to recap, there are several distinct questions we can ask about morality:
What is the “right” thing for an individual to do in a scenario?
What action by an individual leads to the best outcome for society as a whole?
What rules should society have that compel people to do or not do something?
These may all have different answers for a given type of scenario.
Ultimately, I still believe the utilitarian goal of maximizing total future human happiness is a complete ethical rule set that should be able to lead us to the right answer in all cases, including most intuitive moral judgments, as long as we consider all the relevant factors, including uncertainty and human proclivities for fairness. However, it should be recognized that, as individuals, none of us are personally incentivized to maximize utility for others, and we’ll only go beyond our rational self-interest when our brains are tricked into thinking that doing so serves it. Therefore it may occasionally be reasonable to create rules and institutions that share the burden of improving the condition of others very disconnected from us, where it can be shown that this has a sufficient likelihood of improving the human condition more than individual action would.
Related: https://thewaywardaxolotl.blogspot.com/2025/03/a-critique-of-utility.html