In this post I am going to defend deference to common-sense ethics to make decisions which don’t fall directly under the purview of Effective Altruism. While any decision can technically be made on the basis of maximizing expected welfare or a similar metric, it is often difficult to explicitly make such calculations, to the point that they are intractable or even counterproductive. So when we make minor daily decisions, or when we consider major actions where lots of variables are at stake besides the conventional issues of charity and cause prioritization, it may not be the right method.
Now you might be thinking – wait, Zeke, hasn’t this been stated many times already? Don’t we all, in Effective Altruism, know about honesty and moral norms and Schelling fences and so on? What about the CEA’s guiding principles – aren’t those common-sense ethics? Are you just going to agree with things that have already been said before?
No, I’m not. In fact, none of these things are examples of common-sense ethics. All of them are abstracted principles in the same category as effective altruism, utilitarianism, deontological ethics, anti-speciesism, and other moral principles. If you believe in upholding integrity, that is a belief in the ideal of doing things on the basis of integrity; it’s not the same as doing things on the basis of whether they adhere to common-sense ethics. Sure, these ideas will give similar advice in most cases, but they won’t necessarily do so.
What it actually means to adhere to common-sense ethics is to follow “the pre-theoretical moral judgments of ordinary people.” If you spent a long time thinking about moral philosophy and came up with a bunch of principles that you think are important, then your judgments are post-theoretical. If you spent years of your life reading or blogging about philosophy, you are not an ordinary person, you are a philosopher, even if you are an amateur one. And if you are a white, affluent programmer in Silicon Valley, you are not an ordinary person, you represent an unusual and small subset of the human population.
So to follow common-sense ethics means to ask oneself whether ordinary people would find a behavior to be morally objectionable, not whether you find it objectionable or whether it violates some particular list of behaviors. Now why should we do things in this manner?
It keeps the moral focus on the core aspects of Effective Altruism.
Posturing matters. By explicitly deferring to conceptual principles such as virtues or Schelling fences, we prevent the basic ideas of cause prioritization and personal contribution from seeming to be of primary importance. If we endlessly debate moral principles which aren’t directly part of EA philosophy, we are posturing as if they are highly important and overrule basic EA ideas. This weakens our philosophical position and detracts from our main mission. It also provides a signal to potential detractors that accusations that EA is not following these principles will get a lot of traction.
It prevents harmful dissonance and obliviousness to the broader population.
By focusing on what ordinary people believe, EAs will not be out of touch or elitist. It pushes us to avoid the other-minds fallacy. For instance, think about the campaign run by GWWC Cambridge with a simulated poverty village. It was not dishonest or antithetical to the interests of the poor. It is not vicious to show people what poverty is like, nor does it violate the Categorical Imperative. You can have a poverty simulation while being committed to others, while being scientifically minded, while being open, while having integrity, and while having a collaborative spirit (CEA values). Yet it offended people, something which should have been obvious to anyone who stopped to think about what common-sense ethics really are.
It dodges philosophical issues and confusion.
The question of whether something is in accordance with common-sense ethics is essentially empirical. It is easier to resolve these kinds of issues than it is to resolve age-old philosophical debates. For example, Mechanical Turk surveys can easily provide decent data on people’s attitudes. An even easier and cheaper way is simply to ask ordinary people what they think about something. Now we can devote our philosophical thinking to the core issues of decision theory and cause prioritization, which matter for the whole world.
It recognizes psychological and social realities.
The simple fact of the world is that people’s moral attitudes are not philosophically consistent, and common-sense ethics is always contingent upon framing and social context. Recent work in experimental philosophy has demonstrated that moral judgments are frequently vulnerable to framing effects and other cognitive biases. People apply double standards on the basis of whose ideology they agree with – for instance, people on the right wing are frequently just as politically correct as those on the left; they just have different standards and methods for judging what is or isn’t politically correct. Many people today think it is okay to push one person in front of a train in order to save other people’s lives, but they probably wouldn’t think it’s okay for a charity to steal money from someone in order to save lives – even though stealing is almost certainly not as bad as pushing someone in front of a train. Many people follow the Copenhagen Interpretation of Ethics, a perspective which is basically nonsensical when scrutinized rationally.
Because of this, you will never be able to use a consistent and rigorous set of virtues or rules to determine what will or won’t make people indignant. Any ethical model of forbidden actions restrictive enough to avoid everything that sparks people’s negative moral judgments will also have many false positives, preventing us from accomplishing things that people would actually be okay with.
So what, exactly, is your advice?
When you have a set of actions and you can’t directly estimate which of them would maximize welfare for sentient life in the future, you may do any of them that would satisfy the pre-theoretical moral judgments of ordinary people. In general, worry less about these kinds of issues, stop overthinking morality, and just do what other people think would be okay – as long as you don’t have a good EA-based reason to do otherwise.