notes-philosophy-ethics-stochasticNonIntervention

Often you get the sense that, if there is a chance that something will do good and a chance that it will do evil, people tend to abstain. There are various non-ethical reasons for this. For example, negative reputation sticks more strongly than positive reputation, so for one's reputation it is more important to avoid being known for causing evil than to be known for causing good; and the law punishes lawbreakers but does not reward people who help beyond what their responsibilities require. Also, people are risk averse, so the negative utility of a bad event tends to be larger in magnitude than the positive utility of a corresponding good event. In addition, people optimistically misestimate outcomes, so in practice one must leave a margin of error when one thinks that an action may be neutral.

However, putting these other considerations aside, we can ask, ceteris paribus, is it unethical to undertake an action which will cause an equal amount of good and evil?

Utilitarian morality seems to suggest that an action should be undertaken if the good would outweigh the evil, whereas some strains of deontological morality seem to suggest that evil must not be done while good merely should be done, so an individual is not free to choose to do evil even in order to do good (note that this type of ethics reflects, or is reflected by, many legal systems).

The issue is often captured by imagined scenarios such as: "If a train is coming down a track and you can throw a switch to divert it to another track, and on the current track it will run over 5 people but on the alternate track it will run over 3 people (all attributes of the people are unknown to you, and no other person involved in causing the train to be on its current track had knowledge of them either), do you throw the switch?"

The possibly-neutral case here is when there are an equal number of people on each track. Are you then free to throw the switch at your whim?

My question here is slightly different. What if the actual amounts of good and evil to be done are determined stochastically, and only the expectation (or some other summary of the value of the distribution, perhaps one which penalizes variability or risk) is neutral? Does this change the ethical situation or not?

For example, imagine that you can choose to flip a magic coin. If the coin lands heads up, a good person who otherwise would have died will not die (e.g. they will be saved by you). If the coin lands tails up, a good person who otherwise would not have died will die (e.g. they will be killed by you). Of course, assume that there are no other covert effects, such as the coin being a cursed or corrupting tool that damages you or aids the forces of darkness. Is it ethical to flip the coin at your whim? Or is that evil?

I wish to abstract away from risk aversion. Perhaps one person dying is "more bad" than one person living is good. Or perhaps the increased variability regarding death is itself of negative value. So, imagine that the probability of the coin landing heads up is adjusted to exactly account for any such factors, leaving the expected value of the coin flip truly neutral.
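To make that adjustment concrete (this is a minimal formalization of my own; the symbols G, B, and p are illustrative and not part of the scenario as originally posed): let G > 0 stand for the value of saving a life, B > 0 for the magnitude of the disvalue of causing a death (possibly with B > G), and p for the probability of heads. Then the expected value of flipping is

  E[V] = p * G - (1 - p) * B,

and setting E[V] = 0 gives p = B / (G + B). For instance, if causing a death is taken to be twice as bad as saving a life is good (B = 2G), the coin would need to land heads with probability 2/3 for the flip to be neutral in expectation.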

A crucial difference between this scenario and the train scenario is that, in the train scenario, the outcome of the decision will be neutral in any case; if you throw the switch, you will have saved as many people as you have killed. There is no chance that you will have killed someone but saved no one. But in the stochastic scenario, it is certain that either you will have saved someone and killed no one, or killed someone and saved no one (again, we must be careful to disregard any reputational or legal effects here and focus only on the ethics).

So, the question is, is stochastic non-intervention ethically distinct from deterministic non-intervention?