Morality and ethics
Ideally I would present here the axioms of a formal system which could be used to satisfactorily (in the opinion of any reasonable person) answer many of the most important moral and ethical questions. However, I have not yet found such a system. Instead, I will present a partially systematized collection of semi-formal principles which can be used to satisfactorily (in my own opinion) answer some of these questions (at least, it seems to so far; surely it contains lots of errors and will have to be revised). By "semi-formal" I mean that it will be too pedantic and plodding and heavy-handed for the taste of people who don't like attempting to define everything and apply logic and reductionism to social affairs, but it will not be rigorous or complete enough for the taste of people who do. Also, there are plenty of unanswered questions.
The way I will use the terms morality and ethics is different from their usual usage. Usually, they are synonyms; but I will divide the things they usually refer to into two groups, and call one group "morality" and the other group "ethics".
By "morality" I mean the consideration of what is good, and what an individual should do, from their own point of view. Questions that morality is concerned with (although "concerned with" does not mean that it necessarily can answer them) include "Out of such-and-such a set of alternative situations, which one is best?", "How much better is this situation than that one?", "If a person is faced with such-and-such a choice, what should they choose?", and "How can I be a good person?". Despite the generality of those questions as I wrote them just now, morality abstracts away from intellectual questions, so morality does not tell you the answer to: "What should I do when faced with the situation: I am taking a quiz and the question is '2+2='?"
By "ethics" I mean a semi-formal system of rules for people to follow, such that the question of whether an individual or action is in compliance with the system can be answered. Whereas morality talks about what you should do, from your own point of view, ethics provides a system for the purpose of people, together as a group, to regulate themselves. The hope is that having a well-thought-out ethical system in place will lead to better outcomes than if there were no such system, or than if there were a poorly-thought-out one. Questions that ethics is concerned with include, "If a person is faced with such-and-such a choice, what are they obliged to do? What are they permitted to do?," "Between such-and-such choices which are all permitted, which are better, and by how much?," "Which people are bad, and which are good? How bad and how good?", and "Should such-and-such a person be punished for such-and-such behavior?"
So, in my system, "morality" is really all you need in order to know what you should do. "Ethics" is just a technology for social interaction which is useful.
A less systematic, but more useful essay of mine on this topic is at .
What is good?
If a tree falls in a forest and no one (no being at all, in fact) experiences it (or any consequence of it)... is that good? Would it have been better if the tree had not fallen? Since no being's experience is changed in any way, these two situations are equally good.
Good is a function only of experiences; it is otherwise independent of the material configuration of the universe.
If there is a possible experience, which was pleasant and beautiful, but no one ever experiences it, is that better than if that experience were impossible, but another one were possible, one which was unpleasant and ugly, but no one ever experienced it either (assuming that other experiences weren't changed, for instance, no one contemplated either one)? Again, since no being's experience is changed in any way, these two situations are equally good.
Good is a function only of which experiences are actually experienced by beings.
Consider the possibility of, through some period of time, feeling a slightly pleasant sensation; now consider if exactly the same things happened during that time, and during the rest of your life (and everyone else's lives), and you did and thought the same things, except that, during that time, you didn't feel that pleasant sensation; is one of these situations better than the other? Yes, the pleasant one is better.
Other things being equal, pleasure is good.
Similarly, other things being equal, pain (in the broad sense; we'll call this "suffering") is bad. Similarly, other things being equal, happiness is good. Similarly, other things being equal, beauty is good.
Is beauty good in itself, or is it only good because it causes pleasure? I don't know. Is suffering just the opposite of pleasure, or are they different in kind? I don't know.
Which is more good, beauty or pleasure? It depends. You can imagine that if you were to have the choice of seeing the most beautiful sunset you have ever seen, but you had to feel very slightly like it was just a little bit too cold for about half a second first, you would want to see the sunset. But, on the other hand, if the tradeoff was being tortured for 100 years, you would prefer not to. Is there a systematic way to weigh such things against one another? I don't know.
Is variety of experiences good? Would it be better to re-experience the experience of appreciating the same beautiful song over and over again for your whole life, or would it be better to experience hearing different songs at different times, if they were all equally beautiful? It seems like you'd prefer the different songs, but maybe this is just because there is more pleasure in variety in this case, rather than because of an inherent good. As with the question of whether beauty is good in itself or just because it brings pleasure, I don't know the answer.
Another scenario that argues that pleasure isn't everything is: which is better, a life in which every moment is spent experiencing an enormous heroin rush, and nothing else, or a life in which various beautiful things are experienced? It seems like the latter is better, even though it involves less pleasure.
Are things absolutely good and bad, or only in comparison? If nothing existed except for one person being tortured, is that bad? It seems so. So, it seems like something can be good or bad absolutely.
Does it matter how many people experience something? If one person sees a beautiful sunset, is anything gained (aside from the social factors) if they call another person and tell them to come look? If two people are tortured, is it worse than one person being tortured? It seems like it does matter for more people to experience something. Does the value increase linearly -- for example, is it twice as good for two people to see a beautiful sunset as for only one person to see it (compared with no people seeing it)? I don't know.
Does the kind of being experiencing something affect the value of the experience? Is it better for a human to experience a pleasant sensation than a hamster, if it is exactly the same sensation? I don't know -- my guess is, no.
It certainly does not matter WHICH being (of the same kind) experiences something, or where, or when. The experience of a beautiful sunset is just as valuable whether you experience it now, or whether someone else experienced it 1000 years ago (provided that the experience itself, including thoughts, mood, etc, was exactly identical). (note: this is not to say that the value of an external EVENT does not differ depending on which being is involved; for instance, you may have more of an ability to appreciate beautiful sunsets than your neighbor, in which case the same event (seeing the sunset) evokes a different experience in you and in your neighbor, and one of these experiences may be more valuable than the other one.)
How does the number of people experiencing something compare to the magnitude of the experience? Does fairness matter? Is it better for one thousand trillion people to have a moment of laughter, if in exchange one person must be horribly tortured (perhaps that is what made them laugh)? I don't know.
Is life itself (or consciousness itself) intrinsically valuable? It seems to me that life is only valuable because it allows us to have experiences, not valuable in itself. So life is extremely valuable, but only as a means to the real goal (experiences).
So, if that's what good is (good is when beings experience certain kinds of experiences, and not others), what should you do?
You should make choices that make the universe as good as possible. That is, for each choice, you should evaluate how much good may result from each choice, and then choose the alternative that causes the most good. We will call this the "morally correct choice".
Your own happiness, pleasure, pain, etc should be included in your consideration of consequences of a choice (but it should not be weighted higher than the happiness, pleasure, pain, etc of others).
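The choice rule in the two paragraphs above can be sketched in a few lines of code. This is only a toy illustration under a big assumption: that you can attach a single "estimated good" number to each alternative, which the system itself gives you no procedure for doing. The option names and utilities are invented:

```python
def morally_correct_choice(choices):
    """choices: a dict mapping each alternative to its estimated total
    good (your own welfare weighed equally with everyone else's).
    Returns the alternative with the highest estimated good."""
    return max(choices, key=choices.get)

# Hypothetical estimates for a single decision:
estimates = {
    "option A": 10,
    "option B": 3,
}
print(morally_correct_choice(estimates))  # option A
```

The substance of the moral system lives entirely in producing the numbers; the final step (pick the maximum) is trivial.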
You may be uncertain what the results of some of the choices will be. It seems that sometimes a certain lesser good can be worth more than an uncertain greater good. How to weigh it? I don't know.
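One candidate answer (though, as the text says, not obviously the correct one) is the standard expected-value rule: weigh each possible outcome's good by its probability and sum. A minimal sketch with made-up numbers, offered as one possible weighing rather than a claim that this is the right one:

```python
def expected_good(outcomes):
    """outcomes: a list of (probability, good) pairs for one choice."""
    return sum(p * g for p, g in outcomes)

certain_lesser = [(1.0, 10)]               # a sure, modest good
uncertain_greater = [(0.3, 40), (0.7, 0)]  # a 30% chance at a larger good

# The uncertain option has the higher expected value (12 vs 10),
# but expected value may not capture everything that matters here.
```

The intuition that a certain lesser good can sometimes beat an uncertain greater one is exactly the intuition that this simple rule fails to honor.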
I think that this sort of system might be what scholars call "pluralistic utilitarianism". Pluralistic because there are many final goals (pleasure, happiness, avoidance of suffering, beauty, maybe variety, possibly amongst others), not just one.
When people talk about maximizing the good, they often use the word "utility" for the good. A moral system that is based on utility is called a utilitarian system. More generally, a system that is based on looking at the consequences is called a consequentialist system.
Sometimes you make mistakes and you might make a mistake in calculating the probable consequences of your actions. Evolution has provided you with an innate ability to mark some actions as morally "dangerous" in the sense that, if you think you should do those actions, there is a relatively high chance that you have miscalculated and/or the magnitude of negative consequences if you did miscalculate is likely to be large. These are those actions that seem to be "not nice". History seems (to me) to suggest that the world would be better off if people had been more reluctant to do things that seemed very un-nice (even when they thought, after careful consideration and reasoning, that those things would be good to do). Therefore I suggest that you adopt this rule, and decline to do things that are extremely un-nice, even if your reasoning leads you to think that they would lead to good consequences.
Taking time to think about what to do is itself an action, and it is an action that costs time. To fully estimate and weigh the consequences of most actions would take an enormous amount of time, so much time that the additional good gained by making the right choice would usually be outweighed by the good lost by not doing something else with that time. So, you shouldn't actually work out the consequences of each choice as much as you can; rather, you should think about it a little bit and then make a guess. How long you should think about it depends on the importance of the choice and on the availability of time. You should be on the lookout for handy shortcuts that help you make better guesses. We've already mentioned one (niceness).
Should you follow rules? At first glance, you may think that if you should make each choice to maximize the good, then you shouldn't follow any rules, because a rule might tell you to do something other than maximize the good. However, you have to consider that, when other people are around (and they are), following rules might have good consequences in and of itself. Foremost among these is that it sets an example, making it more likely that other people will follow the rules. So, if you think that living under the rule of law is a good thing, then you should generally follow the law, even when it tells you to do something that is not maximally good (maybe even when it tells you to do something bad) -- because usually, the impact of setting the example will outweigh the loss of good (you may sometimes think that no one is watching, but there are plenty of examples where people thought that and were wrong -- also, if you follow rules, people in the future may be able to infer that a rule was probably followed even if they don't know who followed it -- for example, if you don't take something that you could steal, and then the owner comes back and sees that it wasn't stolen even though a person might have been nearby).
Note that some of the rules that you should probably follow are meta-rules; for example, obeying the law. This rule is a pointer to a whole set of rules; and it's not even a fixed set, because the laws change over time. So, even if you disagree with some particular laws (not just particular instances in which the laws tell you to do other than what you would otherwise think is best), you should consider obeying them anyway, to strengthen the rule of law in general.
Another reason to follow rules is that they are one kind of shortcut for the procedure of estimating and weighing the consequences of actions.
However, under my system, there may be times when you should break a rule. For instance, if you make a promise, and then you find that thousands of people will be killed, in a very painful way, if you keep your promise and only one will be killed (painlessly) if you break it, then you should probably break it, despite a rule that you should keep your promises.
Therefore, under my system, strictly speaking, there are no compulsory/obligatory actions or forbidden actions, at the most fundamental moral level; each potential choice is considered, and the best one out of all of them is the one you "should" do (or, to put it another way, the best choice is always compulsory, and all other choices are always forbidden). So this system speaks in terms of "should", but not in terms of "must" or "may".
The primary function of this moral system is to tell you what you should do -- its function is not to judge you and tell you whether you are a good or a bad person. I use the phrase "perfectly good person (morally)" to mean a person who follows the system at all times. Another phrase that could have been used is "morally correct person". The (morally) will be omitted for the remainder of this section. This section is only a discussion of the already stated system, it is not adding to it.
Under my system, you are a perfectly good person if you do what you should do, that is, if at each choice, you choose the option that seems likely to have the best consequences. "Seems" means seems to you, according to your guesses after your limited consideration. "Likely" and "best" involve some weighing of different sorts of things against one another (like pleasure vs. beauty, or certainty vs. magnitude of utility vs. number of people affected), which can only be done by you, since this system doesn't tell you how to do it. Remember that what makes consequences good or bad is which experiences are experienced by beings.
Even if all of your choices lead to bad consequences, you are a perfectly good person if you choose the least bad choice.
If you guessed wrong about the consequences of a choice, even if you "should have known" that it would turn out badly, that's fine, as long as you genuinely believed, at the time, that it was the best choice. It is okay to shortcut consideration and make guesses as long as you genuinely believe at the time that not spending time on further consideration of that choice is the best choice.
So, in short, you are a perfectly good person if you have good intentions, even if things turn out badly (but conversely, you are not a perfectly good person if you have bad intentions, even if things turn out well).
But what if you are not a perfectly good person (no one is); can you still be a good person? Recall that the phrase "perfectly good person" was just a shorthand we introduced for a person who follows the system. The system itself doesn't say anything about people being good or bad, only choices. But although the system doesn't talk about whether you are good or bad, it does tell you how good or bad various choices are.
If you don't choose to do that which you think has the best consequences (for example, if you do something which you think is not best, considering its effects on all beings, but which is better for you personally), then that's a morally bad choice, relative to the best choice. The degree to which it is morally bad is the degree by which you think its consequences differ from a better alternative. So, for example, if you choose not to pick up a piece of litter, and you don't think that probably has very bad consequences, then that isn't as bad as choosing not to save someone's life (assuming that you thought saving that person's life would probably have very good consequences).
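The degree-of-badness idea in the paragraph above can be sketched as a simple difference between the good you expected from your choice and the good you expected from the best alternative. The utility numbers are invented purely for illustration:

```python
def moral_badness(chosen_good, alternative_goods):
    """How morally bad a choice was, measured (per the text) as the gap
    between the chosen option's believed consequences and those of the
    best alternative the chooser believed they had."""
    best = max(alternative_goods)
    return max(0, best - chosen_good)

# Ignoring a piece of litter vs. not saving a life, with invented utilities:
print(moral_badness(chosen_good=0, alternative_goods=[1]))     # 1
print(moral_badness(chosen_good=0, alternative_goods=[1000]))  # 1000
```

Note that both inputs are the chooser's own beliefs at the time, not the actual outcomes; that is what keeps this measure consistent with the good-intentions principle above.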
So, without providing a threshold for distinguishing good from bad, one might say that you are a good person to the extent that you make morally correct choices. Psychologically, perhaps it would be a good idea to be satisfied with yourself proportional to the extent that you make morally correct choices, or conversely, to be dissatisfied with yourself proportional to the extent that your choices deviate from the morally correct choices.
The above system of morality describes what you should do, and talks about how an individual can decide what to do. In most of the rest of this page, for ease of reading, I will pretend that the above system is accepted as THE system of morality, and will often refer to it merely as "morality", rather than as "the above system of morality". Now we turn our attention to a different goal, that is, to construct a social system by which different people can look at each other and judge each other's actions (and maybe each other) as "good" or "bad".
It would be nice if morality could be used for this purpose also, and it can to some extent, but you can easily see why it isn't sufficient in and of itself. Any person could make choices that they thought would have worse consequences than the alternatives and then claim that they simply miscalculated and thought that the consequences would be better. Or, since it is left up to the individual to decide how good various situations are, and how to combine and weigh different kinds of goods and to also weigh magnitude of the good against probability and number of people affected, a person could do things that they thought would have, on balance, worse consequences than the alternatives but claim that, by their lights, those consequences are better.
Therefore, we will augment our system of morality with a different system, in order to produce a structure that can serve a social regulatory function. The goal of the regulatory function is to somehow cause people to be "more good". Whereas morality is concerned with what you actually think is right and wrong, ethics is concerned with what you are allowed to argue is right and wrong as a justification for putting social pressure on other people. Just as government claims a monopoly on violence, the following ethical system claims a monopoly on social pressure; other conventions for social pressure (for example, etiquette) may be allowed, but only insofar as they are compatible with the ethical system.
Now, this separation sounds very clean, and it is, on the highest conceptual level, but we also have to remember that mostly we're just a bunch of primates running around with our primate social instincts. Our senses of morality and of ethics are both related to our primate social instincts, and in the instincts the two might be muddled together. So, even if we thought that the best social regulatory structure is something totally different from our morality instincts -- and maybe it is, and maybe someday, a long time from today, society will work that way -- the chance of society soon successfully transitioning to something that flatly contradicts our instincts (much more than it already does) seems low.
Morality tells you what to do at every choice -- the only time it doesn't tell you exactly what to do is when two alternatives seem to have about equally good consequences. However, the following ethical system is less comprehensive and only tells you what to do sometimes.
This system has deontological as well as consequentialist aspects. Deontological means evaluating actions by their adherence to rules, rather than by their consequences (or by the intentions of the person doing them).
Fundamental deontological terms include duties (things you have to do; synonymously, things you must do; obligations) and prohibitions (things you must not do). Even when a choice is not determined absolutely by absolute duties and prohibitions, this system still may have something to say about which alternative you "should" choose. When the system has nothing to say, this is called a "free choice".
Social sanctions are either censure or punishment. Censure is the opposite of community approbation -- it's when others publicly state that something you have done is wrong. Punishment is when they withhold help, or hurt you, too (perhaps in some non-violent, or even merely verbal, way).
We define a term "ethically bad" (for conciseness, I will just say "bad" within the ethical section of this essay) which is distinct from "morally bad". Ethics is a system for the justification of the application of social pressure. If you say that some action is "bad", you mean that social sanctions may/should be applied in response to it, and if you say that some person is "bad", you mean that social sanctions may/should be applied to censure or punish them. "Evil" and "wrong" are roughly synonymous with "bad".
For example, it is not using the term correctly to say that a pillar of the community, who has not been censured or punished and whom everyone (including you) thinks should not be censured or punished, is a "bad person" ethically. Nor is it correct to say that they have committed ethically bad actions. Perhaps they have made morally bad choices, but if you don't think they should be punished, then (you don't think) those choices are ethically bad.
Next I will list some goals and desired properties of the ethical system. Many of these are inconsistent with each other (not in the technical logical sense, since the system is not yet specified fully enough to be able to deduce contradictions from the "inconsistent" pairs; by "inconsistent" I just mean that they seem to be in opposition in some way). "Pragmatic considerations" are not separated out, because, as a social technology, the entire ethical system may be said to be only a pragmatic consideration.
As a technology, an ethical system is a means to an end, and to judge whether or not it is beneficial, we look at whether its effects are better than the alternatives.
Part of the reason we follow ethical systems is because our primate social/moral/ethical instincts drive us to do so. An ethical system which punishes someone whom our instincts tell us does not deserve punishment, or which does not punish someone whom our instincts tell us should be punished, is seen as unjust, and to that extent people are less likely to follow it (or they will augment it with an additional ethical system that deals with those wrongs not "properly" addressed by the initial one; which amounts to not following it, since the ethical system claims a monopoly on the justification of social pressure).
When there is agreement on what the consequences of an action are and how they should be weighed, then the moral situation is determined, and in these cases the judgement of the ethical system should agree with that of the moral system. In particular, ethically good should in these cases coincide with morally good, and ethically bad with morally bad.
The justification is that, as noted above, part of the reason we follow ethical systems is because our primate social/moral/ethical instincts drive us to do so, and part of what these instincts do is to demand that people should be encouraged to do morally good things and discouraged from doing morally bad things. In other words, this is a special case of "Consistency with ethical instincts".
The ethical system should be suitable for all conceivable places and times, and for every culture -- indeed for every conceivable situation.
Good can be had by having different structures of social pressure in different cultures. For example, the rules of etiquette should be able to differ. Therefore, the ethical system should accommodate extensions to itself (by which I mean, other systems for justifying social pressure; the extensions need not be considered to be forms of "ethics"), provided that these extensions meet certain compatibility criteria.
This follows from universal applicability.
If some people follow this ethical system and other people follow other ethical systems, this system should still produce good results.
This follows from universal applicability.
Of course, strictly speaking, universal applicability demands the consideration of majorities of other people who follow all sorts of unusual and bizarre ethical systems. So, what we will really aim for is for this ethical system to produce the best results in the presence of other ethical systems that we consider likely to be common, at the expense of less good results in other less likely scenarios. Even if we attempt otherwise, our evaluation of this likelihood will necessarily be biased by present-day realities, so the result will be that although this ethical system will be expected to "adequately handle" any conceivable situation, it will really be optimized for certain environments, probably ones similar to the environments in which it was created.
It should be considered worse to cause a bad consequence than to merely fail to prevent a bad consequence.
This is inconsistent with the property "Consistency with morality", because morality tells us to value actions by their consequences, regardless of whether each alternative was "affirmative" or "negative". For example, if there is some situation in which alternative A is to cause a large amount of harm in one area but a very slightly larger amount of good in another area, whereas another alternative causes neither that harm nor that good, this principle demands that the null alternative be chosen, whereas morality demands that the active alternative be chosen.
One justification is that people instinctively get more angry at people who caused harm than at people who failed to cause good.
Another justification is that it is simply easier to analyze and to agree upon what the (expected) consequences of a person's actions were than it is with regards to their inactions. There are two reasons for this; first, in every choice, there is usually more than one alternative, therefore as time progresses, there are many more things that a person chose not to do than things that they chose to do. Second, a person's actual action was obviously an alternative that they were faced with, however, you have to think a bit in order to figure out what other alternatives they had.
A person should not be able to acquire or lose duties merely due to external circumstances/environment. Duties are acquired or lost only in response to (voluntary) actions.
This does not prohibit the existence of duties whose fulfillment conditions refer to the environment and hence which may be trivially fulfilled in some environments and not in others. For example, you can make an agreement that if your friend is ever broke and you have money, you will lend eir $5.
The justification is that it seems unfair for two people who act in exactly the same way to have different duties or prohibitions imposed on them.
A person should be considered to have no duties initially. Combined with the previous, this means that a person starts off with no duties (although they do start out with prohibitions) and then may acquire duties only by eir actions.
My justification for this property is rather weak. It is just that I cannot think of any duties that seem sufficiently universal so as to merit inclusion.
Involuntary or coerced actions should not be treated as actions at all, rather they should be treated as part of the environment.
The system should be easy to discuss without confusion.
This property is important because the ethical system is intended to be used by the general public to discuss who should be sanctioned. If people can't easily get their point across within the framework of the system, this will lead to additional conflict.
To the extent possible, the system should make it easy for adherents to come to an agreement (on whether or not social pressure is justified in some situation).
This property is important because a system that doesn't help people come to an agreement isn't better than no system at all (i.e. the "null system" where everyone just argues about each situation with no prior framework for discussion or shared understanding).
To the extent possible, the system should make stark, "black and white" judgements.
The justification is that discussion about such outcomes is more clear. This desirable property is in conflict with many other desirable properties, however.
"Don't combine acting to do A to X with not consenting to the idea of A being done to you in an exactly similar situation. More simply (but not as precisely): Treat others only in ways that you're willing to be treated in the same situation." -- http://www.jcu.edu/philosophy/gensler/fe/fe-5--00.htm
It must be possible to always comply with the dictates of the ethical system, that is, to never do (ethically) bad things.
This is justified because "doing bad things" by definition means that social sanctions may/should be applied in response. It seems unfair for it to be impossible to escape sanctions no matter what you do.
It must be practical to always comply with the dictates of the ethical system, that is, you should be able to avoid doing bad things without being "impractical".
Exactly which actions are impractical is of course not formally specified. But in general, impractical actions are those which most of the community doesn't do, and doesn't expect others to do.
This is justified because "doing bad things" by definition means that social sanctions may/should be applied in response, and it seems unfair for the community to apply sanctions to someone merely for doing what the rest of the community is doing, or for not doing things that the community doesn't expect them to do.
Note: both of the possibility/practicality of compliance properties include not just the actions necessary to carry out the dictates of the system, but also the actions necessary to figure out what the system demands. For example, if the ethical system requires 200 years of thought in some particular situation in order to definitely avoid sanctions, then compliance is impossible. If it requires 1 year of thought (for an action not amazingly momentous), compliance is not impossible but it is impractical.
One useful way to look for violations of the compliance properties is to check the scalability of the system with population. Consider the behavior of the system as the number of people involved rises unboundedly. By the property "universal applicability", the system is supposed to handle this case. Sometimes this reveals obvious impossibilities or impracticalities.
Aggressive actions are prohibited.
There may (or may not) be exceptions to this when the target has used aggression in the past, or when the target has threatened to use aggression.
The justification is that something like this prohibition is demanded by ethical instincts, therefore this follows from property "Consistency with ethical instincts". Another is that this will improve outcomes, because it will deter unprovoked aggression, and this deterrence will allow people to trust each other more (which means they will spend less resources protecting themselves, and will initiate more trades and joint projects).
Deception is prohibited.
There may (or may not) be exceptions to this when the target has used deception in the past, or when the target has threatened to use deception.
One justification is that something like this prohibition is demanded by ethical instincts, therefore this follows from property "Consistency with ethical instincts". Another is that this will assist in the achievement of "Clarity" (that is, clarity within ethical discussions). Another is that it will improve outcomes, by allowing people to trust each other more, and by allowing more reliable and more efficient communication.
It is prohibited to command someone to do something that is impossible.
It is never unethical to command someone to be ethical.
If the ethical system (before taking this section into consideration) would require someone to do something impossible, then that requirement is vacated.
However, a person is prohibited from placing themselves into a situation for the purpose of increasing the probability of this section being activated (and hence allowing them to "get away with" behavior which would otherwise be unethical).
When you violate the ethical system, you have a duty to admit as much.
One justification is that something like this duty is demanded by ethical instincts, therefore this follows from property "Consistency with ethical instincts". Another is that this will assist in the achievement of "Clarity", because it provides an incentive for accused parties to state whether or not they think the accusation is valid.
When you violate the ethical system, you have a duty to try to reverse the effects of what you did. In some cases you are excused from this duty if the reversal would be so costly as to be impractical; the lengths to which you must go depend on the severity of the violation.
The justification is that something like this duty is demanded by ethical instincts, therefore this follows from property "Consistency with ethical instincts". In addition, it improves outcomes by calibrating the degree of deterrence to match the extent of harm committed.
If one makes an agreement with another, one acquires a duty to follow through and to do what was agreed to.
If one breaks the agreement, though, the "Duty to undo violations" may not always hold --- some ethical systems may excuse one from attempting to undo the breakage, to the extent that the agreement itself is considered to violate ethical instincts (or to fail some other criterion).
Note that, by property "Non-impact of involuntary actions," coerced agreements don't count.
To the extent that an ethical system allows the duty to undo violations of this duty to be excused by a subjective criterion such as "ethical instincts", that system violates the "objectivity" property.
The justification for this property is that something like it is demanded by ethical instincts, therefore this follows from property "Consistency with ethical instincts". In addition, it improves outcomes by introducing a mechanism for parties to make agreements with each other. In addition, it assists in the achievement of property "extensibility" by providing a mechanism to extend the system.
There is a prohibition against applying social pressure in a way not justified by the system, or by a compatible extension to the system ("compatible" as defined by the system).
Duty to fight injustice. Open questions: does this duty extend over one's whole life, or only to the extent possible, or only when an injustice is presented to you directly? In any case, it is independent of the other duties.
Sins of the fathers
Duties to self
When the parties who are discussing, including the target of potential social pressure, can agree on the utilitarian calculations, then the remainder of this ethical system is "preempted", and the justification for social pressure is based on the utilitarian moral system.
When the parties who are discussing, excluding the target of potential social pressure, can agree on the utilitarian calculations, then the situation depends on the nature of the target's disagreement.
To the extent that the target is thought to be lying, and to secretly agree with the utilitarian calculations, the other parties treat the situation as if the target agreed. If the target lies, then in addition to the considerations above, the target has violated the prohibition against deception; therefore, taking those consequences into account, a target thought to be lying is actually worse off than one who merely openly disagrees, even though the preemption doctrine itself treats them the same.
To the extent that the target is thought to genuinely disagree, the situation further depends on whether the disagreement is classified as "intellectual/legitimate" or "illegitimate". When the target is thought to honestly disagree, there is preemption only when the disagreement is somewhat illegitimate; and to the extent that it is illegitimate, there is preemption but the degree of social pressure ultimately applied is moderated. The distinctions of honesty and illegitimacy may be applied in degrees.
A disagreement about the potential/probable consequences of actions is always intellectual. A disagreement about the relative values of situations may be legitimate or illegitimate, and that distinction is not further semi-formally constrained in this document. Informally, the idea behind legitimacy is to fill a similar role to the use of "would a reasonable person think that.." in law --- people who hold illegitimate value systems are thought to be so incomprehensible or difficult or "crazy" ("crazy", but not necessarily in the actual, mentally ill sense) by the others so as to be impossible to deal with on their own terms --- these are value judgements that the other discussants can't imagine holding themselves, and indeed, can't imagine how any human could honestly hold them. Like "a reasonable person", it's kind of like an inductive proof base case, a group decision-making procedure by which the purported "objectivity" of a viewpoint is decided.
Here's a table summarizing the cases:

| Honestly disagree? | Disagreement about consequences, or values? | Value disagreement "legitimate"? | Social pressure justifiable based on utilitarian concerns? |
|---|---|---|---|
| No (thought to be lying) | either | either | Yes (treated as agreement; deception sanctions also apply) |
| Yes | consequences | (always intellectual) | No |
| Yes | values | legitimate | No |
| Yes | values | illegitimate | Yes, but moderated |
For example, if you become aware that you can save millions of children from suffering by some trivial action with no substantial cost to yourself, and you choose not to save them, then large groups of other people would agree that according to the utilitarian calculus you should have done the action, and may sanction you (although if the people applying the pressure believed you when you claimed that you disagreed with the utilitarian calculations, then the pressure applied to you would be moderated -- or, if you disagreed because you thought that the action would not actually result in saving millions of children, then the disagreement would be intellectual and you wouldn't be pressured at all).
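The preemption doctrine above can be sketched as a toy decision function. This is only an illustration of the logic, not part of the system itself: the function name, the numeric encoding of "degrees" as values in [0, 1], and the moderation factor are all my own assumptions, chosen to show how the distinctions combine.

```python
def pressure_justified(thought_honest: float, about_values: bool,
                       legitimacy: float) -> float:
    """Degree in [0, 1] to which utilitarian social pressure is justified
    against a target who disputes the utilitarian calculation.

    thought_honest: degree to which the target is believed to genuinely
        disagree (0 = thought to be lying, 1 = fully honest).
    about_values: True if the disagreement concerns the relative values of
        situations; False if it concerns the consequences of actions.
    legitimacy: degree to which a value disagreement is "legitimate"
        (ignored for disagreements about consequences).
    """
    # A target thought to be lying is treated as if they agreed, so
    # pressure is fully justified to that extent.
    lying_component = 1.0 - thought_honest

    if not about_values:
        # Disagreements about consequences are always intellectual:
        # no preemption for the honest portion of the disagreement.
        honest_component = 0.0
    else:
        # Honest value disagreement: preemption only to the extent the
        # disagreement is illegitimate, with the pressure moderated.
        moderation = 0.5  # assumed factor; the text does not specify one
        honest_component = thought_honest * (1.0 - legitimacy) * moderation

    return lying_component + honest_component
```

For instance, `pressure_justified(1.0, False, 0.0)` returns 0.0 (an honest disagreement about consequences attracts no pressure), while `pressure_justified(0.0, True, 1.0)` returns 1.0 (a target thought to be lying is treated as agreeing). The continuous inputs reflect the text's remark that honesty and illegitimacy "may be applied in degrees".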
In the language of communications protocols, the deontological ethical system can be viewed as a "fallback" protocol, only used when the preferred protocol (the utilitarian moral system) cannot be used (that is, when the parties to the discussion can't agree on the utilitarian calculation parameters).
None to start with, but may acquire by:
The legal system is built on top of ethics in a similar way to how ethics is built on top of morality. The main difference is that although morality is a fundamental philosophical concept, both ethics and the legal system are just social technologies.
Just as ethics provides more objectivity than morality, at the cost of being somewhat contradictory to our moral instincts and of requiring people to use semi-formal reasoning (which is not only annoying to some, but also takes time), the legal system provides even more objectivity at the cost of even more distance from our moral instincts, and of requiring even more formality (although it is still merely semi-formal, relying upon the judgement of humans to "interpret" things). In fact, the legal system is so formal that it stops relying upon reasoning within the heads of individuals, and instead uses a reasoning process run on top of groups of people, with a bureaucracy mediating intragroup communication. And it takes so much time that if you get involved you can't just (successfully) participate in it yourself; you have to hire a team of full-time experts to help.
And, just as the ultimate legitimacy of an ethical system is a question to be decided by a system of morality (but most of the time in "healthy" societies, one expects the system of morality to tell you to follow the ethical system), the ultimate legitimacy of a legal system is a question for a system of ethics (or a system of morality) (but most of the time in "healthy" societies, one expects ethics and morality to tell you to follow the legal system).
So, in one sense, the legal system is just another layer going further in the direction of the ethical system. But there is one big difference in when each is used in contemporary society --- the legal system is intended to be employed in disputes when the results of the process will be enforced by heavy coercion/violence, whereas in healthy contemporary societies, the ethical system usually defers to the legal system for such questions.
Thanks to RF (todo: ask him), although goodness knows he doesn't agree with me.