So far it looks like if we're moral realists (i.e., we believe there are objective moral facts) we are in deep doo-doo. In Why Be Moral, Glaucon and Adeimantus compellingly argue that it's better to appear moral than to actually be moral. The Euthyphro dilemma shows that appealing to God can't, on its own, give us an account of objective moral truth--if anything, it points us toward a naturalist account. And if that isn't bad enough, Mackie's arguments from disagreement and queerness undercut the likelihood of a naturalistic account of objective moral truth.
Before you go around raping and pillaging, let's take a look at what a naturalistic account of objective morality has to offer in response to the various problems that have emerged.
Started from the Bottom, Now We're Here
Contemporary naturalistic theories of objective morality (moral realism) pretty much all start from the same place: moral obligation is justified in terms of reasons. There is some reason for which it's wrong to poke babies in the eyes for fun. There is some reason for which you shouldn't steal. There's some reason for which you should help to feed a starving child. Reasons for and against an action are at the bottom of all naturalistic moral realist theories...and now we're here.
So far so good...except what happens if we don't share, or we disagree about, the relevant reasons with respect to what we should do? How do you decide whose reasons are the true indicator of moral facts? Maybe I have what I think is a good reason to steal Bob's money, but you tell me that the reason I shouldn't steal Bob's money is that it harms Bob's interests. I respond: well, that it will hurt Bob isn't a reason for me not to take Bob's money--I couldn't give two hoots about Bob's interests. I only care about mine.
Ok, you reply, but suppose someone were to do the same to you: would that bother you? Of course it would. It would bother me because my interests are at least as important as the interests of the next person, and stealing from me would cause my interests to be subverted by the interests of another.
Enter the principle of impartiality: Stealing from me isn't wrong because there's something special about stealing from me or because there's something special about my interests. Stealing from anyone is wrong. From the point of view of the universe, ceteris paribus, all of our interests have equal weight/are worthy of equal consideration, and so any act that unjustifiably preferences one set of interests over another is wrong.
This principle sounds good in theory, but there are also good reasons to think we needn't always act impartially and that morality doesn't demand it. If I can only save one life--the life of a close family member or a stranger I've never met--it doesn't seem wrong for me to prefer the interests of my family member. What about spending money to extend my life 1 year or spending that same money to extend the life of a stranger for 5 years? What about my personal interest in going to a concert and the interests of a starving child who could eat for 3 months off the ticket price? Is it morally wrong for me to preference my own (possibly trivial) interests in such a situation? The point is, reasons as a ground for naturalistic moral realism seem to only get us so far. As it stands, we have no clear account of how to weigh competing reasons against each other or how to reconcile them.
Another Big Problem
So far we've said that appeals to reasons ground an account of objective morality. But where do reasons come from? (On some accounts) reasons are a reflection of our motivations. We all have different motives for action, and different motives will generate different reasons for action. If I'm motivated to X then I have a reason to X. But what if I'm not motivated to X (i.e., I have no desire to X)--does that mean I have no reason to X?
Since reasons underpin naturalistic morality, people having different reasons will imply different standards of right and wrong. This undercuts any hope of objectivity in morality.
What constitutes a good reason for action for you might not be a good reason for me, so I will use my reason to justify my action and you'll use your reason to justify your different action and we'll both be right. The only way out of this mess is to come up with a way to mediate between competing reasons...
Enter Smith's moral realism
Smith's Rationalist and Internalist Moral Realism
Smith has two main issues to deal with: (1) explain how there can be objective morality despite the fact that we can all have different reasons for action, and (2) explain his answer to (1) in a way that also addresses Mackie's argument from moral disagreement and his argument from queerness.
Before proceeding, let's get one conceptual distinction out of the way: explanatory reasons vs. justifying reasons. If I keep a lost wallet, we can ask "why did you keep the wallet?" I can respond, "because I like money and there was money in it." This would be an explanatory reason. The reason I give doesn't justify my behavior, but it explains it. It is often said that explanatory reasons are agent-relative reasons. One subclass of explanatory reasons is motivational reasons: the reasons that explain an agent's actions in terms of their particular motivations, desires, and means-end beliefs (i.e., beliefs about how best to realize what they are motivated to do).
A justifying reason, on the other hand, would be something like this: "I kept the wallet because I couldn't afford food for my children and it's true that if you are given a choice between letting your children go hungry and returning a wallet, you should not return the wallet." Justifying reasons are generally considered to be reasons we'd appeal to for or against acting in a certain way. Justifying reasons are sometimes called normative reasons.
How to Get Moral Objectivity from Reasons
Solution summary: Rationality is a universal quality and humans all possess it (to varying degrees). The desires you'd rationally have about a situation are the desires that we'd share universally about that situation. Since, under ideal conditions of rationality, we'd all have the same desires (and motivations), we'd also all have the same reasons for action (in a given moral situation). Therefore, if acting rationally, we could all share the same reasons for action, thereby giving rise to objective morality.
So, to repeat, the first main problem for Smith is this: Objective moral facts can be known by appealing to reasons. However, if not everyone thinks that the same reasons are good reasons for an action, then people will have different ideas about what is right and wrong, and objective morality doesn't get off the ground.
There's a side issue that needs resolving too. What kind of reasons are we talking about when we ground moral judgment--motivational or justifying reasons? If it's only agent-relative motivational reasons, then it doesn't seem like the project will get very far: clearly, we all have different motivations for doing things. On the other hand, if we're talking only about justifying/normative reasons, then it doesn't seem that reasons have any motivational power.
What I mean is: if knowledge of right and wrong doesn't motivate action, what use is it? If mere awareness of a normative reason doesn't motivate action, there doesn't seem to be any practical value in figuring out what's right and wrong. If, upon discovering a (normative) reason for acting morally, people who were going to act immorally aren't motivated to act otherwise, what practical value is there in figuring out and explaining moral truths?
Because of this problem, Smith defends a position called "reasons internalism". Reasons internalism attempts to connect justifying reasons to agent-relative motivational reasons. In other words, reasons internalism tries to show that knowing a moral fact (justifying reason) will necessarily play a role in motivating the right actions.
Ok, now that we've got most of the terminology and context out of the way, let's take a look at how Smith attempts to deal with the problem of moral objectivity.
What is (naturalistic) moral rightness? Moral rightness is what we'd desire ourselves to do in a certain circumstance if we were fully rational. So, if you want to know what 'right' is, (a) imagine that you are perfectly rational and (b) imagine what you'd want done in that particular situation.
Consider an example: You find a wallet on the ground and want to know what to do. First imagine that you are perfectly rational, and then imagine what you would want done in that particular circumstance. Under these conditions you have a good chance (though no guarantee) of knowing what the right thing to do is.
So, where does the objectivity come from? Ah ha! I'm glad you asked. Let's work backwards for a second. How do we determine what to do? We appeal to reasons. But of course, if we all have different reasons then we'll come up with different answers about what to do. And where do reasons come from? Reasons come from agent-specific desires, beliefs, and motivations. Obviously, we differ enormously in these agent-specific respects...so appealing to them will not get us commonly held reasons.
The trick is to find a way to make everyone recognize and be motivated by the same reasons. The only way to do this is to find something that generates the same desires in everyone--something grounded in a universal, homogeneous feature we all share. Rationality. Ta-da! Since rationality is universal, if in any particular situation we imagine ourselves as purely rational, we will share the same motivations and desires (because they arise from the same source). Those same motivations and desires (across individuals) will in turn generate the same reasons for action (across individuals), which in turn will generate the same moral judgments about a particular moral situation.
Now, how does this connect to the agent-relative vs. justifying reasons issue? Knowing what a (hypothetical) fully rational agent would want to do creates in (actual) you a desire to do that thing. Added to our pre-reflective set of desires, we now have a new desire to do what a purely rational agent would do. This new desire will play a motivational role in how we act (because we want to actualize our desires). But since this new desire is something that would be universally shared (because it's what all purely rational beings would want), it is not merely an explanatory reason (i.e., "because I wanted to do x") but a justifying reason (i.e., "because it's what all fully rational agents would want").
Issues:
1. Why should we suppose that there is an overlap between what is rational and what is moral?
2. Would our desires really be the same if we were all fully rational?
3. Can desires be rational or is reason content-free?
4. Is it true that knowing what a fully rational agent would want to do would cause me to want to do that too?
Reply to Mackie's Argument from Queerness
Mackie says that moral properties can't be a feature of the world because they'd be metaphysically and epistemologically queer. I can come to know and study all the properties of matter and energy, but how come no one has ever scientifically identified the property of 'rightness' in something? I know how my senses of sight, touch, smell, taste, and hearing work. But how come no one's ever discovered a moral sense organ? If we can sense these properties, surely there must be an organ or faculty for it.
Smith's reply is this: Rightness is simply the qualities or properties we would want acts to have in circumstance C if we were fully rational. There's nothing magical going on here. If you want to know what rightness is, think about what a fully rational being would want in a particular moral situation. The features we'd want acts to have in those situations just are 'rightness'.
One might object that we've defined 'rightness' in terms of rationality, and maybe we can't give a naturalistic account of rationality. Ok, maybe so, but rationality is naturally realized; that is, it emerges from the natural world. A rational creature is simply one with a certain psychology-type. And psychology is something that can be studied scientifically, so rationality is therefore a natural quality.
Reply to Mackie's Argument from Moral Disagreement
Recall that the argument from moral disagreement goes something like this: It's an empirical fact that there is and has been a lot of substantive moral disagreement between cultures, over history, within cultures, and between individuals of the same culture. Rather than saying this moral disagreement is a consequence of people misperceiving objective moral truth, it makes more sense to say moral rules are socially constructed and reflect cultural ways of life.
In Smith's reply, notice how he employs a strategy very similar to Mackie's but starts from different evidence and argues for the opposite conclusion.
Convergence Claim:
Smith's basic reply is the convergence claim: If you removed all the distorting factors in people's ethical reasoning (cognitive biases, cultural prejudices, uncritically accepted beliefs, dogma, ideology, religion, disagreement over non-moral facts) and engaged in rational discourse, everyone would eventually end up with the same moral conclusions.
Mackie is cherry-picking: He's only looking at instances of moral disagreement, but there is and continues to be a lot of important moral agreement in the world--across cultures and individuals. The empirical fact that moral arguments tend to elicit the agreement of our fellows gives us reason to believe that there will be a convergence in our desires under conditions of full rationality.
Abduction: The best explanation of moral agreement in the world is our convergence upon a set of extremely unobvious a priori moral truths. And convergence on these truths requires convergence in the desires that fully rational creatures would have.
Counter: But what about all the moral disagreement?
Replies:
1. Alongside massive disagreement we find entrenched agreement. For example, there is widespread agreement on thick moral concepts (descriptive concepts that are also value-laden): courage, brutality, kindness, meanness, honesty. Moral agreement is so extensive that normativity has been incorporated into naturalistic descriptive concepts. If we look at how these concepts are used across cultures, we will find significant overlap not only in the behaviors they describe but also in the moral evaluation of those behaviors.
2. Past moral disagreement was removed, inter alia, by a process of moral argument. The fact that rational argument can lead to changes in cultures' and individuals' moral evaluations of cultural practices and behaviors is strong evidence for the positive role of rationality in accessing moral truth. Consider, for example, the abolition of slavery and the expansion of women's rights. Essentially, there is moral progress across and within cultures, and one reason for this is rational discourse.
3. Current intractable disagreements can be explained away by the absence of ideal conditions of reflection and discussion; i.e., if we removed the elements that distort or impede rational discourse, we'd have substantive moral agreement.
Issues:
1. Is it rational arguments that bring about change in moral attitudes or is it something else like emotions and the ability to empathize?
2. If we did remove all the distorting influences, would there be a convergence of desires of fully rational people?
3. Is the convergence claim falsifiable? If it isn't, that doesn't mean it's false, only that, as an empirical claim, it loses some of its strength.
Replies to Foot
Foot's main criticism is that logical consistency doesn't necessarily imply moral behaviour. E.g., a criminal can have logically consistent premises about what to do yet not arrive at the correct moral conclusion.
Reply
The criminal's flaw is his premises. He takes himself to have a normative reason to gain wealth no matter what the cost to others. But a fully rational creature would not want this. His desire isn't what a fully rational creature would desire.
Counter: The problem of conflicting intuitions about what a fully rational creature would want
What if the criminal says that he did rationally reflect on what a fully rational creature would want in his circumstances, and that he came up with a normative reason to gain wealth no matter what the cost to others? He comes to this conclusion even though the vast majority of others conclude the contrary.
Reply: Intellectual Arrogance
Just because his intuition differs from the vast majority's doesn't mean, ipso facto, that he is wrong. But the criminal is demonstrating intellectual arrogance. He sticks to his opinion that he has reason to gain wealth no matter what the cost to others, and he does so without good reason. He doesn't weigh his position "in light of the folk...the only court of appeal there is for claims about what we have normative reason to do."
Reflecting on what a perfectly rational individual would do doesn't guarantee the correct answer; it's a starting place. From there we engage in rational dialogue and check our intuitions and arguments against those of others. If they differ, then we need to find some reason for which we should prefer ours...especially if we are in the minority. It doesn't mean we're wrong, only that we shouldn't be so arrogant as to suppose we have stumbled upon the truth while the majority (of epistemic peers) has erred.
Issue: Is this a satisfying reply?