

Tuesday, November 12, 2013

The Moral Problem: Michael Smith

Introduction and Context
So far it looks like if we're moral realists (i.e., we believe there are objective moral facts) we are in deep doo-doo.  In Why Be Moral, Glaucon and Adeimantus compellingly argue that it's better to appear moral than to actually be moral.  The Euthyphro dilemma shows that appealing to God can't, on its own, give us an account of objective moral truth--if anything it points to a naturalist account.  And if that isn't bad enough, Mackie's arguments from disagreement and queerness undercut the likelihood of a naturalistic account of objective moral truth.

Before you go around raping and pillaging, let's take a look at what a naturalistic account of objective morality has to offer in terms of a response to the various problems that have emerged.


Started from the Bottom, Now We're Here
Contemporary naturalistic theories of objective morality (moral realism) pretty much all start from the same place:  Moral obligation is justified in terms of reasons.  There is some reason for which it's wrong to poke babies in the eyes for fun.  There is some reason for which you shouldn't steal.  There's some reason for which you should help feed a starving child.  Reasons for and against an action are at the bottom of all naturalistic moral realist theories...and now we're here.

So far so good...except what happens if we don't share or we disagree about what the relevant reasons are with respect to what we should do?  How do you decide whose reasons are the true indicator of moral facts?  Maybe I have what I think is a good reason to steal Bob's money, but you tell me that the reason I shouldn't steal Bob's money is that it harms Bob's interests.  I respond, well, that it will hurt Bob isn't a reason for me not to take Bob's money--I couldn't give two hoots about Bob's interests.  I only care about mine.

Ok, you reply, but suppose someone were to do the same to you, would that bother you?  Of course it would.  It would bother me because my interests are at least as important as the interests of the next person, and stealing from me would cause my interests to be subverted by the interests of another.

Enter the principle of impartiality:  Stealing from me isn't wrong because there's something special about stealing from me or because there's something special about my interests.  Stealing from anyone is wrong.  From the point of view of the universe, ceteris paribus, all of our interests have equal weight/are worthy of equal consideration, and so any act that unjustifiably preferences one set of interests over another is wrong.

This principle sounds good in theory, but there are also good reasons to think we needn't always act impartially, nor that morality demands it.  If I can only save one life--the life of a close family member or a stranger I've never met--it doesn't seem wrong for me to prefer the interests of my family member.  What about spending money to extend my life 1 year or spending that same money to extend the life of a stranger for 5 years?  What about my personal interest in going to a concert and the interests of a starving child who could eat for 3 months off the ticket price?  Is it morally wrong for me to preference my own (possibly trivial) interests in such a situation?  The point is, reasons as a ground for naturalistic moral realism seem to only get us so far.  As it stands, we have no clear account of how to weigh them against each other or how to reconcile competing reasons.

Another Big Problem
So far we've said that appeals to reasons ground an account of objective morality.  But where do reasons come from?  (On some accounts) reasons are a reflection of our motivations.  We all have different motives for action but different motives will generate different reasons for action. If I'm motivated to X then I have reason to X.  But what if I'm not motivated to X (i.e. I have no desire to X), does it mean that I have no reason to X?  

Since reasons underpin naturalistic morality, people having different reasons will imply different standards of wrong and right.  This will undercut any hope at objectivity in morality.

What constitutes a good reason for action for you might not be a good reason for me, so I will use my reason to justify my action and you'll use your reason to justify your different action and we'll both be right.  The only way out of this mess is to come up with a way to mediate between competing reasons...

Enter Smith's moral realism

Smith's Rationalist and Internalist Moral Realism 
Smith has two main issues to deal with:  (1)  Explain how there can be objective morality despite the fact that we all can have different reasons for action and (2) explain his answer to (1) in a way that also addresses Mackie's argument from moral disagreement and argument from queerness. 

Before proceeding, let's get one conceptual distinction out of the way:  explanatory reasons vs. justifying reasons.   If I keep a lost wallet, we can ask "why did you keep the wallet?"  I can respond "because I like money and there was money in it."  This would be an explanatory reason.  The reason I give doesn't justify my behavior, but it explains it.  It is often said that explanatory reasons are agent-relative reasons.   A subclass of explanatory reasons are motivational reasons.  These are the specific sub-class of reasons which explain an agent's actions in terms of their particular motivations, desires, and means-end beliefs (i.e., beliefs about how to best realize what they are motivated to do).

A justifying reason, on the other hand, would be something like this:  "I kept the wallet because I couldn't afford food for my children, and it's true that if you are given a choice between letting your children go hungry and returning a wallet, you should not return the wallet."  Justifying reasons are generally considered to be reasons we'd appeal to for or against acting in a certain way.  Justifying reasons are sometimes called normative reasons.

How to Get Moral Objectivity from Reasons

Solution summary: Rationality is a universal quality and humans all possess it (to varying degrees).  The desires you'd rationally have about a situation are the desires that we'd share universally about that situation.  Since, under ideal conditions of rationality, we'd all have the same desires (and motivations), we'd also all have the same reasons for action (in a given moral situation).  Therefore, we could, if acting rationally, all share the same reasons for action thereby giving rise to objective morality.

So, to repeat, the first main problem for Smith is this: Objective moral facts can be known by appealing to reasons.   However, if not everyone thinks that the same reasons are good reasons for an action, then people will have different ideas about what is right and wrong, and objective morality doesn't get off the ground.  

There's a side-issue that needs resolving too.  What kind of reasons are we talking about to ground moral judgment?  Motivational or justifying reasons?  If it's only agent-relative motivational reasons, then it doesn't seem like the project will get very far.  Clearly, we all have different motivations for doing things.  On the other hand, if we're talking only about justifying/normative reasons, then it doesn't seem that reasons have any power.

What I mean is, if knowledge of right and wrong doesn't motivate action, what use is it?  If mere awareness of a normative reason doesn't motivate action, there doesn't seem to be any practical value in figuring out what's right and wrong.  If, upon discovering a (normative) reason for acting morally, people who were going to act immorally aren't motivated to act otherwise, what practical value is there in figuring out and explaining moral truths?

Because of this problem, Smith defends a position called "reasons internalism".  Reasons internalism attempts to connect justifying reasons to agent-relative motivational reasons.  In other words, reasons internalism tries to show that knowing a moral fact (justifying reason) will necessarily play a role in motivating the right actions.

Ok, now that we've got most of the terminology and context out of the way, let's take a look at how Smith attempts to deal with the problem of moral objectivity.

What is (naturalistic) moral rightness?  Moral rightness is what we'd desire ourselves to do in a certain circumstance if we were fully rational.   So, if you want to know what 'right' is, (a) imagine that you are perfectly rational and (b) imagine what you'd want done in that particular situation.  

Consider an example:  You find a wallet on the ground and want to know what to do.  First imagine that you are perfectly rational, and then imagine what you would want done in that particular circumstance.   Under these conditions you have a good chance (not a guarantee) of knowing what the right thing to do is.

So, where does the objectivity come from?  Ah! Ha!  I'm glad you asked.  Let's work backwards for a second.  How do we determine what to do?  We appeal to reasons.  But of course, if we all have different reasons then we'll come up with different answers about what to do.  And where do reasons come from?  Reasons come from agent-specific desires, beliefs, and motivations.  Obviously, we differ enormously in these agent-specific respects...so appealing to them will not get us commonly-held reasons.

The trick is to find a way to make everyone recognize and be motivated by the same reasons.  The only way to do this is to find something that generates the same desires.  We need something that is grounded in something universal: i.e., something we all share that is homogeneous.  Rationality. Ta! Da!  Since rationality is universal, if in any particular situation we imagine ourselves as purely rational, we will share the same motivations and desires (because they arise from the same source).  Those same motivations and desires (across individuals) will in turn generate the same reasons for action (across individuals), which in turn will generate the same moral judgments about a particular moral situation.

Now, how does this connect to the agent-relative vs. justifying reasons issue? Knowing what a (hypothetical) fully rational agent would want to do creates in (actual) you a desire to do that thing. Added to our pre-reflective set of desires, we now have a new desire to do what a purely rational agent would do.  This new desire will play a motivational role in how we act (because we want to actualize desires).  But since this new desire is something that would be universally shared (because it's what all purely rational beings would want), it is not merely an explanatory reason (i.e., "because I wanted to do x") but a justifying reason (i.e., "because it's what all fully rational agents would want").

Issues:
1.  Why should we suppose that there is an overlap between what is rational and what is moral?
2.  Would our desires really be the same if we were all fully rational?
3.  Can desires be rational or is reason content-free?
4.  Is it true that knowing what a fully rational agent would want to do causes me to want to do that too?

Reply to Mackie's Argument from Queerness
Mackie says that moral properties can't be a feature of the world because they'd be metaphysically and epistemologically queer.  I can come to know and study all the properties of matter and energy, but how come no one has ever scientifically identified the property of 'rightness' in something?  I know how my senses of sight, touch, smell, taste, and hearing work.  But how come no one's ever discovered a moral sense organ?  If we can sense these properties, surely there must be an organ or faculty for it.

Smith's reply is this:  Rightness is simply the qualities or properties we would want acts to have in circumstance C if we were fully rational.   There's nothing magical going on here.  If you want to know what rightness is, think about what a fully rational being would want in a particular moral situation. The features that we'd want the acts to have in those situations are 'rightness'.

One might object that we've defined 'rightness' in terms of rationality, and maybe we can't give a naturalistic account of rationality.  Ok, maybe so, but rationality is naturally realized; that is, it emerges from the natural world.  A rational creature is simply one with a certain psychology-type.  And psychology is something that can be studied scientifically, so it is, therefore, a natural quality.

Reply to Mackie's Argument from Moral Disagreement
Recall that the argument from moral disagreement goes something like this:  It's an empirical fact that there is and has been a lot of substantive moral disagreement between cultures, over history, within cultures, and between individuals of the same culture.  Rather than saying this moral disagreement is a consequence of people misperceiving objective moral truth, it makes more sense to say moral rules are socially constructed and reflect cultural ways of life.

In Smith's reply, notice how he employs a strategy very similar to Mackie's but starts from different evidence and argues for the opposite conclusion.

Convergence Claim: 
Smith's basic reply is the convergence claim:  If you removed all the distorting factors in people's ethical reasoning (cognitive biases, cultural prejudices, uncritically accepted beliefs, dogma, ideology, religion, disagreement over non-moral facts) and engaged in rational discourse, everyone would eventually end up with the same moral conclusions.

Mackie is cherry-picking:  He's only looking at instances of moral disagreement, but there is and continues to be lots of important moral agreement in the world--across cultures and individuals.  The empirical fact that moral arguments tend to elicit the agreement of our fellows gives us reason to believe that there will be a convergence in our desires under conditions of full rationality.

Abduction: The best explanation of moral agreement in the world is our convergence upon a set of extremely unobvious a priori moral truths. And convergence on these truths requires convergence in the desires that fully rational creatures would have.

Counter: But what about all the moral disagreement?

Replies:
1.  Alongside massive disagreement we find entrenched agreement.   For example, there is widespread agreement on thick moral concepts (descriptive concepts that are also value-laden): courage, brutality, kindness, meanness, honesty.  Moral agreement is so extensive that normativity has been incorporated into naturalistic descriptive concepts.  If we look at how these concepts are used across cultures, we will find significant overlap not only in the behaviors they describe but also in the moral evaluation of those behaviors.
2.  Past moral disagreement was removed, inter alia, by a process of moral argument.   The fact that rational argument can lead to changes in cultures' and individuals' moral evaluations of cultural practices and behaviors is strong evidence for the positive role of rationality in accessing moral truth. Consider, for example, slavery and women's rights.  Essentially, there is moral progress across and within cultures, and one reason for this is rational discourse.

3.  Current intractable disagreements can be explained away by the absence of ideal conditions of reflection and discussion; i.e., if we removed the elements that distort or impede rational discourse, we'd have substantive moral agreement.

Issues:  
1.  Is it rational arguments that bring about change in moral attitudes or is it something else like emotions and the ability to empathize?
2. If we did remove all the distorting influences, would there be a convergence of desires of fully rational people?
3.  Is the convergence claim falsifiable? If it isn't, it doesn't mean it's false, only that as an empirical claim it will lose some strength.

Replies to Foot
Foot's main criticism is that logical consistency doesn't necessarily imply moral behaviour.  E.g., a criminal can have logically consistent premises about what to do yet not arrive at the correct moral conclusion.

Reply
The criminal's flaw is his premises. He has a normative reason to gain wealth no matter what the cost to others. But a fully rational creature would not want this.  His desire isn't what a fully rational creature would desire.

Counter:  The problem of conflicting intuitions about what a fully rational creature would want
What if the criminal says that he did rationally reflect on what a fully rational creature would want in his circumstance, and he came up with a normative reason to gain wealth no matter what the cost to others?  He comes to this conclusion even though the vast majority of others conclude the contrary.

Reply: Intellectual Arrogance
Just because his intuition differs from the vast majority's doesn't mean, ipso facto, he is wrong.  But the criminal is demonstrating intellectual arrogance.  The criminal sticks to his opinion that he has reason to gain wealth no matter what the cost to others.   He sticks to his view without good reason. He doesn't weigh his position "in light of the folk...the only court of appeal there is for claims about what we have normative reason to do."

Reflecting on what a perfectly rational individual would do doesn't guarantee the correct answer; it's a starting place.  From there we engage in rational dialogue and check our intuitions and arguments against those of others.  If they differ, then we need to find some reason for which we should prefer ours...especially if we are in the minority.  It doesn't mean we're wrong, only that we shouldn't be so arrogant as to suppose we have stumbled upon the truth while the majority (of epistemic peers) has erred.

Issue:  Is this a satisfying reply?

Monday, April 2, 2012

Normativity and Misunderstandings: Parfit


Notes and Thoughts on Parfit's "On What Matters", Vol. 2, Chapter on Normativity

Oh! Before we start...Welcome to my 100th published post! That's a lot of rambling...Thank you for reading my stuff.  Here are a couple of fun facts about my blog:

Total number of visits 6787
Most visits in a month 548 in Feb 2012
Strangest Stat: I've had 250 visits from Russia and 132 from Slovenia (shout out to my readers in Russia and Slovenia!)
Strangest keyword search that directed someone to my blog: "Can virtuous people do Zumba" (directed to my post on Aquinas' "Summa Theologica", which I titled "Zumba Theologica")


Ok, back to philosophy...

Overview

In the last post we talked about the disagreement between Parfit and Williams on whether there is such a thing as intrinsic good.  The debate continues, but in the context of reasons for action.  For Williams, having a reason to act means that you have some desire that you act to fulfill.  For Parfit, reasons are facts that count in favour of (or against) a certain act.  You can have a reason to act without having some corresponding desire you seek to fulfill.

Parfit uses an example to illustrate his point.  It's called the Early Death example:

Suppose that you know that unless you take a certain medicine, you will die much younger, losing many years of happy life.  Though you know this and you have deliberated in a procedurally rational way on this and all of the other relevant facts, you are not motivated to take this medicine.

On Williams' view you have no reason to take the medicine because there is nothing in your motivational set (the set of desires, dispositions, tendencies, etc. that motivate your actions).  On Parfit's view there are facts that give you good reasons that count in favour of you taking the medicine.  The split between the two, in the simplest terms, is that for Williams, reasons for action must by definition motivate action.  For Parfit, reasons needn't in themselves motivate actions, but they can count in favour of certain actions when considered by a rational agent.

Ok, I've already jumped into the arguments and we're not even out of the intro yet!  I know you're anxious for more, so let's enjoy philosophy! Yay!

[Note: If you get tired of reading what Parfit and Williams say, you can skip to the bottom section entitled "My Thoughts"!  Regarding my thoughts on the issues, I appreciate any feedback 'cuz it'll help me identify weaknesses in my future paper on the topic.  Thanks!]

Misunderstandings


So, check it.  Parfit is all "the problem is that Williams doesn't understand what reasons are".  Again, for Williams reasons can only be things that motivate action, and the only things that can motivate action are internal to us (the things in our motivational set).   The extension of Williams' position is that it is unintelligible to conceive of an "external" reason; that is, a reason for action that is not part of our pre-existing motivational set (i.e. desires).  Now that I think of it, we can say this about Williams' concept of a reason for action:  

1. A reason must have motivational force on the agent

2. Since the only things that have motivational force on an agent are elements of his motivational set, all reasons are internal to the agent. 
3.  Since all reasons are internal, any talk of external reasons for action is stark nonsense. 

Case 1
Parfit's all, "dude, you sooooo don't understand what I mean by external reasons".   Let's go back to the Early Death example.  Williams says that when we say that Sick Sam has a reason to take the medicine, it's true that we might mean that it would be better for him to take the medicine.  But Williams doesn't think that taking the medicine would be a reason for Sam.  The distinction is that other people might perceive the facts of the situation and say that they are reasons for taking the medicine, but it doesn't mean that the same facts would be reasons for Sam.

Case 2

Let's use a different example to bring out the distinction.  Suppose an unfortunate young girl is raped and gets pregnant as a consequence.  For some people, all the facts of the situation would be reasons for her to get an abortion.  For others, the same facts would not be reasons for an abortion.  Suppose the young girl has a motivational set that disposes her toward wanting an abortion.  The anti-abortion group can rant and rave about the facts that, for them, are reasons why she shouldn't have an abortion.

But Williams would argue that unless the girl has some sort of desire (after rational deliberation) to have a child, these facts won't constitute reasons for her.   They don't constitute reasons for her because they don't have any "grip" on her; they can't motivate her to act.  They have about as much motivational force on her decision as do the facts about her perpetrator's favorite sports team.

Again, Parfit wants to deny that reasons need to have motivational force.  He says that reasons are simply facts that count in favour of (or against) some action.  But it seems we need to ax: reasons for whom? in favour of what?  Regarding the latter, ultimately in explaining our actions we will say something like "because doing x is right or good".  But good for whom?  How do you know?

I think Parfit's position seems most plausible when we give uncontroversial examples--examples where "common sense" would tell us what is good.  But Parfit's claim is that there are objective values.  There are things that are objectively good.  When we give examples where people's intuitions or "common sense" differ, his position becomes less plausible.

I feel like in making this post, I'm repeating myself a lot.  Maybe that's because that's what Parfit's doing in this section.  

Case 3
I don't know why but I'm going to give one more example to contrast the two positions...Suppose Bob the Bully enjoys hurting others; it gives him more pleasure than anything.  It's just the way he's wired.  He also believes that he is more important than any other person.  His motivational set is such that he doesn't care about the suffering of others at all.  There are certain facts that most of us (I hope) could point to that give reasons against acting like Bob.  The question is whether Bob would see these facts as reasons against him acting how he does.  He might agree that certain things are facts, but, given his psychological make up he probably wouldn't see them as reasons that count against his behaviour.  

Of course, Williams' position is unappetizing for people who really want to believe in objective right and wrong.  We want to say something like, "look, bullying causes suffering and suffering is bad.  These facts are reasons against bullying-type behaviour." The problem is that while Bob might agree that bullying causes suffering (i.e., the facts), for him it's not a reason that counts against doing it.  Causing suffering makes him happy.  The problem with Bob is that, despite being presented with all the facts, he has no reason to stop bullying, although most people will think that he does.

I just noticed something.  When I'm writing Williams' arguments I agree with him, but when I write Parfit's I want to agree with Parfit.  Moving on...

Normativity and Why Does It Matter?
So how does this all relate to normativity?  In short, normativity for Williams is particular to the agent's psychology.  Facts can only be reasons in favour of/against something in relation to a particular agent's values and desires.  What will constitute a 'reason in favour' for a particular agent is a "psychological prediction" based on their desires and psychological make up.   For Parfit, reasons are facts in favour of/against actions that bring about objectively good/bad things or states.

Resolving this question is important to Parfit because he believes that certain ways of living are intrinsically better than others.  How are we supposed to decide how to live our lives if we can't appeal to reasons that are for or against one way or another?

My Thoughts
I partly agree with Parfit that some ways of living are better than others.  I'm just not sure that we can say that this is an objective fact.  But suppose for a moment that Parfit's right, that there are intrinsically better ways of living than others and that we can appeal to reasons for or against certain ways of living. The problem is that how we respond to facts and states of affairs is a consequence of our evolutionary, biological, psychological, and cultural history.  How could we possibly hope to disentangle the facts that we consider to be good reasons from our historical, individual, cultural, and biological biases?  Parfit could reply that that's no reason not to try.

Ok, let's continue with the supposition.  How could we possibly adjudicate between differing opinions?  I think that part of what makes things 'right' hangs on certain biological facts about humans.  For instance, we are social animals and thus require some sort of sanction on violence if we are to survive as a species.  I'll admit right now I have some Hobbesian tendencies.  Now, does the fact that sanctions on violence are necessary (a necessity which arises out of our biology) make sanctions on violence intrinsically good?

What if the facts of our biology were different?  Suppose we were like some strange insect species where the female rips the head off the male in order to procreate.  Then would decapitation be an intrinsic good because it arose out of our nature and was necessary for the survival of the species?

Let's set that worry aside.  There's another worry I have.  There is an assumption in Parfit that all rational individuals would respond the same way to facts that count as reasons in favour of something.  This arises out of his assertion that there are intrinsic goods.  The idea, I think, is that facts that give us reasons to act in ways that align with these intrinsic goods (happiness, compassion, knowledge, etc...) will be reasons for all of us.

Think of it this way.  Fact A counts as a reason for an action because the action aligns the individual with some intrinsic good.  Every rational person who becomes aware of fact A can acknowledge that fact A is a reason in favour of acting a certain way.  But this can only be true if everyone recognizes the aim of the action as an intrinsic good.  I have trouble accepting the idea that there can be such consensus.  I think there is something to Williams here, in that what we consider 'good' has much to do with our psychological make up.

Consider this example.  Suppose I'm walkin' down the street and I see some guy playing an instrument I've never seen before: a kazuba.  I think it sounds amazing--a cross between a kazoo and a tuba.  I'm totally inspired.  I go on the intertubes and look up what it is and order one with a handy instructional DVD.   Here's the thing.  A hundred other people walk by and none of them think the instrument sounds any good.  None of them order themselves one.

Parfit's reasons story goes something like this.  The facts about the sounds of the instrument were reasons in favour of me learning to play.  But why didn't the others respond to those same facts?  Just as I did, they all heard the sweet soothing sounds of the kazuba.  The plausible story is something like what Williams says: the facts of that sweet kazuba sound gave me reasons in favour of learning to play because there was something in my psychological make up that made me perceive the kazuba as sounding good.  If it weren't for that particular fact about me, no amount of kazuba sonatas could ever give me reasons in favour of playing the kazuba.    The upshot is that what we value is deeply intertwined with who we are as individuals.

Given the wide variety of individuals, it's not unreasonable to suggest that there is a corresponding variety of ways to ascribe value to things and concepts.  If it's true that there are differences in how value is ascribed to concepts and things, then some facts will count as reasons for some people, and those same facts might be irrelevant, or count against an action, for other people.

The problem for Parfit is this:  It only makes sense to talk about reasons for or against something if that thing has some kind of normative value.  But, as Mackie pointed out long ago, there is ample empirical evidence to suggest that there are important normative disagreements between people and cultures.  Unless Parfit can give us some guidance as to what these objective normative values are, he's going to have a difficult time making his case that there are such things.

Even if he can point to instances of agreement, the fact that there is agreement is not evidence of objectivity.  It is only evidence of agreement.  Parfit, for his part, can turn things around and say that the burden of proof lies on the skeptic: that there are moral disagreements is only evidence of disagreement, not that one side isn't right.