
Friday, January 30, 2015

Review and Summary of "Ethics and Observation" by Harman

Introduction
The major debate in metaethics is whether there are objective moral facts. There are a variety of defenses of and objections to each position. Those who say there are objective moral facts are called "realists" while those who deny realism are called anti-realists or nihilists (there are actually more positions, such as constructivists and non-cognitivists, but let's not worry about that).

A popular strategy realists use is to say that moral reasoning is analogous to either scientific reasoning or mathematical reasoning. In super condensed form, the former strategy plays on the idea that scientific reasoning is a recursive spiraling toward the truth. There's a continuous interplay between hypothesis/theory and observation, the one impacting the other. Hypotheses and theories are tested, confirmed or rejected based on the best available observations. Similarly, hypotheses and theories impact how we interpret our observations.

Moral reasoning is no different. We begin "in the middle" with both a moral theory and observations. Particular observations (judgments) influence our moral theories and theories influence how we interpret our observations. In both domains, there's a back and forth between theory and observation, and the trajectory of both enterprises aims at truth.

A similar but slightly different argument applies to the analogy between mathematical and moral reasoning. Just as we can't directly observe mathematical facts, we can't directly observe moral facts. We reason our way to them. Also, neither moral nor mathematical facts have a physical existence, yet we are sensitive to them: they inform our actions and our view of the world.

Ok, so those are very simplified versions of the arguments. In "Ethics and Observation," Gilbert Harman challenges both analogies, although he focuses mostly on the first. Harman's conclusion (at least in this article) isn't that there's no such thing as moral truth, but rather that the analogy between scientific observation and moral observation doesn't hold.

Let's check out his argoomints...

Part 1: The Basic Issue
Can moral principles be tested and confirmed the same way scientific principles can? On the surface it seems they can.  For example, suppose I want to test the principle "whenever it's possible, you should save 5 lives rather than 1."

How do we test this? Well, we can do a thought experiment and see what our verdict would be. Harman gives this example:
Suppose you're a doctor in the emergency room.  6 patients are brought in. All six are in danger of dying but one is much worse off than the others. You can just barely save that person if you devote all of your resources to him and let the others die. Or, you can save the other 5 if you are willing to ignore the most seriously injured person.
What do? It seems like this case confirms the moral principle in question.  However, this being philosophy, there will be counter-examples a-plenty.
You have 5 patients in the hospital who are dying, each in need of a separate organ. One needs a kidney, another a lung, a third a heart, and so on. You can save all five if you take a single healthy person and remove his heart, lungs, kidneys, etc... to distribute to these five patients. Just such a healthy person walks into the hospital for routine tests. His test results confirm he'd be a match as an organ donor for all 5 dying patients. If you do nothing he'll survive and 5 will die. If you apply our principle of action, that you ought to always save 5 instead of one whenever you can, then, well, you'll save 5 and only lose one life...
What do? It seems like we've tested the moral principle and the test disconfirms it. The moral principle is false (or needs to be modified).

So, is this kind of testing the same as is done in the scientific realm? It doesn't look like it. Scienticians test their hypotheses and theories against the real world, not just in their "imagination".  It doesn't seem like we can perceive the rightness or wrongness of an act the way we might perceive the color, shape, and mass of an object.

How do scienticians know bluebirds are blue? Because they can look out into the world and see the blueness of the bird in question. But how do we know if an act is wrong? It doesn't seem like you can point to the wrongness or rightness the way you can point to an object's physical properties. And so it seems as though there's a way in which scientific and moral observation are different.

How do we make moral judgments anyway? Harman gives the following case. You're walking down the street and you happen upon two teenagers (damn teenagers!) pouring gasoline on a cat and igniting it.  It doesn't seem like you perform any reasoning process. It's not like you go: Hmmm, let me see, there's a moral principle that
P1. causing unnecessary suffering of innocent creatures is wrong, and 
P2. these youths are engaged in an instance of causing unnecessary suffering to an innocent creature,
C. therefore, what they are doing is wrong.
What actually happens is you move directly to the moral judgment. You just see it's wrong. The process is a direct observation that the act is wrong. No  reasoning required.

Here's the issue: Are you perceiving something objective or is your reaction simply a product of your particular psychology? That is, if you'd been around when people tortured cats for fun, you might not have made the same judgment from your observation. Likely, your judgment is a reflection of facts about you, not about the act. More on this in a bit...

Part 2: Theory-Laden Observation
Let's get clear on what is meant by observation. In philosophy of science, it's widely recognized that there are no "raw" observations. All observations are "theory-laden." This means that, implicitly or not, we interpret all of our experiences through the lens of a theory of the world. The most basic one is that there are physical things that cause my experiences. This can be a bit difficult to wrap your head around for people who don't live in the wacky world of philosophy but check it out: you don't perceive objects directly...

You have mental representations in "the theatre of your mind" which you interpret as being representative of external physical objects which are the ultimate cause of your perceptual experience. You could be in the matrix with the exact same experiences being piped into your consciousness and nothing would be different. The worlds would be indistinguishable. The leap from "it is as though there are physical objects in my visual field" to "there are physical objects in my visual field" is an unconscious one. But it is nevertheless a theory. You interpret your experiences of the world with the theory that it is physical objects that are causing what you see in your head.

Anyhow, the point is that even at the most fundamental level we interpret observations through the lens of a theory. This happens in everyday life and it also happens in science. When a scientician sees a vapor trail in a cloud chamber, he thinks "there's a proton". He has a theory about the fundamental units of matter and the ways they interact with other matter, and that colors his interpretation of the observation. The observation itself doesn't tell him there's a proton. He adds that as a way of interpreting the raw observation. I.e., that he perceives a "proton" is the product of his theory.

Another example comes from biology. Famously (although he has since rejected it), Dawkins interpreted all evolution as being grounded in the selfishness of genes. In short, all selection can be explained by appealing to genes. If you want to know why one trait is more prevalent than another it must be because the genotype responsible for that trait confers greater fitness. His theory of selection (i.e., that genes are the fundamental unit of selection) colors how he interprets evolution.

On the other hand, a competing theory says that, at least for social animals, you have to also take into account group selection; i.e., the unit of selection can also be the group. The reason is that altruistic behavior is inexplicable at the genetic level. How can you explain why a genotype that codes for disadvantageous behaviors/traits could outcompete selfish behaviors/traits? The gene-view can't explain it. Disadvantageous traits by definition are disadvantageous and so should be outcompeted by selfish traits. And so this theory views the same evolutionary observations from the level of group selection.

In short, your theory about the unit of selection for evolution will color how you interpret the raw data. Same data/raw observations, different interpretation. Your observations are theory-laden.

Similarly, in the moral domain, we interpret our raw observations through the lens of a theory about the world.  You have a theory of the moral domain that causes you to judge the teenagers torturing the cat as wrong. If you had a different (moral) theory of the world that excluded animals from the moral domain, you'd interpret your observation differently. Your moral observations are theory-laden.

Insofar as observations are theory-laden, moral and non-moral judgments are alike. The theory we hold of the world colors how we interpret the raw observations in both domains. If there's a difference between moral and scientific observation, it must be something else...

Part 3: Observational Evidence
Here's the difference: In science you need to make assumptions about certain physical facts in order to asplain an observation. In ethics you don't have to assume there are moral facts; alz you need to do is know something about a person's psychology; you can explain someone's observation that an act is wrong based merely on facts about the observer.

The way I like to think about this is to imagine (some of you won't have to imagine cuz you already have this theory) that there are no objective moral facts. There are only moral opinions that arise out of upbringing and psychological disposition. Would the world appear to be any different? People would still opine on what is right or wrong. People would agree on some things, disagree on others. Everything would seem exactly the same...and we could explain people's observations too. We could say, this person believes x is right because his mommy told him so or this person thinks y is wrong because she has an aversion to it.  There'd be no unexplainable observations even if there were no moral facts.

Things are different in the case of science.  To make a scientific observation you have to assume that there are objective facts about the world.  I.e., there is matter and energy, and so on. When I observe that there's a cloud of vapor in a cloud chamber, the only way I can explain it is if I assume there are objective facts about the world: i.e., there's matter. I have a theory that there are protons; I observe "stuff" happening that conforms with my theory. Thus, my theory is provisionally confirmed. And unlike moral observations, you can't explain my observations by merely appealing to facts about my particular psychology and upbringing. You can't explain my observation without also assuming there are protons...there has to be something there for me to observe!

To summarize thus far: you can asplain the moral observation ("that's wrong!") by merely appealing to psychological facts about the observer. Moral facts don't need to be "real" to explain why someone might make the judgment. On the other hand, you can't asplain why someone would observe what they take to be a proton unless there actually was something real that existed. They could be wrong about it being a proton (maybe it was a neutron) but they had to observe something to explain why they report observing a proton. There must be some fact about the world in order to explain their observation. Merely appealing to psychological facts about the observer won't explain why they had the scientific observation/judgment they did.

Observations as Evidence for Theories
If I have a scientific theory that predicts a particle with such and such properties, observable under such and such conditions, observation of such a phenomenon counts as confirming evidence for my theory.  But is this the case with moral observations and theories?

Does my observation/judgment that burning the cat is wrong lend credence to my theory that says that it's objectively wrong to burn cats? Let's return to imagining there are no objective moral facts. My judgment doesn't confirm my theory. I'm just extending my theory (i.e., beliefs) to the observation. This will happen regardless of whether there is an objective moral fact about the matter or not.

Again we can contrast this with the scientific observation. If my theory predicts protons but there are no protons (or nothing with the properties my theory predicts), this will impact my theory.  Whether there are or are not objective facts about the world matters for scientific observations and their effect on theory.

Let's slow down and make a distinction here between types of observation. There's a sense in which my observation that "it's wrong to torture cats" confirms my moral theory.  For example, I have a theory with the principle that it's wrong to cause unnecessary harm and suffering to innocent sentient beings. I see the teenagers lighting the cat and I immediately observe that the act is wrong. It is an instance of causing unnecessary harm and suffering and I observed it to be wrong. That confirms my moral principle. I said that such acts are wrong, I witnessed an instance of such an act, and I also observed that it was wrong. Moral principle is confirmed!

However, there's another sense of observation in which my observation doesn't confirm my theory because it doesn't explain why I think the act is wrong.  My observation conforms with my moral theory, that such acts are wrong, but it isn't evidence in favor of my principle. We can imagine another person a few centuries ago who thought it's great entertainment to light cats on fire. They can witness the exact same act, judge that it's not wrong and say "ah! that confirms my theory that it's not wrong to cause harm and suffering to animals! My judgment confirms it!"

Not so in the case of science. If a theory predicts an entity with certain properties and those properties are observed, observing the properties explains why someone has the observation they do. And, bonus, the observation is evidence in favor of their theory/hypothesis. If no entity were observed with the predicted properties, the observation couldn't be made.  In short, the scientific (physicalist) theory explains why you observe the entity and properties that you do because the theory is about how the world is. Even if you believed the theory to be false, you'd still have the same observation:

Think Galileo. When the Church officials looked through his telescope, they didn't believe his theory yet what they observed confirmed Galileo's theory, rather than theirs. Their theoretical beliefs couldn't prevent them from seeing the evidence that disconfirmed their own theory and confirmed Galileo's. (Although, in the end they decided to reject the evidence rather than their theory...)

Possible Objection: Scienticians can (and have for all of scientific history) observe the exact same phenomena yet disagree on the theoretical interpretation of the observation. For example, both gene-based and group selection-based evolutionists observe the exact same phenomena. They don't disagree about what's happening. Evolution through differential reproduction and natural selection is happening. They disagree about what theory best explains differential reproduction and natural selection. And so, it looks like, in some respects, scientific observation is similar to moral observation.

Two people can observe the exact same thing and disagree as to what theory the observation supports.  Just as two people can observe kids torturing a cat and disagree as to whether the case supports their particular moral theories on the treatment of animals, two scientists can both observe the exact same instances of differential reproduction and natural selection and disagree over the unit of selection to explain the observation.  That is, they can disagree at the level of theory about how to interpret the observation.









Tuesday, November 12, 2013

The Moral Problem: Michael Smith

Introduction and Context
So far it looks like if we're moral realists (i.e., we believe there are objective moral facts) we are in deep doo-doo.  In Why Be Moral, Glaucon and Adeimantus compellingly argue that it's better to appear moral than to actually be moral.  The Euthyphro dilemma shows that appealing to God can't, on its own, give us an account of objective moral truth--if anything it points to a naturalist account.  And if that isn't bad enough, Mackie's arguments from disagreement and queerness undercut the likelihood of a naturalistic account of objective moral truth.

Before you go around raping and pillaging, let's take a look at what a naturalistic account of objective morality has to offer in terms of a response to the various problems that have emerged.


Started from the Bottom, Now We're Here
Contemporary naturalistic theories of objective morality (moral realism) pretty much all start from the same place:  Moral obligation is justified in terms of reasons.  There is some reason for which it's wrong to poke babies in the eyes for fun.  There is some reason for which you shouldn't steal.  There's some reason for which you should help to feed a starving child.  Reasons for and against an action are at the bottom of all naturalistic moral realist theories...and now we're here.

So far so good...except what happens if we don't share or we disagree about what the relevant reasons are with respect to what we should do?  How do you decide whose reasons are the true indicator of moral facts?  Maybe I have what I think is a good reason to steal Bob's money but you tell me that the reason I shouldn't steal Bob's money is that it harms Bob's interests.  I respond, well, that it will hurt Bob isn't a reason for me not to take Bob's money--I couldn't give two hoots about Bob's interests.  I only care about mine.

Ok, you reply, but suppose someone were to do the same to you, would that bother you?  Of course it would.  It would bother me because my interests are at least as important as the interests of the next person, and stealing from me would cause my interests to be subverted by the interests of another.

Enter the principle of impartiality:  Stealing from me isn't wrong because there's something special about stealing from me or because there's something special about my interests.  Stealing from anyone is wrong.  From the point of view of the universe, ceteris paribus, all of our interests have equal weight/are worthy of equal consideration, and so any act that unjustifiably preferences one set of interests over another is wrong.

This principle sounds good in theory but there are also good reasons to think we needn't always act impartially nor that morality demands it.  If I can only save one life: the life of a close family member or a stranger I've never met, it doesn't seem wrong for me to prefer the interests of my family member.  What about spending money to extend my life 1 year or spending that same money to extend the life of a stranger for 5 years?  What about my personal interest in going to a concert and the interests of a starving child who could eat for 3 months off the ticket price?  Is it morally wrong for me to preference my own (possibly trivial) interests in such a situation?  The point is, reasons as a ground for naturalistic moral realism seem to only get us so far.  As it stands, we have no clear account of how to weigh them against each other or how to reconcile competing reasons.

Another Big Problem
So far we've said that appeals to reasons ground an account of objective morality.  But where do reasons come from?  (On some accounts) reasons are a reflection of our motivations.  We all have different motives for action but different motives will generate different reasons for action. If I'm motivated to X then I have reason to X.  But what if I'm not motivated to X (i.e. I have no desire to X), does it mean that I have no reason to X?  

Since reasons underpin naturalistic morality, people having different reasons will imply different standards of wrong and right.  This will undercut any hope at objectivity in morality.

What constitutes a good reason for action for you might not be a good reason for me, so I will use my reason to justify my action and you'll use your reason to justify your different action and we'll both be right.  The only way out of this mess is to come up with a way to mediate between competing reasons...

Enter Smith's moral realism

Smith's Rationalist and Internalist Moral Realism 
Smith has two main issues to deal with:  (1)  Explain how there can be objective morality despite the fact that we all can have different reasons for action and (2) explain his answer to (1) in a way that also addresses Mackie's argument from moral disagreement and argument from queerness. 

Before proceeding, let's get one conceptual distinction out of the way:  explanatory reasons vs justifying reasons.   If I keep a lost wallet we can ask "why did you keep the wallet?"  I can respond "because I like money and there was money in it."  This would be an explanatory reason.  The reason I give doesn't justify my behavior but it explains it.  It is often said that explanatory reasons are agent-relative reasons.   A subclass of explanatory reasons are motivational reasons.  These are the specific sub-class of reasons which explain an agent's actions in terms of their particular motivations, desires, and means-end beliefs (i.e., beliefs about how to best realize what they are motivated to do).

A justifying reason, on the other hand, would be something like this:  "I kept the wallet because I couldn't afford food for my children and it's true that if you are given a choice between letting your children go hungry and returning a wallet, you should not return the wallet."  Justifying reasons are generally considered to be reasons we'd appeal to for or against acting in a certain way.  Justifying reasons are sometimes called normative reasons.

How to Get Moral Objectivity from Reasons

Solution summary: Rationality is a universal quality and humans all possess it (to varying degrees).  The desires you'd rationally have about a situation are the desires that we'd share universally about that situation.  Since, under ideal conditions of rationality, we'd all have the same desires (and motivations), we'd also all have the same reasons for action (in a given moral situation).  Therefore, we could, if acting rationally, all share the same reasons for action thereby giving rise to objective morality.

So, to repeat, the first main problem for Smith is this: Objective moral facts can be known by appealing to reasons.   However, if not everyone thinks that the same reasons are good reasons for an action, then people will have different ideas about what is right and wrong, and objective morality doesn't get off the ground.  

There's a side-issue that needs resolving too.  What kind of reasons are we talking about to ground moral judgment?  Motivational or justifying reasons?  If it's only agent-relative motivational reasons then it doesn't seem like the project will get very far.  Clearly, we all have different motivations for doing things.  On the other hand, if we're talking only about justifying/normative reasons then it doesn't seem that reasons have any power.

What I mean is, if knowledge of right and wrong doesn't motivate action, what use is it?  If mere awareness of a normative reason doesn't motivate action, there doesn't seem to be any practical value in figuring out what's right and wrong.  If, upon discovering a (normative) reason for acting morally, people who were going to act immorally aren't motivated to act otherwise, what practical value is there to figuring out and explaining moral truths?

Because of this problem, Smith defends a position called "reasons internalism".  Reasons internalism attempts to connect justifying reasons to agent-relative motivational reasons.  In other words, reasons internalism tries to show that knowing a moral fact (justifying reason) will necessarily play a role in motivating the right actions.

Ok, now that we've got most of the terminology and context out of the way, let's take a look at how Smith attempts to deal with the problem of moral objectivity.

What is (naturalistic) moral rightness?  Moral rightness is what we'd desire ourselves to do in a certain circumstance if we were fully rational.   So, if you want to know what 'right' is, (a) imagine that you are perfectly rational and (b) imagine what you'd want done in that particular situation.  

Consider an example:  You find a wallet on the ground and want to know what to do.  First imagine that you are perfectly rational and then imagine what you would want done in that particular circumstance.   Under these conditions you have a good chance (not a guarantee) to know what the right thing to do is. 

So, where does the objectivity come from?  Ah! Ha!  I'm glad you asked.  Let's work backwards for a second.  How do we determine what to do?  We appeal to reasons.  But of course, if we all have different reasons then we'll come up with different answers about what to do.  But where do reasons come from?  Reasons come from agent-specific desires, beliefs, and motivations.  Obviously, we differ enormously in these agent-specific respects...so appealing to them will not get us commonly-held reasons.

The trick is to find a way to make everyone recognize and be motivated by the same reasons.  The only way to do this is to find something that generates the same desires.  We need something that is grounded in something universal: i.e., something we all share that is homogenous.  Rationality. Ta! Da!  Since rationality is universal, if in any particular situation we imagine ourselves as purely rational we will share the same motivations and desires (because they arise from the same source).  Those same motivations and desires (across individuals) will in turn generate the same reasons for action (across individuals), which in turn will generate the same moral judgments about a particular moral situation. 
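To make the structure of that move vivid, here's a minimal toy sketch in Python (entirely my own illustration, not Smith's formalism; the agents, their desire sets, and the stipulated "ideal" desires are all invented for the example). The only point is that if full rationalization maps every agent onto one and the same desire set, verdicts converge no matter where the agents start:

```python
# Toy model of Smith's move (my illustration, not Smith's own formalism).
# Agents begin with different actual desires, but "full rationalization"
# maps every agent onto the same ideal desire set, so verdicts converge.

ACTUAL_DESIRES = {
    "bob":  {"keep the wallet", "avoid effort"},
    "mary": {"return the wallet", "be liked"},
}

# Stipulated: what any fully rational agent would desire done in this
# situation. That there is a single such set is exactly Smith's claim.
IDEAL_DESIRES = {"return the wallet"}

def fully_rationalize(agent):
    """Map an agent's actual desires to the ideal (universal) desire set."""
    return IDEAL_DESIRES

def is_right(act, agent):
    """An act is right iff a fully rational version of the agent would desire it."""
    return act in fully_rationalize(agent)

for agent in ACTUAL_DESIRES:
    print(agent, "->", is_right("return the wallet", agent))
# bob -> True, mary -> True: different starting desires, same verdict,
# because the verdict depends on the shared ideal desires, not the actual ones.
```

Of course, all the philosophical work is hidden in the stipulated ideal desire set; whether full rationality really fixes a unique such set is precisely what the issues below challenge.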

Now, how does this connect to the agent-relative vs justifying reasons issue? Knowing what a (hypothetical) fully rational agent would want to do creates in (actual) you a desire to do that thing. Added to our pre-reflective set of desires, we now have a new desire to do what a purely rational agent would do.  This new desire will play a motivational role in how we act (because we want to actualize desires).  But since this new desire is something that would be universally shared (because it's what all purely rational being would want), it is not merely an explanatory reason (i.e., "because I wanted to do x") but a justifying reason (i.e., "because it's what all fully rational agents would want").

Issues:
1.  Why should we suppose that there is an overlap between what is rational and what is moral?
2.  Would our desires really be the same if we were all fully rational?
3.  Can desires be rational or is reason content-free?
4.  Is it true that knowing what a fully rational agent would want to do causes me to want to do that too?

Reply to Mackie's Argument from Queerness
Mackie says that moral properties can't be a feature of the world because they'd be metaphysically and epistemologically queer.  I can come to know and study all the properties of matter and energy but how come no one has ever scientifically identified the property of 'rightness' in something?  I know how my senses of sight, touch, smell, taste, and hearing work.  But how come no one's ever discovered a moral sense organ?  If we can sense these properties, surely there must be an organ or faculty for it.

Smith's reply is this:  Rightness is simply the qualities or properties we would want acts to have in circumstance C if we were fully rational.   There's nothing magical going on here.  If you want to know what rightness is, think about what a fully rational being would want in a particular moral situation.  The features that we'd want the acts to have in those situations constitute 'rightness'.

One might object that we've defined 'rightness' in terms of rationality, and maybe we can't give a naturalistic account of rationality.  Ok, maybe so, but rationality is naturally realized; that is, it emerges from the natural world.  A rational creature is simply one with a certain psychology-type.  And psychology is something that can be studied scientifically, so it is, therefore, a natural quality.

Reply to Mackie's Argument from Moral Disagreement
Recall that the argument from moral disagreement goes something like this:  It's an empirical fact that there is and has been a lot of substantive moral disagreement between cultures, over history, within cultures, and between individuals of the same culture.  Rather than saying this moral disagreement is a consequence of people misperceiving objective moral truth, it makes more sense to say moral rules are socially constructed and reflect cultural ways of life.

In Smith's reply, notice how he employs a very similar strategy to Mackie's but starts with different evidence arguing for the opposite conclusion.  

Convergence Claim: 
Smith's basic reply is the convergence claim:  If you removed all the distorting factors in people's ethical reasoning (cognitive biases, cultural prejudices, uncritically accepted beliefs, dogma, ideology, religion, disagreement over non-moral facts) and engaged in rational discourse, everyone would eventually end up with the same moral conclusions.

Mackie is cherry-picking:  He's only looking at instances of moral disagreement but there is and continues to be lots of important moral agreement in the world--across cultures and individuals.  The empirical fact that moral arguments tend to elicit the agreement of our fellows gives us reason to believe that there will be a convergence in our desires under conditions of full rationality.

Abduction: The best explanation of moral agreement in the world is our convergence upon a set of extremely unobvious a priori moral truths. And convergence on these truths requires convergence in the desires that fully rational creatures would have.

Counter: But what about all the moral disagreement?

Replies:
1.  Alongside massive disagreement we find entrenched agreement.   For example, there is widespread agreement on thick moral concepts (descriptive concepts that are also value-laden): courage, brutality, kindness, meanness, honesty.  Moral agreement is so extensive that normativity has been incorporated into naturalistic descriptive concepts.  If we look at how these concepts are used across cultures we will find significant overlap not only in the behaviors they describe but also in the moral evaluation of those behaviors.
2.  Past moral disagreement was removed, inter alia, by a process of moral argument.   The fact that rational argument can lead to changes in cultures' and individuals' moral evaluations of cultural practices and behaviors is strong evidence for the positive role of rationality in accessing moral truth. Consider, for example, slavery and women's rights.  Essentially, there is moral progress across and within cultures, and one reason for this is rational discourse.

3.  Current intractable disagreements can be explained away by the absence of ideal conditions of reflection and discussion; i.e., if we removed the elements that distort or impede rational discourse, we'd have substantive moral agreement.

Issues:  
1.  Is it rational arguments that bring about change in moral attitudes or is it something else like emotions and the ability to empathize?
2. If we did remove all the distorting influences, would there be a convergence of desires of fully rational people?
3.  Is the convergence claim falsifiable? If it isn't, it doesn't mean it's false, only that as an empirical claim it will lose some strength.

Replies to Foot
Foot's main criticism is that logical consistency doesn't necessarily imply moral behaviour.  E.g., a criminal can have logically consistent premises about what to do yet not arrive at the correct moral conclusion.

Reply
The criminal's flaw is his premises. He has a normative reason to gain wealth no matter what the cost to others. But a fully rational creature would not want this.  His desire isn't what a fully rational creature would desire.

Counter:  The problem of conflicting intuitions about what a fully rational creature would want
What if the criminal says that he did rationally reflect on what a fully rational creature would want in his circumstance and he came up with a normative reason to gain wealth no matter what the cost to others?  He comes to this conclusion even though the vast majority of others conclude the contrary.

Reply: Intellectual Arrogance
Just because his intuition differs from the vast majority doesn't mean, ipso facto, he is wrong.  But the criminal is demonstrating intellectual arrogance.  The criminal sticks to his opinion that he has reason to gain wealth no matter what the cost to others.   He sticks to his view without good reason. He doesn't weigh his position "in light of the folk...the only court of appeal there is for claims about what we have normative reason to do."

Reflecting on what a perfectly rational individual would do doesn't guarantee the correct answer; it's a starting place.  From there we engage in rational dialogue and check our intuitions and arguments against those of others.  If they differ, then we need to find some reason for which we should prefer ours...especially if we are in the minority.  It doesn't mean we're wrong, only that we shouldn't be so arrogant as to suppose we have stumbled upon the truth while the majority (of epistemic peers) has erred.

Issue:  Is this a satisfying reply?


















Tuesday, April 17, 2012

Parfit on Moral Disagreement: Part 2


Notes and Thoughts on Moral Disagreement as Discussed in Parfit's "On What Matters" Ch. 34

Parfit Vs Basic Argument from Disagreement

Suppose you and your counterpart are in ideal epistemic conditions.  You are aware of and agree on all the non-normative facts yet your opinions differ on what the true moral belief is.  In a way, we can see this in the abortion debate and in the vegetarianism debate, to name a few.  Since both parties in these debates often agree on the non-normative facts, we should not rationally suppose that our intuitions about the issue are true.  The basic idea is that, given epistemic parity, there is no compelling reason, beyond ego, to suppose that our normative belief is the correct one or that our opponent's is wrong.

Parfit says that the Intuitionist response must be that in such situations where everyone knows all the relevant non-normative facts there wouldn't be disagreement over moral assessment.  In such ideal conditions, everyone would agree on the normative belief.   For example, if everyone on all sides of the abortion debate agreed on and knew all the relevant non-normative facts, the Intuitionist has to say that they'd all agree on what is the correct normative belief regarding abortion.

Hmmm...I'm not too sure what to think of that claim.  Anyhow,  the bottom line here is that Parfit is making an empirical claim.  He's saying that, given all the relevant epistemic information, we'd make the same moral judgments.

The cool thing about empirical claims is that they can be tested.  Remember way back in the Intro of Part 1 I axed you to answer the trolley questions?  Well, here's the thing.  There's a branch of philosophy called xphi (as in "experimental philosophy") where one of the things they do is run psychology-like experiments to test testable philosophical claims.  (Check out this link to see some cool xphi experiments.)

     
There are some xphi experiments that test people's intuitions on moral questions.  It's been a while since I read it but I remember reading something on an xphi experiment testing people's intuitions on the trolley thought experiments.  I don't remember the details but I do remember that there wasn't unanimity in moral judgments for what to do in the trolley problems.  I'll have to look it up later for my paper but I think something like 3/4 said they'd pull the lever and 3/4 said they wouldn't push the fat man.  The odd thing about the thought experiment is that despite the outcome being the same (1 person dies to save 5), most people aren't consistent in their answers across the two cases.

So how does this apply to Parfit's claim about er'body having the same intuitions given perfect information?  It seems to falsify it because, in the thought experiment, everyone has the same relevant non-normative facts, yet a statistically significant portion of respondents disagree with the majority.


There's a possible reply for Parfit here.  In his scenario he says that not only would agents have all the same non-normative facts but they'd also be using the same normative concepts (and understand the relevant arguments and not be affected by distorting influences).  So, his way out is that he can say, despite the fact that er'body agrees about the non-normative facts in the trolley experiment, they aren't all starting with the same normative concepts.  That explains the different outcomes.

Let's pause for a second and look at what Parfit (might) mean here by "shared normative concepts".  Given what he has said in previous chapters we can assume that included in this category of concepts is the notion of external reasons in favour of/against an action.


A quick review of Parfit's notion of "reasons":  For Parfit, a reason is a fact, awareness of which counts in favour of/against a particular action.  This is in contrast to Williams' notion of a reason, which is a psychological account of motivation:  i.e., there is a reason for action when an agent has a psychological desire to be satisfied.  If an agent has no psychological desire to be satisfied, then they have no reason for an action.  As I have discussed in several other posts, Williams' notion of reason is purely internal to the agent whereas Parfit's is external to the agent and objectively true for all rational creatures.

Another thing we might include in Parfit's cluster of normative concepts is the idea of intrinsic goods; i.e., things that are good in themselves, not because they lead to further things.  I'm going to hypothesize that he's including this too.

"I Am a Computer" (*in a robot voice)
Now here's where things get a little tricky.  I could be totally wrong with this but I'm going to present an analogy to Parfit's hypothesis.  Imagine a moral decision-making computer in the future that has a cool robot voice.  It says, "I am a computer" a lot.   Not only does it talk in a cool robot voice but it solves moral dilemmas for us.  How might it do this?  One thing it needs is the data.  On Parfit's model that will be the non-normative facts.  The next thing it needs is algorithms to interpret and weigh the data.  These will be the normative concepts.  The program will provide the logical structure, which for Parfit will be the arguments (maybe?).   So, Parfit is saying that if two properly functioning computers with cool robot voices had all these elements in common, they'd give the same output.
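Here's a minimal sketch of the analogy in Python (my own toy construction; the facts, the weights standing in for normative concepts, and the scoring rule are all invented purely for illustration):

```python
# Toy version of the moral decision computer (my sketch, not Parfit's).
# Non-normative facts are the data; normative concepts are modeled, very
# crudely, as weights over features; the "program" weighs them and outputs.

FACTS = {"lives_saved": 5, "lives_lost": 1, "used_as_mere_means": 1}

NORMATIVE_CONCEPTS = {            # hypothetical shared "algorithms"
    "lives_saved": +1.0,
    "lives_lost": -1.0,
    "used_as_mere_means": -10.0,  # e.g., a side-constraint against using people
}

def verdict(facts, concepts):
    """The 'program': weigh the non-normative data by the normative concepts."""
    score = sum(weight * facts.get(feature, 0)
                for feature, weight in concepts.items())
    return "permissible" if score >= 0 else "impermissible"

# Two properly functioning computers sharing the data, the concepts, and
# the program must give the same output:
print(verdict(FACTS, NORMATIVE_CONCEPTS))   # computer 1: impermissible
print(verdict(FACTS, NORMATIVE_CONCEPTS))   # computer 2: impermissible
```

Notice that agreement here is guaranteed by construction, which is exactly the worry raised in Problem 1 below.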

Problem 1:  Which normative concepts?
Hmmm...he's probably right but I'm not sure this is a very meaningful claim.  It seems tautologous to say that provided we shared the same normative concepts we'd come up with the same normative conclusions.  Duh!  In a way it seems like he's side-stepping the whole problem.  

One of the main reasons we have moral disagreement is that we don't share the same normative concepts. Furthermore, there doesn't seem to be any objective way to arbitrate between competing normative concepts without already presupposing some prior meta-normative concepts.

Parfit might reply that reasons allow us to arbitrate between competing normative concepts.  But I think this assumes everyone is going to be equally responsive to the same reasons.  I'm not sure that will be the case.

Problem 2:  Non-testability 
I don't think this claim is testable.  Let's grant that in making a moral decision, two or more people meet all the right epistemic conditions and are using the same normative concepts.  There's still another problem.  It is highly unlikely that 2 or more people will ever share every normative concept.  Maybe Bob has normative concepts {A, B, C, D, E} and Mary has normative concepts {A, B, C, D}.  In some cases, concept E won't play any determining role in the outcome; in others it might.
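Sticking with the toy computer from above (again, the weights and cases are my own invention, purely for illustration), an unshared concept like E can be idle on some inputs and decisive on others:

```python
# Bob has an extra normative concept E that Mary lacks. On some inputs E is
# idle and their verdicts agree; on others E is engaged and they diverge.

BOB  = {"A": 1.0, "E": -5.0}   # hypothetical weights over features
MARY = {"A": 1.0}

def verdict(facts, concepts):
    score = sum(w * facts.get(f, 0) for f, w in concepts.items())
    return "permissible" if score >= 0 else "impermissible"

case1 = {"A": 3, "E": 0}   # E plays no determining role
case2 = {"A": 3, "E": 1}   # E is engaged

for case in (case1, case2):
    print(verdict(case, BOB), verdict(case, MARY))
# case1: permissible permissible   -- agreement despite the unshared concept
# case2: impermissible permissible -- the unshared concept E decides
```

On inputs like case1, Bob and Mary look like they share all the relevant concepts; only inputs like case2 reveal the difference, and we can't survey every possible input.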

Of course, Parfit can reply that just because it's not actually a testable claim doesn't make it false.  He'd be right, but it doesn't make it true either.  It makes it untestable, and so difficult to know how things would actually play out. 

Parfit can also reply that in the case where E is a relevant factor, the disputants don't share a relevant concept, and so his hypothesis is right.  After all, his claim is that if disputants shared relevant normative concepts, they'd get roughly the same moral answer.  He has a strong response here, but how significant is the claim that if people share relevant normative concepts, they will get a similar answer?

Problem 3:  Problem of relevant concepts and weight.
Suppose we run the trolley experiment on some people.  We also check to see if they use the same normative concepts. One possible problem is that while they agree on which concepts are relevant, they might not agree on how the concepts should be applied or weighed.   For example we ask:  "are you applying external reasons to your analysis" and they answer in the affirmative.  It's possible that 2 people might agree on using the normative concept of external reasons, but they might disagree on what those reasons are and how they should be weighed in a particular situation.

Parfit might respond that people can make mistakes in their application and weighing of normative concepts. Mathematicians can disagree on things but they are still able to recognize mathematical truths.  That there are occasional mistakes doesn't mean there is no objective truth, it only means that someone has made a mistake.  The method needn't be infallible to support an Intuitionist account of moral realism--we're human, we're bound to err.  We could be deficient in our appraisal of which non-normative facts are relevant, how and which normative concepts should be applied, and there may be distorting influences (like culture/upbringing).  These facts only show that we can get it wrong, not that there are no objective normative truths.

A problem with this reply is that it is non-falsifiable.  Anytime there's a moral disagreement, Parfit can say either that someone has just made a mistake or that the two parties have slightly different normative concepts and one of them has the wrong one.  There's always some sort of ex post facto explanation open to him.   


Distorting Influences
Parfit partly acknowledges this problem by saying we can't simply claim that someone has been subject to a distorting influence anytime they end up with a belief that differs from our own.  He needs to give a more careful description of what constitutes a distorting influence.  He gives self-interest as a legitimate distorting factor of which we should be wary in our moral reasoning.  True dat...unless you're an egoist, in which case it is an elucidating factor...different normative concepts...oh! snap!

Parfit concludes this section by setting the bar for what is required for the Intuitionist account of moral realism to be true.  "Intuitionists need not claim that, in ideal conditions these disagreements would all be completely resolved.  But they must defend the claim that, in ideal conditions, there would not be deep and widespread moral disagreement."  

I can dig that.  However, a problem I foresee is how Parfit will show that convergence over values is a product of there being objective moral truths instead of a product of cultural convergence or evolutionary forces.  Both stories could account for moral agreement.  What would really be compelling for the moral realist's case is to give examples of widely divergent cultures that share significant moral values. That, in my estimation, would carry some weight.

Note: I wrote this before reading the entire chapter.  Later in the chapter Parfit makes stronger defenses of his claims and addresses some of my worries.  Why does he have to be so reasonable?  How am I supposed to write a critical paper about someone I'm starting to agree with?  Curse you Parfit!

Sunday, April 15, 2012

Moral Disagreement Part 1: Parfit

Notes and Thoughts on Parfit and Moral Disagreement (Ch. 34)


Introduction
Before getting into the philosophy, for those of you who aren't familiar with the trolley thought experiments, answer these questions before proceeding:


Trolley Q1:  You are the driver of a runaway tram which you can only steer from one narrow track on to another; five men are working on one track and one man on the other.   You are currently on the track to kill the five.  Do you flip the switch to change tracks and cause the one man to die to save the 5?


Trolley Q2:  This time you aren't the driver of the runaway tram.  This time there's only one track, and there are 5 workers working on it.  If the train reaches them, they'll all die.  You are on an overpass.  You notice a fat man whose body weight is such that if you pushed him off he'd cause the tram to derail and the 5 workers would be saved.  Yay!  But only at the expense of the fat man's life.  Boo!  Do you push him off?


Was your answer different for the 2 questions?  The outcome was the same, so why might it be different?  What's the difference between the situations?  And most importantly, how do you justify your position?  Keep these questions in mind as we discuss this section.  In a way Parfit wants to argue that under ideal conditions we'd all give the same answer.  There's a fairly large body of empirical evidence to the contrary, but he'll probably reply that in some way the ideal conditions hadn't been met.


Overview
Yo, check it! (That means I'm about to drop some knowledge)  We is about to talk about D-Rock Parfit's replies to the argument from disagreement (and some variations).  The argument from disagreement is one of the classical arguments against moral realism (that there are objectively true moral values).   I did a more detailed account of the argument from disagreement in my post on Mackie (Mackie calls it "the argument from diversity") but for the one or two of you not familiar with the argument it goes a lil' somethin' like this:
1.  There is somes peoples.
2.  The peoples is disagree very muchly about what is goodly and badly.
3.  If there were such a thing as objective moral values, we would expect to find agreement, not disagreement.
C.  Therefore, it is seeming there are no objectively true moral values, only relative values.  I.e., Moral truth is always relative to social/cultural/historical context; it is never objectively true.


Parfit says this argument is badly.  He does not like it.  Not one little bit!


Two last definitions.  Normative belief:  A belief about something's value.  For example, "telling the truth is good" or "causing unnecessary pain is baaaaaaaad."
Normative reason:  That's a reason in favour of doing 'x'.   Or, that's a reason against doing 'x'.  Or, that's a good/bad reason to do 'x'.


Ok, enough lol catz language.  Let's look at his criticisms.  (Can you imagine turning in a whole paper in lol catz language?  That'd be awesome. lol.)


I will add one more point.  Parfit is going to advance the thesis that ultimately it is our intuition that tells us that certain actions and reasons for actions are morally good.  This thesis is contrary to other realist positions that say we can know that something is good either by empirical means or by conceptual analysis.  


Parfit's Main Claims


Yo, check it!  Parfit is a moral realist:  He thinks that there are objective moral values.  More specifically he makes several claims:
(A).  There are some irreducibly normative reason-involving truths, some of which are moral truths.
This means there are objective truths about moral values.  For example, there are certain things that are objectively good--in the moral sense--to do, and knowing what these things are gives us reasons for or against an action.  Morally good things aren't good because of their consequences, but they are good in themselves.  Maybe, helping people is one of these things.  "But helping who?"  "I don't know, people who need help!"
(B).   Since these truths are not about natural properties, our knowledge of these truths cannot be based on perception or on evidence provided by empirical facts.
This one's pretty self-explanatory.  Things aren't good because of some property they have, things are good because they are essentially good.  Morally good things are intrinsic goods.
(C).  Positive substantive normative truths cannot be analytic, in the sense that their truth follows from their meaning.
Whether something is good is not discoverable by analyzing the meaning of the term.  E.g., that "justice" is good isn't discoverable through its definition the way that we can discover what a bachelor is by understanding the meaning of "unmarried man".   Identifying something as good isn't a matter of analyzing the concept expressed by the word.


Therefore
(D).  Our normative beliefs cannot be justified unless we are able to recognize in some other way that these beliefs are true.
Since we can't know what things are good based on empirical investigation, apprehension of properties, or conceptual analysis, we need another way to be able to identify what things are objectively good.


And, Parfit says, we do have this capacity.  Some of our normative beliefs give us reasons in favour/against certain actions and we are responsive to these reasons.  Some of our normative beliefs are "self-evident, and intrinsically credible".   


For example:  Poking children in the eyes with needles for fun is wrong.   According to Parfit, we just know that this normative belief is true.  We know this to be true just as we know that something and its negation cannot both be true (at the same time and place).


Of course the big question is how do we know these assertions to be true without appeal to our sensory perception or definitions?  It's not like I need to see someone poke a child in the eyes to know it's wrong...Parfit proposes intuitionism:  The theory that we have intuitive abilities to respond to reasons and recognize some normative truths.


Parfit is well aware of the problems with intuitionism.  Where was people's moral intuition during slavery?  Back then people often argued it is intuitively true that Africans ought to be slaves.  And what about wars?  People do horrendous things because they take it to be intuitively true that the other side is evil or that they're on the side of good (see: US military culture, or any military culture for that matter...).


Nevertheless, in the less clear cases we ought not to rely only on intuition about the act.  This is where Parfit adds in a requirement that we assess the strengths and weaknesses of conflicting reasons, arguments, and principles.  The idea is that, as with particular acts, our intuition gives us "similar abilities to recognize truths about what is rational, and about what we have reasons to believe, and want, and do."


I'm not sure I agree with him here.  I think all one has to do is look at American politics to see that people clearly do not agree on what truths are rational "and what we have reasons to believe, and want, and do".  However, I do agree with him (and Scanlon) that appeal to reasons for/against committing a particular action "is the only defensible method".   I mean, for serious, if you can't give reasons for why you did something, beyond "I just felt it was the right thing to do", then you is wack.


The problem for Parfit is in trying to show that people's intuitions will magically align regarding what they think are good objectives, reasons, and/or principles for action.  To deal with this objection Parfit argues that just because there might be disagreement over what 2 people find to be "self-evident" doesn't imply that we don't have the capacity to find out.


Consider our other senses.  People can disagree over what they see or hear, yet we don't conclude from that that they are blind and deaf.  Well...sometimes, I do.  Nevertheless, despite their infrequent lapses, we don't conclude that vision and hearing aren't reliable ways of coming to know truths about the world.


One more reply to the anti-intuitionists is that people might find that the beliefs over which they disagree aren't of the self-evident variety; that's why they're disagreeing.   Well, that sounds like a mighty convenient argument for Parfit.  "No, no, no! It's not that one of your intuitions is wrong, it's that regarding this particular matter, there is no self-evident truth--that's why you're disagreeing".  But this reply avoids one difficulty by creating another.


How is it that we are supposed to distinguish between intuitive beliefs that are self-evidently true and those that only appear to be self-evidently true?  If Parfit's answer is that we should resort to our intuition to sort it out, I'm gonna punch him right in the face!   Oh! I shouldn't say that.  Parfit's a nice guy.  Almost all accounts of him never fail to mention how nice he is.  But anyway, you know what I mean.  There's a vicious circle goin' on here, and it's making me dizzy!



Monday, April 2, 2012

Normativity and Misunderstandings: Parfit


Notes and Thoughts on Parfit's "On What Matters", Vol. 2, Chapter on Normativity

Oh! Before we start...Welcome to my 100th published post! That's a lot of rambling...Thank you for reading my stuff.  Here are a couple of fun facts about my blog:

Total number of visits 6787
Most visits in a month 548 in Feb 2012
Strangest Stat: I've had 250 visits from Russia and 132 from Slovenia (shout out to my readers in Russia and Slovenia!)
Strangest keyword search that directed someone to my blog: "Can virtuous people do Zumba" (directed to my post on Aquinas' "Summa Theologica", which I titled "Zumba Theologica")


Ok, back to philosophy...

Overview

In the last post we talked about the disagreement between Parfit and Williams on whether there is such a thing as intrinsic good.  The debate continues, but in the context of reasons for action.  For Williams, having a reason to act means that you have some desire that you act to fulfill.  For Parfit, reasons are facts that count in favour of (or against) a certain act.  You can have a reason to act without having some corresponding desire you seek to fulfill.

Parfit uses an example to illustrate his point.  It's called the Early Death example:

Suppose that you know that unless you take a certain medicine, you will die much younger, losing many years of happy life.  Though you know this and you have deliberated in a procedurally rational way on this and all of the other relevant facts, you are not motivated to take this medicine.

On Williams' view you have no reason to take the medicine because there is nothing in your motivational set (the set of desires, dispositions, tendencies, etc. that motivate your actions).  On Parfit's view there are facts that give you good reasons that count in favour of you taking the medicine.  The split between the two, in the simplest terms, is that for Williams reasons for action must by definition motivate action.  For Parfit reasons needn't in themselves motivate actions, but they can count in favour of certain actions when considered by a rational agent.

Ok, I've already jumped into the arguments and we're not even out of the intro yet!  I know you're anxious for more, so let's enjoy philosophy! Yay!

[Note: If you get tired of reading what Parfit and Williams say you can skip to the bottom section entitled "My Thoughts"!  Regarding my thoughts on the issues, I appreciate any feedback 'cuz it'll help me identify weaknesses in my future paper on the topic.  Thanks!]

Misunderstandings


So, check it.  Parfit is all, "the problem is that Williams doesn't understand what reasons are."  Again, for Williams reasons can only be things that motivate action, and the only things that can motivate action are internal to us (the things in our motivational set).   The extension of Williams' position is that it is unintelligible to conceive of an "external" reason; that is, a reason for action that is not part of our pre-existing motivational set (i.e., desires).  Now that I think of it, we can say this about Williams' concept of a reason for action:  

1. A reason must have motivational force on the agent.
2. Since the only things that have motivational force on an agent are elements of his motivational set, all reasons are internal to the agent.
3. Since all reasons are internal, any talk of external reasons for action is stark nonsense.

Case 1
Parfit's all, "dude, you sooooo don't understand what I mean by external reasons".   Lets go back to the early deph example.  Williams says that when we say that Sick Sam has a reason to take the medicine, it's true that we might mean that it would be better for him to take the medicine.  But Williams doesn't think that taking the medicine would be a reason for Sam.  The distinction is that other people might perceive the facts of the situation and say that they are reasons for taking the medicine but it doesn't mean that the same facts would be reasons for Sam.

Case 2

Let's use a different example to bring out the distinction.  Suppose an unfortunate young girl is raped and gets pregnant as a consequence.  For some people, all the facts of the situation would be reasons for her to get an abortion.  For others, the same facts would not be reasons for an abortion.  Suppose the young girl has a motivational set that disposes her toward wanting an abortion.  The anti-abortion group can rant and rave about the facts that, for them, are reasons why she shouldn't have an abortion.  

But Williams would argue that unless the girl has some sort of desire (after rational deliberation) to have a child, these facts won't constitute reasons for her.   They don't constitute reasons for her because they don't have any "grip" on her; they can't motivate her to act.  They have about as much motivational force on her decision as do the facts about her perpetrator's favorite sports team.  

Again, Parfit wants to deny that reasons need to have motivational force.  He says that reasons are simply facts that count in favour of (or against) some action.  But it seems we need to ax: reasons for whom? In favour of what?  Regarding the latter, ultimately in explaining our actions we will say something like "because doing x is right or good."  But good for whom?  How do you know?

I think Parfit's position seems most plausible when we give uncontroversial examples--examples where "common sense" would tell us what is good.  But Parfit's claim is that there are objective values; there are things that are objectively good.  When we give examples where people's intuitions or "common sense" differ, his position becomes less plausible.  

I feel like in making this post, I'm repeating myself a lot.  Maybe that's because that's what Parfit's doing in this section.  

Case 3
I don't know why, but I'm going to give one more example to contrast the two positions...Suppose Bob the Bully enjoys hurting others; it gives him more pleasure than anything.  It's just the way he's wired.  He also believes that he is more important than any other person.  His motivational set is such that he doesn't care about the suffering of others at all.  There are certain facts that most of us (I hope) could point to that give reasons against acting like Bob.  The question is whether Bob would see these facts as reasons against acting how he does.  He might agree that certain things are facts but, given his psychological makeup, he probably wouldn't see them as reasons that count against his behaviour.  

Of course, Williams' position is unappetizing for people who really want to believe in objective right and wrong.  We want to say something like, "look, bullying causes suffering and suffering is bad.  These facts are reasons against bullying-type behaviour." The problem is that while Bob might agree that bullying causes suffering (i.e., with the facts), for him it's not a reason that counts against doing it.  Causing suffering makes him happy.  So, despite being presented with all the facts, Bob has no reason to stop bullying, although most people will think that he does. 

I just noticed something.  When I'm writing Williams' arguments I agree with him, but when I write Parfit's I want to agree with Parfit.  Moving on...

Normativity and Why It Matters
So how does this all relate to normativity?  In short, normativity for Williams is particular to the agent's psychology.  Facts can only be reasons in favour of/against something in relation to a particular agent's values and desires.  What will constitute a "reason in favour" for a particular agent is a "psychological prediction" based on their desires and psychological makeup.   For Parfit, reasons are facts in favour of/against actions that bring about objectively good/bad things or states.  

Resolving this question is important to Parfit because he believes that certain ways of living are intrinsically better than others.  How are we supposed to decide how to live our lives if we can't appeal to reasons that are for or against one way or another?

My Thoughts
I partly agree with Parfit that some ways of living are better than others.  I'm just not sure that we can say that this is an objective fact.  But suppose for a moment that Parfit's right: that there are intrinsically better ways of living than others, and that we can appeal to reasons for or against certain ways of living. The problem is that how we respond to facts and states of affairs is a consequence of our evolutionary, biological, psychological, and cultural history.  How could we possibly hope to disentangle the facts that we consider to be good reasons from our historical, individual, cultural, and biological biases?  Parfit could reply that that's no reason not to try.

Ok, let's continue with the supposition.  How could we possibly adjudicate between differing opinions?  I think that part of what makes things "right" hangs on certain biological facts about humans.  For instance, we are social animals and thus require some sort of sanction on violence if we are to survive as a species.  I'll admit right now I have some Hobbesian tendencies.  Now, does the fact that sanctions on violence are necessary (a necessity that arises out of our biology) make sanctions on violence intrinsically good?  

What if the facts of our biology were different?  Suppose we were like some strange insect species where the female rips the head off the male in order to procreate.  Would decapitation then be an intrinsic good because it arose out of our nature and was necessary for the survival of the species? 

Let's set that worry aside.  There's another worry I have.  There is an assumption in Parfit that all rational individuals would respond the same way to facts that count as reasons in favour of something.  This arises out of his assertion that there are intrinsic goods.  The idea, I think, is that facts that give us reasons to act in ways that align with these intrinsic goods (happiness, compassion, knowledge, etc...) will be reasons for all of us.  

Think of it this way.  Fact A counts as a reason for an action because the action aligns the individual with some intrinsic good.  Every rational person who becomes aware of fact A can acknowledge that fact A is a reason in favour of acting a certain way.  But this can only be true if everyone recognizes the aim of the action as an intrinsic good.  I have trouble accepting the idea that there can be such consensus.  I think there is something to Williams here, in that what we consider "good" has much to do with our psychological makeup.

Consider this example.  Suppose I'm walkin' down the street and I see some guy playing an instrument I've never seen before: a kazuba.  I think it sounds amazing--a cross between a kazoo and a tuba.  I'm totally inspired.  I go on the intertubes, look up what it is, and order one with a handy instructional DVD.   Here's the thing.  A hundred other people walk by and none of them think the instrument sounds any good.  None of them order themselves one.  

Parfit's reasons story goes something like this.  The facts about the sounds of the instrument were reasons in favour of me learning to play.  But why didn't the others respond to those same facts?  Just as I did, they all heard the sweet soothing sounds of the kazuba.  The plausible story is something like what Williams says: the facts of that sweet kazuba sound gave me reasons in favour of learning to play because there was something in my psychological makeup that made me perceive the kazuba as sounding good.  If there weren't that particular fact about me, no amount of kazuba sonatas could ever give me reasons in favour of playing the kazuba.    The upshot is that what we value is deeply intertwined with who we are as individuals.  

Given the wide variety of individuals, it's not unreasonable to suggest that there is a corresponding variety of ways of ascribing value to things and concepts.  If it's true that there are differences in how value is ascribed to concepts and things, then some facts will count as reasons in favour of an action for some people, while those same facts might be irrelevant, or count against the action, for other people.  

The problem for Parfit is this:  It only makes sense to talk about reasons for or against something if that thing has some kind of normative value.  But, as Mackie pointed out long ago, there is ample empirical evidence that there are important normative disagreements between people and cultures.  Unless Parfit can give us some guidance as to what these objective normative values are, he's going to have a difficult time making his case that there are such things.  

Even if he can point to instances of agreement, the fact that there is agreement is not evidence of objectivity; it is only evidence of agreement.  Parfit, for his part, can turn things around and say that the burden of proof lies on the skeptic: that there are moral disagreements is only evidence of disagreement, not that one side isn't right.