Friday, November 13, 2015

Repressed Emotions: What Are They Supposed to Be? Part 1

Introduction and Two Theories of Emotion
You'll often hear the term 'repressed emotion' bandied about in the self-help industrial complex. In this alt-world, if you have a mental or physical illness, you can bet your bottom dollar that it's caused by a repressed emotion (usually from your childhood). The concept is often used as the cause and explanation of everything. I'll argue in Part 1 that the concept of repressed emotions as it's used in the self-help industrial complex is unintelligible. However, it does have a more respectable pedigree in Freudian psychoanalysis. In Part 2, I'll take a look at the more sophisticated Freudian account.

A Tale of Two Theories of Emotion
Let's set aside repressed emotions for a moment. Before doing a preliminary evaluation of the concept we'll need to take a quick look at the two basic theories of what emotions are. 

I'll begin with what I'll call the cognitivist account. On this view, an emotion is made up of two elements: it's a conjunction of (a) a judgment/appraisal and (b) a feeling/sensation. For example, when I experience anger, the emotion is comprised of 

(a) a judgment or evaluation that person x is disrespecting me or treating me unfairly, and
(b) the phenomenological sensation of anger (tension, heat, etc...).

Or, for example, when I experience fear the emotion is comprised of 

(a) the judgment or evaluation that something is dangerous to me/can harm me, and
(b) the phenomenological sensation of fear (racing heart, strange feeling in the stomach, etc...).

In sum, the cognitivist account requires that emotions have cognitive content (i.e., beliefs, concepts, judgments, appraisals) as well as a phenomenological 'feel' (i.e., a 'what-it's-like-ness'). The ability to distinguish feelings from emotions marks one of the advantages of the cognitivist theory. The difficulty of explaining how animals and pre-linguistic children can have emotions counts against this view, since it would require ascribing sophisticated concepts and beliefs to them.

The other main theory of emotions argues that emotions are simply a kind of (i.e., a sub-class of) feeling. On this non-cognitivist theory, emotions don't have cognitive content, they are just specific kinds of 'feels'. For example, being angry just is feeling a certain way. Being scared just is feeling a certain way. To have an emotion you needn't have made any appraisal or judgment or have any beliefs about the object/person/situation causing your feeling. It's all in the feels...

A major advantage of this view is that it's easier to ascribe emotions to animals and pre-linguistic children. Making judgments and appraisals requires complex concepts, something not easily ascribed to infants and animals. A problem with this view is that it can't distinguish between kinds of feelings. For example, the 'feel' in your stomach when you are scared and when you are angry might be the same. The feeling theory has to say they are the same emotion. There's also the problem of 'an explosion of emotions': for each slightly different feeling there is a different emotion, since what defines an emotion is its feel. Different feel = different emotion.

I won't get into the debate between the two accounts of emotion. There are ways (i.e., there are volumes of articles) that proponents of each view try to respond to the various challenges and criticisms. I merely want to point to some of the advantages and disadvantages with each and provide a framework in which to give a preliminary assessment of 'repressed emotions'. If we're going to say something is a 'repressed emotion' we need some general idea of what an emotion is first...

First Pass: Repressed Emotions Don't Make No Damn Sense
The Cognitivist View: Let's suppose for a moment that the cognitivist view of emotions is correct. An emotion is the conjunction of a judgment/appraisal and a feeling. Suppose my local self-help guru tells me I have [insert illness] because of repressed anger (from my childhood--of course). First of all, on this view, part of having an emotion requires that I have made an appraisal or judgment about some situation or person. This implies that I have a belief about that situation or person.

Mysteriously, when the guru asks me if there's anything about my childhood I'm angry about, nothing comes to mind (cuz everyone had perfect childhoods and never got angry about anything).  So, of course he doesn't name anything specific but encourages me to look deeper because there must be something. Why else would I have [insert illness]? 

Here's the thing. If I have an occurrent belief that situation x made me angry as a child, it's not a repressed belief. It's occurrent. So, that can't be the anger that's causing my [insert illness]. The belief has to be buried. It's a belief I can't access--yet it's there! Somehow, I believe something I don't know I believe! 

At this point, some charity is in order. The mind is not totally transparent, and subconscious and unconscious thought and belief are fairly widely recognized phenomena in psychology. So, let's grant for a moment that I have some subconscious belief that I was treated unfairly in situation x as a child. We'll set aside the strangeness of having a belief that you don't know you have.

At this point in the session, the guru will prod me with more questions, getting me to search my mind for instances where I might have believed myself to have been unfairly treated. As though by magic, I find a memory of an instance of being upset as a child! I judged my treatment as unfair when blah blah blah... 

Here's a question: Is this a judgment I formed as a child that I'm now recalling, i.e., is it a judgment buried deep in my consciousness until now, or is it a belief that I have formed now, after much suggestion and prodding? (Because "surely there was something in your childhood that upset you, otherwise there's no other possible way to explain your [insert illness]"). Also, consider how memory works. It's not like retrieving a file from your hard drive. When you "recall" something you are actually recalling the last instance you thought about it and reinterpreting it in light of your life's current narrative. How likely is it that what you are recalling from so long ago is actually what happened? (Hint: Not likely).

Again, for the sake of argument, I'll be charitable. Suppose the guru has managed to get me to retrieve the memory of this old appraisal about how I was treated. Ok, so far I have one half of what makes an emotion an emotion: a judgment/appraisal.

There's still a problem. If an emotion is made up of a conjunction of a belief and a feeling, I need to 'find' the feeling part. But how can you have a feeling that you don't feel? How does that make sense?

If my repressed anger is indeed an emotion then by definition it has an affective/phenomenological component. It has a 'feel'. But if it's repressed, I don't feel it...otherwise it wouldn't be a repressed emotion, it would just be an emotion that I'm experiencing.

Simply put, either you feel a repressed emotion or you don't. If you feel it then it doesn't make sense to call it repressed because you're feeling it. If you don't feel it then how is it an emotion if emotions are constituted in part by a sensation? 
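
The logical structure of this dilemma can be made vivid with a toy model. To be clear, this is my own illustrative sketch, not anything from the philosophical literature on emotion; the class and its fields are invented purely to display the conjunction argument:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CognitivistEmotion:
    """On the cognitivist view, an emotion is a conjunction of
    (a) an appraisal/judgment and (b) a felt sensation."""
    appraisal: Optional[str]  # e.g., "situation x was unfair"
    sensation: Optional[str]  # e.g., "tension, heat" -- None if unfelt

    def is_emotion(self) -> bool:
        # Both conjuncts must be present for the state to count as an emotion.
        return self.appraisal is not None and self.sensation is not None

# A 'repressed emotion': the appraisal may be buried in the subconscious,
# but the sensation is, by hypothesis, unfelt -- so the conjunction fails.
repressed = CognitivistEmotion(appraisal="situation x was unfair", sensation=None)

# An occurrent emotion: both conjuncts present, so nothing is repressed.
occurrent = CognitivistEmotion(appraisal="situation x was unfair",
                               sensation="tension, heat")

print(repressed.is_emotion())  # False: no felt component, hence no emotion
print(occurrent.is_emotion())  # True: but then it isn't repressed
```

The point of the sketch is just that, on the cognitivist definition, there is no way to populate the object so that it both counts as an emotion and counts as repressed.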

At this point, a proponent of repressed emotions could reply that when I bring the appraisal of my treatment as unjust to my conscious awareness then the feeling will follow. But this reply doesn't seem to work. It looks like the feeling of anger is being caused by the belief. In other words, I didn't have a repressed emotion, I had a repressed belief. When I dredged the belief out from my unconscious mind and onto the main stage of the theater of my mind, I reacted to it (in light of my current circumstances and coaching). The feeling and the belief weren't unified while they were in my subconscious, which is what is needed in order for them to count as an emotion (on this view).

Here, the repressed emotion theorist could give up on justifying the phenomena in terms of the cognitivist view. Maybe emotions just are feelings and I've buried my anger somewhere (?) and that's why I have [insert illness].

Non-Cognitivist View: The argument against repressed emotions under the second theory follows the same logic as above. If emotions just are feelings (and don't involve any appraisals/beliefs/conceptual content) then you have to explain how it makes sense to have a feeling that you don't feel. That's no emotion at all because there's no feeling, and the feeling is what makes the emotion what it is. Let me repeat that. If what makes an emotion an emotion is that it is a feeling, then it makes no sense to say you have an unfelt feeling. No feeling = no emotion.

On the non-cognitivist theory, the notion of repressed emotion makes even less sense. 

It looks like if the notion of repressed emotions is going to be intelligible at all, we'll need a more sophisticated account. This means leaving the realm of the self-help industrial complex, going back to Freud himself and examining what he said about the concept. 

In Part 2 I'll look at Freud's own account of the concept. 

Aside on Philosophy
Often non-philosophers will criticize (and mischaracterize) philosophy as being all about asking people "well, what do you mean by x?". In some contexts, this philosophical practice can be extremely annoying and not always useful. Allegedly, incessantly engaging in this practice is partly how Socrates got himself killed.

But, as I hope you can see from above, there are cases where asking this sort of question is useful. Someone postulates a theoretical entity (i.e., repressed emotions) and a philosopher will naturally ask, "well, what do you mean by that?" Spending some time getting clear on our concepts helps to avoid postulating unnecessary and incoherent theoretical entities. 

That said, the more sophisticated Freudian account of repressed emotions won't be so easily dismissed. I don't know yet where my analysis will take me. Anyhow, there is value in carefully analyzing the contents of a concept and placing it within the context of major theories to see how well it fares.

Another thing philosophers make much ado about is whether a concept is "doing any work." If you can give an account of a phenomenon with existing concepts and without appeal to a new concept, there has to be good reason to add the new theoretical entity. In Part 2, I'll pay more attention to the work Freud intended 'repressed emotions' to do in psychology and psychoanalysis. If there's no other way to capture the phenomena he refers to and do the theoretical work, then it might mean that the theories of emotions themselves have to change to accommodate repressed emotions.

Until then, keep repressing your emotions.  They can't hurt you cuz they don't exist. 

Saturday, October 10, 2015

Gun Control Policy in Public Spaces

Can you write something about guns in the US of A without pissing off a large portion of your audience? Not likely. But let me try to propose a policy that has elements that both sides might endorse (excluding the most extreme ends of the spectrum).

A little auto-biography
As a Canadian born to pacifist parents, the American fascination with guns has always been mysterious to me. Most of the world probably shares this sentiment regarding Americans and their gun fetish. Why would you want to own something that can--with the simple tug of your finger-- kill another human being? Clearly there is something psychologically wrong with such a person. 

Anyhow, after living in the US of A since 2007, I've come to see that it's not just crazy True Patriots that like guns, it's a lot of "normal" Americans too. In fact, many of the people I've become friends with (and with whom I share political values in every other domain) loved them some guns. This softened my "judgy-ness". Some people just like to shoot things for fun or go hunting. "Piew! Piew!" It's not for me but I no longer see much harm in it. The demographic my friends represent certainly doesn't own guns with the delusion that they'll someday get to be vigilante super-heroes.

Let's get to my policy suggestions...

Deez Nuts
The Liberal Media (controlled by Jews who, according to esteemed historian Ben Carson, didn't fight back) loves nothing more than to run stories involving True Patriots accidentally shooting themselves or their friends. While the specific numbers vary between studies, the trend is the same: for every time a gun is legitimately (and successfully) used in self-defense, proportionately more people are killed accidentally (between 100-400% more, depending on the study you look at and the inclusion criteria). If you include accidental non-fatal shootings, the numbers are much higher. 

A significant portion of the anti gun-control lobby sincerely believes that if they were in a public-shooter-type situation they would be able to save the day or at least have a positive impact. This is pure fantasy. While I don't doubt that a combat marine or special forces operative could "take down" a shooter with minimal "collateral damage", it is only because they have years of combat training.

I don't care how good a shot you are in a shooting range (or on your Xbox). There is a massive difference between trying to shoot a stationary target vs a moving (suicidal) target that is trying to kill you. Without training you have no idea how you're going to respond. You may simply freeze up. You may panic and shoot everywhere. You will likely be trembling from either fear or adrenaline undermining your capacity to aim. In a crowded area, failure to have perfect aim may result in further deaths--the exact thing you're trying to prevent.

Trained professionals (military, police, various security agencies) go through thousands of hours of training to condition themselves to deal with high stress situations. I don't care how badass you are in your fantasies: if you aren't trained for using your weapon when someone is trying to kill you, you will not perform the way you think you will. You are either going to have no effect on the situation or you will end up shooting innocent people in your panic. Merely owning a gun does not a brave or competent person make. 

The typical argument you hear from True Patriots is that if everyone is armed in a public place, suicidal shooters won't be able to do the damage they do. This is only possibly true if the people carrying weapons are trained to use them in high stakes situations. It is not true if they have never been in any situation remotely resembling a live fire fight. 

So here's the proposal: Anyone who wants to carry a gun in a school or similar public place can do so provided they have the relevant training. This would include at minimum (a) a one-time course that provides training equivalent to what a SWAT team, marine, or comparable operative receives for live fire-fight situations; and (b) a mandatory annual refresher course.

Regarding (a), the big political issue will be: who decides how much training is enough? Easy. Ask the marines or some other respected military/police institution to outline what they consider to be the minimal training required to send someone into a live fire fight. After all, these institutions know better than anyone else what's required. That is to say, we let the experts rather than politics decide. (Crazy. I know.)

Regarding (b), the answer is the same. Knowing how to react effectively in a live fire situation is a perishable skill and so there must be retraining if we want gun-carriers to be more likely to help than harm a situation. After all, that's the goal, right?

The usual retort: But it's my right. First of all, rights are not unrestricted. They are commensurate with capacities to exercise them. Second, no one is saying you can't go to shooting ranges, keep a gun in your house, or go hunting. This is about carrying a gun in a crowded public place.

Although there may exist other arguments, the primary argument given for why fire arms should be permitted (or even encouraged) in public spaces is that True Patriots with guns will be able to diminish the casualty and fatality rates. This argument rests on a false assumption: that mere possession of a gun is sufficient to, on average, accomplish this outcome. For the argument to have any chance at soundness, a premise needs to be qualified: it isn't merely armed True Patriots that can reduce casualty and mortality rates in mass shootings but True Patriots with sufficient training. Otherwise, True Patriots will, on average, probably be a net liability or have no effect at all.

Loose Ends
1. Gun control is not the same thing as the government coming to take your guns. Why do we always hear this straw man? 

2. As far back as the mid 90s, the NRA sponsored a law in Congress (which was passed and repassed) that prevents the CDC from collecting and using any data on gun violence. The law reads that "[any data collected by the CDC] may not be used to advocate or promote gun control." WTF? If unrestricted gun ownership and lax gun control laws (and enforcement) really weren't related to higher gun death rates, as the NRA claims, then why does the NRA lobby against collecting the data? What are they so afraid of? If they're right, why not use the data to show it? And more importantly, how can we have good policy without good data? To me the reasons are fairly obvious but I'll let you draw your own conclusions.

3. Gun control vs gun rights arguments usually go like this: 

    (a) War of empirical evidence: continued disagreement for a variety of reasons, having to do with biased interpretations, evidence of different quality being given equal credence, and methodological disputes.

    (b) Normative: "Fine, I don't accept your studies and you don't accept mine. It doesn't matter anyway. Gun ownership is a right."  So, here's the thing I've always wondered...and if you are an anti-gun-control proponent maybe you can let me know in the comments what you think.  Suppose the empirical data indicated fairly strongly that lax gun control laws (and enforcement) actually do lead to significantly higher rates of homicides and accidental deaths, ceteris paribus. Would you be willing to oppose gun control legislation despite the fact that more people will die and get injured? At how many preventable deaths per year might you change your mind? 100? 200? 1,000? Is there any number? Or is an unrestricted right to own a gun more important than any number of preventable deaths?

Wednesday, September 2, 2015

Doing What Feels Right and Sartre's Existentialism

Existentialism can be summarized in one phrase, "existence precedes essence." But what does this mean? Pre-existentialism, philosophical systems presumed Man had an essence. To understand what that means consider a chair. Chairs don't just randomly pop into existence!  Before they are created someone or something has to have a concept of what a chair is (i.e., something for sitting), then the chair is created in conformity with the concept. The essence of a chair is that for which it is designed--which is sitting, if you didn't know.

Notice two things: (a) having an essence, in the sense I've described, implies a creator and (b) that the creation's essence is determined before it comes into existence. And so, artifacts--things like chairs, computers, cars, iphones, etc--all have creators and have an essence/nature/purpose before they come into existence.

Let's return to "existence precedes essence." For existentialists, human beings are unlike artifacts in that we exist before we have an essence/nature. We are not designed and so there is no predetermined essence that defines who or what we are. Who/what we are comes after we exist. We are thrust into the world and create ourselves through the actions we choose. This is what "existence precedes essence" means: first we exist, then we acquire a nature (through our acts).  If I do harmful acts and act selfishly, then this is what I am. If I create and share, then this becomes my nature. There is no predetermined nature beyond what I actually choose to do. Again, contrast this with artifacts: first they have a nature/essence/purpose then they are brought into the world.

For existentialists, the human condition is (a) understanding that we are free to choose our own essence (through our actions) and (b) figuring out how we ought to create ourselves given we have no intrinsic nature. Another way of thinking about this is to say that we are responsible for who we are.

Radical Freedom and Responsibility
Although we have no common nature among us we all share the same conditions: We are "condemned" to be free thus we have to make choices about how to live and 'be'. There is nothing objective in the world to grab to guide us in our choices. Every choice is exactly that--our choice. And because every action is a consequence of our own choice, we bear absolute responsibility for it. Pretending you don't have a choice is what Sartre calls "bad faith". Bad faith is lying to yourself about the reality of your radical freedom. Even when you act in bad faith you cannot escape responsibility for your choices which define who you are. It was your choice to lie to yourself.

The existential condition is that there are no objective values to guide our decisions and no one but ourselves to decide which values we adopt. Now some might say that we can turn to religion or authorities to guide us. There are a few problems with this. First, you are shirking your responsibility as a free being for your decisions. By deferring to an outside source for your decisions you are denying your responsibility to choose for yourself.

One might say that choosing to follow this or that text or leader is a choice--and it is. But this doesn't allow you to escape the subjectivity of the human condition:  how you choose to interpret the various texts and advice will also be a matter of your own choice. There's no external source you can lean on to tell you how to interpret. You cannot escape the subjectivity of your existence and so you still must bear the responsibility of choosing one interpretation over another.

Furthermore, your essence is nothing more than the sum of the things that you choose to do. And if trying to deny your radical freedom and offload responsibility are part of your actions, then you are a coward. Anguish, for the existentialist, in part comes from knowing that he bears full responsibility for what he does and who he is and that this responsibility is inescapable.

Moral Choice
If there are no objective values in the world, it seems like anything goes. Again, just like with any other choice, the ethics that you choose define your essence. If you choose and act on a selfish ethic, then that's what you are and you are responsible for everything that comes from it.

But, in a sense, our actions aren't completely unconstrained. Here's where we exit subjectivity. Every choice that I make not only defines who I am but also defines the essence of Man as a whole because I am a part of that whole. This demands that I consider the consequences of my choices on what will be the nature of Man.

For every man, everything happens as if all mankind had its eyes fixed on him and were guiding itself by what he does. And every man ought to say to himself "Am I really the kind of man who has the right to act in such a way that humanity might guide itself by my actions?" (Existentialism)

I am responsible, through my choices, for how Man is defined for my time in history because I am a part of Man.

But this is vague. How do I decide what to do in specific cases? Unfortunately, general moral principles can't tell us how to decide particular cases. Sartre gives the following case:
[The young man's] father was on bad terms with his mother, and, moreover, was inclined to be a collaborationist; his older brother had been killed in the German offensive of 1940, and the young man, with somewhat immature but generous feelings, wanted to avenge him. His mother lived alone with him, very much upset by the half-treason of her husband and the death of her older son; the boy was her only consolation. The boy was faced with the choice of leaving for England and joining the Free French Forces--that is, leaving his mother behind or remaining with his mother and helping her to carry on.
He was fully aware that the woman lived only for him and that his going off--and perhaps his death--would plunge her into despair. He was also aware that every act that he did for his mother's sake was a sure thing, in the sense that it was helping her to carry on, whereas every effort he made toward going off and fighting was an uncertain move which might run aground and prove completely useless; for example, on his way to England he might, while passing through Spain, be detained indefinitely in a Spanish camp; he might reach England or Algiers and be stuck in an office at a desk job. As a result, he was faced with two very different kinds of action: one, concrete, immediate, but concerning only one individual; the other concerned an incomparably vaster group, a national collectivity, but for that very reason was dubious, and might be interrupted en route. And, at the same time, he was wavering between two kinds of ethics.
On the one hand, an ethics of sympathy, of personal devotion; on the other, a broader ethics, but one whose efficacy was more dubious. He had to choose between the two. Who could help him choose? Christian doctrine? No. Christian doctrine says, "Be charitable, love your neighbor, take the more rugged path, etc., etc." But which is the more rugged path? Whom should he love as a brother? The fighting man or his mother? Which does the greater good, the vague act of fighting in a group, or the concrete one of helping a particular human being to go on living? Who can decide a priori? Nobody. No book of ethics can tell him. The Kantian ethics says, "Never treat any person as a means, but as an end." Very well, if I stay with my mother, I'll treat her as an end and not as a means; but by virtue of this very fact, I'm running the risk of treating the people around me who are fighting, as means; and, conversely, if I go to join those who are fighting, I'll be treating them as an end, and, by doing that, I run the risk of treating my mother as a means. (Existentialism)
And so a general ethical principle can't help us decide specific cases. And besides, even if we do choose a general ethical principle, there is no guide to tell us which to choose as either a general guide of action or for particular cases. We also have to make that choice and we bear responsibility for it and accept that it now defines us in so far as we act on it.

Sartre and Emotions
General ethical principles can't tell me what to do in particular cases. Maybe I ought to do what feels right to me. If the young man feels that his love for his mother is great enough to sacrifice his other desires, then he should do that. But if the feeling of love for his mother isn't enough to make him give up everything else, then he ought to leave.

But how will he know that he loves his mother enough to give up everything else unless he actually does it? To know that his feeling leads him to the right choice he has to live that choice. He has to see how it plays out. He might try it and find out that the desire to avenge his brother overwhelms him and he's unhappy staying behind and so the feeling misguided him. But he can't know this before he lives it. The feeling can't tell him in advance what to do.

Consider another case that many can relate to. How do you know if marrying someone will be the right choice? Consulting your feelings before you're married can't tell you. You'll only know if it's the right choice if you actually do it. If it turns out well, it was the right choice. If it turns out poorly, it wasn't. The feeling can't tell you what's going to happen and so can't tell you in advance what the right thing to do is. Besides, if feelings were a good guide to marriage, we'd expect the divorce rate to be substantially lower.

It Isn't All Doom and Gloom
So, we are hurled into the world, condemned to be free with no fixed points to guide us in how we ought to live, yet we are somehow responsible for everything we do and are. This is the existential forlornness and anguish.  Forlorn because we cannot turn to anyone to make decisions for us and anguish because of the tremendous responsibility that comes with creating both our own essence and that of Man.

But chin up, buttercup! The good news is that existentialism is a philosophy of action. You might not have any guides but you get to create yourself--with every choice. Bonus: if you don't like where things are going, you can reinvent yourself at any moment, if you so choose.
There is no reality except in action. Man is nothing else but his plan; he exists only to the extent that he fulfills himself; he is therefore nothing else than the ensemble of his acts, nothing else than his plan. (Existentialism)
Before you go skipping off into the sunset, existentialism is a demanding philosophy if you take it seriously. In creating yourself and defining Man, with every act you are saying "this is what I think all mankind ought to be--follow me!" When taken seriously, that is a massive responsibility to bear (hence, all the angst).

It also means that there is nothing that exists that is not expressed in action. There are no great authors with unwritten great books, no charitable people with kind deeds undone, no great loves who have not loved.  "Reality alone counts and [...] dreams, expectations, and hopes warrant no more than to define a man as a disappointed dream, miscarried hopes, as vain expectations" (Existentialism).  In other words, coulda woulda shoulda is worthless. At the end of the day, what matters is what you did with your life and whether it was an example for others to follow.

Thursday, August 6, 2015

Cilantro, Moral Truth, and Justification

I've been working on this paper for longer than I care to admit but I have to turn it in at some point. I've written about 4 or 5 different versions of it all with different solutions or non-solutions to the puzzle I present. Anyhow, a few notes:

(A) For some reason the footnotes didn't post to the blog so some of my clarificatory points aren't here. Here are two of the important ones. 
     (1) The anti-realist position I'm concerned with is error theory (there are no moral facts and moral propositions have cognitive content). 
     (2) In the last section I talk a lot about "evidence". What counts as evidence in moral arguments would require its own paper so I make some assumptions about what counts: moral judgments, intuitions, principles, and emotions. I'm happy to include other candidates. 

(B) For non-philosophers all you really need to know to understand this paper is what moral anti-realism is. In the simplest terms it's the view that there are no moral facts. Everything is, like, just your opinion, maaaaaaaan!

Cilantro, Moral Truth, and Justification
Appetizers: Anti-realism About Gustatory Facts 
At dinner tables around the world, there is perhaps no issue more divisive than whether cilantro is or is not a delicious garnish. It is the gustatory domain’s own abortion debate. There’s little middle ground and each side views the other as irredeemably wrong. In more reflective moments, most would agree there are no objective facts about the deliciousness of particular foods.  Abe can claim that cilantro is objectively delicious while Bob claims that cilantro is objectively disgusting but the fact of the matter is that there is no fact of the matter! Granting this assumption, is there any way that we could make sense of the idea that either Abe or Bob’s belief is better justified than the other? 

For the moment, I’m going to assume that there isn’t. It seems as though Abe and Bob could offer justifications for why cilantro is subjectively delicious or disgusting but I doubt any of these reasons would convince a third party of cilantro’s objective gustatory properties. Abe and Bob could insist that their arguments and reasons support objective gustatory facts but we’d dismiss their claims as category mistakes—they’re confusing their personal preferences for objective facts about the world. Any argument they give for objective gustatory facts about the world is better interpreted as facts about their subjective gustatory preferences being projected onto the world.

Now consider an analogous moral case and substitute your favorite moral propositions and their opposites for the gustatory ones. For example, Abe claims that it is an objective fact that slavery is a morally good institution while Bob claims the opposite—i.e., that it is an objective fact that slavery is a morally bad institution. If, in the cilantro case, anti-realism about objective gustatory facts leads us to accept that neither competing belief is better justified than the other, then it seems that consistency requires that anti-realism about moral facts lead us to conclude that neither Abe’s nor Bob’s belief regarding slavery is more justified than the other. Just as there are no objective facts about the deliciousness of cilantro, there are no objective facts about the moral badness or goodness of slavery, and so one position cannot be more justified than the other. Any argument is merely a projection of the interlocutor’s personal preferences or is explainable by appeal to facts about their psychology.

There may be some extreme anti-realists out there who are willing to bite the bullet and concede the point. However, I’m willing to bet that many anti-realists would deny that all moral beliefs are equally well-justified even if moral beliefs can't be objectively true or false. If I'm right, then these anti-realists need an account of justification that doesn't depend on the notion of truth. Is this possible?

The framework for this paper is to examine the relationship between moral anti-realism and justification. Suppose we accept that MORAL ANTI-REALISM IS TRUE: There are no objective moral facts. On what bases can we then evaluate competing moral claims? Is justifying objective moral claims analogous to trying to justify objective gustatory claims? That is, since there really are no facts of the matter, one claim is just as well-justified (or unjustified) as the other. The puzzle for the anti-realist is to reconcile the following two assertions: MORAL ANTI-REALISM IS TRUE with NOT ALL MORAL CLAIMS ARE EQUALLY JUSTIFIED.  I will argue that, if we adopt an externalist theory of justification, a Peircian fallibilism offers a potential solution to the puzzle. Before proposing my solution, I will consider and evaluate other attempts to reconcile the two assertions.

A Quick Word on Theories of Justification
Before proceeding we’re going to need to take a brief look at theories of justification and pare down the scope of my inquiry.

Two Theories of Justification
One way to analyze the concept of justification is along internalist/externalist lines. Internalists argue that a belief is justified so long as the believer is able to provide some sort of argument or supporting evidence when challenged. Externalists argue that a belief is justified if it was generated by a reliable belief-forming process, where "reliable" means that the process generates more true than false beliefs in the long run. For example, beliefs formed by visual perception are justified on the externalist view because visual perception generates more true beliefs than false beliefs in the long run.  So, my belief that there is a computer in front of me is justified because it was formed by my seeing it—which is a reliable process. It’s more likely that I’m actually seeing a computer than I am hallucinating it. 
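To make the externalist's notion concrete, here is a toy simulation (my own illustration, not drawn from any particular reliabilist text) in which reliability is just the long-run fraction of true beliefs a process produces. The two "processes" and their success rates are made up for the sake of the example:

```python
import random

def reliability(process, trials=10_000, seed=0):
    """Estimate a belief-forming process's reliability: the fraction
    of true beliefs it produces over many trials."""
    rng = random.Random(seed)
    true_beliefs = sum(process(rng) for _ in range(trials))
    return true_beliefs / trials

# Hypothetical processes: each returns True when the belief it forms
# matches the facts, False otherwise. The rates are invented purely
# for illustration.
perception = lambda rng: rng.random() < 0.95        # mostly veridical
wishful_thinking = lambda rng: rng.random() < 0.30  # mostly not

# Perception comes out as the more reliable process, so on the
# externalist view its outputs are better justified.
assert reliability(perception) > reliability(wishful_thinking)
```

The point of the sketch is only that, for the externalist, justification is a property of the process's long-run track record, not of any argument the believer can offer.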

I wish to side-step taking a definitive position on the internalist/externalist debate and suggest that both types of justification are plausible in ethics. We think that a moral belief is justified via normative reasons (internalist) but we also think particular ways of arriving at moral beliefs confer justification. For example, we think that moral judgments arrived at through careful reasoning and/or reflection are more justified than those produced by unreflective emotional knee-jerk reactions. And so it's plausible to think the reliability of the process that produces a moral belief is at least partially relevant to the belief's (relative) justification. For the remainder of this paper, I will grant myself that assumption and constrain the scope of justification to externalist justification. A full treatment of an internalist model with regard to my inquiry requires a paper unto itself—although I suspect internalist theories may have similar problems.

Round 1: Externalist Justification Isn't Possible if There Are No Moral Facts
The simple argument against the possibility of an anti-realist account of externalist justification goes something like this. 

P1. Reliability is cashed out in terms of whether a process produces more true than false beliefs in the long run.
P2. Anti-realists deny that moral propositions can be true or false.
C1. So, there's no way to evaluate the reliability of a process when it comes to moral beliefs because the very attributes that we require to measure reliability aren't available.
C2. So, on the anti-realist model all moral beliefs are equally justified (or unjustified).

In short, anti-realists deny the very attribute (truth) required to measure reliability. If we can't know which processes are more reliable than others there is no externalist ground to say one moral belief is better justified than another.  But, again, surely some moral beliefs are better justified than others…but how? 

Consider the inference rule modus ponens. We know modus ponens to be a reliable belief-forming process from using it in other domains. It is content neutral. Its reliability credentials have been checked out, so all we need to do is import it (and similar content-neutral processes) into the domain of ethics. Anti-realists can say that moral conclusions arrived at through modus ponens (or other combinations of formal rules of inference) are more justified than those that aren’t.

Modus ponens and other valid argument structures are conditionally reliable processes. That is, if the inputs (i.e., premises) are true then so too will be the outputs. The problem is that the anti-realist has denied the possibility of true inputs in the moral case. If the inputs can be neither true nor false then the conclusions are also neither true nor false. And worse yet, the same argument structure can yield apparently contradictory outputs.

Consider the following examples:
Ethics 1
1E. If you abort a fetus it is wrong. (Neither true nor false).
2E. I aborted a fetus. (Neither true nor false).
3E. Therefore, what I did was wrong. (Neither true nor false)

Ethics 2
1E*. If you abort a fetus it isn't wrong. (Neither true nor false).
2E*. I aborted a fetus. (Neither true nor false).
3E*. What I did wasn't wrong. (Neither true nor false).

It appears as though importing valid argument structures into ethics doesn’t give a solution to the puzzle of reconciling MORAL ANTI-REALISM IS TRUE with NOT ALL MORAL CLAIMS ARE EQUALLY JUSTIFIED.
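The trouble in the examples above can be made vivid with a toy three-valued model (a Kleene-style sketch of my own, not part of any anti-realist's official machinery), where `None` stands in for "neither true nor false":

```python
def modus_ponens(conditional, antecedent):
    """Return the truth value that modus ponens confers on the
    conclusion. If either input is indeterminate (None), the rule
    can only deliver an indeterminate conclusion."""
    if conditional is None or antecedent is None:
        return None  # no true inputs, so no true output
    if conditional and antecedent:
        return True  # a valid rule preserves truth when there is truth
    return None      # the rule licenses no verdict otherwise

# Ethics 1: premises 1E and 2E are neither true nor false (None)...
wrongness_1 = modus_ponens(None, None)
# Ethics 2: ...and so are premises 1E* and 2E*.
wrongness_2 = modus_ponens(None, None)

# ...so both "contradictory" conclusions come out equally
# indeterminate: neither is better supported than the other.
assert wrongness_1 is None and wrongness_2 is None
```

Validity preserves truth only when there is truth to preserve; with indeterminate inputs, both arguments yield equally indeterminate conclusions.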

Round 2: Content-Generating Processes
Perhaps the problem here is the content neutrality of the above processes. We need processes that justify initial moral premises as well as yield conclusions. We have some familiar plausible candidate processes that might confer justification: reflective equilibrium, rational reflection, rational discourse, coherence with existing beliefs, idealizing what a purely rational agent would want, applying the principle of impartiality, universalization, to name a few.

Notice also that we think that some cognitive processes for ethical judgment don’t confer justification—i.e., are unreliable. For example, if I form a moral judgment when I'm extremely angry I might come to reject that belief once I calm down and employ one of the above methods instead. So, a belief arrived at as a consequence of a temporary acute emotional reaction is not well-justified.  

Moral psychology and social psychology are littered with experiments where either the subject or the environment is manipulated to produce judgments that the subject would reject upon learning of the manipulation. This seems to hint at an answer: beliefs formed by processes that involve manipulation of the environment and/or the subject’s mood are less reliable/less justified than those formed by processes that don't involve any obvious manipulations.

The Same Problem?
This account implies that some processes are more likely to “get it right” than others. But what is it to “get something right” if there's no target for epistemic rightness (i.e., truth)? This seems to be the same problem from Section 1 all over again. If moral beliefs can be neither true nor false, in what way can we say that one process yields more true outputs than another? Sure, we might systematically reject judgments produced by some processes in favor of others, but why prefer the outputs of one process over another if we can’t say that the judgments from one are more likely to be true than those from another? The grounds for thinking judgments from one process are more justified than those from another seem to be that the former are more likely to be true than the latter.

Round 3: Analogy with Scientific Justification and Fallibilism as a Possible Solution
Harman argues that there is an important disanalogy between explaining ethical judgments and scientific judgments. We can’t explain why a physicist believes there is a proton without also postulating that there is something “out there” in the world causing the physicist’s observation, which is in turn interpreted as a proton. A moral judgment, on the other hand, can be explained merely by appeal to a subject’s psychology. We needn’t postulate any thing or moral properties “out there” that cause a subject to have a moral belief that x.

Let’s accept Harman’s argument. Despite the fact that the causes of scientific and moral judgments might differ, there may be ways in which justification functions similarly in both domains.
Scientists habitually couch their arguments and conclusions in fallibilist language. Claims and conclusions are presented as provisional, based on the currently available evidence and methods. The history of scientific discovery is one of revised and overturned conclusions in light of new evidence and of recursive, self-correcting improvements in the scientific method itself.

From the point of view of internalism vs externalism about justification, we might consider new data as a kind of internalist justification for a claim because they are reasons to believe one thing rather than another. Research methods, on the other hand, can be viewed as instances of the externalist’s processes that confer justification. The idea that some processes are more reliable than others is a familiar idea in scientific research. Claims that derive from methods (i.e., processes) that avoid possible known biases are more reliable and hence more justified than methods that don’t.

For example, the placebo effect is a well-known occurrence in medical science. If patients think they are receiving treatment—even if they aren’t—patients report subjective measures (e.g., reduced pain/discomfort) significantly more positively than non-treatment (i.e., control) groups. We also know that if the researcher knows which patients are in the treatment group and which aren’t, this can influence both the way the researcher asks the patient questions and how they interpret data (they’ll bias toward a positive interpretation). For these reasons we think that the results from medical research that are placebo-controlled and double-blinded are more reliable than those that aren’t. 

In short, data from a study that employs a more reliable process (e.g., double-blind, placebo-controlled) are more justified than data from a study that didn’t do either of these things. The more a process avoids known errors, the more justified its conclusions—despite the fact that the blinded study’s conclusions might also eventually be overturned. There is always the background understanding that new and better methods might come about and generate incommensurate data and/or conclusions, but this doesn’t undermine the relative justification that the current methods confer on the output beliefs.

Analogously, moral and social psychology have produced a vast literature showing the ways our moral thinking can go awry. We know that a cluttered desk can cause us to pass harsher penalties than we otherwise would, that a temporarily heightened emotional state greatly influences our judgments, that the feeling of empathy can lead us astray, and that implicit biases can play important roles in our judgments—to name a few. In short, there are many ways in which both our basic hard-wiring and various forms of personal and environmental manipulation can cause us to make judgments that we, upon learning of the manipulation or the bias, would likely reject in favor of a judgment arising from unmanipulated deliberation or one of the familiar gold-standard methods of moral reasoning.

Perhaps an anti-realist can think of the activity of moral thinking not as one that aims at discovering some objective truth, but rather as one that seeks to avoid known cognitive errors and insulate against manipulation. In so far as our judgments derive from methods that avoid (known) cognitive errors and biases, our moral claims are better justified. This, however, doesn’t entirely solve the puzzle. We still need to answer why we should choose the output of one type of process over another. It’s easy to say that the manipulated judgment is “wrong” or “mistaken,” but how do we say this without appeal to truth? “Error” implies a “right” answer. One might just as easily say that the correct judgment is the manipulated or biased one and that we are systematically mistaken to adopt the reflective judgment made in a cool dark room.

The Main Challenge to the Anti-Realist: The moral anti-realist needs some general criterion to explain why we (ought to) systematically endorse judgments from one process rather than another. That we do isn’t enough. We must give an account of why one confers more justification than another.

Peirce, Methods of Inquiry, and The “Fixing of Opinion”
Before suggesting a possible criterion to answer the challenge, I want to sketch out a Peircian analysis of methods of inquiry, since it inspired my suggestion. I’ll fill in other details later as they become relevant. Peirce argues that the “fixing of opinion” or “the settlement of opinion” is the sole object of inquiry (Peirce, pp. 2-3). While we needn’t commit to Peirce’s exclusivity claim regarding the purpose of inquiry, Peirce provides useful insight into why we might think some processes confer greater justification than others. I take him to be proposing two related desiderata for our methods of inquiry: that they produce beliefs that are (a) relatively stable over time and (b) relatively impervious to challenge. The anti-realist can distinguish between processes’ justificatory status according to the degree to which they achieve (a) and (b).

This possible anti-realist response to the challenge of justification is not without difficulties. I will explore two related ones. First, it isn’t clear that stability of beliefs is a condition for justification. As standard examples of recalcitrant and dogmatic Nazis or racists show, in fact, we have reason to believe stability might have little to do with justification. Second, I will need to show that stability marks something we think matters to justification: namely, the absence of cognitive errors and inclusion of all relevant evidence. This is the fallibilist aspect of the proposal: Stability needn’t be a proxy for truth tracking, however, it is a reasonable proxy for believing that we are avoiding errors. This is part of the answer but not all of it. 

As Peirce notes, stability can be achieved in various ways—not all of which we’d think confer justification. The second part of this fallibilist model of justification has to do with the degree to which a process excludes relevant evidence, thereby making it susceptible to challenge. Thus, for Peirce, long-run stability requires that the method of inquiry take into account all relevant sources of evidence. A method that excludes forms of evidence will produce beliefs that are more likely to eventually be overturned, or will require that people be unable to make inferences from one case to another or from one domain to another. Peirce compares four methods of inquiry, each of which aims to produce stable beliefs. In so doing he illustrates how this second criterion (i.e., imperviousness to challenge) works with the first (i.e., stability) to produce a theory of justification.

To achieve stability of belief in the method of tenacity, one dogmatically adheres to one's chosen doctrines by rejecting all contrary evidence and actively avoiding contradictory opinions. Coming into contact with others, however, necessarily exposes one to different and conflicting views. This method can’t accommodate ever-present new data without giving up on rationality and the ability to make inferences. Unless we live like hermits, the “social impulse” against this practice makes it unlikely to succeed. The method of authority is the method of tenacity practiced at the level of the state, with the addition of social control. This method fails to achieve long-term stability because it cannot control everyone's thoughts on every topic. This isn't a problem if people are unable to make inferences from one domain of inquiry into another, but to the extent that they are able, stability will be undone and doubt will emerge.

The above two methods have an important common feature in regards to how they achieve stability. In both cases, when I need to decide between two competing beliefs, the method tells me I ought to pick the one that coheres with what I already believe. In other words, there will be cases where I will have to “quit reason” by rejecting contrary evidence and inferences. The above methods for arriving at beliefs systematically exclude relevant evidence in generating beliefs. 

Now, compare these methods with something like wide reflective equilibrium or rational discourse. With these methods, how do we determine what to believe? Rather than exclusively referring back to what we or the state/religion already endorse, when these methods confront new evidence they adjust their output beliefs accordingly. In the long run, the supposition is that (for example) reflective equilibrium and rational discourse will lead to more stable beliefs than the above two methods because their outputs incorporate the best available total evidence rather than reject it.

Reply to the First Challenge
Let’s restate the first challenge: stability on its own doesn’t seem to confer justification. There are many methods of inquiry by which we might achieve stable beliefs, not all of which confer justification. When we defend stability as a justificatory property, the concern is that we’re begging the question: when a process generates stable beliefs that we approve of, we think stability is good; when it generates beliefs we disagree with, we think stability is bad.

To reply to the challenge, let’s consider, for example, reflective equilibrium. With (wide) reflective equilibrium we suppose that by finding an equilibrium between everyone’s principles and considered judgments, in the long run we arrive at a view that no one could reasonably reject (because it also takes their views into account). Stability, on this method, arises as a consequence of taking everyone’s principles and considered judgments into account; i.e., it doesn’t obviously exclude any evidence that might, in the long run, diminish the stability of the beliefs the method produces. And so the critic of stability has a point that stability on its own might not confer justification. What matters is why the view is stable. It’s assumed that a moral view derived from processes that take into account competing viewpoints is stable for the right reasons: it is less susceptible to challenge. And it is less susceptible to challenge because those processes don’t exclude evidence (widely construed).

Reply to the Second Challenge
The second challenge is to give positive reasons for thinking long-term stability confers justification. When we make knee-jerk or manipulated judgments we end up with beliefs inconsistent with our other beliefs, and so we have reason to conclude we’ve made an error somewhere—either in endorsing an inference, a judgment, or a general principle. Conversely, with a process like reflective equilibrium or rational discourse we eliminate inconsistencies in the long run. By proxy, we’re also eliminating errors in our inferences, judgments, or general principles; i.e., the things that undermine justification. In the long run, the supposition is that some methods of inquiry (e.g., reflective equilibrium) will lead to fewer and fewer errors, in turn contributing to more and more stable beliefs.

This also helps answer a problem I raised in the first section of the paper. A reliabilist account of justification has truth built in and so doesn’t help an anti-realist explain how following a valid argument structure can confer justification. If the inputs don’t have a truth value, then neither will the outputs, and so one output is just as justified as another. A fallibilist approach provides a solution. Failure to follow a valid argument scheme undermines justification because it indicates an error in reasoning. And so deliberative belief-forming processes that don’t follow valid schemes are making errors, and in that respect their outcomes are less justified than those of processes that do.

Elimination of errors is part of the answer to why we think beliefs derived from one process are better justified than beliefs derived from another. Long-term stability of output beliefs is partly a proxy for the absence of errors, and so, to the degree that a process generates stable beliefs, we have reason to think those beliefs are less likely to contain or be the product of errors.

The second part of the answer is similar to the reply to the first challenge: processes that attempt to exclude classes of evidence or deny certain inferences aren’t going to be stable. There’s an analogy with science: a research method that regularly has its conclusions overturned because it fails to take into account certain sources of evidence is a process that generates beliefs that are less stable in the long run. The beliefs are less stable because important classes of evidence aren’t taken into account or controlled for (for example, various cognitive biases). Similarly, a moral reasoning process that fails to take into account certain sources of evidence (e.g., competing arguments, the fact that we are prone to certain cognitive errors) is also going to generate beliefs that are less stable, and by extension, less justified.

The puzzle for the anti-realist is to reconcile commitment to no moral facts with the view that some moral beliefs are more justified than others. If we take an externalist account of justification, a Peircian fallibilism offers a possible solution to the puzzle. Why should we think the outputs of one process are more justified than another's if their outputs can’t be true? Some processes generate beliefs that are relatively more stable and impervious to challenge in the long run than other processes do. Stability, on this model, occurs as a consequence of taking into account all the relevant data and avoiding cognitive errors in generating output beliefs. By doing so, outputs are less likely to be overturned by excluded evidence. Stability is also a proxy for the absence of error. If a process produces beliefs that are systematically overridden, it must be because the outputs are inconsistent with other judgments, beliefs, or inferences. Processes that systematically generate inconsistencies indicate errors, which in turn also undermine stability.

I’d like to close with the following thought experiment. Suppose both realists and anti-realist agree on which processes confer greater relative justification than others. Would the realism/anti-realism debate matter much? Aren’t comparatively well-justified beliefs (and actions) what we’re really after in ethics?

Monday, July 27, 2015

Abortion, Animal Rights, and Moral Consistency

Ok, I can't take it any more. I'm supposed to be working on a paper but the recent flood of articles regarding Planned Parenthood and abortions is making me loco--but not for the reasons you might think. I'm not going to argue for a position on abortion, but I just want to point a few things out regarding how a position on the abortion issue "bears" on animal rights if there is going to be a modicum of concern given to moral consistency.

Before continuing I just want to emphasize that this article isn't intended to engage with all of the philosophical literature on abortion--that would require a book or more. The intent is to look at some of the most common reasons given for opposing abortion and how they relate to animal rights if moral consistency has any value. At the end I'll briefly "flip it and reverse it" and suggest how a position on animal rights bears on abortion.

The Basic Argument
One of the most common arguments against abortion is that "it is murder." The argument goes something like this:

P1. Abortion is killing an innocent person.
P2. Killing an innocent person is murder.
P3. Murder is wrong.
C.  Therefore abortion is wrong (via transitivity).

Of course the argument only works if you accept P1 which is in fact where the real debate is. Is a fetus a person? And if so, what attributes confer personhood?

Often what you'll hear is that the fetus is a person because they are human and since everyone agrees it's wrong to kill innocent humans, abortion is also wrong. But being human is simply a biological category. What we want to know is what attributes the fetus has that makes its interests worthy of moral consideration. Saying "because it's human" only redescribes what we already know. Nobody is doubting that the fetus is trivially biologically human. We still need an answer to the question "what morally relevant attribute do human fetuses have that makes it so it's wrong to kill them?"

Taking a step back, this is one of my favorite things about philosophy. We ask questions for which the answer seems so obvious that no reasonable person would even think to ask the question in the first place, yet once we ask the question the answer doesn't seem so obvious after all. In this case, the general question is "why is it wrong to kill humans?". As should be clear now, answering "because they are human" is not very satisfying.  Duh! We know that! Surely, there must be something about humans that makes it so it's wrong to kill them. What is it?

One popular answer is "rationality". Ok, suppose we accept that. Is a fetus rational? Nope. Are some adult mammals rational? Yes (maybe they can't do upper division math but they have a minimal rationality that we recognize in human children). So, if rationality is truly the standard for moral consideration of interests, it seems like we should have less of a problem with abortion than we do with killing pigs and experimenting on primates. Pigs and other adult mammals are orders of magnitude more rational than a fetus--which isn't even rational--and at least as rational as young children.

The obvious reply is that a fetus is potentially rational. It will one day be rational and since rationality is what makes it so we shouldn't kill humans, we shouldn't kill the fetus. One problem with this reply is that it's not clear how potential properties confer current rights. If I will potentially be a landlord does that mean I should get all the rights a landlord has now? Does the fetus that will one day be a university student get all the rights of a university student now? We typically don't give children full rights of adulthood until they have the capacities to exercise those rights. How do you get rights for capacities you don't currently have? That seems a bit odd.

But let's grant that you can somehow get rights based on your potential attributes, in this case rationality. Is rationality really the measure of moral consideration? Consider: a child poet and an adult logician are both about to die and you can only save one. Is it so clear that you should save the logician? Although rationality seems to play some role in whether we confer moral consideration, it doesn't seem to be the most important consideration. If it were, we should give more moral consideration to adult mammals than to fetuses since adult mammals are more rational.

And, even if we grant that a fetus can have rights in virtue of a potential attribute, surely we should also take into account rights that derive from that same actual attribute.  In other words, if we want to say that a fetus has certain moral rights in virtue of its potential rationality, consistency demands that we also say that, in so far as living animals are actually rational, they have rights commensurate with their actual rationality.  It would be a strange moral theory that confers greater moral status commensurate with potential attributes than actual attributes.

You can run this same argument for potential and actual desires (to live). Although an animal might not be able to express it verbally, it's reasonable to infer from its behavior that it would rather live than die. Does a fetus have desires? Nope. Ok, so we can go the potential or future desires route but accepting this would seem to require us to also accept the actual desires of animals not to be killed.

OK, so a fetus isn't rational and maybe rationality isn't all there is to having moral status. Maybe the capacity to feel pain is what confers moral consideration? At least in the early stages of development, a fetus is incapable of feeling pain since it has no central nervous system. Animals, on the other hand, do feel pain, so, if pain is the marker of moral consideration, we should give moral consideration to living animals rather than to fetuses. Again, we can appeal to potential pain, if that even makes sense. Even if we allow it, it seems as though the actual pain of animals should be weighed at least as heavily as the potential pain of a fetus that never lives to feel it.

"But a fetus has a beating heart!!!"  Perhaps after 6 weeks this is true. But again, suppose we accept that having a beating heart is what confers moral status and makes termination impermissible. Animals have beating hearts too, and so they too must have moral status, and termination of their lives is also impermissible.


Another possible answer is that it's wrong to kill a fetus because it has human DNA. First of all, this criterion is question-begging. We already know that the human fetus has human DNA. What we want to know is why merely having human DNA confers moral status. My fingernail clippings have human DNA. Do they have moral status? Maybe it's replicating human DNA that has moral status. But why? The various organs in my body all have replicating DNA; do those cells have moral status? That seems weird.

Life Begins at Conception
If we charitably employ the term "life" this is trivially true in a biological sense but of course mere descriptive biological facts don't necessarily imply moral conclusions. Typically, for something to be considered alive in a full sense we'd think some degree of self-sufficiency would come into play. Anyhow, is "being alive" all that's required for moral consideration of interests? If that's the case, all animals should also have their relevant interests considered in proportion to how alive they are.

"No! No! It's different because it's human life." Ok, fine. Tell me again what morally relevant attribute a human fetus has that other creatures don't have. And saying "because it's human" again and again doesn't answer the question. It merely redescribes the biological facts but says nothing of the moral facts. We need an answer to the question, "what morally relevant attribute do human fetuses have that living adult animals don't have?"

Life Begins at Conception and In Vitro Fertilization
Although not directly linked to the issue of animal rights, one of the most glaring inconsistencies in the anti-abortion movement is its silence on in vitro fertilization.  In a typical round of in vitro fertilization, about 8 eggs are fertilized. Do you think that every couple that undergoes in vitro fertilization uses all 8? Nope. Maybe they'll use two...(unless you're Octo-mom).

Now, if those who argue that moral life begins at conception took their position seriously, they'd be protesting in vitro clinics rather than abortion clinics. For each person they prevented from going through with the fertilization procedure, they'd save 6 or 7 human "lives" rather than a measly single life at an abortion clinic.

The reason they would never do this is that their cause would fail politically, and it wouldn't surprise me if at least some people who oppose abortion have used in vitro themselves. Nothing like your own needs and desires to motivate a special-pleading argument or initiate motivated reasoning.

To be fair, some in vitro clinics have gotten around this "inconvenient truth" by freezing whatever embryos aren't used. "Hey, we never terminated them, we just froze them forever"--or (more realistically) at least until we forget about them. Anyone with any intellectual honesty should see what a cop-out this is.

The Bottom Line
When we talk about rights we usually think of rights in terms of particular capacities. We don't give children the right to vote or to drive because we don't think they have the relevant capacities. When they develop those capacities they gain the relevant rights. Similarly, we don't give men the same reproductive rights as women because biologically they couldn't exercise these rights. If this is our model of rights (i.e., capacities) then it seems odd to confer rights on something without any relevant capacities. We can of course say that it has the capacity to live, but this doesn't distinguish it from any other living thing, and so consistency requires that we either reject the argument or confer those same rights on those other living things.

And so, if anti-abortionists are sincere in their arguments, consistency demands that they be just as sincere in their advocacy and protection of animal rights. In short, there should be no meat-eating anti-abortionists.

[Philosophers note: the capacities theory of rights isn't the only theory of rights.]

Flip it and Reverse It
Notice that the consistency requirement works the other way too. If you have strong views against killing animals yet are pro-choice, your views may be inconsistent, depending on how you defend your position on animal rights and the stage of fetal development up to which you think abortion is permissible.  If you think abortion is permissible at a stage of development where the fetus has some of the attributes shared by living animals, then your position is likely to be inconsistent. Or if you think it's just wrong to terminate an animal's life prematurely for any reason, then your case for the permissibility of abortion is paper-thin if moral consistency matters--especially if you are OK with late-term abortions.

One other puzzle pro-choice advocates have to deal with is coming up with a morally relevant criterion that distinguishes a late-stage fetus in the womb from a newborn infant. Well, let me qualify that. The distinction has to be made so long as the pro-choice advocate thinks it's wrong to terminate a healthy newborn but permissible to terminate a 3rd-trimester fetus. What is the morally relevant attribute that the newborn has that the fetus doesn't have?

And in the interest of fairness, animal rights proponents often point to the gruesomeness of killing animals at the factory level. If gruesomeness is a morally relevant property (which, in my view, is very plausible), late-term abortions are also gruesome, and so this gruesomeness should inform our position. To get an idea of what late-term abortions are like, I suggest watching the documentary Lake of Fire, which is probably one of the best documentaries on the abortion debate.

Wednesday, May 20, 2015

Solving US Health Care Cost Problems: Free Market vs Government Policy. Part 1

I apologize in advance for the somewhat scattered nature of this post. I'm trying to work through some ideas. 

What I'm trying to figure out is whether certain cost problems in the US health care system can be solved using free market approaches or whether solutions require government intervention. The nature of the problems I'm looking at arise as a consequence of conflicting interests between insurance companies, hospitals, doctors, and patients. Any solution has to find a way to harmonize the respective interests of each. Recent evidence suggests government action works, although the case is far from certain. Also, supposing there are equally effective free-market solutions, we still must ask why we'd choose one type of solution over the other. Let's get some statistics on the table first to get a general overview of the US health care situation.

Give or take a few, total US expenditure on health care in 2013 was $2.9 trillion. Averaged over the population, that's $9,225 per person (2013). To give those figures some context, the OECD average per-person health care spending is $3,448. The next-highest spender is Switzerland at $6,080 per person. Despite the familiar mantra of "but we have the best health care in the world", the US performs comparatively poorly in terms of many health outcomes. (To be fair, there are also a few areas where it doesn't, such as wait times for specialists and surgeries, and cancer treatment outcomes.) Currently, health care costs represent about 17% of US GDP and are expected to rise to 22%, whereas the OECD average is 9.5% of GDP, the next highest being the Netherlands at 12%.
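As a rough sanity check on those two headline figures (a sketch using only the numbers quoted above), dividing total spending by the per-person average should recover something close to the 2013 US population:

```python
# Sanity check: total spending divided by the per-person average
# should roughly recover the 2013 US population.
total_spending = 2.9e12   # total US health care spending, 2013 (dollars)
per_capita = 9225         # average spending per person, 2013 (dollars)

implied_population = total_spending / per_capita
print(f"Implied population: {implied_population / 1e6:.0f} million")
```

This lands at about 314 million, close to the actual 2013 US population, so the two figures are at least mutually consistent.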

If you're like me, you're thinking, "Wait a minute, I didn't consume even close to $9,000 in health care this year. Where is this number coming from?" In other words, we can't just look at averages; we need to know how those costs are distributed across the population. That information will allow us to target cost-saving policy at the costliest populations and/or health issues.  Are you ready for your head to 'asplode'?

In a single year, what percentage of total health care dollars spent do you think went to the top 5% of health care users? (I.e., the sickest people). Ready? About 49%. That's right. Just 5% of the population consumed almost half of all health care dollars spent in a year.  Now, what percent of the total health care dollars spent do you think went to the top 1% of health care consumers? Ready? The top 1% of health care consumers consumed about 30% of the total health care dollars spent in a single year.

Ok, let's look at the other end of the spectrum. What percentage of total health care spending did the bottom 50% consume? (I.e., the healthiest people or those with the cheapest conditions to treat). Ready? It's 3%. Yup, 50% of the population only consumes about 3% of the total health care dollars spent in a year.
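Putting the three cited shares side by side (nothing new here, just the arithmetic) makes the lopsidedness vivid, and also pins down what the remaining middle 45% of users must account for:

```python
# Shares of total annual health care spending, as cited above.
top_1_share = 0.30       # top 1% of users
top_5_share = 0.49       # top 5% of users (includes the top 1%)
bottom_50_share = 0.03   # healthiest half of the population

# Whatever is left over must belong to the middle 45% of users.
middle_45_share = 1.0 - top_5_share - bottom_50_share
print(f"Middle 45% of users: {middle_45_share:.0%} of spending")  # 48%
```

So the middle of the distribution accounts for roughly the other half, spread across nine times as many people as the top 5%.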

(For more fun facts about the distribution of health care spending, here's a--wicka wicka--breakdown)

So, why--beyond shock value--should we care about these statistics? Because if we're going to set policy to decrease health care costs, we're going to get way more bang for our buck if policy is directed at the top 5% of users rather than at everyone all at once. There's very little to be gained by reducing the health care costs of the healthiest 50%, whereas there are very likely cost savings available from the top 5%.

We might ask why treating this population is so expensive. To figure this out, we need to know whether their treatment is expensive because of the nature of the conditions for which they need treatment, the way those conditions are treated/billed/managed, the fact that these patients are constantly in and out of hospitals, or some combination of all of the above.

It turns out that the 5 most expensive conditions to treat are heart disease, cancer, trauma, mental disorders, and pulmonary conditions. But if only a small percentage of the population has these conditions, they won't account for the high costs. What we need to know is both which conditions are the most prevalent in the population and, of those, which are the most expensive to treat.

A quarter of the population has at least one of these five chronic conditions: diabetes, heart disease, asthma, mood disorders, or hypertension.  Unfortunately, each of these conditions is associated with other conditions and illnesses. Treating the primary conditions in conjunction with the associated illnesses accounts for 50% of all health care spending.

How do we put all this information together? If we want to figure out a way to reduce costs, clearly we want to go after the most costly people and conditions to treat. And if those two variables overlap, that's probably a good target. So, how should we do it? Does government need to implement some sort of policy, or are there free-market solutions? To answer this question I want to use one hospital's method of reducing treatment costs as a case study. There are other successful models for cost reduction as well, which I'll also look at briefly. The point I want to establish is that it is possible. Not only is it possible, but these successful models reduced costs while increasing quality of care and health outcomes.

What I really want to know is whether these models were a consequence of government policy (i.e., the ACA) or whether they could have come about without a government mandate. If the answer is the latter, we must ask the obvious question: then why didn't it happen pre-ACA? The pro-market person can correctly point out that it did, in a very small handful of cases. Just look at the Mayo Clinic and an HMO in Colorado. But there's a further question lurking. If these models were so successful, why weren't they copied? Presumably, when someone finds a superior business model, other businesses must copy it or lose out.

The pro-market person might reply that the regulatory environment pre-ACA interfered with market forces such that efficiencies weren't realizable. But their own example--the Mayo Clinic--seems to undermine this argument. I'll have to investigate this claim. On the other hand, the ACA (government action) brought in (Medicare) Accountable Care Organizations (ACOs). ACOs are voluntary programs that reward Medicare and Medicaid providers (e.g., hospitals, clinics, doctors, etc...) for cost savings through innovation.  Health care economists and pretty much anyone else who works in health care policy have known for decades that preventative medicine and coordinated/managed care (where groups of specialists are paid as a team to manage patient outcomes) are the best ways to bring costs down and improve patient outcomes.  So, why wasn't anyone doing these things pre-ACA? Why didn't market forces converge on this more efficient model?

My answer is that there were two prisoner's-dilemma-like situations: the first between health insurance companies, the second between doctors/hospitals and insurers. Let's take a look at the first. The health insurance business is an odd one.  You're trying to sell a product that you hope your customer will never use.  And so, the best customers from the insurance companies' point of view are the healthiest ones. Everyone wants the healthy customers. No one wants the sick ones. Here's the deal with preventative care and managed care programs: setting up these programs requires up-front investment that won't see returns for up to 5 years.

Here's the problem. If you're the only insurance company that invests in a preventative and managed health system, you're going to have lots of healthy customers. This might seem good until you realize that you're the only one that invested in the program. What are the other companies going to do? They're going to try to poach your customers! You invested all that money and created healthy customers, and now all the other insurance companies are going to swoop in and steal the healthy customers you created. Seeing how this might happen, no insurance company wants to be the sucker--even though they all want healthy customers! And so no insurance company makes the up-front investment, and we end up with no cost-saving measures.

It looks like the only way you can get the health insurance companies to make the up-front investment in these cost-saving measures is if "someone" somehow gives each company assurances that all the others will do the same. That way, no one ends up a sucker by being the only one to invest up front only to have its new healthy customers poached. The government looks like the "someone" who can offer that assurance, by mandating that preventative care be part of health insurance policies. Doing so allows the insurance companies to exit the prisoner's dilemma.
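The incentive structure just described can be sketched as a toy payoff matrix (the numbers below are illustrative assumptions, not data from anywhere): whatever the other insurer does, each insurer does better by holding back, even though mutual investment beats mutual inaction.

```python
# Toy payoff matrix for two insurers deciding whether to invest in
# preventative/managed care. Entries are (payoff to A, payoff to B);
# the numbers are illustrative only.
payoffs = {
    ("invest", "invest"): (3, 3),   # both invest: costs fall, healthy pool shared
    ("invest", "hold"):   (-2, 4),  # A pays up front, B poaches A's healthy customers
    ("hold",   "invest"): (4, -2),  # mirror image
    ("hold",   "hold"):   (1, 1),   # status quo: no one invests
}

def best_reply(b_move):
    """Insurer A's best response to B's move."""
    return max(("invest", "hold"), key=lambda a_move: payoffs[(a_move, b_move)][0])

# "hold" is the best reply to either move, so neither insurer invests --
# even though (invest, invest) is better for both than (hold, hold).
print(best_reply("invest"), best_reply("hold"))  # hold hold
```

A mandate that everyone invest effectively removes the (invest, hold) cells from the table, which is exactly the assurance role assigned to government here.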

The second prisoner's dilemma occurs between doctors/care providers and insurance companies (I'm less sure this is technically a prisoner's dilemma; it might just be a more general case of conflicting interests). Doctors and care providers (e.g., hospitals) want to get paid more money rather than less. Insurance companies want to pay less rather than more. This leads to high costs. For example, if a doctor is charging for every test he orders and every minor consultation, he's likely to order more tests than necessary. In fact, there's good evidence that this happens. Consider this data from 2012: for some types of tests, US doctors order significantly more than their OECD counterparts.  So, how do we get doctors and hospitals to move to a managed care model? I.e., one where they aren't necessarily under a fee-for-service model, or at least one that doesn't incentivize ordering unnecessary tests and procedures?

Again, it looks like at least one answer is through government mandate. ACO programs under the ACA are voluntary for hospitals to enter.  Hospitals get to share whatever cost savings they generate through managed care initiatives. Interestingly, the hospitals are free to experiment with whatever managed care models they want. So, if a hospital saves Medicare and Medicaid 2 million dollars compared to the previous year, the hospital gets to keep just over half of the savings. The program has been a huge success.  Here are a couple of highlights:

In the first year of the program 58 Shared Savings Program ACOs held spending $705 million below their targets and earned performance payments of more than $315 million as their share of program savings. Total net savings to Medicare is about $383 million in shared savings, including repayment of losses for one Track 2 ACO.

In the second year Pioneer ACOs generated estimated total model savings of over $96 million and at the same time qualified for shared savings payments of $68 million. They saved the Medicare Trust Funds approximately $41 million. 
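The shared-savings arithmetic described above (a hospital keeping "just over half" of what it saves Medicare and Medicaid) is simple to sketch. The 55% hospital share below is an illustrative assumption, since the exact percentage isn't given here:

```python
# Sketch of an ACO shared-savings split. The 55% hospital share is an
# illustrative assumption standing in for "just over half".
def split_savings(savings, hospital_share=0.55):
    """Return (hospital's cut, amount retained by Medicare/Medicaid)."""
    hospital_cut = round(savings * hospital_share, 2)  # dollars and cents
    return hospital_cut, round(savings - hospital_cut, 2)

# The 2-million-dollar example from above:
hospital_cut, retained = split_savings(2_000_000)
print(hospital_cut, retained)  # 1100000.0 900000.0
```

The hospital's incentive is to find savings however it can, while Medicare still comes out ahead on every dollar saved.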

Where We At?
So far it looks like there's a strong case to be made that government action can solve many of the cost problems in the US health care system.  Of course, just because the government can solve these problems, it doesn't necessarily follow that a market-based solution couldn't. Some might even argue that the problems arose in the first place as a consequence of government intervention in the market. I'll look at these arguments in the next post. For now, amaze your friends with your new-found knowledge of health care statistics.