Wednesday, September 2, 2015

Doing What Feels Right and Sartre's Existentialism

Introduction
Existentialism can be summarized in one phrase: "existence precedes essence." But what does this mean? Before existentialism, philosophical systems presumed that Man had an essence. To understand what that means, consider a chair. Chairs don't just randomly pop into existence. Before a chair is created, someone or something has to have a concept of what a chair is (i.e., something for sitting); then the chair is created in conformity with that concept. The essence of a chair is that for which it is designed.

Notice two things: (a) having an essence, in the sense I've described, implies a creator, and (b) the creation's essence is determined before it comes into existence. And so artifacts--things like chairs, computers, cars, iPhones, etc.--all have creators and have an essence/nature/purpose before they come into existence.

Let's return to "existence precedes essence." For existentialists, human beings are unlike artifacts in that we exist before we have an essence/nature. We are not designed, and so there is no predetermined essence that defines who or what we are. Who we are comes after we exist. We are thrust into the world and create ourselves through the actions we choose. This is what "existence precedes essence" means: first we exist, then we acquire a nature (through our acts). If I do harmful acts and act selfishly, then this is what I am. If I create and share, then this is my nature. There is no nature beyond what I actually do. Again, contrast this with artifacts: first they have a nature/essence/purpose, then they are brought into the world.

For existentialists, the human condition is understanding that we are free to choose our own essence and figuring out how we ought to create ourselves given that we have no intrinsic nature.

Radical Freedom and Responsibility
Although we have no common nature, we all share the same condition: we are "condemned" to be free, yet we have to make choices about how to live and 'be'. There is nothing objective in the world to cling to for guidance in our choices. Every choice is exactly that--our choice. And because every action is a consequence of our own choice, we bear absolute responsibility for it. Pretending you don't have a choice is what Sartre calls "bad faith": you are denying the reality of your radical freedom, but you cannot escape responsibility for your choices, which define who you are.

Now some might say that we can turn to religion or authorities to guide us. There are a few problems with this. First, you are shirking your responsibility, as a free being, for your decisions. By deferring to an outside source for your decisions you are denying your responsibility to choose for yourself. One might say that choosing to follow this or that text or leader is itself a choice--and it is. But how you choose to interpret the various texts and advice will also be a matter of your own choice. You cannot escape the subjectivity of your existence, and so you still must bear the responsibility of choosing one interpretation over another.

Furthermore, your essence is nothing more than the sum of the things that you do. And if trying to deny your radical freedom and offload responsibility are part of your actions, then you are a coward. Anguish, for the existentialist, comes in part from knowing that he bears full responsibility for what he does and who he is.

Moral Choice
If there are no objective values in the world, it seems like anything goes. Again, just like with any other choice, the ethics that you choose define your essence. If you choose and act on a selfish ethic, then that's what you are and you are responsible for everything that comes from it.

But our actions aren't completely unconstrained. Here's where we exit subjectivity. Every choice that I make not only defines who I am but also defines the essence of Man as a whole because I am a part of that whole. This demands that I consider the consequences of my choices on what will be the nature of Man.

For every man, everything happens as if all mankind had its eyes fixed on him and were guiding itself by what he does. And every man ought to say to himself "Am I really the kind of man who has the right to act in such a way that humanity might guide itself by my actions?"

I am responsible, through my choices, for how Man is defined for my time in history because I am a part of Man.

But isn't this vague? How do I decide what to do in specific cases? Unfortunately, general moral principles can't tell us how to decide particular cases. Sartre gives the following case:
[The young man's] father was on bad terms with his mother, and, moreover, was inclined to be a collaborationist; his older brother had been killed in the German offensive of 1940, and the young man, with somewhat immature but generous feelings, wanted to avenge him. His mother lived alone with him, very much upset by the half-treason of her husband and the death of her older son; the boy was her only consolation. The boy was faced with the choice of leaving for England and joining the Free French Forces--that is, leaving his mother behind or remaining with his mother and helping her to carry on.
He was fully aware that the woman lived only for him and that his going off--and perhaps his death--would plunge her into despair. He was also aware that every act that he did for his mother's sake was a sure thing, in the sense that it was helping her to carry on, whereas every effort he made toward going off and fighting was an uncertain move which might run aground and prove completely useless; for example, on his way to England he might, while passing through Spain, be detained indefinitely in a Spanish camp; he might reach England or Algiers and be stuck in an office at a desk job. As a result, he was faced with two very different kinds of action: one, concrete, immediate, but concerning only one individual; the other concerned an incomparably vaster group, a national collectivity, but for that very reason was dubious, and might be interrupted en route. And, at the same time, he was wavering between two kinds of ethics.
On the one hand, an ethics of sympathy, of personal devotion; on the other, a broader ethics, but one whose efficacy was more dubious. He had to choose between the two. Who could help him choose? Christian doctrine? No. Christian doctrine says, "Be charitable, love your neighbor, take the more rugged path, etc., etc." But which is the more rugged path? Whom should he love as a brother? The fighting man or his mother? Which does the greater good, the vague act of fighting in a group, or the concrete one of helping a particular human being to go on living? Who can decide a priori? Nobody. No book of ethics can tell him. The Kantian ethics says, "Never treat any person as a means, but as an end." Very well, if I stay with my mother, I'll treat her as an end and not as a means; but by virtue of this very fact, I'm running the risk of treating the people around me who are fighting, as means; and, conversely, if I go to join those who are fighting, I'll be treating them as an end, and, by doing that, I run the risk of treating my mother as a means. 
And so a general ethical principle can't help us decide specific cases. And besides, even if we do choose a general ethical principle, there is no guide to tell us which one to choose. We also have to make that choice, and we bear responsibility for it and accept that it now defines us insofar as we act on it.

Sartre and Emotions
General ethical principles can't tell me what to do in particular cases. Maybe I ought to do what feels right to me. If the young man feels that his love for his mother is great enough to sacrifice his other desires, then he should do that. But if the feeling of love for his mother isn't enough to make him give up everything else, then he ought to leave.

But how will he know that he loves his mother enough to give up everything else unless he actually does it? To know that his feeling leads him to the right choice he has to live that choice. He has to see how it plays out. He might try it and find out that the desire to avenge his brother overwhelms him and he's unhappy staying behind. But he can't know this before he lives it. The feeling can't tell him in advance what to do.

Consider another case that many can relate to. How do you know if marrying someone will be the right choice? Consulting your feelings before you're married can't tell you. You'll only know if it's the right choice if you actually do it. If it turns out well, it was the right choice. If it turns out poorly, it wasn't. The feeling can't tell you what's going to happen. If feelings were a good guide to marriage, we'd expect the divorce rate to be substantially lower.

It Isn't All Doom and Gloom
So, we are hurled into the world, condemned to be free with no fixed lights to guide us in how we ought to live, yet we are somehow responsible for everything we do and are. This is the existential forlornness and anguish: forlornness because we cannot turn to anyone to make decisions for us, and anguish because of the tremendous responsibility that comes with creating both our own essence and that of Man.

But chin up, buttercup! The good news is that existentialism is a philosophy of action. You might not have any guides, but you get to create yourself--with every choice. Bonus: if you don't like where things are going, you can reinvent yourself at any moment, if you so choose.
There is no reality except in action. Man is nothing else but his plan; he exists only to the extent that he fulfills himself; he is therefore nothing else than the ensemble of his acts, nothing else than his plan.
Before you go skipping off into the sunset, though, know that existentialism is a demanding philosophy if you take it seriously. In creating yourself and defining Man, with every act you are saying "this is what I think all mankind ought to be--follow me!" Taken seriously, that is a massive responsibility to bear.

It also means that there is nothing that exists that is not expressed in action. There are no great authors with unwritten great books, no charitable people with kind deeds undone, no great loves who have not loved. "Reality alone counts and [...] dreams, expectations, and hopes warrant no more than to define a man as a disappointed dream, miscarried hopes, as vain expectations." In other words, coulda woulda shoulda is worthless. At the end of the day, what matters is what you did with your life and whether it was an example for others to follow.










Thursday, August 6, 2015

Cilantro, Moral Truth, and Justification

Preamble: 
I've been working on this paper for longer than I care to admit, but I have to turn it in at some point. I've written 4 or 5 different versions of it, all with different solutions or non-solutions to the puzzle I present. Anyhow, a few notes:

(A) For some reason the footnotes didn't post to the blog so some of my clarificatory points aren't here. Here are two of the important ones. 
     (1) The anti-realist position I'm concerned with is error theory (there are no moral facts and moral propositions have cognitive content). 
     (2) In the last section I talk a lot about "evidence". What counts as evidence in moral arguments would require its own paper so I make some assumptions about what counts: moral judgments, intuitions, principles, and emotions. I'm happy to include other candidates. 

(B) For non-philosophers all you really need to know to understand this paper is what moral anti-realism is. In the simplest terms it's the view that there are no moral facts. Everything is, like, just your opinion, maaaaaaaan!

Cilantro, Moral Truth, and Justification
Appetizers: Anti-realism About Gustatory Facts 
At dinner tables around the world, there is perhaps no issue more divisive than whether cilantro is or is not a delicious garnish. It is the gustatory domain’s own abortion debate. There’s little middle ground and each side views the other as irredeemably wrong. In more reflective moments, most would agree there are no objective facts about the deliciousness of particular foods.  Abe can claim that cilantro is objectively delicious while Bob claims that cilantro is objectively disgusting but the fact of the matter is that there is no fact of the matter! Granting this assumption, is there any way that we could make sense of the idea that either Abe or Bob’s belief is better justified than the other? 

For the moment, I’m going to assume that there isn’t. It seems as though Abe and Bob could offer justifications for why cilantro is subjectively delicious or disgusting but I doubt any of these reasons would convince a third party of cilantro’s objective gustatory properties. Abe and Bob could insist that their arguments and reasons support objective gustatory facts but we’d dismiss their claims as category mistakes—they’re confusing their personal preferences for objective facts about the world. Any argument they give for objective gustatory facts about the world is better interpreted as facts about their subjective gustatory preferences being projected onto the world.

Now consider an analogous moral case and substitute your favorite moral propositions and their opposite for the gustatory ones. For example, Abe claims that it is an objective fact that slavery is a morally good institution while Bob claims the opposite—i.e., that it is an objective fact that slavery is a morally bad institution. If in the cilantro case anti-realism about objective gustatory facts leads us to accept neither competing belief is better justified than the other then it seems that consistency requires that anti-realism about moral facts lead us to also conclude that neither Abe nor Bob’s beliefs regarding slavery is more justified than the other.  Just as there are no objective facts about the deliciousness of cilantro, there are no objective facts about the moral badness or goodness of slavery, and so one position cannot be more justified than the other. Any argument is merely a projection of the interlocutor’s personal preferences or explainable by appeal to facts about their psychology.

There may be some extreme anti-realists out there that are willing to bite the bullet and concede the point.  However, I’m willing to bet that many anti-realists would deny that all moral beliefs are equally well-justified even if moral beliefs can't be objectively true or false. If I'm right then these anti-realists need an account of justification that doesn't depend on the notion of truth. Is this possible?

The framework for this paper is to examine the relationship between moral anti-realism and justification. Suppose we accept that MORAL ANTI-REALISM IS TRUE: There are no objective moral facts. On what bases can we then evaluate competing moral claims? Is justifying objective moral claims analogous to trying to justify objective gustatory claims? That is, since there really are no facts of the matter, one claim is just as well-justified (or unjustified) as the other. The puzzle for the anti-realist is to reconcile the following two assertions: MORAL ANTI-REALISM IS TRUE with NOT ALL MORAL CLAIMS ARE EQUALLY JUSTIFIED.  I will argue that, if we adopt an externalist theory of justification, a Peircian fallibilism offers a potential solution to the puzzle. Before proposing my solution, I will consider and evaluate other attempts to reconcile the two assertions.

A Quick Word on Theories of Justification
Before proceeding we’re going to need to take a brief look at theories of justification and pare down the scope of my inquiry.

Two Theories of Justification
One way to analyze the concept of justification is along internalist/externalist lines. Internalists argue that a belief is justified so long as the believer is able to provide some sort of argument or supporting evidence when challenged. Externalists argue that a belief is justified if it was generated by a reliable belief-forming process, where "reliable" means that the process generates more true than false beliefs in the long run. For example, beliefs formed by visual perception are justified on the externalist view because visual perception generates more true beliefs than false beliefs in the long run.  So, my belief that there is a computer in front of me is justified because it was formed by my seeing it—which is a reliable process. It’s more likely that I’m actually seeing a computer than I am hallucinating it. 

I wish to side-step taking a definitive position on the internalist/externalist debate and suggest that both types of justification are plausible in ethics. We think that a moral belief is justified via normative reasons (internalist), but we also think particular ways of arriving at moral beliefs confer justification. For example, we think that moral judgments arrived at through careful reasoning and/or reflection are more justified than those produced by unreflective emotional knee-jerk reactions. And so it's plausible to think the reliability of the process that produces a moral belief is at least partially relevant to the belief's (relative) justification. For the remainder of this paper, I will grant myself that assumption and constrain the scope of justification to externalist justification. A full treatment of an internalist model in regard to my inquiry requires a paper unto itself—although I suspect internalist theories may have similar problems.

Round 1: Externalist Justification Isn't Possible if There Are No Moral Facts
The simple argument against the possibility of an anti-realist account of externalist justification goes something like this. 

P1. Reliability is cashed out in terms of whether a process produces more true than false beliefs in the long run.
P2. Anti-realists deny that moral propositions can be true or false.
C1. So, there's no way to evaluate the reliability of a process when it comes to moral beliefs because the very attributes that we require to measure reliability aren't available.
C2. So, on the anti-realist model all moral beliefs are equally justified (or unjustified).

In short, anti-realists deny the very attribute (truth) required to measure reliability. If we can't know which processes are more reliable than others there is no externalist ground to say one moral belief is better justified than another.  But, again, surely some moral beliefs are better justified than others…but how? 

Reply
Consider the inference rule modus ponens. We know modus ponens to be a reliable belief-forming process from using it in other domains. It is content neutral. Its reliability credentials have been checked out, so all we need to do is import it (and similar content-neutral processes) into the domain of ethics. Anti-realists can say that moral conclusions arrived at through modus ponens (or other combinations of formal rules of inference) are more justified than those that aren't.

Counter-Reply
Modus ponens and other valid argument structures are contingently reliable processes. That is, if the inputs (i.e., premises) are true then so too will be the outputs. The problem is that the anti-realist has denied the possibility of true inputs in the moral case. If the inputs can be neither true nor false, then the conclusions are also neither true nor false. And worse yet, the same argument structure can yield apparently contradictory outputs.

Consider the following examples:
Ethics 1
1E. If you abort a fetus it is wrong. (Neither true nor false).
2E. I aborted a fetus. (Neither true nor false).
3E. Therefore, what I did was wrong. (Neither true nor false)

Ethics 2
1E*. If you abort a fetus it isn't wrong. (Neither true nor false).
2E*. I aborted a fetus. (Neither true nor false).
3E*. What I did wasn't wrong. (Neither true nor false).

It appears as though importing valid argument structures into ethics doesn’t give a solution to the puzzle of reconciling MORAL ANTI-REALISM IS TRUE with NOT ALL MORAL CLAIMS ARE EQUALLY JUSTIFIED.
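To make the counter-reply concrete, here is a minimal sketch of my own (a toy model, not anything drawn from the reliabilist literature) that treats "neither true nor false" as a third value, None. The point it illustrates is just the one above: a valid form like modus ponens preserves truth when it is fed truth, but it cannot manufacture a truth value from premises that lack one.

```python
# Toy three-valued illustration of the counter-reply. Illustrative sketch only:
# a valid form preserves truth, but it cannot create it.

def modus_ponens(if_p_then_q, p):
    """Truth status of the conclusion q, given the statuses of the premises."""
    if if_p_then_q is True and p is True:
        return True        # true premises -> the conclusion inherits truth
    if if_p_then_q is None or p is None:
        return None        # truth-valueless premises -> truth-valueless conclusion
    return "unsettled"     # a false premise: the valid form settles nothing

# Non-moral case: importing modus ponens works because the inputs can be true.
print(modus_ponens(True, True))   # True

# The anti-realist's moral case (Ethics 1 and Ethics 2 above): the premises are
# neither true nor false, so the conclusions are too -- for both arguments.
print(modus_ponens(None, None))   # None
```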

Round 2: Content-Generating Processes
Perhaps the problem here is the content neutrality of the above processes. We need processes that justify initial moral premises as well as yield conclusions. We have some familiar plausible candidate processes that might confer justification: reflective equilibrium, rational reflection, rational discourse, coherence with existing beliefs, idealizing what a purely rational agent would want, applying the principle of impartiality, universalization, to name a few.

Notice also that we think that some cognitive processes for ethical judgment don’t confer justification—i.e., are unreliable. For example, if I form a moral judgment when I'm extremely angry I might come to reject that belief once I calm down and employ one of the above methods instead. So, a belief arrived at as a consequence of a temporary acute emotional reaction is not well-justified.  

Moral psychology and social psychology are littered with experiments where either the subject or the environment are manipulated to produce judgments that the subject would reject upon learning of the manipulation. This seems to hint at an answer: Beliefs that have been formed by processes that involve manipulation of the environment and/or the subject’s mood are less reliable/less justified than processes that don't involve any obvious manipulations. 

The Same Problem?
This account implies that some processes are more likely to "get it right" than others. But what is it to "get something right" if there's no target for epistemic rightness (i.e., truth)? This seems to be the same problem from Section 1 all over again. If moral beliefs can be neither true nor false, in what way can we say that one process yields more true outputs than another? Sure, we might systematically reject judgments produced by some processes in favor of others, but why prefer the outputs of one process over another if we can't say that judgments from one are more likely to be true than those from another? The grounds for thinking judgments from one process are more justified than those from another seem to be that the judgments from the former are more likely to be true.

Round 3: Analogy with Scientific Justification and Fallibilism as a Possible Solution
Harman argues that there is an important disanalogy between explaining ethical judgments and scientific judgments. We can’t explain why a physicist believes there is a proton without also postulating that there is something “out there” in the world causing the physicist’s observation, which is in turn interpreted as a proton. A moral judgment, on the other hand, can be explained merely by appeal to a subject’s psychology. We needn’t postulate any thing or moral properties “out there” that cause a subject to have a moral belief that x.

Let’s accept Harman’s argument. Despite the fact that the causes of scientific and moral judgments might differ, there may be ways in which justification functions similarly in both domains.
Scientists habitually couch their arguments and conclusions in fallibilist language. Claims and conclusions are presented as provisional, based on the currently available evidence and methods. The history of scientific discovery is one of revised and overturned conclusions in light of new evidence and recursive, self-correcting improvements in the scientific method itself.

From the point of view of internalism vs externalism about justification, we might consider new data as a kind of internalist justification for a claim because they are reasons to believe one thing rather than another. Research methods, on the other hand, can be viewed as instances of the externalist’s processes that confer justification. The idea that some processes are more reliable than others is a familiar idea in scientific research. Claims that derive from methods (i.e., processes) that avoid possible known biases are more reliable and hence more justified than methods that don’t.

For example, the placebo effect is a well-known occurrence in medical science. If patients think they are receiving treatment—even if they aren’t—patients report subjective measures (e.g., reduced pain/discomfort) significantly more positively than non-treatment (i.e., control) groups. We also know that if the researcher knows which patients are in the treatment group and which aren’t, this can influence both the way the researcher asks the patient questions and how they interpret data (they’ll bias toward a positive interpretation). For these reasons we think that the results from medical research that are placebo-controlled and double-blinded are more reliable than those that aren’t. 

In short, data from a study that employs a more reliable process (e.g., double-blind, placebo-controlled) are more justified than data from a study that didn’t do either of these things. The more a process avoids known errors, the more justified its conclusions—despite the fact that the blinded study’s conclusions might also eventually be overturned. There is always the background understanding that new and better methods might come about and generate incommensurate data and/or conclusions, but this doesn’t undermine the relative justification that the current methods confer on the output beliefs.

Analogously, moral and social psychology have produced a vast literature showing all the ways our moral thinking can go awry. We know that a cluttered desk can cause us to pass harsher penalties than we would otherwise, that a temporarily heightened emotional state greatly influences our judgments, that the feeling of empathy can lead us astray, and that implicit biases can play important roles in our judgments—to name a few. In short, there are many ways in which both our basic hard-wiring and various forms of personal and environmental manipulation can cause us to make judgments that we, upon learning of the manipulation or the bias, would likely reject in favor of a judgment arising from unmanipulated deliberation or one of the familiar gold-standard methods of moral reasoning.

Perhaps an anti-realist can think of the activity of moral thinking not as one that aims at discovering some objective truth, but rather as one that seeks to avoid known cognitive errors and insulate against manipulation. Insofar as our judgments derive from methods that avoid (known) cognitive errors and biases, our moral claims are better justified. This, however, doesn’t entirely solve the puzzle. We still need to answer why we should choose the output of one type of process over another. It’s easy to say that the manipulated judgment is “wrong” or “mistaken,” but how do we say this without appeal to truth? “Error” implies a “right” answer. One might just as easily say that the correct judgment is the manipulated or biased one and that we are systematically mistaken to adopt the reflective judgment made in a cool dark room.

The Main Challenge to The Anti-Realist: The moral anti-realist needs some general criteria to explain why we (ought to) systematically endorse judgments from one process rather than another. That we do isn’t enough. We must give an account of why one confers more justification than another.

Peirce, Methods of Inquiry, and The “Fixing of Opinion”
Before suggesting possible criteria to answer the challenge, I want to sketch out a Peircian analysis of methods of inquiry, since it inspired my suggestion. I’ll fill in other details later as they become relevant. Peirce argues that the “fixing of opinion” or “the settlement of opinion” is the sole object of inquiry (Peirce, p. 2-3). While we needn’t commit to Peirce’s exclusivity claim regarding the purpose of inquiry, Peirce provides useful insight into why we might think some processes confer greater justification than others. I take him to be proposing two related desiderata for our methods of inquiry: that they produce beliefs that are (a) relatively stable over time and (b) relatively impervious to challenge. The anti-realist can distinguish between processes’ justificatory status to the degree that they achieve (a) and (b).

This possible anti-realist response to the challenge of justification is not without difficulties. I will explore two related ones. First, it isn’t clear that stability of beliefs is a condition for justification. As standard examples of recalcitrant and dogmatic Nazis or racists show, we have reason to believe stability might have little to do with justification. Second, I will need to show that stability marks something we think matters to justification: namely, the absence of cognitive errors and the inclusion of all relevant evidence. This is the fallibilist aspect of the proposal: stability needn’t be a proxy for truth-tracking; it is, however, a reasonable proxy for believing that we are avoiding errors. This is part of the answer, but not all of it.

As Peirce notes, stability can be achieved in various ways—not all of which we’d think confer justification. The second part of this fallibilist model of justification has to do with the degree to which a process excludes relevant evidence, thereby making its outputs susceptible to challenge. Thus, for Peirce, long-run stability requires that the method of inquiry take into account all relevant sources of evidence. A method that excludes forms of evidence will produce beliefs that are more likely to eventually be overturned, or will require that people be unable to make inferences from one case to another or from one domain to another. Peirce compares four methods of inquiry, each of which aims to produce stable beliefs. In so doing he illustrates how this second criterion (i.e., imperviousness to challenge) works with the first (i.e., stability) to produce a theory of justification.

To achieve stability of belief under the method of tenacity, one dogmatically adheres to one's chosen doctrines by rejecting all contrary evidence and actively avoiding contradictory opinions. Coming into contact with others, however, necessarily exposes one to different and conflicting views. This method can’t accommodate ever-present new data without giving up on rationality and the ability to make inferences. Unless we live like hermits, the “social impulse” against this practice makes it unlikely to succeed. The method of authority is the method of tenacity practiced at the level of the state, with the addition of social control. This method fails to achieve long-term stability because it cannot control everyone's thoughts on every topic. This isn't a problem if people are unable to make inferences from one domain of inquiry into another, but to the extent that they are able, stability will be undone and doubt will emerge.

The above two methods have an important common feature in regards to how they achieve stability. In both cases, when I need to decide between two competing beliefs, the method tells me I ought to pick the one that coheres with what I already believe. In other words, there will be cases where I will have to “quit reason” by rejecting contrary evidence and inferences. The above methods for arriving at beliefs systematically exclude relevant evidence in generating beliefs. 

Now, compare these methods with something like wide reflective equilibrium or rational discourse. With these methods, how do we determine what to believe? Rather than exclusively referring back to what we or the state/religion already endorse, when these methods confront new evidence they adjust their output beliefs accordingly. In the long run, the supposition is that (for example) reflective equilibrium and rational discourse will lead to more stable beliefs than the above two methods because their outputs include the best available total evidence rather than reject it.

Reply to the First Challenge
Let’s restate the first challenge: stability on its own doesn’t seem to confer justification. There are many methods of inquiry by which we might achieve stable beliefs, not all of which confer justification. When we defend stability as a justificatory property, the concern is that we’re begging the question: when a process generates stable beliefs that we approve of, we think stability is good; when it generates beliefs we disagree with, we think stability is bad.

To reply to the challenge, let’s consider, for example, reflective equilibrium. With (wide) reflective equilibrium we suppose that by finding an equilibrium between everyone’s principles and considered judgments, in the long run we arrive at a view that no one could reasonably reject (because it also takes their views into account). Stability, on this method, arises as a consequence of taking everyone’s principles and considered judgments into account; i.e., the method doesn’t obviously exclude any evidence that might, in the long run, diminish the stability of the beliefs it produces. And so the critic of stability has a point that stability on its own might not confer justification. What matters is why the view is stable. The assumption is that a moral view derived from processes that take competing viewpoints into account is stable for the right reasons: it is less susceptible to challenge. And it is less susceptible to challenge because those processes don’t exclude evidence (widely construed).

Reply to the Second Challenge
The second challenge is to give positive reasons for thinking long-term stability confers justification. When we make knee-jerk or manipulated judgments we end up with beliefs inconsistent with our other beliefs, and so we have reason to conclude we’ve made an error somewhere—either in endorsing an inference, a judgment, or a general principle. Conversely, with a process like reflective equilibrium or rational discourse we eliminate inconsistencies in the long run. By proxy, we’re also eliminating errors in our inferences, judgments, or general principles; i.e., the things that undermine justification. In the long run, the supposition is that some methods of inquiry (e.g., reflective equilibrium) will lead to fewer and fewer errors, in turn contributing to more and more stable beliefs.

This also helps answer a problem I raised in the first section of the paper. A reliabilist account of justification has truth built in and so doesn’t help an anti-realist explain how following a valid argument structure can confer justification. If the inputs don’t have a truth value, then neither will the outputs, and so one output is just as justified as another. A fallibilist approach provides a solution. Failure to follow a valid argument scheme undermines justification because it indicates an error in reasoning. And so deliberative belief-forming processes that don’t follow valid schemes are making errors, and in that respect their outcomes are less justified than those of processes that do.

The elimination of errors is part of the answer to why we think beliefs derived from one process are better justified than beliefs derived from another. Long-term stability of output beliefs is partly a proxy for the absence of errors, and so, to the degree that a process generates stable beliefs, we have reason to think those beliefs are less likely to contain or be the product of errors.

The second part of the answer is similar to the reply to the first challenge: processes that attempt to exclude classes of evidence or deny certain inferences aren’t going to be stable. There’s an analogy with science: a research method that regularly has its conclusions overturned because it fails to take into account certain sources of evidence is a process that generates beliefs that are less stable in the long run. The beliefs are less stable because important classes of evidence aren’t taken into account or controlled for (for example, various cognitive biases). Similarly, a moral reasoning process that fails to take into account certain sources of evidence (e.g., competing arguments, the fact that we are prone to certain cognitive errors) is also going to generate beliefs that are less stable, and by extension, less justified.

Conclusion
The puzzle for the anti-realist is to reconcile a commitment to there being no moral facts with the view that some moral beliefs are more justified than others. If we take an externalist account of justification, a Peircian fallibilism offers a possible solution to the puzzle. Why should we think the outputs of one process are more justified than those of another if the outputs can’t be true? Some processes generate beliefs that are relatively more stable and impervious to challenge in the long run than do other processes. Stability, on this model, occurs as a consequence of taking into account all the relevant data and avoiding cognitive errors in generating output beliefs. By doing so, outputs are less likely to be overturned by excluded evidence. Stability is also a proxy for the absence of error. If a process produces beliefs that are systematically overridden, it must be because the outputs are inconsistent with other judgments, beliefs, or inferences. Processes that systematically generate inconsistencies indicate errors, which in turn also undermine stability.

I’d like to close with the following thought experiment. Suppose both realists and anti-realist agree on which processes confer greater relative justification than others. Would the realism/anti-realism debate matter much? Aren’t comparatively well-justified beliefs (and actions) what we’re really after in ethics?







Monday, July 27, 2015

Abortion, Animal Rights, and Moral Consistency

Ok, I can't take it any more. I'm supposed to be working on a paper but the recent flood of articles regarding Planned Parenthood and abortions is making me loco--but not for the reasons you might think. I'm not going to argue for a position regarding abortion; I just want to point a few things out in regards to how a position on the abortion issue "bears" on animal rights if there is going to be a modicum of concern given to moral consistency.

Before continuing I just want to emphasize that this article isn't intended to engage with all of the philosophical literature on abortion--that would require a book or more. The intent is to look at some of the most common reasons given for opposing abortion and how they relate to animal rights if moral consistency has any value. At the end I'll briefly "flip it and reverse it" and suggest how a position on animal rights bears on abortion.


The Basic Argument
One of the most common arguments against abortion is that "it is murder." The argument goes something like this:

P1. Abortion is killing an innocent person.
P2. Killing an innocent person is murder.
P3. Murder is wrong.
C.  Therefore abortion is wrong (via transitivity).

Of course the argument only works if you accept P1, which is in fact where the real debate is. Is a fetus a person? And if so, what attributes confer personhood?

Often what you'll hear is that the fetus is a person because it is human, and since everyone agrees it's wrong to kill innocent humans, abortion is also wrong. But being human is simply a biological category. What we want to know is what attributes the fetus has that make its interests worthy of moral consideration. Saying "because it's human" only redescribes what we already know. Nobody doubts that the fetus is trivially biologically human. We still need an answer to the question, "what morally relevant attribute do human fetuses have that makes it wrong to kill them?"

Taking a step back, this is one of my favorite things about philosophy. We ask questions for which the answer seems so obvious that no reasonable person would even think to ask the question in the first place, yet once we ask the question the answer doesn't seem so obvious after all. In this case, the general question is "why is it wrong to kill humans?". As should be clear now, answering "because they are human" is not very satisfying.  Duh! We know that! Surely, there must be something about humans that makes it so it's wrong to kill them. What is it?

Rationality
One popular answer is "rationality." Ok, suppose we accept that. Is a fetus rational? Nope. Are some adult mammals rational? Yes (maybe they can't do upper-division math, but they have a minimal rationality that we recognize in human children). So, if rationality is truly the standard for moral consideration of interests, it seems like we should have less of a problem with abortion than we do with killing pigs and experimenting on primates. Pigs and other adult mammals are orders of magnitude more rational than a fetus--which isn't rational at all--and at least as rational as young children.

The obvious reply is that a fetus is potentially rational. It will one day be rational, and since rationality is what makes it wrong to kill humans, we shouldn't kill the fetus. One problem with this reply is that it's not clear how potential properties confer current rights. If I am potentially a landlord, does that mean I should get all the rights a landlord has now? Does the fetus that will one day be a university student get all the rights of a university student now? We typically don't give children the full rights of adulthood until they have the capacities to exercise those rights. How do you get rights for capacities you don't currently have? That seems a bit odd.

But let's grant that you can somehow get rights based on your potential attributes, in this case rationality. Is rationality really the measure of moral consideration? Consider: a child poet and an adult logician are both about to die and you can only save one. Is it so clear that you should save the logician? Although rationality seems to play some role in whether we confer moral consideration, it doesn't seem to be the most important consideration. If it were, we should give more moral consideration to adult mammals than to fetuses since adult mammals are more rational.

And, even if we grant that a fetus can have rights in virtue of a potential attribute, surely we should also take into account rights that derive from that same actual attribute.  In other words, if we want to say that a fetus has certain moral rights in virtue of its potential rationality, consistency demands that we also say that, in so far as living animals are actually rational, they have rights commensurate with their actual rationality.  It would be a strange moral theory that confers greater moral status commensurate with potential attributes than actual attributes.

You can run this same argument for potential and actual desires (to live). Although an animal might not be able to express it verbally, it's reasonable to infer from its behavior that it would rather live than die. Does a fetus have desires? Nope. Ok, so we can go the potential or future desires route but accepting this would seem to require us to also accept the actual desires of animals not to be killed.

Pain
OK, so a fetus isn't rational and maybe rationality isn't all there is to having moral status. Maybe the capacity to feel pain is what confers moral consideration? At least in the early stages of development, a fetus is incapable of feeling pain since it has no central nervous system. Animals, on the other hand, do feel pain, so, if pain is the marker of moral consideration, we should give moral consideration to living animals rather than to fetuses. Again we can appeal to potential pain (?) if this even makes sense. Even if we allow it, it seems as though the actual pain of animals should be weighed at least as heavily as the potential pain of a fetus that doesn't ever live to feel that pain--if that even makes sense.

Heart
"But a fetus has a beating heart!!!"  Perhaps after 6 weeks this is true. But again, suppose we accept that having a beating heart is what confers moral status and make termination impermissible. Animals  have beating hearts too and so too must have moral status and termination of their life is also impermissible.


DNA

Another possible answer is that it's wrong to kill a fetus because it has human DNA. First of all, this criterion is question-begging. We already know that the human fetus has human DNA. What we want to know is why merely having human DNA confers moral status. My fingernail clippings have human DNA. Do they have moral status? Maybe it's replicating human DNA that has moral status. But why? The various organs in my body all have replicating DNA; do those cells have moral status? That seems weird.

Life Begins at Conception
If we charitably employ the term "life" this is trivially true in a biological sense but of course mere descriptive biological facts don't necessarily imply moral conclusions. Typically, for something to be considered alive in a full sense we'd think some degree of self-sufficiency would come into play. Anyhow, is "being alive" all that's required for moral consideration of interests? If that's the case, all animals should also have their relevant interests considered in proportion to how alive they are.

"No! No! It's different because it's human life." Ok, fine. Tell me again what morally relevant attribute a human fetus has that other creatures don't have. And saying "because it's human" again and again doesn't answer the question. It merely redescribes the biological facts but says nothing of the moral facts. We need an answer to the question, "what morally relevant attribute do human fetuses have that living adult animals don't have?"

Life Begins at Conception and In Vitro Fertilization
Although not directly linked to the issue of animal rights, one of the most glaring inconsistencies in the anti-abortion movement is its silence on in vitro fertilization. In most in vitro fertilization procedures, about 8 eggs are fertilized. Do you think that every couple that engages in in vitro fertilization uses all 8? Nope. Maybe they'll use two...(unless you're Octo-mom).

Now, if those that argue that moral life begins at conception take their position seriously they should be protesting in vitro clinics rather than abortion clinics. For each person they prevent from going through with the fertilization procedure they save 6 or 7 human "lives" rather than a measly single life at an abortion clinic.

The reason why they would never do this is because politically their cause would fail and it wouldn't surprise me if at least some people who oppose abortion have used in vitro. Nothing like your own needs and desires to motivate a special pleading argument or initiate motivated reasoning.

To be fair, some in vitro clinics have gotten around this "inconvenient truth" by freezing whatever embryos aren't used. "Hey, we never terminated them, we just froze them forever"--or (more realistically) at least until we forget about them. Anyone with any intellectual honesty should see what a cop-out this is.

The Bottom Line
When we talk about rights we usually think of rights in terms of particular capacities. We don't give children the right to vote or to drive because we don't think they have the relevant capacities. When they develop those capacities they gain the relevant rights. Similarly, we don't give men the same reproductive rights as women because biologically they couldn't exercise these rights. If this is our model of rights (i.e., capacities) then it seems odd to confer rights to something without any relevant capacities. We can of course say that it has the capacity to live but this doesn't distinguish it from any other living thing and so consistency requires either we reject the argument or we confer those same rights on those other living things. 

And so, if anti-abortionists were sincere in their arguments, consistency demands that they be just as sincere in their advocacy for and protection of animal rights. In short, there should be no meat-eating anti-abortionists.

[Philosophers note: the capacities theory of rights isn't the only theory of rights.]

Flip it and Reverse It
Notice that the consistency requirement works the other way too. If you have strong views against killing animals yet are pro-choice, your views may be inconsistent depending on how you defend your position on animal rights and the stage in a fetus's development up to which you think abortion is permissible. If you think abortion is permissible at a stage in its development where it has some of the attributes that are shared by living animals, then your position is likely to be inconsistent. Or if you think it's just wrong to terminate an animal's life prematurely for any reason, then your case for the permissibility of abortion is paper-thin if moral consistency matters--especially if you are OK with late-term abortions.

One other puzzle that pro-choice advocates have to deal with is coming up with a morally relevant criterion that distinguishes a late-stage fetus in the womb from a new-born infant. Well, let me qualify that. The distinction has to be made so long as the pro-choice advocate thinks it's wrong to terminate a healthy new-born but permissible to terminate a 3rd-trimester fetus. What is the morally relevant attribute that the new-born has that the fetus doesn't have?

And in the interest of fairness, animal rights proponents often point to the gruesomeness of killing animals at the factory level. If gruesomeness is a morally relevant property (which, in my view, is very plausible), late-term abortions are also gruesome, and so this gruesomeness should inform our position. To get an idea of what late-term abortions are like, I suggest watching the documentary Lake of Fire, which is probably one of the best documentaries on the abortion debate.




Wednesday, May 20, 2015

Solving US Health Care Cost Problems: Free Market vs Government Policy. Part 1

I apologize in advance for the somewhat scattered nature of this post. I'm trying to work through some ideas. 

What I'm trying to figure out is whether certain cost problems in the US health care system can be solved using free market approaches or whether solutions require government intervention. The problems I'm looking at arise as a consequence of conflicting interests between insurance companies, hospitals, doctors, and patients. Any solution has to find a way to harmonize the respective interests of each. Recent evidence suggests government action works, although the case is far from certain. Also, supposing there are equally effective free-market solutions, we still must ask why we'd choose one type of solution over the other. Let's get some statistics on the table first to get a general overview of the US health care situation.

Give or take a few, total US expenditures on health care in 2013 were $2.9 trillion. When we average that cost over the population, the average per-person cost is $9,225 (2013). To give those figures some context, the OECD average for per-person health care spending is $3,448. The next highest spender is Switzerland at $6,080 per person. Despite the familiar mantra of "but we have the best healthcare in the world," the US performs comparatively poorly in terms of many health outcomes. (To be fair, there are also a few areas where it doesn't, such as wait times for specialists and surgeries, and cancer treatment outcomes.) Currently, health care costs represent about 17% of US GDP and are expected to rise to 22%, whereas the OECD average is 9.5% of GDP, the next highest being the Netherlands at 12%.

If you're like me, you're thinking, "Wait a minute, I didn't consume even close to $9,000 in health care this year. Where is this number coming from?" In other words, we can't just look at averages; we need to know how those costs are distributed across the population. That information will allow us to target cost-saving policy at the costliest populations and/or health issues. Are you ready for your head to 'asplode'?

In a single year, what percentage of total health care dollars spent do you think went to the top 5% of health care users? (I.e., the sickest people). Ready? About 49%. That's right. Just 5% of the population consumed almost half of all health care dollars spent in a year.  Now, what percent of the total health care dollars spent do you think went to the top 1% of health care consumers? Ready? The top 1% of health care consumers consumed about 30% of the total health care dollars spent in a single year.

Ok, let's look at the other end of the spectrum. What percentage of total health care spending did the bottom 50% consume? (I.e., the healthiest people or those with the cheapest conditions to treat). Ready? It's 3%. Yup, 50% of the population only consumes about 3% of the total health care dollars spent in a year.
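To see just how skewed that distribution is, here's a quick back-of-the-envelope sketch of what those shares imply per person. The only inputs are the figures cited above ($2.9 trillion in total spending, a $9,225 per-person average, and the 49%/30%/3% shares); the population is simply what the first two numbers imply, and the per-group dollar amounts are rough illustrations, not official statistics.

```python
# Back-of-the-envelope arithmetic from the 2013 figures cited above.
# The per-group dollar amounts are illustrative, not official statistics.

total_spending = 2.9e12        # ~$2.9 trillion in total US health spending
avg_per_person = 9225          # cited per-person average
population = total_spending / avg_per_person   # ~314 million people (implied)

# (share of population, share of total spending) for each group cited above
groups = {
    "top 1% of users":     (0.01, 0.30),
    "top 5% of users":     (0.05, 0.49),
    "bottom 50% of users": (0.50, 0.03),
}

for name, (pop_share, spend_share) in groups.items():
    per_person = (spend_share * total_spending) / (pop_share * population)
    print(f"{name}: roughly ${per_person:,.0f} per person per year")

# Rough output: ~$277,000 (top 1%), ~$90,000 (top 5%), ~$550 (bottom 50%).
```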

(For more fun facts about the distribution of health care spending, here's a--wicka wicka--breakdown)

So, why--beyond shock value--should we care about these statistics? Because if we're going to set policy to decrease health care costs, we're going to get way more bang for our buck if policy is directed at the top 5% of users rather than everyone all at once. There's very little to be gained by reducing the health care costs of the healthiest 50%, whereas there are very likely cost savings available from the top 5%.

We might ask why treating this population is so expensive. To figure this out we need to know whether their treatment is expensive because of the nature of the conditions they have, the way those conditions are treated/billed/managed, the fact that they're always in and out of hospitals, or some combination of all of the above.

It turns out that the 5 most expensive conditions to treat are heart disease, cancer, trauma, mental disorders, and pulmonary conditions. But if only a small percentage of the population has these conditions, they won't account for the high costs. What we need to know is both which conditions are the most prevalent in the population and, of those, which are the most expensive to treat.

A quarter of the population has at least one of these five chronic conditions: diabetes, heart disease, asthma, mood disorders, or hypertension. Unfortunately, each of these conditions is associated with other conditions and illnesses. Treating the primary conditions in conjunction with the associated illnesses accounts for 50% of all health care spending.

How do we put all this information together? If we want to figure out a way to reduce costs, clearly we want to go after the most costly people and conditions to treat. And if those two variables overlap, that's probably a good target. So, how should we do it? Does government need to implement some sort of policy, or are there free market solutions? To answer this question I want to use as a case study one hospital's method of reducing treatment costs. There are other successful models for cost reduction as well, which I'll also look at briefly. The point I want to establish is that it is possible. Not only is it possible, but these successful models reduced costs while improving quality of care and health outcomes.

What I really want to know is whether these models were a consequence of government policy (i.e., the ACA) or whether they could have come about without a government mandate. If the answer is the latter, we must ask the obvious question: then why didn't it happen pre-ACA? The pro-market person can correctly point out that it did, in a very small handful of cases. Just look at the Mayo Clinic and an HMO in Colorado. But there's a further question lurking. If these models were so successful, why weren't they copied? Presumably, when someone finds a superior business model, other businesses must copy it or lose out.

The pro-market person might reply that the regulatory environment pre-ACA interfered with market forces such that efficiencies weren't realizable. But their own example--the Mayo Clinic--seems to undermine this argument. I'll have to investigate this claim. On the other hand, the ACA (government action) brought in (Medical) Accountable Care Organizations (ACOs). ACOs are voluntary programs that reward Medicare and Medicaid providers (e.g., hospitals, clinics, doctors, etc.) for cost savings through innovation. Health care economists and pretty much anyone else who works in health care policy have known for decades that preventative medicine and coordinated/managed care (where groups of specialists are paid as a team to manage patient outcomes) is the best way to bring costs down and improve patient outcomes. So, why wasn't anyone doing these things pre-ACA? Why didn't market forces converge on this more efficient model?

My answer is that there were two prisoner's dilemma-like situations. The first is between health insurance companies; the second is between doctors/care providers and insurance companies. Let's take a look at the first. The health insurance business is an odd one. You're trying to sell a product that you hope your customer will never use. And so, the best customers from the insurance companies' point of view are the healthiest ones. Everyone wants the healthy customers. No one wants the sick ones. Here's the deal with preventative care and managed care programs: setting up these programs requires up-front investment that won't see returns for up to 5 years.

Here's the problem. If you're the only insurance company that invests in a preventative and managed health system, you're going to have lots of healthy customers. This might seem good until you realize that you're the only one that invested in the program. What are the other companies going to do? They're going to try to poach your customers! You invested all that money and created healthy customers, and now all the other insurance companies are going to swoop in and steal the healthy customers you created. Seeing how this might happen, no insurance company wants to be the sucker--even though they all want healthy customers! And so no insurance company makes the up-front investment, and we end up with no cost-saving measures.

It looks like the only way to get the health insurance companies to make the up-front investment in these cost-saving measures is if somehow "someone" gives each company assurances that all the others will do the same. This way, no one ends up a sucker by being the only one to make up-front investments only to have its newly healthy customers poached. It looks like the government is that "someone": it can offer the assurance by mandating that preventative care be part of health insurance policies. Doing so allows the insurance companies to exit the prisoner's dilemma.
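To make the dilemma concrete, here's a minimal sketch of the payoff structure I have in mind. The dollar figures are invented for illustration; the only thing that matters is their ordering: whatever the other insurer does, holding out pays better, even though mutual investment beats mutual holding out.

```python
# A toy payoff matrix for the insurers' dilemma described above.
# All dollar figures (in millions) are invented; only their ordering matters.

# Payoffs to insurer A for each (A's choice, B's choice) pair.
payoffs_A = {
    ("invest", "invest"):     50,   # both invest, both keep their healthy customers
    ("invest", "hold_out"):  -20,   # A pays for prevention, B poaches the healthy customers
    ("hold_out", "invest"):   70,   # A free-rides on B's investment
    ("hold_out", "hold_out"):  0,   # status quo: no one invests, costs stay high
}

def best_response(other_choice):
    """Return A's best choice given what the other insurer does."""
    return max(["invest", "hold_out"],
               key=lambda mine: payoffs_A[(mine, other_choice)])

# Whatever the other insurer does, holding out pays more for A...
print(best_response("invest"))    # -> hold_out
print(best_response("hold_out"))  # -> hold_out

# ...so by symmetry both hold out and each gets 0, even though mutual
# investment (50 each) is better for both. A mandate that everyone invest
# removes the fear of being the sucker.
```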

The second prisoner's dilemma occurs between doctors/care providers and insurance companies (I'm less sure this is technically a prisoner's dilemma; it might just be a more general case of conflicting interests). Doctors and care providers (e.g., hospitals) want to get paid more money rather than less. Insurance companies want to pay less rather than more. This leads to high costs. For example, if a doctor is charging for every test he orders and every minor consultation, he's likely to order more tests than necessary. In fact, there's good evidence that this happens. Consider the data from 2012: for some types of tests, US doctors order significantly more than their OECD counterparts. So, how do we get doctors and hospitals to move to a managed care model, i.e., one where they aren't under a fee-for-service model, or at least one that doesn't incentivize ordering unnecessary tests and procedures?
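Before getting to the answer, here's a minimal sketch of the incentive difference the question is pointing at. The fee levels and function names are invented for illustration; they aren't drawn from any actual payment schedule.

```python
# Toy comparison of the two payment models' incentives. Dollar figures are
# invented; the point is only the sign of the marginal revenue from an extra test.

def fee_for_service_income(num_tests, fee_per_test=200, cost_per_test=50):
    """Provider is paid per test, so each extra test adds net revenue."""
    return num_tests * (fee_per_test - cost_per_test)

def capitated_income(num_tests, payment_per_patient=1_000, cost_per_test=50):
    """Provider gets a fixed payment per patient, so each extra test is a cost."""
    return payment_per_patient - num_tests * cost_per_test

for tests in (2, 5, 10):
    print(tests, fee_for_service_income(tests), capitated_income(tests))
# Under fee-for-service, income rises with every test ordered;
# under a capitated (managed care) model, unnecessary tests eat into the margin.
```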

Again, it looks like at least one answer is through government mandate. ACO programs under the ACA are voluntary for hospitals to enter. Hospitals get to share whatever cost savings they generate through managed care initiatives. Interestingly, the hospitals are free to experiment with whatever managed care models they want. So, if a hospital saves Medicare and Medicaid $2 million compared to the previous year, the hospital gets to keep just over half of the savings. The program has been a huge success. Here are a couple of highlights:

In the first year of the program, 58 Shared Savings Program ACOs held spending $705 million below their targets and earned performance payments of more than $315 million as their share of program savings. Total net savings to Medicare was about $383 million in shared savings, including repayment of losses for one Track 2 ACO.

In the second year Pioneer ACOs generated estimated total model savings of over $96 million and at the same time qualified for shared savings payments of $68 million. They saved the Medicare Trust Funds approximately $41 million. 
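To make the shared-savings arithmetic concrete, here's a minimal sketch of the "keep just over half" split described above. The benchmark and spending figures are invented, and the real Shared Savings Program rules add minimum savings rates, quality scores, and caps that this toy version ignores.

```python
# Toy version of an ACO shared-savings calculation. Numbers are invented;
# real program rules add a minimum savings rate, quality adjustments, and caps.

def shared_savings(benchmark, actual_spending, sharing_rate=0.5):
    """Split the savings (if any) between the provider and Medicare."""
    savings = benchmark - actual_spending
    if savings <= 0:
        return {"provider_bonus": 0.0, "medicare_keeps": 0.0}
    provider_bonus = savings * sharing_rate
    return {"provider_bonus": provider_bonus,
            "medicare_keeps": savings - provider_bonus}

# A hospital whose patients were projected to cost $40M but came in at $38M:
print(shared_savings(benchmark=40_000_000, actual_spending=38_000_000))
# -> the provider keeps about $1M of the $2M it saved; Medicare keeps the rest.
```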

Where We At?
So far it looks like there's a strong case to be made that government action can solve many of the cost problems with the US health care system. Of course, just because the government can solve these problems, it doesn't necessarily follow that a market-based solution couldn't. Some might even argue that the problems arose in the first place as a consequence of government intervention in the market. I'll look at these arguments in the next post. For now, amaze your friends with your new-found knowledge of health care statistics.

Friday, May 15, 2015

Day 3: Psychological Jade and The Arrogance of Ignorance

Introduction
Before reading this, I suggest reading my post from yesterday since most of what I talk about here relates to it (and I end up retracting most of what I said).

'The arrogance of ignorance' is one of my favorite phrases. I'm not sure of its origins, but I first heard it from Dr. Steven Novella. I think the phrase is the best way to capture a cluster of common cognitive errors, in no particular order: (a) assuming that because you are knowledgeable in one domain you know a lot about another (or are able to correctly evaluate another); (b) moving from small data sets/anecdotal experiences to broad conclusions. 'The arrogance of ignorance' is a close cousin of the Dunning-Kruger effect: you have so little knowledge of a particular domain that you are unable to assess how little you actually know and grossly overestimate how much you do know, in turn leading you to wildly wrong conclusions.

Anyhow, yesterday I was guilty of all of the above crimes. You'd think a guest pass to a hospital and a few hours of observation would give me enough authority and knowledge to correctly evaluate an entire subfield of medicine. Strangely, it didn't.

Today I went back to inpatient psych, and boy am I glad I did (from a pedagogical point of view). Let me try to both convey my experience and undo some of the misconceptions I had.

Sample bias
Most of the patients I met yesterday had been in the hospital for over a week. I was meeting them after they'd undergone treatment and had been stabilized. Of course they seemed normal to me! As I learned (and hopefully you will too once you read this post), what I did was the equivalent of walking into a surgical unit, looking only at the patients about to be discharged, and asking, "Why did they need surgery? They look fine to me!"

So, what are patients like on admission and early in treatment?

Obviously there are a variety of disorders, but all of them are severe. Here's the thing: unless you work in a hospital or have someone in your family with a severe mental illness, you've probably never actually seen severe mental illness. To most of us, this is an invisible population, because most of their lives are lived in care homes, in institutions, or, unfortunately, on the streets.

The patients range from very well-spoken with linear thought to having only an elementary vocabulary with disjointed, unintelligible thought, and any combination of the above. Regardless of where they fall on the spectrum, most of them suffer from severe delusions. Some examples: (1) being part of an intergalactic group of assassins pursued by the (intergalactic) mafia; (2) believing that a family member is dead when they aren't, even though all the evidence they have should lead them to conclude the opposite; (3) being pursued by terrorists and (actually) destroying windows and cars to avoid/prevent the terrorist plot; (4) having voices in their head telling them to kill others or kill themselves. There were more, but this should suffice.

Interestingly, the ones who had grand conspiracy delusions, e.g., (1) and (2), were extremely pleasant to talk to. If you were to have a conversation with them and the content of the delusion never came up, you wouldn't suspect a thing. It was as though they simultaneously inhabited two realities. When you ask them, they know where they are and why they're there. They'll say "I just want to get better," but at the same time they'll discuss their delusions as though they're just as real as the chair they're sitting on.

Unlike what I hypothesized yesterday, these people don't "just need a little more social and material support." That's the equivalent of saying someone with cancer can be healed with a back rub.

I think my reaction yesterday is probably analogous to what happens with many deniers of modern medicine. They've been in the hospital to visit a friend, or they've read some article online--maybe they've even spoken to a disgruntled doctor. But they've only seen a billionth of the data set, and only from the patient side of the bed. Things look very different from the doctor's side of the bed and as you get a larger data set...

Tip of the Iceberg
Another factor that led me to my (wayward) conclusions yesterday was that I didn't ask enough questions about case histories. Once you read the case histories, your perspective changes very quickly. Every patient in there has a lifetime history of psychosis that is well documented. Almost all have been suffering the same symptoms since adolescence. Some have their condition for unknown reasons or for biological reasons (usually genetic, as it runs in their families); others (there were two there) suffered major brain injuries at some point earlier in their lives and haven't been the same since; others have their condition as a consequence of a lifetime of substance abuse. For many it was a combination.

Someone who thinks that a little positive thinking or mere talk therapy is going to solve these people's problems is extremely naive--like I was yesterday. Someone who thinks along these lines is mistaking people who have one-off breakdowns or depression for this other population. Like I said, unless you work in a hospital or have a family member with severe mental illness (or, probably, work in law enforcement), it's unlikely you've ever met anyone from this unfortunate population. Until you do, you can't fathom just how serious it is.

Philosophy of Science Lesson: Depression and Jade (Bear with me, You'll See How this Relates in a Moment)
What is jade? Up until the 19th century it was believed to be a single mineral. However, a French mineralogist (Alexis Damour) discovered that it is in fact two distinct minerals, jadeite and nephrite, each with distinct chemical and structural properties. Nephrite consists of a microcrystalline interlocking fibrous matrix of the calcium- and magnesium-iron-rich amphibole mineral series tremolite (calcium-magnesium) to ferroactinolite (calcium-magnesium-iron). Jadeite is a sodium- and aluminium-rich pyroxene; the gem form of the mineral is a microcrystalline interlocking crystal matrix.

This isn't Rocks for Jocks 101, so why should we care? I'm getting there. Notice that both structurally and chemically, nephrite and jadeite are different. What does this mean? Well, we know that different chemical compositions will react differently, and different microstructures will also behave differently. Jadeite and nephrite have different fundamental properties. From the point of view of science--if we think that science divides and studies the world in terms of its fundamental rather than superficial properties--there's no such thing as jade. There is no one chemical composition or microstructure that is jade.

Here's another way to illustrate what I'm getting at. Why isn't there a science of green things? Why aren't there green-ologists? The reason is that green is a superficial property. Knowing that something is green gives us no predictive power in terms of how other green things will behave. It also provides no explanatory power for why it behaves the way it does. 

For example, green algae has very different fundamental chemical properties from green glass. Suppose I put HCl on the green algae. Based on that chemical reaction, would I be able to predict what will happen if I put HCl on green glass? Does the algae's greenness explain why it reacts the way it does to HCl? Of course not. In science, I want to lump things into categories that are going to allow me to make generalizations and predictions about other things in that category.

Learning about green algae doesn't help me learn anything important about green glass except what I already knew--they're both green. We don't lump the two into a scientific category because science is only concerned with "lumping" things in terms of shared fundamental properties rather than superficial properties. We ought to "split" superficial categories that contain objects that have different fundamental properties.
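If it helps, here's a toy way of putting the lumping/splitting point. Everything in it (the samples, the "reaction rules") is invented for illustration; the point is just that a category built on a superficial property licenses predictions that the fundamental properties don't support.

```python
# Toy illustration of lumping by a superficial property (color) versus a
# fundamental one (composition). Samples and "reaction rules" are invented.

samples = {
    "green algae": {"color": "green", "composition": "organic"},
    "green glass": {"color": "green", "composition": "silicate"},
}

# In this toy model, behavior is fixed by composition, not color.
reacts_with_HCl = {"organic": True, "silicate": False}

def predict_by_analogy(shared_property, known_name, unknown_name):
    """Predict the unknown sample's reaction from a known sample, on the
    assumption that sharing the given property means behaving alike."""
    known, unknown = samples[known_name], samples[unknown_name]
    if known[shared_property] != unknown[shared_property]:
        return None  # not in the same category, so no prediction is licensed
    return reacts_with_HCl[known["composition"]]  # project the known behavior

# The "green-ologist" lumps by color and predicts glass reacts like algae:
prediction = predict_by_analogy("color", "green algae", "green glass")
actual = reacts_with_HCl[samples["green glass"]["composition"]]
print(prediction, actual)  # -> True False: the superficial category misleads
```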

Ok, so what does all this have to do with psychiatry and psychiatric diagnoses? Consider that a common diagnosis we hear about is depression. Most cases we (by "we" I mean non-medical professionals) have encountered are probably infrequent, non-pathological affairs, perhaps set off by a traumatic event. We know that, with support, most people eventually work through the depression and end up fine. The problem is, 'depression' is psychological jade.

Different types of depression can manifest the same superficial symptoms, but the underlying causal structures are different. (No, alt-meders, this isn't the same "root cause" you're thinking of, but it is the one you ignore.) So, the mistake is to think, "Ah, depression...we just need to treat it with x; that's what we did with the last case." But this is to treat depression like jade--i.e., as a homogeneous category based only on superficial resemblance.

For example, I learned (the very surprising fact) that for many types of deep depression the most effective treatment is ECT (electroconvulsive therapy)--yes, you read that right! I had to ask the doctor twice because I couldn't believe my ears. Apparently, it's well studied. Of course, the current procedure is quite different from how it was in the early days but still...who'da thunk? 

"Jadists" about depression might think all cases of depression can be treated with ECT. This would be a mistake. There are different kinds of depression with different etiologies (underlying causal structures). It turns out that depression in manic depressives doesn't respond to ECT. Depression has its own jadite and nephrite (and more). The "root cause" of depression in manic depression is fundamentally different than it is in other kinds of depression.

Besides giving an overview of a famous philosophical argument, why am I talking about this? Because if you're a human being, you're probably going to commit the same cognitive error that I made when it comes to psychological diagnoses. You hear that a patient (or someone you know) is depressed or has some other general psychological problem, and you think about how you or someone you know dealt with it. You think, well, all they have to do is x (whatever worked for you or your friend). It's so simple!

But you're treating the diagnosis like psychological jade. The diagnosis might involve the same symptoms, but that doesn't mean it has the same underlying fundamental structure, and thus there is no reason to suppose it will respond to the same intervention. It's a different kind.

Worse yet, someone could dogmatically claim that all disease and/or psychological diagnoses share the same "root cause". Such dogmatism precludes any chance of recovery, since the same ineffective treatment will only be applied more and more vigorously. What's more, this way of thinking is the opposite of scientific thinking. Saying everything has the same "root cause" is just like being a green-ologist. You're confusing superficial similarity for fundamental similarity. To use the lingo of metaphysics, you're lumping when you should be splitting.

We see green-ology all over the place in alt-med. For chiropractors, the "root cause" of all disease is some sort of spinal misalignment; for Ayurvedic medicine the "root cause" is chakra alignment (or some shit); for reflexologists the "root cause" is something to do with your feet (WTF? How are these people even a thing?); etc. (While I'm pointing the finger I should make clear that I have my own "root cause" default: I have a tendency to lump various problems as being caused by a general lack of meaningful social relationships, belonging to a community, and sense of purpose.) And then there's alt-med's favorite: stress. The "root cause" of all disease--physical and mental--is stress. More green-ology.

To be charitable, we can say that stress can trigger or make people more susceptible to disease, but this is to confuse notions of causation. Let me illustrate. Suppose someone is in the hospital with a broken leg because they got hit by a car. What "caused" the broken leg? Being hit by a car, right? But even though the car was the trigger for the broken leg, no one in their right mind would believe that removing the car will heal the leg.

I can just imagine the doctors at an all-alt-med hospital: "We've cured your leg by getting rid of the 'root cause'--the car has been destroyed! You can walk now!"

So, while it's true that stress can trigger certain reactions, it doesn't follow that the solution to the problem is merely to remove the trigger. Yes, doing so may decrease the likelihood of the same event occurring again, just like not getting hit by a car will prevent you from breaking your leg again (that way); however, this insight is often of trivial value. No one with more than two brain cells to rub together thinks chronic stress is good for them. What's the next great insight? A poor diet isn't good for your health? Revolutionary! Please collect your Nobel Prize.

The Lesson
The causes of many diseases, physical and mental, have to do with their fundamental underlying structural properties. This is why people respond differently to different treatments. Superficial similarities can cause us to lump when we really should be splitting. Overzealous lumping leads to failed treatment and frustrated patients. Overzealous lumpers are green-ologists. Don't be a green-ologist. 

Anyhow, this is just one more cautionary tale for me to heed. Hopefully, it gives you pause too the next time you diagnose someone (including yourself) and assume that superficial similarity implies fundamental similarity...

Also, hopefully this little digression shows the value of philosophy to science. You can't do one without the other.

In a Nutshell
The conclusions I drew in my last post were wrong. But I'm leaving that post up as a cautionary tale to both myself and to anyone reading this. My hope is that it reminds us how easily we can get things wrong when we only have a little bit of information, particularly about areas where we are not experts. People think it's "being a sheeple" to defer to experts. It's not. It's smart and good epistemic practice. Only arrogance fueled by ignorance would lead a person to think that they know more than an expert in that expert's domain.